Effector movement triggers gaze-dependent spatial coding of tactile and proprioceptive-tactile reach targets

Reaches to objects require that the hand and the target are represented in the same spatial map in order to calculate the movement vector. This seems trivial at first glance but becomes rather complex when considering that hand and target positions can be derived from different sensory channels associated with different spatial reference frames. Previous studies have demonstrated that reaches to previously seen targets are represented in a gaze-dependent reference frame. When people reach to a remembered visual target presented in their visual periphery, they tend to systematically overshoot its position, the so-called retinal magnification effect (RME). Henriques, Klier, Smith, Lowy, and Crawford designed an experiment that took advantage of the RME in order to distinguish between head-centered and gaze-centered spatial updating of visual reach targets. They asked participants to first look at a visual target and then make a saccade to a peripheral fixation location after the target was extinguished. Horizontal reach errors were recorded and compared to conditions where subjects reached to a target that they either viewed peripherally or centrally. In case of a head-centered spatial reference frame, the errors should be similar to the condition where the target was directly viewed, because head position remained unchanged after target encoding. In contrast, an error pattern displaying the RME, as when the target was viewed in the visual periphery, would indicate gaze-dependent spatial updating of the target location. Interestingly, reach errors depended on the target location relative to the current gaze direction after the gaze shift rather than the gaze direction during target presentation. This result suggests that visual reach targets are represented with respect to gaze and thus are updated/remapped in space with each gaze shift. Gaze-dependent spatial updating of visual targets has also been demonstrated for delayed reaches where the reach was carried out up to 1200 ms after target presentation, for reaches with the dominant and non-dominant hand, and for reaches from various start positions. Electrophysiological studies in monkeys identified the posterior parietal cortex (PPC) as a site which plays an important role in reference frame transformations. In particular, neurons in the parietal reach region seem to discharge depending on eye position relative to the visual reach goal, suggesting a representation of movement-related targets in eye coordinates. Consistent with the results in monkeys, human fMRI and MEG studies found evidence for gaze-centered spatial coding and updating of remembered visual targets for reaching movements in the PPC. While there is profound knowledge about the spatial coding scheme for visual reach targets, the dominant reference frame for somatosensory reach targets is far less clear. Behavioral studies on reaching to proprioceptive targets have demonstrated gaze-dependent reach errors similar to those obtained for visual targets, indicating similar spatial coding mechanisms across target modalities. These studies followed the paradigm of Henriques et al.
and asked participants to reach with the right hand to the remembered location of their left thumb which was guided to a target location using a robot manipulandum.In addition, gaze was shifted to a peripherally flashed fixation light after target presentation and before the reach.In contrast, neuroimaging work using a repetition suppression approach to examine the reference frame for visual and proprioceptive reach targets suggests a flexible use of gaze-centered and body-centered coordinate systems depending on the sensory target modality.The authors varied the location of the targets with respect to the body midline and gaze and assessed the amount of repetition suppression in a consecutive trial that was similar vs. novel in either body or gaze coordinates.They reported stronger repetition suppression in areas of the PPC and premotor cortex for gaze coordinates when visual targets were shown and for body coordinates when proprioceptive targets were presented.Based on these findings, the authors suggest a dominant use of the gaze-centered reference frame for visual and the body-centered reference frame for proprioceptive targets.In studies which found gaze-dependent reach errors for proprioceptive targets, the target hand was moved and/or gaze was shifted before the reach.In contrast, the fMRI study by Bernier and Grafton included neither a movement of the target effector nor an intervening gaze shift.Instead, subjects held the target fingers stationary at the target positions and kept gaze at one of the fixation locations throughout the trial.However, the experiment by Pouget, Ducom, Torri, and Bavelier which also lacks a movement of the target effector and a shift in gaze did yield gaze-dependent reach errors for a proprioceptive target; but the gaze-centered error was considerably smaller compared to the visual and auditory targets of the very same experiment.In sum, previous data may suggest that beyond target modality, movement of the target effector and/or gaze influences the reference frame used for spatial coding and updating of proprioceptive reach targets.Gaze-dependent spatial coding has also been reported for tactile targets applied to the arm in spatial localization tasks.In these studies, participants compared the perceived location of a touch to a visual reference while maintaining eye position at various eccentricities during the presentation of the touch.Tactile spatial judgments were influenced by eye position; however, the direction of gaze-dependent errors differed from reach errors reported for proprioceptive targets.While studies on tactile targets found errors in the direction of gaze, i.e. an undershoot, studies on proprioceptive reaches demonstrated errors opposite to gaze direction, i.e. an overshoot, similar to reach errors to visual targets.Since Fiehler, Rösler, and Henriques observed a gaze-dependent overshoot effect also in a proprioceptive localization task, the discrepancy in error direction does not seem to be caused by the applied task, but rather by target modality, i.e. touch vs. proprioception.However, it is important to note that in the study of Fiehler et al. 
the target effector was moved to the target location while it remained stationary in the tactile localization tasks.Thus, differences in error direction might also be due to the movement of the target effector.Consistent with the hypothesis that movement of the target effector and/or gaze facilitates gaze-dependent spatial coding of somatosensory reach targets, Pritchett, Carnevale, and Harris recently demonstrated that a shift in gaze can alter the reference frame used to represent a tactile stimulus in space.When gaze was held eccentric during both the presentation of the touch and the response, touch location was primarily represented in a body-centered reference frame.Interestingly, when gaze was shifted after target presentation and before the response, spatial coding of tactile targets switched to a preferential use of a gaze-centered reference frame.So far, it is unclear whether a shift in gaze or the movement of the target effector or a combination of both factors influences the spatial reference frame of tactile and proprioceptive reach targets.We addressed this issue by investigating the effect of a) movement of the target effector and b) a gaze shift between target presentation and reaching movement on gaze-dependent spatial coding and updating of reach targets.To this end, participants reached towards remembered tactile and proprioceptive-tactile targets while gaze was varied relative to the target.Target presentation differed by whether the target effector was actively moved to the target location or remained stationary at the target location throughout the trial.Gaze was directed to a fixation light at the beginning of the trial where it kept fixed or it was shifted away from the target to a fixation location after target presentation and before the reach.The 2 conditions of target presentation were combined with the 2 gaze manipulations for the tactile and the proprioceptive-tactile targets resulting in 8 experimental conditions.Nine human participants volunteered to participate in this experiment.All participants had normal or corrected to normal vision, were right-handed according to the German translation of the Edinburgh Handedness Inventory and received monetary compensation for participation.Written informed consent approved by the local ethics committee was provided by each participant prior to participation.Subjects sat in a completely dark room in front of a table on which the apparatus was mounted.To avoid dark adaptation a small halogen table lamp was switched on for 1 s before each trial.The head was stabilized by a bite bar attached to a personalized dental impression.On top of the apparatus, 45 cm in front of and 13 cm below the eyes, a bar containing 7 green light emitting diodes was mounted on the rearmost end of a framed touch screen.The LEDs served as fixation stimuli and were placed at 15°, 10° and 5° to the left and to the right horizontal eccentricity as well as central to the right eye.A 19 in.touch screen panel was mounted horizontally and recorded reach endpoints with a resolution of 1920×1080 pixels.Successfully recorded touches were signaled by a beep.Below the touch screen three solenoids were mounted at 10° to the left and right and central to the right eye.The frame of the touch screen together with the height of the solenoids caused a spatial offset of 9 cm in the vertical plane between the touch screen and the stimulated skin location.When a current was applied to a solenoid it pushed out a small pin which touched the participants׳ skin for 50 
ms. To mask the noise of the solenoids, subjects wore in-ear headphones presenting white noise.Touches were applied either to the left forearm or to the index finger/the 3 middle fingers of the left hand.The limb which received the touches is further referred to as target effector.In conditions that included a movement of the target effector we attached a rail with an oblong slider to the apparatus.The rail could be rotated by a step motor to guide the target effector from the start to one of the three touch positions.It restricted both the direction of the movement to the horizontal plane and the amplitude of the movement to the length of the rail.A mouse click signaled when the slider reached the endpoints of the rail and continued the trial.Reaches were performed with the right index finger in total darkness.Subjects kept this finger on a button which was mounted on the frame of the touchscreen at 0° relative to the right eye and 12 cm below the eyes.They released the button to reach to the remembered location of the felt target on the touchscreen.The trial ended when the finger returned and depressed the button.To ensure compliance with instructions, we recorded movements of the right eye by a head mounted EyeLinkII eye tracker system at a sampling rate of 250 Hz.Before each condition the eye tracker was calibrated with a horizontal 3 point calibration on the fixation LEDs at 10° left, 10° right and 0°.The experiment was performed using Presentation® software.The task required reaching towards remembered somatosensory targets while gaze was either fixed or shifted after target presentation and before reaching.Somatosensory targets were defined solely by tactile information or by both proprioceptive and tactile information depending on the target effector.The target effector stayed at the same position underneath the solenoids which delivered the touches or was actively moved before the reach.The combination of the 2 somatosensory target modalities with the 2 modes of target presentation resulted in 4 target conditions which are described in detail in the following sections.Target conditions were conducted in separate sessions in an order pseudorandomized across participants.Each target condition was further combined with the 2 gaze conditions resulting in 8 experimental conditions.Schematics of the experimental setup for each target condition are presented in Fig. 1B–E. 
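As a compact check on the design described in this and the following paragraphs, the Python sketch below (illustrative only; it is not part of the original study materials) enumerates the 2 × 2 × 2 factorial structure of the experimental conditions and the 17 admissible fixation-target pairings per block, assuming targets at −10°, 0° and +10° and fixation LEDs at −15° to +15° in 5° steps, as stated in the apparatus description.

```python
from itertools import product

# Factorial design: 2 target modalities x 2 modes of target presentation
# x 2 gaze conditions = 8 experimental conditions (the free-gaze blocks
# served as an additional control and are not listed here).
modalities = ["tactile", "proprioceptive-tactile"]
presentations = ["stationary", "moved"]
gaze_conditions = ["fixed-gaze", "shifted-gaze"]

conditions = list(product(modalities, presentations, gaze_conditions))
assert len(conditions) == 8

# Fixation LEDs at -15 to +15 deg, targets (solenoids) at -10, 0, +10 deg
# relative to the right eye; pairings exceeding 15 deg of visual angle were
# excluded to avoid saturation of the retinal magnification effect.
fixations_deg = [-15, -10, -5, 0, 5, 10, 15]
targets_deg = [-10, 0, 10]

pairings = [(t, f) for t, f in product(targets_deg, fixations_deg)
            if abs(f - t) <= 15]
assert len(pairings) == 17  # the 17 fixation-target combinations per block

# Retinal error (gaze relative to target) for each valid pairing
retinal_errors = sorted({f - t for t, f in pairings})
print(retinal_errors)  # [-15, -10, -5, 0, 5, 10, 15]
```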
Detailed information about the timing of the experimental conditions is shown in Supplementary material Table 1.Participants reached to the remembered location of touches delivered to the dorsal surface of their left forearm which was placed directly underneath the solenoids.The midpoint of the arm was roughly aligned with the central target.In the tactile-stationary condition, subjects placed their arm in the apparatus and kept it at that location until the end of the block.In contrast to the tactile-stationary condition, the left forearm was actively moved from a start position to the target position before touch presentation.At the beginning of each block the subjects׳ forearm was aligned in the touch position as described in the section above.Subjects placed their left hand on a movable slider and were instructed to adopt the start position by drawing the hand towards the body while the elbow joint stayed at the same place.The slider guided the movement along a rail which restricted the length of its path.In order to receive the touch, participants had to push the slider from the start position to the distal endpoint of the rail thereby placing their forearm underneath the solenoids.Contact of the slider with the endpoint of the rail caused a solenoid to drive out a pin which touched the forearm.After the touch was delivered participants moved their arm back to the start position and then reached to the remembered location of the touch.The tactile stimulation and the touch positions were identical to the tactile-stationary condition.Subjects reached to the remembered location of a touch which was delivered to one of the fingertips of the 3 middle fingers of the left hand.Hence, besides tactile information, task-relevant proprioceptive information about the target position was available due to the stimulation of separate fingers, i.e. the fingertip of the index, middle and ring fingers.Subjects stuck the 3 fingers through 3 separate rings that positioned the fingertips exactly below the solenoids while contact with the solenoids was prevented.The fingers were held stationary in this position until the end of the experimental condition.Targets were touches delivered to the tip of the left index finger.Instead of keeping three fingers at three different target locations touches were applied to one finger that was actively moved from a start position to the 3 target locations.Subjects stuck the index finger through a ring attached to a slider which moved along a rail.The rail was rotated by a step motor in the direction of one of the 3 target locations.Subjects started each trial with the slider at the start position which was approximately 10 cm in front of the body and at 0° in the horizontal plane with respect to the right eye.After the rail was rotated in the direction of the target, subjects moved the finger along the rail until they reached its endpoint located at the current target location, i.e. under one of the solenoids.Contact of the slider with the distal endpoint of the rail caused the solenoid to drive out the pin.After subjects received the touch they moved the slider back to the start position and then reached to the remembered target.Participants performed each of the 4 target conditions under 2 different gaze conditions.A schematic of trial timing accounting for the 2 gaze conditions combined with the 2 modes of target presentation is presented in Fig. 
2.In the proptac-moved conditions, the start position of the movement was located at 0° so that movements were performed either in the sagittal or the diagonal plane.In the tactile-moved conditions, the target arm was moved diagonally by rotating the elbow joint.In the fixed-gaze condition, subjects fixated one of the seven LEDs throughout the trial.Each trial began with the illumination of the fixation LED for 750 ms. The fixation light was extinguished before the somatosensory target was presented.In the conditions where the target effector was kept stationary, the touch was presented after the fixation LED was turned off.In the moved-conditions, the touch was presented 200 ms after the target effector was placed at the target location.Gaze always remained at the fixation location until the reach endpoint was registered by the touch screen.In the shifted-gaze condition, participants first fixated the location where they felt the target and then directed gaze to the fixation LED which was illuminated for 750 ms. Participants received no feedback on the correctness of the felt target location.The mean fixation endpoints for each felt target location and the corresponding statistical comparisons are listed in Supplementary material.The fixation LED was illuminated 700 ms after the touch in the stationary target conditions and 200 ms after the target effector had returned to the start location in the moved target conditions.Reaches had to be initiated after the fixation LED was extinguished to be classified as valid.Each subject completed 2 blocks á 15 trials for each of the 4 target conditions where gaze was not restricted but was allowed to move freely.This condition served as baseline in order to check participants׳ general task performance.We varied the horizontal visual angle of fixation relative to the target and assessed its effect on horizontal reach errors for the experimental conditions.In each trial one of the 3 target locations was paired with one of the 7 fixation locations except for combinations that yielded an eccentricity larger than 15° of visual angle.These combinations were excluded to avoid saturation of the retinal magnification effect which had been observed in previous studies.One experimental block contained the remaining 17 fixation-target combinations in randomized order.Conditions with stationary target effector comprised 12 blocks and conditions with moved target effector 10 blocks thereby requiring a similar amount of time.We also included 1–3 short breaks within each condition where the light was turned on and participants could relax their arms and hands.The 4 different target conditions were conducted in 4 sessions that were pseudorandomized across participants.More specifically, all possible sequences of target conditions, 24 in total, were listed.The list was shuffled and the different sequences were assigned to the participants.Each session, and accordingly target condition, comprised the 3 different gaze conditions, namely the fixed-gaze condition, the shifted-gaze condition, and the free gaze condition, in randomized order.Eye tracking data was exported into a custom graphical user interface written in MATLAB R2007b to ensure subjects׳ compliance with instructions for every trial.Trials were classified as valid and included in data analyses if gaze stayed within +/− 2.5° of the LED fixation location until the reach endpoint was registered.In the shifted gaze conditions, we additionally checked if a gaze shift occurred between the application of the touch 
and the presentation of the fixation LED; however, gaze did not necessarily have to correspond to the physical target location before the gaze shift for the trial to be classified as valid. In total, the analysis of the eye data yielded 11,885 valid trials. All further computations were performed in SPSS. First, means of reach endpoints for each retinal error of a given subject and target were computed. Reach endpoints had to fall within the range of +/− 2 standard deviations of the individual mean; otherwise they were regarded as outliers and discarded from further analyses, which reduced the number of valid trials to 11,304. Because target presentation differed between tactile and proprioceptive-tactile targets, statistical analyses were carried out separately for the two target modalities. In the figures, descriptive data depict the mean and the within-subject standard error of the mean following the procedure described by Cousineau. We initially checked whether participants were able to discriminate the 3 different target locations in the 4 experimental conditions and the 2 free-gaze control conditions. To this end, we conducted a two-way repeated-measures analysis of variance (RM ANOVA; condition: 4 experimental, 2 free-gaze control × target location: left, center, right) on horizontal and sagittal reach errors, separately for tactile and proprioceptive-tactile targets. In order to test for interactions between target location and gaze relative to target across all experimental conditions, we further conducted a three-way RM ANOVA (condition × target location × retinal error, RE) on horizontal reach endpoints separately for each target modality. In these analyses the levels of retinal error were reduced to the 3 levels of gaze relative to target that were tested for each target location. All further analyses were conducted on horizontal reach errors, which were computed as follows. First, the mean reach endpoints for each subject and target when gaze and target were aligned served to normalize the data across subjects and targets. More precisely, reach errors were calculated by subtracting the mean reach endpoint a subject produced when reaching to a target in line with the fixation from the reach endpoints obtained for the same subject and target when the retinal errors differed from 0°. By this linear transformation the shape of the reach endpoint pattern of each target location was preserved, but differences in the absolute positions of reach endpoints between subjects and targets were eliminated. Thereby reach errors were defined as horizontal deviations from the individual mean reach endpoints when gaze and target were aligned. Second, after normalization the reach errors of the 3 targets were collapsed. In order to test whether reach errors varied as a function of gaze relative to target depending on target presentation and gaze condition, we conducted repeated-measures ANOVAs separately for tactile and proprioceptive-tactile targets. Specifically, we were interested in whether a movement a) of the target effector and/or b) of gaze affects the reference frame of tactile and proprioceptive-tactile targets, as indicated by gaze-dependent reach errors. Based on previous research, we expected a smaller or no effect of gaze relative to target when no movement was present. In contrast, we expected reach errors to vary systematically with gaze relative to target when an effector movement was present. First, we conducted a three-way RM ANOVA (target presentation: stationary, moved × gaze condition: fixed, shifted × RE) separately for tactile and proprioceptive-tactile targets. We then computed two-way repeated-measures analyses separately
for tactile and proprioceptive-tactile targets in order to test how the movement of the target effector and/or gaze affects the reference frame of tactile and proprioceptive-tactile reach targets.These analyses were based on our a-priori hypotheses, i.e., planned comparisons.In the first analyses, we compared horizontal reach errors of the no-movement condition with the condition containing a movement only of the target effector by means of a two-way RM ANOVA×RE).In the second two-way RM ANOVA×RE), we contrasted the no-movement condition with the condition where only gaze was shifted.Interactions were followed up by one-way RM ANOVAs of reach errors as a function of gaze relative to target).Third, we compared the conditions containing one movement with the condition containing 2 movements.In detail, we contrasted the condition where only gaze was shifted but the target effector was kept stationary with the condition where both the target effector and gaze involved a movement before the reach×RE).Similarly, we compared the condition where the target effector was moved and gaze was fixed with the condition where both the target effector and gaze involved a movement before the reach×RE).When sphericity was violated as determined by Mauchly׳s test, Greenhouse–Geisser corrected p-values are reported.For follow-up one-way RM ANOVA alpha levels were adjusted according to Bonferroni–Holm.All other analyses were performed at alpha of .05.In this study we examined whether gaze-dependent spatial coding of somatosensory reach targets is modulated by target presentation and gaze condition.Specifically, we applied experimental conditions in which the target effector was moved or stationary and gaze was fixed at or shifted to an eccentric position in space.Fig. 3, 1st row displays the mean horizontal endpoints for reaches to the 3 target locations for the 4 experimental and 2 control conditions.We conducted a two-way RM ANOVA: 4 experimental conditions, 2 control conditions×target location: left, center, right) separately for each target modality.Reach endpoints significantly varied with target location for tactile and proprioceptive-tactile targets, indicating that subjects were able to successfully discriminate the different target sites.Nonetheless, the targets were generally perceived more leftward than their actual physical location, a phenomenon which has also been reported previously.In addition, we observed a main effect of condition for tactile but not for proprioceptive-tactile targets.Reach endpoints to tactile targets were shifted farther to the left if the target effector remained stationary than when it was moved.For reaches in the sagittal plane, we found a main effect of condition.Mean sagittal reach endpoints ranged between 3.46 cm for tactile and 4.74 cm for proprioceptive-tactile targets.Sagittal reach endpoints demonstrate an increase in reach amplitude from the experimental condition without effector movement to the conditions where the target effector and/or gaze were moved, i.e. 
subjects reached farther into the workspace the more movement was present before the reach.In addition, we observed a main effect of target location showing a linear increase of sagittal reach endpoints from the left to the right target although the physical target locations did not differ in the sagittal plane.These reaches were carried out mainly through a rotation of the shoulder joint thereby minimizing flexion and extension of the elbow joint.Thus, the farther the target is presented to the right the more the subjects׳ arm extended into the workspace leading to an increase in errors in the sagittal plane.We conducted a three-way RM ANOVA×target location×gaze) on horizontal reach endpoints for tactile and proprioceptive-tactile targets in order to account for putative interactions between target location and gaze relative to target.To this end, we included only the retinal errors that were obtained for all 3 target locations.We did not find a significant interaction of the respective factors neither for tactile nor for proprioceptive-tactile targets.Therefore, for the following analyses reach errors were collapsed across target locations.In order to investigate whether and if an effector movement after target presentation and before the reach affects the spatial representation of reach targets relative to gaze, we conducted a three-way RM ANOVA×gaze condition: fixed, shifted×RE) separately for tactile and proprioceptive-tactile targets.Target presentation significantly interacted with retinal error for each target modality.We further found significant interactions of gaze condition and retinal error for tactile and proprioceptive-tactile targets.In the following sections we report analyses of horizontal reach errors when one factor is held constant.We tested whether gaze-dependent spatial coding depends on target presentation, i.e. whether the target effector is moved or stationary.Therefore, we compared the effect of gaze relative to target on reach errors between stationary and moved target effector conditions for tactile and proprioceptive-tactile reach targets when gaze was kept fixed at one of 7 fixation LEDs throughout the trial.As shown in Fig. 4 target presentation significantly modulated gaze-dependent reach errors for both tactile and proprioceptive-tactile targets.While reach errors were unaffected by gaze when the target effector was kept stationary, they significantly varied with gaze when the target effector was actively moved before the reach.Next, we examined the effect of gaze relative to target on reach errors in the stationary conditions when gaze was either fixed at an eccentric location throughout the trial or shifted between target presentation and reaching.Results are shown in Fig. 4.In the tactile-stationary condition, reach errors varied as a function of gaze relative to target depending on the gaze condition, i.e. 
whether gaze was fixed or shifted.We observed a significant effect of gaze on reach errors in the shifted-gaze condition, which was absent in the fixed-gaze condition.For the proptac-stationary condition, we only found an overall effect of gaze relative to target which did not interact with the gaze condition.However, when we further explored this effect based on our a-priori hypothesis, a similar pattern arose as for the tactile-stationary condition.While gaze direction did not influence reach errors in the fixed-gaze condition, reach errors systematically varied with gaze in the shifted-gaze condition.To complete the picture, we explored how target presentation modulates a gaze-dependent spatial representation when gaze was shifted before the reach.For tactile targets, we found a main effect of gaze relative to target which did not interact with target presentation.However, for proprioceptive-tactile targets the gaze effect was significantly stronger when the target effector was moved showing a more pronounced effect of gaze for two effector movements compared to one effector movement.Finally, we contrasted shifted-gaze and fixed-gaze in the conditions where the target effector was moved.As reported in Sections 3.4 and 3.6, we found gaze-dependent reach errors in the tactile-moved condition for both fixed-gaze and shifted-gaze.When we directly compared the two gaze conditions, tactile reach errors varied even stronger with gaze relative to target when gaze was shifted before the reach.This indicates a stronger gaze-effect when two effector movements compared to one effector movement occurred.Similar results were obtained for proprioceptive-tactile targets while reach errors varied with gaze relative to target within each gaze condition this effect increased for shifted-gaze compared to fixed-gaze; although the effect did not reach significance but yielded a trend.We investigated whether or not tactile and proprioceptive-tactile reach targets are coded and updated in a gaze-dependent reference frame by analyzing horizontal reach errors while gaze was varied relative to the target.In particular, we studied the role of movement in determining a gaze-dependent reference frame: first, we varied movement of the target effector which was actively moved to the target location or was kept stationary at the target location; and second, we varied movement of gaze which was fixed in space or shifted away from the target after target presentation and before the reach.Tactile targets were indicated by touches on the forearm, while for proprioceptive-tactile targets touches were applied to the individual fingertips which provided additional proprioceptive information about target position.Thus, participants were provided with richer somatosensory information in the latter condition, but could have also solved the task by solely relying on the proprioceptive information and using the tactile stimulus as a cue.For tactile and proprioceptive-tactile targets, we found that horizontal reach errors systematically varied with gaze when an effector movement was present after target presentation and before the reach, but not when the target effector remained stationary while gaze was fixed.This result may dissolve inconsistent findings of previous studies on spatial coding of proprioceptive reach targets; with some studies arguing for gaze-independent and others for gaze-dependent coding.Bernier and Grafton found evidence for spatial coding of proprioceptive targets independent of gaze direction in a 
goal-directed reaching task.Their results rather suggest a predominant use of a body-centered reference frame for proprioceptive reach targets.Here, we also found evidence for a gaze-independent representation of proprioceptive reach targets in the condition where gaze was fixed at an eccentric position and the target effector remained stationary at the reach goal; the condition similar to the experimental task of Bernier and Grafton.However, our findings indicate that the dominant reference frame seems to switch from gaze-independent to gaze-dependent coordinates if the target hand is moved or gaze is shifted after the target presentation and before the reach.Previous studies in which the target hand was moved actively or passively from a start to the target location while gaze either remained eccentric at a fixation light or was directed at the target and then shifted away after target presentation consistently reported gaze-dependent reach errors for proprioceptive targets, similar to the errors found for visual targets.This is in accordance with our findings showing that proprioceptive reach errors vary with gaze when the target hand is moved to the target location while gaze is either fixed or shifted.We even found similar gaze-dependent reach errors when the target effector remained stationary at the target location but a shift in gaze occurred.Therefore, one movement, either of the target effector or gaze, sufficed to yield gaze-dependent errors.We revealed analogous findings for tactile targets which varied with gaze when the target arm and/or gaze was moved before the reach.In a recent study from our lab we found concurrent results by applying a spatial localization task where participants were asked to judge the location of a remembered tactile target relative to a visual comparison stimulus.The results indicated the use of a gaze-dependent reference frame for tactile targets when a gaze shift was performed after tactile target presentation in contrast to conditions where gaze remained fixed at the fixation location.This suggests that the observed effects are not restricted to hand movements but also account for spatial localization.In sum, our results suggest that an effector movement after target presentation and before the reach determines the use of a gaze-centered reference frame for tactile and proprioceptive-tactile targets.The effector movement seems to trigger a switch from a gaze-independent to a gaze-dependent coordinate system.Thus, the present findings support the notion that our nervous system operates in multiple reference frames which flexibly adapt to the sensory context, and, as shown here, also adapt to the motor context, i.e. 
the presence of effector movement between target presentation and reach.The present results further demonstrate that reach errors did not only vary with gaze when one effector movement was introduced but, in some conditions, even increased in magnitude if both effectors were moved.However, this effect was more variable for tactile targets.This result points to the use of multiple reference frames for reach planning which are integrated by a weighting function changing with context.Previous studies suggest that statistical reliability of each reference frame and the costs arising from transformations between reference frames determine the weight assigned to a signal estimate and thus its contribution to the estimate of target location.Following this idea, we assume that in our no-movement conditions, which required neither an update of the target relative to gaze nor an update of limb position, a gaze-independent reference frame dominates the computation of the movement vector.Thus, somatosensory reach targets remain in their natural frame of reference.However, as soon as a movement triggers an update of the target representation in space the weights seem to shift from an intrinsic reference frame, in which somatosensory targets reach the nervous system, to a transformed, but nevertheless more reliable extrinsic, gaze-dependent reference frame.This implies that the benefit of a higher reliability of sensory target information may override the costs of reference frame transformations.This assumption is consistent with the current view that spatial updating of motor goals from sensory modalities other than vision is implemented in gaze coordinates and the neural basis for such reference frame transformations probably involves gain field mechanisms.Based on our findings, we argue that effector movement promotes a gaze-centered representation by causing the need to update a target location in space irrespective of the sensory context in which the target was originally perceived.Assuming the use of multiple spatial target representations for reach planning, each movement/spatial update might increase the weight of a gaze-centered representation on the estimate of target location.This should result in stronger gaze-dependent errors when both, target effector and gaze, were moved, as we observed for tactile and proprioceptive-tactile targets.Although computational models exist which try to explain how different sensory signals are integrated for reach planning, none of the models includes specific predictions on how the sensory representations once established in multiple reference frames are affected by spatial updating either induced by an eye- or limb movement.This issue should be addressed in future research by varying the target modality and the effector movement to assess the effects on the reweighting and integration of multiple reference frames.For both, tactile and proprioceptive-tactile targets, we found reach errors in the opposite direction of gaze, i.e. 
an overshoot of the target location.This result is consistent with earlier findings on proprioceptive reaching and localization tasks.However, we did not observe gaze-dependent errors in the direction of gaze, as has been previously reported for tactile targets.It is unlikely that this discrepancy is caused by the type of tactile stimulation, the target effector or gaze eccentricity because they all applied a brief touch to the forearm placed in front of the body and varied gaze eccentricity in a comparable range.Since we found gaze-dependent errors opposite to gaze direction not only in the present tactile reaching task but recently also in a tactile spatial localization task, this effect does not seem to be task-dependent.We can only speculate that the difference in error direction might be caused by the applied procedures which allowed subjects to freely move their eyes during the response while, in the present study, gaze was held at the fixation location during the reach.As a necessary constraint, our experimental conditions differed in the time between target presentation and reach initiation.For example, in the moved target effector conditions, the touch was presented after the target hand had arrived at the target location and the reach was initiated after the target hand had returned to the start location; thus timing of the trial phases depended on the individual movement times.However, based on previous studies that did not find an influence of delay on gaze-dependent coding of reach targets we consider an effect of the temporal differences between the experimental conditions on the spatial coding scheme as unlikely.We conclude that effector movement before the reach determines the use of a gaze-dependent reference frame for somatosensory reach targets.Moreover, gaze-dependent reach errors, reflected by an overshoot of target location opposite to gaze direction, were comparable for tactile and proprioceptive-tactile targets suggesting similar spatial coding and updating mechanisms for both somatosensory target modalities.Future research should examine the relative shift of weights between gaze-independent and gaze-dependent reference frames as a function of effector movement causing an update of the target location in external space. | Reaching in space requires that the target and the hand are represented in the same coordinate system. While studies on visually-guided reaching consistently demonstrate the use of a gaze-dependent spatial reference frame, controversial results exist in the somatosensory domain. We investigated whether effector movement (eye or arm/hand) after target presentation and before reaching leads to gaze-dependent coding of somatosensory targets. Subjects reached to a felt target while directing gaze towards one of seven fixation locations. Touches were applied to the fingertip(s) of the left hand (proprioceptive-tactile targets) or to the dorsal surface of the left forearm (tactile targets). Effector movement was varied in terms of movement of the target limb or a gaze shift. Horizontal reach errors systematically varied as a function of gaze when a movement of either the target effector or gaze was introduced. However, we found no effect of gaze on horizontal reach errors when a movement was absent before the reach. These findings were comparable for tactile and proprioceptive-tactile targets. Our results suggest that effector movement promotes a switch from a gaze-independent to a gaze-dependent representation of somatosensory reach targets. 
© 2014 The Authors.

Intelligent Autonomous Vehicles in digital supply chains: A framework for integrating innovations towards sustainable value networks

The Internet of Things (IoT) paradigm enables interconnection, intercommunication and interaction among supply chain (SC) actors that allow for the dynamic management of global network operations, hence promoting digital transformations in a cradle-to-grave perspective. Nowadays, owing to technological innovations, Intelligent Autonomous Vehicles (IAVs) are characterised by inherent business logic and the technical capability to sense and autonomously interact with the surrounding environment in a manner that promotes reduced emissions, rational economic expenditure and increased societal benefits such as improved safety and accessibility. To that end, under the IoT umbrella, IAVs constitute a radical innovation that could assist in the efficient management of production lines, the handling of warehouse inventories and the support of intra- and inter-logistics services in a gamut of economic sectors including port container terminals, agriculture, healthcare and industrial manufacturing. Notwithstanding the emerging popularity of IAVs, academic research on the integration and sustainability assessment of autonomous systems in a SC context is lacking, while the extant literature only myopically refers to the confined applicability of IAVs to specific network echelons. The incorporation of IAVs in a SC context to promote digital transformations is associated with considerable capital requirements and technical challenges. In this regard, software simulation tools are needed to elaborate and proactively evaluate the operational performance and sustainability implications of IAVs and so foster the establishment of bespoke SCs. Simulation software tools provide the capability to make projections about the real world and assist SC actors in their decision-making process, including sustainability considerations, whilst also tackling system uncertainties and complexities. Such simulations can provide valuable managerial insights prior to the deployment of autonomous operations across a digital SC. Despite the plethora of commercially available simulation software tools enabling the analysis of manufacturing and distribution operations, such off-the-shelf solutions often provide limited flexibility in capturing customised working environments and the corresponding IAVs. Moreover, commercial software packages contain built-in libraries that might be either outdated or limited in terms of the range of covered IAVs. Considering the on-going innovations in the automation field, simulating IAVs with commercial software is therefore typically governed by a multitude of non-realistic assumptions. In particular, commercial software simulation tools do not provide the capability to capture an autonomous system's response to real-world environmental dynamics. A typical example is the impact of IAVs on operations and sustainability performance owing to multiple reconfigurations of the facility layout over time. Therefore, understanding the efficient use of IAVs in SC operations from a sustainability perspective, and using software simulation to identify their optimum deployment within a changing manufacturing context, is the principal objective of this paper. The approach builds on the established concepts of sustainability and Intelligent Autonomous Systems in the strategic management field. From both a theoretical and a practical perspective, the study's contribution is fourfold
including: a review of software simulation tools and platforms used in assessing the performance of IAVs interlinked with sustainability ramifications in a SC ecosystem, an integrated software framework for monitoring and assessing the sustainability performance of supply networks defined by the utilisation of innovative IAVs in operations, a translation of the proposed SC framework into a corresponding software application through a robust five-stage stepwise process, and a demonstration of the developed software tool through its application on the case of an IAV system operating in a customisable warehouse model.The remainder of this paper is organised as follows.In Section 2 the utilisation of IAVs in SC operations along with the associated sustainability ramifications are analysed, while in Section 3 a review of the available simulation software tools and platforms for evaluating and managing IAVs is provided.In Section 4 a novel framework proposes a software architecture for fostering the integration of IAVs in a SC context.The framework enables the sustainability performance assessment of customised supply networks that incorporate IAVs.Following, in Section 5 the applicability of the proposed framework is demonstrated through the actual development and application of a demonstration pilot simulation software tool focusing, at this initial stage of our on-going research, on the environmental sustainability dimension.In Section 6 we provide simulation results on a conceptual warehouse along with a discussion.In the final Section 7 we wrap-up with conclusions and limitations while we further outline beyond state-of-the-art research avenues for incorporating IAVs in a sustainable SC context.The advent of digitalisation signals advances in industrial information and enterprise systems that further instigate changes in both the intra- and the inter-organisational boundaries that every large, medium and small sized company has to realise in order to compete in a globalised context.IAVs are documented to foster the sustainability performance of SC systems across the economic, environmental and social sustainability constituents, including: increased productivity levels, labour cost savings, lower energy consumption, reduced emissions, and enhanced workforce safety.However, research that motivates the integration of IAVs’ sustainability ramifications onto the SC ecosystem is not sufficient.The aforementioned observation is further supported by the S2C2 tool provided by Bechtsis et al., a tool that evidently reveals opportunities for facilitating the adoption of IAVs into SC design and planning through identifying the key related decisions, as these are mapped on the relevant strategic, tactical and operational levels of the natural hierarchy.Opportunities can be identified basically on the procurement and on the sales tiers covering all levels as the incorporation of IAVs is observed to be limited.In Subsection 2.1 and Subsection 2.2 we exemplify the role of IAVs in a digital SC setting and we describe the key IAV-centric decision-making parameters that could impact SC sustainability.By 2025 global economy is expected to serve a well-informed population willing to compensate for personalised goods and services.Indicatively, the European Commission has set the Industrial Landscape Vision for the 2025-time horizon to facilitate new production systems that will foster innovation and competitiveness through analysing and prioritising societal, technological, economic, environmental and policy 
drivers.In this context, industries in Europe invest in advanced manufacturing systems and sustainable production methods enabled by information and communication digital technologies.Enterprise information systems are basically analysed at design, architecture, integration, interoperability and networking levels and enable the 3C triplet, i.e. communication, cooperation, and collaboration, among SCs and network actors through the use of standards and technological innovations.Especially, standards promote innovation and can be a useful instrument for crafting policies towards shaping the industrial landscape of the future.The term ‘Synchronised Production & Logistics’ is used to describe the operational level integration of manufacturing and logistics.Luo et al. define ‘Synchronised Production & Logistics’ as: “Synchronising the processing, moving and storing of raw material, work-in-progress and finished product within one manufacturing unit by high level information sharing and joint scheduling to achieve synergic decision, execution and overall performance improvement”.However, an opportune decision-making process is required that could foster the effective adoption of automations towards synchronised production and logistics systems.In particular, IAVs can be incorporated at all levels of an end-to-end SC, although their adaptation to the manufacturing shop floor tasks, and the warehouse facility layouts is the prevalent trend.Typically, IAVs are elaborated extensively at an operational level and can greatly influence SC flows while promoting added value and innovation within a digital SC ecosystem.Furthermore, to address the global SC challenges, all tiers of the supply network should function in a coordinated manner.Overall, automations can provide an integrated approach to an envisioned digital supply system where all decisions are evaluated in a holistic and systematic manner as conceptually depicted in Fig. 1.The identification of the appropriate IAVs in a customised setting is a challenging task due to the complex nature of the related SC operations under specific sustainability, functional and budget constraints.To that end, simulation is suggested to be a viable scientific approach to tackle the aforementioned issues, considering that this approach allows the study of a system without provoking any disturbance, and to further explore conceptual scenarios and evaluate the associated impact in the real world.Particularly, considering a SC as a complex system characterised by data structures, operations, and product and information flows, IAVs are expected to exhibit direct reactive action to each SC echelon.To this effect, agent-based simulation could be used to proactively represent the operation of IAV systems within a SC setting.Generally, an agent is a software or hardware object able of performing specific tasks autonomously.Particularly, Weiss provides the following definition: “Agents are autonomous, computational entities that can be viewed as perceiving their environment through sensors and acting upon their environment through effectors”.Following that, sensors and effectors can be either physical, i.e. 
field devices which are represented by software files or data streams, or agents that can be: autonomous, interacting, intelligent, and flexible.Therefore, multi-agent systems are recommended for simulating IAVs cooperating to tackle operational challenges that are beyond the capabilities or knowledge of each individual entity, especially within a sustainability context.Product and production life cycle must be aligned with sustainable SC and manufacturing activities in order to get a systematic and holistic approach that supports the decision-making process as: “Sustainable manufacturing satisfies the demand for functionality while adhering to environmental, economic and social targets over the entire life cycle of products and services”.However, the realisation of efficient and sustainable SC operations through elaborating IAV systems denotes that a systemic multi-criteria decision-making process has to be considered in the relevant analysis tools.To this effect, Bechtsis et al. provide a critical taxonomy of key decisions for facilitating the adoption of IAVs for the design and planning of sustainable SCs in the modern digitalisation era, while the related decisions are further mapped on the strategic, tactical and operational levels of the natural hierarchy.Firstly, at the economic dimension, the strategic decision-making includes the determination of the capital requirements, the appropriate selection of data sharing schemes for communication and coordination with the SC environment, the design of the facility layout environment and the feasibility analysis.The determination of the exact vehicle type and fleet size along with the economic key performance indicators can be either at strategic or tactical levels, supporting the determination of maintenance costs and the integration with the proper sensors.At the operational level, efficiency and performance must be taken into consideration while simultaneously fine-tuning the navigation, routing, scheduling and dispatching algorithms based on the economic key performance indicators.Secondly, at the environmental constituent, strategic decision-making focuses on the identification of the environmental goals towards establishing energy management and control policies.The determination of the required fuel type and the adoption of specific environmental key performance indicators are referring to the tactical level, along with the selection of the vehicles charging/refuelling strategy and the adoption of specific tools for the environmental assessment of the manufacturing plant.Following, at an operational level, the monitoring of the environmental efficiency performance on a daily basis along with the optimisation of the routing, scheduling and dispatching algorithms based on environmental criteria need to be considered."Finally, at the social dimension, the SCs' workforce safety and accessibility, along with health and safety of the IAV operators, is prevalent. 
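To make the agent notion and the decision taxonomy discussed here a little more tangible, the Python sketch below models an IAV as an autonomous software agent that perceives its environment through sensors, acts upon it through effectors, and records key performance indicators organised by sustainability dimension and decision level. It is a minimal illustration under assumed names and values (e.g. the IAVAgent class and the grid-based energy cost kwh_per_cell are ours); it does not correspond to any specific commercial or open-source simulation package.

```python
from dataclasses import dataclass, field

# Decision levels and sustainability dimensions from the taxonomy above
LEVELS = ("strategic", "tactical", "operational")
DIMENSIONS = ("economic", "environmental", "social")

@dataclass
class IAVAgent:
    """A minimal intelligent autonomous vehicle agent: it perceives via
    sensors, acts via effectors, and tracks sustainability KPIs."""
    name: str
    position: tuple = (0, 0)
    energy_consumed_kwh: float = 0.0
    # KPI registry keyed by (dimension, level, indicator name)
    kpis: dict = field(default_factory=dict)

    def perceive(self, environment: dict) -> dict:
        # Sensor reading: here simply the local state of a grid warehouse model
        return environment.get(self.position, {})

    def act(self, task: dict) -> None:
        # Effector action: move to the task location and book the energy cost
        distance = (abs(task["to"][0] - self.position[0])
                    + abs(task["to"][1] - self.position[1]))
        self.position = task["to"]
        self.energy_consumed_kwh += distance * task.get("kwh_per_cell", 0.05)
        self.record_kpi("environmental", "operational",
                        "energy_kwh", self.energy_consumed_kwh)

    def record_kpi(self, dimension: str, level: str, name: str, value: float) -> None:
        assert dimension in DIMENSIONS and level in LEVELS
        self.kpis[(dimension, level, name)] = value

# Example: one agent fulfilling a single transport task in a grid warehouse
agv = IAVAgent(name="AGV-1")
agv.act({"to": (3, 2), "kwh_per_cell": 0.05})
print(agv.kpis)  # {('environmental', 'operational', 'energy_kwh'): 0.25}
```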
"In addition, emphasis is provided to the continuous creation of skilled jobs, the constant improvement of ergonomics for the workers at the tactical level, the identification of opportunities for sensors' applicability to improve shop floor safety, and the adoption of tools for monitoring and assessing potential hazards.At an operational level, the fine-tuning of vehicles’ social implications ensures the social performance of the system.Developing sustainable supply networks embracing IAVs, whilst considering the trade-offs among significant capital investments, operational constraints and derived sustainability benefits, implies that simulation approaches are needed in order to benchmark such systems."Simulation, from the early 1970's till today, is a continuously evolving field of research with undoubted contribution to the progress of manufacturing systems.A vast number of software tools available for simulating IAV operations are available, but their detailed technical analysis extends the scope of the present research study.In this section, we rather provide a representative categorisation of selected software applications that consider logistics operations in industrial manufacturing settings, deriving from a review of the related research publications.Generally, software tools and platforms used for simulating IAV systems in manufacturing and SC contexts can be clustered into the following five categories:Dedicated software for IAVs in intra-logistics, and,Object-oriented programming languages for complex industrial systems.Except for the abovementioned categories, traffic simulation tools are widely used for replicating the kinematics of passenger cars and urban logistics."Such tools can precisely simulate a vehicle's movement and use a business logic layer for scheduling all the necessary activities.Traffic simulation tools could be also used as dedicated tools for the management of IAV systems for intra-logistics operations in manufacturing plants.This categorisation is not based on an exhaustive list of all existing software tools and platforms, but rather acts as a synthesis of all major applications that we have identified as part of our on-going research.The introduction of IAVs into the manufacturing sector triggered the development of associated simulation software.General purpose discrete-event simulation software usually involves the creation of robust components which can handle specific operations and communicate with each other in order to develop the simulation model.Moreover, general purpose discrete-event simulation software packages are used to represent manufacturing and industrial processes, like for example: general system design and facility layout, material handling, cellular manufacturing, and flexible manufacturing system designs.Specific add-on modules assure compatibility with every aspect of the manufacturing activities, including intelligent intra-logistics vehicles, while blocks with custom functionalities can be developed as well.In an industrial context, simulation software usually has 3D add-on functionalities for precisely representing the movement of machinery and other physical objects.In this regard, the emergent robotics and automation sector motivates the development of sophisticated robotics control software that includes advanced 3D graphics along with human-machine interface simulation techniques.Robotics software develops or incorporates modules for general purpose simulation activities while focusing on the robotic systems as the main 
research activity.In addition, multi-agent based techniques are prevailing in recent years due to the decentralised nature of IAVs, the complex SC environment with the multiple stakeholders and the mass customisation schemes.Existing commercial and open source multi-agent simulation software mainly refers to decentralised solutions that are more flexible and robust through avoiding the existence of a single point of failure and overriding any kind of disturbances.Software tools in this category enable the development of more flexible solutions in the context of mass customisation that may not always be optimum.Dedicated simulation software packages for IAVs and intra-logistics take into consideration every aspect of a vehicle and allow for a detailed consideration of vehicle parameters and kinematics, vehicle surrounding, network parameters, and real-time interaction with the working environment.It is worth mentioning some commercial autonomous vehicles vendors with dedicated software for the management of an intra-logistics vehicle fleet like for example the AGVE Group traffic control solution, the Egemin Transport Intelligent Control Center and the JBT Corporations Self-Guided Vehicle Manager Software.Finally, the use of general purpose object-oriented programming languages is continuously increasing in intra-logistic services due to the one-to-one correspondence between the physical objects and their digital representations.Every object is represented by a discrete class and each class follows the object-oriented concepts like encapsulation, abstraction, inheritance, polymorphism.As a result, objects can be easily extended, diversified and reused, while the maintenance requirement of the software significantly decreases.Overall, the use of simulation methods in manufacturing sectors and SC management is continuously growing and they could assist in the analysis of the expected sustainability impact of alternative IAV systems.Nevertheless, the emerging need for more efficient and customisable simulation techniques motivates the development of hybrid methods integrating simulation methods with agent-based simulation and artificial intelligence.The proposed framework represents an inclusive simulation software structure for the integration of IAVs onto the digital SC ecosystem in a holistic and systematic manner.The framework is divided into three tiers of abstraction following the general principles of the ISA95 model of the non-profit community MESA.Particularly, the ISA95 model divides a classical production pyramid into: business planning and logistics, manufacturing enterprise system, and control, and defines data structure and services at each corresponding tier.Fig. 
2 depicts a high-level representation of the proposed simulation software framework.At the first tier, integration and interoperability of system agents in the context of a digital SC ecosystem is the main focus.At the second tier, the functional role of IAV systems towards promoting innovations and sustainability in digital SCs is more transparent.More specifically, the IAV entities are the enablers for the mass customisation scheme by providing the interface between the automation layer and manufacturing information systems.The third tier is the bottom level of the framework and is beyond the scope of the current research.In particular, at the third tier of abstraction all manufacturing equipment, from simple sensors and programmable logic controllers to sophisticated manufacturing equipment, are considered in order to ensure operationalised interconnection, intercommunication and interaction with IAVs."The first tier of the proposed framework is based upon the implementation of software agents for promoting collaboration and negotiation among the high-level entities of a SC, supported by an added layer of integration with the SC's information systems.Multiple agents register dynamically to the network in order to seize resources, manage the flows of the digital SC using the appropriate communication and coordination protocols."Directory facilitator agents can be used as a reliable source of information for the existence of ready to use agents and for keeping the agents' types and skills.Each agent possesses de facto knowledge about the real-world environment and tries to enrich this knowledge by exchanging information and negotiating with other agents."Software agents organise the typical flows while updating their knowledge base and the SC's information systems at every tier.Agent communication and message exchange follow Extensible Markup Language based ontologies in order to provide a clear context to all conversations and follow specific standards.Furthermore, the framework implements specific interfaces with enterprise resource planning and manufacturing oriented information systems.The business to Manufacturing Markup Language of the ISA95 model is an XML based solution for ontology driven conversations that defines the context of each message exchange.At the same time, IAVs provide feedback to the software agents directly from the physical entities of the manufacturing environment.IAVs are considered highly critical entities for the system as they enable the dynamic reconfiguration of the production life cycle according to the market needs and can be considered as the control unit that supervises the shop floor processes and data.To this effect, IAV agents organise the intra-logistics, share information and interact dynamically with the software agents in real-time.The framework illustrated in Fig. 
2 assumes that the IAV systems implement the basic principles of a ready to use Cyber-Physical System, characterised by: real-time smart connection of the entities, conversion of data streams to information, a flexible cyber level, a cognition level for decision-making, and a dynamic reconfiguration level of the system.The detailed description of the second tier of the framework is hidden at high level interactions of the software agents as IAV agents provide the production data to the first tier and establish a two-way real-time communication with the shop floor.The middle tier integrates the IAVs into the SC ecosystem and seamlessly links the automation hierarchy levels.IAVs are among the enablers of the Industry 4.0 revolution by acting on behalf of market demand for end-to-end automation as they can be effective and efficient through providing flexible manufacturing, preparation activities for production and logistics for supporting mass customisation, overnight rearrangement of inventories, reporting activities, and monitoring and calculation of key performance indicators for the SC ecosystem.From an environmental and social point of view, IAV systems result in minimised waste flows and damaged products, enhanced energy optimisation, minimised property damage and human errors, and further prevent loss of life and fatalities in industrial manufacturing facilities.Below, we specifically discuss the key role of communication standards and the selected architectural backbone for the case of developing bespoke simulation software tools that integrate innovations in intelligent transportation systems, SC operations and sustainability ramifications.IAV systems are covering the field, control, process, plant management and enterprise resource planning levels of the automation hierarchy.In order to interoperate with many multi-disciplinary tiers, the role of communication standards is critical.Modelling techniques are necessary in order to increase the replicability of the framework and decrease ambiguities.Ontologies are creating the proper context for the entities, their properties and the underlying processes, enabling software tools to interpret data in a more sophisticated way.Indicatively, the CORA ontology and the underlying extensions proposed by Fiorini et al., under the umbrella of IEEE Robotics and Automation Society, describe an industrial environment with entities, their parts, their relationships and all the necessary variables.Modelling languages, like the MES Modelling Language, integrate all the views for describing manufacturing environments, the entities and the underlying interdependencies.Especially, the MES Modelling Language allows for the capturing of the technological structure of a plant, the specifications of the production processes and the description of the functions and manufacturing processes as to fully capture an industrial environment.In the same vein, general-purpose modelling languages, like the OMG Systems Modelling Language and the Unified Modelling Language, can be used for the representation of Cyber-Physical Production Systems in order to describe the entities of an industrial environment along with their interconnection, functional requirements and characteristics.Moreover, representing industrial processes with international standards leads to the automation of the sustainability assessment activities and to more accurate measurement of the sustainability impact of the production processes."IAV systems and simulation techniques can automate the 
recording and reporting of sustainability parameters, even at a product level, as every parameter is an inherent entity characteristic that is included in the entity's description.The proposed software framework interface allows for the monitoring of SC sustainability parameters in a bottom-up approach including the fuel consumption of the IAVs, the greenhouse gas emissions, the energy requirements of the manufacturing and storage equipment; hence, simulations support the evaluation of effluents and the assessment of controlled refuelling approaches.From a technical perspective, the proposed software framework is based upon the Model View Control approach."The MVC framework consists of three independent layers: the Model layer that describes the entities and their relationships, the View layer that provides the user interface forepart and presents the current state of the model and the output data, and the Control layer that dynamically alters the model's state after receiving events from the user interface.The MVC architecture depicted in Fig. 3 separates the user interface, the business logic, the control structures and the data access methods, and is recommended for modelling complex environments.The business logic and the control policies can be easily modified in order to highlight the role of innovations and sustainable performance.First, the Model layer uses entities to represent the physical structure of a SC system, including: resources, facility layout nodes, and transporters.Each entity corresponds to a specific data scheme in order to extensively describe the properties of the entity.The data scheme is part of a predefined ontology.Sustainability parameters are enclosed in the ontology of the model in order to precisely measure and report the sustainability impact at each operations level.Second, the View layer includes the user interface and the reporting tools.Particularly, the reporting tools are incorporated into our novel framework in order to enhance sustainability at all network echelons.Indicatively, economic viability, environmental impact and social implications that could be monitored with sensors are documented through the reporting tools.Interoperability to manufacturing and SC information systems can be established with the use of interfaces that can interlink with third-party software applications."Third, the Control layer provides a plethora of algorithms for localisation, navigation, collision and deadlock prevention, dispatching, routing, planning, and task scheduling, while taking into consideration the entities' properties and the global optimisation parameters.The algorithms determine the vehicles’ autonomy level that has been the subject of the Autonomy Level for Unmanned Vehicles framework.The Autonomy Level for Unmanned Vehicles framework recognises fully autonomous, semi-autonomous, tele-operated, remote controlled and automated robots.Sustainability focused optimisations are included in the implementation of the planning, scheduling and routing algorithms in order to minimise the sustainability impact of a SC.The Control layer is responsible for the characterisation of the system as hierarchical or decentralised.The second tier of abstraction can describe an IAV system in business terms and demonstrate its value at upper, middle and low management activities."Upper management activities involve the company's Business Logic and inform the Control layer with specific procedures and goals, while the results are presented in the View layer, provide 
feedback to the board members and conclude to strategic decisions for optimising the Business Logic.The modeller can represent the physical hardware components as simulation entities in the Model layer while the Control layer acts as a middleware that enables communication with the modelling elements of the Model layer.The Control layer interfaces lower layer physical devices using the Model layer in order to coordinate their activities, enable their cooperation in the manufacturing area and present the underlying activities in the View layer.Low management decisions are usually made at the Physical level and can leverage the output of the DSS and the MIS where middle management activities prevail.Following the key characteristics of the IoT, communication between the aforesaid three layers is critical in order to ensure operational excellence of IAVs and sustainable performance of modern SCs within a digital context.Therefore, collaboration and interoperation between the entities is handled by a middleware layer, including exchanging of messages and coordination among entities and activities.Data structures, information about the entities and the resulting sustainable performance for any system state are stored in database structures.To this effect, feedback mechanisms are necessary to reconsider the operations of all entities and promote supply network sustainability.The third tier is beyond the scope of the current research as it involves the functional details of the manufacturing and facility layout equipment, resources, third-party equipment standards, communication capabilities and interoperability issues of diversified devices.The second tier of the framework attributes each distinct equipment node or resource node with an interface that enables the input/output functions and the communication and coordination of the devices with the third tier."Resources and equipment with hardware and software interfaces enable the proper execution of the framework functions without the complete knowledge of the third layer's individual components. 
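The Model–View–Control separation described above can be made concrete with a minimal sketch. The following Python fragment is purely illustrative (the pilot tool discussed later in this paper was developed in C#): class names, the dispatching rule and the placeholder distances are assumptions, and the point is only that entity state (Model), decision logic (Control) and reporting (View) are kept in separate, loosely coupled layers.

```python
# Minimal MVC-style decomposition for an IAV intra-logistics simulation.
# All names, rules and numbers are illustrative; they do not correspond to the
# authors' C# implementation.

class Transporter:                        # --- Model layer: an entity and its state
    def __init__(self, vehicle_id, fuel_type):
        self.vehicle_id = vehicle_id
        self.fuel_type = fuel_type        # e.g. "diesel", "lpg", "electric"
        self.position = (0, 0)            # grid cell occupied by the vehicle
        self.loaded_km = 0.0
        self.unloaded_km = 0.0


class Controller:                         # --- Control layer: dispatching logic
    def __init__(self, transporters):
        self.transporters = transporters

    def dispatch(self, pickup, dropoff):
        """Toy rule: assign the transporter closest to the pickup point."""
        vehicle = min(
            self.transporters,
            key=lambda t: abs(t.position[0] - pickup[0]) + abs(t.position[1] - pickup[1]),
        )
        # A full Control layer would plan a collision-free path here (e.g. with A*)
        # and schedule the task; the distances below are placeholders.
        vehicle.unloaded_km += 0.1
        vehicle.loaded_km += 0.3
        vehicle.position = dropoff
        return vehicle


class ReportView:                         # --- View layer: presentation of model state
    @staticmethod
    def sustainability_report(transporters):
        for t in transporters:
            print(f"{t.vehicle_id} ({t.fuel_type}): "
                  f"{t.loaded_km:.1f} km loaded, {t.unloaded_km:.1f} km empty")


if __name__ == "__main__":
    fleet = [Transporter("AGV-1", "electric"), Transporter("AGV-2", "diesel")]
    Controller(fleet).dispatch(pickup=(2, 3), dropoff=(7, 1))
    ReportView.sustainability_report(fleet)
```

In a full implementation, the Control layer would host the localisation, routing, collision- and deadlock-prevention and scheduling algorithms listed above, while the View layer would feed the sustainability reports and the interfaces to third-party information systems.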
"Interfaces are focusing on an input/output basis, omitting technical details and functional requirements of third-party manufacturers' devices.The application of bespoke simulation software for assessing sustainability performance in SC operations is highlighted using a custom developed tool as a case study.Our modelling approach focuses on a decentralised multi-agent system for the control, communication and coordination of all the involving entities.Design, control and integration with manufacturing environment through agent-based systems are widely studied and all decisions are made in real-time.The developed simulation tool can model an IAV system operating at the second tier of the proposed software framework, consisting of two major modules: the basic model building module for the modelling of the facility layout, and the vehicle management module for realising the IAVs’ control layer and the environmental sustainability reporting.Both modules are in the pilot stage of development and are continuously being improved and expanded to allow greater functionality.The programming code for both the aforementioned modules was developed in the Microsoft Visual C# 2010 development environment.The basic model building module allows the representation of the facility layout of a working environment.Thereafter, following a number of steps the user can simulate a number of specific fuel type autonomous vehicles in a customised facility layout.The simulation stepwise process, illustrated in Fig. 4, includes the following:Step #1 – ‘Create Grid Layer’."The pilot software's interface resembles a grid with a menu for adjusting the parameters of the model.The resolution of the grid layout could be adjusted in order to better represent the physical world, capture the physical entities and enable a precise movement for the transporters.Except for the basic menu functionalities the user can select entities and insert them to a specific cell in the grid layout.Step #2 – ‘Insert Map Layer’.The user can load the digital drawing of a facility layout.This enables the immediate transfer of the real world to the simulation environment, with the proper scaling.Step #3 – ‘Insert Static Objects’."The user should use the basic menu to insert manually the facility's entry point, the storage separators, the loads, the refuelling points and the output gate.Step #4 – ‘Insert Dynamic Objects’.At this stage, the user can insert the number and the fuel type of the desired transporters, and select the preferred routing algorithm to pick up and deliver the load to the exit point.The research value of the algorithm is the ability to dynamically calculate optimal routes based on obstacles present in the layout.The user can then save the facility layout map along with the selected static and dynamic objects.Step #5 – ‘Run Simulation, Assess the Model’.At this stage, the user can run the simulation based on the defined entities and view, in real-time, the environmental sustainability report and status of each autonomous vehicle."The vehicle management component's interface allows the real-time reporting of the IAV's environmental sustainability performance through indicating the emissions of carbon dioxide, carbon monoxide, nitrogen oxides, total hydrocarbon and the total Global Warming Equivalent.The lift state is also dynamically presented as the vehicle operates in the facility layout, taking into consideration the loading and the unloading kinematics of the vehicle and the type of the vehicle.At the current stage of 
software development, the user can choose between three distinct forklift types based on the power source used: liquefied petroleum gas, diesel, and electric. Both loaded and unloaded travel times are taken into account for the measurements. It must be stated that the transporters' movement towards the load and the output gate is automatically triggered by the Control layer. The transporter follows the shortest path in the facility using the A* (A star) algorithm in order to minimise the total distance travelled in the facility, while avoiding both static and dynamic entities. Existing research on IAVs in the SC domain is growing, with industry needs pointing to more efficient and customisable simulation techniques. Considering also that IAVs are associated with significant capital requirements, simulation-based assessment of their productivity and operational costs could optimise SC material flows and thus justify a corresponding medium- to long-term investment. In this context, the present research sets out the process for the systematic development of simulation tools, from both architectural and practical perspectives, that can support digital transformations in supply networks. Firstly, the analysis of the reviewed software simulation tools and platforms provides useful insights indicating that the development of software for integrating IAVs in sustainable supply chains should use a decentralised approach to comply with emerging communication standards, while the developed software tools must be hardware-independent and should be interoperable with available third-party software applications. Secondly, considering the findings of the review and categorisation of software tools and platforms, we propose a framework depicting an appropriate software structure and key design elements for implementing interconnected IAVs across SC operations in order to evaluate the expected sustainability performance of the network. The framework we propose is structured so that the analysis of IAVs can be directed towards sustainability evaluation for a range of facility layout designs, capacity scenarios, types of intelligent vehicles and routing algorithms, in order to provide more sustainable products, services and product-service combinations within the digital economy landscape. In this regard, a software application could adapt to advances in environmental regulations and support firms in considering regulations as enablers for growth. From a technical perspective, the proposed software framework enables practitioners and academics alike to integrate commercial IAVs into the SC ecosystem by treating the vehicles' characteristics as member variables at the simulation Model layer, using entities that fit those characteristics.
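The kind of eco-indicator calculation behind the vehicle management module's real-time report, and behind the fuel-type comparison summarised in Table 2 below, can be sketched as follows. The sketch is in Python for brevity (the pilot modules themselves were developed in C#), and all emission factors and global-warming-potential weights are invented placeholders rather than the coefficients used in the study; the Global Warming Equivalent is approximated here as a simple weighted sum of the individual pollutants.

```python
# Illustrative eco-indicator report for a single vehicle, based on its cumulative
# loaded and unloaded travel distances.  All factors are invented placeholders.

EMISSION_FACTORS = {                 # fuel type -> grams emitted per km travelled
    "lpg":      {"CO2": 900.0,  "CO": 4.0, "NOx": 1.5, "THC": 0.8},
    "diesel":   {"CO2": 1100.0, "CO": 2.0, "NOx": 6.0, "THC": 0.5},
    "electric": {"CO2": 0.0,    "CO": 0.0, "NOx": 0.0, "THC": 0.0},  # tailpipe only
}

# Placeholder weights used to fold the pollutants into a single
# "Global Warming Equivalent" figure (grams of CO2-equivalent).
GWP_WEIGHTS = {"CO2": 1.0, "CO": 2.0, "NOx": 7.0, "THC": 10.0}

LOADED_PENALTY = 1.3                 # loaded travel assumed to emit more per km


def eco_indicators(fuel_type, loaded_km, unloaded_km):
    """Return per-pollutant emissions (g) plus an aggregate CO2-equivalent figure."""
    factors = EMISSION_FACTORS[fuel_type]
    effective_km = LOADED_PENALTY * loaded_km + unloaded_km
    emissions = {gas: rate * effective_km for gas, rate in factors.items()}
    emissions["GW_equivalent"] = sum(GWP_WEIGHTS[g] * emissions[g] for g in factors)
    return emissions


if __name__ == "__main__":
    # Example: a diesel forklift that travelled 12.5 km loaded and 9.0 km empty.
    for gas, grams in eco_indicators("diesel", loaded_km=12.5, unloaded_km=9.0).items():
        print(f"{gas}: {grams:,.0f} g")
```

A fleet-level report such as Table 2 would then simply aggregate these per-vehicle figures for each fuel type over the simulated horizon.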
"The framework further enables the vehicle's navigation as well as dispatching, planning and scheduling activities at the Control layer and allows the comparative analysis of the derived sustainability performance in the View layer.Moreover, it is feasible to assess the sustainability performance of the same IAV units in multiple facility layouts by altering the shop floor entities at the Model layer of the framework.Sustainability ramifications comprise the fundamental basis of the proposed software framework as each entity at the Model layer has properties that capture the environmental, economic and social parameters.The Control layer embraces the actions that each entity can undertake considering that each action is associated with specific and measurable sustainability impacts.Sustainability reporting is inherent to the system and is automatically and dynamically generated.Particularly, the framework provisions the precise sustainability metrics for the entities, the processing and the overall IAV kinematics.Although the proposed framework could address demand volatility, the tier approach separates low-level shop floor activities, high-level shop floor activities and SC coordination.The agent-based approach provides the system with decentralised management and enables the dynamic reconfiguration of the SC network and the shop floor.Agents can dynamically register and unregister without disrupting the stability of the system while IAVs can adjust the manufacturing schedule by taking into consideration multi-optimisation parameters.Thirdly, the development of the demonstration pilot simulation software sets out a five-stage stepwise process recommended for the design of flexible tools that capture the characteristics of dynamic manufacturing contexts and support the effective integration of IAVs in operations.In this regard, at a first stage, a software application primarily needs to have a clear interface that allows a gamut of elements to be applied and imported.Following that, at a second stage, importing the layout of an industrial environment into the software should be simple and transparent to avoid challenges associated with advanced digital drawing skills.Importing ready-to-use layout maps allows, at a third stage, the manual insertion of static objects that represent the design specification elements of a facility including barriers, walls, loads, charging/parking stations, and input/output gates.This digital replication of the industrial environment along with its specifications could, at a fourth stage, inform about the available or required dynamic objects that define operations.More specifically, a simulation software should be able to capture real-world shop floor entities along with their properties, manufacturing machines, sensors, programmable logic controllers, actuators and finally the transport vehicles with their specific characteristics, properties and capabilities.At the last fifth stage, all entities have been recognised and captured in a software environment to proceed with simulating alternative scenarios and assessing IAVs’ operations efficiency and sustainability performance.In terms of demonstrating the applicability of the developed software, we used the case of a customisable warehouse facility as a simulation testbed.Table 2 summarises the simulated eco-indicator results according to the specific fuel type of utilised AGVs, taking into consideration the cumulative loaded and unloaded travel distances calculated in the presented model.Facility managers 
can identify the operational needs of the facility while considering environmental sustainability parameters. This research sets out the process for the systematic design and development of simulation software tools for integrating IAV systems in digital supply networks, further enabling the simulation and sustainability assessment of the performed operations. The applicability of the proposed framework is demonstrated through a custom-built simulation software tool that is tested on the case of an IAV system operating in a conceptual and highly customised warehouse. The pursuit of truly digital SCs and smart manufacturing systems exerts considerable pressure on the frontiers of automation by emphasising the adoption of IAVs to promote workable relationships among environmental performance, economic growth and social benefits. The transition from traditional SCs to digital networks in a viable manner that promotes sustainability compels the utilisation, integration and coordination of IAVs across different levels of operations. Taking this perspective into consideration, the present study first provides a critical categorisation of software tools and platforms used for simulating IAV systems in SC ecosystems. The findings of the review indicate that the use of simulation methods in industrial manufacturing and SC management sectors is continuously growing, while the emerging need for highly efficient and customisable simulation techniques encourages the development of hybrid simulation methods combining agent-based simulation with artificial intelligence. In this context, this paper contributes to sustainable SC research by identifying a framework for the systematic development of simulation software tools that combine environmental sustainability performance with operational elements. In particular, the presented software framework supports the process of simulating and evaluating global supply networks by allowing for the development of customised simulation tools dedicated to sustainable SCs defined by IAV platform technologies, while simultaneously providing expanded applicability across all abstraction layers for generating meaningful insights into real-world scenarios. Furthermore, we test the proposed framework by programming a corresponding, highly customisable simulation software tool, and we recommend a five-stage stepwise process for developing software tools that support the effective integration of IAVs in sustainable supply networks. We finally apply the developed software tool to the case of a customisable industrial warehouse and monitor the resulting eco-indicators of the utilised IAV system. Some limitations of this study are evident; however, they provide stimulating grounds for expanding our research horizons. Firstly, the provided analysis framework was developed using an extensive literature review and our limited knowledge of real-world working environments, while testing and refinement were restricted to the intra-logistics operations of the conceptual warehouse. In this sense, input from academic experts and practitioners may provide greater insights into the framework of integrating IAVs in digital SCs for promoting sustainability, and would further facilitate the generalisability of the findings. Secondly, the bespoke software tool presented in this study captures environmental sustainability performance metrics, but does not quantify economic and/or social considerations. Ongoing research and programming work aims to
extend the tool's capabilities and provide greater flexibility and customisability.With respect to future scientific directions, we aim to demonstrate the applicability of the proposed framework on real-world settings, initially through the case of last-mile logistics operations and specifically through the case of urban consolidation centres in clinical trial and pharmaceutical supply networks.To date, the extant literature on intelligent systems and simulation appears to have a myopic focus on warehouse management operations, meaning that the existing studies have largely not considered diversified, yet coordinated, operations across a supply network.This research could promote the development of a novel framework that integrates IAVs in end-to-end operations so as to foster sustainability performance and to ensure network competitiveness. | The principal objective of this research is to provide a framework that captures the main software architecture elements for developing highly customised simulation tools that support the effective integration of Intelligent Autonomous Vehicles (IAVs) in sustainable supply networks, as an emerging field in the operations management agenda. To that end, the study's contribution is fourfold including: (i) a review of software simulation tools and platforms used in assessing the performance of IAVs interlinked with sustainability ramifications in supply chain (SC) ecosystems, (ii) an integrated software framework for monitoring and assessing the sustainability performance of SCs defined by the utilisation of innovative IAVs in operations, (iii) a translation of the proposed SC framework into a corresponding software application through a robust five-stage stepwise process, and (iv) a demonstration of the developed software tool through its application on the case of an IAV system operating in a customisable warehouse model. Our analysis highlights the flexibility resulting from a decentralised software management architecture, thus enabling the dynamic reconfiguration of a SC network. In addition, the developed pilot simulation tool can assist operations managers in capturing the operational needs of facilities and assessing the performance of IAV systems while considering sustainability parameters. |
302 | Critical factors for crop-livestock integration beyond the farm level: A cross-analysis of worldwide case studies | During the mid-twentieth century, in numerous countries of the Northern hemisphere, agriculture has evolved towards mono-cultural production systems, aimed to maximize yield to satisfy both local and export food demands.This evolution occurred through accelerated mechanization; increased use of fossil fuels, fertilizers, and pesticides; and globalization of agricultural markets.These changes in farm technology and market conditions allowed for the specialization and enlargement of production.Since then, stringent environmental regulations, detailed animal welfare demands, and higher product quality standards strengthened this trend by requiring increased expertise from farmers, while the environmental impacts of specialized agricultural systems are no longer accepted by some society members.Diversified systems, such as integrated crop-livestock systems, promote ecological interactions over space and time between system components and allow farmers to limit the use of inputs through development of 1) organic fertilization from livestock waste and 2) diversified crop-grassland rotations to feed animals.When well suited to local conditions, such integration improves nutrient cycling by re-coupling nitrogen and carbon cycles.It can also generate higher economic efficiency by reducing production costs and risks, with regard to market fluctuations.However, the major constraints of on-farm integration are related to the limited farm workforce available, combined with a loss in the skills and knowledge required to optimize both crop and livestock sub-systems.As an alternative to on-farm integration, several authors suggest that integration can be structurally organized at larger scales than the farm, through cooperation among specialized livestock and arable farms.In such an organization, some of the synergies normally provided by on-farm integration can be obtained, but determine much smaller increases in farm workload, complexity of rotations, skills, and infrastructure for the individual farms involved.Since involved farms have opportunities to develop diversified crop rotations, integrate legumes or grasslands, and apply manure, they can also exploit a diversity of environmental benefits, such as biological regulation of pests and diseases, and improved soil quality.However, there may be several environmental limitations, depending on the level of spatial and temporal integration.These include green-house gas emissions associated with trucking around manure, and mismatches between nutrient supply and demand.Crop-livestock integration beyond the farm level can take several forms.According to several authors, three main types of integration projects can emerge, depending on the level of spatial, temporal, and organizational coordination among farms.The first and simplest form is a partnership between specialized crop and livestock farms, where they exchange raw materials.A second type of direct exchange can be organized by local groups of crop and livestock farmers negotiating land-use allocation patterns.Furthermore, a third type involves upscaling to, for instance, a regional scale where spatially separated groups of specialized livestock and crop farmers integrate through coordination by a third party.Here, the farmers involved are not necessarily communicating directly.Organizational challenges farmers face when they initiate, implement, and sustain projects of 
crop-livestock integration can be obstacles to the success of entire projects, regardless of their type.This is because integration beyond the farm level always requires coordination among multiple participants and the management of trade-offs between individual and collective objectives and performances.The time and money spent for coordination and management may be additional costs in addition to the implementation costs of on-farm integration, needing to be minimized.Due to a lack of adequate measures and framework for the analysis of organizational coordination, the critical determinants of the emergence and outcomes of integration beyond the farm level are not analyzed.As such, research has been sparse on how farmers strategically and collectively overcome these challenges.This lack of knowledge limits crop-livestock integration beyond the farm level.In this context, our study first proposes an analytical framework to address crop-livestock integration beyond the farm level, from the perspective of Williamson’s transaction costs economics and Ostrom’s institutional analysis and development framework.We use this framework for cross analyzing six projects as case studies, in which we assess the determinants of the emergence and outcomes of integration.Here, the emergence and outcomes are evaluated qualitatively as transaction costs derived from the three phases of project development: information gathering, collective decision-making, and operation and monitoring.Based on our interpretation of these six projects, we identify attributes crucial for crop-livestock integration development and durability.By so doing, we try to understand farmers’ collective strategies to reducing integration transaction costs.Finally, we conclude with policy implications and recommendations for the further development of crop-livestock integration beyond the farm level.Applications of the theory of transaction cost economics allowed us to analyze crop-livestock integration projects, to explore organizational challenges of farmers in initiating, implementing, and sustaining integration beyond the farm level.Transaction costs can be defined as the costs arising not from the production of goods, but from their transfer from one agent to another.They take numerous forms, and Matthews distinguished ex-ante and ex-post transaction costs respectively corresponding to the processes of achieving an agreement and continuing to coordinate its implementation.As already discussed by Asai et al., transaction costs have a major impact on the arrangement of integration beyond the farm level.Based on the literature, we identified three main types of transaction costs: information gathering, collective decision-making, and operational and monitoring costs.Information gathering costs comprise the costs of acquiring knowledge of the resource and its users, and of identifying suitable trading partners.Collective decision-making costs include cost incurred by planning and coordinating resource distribution, by taking the other farmers’ usage patterns into account and physically negotiating the terms of an exchange.Operational costs are the costs of actually carrying out integration and, in some cases, they may include the costs of formally drawing up a contract.The actual on-going exchange needs to be monitored to ensure the terms of the agreement are carried out by the partners, resulting in accumulation of monitoring costs, including those for resolving conflicts.Transaction costs are faced by all integration participants, but 
in this study, we mainly focus on: 1) farmers trying to make a farm-to-farm partnership arrangement; 2) local groups of farmers trying to integrate crop and livestock; and 3) farmers trying to contract with others through a local economic organization, such as cooperatives at the regional level.Application of transaction cost economics to analyzing crop-livestock integration projects from the viewpoint of transaction cost minimization enables us to understand the strategic choices of these farmers.Considering the transaction costs of crop-livestock integration as the unit of analysis, we propose an analytical framework that enables us to explore the organizational coordination among farmers regarding resources, land, and labor sharing.The framework encompasses three functions.First, it allows identifying and evaluating various factors that influence organizational coordination, measured by analyzing transaction costs.Second, it deals with the temporal dynamics of crop-livestock integration, covering each phase of contracting in relation to the three types of transaction costs, but also the entire process.Finally, it provides the analysis outcomes as strategic descriptions on how collective farmers try to minimize transaction costs under various conditions.We have built on Ostrom’s IAD framework to develop our own framework.We selected the IAD framework as the foundation, because: 1) it is well-suited to the analysis of collective actions across different resource systems; 2) it is highly adaptable, as demonstrated by the wide range of available applications; and 3) it supports transaction costs economics.Devaux et al. adjusted the IAD to analyze collective action in market chain innovation.We proceeded similarly by slightly modifying Ostrom’s framework to better match the challenges of crop-livestock integration beyond the farm level.The novelty of our framework is integrating the three phases of crop-livestock integration implementation in the Organizational Coordination Arena, particularly Ostrom’s Action Arena.The Organizational Coordination Arena is influenced by four sets of variables, leading to different outcomes.Based on the literature, we identified a number of factors that are likely to influence organizational coordination and thus transaction costs in the context of crop-livestock integration.These factors can be divided into external environment on one side, and internal characteristics on the other.The performance of crop-livestock integration contracting can be measured by analyzing the transaction costs associated with organizational coordination.The identification of factors that impact transaction costs can be assessed for 1) each phase and 2) the whole process.As indicated by the broken lines in Fig. 
1, these outcomes may influence the processes that take place within the Organizational Coordination Arena.For example, successful coordination among farmers may stimulate participants to invest more time and resources into joint activities.Over time, outcomes may also influence the four groups of exogenous factors.For example, successful coordination may predispose policy makers to support future programs involving crop-livestock integration.The framework is applied to six case studies of crop-livestock integration beyond the farm level, already implemented or where implementation is ongoing.These six case studies are from four countries with distinct farming systems.There is a wide variety of studies about crop-livestock integration beyond the farm level, but we selected these six based on the following two criteria.First, case studies had to match the above-cited types of crop-livestock integration beyond the farm level: 1) farm-to-farm, 2) local groups, or 3) regional integration through a third party.Second, case studies had to have been observed and documented during the three phases of project initiation, implementation, and monitoring.We thus identify factors that increase/decrease the transaction costs of crop-livestock integration during each phase, but it may also be interesting to see, for instance, if high investments during one phase may result in lower costs in other phases or for finalized entire projects.Additionally, some projects may succeed in starting, but fail in the long term.Therefore, our case study approach focused not only on successful projects but also failures, to assess influential factors.Although most of the literature on crop-livestock integration beyond the farm level describes current situations or possible arrangements, literature on temporal dynamics is scarce.Based on the available literature and the authors’ participation in actual projects, we selected two farm-to-farm projects from the Netherlands and the USA, two local group projects from Japan and France, and two regional projects from Japan and France.It should be stressed that, as mentioned earlier, these six case studies are not success stories in all aspects.They have developed due to favorable conditions, but they also face challenges.From 1998 to 2004, the Louis Bolk Institute executed several crop-livestock integration projects among organic farmers in the Netherlands.Organic farming covered around 2.5% of all agricultural areas in 2003, with specialized dairy and arable production dominating land use, but with large regional differences.The Dutch government contributed to these projects by stimulating organic farming and facilitating legislative changes for organic feed and manure utilization.Integration typically involves informal, longer-distance business partnerships with limited communication and collective planning and searching for closer integration opportunities.In 2001, a project started in the south-western part of the Netherlands, primarily with recently converted farmers.Among the participants were two arable farmers managing 70 and 65 ha, respectively, who came into contact with two other dairy farmers 50 and 26 km away, respectively.To fulfil part of the dairy farmers’ feed demand, the arable farmers decided to produce grass-clover as roughage and not cereals for concentrate, as this seemed to fit better with their intensive arable rotations, dominated by potatoes and vegetables.Although transport costs would be high for grass-clover, it would provide soil organic matter and 
N-fixation, with very low labor requirements for cultivation, since all field work was done by a contractor coordinated and paid by the dairy farmers.The amount of manure exchanged depended mainly on the delivering capacity of the dairy farmers, which was below the obligatory 20% of the manure demand of arable farmers, who continued to buy organic manure on the market.Prices, volumes, seed mixtures, etc. were discussed during a yearly visit, while minor decisions were discussed by telephone.None of the agreements were written down or registered.In 2004, the two arable farmers merged their farms, while the first dairy farmer initiated an informal cooperation with three other livestock farmers to combine their demand for feed and manure supply.All livestock farmers knew each other from several producer organizations.This informal cooperation pooled organic manure supply and demand for grass-clover from the merged arable farm, in addition to buying from and selling to other arable farms.Consequently, the original second dairy farmer had to seek a new partner, a large arable farmer 14 km away.This limited distance facilitated the use of irregular vegetable surpluses to supplement the 12 ha of grass-clover.The intention of the informal cooperative was to seek opportunities to form regional integrated mixed farms, even considering the development of a “complementary organic arable farm” at closer distance.This appeared as over-ambitious, partly because the allocation and responsibility for financial risks appeared problematic and urgent, as concentrate ingredients seemed expensive to produce locally.Therefore, they continued their usual habit of one-to-one informal agreements, with prices and volumes loosely coordinated amongst the three dairy farmers.Maine, like other areas in the USA, has seen a trend toward the specialization and spatial separation of crop and livestock farms.In Maine, dairy farms are concentrated in the “dairy belt” in the central/south part of the state, which is more conducive to growing corn, while 90% of the potato farms are clustered in Aroostook County, to the north.Starting in the early 1990s, two pairs of potato and dairy farmers in central Maine started to integrate their cropping systems.Since these farmers are close neighbors and, in one case, related, establishing trust and a long-term vision of mutually shared benefits was easy.From the late 1990s to the mid-2000s, University of Maine researchers and Extension and farmers began quantifying and promoting the economic, agronomic, and community benefits of such “coupled” crop-livestock integration.Due to these efforts, eight other potato and dairy farmers started integration.The key driver to short-run “coupled” integration benefits is the negative profitability of low-value food or feed grains grown in rotation with potatoes.By growing more profitable dairy forage crops, coupled farmers can mutually share benefits.If the traditional potato rotation crops of small grains and maize grain were more profitable, this short-term integration benefit would not exist.Short-run coupled relationships typically start with land swapping and, in the long run, evolve into more complex exchanges, involving feed and even shared inputs such as equipment and work crews.After a decade, the economic benefits of crop-livestock integration include a 5% higher potato yield, especially in dry years due to manure amendment in the rotation year.Additionally, fertilizer and pesticide use decrease due to the expanded land available for crop 
rotation. Expansion and increasing yields boost revenues and, when combined with reduced input costs, profitability increases. While the spatial proximity of farmers that are integrating their cropping systems is necessary, it is not sufficient, since both producers have to get along. A specialized potato farmer and a dairy farmer in southern Aroostook are next-door neighbors, yet they only briefly experimented with integration in the mid-2000s, ultimately having to terminate it because they could not agree on pH management. Conversely, over the same period, another specialized dairy and potato producer pair in southern Aroostook got along, but the excessive distance between their fields led to the end of their integration experiment. Research and farmer investigation into relocating dairy farms to Aroostook County in 2006, to facilitate more coupled integrations, has proved challenging due to the Great Recession and weak macroeconomic recovery, the increased cost of milk transport, and the resistance of Aroostook potato farmers to using livestock manure. Although this produced a formal contract for potential integrators, it has remained unused due to a lack of new farmers interested in integration. Historically, long-term integration between partnering farmers has been based on informal verbal agreements. These informal arrangements rely on the faith that both parties benefit in the long run and de-emphasize which party may be benefiting more in the short run. Despite these initial successes, participation has not expanded beyond the initial dozen farms because the limited pool of potato and dairy farmers close enough, both spatially and collaboratively, has been exhausted. For the original two pairs of coupled producers, other challenges to long-term integration have arisen over the past decade. One of the original integrators has downsized its farm from potato to mixed vegetables, becoming less integrated with the dairy farm. Another original integrator reduced integration in order to bio-digest dairy manure for energy. Unlike integration on a single farm with both crops and livestock, coupled crop-livestock integration coordinated between two or more farms can strongly depend on the operational decisions of each coupler and can change even after several years of a stable working relationship. In a region of south-western France characterized by clay-limestone hills, where soil depth varies greatly among fields, crops and grasslands coexist, resulting in highly diversified agricultural landscapes. Farms are limited in size, impeding on-farm diversification in most cases. All farmers follow organic production standards. Despite the high added value of agricultural products through direct market sales, organic fertilizers and concentrated feeds are expensive enough to be inaccessible to crop and livestock farmers. The association of organic farmers initiated a reflection on input self-sufficiency at the local level. At that time, the advisor of the association met a PhD student from INRA who was developing participatory methods to support crop-livestock integration beyond the farm level, resulting in a collaborative project. A first study involved 24 organic farmers interested in crop-manure exchanges in the region. Farm surveys were conducted to understand their motivations and farming systems. Integration scenarios were then developed and discussed with the farmers. In the end, farmers proved unwilling to start implementing these exchanges. Since livestock, crop, and combined areas were segregated, farmers perceived the distance
between farms as an overwhelming constraint. In the meantime, the French Ministry of Agriculture launched the agro-ecological plan, which promotes the development of agroecology, that is, agricultural systems that are more self-sufficient by relying on ecosystem services and that use inputs more efficiently, while minimizing negative impacts on productivity. A call for projects related to this plan was initiated in 2013 to support bottom-up initiatives favoring agroecology development. The association of organic farmers responded to this call and obtained funding for an advisor to support them in the implementation of small-scale crop-livestock integration among farms. To address the constraints related to distance between farms, the advisor of the association and INRA researchers agreed to focus on a small group of six neighboring organic farmers willing to explore and implement exchanges among farms. Four of them focus on dairy production, with cows, goats or ewes. Their feeding system is mainly based on their own grasslands and purchased concentrated feeds. Three are diversified cash crop farmers, relying on purchased organic fertilizer and holding contracts for certain crops. The six farmers involved were interested in developing crop-manure exchanges to achieve self-sufficiency for fertilizers and for animal feed at the group level. They also aimed to share skills and knowledge through these exchanges. Again, individual farm surveys were conducted and then followed by participatory design and evaluation of integration scenarios. Three collective meetings were needed to focus and refine the scenarios. In the end, farmers felt satisfied with the scenarios and were collectively empowered to begin implementation. They collectively agreed on the crop rotations needed to produce the amounts of grain and fodder to be sold by crop farmers to livestock farmers. An additional meeting was organized before sowing winter crops, to refine the scenarios according to the climate conditions of the year and establish a price index governing the exchanges. At this stage, INRA researchers withdrew from the process, leaving the responsibility for supporting implementation to the advisor. Due to financial issues in the organic farmers association, the advisor's missions changed substantially, limiting day-to-day coordination among farmers. One livestock farmer eventually forgot to buy maize from a crop farmer, which created conflict among the farmers. The project is currently on hiatus, but the farmers' association obtained funding to re-invest in collective decision-making and implementation based on revised collective rules. Rice is the most widely produced crop in Japan, including in hilly and mountainous areas. Since human consumption of rice has been decreasing, production of forage rice as whole-crop silage (WCS) has been proposed as an alternative use. This also supports the political goal of improving the country's low food self-sufficiency by increasing the domestic production of feed for locally raised animals, since Japan is a massive importer of animal feed. The rice-crop diversion subsidy under production adjustment programs encourages farmers to cultivate forage rice, resulting in a large expansion of the production area for rice WCS. Two projects of local crop-livestock integration were found in the hilly and mountainous areas of Hiroshima. In these areas, hill slopes prevent farmers from exploiting economies of scale and, thus, farm size is limited. These areas' major socio-economic challenges were farmers' aging and migration,
resulting in the abandonment of paddy fields and the destruction of local communities. Farm abandonment has particularly been seen as a critical issue, as rice paddy fields have many beneficial functions such as ground water retention, air temperature control, and flood prevention. The first project was initiated by one of the dairy farmers in the community, who conducted the first field trial of forage rice with technical support from advisors. In the meantime, a group of 18 rice farmers agreed to aggregate their small fields to produce forage rice collectively, primarily for farmland preservation in the community. The dairy farmer was also a member of this collective management and, therefore, already knew whom to contact about cropping changes. Later on, the farmer became a leader, coordinating communal integration. In 2001, the leader, together with another dairy farmer and the three rice farmers, started a cooperative to buy a special machine to cultivate WCS; the machine is now owned by the cooperative. As of 2010, a total of 12 ha of paddy fields (7 ha of collective farmland from a group of small rice farmers and 5 ha from three individual rice farmers) was converted to produce forage rice for two dairy farmers in the same community. Manure from the dairy farmers is first composted and then applied to the paddy fields, while the forage produced is partly sold to a beef producer outside of the community. As integration was mainly organized between these five specialist farmers, decision-making flexibility is high.
Another project, in a second community in Hiroshima, involved two rice farmer associations, which started growing forage rice in 2002 on a total of 11 ha of their paddy fields. They sell WCS to four neighboring dairy farmers. These two associations are similar to the collective group in the first example, but they consist of more participants. One of the associations initiated the crop-livestock integration with an advisory service's technical support, and now takes economic responsibility for producing, selling, and delivering WCS to dairy farmers. The association also receives manure from dairy farmers and applies it, once composted, to the paddy fields. The costs of this manure application may be higher than the benefits, but the association keeps this agreement to maintain stable partnerships with the dairy farmers. The first year's transactions resulted in a bad reputation among dairy farmers due to low forage quality, with some dairy farmers even terminating their contracts. The association made its best efforts to improve WCS quality and rebuild trust with dairy farmers. Beyond sustaining forage rice quality, however, the association has a low willingness to increase yields and, thus, make more profits, because of the production adjustment program. Nevertheless, most participants are satisfied with the contract because forage rice was easy to adopt, given their past experience of rice farming, and it was well subsidized.
The Nasu region, located about 180 km from Tokyo, consists of three municipalities covering the northern border of the Kanto plain and the Nasu highlands. Nearly 90% of the farmland in southern Nasu is used for rice paddy fields, whereas intensive livestock production is a major economic activity in the entire region. Dairy farmers are concentrated in central and northern Nasu and are major milk suppliers to the Tokyo metropolitan area. Since farm types are location-specific within the region, there were no contacts between livestock farmers and crop farmers. Prior to the establishment of crop-livestock integration, livestock farm size increased, resulting in 1) declining self-sufficiency of animal feed as the number of cattle per farm increased and 2) a greater need for managing excess manure. Under these circumstances, a regional-scale exchange between forage rice and composted manure has been adopted. The establishment of a contractor company was triggered by a study group of dairy farmers using the same feed center, built by a local feed company in 1999. The study group participants innovatively improved animal feed intake and increased self-sufficiency, given the growing concerns over resource recycling and the environmental impacts of farming. This initiated regional-level crop-livestock integration. In 2007, with financial and technical support from the feed company and advisory services, the contractor company was launched to coordinate WCS production and facilitate exchanges between dairy and forage rice farmers. Thirteen initial investors were responsible for 78% of the contractor company's investments, including six livestock farmers from the study group, two crop farmers, and one mixed farmer. The idea was to connect livestock and crop farmers at the regional scale, so it was essential that farmers from both the livestock and crop sectors joined company committees. The establishment of the contractor company was separate from the community-based networks, emphasizing functionality and unity of purpose. The representative of the company was selected from among the livestock farmers.
The main tasks of the contractor company are to produce WCS on the fields of about 35 contracted rice producers and to sell the forage to about 30 dairy farmers. In 2013, 60 ha of paddy fields across three municipalities were under contract for forage rice production. Since long-lasting, sustainable partnerships are one of the goals of the contractor's committee, it offers forage rice producers a special contract to buy their forage at a fixed price, so forage producers are always assured that they can use forage rice production as a stable income source. Since 2010, the contractor has also spread composted manure on the fields of some of the contracted forage producers. The manure application cost has been subsidized by the government for those who produce forage rice as WCS. To implement this contractor-based crop-livestock integration, certain economies of scale are needed to compensate for investment costs. For instance, in 2009, the company adopted a new harvesting machine to increase work efficiency. Prior to adoption, field trials and a feasibility analysis, under technical advice from the research institute, had been conducted and discussed among committee members. Since then, the contractor has been more active in finding new WCS buyers. Attracting more buyers is feasible, since the price of imported feeds is uncertain, locally produced forage is preferred due to food safety concerns, and WCS quality is guaranteed among current users. However, the paddy fields under contract are scattered throughout the region, increasing harvesting and transportation costs for the contractor.
Terrial is a private company belonging to the large agrofood cluster APRIL. In the 1990s, some confined dairy, pig, and poultry farms faced problems complying with EU standards for manure application on fields, caused by high animal densities. Farmers did not want to reduce herd size and could not identify additional land for spreading manure. Therefore, manure had to be exported from these farms. Terrial was created for this purpose in 1996, and it organizes the production, processing, transport, and commercialization of composted manure from intensive livestock farms, selling to a diversity of farms in cropping areas. Farmers pay for this service to maintain their industrial efficiency while complying with environmental standards. The manure collection area covers three French regions in western France: Brittany, Normandy, and Pays de Loire. Composted manure is distributed over large areas specialized in crop or vine production: cereal plains in central and south-western France and the Bordeaux vineyards. It is sold as organic fertilizer to local cooperatives by the APRIL group. Some products are even organically certified. The supply chain is organized as follows. First, animal manure is composted either on farm or on industrial composting platforms. The total amount of compost produced by Terrial is 100,000 metric tons/year. It is then processed by granulation at three sites using renewable energy, notably biomass boilers fed by sunflower processing waste from APRIL's animal feed factories. Transport using large trucks is organized by Terrial. To sell these manure-based products, pricing has to be competitive with synthetic fertilizers. Processing, storage, and transport costs are added to the final price to determine whether the livestock farmer will earn something from the manure or have to pay for its removal. Generally, if the manure is composted on farm, the livestock farmer will earn some money. However, if raw manure is collected, farmers pay a small amount per metric ton to cover composting costs. The organization and coordination of manure transfers by Terrial was made possible by large investments in composting platforms and granulation factories. Terrial has around 15 workers: drivers, workers on composting platforms and in granulation factories, commercial agents for selling products, and research and development staff who develop and monitor the organic fertilizer production process. Cost-benefit analysis and a market survey guided the development of Terrial, ensuring its economic viability. The connection with livestock farmers and the reliability of the APRIL group have led to successful supply chain development. In the future, Terrial would like to supply animal feed factories with cereals and other products coming from farms that buy Terrial's organic fertilizer. This could develop a circular economy at the supra-regional level.
Table 4 synthesizes the major factors identified through the observation and documentation of each case study. Various drivers of crop-livestock integration beyond the farm level were identified. These include regulations on manure application and input use in organic farming, financial incentives, and the presence of a coordinator. Technical support by external agents was commonly found in all case studies. The impacts of internal characteristics on the transaction costs of 1) information gathering, 2) collective decision-making, and 3) operation and monitoring are described in the following subsections, respectively. Active social networks, such as farmer associations, play a key role in lowering the costs of identifying suitable integration partners in the JPN1, FRN1, NLD, and USA case studies. Uncertainty of information about what other farmers are doing is a critical barrier to starting crop-livestock integration collectively, as it increases information gathering costs. Required information includes the quantity and quality of materials to exchange, farmers' willingness to change their current practices towards increased coordination, and the equipment available to harvest, store, and transport the products being exchanged. On the other hand, as shown by JPN2 and FRN2, being highly connected with other farmers through social networks may not be required when an economic organization has already developed a network of integrating farmers. Farmers can therefore lower information gathering costs by connecting to such an organization, as in the case of the Group Environmental Farm Planning in Saskatchewan, Canada, where information gathering costs were reduced because of assistance received from local NGOs/NPOs. When resources are neither scarce nor widely scattered in space, there is no need for farmers to invest extra time and money to look for integration partners matching their needs, as illustrated by the JPN1, JPN2, and USA case studies. Here, farms engage in “external coordination” with other farms. However, high transaction costs, including those for information collection, necessitate the “internal coordination” of crop-livestock integration through third-party organizations. The costs of collecting information increase as the degree of resource specificity increases. European regulations require 100% organic feed for organic dairy cattle, and rules for manure application from conventional livestock production to organic cropping systems are tightening. Where there is no alternative to using specific resources, finding suitable trading partners may increase information gathering costs, as partly discussed in relation to the Danish organic crop production sector. These costs may also increase when usual partners encounter problems, such as harvest shortages due to drought. Other examples of information gathering costs increasing along with resource scarcity are FRN2 and NLD: livestock farmers faced the challenge of complying with environmental regulations, making “land for spreading manure” a scarce resource. In NLD, livestock farmers could find partners, establish arrangements, and reorganize practices. In FRN2, livestock farmers were unable to overcome the information gathering costs, and thus transferred these costs to an organization with the capacity to develop a network of crop farmers willing to pay for manure. Therefore, the costs of gathering information and of actual operation should be high in areas with a high density of intensive livestock production units, as livestock farmers face high competition in gaining access to crop farmers' fields, as shown by Asai et al. Besides resource scarcity and specificity, requirements for specific equipment, machinery, knowledge, or conditions for starting crop-livestock integration may be critical factors that increase or decrease information gathering costs. For instance, in JPN1 and JPN2, rice farmers were ready to adopt forage rice production as they were experts in rice farming, but they had to dedicate some time to getting trained in using the special harvesting machine. By contrast, in FRN2, the absence of specific skills required from farmers enabled the fast development of the economic organization facilitating crop-livestock integration. As observed in all case studies, being supported by professional groups, hiring consultants, and/or collaborating with research institutes can be effective strategies to lower the costs of, for instance, planning land use by adjusting to other farmers' needs, coordinating the temporal and spatial distribution of resources, and choosing the best agricultural practices by considering partners' needs.
Previous studies found that past successful experiences of working together reduce the costs of collective decision-making, as they promote the development of social capital and trust. However, our observations on some of the case studies reveal that past working experiences may not be necessary, as long as adequate investments are made in researching the appropriate information about potential partners. By contrast, FRN2 minimized transaction costs through internal coordination by a private company with sufficient economies of scale, but the drawback of this strategy is low decision autonomy for farmers. Planning with an experienced and well-established partner is another strategy to minimize uncertainty, and it can lower collective decision-making costs. However, selecting and working only with knowledgeable and stable partners is not always feasible, as seen in FRN1 and NLD. There, several French farmers were still in the development stage, which partly explains why crop-livestock integration took more time to emerge than originally planned. For NLD, it is the structural development of organic farms that made integration challenging. These findings suggest that greater integration stability and persistence require long-term interactions. A shared willingness to achieve long-term benefits, such as through ecosystem services like soil fertility or through more stable prices, appears to be essential, as previously suggested. The costs of achieving an agreement favoring long-term benefits can be reduced at the information gathering stage by ensuring that these benefits are acknowledged and targeted by all farmers. As commonly seen in the case studies organized through face-to-face interactions between partners or among group members, the establishment of clear rules with a fair allocation of costs and benefits seems to be essential for lowering the costs of negotiation among partners/group members. These rules are not necessarily formal, as long as there are shared norms, and ideally rule-making should proceed under, and be coordinated by, appropriate leadership, as seen in JPN1, or, in a different form, where the “leader” has a dominant position towards farmers. This could be further enhanced if farmers perceive themselves as a group acting or responding jointly to a shared problem or resource, as pointed out by Mills et al. for landscape-scale resource management within agri-environment schemes in Wales.
In most case studies, to avoid contract non-fulfillment or opportunistic behavior, making a formal contract with a statement of long-term trading is safer than an informal agreement. The disadvantage of informal agreements may be a lack of the clarity that a contract provides regarding procedures for actions and the related outcomes. In FRN1, the project was compromised because informal agreements were not respected. However, exhaustive contracts are costly to develop, incurring costs from information collection requirements and from the time and other resources required during contract negotiation and completion. These costs can be covered by stakeholders of the appropriate critical size, as in FRN2. Farmers may also be unwilling to face new constraints, such as a loss of autonomy in individual decision-making and dependence on other farmers for decision-making and action. By contrast, some partnerships between US dairy and potato farmers are long-lasting without any formal contracts. Files and Smith emphasized that basic trust between individuals in Maine, USA, was a key requirement for long-lasting partnerships. Trust ensures that an exchange partner will not act in self-interest at another's expense, and it provides confidence in an exchange partner's reliability and integrity, resulting in low costs for decision-making and monitoring. Moreover, studies on other types of farmer collaboration, such as joint farm ventures, environmental management, and machinery sharing, showed that only when an informal relationship had already been established was there a commitment to formalize a long-term partnership, as seen in the Maine case study.
The success of crop-livestock integration beyond the farm level depends on the spatial proximity of farms, as shown in several case studies. Araji et al. highlight that the distance travelled during hauling and spreading is the most important variable in the cost of using manure as a crop fertilizer. Therefore, the operational costs, particularly those accrued from the physical distribution of resources, would be low when resources are available in close spatial proximity and when there is no need for specific equipment (an illustrative break-even sketch is provided after this section). Resource specificity can be a critical factor as well, as crop-livestock integration between organic dairy and crop farms may be costly in areas where organic farms are scattered. Although our case studies did not empirically compare organic and conventional systems, a study from Denmark shows that organic crop farmers need to transport manure over longer distances than conventional farmers do when receiving it from organic dairy farms. However, as illustrated in JPN2 and FRN2, the issue of spatial proximity can be overcome where the costs for participating actors to make integration happen are covered by extending the scale of integration, in terms of coverage area and number of participants, to exploit economies of scale. In contrast to the USA case study, where crop-livestock integration happens when farmers are close enough, the JPN2 and FRN2 case studies show that entities other than farmers can cover the added costs of transporting resources regionally, well beyond 20 km. This type of regional-level crop-livestock integration is relevant between areas with high specialization. Long distances to partners may prevent good communication and, thus, efficient access to proper information, increasing operation and monitoring costs. Therefore, appropriate coordination by a third party is essential. Furthermore, the operation and monitoring costs of regional crop-livestock integration can decrease due to an efficient scale of coordination, as long as the information and decision-making processes are fairly evaluated. Theory suggests that, when transactions between the same partners/group members are recurring, transaction costs across all transactions can be reduced by designing a suitable contract, which can reduce information collection and search costs for each individual transaction. A key concern for long-lasting integration is how to deal with intra- and inter-annual variations in weather and market conditions, which can compromise the amounts of resources exchanged. For instance, FRN2 crop farmers' willingness to accept manure partly depends on the fluctuating prices of mineral fertilizers: they are more open to receiving manure or using cover crops when the price of mineral fertilizers is high. Livestock farmers are obliged to dispose of manure to comply with environmental regulations, paying a fee to keep the price of manure competitive with that of mineral fertilizers. Therefore, long-lasting coordination between farmers requires that they work with this variation rather than against it, within a framework of contracts.
Despite its recognized benefits for agricultural sustainability, crop-livestock integration beyond the farm level has been poorly documented so far, which limits its implementation. We developed the IAD framework, which enabled us to assess critical determinants of the emergence and outcomes of integration, and therefore helped us better understand farmers' collective strategies to reduce integration transaction costs. The application of the framework to six case studies demonstrated that it can be applied to various projects, implemented at multiple organizational levels over distinct farming systems. It highlights that the social and organizational resources mobilized for integration depend on the agricultural context, the stakeholders involved and their prior relationships, culture, etc. Therefore, no single recipe or unique strategy can be specified, but public policies and institutions could evolve to reinforce attributes crucial for the development and durability of crop-livestock integration. In what follows, we conclude with policy implications and recommendations, highlighting the importance of financial and technical support and of strengthening social networks for the further development of crop-livestock integration beyond the farm level.
Specific policies should be developed to encourage the introduction and maintenance of integration beyond the farm level. They involve various forms of financial and technical support, targeted at different integration types, participants, and project stages. For example, crop-livestock integration at the farm-to-farm or local levels involves new transaction costs, especially in the beginning, preventing further development. Therefore, initial financial support from the government can be useful in promoting crop-livestock integration, because both the projects/organizations being developed and the financial basis of farmers are weak. For integration at the local group and regional levels, initial transaction costs can be covered by intermediary actors such as farm advisors and firms. However, from a public policy viewpoint, these costs have to be balanced against the expected environmental benefits. Knowledge development and its implementation through technical advising are also crucial, in addition to financial support from governments. One of the challenges of crop-livestock integration beyond the farm level is that farmers face more complexity, moving from doing things better to doing things differently. To obtain economic and environmental benefits, locally adapted crop-livestock integration systems need to be well designed via collaboration with scientists and advisory services. The simulation models and participatory methods developed and implemented by these actors can support the iterative design and evaluation of scenarios to characterize trade-offs among integration options and identify consensual solutions for farmers. Funding processes that develop social interactions can thus stimulate partnerships, particularly between farm associations and scientists at research institutes/universities, to explore crop-livestock integration innovations and exchange knowledge, creating mutual benefits.
Strong social networks can reduce the transaction costs associated with organizational coordination, as shown in all case studies. They help farmers identify suitable partners, develop plans collectively, and leverage available resources. Therefore, it is essential to strengthen social networks and involve the wider community, including the private sector. In France, for example, the Agroecological Plan has set up farmer groups, such as the Ecological and Economic Interest Group, to develop projects related to agroecology. These pioneering groups encourage collaboration with researchers, supply chains, and local stakeholders. Moreover, financial incentives from the government support these diverse groups of farmers. Crop-livestock integration beyond the farm level often requires new networks, as current networks often include either specialized arable or specialized livestock farmers, particularly if regional specialization exists. Stimulating new networks may require approaching local leaders or “reference farmers,” who have some influence on other farmers' behaviors, since trust is essential to strengthening social networks. Policies should thus incorporate links with existing networks and institutional arrangements in designing crop-livestock integration beyond the farm level. Furthermore, governments can motivate potential actors by distributing information on successful cases and holding outreach events to identify the environmental and economic benefits of integration. A formal legal framework for establishing crop-livestock integration can be useful, since it can increase the credibility and stability of partnerships. For instance, the creation of a formal contract, in which participants only need to fill in the relevant lines once they have agreed upon such arrangements, may help resolve conflicts. In Denmark, for instance, farmers are obliged to submit annual fertilizer accounts to the authorities, reporting on produced, applied, received, and provided fertilizer and manure. If a livestock farmer has provided manure to another farmer, a formal letter with the manure receiver's signature must be submitted. This type of obligation helps ensure that the partnership is not violated and that the terms of the agreement are carried out by the partners, resulting in lower monitoring costs and longer-lasting partnerships. This work was supported by JSPS KAKENHI Grant Number 15K18755.
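The spatial-proximity argument made in the operational-costs discussion above (hauling distance as the dominant cost of using manure as a crop fertilizer, and the roughly 20 km range mentioned for regional exchanges) can be illustrated with a toy break-even calculation. The sketch below is purely illustrative: the function name and all prices, nutrient values, and per-kilometre costs are hypothetical placeholders, not figures taken from the case studies.

```python
# Toy break-even sketch: at what hauling distance does composted manure stop
# being cheaper than mineral fertilizer? All numbers are hypothetical
# placeholders, not values reported in the case studies.

def breakeven_distance_km(fertilizer_value_per_t: float,
                          manure_price_per_t: float,
                          handling_cost_per_t: float,
                          transport_cost_per_t_km: float) -> float:
    """Distance at which transport uses up the fertilizer value of 1 t of manure."""
    margin = fertilizer_value_per_t - manure_price_per_t - handling_cost_per_t
    return max(margin, 0.0) / transport_cost_per_t_km

if __name__ == "__main__":
    # Hypothetical figures (currency units per metric ton, and per ton-kilometre).
    value = 12.0      # nutrient value of 1 t of composted manure vs. mineral fertilizer
    price = 3.0       # amount paid to the livestock farmer / composting fee
    handling = 2.0    # loading and spreading
    per_km = 0.35     # hauling cost per ton and kilometre

    d = breakeven_distance_km(value, price, handling, per_km)
    print(f"Break-even hauling distance: {d:.1f} km")
    # With these placeholder numbers the exchange only pays off within about 20 km,
    # which is the kind of threshold beyond which third-party coordination,
    # economies of scale, or subsidies become necessary.
```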
303 | Ambiguities regarding the relationship between office lighting and subjective alertness: An exploratory field study in a Dutch office landscape | Light entering the eyes reaches the rods and cones on the retina which then stimulate vision.In addition to these two photoreceptors, a third photoreceptor was discovered approximately fifteen years ago , the so called intrinsically photosensitive retinal ganglion cell.These ganglion cells capture light which has entered through the eyes and these cells initiate processes in both the Image-Forming and Non-Image-Forming centres of the brain."Previous studies indicated effects of light on human's health and well-being .These effects can be acute or circadian effects.Acute effects are, for example, alerting effects or distraction due to glare or flicker.Circadian effects are caused due to exposure to a lighting condition for a certain period of time and are, for example, the regulation of hormones or the organisation of the biological clock.The production of the hormone melatonin is one example of a hormone which is influenced by light exposure.Zeitzer et al. developed dose-response curves in order to determine a relation between light and melatonin.A mismatch between light exposure and individuals day/night rhythm can lead to a disrupted circadian system .This disruption is associated with poor health and a lower work performance .In addition, office lighting is often demonstrated to directly affect work performance .Demonstrated direct and indirect effects of lighting on health and work performance highlight the importance of the most appropriate light exposure at the right moment of time."An individual's daily light exposure consists of contributions from daylight and electric light sources. "One of the current challenges is to determine the individual's need for light to enhance their health.Since individuals differ in experiences, sensitivity, and preferences, each individual has different responses to light exposure .Therefore, it is recommended to investigate the relationship between light and health based on personal lighting conditions .The relationship between light exposure and occupational health is investigated in multiple studies .The experiments took place in laboratories, in simulated office rooms, or in realistic office buildings.The majority of the experimental studies included in the review of van Duijnhoven et al. 
, was performed under laboratory conditions whereas employees may react and behave differently in a real work environment."The actual effects of office light exposure on an employee's health need to be investigated and validated in real office environments.In order to investigate the relationship between office lighting and any outcome measure, the lighting environment needs to be identified.Identifying a lighting environment comprises multiple lighting measurements.Illuminances and correlated colour temperatures are the most common measures to map a certain lighting situation .Besides these two light parameters, the CIE proposed a protocol for describing lighting in an indoor environment including people, context, lighting systems and components, room surface light levels and distribution, task details, task area light distribution, high-luminance areas, modelling, colour appearance, and dynamic effects .In addition, light measurements can be performed continuously or at specific moments during the day.In addition, measurements can be performed person-bound or location-bound .Furthermore, light measurements can be performed inside or outside."To the authors' knowledge and based on the literature review , this is the first field study which investigates the relationship between personal lighting conditions lighting and subjective alertness, both measured at the same timestamp.No intervention to the lighting system was introduced in this study.All participants were exposed to their regular lighting environment.The study described in this research paper included continuous location-bound measurements to identify the indoor lighting environment and questionnaires to gather information about the health outcome measures.The study was conducted as part of a larger research project investigating the potential impact of office lighting on occupational health in office landscapes.The aim of this experiment was to investigate the ambiguities regarding the relationship between office lighting and SA.It was expected that the investigation of this relationship in a field study would be challenging due to multiple potential confounders.Another aim of this study was to search for aspects which potentially explain the relationship between horizontal illuminance and SA in order to be taken into account for future studies.All considered variables in this study were categorized into general, environmental, and personal variables.General variables consisted of day and time of the day, environmental variables were light, temperature and relative humidity, and the personal variables were user characteristics, self-reported sleep quality and health scores.It was expected that SA was related to all three types of variables.In addition, since individuals respond differently to changes in lighting conditions, it was expected that the correlation between SA and Ehor differed between the participants.Finally, it was expected that differences in correlations between the participants could be explained through the personal variables.The field experiment was performed during one 5-day work week in May 2016 in a two-floor office building in the Netherlands.The weather conditions varied from an overcast sky on Monday, Tuesday, and Wednesday towards a clear sky on Thursday and Friday.The dawn and dusk times were around the local times 5:30 and 21:45 respectively.The local times related to the daylight saving time in the Netherlands.The office hours of all the participants fell in this daylight period.The study location was 
a two-floor office building in the West of the Netherlands.This building was renovated in 2015 and transformed from a closed structure to an open structure with office landscapes.This office transformation is part of the new Flexible Working Arrangements ."Companies increasingly support this working practice in order to improve employee's productivity at work.The office building of the current study consists of two floors, each consisting one large office landscape.On the first floor there is one separate office landscape on the North side and there are four office spaces enclosed with glass throughout the whole office building.The first floor contains 52 desks and the ground floor contains 31 desks.The west façade on the ground floor contained daylight openings without sun shading devices.In contrast, on the first floor, the building façade was more open and this façade consisted of sun shading devices.It was not recorded when the shading devices were open or closed.In addition to the presence of daylight, electric lights were installed.The office landscapes were lit by dimmable suspended luminaires and dimmable LED spots.The electric lighting in the office landscapes was on during office hours and dimmed based on the amount of daylight.The dimming levels were logged in the lighting system.There were no desk lights available at the desks.Most lighting recommendations for Dutch office buildings are horizontally focused .In earlier times, when most offices were paper-based, it was important to focus on the horizontal light levels.Recently, the vertical lighting conditions are more important due to the digital world the office workers are currently working in.However, due to practical reasons, only Ehor at desk level were measured in this study.In order to gather continuously measured Ehor at all work places throughout the office building, the non-obtrusive method developed by van Duijnhoven et al. was applied.This method consists of reference locations at which continuous measurements are performed and predictive models between the reference locations and all other workplaces inside the office in order to estimate the lighting conditions at all workplaces.Between two and four relation measurements were performed per outcome location to create the predictive models.During the relation measurements, an overcast sky prevented direct sunlight entering the office building.The average fit between the relation measurements and the developed predictive models was 0.98 with the best at 0.99 and the worst at 0.89.The predictive models were applied using inter- and extrapolation of the relation measurement data points.The continuous estimated lighting conditions at all workplaces were used for the analysis of the relationship between light and subjective alertness in this study.In this study, Ehor was continuously measured at three reference locations throughout the office building.Fig. 
1 shows the floor plans of the office building in which the red dots indicate the three measurement locations.Two measurement locations were situated on the ground floor, respectively at a distance of 6 m and 2 m from the facade, one was located on the first floor, at a distance of 6.5 m from the facade.The three locations were spread throughout the office building and chosen based on a prior observation before the start of the study regarding the occupancy of the desks during the experiment period.Ehor at desk level was measured using Hagner SD2 photometers.The estimated Ehor at all desks used by participants during the study, varied between 219 lx and 4831 lx throughout the work week.Two days the maximum Ehor was around 2200 lx whereas the maximum Ehor on the other three days reached over 4000 lx.On the first floor, the sun shading devices caused fast decreases in Ehor whereas a more closed façade on the ground floor led to a lower variation in Ehor compared to the measurements on the first floor.The lighting measurements were accompanied by questionnaires completed by employees.Participants received a unique participant number after signing the informed consent form, in order to analyse all data anonymously."Four questionnaires were distributed during the day via participants' work email addresses.The participant number and desk number were asked at the beginning of each questionnaire.Desk numbers were asked because of a flexible workplaces policy in the office building.In reality, there were only limited changes of workplaces.The 46 participants worked at 49 different desks throughout the experiment period.Within the questionnaires, the Karolinska Sleepiness Scale was applied to measure SA.The KSS measures on a scale from 1 to 10 providing 1 = extremely alert and 10 = extremely sleepy ."The KSS questionnaire refers to the sleepiness level the last 5 min before completing the questionnaire and is a non-obtrusive way to investigate office workers' alertness.The four questionnaires were distributed at 9 a.m., 11:15 a.m., 2 p.m., and 4:15 p.m."Besides the Ehor and SA, additional aspects were objectively and subjectively measured in order to obtain more information about the work environment and the participant's conditions.In addition to the objective lighting measurements, temperature and relative humidity were continuously measured at the three reference locations.Rense HT-732 transmitters were used for measuring.The subjective KSS data was also extended with more survey results."The Short-Form 36 items is a set of easily administered quality-of-life measures and was used to measure functional health and wellbeing from the individual's perspective .This questionnaire was distributed only once, at the beginning of the study period.The health of employees is described by the World Health Organisation in the definition of occupational health: a combined term which includes all aspects of health and safety in the workplace, ranging from prevention of hazards to working conditions .The health data from the SF-36 health questionnaire resulted into eight aspects: physical functioning, role physical, bodily pain, general health, vitality, social functioning, role emotional, and mental health.All aspects were assessed using a 0–100 score, 100 indicating the healthiest.An extra question concerning sleep quality was added to every questionnaire at 9am.The statement ‘I slept well last night’ with a 5-point scale answer possibility was added to the questionnaire to include self-reported sleep 
quality."In addition to the regular questionnaires, a general questionnaire was distributed to obtain participant's user characteristics. "Age of the participant was asked with the answer options ‘younger than 25’, ’25–34 years, ‘35–44 years’, ’45–54 years, ‘55–65 years’, and ‘older than 65’.Participants were recruited after providing general information about the study.54 out of 70 employees agreed to participate and signed the informed consent form.Participation was voluntary and anonymous.In total, 570 completed questionnaires were collected.46 participants filled in at least three questionnaires.The average number of completed questionnaires was 12 with a maximum of 20.The median age was “35–44 years”, approximately 65% of the participants reported to have a 5-day work week and the average working hours regarding all participants were 7.7 hours per work day.The majority of the participants used corrective lenses and that was most of the time due to myopia.Nearly all participants rated their general health as good, very good, or excellent.The objective and subjective data were analysed using MATLAB R2015a and SPSS Statistics 22.The data analysis consisted of four steps."First, Kendall's tau correlation coefficients were calculated between SA and other variables potentially being a confounder in the relationship between Ehor and SA.All subjective alertness scores from the data set were included in these correlation analyses.These non-parametric correlations were used because the majority of the data was not normally distributed and because the SA values in the analysis were ordinal variables.All the tests were two-sided using a significance level of 0.05 to indicate statistical significance.Secondly, the relationship between Ehor and SA was investigated.The estimated Ehor at the specific desks for the same time of the day as filling in the questionnaires were selected to perform the statistical analysis.All Ehor together with the KSS data were tested on significant correlations.Individual differences for filling in the KSS required a within-subject statistical analysis.Thirdly, partial correlations were calculated for the relationship between Ehor and SA including all variables identified as confounder in the first step of the data analysis.The last step was to investigate the differences between two groups: the participants with a significant correlation between Ehor and SA and the participants without this significant correlation.The non-parametric Mann-Whitney test was applied to test whether these differences between both groups were significant.Exact significance levels were used due to relatively small sample sizes.This section provides results regarding aspects correlating with SA, the relationship between Ehor and SA with and without confounders, and the differences between participants with a significant correlation between Ehor and SA and the participants where this correlation was not significant.In this paper, the tested variables which potentially predict SA were categorized into general, environmental, and personal variables.Day of the week and time of the day were the included general variables.Light, temperature, and relative humidity were the environmental variables.User characteristics, self-reported sleep quality, and health scores were the personal variables.All general, environmental, and personal variables which correlate significantly with SA were included as confounding variables when investigating the relationship between Ehor and SA.A significant correlation 
between KSS and day of the week indicated a slightly higher sleepiness in the beginning of the week compared to the end of the week.In addition, a significant correlation between KSS and time of the day demonstrated higher subjective sleepiness towards the end of the day compared to the beginning of the day.Although the correlations were low to medium, they were significant and both day and time should be included as potential confounders for SA.The correlations between Ehor and SA and between temperature and SA were not significant.The correlation between relative humidity and SA, however, was significant.Again, although the correlation was weak, relative humidity should also be considered as a potential confounder for SA.Personal variables were subdivided into user characteristics, self-reported sleep quality, and health scores obtained from the SF-36 questionnaire.In this paragraph, the relationships between multiple user characteristics and SA were determined based on correlations.A negative significant correlation between gender and SA indicated that the female participants reported to be slightly more alert compared to the male participants.In addition, the use of corrective lenses correlated significantly with SA.The correlation between age category and SA was not significant.The participants were all working for the same company and performed similar work tasks.However, the number of work days a week and work hours a day differed between participants.SA did not correlate significantly with the number of work days.However, the number of work hours during a work day correlated significantly with SA.Self-reported sleep quality was obtained via one question in the morning questionnaire.A significant correlation between this statement and SA indicated that self-reported sleep quality was a potential predictor for SA.The positive correlation suggests that individuals who reported to disagree with the statement reported to feel sleepier in the morning.The associations between SA and the eight different health scores were tested.No significant correlation was found for SA and PF, RP, BP, or RE.Participants with a higher GH score reported to be more alert.The same applies for the VT, SF, and MH scores.All variables which showed a significant correlation with SA were included as potential confounders in the analysis of the relationship between Ehor and SA.The general variables caused medium effects on the explanation of the total variance in SA.The effect of the environmental aspect relative humidity on SA was small.The correlations between personal variables and SA were the strongest compared to the general and environmental variables.However, the correlations between personal variables and SA were still small to medium.In section 3.1.2, a non-significant correlation was described between Ehor and SA.Whereas this correlation was based on the data of all participants together, calculating correlations for each individual participant resulted in a group of six participants out of the total 46 for whom a significant correlation was found between Ehor and SA.Five of the six correlations were negative correlations indicating office workers being more alert when exposed to a higher Ehor.For one participant, a significant positive correlation was found.All negative correlations had a medium to large effect explaining the variance of SA and the effect regarding the positive correlation was medium.For the calculation of these correlations the number of data points varied between 7 and 17 per 
participant.Although Fig. 10 showed significant initial correlations between Ehor and SA, the correlations needed to be calculated including the confounding variables identified in section 3.1.4.Partial correlations were calculated by inserting the confounding variables for all participants together and for the six specific participants for whom a significant correlation was found between Ehor and SA excluding confounding variables.For all participants together, the correlation between Ehor and SA remained non-significant when all confounders were included in the analysis.For the determination of the partial correlations for the individual participants, the majority of personal variables identified as potential confounders were not used as confounder as they did not vary within subjects.The only personal confounder included in the calculation of the partial correlations was the self-reported sleep quality as this could have varied throughout the five experiment days.Table 1 shows the correlations when including none, one, or more groups of confounders.Differences were analysed between the two groups: group 1: the group of 40 participants in which no significant correlation was found between Ehor and SA and group 2: the group of 6 participants where this correlation was significant.SA in group 1 did not differ significantly from group 2.The median SA was 3 in both groups; however, the mean in group 1 was slightly lower compared to group 2, which may suggest a slightly higher sleepiness in group 2.Ehor, by contrast, differed significantly between the two groups.The mean Ehor in group 1 was 981 lx whereas the mean Ehor in group 2 was 862 lx.In addition to Ehor, the environmental parameter temperature differed significantly between both groups.The mean temperature for group 1 was 22.85 °C whereas the mean temperature for group 2 was slightly lower.The third environmental parameter, relative humidity, did not differ significantly between both groups.The mean relative humidity for group 1 was 45.46% and for group 2 46.31%.Group 2 included four male and two female participants, all aged between 25 and 44 years.They were working throughout the entire office building and their most performed task was ‘using the computer’.Four participants of these six used corrective lenses, mostly because of myopia.None suffered from colour vision problems, but one participant indicated an unspecified medical eye problem.No large differences were noticed between the groups for these categorical variables location of the office worker inside the office landscape, gender, age category, most performed task, job type, the use and reason of corrective lenses, colour vision problems, and medical eye problems.The number of work days during one week was significantly lower for group 2 compared to group 1.In addition, the number of work hours for group 1 differed significantly from group 2.The self-reported sleep quality in group 1 did not significantly differ from group 2.Regarding the eight health category scores, no significant differences were reported between both groups for PF, RP, SF, and MH.Bodily Pain, General Health, Vitality, and Role Emotional differed significantly between both groups.Table 2 provides the health category means and standard deviations for both groups.The current study investigated the ambiguities regarding the relationship between Ehor and SA, based on findings from a Dutch field study.The first step in the analysis was to identify aspects significantly correlating with SA in order to include 
these later as potential confounders while investigating the relationship between Ehor and SA.Both investigated general variables, one environmental variable, and eight personal variables were found to significantly correlate with SA.The clothing value of the participants was not included and this may explain the absence of significance for the relation between air temperature and SA.All of the significant correlations were of small to medium size.This was in accordance with the hypothesis that all three types of variables would influence SA and need to be included as confounder.The second step was to investigate the relationship between Ehor and SA.Initial correlations were calculated and this showed a significant correlation between Ehor and SA for six participants out of the total 46.However, including the confounders as identified in the first step, removed all the significance for the relationship between Ehor and SA.Including the general or environmental confounders led to some differences in significance levels whereas including the personal confounders led to no significant correlations at all anymore.This may indicate that personal variables had more influence on the SA compared to the effect Ehor had.These results are in contradiction to multiple lab studies demonstrating beneficial effects of light on SA .This discrepancy may be explained by the amount, duration or timing of the light exposure or the absence of confounders.In the current study the estimated Ehor varied throughout the entire office building at minimum between 232 lx and 2157 lx over a day and at maximum between 219 lx and 4831 lx throughout a week.However some lab studies used vertical illuminances, this Ehor range falls within their applied ranges of vertical illuminances.Ehor in the current study changed gradually over the day whereas in the mentioned lab studies the contrast between the bright and dim light condition was more noticeable.The International Commission of Illumination highlighted that the dose-response relationship between light exposure and daytime effects on alertness is essential information to determine whether or not illuminance recommendations during the day are adequate to support NIF functions .In this study, six out of the total 46 participants had significant initial correlation between Ehor and SA, excluding the confounders.However, when the confounders were included in the statistical analysis, the correlations for those six participants were no longer significant.It is of high importance to include all potential confounders while investigating the relationship between light and health.Multiple laboratory studies demonstrated effects of different lighting conditions on SA and human health .The advantage of performing a lab experiment is that the researchers are able to control potential confounders and to change only the independent variable to be investigated.The benefit of performing a field study is that the results of tests in controlled environments can be validated in a real office environment and this leads to realistic results.The major challenge of field studies is to investigate a specific relationship in a constantly varying office environment.This field study showed a significant correlation between Ehor and SA which is a ‘response-to-light percentage’ of ±13%.Similar results were found in another pilot field study performed in the Netherlands, i.e. 
for one out of the eleven participants a significant correlation was found between light and alertness level .These percentages may indicate that not all individuals are equally sensitive to changes in the lit environment.The last step was to explore differences between the groups with and without a significant initial correlation between Ehor and SA.Remarkable was the significant lower Ehor in the group where a significant relation between Ehor and SA was found compared to the other group.In addition, group 2 reported significantly less work days but more work hours per day compared to group 1.More working hours per day may cause higher sleepiness and this may have increased the probability of responding to light.Regarding the personal health scores, there were significant differences between the two groups for BP, VT, GH, and RE.Notably, the BP and VT scores were significantly lower in group 2 compared to group 1, whereas the GH and RE scores were significantly higher in group 2 compared to group 1.The relationship between light and health is often, as also done in this study, determined by measuring illuminance levels or correlated colour temperatures .Illuminance levels are often reported in the forms of horizontally measured values at desk level or vertically measured at eye level.Lighting designs typically aim for recommended values for Ehor as this parameter is included in standards .In contrast, the amount and type of light entering human eyes is relevant since this light causes the light-related health effects.This amount is often expressed as the vertical illuminance measured at eye height.Khademagha et al. proposed a theoretical framework to integrate the non-visual effects of light into lighting designs .They identified three luminous and three temporal light factors to be relevant for triggering NIF effects.A limitation of the current study is that Ehor, as applied in this study, only covers the quantity light factor."In addition, the non-obtrusive method was applied to estimate Ehor at every participant's workplace.This method consists of location-bound measurements and does not include location changes and the corresponding light exposures for each office worker.Rea et al. mentioned that duration of the light exposure is one of the aspects of lighting conditions which support the circadian system functions in addition to the visual system functions.In order to measure the light exposure per participant, the exact location and viewing direction of each participant is required in addition to continuous measurements throughout the entire office building."Another method to measure individual's light exposure is by using person-bound measurement devices.These devices, however, bring along practical and comfort issues as well as certain measurement inaccuracies .In order to be as unobtrusive as possible for the participants, the LBE method was applied in this study.Finally, all health-related variables were subjectively measured."Individual's sleep quality, functional health and wellbeing, and alertness were all self-reported measured and may therefore deviate from objective health measures.Alertness, for example, was subjectively measured by including the KSS in the distributed questionnaires.The KSS was validated by a study of Kaida et al. 
including sixteen female participants.The number of participants as well as the user characteristics may be questioned for correct validation.It is dubious how many participants are required to eliminate the potential disinterest of participants completing the questionnaire."In addition, it is uncertain how large a difference on the KSS needs to be in order to be relevant, for example, for human health or employee's work performance.The potential relationship between lighting conditions and subjective alertness may be influenced by the circadian rhythm of subjective alertness as well.Regardless varying lighting conditions, subjective alertness was already proven to be influenced by time of the day .This diurnal variation of subjective alertness was not included in this research.Based on limitations of this study, implications for theory and practice, several recommendations for further research were determined.The differences between the groups with a significant initial correlation between Ehor and SA may be questionable because of the limited sample size.Further research needs to include more participants.A limitation potentially caused by this limited sample size is the absence of normality in the data.Therefore, the data analysis in this study was mostly performed based on correlation coefficients.The drawback of a correlation coefficient is that the direction of the correlation is uncertain."In this study, it is uncertain whether the health scores influenced participant's SA or that SA influenced the health scores.Changes in lighting conditions may have impacted human health.Aries et al. also mentioned that physical conditions at work influence home life.The small differences between the two groups may also be explained by the included variables.Further research should include light-dependent user characteristics such as light sensitivity, sensitivity to seasonal depressions, chronotype, sleep-wake rhythms, and activity patterns.Maierova et al. found, for example, significant differences in SA between morning chronotypes and evening chronotypes.Although both chronotypes were more alert in the bright light condition compared to the dim light condition, these significant differences in SA may be of relevance while investigating the relationship between Ehor and SA.In addition, the environmental physical aspects light, air temperature, and relative humidity were included in this study.Al Horr et al. 
discusses eight physical factors which affect occupant satisfaction and productivity in an office environment .It is recommended to include these personal and environmental factors in further research investigating the relation between light and alertness.Adding the two above mentioned recommendations to further research may explain why certain individuals respond to light and why certain people do not.This study investigated ambiguities regarding the relationship between Ehor and SA based on findings from a Dutch field study.The results showed that multiple confounders were identified suggesting they should be taken into account when investigating the relationship between office lighting and human health.In addition, the initial relationship between Ehor and SA was established for six participants out of the total 46.Differences between the groups with and without the significant initial correlation between Ehor and SA did not explain why certain individuals respond to changes in the lit environment and others do not.The current study demonstrated discrepancies between this field study and previously executed laboratory studies.The benefit of performing a field study is that the results of tests in controlled environments can be validated in a real office environment and this leads to realistic results.This study highlights the importance of validating laboratory study results in field studies.Further research should incorporate a larger sample size and additional potential confounders for the relationship between Ehor and SA.Further research including these recommendations may explain individual variability in the response to light. | The current field study investigated the ambiguities regarding the relationship between office lighting and subjective alertness. In laboratory studies, light-induced effects were demonstrated. Field studies are essential to prove the validity of these results and the potential recommendations for lighting in future buildings. Therefore, lighting measurements and subjective health data were gathered in a Dutch office environment. Health data was collected by questionnaires and includes data on functional health, wellbeing and alertness. Multiple general, environmental, and personal variables were identified as confounders for the relationship between light and alertness. For six out of the total 46 participants a statistically significant correlation was found between horizontal illuminance (Ehor) and subjective alertness. Further research needs to incorporate a larger sample size and more potential confounders for the relationship between Ehor and alertness. Further research including these recommendations may explain why certain people respond to light while others do not. |
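The per-participant correlation analysis described above can be sketched as follows. This is a minimal illustration rather than the authors' code: the table layout and the column names (participant, Ehor_lux, SA) are assumptions for illustration, and a non-parametric Spearman coefficient is used because the data did not meet normality assumptions.

```python
# Minimal sketch (not the study's code): per-participant correlation between
# horizontal illuminance (Ehor) and a KSS-derived subjective alertness score (SA).
# Data layout and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

def correlate_ehor_sa(df, alpha=0.05):
    """df: long-format table with columns ['participant', 'Ehor_lux', 'SA'],
    one row per questionnaire moment. Returns rho, p and a significance flag
    per participant."""
    rows = []
    for pid, grp in df.groupby("participant"):
        rho, p = spearmanr(grp["Ehor_lux"], grp["SA"])
        rows.append({"participant": pid, "rho": rho, "p": p,
                     "significant": p < alpha})
    return pd.DataFrame(rows)

# Example usage with a synthetic survey table:
# result = correlate_ehor_sa(survey_df)
# print(result["significant"].sum(), "of", len(result),
#       "participants show a significant Ehor-SA association")
```

As noted in the discussion above, a significant coefficient by itself does not indicate the direction of the relationship between Ehor and SA.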
304 | A multi-layer network approach to MEG connectivity analysis | A core feature of healthy human brain function involves the recruitment of multiple spatially separate and functionally specialised cortical regions, which are required to support ongoing task demand.Such inter-areal connectivity has been shown to be a consistent feature of measured brain activity, even when the brain is apparently at rest."Moreover, significant evidence shows that this network formation is altered in pathologies ranging from developmental disorders) to neurodegenerative disease) making it a critically important area of study. "Measurement and characterisation of networks of functional connectivity is a focus of many neuroimaging studies, and recent years have seen rapid advances in the use of magnetoencephalography for this purpose.MEG assesses electrical activity in the human brain, based upon measurement of changes in magnetic field above the scalp induced by synchronised neural current flow.MEG offers non-invasive characterisation of brain electrophysiology with excellent temporal resolution.In addition, recent improvements in modelling the spatial topographies of scalp level field patterns allow for spatial resolution on a millimetre scale."This unique combination of high spatial and temporal resolution, coupled with the direct inference on brain electrophysiology, makes MEG a highly attractive option for connectivity measurement, particularly given recent findings that dynamic changes in connectivity occur on a rapid timescale.Despite its excellent promise, MEG based characterisation of connectivity is complicated by the rich information content of electrophysiological signals.MEG measurements are dominated by neural oscillations which occur at multiple temporal scales, ranging from 1 Hz to ~ 200 Hz.These oscillations have been shown to be integrally involved in mediating long range interactions across the cortex.However, many studies probe only single frequency bands in isolation without reference to a bigger ‘pan-spectral’ picture.In addition, the richness of the signal facilitates multiple independent measures of functional connectivity.These include fixed phase relationships between band limited oscillations, as well as synchronisation between the amplitude envelopes of the same band limited oscillations.Furthermore, evidence shows that in addition to neural interactions within specific frequency bands, connectivity may also be mediated by between frequency band interactions.These might include synchronisation of oscillatory envelopes as well as an influence of low frequency phase in one region, on high frequency amplitude in another region.Ongoing mental activity certainly necessitates the simultaneous formation of multiple networks of communication and it seems likely that the brain employs multiple frequency bands, as well as cross frequency interactions and potentially independent modes of connectivity in order to achieve this.It therefore follows that a single framework in which to combine pan-spectral and cross frequency interactions to assess the efficiency of the brain as a single multi-dimensional network would be highly desirable.A potential solution to this problem is a multi-layer network.This concept, which is well studied in physics), can be understood using the simple example of a transport network.An individual can move between European cities in multiple ways, including by air, rail or road.These three modes of transport can be represented by three seemingly independent networks, 
with the network nodes being different cities, and the strength of connections between them representing the number of aircraft, trains, or cars that travel between them each day.In order to determine the efficiency of the system, it may be tempting to analyse each network in isolation.However, to understand the overall picture, one must also realise that each network depends critically on the other two.For example, a broken rail link between Nottingham and London would increase road traffic between the two cities, and might decrease passengers on flights from London airports.For this reason, a multi-layer network model is required which characterises the three separate networks as individual layers in the model, and also measures the dependencies between these networks as between layer interactions.This model allows a more complete characterisation of the overall transport system, taking into account all modes of transport and their interdependencies.This multi-layer framework has been applied to many complex systems, including the human brain.Here we aim to apply it to MEG derived functional connectivity.In this paper, we use envelope correlation as a means to quantify connectivity between spatially separate brain regions."This metric has been used extensively in recent years and has been described as an ‘intrinsic mode’ of functional coupling in the human brain.We estimate ‘all-to-all’ connectivity between a-priori defined brain regions, which are based on an atlas.Connectivity is estimated within multiple separate frequency bands and these within frequency band interactions define the separate layers in the model.Connectivity is also estimated between frequency bands; for example, we might measure correlation between the alpha envelope of brain region 1, and the gamma envelope of brain region 2.This forms the between layer interactions in the above example).In this way we aim to form a more complete picture of the brain as a multi-layer dynamic system.In what follows we will test our multi-layer approach on MEG data recorded during a simple visuo-motor task.Further, we will use the same framework to identify perturbed network formation in patients with Schizophrenia."All data used in this study were acquired as part of the University of Nottingham's Multi-modal Imaging Study in Psychosis and have been described in a previous paper.The study received ethical approval from the National Research Ethics Service and all participants gave written informed consent prior to taking part.23 healthy control subjects with no history of neurological illness were recruited to the study.An equal number of patients with schizophrenia were also recruited with the two groups matched for age, sex and socio-economic background.In order to derive a score for overall severity of psychotic illness in the patients, the three characteristic syndromes of schizophrenia were quantified, speed of cognitive processing assessed using a variant of the Digit Symbol Substitution Test and scores from the Social and Occupational Function Scale, respectively).These measurements were combined in a principal component analysis and the first principal component was extracted to give a single score representing the severity of the persistent symptoms of schizophrenia for each patient.We have demonstrated previously that this first component is a suitable measure of severity of residual illness that correlates with several measures of brain function.All subjects completed a visuomotor task.The paradigm comprised visual 
stimulation with a centrally-presented maximum contrast vertical square wave grating.The grating subtended a visual angle of 8° and was displayed along with a red fixation cross on a grey background.In a single trial, the grating was presented for 2 s followed by a 7 s baseline period where only the fixation cross was shown.During presentation, participants were instructed to repeatedly press a button with the index finger of their right hand.Participants could press the button as many times as they wanted during the stimulus.A total of 45 trials was used, giving a total experimental time of 7 min.Visual stimuli were back-projected via a mirror system onto a back projection screen inside a magnetically shielded room at a viewing distance of approximately 46 cm.Button presses were recorded using a response pad.MEG data were acquired throughout the task using a 275 channel CTF MEG system operating in the third order synthetic gradiometer configuration.Data were acquired at a sampling frequency of 600 Hz, and all subjects were oriented supine.Three electromagnetic head position indicator coils were placed on the head as fiducial markers.The locations of these fiducials were tracked continuously during the recording by sequentially energising each coil and performing a magnetic dipole fit.This allowed both continuous assessment of head movement throughout the measurement, and accurate knowledge of the location of the head relative to the MEG sensors.Prior to the MEG recording, a 3-dimensional digitisation of the subjects head shape, relative to the fiducial markers, was acquired using a 3D digitiser.In addition, as part of the MISP programme, all participants underwent an anatomical MRI scan using a Philips Achieva 7 T system.Coregistration of the MEG sensor geometry to the anatomical MR image was subsequently achieved by fitting the digitised head surface to the equivalent head surface extracted from the anatomical MR image.This coregistration was employed in all subsequent forward and inverse problem calculations.MEG data were initially inspected visually."Any trials deemed to contain an excessive amount of interference, for example generated by eye movement or muscle activity, were removed from that individual's data.In addition, any trials in which the head was found to be more than 7 mm from the starting position were excluded.Following this pre-processing, data were analysed using beamforming for source localisation, and a multi-layer network framework.Application of the beamforming method to each AAL region yielded 78 regional timecourses and we initially aimed to assess which of those timecourses exhibited a significant task induced response.Regional timecourses were frequency filtered into four separate frequency bands; alpha, beta, low gamma and high gamma.The resulting timecourses were then Hilbert transformed in order to generate the analytic signal.The absolute value of the analytic signal was then computed to yield the amplitude envelope of each timecourse.Hilbert envelopes were averaged across trials.In order to determine the AAL regions that exhibited a significant task related power change, the fractional change in oscillatory amplitude was measured between a ‘stimulus’ window and a ‘rebound’ window .,The statistical significance of the fractional change between windows was determined using a two-sided signed rank test of the null hypothesis that the change in Hilbert envelope originated from a distribution whose median is zero.The threshold for significance was Bonferroni 
corrected to account for multiple comparisons across all 78 regions.In four AAL regions of interest a time frequency spectrogram was generated.Again this employed the Hilbert transform, however in order to increase spectral resolution, Hilbert envelopes were generated in 33 overlapping frequency bands in the 1 Hz to 150 Hz range.Hilbert envelopes were averaged across all trials and then concatenated in the frequency dimension to form a time-frequency spectrogram for the average trial.These TFSs were then averaged across subjects.The overall aim of our connectivity analysis was twofold.First, to examine significant changes in functional connectivity induced by the visuomotor task in healthy individuals.Second, to probe differences in functional connectivity between schizophrenia patients and controls.To achieve these aims, all connectivity analyses were applied within predefined time windows, on a trial by trial basis, using unaveraged beamformer projected data.In addition, note that the longer the time window used, the more reliable the connectivity estimate becomes.For this reason the two windows were made as long as possible and equal in length to allow for robust and unambiguous contrast.To compare controls to patients with schizophrenia, we measured connectivity across the whole trial using a window.This was done separately in the two groups and results compared.In all cases, functional connectivity was computed between every pair of AAL regions.Regional timecourses were again frequency filtered into four separate frequency bands; alpha, beta, low gamma and high gamma.These bands were chosen based upon previous literature; specifically, previous work has shown robust effects in visual cortex in the alpha and gamma bands as well as robust effects in motor cortex in the beta band.A schematic diagram of the multi-layer framework is shown in Fig. 
1; note however that for simplicity we only depict 3 frequency bands. Estimation of electrophysiological functional connectivity is non-trivial and warrants some discussion. The most significant confound in MEG connectivity analysis is that of signal leakage between beamformer projected timecourses. This is generated as a result of the ill-posed inverse problem and means that projected timecourses can be artifactually correlated. This problem, and associated solutions, have been well documented in the literature. Here we employed a pairwise leakage reduction scheme which exploits the fact that leakage manifests as zero-time lag correlation between beamformer projected timecourses from separate regions. Such zero-time lag linear dependency was removed using linear regression to ensure that, prior to connectivity estimation, the underlying band limited windowed signals were orthogonal. It is important to note that, in studies of this type where separate time windows are to be compared, orthogonalisation must be carried out on each window separately, rather than on the whole timecourse, since task induced changes in signal variance can also introduce significant changes in the magnitude of leakage. Following leakage reduction, the Hilbert envelope was computed for the orthogonalised seed and test timecourses. In addition to leakage, artifacts due to muscle activity were also a concern, particularly for high gamma band connectivity estimation. It is well known that increased muscle activity in, for example, the jaw or neck generates increased oscillatory signals in the high gamma band. Such artifacts are typically bilateral and can cause spurious inflation of interhemispheric gamma envelope correlation. For this reason, the regional beamformed timecourses were also filtered into the 120 Hz–150 Hz band. This band was deemed to be higher than any neural activity of interest but would accurately capture any artifacts resulting from the magnetomyogram. Prior to calculation of connectivity, the Hilbert envelope of these magnetomyogram data was computed and regressed from both the seed and test timecourses in order to reduce the influence of muscle artifact on functional connectivity measurement (a similar method has been used previously). Following leakage and magnetomyogram reduction, connectivity was calculated between windowed timecourses as the Pearson correlation coefficient between windowed oscillatory envelopes in the seed and test regions. As noted above, correlation coefficients were computed within each time window, and each trial separately, and the mean correlation coefficient over all trials computed. This same procedure was applied: (i) within each frequency band and between each region pair, generating four 78 × 78 adjacency matrices showing inter-regional connectivity for each of the four bands separately, which formed the 4 separate layers of the multi-layer model; and (ii) between each pair of frequency bands and between each region pair, generating a further six 78 × 78 adjacency matrices showing inter-regional connectivity for each of the six frequency band pairs, which formed the between layer interactions of the multi-layer model. These processes yielded a total of 10 adjacency matrices. These were combined to generate a single ‘super-adjacency matrix’, an example of which is shown in Fig.
1. The SM contains a complete description of both within frequency band and between frequency band connectivity, measured across the entire brain. A single SM was generated for each time window, meaning that three separate SMs were available for each subject. Contrasting these separate SMs allows testing for differences in network connectivity between task and rest, or patient and control. Note that the individual tiles making up the SM have different symmetries: the within frequency band matrices have diagonal symmetry, since correlation, for example, between visual alpha and motor alpha is identical to correlation between motor alpha and visual alpha. However, this diagonal symmetry is not reflected in the off diagonal tiles. This is because a high correlation between, for example, visual alpha and motor gamma does not necessarily imply a high correlation between visual gamma and motor alpha. To test for an effect of the visuomotor task on connectivity, we contrasted SMs measured in the active and the control time windows. This was done via subtraction, generating a single matrix for each subject showing the difference in connectivity between time windows. These difference-SMs (dSMs) were then averaged across subjects. In order to assess statistical significance, a permutation test was employed. It was reasoned that if the task had no effect, then the labelling of the two time windows would have no meaning. For each element in the SM, we therefore constructed a null distribution. This was calculated via the generation of multiple ‘sham’ dSMs where the window labels were switched randomly. 20,000 sham matrices were constructed and a null distribution of connectivity differences derived. For each dSM element, the ‘real’ difference between windows was compared to the null distribution and a p-value generated. In order to correct for type I errors due to multiple comparisons across matrix elements, we applied a false discovery rate correction based on the Benjamini–Hochberg procedure. This procedure resulted in a thresholded dSM showing which connectivity values in the dSM were modulated significantly by the task. In the case of testing for effects of schizophrenia on connectivity, we employed SMs generated using a single time window spanning the whole trial. In order to probe the relevance of our connectivity measurements to schizophrenia, two tests were used. First, it was reasoned that if connectivity was abnormal in schizophrenia, then a difference between mean connectivity values across the patient and control groups would be observed. This is henceforth termed the effect of diagnosis and was measured by subtraction of patient and control SMs. Second, it was reasoned that if such a difference was meaningful clinically, then connectivity values measured within individual patients would correlate significantly with their severity of symptoms. This is henceforth termed the effect of severity and was measured, on an element by element basis, by Pearson correlation between severity and estimated connectivity in each element of the SM. These two tests yield two new matrices, both equal in size to the SM, which represent the effect of diagnosis and the effect of severity. Under the null hypothesis that there is no systematic effect of either diagnosis or severity on functional connectivity measurements, it would be predicted that there is no significant relationship, across elements, between the matrices representing diagnosis and severity. However, if the MEG connectivity measures are truly descriptive of schizophrenia,
then those matrix elements most affected by the patient–control difference might be expected to be the same elements that are most correlated with severity. Hence a relationship between the diagnosis and severity matrices would be observed. With this in mind, we measured correlation across matrix elements, on a ‘tile-by-tile’ basis. To test this statistically we used a permutation test. First, patient/control labels were switched randomly and a new average difference between the sham groups computed. Second, the individual patient disease severity scores were randomised across subjects and the correlation with connectivity recomputed. This yielded two ‘sham’ matrices which could be compared, and again we measured tile correlation. 10,000 iterations of this test were used to generate a null distribution, and comparison with the ‘real’ tile correlation value yielded a probability that the result occurred by chance. We used a two tailed test, meaning that we allow the possibility that those patients with the worst symptoms could look more like controls than patients with lesser symptoms; though apparently counter-intuitive, such an effect is conceivable and could result from compensation mechanisms. Finally, since testing each tile individually led to 10 separate tests, Bonferroni correction was performed. Statistical significance was therefore defined at a threshold of p < 0.05, which was corrected to p < 0.0025 to account for the two tailed test and the 10 separate comparisons. Anything at p < 0.025 was considered a ‘trend’. It should be noted here that, in principle, a standard parametric test could also be employed; however, this would require direct estimation of the degrees of freedom in the correlation. The spatial smoothness inherent in the tiles of the SM means that the number of degrees of freedom in the correlation is vastly less than the number of matrix elements. Estimating the reduction in degrees of freedom, whilst possible, is non-trivial. For this reason we employ the permutation approach, where spatial smoothness in the measured tiles is also mirrored in the sham tiles. The tile correlation test was used to identify tiles in the SM in which connectivity values were related significantly to schizophrenia. Following this, tiles deemed significant were used in order to visualise which individual brain connections were driving the observed significant correlation. To do this, for each matrix element within a significant tile, we first measured the effect of diagnosis; second, we measured the effect of severity. These tests were treated independently and those matrix elements significant in both tests were used in visualisation. Fig. 2 shows the change in oscillatory amplitude induced by the visuomotor task. Fig. 2A shows time frequency spectrograms extracted from the left primary sensorimotor cortex and the left primary visual cortex. Note that, as expected, in sensorimotor cortex a reduction in beta amplitude is observed during stimulation with an increase above baseline immediately following movement cessation. In visual cortex, an increase in gamma amplitude is observed during stimulation alongside a concomitant decrease in alpha amplitude. These results are further shown in Fig.
2B, where the coloured circles show the locations of AAL region centroids with a significant change in neural oscillatory amplitude between stimulus and rebound windows.The sizes of the circles reflect the magnitude of the change.Note that significant changes are observed in motor cortex for beta and low gamma bands, and in visual cortex in the high gamma band.Figs. 3 and 4 show task induced change in functional connectivity.Firstly, Fig. 3A presents a schematic diagram showing the structure of each individual adjacency matrix tile and how these tiles are used to form the Super adjacency matrix.In the upper panel, regions of the adjacency matrix corresponding to the visual, motor and visual-to-motor networks are highlighted in red, blue and yellow respectively.Fig. 3B shows SMs, averaged across all subjects, in the active and control time windows.Note first that a high degree of structure is observable in both matrices, particularly in the alpha and beta bands.Note also that, particularly in high frequency bands, increased structure is observable in the active compared to the control window.These results are further confirmed in Fig. 3C which shows the average difference between active and control windows and the thresholded difference.Comparison of the individual tiles of Fig. 3B and 3C with the upper panel of Fig. 3A show clearly that visual networks are observed in the alpha and gamma bands, alongside a sensorimotor network in the beta band.Note also an anti-correlation between motor cortex beta oscillations and visual cortex high gamma oscillations.This manifests as significant clusters in the beta to high gamma band tile.Note the asymmetry meaning that a reciprocal ‘motor gamma to visual beta’ network is not observed.Fig. 4 shows visualisation of the transient brain networks formed during the active window of the visuo-motor task.The central panel shows the dSM, and in the outer images, red lines denote the connections between brain region pairs that exhibit a significant task induced change in functional connectivity.The thickness and colour of the line denotes the strength of connection.Within frequency band changes are observed in the beta and gamma ranges.The beta band shows a transient task induced increase in connectivity within a motor network.Specifically, connectivity is increased between the left and right primary motor regions as well as between left primary motor cortex, pre-motor cortex, supplementary motor area and the left secondary somatosensory area."This finding is in good agreement with previous results in motor tasks).The high gamma band also demonstrates increased connectivity in a visual network which includes primary visual regions and associated visual areas.Again this is in good agreement with the well-known effect of increased gamma oscillations with presentation of visual gratings.Significant between frequency band interactions are also observed.Beta to low gamma band connectivity is increased during the task within a network of brain areas which includes bilateral pre-motor cortex and left primary motor cortex.Note the spatial difference between this beta to low gamma band interaction and the beta network, the former being centred on premotor regions whilst the latter is centred on primary motor cortices, making it tempting to speculate that these networks perform different functional roles.Finally, a beta to high gamma band reduction in connectivity is observed between the visual cortex and the left sensorimotor region.These effects will be addressed 
further in our discussion.Fig. 5 shows the effects of schizophrenia on multi-layer network connectivity.Fig. 5A shows the mean SMs computed in controls and patients.Recall that these matrices are computed within a single window spanning the entire length of the task trial, with connectivity estimated for each trial separately and averaged across trials, and subsequently subjects.Fig. 5B shows the difference between groups which we term the effect of diagnosis.Note that clear structure in the difference matrix is observable, particularly within the tile representing alpha-to-alpha connectivity.Fig. 5C shows the cross subject correlation between functional connectivity and the severity of persistent symptoms of schizophrenia, which we term the effect of severity.Again a clear structure is observable, particularly in the alpha-to-alpha tile.Under a null hypothesis where connectivity metrics are unaffected by illness, then the effect of diagnosis and the effect of severity would be completely unrelated and show no similarity.However, visually it is easy to see a clear relationship within some tiles within these matrices.Fig. 5D formalises this relationship: each element in the matrix represents tile correlation between effect of diagnosis and effect of severity.Relationships are measured as Pearson correlation coefficients across all matrix elements within each tile.Notice that, as would be expected from Fig. 5B and 5C, alpha-to-alpha connectivity shows a significant relationship between effects of diagnosis and severity, implying that these connectivity estimates are affected by schizophrenia.Interestingly, no other tiles show a significant relationship following multiple comparison correction.Having shown a significant effect of schizophrenia within alpha-to-alpha connectivity, we further investigate these effects in Fig. 6.Fig. 6A highlights the brain regions between which connectivity differs between groups.Again the lines denote connectivity between AAL regions and their width indicates the magnitude of the difference between patients and controls.Note that a clear network structure is observed with the occipital lobe being most strongly implicated.Fig. 6B shows mean connection strength, averaged across the observed occipital network, in both patient and control groups.The bar chart shows mean group connectivity and error bars represent standard error across subjects.Fig. 6C shows mean connection strength computed separately in 23 patients and plotted against illness severity.Note how, in patients with less severe symptoms, alpha band connectivity tends to a value close to that of controls, whereas in those patients with more severe symptoms, the mean alpha band connectivity is markedly reduced.This important point implies direct clinical relevance of the results shown, which will be further addressed in the discussion below.Finally, Fig. 7 shows results of a post-hoc analysis of primary visual cortex activity and connectivity in the alpha band.Fig. 
7A shows timecourses of alpha band Hilbert envelope, averaged over trials and subjects.The blue line shows the mean alpha envelope for controls whereas red shows the equivalent envelope in patients.The left hand plot shows the case for left visual cortex and the right hand plot shows right visual cortex.Note that there is relatively little difference in trial averaged alpha envelopes between patients and controls; both groups exhibit marked alpha desynchronisation during stimulation with the largest changes from baseline occurring shortly after stimulus onset and offset.The similarity of the trial averaged alpha band envelopes is further confirmed in Fig. 7B. Here, the left and right bar charts show mean change in alpha amplitude between a stimulus window and a control window , in left and right visual cortices respectively.Note that amplitude is reduced during stimulation; however there is no measurable difference between patients and controls.Fig. 7C shows alpha connectivity measured between left and right visual regions.In the left hand plot, distinct from the rest of this study, “connectivity” is measured between trial averaged Hilbert envelopes; i.e. the bar chart reflects correlation between the trial averaged alpha band Hilbert envelopes measured in left and right visual cortex. ,In the right hand plot, connectivity is measured using the standard method in unaveraged data.Note that a significant difference in connectivity is observed between groups in the unaveraged case, but not in the averaged case.Averaging across trials prior to connectivity estimation causes a marked reduction in any signal fluctuations that are not time locked to the stimulus — meaning that trial averaged “connectivity” is a reflection of the degree to which task induced change is coordinated between regions.It thus follows that the reduction in alpha connectivity observed in Figs. 
5 and 6 is not due to atypical coordination of the task induced response between regions; rather, the primary effect is due to the superposition of atypical task independent activity that fails to synchronise between regions. This will be addressed further below. Recent years have shown the critical importance of inter-regional neural network connectivity in supporting healthy brain function. Such connectivity is measurable using neuroimaging techniques such as MEG; however, the richness of the electrophysiological signal makes gaining a complete picture challenging. Specifically, connectivity can be calculated as statistical interdependencies between neural oscillations measured across a large range of frequencies, as well as between frequency bands. This pan-spectral nature of network formation likely helps to mediate the simultaneous formation of multiple brain networks, which support the demands of ongoing mental tasks. However, to date, this has been overlooked in studies of electrophysiological connectivity, with many studies treating individual frequency bands in isolation. Here, we combine envelope correlation based assessment of functional connectivity with a multi-layer network model in order to derive a more complete picture of connectivity within and between frequency bands. Using a visuomotor task, we have shown that our method can highlight the simultaneous and transient formation of a motor network in the beta band and a visual network in the high gamma band. More importantly, we have used this same methodology to demonstrate significant differences in occipital alpha band functional connectivity in patients with schizophrenia relative to controls. This methodology represents an improved means by which to obtain a more complete picture of network connectivity, whilst our findings in schizophrenia demonstrate the critical importance of measuring connectivity in clinical studies. Methodologically, this paper demonstrates the utility of a multi-layer model in characterising within and between frequency interactions. In our visuomotor application, it was our intention to demonstrate this framework using a well characterised task that is known to induce robust changes in neural oscillations in multiple frequency bands. It is well known that finger movement induces a drop in beta band oscillatory amplitude in primary sensorimotor cortex during movement, followed by an increase above baseline shortly following movement cessation. Furthermore, it is also known that beta band envelopes are associated with long range motor network connectivity. Here we added to this picture by showing directly that unilateral finger movement is supported by the transient formation of a broad network of brain regions including left and right primary motor cortices as well as pre-motor cortices, SMA and secondary somatosensory regions; further, this network is mediated in the beta band. In addition, passive viewing of a visual grating has long been known to increase the amplitude of gamma oscillations in primary visual cortex. Here we have shown that induced gamma envelopes are correlated across visual regions. Whilst this interaction may be expected, it is interesting to note that it is not simply due to signal leakage between hemispheres, since linear interactions have been removed via our leakage reduction methodology. The significant increase in connectivity observed therefore represents envelope correlation mediated by non-zero phase lagged events in the underlying neural signals. To the authors' knowledge this is the
first direct measurement of this effect, which may warrant further investigation in future studies.Finally, significant task driven changes between frequency bands were also observed.A network involving bilateral pre-motor and left primary motor areas was observed as a beta to low gamma interaction and the spatial differences noted between this and the motor network limited to the beta band makes it tempting to speculate that the cross frequency interaction serves a different functional role, however this requires significant further investigation.An anti-correlation between the motor and visual regions was also measurable as a beta to high gamma interaction.Whilst it may be tempting to interpret this as a network that coordinates activity between these two regions, it should be pointed out that, given the task is well known to increase gamma amplitude and simultaneously decrease beta amplitude in the visual and motor areas respectively, such an interaction would be expected.In fact, the likelihood is that this transient anti-correlation results from two independent stimulus driven variations, rather than a functional network per se.This said however, this cross frequency network also potentially warrants further investigation.Overall, despite some ambiguity, the visuomotor task represents a useful testbed for the multi-layer network framework and its ability to extract simultaneous transiently forming networks both within and across frequency bands.In terms of the method itself, there are four core components that warrant discussion: cortical parcellation; source space projection; the connectivity metric and statistical analysis.First, regarding the AAL parcellation, this was chosen based on its successful use in previous MEG investigations).However, our method could be used with any cortical parcellation.It is noteworthy that the separate AAL regions vary markedly in size, meaning that our use of a single full width at half maximum of the Gaussian function) may mean that some regions are better represented than others; this represents a limitation of the present method.Related, the inhomogeneous spatial resolution of MEG may mean that, in some cases multiple AAL regions may generate degenerate timecourses, whilst in other cases a single region may contain multiple independent signals.In future, the use of brain parcellations based directly on the MEG data may therefore prove instructive.However this is non-trivial and should be a subject of future investigation.Secondly, for source localisation, we used a beamformer technique.Beamforming has been shown previously to be particularly useful in the characterisation of neural oscillations, and has been used successfully in the measurement of connectivity.The reasons for the success of this algorithm in such studies has been addressed at length in previous papers, and will not be repeated here.However, we do point out that other inverse solutions could be substituted for beamforming in the present processing pipeline, and would likely generate similar results.Thirdly, regarding the choice of functional connectivity metric: here we choose to use envelope correlation based on the previous success of this measurement in facilitating long range connectivity estimation.However, it is important to point out that the multi-layer network framework is not limited to envelope metrics, but could be extended to other electrophysiological measurements of functional connectivity.Recent years have seen the emergence of a number of metrics for functional 
coupling, including within frequency band and between frequency band interactions.It is easy to conceive how such metrics could be employed to form a set of super adjacency matrices similar to those employed here.For example the diagonal tiles could easily be generated using either the imaginary part of coherence or the phase lag index.When considering between frequency band interactions obviously the notion of phase coupling becomes problematic.However, one could consider measuring a fixed phase relationship between two bands where, for example, the duration taken for n cycles of frequency band one always coincides with the duration taken for m cycles of frequency band two.In addition, cross frequency interactions can also be quantified via coupling between the phase of low frequency oscillations and the amplitude of high frequency oscillations.Finally, following derivation of super-adjacency matrices, there are many ways in which to analyse those matrices statistically.Here, a simple approach was employed in which significant differences between task and rest was sought on an element by element basis.We used this approach since it allowed direct inference on both task driven networks and patient-control differences.However, more complex analyses may be highly informative: In particular, graph theoretical metrics such as algebraic connectivity have become a popular way to analyse single layer networks in neuroimaging and are equally applicable to multi-layer models.Such measures would offer summary statistics regarding changes in the efficiency of the network as a whole.Such measures may be of significant utility in characterising task, compared to rest, or patients versus controls.Overall, it is possible to conceive multiple ways of forming and analysing a multi-layer network equivalent to that used here.This same framework will offer unique insight into how the brain employs multiple temporal scales in order to simultaneously form, and dissolve, networks of communication in the task positive and resting states.Following testing of the multi-layer framework, we sought to further demonstrate its utility by gaining insights into the neuropathology underlying schizophrenia.Abnormalities in motor function have been noted since the earliest descriptions of schizophrenia and are a well-accepted feature of the disorder.Similarly, patients with schizophrenia exhibit deficits in low-level visual function.For this reason, the visuomotor task represents a useful means by which to probe abnormalities in this debilitating disorder.Using multi-layer connectivity assessment, we observed significantly reduced alpha band functional connectivity in a network of brain regions spanning the visual cortex.Furthermore, the clinical relevance of this difference was confirmed since the magnitude of measured alpha connectivity in visual cortex inversely correlated with behavioural measures representative of the persistent features of the disease.This result adds weight to an argument that impaired connectivity is a feature of Schizophrenia.Our result is further summarised in Fig. 
7 which shows activity within and connectivity between the left and right primary visual regions.First note that there is no significant difference in the magnitude of stimulus driven alpha amplitude change, between patients and controls.In agreement with this, the alpha envelope timecourses in patients and controls are remarkably similar: both show an overall loss in amplitude during stimulation, and both show a transient dip shortly after stimulus onset and offset meaning their overall structure is the same.We did observe a moderate difference in amplitude between controls and patients in a small time window at around 3 s post stimulus; however this was not found to be significant following FDR correction across independent time samples.When measuring connectivity between left and right visual cortices we observed a significant reduction in the patient group.This difference is due neither to altered leakage in patients, nor to altered SNR.Recall that connectivity is measured as amplitude envelope correlation within each trial individually, prior to trial averaging.Our result thus shows that in unaveraged data, there is greater coordination between the visual areas in controls compared to patients.Put another way, there are signal components – asynchronous across regions – which occur in patients and not in controls.Importantly, these additional signals are not task related and therefore average out across trials, since they have no observable impact on trial averaged alpha envelope timecourses.Further, there is no significant difference between groups when “connectivity” is measured using trial averaged data.This key point shows the importance of measuring connectivity between areas using unaveraged data.It is important to remember that this is an exploratory analysis in a small group.For this reason, results should not be over interpreted and they require replication in a second patient cohort.However, given the relatively well characterised role of alpha oscillations it is tempting to speculate on what these measurements might imply.Our multi-layer network model captured connectivity across the entire 8–100 Hz frequency range.This analysis encompassed many pan-spectral networks including the beta band sensorimotor network and the gamma band visual network.It is therefore of significant note that only the occipital alpha network demonstrated a robust relationship to schizophrenia.Visual alpha oscillations have been observed since the first EEG recordings.For many years, these effects were treated as epiphenomena, with little or no relevance to neural processing.However, in recent years important insight has been gained into the functional role of these oscillatory effects.Specifically, a link has been made between alpha activity and attention, with high alpha amplitude being thought of as a marker of inattention.This is shown clearly in studies in which individual subjects are asked to switch their attention from one visual region to another.If, for example, attention is switched from the left visual field to the right, one sees an increase in alpha oscillations in the right hemisphere and a decrease in the left.The reverse is true when switching attention from the right visual field to the left.Furthermore it has been proposed that these alpha oscillations act to gate information flow to higher order cortical regions.Given this hypothesis, it follows that a lack of coordination between alpha envelopes across brain regions may be reflective of an inability to direct visual attention 
appropriately, and more specifically an inability to accurately gate incoming visual information to higher order brain regions.This, in turn, may have an influence on a number of the ongoing persistent symptoms of schizophrenia including an apparent disorganization or impoverishment of mental activity.We therefore speculate that this may be why reduction in alpha connectivity correlates well with behavioural measures of persistent illness severity.For this reason, whilst this remains an exploratory analysis, future studies of schizophrenia patients using MEG should use this same technique to further probe alpha band attentional effects and their relationship to the core symptoms of schizophrenia.We have combined oscillatory envelope based functional connectivity metrics with a multi-layer network model in order to derive a complete picture of connectivity within and between oscillatory frequencies.We demonstrate our methodology in a visuomotor task, highlighting the simultaneous and transient formation of motor networks in the beta band and visual networks in the high gamma band, as well as cross-spectral interactions.More importantly, we employ our framework to demonstrate significant differences in occipital alpha band networks in patients with schizophrenia relative to controls.We further show that these same measures correlate significantly with symptom severity scores, highlighting their clinical relevance.Our findings demonstrate the unique potential of appropriately modelled MEG measurements to characterise neural network formation and dissolution.Further, we add weight to the argument that dysconnectivity is a core feature of the neuropathology underlying schizophrenia. | Recent years have shown the critical importance of inter-regional neural network connectivity in supporting healthy brain function. Such connectivity is measurable using neuroimaging techniques such as MEG, however the richness of the electrophysiological signal makes gaining a complete picture challenging. Specifically, connectivity can be calculated as statistical interdependencies between neural oscillations within a large range of different frequency bands. Further, connectivity can be computed between frequency bands. This pan-spectral network hierarchy likely helps to mediate simultaneous formation of multiple brain networks, which support ongoing task demand. However, to date it has been largely overlooked, with many electrophysiological functional connectivity studies treating individual frequency bands in isolation. Here, we combine oscillatory envelope based functional connectivity metrics with a multi-layer network framework in order to derive a more complete picture of connectivity within and between frequencies. We test this methodology using MEG data recorded during a visuomotor task, highlighting simultaneous and transient formation of motor networks in the beta band, visual networks in the gamma band and a beta to gamma interaction. Having tested our method, we use it to demonstrate differences in occipital alpha band connectivity in patients with schizophrenia compared to healthy controls. We further show that these connectivity differences are predictive of the severity of persistent symptoms of the disease, highlighting their clinical relevance. Our findings demonstrate the unique potential of MEG to characterise neural network formation and dissolution. Further, we add weight to the argument that dysconnectivity is a core feature of the neuropathology underlying schizophrenia. |
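The core computation behind the super-adjacency matrix described above, namely pairwise leakage reduction followed by Hilbert-envelope correlation within and between frequency bands, can be sketched as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: band-limited regional timecourses for a single window of a single trial are assumed to be available, the magnetomyogram regression step is omitted for brevity, and the function names and data layout are illustrative.

```python
# Minimal sketch of envelope correlation with pairwise leakage reduction.
# Band-pass filtering, beamforming and trial handling are assumed upstream.
import numpy as np
from scipy.signal import hilbert

def envelope_correlation(seed, test):
    """seed, test: 1-D band-limited timecourses for one window of one trial."""
    seed = seed - seed.mean()
    test = test - test.mean()
    # Orthogonalise: regress out the zero-lag linear dependence of 'test' on 'seed'
    beta = np.dot(test, seed) / np.dot(seed, seed)
    test_orth = test - beta * seed
    # Amplitude envelopes via the analytic signal
    env_seed = np.abs(hilbert(seed))
    env_test = np.abs(hilbert(test_orth))
    # Pearson correlation between the two envelopes
    return np.corrcoef(env_seed, env_test)[0, 1]

def super_adjacency(band_data):
    """band_data: dict mapping band name -> array (n_regions, n_samples) for one window.
    Returns a (n_bands * n_regions) square matrix holding the within-band tiles on the
    diagonal blocks and the between-band tiles on the off-diagonal blocks."""
    bands = list(band_data)
    n_reg = next(iter(band_data.values())).shape[0]
    size = len(bands) * n_reg
    sm = np.zeros((size, size))
    for bi, b1 in enumerate(bands):
        for bj, b2 in enumerate(bands):
            for i in range(n_reg):
                for j in range(n_reg):
                    if b1 == b2 and i == j:
                        continue  # no self-connections within a band
                    sm[bi * n_reg + i, bj * n_reg + j] = envelope_correlation(
                        band_data[b1][i], band_data[b2][j])
    return sm
```

In practice the within-band tiles would be symmetrised (for example by averaging the two orthogonalisation directions), the matrices averaged over trials, and the resulting super-adjacency matrices contrasted between time windows or between patient and control groups using the permutation tests described above.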
305 | Mains water leakage: Implications for phosphorus source apportionment and policy responses in catchments | Phosphorus is a vital element for all life. However, P is also the subject of an environmental paradox. On the one hand, world food security and the growing production of biofuels rely on enhanced P inputs to ecosystems, largely through the application of inorganic fertilisers and feed supplements manufactured from finite phosphorite deposits. Volatility in global markets can lead to dramatic increases in the price of P fertiliser, for example by 800% in 2008, meaning that parsimonious use and management of P resources is judicious. On the other hand, and in parallel with increased mining and processing of phosphate rock, widespread enrichment of aquatic ecosystems with P has occurred in many parts of the globe. Anthropogenic inputs of P to these ecosystems have far-reaching effects, impairing water quality through stimulation of eutrophication with profound impacts on ecosystem function and health. In turn, these ecosystem impacts can be directly linked to significant economic costs. Research has attempted to quantify the absolute and relative contribution of agriculture and sewage treatment works (STW) effluent to total P loadings in aquatic ecosystems. Further work has considered the potential P load from other sources, including septic tank systems and atmospheric deposition. In response to P enrichment of aquatic ecosystems, policy and mitigation practices have predominantly targeted reductions in the export of P from agricultural land and from sources of waste water, involving changes in fertiliser, manure/slurry and other land management practices alongside the introduction of tertiary treatment technologies for P removal at sewage treatment works. However, these responses have had varying success with respect to improving water quality and reversing eutrophication within aquatic ecosystems. Here, we argue that P loads to the environment from mains water leakage (MWL) could be important in the context of eutrophication in aquatic ecosystems, but have not been sufficiently well constrained to date. Current P loads from MWL are potentially significant, especially within highly populated areas. Further, without action, the relative importance of MWL-P is likely to grow as P loads from other sources decline following the introduction of appropriate policies and mitigation practices. Therefore, the need to address MWL-P in order to protect and to restore aquatic ecosystems in the face of eutrophication is likely to increase in the future. Phosphate dosing of mains water supplies was introduced in the USA during the first half of the 20th century to prevent calcite precipitation within distribution networks. The additional benefits associated with reduced iron corrosion from distribution pipes were quickly established. However, widespread dosing of mains water supplies with PO4 in the UK, parts of Europe and the USA was not adopted until the 1990s, largely in response to legislative requirements to reduce lead and copper concentrations in drinking water due to the impacts of heavy metal exposure on human health. In the USA, a standard of 50 μg L−1 for both Pb and Cu in drinking water was originally adopted. However, since 1991 an action level of 15 μg L−1 Pb has been introduced under the lead and copper rule (LCR). If the LCR is exceeded, appropriate action must be taken by the relevant water utility, including introduction or optimisation of PO4-dosing. As permitted concentrations of Pb in drinking water have been
reduced across Europe, for example from 25 μg L− 1 to 10 μg L− 1 in 2013, there has been an increase in both the concentration and the spatial extent of PO4-dosing to ensure better compliance with these more stringent standards.Current PO4-dosing for drinking waters in the UK typically achieves final P concentrations between 700 and 1900 μg L− 1 and is essentially applied nationally.In the U.S., more than half of water utilities use a range of PO4-based corrosion inhibitors.Where applied and optimised, PO4 dosing of mains water represents an effective technological solution to reduce Pb and Cu concentrations in drinking water.However, leakage from mains drinking water networks is a globally-significant issue, with the volume of water that leaks costing water utilities worldwide an estimated $14 billion per year.Mains leakage from the distribution network in England and Wales is currently estimated to be 22% of treated water, equivalent to around 3200 ML·day− 1, which has declined considerably since the mid-1990s when leakage peaked at just over 30% of treated water.Pipe failure in drinking water distribution networks is also a major concern within North America, where recent data from the USA and Canada suggest a current failure rate of 11 failures 100 miles− 1 year− 1, with highest failure rates over 5 years for cast iron, ductile iron and steel pipes.Further, there has been a significant deterioration in the overall condition of drinking water distribution networks over the last three decades in the USA, with 68% classified as excellent in 1980, 42% in 2000 and 32% in 2010.A recent assessment of utility water loss in China found that the average leakage rate was approximately 18%, with 40% of water utilities suffering leakage rates > 20% whilst some smaller utilities had leakage in excess of 60%.Although Holman et al. noted that leakage of PO4-dosed mains water could be an important source of P, research has only recently attempted to quantify the load of P delivered to the environment from MWL.Within the UK, Gooddy et al. estimated the total P load from MWL to be approximately 1000 tonnes year− 1.Subsequently, using a more sophisticated national-scale modelling approach, Ascott et al. 
revised this figure to 1200 tonnes·P·yr− 1.In this paper, we highlight the importance of properly accounting for MWL-P by developing an approach to quantify MWL contributions to P loads within the River Thames catchment over the past 30 years.The River Thames catchment is characterised by a high population density and variable mains leakage rates, and we compare estimates of MWL-P with P loads from both agricultural land and from STW effluent within the same catchment.Subsequently, we discuss how environmental policy could be adapted in the future to balance both protection of human health by minimising heavy metal exposure through drinking water and protection of aquatic ecosystems through reducing P loads derived from MWL.A first estimate of the MWL-P load across the period 1994–2011 for the River Thames catchment was made using historic data for water company leakage rates, PO4 dosing concentrations and dosing extents.Annual water company level historic leakage rates for the four water utilities above are available for 1998–2011.Water resource zone level leakage rates for this period were derived by back-extrapolating the observed WRZ data for 2011, assuming the same trend in leakage would occur at the WRZ and company level.For the period 1994–1998, historic leakage rates are only available for Thames Water.The same back-extrapolation approach was used to derive both company level and WRZ leakage trends for this period for all the water utilities in the Thames catchment.For 1994–1998, it was assumed that the trends in leakage for Thames Water are the same as the other three companies in the catchment.Very limited data are available to determine historical dosing extents or dosing concentrations for P in mains drinking water.On the basis that PO4 dosing only began in earnest in 1994, a linear increase in dosing concentration from zero to 646 μg·P·L− 1 for 1994–2000 was assumed.The dosing concentrations reported by Comber et al. between 2000 and 2006 were then applied in our analysis.For the period 2006–2013, it was assumed that dosing concentrations remained constant at ~ 1000 μg·P·L− 1.This is likely to be a conservative estimate because the tightening of the Pb standard for drinking water in the EU in 2013 likely necessitated an increase in the concentration of P required within mains water in some areas.The spatial extent of PO4 dosing has previously been estimated to have increased from 90 to 95% between 2007 and 2011.Given the limited data available with which to constrain the extent of dosing, a sensitivity analysis was undertaken.Two further estimates of the temporal variation in MWL-P load for the River Thames catchment were derived: Using a dosing extent 25% lower than the estimate above; and Using a dosing extent 25% greater than the estimate above, limited to a maximum value of 100%.Table 1 and Fig. 
3 summarise how P loads to the River Thames catchment from agriculture, STWs and MWL have changed over the past 30 years.Across the period 1981–2011, P loads from STWs within the catchment have decreased by 84%, whilst the load of P from agricultural land has fallen by 54% over the same period.These analyses indicate that policy and practice have successfully reduced the input of P to the River Thames from STW and agricultural sources.However, evidence suggests that despite this dramatic reduction in P loads, in-river P concentrations continue to exceed critical ecological thresholds and that the reduction in P loads has delivered little impact in terms of nuisance algal growth.Biological response to reduced P load and concentration seems to be delayed in many systems and/or P concentrations remain above biological thresholds despite significant reductions in P load.The maintenance of P concentrations above thresholds that drive biological change may be due to the persistence of alternative sources of P that have not been properly accounted for to date in source apportionment work, including MWL-P.Over the period 1994–2013, the relative and absolute contribution of MWL-P to the River Thames catchment have increased substantially.Depending on the proportion of MWL-P delivered to receiving waters, MWL-P loads may now be approaching a comparable order-of-magnitude to P loads from diffuse agricultural sources and from STW effluent.Based on national figures for dosing concentrations of between 0.5 and 2 mg/L, the relative proportion of MWL-P could be from 12% to 47% of sewage treatment effluent.Fig. 3 compares a worst case scenario, in which the maximum possible contribution of MWL-P to the River Thames P budget occurs, assuming conservative transport of P and therefore 100% delivery of MWL-P to the river network, with scenarios of 25%, 50% and 75% delivery of MWL-P.The scenario of 100% MWL-P delivery reflects two conditions: no return of MWL back into the sewer network; and no retention of P along hydrological pathways between the point of leakage from a mains distribution network and the catchment outlet.Based on this worst case scenario, our analyses suggest that if the trend in declining STW effluent P concentrations continues, for example due to more stringent and/or more widespread consents on final effluent P concentrations, the relative contribution of MWL to P loads within the River Thames could exceed STW P by the end of 2016.River P concentrations that persist above biological thresholds may also be due to legacy P in catchments, associated with the accumulation and subsequent chronic release of P from environmental pools along the land-water continuum.In a scenario where 100% of MWL-P arrives at the river network without significant storage in the catchment, there would be no contribution from MWL to legacy P.However in other more probable scenarios with < 100% transfer of MWL-P to the river network, some MWL-P will be retained within the catchment and could be released at a later date, thereby contributing to legacy P effects within catchments.By integrating average annual MWL-P loads from 1994 when P-dosing first started until our most recent estimates of MWL-P, it is possible to estimate the total mass of P that has been released into the River Thames catchment due to MWL.Assuming a scenario in which 50% of MWL-P arrives at the river network without accumulation in the catchment, and the remaining MWL-P is stored within one or more environmental pool, the legacy contribution to P 
within the River Thames catchment from MWL is approximately 1 kt P over the period 1994–2013.Ascott et al. estimated that ~ 15% of MWL fluxes may recharge to groundwater in the River Thames catchment.Given water residence times within the shallow groundwater that MWL is likely to recharge within this catchment, it is probable that any MWL-P recharged to groundwater 20 years ago may now be discharging into river networks, assuming relatively conservative transport of P in shallow groundwater systems.Two highly significant challenges define the context for MWL-P.Firstly, minimising human health risks associated with exposure to contaminants such as Pb and Cu in drinking water.Secondly, minimising the contribution made by MWL-P to nutrient enrichment within the environment.Water utilities have invested significant capital and operating resources in reducing P loads delivered to receiving waters, both by working with land owners and land managers to mitigate P losses from agricultural land as well as by enhancing P removal at STWs.As illustrated by the recent public health crisis in Flint, USA that was partly associated with inadequate PO4 dosing of raw water sources, fundamental human health, social and reputational effects mean that cessation of PO4 dosing within distribution networks in which lead piping remains will never be an option to reduce P loads delivered to the environment.Below, we consider alternative responses to the challenge of reducing MWL-P whilst continuing to ensure that human health risks associated with drinking water are minimised.The most obvious alternative to continued P-dosing of raw water sources is wholesale replacement of lead piping within drinking water distribution networks.This should include not only the communication pipes that are owned by water utilities, but also the below-ground supply pipes within the boundary of land that is the responsibility of homeowners and plumbing within domestic properties up to the final point of distribution at the domestic tap.Partial replacement of lead pipes within drinking water distribution networks is not a suitable response.Partial replacement has the potential to exacerbate corrosion of lead pipes and thereby increase human exposure to lead within drinking water, due to galvanic corrosion between the original lead piping and the replacement pipe that is often constructed from copper.However, full replacement of lead pipes has very significant cost implications.For example, in the USA the American Water Works Association has estimated the cost of replacing drinking water infrastructure at around $1 trillion over the next 25 years, whilst a $70 billion programme under the True LEADership Act has recently been proposed by US senators that would include lead service line replacement.In the UK, it has been estimated that the market price of P used to dose raw water must go up by a factor of 20 before the replacement of lead piping would be financially viable.Lead rehabilitation has previously been tested and new methods are market-ready for deployment, including lining the internal walls of pipes with non-lead bearing materials.Rehabilitation would be cheaper than replacement as fewer excavations are required.However, the cost limited life-span of liners and the timescales involved in widespread lining or replacement may ultimately make these actions an unlikely solution to MWL-P in anything but the long-term.Assuming that the present-day extent and final concentration of PO4 dosing for drinking water is likely to 
continue, at least in the short to medium term, alternative approaches to reducing MWL-P loads merit consideration. Such approaches should focus on how MWL can be minimised, with the consequence that reductions in MWL will result in lower P loads being delivered to the environment from mains water. The economic level of leakage (ELL), the leakage rate at which it would cost more to make further reductions in leakage than to produce replacement water from another source, is an important factor in long term investment planning within the water industry. The sustainable ELL (SELL) also represents a minimum level of leakage against which the performance of water companies can be assessed. Preliminary research carried out two decades ago suggests that the ELL is highly sensitive to assumed water cost; for example, a 1% increase in the value of the lost water could lead to the ELL falling by 10%. The current methodology for calculating the SELL in the UK incorporates estimates for a number of externalities associated with MWL, for example the carbon costs, the interruption to water supplies, the disturbance to vehicle movement and the impact of noise pollution due to leakage, alongside the environmental benefits of reduced water abstraction following reductions in MWL. However, the SELL methodology does not currently include any estimate of the environmental damage costs associated with MWL-P. Given that the methodology for calculating the SELL is currently under review ahead of the next price review of water customers' bills in England and Wales, there is a timely opportunity to consider whether to incorporate MWL-P as an externality within a revised SELL methodology. For example, assuming a damage cost of c.$47 per kg of P and the estimate of 1200 tonnes of MWL-P yr− 1 in the UK from Ascott et al., multiplying these figures gives the total damage costs associated with P from MWL, which would be approximately $57 million yr− 1 in the UK. Clearly, this estimate assumes that all MWL-P remains within the environment and contributes to environmental damage. Significant uncertainty surrounds these assumptions, emphasising the need to better constrain the ultimate fate of MWL-P if more accurate assessments of the damage costs associated with this source of P are to be made. The consequence of incorporating MWL-P as an externality would be to lower the SELL and thereby to reduce P loads delivered to the environment from MWL, assuming that SELL targets were met. However, a proportion of any additional capital or operating costs associated with meeting a lower SELL target would be borne by water customers, which would require approval from the economic regulator in England and Wales and may well meet resistance from water customers. Finally, the SELL framework could be broadened to encompass a sustainable environmental/economic level of P release, thereby recognising MWL-P as a source of P that must be quantified and managed as part of landscape-scale controls on P delivery to the environment. The basis for such a framework already exists in the form of the Total Maximum Daily Load (TMDL) approach developed in the USA to deliver the requirements of the Clean Water Act. In the UK, initial trials of catchment-wide P permits, led by the Environment Agency in collaboration with the water industry, although currently focussed solely on STWs, provide a similar opportunity to incorporate MWL-P within landscape-scale controls on P export to the environment. Within either a TMDL or catchment-wide P permit, MWL-P could be quantified and subsequently allocated a
proportion of the TMDL, or a proportion of the catchment P permit where a catchment permitting framework was extended beyond STWs.Where a TMDL or catchment P permit was exceeded following incorporation of current levels of MWL-P, a number of options would be available to water companies.Firstly, reductions in MWL and thereby in MWL-P could be specifically proposed by the water company in order to meet the TMDL or catchment-wide P permit.Secondly, a water company may choose to offset MWL-P by delivering an equal reduction in P load from other sources that fall within their remit, particularly through enhancing P removal at STWs.Finally, the potential to trade the reduction in P due to MWL-P required in order to meet a TMDL or a catchment P permit could be considered, for example by water companies contributing financially towards reductions in P export from agricultural land that matched this requirement.However, incorporating MWL-P within either the TMDL or catchment P permit framework would require accurate estimates of MWL-P loads that are derived from mains distribution networks.Accurate quantification of the ultimate fate of MWL-P would also be required, to constrain the proportion of MWL-P that is delivered to receiving waters as opposed to being returned to sewer or entering long-term storage within a catchment.Effective strategies to reduce phosphorus enrichment of aquatic ecosystems require accurate quantification of the absolute and relative importance of individual sources of P. Assuming that mains water supplies will continue to be dosed with PO4, MWL-P loads must be quantified more widely and the ultimate fate of MWL-P within the environment better understood.Addressing these challenges would underpin more accurate P source apportionment models, enabling policy and investment to be effectively targeted in order to protect and restore aquatic ecosystems facing the risk of eutrophication.Perhaps more fundamentally, this information will provide insight into the way in which finite P resources are used to maintain drinking water supplies, supporting optimisation of this demand for P in the future. | Effective strategies to reduce phosphorus (P)-enrichment of aquatic ecosystems require accurate quantification of the absolute and relative importance of individual sources of P. In this paper, we quantify the potential significance of a source of P that has been neglected to date. Phosphate dosing of raw water supplies to reduce lead and copper concentrations in drinking water is a common practice globally. However, mains water leakage (MWL) potentially leads to a direct input of P into the environment, bypassing wastewater treatment. We develop a new approach to estimate the spatial distribution and time-variant flux of MWL-P, demonstrating this approach for a 30-year period within the exemplar of the River Thames catchment in the UK. Our analyses suggest that MWL-P could be equivalent to up to c.24% of the P load entering the River Thames from sewage treatment works and up to c.16% of the riverine P load derived from agricultural non-point sources. We consider a range of policy responses that could reduce MWL-P loads to the environment, including incorporating the environmental damage costs associated with P in setting targets for MWL reduction, alongside inclusion of MWL-P within catchment-wide P permits. |
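The source apportionment arithmetic described above is simple enough to sketch in a few lines. The following snippet is a minimal illustration, not the authors' model: it uses the figures quoted in the text (3200 ML/day of leakage, roughly 1000 μg P/L dosing, ~95% dosing extent and a damage cost of c.$47 per kg P), and the function name, unit handling and the assumption that all leaked P causes damage are our own simplifications.

```python
# Minimal sketch (not the authors' code) of the MWL-P load and damage-cost
# arithmetic described in the article above. Values are illustrative figures
# quoted in the text; names and structure are our own.

def mwl_p_load_tonnes_per_year(leakage_ml_per_day: float,
                               dose_ug_p_per_litre: float,
                               dosing_extent: float) -> float:
    """Annual P load (tonnes/yr) carried by leaked, PO4-dosed mains water."""
    litres_per_year = leakage_ml_per_day * 1e6 * 365          # ML/day -> litres/yr
    grams_p = litres_per_year * dose_ug_p_per_litre * 1e-6 * dosing_extent
    return grams_p / 1e6                                       # grams -> tonnes

# National-scale illustration: 3200 ML/day leakage, ~1000 ug P/L dosing,
# ~95% of supply dosed -> the same order as the ~1200 t P/yr estimate cited.
load_t = mwl_p_load_tonnes_per_year(3200, 1000, 0.95)
print(f"MWL-P load: ~{load_t:.0f} t P/yr")                     # ~1110 t P/yr

# Damage-cost illustration: ~$47 per kg P applied to 1200 t P/yr, assuming
# (conservatively, as the text notes) that all MWL-P causes environmental damage.
damage_usd_per_year = 1200 * 1000 * 47
print(f"Damage cost: ~${damage_usd_per_year / 1e6:.0f} million/yr")  # ~$56 million/yr (the text rounds to ~$57 million/yr)

# Scaling load_t by a delivery fraction (0.25-1.0, as in the scenarios above)
# gives the range of riverine MWL-P loads compared against STW and agricultural sources.
```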
306 | A single-step fabrication approach for development of antimicrobial surfaces | Surgical site infections are one of the most devastating complications after surgical procedures.More seriously, with the increased use of antimicrobial drugs, the threat of antimicrobial resistance is significant and is increasingly being recognized as a global problem.Thus, the increasing incidence of healthcare-associated infections and overuse of antibiotics leads to the need for alternative strategies which can decrease antibiotic consumption, such as the development of antimicrobial medical devices.Surface treatment of medical devices by coating with antibacterial agents is a promising solution.Currently, silver and its compounds are the most commonly used antibacterial materials, due to their strong, broad-spectrum antimicrobial effects against bacteria, fungi, and viruses.Recently, silver nanoparticles have received interest for antimicrobial applications as they can enter bacterial membranes and deactivate respiratory chain dehydrogenases to inhibit respiration and growth of microorganism.Due to this, AgNPs are believed to have good potential for application in silver-based dressings and silver-coated medical devices without promoting microbial resistance.A variety of physical and chemical methods was developed to prepare AgNPs on biomaterial substrates.For physical methods, Cao et al. employed silver plasma immersion ion implantation process to embed AgNPs on titanium substrates.The prepared samples were extremely effective in inhibiting both Escherichia coli and Staphylococcus aureus strains while exhibiting noticeable activity in promoting propagation of the osteoblast-like cells.Echeverrigaray et al. prepared stainless steel specimens with silver atoms by ion implantation process at low energy on a reactive low voltage ion plating equipment.Ferraris et al. deposited silver nanocluster/silica composites onto AISI 304 L stainless steel via a radio frequency co-sputtering deposition method.After one month of immersing in diverse food relevant fluids, these coated specimens showed a good property for the reduction of bacterial adhesion.However, the high cost and low efficiency of the above physical methods limited the industrial application of AgNPs.Researchers have also resorted to wet chemical procedures to synthesize AgNPs on biomaterials.Inoue et al. prepared sodium titanate thin films with a porous network structure through the reaction of titanium samples with NaOH solutions, then immersed in CH3CO2Ag solution for 3 h to conduct silver ion exchange treatment.Soloviev and Gedanken employed ultrasound irradiation to deposit AgNPs on stainless steel from AgNO3 solution, which comprised aqueous ammonia and ethylene glycol.Diantoro et al. used sodium borohydride, mercaptosuccinic acid and methanol to finish the reduction reaction of AgNPs from silver nitrate solution.Heinonen et al. applied sodium hydroxide, ammonia and glucose to prepare the superhydrophobic surface with AgNPs by sol-gel technology.Jia et al. presented a strategy of mussel-adhesive-inspired immobilization of AgNPs.Moreno-Couranjou et al. employed catechols to realize the reduction of silver nitrate to obtain AgNPs.Cao et al. 
used dopamine as a reducing reagent to manufacture AgNPs on 304 stainless steel in a weak alkaline aqueous solution.All the testing results illustrated that the existence of silver nanoparticles is essential for the antibacterial activity of silver-containing surfaces.The current chemical synthesis methods are not environmentally friendly as they involved at least two different chemical reagents in the chemical reaction.Thus, how to reduce the participant types of chemical reagents, even only using silver nitrates, is another challenge for the chemical synthesis method from the viewpoint of sustainable chemistry.In addition to the coating approach, research has also demonstrated that microstructures of certain geometries can reduce surface adhesion of bacteria.Ferraris et al. proved that microgrooves on titanium surfaces prepared by electron beam surface structuring technology help to reduce adhesion of bacteria.For instance, in surgical tools, ultra-sharp knife-edges in combination with textured surfaces in the knife-tissue contact region could lead to significant reductions in forces and consequent tissue damage.The microstructures act as stores to realize immobilization and release of silver ions into the surgical point.In addition, microstructures will protect the AgNPs from detachment and wear when subjected to external forces.Thus, the synergistic effect of AgNPs and micro-structures will lead to even better antibacterial results.In this research, an innovative StruCoat approach is proposed for the preparation of anti-bacterial microstructures with AgNPs coatings, through a single step process.It is a hybrid fabrication approach which combines laser ablation technology for micro-structuring, and laser-assisted thermal decomposition and deposition for synthesizing and coating AgNPs from silver nitrate solution simultaneously.The StruCoat approach offers advantages for the synthesis of “green” AgNPs.There is no requirement for reducing and stabilizing agents involved in the chemical reaction, so the type of chemical reagent is reduced.More importantly, it offers durable silver coated microstructured anti-bacterial surfaces.This paper will explore the mechanism of StruCoat and the effects of laser power and molarity of silver nitrate on the morphology of microstructures and the size of AgNPs.It will also evaluate the antimicrobial performance of specimens prepared by StruCoat.The schematic of StruCoat is illustrated in Fig. 1.In this work, an ultrasonic atomizer was used to produce micro/nano drops of AgNO3 from liquid based on vibrating piezo crystal due to its robustness and capability of working at low pressure.As shown in Fig. 
1, micro liquid drops of aqueous solutions of AgNO3 emerging from the ultrasonic atomizer are transported to the nanosecond pulsed laser ablation zone.Laser heating will cause the melting and even gasification of stainless steel.The vapour and plasma pressure will result in the partial ejection of the molten materials from the cavity and formation of surface debris.The recast layer is formed as the thermal energy rapidly dissipates into the internal material.During the laser-materials interaction, the laser ablation zone is in a high-temperature state, so the adherent AgNO3 drops are thermally decomposed to AgNPs and deposited on the surface continuously.Heating will result in decomposition of most metallic nitrates into their corresponding oxides.However, the decomposition product of silver nitrate is elemental silver as silver oxide has a lower decomposition temperature than silver nitrate.Qualitatively, decomposition of silver nitrate is tiny under the melting point, but it is becoming increasingly apparent at about 250 °C, while total decomposition will take place at 440 °C.The chemical decomposition equation of silver nitrate can be described as:2 AgNO3 → 2 Ag + O2 + 2 NO2,Fig. 2 illustrates the whole chemical reaction processes.The water starts to evaporate when drops of silver nitrate solution make contact with the high-temperature molten layer.Solid silver nitrate crystals are formed on the surface, but they start to decompose to silver oxide and silver when the temperature is higher than 250 ℃ and decomposes completely when the temperature is above 440 ℃.In addition, the silver oxide is continuously decomposed to silver if the temperature is still higher than 300 ℃.In the laser machining process, the absorption of laser energy leads to a rapid increase of local temperature, the maximum temperature realized 3500–14500 K, which is higher than the vapour temperature of stainless steel.This temperature is much higher than the decomposition temperature of silver nitrate; so, there is sufficient thermal energy to finish the decomposition reaction as shown in Fig. 2.Then, the AgNPs deposit on the surface during the solidification of the molten materials in the laser ablation zone.The AISI 316 L stainless steel plates were used as the experimental specimens in this research.Before laser machining, the stainless steel plates were machined by a flat end mill, as described by, giving a surface roughness of about 0.2 μm.Silver nitrate and deionized water were used to prepare chemical solutions with different molarities of 25–200 mmol/L.Fig. 3 shows the hybrid ultra-precision machine used for experiments.The machine contains a nanosecond pulsed fibre laser which has a central emission wavelength of 1064 nm.The laser source has a nominal average output power of 20 W and its maximum pulse repetition rate is 200 kHz.An ultrasonic atomizer was employed to generate micro liquid drops as shown in Fig. 
3. This research will investigate the effect of laser power and molarity of AgNO3 on the surface topography and the size of AgNPs. Details of the operational conditions for the two experiments are shown in Tables 1 and 2. All specimens were cleaned ultrasonically with deionized water, acetone and ethanol for 10 min to remove any organics on the surface before and after the experiments. Then, these specimens were dried in an oven at 100 °C for 20 min. The surface chemistry and the morphology of the laser-structured Gaussian holes and deposited AgNPs were characterized by scanning electron microscopy and X-ray diffraction. Antibacterial experiments were implemented to assess the susceptibility of three different kinds of specimens to bacterial attachment and biofilm growth: smooth, laser-ablated and StruCoat-fabricated specimens. Samples were cleaned before each experiment using 70% ethanol to remove any contaminant bacteria already on their surface. The bacterium used in all experiments was Staphylococcus aureus, selected as it is widely associated with commonly contracted medical device-related infections. S. aureus was cultured in 100 ml nutrient solution for 18 h at 37 °C with a rotational speed of 120 rpm. Post-incubation, the bacterial culture was centrifuged at 3939 ×g and the pellet resuspended in phosphate buffered saline (PBS), before being serially diluted to a concentration of 10⁴ CFU/ml for experimental use. Stainless steel specimens were immersed in 5 ml of 10⁴ CFU/ml bacterial suspension in multiwell culture plates and incubated at 37 °C for 24 h to permit attachment and subsequent biofilm formation. Following incubation, the samples were rinsed in sterile PBS to remove any excess planktonic bacteria not attached to the biofilm. The samples were then placed into 9 ml PBS, and the surface-attached bacteria were physically removed from the surfaces using the following methodology: 10 s manual agitation, followed by 300 s in an ultrasonic bath, followed by a further 10 s of manual agitation. This process facilitated the release of the attached bacteria from the surface into the PBS ‘capture fluid’, with this fluid then being serially diluted and samples spread plated onto nutrient agar. Plates were incubated at 37 °C for 24 h, and results enumerated as CFU/ml. In the laser ablation process, material was removed from the substrate surface because the high peak power produces thermal energy above the breakdown threshold of the material, which leads to melting, ablation and vapour generation. The thermal energy also helped to form the high-temperature zone around the laser radiation area. The thermal decomposition of silver nitrate to silver particles relied on the heat generated in the laser ablation process. Thus, the size of the microstructures and AgNPs could be tightly controlled by the laser power and the molarity of silver nitrate. This section will analyze the influence of the above factors. Fig. 4 shows the SEM images of the smooth surface and laser-ablated microstructures under different laser powers but at a constant silver nitrate molarity of 50 mmol/L. The size of the laser-ablated microstructures was observed to increase with increasing laser power. The increased diameter of the laser-ablated holes and the thickness of the recast layers is the result of molten metal flow driven by surface tension and recoil pressure formed by the evaporation. Fig.
4 shows that all specimens contained a certain amount of silver particles deposited on the surfaces. The presence of silver nanoparticles was further confirmed by the XRD analysis results shown in Fig. 5. At the laser power of 2 W, the heat dissipates quickly, so the micro drops of silver nitrate have a very short time period in which to decompose to AgNPs. The theoretical diameter of the liquid drops calculated by Eq. is around 8.2 μm. However, the maximum diameter of the microstructures obtained in the experiment was approximately 20 μm at the laser power of 2 W. This indicated that the droplets had a low probability of falling within the laser-ablated high-temperature area. When the laser power increased to 8 W, the maximum diameter of the microstructures reached 50 μm. Some AgNPs were also formed on the microstructures due to the high temperature of the molten layer. Fig. 4 also shows that more AgNPs were formed at a laser power of 14 W than at 8 W. The diameter and depth of the melt pool increased with increasing laser power as more energy was transferred into the heat-affected zone. The sputtering area was formed at 14 W due to the vertical movement of liquid during irradiation, caused by the vapour flow that expands in the Gaussian hole. As a result, AgNPs were deposited on both the spatter area and the Gaussian holes. However, flake-like silver started to form on the microstructures when the laser power further increased to 20 W. The thermal stress accumulation increased with the increase of laser power. This explained the increased quantity of AgNPs from low to high laser powers. At the low laser power of 4 W, not enough accumulated thermal stress and physical space were generated for the silver nitrate to finish the decomposition process. However, when the laser power increased to 20 W, the laser-ablated area was overheated. The excess heat energy led to a longer cooling time, so many more drops of silver nitrate participated in the chemical reduction. The resulting silver accumulated and formed particles with large dimensions. On the other hand, the evaporation and sputtering phenomena would be enhanced significantly under high laser power, which had a negative effect on the deposition of AgNPs. Therefore, overheating would not be beneficial for growing more AgNPs on the laser-ablated structures. An appropriate level of thermal energy is necessary for the deposition of AgNPs. It is also known that a uniform distribution of AgNPs is beneficial to anti-bacterial properties. As such, specimens processed at a laser power of 14 W had the most homogeneous size distribution of the AgNPs, and this was deemed the best result for deposition of AgNPs in the laser ablation zone. The XRD patterns of the smooth surface, laser-machined surface and StruCoat surface of 316 L stainless steel are shown in Fig. 5. In Fig. 5, there are three sharp diffraction peaks corresponding to the XRD pattern of austenite and one peak for ferrite. For the laser-machined surface, austenite, Fe3O4 and Fe2O3 were identified in the XRD pattern. In Fig. 5, the presence of pure silver is confirmed by the diffraction peaks at 2θ = 38.2°, 44.4°, 64.6° and 77.5° on the StruCoat surface, which correspond to scattering from the lattice planes of pure silver. Thus, the XRD pattern in Fig. 5 proves the existence of AgNPs. The molarity of silver nitrate is another critical processing parameter in StruCoat for the deposition of AgNPs. In this section, different molarities of silver nitrate, as listed in Table 2, were employed to conduct the experiment. Fig.
6 showed the morphologies of microstructured surfaces processed by StruCoat at different molarities of silver nitrate solutions varying from 25 mmol to 200 mmol, while the laser power was fixed at 14 W. For the specimens which employed 25 mmol, 50 mmol and 100 mmol silver nitrate solutions, the AgNPs could be clearly observed.The distribution density of AgNPs was significantly higher while the molarity of the silver nitrate solution was 50 mmol.The density of silver ions increased with the increase of molarity of silver nitrate solution.Low molarity of silver ions required less thermal energy in the chemical reduction process, thus, the excess heat leads to AgNPs being evaporated furtherly.This explains the increase in distribution density of AgNPs while the molarity of silver nitrate solution was increased from 25 mmol to 50 mmol.However, when the molarity of silver nitrate solution increased to 100 mmol, aggregation and clumping of the AgNPs were observed.Some adjacent AgNPs started to weld together, with some silver bars starting to appear on the microstructure.There were a number of reasons which could explain these observations.Firstly, the silver nitrate solution of higher molarity required more energy to finish thermal decomposition reaction, resulting in insufficient heat energy for evaporation of the silver particles.Secondly, the surface tension and density of drops of silver nitrate increased with the increase of molarity of silver nitrate, thus the adjacent drops were more possibly connected to each other when they were deposited on the microstructure and formed larger drops.Thirdly, the high molarity of silver ions in every drop could have resulted in more silver being deposited on the substrates.As shown in Fig. 6, the aggregation and clumping of the AgNPs became more significant when the molarity of silver nitrate solution increased to 200 mmol.The size distribution of AgNPs was shown in Fig. 7.The length of 100 particles with a clear profile was measured manually by image processing software based on the SEM image in Fig. 6.The size of AgNPs was found to be dependent on the molarity of the silver nitrate solution.At low molarity, the mean particle size of microspheres was 400–600 nm.At high molarity, the mean particle size reached micron level.Nevertheless, the particle size of 500 nm had the maximum proportion for all the specimens.In addition, a low standard deviation indicated that the data points tended to close to the mean value, while a high standard deviation indicated that the data points were spread out over a wider range of values.Thus, the best molarity was 50 mmol as it led to specimen with a minimum mean particle size of 480 nm and minimum standard deviation of 224 nm.This indicated that too high a molarity was not beneficial for growing more silver particles in nanoscale on the laser-ablated structures, and applicable molarity of silver nitrate solution would be necessary for the generation of AgNPs with uniform distribution in the average size.Comparison between the predicted and measured diameter of silver particles is shown in Fig. 
8. The predicted value was closer to the measured median value than to the average value. In theory, with increasing molarity of silver nitrate, the predicted particle size increases gradually due to the increased silver content of the micro drops. The experimental results showed the same tendency, except at 50 mmol. Similar particle sizes of approximately 500 nm were obtained at molarities of 25 and 50 mmol in the experiments. Thus, the theoretical and experimental results indicated that it is not necessary to employ silver solutions with high molarity, as this could lead to an increased size of the deposited particles. In order to observe the interface between the AgNPs and the stainless steel substrate, Focused Ion Beam milling was used to make a cross-section of the StruCoat-processed surfaces. Fig. 9 shows SEM images of the subsurface topography. It could be observed that the AgNPs were firmly connected with the stainless steel after the welding effect in the laser ablation process, and this helps to attain a high strength of interfacial bonding. During the laser machining process, the rapid heating and cooling lead to modification of the material microstructure. The laser machining heat-affected zone (HAZ) is defined as the area that has not melted but has undergone thermally induced microstructural modification by the laser pulses. This section will investigate the cross-sectional material microstructure of the HAZ of 316L austenitic stainless steel in the traditional laser machining process and the StruCoat process. Metallographic polishing and etching methods were used to prepare cross-sections of specimens in order to evaluate the changes to the substrate structures. SEM images of the metallographic structure of the stainless steel surfaces are shown in Fig. 10. The linear intercept method was employed to measure grain size. The average grain sizes after the laser machining process and StruCoat were about 9.3 μm and 4.5 μm respectively, while the average size of the original grains in the as-received 316L stainless steel was about 24.6 μm. The significant grain size refinement was due to laser reversion annealing through the intense heat input during the laser machining process. As a result, the grain refinement effect would lead to an increase in both material strength and fracture toughness. More importantly, the specimen had even higher cooling rates in StruCoat than in laser machining due to the evaporation of the aqueous solution, which resulted in a further decrease of grain size. In addition, it could be clearly seen from Fig. 10 that the depth of the HAZ subjected to the laser machining process was about 97 μm, while the depth of the HAZ in StruCoat was about 62 μm. The reduced depth of the HAZ in StruCoat was also due to the increased cooling rate. In this study, the antibacterial capabilities of the two stainless steel specimens processed by laser ablation and StruCoat were evaluated after 24 h of cultivation with bacterial contamination. A smooth stainless steel specimen with no surface modifications was included as a comparative control. Results showed that specimens machined by laser ablation and StruCoat both demonstrated reductions in bacterial attachment and biofilm formation compared to the unmodified control, as shown in Fig.
11. Specimens processed by StruCoat exhibited a significantly greater reduction in bacterial attachment than laser-ablated specimens, with a total decrease in bacterial count of 86.2% compared to the unmodified material; thus, the coating of AgNPs was critical for enhancing the antimicrobial capabilities of specimens manufactured by StruCoat. The slight antibacterial activity evidenced by the laser-ablated specimens without AgNPs can likely be attributed to the generation of iron oxide during the laser ablation process; an effect which was documented in a study by Fazio et al. Jia et al. explored the synergistic effect of AgNPs and microstructures, and proved that microstructures had a special antibacterial mode named “trap & kill”. Fig. 12 illustrates the possible sterilisation modes involved in the antibacterial process. Firstly, silver ions released from the AgNPs kill some bacteria before they contact the surface, termed ‘release killing’. After silver ion exposure, the bacterial membrane interacts with the silver ions, resulting in cytoplasmic membrane shrinkage and damage. Secondly, some bacterial killing is attributable to direct contact with silver particles, termed ‘contact killing’. The accumulation of AgNPs in the bacterial membrane leads to a significant increase in permeability, which results in the death of the bacteria. More importantly, negatively charged bacterial cells are drawn into the microstructures of the surfaces, where they bind with AgNPs via electrostatic attraction and are then killed through the contact-killing mechanism. In addition, the microstructures can act simultaneously as storage pockets for AgNPs to attain sustained release of silver ions, protecting the AgNPs from friction-induced particle detachment. In terms of the significant antimicrobial effects observed in the present study, further work is required to determine the exact mechanism of action and to correlate it with that of other studies using AgNPs, such as that of Jia et al. In summary, this work demonstrates a single-step fabrication approach for the development of antimicrobial surfaces. Decreasing grain size will increase material strength and fracture toughness. Antimicrobial efficacy testing also demonstrated the enhanced antibacterial properties of StruCoat, with an 86.2% anti-bacterial rate against Staphylococcus aureus compared to unmodified samples in the present study. All data underpinning this publication are openly available from the University of Strathclyde KnowledgeBase at https://doi.org/10.15129/af048a45-c713-450f-ad78-84d9615ca7cf. | In recent years, the increasing incidence of healthcare-associated infections and overuse of antibiotics have led to high demand for antimicrobial-coated medical devices. Silver nanoparticles (AgNPs) have attracted tremendous attention as a subject of investigation due to their well-known antibacterial properties. However, current physical and chemical synthesis methods for AgNPs are costly, time-consuming and not eco-friendly. For the first time, this paper proposes a novel single-step fabrication approach, named StruCoat, to generate antimicrobial AgNPs coated microstructures through hybridizing subtractive laser ablation and additive chemical deposition processes. This new approach can offer antimicrobial micro-structured silver coatings for medical devices such as surgical tools and implants.
The StruCoat approach is demonstrated on 316 L stainless steel specimens structured by using nanosecond pulsed laser, while AgNPs are decomposed and coated on these microstructures from the micro drops of silver nitrate solution simultaneously generated by an atomizer. According to the experimental results, silver nitrate with a molarity of 50 mmol and jet to the stainless steel machined at 14 W are the best-operating conditions for chemical decomposition of drops of silver nitrate solution in this research and results in AgNPs with a mean size of 480 nm. Moreover, an investigation of the material microstructures of stainless steel surfaces processed by StruCoat shows significant reduction of material grain size (81% reduction compared to that processed by normal laser machining) which will help improve the fracture toughness and strength of the specimen. Antimicrobial testing also demonstrated that specimens processed by StruCoat exhibited excellent antibacterial properties with 86.2% reduction in the surface attachment of Staphylococcus aureus compared to the smooth surface. Overall, this study has shown StruCoat is a potential approach to prepare antimicrobial surfaces. |
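For the predicted-versus-measured particle sizes discussed in the StruCoat article above, the sketch below shows one plausible mass-balance estimate of the deposited Ag particle diameter from the atomized droplet size and the AgNO3 molarity. It assumes that each droplet thermally decomposes to a single dense silver sphere; this assumption, the physical constants and all names are ours and are not taken from the paper's own prediction equation.

```python
# Hedged sketch: predicted Ag particle size from droplet size and AgNO3 molarity,
# assuming one dense silver sphere forms per atomized droplet (our assumption,
# not the paper's stated model).
import math

M_AG = 107.87e-3      # kg/mol, molar mass of silver
RHO_AG = 10490.0      # kg/m^3, density of solid silver

def predicted_particle_diameter_m(droplet_diameter_m: float,
                                  molarity_mmol_per_l: float) -> float:
    """Diameter of the sphere containing all the silver from one droplet.
    Note that 1 mmol/L of AgNO3 equals 1 mol/m^3, i.e. 1 mol of Ag per m^3."""
    droplet_volume = math.pi / 6.0 * droplet_diameter_m ** 3
    silver_mass = droplet_volume * molarity_mmol_per_l * M_AG    # kg Ag per droplet
    particle_volume = silver_mass / RHO_AG
    return (6.0 * particle_volume / math.pi) ** (1.0 / 3.0)

# Using the 8.2 um theoretical droplet diameter quoted in the text:
for mmol in (25, 50, 100, 200):
    d_nm = predicted_particle_diameter_m(8.2e-6, mmol) * 1e9
    print(f"{mmol:>3} mmol/L -> ~{d_nm:.0f} nm")
# ~520 nm at 25 mmol/L rising to ~1040 nm at 200 mmol/L: the same
# few-hundred-nanometre scale as the measured ~480-500 nm particles, with the
# predicted size growing with molarity, as reported.
```

Because the silver mass scales with molarity but the diameter scales with its cube root, quadrupling the molarity from 50 to 200 mmol only increases the predicted diameter by a factor of about 1.6, which is consistent with the relatively modest growth in measured particle size.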
307 | The influence of deformation conditions in solid-state aluminium welding processes on the resulting weld strength | Solid bonding is welding without the addition of a brazing filler at a temperature significantly below the base metals’ melting points."Solid state welding includes some of the world's oldest welding processes, such as forge welding, which was used to produce the 1600-year-old ‘Iron pillar of Delhi’ and the folded steel katana swords used by the samurai of ancient Japan.Forge welding of wrought iron was routine practice until the end of the nineteenth century."For example, it was the process used to make the propeller shaft and sternframe of Brunel's ship, the Great Eastern, launched in 1858.The first scientific study of solid bonding was by Desaguliers in 1724; he demonstrated to the Royal Society that two lead balls, when pressed together and twisted, could result in a weld with a strength up to that of the bulk metal.The next significant study was not until 1878, when Spring investigated the adhesion of various non-ferrous metals by pressing together the bases of hot metal cylinders.It was found that aluminium in particular produced a strong weld at low deformations.At the beginning of the twentieth century, the growing use of steel and the development of fusion welding for both steel and aluminium led to a lack of interest in solid bonding."However, the development of roll bonded clad metals in the mid-twentieth century prompted a flurry of research, summarized in Tylecote's comprehensive 1968 review, The Solid Phase Welding of Metals.In recent decades, researchers have investigated several aluminium solid bonding processes: extrusion of hollow cross-sections and compacted machining chips, accumulative roll bonding, and friction welding."As research on these processes highlights the importance of solid bonding and informs the basis of this work's literature review, a brief explanation of each process and its relevance is given below.Extrusion of cast billets into hollow sections is described by Xie et al.The process typically uses a porthole die, which has a central mandrel that the metal deforms around, forming the hollow section.A bridge to the die rim supports the base of the mandrel.The metal billet splits around this bridge before welding back together before the die exit."The strength of this weld determines the extrudant's mechanical properties.In 1945, Stern patented an extrusion process for directly producing finished articles from aluminium scrap, the scrap fragments welding together in the extrusion die.Etherington considered recycling aluminium manufacturing scrap by using the conform process, a continuous version of extrusion.Lazzaro and Atzori describe an industrial take-up, where the conform process is used to bond granulated saw trimmings to produce rod for steel deoxidant.The solid bonding process avoids remelting the scrap, Allwood et al. and Güley et al. calculating energy savings compared with conventional recycling of over 90%.The process may also increase the material yield of reprocessing, as conventional recycling of machining chips in particular is difficult due to the chips’ large surface area to volume ratio, resulting in a recycling yield as low as 54%.Accumulative roll bonding is a process developed by Saito et al. 
where one strip of aluminium is stacked on top of another and then rolled, bonding the strips together as they go through the roll bite.Researchers in the aerospace industry have investigated the process as a means of introducing intense straining into a bulk material, reducing the grain size to less than 1 μm.This results in a very high strength material because of the Hall–Petch relationship.Friction stir welding was developed at The Welding Institute in the UK.The process is described by Dawes.Bonding is achieved by a combination of frictional heat and deformation combined with pressure.It is an attractive alternative to conventional fusion welding as the base metals do not melt and retain more of their original properties.It may be used on highly alloyed 2000 and 7000 series aerospace aluminium alloys previously thought unweldable.Additionally, when welding dissimilar metals, a difference in the metals’ thermal expansion coefficient and conductivity is of much less importance than in fusion welding.In contrast to these processes, in which a strong bond is wanted, several spacecraft failures due to unwanted ‘cold welding’ are noted in a 2009 report for the European Space Agency.Understanding the influence of deformation conditions on the strength of resulting solid welds is therefore important not only in evaluating the potential of the above manufacturing processes and devising new ones, but also in helping to prevent solid welds from forming when they are unwanted."A review of the previous theoretical and experimental work on the influence of deformation conditions is given in Section 2, and informs the definition of the current paper's scope.Previous work on solid bonding has focused on two aspects: explaining the formation of the bond, and process-specific parametric investigations on increasing its strength.This section presents a critique of the main solid state welding theories and a review of the parametric studies, their findings and the mathematical models used to explain the observed trends.When aluminium atoms – electron configuration 3s2 3p1 – combine to form aluminium metal the 3s and 3p valance electrons form an enormous number of delocalized electrons, resulting in a face-centred cubic lattice of positive ions in a ‘sea’ of electrons.The metallic body is held together by the attraction between the positive ions and the free electrons.Inter-atomic and van der Waals forces are the major sources of attraction between the atoms.When two atoms are widely separated, these forces are negligible; however, when intimate contact of less than 10 atomic spacings is achieved the attractive inter-atomic force will form a joint, the crystal mismatch causing a non-cohesive grain boundary.For such close contact to occur, there must be no intervening film of oxides or other contaminants.This explains why solid bonding can cause problems with mechanical components in space; the lack of an atmosphere prevents the oxidation of metal substrate that has been exposed in space by the cracking of surfaces when struck by gear teeth.Van der Waals forces of attraction act over greater distances than inter-atomic forces.They will, therefore, be present across an entire interface, whereas inter-atomic forces will be limited to areas of asperity tip contact.Despite this, Inglesfeld shows that the ratio of inter-atomic to van der Waals forces across an interface is typically very large, implying that bonding is the result of inter-atomic forces when contact is made between clean metal surfaces.The film 
theory and energy barrier theory have been proposed to explain the characteristics of solid-state welding processes. The film theory is consistent with the above theory of forces, stating that intimate contact between metal surfaces causes a weld to form and that the presence of different surface oxides and contaminants explains the varying propensity of metals to weld in the solid state. The research of Conrad and Rice supports this theory, finding that the adhesion strength between clean metal surfaces previously fractured in a vacuum is almost equal to the load applied, implying that areas in close contact have bonded. In the presence of surface films, bonding requires that substrate metal must first be exposed by cracking the surface films, and that a normal contact stress then establishes close contact between the substrate metals. The surface films may include contaminants and absorbed water vapour, as well as the surface oxide. Several researchers report that the contaminants and water vapour can be removed, or at least reduced, by using chemical surface treatments or heating the surface. The energy barrier mechanism has yielded two theories: the ‘mismatch of the crystal lattice’ and ‘recrystallisation’ theories. The mismatch of the crystal lattice theory, proposed by Semenov, dictates that some distortion of the crystal lattices of the two surfaces must be achieved to obtain bonding, representing an energy barrier that must be overcome. However, the Conrad and Rice experiments indicate that bonding is possible without deformation if intimate contact is made between clean surfaces. In a review of the state of the art of cold welding, Zhang and Bay believe that any energy barriers are associated with the plastic deformation needed to establish intimate contact between the surfaces and to fracture the surface films, rather than with any distortion of the crystal lattice. The recrystallisation theory, proposed by Parks, suggests that crystal growth during recrystallisation eliminates the films as a non-metallic barrier. In this theory, deforming the metal produces heat, decreasing the temperature necessary for recrystallisation. Pendrous et al., however, find that no recrystallisation occurs during low temperature solid bonding. Previous research on the influence of deformation conditions on the bond strength has focused on accumulative roll bonding and porthole die extrusion. The following deformation parameters have been identified in the literature review as important to the welding process: normal contact stress across the bonding interface, temperature, the longitudinal strain at the bonding interface, strain rate, and shear. Studies on the effect of aluminium ARB parameters on the bond strength, such as Jamaati and Toroghinejad on cold rolling and Eizadjou et al.
on warm and cold roll bonding, find that higher temperatures, greater reductions and slower wheel speeds result in stronger bonds.It is consistently found that a threshold reduction of approximately 35% is necessary for any welding to occur and that this value slightly decreases as the process temperature increases, but is independent of the normal contact stress."Bay's model assumes that any stretching of the interface immediately cracks the brittle cover-layer, and that these brittle cover-layers crack together, creating channels through which the aluminium substrate is extruded.The oxide film, however, is assumed not to crack immediately, but only after a pre-determined rolling reduction, after which any exposed substrate metal immediately welds.In reality, it is likely that some pressure will be required to extrude the aluminium through the cracks in the oxide.Bay performs plane strain compression tests on aluminium interfaces to evaluate the model.The results are very dispersed, but a general trend following the theoretical results can be observed."Bay's model does not attempt to quantify R′ without experimentation, nor is there any consideration of the spacing of welded portions of the interface.The model does not consider hot rolling and assumes that the aluminium sheets can be modelled as perfectly flat surfaces; practically, however, the topography of the surfaces will cause local surface shear forces and air to be trapped between the two surfaces as they are rolled.Given that an oxide film of 2–4 nm will form within milliseconds of exposure to the air, this entrapped air may oxidize some of the exposed metal, decreasing weld strength.The pressure–time criterion ignores the effect of strain; however, Edwards et al. found that ‘surface stretching’ is a key parameter in bonding.The pressure–time criterion also makes long time periods essential to bonding, suggesting that diffusion plays an active role in bonding even during high strain rate processes.Consistently, Gronostajski et al., in a study on extrusion of machining chips, explain the poor bonding produced with a higher ram speed with the hypothesis that the higher speed reduces the time for the diffusional transport of matter.However, Wu et al. claim that diffusion is likely to be irrelevant in extrusion processes because of the short time period in which material passes through the die.A few studies have investigated the effect of shear on the weld strength.Bowden and Rowe find that two contacting specimens experiencing both a tangential and normal force have higher real contact area and bond strength than two specimens subject to a normal force.Cooke and Levy make welds by rotating one metal bar against another under a normal load, at 260 °C.‘Satisfactory’ welds were created with minimal lateral strain.Cui et al. 
investigate solid bonding of aluminium chips via cold compaction followed by equal channel angular pressing (ECAP) at 450–480 °C. All the material experiences intense shearing because it deforms around a 90° bend. Despite the presence of voids in the centre and hot tearing on the surface of the resulting specimens, bonding has occurred. These studies have attempted to assess the influence of a shear stress between bonding surfaces. However, none of these experiments test for the influence of shear alone. For example, in ECAP processing very high pressures are required and, given Mohr's circle of strain, a plane of chips experiences only normal strain. Models of weld strength have been found for both PDE and ARB. Models of PDE are based on the energy barrier theory of welding and do not account for the effect of interfacial strain on the weld strength. Bay's model of ARB is derived from the film theory of bonding and considers both strain and normal contact stress. Bay's model is a good indicator of bond strength in roll bonding and could be extended to take account of temperature and strain rate by modelling the material strength as a function of these deformation conditions. This review indicates that no general model of solid bonding exists, so the first aim of this paper is to produce a bonding model that addresses the limitations of previous attempts. The second aim is to devise an experiment where aluminium can be solid bonded whilst as many as possible of the relevant deformation conditions are controlled independently. Analysing the resulting bond strengths can inform understanding of the role of each deformation parameter in the resulting solid weld. The experimental results can also be used to validate or challenge aspects of the model. As this investigation is focused on the effects of deformation variables, rather than material variables, neither the model nor the experiments deal with samples that have undergone mechanical surface preparation prior to bonding. The following is proposed: a combination of normal contact stress and shear establishes close contact between two surfaces. The oxide remains along the interface, but as an applied strain stretches the material, clean metal becomes exposed. Entrapped air oxidizes some of the exposed metal; however, provided the strains are great enough, some of the clean metal will be extruded through the ever-widening cracks in the oxide. A bond forms, the strength of which is equal to the strength of the base metal at room temperature, once clean metal surfaces are within atomic distances. The model presented in the following sub-sections considers plane strain deformation and a perfectly plastic material. It is acknowledged that this percentage is approximate and that experimentation or detailed contact modelling could improve its accuracy. The flow stress of the metal is dependent on the strain, strain rate and temperature. Initial contact will be between the oxide films. Nicholas investigates ceramic–ceramic and ceramic–metal bonding, finding no bonding at temperatures below 1000 °C or in the presence of air. Aluminium and its oxide are mutually insoluble; therefore, there is no diffusion through the oxide films to help create a weld. Bonding must be due to stretching of the interface exposing substrate aluminium. As discussed in Section 2.2, there is a threshold stretching deformation of the interface before which welding will not occur. Researchers have typically assumed this corresponds to the deformation necessary to crack the surface films. However,
aluminium oxide is very brittle: a tensile strength of 260 MPa and Young's modulus of 350 GPa suggests that it has a failure strain of less than 1%.As would be expected given such brittleness, Sherwood and Milner find that the threshold reduction for welding aluminium in a vacuum is less than 1%.In light of this, it is proposed in this work that the significant threshold strains observed in atmospheric conditions are due to entrapped air oxidizing aluminium exposed at low strains.Only when all the entrapped oxygen has chemically bonded to this aluminium can any aluminium exposed at higher strains exist in an inert atmosphere.To quantify the fraction of the surface that the entrapped air will oxidize requires an estimate of its oxygen content and therefore its volume.A typical value of η is 0.35.Further stretching cracks the oxide films in an oxygen-free environment.For bonding to occur exposed aluminium on both sides of the interface must be overlapping.Force equilibrium analyses on oxide fragments show that the fragments experience a greater maximum tensile stress when adjacent oxide layers break-up together.It is therefore assumed that substrate aluminium exposed on one side of the interface is always adjacent to – completely overlaps with – substrate aluminium exposed on the other side, as shown in Fig. 1.The length of oxide fragments therefore increases still further for higher shear stresses at the interface.Derivations of Eqs. and are shown in Appendix B.Substituting typical AA1050-O parameters into Eq. produces an oxide fragment aspect ratio of approximately 14.This compares to an aspect ratio of up to 13 observed by Barlow et al. in a transmission electron microscopy analysis of the internal surfaces of roll bonded AA1050 foil.The similarity between the calculated and observed aspect ratios suggests that the methodology used to model the fracture of the oxide layers is valid.A typical value of e is 65 nm.A typical value of pex is 95 MPa.The effect of each deformation parameter on the bond strength is accounted for in Eq.For example:A higher strain increases the exposed area and oxide crack width, increasing ν and decreasing pex respectively.Increases in strain rate increase the flow stress of the metal, increasing both Y and pex.Increases in normal contact stress increase σn.Increases in bonding deformation temperature decrease the threshold strain and flow stress of the metal.The reduced threshold strain increases ν, and the reduced flow stress of metal decreases both Y and pex.A higher shear stress increases τapp and increases the oxide crack width, decreasing pex."Evaluating the model requires that the strength of welds produced under various deformation conditions are compared to the model's predictions of these welds’ strengths.Section 4.1 describes the physical experiments performed to bond aluminium samples and to test the strength of the weld.In order to use the model to make predictions of the weld strength, it is necessary to understand the deformation conditions in the physical experiments.This was achieved by simulating the experiments using finite element software.Details of these FE simulations are presented in Section 4.2.The full range of physical and simulated experiments is described in Section 4.3.In rolling, extrusion welding the extension of the interface is the result of the perpendicular compressive strain.Therefore, although many deformation parameters can be varied in processes such as accumulative roll bonding and porthole die extrusion, they are strongly 
dependent on each other.For example, increasing the pressure between the rolls in ARB also increases the reduction ratio and therefore strain at the interface."In this work's experiments, in addition to an interfacial force, a tensile stress was applied parallel to the welding plane, decoupling the interface strain from the normal contact stress.Adjacent aluminium strips were stretched in a tensile testing machine and simultaneously squeezed in a perpendicular direction by two heated flat tools, pushed together by hydraulic pancake rams.The flat tools squeeze a 50 mm length of the aluminium strips.In most of the tests the aluminium strips were stretched simultaneously in the same direction; however, in a small set of experiments the two strips were stretched separately in opposite directions in order to generate a contact shear stress.The flat tools and pancake rams were contained in a tool steel housing situated in a carriage that was mounted on two vertical lead screws.The carriage could be moved up and down via a motor.Fig. 2 presents a schematic and photograph of the experimental set-up.The strain and strain rate were dependent on the top crosshead displacement and speed.These could be controlled using the tensile testing machine software.The normal contact stress was dependent on the interfacial force between the samples, set using an input current to a proportional control valve on the hydraulic power pack.Before testing, the valve setting was calibrated to the resulting force using a load cell located in the steel housing.The force could be controlled within ±0.3 kN.During testing, the carriage must remain equidistant from the top and bottom crossheads to prevent the samples buckling."A linear variable differential transducer was situated between the rig base and carriage, providing feedback on the carriage's position.Proportional and integral control was used to adjust the power sent to the motor, maintaining positional accuracy within ±0.25 mm.The temperature was controlled using eight 95 W ∅1/8″ cartridge heaters.These heat the flat tools, which were pressed against the aluminium samples for 2 min prior to testing, ensuring the contact region was at the tool temperature.Four heaters, and one thermocouple, were inserted into each tool.The thermocouple provided feedback for full proportional-integral-derivative control of the power sent to the heaters, setting their temperature to within 1 K.The specification of the heaters was determined using a generic fin analysis of conductive and convective heat loss from the aluminium samples, as outlined by Incropera and Dewitt.Ceramic plates separated the heated rams from the hydraulic pancake rams, ensuring the oil remained cool.The above system was integrated and synchronized using a National Instruments compactRio real-time system controlled with Labview2012.Annealed AA1050 samples were used in these experiments.This material was chosen because it is soft and a non-heat treatable alloy; the force capability of the hydraulic system was sufficient to compress the samples by at least 35%) and the post-bonding analysis of weld strength was simplified by avoiding precipitation-hardening effects.The geometry and material properties are shown in Fig. 3 and Table 1 respectively.Before testing, the samples were cleaned using ethanol and fully annealed at 500 °C for 30 min.The aluminium samples had a trapezoidal cross-section.This helped to prevent out of plane buckling of the specimens as they were compressed, as shown in Fig. 
4.A more detailed description of how the specimens deform during testing and the local welding conditions is given in Section 4.2.Lubricant reservoirs were placed in shallow holes on the softer aluminium surface in order to decrease the friction hills between the tools and samples.These friction hills are unwanted as they produce differential strains and normal contact stresses both along and across the interface, as depicted in Fig. 5."Lubricant reservoirs are used in Rastegaev upsetting tests, where a metal's flow curve is determined via the force-displacement relationship when compressing a cylinder of the metal.Recesses are machined into the top and bottom of the cylinders and filled with lubricant, greatly reducing friction and subsequent barreling of the compressed cylinders."Inspired by Rastegaev tests, lubricant reservoirs were used in this work's experiments.The samples’ ram-contacting surfaces were polished and had nine 4.5 mm diameter holes running along their centre.During testing, these shallow holes were filled with Teflon, chosen because it exhibits very low friction and is stable up to 200 °C, the maximum temperature in these tests.The bond strengths were determined using shear tests.The shear tests were conducted on a tensile testing machine at 298 K, with a crosshead speed of 10 mm/min.Narrow 1 mm wide slots were cut on both sides of the bonded samples so that, when pulled, the bonded areas experienced only a shearing force.The weld finally failed in shear with minimal rotation of the samples.The distance between the two slots was 15 mm, as shown in Fig. 6.During the bonding experiments, high interfacial forces meant that the chamfered surfaces of adjacent samples sometimes made contact.When this occurred the bonded samples were machined, reducing the interface width to 2 mm.This eliminated the possibility of any bonding of the chamfered edges affecting the shear test results.The nominal bonded area was therefore a consistent 30 mm2.In order to assess the quality of the weld and interpret the way in which the bond forms, Scanning Electron Microscope images were taken of the welded samples’ cross-sections.In order to use the model to predict weld strength, the deformation conditions experienced during weld creation must be known.This was achieved by conducting a finite element simulation of each physical experiment.The deformation variables used to plot the figures in Section 5 were the average of their final values over the 30 mm2 area at the end of each test.For example, Fig. 
7 shows the finite element simulation of an experiment that created a weak weld at 373 K.The finite element simulations were conducted using Abaqus/Standard v6.10, implementing an implicit time integration analysis with ‘Static, General’ steps.Each aluminium sample was modelled as a 3D deformable body and meshed using around 8000 brick elements.A convergence study was performed to ensure that this number of elements is sufficient to provide accurate results, and to avoid excessive element distortion.The tools were simulated using analytical rigid surfaces.The material model for the aluminium samples assumed a von Mises material with isotropic hardening.Different flow curves were used for simulating different process temperatures.The flow curves corresponded to the equations shown in Table 1.The friction coefficient between the samples and heated rams needed to be determined so that the finite element analysis accurately simulated the deformations.This was done by performing a tensile test while compressing the specimen, analogous to open die forging with the addition of a tensile stress stretching the forged material as it is pressed.A standard slab analysis of this process was used to identify the friction coefficient value that correctly predicted the compressive force.A Coulomb friction coefficient of 0.15 was calculated using this method.This value was used in all simulations with a Coulomb friction law, and a contact stabilization value of 0.001.Several checks were performed to ensure the accuracy of the simulations.Within the model it was ensured that both the stabilization energy and artificial energy were small compared to the internal energy.Comparing the predicted geometry of bonded samples to the results from experimental trials provided final validation of the simulations.Table 2 presents this comparison for simulations and experimental tests over a range of temperatures, pressures and strains.The simulated and experimental results agree to within a maximum error of 10%.An example result from the finite element model is shown alongside the equivalent experimental result in Fig. 
8.A sensitivity analysis was performed to investigate the effect of experimental error on the accuracy of the simulated deformation conditions.There are two main sources of experimental error: the hydraulic force applied to the samples can only be controlled within ±0.3 kN, and the position of the carriage within ±0.25 mm.Finite element simulations were conducted of 3 physical experiments performed at 298 K with a 50 mm crosshead displacement and interfacial force of 20, 26 and 32 kN.In the simulations of each experiment the interfacial force and carriage position were varied by ±0.3 kN and ±0.25 mm.It was found that the simulated strain and normal contact stress varied by a maximum of 2%.The experimental errors were not, therefore, expected to have a significant effect on the results.The tests conducted were designed to reveal the accuracy of the new model, and with it increase understanding of the influence of each deformation parameter on the resulting weld strength.Table 3 presents a list of the experiments conducted in this study.Each row of the table represents a matrix of experiments, where an experiment was conducted for each combination of temperature, crosshead displacement and hydraulic force.Each successful experiment was repeated three times and the repeatability shown in the results as error bars.The independent variation of normal contact stress and interface strain was limited due to the inherent instability of large tensile strain deformation: significant deviations from pure shear caused necking, as shown in Fig. 9.The Levy-Mises flow criterion was used to define an experimental normal contact stress versus strain envelope, limiting the crosshead displacement to a practical maximum of 54 mm.Set A investigated the effect of varying strain, normal contact stress and temperature.Set B investigated the effect of increasing the strain rate at different temperatures.Set C investigated the effect of a shear stress applied between the samples during bonding.High shear and normal contact stresses alone caused the samples to distort and neck before bonding had occurred; therefore, in the Set C experiments a two-step process was performed, as shown in Fig. 
10.Firstly, the end of one sample was gripped in the bottom crosshead and the other end left unconstrained.Asymmetrically, one end of the other sample was gripped in the top crosshead and the other end left unconstrained.The top crosshead then moved vertically upwards by 7 mm while a 3 kN interfacial force was applied to the samples.This caused a small shear stress to develop across the interface.In this stage of the process the top crosshead displacement was limited to 7 mm as greater movement caused severe distortion of the samples.In the second part of the process, the unconstrained ends of both specimens were gripped in the crossheads and the test proceeded to the final crosshead displacement and interfacial force, as defined in Table 3.This section presents the results of the experimental shear tests on solid-state welds created under various deformation conditions.The resulting measured bond strengths are compared to those predicted by the new model).Table 4 presents the material parameters used in the new model.The new model predicts solid-state weld strengths as a function of 5 deformation variables: strain, normal contact stress, temperature, strain rate and shear.This section is structured around examining the influence of each of these deformation conditions on the weld strength.In the experiments, the relatively low values of each of these variables mean that the strengths of the welds are less than 50% of the strength of the parent metal.Fig. 11 presents experimental and predicted bond strengths as a function of strain and normal contact stress.Fig. 11 shows a positive correlation between increasing the normal contact stress or strain and the resulting bond strength.A high normal contact stress alone is unable to create a weld and, similarly, even at relatively high strains, some normal contact stress is required to create a weld.Increasing the normal contact stress reduces the minimum strain required for bonding.This has not been observed in previous literature due to the coupling between normal contact stress and strain in roll bonding experiments.The model correctly predicts the observed trends and in Fig. 11 the predicted weld strengths typically lie within the error range of experimental results.Fig. 11 contains grey regions where no experiments were successfully performed due to the material necking just outside of the rams.It was also found that necking occurs at lower strains for high normal contact stresses.This may be because high fictional forces developed due to the high contact stresses involved, restraining the flow of material from between the rams.Fig. 12 presents the experimental and predicted bond strengths for temperatures ranging from ambient to 473 K. Fig. 12 confirms that both the threshold strain and bond strength are very sensitive to temperature, with the threshold strain reducing from 72% at 298 K to 25% at 473 K.The model correctly predicts these trends, but underestimates bond strengths at the highest temperature.A prediction of the bond strength at 923 K is also shown in Fig. 12.The melting temperature of aluminium is 660 °C; therefore, this prediction represents the limiting case of solid bonding.Fig. 13 presents the experimental and predicted weld strengths as a function of strain rate.The bond strengths are expressed as an index, with the strength of welds created at a strain rate of 0.03 s−1 equal to 100.The effect of strain rate variations is predicted by modelling the aluminium flow stress as a function of strain rate.Fig. 
13 shows that increasing the strain rate significantly reduces the weld strength at higher temperatures.At lower temperatures the weld strength still reduces, but by less than 10% for process temperatures of 298 K and 373 K.The model predicts strengths that lie within the experimental error range at these lower temperatures.At 423 K and 473 K, however, there are significantly larger decreases in the bond strength than predicted by the model.Fig. 14 presents the experimental and predicted strengths of welds created with and without a 30 MPa interfacial shear stress.The shear stress increases the subsequent bond shear strength, and decreases the threshold strain from 60% to 42%.The results of the experiments show that, as predicted by the model, a minimum strain is required for bonding and that increasing the temperature, normal contact stress or shear stress can reduce its value and increase the strength of any subsequent welds.The new model often predicts bond strengths within the range of strengths created in the physical experiments.At higher temperatures, however, it underestimates bond strengths and the effect of increasing the strain rate on decreasing the bond strength.As an additional measure of weld quality, alongside bond strength, microscopy images were taken of welds produced at different temperatures.Fig. 15 presents SEM images of welded samples’ cross-sections.Fig. 15a shows the cross-section of a sample created at 373 K.The weld line is clearly visible with only small regions where the interface line disappears.Fig. 15b shows a bond created at 423 K. Approximately half of the interface is not visible, with intermittent 50 μm long cracks spaced along the weld line.Fig. 15c presents the cross-section of a weld created at 473 K.The weld line could not be found by scanning the cross-section alone, so a peel test was partially performed, cracking some of the weld.The left hand image of Fig. 15c shows that no weld line can be seen, indicating very good bonding.Fig. 15 shows that, as expected, the bond line becomes less visible for stronger welds created at higher temperatures.In addition, Fig. 15b presents evidence of the film theory of bonding; the presence of regular cracks is consistent with the existence of poor surface matching and unbonded oxide islands along the welded interface.The regular cracks are more easily seen when looking at a polished specimen under an optical microscope.An example is shown in Fig. 16.A key reason for conducting solid-state recycling research is the potential to reduce energy use.The dependence on a high temperature to create a strong bond conflicts with the aim of minimizing energy use.However, over a third of the energy required to melt aluminium is the latent heat of melting, not the energy to heat the material to its melting point.Even high temperature solid-state processing of scrap could, therefore, save energy compared to conventional recycling.The proposed model correctly predicts the experimental trends.Evidence for the film theory of bonding can be evaluated using microscopy analysis.Fig. 16 shows the cross-section of a weld with welded zones interspersed with cracks 2–10 μm long.The film theory of bonding suggests that these cracks are either the oxide fragments that remain along the weld line after the interface is stretched, or that they are due to the bonding process failing to establish close contact between the surfaces in these regions.Eq. 
in the model derivation predicts an oxide fragment length of approximately 140 nm; therefore, the 2–10 μm long cracks in Fig. 16 are likely to be due to the absence of close contact between the two surfaces in these regions. Transmission electron microscopy of roll bonded AA1050 foil by Barlow et al. suggests that, at a finer length scale than observable with optical microscopy, the regions of good bonding shown in Fig. 16 are likely to resemble that of Fig. 17, with perfect bonding obtained between islands of oxide fragments between 40 and 400 nm in length. The observed fragment lengths in Fig. 17, despite covering a wide range, are of the same order of magnitude as predicted by Eq., suggesting that the mechanisms assumed in the proposed model are valid. The relevant diffusion length scale can be estimated as x = √(Dt), where x is the characteristic diffusion distance, t is time, and D is the diffusion coefficient. Consistent with diffusion being important to bonding at higher temperatures, Fig. 13 shows an average 35% and 25% drop in shear strength when the strain rate is doubled at 423 K and 473 K respectively. Aluminium and its oxide are mutually insoluble; therefore, diffusion is unlikely to create contact between substrate aluminium, but may act to decrease the surface mismatch and improve the quality of the weld once contact has been made. Higher temperatures may also have helped to break down any adsorbed water vapour or other contaminants on the samples' surfaces. In these experiments the maximum strain was limited by the onset of unstable necking. The strain rates were also very low because of the low maximum crosshead velocity. It would be worthwhile conducting similar roll bonding experiments, which would cause large friction hills to be developed between the sheets, but would allow high strain and strain rates to be tested. A back-tension could be applied to the sheets to decouple the normal contact stress and strain, and independent control of the rolls' speeds could produce shearing at the interface. The new model presented in this work builds on the work of Bay, whose model of weld strength in accumulative roll bonding was reviewed in Section 2.2. In order to compare the two models, Fig. 18 presents weld strengths predicted from both models and measured weld strengths from the Set A experiments. Bay's model assumes that aluminium surfaces consist of an oxide film covering a fraction, ψ, and a brittle cover-layer of work-hardened aluminium, created by scratch brushing, covering the remaining area. As no scratch brushing took place in this work's experiments, ψ is taken as equal to one for the purpose of constructing this comparison. Fig. 18 shows that the new model is more accurate than Bay's model at predicting the strength of solid-state welds. Bay's model was derived for the purpose of predicting bond strengths in rolling, where the strain and normal contact stress are highly coupled; therefore, it is perhaps unsurprising that Bay's model is inaccurate for the arbitrary combinations of normal contact stress and strain shown in Fig. 18. In addition, the weld strengths shown in Fig. 18 are relatively low, whereas Bay's model is normally used to predict relatively strong welds. Bay's model was not, therefore, designed to predict weld strengths for deformation conditions such as considered in Fig. 18.
This may explain some of the differences in accuracy between the two models; nevertheless, there are differences in the models' assumptions that are also important. Bay's model assumes a constant threshold strain before the onset of welding, whereas the new model calculates threshold strain as a function of the temperature and normal contact stress, resulting in a variable threshold strain. The new model predicts a shallower rise in bond strength than Bay's model. This is mainly because the calculated value of the pressure that is required to micro-extrude the substrate aluminium through the cracks in the oxide layer is higher in the new model than in Bay's model. This is due to the new model accounting for the fragmentation of the oxide layer, resulting in islands of oxide. In contrast, although Bay's model estimates the total area of exposed aluminium substrate, inherent in his calculation of the micro-extrusion pressure is an assumption that all the exposed aluminium is grouped together, resulting in a relatively low micro-extrusion pressure being calculated. Physical evidence for oxide fragmentation, of the type shown in Fig. 19a, was presented in the TEM image in Fig. 17, which shows fragments of aluminium oxide dispersed along an AA1050 roll bonded weld line. In this work a new model of bonding strength is presented which, building on the well-known work by Bay, takes account of all the relevant deformation parameters in bond formation. An experiment was designed and built that successfully decouples the application of the relevant parameters. Over 150 tests were conducted to evaluate the model and investigate the effect of each deformation parameter on the weld strength. The experiments have established the basic relationships between deformation parameters and weld strength, of which it is important for engineers to be aware when considering solid-state fabrication, forming and recycling processes. The relationships are as follows: an aluminium interface must be stretched by a threshold strain for it to weld. Increasing the normal contact stress, temperature, or shear stress decreases the threshold strain and increases the strength of any welds. Normal contact stresses above the yield strength of the material are necessary to create strong bonds. This is most likely due to a higher normal contact stress increasing the real contact area and micro-extruding more substrate aluminium through cracks in the oxide layers. Increases in strain rate have little influence on the bond strength at low temperatures, but significantly decrease the bond strength at temperatures over 423 K. The weld strength is very sensitive to temperature. For example, for a bond created with an interface strain of 80% and normal contact stress of 110 MPa an increase in temperature from 298 K to 473 K corresponds to a shear strength increase from 1.3 MPa to 12.5 MPa. | Solid bonding of aluminium is an important joining technology with applications in fabrication, forming and new low-carbon recycling routes. The influence of deformation conditions on the strength of the resulting weld has yet to be fully assessed, preventing optimization of current processes and development of new ones. In this work, an extensive literature review identifies the deformation parameters important to weld strength: interface strain, strain rate, normal contact stress, temperature and shear. The film theory of bonding is used to derive a model that quantifies the relevance of these parameters to the weld strength.
This model is then evaluated using an experiment in which the interface strain and normal contact stress are decoupled, and the friction hills between both the tooling and the samples and between the samples themselves are minimized. Neither the model nor the experiments deal with samples that have undergone mechanical surface preparation (for example, scratch brushing) prior to bonding. The experiments show that a minimum strain is required for bonding. Increasing the temperature, normal contact stress or shear stress can reduce this minimum strain. A normal contact stress above the materials' uniaxial yield stress is necessary to produce a strong bond. Increasing the strain rate has little effect on the weld strength for bonds created at low temperatures, but can significantly reduce the strength of bonds created at higher temperatures. The proposed model correctly predicts these trends; however, for higher temperatures it underestimates bond strengths and the influence of strain rate, suggesting that diffusion mechanisms increase the strength of bonds created at higher temperatures. © 2014 The Authors. |
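To make the magnitudes discussed above concrete, the following minimal Python sketch reproduces two back-of-the-envelope quantities referred to in the text: the elastic failure strain of the alumina surface film implied by its quoted tensile strength (260 MPa) and Young's modulus (350 GPa), and the characteristic diffusion distance x = √(Dt) invoked in the discussion of strain-rate effects. The diffusion coefficient used below is an assumed placeholder for illustration only, not a value taken from the study.

```python
import math

# Back-of-the-envelope checks for two quantities quoted in the text above.
# The diffusion coefficient D is an assumed placeholder, not a value reported
# in the study; substitute a coefficient appropriate to the diffusion
# mechanism and temperature of interest.

# 1) Elastic failure strain of the alumina surface film.
tensile_strength_pa = 260e6   # 260 MPa, as quoted above
youngs_modulus_pa = 350e9     # 350 GPa, as quoted above
failure_strain = tensile_strength_pa / youngs_modulus_pa
print(f"Oxide failure strain ~ {failure_strain:.2%} (well below 1%)")

# 2) Characteristic diffusion distance x = sqrt(D * t).
D = 1e-17  # m^2/s, placeholder order of magnitude
for t in (1.0, 10.0, 100.0):  # bonding times in seconds
    x = math.sqrt(D * t)
    print(f"t = {t:>5.0f} s  ->  x = {x * 1e9:.1f} nm")
```

With these inputs the film failure strain comes out at roughly 0.07%, consistent with the statement above that the oxide fails at strains well below 1%.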
308 | Financial Implications of Car Ownership and Use: a distributional analysis based on observed spatial variance considering income and domestic energy costs | With increasing digitisation of vehicle records, new opportunities are being afforded to researchers interested in exploring car usage at the level of individual vehicles.In particular, periodic vehicle safety and emissions inspections are providing a fruitful source of new data.Globally, these tests are becoming increasingly common, taking place in all 27 EU Member States, 32 States in the US, and at least 17 countries in Asia.Data from these tests are being put to a range of uses, including understanding spatial patterns and elasticities of car ownership and usage, understanding geographical patterns of vehicle emissions, relationships between vehicle usage and urban form, implications of future city growth on travel and associated greenhouse gas emissions, issues of environmental and energy justice and the potential positive and negative impacts of pay-per-mile vehicle insurance.In this paper, we explore the financial implications of car use by combining annual data from around 30 million vehicles from the UK vehicle inspection test with accompanying registration data on the location of the registered keeper of the vehicle.We use this to calculate costs of Vehicle Excise Duty and fuel costs at both a per vehicle and an aggregated area level.We then place these costs in the context of domestic expenditure on electricity and gas use by using energy consumption data from 24.5 million electricity meters and 21 million gas meters.While much previous work has looked at motoring costs longitudinally, particularly with respect to price elasticities of road fuel, in this paper we look instead at how expenditure on motoring varies spatially and in relation to levels of median income.This places the work more in line with previous work on household expenditure.However, this existing body of work generally has no, or very limited, spatial detail as it tends to be based on limited sample survey data, predominantly the UK Living Costs and Food Survey which has an annual sample size of 6000 households in the UK per year.We present the work here as an important complementary perspective to these survey based approaches.Whilst our datasets present universal information on vehicle and energy usage, we are cognisant of a number of limitations of this approach.First, due to both the size and security considerations of the datasets used, it is necessary to undertake analysis predominantly on the basis of data that is spatially aggregated.Second, the motoring costs that we are able to base our assessment on are those that are dependent specifically on vehicle characteristics and usage, rather than costs such as insurance which are dependent heavily on the characteristics of the driver.Due to this second point, in this paper, our examination of expenditure has focused predominantly on VED and fuel costs.These are important as they are relatively inflexible and are the motoring costs most directly influenced by national taxation policy, therefore reflecting political decisions.Additional work has been carried out that has provided estimations of vehicle depreciation costs as well as the proportion of motoring costs used through travel to work.These have been presented elsewhere.Initially, this paper sets out the general costs of motoring from survey based work, before establishing the political history of both VED and fuel duty.This context is 
important for understanding the longstanding tension between viewing automobility as either a luxury or a necessity, and the impacts this has on what are considered to be appropriate taxation structures.The overall methodology is then described before setting out a number of different analyses.These are: relationships between VED and fuel costs, first at the level of individual vehicles and then as household averages at an areal level; relationships of VED and fuel costs to income and between road fuel costs and domestic energy costs; and finally looking at the proportion of income spent on these costs.There is then a discussion and conclusion section which explores the implications of the findings within the context of current and future mobility and energy policy.The costs of running a car are made up of fixed annual costs, sporadic costs, fuel costs and, greatest of all, depreciation.The overwhelming impact of the balance of these costs is that “annual average cost per mile decreases as the annual mileage increases and is frequently perceived as merely the cost of fuel”.Fig. 1 shows the average annual household costs of car ownership by income decile calculated from the UK Living Costs and Food Survey.These vary in total from £660 for the lowest income decile, to £7649 for the highest.The proportion of this that is spent on fuel varies between 32.3% for the highest decile and 42.6% for the second highest decile, given that purchase costs are included.The living costs survey accounts for VED as a subsection of ‘Licences, Fines and Transfers’ alongside Stamp Duty for house purchases.Although the overall section is split by income decile, no such split is available for VED and motoring fines separately, so in Fig. 1 these have been allocated proportionally according to the whole section.The overall average VED paid is £156 per household.The LCFS accounts for the cost of a vehicle in terms of purchase price, which is calculated as an average over all the households.Another common way of reflecting this cost is in terms of depreciation.This has been estimated at around 15% per year, and was estimated, in 1994, to represent 42% of average annual vehicle costs.This compares with between 21% and 35% for purchase costs in the LCFS for 2011, as shown in Fig. 
1.To illustrate the difficulties in calculating the full costs of car ownership, which extend beyond the costs outlined above into a range of non-direct and non-monetary costs, it is worth considering Lynn Sloman’s analysis from her book Car Sick:“The typical car owning, Briton today devotes nearly 1,300 hours a year to his or her car.It takes him over 500 hours to earn the money first to buy the car and then to pay for petrol, insurance, repairs and parking.He spends another 400 hours every year sitting in his car while it goes and while it waits in traffic jams.More than 250 hours are devoted to a myriad of small tasks associated with a car: washing it, taking it to the garage for repair, filling it with petrol, looking for the car keys and walking to the car, de-icing the windscreen in winter, and finding a parking space at the end of every trip.Finally, he has to work about 100 hours every year to earn the money to pay the extra building society interest because he has chosen a house with a garage rather than one without.All in all, the typical British car driver in 2005 devoted three and a half of his sixteen waking hours to his car.For this time, he travels a little less than 10,000 miles per year.His average speed is less than 8 miles an hour roughly the same as the speed at which he could travel on a bicycle.,.A highly detailed spatial analysis might also consider the impact of local policies on motoring costs, such as residential parking, workplace parking levies, low emissions zones, congestion charging and so forth.However, as already stated, this paper does not attempt to consider the full costs of car ownership and use, but focuses specifically on VED and fuel cost, representing around 40% of total car costs and constituting the proportion of costs that national level policy has direct control over.We describe these briefly below.Taxation of motor vehicles was first introduced in the UK in the 19th Century under the Customs and Inland Revenue Act 1888 which extended the definition of ‘Carriage’ from “any vehicle drawn by a ’horse or mule, or horses or mules’, to ‘embrace any vehicle drawn or propelled’ upon a road or, tramway, or elsewhere than upon a railway, by steam or electricity, or any other mechanical power”.Key issues that have surrounded VED from the start have involved issues of fairness and equity as well as questions over the appropriate purpose of the tax.As early as 1909, there were objections to the imposition of the tax.In a House of Commons debate around the introduction of a graduated VED based on horsepower, Mr Joynston William Hicks, then Conservative MP for Manchester North West stated: “I hold that a motor car has now become almost a necessity, that it is very largely a commercial vehicle, not used, it is true, for carrying goods in that sense, but used by doctors and travellers, and by many people for other than purely pleasure purposes.In that sense I do not think a motor car can be classed as a luxury, and, therefore, should not be taxed as such.,.He goes on to provide informative figures on the ownership and usage of vehicles: “In 1905…there were 2732 motor cars of the average value of £374.Therefore the fashion is not so very luxurious after all.A very large proportion were small power cars.In 1906 the motor cars travelled 44,352,000 miles, and there were only 16 accidents.,.He then proceeds to set out a range of arguments around vehicle taxation as relevant today as they were then, including whether it is reasonable to charge a flat rate for access 
to the roads, whether the funds raised should be ring-fenced for road maintenance, what justification there may be for charging motor cars but not horses, and whether the tax should be graded on the basis of size, engine size/power or the amount of dust resulting from them.A comprehensive history of UK VED is provided in Butcher, but key changes to the basic framework established at the start of the 20th Century are set out in Table 1.Up until 1992, in addition to VED, the UK had a 10% car purchase tax, but this was ended as part of plans to increase fuel duty with the ‘fuel duty escalator’.However, a new graded first year rate of VED was introduced in 2008.Current rates of VED are shown in Table 2.Exemptions currently exist for a number of vehicles under the Vehicle Excise and Registration Act 1994, in particular “electrically propelled vehicles” and “light passenger vehicles with low CO2 emissions”.Disabled people are also exempt, however it has not been possible to account for this within this study.It is also worth noting that for people who do not wish to or cannot afford to pay VED in an annual lump sum, options to pay monthly by Direct Debit or for only six months increase costs by 5% and 10% respectively.Fuel costs are comprised of two main elements: basic costs of fuel and taxation.In the UK, fuel duty for petrol and diesel is one of the highest in the world at £0.5795 per litre, with standard rate VAT added on top.Between January 1990 and October 2015, this resulted in the total tax being paid on a litre of petrol comprising between 53% and 86% of the total pump price.When adjusted for inflation, petrol prices have increased by only 18% overall between October 1990 and October 2015, however there have been significant price spikes, with a maximum in April 2012 when petrol costs reached a 2015 equivalent of £1.47/litre.Between 1992 and 1999, in a move towards an increasing tax on use rather than ownership, the UK government introduced the ‘fuel duty escalator’.This was an annual increase in fuel duty above the rate of inflation.Initially it was a 5% per annum increase, and then from 1997 it increased to 6%.The initial intention of the Conservative government was to double the price of fuel at the pump in order to a) encourage manufacturers to develop more efficient vehicles, b) discourage non-essential car-use, and c) provide a more even playing field for public transport.The tax allegedly went largely unnoticed for most of the decade due to falling oil prices in real terms.However, as prices began to rise in 1999 its effects started to become more apparent, particularly for road haulage companies, leading to campaigns to abolish it.Even after it was abolished in November 1999, increases in the price of crude oil led to continuing price rises at the pump resulting in campaigns to reverse the historic increases and, eventually, to the UK-wide fuel protests of September 2000.Through analysis of vehicle characteristics and the annual distance driven, it has been possible to estimate both annual VED and fuel costs for every private vehicle in Great Britain, including cars, minibuses, vans and two and three wheeled vehicles, and to consider these figures in association with income data.Due to limitations of available data on income, we have only performed the analysis for England and Wales.This analysis has focussed on 2011 in order to utilise UK Census data from that year.The basic principles of this analysis are set out in detail in Chatterton et al. 
but are summarised below. Further to that analysis, the MOT test record dataset has been 'enhanced' through the addition of a number of new parameters that have been acquired through a UK vehicle stock table from the Driver and Vehicle Licensing Agency. In particular, the DVLA data allows the linking of each vehicle to the Lower-layer Super Output Area of the registered keeper; the CO2 emissions; as well as an indication as to whether the vehicle is registered by a private individual or a corporate entity. This last parameter has allowed us, for the purposes of this analysis, to investigate only privately owned vehicles. Also, the provision of data from the DVLA stock table has allowed the identification and tracking of vehicles less than three years old. For the purposes of this analysis, the fields of interest from the MOT/DVLA dataset are: LSOA of registered keeper, date of first registration, MOT test class, fuel type and engine size. The analysis is done for all LSOAs in England and Wales, and unless stated otherwise, where figures for vehicle costs or fuel use are given per household, these refer to only those households with cars. Following a modified version of the methodology set out in Wilson et al., an estimate of annual distance travelled has been calculated for each vehicle. For vehicles without a valid MOT test in the base year due to being less than three years old, the annual distance has been estimated by taking the odometer reading at the first test and averaging this between the date of the test and the date of first registration. Then, using the methodology from Chatterton et al., the fuel economy has been calculated for each vehicle and a CO2 rating calculated for those vehicles which do not have an official CO2 emissions banding from the DVLA data. Where any vehicle does not have complete data for a field, this has been infilled with an average value for the other vehicles from that area. Where vehicles do not have a valid fuel type, these have been classified as petrol. Then, on the basis of MOT test class, registration date, engine size and CO2 emissions, each vehicle has been placed in a VED class and assigned an annual VED rate according to the categories set out in Table 2. On the basis of the annual km travelled, fuel economy and fuel type, the annual fuel consumption and cost for each vehicle were then calculated. The latter was based on 2011 average prices of £1.33 per litre for standard unleaded, £1.39 for diesel and £0.73 for LPG. In the absence of prices from DECC or other UK sources on the cost of CNG as a road fuel, this has been set to £0.54, based on the LPG:CNG cost ratio obtained from the US. For electric vehicles, a figure of £0.033 per km has been used based on an average 2011 domestic electricity price of £0.141 per kWh and an 80 kW Nissan Leaf using the NextGreenCar fuel cost calculator. Costs have been allocated to households, and households with cars, using 2011 Census data about the numbers of each in each local area. Income data has been used from Experian estimates of median income. Fig. 3 shows the distribution of VED per vehicle as a proportion of combined annual fuel and VED costs. This indicates that, for the majority of vehicles, VED costs make up around 10–20% of the total amount of these costs. However, across the whole fleet, mean costs for fuel and VED per kilometre are £0.159/km.
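As a minimal illustration of the per-vehicle cost calculation just described, the following Python sketch combines an annual distance estimate, a fuel economy figure and the 2011 pump prices with a flat annual VED rate. The field names, the example record and the single VED band/rate shown are simplified placeholders rather than the actual MOT/DVLA schema or the full rate structure of Table 2.

```python
# Minimal sketch of the per-vehicle annual cost calculation described above.
# Field names, the VED lookup and the example record are simplified
# placeholders, not the actual MOT/DVLA schema or the full 2011 VED schedule.

FUEL_PRICE_PER_LITRE = {"petrol": 1.33, "diesel": 1.39, "lpg": 0.73}  # 2011 averages (GBP)
ELECTRIC_COST_PER_KM = 0.033   # GBP/km, as assumed above for an 80 kW Nissan Leaf

def annual_fuel_cost(annual_km, litres_per_100km, fuel_type):
    """Annual fuel cost in GBP for one vehicle."""
    if fuel_type == "electric":
        return annual_km * ELECTRIC_COST_PER_KM
    litres = annual_km * litres_per_100km / 100.0
    return litres * FUEL_PRICE_PER_LITRE[fuel_type]

def annual_cost(vehicle, ved_rate_lookup):
    """Fuel cost plus the flat annual VED rate for the vehicle's band."""
    fuel = annual_fuel_cost(vehicle["annual_km"], vehicle["l_per_100km"], vehicle["fuel_type"])
    ved = ved_rate_lookup[vehicle["ved_band"]]
    return fuel + ved

# Example: a hypothetical petrol car in band "D" driven 13,000 km/year at 6.5 l/100 km.
ved_rates = {"D": 95.0}   # placeholder band/rate
car = {"annual_km": 13000, "l_per_100km": 6.5, "fuel_type": "petrol", "ved_band": "D"}
print(f"Estimated annual fuel + VED: GBP {annual_cost(car, ved_rates):.2f}")
```

Household-level averages then follow by summing these per-vehicle figures within each LSOA and dividing by the number of car-owning households recorded in the 2011 Census.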
Fig. 4 shows maps of average household expenditure on VED and road fuel. The left-hand two maps are scaled in deciles. Urban areas stand out particularly sharply on these maps because, even though households without cars have been excluded, in these areas those households that have cars still tend to own fewer vehicles than in rural areas, leading to much lower average per household costs. This may be because there is less need for cars due to greater accessibility of services and/or better public transport provision, or it may be due to prohibitive factors such as higher on-street parking charges or significantly higher property prices for urban properties with off-street parking. These latter are, however, examples of costs that we cannot account for in this analysis. The bivariate plot on the right allows the identification of areas of high VED/low-medium fuel costs, which are mainly suburban areas on the periphery of London and the Home Counties. This combination is likely to denote areas of greater wealth but lower mileage vehicles. In general, rural areas are particularly characterised by high VED and high fuel costs. Areas with lower VED but high mileage appear to be more prevalent in the north of England and in Wales. Fig. 5 shows differences in expenditure on road fuel between urban and rural areas. It uses the UK Office for National Statistics Urban-Rural categorisation which groups areas into classes. It is evident that, in general, urban areas lead to lower expenditure on road fuel and rural areas spend significantly more on road fuel, with a gradual increase as areas become more rural. The plots are Tukey-style box and whisker plots created using R software; where the notches of two plots do not overlap there is 'strong evidence' that the two medians differ. Fig. 6 shows average household expenditure on VED in relation to median household income at the LSOA level. The plots indicate a significant increase in outlay on VED with increasing income. In the left-hand plot, there is a notable downward spike where there are lower household VED costs at lower incomes. Comparing this to the right-hand plot, it is evident that these are tending to occur in the second, third and fourth income quartiles. Fig. 7 shows average household expenditure on road fuel in relation to median household income at the LSOA level. This indicates that although there is a tendency for expenditure on fuel to increase with income, this is not nearly as strong as for VED. Of note in the scatter plot are some areas that stand out with low income/low fuel costs, and high income/low fuel costs. The box and whisker plot indicates that the former tend to be in the second to fourth income deciles rather than the lowest, and they also appear to correspond to a similar effect observed for VED in Fig. 6. Given the increasing push to electrify transport, as well as space/water heating and cooking, there is a need to begin to understand how energy use from cars relates to domestic energy consumption. Fig. 8 shows data from the Living Costs and Food Survey for relative expenditure on domestic energy. These range from £723 for the lowest income decile to £1149 for the highest. This compares with the greater range for the fuel component of motoring costs in Fig.
1 running from £260 to £2574.For the work presented in this paper, average prices for gas and electricity were calculated from the UK Department of Energy and Climate Change 2012 Quarterly Energy Report for a kWh of gas and electricity based on a ‘typical’ annual household consumption of 18,000 kWh and 3300 kWh respectively.The calculated prices based on the standard credit payment differentials) across all suppliers was £0.042 per kWh for gas and £0.143 per kWh for electricity.These were then applied to LSOA level data from DECC on average household gas and electricity consumption.Use of other fuels has not been incorporated into the analysis, but as Fig. 8 shows, this is a small fraction of expenditure overall.However, it is also very unevenly distributed, particularly with regard to where use is due to properties not being connected to the mains gas grid.Fig. 9 provides a comparison of the fuel costs of car use alongside expenditure on domestic gas and electricity consumption.Average household expenditure on gas and electricity tends to increase together, although the distribution indicates expenditure on gas compared to electricity varying by up to a factor of two.In terms of expenditure on road fuel, again expenditure increases together, with those households spending more on one, tending to spend more on the other.However, there is a divergent tendency in the areas of higher expenditure, with one cluster having very high expenditure on road fuel but not on domestic energy, as well as a group that have lower expenditure on car fuel but high domestic energy consumption.In order to better evaluate the financial impact of expenditure on VED, road fuel and domestic energy in different areas, the average household expenditure has been calculated as a percentage of median income for each LSOA.The plots in Fig. 10 show on the x-axis, the mean of the median income values for each income decile, and on the y-axis, the mean expenditure as a percentage of income for these 95% confidence intervals around the mean, and noting that these are the means of the area aggregates – not of individual households).Following Santos and Catchesides, costs for road fuel and VED are presented for all households and only those households with cars.Then, for domestic energy costs and total costs, results are only provided across all households as it is not possible to attribute differentials in domestic energy use separately to households with and without cars.Overall, the percentage of income spent on motoring costs decreases as income increases, with the lowest income deciles spending around twice as much of their income on the car and domestic energy components as the highest income deciles.When the motoring costs are examined across all households, and not just ones with cars, this effect is still present but less strong and with a flattening out of the curve for the second to fifth percentiles.Fig. 11 presents the data on spending as a percentage of income spatially.As with Fig. 
4 these are scaled in deciles.These same deciles have been used for both maps to highlight the differences more clearly.The maps show a strong tendency for the proportion of income spent on fuel and VED to increase towards the peripheries of the country as wages and accessibility reduce, and to decrease along the spine of the country and particularly around London where income and connectivity are highest.This analysis has taken a novel approach to the calculation of motoring costs.Conventional studies have tended to use household expenditure surveys as their basis.Here, we have used calculated fuel and VED expenditure based on data from all individual private vehicles in England and Wales.However, this comes with limitations: i) It has only been possible to calculate fuel and VED costs, and not purchase/depreciation, insurance or other costs; ii) The data available do not permit analysis at a true household level, relying instead on averages from figures aggregated over spatial areas; iii) There are other household costs relevant to mobility that have not been considered, such as expenditure on public transport.Here, we have compared motoring costs in relation to expenditure on domestic energy consumption, due both to the availability of readily compatible data, but also because of the increasing inter-relation between these due to the current and predicted trends towards the electrification of vehicles.However, there are other spatial data that might merit consideration in future work, such as housing costs.Further work in this area would be beneficial, as although theory suggests that households trade-off increased housing costs with transport costs, evidence often suggests that things are much more complicated than this.It is also important that average house price data is considered in conjunction with information on tenure.However, given the universal nature of the data sources used here, this analysis should provide a valuable insight into patterns of expenditure both in its own right, and in comparison to studies based on data from different sources.There is a significant debate about whether existing taxes on car use are socially regressive which relate to the extent to which car use can be considered a luxury or a necessity.The relatively simple analysis provided here does not provide great insight into how ‘essential’ cars are for different people, or in different locations.However, it does indicate the strong tendency for expenditure on VED and fuel costs together with other household energy costs to be regressive, in that expenditure on these items represents a higher proportion of household income at lower income bands, particularly if only households that own cars are considered.The actual effects of this are likely to be greater in actuality than represented here due to the inability of poorer households to pay by the cheapest means which will exacerbate these costs.Moreover, although expenditure on fuel often has a discretionary element to it, for many people, some car use will be regarded as a basic need and so, however low income is, expenditure will not reach zero.At the same time, it needs to be remembered that a significant proportion of households don’t have access to a car and are reliant on other forms of transport, which, in turn, may be dependent on tax revenue to operate.Consequently, the case for reducing motoring taxation as a socially progressive policy is highly complex.It can be argued that the grading of VED by age and CO2 band of vehicle enables it to be 
less regressive as a mode of taxation than a fixed rate, as it allows people to effectively choose what rate of tax they are happy to pay and to choose a vehicle accordingly.However, in reality, whilst vehicle size is often a choice, it is also the case that newer vehicles also tend to be more expensive, whilst older, more inefficient cars which attract higher rates of VED may be more affordable at the point of purchase, locking poorer households into higher running costs in the long-term.Future work will enable investigation of the interplay between vehicle age, size and price, and the extent to which VED appears to have influenced purchasing patterns by different income groups.The future changes to VED that are due to apply from 2017 will set a standard rate of VED at £140 after the first year for all except electric vehicles and thus remove any VED incentive towards purchasing cleaner non-electric vehicles.It may be the case that we are moving to a time in the uptake of electric vehicles where this absolute tax distinction between ‘zero-emission’ and ‘polluting’ is appropriate.However, VED is not the only way in which those able to afford to purchase EVs will enjoy significant financial benefits, as not only are EVs more efficient to run in terms of energy, but, in the UK, the fuel is taxed significantly less.In 2015, domestic electricity invoked a total tax of 5% VAT, compared to a mean total tax of over 68% on petrol.2,Given that the initial purchase price of electric vehicles is relatively high, the greater ability of the wealthy to purchase access to cheaper mobility through EVs is going to have significant implications both for social justice and the Government’s tax revenue.However, increasing tax on electricity would potentially only exacerbate the already regressive nature of energy prices illustrated above. | This paper presents a new perspective on assessing the financial impacts of private car usage in England and Wales using novel datasets to explore implications of motoring costs (principally Vehicle Excise Duty and road fuel costs) for households as part of the overall costs of their energy budget. Using data from an enhanced version of the Department for Transport ‘MOT’ vehicle test record database, combined with data on domestic gas and electricity consumption from the Department for Business, Energy and Industrial Strategy (formerly the Department of Energy and Climate Change), patterns of car usage and consequent energy consumption are investigated, and the costs of Vehicle Excise Duty and road fuel examined as a proportion of total expenditure on household direct energy consumption. Through the use of these new datasets it is possible to analyse how these vary spatially and in relation to levels of median income. The findings indicate that motoring costs are strongly regressive, with lower income areas, especially in rural locations, spending around twice as much of their income on motoring costs as the highest income areas. |
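The area-level aggregation behind the expenditure-versus-income comparison developed above (Figs. 9–11) can be sketched in a few lines of Python. The unit prices are the standard-credit figures quoted in the text, while the example LSOA record and its values are purely illustrative and not drawn from the underlying data.

```python
# Sketch of the area-level (LSOA) aggregation behind the expenditure-versus-income
# comparison above. Unit prices are the figures quoted in the text; the example
# LSOA record is illustrative, not real data.

GAS_PRICE_PER_KWH = 0.042    # GBP/kWh (standard credit, 'typical' 18,000 kWh/yr basis)
ELEC_PRICE_PER_KWH = 0.143   # GBP/kWh (standard credit, 'typical' 3,300 kWh/yr basis)

def household_energy_cost(mean_gas_kwh, mean_elec_kwh):
    """Average annual household spend on mains gas and electricity for an area."""
    return mean_gas_kwh * GAS_PRICE_PER_KWH + mean_elec_kwh * ELEC_PRICE_PER_KWH

def share_of_income(motoring_cost, energy_cost, median_income):
    """Combined motoring (fuel + VED) and domestic energy spend as % of median income."""
    return 100.0 * (motoring_cost + energy_cost) / median_income

# Illustrative LSOA record (hypothetical values).
lsoa = {"mean_gas_kwh": 15500, "mean_elec_kwh": 3900,
        "mean_fuel_and_ved": 1400.0, "median_income": 27000.0}

energy = household_energy_cost(lsoa["mean_gas_kwh"], lsoa["mean_elec_kwh"])
print(f"Domestic energy: GBP {energy:.0f}/yr")
print(f"Share of median income: {share_of_income(lsoa['mean_fuel_and_ved'], energy, lsoa['median_income']):.1f}%")
```

Repeating this calculation for every LSOA, and grouping areas by income decile, gives the kind of expenditure-share curves described above, in which lower-income areas devote a larger fraction of income to motoring and domestic energy combined.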
309 | Overcome procrastination: Enhancing emotion regulation skills reduce procrastination | Procrastination is a widespread and well-known phenomenon that refers to the voluntary delay of intended activities, even though the delay may have negative consequences.Individuals differ in the extent to which they postpone tasks."Chronically engaging in problematic procrastination has been reported by about 15% of adults and the prevalence is even higher in specific populations: up to 50% of college students procrastinate consistently and problematically.Numerous studies indicate that procrastination is associated with significant impairment of work and academic performance.Students often engage in activities like sleeping, reading, or watching TV instead of learning.Moreover, procrastination reduces well-being, increases negative feelings such as shame or guilt, increases symptoms of serious mental health problems such as depression, and affects health behavior, such as delaying seeking proper care for health problems.In an attempt to explain this widespread and potentially harmful phenomenon, several authors have proposed that negative emotions are an important antecedent of procrastination.Evidence for this assumption comes from studies showing that people procrastinate more when they are sad or upset and that the subjective pleasantness of the distractor moderates the link between feeling upset and procrastination.Moreover, depressed affect, neuroticism, and lack of control over distressing situations have been found to be associated with procrastination.Finally, it was shown that the positive effects of self-forgiveness on procrastination were mediated by the reduction of negative affect.Thus, emotion regulation plays a critical role in understanding the self-regulatory failure of procrastination.Individuals postpone or avoid aversive tasks in order to gain short-term positive affect at the cost of long-term goals.Regarding the details of this process, Sirois and Pychyl suggest considering counterfactual thinking as an explanation of the emotional misregulation that may promote procrastination.Counterfactual thinking means that individuals compare “… unfavourable outcomes that did occur in the past to possible better or worse outcomes that might have occurred”.In short, upward counterfactuals can cue aversive emotions that may initiate the correction of future behavior.Considering that aversive emotions like shame or guilt cause self-regulation to break down, upward counterfactuals may increase procrastination.On the contrary, downward counterfactuals improve current feelings but lead to poorer future performance.Not only do aversive emotional states cue procrastination; susceptibility to pleasurable temptations also increases procrastination if individuals try to maximize pleasant feelings at the cost of long-term goals.Ironically, however, engaging in enjoyable activities while procrastinating increases negative rather than positive affect, because individuals feel guilty about their task avoidance.As aversive affective states have been shown to cue procrastination through misregulation, it can be hypothesized that the ability to adaptively cope with aversive affective states reduces the risk of procrastination."According to Berking et al., ER skills include subcomponents such as: the ability to be aware of one's emotions, to identify and label emotions, to correctly interpret emotions related to bodily sensations, to understand the prompts of emotions, to support one's own self in emotionally distressing situations, to
actively modify negative emotions in order to feel better, to accept emotions, to be resilient, to confront emotionally distressing situations in order to attain important goals, to support oneself, and to modify aversive emotions.Preliminary support for the assumption validity of this model comes from several studies in clinical and non-clinical populations.Regarding all ER skills, in the heuristic framework of Berking and Znoj the ability to tolerate and the ability to modify aversive emotions play key roles.Findings of Berking and colleagues support this; both abilities moderate the effects of the remaining ER skills.There is ample evidence that shows how deficits in affect regulation skills are associated with various mental health problems.Moreover, there is evidence that emotional self-regulation reduces procrastination.It was shown that interventions which induct positive moods or interventions of self-affirmation enhance self-regulation capacity, which is needed to overcome procrastination.At last, recent research found that the association between health-related intention and actual engaging in health-related behavior was moderated by ER skills."Although there is a body of evidence that emotional self-regulation is associated with procrastination, little is known about the association between the different abilities to adequately process and respond to one's feelings and procrastination.Thus, the aim of the present study is to clarify the role of emotion regulation skills in order to reduce the tendency of procrastination.With regard to the ER subcomponents, the framework of Berking and colleagues as well as findings of previous ER studies indicate that the ability to tolerate and the ability to modify aversive emotions mediate the relations between all other sub-skills and mental health.But with regard to procrastination, little is known about the role of these two sub-skills.Thus, we aim to clarify the roles of the ER skills resilience and modification in the interplay of ER skills.For this purpose we first tested the hypothesis that the availability of adaptive emotion regulation skills would be cross-sectionally associated with procrastination.In a second study, we clarified whether the prospective effects of ER skills would negatively predict subsequent procrastination.In a third study, we tested the hypothesis that a systematic training of adaptive ER skills would reduce procrastination in a randomized controlled trial of 83 employees of different professions.Participants were recruited among students from the Leuphana University in Lueneburg during February 2011.They were asked to complete questionnaires about their study behavior in lectures.Consenting participants completed a paper-and-pen-based survey that included the questionnaires described in this section below.All procedures of the study were approved by the Institutional Review Board and complied with APA ethical standards.The final sample consisted of 172 students.Average age was 22.1 years."Regarding the sample's career choice, 86 participants studied economy, 84 studied to become teachers, one studied psychology, and another studied education sciences.Procrastination was measured by the Academic Procrastination State Inventory, which is a self-report instrument with 23 items that utilizes a 5-point Likert-type scale to assess procrastination in academic domains.Participants were asked to rate how often they engaged in the behavior stated by the items during the previous week.An example of an item is: “Gave up 
studying because you did not feel well”.The inventory includes three subscales.Relevant for the present study is the APSItotal score that is computed as the average of all items.Internal consistency of the total score was good.ER skills were assessed using the Emotion Regulation Skills Questionnaire.The ERSQ is a self-report instrument that includes 27 items and utilizes a 5-point Likert-type scale to assess adaptive emotion regulation skills.The ERSQ assesses nine specific ER skills with subscales composed of three items each.The items are preceded by the stem, “Last week …”.Items include: “I paid attention to my feelings”; “my physical sensations were a good indication of how I was feeling”; “I was clear about what emotions I was experiencing”; “I was aware of why I felt the way I felt”; “I accepted my emotions”; “I felt I could cope with even intense negative feelings”; “I did what I had planned, even if it made me feel uncomfortable or anxious”; and “I was able to influence my negative feelings”.Overall emotion regulation was assessed by averaging all of the items and computing a total score.In a first step, we conducted four regression analyses, first on APSItotal, second on APSIprocrastination, third on APSIfear for failure, and fourth on APSIlack of motivation.We calculated the explained variance of all subscales and the standardized regression weights of each subscale.In order to clarify the roles of the ER skills resilience and modification in the interplay of ER skills, we conducted mediation analyses.We investigated whether the association between each ER skill and procrastination is mediated by the subscale ERSQresilience or by ERSQmodify.For these analyses we used the SPSS MACRO PROCESS.For all statistical analyses, the significance level was set at p < 0.05.SPSS 22.0 and AMOS 22.0 were used for all analyses.Table 1 shows descriptive statistics and intercorrelations of the variables.Consistent with our hypothesis, the APSItotal score and all APSI subscales were significantly predicted by the ERSQ subscales.Although all ERSQ subscales were correlated with the APSI sum score and the subscales, only ERSQresilience was a significant predictor in the four regression analyses.In line with our assumption, the mediation analyses outlined that ERSQresilience mediated the associations of all other ERSQ subscales with the procrastination scales.Although Berking and colleagues conceptualized ERSQresilience and ERSQmodify as key variables, in the present study ERSQmodify mediated only one link between the ER skills and procrastination.For details see Table 3.Findings indicate that ER skills were associated with procrastination.But surprisingly, regression analyses including all ERSQ subscales revealed that only ERSQresilience was a significant predictor of procrastination.These findings indicated that most of the common variance that the ERSQ subscales shared with procrastination was explained by ERSQresilience.In light of the mediation hypotheses, these findings are not that surprising.In line with the framework of Berking and colleagues, results of the mediation analyses outlined that ERSQresilience mediated the connection between the other ERSQ subscales and procrastination.Contrary to this framework, the results for ERSQmodify were very inconsistent.Considering the results shown in Table 3, it may be suggested that the ability to modify aversive emotions may be important for emotional processing, whereas the ability to tolerate aversive emotions seems to be necessary for all adaptive emotional responses and
processes, in order to deal with aversive or boring tasks.This is highly plausible, because individuals, who are not able to tolerate aversive emotions, will postpone or avoid aversive or boring tasks.Then they will have no reason to become aware of these emotional states, to understand, nor to modify them.Despite the high plausibility, Study 1 is very limited by the cross-sectional design.No causal interpretation of the results is possible.In order to overcome this limitation, the prospective impact of ER skills on procrastination and vice versa was investigated in Study 2.To further clarify whether cross-sectional associations between ER skills and procrastination result from a causal effect of ER skills on procrastination, we conducted a second study to test prospective associations between ER skills and procrastination.Increasing workload leads to more perceived stress and aversive emotions.If, in addition to the increasing workload, no fixed timetable exists, procrastinators are likely to regulate the aversive emotions and the perceived stress by postponing or avoiding aversive tasks.DeArmond, Matthews, and Bunk found an indirect impact from increasing workload on procrastination.On the other hand, ER skills increase the probability to regulate aversive emotions adaptively.Thus, we assume that ER skills prevent individuals from procrastinating when workload increases.With regard to the key role of the ability to tolerate and the ability to modify aversive emotions, we particularly expect that deficits in these sub-skills are coupled with a rise of subsequent procrastination.As in the previous study, participants were recruited among students from the Leuphana University.They also were asked to complete questionnaires about study behavior.Assessments were conducted in the last week of lecture period and one week later, during the first week of the non-lecture period.Typically, the deadline for assignments and examinations comes to its closing point during the first week of the non-lecture period, which usually implies an increase in student workload.In order to evaluate prospective effects of ER on procrastination under stress, we assessed increased workload in the first week of the non-lecture period compared to the last week of the lecture period and excluded participants if they did not report an increase.To encourage students to participate in the present study in spite of their already heavy workload, we raffled four Amazon-vouchers at the value of 20 Euro as incentives.At both assessment points, consenting participants completed the Emotion Regulation Skills Questionnaire and General Procrastination Scale as described in the previous study."All procedures were approved by the university's Institutional Board and complied with APA ethical standards.The final sample consisted of 79 students, of which 76 were female.The average age was 23.1 years.The first assessment was completed by 190 participants.Forty-two of them were excluded because they reported a decreased work load for the non-lecture period.The second assessment was completed by 79 students.Of the final sample population 63 participants were studying to become teachers, 7 studied education science, 3 studied environmental and sustainability studies, 2 studied human resources management, and one participant studied in each one of the following careers: cultural sciences, politics, English studies, and economics.As in Study 1, we assessed ER skills with the Emotion Regulation Skills Questionnaire.The internal consistency of 
the ERSQtotal was good.Procrastination was measured with the German short version of the General Procrastination Scale.The GPS is a self-report instrument with 9 items that utilizes a 4-point Likert-type scale.Four items are inversed.A total score was obtained by summing all items and then dividing them by nine.The authors report an internal consistency of α = 0.86.The internal consistency of the GPS in the present study was good.To clarify the direction that prospective effects of ER skills might have on procrastination, we conducted cross-lagged regression analyses based on path analysis modeling.This method allows to investigate time-lagged reciprocal effects of two variables, while, at the same time, controlling for autoregression effects.We conducted nine cross-lagged panels to investigate the reciprocal effects of each ERSQ subscale and procrastination.For all statistical analyses, significance level was set at p < 0.05.SPSS 22.0 and AMOS 22.0 were used for all analyses.Correlations between ER sub-skills and procrastination are presented in Table 4.To investigate the prospective effect of ER skills on procrastination, nine CLP were conducted.The model fit for the path analyses of three emotional processing models, for the sum score, and for three regulation-orientated subscales were very good.Good to acceptable were the model fits for ERSQclarity and ERSQresilience.Regarding the fit indices, the model including ERSQreadiness to confront did not fit.In line with our assumption, individuals scoring high on ERSQmodify at pre-assessment decreased subsequent procrastination, whereas procrastination measured at pre-assessment seemed to have no impact on subsequent ERSQmodify.Contrary to our expectations, no other ERSQ subscale predicted a reduction of subsequent procrastination.Surprisingly, findings indicated that a high procrastination level decreased subsequent ability to tolerate aversive emotions.Study 2 was conducted in order to investigate the prospective reciprocal effects of ER skills and procrastination.We assumed that ER skills were negatively associated with subsequent procrastination.Indeed, the ability to modify aversive emotions was negatively associated with subsequent procrastination.But all other subscale of the ERSQ did not cue a decrease of procrastination.Moreover, procrastination seemed to reduce the subsequent ability to tolerate aversive emotions but not vice versa.Although we supposed that the ability to tolerate aversive emotions reduces subsequent procrastination, the present findings seem to be plausible.If someone procrastinates in order to avoid aversive emotions or boredom, it is a kind of negative reinforcement.If the individual postpones or avoids the task, the expected undesired affective state disappears.Instead of standing the aversive affect the individual learns not to tolerate the aversive emotional state.Thus, the decrease of ERSQresilience may be a result of such a learning process.Several limitations of Study 2 need to be addressed.First, it has been argued that the validity of self-reports of emotional competence is limited.However, subjective appraisals of emotion regulation may often be at least as valid as alternative measures of emotion regulation.Nevertheless, it is important that future studies replicate the analyses using alternative instruments such as observer ratings or physiological measurements.Second, self-reported procrastination estimates may be also a problem.Meta-analytic findings suggest that “…those in poorer moods are more 
likely to indicate that they procrastinate, regardless of their actual behavior.,.Future research should overcome this limitation by external assessment.Third, the increase of workload was assessed by a self-report item.The response may also depend on the mood of the participants.However, the dates of the two assessments were chosen because workload typically increases in the beginning of the non-lecture period for German students.The results of Study 2 suggest that the ability to modify aversive emotions has a unidirectional negative effect on subsequent procrastination.In Study 3, we aim to replicate this finding in an experimental design.We assume that individuals, who train their ability to modify aversive emotions cued by tasks, reduce procrastination.Additionally, we suppose that the decrease in procrastination is mediated by an increase in the ability to modify aversive emotions.Therefore, Study 3 focused on the implementation of a randomized control trial to test the impact of an online-training focusing on ER strategies in order to overcome procrastination of aversive tasks.We assume that the training of emotion-focused strategies reduces procrastination.Furthermore, we hypothesize that the training of emotion-focused strategies increases ER skills.The emotion-focused strategies included tolerating as well as modifying aversive emotions.Moreover, we suppose that the effects on procrastination are mediated by an increase of these ER skills.The participants of this third study were recruited through newspaper articles about the current study and through the website www.training-geton.de, which was a platform for internet-based trainings and training research of the Leuphana University Lueneburg.Interested individuals applied to participate by writing an email to the primary study investigator.Individuals were asked to provide an informed consent and complete an online baseline questionnaire.Then, participants were randomized to an intervention group or a waiting list control using the online tool RANDOM.ORG.A list of participants was entered in the tool which then changed the listing order randomly.Participants with an even listing number were allocated to the IG and got access to the online intervention.Participants with an uneven number were allocated to the WLC.They were asked to wait about two weeks for the post-assessment and subsequent access to the online training by email.Two weeks later, all participants were invited to complete the same questionnaire as a post-assessment."All procedures were approved by the university's Institutional Review Board and complied with APA ethical standards.From 215 individuals who were interested in the online training, 83 provided the informed consent and completed the pre- and post-questionnaires.Fifty-seven participants were women and the average age was 40.8 years.Four individuals reported to be unemployed, six were students, and one person was retired.All other participants were employed.Forty-four participants of the final sample were allocated to the IG and 39 participants were randomized to the WLC.The two-week web-based intervention promoted emotion-focused strategies to overcome procrastination.The strategies tolerate and modify aversive emotions, are appropriate to cope adaptively with emotions.Thus, the intervention focused on these two strategies.In the intervention, participants were asked to choose one of their daily tasks which they were most likely to procrastinate and identify whether the task characteristics are associated 
with aversive emotions or with a lack of positive affect."Following Berking and Whitley, the strategy to tolerate aversive emotions included intentionally permitting aversive emotions to be present, then reminding oneself of one's toughness and resilience, and finally reminding oneself of one's affective commitment to the task.On this basis, participants could try to modify their emotions.In order to do that, they either tried to increase positive affect or to reduce aversive emotions.The strategy to modify aversive emotions consisted of first practicing a short relaxation exercise, then reappraising the harm and the probability of the potential threat, and lastly deciding whether to execute the task.After completing the chosen task, participants evaluated how successfully they coped with aversive emotions or with a lack of positive affect.This procedure took about 10 min and was repeated daily for two weeks.We assessed procrastination as our primary outcome with the German short version of the General Procrastination Scale as described in Study 2."In this study, Cronbach's alpha of the GPS was acceptable.To evaluate the extent to which the intervention actually enhances ER, we also assessed the effects of the intervention on ER.As the intervention primarily focused on acceptance, resilience, and modification of aversive emotions, we focused on these three aspects of ER and included the ERSQ scales acceptance, resilience, and modification as secondary outcomes.Reliabilities of these subscales in the present study were αt1 = 0.77 and αt2 = 0.82 for acceptance, αt1 = 0.77 and αt2 = 0.77 for resilience, and αt1 = 0.80 and αt2 = 0.77 for modification.Our hypothesis was that the training increases the abilities to tolerate and to modify aversive emotions.Therefore, in a first step, we checked whether the training influenced those ER skills by conducting ANCOVAs, controlling for the respective pre-measured ER skills.In a second step, we tested whether the training of emotion-focused strategies to cope with aversive tasks reduces procrastination.Therefore, we conducted another ANCOVA, controlling for pre-measured procrastination.The effect size was calculated.In a third step, we investigated whether the effects on procrastination were mediated by the increase in ER strategies.We conducted a mediation analysis by applying the SPSS MACRO PROCESS.First, we tested the direct effects of the independent variable treatment on procrastination.Then, we tested the indirect effects of the change in the ERSQ subscales ERSQresilience and ERSQmodify.Therefore, we conducted separate analyses.To calculate the change of each ERSQ subscale we subtracted the pre-measure from the post-measure.In each analysis, we statistically controlled for pre-measured procrastination.We aimed to investigate the de facto influence of applying ER strategies on aversive emotional states that were triggered by tasks.Thus, we conducted per-protocol analyses, using SPSS 22.0 for all analyses.An ANOVA indicated no significant differences between the treatments regarding age, procrastination, and all nine ER skills in pre-measurements.With regard to gender, a chi-square test was conducted; no differences between treatments were found.In line with our assumption, an ANCOVA indicated that the training of emotion-focused strategies reduced procrastination.Fig.
1 displays the development of procrastination from baseline to post-measurement.Reported means of procrastination in the WLC did not differ significantly, whereas the reduction in means of the IG was significant.Participants of the IG group reported a significant increase in their abilities to tolerate aversive emotions and modify aversive emotions compared to the WLC group.Table 7 shows the means, SDs for baseline and post-treatment, and the test-statistics for all outcome measurements separately.To test whether the effect of the training on procrastination was mediated by increasing the ER skills ERSQresilience and ERSQmodify, an analysis of indirect effects was conducted.Procrastination was controlled.There were significant indirect effects of the ER treatment on procrastination through the change in both ER subscales.Additional, analyses indicated that the ER subscales ΔERSQacceptance and ΔERSQreadiness to confront were also significant indirect pathways between treatment and reduction in procrastination.Following Baron and Kenny, a mediation effect needs a significant pathway from the independent variable on the dependent variable before including the mediator, a significant pathway from the independent variable on the mediator, and a significant pathway from the mediator on the dependent variable.Mediation analyses outline that only for ΔERSQresilience and ΔERSQmodify all pathways were significant.Results of Study 3 indicated that the online-based training reduced procrastination and increased all ER skills, including the ability to modify and to tolerate aversive emotions.Regarding the mediation hypotheses, Table 8 indicated that indirect pathways from treatment on procrastination via ERSQacceptance, ERSQresilience, ERSQreadiness to confront, and ERSQmodify were significant.However, the path from ERSQacceptance on procrastination was only marginal significant and the path from treatment on ERSQreadiness to confront was not significant.Following Baron and Kenny, the significance of all pathways is a premise of mediation.Thus, the reduction of the procrastination level seems to be mediated by the increase in ERSQresilience and ERSQmodify.Concerning the ability to modify aversive emotions, the results of studies 1–3 were quite consistent.The ability to modify aversive emotions seems helpful in order to overcome procrastination.Understanding procrastination as dysfunctional emotion regulation, this finding is very plausible.However, results of Study 1 indicated that the association between ERSQmodify and procrastination is mediated by the ability to tolerate aversive emotions.Moreover, the association of all other subscales and procrastination is also mediated by the subscale ERSQresilience.It seems that the ability to tolerate aversive emotions plays a key role in the interplay of ER sub-skills.Yet, the results concerning ERSQresilience look like they were inconsistent.Thus, we had to discuss the ostensive discrepancy concerning the subscale ERSQresilience in Study 2 and Study 3 in order to understand the relation between the ability to tolerate aversive emotions and procrastination.Results of Study 2 indicated that procrastination has a unidirectional negative effect on the subsequent ability to tolerate aversive emotions.We suggested negative reinforcement as an explanation.To overcome disorders caused by negative reinforcement, a classical intervention in cognitive behavioral therapy is confrontation with response prevention.If individuals train to tolerate aversive emotions cued by 
aversive or boring tasks, they may increase their ability to tolerate aversive emotions as this intervention is similar to response prevention.The training of ER-focused strategies may operate like response prevention.The participants were encouraged to bear aversive emotions before they tried to modify them.If they were not able to modify aversive emotions cued by the task, they had to remind themselves that they were able to tolerate these feelings.Although the effects of the ER skills (resilience and modification) on procrastination were comparatively small, they were significant.As procrastination has multiple causes and is stable over time, we did not expect large effects, either as direct prospective effects or as indirect effects.The small effect size between the intervention group and the waiting list control is in line with this assumption.According to previous findings showing that procrastination is a kind of short-term mood repair, the results of Study 3 suggested that individuals applying the ER skills resilience and modification were able to overcome the temptation to regulate their mood by procrastination.Several limitations need to be addressed.First, when comparing a treatment with a waiting list control, results may be confounded by a placebo effect.Therefore, future research should overcome this limitation by applying a placebo control.Second, it is important to investigate treatment adherence in order to analyze the effects of adherence on the findings.Unfortunately, we did not assess adherence to the treatment or to specific ER strategies.Future research should investigate how often participants choose which ER strategies and which strategies were linked to the reduction of procrastination.Third, Study 3 lacks a follow-up assessment.Thus, no interpretation with regard to long-term effects is possible.In order to obtain information about the stability of these effects, future research should replicate this study with follow-up assessments.The fourth limitation concerns the measure of procrastination across the studies.In Study 1, procrastination was measured by the Academic Procrastination State Inventory, which assesses academic procrastination.In Study 2 and Study 3 general procrastination - instead of academic procrastination - was measured by the General Procrastination Scale.This change is grounded in the better psychometric properties of the German version of the GPS, which did not exist when Study 1 was conducted.Although there is a difference between academic and general procrastination, the associations between emotion regulation and both forms of procrastination seem to hold across studies.However, future studies should clarify the association between ER skills and different domains of procrastination.A practical implication of our results is to integrate ER strategies into already existing procrastination interventions, in order to find additional ways to overcome procrastination.To the best of our knowledge, no procrastination interventions to date have incorporated training of different ER skills.With regard to the potential economic damage that procrastination causes for individuals as well as companies, a plausible counterbalancing strategy could be to provide employees with a service that teaches them the same ER skills that were applied in the above-mentioned training and shown to be beneficial in avoiding procrastination.Additionally, courses on coping with aversive emotions seem to be highly relevant for students.
| Procrastination is a widespread phenomenon that affects performance in various life domains including academic performance. Recently, it has been argued that procrastination can be conceptualized as a dysfunctional response to undesired affective states. Thus, we aimed to test the hypothesis that the availability of adaptive emotion regulation (ER) skills prevents procrastination. In a first study, cross-sectional analyses indicated that ER skills and procrastination were associated and that these connections were mediated by the ability to tolerate aversive emotions. In a second study, cross lagged panel analyses showed that (1) the ability to modify aversive emotions reduced subsequent procrastination and that (2) procrastination affected the subsequent ability to tolerate aversive emotions. Finally, in a third study, a two-arm randomized control trial (RCT) was conducted. Results indicated that systematic training of the ER skills tolerate and modify aversive emotions reduced procrastination. Thus, in order to overcome procrastination, emotion-focused strategies should be considered. |
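The three procrastination studies above repeatedly test simple mediation models (for example, whether the link between an ER skill and procrastination runs through the ability to tolerate or to modify aversive emotions) using the SPSS PROCESS macro with bootstrapped indirect effects. The sketch below shows the same X -> M -> Y logic in Python with a percentile bootstrap; the data are simulated and the variable names (er_skill, resilience, procrastination) are placeholders, so it illustrates only the form of the analysis, not the reported results.

```python
# Simple mediation (X -> M -> Y) with a percentile-bootstrap CI on the
# indirect effect a*b, analogous to a PROCESS "model 4" run.
# Data are simulated; variable names are placeholders, not the study data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 172
er_skill = rng.normal(size=n)                              # predictor (e.g. an ERSQ subscale)
resilience = 0.5 * er_skill + rng.normal(size=n)           # proposed mediator
procrastination = -0.4 * resilience + rng.normal(size=n)   # outcome

def indirect_effect(x, m, y):
    # a path: X -> M
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    # b path: M -> Y, controlling for X
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

obs = indirect_effect(er_skill, resilience, procrastination)

# Percentile bootstrap of the indirect effect
boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(er_skill[idx], resilience[idx], procrastination[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {obs:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that excludes zero is the usual criterion for a significant indirect effect in this kind of analysis; the cross-lagged panel models of Study 2 and the ANCOVAs of Study 3 would require separate model code.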
310 | pyPcazip: A PCA-based toolkit for compression and analysis of molecular simulation data | Molecular dynamics simulations of biological molecules and complexes can give insights into the relationship between macromolecular structure and dynamics at the atomistic level, and the complex emergent properties of the system at much longer length and timescales.Continuing developments in hardware and software mean that researchers are faced with ever increasing volumes of raw data from the simulations that need to be stored, analysed, and shared.The key raw data are trajectories: snapshots of the time-evolution of the system output at regular intervals, where each snapshot records the three dimensional coordinates of each atom in the simulation at that moment in time.The widening gap between highly scalable molecular simulation codes that enable simulation of multi-million atom systems over microseconds of time, and legacy sequential analysis tools, that were designed to deal with tens to hundreds of thousands of atoms over nanoseconds of time, is exposing a new bottleneck in the process of obtaining scientific insights from the computational experiments.As one of the efforts to address this gap we have developed pyPcazip, a suite of software tools that can compress molecular simulation data to a small fraction of their original size without significant loss of information.According to their interests, users can control the balance between the pyPcazip degree of compression and precision of the molecular simulation data.Subsequently, the compressed data opens the door to a manifold of analysis methods that produce objective, quantitative and comparative metrics related to convergence and sampling of molecular simulation as well as metrics on the similarity between molecular simulation trajectories.pyPcazip uses Principal Component Analysis, a dimensionality reduction technique at the core of its algorithms for compression and analysis of MD trajectory data.Dimensionality reduction techniques such as PCA are increasingly being applied to the analysis of molecular simulation data , as well as other types of data that report on variations in biomolecular conformation such as NMR ensembles , collections of crystal structures , and Monte Carlo simulations .PCA allows the dominant modes of molecular flexibility to be identified in a rigorous manner, and presented in the form of variations in the values of a small number of collective coordinates, rather than the 3N independent Cartesian coordinates of the individual atoms, so greatly easing interpretation and visualisation.PCA provides the gateway to a range of analysis methods that provide quantitative and comparative metrics related to convergence and sampling, and the similarity between one trajectory and another.We have previously described how this method can be applied to compression of the data , and as a route to enhanced sampling using our Fortran1 and C2 software codes.These software codes which we have developed in previous years, have very limited documentation and a very basic functionality.For these reasons we have undertaken the current code development, re-engineering and substantial functionality enhancement in order to provide the user community with a complete suite of software tools that they can use much more easily, flexibly and compatibly with their needs and that they can cite.In fact, here and now, with pyPcazip we present a complete new suite of software tools written in Python that includes some redesigned and 
reengineered algorithms of our Fortran and C codes but a much better engineered software and a much more extended range of functionalities with respect to these formerly used codes of ours.Originalities of pyPcazip include but are not limited to: A better handling of memory issues when dealing with very large datasets; On-the-fly selection of subsets of atoms of interest for the PCA analysis from the available datasets; Flexible support for the simultaneous analysis of multi-trajectory datasets that vary in their molecular topology and number of atoms; MPI support for input processing and internal calculations; Compliance with High Performance Computing architectures such as ARCHER; Unit testing and an automatic testing suite; Compliance with a large range of state-of-the-art formats and analysis tools of MD code outputs.As a suite of software tools, pyPcazip is composed of four main components and related functionalities:pyPcazip itself takes one or many input MD trajectory files and converts them into a highly compressed, HDF5-based,3 .pcz format.The program has options to select subsets of atoms, and/or subsets of snapshots from the trajectory files for analysis.The file-reading capabilities of pyPcazip draw extensively on the MDAnalysis Python toolkit .In addition to providing pyPczdump as a tool to post-process the .pcz format we also provide a customised reader of the output files produced by pyPcazip as part of a module of the software.This module4 can be found within the source code and could be easily integrated and used in third-party software if needed upon citation of this work.pyPcaunzip can decompress a .pcz file back into a conventional trajectory file in a range of formats.pyPczdump extracts information such as eigenvectors, eigenvalues, and projections from a .pcz file.It can also produce multi-model PDB format files to animate eigenvectors.pyPczcomp permits the quantitative comparison of the data from two congruent .pcz files.An example might be the dynamics of a protein in the presence and absence of a ligand, or a comparative analysis of the dynamics of a wild-type protein and a mutant.pyPcazip is a Python software code that provides command-line tools for the compression and analysis of molecular dynamics trajectory data using PCA methods.The software is designed to be flexible, scalable, and compatible with other Python toolkits that are used in the molecular simulation and analysis field such as MDAnalysis .Many stages in the pyPcazip workflow such as the input reading process and the covariance matrix calculation of the PCA analysis are amenable to parallelisation, which has been implemented using MPI.Fig. 
1 shows scaling data of pyPcazip on up to 96 cores on ARCHER-UK national supercomputing service.Each of the ARCHER compute nodes contains two 2.7 GHz, 12-core processors for a total of 24 cores per node.Where the number of cores is less than 24, MPI processes are assigned to the one processor first, then the second, and in the other cases multiples of 24 processes have been used, keeping the nodes fully populated.The bend in this figure at 12 cores occurs as the first processor of the first node has become fully occupied.With less than 12 cores used, each core has access to proportionally more cache space and memory bandwidth, so the higher performance is obtained.The dataset for this scalability analysis includes 10,000 snapshots of a Mouse Major Urinary Protein trajectory that is atom-filtered on-the-fly selecting the backbone related atoms only.An automatic testing suite has been developed for pyPcazip that is activated by a single command.This validates the correct functioning of the installed software.In addition, individual python modules are instrumented with extensive unit tests.Installation instructions, details of underlying algorithms, detailed performance data, and use-case examples are available at https://github.com/ElsevierSoftwareX/SOFTX-D-15-00082.The compression achieved by pyPcazip depends on the nature of the molecular system and, as a “lossy” method, on the chosen quality threshold.In particular, the variance captured considering a small number of the most important principal components retains crucial insights for conformational investigations as these modes directly relate to the highest amplitude motions of molecular systems whereas the less important principal components relate to the high frequency small amplitude atomistic fluctuations.For this reason we would not recommend the use of this suite of tools for the investigation of phenomena such as reaction mechanisms where the retention of sub-angstrom accuracy in e.g. bond lengths is required.Table 1 illustrates performance for three example datasets, each of 1000 snapshots: a short peptide; a DNA 18mer ; and the mouse major urinary protein .The two metrics are the degree of compression achieved and the compression error expressed as the average RMSD between snapshots in the original file and those in the compressed file.Clearly for many purposes compression to 10%–20% of the original file size is quite possible.The code has been tested on a variety of platforms ranging from laptops to national HPC facilities, and compression/decompression gives results identical to within an RMSD of less than 0.02 Å, even when run in parallel.The pyPczdump and pyPczcomp utilities allow a range of PCA-related metrics extracted or calculated from the compressed trajectory files.Output is in the form of ASCII data files that may be easily rendered using the user’s preferred graph plotting packages and molecular visualisation tools.Fig. 2 illustrates the sampling projected onto the subspace defined by the first two Principal Components, PC1 and PC2, for simulations of MUP, resolving three conformational states, while the time series of PC1 for one trajectory as shown in Fig. 3 reveals how it moves between these.Finally, for a different biomolecular system, Fig. 
4 illustrates the animation of the first PC extracted from MD simulations of a DNA tetranucleotide .In addition to the here presented illustrative examples, an introductory tutorial that includes a variety of analysis examples that can be performed through this software, is available at https://github.com/ElsevierSoftwareX/SOFTX-D-15-00082/blob/master/ramonbsc-pypcazip-3d7ab553c8dc/README.mdl.The software presented in this paper, pyPcazip, is an easy to use, flexible and extensible package for PCA-based investigations of molecular simulation data generated by most common state-of-the-art simulation packages such as AMBER, CHARMM, GROMACS and NAMD.PCA methods are of growing importance in the Biosimulation field, as the volumes of data that can be produced using modern HPC facilities overwhelm more traditional qualitative and human operator-intensive analysis techniques.The method has been used for some time to provide key insights into the relationship between biomolecular structure, dynamics, and function.Examples of our own work include the analysis of sequence-dependent DNA dynamics , protein–ligand interactions and GPCR dynamics but to date there has been no open source software product designed specifically to perform this type of analysis, or compatible with all common simulation packages.pyPcazip gives insights into structure and behaviour of molecules in addition to enabling highly compressed data storage of simulation trajectory files with insignificant loss of information.Through its analysis components the software provides a variety of methods that produce objective, quantitative and comparative metrics related to convergence and sampling of molecular simulation as well as metrics on the similarity between molecular simulation trajectories.We envisage a large potential user base for pyPcazip given that PCA methods have been in use by the Biomolecular simulation community for about 15 years and that, as discussed above, the need for such automated and quantitative routes to data analysis is growing rapidly, together with the fact that the approach is applicable to trajectory data from any type of simulation.Moreover, pyPcazip has been promoted to the Biomolecular simulation community at several international conferences that have contributed to the expansion of its user base.It is easy to install and use on a wide variety of different platforms ranging from personal Unix-based workstations to national HPC resources.Finally, the source code repository of pyPcazip, together with accompanying documentation, installation instructions for a variety of platforms, testing data and examples, is distributed under the BSD license version 2.In order to increase the visibility and usage of our software package that we present in this work but also for the benefit of the user community, we have made it freely and anonymously available for download at the official Python package register.More than 1158 downloads in the last month were recorded at the beginning of April 2016.pyPcazip gives insights into structure and behaviour of molecules in addition to enabling highly compressed data storage of simulation trajectory files with insignificant loss of information.It provides an easy to use, flexible approach to undertaking PCA-based investigations of MD trajectory data generated by all of the most common current simulation packages.The modularity of the software enables the integration and planning of future methodologies to complement the insights obtained from the use of the PCA 
method.Currently, a sparse-PCA approach that has been implemented using the main data structures of the pyPcazip package is being investigated for potential better insights on collective motions in molecular systems.Although due to significant differences in terms of functionality and usability we could not directly compare the suite of software tools we present here with the old Fortran and/or C codes that we cite in the “Problems and Background” section, we would expect the pyPcazip performance in terms of speed to be close to the performance of Fortran and C codes given that most of the heavy algebra calculation sections in pyPcazip have been implemented via SciPy whose time-critical loops are usually implemented in C or Fortran.In addition, pyPcazip has also been designed for use on HPC architectures so that the workload can be spread across different nodes and cores of a cluster making the final results available to the user at a fraction of the time of a serial run.In addition, for the sake of even more improved performance, the replacement of the most computationally expensive routines of pyPcazip with corresponding Fortran code is being validated and an up to 10-fold improvement of performance has been observed during preliminary tests of performance analysis.We plan future software releases to incorporate these enhancements.Finally, as an open source Python package, pyPcazip is amenable to end-user driven development and integration with the growing number of other Python-based packages in the molecular simulation domain.Source code, supporting material and users support through the bitbucket ticketing system is publicly available at https://github.com/ElsevierSoftwareX/SOFTX-D-15-00082.The code is accompanied with extensive information on the application of this software, detailed installation instructions for desktop workstations but also HPC architectures and details of the code performance for differing system configurations, all available at the same public access repository. | The biomolecular simulation community is currently in need of novel and optimised software tools that can analyse and process, in reasonable timescales, the large generated amounts of molecular simulation data. In light of this, we have developed and present here pyPcazip: a suite of software tools for compression and analysis of molecular dynamics (MD) simulation data. The software is compatible with trajectory file formats generated by most contemporary MD engines such as AMBER, CHARMM, GROMACS and NAMD, and is MPI parallelised to permit the efficient processing of very large datasets. pyPcazip is a Unix based open-source software (BSD licenced) written in Python. |
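pyPcazip's compression is described above as projecting the trajectory onto a reduced set of principal components chosen to meet a quality threshold, then reconstructing coordinates on decompression. The NumPy sketch below illustrates that general scheme on a synthetic coordinate array; it is not pyPcazip's actual implementation, command-line interface or .pcz format, and the 90% variance threshold and array sizes are arbitrary assumptions.

```python
# Illustrative PCA compression/reconstruction of an MD-like trajectory,
# following the general scheme described above (NOT pyPcazip's own code
# or .pcz format). 'traj' is a synthetic (n_frames, n_atoms*3) array.
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_atoms = 1000, 50
traj = (rng.normal(scale=0.5, size=(n_frames, n_atoms * 3)).cumsum(axis=0) * 0.01
        + rng.normal(size=(1, n_atoms * 3)))      # stand-in for fitted coordinates (Angstrom)

mean = traj.mean(axis=0)
X = traj - mean                                    # mean-centred fluctuations
cov = X.T @ X / (n_frames - 1)
evals, evecs = np.linalg.eigh(cov)                 # eigenvalues in ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]         # reorder to descending

quality = 0.90                                     # assumed variance-capture threshold
n_pc = int(np.searchsorted(np.cumsum(evals) / evals.sum(), quality) + 1)

scores = X @ evecs[:, :n_pc]                       # compressed representation (projections)
recon = scores @ evecs[:, :n_pc].T + mean          # "decompressed" trajectory

rmsd = np.sqrt(((recon - traj) ** 2).sum(axis=1) / n_atoms).mean()
ratio = scores.size / traj.size                    # ignores the stored eigenvectors and mean
print(f"{n_pc} PCs retained, size ratio ~{ratio:.2f}, mean reconstruction RMSD {rmsd:.3f} A")
```

In this scheme the retained components capture the large-amplitude collective motions, while the discarded ones correspond to the high-frequency, small-amplitude atomic fluctuations that the authors note are lost in the "lossy" compression.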
311 | Glass compositions and tempo of post-17 ka eruptions from the Afar Triangle recorded in sediments from lakes Ashenge and Hayk, Ethiopia | The Afar Triangle of northern Ethiopia represents one of the best examples of an active rifting system on Earth, marking the juxtaposition of the Arabian, Somalian and African plates above a mantle plume.During the Quaternary, explosive eruptions occurred at many volcanoes in the Afar Triangle and the adjacent Ethiopian Rift Valley.Volcanic ash ejected by explosive eruptions may be dispersed over ranges of hundreds or thousands of kilometres, forming widespread chronostratigraphic markers in sedimentary archives.Pleistocene tephras have been correlated throughout Ethiopia, Kenya and the Gulf of Aden and have been crucial in providing chronological control for regional palaeoanthropological sites.Regional volcanic activity has continued into the Holocene.However, of the ∼40 Holocene volcanoes in the Afar Triangle, very few have recorded historical eruptions.Recent explosive volcanism has occurred from the Nabro Volcanic Range, which extends ∼110 km across the northern Afar towards the Red Sea."The Dubbi volcano, located along the same volcanic lineament, was home to Africa's largest historical eruption.The AD 1861 eruption dispersed volcanic ash ∼300 km to the west, towards the Ethiopian Highlands, and the eruption culminated with the effusion of ∼3.5 km3 of basaltic lava.Proximal ignimbrite and pumice deposits from the AD 1861 eruption are trachytic to rhyolitic in composition.An eruption witnessed from the Red Sea in AD 1400 may also have been derived from the Dubbi volcano.Nabro volcano itself forms a double caldera with the neighbouring volcano, Mallahle.The volcano is constructed from trachytic pyroclastic deposits and lava flows, with rhyolitic obsidian domes and trachybasaltic flows inside the caldera.The formation of this caldera most likely represents the largest explosive eruption in northern Afar during the Quaternary, which may have produced a widespread tephra.The first recorded eruption of Nabro in 2011 dispersed volcanic ash over Africa and Eurasia, disrupting air traffic.At the time of the eruption no volcanic monitoring network existed, so there was no warning.Located to the west of the Nabro Volcanic Range and closer to the Rift shoulders, Dabbahu is a Pleistocene-Holocene volcanic massif.The Dabbahu volcanic products have evolved through fractional crystallisation dominated by K-feldspar, clinopyroxene and apatite to produce a basalt-pantellerite suite.Fission track dates on obsidians from the upper flanks give ages of ∼44 ka and ∼1.5 ka.An eruption in 2005 deposited tephra over 100 km2 and formed a small rhyolitic lava dome.Recent studies, including the Ethiopia-Afar Geoscientific Lithospheric Experiment and Afar Rift Consortium, have provided a wealth of information on magmatic and tectonic processes in the Afar.However, late Pleistocene and Holocene tephra deposits in the Afar are yet to be systematically studied and the recent eruptive history of the region remains poorly constrained, due in part to the logistical difficulties of undertaking fieldwork in this remote area.Distal tephra, including lacustrine and marine, records give insight into the frequency of past eruptions from a range of sources, which provides information on the likelihood of future activity.These records often provide more comprehensive and accessible records of long-term volcanism, whereas outcrops near volcanoes may be poorly exposed, buried or eroded, 
or have no clear stratigraphic context.For example, the record of Lago Grande di Monticchio in Italy has been used to constrain the tempo of Italian volcanism, especially that of Ischia.Furthermore, marine cores off the coast of Monserrat have been used to obtain long-term records of activity and assess the link to variations in sea level.To date, few studies have used east African lake sediments to investigate past eruption frequency.However, marine tephra records from the Indian Ocean and the Gulf of Aden have provided valuable information on the timing and magnitude of Oligocene eruptions from the Afro-Arabian flood volcanic province and Plio-Pleistocene events from the East African Rift System.Furthermore, lake sediments have good potential for radiocarbon dating and these dates can be used in conjunction with stratigraphic information in Bayesian age models, e.g. OxCal, to further constrain the ages of the events.Prior to interpreting distal records of past volcanic activity, it is important to consider factors that may determine the flux of tephras to individual sites and their subsequent preservation.These factors include eruption magnitude and intensity, wind speed and direction, and the grain-size of ejecta.The wind direction varies seasonally over Ethiopia and this will determine the dispersal of tephra and consequently the likelihood of it being preserved in a distal record.During the winter, easterly winds may disperse tephra from volcanoes in the Afar towards the Ethiopian Highlands; whilst, during the summer the winds reverse causing tephra produced in the Afar to be predominantly dispersed to the east.However, these are modern wind regimes which may have changed over the late Quaternary.On an individual site basis, processes including sediment focussing, bioturbation, slumping and tectonic movement may compromise the stratigraphic position, and therefore the apparent age, of a tephra layer, causing misinterpretation of past eruption tempo.Despite these challenges, our research demonstrates the great potential to achieve a detailed understanding of the composition and tempo of late Pleistocene to Holocene explosive volcanism in Ethiopia via the investigation of lacustrine records.This research aims to produce the first detailed lake sediment tephra record from lakes Ashenge and Hayk, spanning <17 cal.ka BP.This initial tephra record will provide the most complete assessment of explosive eruption frequency and tephra dispersal from the Afar during the late Pleistocene and Holocene so far available.This study involves the identification of both visible volcanic ash layers and crypto-tephra layers - fine grained and dilute tephras that cannot be identified in the host sediments by the naked eye.Using glass shard compositions and Bayesian age-modelling techniques, our study provides the first reference dataset for comparison with tephra deposits in the region.Our findings demonstrate the great potential for building upon these records and better evaluating volcanic hazards by using this distal lake sediment stratigraphic approach.Lakes Ashenge and Hayk are located on the flank of the Ethiopian Highlands, <70 km from Holocene volcanoes in the Afar Triangle.Lake Ashenge is located in a graben consisting of mid-Tertiary basalts at an elevation of 2500 m a.s.l and has a maximum water depth of ∼20 m.An 8 m long sediment core was extracted from the lake using a Livingstone Piston Corer in 2003, at a water depth of 9 m. 
Lake Hayk has a maximum depth of 88 m and is situated at 1900 m a.s.l. in an extensional basin developed in Miocene to early Pliocene basalts, tuffs and rhyolitic lava flows.A 7.5 m long sediment core was extracted from the lake in 2010 at a water depth of 78 m, using a UWITEC corer.Proximal samples from the Dubbi Volcano have been analysed in this study to ascertain whether this is the source for some of the distal tephras deposited in lakes Ashenge and Hayk.Pyroclastic flow deposits associated with the Dubbi AD 1861 eruption were sampled 12 km to the southwest of the vent at 13° 30′53″N, 41° 42′48″E).The physical properties, depths and thicknesses of visible tephras in the Ashenge and Hayk sediments were recorded and sampled.Samples were taken from the full thickness of visible tephra layers; were wet sieved and the 90−250 μm fraction was used for analysis.To locate cryptotephras, the standard extraction methods detailed in Blockley et al. were followed.This method allows for the extraction and identification of glass shards that are silica-rich in composition.Given that most large explosive eruptions are from evolved systems that are silica-rich, this method is appropriate for identifying widespread tephra layers.However, volcanism in the Afar and Main Ethiopian Rift is bimodal and there may also be a number of basaltic units that have not been investigated in this study.Contiguous and continuous samples were collected from the sediment at 10 cm intervals, however samples were not taken from depths at which visible tephras occur.Samples were dried, weighed and treated with 1 M HCl to remove carbonates, then sieved to >25 μm and density separated to 1.95−2.55 g/cm3.Glass shards isolated from the host sediments were then counted under a transmitted light microscope.Regions of sediment containing elevated shard concentrations were re-sampled at a 1 cm resolution, reprocessed and the extracted shards counted to identify the stratigraphic position of the cryptotephra horizon.Visible and crypto-tephras containing high concentrations of glass shards clearly exceeding background levels were numbered as Ashenge Tephra layers 1–9.Peaks associated with lower shard concentrations were not explored further in this study.If cryptotephras are reworked, glass shards may be dispersed throughout the stratigraphy.Primary cryptotephra deposits are typically characterised by a rapid basal rise in the concentration of glass shards, which then decline upwards through the stratigraphy.Samples for geochemical analysis were taken at the depth of the initial increase in shard counts in the stratigraphy.Glass shards from visible and crypto-tephras were mounted in epoxy resin and the internal surfaces exposed and polished.Single grain major and minor element concentrations were measured using a Jeol 8600 wavelength dispersive electron microprobe at the Research Laboratory for Archaeology and the History of Art, University of Oxford.To reduce alkali migration in the glass a defocussed beam with a 10 μm diameter, 15 kV accelerating voltage and 6 nA beam current was used.Sodium was collected for 10 s, Cl and P were collected for 60 s and other major elements were collected for 30 s.A suite of mineral standards were used to calibrate the instrument, and the MPI-DING volcanic glasses were used as secondary standards.All analyses presented in the text, tables and graphs have been normalised to an anhydrous basis, to remove the effects of variable secondary hydration in the glasses.Raw data and secondary standard analyses 
can be found in the Supplementary Information. Trace element compositions of single glass shards were determined using laser ablation ICP-MS at Aberystwyth University. Analyses were performed using a Coherent GeoLas ArF 193 nm Excimer laser coupled to a Thermo Finnigan Element 2 ICP-MS, with a laser energy of 10 J cm−2, repetition rate of 5 Hz and 24 s acquisition time. The minor 29Si isotope was used as the internal standard, the SiO2 content having previously been determined by EPMA. Trace element concentrations were calculated by comparing the analyte isotope intensity/internal standard intensity in the shard to the same ratio in the NIST SRM 612 reference material, using published concentrations from Pearce et al. Analyses using <20 μm spot sizes were corrected for variations in element fractionation. The rhyolitic MPI-DING reference material, ATHO-G, was analysed during each analytical run to monitor accuracy and precision; these analyses are given in the Supplementary Information. This study tests the potential for tephra correlations between lakes Ashenge and Hayk primarily using selected major and trace element bi-plots. Compositional similarities were further tested using principal component analysis. Yttrium, Zr, Nb, Ba, La and Th were selected as variables for the PCA, these elements having shown the most variability in bi-plots. To assess eruption frequency, age models were constructed for each record, using a combination of new and published radiocarbon dates on bulk sediment samples. New radiocarbon dates on the Ashenge sediments were obtained at the Scottish Universities Environmental Research Centre accelerator mass spectrometry Laboratory. Radiocarbon dates on the Hayk sediments were obtained at the Oxford Radiocarbon Accelerator Unit, University of Oxford, and the 14CHRONO Centre, Queen's University Belfast. Organic-rich bulk sediment samples for radiocarbon dating were digested in 2 M HCl for 8 h at 80 °C, washed with distilled water and homogenised. Pre-treated samples were heated with CuO in sealed plastic tubes to recover the CO2, which was then converted to graphite by Fe/Zn reduction. The radiocarbon was then measured using AMS. All radiocarbon ages were calibrated using IntCal13. Bayesian P_Sequence depositional models were run for both sequences, using OxCal version 4.2 with outlier analysis. Interpolated tephra ages were retrieved using the Date function and are quoted herein as 95.4% confidence intervals. Prior to analysis, sediment depths were converted to event-free depths that do not include tephras of >0.5 cm thickness, which are presumed to have been deposited instantaneously. Due to the presence of a significant hiatus at around 650 cm depth in the Lake Ashenge 03AL3/2 stratigraphy, separate P_Sequence age models were run to model the sediment deposition above and below this point. Full details, along with the OxCal code, for each age model can be found in the Supplementary Information. The Lake Ashenge sediments contain 9 tephras, labelled here AST-1 to AST-9 and ranging in age from 15.3−0.3 cal. ka BP. Five cryptotephras, containing high glass shard concentrations, were identified through density separation techniques. Visible tephras are grey-white in colour, normally graded and range in thickness from 1.0−2.5 cm and in grain-size from coarse to fine volcanic ash. The youngest tephra dates to the historical period, between 546−321 cal. a BP. This eruption followed a >4 ka interval during which no tephras were deposited at Lake Ashenge. Between ∼7.5−∼4.8 cal. ka BP, 6 tephra
layers were recorded. Below the hiatus in the sediment record, 2 more tephra layers are dated to between ∼13.5 and ∼15.3 cal. ka BP. No tephras are recorded in the Ashenge sediments between ∼15.3 cal. ka BP and the base of the core at ∼17.0 cal. ka BP. Precision on the tephra ages varies within the model, from ∼200 years for AST-1 and AST-2 to nearly ∼1500 years for AST-9. The major and trace element composition of glass shards in the Ashenge tephras is given in Table 3 and shown in Fig. 4. Glass shards within the Ashenge tephras have a rhyolitic composition, containing 70.56−74.80 wt% SiO2, 8.95−14.30 wt% Al2O3, 2.92−5.86 wt% FeOT and 9.71−11.61 wt%. The Ashenge glass shards are peralkaline. AST-1, 2, 5 and 7 are further classified as comendites, whereas other Ashenge tephras are pantellerites. Yttrium, Zr, La and Th behave as incompatible elements in the Ashenge glass shards, forming positive linear trends when plotted against one another. High Zr concentrations are related to the high solubility of Zr in peralkaline melts. The Ashenge glass shards show three different Y/La and Zr/Th ratios, which are interpreted as representing three different groups of fractionating magma. The geochemistry of the Ashenge tephras is discussed below in terms of these compositional groups. Glass shards in Group I tephras have lower Y/La ratios and lower Ba concentrations than other Ashenge tephras. Glass shards in the Group I tephras also have typically lower Zr/Th ratios than other Ashenge tephras. The Group I tephras have a wide range of ages, and tephras of this composition are not recorded in the Ashenge sediments between ∼6.2 and ∼5.0 cal. ka BP. The younger and older Group I tephras can be distinguished: the younger AST-1 and AST-2 contain lower SiO2 and FeOT concentrations and higher Al2O3 concentrations than the older AST-8 and AST-9. AST-8 glass shards are compositionally bimodal; one population has a similar composition to other Group I tephras, whilst those in the second population are more evolved, containing glass shards with comparatively higher Y, Zr, La and Th concentrations. The first tephra recorded after the hiatus in the lake record, AST-7, contains glass shards which cannot be compositionally distinguished from other Group I Ashenge tephra glass shards. Group II Ashenge tephra glass shards are restricted to the mid-Holocene sediments. Their glass shards have higher Y/La ratios and contain higher Ba concentrations than Group I tephras. Group II tephra glass shards contain broadly higher Zr/Th ratios when compared to Group I tephra glass shards. Glass shards in Group II tephras contain the highest SiO2 and FeOT and the lowest Al2O3 concentrations when compared with other Ashenge tephras. The Group II Ashenge tephras have differing glass shard compositions. Glass shards in AST-3 contain lower SiO2 and higher Al2O3 concentrations than AST-4 and AST-6. AST-5 has a distinct composition when compared with all other Ashenge tephras and is the sole member of Group III. Glass shards in AST-5 have the highest Y/La and Ba concentrations amongst the Ashenge tephras. Group III tephra shards have the highest Zr/Th ratios of the Ashenge tephras. Plots of SiO2 against Al2O3 and FeOT concentrations in the Ashenge glass shards show that AST-5 glass shards have distinct major element ratios when compared with other Ashenge glass shards. The Hayk sediments contain a total of 12 tephras, ranging in age from ∼13.0 to ∼1.6 cal. ka BP. HT-2 and HT-4 are visible tephras, comprised of well sorted, grey-white coloured fine to medium grained ash and
occur as 0.5−1 cm thick discontinuous layers in the core. Ten cryptotephras were identified through glass shard counting. Unfortunately, due to heavy sampling of the core after cryptotephra processing, more material from HT-1, HT-3, HT-8 and HT-11 could not be sampled for geochemical analysis. Historical tephra layers are not recorded in the Hayk sediments, the youngest tephra dating to ∼2.6−∼1.6 cal. ka BP. Tephras are also not recorded in the sediments beneath the oldest tephra, down to the base of the core at 16.0 cal. ka BP. Precision on the ages of these tephras varies within the age model, ranging from ∼400 years for HT-2 and HT-3 to ∼2000 years for HT-8, HT-9 and HT-10, at the 95.4% confidence level. The major and trace element composition of the Hayk tephra glass shards is given in Table 4 and shown in Fig. 5. The Hayk glass shards have a rhyolitic composition, containing 72.78−75.41 wt% SiO2, 9.73−14.39 wt% Al2O3, 1.48−5.44 wt% FeOT, 7.63−10.36 wt%. The majority of Hayk glass shards have a peralkaline affinity; however, HT-2 and HT-4 are marginally metaluminous and HT-5 and HT-6 are marginally peraluminous. HT-7 and HT-12 can be further classified as comendites and HT-9 and HT-10 as pantellerites. Yttrium, Zr, La and Th form positive linear arrays when plotted against one another, indicating they are all incompatible in the Hayk glass shards. Glass shards in the Hayk tephras show curvilinear positive Y/Nb trends, indicating that Nb becomes compatible at the onset of a new mineral phase crystallising, whilst Y remains incompatible. The Hayk tephras can be divided into five groups based on the Zr/Th and Y/La ratios of their glass shards; the composition of these tephras is discussed below in relation to these groups. The Group I tephras are the youngest tephras in the Hayk core to have been analysed, and their glass shards have Zr/Th ratios that differ from those of older Hayk glass shards. Group I tephra glass shards contain lower FeOT concentrations than Hayk Group IV and higher FeOT concentrations than Hayk Group II, III and V glass shards. Glass shards in the Group I tephras are depleted in Y, Nb, La and Th relative to glass shards in all other Hayk tephras. Glass shards in Group I tephras contain higher Ba concentrations than Group IV and V glass shards and lower Ba concentrations than Group II and III tephra glass shards. HT-2 and HT-4 tephras cannot be distinguished compositionally, although HT-2 glass shards contain broadly higher Y concentrations than HT-4. The late-Holocene HT-5 is the sole tephra member of Group II, with a Zr/Th ratio distinct from that of glass shards in other Hayk tephras. Glass shards in this tephra are distinguished from other Hayk tephras on the basis of their higher Ba concentrations. HT-5 and HT-6 glass shards have broadly lower Y/La ratios when compared with other Hayk tephras. However, HT-5 contains higher Y, Zr, La and Ba concentrations than HT-6 glass shards. HT-6 is the only tephra in Group III; its glass shards contain distinctly lower Zr concentrations than all other Hayk tephras. Group IV Hayk tephras are easily distinguished from other Hayk tephras, containing higher FeOT and Zr and lower Al2O3 than other Hayk tephras. Group IV tephras have the widest range of ages when compared with all other tephra groups recorded in the Hayk sediments. HT-7 contains higher Zr, La and Th and lower Al2O3 and Ba when compared with glass shards in HT-12, the oldest tephra identified in the Hayk sediments. Group V tephras are restricted to the early Holocene section of the Hayk sediments. Their glass shards have
distinct Zr/Th ratios when compared with other Hayk tephra glass shards. Glass shards in these tephras are more enriched in Th when compared with other Hayk tephras. Glass shards in HT-9 contain higher Zr, Nb, La and Th compared to glass shards in HT-10. The major and trace element composition of the Ashenge and Hayk tephra glass shards is compared in Figs. 6 and 7 to test whether there are potential tephra correlations between the archives. For the majority of tephra deposits in the Ashenge and Hayk cores, the major and trace element compositions of their component glass shards are distinct. Nonetheless, some tephra deposits within both lakes have compositionally similar glass shards. Hayk Group IV tephras have similar Y/La and Zr/Th ratios to Ashenge Group II tephras. Bi-plots of the first three principal components from principal component analysis of Y, Zr, Nb, Ba, La and Th concentrations in the Ashenge and Hayk tephra glass shards are shown in Fig. 10. This demonstrates that there are statistical similarities between the compositions of the Ashenge Group II and Hayk Group IV glass shards. HT-7 glass shards are compositionally similar to AST-3. However, HT-7 is also more compositionally evolved than AST-3, containing higher concentrations of SiO2 and incompatible trace elements and lower concentrations of Al2O3 than AST-3. Furthermore, HT-7 is too young to correlate with AST-3. This is consistent with HT-7 and AST-3 being produced by two separate eruptions from the same source, with an intervening time interval allowing fractional crystallisation of feldspar and ilmenite from the source magma chamber and the subsequent eruption of the more evolved HT-7. Glass shards in HT-12 have similar incompatible element ratios to the Ashenge Group II tephras. However, glass shards in HT-12 are enriched in Ba relative to AST-3, 4 and 6, and HT-12 is too old to correlate with the Ashenge Group II tephras. It is apparent that the lakes Ashenge and Hayk sediments record different eruptive events; however, glass shards in Ashenge Group II and Hayk Group IV tephras have similar incompatible element ratios, suggesting they may be derived from the same source. Furthermore, the new age models presented in this study reveal that, of the 9 tephra layers in Lake Ashenge and 12 in Lake Hayk, there is very little temporal overlap. Tephra HT-8, dated to ∼7.8−∼5.5 cal. ka BP, shows the only possible chronological correlation, overlapping within 95.4% confidence intervals with tephra layers AST-4, 5, 6, and 7. Unfortunately, HT-8 could not be analysed, so the potential for correlation here cannot be tested. A number of samples between 110−290 cm depth in the Ashenge archive revealed tephra glass shards at lower concentrations which were not investigated in this study. Such cryptotephras, if they can be resolved and analysed, could potentially provide correlative layers to HT-1 to HT-7, which date from the same time period. Further work is, however, needed to investigate cryptotephras associated with low shard concentrations throughout the Ashenge archive, and this may reveal further correlations. A sedimentary hiatus within the Ashenge stratigraphy, associated with an early Holocene lowstand, spans ∼11.8 to 7.6 cal. ka BP. During this lowstand, tephras deposited at Lake Ashenge may have been eroded or reworked, and this may limit the potential for correlation to HT-9 and HT-10. The lack of correlations between archives collected from lakes <140 km apart in the Ethiopian Highlands may be related to additional factors. Lake Hayk is located in a
sheltered setting, separated from the Afar Triangle to the east by a series of horsts attaining <2400 m height. Lake Ashenge is located <10 km from the rift margin, and the elevation to the east drops rapidly into the Afar Triangle. Therefore, Lake Ashenge is more exposed to the Afar Triangle and more likely to receive tephra deposits, particularly if the eruptions are localised. Lake Ashenge is located to the north of Lake Hayk, and this may determine the type and frequency of tephras received; eruptions from the northern Afar may be supplying the tephras in Lake Ashenge, whilst the tephras recorded in Lake Hayk may be derived from the southern Afar. To deposit a tephra from the same event in these different lakes, a change in wind direction during the eruption may be required. Lakes Ashenge and Hayk are alkaline lakes. Rhyolitic glass is more soluble in alkaline conditions, and variations in lake alkalinity through time may therefore determine the preservation of glass shards in the archives. Tephra glass shards observed in the Ashenge and Hayk archives are pristine; however, variable glass preservation could be responsible for the lack of correlations. The distribution of visible and crypto-tephras in the Ashenge and Hayk stratigraphies gives an insight into the frequency of past eruptions. Despite the limitations associated with taphonomic processes at lakes Ashenge and Hayk, this work presents the most comprehensive information on past eruption frequency for this region so far available. Table 2 shows Bayesian modelled ages of the tephras recorded in the Ashenge and Hayk archives. Thirteen of the total 21 tephras documented in both archives occur at ∼7.5−∼1.6 cal. ka BP, potentially reflecting a peak in explosive volcanism during this period. Tephras are recorded frequently in both archives between ∼15.3−∼1.6 cal. ka BP, indicating explosive eruptions in this area occurred on average every ∼1000 years during this period. The only recent tephra recorded over the past 1.6 cal. ka BP is the ∼0.5−∼0.3 cal. ka BP AST-1, in the Ashenge core. Selected compositional parameters from glass shards in separate tephra groups are plotted against their age in Fig.
8. Glass shards from each individual tephra in lakes Ashenge and Hayk occupy a wide compositional range. This compositional heterogeneity suggests that these tephras are derived from evolving or compositionally zoned magma chambers. The Ashenge Group I tephras have a wider range of ages than other tephra compositional groups recorded in the archives. The Ashenge Group I tephras may represent intermittent eruptions from a distant caldera active over a long time period. The lack of documented Ashenge Group I tephras at ∼16.7−∼13.6 cal. ka BP and ∼4.8−∼0.3 cal. ka BP is potentially associated with periods of repose. Whilst the similar incompatible element ratios of the Ashenge Group I tephras suggest these tephras have a co-magmatic origin, their tephra glass shards become depleted in SiO2, FeOT and Y and enriched in Al2O3 through time. This trend is the opposite of that expected for simple crystal fractionation of a feldspar-dominated assemblage. To constrain the petrogenesis of the tephras, detailed mapping and sampling of the potential source volcanoes is required. However, it is apparent that other processes are involved in their petrogenesis. Other compositional groups recorded in the archives are comprised of eruptions covering relatively short time spans. These tephras are more chemically homogeneous than the eruptions depositing the Ashenge Group I tephras, which may be an indication of compositional zoning developing in the magma chambers with relatively longer repose periods. However, the Hayk Group I and V tephra glass shards show enrichment in Y through time when compared to older tephras from the same compositional group. This indicates that the evolution of the Hayk tephras was dominated by fractional crystallisation of feldspar, differing from the Ashenge Group I melt evolution. The closest volcanoes to lakes Ashenge and Hayk that are thought to have been active during the Holocene are located to the east in the Afar Triangle. Given the lack of correlations between lakes Ashenge and Hayk, it is likely that the tephras recorded in these archives are locally derived. There is a scarcity of geochemical and chronological data from Holocene volcanic deposits in the Afar Triangle. Therefore, it is currently not possible to assess correlations with all possible source volcanoes which could have generated tephras deposited in lakes Ashenge and Hayk during the Holocene. The Ashenge and Hayk tephras are compared here with published glass analyses on proximal pumice and obsidians from Dabbahu volcano and new glass analyses on proximal tephra deposits from the Dubbi volcano. Dabbahu is the closest volcano to lakes Ashenge and Hayk with published geochemical data. Glass analyses of proximal pumice and obsidian samples from Dabbahu are compared with Ashenge and Hayk tephra glass shard analyses in Fig. 9. Dabbahu glass contains similar Y/La ratios to some of the Hayk Group I, IV and V tephra glass shards. However, Dabbahu glass contains higher Zr/Th ratios than the Ashenge and Hayk tephra glass shards. Fig.
10 shows the bi-plots of the first three principal components from a PCA comparing the composition of the Ashenge and Hayk tephras with obsidian and pumice samples from Dabbahu. It is apparent that the Ashenge and Hayk tephras are statistically different from the Dabbahu obsidian and pumice. Therefore, further sampling and 'side-by-side' analyses of glass from Dabbahu proximal deposits are required to assess the similarity of the incompatible element ratios to some of the Hayk tephras. The uppermost tephra in the Ashenge archive is the only historical tephra documented in the Ashenge and Hayk records. The AD 1861 eruption of Dubbi dispersed volcanic ash ∼300 km to the west on the Ethiopian Plateau. A previous eruption from Dubbi is believed to have occurred in AD 1400, and this has a comparable age to the modelled date of AD 1404−1629 for AST-1. Analyses of tephra glass shards from the Dubbi AD 1861 pyroclastic flow deposits are compared with the composition of AST-1 glass shards in Fig. 9 to test whether AST-1 was deposited by a possible older eruption from Dubbi. The AST-1 tephra glass shards have similar Zr/Th ratios to the Dubbi glass shards. However, AST-1 glass shards contain lower Al2O3 and incompatible element concentrations and higher FeOT concentrations than the Dubbi AD 1861 tephra glass shards. Further glass analysis of a wider range of proximal samples from Dubbi is therefore required to investigate the source of AST-1. Distal tephras in lake sediment archives from the Ethiopian Highlands provide a <17 cal. ka BP record of peralkaline volcanism from the Afar Triangle. Here we present an initial late Pleistocene to Holocene tephra framework for the Afar region, the first to cover this temporal and spatial range. This is the first instance where cryptotephras have been identified and dated in terrestrial archives from Ethiopia; of the 21 tephra layers across the two study sites, 15 were identified as cryptotephra layers. These results highlight the essential contribution of cryptotephra studies to our understanding of past volcanism. Furthermore, this study provides the first database of shard-specific major and trace element compositions for tephras from Ethiopia over this temporal range. Lakes Ashenge and Hayk record different eruptive events. The lack of correlations between archives collected from <140 km apart may be associated with numerous factors. The taphonomic and preservation issues at Ashenge and Hayk highlight the importance of compiling data from multiple sediment archives in order to provide more complete records of volcanic activity through time. Nonetheless, the Ashenge and Hayk archives provide a valuable insight into the regional eruption history. This tephra record demonstrates that potentially seven volcanic centres in the Afar have erupted frequently and explosively over the past <15 cal. ka BP, with the majority of tephras recorded at ∼7.5−∼1.6 cal. ka BP. The only historically documented tephra layer recorded in the archives occurs at ∼0.5−∼0.3 cal. ka BP. Our new tephra framework provides an insight into the volcanic history of the Afar that has important implications for hazard assessments in a region where the record of recent volcanism has remained largely undocumented. The cryptotephra study focused on the identification of silica-rich tephra units; however, basaltic volcanism is common in the area, and future studies may be able to build basaltic units into this initial tephra framework. Lake sediments have been shown to provide an accessible record of past volcanism from the
remote Afar Triangle. A greater network of sites must now be studied, which would capture not only the eruption frequency, but also the patterns of dispersal. This approach is, however, challenged by the lack of published major and trace element analyses of tephra glass shards from outcrops proximal to volcanoes in the Afar Triangle. Further geochemical characterisation of the regional volcanoes is therefore essential to identify the sources of tephras recorded in the lakes Ashenge and Hayk sediments as well as those that will be uncovered in future studies. | Numerous volcanoes in the Afar Triangle and adjacent Ethiopian Rift Valley have erupted during the Quaternary, depositing volcanic ash (tephra) horizons that have provided crucial chronology for archaeological sites in eastern Africa. However, late Pleistocene and Holocene tephras have hitherto been largely unstudied and the more recent volcanic history of Ethiopia remains poorly constrained. Here, we use sediments from lakes Ashenge and Hayk (Ethiopian Highlands) to construct the first <17 cal ka BP tephrostratigraphy for the Afar Triangle. The tephra record reveals 21 visible and crypto-tephra layers, and our new database of major and trace element glass compositions will aid the future identification of these tephra layers from proximal to distal locations. Tephra compositions include comendites, pantellerites and minor peraluminous and metaluminous rhyolites. Variable and distinct glass compositions of the tephra layers indicate they may have been erupted from as many as seven volcanoes, most likely located in the Afar Triangle. Between 15.3−1.6 cal. ka BP, explosive eruptions occurred at a return period of <1000 years. The majority of tephras are dated at 7.5−1.6 cal. ka BP, possibly reflecting a peak in regional volcanic activity. These findings demonstrate the potential and necessity for further study to construct a comprehensive tephra framework. Such tephrostratigraphic work will support the understanding of volcanic hazards in this rapidly developing region. |
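To make the compositional comparison described in the entry above concrete, the sketch below runs a principal component analysis on single-shard concentrations of the six elements used in the study (Y, Zr, Nb, Ba, La and Th). It is an illustrative outline only, not the authors' code: the file name and column names are hypothetical, and standardising the elements before the PCA is an assumption, since the paper does not state how the data were scaled.

```python
# Minimal sketch of a PCA on glass-shard trace elements.  Assumes a
# hypothetical CSV with one row per shard analysis, a "tephra_layer" label
# column and ppm columns for the six elements.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

ELEMENTS = ["Y", "Zr", "Nb", "Ba", "La", "Th"]

shards = pd.read_csv("glass_shard_trace_elements.csv")
X = StandardScaler().fit_transform(shards[ELEMENTS])   # put all elements on a common scale

pca = PCA(n_components=3)
scores = pca.fit_transform(X)
print("variance explained by PC1-PC3:", pca.explained_variance_ratio_.round(3))

# Shards from two deposits that overlap in PC space (e.g. Ashenge Group II and
# Hayk Group IV) are candidates for a shared magmatic source; well-separated
# clusters argue against a correlation.
for layer, (pc1, pc2, pc3) in zip(shards["tephra_layer"], scores):
    print(f"{layer}\t{pc1:6.2f}\t{pc2:6.2f}\t{pc3:6.2f}")
```

Scores such as these underlie the PC1-PC3 bi-plots used in the paper to test whether the Ashenge and Hayk glass populations are statistically similar.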
312 | Synthesizing Signaling Pathways from Temporal Phosphoproteomic Data | High-throughput proteomic assays illuminate the amazing breadth and complexity of the signal transduction pathways that cells employ to respond to extracellular cues. These technologies can quantify protein abundance or post-translational modifications. Mass spectrometry, in particular, offers a broad view of PTMs, including phosphorylation, ubiquitination, acetylation, and methylation, and is not restricted to a predefined list of proteins. Here, we show how to discover new facets of signaling cascades from complex proteomic data by integrating observed PTMs with existing knowledge of protein interactions. Many gaps persist in our understanding of phosphorylation signaling cascades. For example, our mass spectrometry experiments show that nearly all proteins that are significantly phosphorylated when the epidermal growth factor receptor is stimulated are absent from EGFR pathway maps. The low overlap is consistent with previous temporal phosphoproteomic studies of mammalian signaling. Discordance between mass spectrometry studies and pathway databases can be caused by extensive crosstalk among pathways, context-specific interactions, cell- and tissue-specific protein abundance, and signaling pathway rewiring. Network inference algorithms can explain the phosphorylation events that lie outside of canonical pathways and complement curated pathway maps. Specialized algorithms model time series data, which inform the ordering of phosphorylation changes and support causal instead of correlative modeling. Temporal protein signaling information can be used to reconstruct more accurate and complete networks than a single static snapshot of the phosphoproteome. A complementary challenge to interpreting off-pathway phosphorylation is that the cellular stimulus response includes mechanisms that are not captured in phosphoproteomic datasets. There is an interplay between phosphorylation changes and other integral parts of signaling cascades. Phosphorylation can affect protein stability, subcellular localization, and recognition of interaction partners. Phosphoproteomic studies measure only one type of PTM, and not all phosphorylated proteins are detected by mass spectrometry. Additional information is required to infer comprehensive signaling cascades that include non-differentially phosphorylated proteins. Protein-protein interaction networks serve this purpose by identifying interactions that connect observed phosphorylation events. We present the Temporal Pathway Synthesizer, a method to assemble temporal phosphoproteomic data into signaling pathways that extend beyond existing canonical maps. “Synthesizer” refers to applying computational program synthesis techniques to produce pathway models from experimental data, not synthetic biology. TPS overcomes both of the aforementioned challenges in interpreting phosphoproteomic data: modeling signaling events that are not captured by pathway databases and including non-phosphorylated proteins in the predicted pathway structures. TPS first transforms a PPI graph into a condition-specific network by using mass spectrometry data to filter out irrelevant interactions. Next, TPS finds the orientation and sign of edges in the condition-specific interaction graph based on the order of the phosphorylation events. Phosphorylation timing is modeled separately for each observed phosphorylation site on a protein. TPS systematically explores all signed, directed graphs that may explain how signaling messages
propagate from the stimulated source protein. Finally, TPS summarizes the valid graphs into a single aggregate network that explicitly tracks confident and ambiguous predictions. Our temporal pathway visualizer tool interactively visualizes the summary network alongside the temporal phosphoproteomic data. We study the dynamic signaling responses to human EGF stimulation and yeast osmotic stress. TPS recovers networks that explain how stimulus-responsive proteins are activated or inhibited via chains of physical interactions stemming from the upstream receptors. The highest-confidence TPS predictions are well supported by prior knowledge and consistent with kinase perturbations. These insights into well characterized human and yeast pathways exemplify how TPS can produce condition-specific pathway maps. To quantify global EGFR-mediated cellular signaling changes in HEK293 EGFR Flp-In cells with phosphoproteomics, we used in-line two-dimensional high-performance liquid chromatography separation coupled to tandem mass spectrometry. We stimulated the cells with EGF for 0, 2, 4, 8, 16, 32, 64, or 128 min and collected three biological replicates with two technical replicates each. We identified 1,068 phosphorylation sites that were detected in all biological replicates, which were then used for TPS network modeling. Phosphorylation intensities were well correlated across the three biological replicates. We assessed how much of the observed phosphorylation could be explained by existing pathway databases. To obtain a comprehensive view of EGFR-mediated signaling, we collected eight EGFR-related reference pathways. Despite the diversity of the pathway diagrams, they all fail to capture the vast majority of significant phosphorylation events triggered by EGF stimulation in our system. Among the 203 significantly differentially phosphorylated proteins, typically 5% or fewer are present in a reference pathway. 85% of phosphorylated proteins are missing from all of the EGFR-related pathway maps. Additionally, most of the proteins in the EGFR pathway maps are not differentially phosphorylated, reflecting a combination of relevant proteins that do not undergo this particular type of PTM, phosphorylation events missed by the mass spectrometry, and interactions that are relevant in some contexts, but not in EGFR Flp-In cells. The low overlaps agree with phosphoproteomic studies of other mammalian signaling pathways. Less than 10% of insulin-regulated proteins were members of a curated insulin pathway. In a study of T cell receptor signaling, only 21% of phosphorylated proteins were known to be involved in the pathway. Phosphosites regulated by transforming growth factor β stimulation were not enriched for the TGF-β pathway. Crosstalk does not explain the low coverage. Most phosphorylated proteins are not present in the EGFR pathways or any BioCarta, Reactome, or PID pathway, demonstrating the need for a context-specific representation of the EGFR signaling pathway. We applied TPS to model the dynamic signaling response to EGFR stimulation in EGFR Flp-In HEK293 cells. Our workflow consists of three major steps: preprocessing the protein-protein interaction network and temporal phosphorylation data; transforming temporal information, subnetwork structure, and prior knowledge into logical constraints; and summarizing all valid signaling pathway models to discover interactions with unambiguous directions and/or signs. We first discretized the time series phosphoproteomic data, using Tukey's honest significant difference test to determine
whether a peptide exhibits a significant increase, significant decrease, or no change in phosphorylation at each post-stimulation time point. 263 peptides, corresponding to 203 proteins, significantly change at one or more time points. Second, we used the prize-collecting Steiner forest network algorithm to link the phosphorylated proteins to EGF, the source of stimulation, weighting proteins based on their HSD test significance. PCSF identifies a PPI subnetwork of 316 nodes and 422 edges. This subnetwork comprises the interactions through which signaling messages are most likely to propagate. Third, TPS combined the discretized temporal activities of the 263 significantly changing peptides, the PCSF network, and prior knowledge to generate a summary of all feasible pathway models. Each type of input was translated into logical constraints, which were used to rule out pathway models that are not supported by the data. In contrast to the reference EGFR pathway diagrams, which capture at most 11% of the differentially phosphorylated proteins, the predicted network from TPS contains 83% of the responding proteins in its 311 nodes. Each of these proteins is linked to the EGF stimulation with high-confidence protein interactions and has timing that is consistent with the temporal phosphorylation changes of all other proteins in the pathway. These interactions are depicted as directed, signed edges in a graph, where the sign reflects that the proteins have the same or opposite activity changes. Of the 413 edges in the network, 202 have a consistent direction in all of the valid pathway models, a strong assertion about the confidence in these edge directions. Thirty-eight of the directed edges have a consistent sign as well. The PPI connections, phosphorylation timing, and prior knowledge of kinase-substrate interaction direction all play distinct, important roles in reducing the number of valid pathway models. The timing of protein activation and inactivation in the TPS pathway reveals a rapid spread of signaling post-stimulation. Although nearly all differentially phosphorylated proteins lie outside traditional EGFR pathway representations, 29 of the 273 phosphorylated proteins and 5 of the 38 unphosphorylated connective proteins in the TPS network are recognized as EGFR pathway members. We find strong evidence for many of the predicted directions as well. In total, 82 of 202 interaction directions are supported by our semi-automated evaluations using EGFR reference pathways, the PhosphoSitePlus input data, and natural language processing software. The vast majority of the remaining directions can neither be confirmed nor refuted. Our additional analyses show that TPS also recovers high-quality pathway models when applied to existing EGF response datasets with lower temporal resolution. The TPS network can be used to prioritize proteins and interactions for additional experimental testing. To illustrate this process, we focused on edges for which the direction or sign were predicted confidently and one of the two proteins is a member of an EGFR reference pathway. For each interaction, we inhibited the predicted upstream protein and measured the effect on the predicted target's phosphorylation using western blotting. From our list of ten candidate interactions, we selected the three edges for which the antibodies reliably produced clean and quantifiable bands at the right molecular weight: MAPK1-ATP1A1; ABL2 → CRK; and AKT1 → ZYX. These proteins are already known to physically interact. The novelty of the TPS
predictions is the interactions' relevance to the EGF response. The inhibitors used to inhibit the upstream proteins were SCH772984 for MAPK1, dasatinib for ABL2, and MK-2206 for AKT1. After serum starvation, the cells were treated with an inhibitor for one hour and then stimulated with EGF. We collected data at two time points based on the timing of the phosphorylation events in our mass spectrometry data. Lysates were then assayed by western blot to quantify the level of phosphorylation of the downstream protein. Dasatinib decreased phosphorylation of CRK pY221, consistent with the TPS pathway edge. Inhibiting AKT1 increased phosphorylation of Zyxin. In both cases, the predicted interaction direction is supported. MAPK1 inhibition increased ATP1A1 pY10 phosphorylation. The TPS model predicted an inhibitory interaction between these proteins, but the direction was ambiguous. Our data agree with the predicted edge sign and suggest that MAPK1 is upstream of ATP1A1. Truly validating the predicted edges would require more direct manipulation of the relevant kinases because Dasatinib is a multi-target inhibitor; SCH772984 inhibits both MAPK1 and MAPK3; and MK-2206 inhibits AKT1, AKT2, and AKT3. However, these inhibitor experiments demonstrate how TPS can generate testable predictions from global phosphoproteomic data. We compared TPS to two existing methods that combine PPI networks and time series data and a third that uses only the phosphorylation data. The dynamic Bayesian network infers posterior peptide-peptide interaction probabilities from time series data and network priors. TimeXNet formulates pathway prediction as a network flow problem. FunChisq uses an adapted chi-square test to detect directed relationships between phosphorylated proteins. Comparing the four predicted EGF response pathway models demonstrates the impact of the diverse algorithmic strategies. Almost all of the protein-protein edges are unique to a single method, and no edges are predicted by all four methods. Despite greater overlap among the predicted nodes, the four pathways are divergent. Because most of the differentially phosphorylated proteins are not members of any reference pathway, these pathways cannot be used to assess the overall quality of the predictions. The TimeXNet pathway, the largest of the three predicted networks, generally captures the most reference pathway interactions when ignoring edge direction and sign. However, a closer examination that accounts for the predicted interaction direction shows that TPS typically makes the fewest errors, even when controlling for the size of the predicted pathways. Although they are still not fully characterized, stress-response signaling cascades in the yeast Saccharomyces cerevisiae are better understood than their human counterparts and are not subject to cell-type-specific effects. Thus, we applied TPS to model the yeast osmotic stress response to assess its ability to recapitulate this frequently studied pathway and reveal additional interactions. The hyperosmotic stress response is primarily controlled by the high osmolarity glycerol pathway. Kanshin et al.
profiled the rapid response to NaCl, an osmotic stressor, measuring phosphorylation changes for 60 s post-stimulation at uniform 5-s intervals. They identified 1,596 phosphorylated proteins, including 1,401 dynamic phosphopeptides on 784 proteins based on their fold changes in the salt stress time series with respect to a control. We used these data to construct a TPS pathway model of the early osmotic stress response. The TPS osmotic stress pathway contains 216 proteins and 287 interactions. Thirty-six of these proteins have been previously annotated as osmotic stress pathway proteins. Focusing on the subset of interactions that connect known HOG pathway members reveals that many of the edges connecting them are correct as well. TPS recovers the core part of the Kyoto Encyclopedia of Genes and Genomes high-osmolarity pathway, including the interactions Sho1 → Ste50, Sho1 → Cdc24, Sho1 → Pbs2, Ssk2 → Pbs2, and Pbs2 → Hog1. In addition, it correctly places Hog1 as the direct regulator of Rck2 and the transcription factors Hot1, Msn2, and Sko1. TPS identifies Sch9 as an additional regulator of Sko1. Following hyperosmotic shock, Hog1 is recruited to Fps1, consistent with the TPS prediction. The predicted feedback from Hog1 to Ste50 is also well supported in osmotic stress. Many predicted interactions that deviate from the canonical HOG pathway model can be attributed to the input phosphorylation data and background network, not the TPS algorithm. After confirming the TPS osmotic stress model agrees well with existing models, we investigated novel candidate pathway members. The TPS model captured the cascade Hog1 → Rck2 → Eft2 and predicted additional Rck2 targets. To test these predictions, we compared them to a recent phosphoproteomic study of an RCK2 mutant subjected to osmotic stress. All four proteins that TPS predicts are activated by Rck2 have defective phosphorylation on at least one phosphosite in rck2Δ five minutes after osmotic insult. Thus, Rck2 likely directly phosphorylates Fpk1, Pik1, Rod1, and YLR257W upon osmotic stress, as TPS predicts. In addition to the four activated substrates, TPS predicts that Rck2 directly regulates seven additional proteins with an ambiguous sign. Three of these seven predicted targets—Mlf3, Sla1, and YHR131C—have a phosphosite that is dependent on Rck2 during osmotic stress, supporting the TPS predictions. The three protein-protein edge signs are ambiguous because some phosphosites on the proteins exhibit a significant increase in phosphorylation and others decrease. Similarly, we verified that 67 out of 91 predicted Cdc28 targets have at least one phosphosite with defective phosphorylation following Cdc28 inhibition. The high-quality TPS osmotic stress pathway demonstrates the algorithm is broadly useful beyond our own EGF stimulation study. It not only recovers many major elements of the classic HOG pathway representation but also prioritizes condition-specific kinase targets that are supported by independent perturbations. The pathway structure illuminated by the phosphorylated proteins in our EGFR Flp-In cells differs considerably from the simple representations in pathway databases. Interpreting signaling data requires reconstructing models specific to the cells, stimuli, and environment being studied. TPS combines condition-specific information—time series phosphoproteomic data and the source of stimulation—with generic PPI networks and optional prior knowledge to produce custom pathway representations. The predicted EGFR signaling network highlights alternative
connections to classic EGFR pathway kinases and extends the pathway with interactions that are supported by prior knowledge in other contexts or kinase inhibition. Combining different constraints on pathway structure from PPI network topology and temporal information is computationally challenging, and we identify predictions that can be obtained only through joint reasoning with all available data. TPS integrates information from PPI networks, phosphosite-specific time series phosphoproteomic data, and prior knowledge by introducing a powerful constraint-based approach. Existing classes of signaling pathway inference algorithms do not offer the same functionality as TPS. Methods that identify dependencies in phosphorylation levels omit pathway members without observed phosphorylation changes. TPS does not require perturbations to reconstruct pathways. Participants in the HPN-DREAM network inference challenge inferred signaling networks from time series data for tens of phosphoproteins, but the top methods either did not scale to our dataset or did not perform well. Other algorithms that integrate temporal information with PPI networks do not evaluate and summarize all pathway models that are supported by the network and phosphorylation timing constraints. This summarization strategy is what enables TPS to scale to solution spaces that are substantially larger than those typically considered by declarative computational approaches. The Supplemental Experimental Procedures contain additional related software beyond these representative examples. TPS offers a powerful framework for combining multiple types of declarative constraints to generate condition-specific signaling pathways. The constraint-based approach could be extended to include additional types of data, such as perturbation data that link kinase inhibition or deletion to phosphorylation changes. Both temporal and kinase perturbation phosphoproteomic data are available for the yeast osmotic stress response. Modeling multiple related conditions could allow TPS to learn not only the signs of interactions but also the logic employed when multiple incoming signals influence a protein. TPS could also accommodate user-defined assumptions or heuristics about pathway properties, such as restrictions on pathway length. Such complex constraints cannot be readily included in approaches like DBN or TimeXNet. For scalability, TPS requires hard logical constraints instead of probabilistic constraints. Discrete logic models for noisy biological data require modeling assumptions in order to balance model ambiguity and expressiveness. These tradeoffs and assumptions provide additional opportunities to modify and generalize the TPS model, for instance, a potential TPS extension to infer feedback in networks that is described in the Supplemental Experimental Procedures. As proteomic technologies continue to improve in terms of depth of coverage and temporal resolution, the need to systematically interpret these data will likewise grow. TPS enables reasoning with temporal phosphorylation changes and physical protein interactions to define what drives the vast protein modifications that are not represented by existing knowledge in pathway databases. TPS receives three types of input: a time series mass spectrometry phosphoproteomic analysis of a stimulus response; an undirected PPI subnetwork; and optional prior knowledge about interaction directions. The undirected graph is obtained through a static analysis in which the significantly changing proteins are overlaid on a
PPI network. A network algorithm recovers connections among the affected proteins, removing interactions that do not form critical connections between these proteins and nominating hidden proteins that do, even if they are not themselves phosphorylated. We recommend PCSF to select the PPI subnetwork but also successfully applied other methods. TPS transforms the input data into logical constraints that determine which pathway models can explain the observed phosphoproteomic data. Topological constraints stem from the filtered PPI network and require that phosphorylated proteins are connected to the source of stimulation, such as EGF, by a cascade of signaling events. These signaling events propagate along the edges of the filtered PPI network. Temporal constraints ensure that the order of the signaling events is consistent with the timing of the phosphorylation changes. If protein B is downstream of protein A on the pathway, B cannot be activated or inhibited before A. Prior knowledge constraints guarantee that if the direction or sign of an interaction is known in advance, the pathway may not contain the edge with the opposite direction or sign. Typically, many possible pathways meet all constraints, so TPS summarizes the entire collection of valid pathways and identifies interactions that are used with the same direction or sign across all models. A symbolic solver reasons with these logical constraints and produces the pathway summary without explicitly enumerating all possible pathway models. To illustrate this process, consider a hypothetical signaling pathway that contains a receptor node A and six other downstream proteins that respond when A is stimulated. The first input is time series mass spectrometry data measuring the response to stimulating the receptor, which quantifies phosphorylation activity for six proteins. Node B is absent from the phosphorylation data because it is post-translationally modified, but not phosphorylated, by A. The second input is an undirected protein-protein interaction graph. These interactions are detected independently of the stimulation condition but filtered based on their presumed relevance to the responding proteins with an algorithm such as PCSF. By combining phosphorylation data with the PPI subnetwork, this topology can recover "hidden" components of the pathway that are not phosphorylated. Finally, TPS accepts prior knowledge of directed kinase-substrate or phosphatase-substrate interactions, such as the edge C → D.
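To illustrate the constraint-based reasoning just described, the sketch below enumerates every source-rooted tree model of a toy network that satisfies topological, temporal and prior-knowledge constraints, then reports which edge directions are the same across all surviving models. The network, event times and prior edge are invented for illustration (they are not the example of Figure 6), edge signs are omitted for brevity, and brute-force enumeration is only feasible for a toy graph; TPS itself reasons symbolically with an SMT solver rather than enumerating models.

```python
# Hypothetical toy inputs: an undirected PPI subnetwork, a source node, the
# discretized activation time of each observed node (smaller = earlier), and
# one known kinase -> substrate direction.  Node "B" is unobserved.
from itertools import product

EDGES = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"),
         ("D", "E"), ("D", "F"), ("E", "G"), ("F", "G")]
SOURCE = "A"
TIME = {"A": 0, "C": 1, "D": 2, "F": 3, "E": 4, "G": 5}
PRIOR = {("C", "D")}  # C -> D may not be used in reverse

NODES = sorted({n for e in EDGES for n in e})
NEIGH = {n: sorted({v for e in EDGES if n in e for v in e if v != n}) for n in NODES}

def valid(parent):
    """parent maps each non-source node to its upstream neighbour, or None."""
    for child, par in parent.items():
        if par is None:
            continue
        # Prior-knowledge constraint: never use a known directed edge in reverse.
        if (child, par) in PRIOR:
            return False
        # Temporal constraint: a node cannot respond before its parent does.
        if child in TIME and par in TIME and TIME[child] <= TIME[par]:
            return False
    # Topological constraint: every used edge lies on a path from SOURCE and
    # every observed node is reachable from SOURCE.
    reach, changed = {SOURCE}, True
    while changed:
        changed = False
        for child, par in parent.items():
            if par in reach and child not in reach:
                reach.add(child)
                changed = True
    used_ok = all(c in reach for c, p in parent.items() if p is not None)
    return used_ok and all(n in reach for n in TIME)

# Enumerate every parent assignment (a candidate tree rooted at SOURCE) and
# record, for each undirected edge, the directions seen across valid models.
others = [n for n in NODES if n != SOURCE]
summary = {}
for choice in product(*[[None] + NEIGH[n] for n in others]):
    parent = dict(zip(others, choice))
    if not valid(parent):
        continue
    for child, par in parent.items():
        if par is not None:
            summary.setdefault(frozenset((par, child)), set()).add((par, child))

for edge, dirs in sorted(summary.items(), key=lambda kv: sorted(kv[0])):
    status = "unambiguous" if len(dirs) == 1 else "ambiguous"
    print(sorted(edge), status, sorted(dirs))
```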
Each of these inputs can be used individually to restrict the space of plausible pathway models. Reasoning about them jointly produces more unambiguous predictions than considering each resource separately. To formulate temporal constraints, we transform the time series data into a set of discrete signaling events for each node, taking an event-based view of the signaling process. We determine time points for each node that correspond to statistically significant phosphorylation changes. These discrete events are then used to rule out network models that contain signed, directed paths that violate the temporal ordering of these events no matter which event is chosen for each node. For example, there can be no edge from E to D in any model because D is activated strictly earlier than E regardless of whether E is activated at 1 to 2 min or 2 to 5 min. Because the time series data measure the response to a specific stimulus, we also devise topological constraints that ensure all signaling activity originates from this source. In our example, this asserts that all edges in a solution network must be on a directed path that starts at node A. Finally, our third input, the set of directed interactions, requires that no model violates this prior knowledge by including an edge from D to C. Figure 6 shows the pathway models that can be learned using each type of constraint alone and in combination. When we enforce only temporal constraints, which corresponds to reasoning locally with phosphorylation data for pairs of nodes to see whether one signaling event strictly precedes another, we obtain a single precise prediction from D to E. The topological constraints by themselves are sufficient to orient edges from the source A and from node D because D forms a bottleneck. The prior knowledge constrains the direction of the edge from C to D, but its sign remains unknown. Jointly enforcing all of these constraints has a nontrivial impact on the solution space. For instance, we can infer that F must activate G. If the edge direction was reversed, F would be downstream of E, but the data show that activation of F precedes activation of E. The final model that includes all available data closely resembles the true pathway structure. The edges incident to node B are ambiguous, and the interaction between E and G cannot be uniquely oriented, but all other interactions are recovered. The summary for the combination of all constraints produces precise predictions that cannot be obtained by intersecting the summaries for the individual types of constraints. For instance, TPS infers that the relationship between F and G must be an activation from F to G because the sole way G can reach F in a tree rooted at A is through E, but F's activation precedes E's. This inference cannot be made by combining the models in panels A, B, and C. The simple example also highlights the differences in how the TPS constraint-based approach improves upon related methods based on correlation or the time point of maximum phosphorylation change. See also Figure S7. TPS takes the undirected network from PCSF and transforms it into a collection of signed, directed graphs that explain dynamic signaling events. To find pathway models that agree with the phosphorylation dynamics, TPS first performs a discretization step that determines time intervals in which each protein may be differentially phosphorylated. The discrete set of activation and inhibition state changes is then used to rule out networks that violate the observed temporal behavior. The transformation
consists of finding time points for each profile where phosphorylation significantly differs from either the baseline or the previous time point. In the baseline comparison, this time point is accepted only if it is not preceded by an earlier, larger change with respect to the baseline. If there is a hypothetical phosphorylation level at which the protein is activated and acts upon its downstream targets, a signaling event occurs only at the first time this threshold value is reached. This criterion does not apply when comparing to the phosphorylation level at the previous time point. TPS supports missing values in the time series data. The time points for which a phosphopeptide is missing data are assumed to be insignificant in the discretized data. In our EGF study, we use Tukey's HSD test to find significant differential phosphorylation. If comparing a time point to the baseline or the previous measurement produces a p value below a user-defined threshold, the time point is marked as a possible activation or inhibition event depending on whether the phosphorylation level increased or decreased relative to the earlier time point to which it was compared. We assume at most one signaling event happens for every node across time points. Our logical solver can explore all possible activation and inhibition events for every node, but the data are often too ambiguous to allow multiple events per node given a single type of stimulation. In the absence of perturbation experiments that test the pathway behavior under different initial conditions, it is impossible to distinguish between different Boolean logic functions governing the behavior of each node and whether a node responds to one or multiple regulators. We therefore formalize pathway models as signed, directed trees, which provide a sufficient basis for explaining the dynamic system behavior under these assumptions. TPS transforms each input into a set of constraints that declaratively specify valid signed, directed tree models that agree with the data. These constraints are expressed as Boolean formulas with linear integer arithmetic, ranging over symbolic variables that represent choices on edge signs and orientations as well as how the temporal data are interpreted. The constraints can then be solved by a satisfiability modulo theories solver to find a network model that satisfies all constraints along with dynamic timing annotations for each interaction in the network. Using constraints, we restrict the possible orientation and sign assignments to signed, directed tree networks rooted at the source node. Furthermore, constraints express how every tree model must agree with the time series data by establishing a correspondence between the order of nodes on tree paths and their temporal order of activity according to the time series data. Finally, we declaratively rule out models that contradict the prior knowledge of kinase-substrate interaction directions. These constraints define a very large space of candidate networks that agree with the data. TPS can reason with large state spaces by summarizing all valid pathways instead of explicitly enumerating them. A summary network is the graph union of all signed, directed tree networks that satisfy the stated constraints. Timing annotations are summarized by computing the set of possible annotations for each node over all solutions. In the graph union, some edges have a unique direction and sign combination, which signifies that this was the only observed signed, directed edge between two given nodes across
the solution space. However, this does not guarantee that the edge between the interacting proteins must be present in all valid pathway models. Ambiguous directions or signs in the summary mean that there are valid models with different direction or sign assignments. We compute the summary graph by performing a linear number of SMT solver queries in terms of the size of the input graph. Each query asks whether at least one signed, directed model contains a specific signed, directed edge. Because individual queries are computationally cheap, we can summarize the entire solution space without enumerating all models, which is typically intractable. The summary graph over-approximates the solution space. It is not possible to recover the exact set of valid models from the summary, only a superset of the models. This tradeoff must be made in order to analyze such a large state space. TPS uses the Z3 theorem prover via the ScalaZ3 interface to solve the constraints it generates. It also provides a custom data flow solver specifically for computing pathway summaries. The custom solver and the symbolic solver produce identical pathway summaries. However, the custom solver is much more scalable because it is specifically designed to address our synthesis task and can handle networks containing more than a hundred thousand edges and phosphosites. We stimulated EGFR Flp-In cells with 23.6 nM EGF for 0, 2, 4, 8, 16, 32, 64, or 128 min. Cells were lysed and proteins were extracted, denatured, alkylated, and trypsin digested. Following digestion, the tryptic peptides were either lyophilized, stored for future use, or directly processed for mass spectrometry analysis. To quantify dynamic changes in protein phosphorylation, all peptides were isobarically labeled, enriched using phosphotyrosine-specific antibodies and/or immobilized metal affinity chromatography, and analyzed on a Thermo Fisher Velos Orbitrap mass spectrometer in data-dependent acquisition mode. We determined peptide sequences using Comet and quantified the iTRAQ signals with Libra. Across three biological replicates, we quantified 5,442 unique peptides in at least one replicate and 1,068 peptides in all replicates and used Tukey's honest significant difference for statistical testing. See the Supplemental Experimental Procedures for details and data processing. Also see our p value sensitivity analysis. We used 25 nM Dasatinib, 400 nM SCH772984, and 800 nM MK-2206 for kinase inhibition and antibodies pY221-CRK, pY10-ATP1A1, and pS142/143-Zyxin for western blotting. We normalized loading with β-actin and imaged blots with an Odyssey Infrared Imaging System. We used the Omics Integrator PCSF implementation with msgsteiner to recover the most relevant PPIs connecting the phosphorylated proteins. The Supplemental Experimental Procedures describe how we selected parameters, ran PCSF multiple times to identify parallel connections between proteins, generated prizes from the phosphoproteomic data, and created a weighted interaction network from iRefIndex and PhosphoSitePlus. | We present a method for automatically discovering signaling pathways from time-resolved phosphoproteomic data. The Temporal Pathway Synthesizer (TPS) algorithm uses constraint-solving techniques first developed in the context of formal verification to explore paths in an interaction network. It systematically eliminates all candidate structures for a signaling pathway where a protein is activated or inactivated before its upstream regulators.
The algorithm can model more than one hundred thousand dynamic phosphosites and can discover pathway members that are not differentially phosphorylated. By analyzing temporal data, TPS defines signaling cascades without needing to experimentally perturb individual proteins. It recovers known pathways and proposes pathway connections when applied to the human epidermal growth factor and yeast osmotic stress responses. Independent kinase mutant studies validate predicted substrates in the TPS osmotic stress pathway. Köksal et al. present a computational technique, the temporal pathway synthesizer (TPS), that combines time series global phosphoproteomic data and protein-protein interaction networks to reconstruct the vast signaling pathways that control post-translational modifications. |
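To make the event-discretization step described above more concrete, the following Python sketch labels each time point of one phosphopeptide profile as a possible activation or inhibition event. The function name, the array-based inputs and the way the two significance tests are combined are illustrative assumptions, not the actual TPS implementation (which applies Tukey's HSD and passes the resulting events to an SMT solver).

```python
import numpy as np

def discretize_profile(levels, p_vs_base, p_vs_prev, alpha=0.05):
    """Label each time point of a phosphorylation profile as a possible
    activation (+1), inhibition (-1) or no event (0), using the two
    comparisons described in the text: versus the baseline (time 0) and
    versus the previous time point. Missing values (NaN) are treated as
    insignificant."""
    events = np.zeros(len(levels), dtype=int)
    base = levels[0]
    for t in range(1, len(levels)):
        if np.isnan(levels[t]):
            continue  # missing data -> no event at this time point
        delta_prev = levels[t] - levels[t - 1]
        delta_base = levels[t] - base
        # Baseline comparison counts only if no earlier, larger change
        # with respect to the baseline precedes this time point.
        earlier = np.abs(levels[1:t] - base)
        earlier = earlier[~np.isnan(earlier)]
        first_largest = earlier.size == 0 or abs(delta_base) >= earlier.max()
        if p_vs_prev[t] < alpha and not np.isnan(levels[t - 1]):
            events[t] = 1 if delta_prev > 0 else -1
        elif p_vs_base[t] < alpha and first_largest:
            events[t] = 1 if delta_base > 0 else -1
    return events
```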
313 | Laboratory spectra of hot molecules: Data needs for hot super-Earth exoplanets | There are vast areas of the Universe thinly populated by molecules which are cold. However, there are also huge numbers of important astronomical bodies which support hot or highly-excited molecules. It is the spectroscopic demands of studying these hot regimes we focus on in this review. We will pay particular attention to the demands on laboratory spectroscopy of a recently identified class of exoplanets known as hot rocky super-Earths or, more colourfully, lava and magma planets. These planets orbit so close to their host stars that their apparent temperatures are high enough that their rocky surfaces should melt or even vaporise. Little is known about these planets at present: much of the information discussed below is derived from models rather than observation. Of course hot and cold are relative terms; here we will take room temperature as the norm, which means, for example, that so-called cool stars, which typically have temperatures in the 2000–4000 K range, are definitely hot. Much of the cold interstellar medium is not thermalised and excitation, for example by energetic photons, can lead to highly excited molecules. This can be seen, for example, from maser emission involving transitions between highly excited states, which is observed from a range of molecules in a variety of interstellar environments. Similarly, the comae of comets are inherently cold but, when bathed in sunlight, can be observed to emit from very high-lying energy levels. Turning to exoplanets, at present it even remains unclear how to conclusively identify which planets of a few to ten Earth masses are actually rocky. From density observations some of them appear to be rocky, or with a fraction of ice/iron in the interior. Others suggest a structure and composition more similar to gas giants like Neptune. Density alone is not a reliable parameter to distinguish among the various cases. In addition, there is a class of ultra-short period (USP) exoplanets which are thought to be undergoing extreme evaporation of their atmospheres due to their close proximity to their host star. These objects are undoubtedly hot but as yet there are no mass measurements for USP planets. Spectroscopic investigation of the atmospheres of super-Earths and related exoplanets holds out the best prospect of learning about these alien worlds. The prospects of observing the atmospheric composition of the transiting planets around bright stars make us confident we will be in a much better position in a few years' time with the launch of the James Webb Space Telescope (JWST) and future dedicated exoplanet-characterization missions. From the laboratory perspective, the observation of hot or highly excited molecules places immense demands on the spectroscopic data required to model or interpret these species. As discussed below, a comprehensive list of spectroscopic transitions, a line list, for a single molecule can contain significantly more than 10^10 lines. This volume of data points to theory as the main source of these line lists. A line list consists of an extensive list of transition frequencies and transition probabilities, usually augmented by other properties such as lower state energies, degeneracy factors and partition functions to give the temperature dependence of each line and, ideally, pressure-broadening parameters to give the line shape. For radiative transport models of the atmospheres of hot bodies, completeness of the line list, which gives the opacity of the species, is more important than high accuracy for individual line positions.
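To illustrate how the lower state energies and partition functions mentioned above give each line its temperature dependence, here is a minimal Python sketch applying the standard HITRAN-style scaling of a reference line intensity to another temperature; the numbers in the example call are placeholders, not recommended values.

```python
import numpy as np

C2 = 1.4387769  # second radiation constant hc/k_B in cm K

def scale_intensity(S_ref, nu, E_lower, Q_ref, Q_T, T, T_ref=296.0):
    """Rescale a line intensity S_ref tabulated at T_ref to temperature T,
    using the transition wavenumber nu (cm-1), the lower state energy
    E_lower (cm-1) and the partition functions Q at the two temperatures.
    The three factors are the partition-function ratio, the Boltzmann
    population of the lower state, and stimulated emission."""
    boltz = np.exp(-C2 * E_lower / T) / np.exp(-C2 * E_lower / T_ref)
    stim = (1.0 - np.exp(-C2 * nu / T)) / (1.0 - np.exp(-C2 * nu / T_ref))
    return S_ref * (Q_ref / Q_T) * boltz * stim

# Example (placeholder values): a line at 1500 cm-1 with E'' = 1000 cm-1
# scaled from 296 K to 2000 K.
S_hot = scale_intensity(1.0e-22, 1500.0, 1000.0, Q_ref=175.0, Q_T=5000.0, T=2000.0)
```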
The same holds for retrievals of molecular abundances in exoplanets based on the use of transit spectroscopy which, thus far, has largely been performed using observations with fairly low resolving power. However, the situation is rather different with the high-dispersion spectroscopy developed by Snellen and co-workers, which is complementary to transit spectroscopy. This technique tracks the Doppler shifts of a large number of spectroscopic lines of a given species by cross-correlating them with reference laboratory data on the line positions. This exciting but challenging technique requires precise frequencies, with R ≥ 100,000, as well as good spectroscopic coverage; the available laboratory data are not always precise enough for this technique to work. This review is organised as follows. First we summarise what is known about hot rocky super-Earth exoplanets. We then consider the laboratory techniques being used to provide spectroscopic data to probe the atmospheres of these bodies and others with similar temperatures. In the following section we summarise the spectroscopic data available, making recommendations for the best line lists to use for studies of hot bodies. Molecules for which little data appear to be available are identified. Finally we consider other issues associated with spectroscopic characterization of lava planets and prospects for the future. As of the end of 2016 there are well over 100 detected exoplanets which are classified as hot super-Earths. These planets are ones which are considered to be rocky, that is with terrestrial-like masses and/or radii, see e.g. Seager et al., and which are hot enough for, at least on their dayside, their rock to melt. Only a handful of these planets are amenable to spectroscopic characterization with current techniques, which makes these few objects the ones suitable for atmospheric follow-up observations. All these rocky planets have very short orbits, meaning that they are close to their star and hence have hot atmospheres. Some of these planets are evaporating, with water vapour as a major constituent of the atmosphere. The atmospheres of these planets are thought to have a lot in common with the young Earth, and the atmosphere of a rocky planet immediately after a major impact is expected to be similar. However, we note that as they are generally tidally-locked to their host star, hot rocky super-Earths will generally have significant day-night temperature gradients. According to the NASA Exoplanets Archive, key hot exoplanets with masses and radii in the rocky-planet range include CoRoT-7b, Kepler-10b, Kepler-78b, Kepler-97b, Kepler-99b, Kepler-102b, Kepler-131c, Kepler-406b, Kepler-406c, and WASP-47e, with Kepler-36b and Kepler-93b being slightly cooler than 1673 K. Most of the rocky exoplanets that have so far been studied are characterized by the high temperature of their atmospheres, e.g., about 1500 K in Kepler-36b and Kepler-93b, 2474 ± 71 K in CoRoT-7b (Leger et al.), 2360 ± 300 K in 55 Cnc e, and around 3000 K in Kepler-10b. Somewhat cooler but still hot rocky planets include temperatures of 700 K in Kepler-37b, 750 K in Kepler-62b, 580 K in Kepler-62c, and 400–500 K in GJ 1214b. If the main constituent of these atmospheres is steam, it will heat the surface of a planet to the melting point of rock. For example, the continental crust of a rocky super-Earth should melt at about 1200 K, while a bulk silicate Earth melts at roughly 2000 K. The gases are released from the rock
as it heats up and melts, including silica and other rock-forming elements, and is then dissolved in steam. The main greenhouse gases in the atmospheres of hot rocky super-Earths are steam and carbon dioxide, which lead to the development of a massive steam atmosphere closely linked to a magma ocean at the planetary surface. At temperatures up to 3000 K, and prior to significant volatile loss, the atmospheres of rocky super-Earths are thought to be dominated by H2O and CO2 for pressures above 1 bar, see Schaefer et al. These objects will necessarily have spectroscopic signatures which differ from those of cooler planets. At present, interpretation of such signatures is severely impacted by the lack of the corresponding spectroscopic data. For example, recent analysis of the transit spectrum of 55 Cnc e by Tsiaras et al. between 1.125 and 1.65 µm made a tentative detection of hydrogen cyanide in the atmosphere but could not rule out the possibility that this signature is actually in part or fully due to acetylene, because of the lack of suitable laboratory data on the hot spectrum of HCCH. The massive number of potential absorbers in the atmospheres of these hot objects also has a direct effect on the planetary albedo as well as on the cooling and hence evolution of the young hot objects; comprehensive data are also crucial to model these processes. Atmospheric retrievals for hot Jupiter exoplanets such as HD 209458b, GJ 1214b and HD 189733b show that transit observations can help to establish the bulk composition of a planet. However, it is only with good predictions of likely atmospheric composition, allied to a comprehensive database of spectral signatures and proper radiative transfer treatment, that the observed spectra can be deciphered. The completeness of the opacities plays a special role in such retrievals: missing or incomplete laboratory data when analysing transit data will lead to overestimates of the corresponding absorbing components. The typical compositions of steam atmospheres have been considered by Schaefer et al., with an example for low atmospheric pressure shown in Fig. 1. The chemical processes on these objects are very similar to those on the young Earth and have been studied in great detail. The major gases in steam atmospheres with pressures above 1 bar and surface temperatures above 2000 K are predicted to be H2O, CO2, O2, HF, SO2, HCl, OH and CO (with continental crust magmas) and H2O, CO2, SO2, H2, CO, HF, H2S, HCl and SO (for bulk silicate Earth magmas). Other gases thought to be present, but with smaller mole fractions, include NaCl, NO, N2, SO3, and Mg(OH)2. At temperatures above about 1000 K, sulfur dioxide would enter the atmosphere, which leads the exoplanet's atmosphere to be like Venus's, but with steam. SO2 is a spectroscopically important molecule that is generally not included in models of terrestrial exoplanet atmospheres. In high concentrations, more than one spectral feature of SO2 is detectable even at low resolution between 4 and 40 µm, see also Fig. 2. This suggests that SO2 should be included when generating models of atmospheric spectra for terrestrial exoplanets. At high temperatures and low pressures SO2 dissociates to SO. Other atmospheric constituents of Venus-like exoplanets include CO2, CO, SO2, OCS, HCl, HF, H2O and H2S. Kaltenegger et al.
studied vulcanism of rocky planets and estimated the observation time needed for the detection of volcanic activity. The main sources of emission were suggested to be H2O, H2, CO2, SO2, and H2S. Again, SO2 should be detectable at abundances of a few ppm for wavelengths between 4 and 40 µm. Apart from SO2, significant amounts of CH4 and NH3 are expected, especially in BSE atmospheres at low temperatures. Although photochemically unstable, these gases are spectroscopically important and should be considered in spectroscopic models of atmospheres. When sparked by lightning, they combine to form amino acids, as in the classic Miller-Urey experiment on the origin of life. Models of exoplanets suggest that NO and NO2, as well as a number of other species, are likely to be key products of lightning in a standard exoplanet atmosphere. Further thermochemical and photochemical processing of the quenched CH4 and NH3 can lead to significant production of HCN. It has been suggested that HCN and NH3 will be important disequilibrium constituents of exoplanets with a broad range of temperatures, which should not be ignored in observational analyses. Ito et al. suggested that SiO absorption dominates the UV and IR wavelength regions, with prominent absorption features at around 0.2, 4, 10 and 100 µm, see Fig. 3. In particular, in the cases of Kepler-10b and 55 Cnc e, those features are potentially detectable by the space-based observations that should be possible in the near future. Models suggest that a photon-limited, JWST-class telescope should be able to detect SiO in the atmosphere of 55 Cnc e with 10 hours of observations using secondary-eclipse spectroscopy. Such observations have the potential to study lava planets even with clouds and lower atmospheres. Other abundant species that may contribute to the transmission spectrum include CO, OH, and NO at high temperatures. These molecules should be present in a planet with an O2-rich atmosphere and magma oceans, such as was recently suggested as the composition of the super-Earth GJ 1132b by Schaefer et al. It is suggested that for atmospheres of hot rocky super-Earths with high temperature and low pressure almost all rock is vaporised, while at high pressure much of this material is in the condensed phase. Most elements found in rocks are expected to be soluble in steam, including Mg, Si, and Fe from SiO2-rich silicates and MgO- and FeO-rich silicates. This can lead to gases such as Si(OH)4, Mg(OH)2, Fe(OH)2, Ni(OH)2, Al(OH)3, Ca(OH)2, NaOH, and KOH. Silica dissolves in steam primarily via formation of Si(OH)4, while MgO in steam leads to production of gaseous Mg(OH)2, see, for example, Alexander et al. However, it seems likely that at the temperatures under consideration many of these more complex species would fragment into diatomic or triatomic species, and water. The predicted vaporised constituents of the steam atmosphere at higher temperatures include Fe and FeO (from Fe(OH)2 fragmentation), MgO, titanium dioxide TiO2, PO2 and then PO, MnF2 and MnO, CrO2F, CrO2, and CrO, Ca(OH)2 and AlO. TiO2 can lead to TiO, which is well-known to be a source of major absorption from the near-infrared to the optical spectral regions of M dwarfs. There have been attempts to detect, and a recent reported detection of, TiO in exoplanet atmospheres. Whether complex polyatomic molecules like Fe(OH)2, Ca(OH)2, CrO2F and P2O5 will survive at T > 1000 K is questionable. It should be noted that it is the lower pressure regimes that hold out the best prospects for analysis using transit spectroscopy, as the high pressures will tend to result in opaque
atmospheres.55 Cnc e is currently the most attractive candidate magma planet for observations; its atmosphere is amenable to study using secondary-eclipse spectroscopy and high-dispersion spectroscopy observations.It is thought that during its formation of the atmosphere of the early Earth was dominated by steam which contained water-bearing minerals.As Lupu et al. pointed out, modern state-of-the-art radiative transfer in runaway and near-runaway greenhouse atmospheres are mainly based on the absorption of H2O and CO2, with rather crude description of hot bands and neglecting other opacity sources.It is important, however, that the line-by-line radiative transfer calculations of outgoing longwave radiation include greenhouse absorbers of a rocky exoplanet atmosphere affecting its cooling.Discussion of such data is given below.It should be noted that clouds and hazes can lead to flat, featureless spectra of a super-Earth planet, preventing detection of some or all of the spectral features discussed above.As Morley et al. argued, it is however possible to distinguish between cloudy and hazy planets in emission: NaCl and sulphide clouds cause brighter albedos with ZnS known to have a distinct feature at 0.53 µm.A summary of the molecules important for the spectroscopy of hot melting planets is given in Table 1.The following sections in turn discuss how suitable spectroscopic data can be assembled and the present availability of such data required for retrievals from the atmospheres of rocky super-Earths which are essential for analysis of the exoplanetary observations.Exactly these types of hot rocky objects will be the likely targets of NASA’s JWST and other exoplanet transit observations.Models suggest that magma-planet clouds and lower-atmospheres can be observed using secondary-eclipse spectroscopy and that a photon-limited JWST-class telescope should be able to detect SiO, Na and K in the atmosphere of 55 Cnc e with 10 hours of observations.Furthermore, albedo measurements are possible at lower signal to noise; they may correspond to the albedo of clouds, or the albedo of the surface.High quality is also needed for complementary high-dispersion spectroscopic.For example TiO could not be detected in the optical transmission spectrum of HD 209458b due to poor quality of the TiO spectral data.The spectroscopic data required to perform atmospheric models and retrievals comprise line positions, partition functions, intensities, line profiles and the lower state energies E′′, which are usually referenced to as ‘line lists’.Given the volume of data required for construction of such line lists is far from straightforward.When considering how this is best done it is worth dividing the systems into three classes:Diatomic molecules which do not contain a transition metal atom which we will class as simple diatomics;,Transition metal containing diatomics such as TiO;,For simple diatomics it is possible to construct experimental line lists which cover the appropriate ranges in both lower state energies and wavelength.There are line lists available which are based entirely on direct use of experimental data or use of empirical energy levels and calculated, ab initio, dipole moments and hence transition intensities.It is also possible to generate such line lists by direct solution of the nuclear motion Schrödinger equation for a given potential energy curve and dipole moment function.This means that while there are still simple diatomics for which line lists are needed, it should be possible to 
generate them in a reasonably straightforward fashion.When the diatomic contains a transition metal, things are much less straightforward.These systems have low-lying electronic states and it is necessary to consider vibronic transitions between several states plus couplings and transition dipole moments between the states.The curves required to give a full spectroscopic model of systems for which vibronic transitions are important are summarized in Fig. 5 for aluminium monoxide, AlO.AlO is a relatively simple system which only requires consideration of three electronic states.This should be contrasted with the yet unsolved case of iron monoxide, FeO, where there are more than fifty low-lying electronic states which means that a full spectroscopic model will require consideration of several hundred coupling curves and a similar number of transition dipoles.Experimentally, open shell transition metal systems are challenging to prepare and the resulting samples are usually not thermal which makes it hard to obtain absolute line intensities.Under these circumstances it is still possible to measure decay lifetimes which are very useful for validating theoretical models.Lifetime measurements are currently rather rare and we would encourage experimentalists to make more of these for transition methal systems.Furthermore, the many low-lying electronic states are often strongly coupled and interact, which makes it difficult to construct robust models of the experimental data.From a theoretical perspective, the construction of reliable potential energy curves and dipole moment functions remains difficult with currently available ab initio electronic structure methods.The result is that even for important systems such as TiO, well-used line lists are known to be inadequate.For polyatomic molecules there have been some attempts to construct line lists directly from experiment, for example for ammonia and methane.However, this process is difficult and can suffer from problems with both completeness and the correct inclusion of temperature dependence.The main means of constructing line lists for these systems has therefore been variational nuclear motion calculations.There are three groups who are systematically producing extensive theoretical line lists of key astronomical molecules.These are the NASA Ames group of Huang et al., the Reims group of Tyuterev, Nikitin and Rey who are running the TheoReTS project and our own ExoMol project.While there are differences in detail, the methodologies used by these three groups are broadly similar.Intercomparison for molecules such as SO2, CO2 and CH4, discussed below, are generally characterized by good overall agreement between the line lists presented by different groups with completeness and coverage being the main features to distinguish them.Thus, for example, both the TheoReTS and ExoMol groups pointed out that the 2012 edition of the HITRAN database contained a spurious feature due to methane near 11 µm, which led to its removal in the 2016 release of HITRAN.Fig. 
6 illustrates the procedure whereby line lists of both rotation-vibration and rotation-vibration-electronic transitions are computed using variational nuclear motion calculations.These calculations are based on the direct use of a potential energy surface to give energy levels and associated wavefunctions, and dipole moment surfaces to give transition intensities.For vibronic spectra such as those encountered with the open-shell diatomics the spin-orbit, electronic angular momentum and transition dipole moments curves are also required.The procedure is well established in that for all but a small number of systems with very few electrons, the PES used is spectroscopically determined.That is, an initial high-accuracy ab initio PES is systematically adjusted until it reproduces observed spectra as accurately as possible.Conversely, all the evidence suggests that the use of a purely ab initio DMS gives better results than attempts to fit this empirically.The PE, SO, EAM andDM surfaces are usually interpolated by appropriate analytical representations to be used as an input for the nuclear motion program.The quality of the PES is improved a priori by refining the corresponding expansion parameters by comparison with laboratory high resolution spectroscopic data.This refinement, particularly of PESs, using spectroscopic data is now a well-developed procedure pursued by many groups.For example, the Ames group have provided a number highly accurate PES for small molecules based on very extensive refinement of the PES starting from initial, high accuracy, ab initio electronic structure calculations.Our own preference is to constrain such fits to remain close to the original ab initio PES; this has the benefit of forcing the surface to remain physically correct in regions not well-characterized experimentally.Such regions are often important for calculations of extensive, hot line lists.Further discussion of the methods used to refine PESs can be found in Tennyson.Our computational tools include the variational nuclear-motion programs Duo, DVR3D, and TROVE which calculate the rovibrational energies, eigenfunctions, and transition dipoles for diatomic, triatomic and larger polyatomic molecules, respectively.These programs have proved capable of producing accurate spectra for high rotational excitations and thus for high-temperature applications.All these codes have been adapted to face the heavy demands of computing very large line lists and are available as freeware.Duo was recently developed especially for treating open-shell system of astrophysical importance.To our knowledge Duo is currently the only code capable of generating spectra for general diatomic molecules of arbitrary number and complexity of couplings.DVR3D was used to produce line lists for several key triatomics, including H2S, SO2, H2O, CO2, HCN.DVR3D is capable of treating ro-vibrational states up to dissociation and above.A new version appropriate for the calculation of fully-rotationally resolved electronic spectra of triatomic species has just been developed and tested for the X – C band in SO2.TROVE is a general polyatomic code that has been used to generate line lists for hot NH3, PH3, H2CO, HOOH, SO3, CH4.Intensities in TROVE are computed using the new code GAIN which was written and adapted for graphical processing units to compute Einstein coefficients and integrated absorption coefficients for all individual rotation-vibration transitions at different temperatures.Given the huge number of transitions anticipated to be 
important at elevated temperatures, the usage of GPUs provides a huge advantage.However TROVE requires special adaptation to treat linear molecules such as the astronomically important acetylene.An alternative theoretical procedure has been used by Tashkun and Perevalov from Tomsk.Their methodology uses effective Hamiltonian fits to experimental data for both energy levels and transition dipoles.This group has provided high-temperature line lists for the linear CO2 molecule and the NO2 system.This methodology reproduces the positions of observed lines to much higher accuracy than the variational procedure but generally extrapolates less well for transitions involving states which are outside the range of those that have been observed in the laboratory.In particular, comparisons with high-resolution transmission measurements of CO2 at high temperatures for industrial applications suggest that indeed the CDSD-4000 CO2 line list loses accuracy at higher temperatures.We note that the Ames group have produced variational line lists for CO2 designed to be valid up to 1500 K and 4000 K.The MARVEL energy levels can also be used to replace computed ones in line lists.This has already been done for several line lists.This process is facilitated by the ExoMol data structure which does not store transition frequencies but instead computes them from a states file containing all the energy levels.This allows changes of the energy levels at the end of the calculation or even some time later should improved energy levels become available.The polyatomic molecules discussed above are all closed shell species.However the open shell species PO2 and CaOH are thought to be important for hot atmospheres.There have been a number of variational nuclear motion calculations on the spectra of open shell triatomic systems, largely based on the use of Jensen’s MORBID approach.However, we are unaware of any extensive line lists being produced for such systems.The extended version of DVR3D mentioned above should, in due course, be applicable to these problems.For closed-shell polyatomic molecules, such as NaOH, KOH, SiO2, for which spectra involve rotation-vibration transitions on the ground electronic state, one would use a standard level of ab initio theory such as CCSD-f12/aug-cc-pVTZ on a large grid of geometries to compute both the PES and DMS.For diatomic molecules characterized by multiple interacted curves the multi-reference configuration interaction method in conjunction with the aug-cc-pVQZ or higher basis sets is a reasonable choice, with relativistic and core-correlation effects included where feasible.The potential energy and coupling curves should then be optimized by fitting to the experimental energies or transitional wavenumbers.Indeed where there is a large amount of experimental data available, then the choice of initial potential energy curves becomes almost unimportant.However, the ab initio calculation of good dipole curves is always essential since these are not in general tuned to observation.The ExoMol line lists are prepared so that they can easily be incorporated in radiative transfer codes.For example, these data are directly incorporated into the UCL Tau-REx retrieval code, a radiative transfer model for transmission, emission and reflection spectroscopy from the ultra-violet to infrared wavelengths, able to simulate gaseous and terrestrial exoplanets at any temperature and composition.Tau-REx uses the linelists from ExoMol, as well as HITEMP and HITRAN with clouds of different particle 
sizes and distribution, to model transmission, emission and reflection of the radiation from a parent star through the atmosphere of an orbiting planet.This allows estimates of abundances of absorbing molecules in the atmosphere, by running the code for a variety of hypothesised compositions and comparing to any available observations.Tau-REx is mostly based on the opacities produced by ExoMol with the ultimate goal to build a library of sophisticated atmospheres of exoplanets which will be made available to the open community together with the codes.These models will enable the interpretation of exoplanet spectra obtained with future new facilities from space and the ground, as well as JWST.Of course there are a number of other models for exoplanets and similar objects which rely on spectroscopic data as part of their inputs.These include modelling codes such as NEMESIS, BART, CHIMERA and a recent adaption of the UK Met Office global circulation model called ENDGame.More general models such as VSTAR are designed to be applied to spectra of planets, brown dwarfs and cool stars.The well-used BT-Settl brown-dwarf model can also be used for exoplanets.There are variety of other brown dwarfs and cool star models.These are largely concerned with the atmospheres of the hydrogen rich atmospheres which are, of course, characteristic of hot Jupiter and hot Neptune exoplanets, brown dwarfs and stars.Besides direct input to models, line lists are used to provide opacity functions whose reliability are well-known to be limited by the availability of good underlying spectroscopic data.Cooling functions for key molecules are also important for the description of atmospheric processes in hot rocky objects.These functions are straightforward to compute from a comprehensive line lists; this involve computation of integrated emissivities from all lines on a grid of temperatures typically ranging between 0 to 5000 K.Spectroscopic studies of the Earth’s atmosphere are supported by extensive and constantly updated databases largely comprising experimental laboratory data.Thus for earth-like planets, by which we mean rocky exoplanets with an atmospheric temperature below 350 K, the HITRAN database makes a good starting point.However, at higher temperatures datasets designed for room temperature studies rapidly become seriously incomplete, leading to both very significant loss of opacity and incorrect band shapes.The strong temperature dependence of the various molecular absorption spectra is illustrated in figures given throughout this review which compare simulated absorption spectra at 300 and 2000 K for key species.HITRAN’s sister database, HITEMP, was developed to address the problem of high temperature spectra.However the latest release of HITEMP only contains data on five molecules, namely CO, NO, O2, CO2 and H2O.For all these species there are more recent hot line lists available which improve on the ones presented in HITEMP.These line lists are summarised in Table 2 below.Table 1 gives a summary of species suggested by the chemistry models as being important in the atmospheres of hot super-Earths.Spectroscopic line lists are already available for many of the key species.Most of the species suggested by the chemistry models of such objects are already in the ExoMol database, which includes line list taken from sources other than the ExoMol project itself.This includes H2O, CH4, NH3, CO2, SO2.Line lists for other important species, such as NaOH, KOH, SiO2, PO, ZnS and SO are currently missing.Table 2 
presents a summary of line lists available for atmospheric studies of hot super-Earths. Line lists for some diatomics are only partial: for example, accurate infrared line lists exist for CO, SiO, KCl, NaCl and NO, but none of these line lists consider shorter-wavelength, vibronic transitions, which lie in the near-infrared, visible or ultraviolet depending on the species concerned. The NIR will be covered by the NIRSpec instrument on board JWST only at lower resolution, and therefore the completeness of the opacities down to 0.6 µm will be crucial for atmospheric retrievals. Such data, when available, will be important for the interpretation of present and future exoplanet spectroscopic observations. Below we consider the status of spectroscopic data for key molecules in turn. H2O: As discussed above, water is the key molecule in the atmospheres of rocky super-Earths. There are a number of published water line lists available for modelling hot objects. Of these the most widely used are the Ames line list of Partridge and Schwenke, or variants based on it, and the BT2 line list, which provided the basis for water in the HITEMP database and the widely-used BT-Settl brown dwarf model. The Ames line list is more accurate than BT2 at infrared wavelengths but less complete, meaning that it is less good at modelling hotter objects. Recently Polyansky et al. have computed the POKAZaTEL line list, which is both more accurate and more complete than either of these. We recommend the use of this line list, which is illustrated in Fig. 7, in future studies. CO2: Again, there are a number of line lists available for hot CO2. In particular, Tashkun and Perevalov distribute these via their carbon dioxide spectroscopic databank (CDSD); an early version of CDSD formed the input for HITEMP. The Ames group produced a variational line list valid up to 1500 K. Recent work on CO2 has improved computed transition intensities to the point where they are as accurate as the measured ones; this suggests that there is scope for further improvement in hot line lists for this system, and some work in this direction has recently been undertaken by Huang et al. Fig. 8 illustrates the temperature dependence of the CO2 absorption spectrum in the infrared. CH4: Methane is an important system in carbon-rich atmospheres and the construction of hot methane line lists has been the subject of intense recent study by a number of groups, both theoretically and experimentally. The most complete line lists currently available are our 10to10 line list, which is very extensive but only valid below 1500 K, and the Reims line list, which spans a reduced wavelength range but is complete up to 2000 K. In fact we extended 10to10 to higher temperatures some time ago, but the result is a list of 34 billion lines which is unwieldy to use. We have therefore been working on data compaction techniques based on the use of either background, pressure-independent cross sections or super-lines. This line list has just been released. Fig. 9 illustrates the temperature dependence of the methane absorption spectrum in the infrared. The strongest bands are at 3.7 and 7.7 µm. SO2 and SO3: A number of line lists for SO2 have been computed by the Ames group; the most comprehensive is one produced in collaboration between ExoMol and Ames, see Fig. 2.
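The super-line compaction mentioned for methane above can be sketched very simply: lines are first scaled to a given temperature and their intensities are then summed onto a fixed wavenumber grid. The grid spacing and the toy data below are purely illustrative; this is an assumption about the general idea rather than a reproduction of the released line list.

```python
import numpy as np

def superlines(nu, S_T, grid_edges):
    """Compress a large line list into 'super-lines' at one temperature:
    the intensities S_T of all lines falling into each wavenumber bin are
    summed and assigned to the bin centre."""
    summed, _ = np.histogram(nu, bins=grid_edges, weights=S_T)
    centres = 0.5 * (grid_edges[:-1] + grid_edges[1:])
    return centres, summed

# Toy example: 10 million random 'lines' binned onto a 0.01 cm-1 grid.
rng = np.random.default_rng(0)
nu = rng.uniform(0.0, 12000.0, size=10_000_000)
S_T = rng.lognormal(mean=-60.0, sigma=2.0, size=nu.size)
centres, S_super = superlines(nu, S_T, np.arange(0.0, 12000.0 + 0.01, 0.01))
```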
The ExoMol–Ames SO2 line list was validated using experimental data recorded at the Technical University of Denmark (DTU). ExoMol have also provided line lists for SO3. The largest of these, appropriate for temperatures up to 800 K, contains 21 billion lines. However, validation of this line list against experiments performed at DTU points to significant differences in the line intensities, suggesting that more work is required on the SO3 dipole moment. HCN: Line lists for hydrogen cyanide were some of the first calculated using variational nuclear motion calculations. Indeed, the first of these line lists was the basis of a ground-breaking study by Jørgensen et al., who showed that use of a comprehensive HCN line list in a model atmosphere of a 'cool' carbon star made a huge difference: it extended the model of the atmosphere by a factor of 5 and lowered the gas pressure in the surface layers by one or two orders of magnitude. The line list created and used by Jørgensen and co-workers only considered HCN. However, HCN is a classic isomerizing system and the HNC isomer should be thermally populated at temperatures above about 2000 K. More recent line lists consider both HCN and HNC together. All these line lists are based on the use of ab initio rather than spectroscopically-determined PESs, which can lead to significant errors in the predicted transition frequencies. However, the most recent line list, that of Barber et al., used very extensive sets of experimental energy levels obtained by Mellau for both hot HCN and hot HNC to improve the predicted frequencies to, essentially, experimental accuracy. This line list was used for the recent, tentative detection of HCN on the super-Earth 55 Cancri e. The line list of Barber et al. is illustrated in Fig. 12. CO: CO is the most important diatomic species in a whole range of hot atmospheres, ranging from warm exoplanets to cool stars; from a spectroscopic perspective, comprehensive line lists for the nine main isotopologues of CO have recently been produced. Fig. 13 illustrates the absorption spectrum of the main isotopologue, 12C16O. NO: A new comprehensive line list for nitric oxide has recently been released by Wong et al., see Fig. 14. SiO: Fig. 3 illustrates the absorption spectrum of the SiO molecule. SiO is well known in sunspots (Campbell et al.)
and is thought likely to be an important constituent of the atmospheres of hot rocky super-Earths. An IR line list for SiO is available from ExoMol and a less accurate UV line list is provided by Kurucz. There are a number of systems which have been identified as likely to be present in the atmospheres of hot rocky super-Earths for which there are no available line lists. Indeed, for most of these species, which include NaOH, KOH, SiO2, MgO, PO2, Mg(OH)2, SO and ZnS, there is little accurate spectroscopic data of any sort. Clearly these systems will be targets of future study. Probably the most important polyatomic molecule, at least for exoplanet and cool star research, for which there is still no comprehensive hot line list is acetylene. Acetylene is a linear molecule for which variational calculations are possible and an extensive effective Hamiltonian fit is available. One would therefore expect such a line list to be provided shortly. All the discussion above has concentrated very firmly on line spectra. However, there are a number of issues which need to be considered when simulating or interpreting exoplanet spectra. A discussion of procedures for this is given in Chapter 5 of the recent book by Heng. General codes, such as HELIOS (Malik et al., 2017) and our own ExoCross, are available for taking appropriate line lists and creating inputs suitable for radiative transfer codes. The first issue to be considered is the shape of the individual spectral lines. Lines are Doppler broadened with temperature, due to the thermal motion of the molecules, and broadened by pressure, due to collisional effects. While the total absorption by an optically thin line is conserved as a function of temperature and pressure, this is not true for optically thick lines. For these lines, use of an appropriate line profile can have a dramatic effect. The nature of primary transit spectra, where the starlight has a long pathlength through the limb of the exoplanet atmosphere, is good for maximizing sensitivity but also maximizes the likelihood of lines being saturated. This means that it is important to consider line profiles when constructing line lists for exoplanet studies. While it is straightforward to include the thermal effects via the Doppler profile, pressure effects in principle depend on the collision partners and the transition concerned. Furthermore, there has been comparatively little work on how pressure broadening behaves at high temperatures. Studies are beginning to consider broadening appropriate to exoplanet atmospheres. However, thus far these studies have concentrated almost exclusively on pressure effects in hot Jupiter exoplanets, which means that molecular hydrogen and helium have been the collision partners considered. The atmospheres of hot rocky super-Earths are likely to be heavy, meaning that pressure broadening will be important. Clearly there is work to be done developing appropriate pressure-broadening parameters for the atmospheres of these planets. We note, however, that line broadening parameters appropriate for studies of the atmosphere of Venus are starting to become available, largely on the basis of theory.
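A minimal sketch of the line-shape calculation discussed above: the Doppler (Gaussian) and pressure (Lorentzian) contributions are combined into a Voigt profile via the Faddeeva function. The Lorentzian half-width and the SiO example line used below are purely illustrative; realistic pressure-broadening parameters for heavy, steam-dominated atmospheres are, as noted, still largely lacking.

```python
import numpy as np
from scipy.special import wofz

KB = 1.380649e-23  # Boltzmann constant, J/K
C = 2.99792458e8   # speed of light, m/s

def voigt_profile(nu, nu0, T, mass_kg, gamma_L):
    """Area-normalised Voigt profile on a wavenumber grid nu (cm-1) for a
    line centred at nu0: a Doppler core set by temperature T and molecular
    mass, convolved with a Lorentzian of half-width gamma_L (cm-1)."""
    sigma = nu0 * np.sqrt(KB * T / (mass_kg * C**2))  # Doppler standard deviation, cm-1
    z = ((nu - nu0) + 1j * gamma_L) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

# Illustrative example: an SiO line near 1230 cm-1 at 2500 K with a guessed
# Lorentzian half-width of 0.05 cm-1.
grid = np.linspace(1229.0, 1231.0, 2001)
phi = voigt_profile(grid, 1230.0, 2500.0, mass_kg=44.0 * 1.66054e-27, gamma_L=0.05)
```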
Besides broadening, it is also necessary to consider collision-induced absorption (CIA) in regions where there are no spectral lines. On Earth it is known that the so-called water continuum makes an important contribution to atmospheric absorption. Similarly, collision-induced absorption by H2 is well known to be important in hydrogen-dominated atmospheres. CIA has also been detected involving K–H2 collisions. What CIA processes are important in lava planets is at present uncertain. Finally, it is well-known that the spectra of many exoplanets are devoid of significant features, at least in the NIR. It is thought that this is due to some mixture of clouds and aerosols, often described as hazes. Such features are likely to also form in the atmospheres of rocky exoplanets. It remains unclear precisely what effect these will have on the resulting observable spectra of the planet. To conclude, the atmospheres of hot super-Earths are likely to be spectroscopically very different from those of other types of exoplanets, such as cold super-Earths or gas giants, due to both the elevated temperatures and the different atmospheric constituents. This means that a range of other species, apart from the usual H2O, CH4, CO2 and CO, must also be taken into consideration. A particularly interesting molecule that is likely to feature in atmospheric retrievals is SO2. Detection of SO2 could be used to differentiate super-Venus exoplanets from the broad class of super-Earths. A comprehensive line list for SO2 is already available. SiO, on the other hand, is a signature of a rocky object with potentially detectable IR and UV spectral features. Another interesting species is ZnS, which can be used to differentiate clouds and hazes. At present there is no comprehensive line list for ZnS to inform this procedure. Models of hot super-Earths suggest that these exoplanets resemble the early Earth in many of their properties. An extensive literature exists on the subject of the early Earth, which can be used as a basis for accurate prediction of the properties of the hot rocky exoplanets. Super-Earths also provide a potential testbed for atmospheric models of the early Earth which, of course, are not amenable to direct observational tests. Post-impact planets may also be very similar in chemistry and spectroscopy. From different studies of the chemistry and spectroscopy of hot super-Earths we have identified a set of molecules suggested either as potential trace species or as sources of opacity for these objects. The line lists for a significant number of these species are either missing or incomplete. Our plan is to systematically create line lists for these key missing molecules and include them in the ExoMol database. | The majority of stars are now thought to support exoplanets. Many of those exoplanets discovered thus far are categorized as rocky objects with an atmosphere. Most of these objects are however hot due to their short orbital period. Models suggest that water is the dominant species in their atmospheres. The hot temperatures are expected to turn these atmospheres into a (high pressure) steam bath containing remains of melted rock. The spectroscopy of these hot rocky objects will be very different from that of cooler objects or hot gas giants. Molecules suggested to be important for the spectroscopy of these objects are reviewed together with the current status of the corresponding spectroscopic data. Perspectives of building a comprehensive database of linelist/cross sections applicable for atmospheric models of rocky super-Earths as part of the ExoMol project are discussed. The quantum-mechanical approaches used in linelist productions and their challenges are summarized. |
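Before leaving this review, the cooling functions mentioned earlier can also be sketched briefly: they are integrated emissivities summed over all lines of a line list on a grid of temperatures. The per-steradian normalisation and the ExoMol-style split into a states table and a transitions table are assumptions of this illustration rather than a prescription.

```python
import numpy as np

C2 = 1.4387769    # hc/k_B in cm K
HC = 1.98645e-16  # h*c in erg cm

def cooling_function(T_grid, A, nu, g_up, E_up, g_states, E_states):
    """Emissivity per molecule W(T) on a grid of temperatures (T > 0 K):
    a sum over all lines (Einstein A coefficients, wavenumbers nu in cm-1,
    upper-state degeneracies g_up and energies E_up), normalised by the
    partition function built from the full set of states."""
    W = np.empty_like(T_grid, dtype=float)
    for i, T in enumerate(T_grid):
        Q = np.sum(g_states * np.exp(-C2 * E_states / T))
        W[i] = np.sum(A * HC * nu * g_up * np.exp(-C2 * E_up / T)) / (4.0 * np.pi * Q)
    return W
```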
314 | Clinical significance of serum omentin-1 levels in patients with pancreatic adenocarcinoma | Pancreatic adenocarcinoma is the eighth leading cause of cancer deaths in men and the ninth in women worldwide.The majority of these tumors are adenocarcinomas arising from the ductal epithelium.Approximately 48.960 patients are diagnosed with cancer of the exocrine pancreas annually in the United States, and almost all are expected to die from the disease due to its aggressive nature .The link between high body mass, lack of physical activity, and PC risk has been illustrated in several studies .Several molecular factors may play an important role in the association between obesity and cancer, including insulin resistance, aberrant insulin like growth factor expression, sex hormones disorder and adipocytokines .Currently adipose tissue is considered as an active endocrine organ with metabolic and immune regulatory roles .Adipose tissue secrets a variety of proteins, including adipokines.Furthermore, adipocytokines might cause the proliferation and growth of tumor cells, induce inflammation and anti-apoptosis pathways, which subsequently can prompt cancer metastasis .Omentin-1 is a 34-kDa adipocytokine that is primarily secreted from stromal vascular cells of visceral adipose tissue and enhance insulin sensitivity and glucose metabolism in normal weight individuals.Omentin-1 is considered to play a role in inflammatory responses and cell differentiation, and also promotes apoptosis of cancer cells .In prostate and colorectal carcinoma, serum omentin levels were found to be high and in renal cell carcinoma it is found to be decreased .In acute and chronic pancreatitis, the elevation in omentin levels was due to the anti-inflammatory effects of omentin and elevated omentin levels improved insulin resistance, caused a significant reduction in glucose levels .In literature, there is limited data about omentin and cancer relationship.To our knowledge; our study is the only one in pancreatic carcinoma.The data of 33 patients with histologically confirmed diagnosis of PA were recorded from their medical charts.The staging of metastatic patients was done by using computed tomography, magnetic resonance imaging, and positron emission computed tomography scan.Patients were staged according to the International Union Against Cancer TNM classification.Chemotherapy was given to the majority of the patients with metastatic disease.Regimens of single or combination CTx were selected based on the performance status of patients and extension of the disease.CTx schemes were applied as follows: combination of gemcitabine with platinum or capecitabine, or gemcitabine alone.Response to treatment was determined by radiologically after 2–3 cycles of CTx according to revised RECIST criteria version 1.1.by the investigators and classified as follows: complete response, partial response, stable disease, or progressive disease.The tumor response after 2 months of CTx was used for statistical analysis.Follow-up programs of metastatic disease consisted of clinical, laboratory, and imaging by using a CT or MRI depending on which imaging methods were used at baseline and performed at 8-week intervals during CTx or every 12 weeks for no anticancer treatment.Patients with either PR or SD were classified as responders, and patients with PD were considered non-responders.The possible prognostic variables were selected based on those identified in previous studies.Serum carcino-embryonic antigen and carbohydrate antigen 19.9 
levels were determined by microparticle enzyme immunoassay.Serum erythrocyte sedimentation rate, lactate dehydrogenase levels, albumin and, whole blood count assays were measured at presentation in our biochemical laboratory.Serum LDH activity was determined immediately after collection by the kinetic method on a Targa-3000 autoanalyzer at 37 °C.The laboratory parameters were evaluated at diagnosis within the normal ranges of our institution.For comparison of serum levels of omentin, age, sex and BMI matched 30 healthy controls were included in the analysis.Blood samples were obtained from patients with PA at first admission before any treatment.Institutional review board approval was obtained from each subject prior to the commencement of the study.This assay is a Sandwich ELISA based on: 1) capture of omentin-1 molecules in the sample by anti-omentin IgG and immobilization of the resulting complex to the wells of a microtiter plate coated by a pre-titered amount of anchor antibodies, 2) and the simultaneous binding of a second biotinylated antibody to omentin-1, 3) wash away of unbound materials, followed by conjugation of horseradish peroxidase to the immobilized biotinylated antibodies, 4) wash-away of free enzyme, and 5) quantification of immobilized antibody-enzyme conjugates by monitoring horseradish peroxidase activities in the presence of the substrate 3,3′,5,5′-tetra-methylbenzidine.The enzyme activity is measured spectrophotometrically by the increased absorbency at 450 nm, corrected from the absorbency at 590 nm, after acidification of formed products.Since the increase in absorbency is directly proportional to the amount of captured omentin-1 in the unknown sample, the concentration of omentin-1 can be derived by interpolation from a reference curve generated in the same assay with reference standards of known concentrations of omentin-1.The color development is stopped and the intensity of the color is measured using an automated ELISA reader.The results were expressed as ng/mL.Continuous variables were categorized using median values as cut-off point.For group comparison of categorical variables, Chi-square tests or One-Way Anova tests were used and for comparison of continuous variables, Mann–Whitney U test or Kruskal-Wallis tests was accomplished.Overall survival was calculated from the date of first admission to the clinics to disease-related death or date of last contact with the patient or any family member.Kaplan-Meier method was used for the estimation of survival distribution and differences in OS was assessed by the log-rank statistics.All statistical tests were carried out two-sided and a p value ≤ 0.05 was considered statistically significant.Statistical analysis was carried out using SPPS 21.0 software.From February 2010 to July 2013, 33 patients with a pathologically confirmed diagnosis of PA were enrolled in this study.The baseline histopathological characteristics and the demographic characteristics of the patients are listed in Table 1.The median age at diagnosis was 59 years, range 32–84 years; majority of the patients in the group were men.The tumor was located in the head of pancreas in 21 patients.Thirty-nine percent of 23 metastatic patients who received palliative CTx were CTx-responsive.The most common metastatic site was liver in 23 patients with metastasis.Surgery was performed in 8 patients; 5 patients underwent pancreaticoduodenectomy and 3 patients had palliative surgery.The levels of serum omentin assays in patients with PA and healthy controls 
are shown in Table 2.The baseline serum omentin levels were significantly higher in patients with PA than in the control group.Table 3 shows the correlation between the serum levels omentin of and clinico-pathological factors.Serum omentin levels were significantly higher in large pathologic tumor size compared with small pathologic tumor size.The median follow-up time was 26.0 weeks.At the end of the observation period, thirty-two patients were dead.Median OS of the whole group were 41.3 ± 8.3 weeks .While 1-year OS rates were 24.2%.Older age, worse performance status, metastatic disease, lack of liver metastases and the CTx-unresponsiveness were found to be significant prognostic factors.However, serum omentin levels had no significantly effect on OS rates.Although omentin-1 levels were found to be changed in some cancers, its possible clinical significance has remained unclear in patients with pancreatic cancer.Only a few studies have been previously performed.Both colorectal and pancreatic cancers are related with obesity, metabolic syndrome and BMI.Recently clinical studies show that cancers such as liver , prostate and colorectal are associated with increases in omentin serum levels independent of various factors such as BMI, glucose, lipid parameters, disease differentiation .In a new study, higher omentin concentrations were associated with a higher colorectal cancer risk independent of obesity .To the best of our knowledge, there are no additional studies directly associating the anti-inflammatory and tumor-suppressing effects of omentin on other cancers.There is also no data in literature about the relationship of serum omentin-1 levels and PA.There is only limited data about pancreatitis and omentin levels.The elevation in omentin levels in early stage of pancreatitis was found; it caused insulin resistance and reduction in glucose levels .In our study, we showed that in patients with PA, serum omentin-1 levels were elevated.Serum omentin levels were significantly higher in large pathologic tumor size compared with small pathologic tumor size.This finding is really interesting.In cancer studies, omentin was suggested to promote cancer cell growth by triggering genomic instability and PI3K/Akt signaling pathways and the cancer-promoting effects of omentin was independent of its abilities to regulate obesity-induced metabolic risk .Omentin may also show a number of effects reflecting cellular immune responses.In the area of oncologic treatments, immunooncology is a promising topic.Maybe, omentin shows its effects as an antiinflammatory marker.In conclusion, the present study revealed that serum levels of omentin-1 were only a diagnostic marker in pancreatic cancer patients.However, its predictive and prognostic values were not determined.In addition, no correlation was observed in serum omentin level and response to chemotherapy.The small sample size of the present study may be considered as significant limitation and may have influenced these results.Further studies in a larger patient population are needed.The Transparency document associated with this article can be found, in online version. | Background Omentin is related with metabolic syndrome and obesity. Pancreatic adenocarcinoma (PA) is a lethal and obesity-linked malignancy. This study was conducted to investigate the serum levels of omentin in patients with PA and the relationship with tumor progression and known prognostic parameters. 
Material and methods Serum samples were obtained from thirty-three patients on first admission before any treatment. Age, sex and body mass index (BMI) matched 30 healthy controls were included in the analysis. Both serum omentin levels were measured using enzyme-linked immunosorbent assay (ELISA). Results The median age at diagnosis was 59 years (32–84 years). Twenty (61%) patients were men and the remaining were women. The most common metastatic site was liver in 23 patients with metastasis (n = 19, 83%). Thirty-nine percent of 23 metastatic patients who received palliative chemotherapy (CTx) were CTx–responsive. Median overall survival of the whole group was 41.3 ± 8.3 weeks [95% confidence interval (CI) = 25–58 weeks]. The baseline serum omentin levels were significantly higher in patients with PA than in the control group (p < 0.001). Serum omentin levels were significantly higher in patients with larger pathologic tumor size compared with smaller size (p = 0.03). Conversely, serum omentin concentration was found to have no prognostic role on survival (p = 0.54). Conclusion Serum levels of omentin may have a good diagnostic role in patients with PA. |
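The ELISA read-out described in the methods above derives omentin-1 concentrations by interpolation from a reference curve of standards. A common way to do this, assumed here purely for illustration (the paper does not state which curve model was used), is a four-parameter logistic fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """Four-parameter logistic curve: absorbance as a function of concentration."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

def fit_standard_curve(std_conc, std_abs):
    """Fit the reference curve from standards of known concentration (ng/mL)."""
    p0 = [min(std_abs), 1.0, float(np.median(std_conc)), max(std_abs)]
    params, _ = curve_fit(four_pl, std_conc, std_abs, p0=p0, maxfev=10000)
    return params

def interpolate_concentration(sample_abs, params):
    """Invert the fitted curve to read the omentin-1 concentration of a sample."""
    a, b, c, d = params
    return c * ((a - d) / (sample_abs - d) - 1.0) ** (1.0 / b)
```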
315 | Does reputation matter? Evidence from share repurchases | Firms establish a reputation through their past behaviors.This reputation could influence how the stock market perceives the credibility of subsequent announcements made by firms.While firms have been shown to establish a reputation through their prior earnings forecasting behavior), they have also been shown to establish a reputation with respect to repurchase completion).Specifically, Hutton and Stocken document that the stock price response to management forecasts of earnings news increases in prior forecast accuracy and in the length of the forecasting record.Bonaimé, on the other hand, finds that prior repurchase completion rates are positively correlated with current completion rates and announcement returns."Although the stock market may consider prior repurchase completion rates when evaluating a firm's subsequent repurchase announcements, it is plausible to assume that the stock market may also consider the firm's already established reputation through earnings forecasts issued by its management, since such practice typically occurs more frequently and has a longer history than repurchases.Given the circumstances under which different reputations are developed, we refer to the reputation with respect to prior earnings forecast accuracy as “forecast reputation” and prior repurchase completion rates as “repurchase reputation”.This paper asks whether the forecast reputation has a spillover effect on how the stock market reacts to new repurchase announcements given the repurchase reputation within the firm.Besides, not all firms engage in share repurchases."This means that not all firms have prior repurchase completion rates, which the stock market could consider when evaluating the firms' new repurchase announcements. 
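For readers unfamiliar with the two empirical quantities at the heart of this question, the sketch below computes a repurchase completion rate and a simple market-adjusted announcement return; the column names and the three-day event window are hypothetical illustrations, not the exact variable definitions used later in the paper.

```python
import pandas as pd

def completion_rate(programs: pd.DataFrame) -> pd.Series:
    """Completion rate of each repurchase programme: shares actually bought
    back divided by the number of shares announced (hypothetical columns)."""
    return programs["shares_repurchased"] / programs["shares_announced"]

def announcement_return(stock_ret: pd.Series, market_ret: pd.Series,
                        window=(-1, 1)) -> float:
    """Cumulative market-adjusted return around the announcement (day 0),
    summed over a short event window indexed in event days."""
    lo, hi = window
    abnormal = stock_ret.loc[lo:hi] - market_ret.loc[lo:hi]
    return float(abnormal.sum())
```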
"Our paper addresses this issue by investigating whether the stock market would consider a firm's forecast reputation when the firm is conducting a share repurchase for the first time.We answer the above research question using the Japanese setting.This setting has a number of advantages for examining the economic consequences of share repurchases compared to the US.First, Japanese firms are required to announce the results of the repurchase program, which allows for a more accurate calculation of the repurchase completion rates.1,Second, there is currently no regulation in the US requiring firms to complete the share repurchase program within a certain time period, whereas firms in Japan are required to complete a share repurchase program within a year.The shorter planned repurchase period in Japan implies that repurchase completion rates are less susceptible to noise induced by exogenous shocks.2,This in turn means that the completion rates of Japanese firms tend to be more stable and better reflect the original intention of managers of firms undertaking share repurchases than the US.Accordingly, we argue that repurchase completion rates calculated using the Japanese setting are a better proxy for repurchase reputation.Third, Japanese firms have been required to provide initial management earnings forecasts at the beginning of the fiscal year for a long period.3,This long tradition of forecasting leads us to believe that Japanese firms would already have a well-established reputation with respect to earnings forecasts."It is therefore interesting to examine whether this forecast reputation has an incremental effect on how the stock market evaluates the firm's subsequent repurchase announcements, conditional on the firm's repurchase reputation.While there are some distinct features between the US and Japan with respect to share repurchases, we also highlight a few similarities that may overcome external validity concerns arising from our single-country study.Like the US, open market share repurchases are the most common method of repurchases in Japan, and prior studies document that the primary reason for why Japanese firms undertake OMRs is consistent with the undervaluation hypothesis; Ota and Kawase).Further, the distributions of repurchase completion rates and announcement returns are comparable between the US and Japan.Namely, the average completion rates are between 73% and 79% in the US; Bonaimé), while we find in this study that the average completion rate in Japan is 77%.The market reactions to the announcements of OMRs are around 2–3% in both jurisdictions.Among the various methods of share repurchases, OMR is the only method by which firms are not committed to buy back the number of shares that are officially announced, giving them considerable flexibility over the amount of shares to be repurchased."Therefore, it is not uncommon to observe firms' actual repurchases often deviate substantially from the announced amount; Bonaimé).In fact, OMRs could lead to low repurchase completion rates.For instance, Stephens and Weisbach find that while 60% of firms have a completion rate of 100%, 10% of firms have a completion rate of less than 5%.Their results suggest that OMR announcements could be used to inflate the share prices by firms without the real intention to actually follow through on the repurchases).Nevertheless, if the firm has consistently low repurchase completion rates, the market might perceive subsequent repurchase announcements made by the firm to be less credible, 
thereby resulting in reputational loss.We adapt a model based on Bonaimé to test our research question.We show that current repurchase rates are positively associated with both forecast and repurchase reputations in all of our various model specifications, suggesting that firms with a record of more accurate earnings forecasting and higher prior repurchase completion rates are more likely to complete the current repurchase programs."We also find that investors incorporate the firm's prior earnings forecast accuracy and prior repurchase completion rate into their reactions to OMR announcements, providing evidence of forecast and repurchase reputational effects on the market's assessment of the credibility of OMR announcements. "Analysis of the interaction effect between the two reputation variables further reveals that the stock market responds more to the firm's forecast reputation when its repurchase reputation is low. "Taken together, our findings indicate that a firm's forecast reputation has a spillover effect on the stock market reaction to the firm's current repurchase announcement, given its repurchase reputation.We perform additional analyses to investigate whether the stock market turns to other sources of reputation within the firm to evaluate the credibility of the OMR announcements, when a firm announces a share repurchase program for the first time.Using a subset of firms that have undertaken OMRs for the first time, we find that the stock market does indeed turn to the forecast reputation of the firm in the absence of prior repurchase completion rates."Our study contributes to the literature on the effect of firms' reputation on stock market reaction to new corporate announcements.While prior studies find that firms can establish a reputation from an event-specific announcement, the question of whether firms can establish a reputation through other sources of announcements has so far been ignored.Our study fills this gap in the literature by providing evidence that firms can establish a reputation through multiple sources of announcements."Further, our study improves our understanding about the dynamics of a firm's reputation and how the stock market utilizes the reputation to evaluate the credibility of the firm's subsequent announcements.The structure of this paper is organized as follows.The next section provides a discussion of the institutional background and related literature, and Section 3 specifies the research design and variables.Section 4 describes our sample and presents descriptive statistics.Section 5 provides completion rate analysis, while Section 6 provides market reaction analysis.Section 7 presents the results of the additional analysis.Finally, we offer a summary and conclusion in Section 8.Prior to 1994, the Commercial Law prohibited the use of share repurchases and dividend payments were the only form of corporate payout in Japan.Although the Commercial Law was amended to allow firms to repurchase shares in 1994, share repurchases had only increased in popularity after 1995.This is because according to Japanese accounting rules, share repurchases would have an effect of increasing the per share capital of the remaining outstanding shares, which would attract a ‘presumed’ dividend tax).Consequently, the dividend tax had dissuaded Japanese firms from buying back their own shares.This tax rule was removed in 1995, a change that spurred share repurchases in Japan.The Company Act in Japan governs share repurchase practices of Japanese public firms.The Act 
outlines four platforms on which shares can be repurchased:On-market trading;,Off-market self-tender offer;,An offer to transfer to all shareholders; and,Negotiated transactions with selected shareholders.Listed firms in Japan generally choose platforms and to repurchase shares.In this paper, our focus is on share repurchases through on-market trading).On-market trading can be conducted either during auction or off-auction hours.On-market trading during auction hours occurs in the morning session and the afternoon session in an open market, and is widely known as an OMR throughout the world.On the other hand, on-market trading during off-auction hours takes place before the morning session starts through the Tokyo Stock Exchange Trading Network.4,Fig. 1 presents the implementation schedule of an OMR.An OMR in Japan is generally executed as follows.The firm typically announces the repurchase program on day t – 1 at 3:30 pm following the close of the afternoon trading session at 3:00 pm.This announcement includes the intended size of the repurchase plan as a dollar value and the number of shares to be repurchased, and the length of repurchase period.Next, the firm makes the actual repurchase, which generally occurs around 60 days after the announcement.In contrast to the US, where the actual repurchase generally occurs over several years after the announcement of the repurchase program, the timeframe between the announcement and the completion of the repurchase program is shorter in Japan.Finally, the results of the repurchase program are announced.There is abundance of evidence that shows the credibility of management forecasts is correlated with prior forecasting behaviors, suggesting the importance of reputational effect."Hutton and Stocken, for instance, document that the stock price response to a firm's current management forecast is positively associated with the firm's prior forecast accuracy and also the length of the firm's forecasting record.Yang studies manager-specific forecasting behavior instead of the usual firm-specific forecasting behavior and finds that the market reaction to management forecasts is stronger when the manager has a history of issuing more accurate forecasts.Ng et al. also provide evidence that the credibility of management forecasts influences how the market reacts to management forecast news at the time of its release and thereafter.Specifically, they find that more credible management forecasts are associated with a larger price reaction in the short window and a smaller post-management forecast drift in returns.Their findings suggest that firms can mitigate the credibility concerns created by the uncertain and non-audited nature of management forecasts, by continually providing the market with accurate forecasts thereby establishing a good reputation among investors."The credibility of management forecasts can also influence analysts' forecasting behaviors; Baginski and Hassell; Williams; Ota; Nara and Noma).For instance, Hassell et al. 
find that management forecasts provide firm-specific information that is useful to analysts in producing less biased and more accurate earnings forecasts.Williams extends this study by proposing that prior management forecast usefulness to be measured by relative forecast accuracy.The intuition behind the measure of relative forecast accuracy is that if the accuracy of management earnings forecasts is higher than that of analyst earnings forecasts, then management earnings forecasts are considered to be useful to analysts.Using the measure of relative forecast accuracy, she documents that analysts have a tendency to modify their earnings forecasts for firms that provide more useful prior management earnings forecasts, after controlling for other determinants of believability.In a related study, Hirst, Koonce, and Miller conduct an experimental study using MBA students with four years of work experience on average as subjects."They find that the prior accuracy of management forecasts and the form of the forecasts jointly influence the participants' judgements on purchasing shares.Overall, the findings of these studies suggest that management acquires a forecasting reputation among analysts as well as in the market.The extant literature also shows that a firm can develop a reputation from other sources.Bonaimé focuses on the discretion that management has over how many shares are to be bought back in an announced repurchase program.She proposes that a firm develops a reputation from its prior repurchase completion rates, and finds that the stock market reaction to new repurchase announcements made by less reputable firms is smaller.Specifically, she finds a 1-standard-deviation increase in the lagged completion rate follows a 36-basis-point increase in five-day market-adjusted returns around the announcement of the next repurchase.She also ascertains that firms are more likely to announce accelerated share repurchases when the firms are concerned about their reputation in the stock market.5,Bargeron et al. document firms use ASRs to strengthen the reliability of the repurchase announcements when such announcements do not appear to have an initial impact on the stock market."Based on our review of the literature, a question that is raised is whether the stock market considers a firm's already established reputation through prior management earnings forecasting, when it evaluates the firm's new repurchase announcement.Specifically, these firms that regularly provide management earnings forecasts would have established a “forecast reputation” in their communications with the stock market.If the firm has a strong record of accurate forecasting, then the stock market might perceive any announcements made by the firm to be more credible and would react more favorably to the news.Further, not all firms have prior repurchase completion rates.That is, not all firms have a repurchase reputation on which the stock market can assess the credibility of repurchase announcements made by firms.As such, it would be interesting to investigate how the stock market evaluates a new repurchase announcement made by a firm that does not have a record of share repurchases."In this case, the stock market might consider the firm's forecast reputation in order to evaluate the credibility of the firm's new repurchase announcement. 
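As a concrete illustration of the two quantities running through this literature, the sketch below computes a repurchase completion rate and a five-day market-adjusted announcement return from invented inputs; the figures are hypothetical and are not taken from the studies cited.

import numpy as np

# Completion rate: shares actually bought back relative to the announced plan.
shares_announced = 1_000_000        # hypothetical planned repurchase (shares)
shares_repurchased = 770_000        # hypothetical actual repurchases
completion_rate = shares_repurchased / shares_announced
print(f"Completion rate: {completion_rate:.2%}")            # 77.00%

# Five-day market-adjusted abnormal return around the announcement:
# the sum of (firm return - market return) over the event window.
firm_returns = np.array([0.012, 0.008, -0.002, 0.005, 0.001])     # days 0..+4
market_returns = np.array([0.004, 0.003, -0.001, 0.002, 0.000])
car = float(np.sum(firm_returns - market_returns))
print(f"Five-day market-adjusted return: {car:.2%}")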
"That is, the forecast reputation has a spillover effect on how the stock market perceives the credibility of the firm's new repurchase announcement.CompRate: open market share repurchase completion rates, which is the ratio of the actual repurchases to the announced repurchase plan size;,CAR: 2-day market-adjusted abnormal returns over the event window t = 0 to 17;,Reputation: industry-median adjusted management earnings forecast accuracy of net income averaged over three years prior to year y multiplied by −1;,LagCompRate: the completion rate associated with the most recent prior repurchase announcement;,PlanSize: the planned size of the repurchase program, measured by the number of shares to be repurchased divided by the total number of shares outstanding;,LnPlanDays: planned acquisition days, which is the natural logarithm of the planned repurchase period expressed in trading days;,LagReturn: cumulative abnormal returns from 30 days to 1 day before the announcement of the repurchase program;,EmergeMkt: a dummy variable that equals to 1 if the firm is listed on the Mothers section of the TSE;, "LnMVE: the natural logarithm of the firm's market value of equity at the end of the month prior to the repurchase announcement;",BMR: the book-to-market ratio at the end of the most recent quarter prior to the repurchase announcement;,Cash: cash and short-term investments divided by the market capitalization at the end of the most recent quarter prior to the repurchase announcement;,CF: the trailing 12 months operating cash flow of the most recent second or fourth quarter prior to the repurchase announcement divided by market capitalization;,Leverage: total liabilities divided by total assets at the end of the most recent quarter prior to the repurchase announcement;,SDReturn: the standard deviation of stock returns for the 200-day period from 210 days to 11 days prior to the repurchase announcement;,SDCF: the standard deviation of semi-annual operating cash flows over the three years divided by the market capitalization at the end of the most recent quarter prior to the repurchase announcement;,Motive Dummies: dummy variables that equal to 1 for eight reasons of the share repurchase: Flexible capital policy, Capital efficiency, Shareholder value, Stock option, Return to shareholders, Share exchange, Capital restructure, Others; and8,.9,Year Dummies: fiscal year dummy variables.The subscripts i, y, j indicate firm, fiscal year, and order in multiple share repurchases in the same fiscal year, respectively.Furthermore, all variables except dummy variables are winsorized at the top and bottom 1%.Reputation and LagCompRate are the reputation variables."A firm's forecast reputation established through a record of accurate forecasting may imply that the firm would suffer losses in the forecast reputation if the firm does not follow through on its repurchase announcements.Based on this argument, we predict a positive coefficient on Reputation in Eq.Bonaimé documents that prior repurchase completion rates are positively correlated with current completion rates, suggesting that repurchase completion rates have a persistent nature.We therefore predict a positive coefficient on LagCompRate in Eq.With regard to Eq., we predict a significantly positive coefficient on each of the two reputation variables, suggesting that new repurchase announcements made by firms with high forecast and repurchase reputations are perceived as more credible in the stock market.With respect to the control variables in Eqs. 
and, the planned repurchase size and the planned repurchase period relate to the repurchase limit."A firm's difficulty to acquire all of the shares is increasing in the planned repurchase size, leading to lower current repurchase completion rates.Therefore, we predict a negative sign on PlanSize in Eq."We predict a positive sign on PlanSize in Eq. because the stock market is likely to react more favorably to the firm's repurchase announcement, when the firm plans to repurchase more shares.LnPlanDays is unique to Japan.There is no regulation in the US that requires firms to complete the share repurchase program within a certain time period, whereas Japanese firms are required to complete a share repurchase program within a year, and the timeframe for repurchasing shares always forms part of the announcement.A shorter time period may indicate that the firms are more willing to complete the repurchase program.Therefore, a negative sign is predicted on the coefficient of LnPlanDays for Eq.We predict a negative sign on the coefficient of LnPlanDays in Eq. because the investors might expect that there could be a higher demand in shares when planned acquisition days are shorter.Consistent with the undervaluation hypothesis, we include LagReturn, LnMVE, and BMR in Eqs. and, while Cash and CF are included to be consistent with the free cash flow hypothesis.10, "In line with the optimal capital structure hypothesis, we include Leverage to control for the motive of the firm to repurchase shares in order to inflate the firm's leverage until it reaches the level perceived by the firm to be suitable; Bonaimé et al.; Lei and Zhang).SDReturn and SDCF are included to be consistent with the flexibility hypothesis, where firms use their discretion over the number and timing of shares to buy back; Bonaimé et al.)."Consistent with Bonaimé, we include eight binary variables to capture firms' motives to repurchase shares.We source the fiscal, forecast, and share price data from Nikkei Financial QUEST.The data relating to share repurchases are obtained from the Financial Data Solutions share repurchase database based on the following sample selection criteria:The resolution on matters relating to share repurchases is made between 1 September 2003 and 31 December 2017;,Firms that repurchase their own shares must be listed on the first, second, or Mothers sections of the TSE; and,Share repurchases for special reasons, shares repurchases from certain shareholders, and repurchases of unlisted preferred shares are removed.Note in regard to above, the coverage of the FDS share repurchase database begins in September 2003, at which time the Commercial Law was amended to allow firms to repurchase shares solely upon the approval of the board of directors.11,However, due to insufficient availability of certain key items in the early years of the coverage, we are unable to conduct analysis of the entire coverage period.Therefore, we use the 2003–2007 period solely for the purpose of obtaining the lagged repurchase completion rate, LagCompRate.12,The above criteria yield a sample of 5648 share repurchases.In order to provide more robust tests of the stock market reaction to repurchase announcements, we remove the following share repurchases from the sample: share repurchases via off-market self-tender offers; share repurchases through General Shareholders Meeting resolutions based on Article 156, Para. 
1 of the Act; share repurchases via the ToSTNeT market; and share repurchases using both OMR and ToSTNeT repurchase.Table 1 describes the sample selection procedure for this study.Our initial sample consists of 5648 share repurchase announcements.OMR announcements account for more than 60% of the initial sample, which is consistent with OMRs being the most common form of share repurchase in Japan.For the purpose of this study, we analyze the 3495 cases of OMR announcements.Table 2 describes the characteristics of the 3495 OMR announcements in the sample.Panel A shows the largest number of OMR announcements occurs in 2008, probably because the firms used share repurchases to support the stock prices following the financial crisis in 2008.13,Panel B shows that 80.4% of all OMR announcements are made by firms with large market value of equity.Panel C shows that a total of 1219 companies repurchased shares through OMR for 3495 times between 2008 and 2017.Further, 60% of the firms in the sample have repurchased shares multiple times during the ten-year period.Table 3 Panel A presents the descriptive statistics for the regression variables of Eqs. and.On average, firms have 77.38% and 4.24% current completion rates and announcement returns, respectively.With respect to the reputation variables, firms have an average of −0.0046 and 76.41% industry-adjusted prior forecast accuracy and prior repurchase completion rates, respectively.The result for Reputation implies that the management earnings forecasts of repurchasing firms are less accurate than their industry peers on average, though the median value of 0.0034 suggests otherwise.The planned number of shares to be repurchased is on average 2.23% of the number of shares outstanding.The mean value of 4.0487 for LnPlanDays indicates that firms plan to spend around three months to complete the share repurchase program.With respect to the motive variables, 83% of the firms in the sample state flexible capital policy as a reason for share repurchases.Also, 30–35% of firms choose to repurchase shares for capital efficiency and return to shareholders related reasons.Table 3 Panel B shows the distribution of the motives for share repurchases.One-half of the firms repurchase shares for a single reason, while the other half of the sample firms repurchase shares for multiple reasons.Table 4 provides the correlation coefficients between independent variables in Eqs. 
and.The table shows that the Pearson and Spearman correlation coefficients between the forecast reputation and the repurchase reputation are 0.1426 and 0.0965, respectively.Although the two reputation variables are significantly positively correlated, the correlation coefficient values are not high, suggesting that the two variables are capturing different aspects of firm reputation.Interestingly, both reputation variables, Reputation and LagCompRate, are most highly correlated with the firm size with the Pearson correlation coefficient of 0.2887 and 0.2241, respectively.This is consistent with the univariate analysis findings in prior studies that document large firms have a greater tendency to issue more accurate earnings forecasts and a higher repurchase completion rates; Bonaimé).Table 5 presents the univariate results on the determinants of completion rate.We divide the sample into “low” and “high” subsamples according to the median value of each variable except for a dummy variable, EmergeMkt.In the case of EmergeMkt, “low” and “high” subsamples consist of observations that take the value of 0 and 1, respectively.With respect to the motive variables, we divide the sample according to whether or not the motive was stated in the announcement.We then compare the two subsamples based on their average completion rates.Difference-in-means tests are performed to compare the completion rate between the two subsamples of each determinant of the completion rate.With regard to the forecast reputation of the firms, firms whose forecast reputation below the median have an average completion rate of 75.56%, while firms whose forecast reputation above the median have an average completion rate of 80.10%.This difference between the mean values is statistically significant at the 1% level.The average completion rate of firms with prior repurchase completion rates below the median is 22.69 percentage points lower than that of firms with prior repurchase completion rates above the median.The 22.69 percentage points difference in mean current completion rates is considerably higher than the 7.4 percentage points difference reported in Bonaimé.Despite the observed difference in magnitude between Japan and the US, our results are consistent with repurchasing behavior persisting within firms.Overall, these results regarding the reputation variables suggest that both prior earnings forecast accuracy and prior repurchase completion rates are positively associated with current repurchase completion rates.The results in relation to other variables are generally consistent with Bonaimé.Current repurchase completion rates are negatively related to repurchase plan size, repurchase plan days, emerging market, cash and short-term investments, leverage, standard deviation of returns, and standard deviation of semi-annual operating cash flows over the three years.On the other hand, current completion rates are positively related to firm size.With respect to the repurchase motives, average completion rates for firms with “flexible capital policy” included in the announcement as the share repurchase motive is 75.93%, while the average completion rates without it is 84.79%.This mean difference is statistically significant.Average completion rate for firms with “capital efficiency”, “shareholder value”, “stock option”, “return to shareholders”, and “share exchange” stated as the motive for share repurchases is significantly higher than those without it by 4.91, 3.39, 8.58, 8.20, and 10.14 percentage points, 
respectively.Fig. 2 analyzes the relation between the effect of forecast reputation and repurchase reputation on current completion rates.Specifically, we first divide the sample according to the sign of Reputation.This results in 913 observations for the low Reputation group and 1442 observations for the high Reputation group.Next, within each group, we further partition the sample into three equally-sized subsamples according to the value of Reputation.Therefore, Low3 consists of observations with lowest forecast reputation.We then examine the average completion rate along these six categories."Fig. 2 shows the average current completion rates are increasing in the firms' forecast reputation, suggesting that firms with a record of accurate forecasting are more likely to complete the repurchase. "Next, we divide the sample into 11 categories according to the firm's prior repurchase completion rates.Each category represents a range of LagCompRate values, and is between 0 and 1, incremental by 0.1.The lowest LagCompRate has a range of 0.0 to 0.09, while the highest LagCompRate is 1."Fig. 2 shows the average current completion rates are increasing in the firms' reputation with respect to prior repurchase completion rates, suggesting repurchase completion persists within the firm.Overall, Figs. 2 and 2 show the two types of reputation are positively associated with current completion rates, although the positive association is stronger for repurchase reputation than for forecast reputation.Table 6 Columns and present the results from estimating the Tobit models of CompRate using forecast reputation and repurchase reputation, respectively, whereas Columns and consider Reputation and LagCompRate jointly.The coefficient on Reputation is 1.0601, 0.5930, and 0.5817 in Columns, and, respectively, and is statistically different from zero.Untabulated results of the marginal effects at means show that a 1-standard-deviation increase in forecast reputation follows an increase in current completion rates of 1.94 to 3.54 percentage points, depending on the specifications of the model.The coefficient on LagCompRate is 0.6239, 0.6197, and 0.6173 in Columns, and, respectively, and is statistically significant.Again, untabulated results of the marginal effects at means reveal that a 1-standard-deviation increase in repurchase reputation is associated with an increase in current completion rates of 18.85 to 19.05 percentage points.14,The comparison of marginal effects between Reputation and LagCompRate indicates that repurchase reputation has a larger economic impact on the current repurchase completion rates than forecast reputation.Nevertheless, both Reputation and LagCompRate have an incremental explanatory power in our model of current repurchase completion rates.Further, Table 6 suggests that repurchase plan size, repurchase plan days, and firm leverage are significantly negatively related to completion rates, while firm size and book-to-market ratio are significantly positively related to current completion rates."Lastly, Column shows that even after controlling for the stated motives, both Reputation and LagCompRate remain positive and statistically different from zero, suggesting that a firm's forecast reputation is positively associated with current repurchase completion rates, given the firm's repurchase reputation.In this section, we describe the relation between our reputation proxies and the perceived credibility of repurchase announcements."The extant literature documents that the stock market 
incorporates a firm's repurchase reputation with respect to prior repurchase completion rates into its reaction to the firm's subsequent repurchase announcement.Given that forecast reputation is positively correlated with current repurchase completion rates, we are interested in whether the forecast reputation of the firm also affects how the stock market responds to share repurchase announcements."That is, does the stock market incorporate a firm's forecast reputation into its reactions to share repurchase announcements?",Similar to the approach in Table 6, Columns and of Table 7 consider the effect of forecast reputation and repurchase reputation on stock returns around OMR announcements, respectively, while Columns and of Table 7 consider the joint effect of both reputation variables.We find the announcement returns are increasing in prior repurchase completion rates for all specifications in Columns through.The estimated coefficient on LagCompRate of around 0.022 indicates a 1-standard-deviation increase in LagCompRate is associated with an increase in announcement returns of 67.17-basis-point."Interestingly, the significantly positive coefficient on Reputation in Column provides evidence that the stock market considers the firm's forecast reputation established through the record of accurate earnings forecasting when evaluating the firm's OMR announcement.The coefficient on Reputation in Columns and is also positive and significantly different from zero, even after controlling for prior repurchase completion rates and other factors.The estimated coefficient on Reputation of nearly 0.101 suggests a 1-standard-deviation increase in Reputation is associated with an increase of 33.73-basis-point in announcement returns.The marginal effects of Reputation and LagCompRate are both economically meaningful, considering their effects represent 7.96% and 15.84% of the mean value of announcement returns, respectively."Overall, these results suggest that the firm's forecast reputation has an incremental power in explaining how prior repurchasing behavior influences the market reactions to subsequent repurchase announcements.With respect to the control variables, announcement returns are significantly positively related to PlanSize and BMR, but negatively related to LnPlanDays and LnMVE15.Among the motives variables, the coefficients on Shareholder value, Stock option, and Capital restructure are negative and statistically significant.LowLagCompRate: a dummy variable that equals to 1 if the value of LagCompRate is in the bottom quartile of the distribution; and,HighLagCompRate: a dummy variable that equals to 1 if the value of LagCompRate is in the top quartile of the distribution.With respect to Eq., the coefficient on Reputation, α2, represents the effect of forecast reputation for firms with non-low repurchase reputation on the announcement returns, while the sum of the two coefficients on Reputation and Reputation*LowLagCompRate, α2 + α3, captures the effect of forecast reputation for firms with low repurchase reputation on the announcement returns.As for Eq., the coefficient on Reputation, β2, represents the effect of forecast reputation for firms with non-high repurchase reputation on the announcement returns, while the sum of the two coefficients on Reputation and Reputation*HighLagCompRate, β2 + β3, captures the effect of forecast reputation for firms with high repurchase reputation on the announcement returns.Table 8 reports the results from estimating Eqs. 
and.Column of the table shows that the effect of forecast reputation on announcement returns is 0.2766 and 0.0366 for firms with low repurchase reputation and non-low repurchase reputation, respectively.Column of the table displays that the effect of forecast reputation is 0.0482 and 0.1382 for firms with high repurchase reputation and non-high repurchase reputation, respectively.These results from the analysis of the interaction effect between forecast and repurchase reputations suggest that the impact of forecast reputation on the repurchase announcement returns is significantly more pronounced when the firm has a low repurchase reputation.On the other hand, when the firm has a high repurchase reputation, the impact of forecast reputation on the announcement returns appears to be negligible.We conceive that it is possible for a firm with no history of share repurchases to make an OMR announcement."If this is the case, the stock market would not have the firm's prior repurchase completion rates on which the credibility of the firm's OMR announcements can be assessed. "Further, given that the requirement for Japanese firms to provide initial management earnings forecasts at the beginning of the fiscal year has been in force for an extensive period of time, it would be interesting to investigate whether the stock market would turn to the firm's forecast reputation in the absence of repurchase reputation, when evaluating the credibility of its first OMR announcement.We construct two samples.First, a firm is classified as a first timer of share repurchase if the firm has not announced a share repurchase in the last three years.Second, a firm is classified as a first timer of share repurchase if the firm has never announced a share repurchase in the past.We rerun the regressions of Eqs. and without the repurchase reputation variable, LagCompRate, on the two samples.LagCompRate is dropped because first timers of share repurchase do not have prior repurchase completion rates.The second and fourth columns of Table 9 report the results from estimating the Tobit model of Eq. using the two samples, while the third and fifth columns of Table 9 report the results from estimating the OLS model of Eq. using the two samples.With regard to the completion analysis in Eq., the estimated coefficient on Reputation is 0.3806 and 0.4886 for the first and second samples, respectively."This suggests that a firm's forecast reputation is positively associated with current repurchase completion rates.The market response analysis in Eq. also reveals that the estimated coefficient on Reputation is significantly positive with its value of 0.1347 and 0.1210 for the first and second samples, respectively.Moreover, these estimated coefficient values on Reputation in Eq. 
are larger than those reported in Table 7, implying that the economic impact of forecast reputation on announcement returns is stronger for firms that repurchase shares for the first time."These results indicate that the stock market does indeed turn to the firm's reputation established through a record of accurate management earnings forecasting in the absence of prior repurchase completion rates, when it evaluates the credibility of the firm's OMR announcements.In the analysis thus far, we define forecast reputation as the managerial ability to predict earnings accurately, and thus the absolute value of forecast error is used to measure forecast accuracy.However, this definition of forecast reputation assumes that the market perception of forecast reputation is indiscriminate between underestimation and overestimation of earnings.Although we could not find any prior studies on forecasting reputation that examine the differential effects of signed forecast error, we find some related evidence in the literature on CEO turnover.16, "Trueman suggests that management earnings forecasts provide a public signal regarding a manager's ability to anticipate future changes in the firm's business environment and to adjust the firm's operations accordingly.Following this argument, Lee et al. investigate whether the probability of CEO turnover is related to management earnings forecast accuracy, and find that the CEO turnover rate is higher for firms in both the most pessimistic and optimistic earnings forecast groups.Lee et al. argue that beating the management earnings forecast targets is not enough for CEOs to retain the post."Based on this finding, they conclude that boards of directors use management forecast accuracy as a signal of CEOs' managerial ability, and that the cost of issuing inaccurate forecasts is borne by managers. "These findings suggest that it is plausible for market participants to formulate firms' forecast reputation based on the magnitude of the forecast errors, regardless of their signs.Pessimistic: a dummy variable that equals to 1 if the sign of the forecast error, defined as management forecast of net income minus realized net income deflated by market value of equity, averaged over three years prior to year y is negative; and,Optimistic: a dummy variable that equals to 1 if the sign of the forecast error, defined as management forecast of net income minus realized net income deflated by market value of equity, averaged over three years prior to year y is positive.As a preliminary analysis, Fig. 3 plots the relation between earnings forecast errors and current completion rates.We first divide the sample according to the sign of Forecast Error.This results in 1175 observations for the pessimistic Forecast Error group and 1180 observations for the optimistic Forecast Error group.Next, within each group, we further partition the sample into three equally-sized subsamples according to the value of Forecast Error.Therefore, Pes3 consists of observations with most pessimistic forecast errors.We then examine the average completion rate along these six categories.Fig. 3 shows the average current completion rates is distinctively lower for the most optimistic group than other groups.Interestingly, the most pessimistic group also has the lower average completion rate than other groups except for Opt3.Overall, Fig. 
3 roughly displays a concave shape, which suggests that firms with a history of both overly underestimating and overestimating the earnings forecasts tend to complete the repurchase programs to a lesser extent.Table 10 Panel A reports the results of multivariate analyses in Eqs. and.The estimated coefficient on Reputation*Pessimistic and Reputation*Optimistic in Eq. is 0.1184 and 0.8361, respectively, indicating that the degree of optimism in earnings forecasts has a larger impact on current completion rates than the degree of pessimism.However, the F-test that examines the difference between the two coefficients shows the difference of −0.7177 is not statistically significant.The market response analysis in Eq. produces similar results.It reveals that the estimated coefficient on Reputation*Pessimistic and Reputation*Optimistic is 0.1376 and 0.0792, respectively, and the difference between the two estimated coefficients, 0.0584, is not statistically significant.Thus, consistent with the findings in Lee et al., the market does not appear to discriminate between underestimation and overestimation of earnings forecasts."Firms establish a reputation through the consequences of their past announcements, which could influence how the stock market perceives the credibility of the firms' subsequent announcements. "Previous studies document that the stock market considers prior repurchase completion rates when evaluating the firm's repurchase announcement.Nevertheless, the question of whether reputation established from other sources of announcements also affects the stock market reaction has remained unanswered."This paper asks if the reputation established through a history of management earnings forecasting has a spillover effect on the market response to new repurchase announcements, given the firm's repurchase reputation.Using a sample of 3495 OMR announcements over the period 2008–2017, we show that current repurchase completion rates are positively related to both forecast and repurchase reputations in various model specifications.We also document that these reputations are both drivers of announcement returns, and that the stock market reaction to forecast reputation is particularly strong when the repurchase reputation is low.Further, using a subset of firms that have undertaken OMRs for the first time, we find that the stock market turns to forecast reputation within the firm on which the credibility of repurchase announcements is assessed."Overall, the findings of this study suggest that a firm establishes a reputation through multiple sources of announcements, which can in turn affect how the stock market assesses the credibility of the firm's subsequent announcements. | This paper examines whether the stock market considers the firm's reputation established through a history of management earnings forecasting when it evaluates open market repurchase announcements. We refer to this established reputation as the firm's “forecast reputation”. We find that while the stock market considers the firm's “repurchase reputation” (proxied by prior repurchase completion rates), it also considers the firm's forecast reputation established from the accuracy of prior management earnings forecasting, suggesting a spillover effect of forecast reputation. Further, interaction test between the two reputation variables reveals that the market reacts more to the firm's forecast reputation when its repurchase reputation is low. 
Additional analyses suggest that when a firm announces a share repurchase program for the first time (i.e., when there is no repurchase reputation), investors turn to the forecast reputation within the firm as an alternative source of reputation, on which the credibility of repurchase announcements is assessed. Overall, our study provides evidence that firms establish a reputation in the market through multiple sources of announcements. |
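A minimal sketch of the kind of estimation described in this paper is given below, assuming a pandas DataFrame with the paper's variable names. The data are simulated, most control variables are omitted, and plain OLS is used in place of the paper's Tobit specification for the completion-rate model purely to keep the example short, so this is not the authors' code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated announcement-level data using the variable names defined in the paper.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "CAR": rng.normal(0.04, 0.05, n),
    "CompRate": np.clip(rng.normal(0.77, 0.25, n), 0, 1),
    "Reputation": rng.normal(0.0, 0.03, n),
    "LagCompRate": np.clip(rng.normal(0.76, 0.25, n), 0, 1),
    "PlanSize": rng.normal(0.022, 0.01, n),
    "LnMVE": rng.normal(11.0, 1.5, n),
})
df["LowLagCompRate"] = (df["LagCompRate"] <= df["LagCompRate"].quantile(0.25)).astype(int)

# Completion-rate model (estimated as a Tobit in the paper; OLS here for brevity).
completion_model = smf.ols("CompRate ~ Reputation + LagCompRate + PlanSize + LnMVE", data=df).fit()

# Market-reaction model with the interaction between forecast reputation and the
# low-repurchase-reputation dummy (the paper's key interaction test).
car_model = smf.ols("CAR ~ Reputation * LowLagCompRate + LagCompRate + PlanSize + LnMVE",
                    data=df).fit(cov_type="HC1")
print(completion_model.params)
print(car_model.summary().tables[1])

Under this layout, the coefficient on the Reputation:LowLagCompRate interaction corresponds to the incremental effect of forecast reputation for firms with low repurchase reputation that the paper tests in Table 8.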
316 | Melanotic neuroectodermal tumour of infancy: A report of two cases | We present two cases of Melanotic Neuroectodermal Tumour of Infancy, a rare but clinically and histopathologically distinct benign tumour of neural crest origin, found chiefly in new-born infants.It was first described by Krompecher in 1918, who named it congenital melanocarcinoma .Historically, the tumour was known by many terms, including melanotic epithelial odontoma, pigmented teratoma, retinal anlage tumour and melanocytoma.The current designation of melanotic neuroectodermal tumour of infancy was adopted in 1992 by the World Health Organisation classification of odontogenic tumours .Management of this rapid growing, locally aggressive tumour entails complete excision with a safety margin of 0.5–1 cm.Adjuvant chemotherapy and, to a lesser extent, radiotherapy can be utilised in conjunction with surgery.Larger lesions that cannot be resected primarily may benefit from neoadjuvant chemotherapy .In this retrospective case series, we report two cases of MNTI that presented at our unit within a short period in 2015.This case series aims to shed a light on this rarely-reported lesion and the prompt surgical intervention which was curative in both patients under our care.This case series has been reported in line with the Preferred Reporting of Case Series In Surgery criteria .Patient one, a 3-month-old female patient, presented in March 2015.Her parents had noticed a rapidly growing maxillary swelling during the previous month.The patient’s medical history was insignificant.On examination, a firm swelling measuring 3 × 4 cm was detected on the anterior maxilla.The overlying mucosa was ulcerated in the middle, with a deciduous incisor exfoliating through the lesion.Multislice Computed Tomography revealed a well-defined osteolytic lesion encroaching on the right anterior maxillary wall.Incisional biopsy, performed by a team led by author FAM, confirmed a diagnosis of melanotic neuroectodermal tumour of infancy.Subsequently, a second surgery was performed in April 2015, with tumour excision via a transoral approach.Possibly due to the conservative nature of the surgical excision and/or tumour seeding, a recurrence of the lesion occurred four months later in August 2015.Via a Weber Ferguson approach, a right subtotal maxillectomy was performed to resect the recurrent tumour with a safety margin of 1 cm.Histopathology affirmed the diagnosis of MNTI.The patient’s subsequent recovery was uneventful; she has been followed up for over three years, with no incidence of recurrence clinically or radiographically.The second patient was a 4-month-old female infant, who presented to our unit in December 2015 after her parents noticed a progressively growing left maxillary mass of gradual onset.On examination, a well-defined firm mass of the left maxilla was detected.The lesion was roughly 4 × 5 cm in size and smooth in texture, with an ulcer measuring 1 × 1 cm located at the lesion’s surface.Computed Tomography revealed an expansile lesion of the left maxilla with poorly-defined margins.An incisional biopsy revealed a diagnosis of Melanotic Neuroectodermal Tumour of Infancy.Histologically, the specimen showed groups of round cells with abundant cytoplasm and pale nuclei, surrounding nests of neuroblast-like cells possessing scant or fibrillar cytoplasm.Immunohistochemistry confirmed the specimen was positive for both HMB45 and Synaptophysin.A thorough work-up was subsequently performed, including Computed Tomography of the chest, 
abdomen and pelvis to rule out any metastasis; this was negative for any tumour spread.Via a Weber Ferguson approach, a surgical team headed by author ME performed a left subtotal maxillectomy and the tumour was excised with a safety margin of 1 cm.The surgical defect was closed primarily with the use of a buccal fat pad and no reconstructive procedure was undertaken.A follow-up CT was taken 18 months postoperatively, with no recurrence detected.Accordingly, a minor residual soft tissue defect in the left premaxilla was closed via a local flap in July 2017.The patient has been followed up for over two years following the MNTI excision, with no signs of recurrence clinically or radiographically.First described as congenital melanocarcinoma by Krompecher in 1918, melanotic neuroectodermal tumour of infancy is a tumour of neural crest origin.The presence of vanillylmandelic acid is pathognomonic for the neural crest component.Up to 1990, roughly 200 cases had been reported in the literature; that number has since increased to roughly 486 cases reported to date.MNTI most commonly occurs within the first year of life, with a peak incidence between two and six months of age and 80% of patients under 6 months old.The lesion has a slight male predilection and an affinity for the head and neck, most commonly the maxilla, with 92.8% of cases occurring in the craniofacial region.MNTI may be found both centrally and peripherally, most commonly arising in the anterior maxilla, skull, mandible, upper limb, thigh and epididymis.Radiographically, MNTI often manifests as an expansile lesion with hazy or ill-defined margins, often causing displacement of adjacent teeth.The tumour is frequently radiolucent but may also present as either a radiopaque or mixed radiolucent/radiopaque lesion.Owing to their melanin content, soft tissue components of the lesion may appear hyperdense on Computed Tomography.Magnetic resonance imaging with gadolinium contrast is also helpful in imaging the tumour, which appears isointense on T1-weighted images, with intratumoural melanin appearing hyperintense.Histologically, this biphasic tumour is characterised by large, polygonal epithelioid cells with intracellular brown granular pigmentation and smaller neuroblast-like cells in a stroma of fibrous tissue containing fibroblasts and blood vessels.Immunohistochemically, cytokeratin, HMB45 and vimentin are positive in the larger epithelioid cells.Synaptophysin and enolase are positive in the neuroblast-like cells.The differential diagnosis for MNTI includes other small round cell neoplasms such as neuroblastoma, rhabdomyosarcoma, peripheral neuroepithelioma, Ewing's Sarcoma, myeloid sarcoma, melanoma and lymphoma.The locally aggressive nature of this tumour initially led authors to believe it was malignant, although it exhibits malignant transformation in only 6.5%–6.97% of cases.Metastatic MNTI most frequently spreads to regional lymph nodes and is often fatal.Although Melanotic Neuroectodermal Tumour of Infancy is somewhat rare in the literature, most authors agree with regard to surgical management; the gold standard is wide excision with clear margins.A safety margin of 0.5–1 cm has been reported to suffice in extensive lesions.However, it must be mentioned that a systematic review by Rachidi et al found no difference in recurrence rates between patients treated by curettage only and patients who underwent resection.Adjuvant and neoadjuvant therapy may be utilised in cases of recurrence, cases exhibiting malignant transformation and larger
lesions unamenable to primary surgical intervention.The tumour has a frequently reported recurrence rate of up to 20%, but recurrence rates of up to 60% have been cited.A systematic review by Rachidi et al showed that the recurrence rate for tumours seen in patients younger than two months of age is quadruple that of tumours seen in patients 4.5 months of age and older.Recurrence is also reported to occur more commonly within four weeks of surgery.This may be due either to incomplete surgical excision, tumour dissemination during surgery or multicentricity.Kruse-Lösler et al reported that the relative risk of recurrence appeared to be highest in tumours occurring in the mandible, which showed recurrence rates of up to 33%, compared to 19.3% in maxillary lesions.Different prognostic factors have been hypothesised for MNTI.Higashi et al reported that the percentage of smaller neuroblast-like cells in the neoplasm is directly proportional to tumour aggressiveness.Accordingly, MNTI with abundant neuroblast-like cells are rapid-growing, while tumours with predominantly large cells are slow-growing.Another potential prognostic factor for tumour aggressiveness is a neuroblast-like cell population staining positive for Ki-67 & CD99 immunohistochemically.Due to this high rate of local recurrence, numerous authors emphasise the importance of monthly follow-up appointments for the first year postoperatively, complemented with an annual MRI of the tumour site.We encountered two cases of MNTI within a relatively short period of time at our unit.Both patients were female, and the principles of immediate intervention and total excision with a safety margin were adhered to in both cases.One patient exhibited recurrence following initial surgical excision, most likely attributable to incomplete surgical excision.Following a subsequent resection with a 1 cm safety margin, she has been followed up for over three years with no signs of recurrence.Although the postoperative follow-up period for both patients is relatively short, our management of both cases has yielded satisfactory results thus far.Ideally, a longer follow-up period is required to reach more concrete conclusions.Further understanding of this tumour on a microscopic level is also needed to determine clear, unequivocal prognostic factors for melanotic neuroectodermal tumour of infancy.A potential risk faced when treating locally aggressive lesions such as MNTI is treating them as malignant lesions, resulting in overly aggressive resection.This can subsequently limit the patient's postoperative reconstruction options and, ultimately, quality of life.This emphasises the importance of finding a balance between surgical excision and preserving a healthy tissue bed for future rehabilitation.No conflict of interest to report.No funding bodies or sponsors were involved in this study.Ethical approval was obtained from the Department of Oral & Maxillofacial Surgery, Nasser Institute Hospital.Patient 1: Written informed consent was obtained from the patient's guardian for publication of this case series.Patient 2: Written informed consent was obtained from the patient's guardian for publication of this case series.Research Registry UIN: researchregistry4251.Shady Abdelsalam Moussa, Corresponding Author.Not commissioned, externally peer reviewed. | Introduction: Melanotic neuroectodermal tumour of infancy (MNTI) is a benign tumour of infancy, most commonly affecting the head and neck region.
First described in 1918, less than 500 cases have been reported in the literature. MNTI is aggressive in nature & has a high rate of recurrence. Presentation of cases: In this retrospective case series, we report two cases of MNTI that presented at our unit; both cases were managed by wide excision and have been followed up uneventfully for over two years. Discussion: MNTI has a recurrence rate of up to 20%. Patient's age can play a significant role in recurrence rate. Although this neural crest tumour is somewhat rare in the literature, there is a consensus with regards to surgical management; the gold standard remains to be wide excision with safety margin. Select cases may benefit from adjuvant and neoadjuvant therapy. Conclusion: Owing to its locally aggressive nature and high recurrence rate, prompt diagnosis and surgical intervention is advised in cases of MNTI. Further understanding of this tumour is needed on a microscopic level in order to determine clear prognostic factors. |
317 | Reassessing the role of internalin B in Listeria monocytogenes virulence using the epidemic strain F2365 | Listeria monocytogenes is a facultative intracellular bacterium that causes listeriosis.After ingestion of contaminated food, L. monocytogenes disseminates to the liver, spleen, brain and/or placenta.L. monocytogenes infections can be fatal, as exemplified by the 2017–2018 outbreak of listeriosis in South Africa affecting 1060 patients, 216 of whom died.Strains of L. monocytogenes are grouped into lineage I, lineage II and lineage III.Major listeriosis epidemics have been associated with lineage I strains.However, most reports investigating listeriosis pathophysiology have essentially studied strains from lineage II.The most important virulence factors of L. monocytogenes strains are encoded in the inlA-inlB locus and in the pathogenicity islands LIPI-1, LIPI-3 and LIPI-4.The inlA-inlB locus encodes internalin A and internalin B, two bacterial surface proteins that bind the host cell receptors E-cadherin and Met, respectively, to induce bacterial uptake into nonphagocytic eukaryotic cells.Expression of the inlA-inlB locus and LIPI-1 is regulated by the transcriptional regulator PrfA.Importantly, the strain EGD displays a PrfA mutation leading to constitutive production of InlA and InlB.However, one isolate carrying a PrfA mutation that leads to the constitutive production of InlA, InlB and LIPI-1 virulence factors has been found in a L. monocytogenes variant that diverged from a clinical isolate.All studies performed to understand the role of InlB in deep organ infection have used the EGD strain.While a clear contribution of InlB to placental invasion has been demonstrated, for spleen and liver infections either a contribution of InlB has been observed in conventional mice or no contribution has been observed in a transgenic humanized E-cadherin mouse model.The genome of the lineage I strain F2365 responsible for the 1985 California outbreak, one of the deadliest bacterial foodborne outbreaks ever reported in the United States, shows that the F2365 isolate carries a nonsense mutation in inlB.We thus decided to restore the expression of InlB in the F2365 strain and to examine the consequences of InlB expression during in vitro and in vivo infections.An isogenic mutant strain containing a functional InlB was used.The InlB amino acid sequences of L.
The InlB amino acid sequences of L. monocytogenes EGDe and F2365 share 94% identity. Cell infection was performed as previously described using multiplicity of infection values of 2, 5 or 25. Luciferase reporter experiments were performed by creating a transcriptional fusion, cloning the 308 nucleotides upstream from the inlB initiation codon into SwaI- and SalI-digested pPL2lux as described. For in vivo bioluminescence experiments, mice were infected orally with 5 × 10^9 CFU of F2365 InlB+ inlB::lux as described elsewhere. Mouse infections were performed intravenously with 10^4 CFU of the indicated strain as reported elsewhere. Half of each organ was used to assess bacterial load, and the other half was used for histopathologic analysis at 72 and 96 hours after infection. This study was carried out in accordance with French and European laws and was approved by the animal experiment committee of the Institut Pasteur. A point mutation was introduced at inlB codon 34 to generate an F2365 strain carrying a functional inlB, termed F2365 InlB+. Both the F2365 and F2365 InlB+ strains were tested for entry into epithelial cells which express only the InlB receptor Met, epithelial cells expressing both the InlB and the InlA receptors (Met and E-cadherin, respectively), or RAW 264.7 macrophages. Quantification of the number of viable intracellular L. monocytogenes showed that in HeLa and JEG-3 cells, the F2365 InlB+ strain was ≈9-fold and ≈1.5-fold more invasive than F2365, respectively. We thus report for the first time that chromosomal restoration of InlB promotes a gain of entry associated with the presence of the InlB receptor Met. In macrophages, the F2365 InlB+ and F2365 strains invaded similarly, showing that InlB does not play a role in entry into phagocytic cells. In L. monocytogenes EGD, inlA and inlB are transcribed in vitro both individually and in an operon by PrfA-dependent and -independent mechanisms. Here, we investigated whether inlB is transcribed in vivo from its own promoter in the epidemic lineage I L. monocytogenes F2365. For this purpose, we fused the 308 nt located upstream from the inlB initiation codon to a Lux reporter plasmid and integrated it into the chromosome of the F2365 strain. Upon oral infection of 12 conventional BALB/c mice with 5 × 10^9 L. monocytogenes F2365 InlB+ inlB::lux, no bioluminescent signal was detected in organs of infected animals from 24 to 72 hours after infection. To rule out the possibility that the absence of bioluminescence in the liver and spleen could be due to a low number of CFUs in these organs, the two organs were dissected, homogenized, serially diluted and plated onto brain–heart infusion plates. Orally infected mice yielded ≈1 × 10^7.5 CFU in the liver or spleen at 48 hours after infection. To analyse the potential contribution of InlB to F2365 InlB+ virulence, we performed intravenous inoculations of BALB/c mice with the F2365 and F2365 InlB+ strains. In all the organs tested at 72 hours, bacterial counts for F2365 InlB+ were significantly higher compared to the F2365 strain. Furthermore, histopathologic assessment showed that the F2365 strain displayed a reduced number of necrotic foci in the spleen and liver 72 hours after infection compared to the F2365 InlB+ strain (Supplementary Fig. 3). The ratio of necrotic area to total area was significantly higher in the spleen of mice infected with the F2365 InlB+ strain at 72 hours.
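The organ loads and invasion ratios reported here rest on standard quantification arithmetic: colony counts from serially diluted homogenates are scaled back to the whole organ, and intracellular counts are normalised to each strain's inoculum before strains are compared. The minimal Python sketch below illustrates that back-calculation; the colony counts, dilution factors, volumes and inocula are hypothetical placeholders, not values from this study.

# Illustrative CFU back-calculation from serial-dilution plating and the
# fold-invasion normalisation; all numbers are hypothetical examples.

def cfu_per_organ(colonies, dilution, plated_ml, homogenate_ml):
    # colonies counted on one plate -> total CFU in the whole organ homogenate
    cfu_per_ml = colonies * dilution / plated_ml      # CFU per ml of undiluted homogenate
    return cfu_per_ml * homogenate_ml                 # scale to the full homogenate volume

# e.g. 32 colonies on a 10^-5 dilution plate, 0.1 ml plated, 10 ml homogenate
liver_load = cfu_per_organ(colonies=32, dilution=1e5, plated_ml=0.1, homogenate_ml=10)
print(f"liver load ≈ {liver_load:.1e} CFU")           # ≈ 3.2e+08 CFU (log10 ≈ 8.5)

def fold_invasion(cfu_test, inoculum_test, cfu_ref, inoculum_ref):
    # intracellular CFU normalised to each strain's inoculum, relative to the reference strain
    return (cfu_test / inoculum_test) / (cfu_ref / inoculum_ref)

print(fold_invasion(cfu_test=9_000, inoculum_test=1e6, cfu_ref=1_000, inoculum_ref=1e6))  # 9.0

In this convention, the ≈1 × 10^7.5 CFU reported above simply corresponds to a mean log10 CFU of about 7.5 per organ.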
In our study, chromosomal restoration of InlB promoted a gain of entry into eukaryotic cells associated with the presence of the InlB receptor Met. Previous studies performed in our laboratory demonstrated that fewer than ≈10^7 L. monocytogenes CFUs distributed across the entire intestine were sufficient to produce a bioluminescent signal in this organ. The present results therefore suggest that in vivo, inlA and inlB are transcribed in an operon from a promoter located upstream of inlA. The present in vitro and in vivo results demonstrate that InlB expression increases the virulence of the F2365 InlB+ strain and show that InlB plays an essential role in spleen and liver infection by lineage I L. monocytogenes. InlB is highly conserved in the genome of L. monocytogenes, suggesting a critical role for this molecule during infections. In conclusion, it could be speculated that a spontaneous mutation in InlB could have prevented more severe L. monocytogenes disease during the 1985 California outbreak. This work was supported by the Institut Pasteur, the Institut National de la Santé et de la Recherche Médicale, the Institut National de la Recherche Agronomique, Université Paris Diderot, grants from Région Île-de-France, the Institut Pasteur ‘Programmes Transversaux de Recherche’, Agence Nationale de la Recherche, Fondation Le Roch Les Mousquetaires, the European Research Council and Région Île-de-France. PC is an international senior research scholar of the Howard Hughes Medical Institute. JGL is supported by a ‘Ramón y Cajal’ contract of the Spanish Ministry of Economy and Competitiveness. All authors report no conflicts of interest relevant to this article. | Objectives: To investigate the contribution to virulence of the surface protein internalin B (InlB) in the Listeria monocytogenes lineage I strain F2365, which caused a deadly listeriosis outbreak in California in 1985. Methods: The F2365 strain displays a point mutation that hampers expression of InlB. We rescued the expression of InlB in the L. monocytogenes lineage I strain F2365 by introducing a point mutation in codon 34 (TAA to CAA). We investigated its importance for bacterial virulence using in vitro cell infection systems and a murine intravenous infection model. Results: In HeLa and JEG-3 cells, the F2365 InlB+ strain expressing InlB was ≈9-fold and ≈1.5-fold more invasive than F2365, respectively. In livers and spleens of infected mice at 72 hours after infection, bacterial counts for F2365 InlB+ were significantly higher compared to the F2365 strain (≈1 log more), and histopathologic assessment showed that the F2365 strain displayed a reduced number of necrotic foci compared to the F2365 InlB+ strain (Mann-Whitney test). Conclusions: InlB plays a critical role during infection of nonpregnant animals by a L. monocytogenes strain from lineage I. A spontaneous mutation in InlB could have prevented more severe human morbidity and mortality during the 1985 California listeriosis outbreak. |
318 | Household inclusion in the governance of housing retrofitting: Analysing Chinese and Dutch systems of energy retrofit provision | One of the key challenges of sustainable development is effectively retrofitting existing old urban neighbourhoods .Existing urban neighbourhoods and their buildings are particularly prioritized because they account for 32% of global carbon emissions .To curtail urban carbon emissions, improving apartment buildings by means of retrofitting offers the most cost-effective way to reduce global building carbon emission by at least 25–30% at the end of the 2020s .Especially, the energetic transformation of housing comes with substantial improvement of householder’s quality of life and wellbeing of low-incomes .Local governments, construction companies, private developers and housing associations are the main stakeholders provisioning large-scale urban retrofitting of housing estates.These stakeholders are responsible for the financing, production and distribution of retrofit improvements.Unfortunately, a series of governmental failures in China and market shortfalls in the Netherlands hinder exploiting the full prospective of retrofit provision .The ambition is to make fifty to sixty percent of the existing residential housing stock energy efficient towards energy label B in Dutch social housing and a theoretical energy saving target of 50–65% in Chinese housing estates .Retrofit providers in both countries have to deal with liberalisation, decentralisation and limited financial resources which problematizes the scope of retrofitting to tackle energy saving standards and householders’ demand for quality of life.Inclusive community participation and ways of democratising decision-making are restricted due to the current standardisation of limiting financial frameworks in retrofitting .Currently, decision-making is largely based on pre-determined objectives after centralised decisions have been taken.Retrofit providers frame the management of supply chains as mainly linear with household-consumers only playing a role at the end of the chain .This contributes to an exclusive orientation on ‘upstream’ systemic dynamics and an overall lack of attention for ‘downstream’ perspectives which strengthens the separation between retrofit provision and energy consumption.A study revealed that after energy retrofitting of housing complexes, the realised energy savings are 30–40% lower than theoretically estimated which points to possible unused potential in retrofit provision.Also in both China and the Netherlands, current retrofit providers ignore the fact that the institutional mechanisms, expert-led organisation of retrofit decision-making and the management responsibilities of retrofit packages influences the daily activities of householders after the retrofit .The ignorance leads to a poor alignment between the technological and the social side of retrofitting which could lead to barriers in the use of retrofit packages.Little is known about how retrofit governance interferes in the ways of executing domestic routines like heating, cooling, ventilating and waste treatment .Ultimately these domestic practices determine whether the home is environmentally sound and energy efficient or not .The problematic division between the sphere of retrofit provision and the seemingly separate sphere of energy consumption in China and the Netherlands necessitates a re-shifting of roles between the government, market and consumers .Conventional approaches fail to capture the diversity 
in forms and levels of housing systems.In academic studies, the blurring in the institutional order of purely market-based, public-based or community-based governance arrangements is not picked apart into dimensions of differentiation through retrofit supply chains.It is not just the fact that retrofit packages are generated by different institutional stakeholders but also that the variety of new institutional orders lead to different retrofit packages, responsibilities and relations between providers and consumers in many diverse ways.Here, the missing link is to retrofit housing not only materially in building elements and technologies, but to accommodate retrofit packages in domestic practices during the retrofit process and afterwards .Retrofit governance in this study builds on Dowling et al. who draw attention to the interplay between the development of policies, markets, technologies and participation activities to unravel the way energy consumption is co-constructed and co-managed in a multi-actor and multi-level context.Analysing the socio-technical systems of retrofitting is a prerequisite to bridge the governance implications of the sustainable city to householders’ everyday lives .Building upon these studies, we take as starting point that the effectiveness of urban retrofitting arises from the different ways interactions are organised between retrofit providers and household-consumers.Environmental innovation, decentralisation and liberalisation in supply chains have led to hybrids of public, private and community governance arrangements in a variety of financial resources, retrofit packages, and decision making power vis-a-vis consumer roles.Beyond an artificial production-consumption division, the “system of provision” approach pinpoints to various vertical ways of interaction as connective tissue between production and consumption.By doing so, the SoP approach unites all different forms of interaction by documenting a patchwork of structures, processes, agents/agencies and relations .This framework does not isolate aspects of production and consumption.The structures in institutional settings for financing determine power in the organisation of decision-making .This becomes visible in the distribution of responsibilities for retrofit improvements between provisioning and consuming agents.Retrofit practices in the organisation of decision-making and the distribution of responsibilities for retrofit improvements co-shape the well-being and energy saving in domestic practices .To draw attention to the different ways in which production and consumption are linked, this paper offers an overview of emerging systems of retrofit provision using cross-national qualitative case studies .Each of these systems of retrofit provision appears to represent a unique institutional and social configuration to embody different principles of demand management .The main goal of this study is to bring householder-consumers back into view and to acknowledge their multiple roles for co-shaping sustainable transitions in the more public-led Chinese and the more private-led Dutch retrofitting of housing estates.Accordingly, this paper recognises that the different forms of interaction between providers and household-consumers in the management of infrastructures are fruitful points to analyse .Such an integrated analysis of regulations, technologies, supportive organisational-institutional frameworks and social practices is required to begin designing the pathways towards a more sustainable energy 
consumption at the domestic level .This paper aims to answer the question: How do interactions between providers and consumers in different systems of retrofit provision affect the formation of sustainable retrofit practices in China and the Netherlands?,We assess how urban retrofitting is governed by whom, for what reason, and with which policy outcome at the level of householders.Using this assessment, we evaluate the systems of provision in Chinese and Dutch retrofitted housing estates for low-incomes.The social housing sector of Amsterdam is chosen as typical case study area in the Netherlands.The large size of China and differences between governance arrangement among provinces, however, make it necessary to focus on more than one city.For this reason, retrofit is studied in affordable former public housing in the stringent governmental organised and centrally heated megacity of Beijing and Mianyang as smaller, and more experimental people-oriented city without central heating.The outline of the paper is as follows: the second section introduces the system of provision approach resulting in an analytical framework.Methods are discussed in the third section.The fourth section gives an overview of retrofit governance in systems of retrofit provision in Mianyang, Beijing and Amsterdam.The fifth section provides the main conclusions with regard to the governance in systems of provision and discusses retrofit policy recommendations.To be able to conceptualise complexities and dynamics of specific supply chains, the SoP approach was introduced by Fine and Leopold and developed further in Fine et al. as a methodologically and theoretically open approach.As an analytical tool to map provider-consumer interaction, the categories of this comprehensive framework partly overlap because they hang together in a system as “complex wholes” that cannot easily be reduced to component parts.To assemble the elements and linkages to configure particularly decisive sub-aspects for SoP functioning, the specificities of historical, economic, socio-cultural, geographical and material dimensions need to be followed across structures, processes, agents/agencies and relations.By operationalizing the SoP approach to housing retrofitting in different governance contexts, the characteristics of institutional structures are employed with respect to retrofit financing in 2.1.In 2.2 we specify how these institutional structures determine power in decision-making processes.The next section focuses on the dealings between agents/agencies in the responsibility for retrofit improvements.The way relations between the organisation of decision-making and responsibilities for retrofit improvements co-shape the formation of sustainable retrofit practices is addressed in 2.4.The four categories are brought together in the paper’s analytical framework in Section 2.5.Structures in a SoP approach intercede in numerous institutional ways through the chain of provision by creating program standards, resources and regulatory frameworks.These social and economic interventions structure financing, production, distribution and consumption.Structural divisions have originated among public and private supply and also on public and private demand, not at least in patterns of ownership, control and delivery.Structures of provision are part of historically-progressed and socially-particular constructions with differing public-led or private-led dominance.The SoP approach views markets as organised to a significant extent by the state in 
conducts that are ceaselessly developing .In the structures of housing retrofit, the institutional settings to financing have become especially decisive because arrangements of the market and community are becoming more prominent which also changes the role of the state in retrofitting.The rules and resources can confirm certain patterns but structures are never fixed-for-ever .State intervention in retrofit provisioning has historically been justified by the role of housing as a basic human necessity and place of shelter .Liberalisation requires a different but not a diminished role for government stakeholders .More concretely, the privatization of social housing and increasing housing homeownership make financial settings, such as financial burden sharing patterns and responsibilities between participating stakeholders, increasingly prominent .Privatisation in the housing sector commonly refers to ownership of economic and financial assets, which easily leads to issues around distribution of costs and benefits in retrofitting.Retrofit governance is changing to project-driven collaborative modes of provision in either more autonomous, universal, simple, integrated or marketised arrangements .This urges the call to balance the roles of householders and providers in the institutional setting for financing, consisting of regulatory arrangements, funding frameworks, financial burden-sharing patterns, consumer incentive structures and consumer inclusion methods.Processes in a SoP approach consist of various modes of financing, production, distribution and marketing at the providers’ end of the chain which interact with certain modes of access and use at the consumer’s end of the chain.Processes are decisive with specific organisational mechanisms for consulting, resident representation and power distribution .There are distinct sets of governing procedures with specific power dynamics between state, market and community in each stage of the process cycle from production to consumption.The SoP approach argues that within the institutional structures certain processes unite a specific pattern of production with a specific pattern of consumption .The process organisation in retrofit decision-making is especially decisive due to differing power dynamics of private, public and community stakeholders in subsequent phases.The main phases in the retrofit process are technical audit, design of plan options, construction, commissioning and occupancy or use .Providers’ interference in these phases are complicated because the process interferes with existing inhabited buildings by householders who need to give a majority approval.This interference is specified in technological change as a more or less one-way process of technology transfer or as a two-way process of technical learning and spillover .To understand dynamics in the socio-cultural process-cycles, which form chains of retrofit provision, focus is needed on the opportunities for creative dialogues with householders .Involving householders in retrofit design can be a solution to overcome problems of pre-bound and rebound effects, non-acceptation, sabotage or misuse of technology .The organisation of decision-making between providers and consumers is indicated by core procedures of the retrofit, project timelines, consulting mechanisms, delegated powers to consumers and models of consumer demand management.Different agents/agencies in a SoP approach have different responsibilities in the configuration of materials, post-consumption feedback, 
control and conservation .Agents/agencies compete and distribute control to manage material attributes of technology networks along the chain of provision.The SoP approach focuses on emerging responsibility outcomes from settlements between internal groupings of agents/agencies in three domains: citizens, the private sector and the state .This leads to differing symbolic meanings for the physical supply chain items .With regard to public, private and community agents/agencies, the specific scope of retrofit improvements is especially decisive due to differing responsibilities and roles for participating stakeholders.The management about the distribution and the use of retrofit innovations are determined between construction companies, local governments, housing associations and household-consumers to realise energy saving targets.Technology is not universal, neutral and independent but inherently social and part of societal dynamics by balancing energy saving and the quality of life .Some retrofit projects focus only on the “inside” of the housing and the painting and adding of exterior walls, bathroom retiling, new heating- and ventilation equipment while others give more attention to improving “outdoor” facilities in security, parking, garbage cans and sport facilities.Choices for technical control in retrofit packages interfere with existing and emerging domestic practices , and determine the specific roles householders have in controlling and maintaining their homes.The distribution of responsibilities for material improvements between providers and consumers depends on who are the key players; the scope of material interventions and problems in post-retrofit material control, and lastly, consumer needs in retrofit products and the technical representation of consumers roles.In a SoP approach, relations between social structures, processes and agents/agencies are shaping and being shaped by the retrofit practices of steering and empowering.Domestic practices are co-shaped by organised interactions during retrofit processes.Both socio-cultural and material aspects constitute stakeholders’ practices among the public, private and community domain.By locating consumption in the context of a chain of processes and structures brought together by relating practices of agents and agencies, the SoP approach opens the way to a more grounded interpretation of consumption .This stresses the importance of exploring retrofit practices co-shaping everyday consumption practices and the degree to which they are supporting or obstructing sustainable consumption.Relations between providers and consumers in different private, public or community governance contexts result from the specific organisation of decision-making and the distribution of agents’ responsibilities for material improvements.The specifics of these economic, institutional, technological contexts co-shape what householders do in their everyday lives in different forms of retrofit steering.In this light, it becomes clear that the differing visualisation and education methods by which retrofit providers accommodate householders’ practices in organising retrofit processes can either barricade or enable householders on the road to sustainable consumption .Finally, the problems and solutions in housing retrofitting do not only derive top-down but also arrive from the organised intermediation support and the everyday experiences of householders .The embedded nature of energy in the home requires a smooth formation of retrofit practices to 
capture collective routine behaviour of domestic tasks in heating, cooling, ventilating, cooking, washing and treating waste .The formation of sustainable retrofit practices is constituted by retrofit steering in visualisation tools and education services, consumer communication designs and consumer conflict management.Building on the theoretical inspiration in the work of Fine & Leopold and Fine et al. , we distinguish the following elements of focus for this paper on provider-householder interactions in retrofitting SoPs.In the table below we present how we operationalise the general SoP categories, by specifying determinants of retrofit systems of provision.The latter will be used as headings to organise the empirical sections.This research examines housing complexes, which have been retrofitted to meet higher energy- and life quality standards, as starting point to explore householder-provider interactions.New householder-provider interactions in housing retrofitting arise as a result of liberalisation and decentralisation leading to hybrid partnerships across the traditional state-market-society divisions.By acknowledging the broad variety of possible hybrid partnerships and to avoid a bias towards the specific circumstances of one city, this study focuses on multiple retrofit project cases in three different cities.Comparative case studies can identify similarities and differences between cases to provide more generalizable cross-case insights .The Amsterdam metropolitan region offers the embodiment of different retrofit projects with active citizen-involvement.Public social housing has mainly been developed during the last century and especially in Amsterdam.Today 30% of its housing stock is owned by semi-private housing associations, who are privatised in 1994 but still have a public goal to provide housing for low-incomes .To contrast the Dutch retrofit project cases, the decision is made to focus on China with the largest building energy consumption and residential retrofit challenges of the world .Today approximately 19% of its housing stock consists of affordable former public housing as a result of different housing reforms in the 1990s .Retrofit projects in Beijing, a city with more than 20 million inhabitants, are illustrative for its top-down public-private leadership as the city hosts the government.However, to obtain better insight in the differences in Chinese housing retrofitting governance it was necessary to choose additional retrofitting project cases outside Beijing.We choose to include retrofitting projects in Mianyang, a smaller city with roughly 1.3 million inhabitants.This is a unique so-called Science and Technology city in the Chinese Torch program and a representation of Chinese experimental people-oriented public governance.After selecting the three cities, a quick scan was executed into the specifics of different ways of retrofit governance to make sure the cities are comparable to the extent of our variables.Similarities between the three cities are that the financing of retrofitting is institutionally set in regulatory arrangements around apartment building programs.Decision-making is organised in terms of core procedures around technical audits; resident committees and legislation about obligatory majority approvals concerning planned retrofit projects.Also the responsibilities for material improvements are mainly governed by the institutional stakeholders as key players.In the formation of sustainable retrofit practices there is organised support for 
intermediation in retrofit steering.Apart from these similarities the three cities differ in:The funding frameworks and financial burden-sharing patterns.The project timelines and consulting mechanisms.The scope of the material interventions and problems in post-retrofit material control.The visualisation tools and education services.We realised that all cities have a significant contribution to make to the research as they are located on a different angle of the institutional triangle in sustainable development governance .The analysed projects in the three cities represent three typical models of housing retrofit provisioning for housing estates.The overall methodological framework is based on guidelines for case studies of specific geographical disclosed neighbourhoods.This paper is primarily based on 45 expert interviews with 15 interviews executed in each city.Using semi-structured interview techniques, data was gathered from local government officials, housing association officials, construction companies, and private developers.These interviews of around sixty minutes were directed on the one hand to describe and to understand the governance of retrofit projects, and on the other hand to identify broader trends of urban retrofitting in the three cities.Specifically, topics of the interviews have been: 1) general questions about the institutional structure, financial burden sharing and specific regulations, etc.; 2) questions targeting at the planning- and decision-making process, such as who initiates the project, who are mainly involved and what is the role of end-users; 3) questions aiming at design and construction, such as the objective and scale of the retrofitting housing project, and which distribution of responsibility for retrofit improvements is in place, etc.; 4) questions aiming about the intermediation for the use of the retrofitted house, including who is in charge of the apartment management and maintenance and how do the occupants evaluate the retrofitted apartments.The interviews were transcribed, and along with the notes, coded and analysed by identifying key themes, concepts and specific phrases, with reference to the conceptual framework.The interview findings have been triangulated with site observations of visited retrofitted neighborhoods, which helped to understand the physical retrofit improvements and the practices of providers and householders.Occasionally observing interactions between retrofit providers and householders has helped to analyse not only their sayings but also their doings.These findings have been triangulated with reviews of policy documents to strengthen the validity of the generated data.In the public social partnerships of Mianyang, the limited public financial support reveals the boundaries of what engaged collective householders can achieve by themselves in simple fixed retrofit packages for energy saving.However, the lack of financial sources may also contribute to a larger role of resident committees in long-term governance and supervision.Public-led retrofitting of housing estates for urban low-incomes is at an early stage in the city of Mianyang.In 2015, the local government launched the first four-year program of “Urban Old Community Governance and Work Guidance for Retrofit” to target existing residential communities consisting of approximately 50–200 households and which were built before 2000.These housing communities were built for employees of specific companies, many of which have now been closed and therefore not liable for 
any financial contribution to the retrofit. Financing retrofitting is largely dependent on public funding from governments. The city government of Mianyang invested ¥130 million in the new four-year program aiming to retrofit 430 communities in the urban districts. The different government subsidies resulted in limited project budgets averaging ¥275.000. Based on financial information from the case studies, this means 75–300 ¥/m2. Retrofit governance is implemented by assigning a significant role to district governments and resident committees. Within the boundaries of district governments, these resident committees help to implement the government agenda with only ¥25.000 as their own freely usable budget. After the retrofit, the resident committees organised their own community to set up funds for their own management, cleaning, maintenance and security. Besides these self-organising communities, the householders needed to pay a small amount for the retrofitting of their properties. A one-time maintenance contribution of 10 ¥/m2 to a general maintenance fund is obligatory. In one of the 12 urban districts the district government is exploring whether householders can contribute 20% of the total project costs in the next batch of retrofitting. A common timeframe for retrofitting processes in Mianyang includes six months for establishing a resident committee and obtaining householders' agreement, and three months for executing the construction of the retrofit. The district government created a provisional program of requirements based on an influential techno-economic examination of old buildings suitable for retrofitting. The first step to be taken by the householders is to represent themselves in a resident committee of 5, 7 or 9 residents. The resident committee, together with the sub-district government officials, informed householders about the retrofit via introductory meetings and posters. All householders are requested to fill in an open-ended survey to give their recommendations and suggestions for the retrofit project in order to initiate an application to the district government. Approval by at least 2/3 of the householders is compulsory for application of the retrofit project. After the householders' application, the district government decided on its reasonableness and made a specified retrofit plan. The specified retrofit plan also needed approval from 2/3 of the householders. The retrofit construction had to be carried out by a qualified construction company carefully chosen via an open tender within specified financial limits. Retrofit construction works needed to be verified in detail, ratified by a supervision company and monitored by the resident committee. When the retrofit construction was finished, the district government evaluated the used budgets. The district government used the resident committee as a "representative bridge" towards the householders. The last step for the householders in the procedure was to encourage the resident committee to set up their own funding for a limited form of long-term "self-governance". Evaluation of the project is planned after one year. The government takes responsibility for integrated urban retrofitting. Table 4 illustrates that priority is given to improving basic quality of life and repairing earthquake damage, not to the environment. This is combined with improving the environment and preserving cultural heritage. An example of fixed products to enhance cultural values is the retrofit of a community gate in traditional style. The beautification of outdoor spaces
and communal facilities has often been neglected because of lack of investment power in the past, resulting in a low basic quality of the apartments, especially windows and window shades.Nowadays, the district government emphasised the most urgent collective needs of householders.These highest needs, as mentioned in the questionnaires, are leaking and dilapidating of their sewage systems and roof leakages.In many cases the retrofitting projects in this city with a hot-summer-cold-winter climate without central heating focused only on one superficial environmental measure, like sockets for e-bikes, also because energy saving standards are not so strict.Regarding the energy saving, target setting is based on the level of theoretical energy efficiency in the 1980s.The local governments obliged the construction company to advance the buildings to 50% energy reduction in relation to the theoretical energy use of the 1980s while the reduction target of Beijing is recently changed to 75%.From the urban planners’ point of view the “inside” of their individual apartments is the householders’ own responsibility.This made energy saving measures, like efficient solutions for heating and ventilation, in new windows and window shades largely dependent on the engagement of householders.Retrofitting in Mianyang is only recently introduced and has a simple chain of provision and limited financial resources.The simple character is expressed in the substantial limitations in available technical, financial and managerial resources.The current house-ownership structures give only limited opportunities to district governments to organise activities in the retrofit process and to interfere in private properties.From an energetic point of view, urban retrofitting in Mianyang is largely restricted by the exclusion of energy saving improvements “inside” the apartments to accomplish the relatively low energy standards within the existing regime.Householders complain about still jamming windows, and because flexible window shades have not been introduced, there is a higher use of air-conditioning after the retrofit.As retrofit packages only target the outer parts of apartment buildings, and not the “inside” of the apartments, there is obviously no need for example houses to visualise retrofit improvements as posters are considered as satisfactory.In terms of organised activities, the district and sub-district governments focused on technical audits, resident questionnaires with some open questions, resolving consumer conflicts and occasionally organising meetings to educate residents in energy saving and waste treatment.The public space improvements can be destroyed easily by householders without perspective on long term maintenance.The main strategy of the urban planners of Lishan and Muzongchang was to stimulate new resident committees to cultivate pro-environment volunteerism and investments via self-governance of the community.This strategy came without strong incentives to innovate.The possibilities to invest in their apartment differ between householders.The resident committees organised and encouraged residents to behave more environmentally friendly.An example of the steering role for the resident committee in their daily practices was to check whether all garbage is put in the correct garbage can.They played parental and pastoral roles at the grassroots level, intervening and mediating domestic cooperation and regulating individuals’ life in relation to building improvement.The volunteers of the resident 
committee, who know all householders well because of a shared working history, organised a collective purchase of unified solar-protecting shades in the Lishan community. The public-private partnerships in retrofitting projects of Beijing have, compared to those in Mianyang, more public financial support available for retrofit packages. The lack of individual fine-tuning in the retrofit packages makes householders rather passive. However, the fact that there are still strong ties between householders as former employees and their former employers, who founded the apartment buildings, makes the former employees willing to act as informal retrofit ambassadors. Their contribution could potentially be a way to unburden all householders in the retrofit process. The Beijing government actively combines retrofit and renewal projects in large residential communities of former state-owned or enterprise-assigned public housing through a program launched in 2011. The program, named "Comprehensive Treatment of Anti-Earthquake and Energy-saving for Old Housing Areas", is led by the deputy vice-mayor, who decided to make community retrofitting the responsibility of the district governments. In case the community is related to a still existing work unit institution, the affiliated employer is sometimes able to contribute financially. Yet retrofits in Beijing rely to a large extent on public funding from the national to the local level to meet the energy performance requirements. Besides the ¥4.6 billion of subsidies from the central government to retrofit 400 million m2 in the cold climate zones arranged during the 11th Five-Year Plan period, the local government and districts of Beijing invested ¥30 billion in retrofit subsidies over three years. The local government contributed at least ¥100 for every retrofitted m2 in Beijing. The subsidy of the local government is higher compared to other areas in China. The costs for retrofitting in Beijing are on average between 250 and 2250 ¥/m2. Householders had to pay a small amount as a contribution; in one specific project costing 400 ¥/m2, only 10 ¥/m2. A main target group consists of large communities of approximately 300 up to more than 1000 households, inhabiting high-rise buildings. Additionally, a very small personal investment is required from the residents for the optional new windows and wall decorations. Other financial costs concern the removal of illegal constructions, like self-built shades. In most cases the householders could stay living in their apartment during the retrofit. Easy expropriation is counteracted by the new Property Law of 2007, which provides more reasonable compensation for removal or expropriation. The most common timeframe for a retrofit process consists of a pre-retrofit phase of 4–7 months and a retrofit construction phase of 3–5 months. The district government created a provisional program of requirements, using qualified technical companies to make an influential techno-economic examination of old buildings suitable for retrofit. After this, the district and sub-district governments organised meetings in the community house with the former work unit and sometimes appointed building representatives. In contrast to Mianyang, there are in many cases no bottom-up organised resident committees. To draw the attention of householders, announcement posters were put up near the building entrances. The sub-district government organised a demonstration visit to neighbouring projects in some of the cases or occasionally created an on-site example house with samples of retrofit
packages.After this, the governmental sub-district officials used questionnaires as consultation mechanism to ask for suggestions, recommendations from householders and approval for application.Compulsory for application of the retrofit project to the district government is an approval rate of at least 2/3 of the householders and 100% agreement for the introduction of elevators or sewerage improvements.However, householders had to decide individually on whether or not to approve the replacement of the current windows by more energy efficient ones.They could walk in the governmental office or in an installed kiosk in the courtyard of the building or text their decision using WeChat on cell phone.The district government decided on the financial budgets and selected the construction company from a list of 20 leading architectural companies who are certified to do retrofitting projects and are waived of bidding in a competition with other firms.The responsibility of the district government is to stimulate energy saving, to make the buildings earthquake-resistant and increase low-income citizens’ living conditions.In contrast to Mianyang, reducing energy consumption by retrofitting existing building stock is conceived as a priority in Beijing.Due to the cold climate and central heating system, the use of mass products to improve wall insulation is seen as easy win.Before the retrofit, condensation problems were common and householders described the temperature inside the apartments after the retrofit often as too cold or too hot due to imbalance in the heating systems.Regarding the energy saving, target setting is based on the level of theoretical energy efficiency in the 1980s.Since 2015, the local governments obliged the construction company to advance the buildings to a 65%, very lately even 75% energy reduction as compared to the theoretical energy use of the 1980s8 .Besides energy saving, occasional objectives from the householders are included.A shift of objectives is made in the district governments from economy first to people first in the retrofit of old housing projects.This directed to more integrated strategies and objectives across many aspects.However, adjustable central heating, or programs for more energy efficient air-conditioning are not introduced.For the passive householders, comfort reasons are the main motive to appreciate the retrofit, rather than a lower energy bill.The latter is already rather low in relation to living costs, as a result of high prices of apartments and a heating bill which is still based on the floor size of the apartment instead of actual consumption of heat.The large scale of retrofit communities in Beijing, with normally more than 300 residents in high-rise apartment buildings leads to a largely universal provision chain and hampers the establishment of sustainable retrofit practices.Instead of focussing on the varying needs in different apartments, householder needs are largely considered as a universal given and met by standardised products that must be used in all projects at all costs.Financial limitations restricted a more far reaching retrofit and led to a situation in which heating is still centrally controlled on the building block level.So some householders still need to open their windows when it is too hot in the apartment while others still need to heat with their air-conditioning devices.Post-retrofit information about the poor performance of individual apartments is often inaccurately managed.Heating company managers pursue their own 
utility rather than serving the public interest, and continue to make householders dependent. This shows the differing perceptions of economic and technical efficiency. Consumer roles in retrofitting have always been largely captive, meaning that the range of alternatives to choose from is limited. This led to resistance from residents who felt that the public interventions would invade their "private" spaces and rights. Nowadays, retrofit providers and the former employers who founded the buildings still define the goals as well as the problems and put only limited effort into collecting feedback from the public. Participative policy-making focused on environmental behaviour is still at an early stage. Conscious steering of everyday practices was not observed in Beijing retrofitting, although visits to neighbouring communities or the use of walk-in houses to demonstrate the new windows and ceramic tiles did occur. Members of the resident committees are often not in the proximity of the communities because they commonly live elsewhere. Moreover, they largely need to represent the government, which makes it difficult to represent the differing interests of the large number of householders. Supervision companies and property management companies occasionally represented the householders in bringing in their needs, but they also have their own private interests. Based on shared information, television, WeChat and the internet, active householders are slowly organising themselves as informal retrofit ambassadors, especially around health issues, like elevators for the elderly. This is also shown in the rise of homeowner organisations. The voice of the differentiated community is poorly heard because of the absence of real bottom-up community-based organisations. In the private-social partnerships in Amsterdam we observed high funding abilities, leading to multiple retrofit packages for energy saving. The multiple options lead to complex negotiations on finances and technologies which are only understood by well-informed, technology-minded householders. However, social housing in Amsterdam encompasses many householder rights organisations and voluntary energy coaches. Their contribution can potentially counterbalance the economic and technical reasoning behind retrofit programs. Since the 1970s, housing associations in Amsterdam have been involved in the improvement of energy efficiency in housing estates using national government-tied targets and regulations. Recently, the national government concluded agreements with the housing associations of Amsterdam to raise the minimum energy standard of all 190.000 properties before 2020. The local government obliged the housing associations of Amsterdam to a large-scale efficiency improvement of 16.000 houses with low energy performance between 2015 and 2018. Housing associations are used to retrofitting serial dwellings of 50–250 apartments, like gallery entrance flats and apartment blocks built in the 1970s or before. The resources for financing the retrofit of these apartment blocks are provided by housing associations, supplemented with subsidies from the national government and the local government. The national government subsidises between €1.500 and €4.900 and the local government between €2.000 and €14.000 for every retrofitted apartment, depending on the scale of improvements. The retrofit costs are mostly around €30.000 for every apartment and in occasional cases around €100.000. Due to the private structure of housing associations, they cannot gain more subsidies from the government, making it rather
hard for housing associations to balance large amounts of investment capital at once.To provide a stable financial position for themselves, the housing associations sometimes need to sell units of the retrofitted apartments.Some householders are disturbed by the regulation for temporary re-housing.Although in most cases householders can stay in their houses during the retrofit, in case householders need to move out, they can receive approximately €5.000 as a compensation.Typically, the broad community participation in retrofitting leads to lengthy procedures to reach consensus with and between the residents.The duration of retrofit processes in Amsterdam is approximately two years until more than five years in complicated projects.The processes of retrofitting start with a provisional program of requirements and an influential techno-economic examination of housing qualities.The responsible housing association decides on the scheduling of the extensive maintenance and appoints a project team.A legal commitment is that a residents’ committee must be formed as representation of the householders.The resident committee is the discussion partner of the housing association with the help of non-profit tenant right organisations as qualified bridging partners.Small demonstration visits are organised for the householders as part of the retrofit communication plan.Householders are invited to a demonstration visit of a showcase-house, are informed via leaflets, advertisements and are asked to fill in a public questionnaire with their suggestions.Housing associations decide on the basis of suggestions from householders and of a techno-economic examination to a qualified advice on the preferred scenario of the retrofit plan.A legal approval rate of at least 70% of the tenants in every building block is needed to proceed with the proposed retrofit plan.The agreement is personalised to an individual retrofitting proposal for every householder.In most cases householders can choose between basic packages and more ambitious retrofitting measures with financial consequences for the rent.After this, the housing associations start a tender selection process to select the constructor.In case of conflict about the implementation of the agreements, the housing association and the residents committee have the option to go to the conciliation committee with representatives from the local government, social housing sector and organisations for householder rights.The housing associations take responsibility to improve the energy performance of their real estate properties.Their retrofit plans to improve their apartments focus predominantly on technical energy performance and improving comfort and to a lesser extent on liveability and quality of life in the moderate sea-climate.Due to agreements on national level, the aim of housing associations is to upgrade all their dwellings via multiple products to an average energy performance label B or improve at least by two energy performance label classes.This improvement is roughly half of the theoretical energy-use.The priorities of the housing association and the well-informed householders are often conflicting.Representatives of housing associations need to ensure the long-term theoretical energetic sustainability of the real estate properties in a cost-efficient way while residents are often quite focused on their own comfort and beautification of indoor improvements, like a new kitchen or bathroom.Also the expectation about how to use the retrofit improvements of 
complex individual heating and ventilation systems is not always clear. This results in polarized confrontations and imbalance in urban retrofitting projects due to unbridled social differentiation and uncertainty. In the eyes of householders, the major improvements in retrofitting can be framed as overdue maintenance. Some householders have been living in their apartments for a long period of time and therefore feel like experts on their living situation. One of the biggest challenges to establishing sustainable retrofit practices in Amsterdam is the semi-private role of housing associations, market fragmentation and the search for an efficient use of financial resources, which leads to a largely marketised provision chain. The market character becomes visible in financial negotiations between the housing association and multiple construction companies about technical solutions. These technical solutions do not always match present demands for comfort, cleanliness and convenience. Householders' needs and demands to facilitate domestic practices are highly negotiable, in contrast to retrofit providers' attempts to manipulate and manage passive buildings or smart homes. In the eyes of caretakers from housing associations, the tension of a rent increase made retrofit plans subject to contestation and resistance by the tenants during retrofit processes. A non-profit householder rights organisation is made available to householders to support them in their struggles with retrofit providers. There are hardly any post-occupancy evaluations to monitor the "real" energy effects of the retrofit in consumption patterns. The material improvements of retrofitting are framed by ideas about what is good from a theoretical energy-use perspective, as legitimised in energy label steps, instead of real consumption patterns. As a result, the targeted improvements are trapped in technical audits and evaluations and interfere only occasionally with householders' everyday practices. Householders commonly misuse their retrofitted heating and ventilation installations, which leads to increased energy consumption, despite the occasional personal home visits, model apartments, public education, technology tours and information sessions to visualise the retrofit plans and bring them closer to the residents' perspective. Housing associations partly outsourced the retrofit steering of sustainable consumption to voluntary householder energy coaches, who motivate and instruct tenants about the use and meaning of energy facilities in their retrofitted homes. Tailor-made instructions concern the best ways to adjust the indoor temperature and air quality for health, well-being and financial positioning. Energy coaches are important in social housing because energy use and energy saving are abstract phenomena to many tenants, and sustainable use of the retrofitted apartments is not ensured without proper training. In this paper we asked the question: How do interactions between providers and consumers in different systems of retrofit provision affect the formation of sustainable retrofit practices in China and the Netherlands? Empirical evidence from China and the Netherlands shows the implications of institutional, social and technical arrangements for the relationships between consumers and providers. Our results reveal that the formation of sustainable retrofit practices is co-constituted in shifting constellations of retrofit governance along the public-private-community divide. We distinguish three different supply chains:
public-social, public-private and private-social governance hybrids.The findings concerning householder-provider interactions in the SoP are displayed in Table 7 below.Our findings on the governance of retrofit projects in Beijing, Mianyang and Amsterdam point us to the relevance of the organisational and technical voids between provision and consumption in the retrofit process.Ignoring these action spaces leads to householders being stowed with post-retrofit housing equipment for heating, cooling and ventilation, which they do not use efficiently in their domestic practices.These new forms of dependency are the result of simply “rolling out” of standard retrofit packages.Clearly in all three case cities, the objectives on what a retrofit entails are driven by financial incentives which are earmarked to certain predetermined retrofit packages.These provision-based retrofit programs define “the rules of the game” by enabling how retrofit processes are organised in terms of visibility of products, conflict mediation, communication and further instrumentation.The ways in which the retrofit is made available to residents varies widely in terms of possibilities of consultation and responsiveness of retrofit providers.Differences in financing arrangements of governance modes, along the public-private-community divide, lead to different organisational support for intermediation in the formation of sustainable retrofit practices.By doing so, each of the systems of provision has generated a specific kind of householder inclusion in the retrofit.In general, householders, as end-users of the retrofitted apartments, turn out to be scarcely involved in the decision-making on retrofit interventions.Most retrofit providers decide about retrofit packages upfront, instead of allowing householders to participate in the pre-retrofit analysis and the maintenance of the retrofit intervention.The findings of the study show a governance gap in regulatory frameworks, participation mechanisms and retrofit packages to embrace specificities of existing domestic practices and to orchestrate support for new domestic practices.To identify new provider-consumer relations and social practices that can contribute to services for energy efficiency at the household level retrofit interventions must be viewed more broadly than as only a set of traditional financial incentives and information dissemination.Public, private and community modes of provision to housing retrofitting co-exist in all three cities, but they do seem to converge.From the different governance hybrids across the public, private and community domain specific challenges for domestic practices arise.In the public-social partnerships of Mianyang, the boundaries of what collective householders can achieve by themselves in retrofit packages for energy saving are set by the available public financial support.In the public-private partnerships of Beijing, the lack of individual fine-tuning in standardised retrofit packages for energy saving is an important limitation, although compared to Mianyang more public financial support is available.Lastly in the private-social partnerships of Amsterdam only well-informed technology-minded householders can oversee the retrofit packages for energy saving, as a consequence of the complex negotiations on finances and technologies.In all three case cities, these challenges lead to a lock-in of householders’ practices into their retrofitted homes.Understanding that retrofit packages do not only entail material interventions, 
but also have social and political implications for the energy efficiency of domestic practices, points to a need for enhanced consumer involvement. The fact that many of our findings are similar for the diverse contexts in the Netherlands and China suggests that they may also apply to retrofit cases of apartment buildings in other countries. The elaboration on what retrofit governance is and what it can ‘do’ leans on the underlying theorisation of societal dynamics and sustainability perspectives. On a theoretical level, this paper offers a comprehensive, non-functionalistic, open account of the ways in which retrofit practices are shaping and being shaped by systems of retrofit provision. The analytical framework has proved beneficial in analysing the different contexts of housing retrofit in China and the Netherlands. This paper also helps to move away from the image of practice research as being exclusively micro-situated, ethnographic, ad hoc and a-historic. Instead, this study engages with the analysis of wider practice-arrangement bundles and networks in the supply chain. In this perspective, the material, social, institutional and legal conditions in systems of provision are ultimately shaping – and being shaped by – consumption practices. Systems of provision approaches are especially strong in characterizing the messy relationships of situated practices by householders and providers in wider configurations of retrofit provision, to explain how competition for crucial resources can result in power inequality. Their final contribution is in the presentation of promising, legitimate institutional arrangements within the supply chain. Existing knowledge, policies and instruments in retrofitting for energy efficiency at the domestic level do not seem to be up to the task. This points to new approaches in terms of both understanding and organising retrofit programs. In terms of inclusive retrofit governance, this paper points towards the need to complement top-down, technology-oriented forms of retrofit governance with bottom-up, socio-technical and life-world oriented forms of retrofit governance. This would mean allowing householders at least to further co-determine the retrofit plan and facilitating the embedding of domestic practices into the proposed retrofit packages. This makes socially inclusive visualisation tools, communication designs, education services and consumer conflict management increasingly prominent. In an intermediate approach, consumer roles in retrofitting could change from merely captive to co-designer or co-decision maker. This would mean less power for vested interests in housing and less reliance on generic, regime-preserving solutions. Finally, in all three contexts voluntary householders and household organisations are motivated to be involved in the co-management of their neighbourhood, which can be seen as an in-kind financial contribution. To align key domestic practices towards sustainability, acknowledgement is needed for the grassroots role of resident committees, as seen in Mianyang, for long-term governance; voluntary energy coaches and intermediating organisations, as in Amsterdam; and householder-to-householder contacts, as in Beijing. Environmental innovation processes in retrofit production-consumption chains offer potential for consumer inclusion. Rather than replacing traditional modes of retrofitting, perspectives on householders' everyday lives should be built into processes leading to a more sustainable housing retrofit.
| One of the most important governance challenges in terms of energy saving is the physical upgrading of apartment buildings via housing retrofitting. In urban studies, little focus has been applied to the shape and character of the retrofit governance frameworks to realise inclusion of householders. Little is known about how these different frameworks, and the systems of provision they represent, impact on householders to achieve energy saving in their retrofitted houses. By recognising the importance of the relationship between provision and consumption, this study aims to analyse household inclusion in Chinese and Dutch systems of energy retrofit provision to suggest strategic improvements for intermediation. The empirical data is gathered in qualitative case studies of housing retrofitting in Amsterdam, Beijing and Mianyang (Sichuan province, China) by interviewing local retrofit providers, combined with site observations and reviews of policy documents. This paper shows how the formation of sustainable retrofit practices is co-constituted in shifting constellations of retrofit governance along the public-private-community divide. Public and private modes of housing retrofit provision seem to converge in Beijing, Mianyang and Amsterdam. The findings point to how regulations, processes and technical infrastructures should be adjusted to realise sustainable retrofit practices. The paper concludes that energy housing retrofitting in both Chinese and Dutch contexts requires co-management among householders and social intermediaries. |
319 | Immersion anaesthesia with ethanol in African giant land snails (Achatina fulica) | There is an increasing worldwide demand for unconventional pets, including invertebrates. Among these, the giant African land snail has gained increasing popularity within the last few years. Besides being kept as pets, these molluscs are often housed in zoos, or as part of private collections and conservation projects. A common indication for snail anaesthesia is the need to perform clinical, surgical or diagnostic procedures. Unfortunately, very little is published about the anaesthetic management of these gastropods, particularly those kept as pets, and, to the best of the authors' knowledge, there are no prospective studies focusing on the anaesthesia of Achatina fulica. Immersion anaesthesia is a common method to anaesthetise various unconventional small-sized species, including amphibians and land snails, and various agents with anaesthetic properties, such as etomidate, alfaxalone and ethanol, have been used to prepare the anaesthetic bath solution. One study investigated the safety and efficacy of immersion anaesthesia with various agents in Biomphalaria snails and found that, whilst sodium thiopental was toxic to the snails and the combination of ketamine base with xylazine hydrochloride produced only partial anaesthetic effects, sodium pentobarbital resulted in safe and predictable anaesthesia. Tricaine, also called MS222, has been used for immersion anaesthesia in various species of pulmonate snails, including Biomphalaria, Helisoma, Bulinus and Lymnaea; similarly, menthol, either alone or in combination with chloral hydrate, has been used for bath immersion of Lymnaea, Physa and Bulinus snail species. Immersion in ethanol per se is not a novel technique; previous work suggests that a 5% ethanol solution is an effective method to provide anaesthesia in land snails; however, the authors only reported that the snails recovered from anaesthesia within two hours from the end of immersion, and did not provide any detail pertaining to the quality of recovery or the occurrence of post-anaesthetic adverse effects. The purpose of this prospective clinical trial was to evaluate the anaesthetic effects and anaesthetic-related complications of immersion in 5% ethanol in 30 client-owned African pet land snails. Ethical approval was obtained from the Clinical Research Ethical Review Board of the Royal Veterinary College prior to commencing the trial. The snails were presented at a referral practice for exotic animal species for diagnostics. Information about the general health of the animals was obtained through detailed anamnesis and visual examination, to exclude external damage or other lesions. Moreover, muscle tone and the tentacle withdrawal reflex were assessed preoperatively. General anaesthesia was required to perform biopsies from the foot muscle of the snails and process the specimens for parasite screening, at the request of the owner. The snails were transferred to the anaesthetic solution, prepared with 120 mL of dechlorinated water, by hand, always by the same operator wearing latex-free gloves. Water temperature was 22 ± 2 °C. Time of immersion, as well as the variables defined below, were recorded. Time to anaesthetic induction was defined as the minutes elapsed from the beginning of immersion in 5% ethanol to the achievement of anaesthetic induction, characterised by immobility and loss of the tentacle withdrawal response to gentle stimulation. Time to recovery from anaesthesia was defined as the minutes
elapsed from removal of the snails from the ethanol solution to the regaining of normal posture, muscular tone and the tentacle withdrawal reflex in response to foot pricking with blunt forceps. The time elapsed from removal from the anaesthetic bath to the return of the tentacle withdrawal reflex was also annotated on the anaesthetic record. The occurrence of undesired effects of 5% ethanol, namely the production of bubbles, body retraction, expulsion of mucus and/or faeces, prolonged recovery, dehydration/desiccation, and death, was recorded. Descriptive statistics were applied, with the Kolmogorov–Smirnov test used to analyse data distribution. Commercially available software was used for statistics. Data were not normally distributed and are presented as medians and 25–75% ranges; a minimal computational sketch of this type of summary follows this record. The shells of the snails, aged 1.6 years, reached up to 7 inches in length. Of the 30 African snails included in the study, one had a fatal outcome approximately 20 minutes after immersion in the ethanol solution. The remaining 29 snails completed the study and recovered from anaesthesia, and none of them showed any kind of reaction during surgical biopsy. Time to anaesthetic induction and time to recovery from anaesthesia were 25 and 50 minutes, respectively. Recovery was prolonged in one snail, which required 210 minutes to regain normal muscular strength. Time from removal from the ethanol solution to the return of the tentacle withdrawal reflex was 20 minutes. Besides the death, the other observed adverse effects were the production of bubbles in 4 out of 30 animals and mucus secretion in another 4 snails; this accounted for a total proportion of snails showing adverse effects of 26.6%. Within the 4 weeks following anaesthesia, the owner of the snails did not notice any change in behaviour or physical appearance in any of the animals. The results of this report suggest that immersion in a 5% ethanol solution may be regarded as a suitable anaesthetic technique for African giant snails, as it consistently produces induction of general anaesthesia within a reasonable time. Nevertheless, the considerable variation in recovery time among snails, together with the observation of one prolonged recovery which lasted more than three hours from removal of the snail from the anaesthetic bath, raises the concern that recovery from anaesthesia of African giant snails after ethanol immersion may be prolonged and unpredictable. Besides the one death that occurred during immersion, anaesthesia-related side effects were regarded by the authors as mild and were mostly represented by the secretion of foamy mucus and bubbles. These adverse effects may be either the result of environmental stress or, alternatively, an attempt of the body to eliminate the anaesthetic agent, perceived as a toxic substance. Although the long-term follow-up was limited to the collection of information from the owner of the snails, it seemed that the mucus and bubble secretion was short-term and did not result in long-term complications. Regarding the snail that died during immersion, although it is challenging to speculate about the causes of its death, it is hypothesised that hypoxia could have contributed to this adverse outcome. Similarly, hypoxia might have played a role in the one prolonged recovery observed in another study snail. Regarding the duration of the surgical depth of anaesthesia, although this study was not designed to investigate the analgesic properties of ethanol, the relatively quick return of the tentacle withdrawal reflex seems to indicate that, whilst full recovery might be
prolonged, surgical anaesthesia may instead be of short duration. If this were true, immersion in 5% ethanol would be suitable only for brief surgical procedures implying mild to moderate nociceptive stimulation. In conclusion, immersion in 5% ethanol produced reliable and consistent anaesthesia in African giant snails of a duration sufficient to allow foot muscle surgical biopsies. The potential for side effects, together with the lack of evidence of effective and long-lasting antinociception, seems to suggest that the use of this anaesthetic technique should be limited to healthy snails undergoing non-invasive or minimally invasive short clinical procedures. Dario d'Ovidio: Conceived and designed the experiments; Performed the experiments; Contributed reagents, materials, analysis tools or data; Wrote the paper. Paolo Monticelli: Conceived and designed the experiments. Mario Santoro: Contributed reagents, materials, analysis tools or data. Chiara Adami: Conceived and designed the experiments; Analyzed and interpreted the data; Wrote the paper. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors declare no conflict of interest. No additional information is available for this paper. | Giant African land snails (Achatina fulica) are becoming increasingly popular pets and may be anaesthetised to allow diagnostics and surgical procedures. The objective of the present study was to evaluate the anaesthetic effects and anaesthetic-related complications of immersion in 5% ethanol in client-owned African pet land snails, anaesthetised to allow biopsies of the foot for the screening of parasites. Variables such as the minutes elapsing from immersion to anaesthetic induction and from removal from the bath to the return of the tentacle withdrawal reflex and recovery from anaesthesia were recorded, as well as the occurrence of adverse effects. Of the 30 snails enrolled, one (3.3%) had a fatal outcome whereas the remaining 29 (96.7%) snails completed the study and recovered from anaesthesia. Time to anaesthetic induction was 25 [25–29] minutes. Recovery was prolonged in one snail, which required 210 minutes to regain normal muscular strength. Time from removal from the ethanol solution to the return of the tentacle withdrawal reflex was 20 [14–42] minutes. Besides death, other observed adverse effects were the production of bubbles (n = 4; 13.3%) and mucus secretion (n = 4; 13.3%). Immersion in 5% ethanol may be regarded as a suitable anaesthetic technique for African giant snails for brief and moderately invasive surgical procedures. Nevertheless, recovery from anaesthesia may be prolonged and unpredictable. |
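The descriptive analysis reported for the snail study above (a Kolmogorov–Smirnov check of normality, followed by medians with 25–75% ranges) can be illustrated with a minimal Python sketch. This is only an illustration of the summary approach: the variable name, the timing values and the use of NumPy/SciPy are assumptions of this sketch, not the authors' software or data.

# Minimal sketch, assuming NumPy/SciPy; 'induction_min' holds hypothetical
# induction times in minutes, not the study data.
import numpy as np
from scipy import stats

induction_min = np.array([25, 25, 26, 29, 25, 27, 28, 25, 29, 26])

# Kolmogorov-Smirnov test against a normal distribution parameterised by the sample
ks_stat, ks_p = stats.kstest(induction_min, 'norm',
                             args=(induction_min.mean(), induction_min.std(ddof=1)))

# Median and 25-75% range, the summary format used when data are non-normal
q25, med, q75 = np.percentile(induction_min, [25, 50, 75])
print(f"KS p = {ks_p:.3f}; median = {med:.0f} [{q25:.0f}-{q75:.0f}] min")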
320 | CENP-A Ubiquitylation Is Inherited through Dimerization between Cell Divisions | CENP-A is a centromere-specific histone H3 variant that is required to ensure kinetochore assembly for proper chromosome segregation; defects in CENP-A function lead to aneuploidy and thereby cancer.In most species, except for the budding yeast, centromere identity relies not on the DNA sequence but on the presence of a special nucleosome that contains CENP-A.CENP-A-containing nucleosomes are formed with canonical histones H2A, H2B, and H4 at the active centromeres, but the nucleosome structure remains controversial.CENP-A nucleosomes localize to the inner plate of mammalian kinetochores and bind to the 171-bp alpha-satellite DNA in humans.Active centromeres require CENP-A nucleosomes to direct the recruitment of a constitutive centromere-associated network and the kinetochore proteins in a DNA sequence-independent manner, and together this CCAN and the kinetochore proteins orchestrate the kinetochore-microtubule attachment and regulate cycle progression through the spindle checkpoint.Therefore, CENP-A is proposed to be the epigenetic mark of the centromere, and recently, through the use of gene targeting in human cells and fission yeast, this mark was demonstrated to act through a two-step mechanism to identify, maintain, and propagate centromere function indefinitely.Evidence regarding the mechanism by which the epigenetic mark of the centromere is generated has been divergent and somewhat contradictory.This variation appears to be derived from the variety of species and cell types studied.In gametes of the holocentric nematode Caenorhabditis elegans, and possibly in plants, the centromere marking is independent of CENP-A/CenH3.A recent study suggested that in C. elegans, pre-existing CENP-A/HCP-3 nucleosomes are not necessary to guide the recruitment of new CENP-A nucleosomes.In contrast, in Drosophila melanogaster, CENP-A/CID is present in mature sperm, and the amount of CID that is loaded during each cell cycle appears to be determined primarily by the pre-existing centromeric CID, a finding that is consistent with a “template-governed” mechanism.However, it is unclear how CENP-A works as the epigenetic mark at the molecular level in humans.Numerous studies have found that CENP-A can be experimentally mistargeted to noncentromeric regions of chromatin and that this mistargeting leads to the formation of ectopic centromeres in model organisms.Chromosome engineering has allowed the efficient isolation of neocentromeres on a wide range of both transcriptionally active and inactive sequences in chicken DT40 cells.More than 100 neocentromeres in human clinical samples have been described.They form on diverse DNA sequences and are associated with CENP-A localization, but not with alpha-satellite arrays; thus, these findings provide strong evidence that human centromeres result from sequence-independent epigenetic mechanisms.However, neocentromeres have not yet been created experimentally in humans; overexpression of CENP-A induces mislocalization of CENP-A, but not the formation of functional neocentromeres.Lacoste et al. 
reported that mislocalization of CENP-A in human cells depends on the chaperone DAXX.Identifying and analyzing factors essential to the generation of human neocentromeres is important in clarifying the mechanism of epigenetic inheritance of centromeres.In our previous study, we showed that CENP-A K124 ubiquitylation serves as a signal for the deposition of CENP-A at centromeres.Here, we report that CENP-A K124 ubiquitylation is epigenetically inherited through dimerization.Based on this molecular mechanism, models in which the location of the centromere is inherited are proposed.It has been suggested that the epigenetic centromere mark is generated through a “template-governed” mechanism: the pre-assembled “old” CENP-A nucleosomes may act as a template, allowing the local stoichiometric loading of new CENP-A nucleosomes during each cell cycle.We have previously shown that CENP-A K124 ubiquitylation serves as a signal for the deposition of CENP-A at centromeres.Therefore, we hypothesized a model in which CENP-A K124 ubiquitylation is epigenetically inherited.This model predicts that CENP-A K124 ubiquitylation depends on pre-existing K124-ubiquitylated CENP-A.In human cells, 10% of the normal level of CENP-A is sufficient to drive kinetochore assembly.We performed CENP-A small interfering RNA knockdown, targeting the 5′ and 3′ UTRs of CENP-A mRNA to reduce the quantity of endogenous CENP-A to 7.2% of its normal level in HeLa cells.This severe loss of endogenous CENP-A prevented endogenous CENP-C localization at centromeres; this effect confirmed the previous result.The severe loss of endogenous CENP-A also substantially abrogated ubiquitylation and centromere localization of exogenously coexpressed CENP-A WT-FLAG.We further confirmed the localization of FLAG-tagged CENP-A proteins by chromosome spreading.These results suggest that the presence of endogenous CENP-A is required for ubiquitylation of newly synthesized CENP-A and for centromere localization.In this experiment, we confirmed that the maximum expression level of CENP-A WT-FLAG was achieved when a substantial amount of endogenous CENP-A was already depleted from cells 48 hr after cotransfection.We utilized the constitutively monoubiquitylated CENP-A “mutant” to test the requirement of the presence of ubiquitylated CENP-A.Interestingly, coexpression of untagged, monoubiquitin-fused CENP-A K124R restored monoubiquitylation and diubiquitylation of CENP-A WT-FLAG and the localization of CENP-A WT-FLAG at centromeres.However, coexpression of untagged CENP-A WT did not restore monoubiquitylation and diubiquitylation of CENP-A WT-FLAG, nor did this coexpression restore the localization of CENP-A WT-FLAG at centromeres.We further confirmed the localization of FLAG-tagged CENP-A proteins by chromosome spreading.These results indicated that the presence of ubiquitylated CENP-A is required for ubiquitylation of newly synthesized CENP-A and for centromere localization.We confirmed that exogenous “untagged” CENP-A WT did not localize to the centromeres when endogenous CENP-A was decreased to less than 10% of the normal level; thus, our result eliminated the possibility that “FLAG-tagged” CENP-A WT did not localize to the centromere because it was not functional.To confirm the result obtained in our experiment in which CENP-A was depleted, we used CENP-A KO cells instead of siRNA.Fachinetti et al. 
reported that only 1% of the initial CENP-A level is detectable in 7 days after Ad-Cre infection and that no centromere-bound CENP-A was detected 9 days following the excision of CENP-A alleles.Consistent with these findings, the endogenous CENP-A level was reduced to approximately 2% of the initial level in 6 days and to less than 1% of the initial level in 7 days after transient expression of Cre recombinase.In our experiment, the retrovirus expressed exogenous FLAG-CENP-A WT 4 days after retro-Cre infection.2 days after infection by the FLAG-CENP-A-expressing retrovirus, the severe loss of endogenous centromeric CENP-A substantially abrogated ubiquitylation and centromere localization of exogenously expressed FLAG-CENP-A WT.Again, coexpression of untagged, monoubiquitin-fused CENP-A K124R restored monoubiquitylation and diubiquitylation of FLAG-CENP-A WT, as well as the localization of FLAG-CENP-A WT at centromeres.We confirmed that exogenous “untagged” CENP-A WT did not localize to the centromere when endogenous CENP-A was reduced to approximately to 2% of the initial level after 6 days of retro-Cre infection; this result eliminated the possibility that “FLAG-tagged” CENP-A WT did not localize to the centromere because it is dysfunctional.These results confirmed the finding that the presence of ubiquitylated CENP-A is required for ubiquitylation and centromere localization of newly synthesized CENP-A.Previously, we showed that 6xHis-CENP-A WT is monoubiquitylated by the purified Cul4A-Rbx1-COPS8 complex in vitro.Therefore, we examined whether the in vivo results are true in vitro.Addition of purified monoubiquitin-fused GST-CENP-A K124R into the reactions enhanced ubiquitylation of 6xHis-CENP-A WT.Such enhancement supports the in vivo results.If the presence of ubiquitylated CENP-A is required for ubiquitylation of newly synthesized CENP-A, then why can purified 6xHis-CENP-A WT be ubiquitylated in vitro?,We assumed that 6xHis-CENP-A WT expressed in insect cells contained some ubiquitylated 6xHis-CENP-A, but the levels were undetectable.Therefore, we depleted presumably existing ubiquitylated CENP-A from purified 6xHis-CENP-A WT by using Agarose-TUBE 2.In vitro ubiquitylation was then performed with the remaining nonubiquitylated 6xHis-CENP-A WT.Indeed, depletion of ubiquitylated CENP-A abolished ubiquitylation of 6xHis-CENP-A WT.Addition of GST-CENP-A K124R-Ub restored ubiquitylation of 6xHis-CENP-A WT, a result that is consistent with those of the in vivo experiments described in the preceding text.We also tested bacterially expressed and purified CENP-A, which is not ubiquitylated, as substrate for in vitro ubiquitylation assay, and consistent results were obtained.Taken together, our results indicate that pre-existing ubiquitylated CENP-A is required for ubiquitylation of newly synthesized CENP-A and for centromere localization.Thus, CENP-A ubiquitylation appears to be inherited epigenetically between cell divisions.If our “epigenetic” model is correct, then the CUL4A E3 complex should recognize a heterodimer of nonubiquitylated CENP-A and K124-ubiquitylated CENP-A, but not a homodimer of nonubiquitylated CENP-A.In this case, heterodimerization would be required for K124 ubiquitylation of nonubiquitylated CENP-A.Thus, CENP-A K124 ubiquitylation would depend on pre-existing K124-ubiquitylated CENP-A.In D. 
melanogaster, disruption of the dimerization interface of CENP-A/CID reduces its centromere localization in vivo. In human cells, SDS-resistant CENP-A dimers have been reported. Bassett et al. reported that a human CENP-A dimerization mutant cannot stably assemble into chromatin. Consistent with this finding, the CENP-A H115A/L128A mutation reduced dimerization of CENP-A in cell lysates and abrogated dimerization in immunoprecipitation analysis. Moreover, the CENP-A H115A/L128A mutation abrogated ubiquitylation of CENP-A in vivo and localization of CENP-A to the centromeres. It should be noted that monoubiquitin-fused CENP-A H115A/L128A was not able to properly interact with CENP-A WT. These results suggested that dimerization is required for CENP-A localization to the centromere. We could not conclude that dimerization is directly required for CENP-A K124 ubiquitylation, because the dimerization mutant may disturb proper nucleosome formation. Therefore, we examined ubiquitylation of CENP-A H115A/L128A in vitro. Indeed, CENP-A H115A/L128A protein was not ubiquitylated in vitro, and this absence of ubiquitylation strongly suggested that dimerization is required for CENP-A K124 ubiquitylation. To evaluate the importance of heterodimerization for CENP-A localization at the centromere, we hypothesized that the loss of CENP-A dimerization and centromere localization caused by the H115A/L128A mutation could be rescued by the addition of a dimerization domain that has previously been used to force the dimerization of specific proteins. To test this hypothesis, we fused 28 or 24 amino acids derived from the dimerization domain of the Saccharomyces cerevisiae PUT3 protein or the D. melanogaster Ncd protein to the C-terminal end of the CENP-A H115A/L128A protein. In our hypothetical scheme, in the presence of endogenous CENP-A proteins, overexpression of FLAG-H115A/L128A or FLAG-H115A/L128A-D does not lead to heterodimer formation with endogenous CENP-A. Thus, neither subsequent K124 ubiquitylation nor centromere localization of these exogenous proteins occurs. If untagged WT-D is coexpressed, FLAG-H115A/L128A-D is hypothesized to form heterodimers with it through the dimerization domain and to localize to the centromere. We confirmed the coexpression of each exogenous protein in cell lysates, and overexpression of FLAG-H115A/L128A, FLAG-H115A/L128A-PD, or FLAG-H115A/L128A-ND alone did not result in significant ubiquitylation or centromere localization, whereas the loss of FLAG-H115A/L128A homodimer formation was rescued by the addition of PD or ND. Indeed, FLAG-H115A/L128A-PD or FLAG-H115A/L128A-ND localized to the centromere when untagged WT-PD or untagged WT-ND, respectively, was coexpressed; FLAG-H115A/L128A that lacked PD or ND did not colocalize to the centromere. In addition, we confirmed the localization of FLAG-tagged CENP-A proteins by chromosome spreading. In summary, the status of centromere localization of each FLAG-tagged CENP-A protein matches that of its ubiquitylation. These results indicate that CENP-A heterodimerization with pre-existing ubiquitylated CENP-A is required for ubiquitylation and centromere localization of new CENP-A. Taken together, our results suggest that the CUL4A E3 complex recognizes a heterodimer of nonubiquitylated CENP-A and K124-ubiquitylated CENP-A to ubiquitylate nonubiquitylated CENP-A. Thus, CENP-A K124 ubiquitylation is epigenetically inherited. In Drosophila cell lines, CENP-A overexpression causes mislocalization of CENP-A into noncentromeric regions. These ectopic centromeres are
able to attract downstream kinetochore proteins and cause chromosome segregation defects, presumably as a result of dicentric activity. In humans, overexpression of CENP-A induces misloading of CENP-A at noncentromeric regions and assembly of a subset of kinetochore components, including CENP-C, hSMC1, and HZwint-1. However, the microtubule-associated proteins CENP-E and HZW10 were not recruited, and neocentromeric activity was not detected. Recently, Lacoste et al. reported that ectopic mislocalization of CENP-A in human cells depends on the H3.3 chaperone DAXX rather than on the specific centromeric CENP-A chaperone HJURP. We previously reported that addition of monoubiquitin to the CENP-A K124R mutant restores centromere targeting and interaction with HJURP; this finding demonstrated that monoubiquitylation is an essential signaling modification required for efficient interaction with HJURP and subsequent recruitment of CENP-A to centromeres. Therefore, we hypothesized that if CENP-A K124 ubiquitylation is required to determine the location of the centromere, overexpression of constitutively ubiquitylated CENP-A K124R-Ub could induce the formation of neocentromeres at noncentromeric locations. We induced transient overexpression of FLAG-CENP-A K124R-Ub or FLAG-CENP-A WT in HeLa Tet-Off cells for 48 hr and performed four-color immunostaining of chromosome spreads. Endogenous CENP-B was stained to mark preassembled native centromeres, and endogenous central-outer kinetochore proteins or chaperone proteins were stained to mark both native and “putative” ectopic centromeres. Structures with FLAG- and central-outer kinetochore protein-positive but CENP-B-negative signals were counted as “putative” neocentromeres at ectopic sites in HeLa Tet-Off cells. Because SKA1 requires microtubules to localize to centromeres, we interpreted the result of the SKA1 immunostaining as indicating that overexpression of constitutively monoubiquitylated CENP-A induces functional neocentromeres. This finding suggests that CENP-A ubiquitylation plays a role in determining the location of the centromere. The number of “paired” putative neocentromeres with SKA1-positive sister chromatids significantly increased from 48 hr to 96 hr after the induction of FLAG-CENP-A K124R-Ub expression. This result indicates that newly created neocentromeres are duplicated and inherited epigenetically between cell divisions. New centromeres can be generated artificially at ectopic sites through the assembly of CENP-A nucleosomes at Lac operator-containing arrays. We applied this LacO/LacI ectopic centromeric chromatin assembly system to address the ability of CENP-A fused to the Lac repressor to recruit CENP-A chaperones and/or kinetochore components at arrays of Lac operator sequences on chromosome 1 of U2OS cells. First, we confirmed that the expression levels of HA-LacI-CENP-A are comparable in the cells. In addition to the loci of the LacO arrays, HA-LacI-CENP-A WT localized at endogenous centromeres, whereas the K124R mutation substantially abrogated the centromere localization of HA-LacI-CENP-A. Compared with the expression of the Vec control, the expression of HA-LacI-CENP-A WT also increased signals of “punctuated” localization of CENP-A chaperones and outer kinetochore proteins at endogenous centromeres. The K124R mutation diminished the localization of CENP-A chaperones and that of outer kinetochore proteins at endogenous centromeres, presumably because of dominant-negative effects of the K124R mutation. The HA-LacI-CENP-A K124R-Ub mutant, which mimics
monoubiquitylated CENP-A, localized to endogenous centromeres as previously observed when FLAG-tagged CENP-A proteins were used.Also, the HA-LacI-CENP-A K124R-Ub mutant, compared with the K124R mutant, increased signals of “punctuated” centromere localization of CENP-A chaperons and outer kinetochore proteins.The K124R mutation significantly reduced the recruitment of CENP-A chaperons and outer kinetochore proteins at LacO arrays, after ectopic loci forcibly were determined through LacO-LacI interaction.Addition of monoubiquitin to the CENP-A K124R mutant, when compared with CENP-A WT, significantly restored and enhanced the recruitment of CENP-A chaperons and outer kinetochore proteins at LacO arrays.Taken together, these results suggest that CENP-A ubiquitylation contributes not only to determining the position of centromeric loci but also to the assembly of kinetochores after ectopic loci were forcibly determined.Our results of in vivo and in vitro ubiquitylation assays using the constitutively ubiquitylated CENP-A mutant clearly show that ubiquitylated CENP-A is required for ubiquitylation of nonubiquitylated CENP-A.Therefore, the heterodimer is presumably recognized by the CUL4A complex, and the new CENP-A is ubiquitylated and maintained at the centromeres.Our previous studies showed that CENP-A K124 is ubiquitylated in the M and G1 phases.Based on these results, we provide two models of epigenetic inheritance of CENP-A ubiquitylation for the control of CENP-A deposition and maintenance at centromeres.Bui et al. reported that native CENP-A nucleosomes are tetrameric during the early G1 phase, are converted to octamers at the transition from the G1 phase to the S phase, and revert to tetramers after DNA replication.CENP-A binds to HJURP during the G1 and G2 phases, but not during the S phase.However, the current model of interconversion between tetrameric and octameric CENP-A nucleosomes in the cell cycle remains controversial, although the structures of the homotypic and heterotypic CENP-A particles have been solved.Therefore, we provide two models of epigenetic inheritance of CENP-A ubiquitylation: a tetramer model and an octamer model.Dunleavy et al. reported that histone H3.3 is deposited at centromeres in the S phase as a placeholder for CENP-A, which is newly assembled in the G1 phase.Thus, in the tetramer model, formation of the CENP-A octamer nucleosome can be established only during the S phase as Bui et al. suggested, because of the dimerization of the CENP-A tetrameric nucleosome.CENP-A nucleosomes, where each nucleosome has one single CENP-A molecule, are divided/diluted between the two daughter centromere-DNA sequences, and either is replaced with an H3 nucleosome or leaves a nucleosome-free gap during replication/S phase.In the octamer model, two CENP-A dimers in one nucleosome are split/diluted between the two daughter centromere-DNA sequences, and one CENP-A molecule either is replaced with one H3 molecule or leaves a molecule-free gap during replication/S phase.The following lines of evidence were collected from our studies and others.Zasadzińska et al. demonstrated that HJURP itself dimerizes through a C-terminal repeat region, which is essential for centromeric assembly of nascent CENP-A.Dunleavy et al. 
showed that HJURP localizes at centromeres during late telophase, which is when newly synthesized CENP-A is incorporated at centromeres in humans.Phosphorylation and DNA binding of HJURP were suggested to determine its centromeric requirement and function in CENP-A loading.Our previous UbFC analysis suggested that K124-ubiquitylated CENP-A exists at centromeres and the nuclear region.In addition, we previously showed that K124-ubiquitylated CENP-A is found in the insoluble chromatin fraction.Our previous study indicated that CENP-A K124 ubiquitylation is required for efficient interaction with HJURP, and in the present study, our in vitro and in vivo ubiquitylation assays revealed that HJURP itself contributes to CENP-A K124 ubiquitylation.Addition of purified HJURP itself did not induce ubiquitylation of CENP-A in vitro but enhanced CENP-A ubiquitylation about 2-fold in the presence of CENP-A K124R-Ub.Heterodimerization of new CENP-A with pre-existing ubiquitylated CENP-A is required for ubiquitylation of new CENP-A and localization of new CENP-A to the centromere.K124R-Ub mutant increased signals of “punctuated” centromere localization of HJURP, whereas K124R mutation abrogated the centromere localization of HJURP.The evidence summarized in this paragraph supports our proposed models in which HJURP preferentially binds to ubiquitylated, preassembled “old” CENP-A, which resides predominantly in nucleosomes, especially at the initial step of the ubiquitylation of nascent CENP-A.During this process, newly synthesized free CENP-A targets ubiquitylated centromeric CENP-A through its attraction to HJURP, which is preassembled with “old” ubiquitylated, centromeric CENP-A.Subsequently, new CENP-A is ubiquitylated in the proximity of the nucleosome and/or inside the nucleosomes in a heterodimerization-dependent manner during the M and G1 phases, and HJURP partly contributes to ubiquitylation.In the tetramer model, heterodimerization could be internucleosomal.Thus, in these models, ubiquitylation and the location of the centromere are inherited epigenetically.To date, “functional” neocentromeres have not resulted from the experimental mistargeting of overexpressed CENP-A in humans.In our study, overexpression of the monoubiquitin fusion protein FLAG-CENP-A K124R-Ub led to sufficient recruitment of HJURP and central-outer kinetochore components to noncentromeric chromatin regions.In particular, SKA1 recruitment on the ectopic centromere verified that the neocentromeres are functional, because SKA1 centromere localization requires the formation of kinetochore-microtubule interactions.In our experiment, SKA1-positive putative neocentromeres replicated and were inherited epigenetically between cell divisions.Our assay using the LacO/LacI ectopic centromeric chromatin assembly system clearly revealed that CENP-A ubiquitylation contributes to the recruitment of CENP-A chaperons and outer kinetochore components at LacO arrays.It is possible that ubiquitylation of CENP-A contributes to maintain and stabilize ectopic neocentromeres in humans; in D. 
melanogaster, the E3 ligase CUL3/RDX controls centromere maintenance by ubiquitylating and stabilizing CENP-A in a CAL1-dependent manner. Overexpression of FLAG-CENP-A K124R-Ub induced the colocalization of more endogenous HJURP at putative neocentromeres than FLAG-CENP-A WT did, whereas the colocalization of endogenous DAXX at putative neocentromeres was greater with FLAG-CENP-A WT than with FLAG-K124R-Ub. We previously found that the affinity of the K124R-Ub mutant for HJURP is greater than that of CENP-A WT in vitro. Therefore, we hypothesized that overexpression of FLAG-CENP-A K124R-Ub can induce neocentromere formation because FLAG-CENP-A K124R-Ub recruits HJURP efficiently to the noncentromeric sites at which FLAG-CENP-A localizes, whereas overexpression of FLAG-CENP-A WT recruits DAXX to the noncentromeric sites at which FLAG-CENP-A localizes, which is consistent with the results of the study by Lacoste et al. Results of our immunoprecipitation assays using cell lysates showed that the affinity of FLAG-CENP-A K124R-Ub for endogenous DAXX is lower than that of FLAG-CENP-A WT; this difference in affinity supports this model. However, we found no significant difference between the CENP-A WT and K124R-Ub mutant regarding their affinity for endogenous HJURP. We speculate that the CENP-A binding to HJURP may have been saturated in the immunoprecipitation experiments using the cell lysates. Otherwise, this might be because the binding of CENP-A to HJURP is not stable through the cell cycle. We also found that the fusion of monoubiquitin to the CENP-A K124R mutant significantly restored and enhanced the recruitment of CENP-A chaperones and outer kinetochore proteins at LacO arrays; this result seemed to be inconsistent with the results regarding neocentromeres in Figures 6A and 6B. However, the results of the LacO/LacI ectopic centromeric chromatin assembly assay should represent the status “after” CENP-A assembly into chromatin, because in this system HA-LacI-CENP-A WT and mutants are forcibly incorporated into chromatin containing LacO arrays. Therefore, we speculate that HJURP and DAXX function in two different ways before and after CENP-A deposition at the centromere. Indeed, a dual chaperone function of HJURP in coordinating CENP-A and CENP-C recruitment was recently proposed. DAXX also may have a dual role, because heat shock increases the accumulation of DAXX at CEN/periCEN and DAXX interacts with CENP-C. CENP-A has been proposed to be the epigenetic mark of centromere identity on the basis of the following findings: CENP-A is localized only at active centromeres on dicentric chromosomes; CENP-A can be experimentally mistargeted to noncentromeric regions of chromatin; and this mistargeting leads to the formation of ectopic centromeres in model organisms. However, in humans, we have shown that overexpression of CENP-A itself is not sufficient for the creation of a neocentromere at a noncentromeric region. Ubiquitylation of CENP-A is necessary for the formation of neocentromeres and for the epigenetic inheritance of the centromere location. Considering that histone posttranslational modifications are traditionally defined as “epigenetic marks”, we propose that CENP-A ubiquitylation is a candidate for the epigenetic mark of centromere location, i.e., centromere identity. Ectopic incorporation of overexpressed CENP-A can lead to genomic instability, which occurs in particularly aggressive cancer cells and tissues. In humans, CENP-A overexpression can lead to its ectopic localization to chromosome regions with active
histone turnover, as seen in cancer cell lines.At these ectopic loci, CENP-A forms heterotypic nucleosomes occluding CTCF binding, and their presence may increase DNA damage tolerance in cancer cells.Arimura et al. revealed a “hybrid” structure of the heterotypic CENP-A/H3.3 and suggested that the stable existence of the CENP-A/H3.3 nucleosome may cause ectopic kinetochore assembly, which could lead to neocentromere formation and chromosome instability in cancer cells.Our models of epigenetic inheritance of CENP-A ubiquitylation suggest that errors in CENP-A targeting, heterodimerization, and/or ubiquitylation induce abnormal accumulation of heterotypic nucleosomes.Hence, our findings may provide a basis for potential insights into understanding the mechanisms of cancer development.Please see Table S1 for the antibodies, Table S2 for the siRNA sequences, Table S3 for the plasmid vectors, and Table S4 for the baculovirus extracts used in this study.HeLa or HeLa Tet-Off cells were cultured in high-glucose DMEM supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin.Cells were grown at 37°C in 5% CO2 in a humidified incubator.Cells were transfected with annealed double-stranded siRNA or mammalian expression plasmids by using Lipofectamine 2000, Lipofectamine 3000, Lipofectamine LTX, Lipofectamine RNAiMAX, or linear polyethylenimine.HeLa Tet-Off cells were cultured without tetracycline/doxycycline and transiently transfected with the pTRM4 overexpression vector whose transcription was regulated by the TRE promoter.In Figures 1B–1E, 3C, 3D, 4B–4D, 5, 6A, 6B, S1D, S1E, S1F, S3G, S4B, S4C, S5, and S6 cells were also cotransfected with CA-UTR #2 siRNAs for partial depletion of endogenous CENP-A, but this partial depletion did not disrupt endogenous CENP-C localization at centromeres.CENP-A−/F hTERT RPE1 cells were generously provided by Dr. Don W. Cleveland and cultured as described previously.Retro-Cre was added to CENP-A−/F RPE1 cells infected with retrovirus produced by transient co-transfection of 293T cells with pBabe-puro-Cre, psPAX2, and pMD2.G.4 days after retro-Cre infection, cells were further infected with retrovirus produced by stable Plat-GP cells with pQCXIP-FLAG-CENP-A or transient cotransfection of Plat-GP cells with the indicated pQCXIP constructs and pCMV-VSG-G.For the in vivo ubiquitylation assay of CENP-A−/F RPE1 cells, pCGN-HA-Ubiquitin was also transfected by using Fugene HD 4 days after retro-Cre infection.Cells were collected or fixed 6 days after retro-Cre infection and used in each analysis.Taxol or TN16 treatment was performed as described previously.For indirect immunofluorescent staining of mitotic CENP-A-/F RPE1 cells, Taxol or TN16 was added 24 or 2.5 hr before cell fixation, respectively.The in vivo CENP-A ubiquitylation assay was performed as described previously with the following minor modifications.HeLa Tet-Off cells were transfected with the indicated expression vectors and incubated with 5 μM MG132 for 24 hr.Cells were then collected and lysed in denaturing buffer A1 by a sonication and freeze-thaw process.Proteins were immunoprecipitated, and the immunoprecipitates underwent western blot analysis with the indicated antibodies.For CENP-A−/F RPE1 cells, the experiment was performed according to the time-course scheme in Figure S2A. 
Cells were infected with retroviruses harboring the indicated vector constructs, cultured, collected 6 days after retro-Cre infection, lysed, and analyzed as described for the HeLa Tet-Off cells.Conceptualization and Methodology, Y.N. and K.K.; Investigation, Y.N. and R.K.; Writing – Original Draft, Y.N. and K.K.; Writing – Review & Editing, Y.N. and K.K.; Funding Acquisition and Supervision, K.K. | The presence of chromatin containing the histone H3 variant CENP-A dictates the location of the centromere in a DNA sequence-independent manner. But the mechanism by which centromere inheritance occurs is largely unknown. We previously reported that CENP-A K124 ubiquitylation, mediated by CUL4A-RBX1-COPS8 E3 ligase activity, is required for CENP-A deposition at the centromere. Here, we show that pre-existing ubiquitylated CENP-A is necessary for recruitment of newly synthesized CENP-A to the centromere and that CENP-A ubiquitylation is inherited between cell divisions. In vivo and in vitro analyses using dimerization mutants and dimerization domain fusion mutants revealed that the inheritance of CENP-A ubiquitylation requires CENP-A dimerization. Therefore, we propose models in which CENP-A ubiquitylation is inherited and, through dimerization, determines centromere location. Consistent with this model is our finding that overexpression of a monoubiquitin-fused CENP-A mutant induces neocentromeres at noncentromeric regions of chromosomes. |
321 | Increased heartbeat-evoked potential during REM sleep in nightmare disorder | Nightmare disorder is a parasomnia characterized by extremely dysphoric dreams, usually occurring during REM sleep. Nightmares may involve images, feelings or thoughts of physical aggression, interpersonal conflict and failure/helplessness, and emotions like fear, anxiety, anger, sadness, and disgust. The prevalence of nightmare disorder at a clinically significant frequency varies from 1% up to 7% of individuals. Nightmares may be idiopathic or associated with a broad range of other disorders including post-traumatic stress disorder (PTSD), substance abuse, depression, stress and anxiety, borderline personality disorder, schizophrenia, and other psychiatric illnesses. The pathophysiology of nightmare disorder remains largely unknown. It has been proposed that nightmare disorder involves a dysfunction of a network that encompasses limbic, paralimbic and prefrontal regions, which may also explain why patients present altered emotion regulation in response to stressors that are temporary or persistent. A genetic component has been documented, although the functional role of this contribution is not well understood. As experiences occurring during sleep, nightmares might reflect intensified emotional arousal and heightened emotional reactivity during dreaming. Fear is the most frequently reported emotion in nightmares, while physical aggression is the most frequently reported theme. Physiologically, heightened arousal during the sleep of nightmare recallers is suggested by increased leg movements, increased high alpha power and more frequent awakenings. One study also reported that patients with nightmare disorder have a high sympathetic drive during REM sleep. More specifically, during post-deprivation recovery sleep, nightmare subjects were found to show higher than normal low-frequency spectral power in the ECG, which reflects sympathetic influences on the heart, and low high-frequency spectral power, which reflects respiration-driven vagal modulation of the heart. These changes were most prominent in REM sleep. In terms of waking personality traits, nightmare sufferers have been found to be more open, sensitive, and affected by experiences, including being more vulnerable to stress and trauma. These patients score higher on both state and trait anxiety, neuroticism, novelty seeking and anticipatory worry, and show heightened physical and emotional reactivity and maladaptive coping. A breakdown of emotion regulation processes such as fear extinction has been suggested to occur during nightmares, resulting in emotional dysfunction, as found in depression and PTSD. Because nightmare disorder is described as primarily affecting fear-related processes, it may reflect a pathological breakdown of the normal functioning of fear expression, memory and regulation during sleep and dreaming. Heightened sensory processing sensitivity has recently been described as an appropriate trait to characterize nightmare sufferers. Amplified sensory processing sensitivity involves a deeper cognitive processing of both external and internal information that is driven by higher emotional reactivity. A plethora of theories and experimental observations supports the view that the perception of bodily signals is a key component of emotional experience. Hence, measuring the cortical representation of bodily responses elicited by emotional events may serve as a reliable neural marker of affective states and emotional arousal. In this context, the heartbeat evoked potential (HEP),
occurring about 200–600 ms after the R-peak of the ECG waveform, has emerged as a useful tool to assess interoceptive processing. Critically, HEP amplitude is increased in states of high emotional arousal, heightened motivation, and stress. Conversely, HEP amplitude is reduced in depression, consistent with the decreased bodily awareness, alexithymia, and blunted emotional reactivity found in depressed patients. Based on the literature reviewed above, we hypothesized that patients with nightmare disorder may show increased emotional arousal predominantly during REM sleep. We therefore investigated HEPs during wakefulness, NREM and REM sleep in patients with nightmare disorder and healthy controls. We expected that HEP amplitude would be higher in nightmare disorder during REM sleep, reflecting increased emotional arousal during this sleep stage. Because highly negative emotions engage the amygdala, insula, and anterior cingulate cortex, we also predicted that HEP increases, as measured by scalp EEG, would predominate over frontal electrodes in the 200 to 600 ms post-R-peak time window. Finally, we examined whether HEP measures may also correlate with depression level, as previously reported. Eleven carefully selected patients with nightmare disorder were included. The patients sought consultation on their own or were referred by medical doctors of the Geneva area because of intense dreaming with negative emotional content. During the first consultation, the diagnosis of nightmare disorder was made by a sleep specialist according to the International Classification of Sleep Disorders diagnostic and coding manual. In addition, a neuropsychiatric evaluation was performed to assess possible comorbidities, such as depression, psychosis or anxiety disorder. We excluded any patient with symptoms of obstructive sleep apnea syndrome or restless legs syndrome, or using medications likely to produce nightmares; any patient with moderate or severe depression, generalized anxiety disorder, PTSD, or a known psychotic disorder; and any patient with a neurological disease. Eleven age- and sex-matched healthy good-sleeper controls were also included. None of them had a history of neurological, psychiatric or sleep disorders, including nightmare disorder. All controls had <1 nightmare at home during the past month. Signed informed consent was obtained from all participants before the experiment, and ethical approval for the study was obtained from the Ethical Committee of the Geneva University Hospitals. Polysomnography was recorded using 20 EEG electrodes according to the international 10–20 system. Right and left electrooculogram, chin electromyogram and electrocardiogram (ECG) were recorded using conventional bipolar recording leads. To control for the presence of apnea and hypopnea, nasal and oral airflows were recorded with a pressure transducer, and thoracic and abdominal respiratory movements were acquired with strain gauges. Oxygen saturation was continuously measured with a finger oximeter. Right and left anterior tibialis muscle EMG activity was recorded using bipolar surface electrodes to monitor lower limb motor activity. EEG and EMG signals were sampled at 512 Hz. Sleep was scored according to the AASM Manual for the Scoring of Sleep and Associated Events. In order to assess the propensity of nightmare sufferers to experience negative affect, we used the Beck Depression Inventory (BDI). Based on previous literature, we made the a priori directional hypothesis that, as an index of emotional reactivity, HEP amplitude in
emotion-related regions during REM sleep would negatively correlate with individual BDI scores in nightmare patients. Note that for the control group, we included the BDI scores of 10 participants, as one participant did not complete this questionnaire. Preprocessing and averaging were conducted using the Fieldtrip toolbox. Continuous EEG and ECG data were down-sampled to 256 Hz and offline filtered between 1 and 40 Hz. EEG data were re-referenced to a common average reference. Independent component analysis (ICA) was conducted on the continuous EEG signals, and stereotypical independent components reflecting eye movements and eye blinks were removed based on visual inspection of all the independent components. For the HEP analysis, we first selected all the continuous time windows representing at least 5 min spent in one sleep stage or wakefulness, and then concatenated them separately into AWAKE, REM and NREM conditions. Heartbeat-evoked potentials were computed on EEG signals locked to the R-peak of the ECG, separately for each condition. We detected R-peaks on the ECG by correlating the ECG signal with a template QRS complex defined on a subject-by-subject basis, and identified local maxima within episodes of correlation larger than 0.7. Epochs showing excessive noise were excluded from further analysis. After artifact correction, 6431/2819, 11601/10896, and 4084/3570 epochs were averaged to compute HEPs, respectively, for the AWAKE, NREM, and REM periods in the nightmare/control groups. In addition, the HEP is known to be heavily contaminated by ECG artifacts, as the ECG can be recorded even at scalp electrodes overlying cortical regions. To check that a possible differential HEP amplitude between the nightmare and control groups did not result from such an ECG-based EEG difference, we further analyzed whether any HEP difference was accompanied by an ECG difference in the same time window, without applying ICA to attenuate the ECG artifact. Furthermore, we also checked whether possible HEP differences were associated with differences in interbeat interval or heart rate variability. The mean interbeat interval was computed by averaging the intervals between two consecutive ECG R-peaks for each condition, considering the continuous time windows in which HEPs were assessed. Similarly, heart rate variability was obtained by computing the standard deviation of the interbeat intervals for each condition. The HEP difference between the nightmare and control groups was tested using the cluster-based permutation t-test as implemented in the Fieldtrip toolbox. Individual samples whose t-value exceeded a threshold were clustered based on temporal and spatial adjacency. Each cluster defined in time and space by this procedure was assigned a cluster-level statistic, corresponding to the sum of the t-values of the samples belonging to that cluster. To define neighboring electrodes, a triangulation algorithm was used. This method generates triangles between nearby electrodes and is not affected by the distance between electrodes. A minimum of two significant electrodes was considered a cluster. The type-I error rate was controlled by evaluating the maximum cluster-level statistics under the null hypothesis: condition labels were randomly shuffled 1000 times to estimate the distribution of maximal cluster-level statistics obtained by chance. The two-tailed Monte Carlo p-value corresponded to the proportion of the elements in the distribution of shuffled maximal cluster-level statistics that exceeded the observed maximum or minimum original cluster-level test statistics. Because
this method uses maxima, it intrinsically corrects for multiple comparisons in time and space. This procedure was applied at the electrode level in the time window from 200 to 600 ms after the R-peaks. In the nightmare group, the frequency of nightmares at home was 3.9 ± 2.1 episodes per week. Three subjects had a well-remembered nightmare in the laboratory, as reported in a morning dream diary, whereas no nightmares were reported in controls. The nightmare and control groups significantly differed in mood scores, although all scores were within the normal range. The sleep characteristics in the nightmare group did not differ significantly from the ones observed in healthy controls. To test the hypothesis that nightmare disorder might be characterized by increased neural responses to heartbeats, we contrasted the amplitude of HEP between the nightmare and control groups. The amplitude of HEPs significantly differed between the nightmare and control groups during REM sleep. This effect was found over right-frontal regions and during the 449–504 ms post R-peak period. No significant HEP differences were found in wakefulness or NREM sleep between the groups. We further tested whether the observed HEP difference within the cluster was specific for REM, but not for NREM sleep. For that, we first computed the mean HEP amplitude within the observed significant cluster for each of the four conditions. As shown in Fig. 2, a significant mean HEP difference between nightmare and control groups was observed during REM, but not during NREM sleep. In addition, the difference of the mean HEPs between REM and NREM differed between nightmare and control groups, further suggesting that the observed HEP difference between groups was specific for REM sleep. Next, we checked whether the observed HEP effect was driven by the three nightmare patients who had a nightmare in the laboratory. We therefore computed the mean HEP amplitude within the observed significant cluster, then compared the groups using a two-sample t-test. Excluding these three patients, we still observed the differential HEP effect between the groups. We then ensured that the observed HEP effects were specifically associated with neural heartbeat signals, and did not merely reflect some persistent difference in neural activity between nightmare and control groups. We computed surrogate R-peaks that had the same interbeat intervals as the real R-peaks but were randomly shifted in time, and conducted the same HEP analysis repeatedly. We found only one summed cluster t-statistic greater than the one computed from the real R-peaks, supporting that the observed differential HEP amplitudes are time-locked to the heartbeat. Next, we verified that the differential HEP amplitudes between nightmare and control groups were not associated with differences in several basic cardiac parameters such as the ECG amplitude, interbeat interval, and heart rate variability. There were no such effects. The mean ECG amplitudes within the time window where HEP effects were found did not differ between nightmare and control groups, confirming that the observed HEP difference did not reflect mere ECG artifacts. Moreover, neither interbeat interval nor heart rate variability during REM sleep differed between nightmare and controls. Because of the non-normal distribution of the BDI scores in controls, the non-parametric Kendall's tau correlation coefficient was calculated between the BDI scores and the HEP scores in the frontal clusters for the two groups. Across the 11 nightmare patients, the mean HEP amplitude in the observed
frontal cluster negatively correlated with BDI scores.No such correlation was observed in controls.In order to test if the correlation coefficients significantly differed, we first performed a conversion of tau values to Pearson r values, which yielded rco = 0.08 and rni = 0.71, for the controls and nightmare patients, respectively.Then we used a Fisher r-to-z transformation for these r values, and the one-tailed p value for the difference between these coefficients was 0.059, which indicates a statistical trend for significance.The current study is, to our knowledge, the first to quantify neural responses to cardiac signals in patients with nightmare disorder and compare it with healthy controls.Our main finding is that patients with nightmare disorder demonstrate a stronger HEP response than healthy subjects in a frontal cluster, at a specific latency and only during REM sleep.As more positive HEP has been linked to heightened emotional arousal, motivation and interoceptive awareness, this result supports that HEP may be used as a biomarker of increased emotional/reward and sensory processing during REM sleep in nightmare patients, and is in accordance with the observation that sensory processing sensitivity, including reward sensitivity, is amplified in nightmare patients.Higher HEP amplitude in a frontal cluster is consistent with a stronger engagement of frontal limbic cortical structures implicated in emotional/reward processing and negative emotions during the period of sleep associated with nightmares.Finally, we also found a negative correlation between frontal HEP amplitude and mood scores in the nightmare group as predicted, although the difference of correlations between groups showed only a statistical trend for significance.Hyperactivity of fear-related structures such as amygdala and anterior cingulate cortex and increased cortical excitability in REM sleep compared to NREM sleep and wakefulness are in line with the idea that REM sleep may offer a favorable neural condition for experiencing negative emotions in dreams and nightmares.Here, higher HEP in nightmare patients was restricted to REM sleep, which supports the notion that nightmares are a typical REM parasomnia.However, note that in some cases, nightmares were found to occur not exclusively during REM sleep but also NREM sleep, and abnormalities in N2 sleep micro-structure have been reported in frequent nightmare recallers.In our study, increased HEP amplitude in REM sleep was found in nightmare patients, even after excluding from the analysis the patients who had a nightmare in the laboratory.This observation strongly supports the idea that the HEP effect was not primarily driven by the intense emotional arousal associated with the nightmares experienced during the experimental night by the excluded patients.Instead, our findings for HEP measures likely describe a physiological characteristic of REM sleep in nightmare patients, and not specifically periods with nightmares.Depending on the cognitive or emotional state investigated, HEP modulations may engage several cortical structures, such as the posterior insula, anterior cingulate cortex/ventromedial prefrontal cortex, amygdala, somatosensory cortex and the parietal cortex.For example, the frontal HEP component has been observed when people performed cardiac perception and reward tasks, while the parieto-occipital HEP component was found in visual perception tasks.Considering the limited spatial localization capacity of the current study and based on evidence 
from previous HEP work, we would like to cautiously suggest that the frontal cluster may result from ACC/vmPFC activity.Increased HEP amplitude in these regions has been associated with subjective experience and awareness and its presence in nightmare disorder is not surprising, as these structures are known to contribute to the appraisal and expression of negative emotion.The present HEP findings may thus reflect an increased salience assigned to incoming negative stimuli during aversive or threatening situations dreamed by the patients.Indeed, HEPs are increased when levels of emotional arousal, motivation, or stress are high, as tested during wakefulness.Altogether, these findings support higher sympathetic nervous system reactivity and intensified emotional arousal in nightmare disorder.Importantly, heightened emotional reactivity has been also associated with adaptive functioning, such as increased processing and awareness of the environment and enabling fast response to a threat, supporting the idea that nightmares may realize a biologically adaptive function, that of a threat simulation and preparation for future relevance.The role of frontal regions like the ACC in shaping the emotional arousal state in nightmares has been hypothesized in a previously proposed neurocognitive model of nightmares.In this model, a hyperactive ACC would augment affect distress, which is a trait-like factor consisting of a long-standing tendency to experience heightened distress and negative affect in response to emotional stimuli.Affect distress would thus determine the level of distress that one individual may experience both during and after a nightmare.Consistent with this hypothesis, as well as with our findings of increased HEP amplitude in frontal regions and with the prevalence of negative emotions in nightmares, the ACC is typically more activated in REM sleep than in wakefulness.In addition, higher theta activity in a frontal region which roughly corresponds to ACC was found in REM sleep of nightmare patients compared to controls.Our finding may also suggest that, compared to control subjects, nightmare patients show increased responsiveness to sensory signals during REM sleep.Indeed, these results are in line with recent research claiming that sensory processing sensitivity is a trait marker that underlies the unique symptoms and imaginative richness found in individuals with nightmares.According to Aron et al., sensory processing sensitivity is determined by four main factors: 1) stronger emotional reactions; 2) deeper cognitive processing of information; 3) greater awareness of environmental subtleties; and 4) becoming overwhelmed when stimuli are too strong.Increased sensory processing of both external stimuli and interoceptive signals may reflect enhanced cortical excitability and deficient inhibition of these signals, as found in a ‘classic’ disorder characterized by cortical and physiological hyperarousal, such as insomnia disorder.Our results support that HEP can be used as a biomarker of increased emotional/sensory processing during REM sleep in nightmare patients and future research studying a potential HEP modulation with treatment for nightmares would support this finding.We also found that the more positive is the HEP amplitude in the frontal cluster in REM sleep, the less depressed were the nightmare sufferers.Although this finding may have implications concerning the links between affective state in wakefulness and emotional arousal in REM sleep or dreaming, we cannot draw any 
strong conclusion in this regard because the difference of correlations between groups showed only a statistical trend for significance. Therefore, it remains unclear whether nightmares can offer an adaptive function similar to that of normal dreaming. It seems that such a role would diminish as nightmares become more severe or recurrent in nature. Indeed, although patients with idiopathic nightmares are normothymic, having nightmares is associated with an increased risk of developing PTSD upon subsequent trauma exposure, and nightmares following trauma are associated with more severe PTSD. These observations support the idea that extinction learning fails when fear is exaggerated, as in nightmare sufferers. Furthermore, compared to controls, nightmare patients show decreased activity in regions associated with extinction learning during wakefulness and impaired frontal inhibitory functions. The two main limitations of this study concern the low spatial resolution and the relatively small sample. Regarding the precise localization of the frontal HEP, our hypothesis that the effect may be maximal in ACC/vmPFC should be taken with caution, although the frontal location of the HEP was expected and concerned several frontal electrodes. The use of high-density EEG, as in other studies, would increase spatial resolution and localization accuracy. On the other hand, while the sample size of this study was relatively small, also due to a very careful clinical selection of idiopathic nightmare sufferers, this is compensated for by the very high number of intra-subject HEP measurements. We can therefore consider that the HEP results are robust. In conclusion, the current findings support an increased interoceptive sensitivity in nightmare disorder, as indexed by the increased positive amplitude of a frontal HEP cluster in REM sleep. These results are in line with the idea that increased emotional arousal and sensory processing sensitivity participate in the pathophysiology of this sleep disorder and that the HEP may be a biomarker of this change. This work was supported by the Bertarelli Foundation, the Pictet Foundation, the BIAL Foundation grant and the Swiss National Science Foundation. | Nightmares are characterized by the experience of strong negative emotions occurring mainly during REM sleep. Some people suffer from nightmare disorder, which is defined by the repeated occurrence of nightmares and by significant distress in wakefulness. Yet, whether frequent nightmares relate to a general increase in emotional reactivity or arousal during sleep remains unclear. To address this question, we recorded heartbeat-evoked potentials (HEPs) during wakefulness, NREM and REM sleep in patients with nightmare disorder and healthy participants. The HEP represents a cortical (EEG) response to the heartbeat and indexes brain-body interactions, such as interoceptive processing and intrinsic levels of arousal. HEP amplitude is typically increased during states of high emotional arousal and motivation, and is decreased in depression. Here we compared the amplitude of HEPs between nightmare patients and healthy controls separately during AWAKE, NREM, REM periods, and found higher HEP amplitude in nightmare patients compared to healthy controls over a cluster of frontal regions only during REM sleep. This effect was not paralleled by any group difference in cardiac control measures (e.g. heart rate variability, interbeat interval).
These findings corroborate the notion that nightmares are essentially a REM pathology and suggest that increased emotional arousal during REM sleep, as measured by HEP, is a physiological condition responsible for frequent nightmares. This result also supports that HEP may be used as a biomarker of increased emotional and sensory processing during REM sleep in these patients. |
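The HEP pipeline described in the methods of this study, template-based R-peak detection at a 0.7 correlation threshold, averaging of R-peak-locked EEG epochs in the 200-600 ms window, and the Fisher r-to-z comparison of the group correlations, can be illustrated with a minimal sketch. The authors used the FieldTrip toolbox; the Python functions below are not their implementation. The function names, the 0.4 s minimum inter-beat distance and the epoching details are illustrative assumptions, whereas the 256 Hz sampling rate, the 0.7 correlation threshold, the 200-600 ms analysis window, the group sizes (11 and 10) and the converted correlations (0.71 and 0.08) are taken from the text.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import norm

FS = 256.0  # sampling rate after down-sampling, as stated in the text

def detect_r_peaks(ecg, qrs_template, corr_threshold=0.7, min_rr_s=0.4):
    """Detect R-peaks as local maxima of the normalised correlation between the ECG
    and a subject-specific QRS template (0.7 threshold as in the text); the 0.4 s
    minimum inter-peak distance is an added, physiologically motivated assumption."""
    n = len(qrs_template)
    t = (qrs_template - qrs_template.mean()) / qrs_template.std()
    corr = np.empty(len(ecg) - n)
    for i in range(len(corr)):
        seg = ecg[i:i + n]
        seg = (seg - seg.mean()) / (seg.std() + 1e-12)
        corr[i] = np.dot(seg, t) / n  # Pearson correlation of segment vs. template
    peaks, _ = find_peaks(corr, height=corr_threshold, distance=int(min_rr_s * FS))
    return peaks + n // 2  # centre of the matched window, taken as the R-peak sample

def heartbeat_evoked_potential(eeg, r_peaks, tmin=0.2, tmax=0.6):
    """Average EEG (channels x samples) over epochs locked to the R-peaks,
    restricted here to the 200-600 ms window used for the cluster statistics."""
    lo, hi = int(tmin * FS), int(tmax * FS)
    epochs = [eeg[:, r + lo:r + hi] for r in r_peaks if r + hi <= eeg.shape[1]]
    return np.mean(epochs, axis=0)  # channels x time HEP

def compare_correlations(r1, n1, r2, n2):
    """One-tailed Fisher r-to-z comparison of two independent correlations."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    z = (z1 - z2) / np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return norm.sf(z)

# Reported converted correlations: r = 0.71 (11 nightmare patients) vs r = 0.08 (10 controls)
print(compare_correlations(0.71, 11, 0.08, 10))  # ~0.059, one-tailed
```

The last line reproduces the one-tailed p of about 0.059 reported for the between-group comparison of correlation coefficients, which is a useful sanity check on the Fisher r-to-z step.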
322 | Physical and monetary ecosystem service accounts for Europe: A case study for in-stream nitrogen retention | Integrated assessments of economic, social and environmental impacts are key to supporting public and private sector decisions related to land and water resources.An essential part of integrated assessments is the identification of the links between ecosystem functions and processes and human wellbeing, a task to which theoretical frameworks, principles, definitions and classifications have been devoted by numerous studies.A number of policy initiatives have incorporated ecosystem service quantification and valuation.For example, the Europe 2020 strategy has the manifest intention of mainstreaming environmental issues into other policy areas by preserving the resource base required to allow the economy and society to function.The EU Biodiversity Strategy to 2020 includes ecosystem services alongside with biodiversity, to highlight the key role of ecosystems in biodiversity protection.In particular Action 5 of the Strategy requires that ecosystem service assessment and valuation be integrated into accounting and reporting systems, so as to relate environmental assets to other statistics and data on environmental, economic and social characteristics already used by analysts and policy makers.At all levels, a fully integrated economic and environmental analysis is increasingly recognised as crucial for policy design and implementation.To meet this call, national statistical offices and international agencies have been working on ways to make national accounting and reporting systems more inclusive of ecosystems.1,Traditional national economic accounts based on the System of National Accounts, developed 50 years ago when little thought was given to environmental damage, do not consider ecosystem assets and services.Although there have been some revisions,2 the SNA does not yet account for the degradation and depletion3 of natural resources.Over the last 40 years a number of efforts have been made to develop methods that integrate traditional macroeconomic indicators with environmental information.In the early 1990s the statistical unit of the United Nations proposed a single System for Integrated Environmental and Economic Accounting as a way to standardize different frameworks and methods.The original 1993 SEEA handbook focused on the adjustment of existing macro-indicators.The subsequent SEEA 2003 framework comprised four categories of accounts, made up of several environmental accounting modules.More recently, the SEEA Central Framework, which covers the main component of the physical environment, is being adopted as an international statistical standard.Natural resource accounts, however, only tell part of the story, because ecosystems are a lot more than just land and water.An ecosystem is an interconnected and interacting combination of abiotic and biotic components, and the depletion of its stock - the natural capital - may cause the loss of multiple services now and in the future.This is the reason why ecosystem accounts, aimed at monitoring the capacity of ecosystems to deliver services, are the focus of increasing attention within economic-environmental accounting.The Land and Ecosystem Accounting framework is an early attempt at ecosystem accounting.In LEAC the consumption of natural capital, considered as the asset, is measured as the restoration cost required after intensive exploitation and/or insufficient maintenance.However, the LEAC framework does not incorporate 
direct measurement of ecosystem services.A white cover version of the SEEA-Experimental Ecosystem Accounts was released in June 2013 and officially published in 2014, developed and recommended by the United Nations, European Commission, World Bank, OECD and FAO.The SEEA-EEA is an experimental accounting framework to be reviewed in light of country experience and conceptual advances.The framework is intended for ‘multidisciplinary research and testing’ and urgently calls for applications and case studies.SEEA-EEA Technical Guidelines were released in April 2015 and made available for global peer review in December 2015 to support national efforts at ecosystem accounting.The Technical Guidelines state that central in applying the SEEA-EEA framework to ‘support discussion of sustainability’ is the concept of capacity.The notion of capacity is important to assess the integrity/degradation of the ecosystem in relation of how ecosystem services are used and managed.However, some aspects of the notion of capacity in the SEEA-EEA have not been tackled in a definitive way.Specifically: i) whether to attribute the notion of capacity to the ecosystem as a whole or to each individual ecosystem service, and ii) whether to consider ecosystem service supply independent of service demand.There is the need to address these questions because some assumptions regarding capacity are required in order to set up a complete and consistent accounting system.Our paper investigates these two questions by applying the SEEA-EEA to the regulating ecosystem service of water purification in Europe, using in-stream nitrogen retention as a proxy for water purification.To our knowledge this is the first application of SEEA-EEA based approaches to ecosystem services measurement at a continental scale.We begin with a brief introduction to the SEEA-EEA framework, followed by the description of how the water purification ecosystem service is quantified here to be consistent with SEEA-EEA principles.The results are expressed in terms of the SEEA-EEA procedure.The challenges raised by our case study and discussed in Section 4 aim at developing a notions of capacity able to link the accounting principles of stock and flows with ecosystem services, considering that the definition of capacity as join concept between ecology and economy is still a matter of debate.The SEEA-EEA framework contains ecosystem service accounts and ecosystem asset accounts for individual services and assets.As in all conventional accounting frameworks, the basic relationship is between stocks and flows.Stocks are represented by ecosystem assets.Ecosystem assets are defined as ‘spatial areas containing a combination of biotic and abiotic components and other environmental characteristics that function together’.Ecosystem assets have a range of characteristics.In accounting there are two types of flows: the first type of flow concerns changes in assets, the second type of flow concerns the income or production arising from the use of assets.The accounting for ecosystem services regards the second type of flow although consistency is needed with the flow representing changes in ecosystem assets.According to the SEEA-EEA, the flows can be within an ecosystem asset and between ecosystem assets.The combination of ecosystem characteristics, intra-ecosystem flows and inter-ecosystem flows generates ecosystem services that impact on individual and societal wellbeing.In the SEEA-EEA tables are grouped in ecosystem assets and ecosystem services.Accounts for 
ecosystem assets record changes in the stocks, for example using area estimates.Accounts for ecosystem services record the flow of ecosystem services and their use by beneficiaries.Accounting for the capacity of an ecosystem to generate services is critical for determining whether the flow of an ecosystem service for human benefit is sustainable.By means of indicators describing ecosystem condition or quality, it should be possible to assess how changes in the stock of ecosystem assets affect such capacity.Indeed, the SEEA-EEA Technical Guidelines include within the ecosystem accounts an ‘ecosystem capacity account’ that should be compiled.As far as we are aware, however, there are no examples of ecosystem capacity accounts.In order to make ecosystem capacity accounts operational, there needs to be clear definitions of key concepts and methods based on robust scientific knowledge on ecosystem functioning as well as on the relationships between ecosystem capacity, ecosystem service flows, and their benefits to humans.Edens and Hein define ecosystem services, within the context of ecosystem accounting, as the input of ecosystems to production or consumption activities.They make a strong link to economic activities by identifying the direct contribution of ecosystems to the production process.This form of accounting is feasible for provisioning services, where natural/ecological processes are combined with other kinds of capital inputs to produce goods.It is however difficult to apply to the other categories of services.Edens and Hein acknowledge that the impact of regulating ecosystem services is external to direct economic activities or to people, stating that ‘regulating services can only be understood by analysing the specific mechanism through which they generate benefit’.Our case study focusses on this point, by measuring the benefits of a regulating service – water purification.For reporting purposes it may be necessary to aggregate ecosystem services to reduce complexity.The SEEA-EEA framework proposes three ways to aggregate ecosystem services for inclusion in accounts: i) aggregation of the various ecosystem services within a spatial area; ii) aggregation of a single ecosystem service across multiple areas within a country, and iii) aggregation of all ecosystem services across multiple areas within a country.Our case study falls within the second approach, as we account for a single ecosystem service across multiple river catchments in Europe.To align with SEEA-EEA definitions and methods, we use a four step procedure:Identify the ecosystem service classification and the underlying conceptual framework.Quantify in physical terms the targeted ecosystem service.The quantification procedure can range from simple to complex, or can be multi-tiered4 because there is presently no reference framework or standard to follow.Translate the quantitative assessment into monetary terms by choosing an economic valuation technique that as much consistently as possible links to the biophysical model.Populate SEEA-EEA tables consistently with the resulting data.In step 1 we use the Common International Classification for Ecosystem Services as proposed in the SEEA-EEA.The underlying conceptual framework is the ecosystem services cascade model.In the cascade model the biophysical structure and processes of ecosystems determine their functions, which, in turn, underpin the capacity of ecosystems to provide services.To achieve consistency between the cascade model and the SEEA-EEA framework, it is 
important to highlight the holistic components that guarantee the flow of individual ecosystem services and which are accounted for in the SEEA-EEA."In the cascade model's function element we distinguish a ‘process’ which occurs within the ecosystem considered as a whole, and a ‘process’ which determines the capacity of an ecosystem to generate single ecosystem services.Measurements of ecosystem functions progresses from a holistic measurement to an individual measurement.For example, processes such as nutrient and carbon cycling, as well as photosynthesis, operate within the ecosystem as a whole and depend on the condition of the ecosystem.Holistic functioning of the ecosystem and its inherent processes determines the capacity to supply single or multiple ecosystem services.In our application: the holistic process that operate within the ecosystem is nutrient cycling, the capacity is the amount of water purification that the ecosystem is able to provide now and in the future, water purification is the flow of the service provided now.Step 2 involves the physical quantification of the selected ecosystem service.The approach most compatible with SEEA-EEA to quantify the capacity of the ecosystem to provide a service is to measure ecosystem conditions using indicators such as biomass index and soil fertility.In the SEEA-EEA handbook ecosystem condition provides a link between ecosystem capacity and ability to supply ecosystem services.Here we use a biophysical model to quantify the actual flow of the ecosystem service, i.e. the amount used by society.In the supply-use accounting table the sustainable flow corresponds to the service supply, while the actual flow plus the difference between sustainable and actual flow corresponds to service use.In UNEP it is in fact left open the possibility to record what flows back to ecosystem units when the supply of ecosystem service has a ‘larger scope’.The actual flow of an ecosystem service is not necessarily sustainable.In overfished fisheries, for example, the actual flow exceeds the capacity of the marine ecosystem to maintain the stock, with a resulting declining stock value and the risk of collapse.A sustainable use of ecosystems requires the actual flow of the service to be equal or lower than the maximum sustainable flow that the ecosystem is able to provide.For management purposes it is therefore important to measure or estimate the sustainable flow – which remains a challenge for regulating services because it is hard to establish thresholds for sustainability.Here we define capacity as the stock generating a sustainable flow and quantify its value by estimating the Net Present Value of the present and future sustainable flow.We think that for accounting purposes capacity should be quantified with reference to single ecosystem services, rather than for ecosystems as a whole.In the accounting terminology, the opening stock in our approach is the capacity of the ecosystem to generate a given specific service, and it is calculated as the NPV of the ecosystem service sustainable flow.The changes to be recorded are the actual flow of the ecosystem service that is used by humans.The capacity is not the maximum theoretical flow the river system can generate for e.g. 
one year, but it represents the current and future flows measured at a sustainable rate.Capacity is thus intended as a stock and not as a flow.Consistently with these definitions, the actual flow can be higher, equal or lower than the sustainable flow, but not higher than the capacity.When the actual flow of the ecosystem service is lower than sustainable flow the implication is no degradation.Actual and sustainable flows are separate tables and maps.In economic terms you might choose to only value actual flow, however the sustainable flow remains whether or not a monetary value is estimated.If actual flow is lower than sustainable flow the capacity to provide the service remains intact.Conversely, if actual flow exceeds sustainable flow, the stock will be degraded and the capacity will be reduced.Population density, for example, affects capacity only when it drives the actual flow beyond the sustainability threshold, and its specific role can be identified provided it is explicitly included in the modelling equations behind the biophysical assessment.However, it must be acknowledged that the basis for these assumptions about capacity is that there are no other changes in the ecosystem, i.e. we assume that the condition of the ecosystem is not affected by any other changes.Step 3 translates biophysical quantities into monetary terms.Following the SEEA-EEA guidelines, it is important to distinguish between welfare values relevant in some public policy decision making contexts, and exchange values, relevant in an accounting context.The former include consumer surplus,5 while the latter considers prices at which goods and services are traded and hence will include the producer surplus.6,One set of methodologies includes both producer and consumer surplus while the other set includes only producer surplus.Although methodologies based on exchange values might in some cases underestimate the value of ecosystem services, they provide more robust values than those calculated on the basis of subjective preferences.As the focus of ecosystem accounting is on integration with standard economic accounts, ecosystem services should be estimated with reference to exchange values.Step 4 reports the physical and monetary outputs in accounting tables in three ways:Accounting for actual flow of services received by economic sectors and households;,Accounting for the sustainable flow of services;,Accounting for the capacity of ecosystems to provide a sustainable flow of the ecosystem service, calculated as the NPV of the sustainable flow.The empirical objective of this case study is to value the water purification service taking place in rivers in Europe.The retention of Nitrogen from point and diffuse sources is used as a proxy for water purification.Excessive nitrogen loading is a leading cause of water pollution in Europe and globally which makes nitrogen a useful indicator substance for water quality.We define N retention as the process of temporary or permanent removal of nitrogen taking place in the river.This includes the processes of denitrification, burial in sediments, immobilization, and transformation or simply transport.According to this definition, N retention varies with the characteristics of the stream and of the living organisms in the aquatic ecosystem, and hence depends on the ecological functioning of the system.Previous studies show that N retention is affected by N concentration in streams.Mulholland et al. 
showed that the efficiency of biotic uptake and denitrification declines as N concentration increases and Cardinale concluded that biodiversity in aquatic ecosystems has a positive effect on nitrogen retention.At the same time, biodiversity is threatened by high nutrient loadings in freshwater and coastal waters.We use the Geospatial Regression Equation for European Nutrient losses model to estimate the in-stream nitrogen retention in surface water, which is considered in this paper as the actual flow of service provision.GREEN is a statistical model developed to estimate nitrogen and phosphorus flows to surface water in large river basins.The model is developed and used in European basins with different climatic and nutrient pressure conditions and is successfully applied to the whole Europe.The model contains a spatial description of nitrogen sources and physical characteristics influencing the nitrogen retention.The area of study is divided into a number of sub-catchments that are connected according to the river network structure.The sub-catchments constitute the spatial unit of analysis.In the application at European scale, a catchment database covering the entire European continent was developed based on the Arc Hydro model with an average sub-catchment size of 180 km2.For each sub-catchment the model considers the input of nutrient diffuse sources and point sources and estimates the nutrient fraction retained during the transport from land to surface water and the nutrient fraction retained in the river segment.In the case of nitrogen, diffuse sources include mineral fertilizers, manure applications, atmospheric deposition, crop fixation, and scattered dwellings, while point sources consist of industrial and waste water treatment discharges.In the model the nitrogen retention is computed on annual basis and includes both permanent and temporal removal.Diffuse sources are reduced both by the processes occurring in the land, and those occurring in the aquatic system, while point sources are considered to reach directly the surface waters and therefore are affected only by the river retention.In natural systems nitrogen retention is related to nitrogen input.The residence time of water is a key variable for in-stream nitrogen retention since it directly affects the processing time of nitrogen within an aquatic system.Longer residence times increase the proportion of nitrogen input that is retained and removed from the water.We use modelled nitrogen retention as indicator for the actual flow of the water purification service, and this assessment in turn represents the basis for the calculation of the sustainable flow and the translation of this assessment from physical to monetary terms.Our initial hypothesis to calculate a sustainable flow of in-stream nitrogen retention is that there is a threshold in the nitrogen concentration of surface water below which the removal of nitrogen by the different ecological processes is sustainable from an ecosystem point of view.A similar threshold exists for atmospheric nitrogen deposition on terrestrial ecosystems with suggested critical nitrogen loads between 5 and 25 kg ha−1 year−1.Here we propose to use a tentative threshold concentration of 1 mg N l−1.This threshold is based on eutrophication risk.A global synthesis of published literature on the ecological and toxicological effects of inorganic nitrogen pollution in aquatic ecosystems suggests that levels of total nitrogen lower than 0.5–1.0 mg l−1 could prevent aquatic ecosystems from 
developing acidification and eutrophication. For the potential risk of eutrophication for European surface water related to nitrogen concentration see also Grizzetti et al. This threshold concentration serves as an example for the purpose of this paper and will change depending on the vulnerability of different aquatic ecosystems to nitrogen loading. For instance, it does not apply to ecosystems naturally rich in nitrogen such as estuaries, where a higher threshold could be used, or to catchments with very vulnerable lakes, where a lower threshold should be used. Spatially explicit sustainable targets for thresholds of total nitrogen concentration in freshwater systems can be set based on the European Water Framework Directive requirements for good or high ecological status. Eq. gives the sustainable in-stream nitrogen retention, also referred to in our paper as sustainable flow. It is important to stress that the exponent factor in Eq. is introduced in this study to account for trade-offs that arise between water purification and other ecosystem services in conditions where nitrogen loads and concentrations are unsustainable. Studies that, unlike this one, analyse multiple ecosystem services delivered by aquatic ecosystems can simply use Ncrit as the value for Nsustainable without applying the exponent function. For the monetary valuation of water purification we adopt a 'cost-based approach'. We do not use a 'damage-based approach' because of the difficulty of exhaustively identifying all the benefits that could be lost if the water purification service offered by the ecosystem is no longer available. These benefits range from the availability of clean water for drinking or swimming, to the presence of fisheries, to the aesthetic perception that influences both recreational activities and real estate markets. The benefits from water purification also overlap in many cases with benefits from other ecosystem services, which risks giving rise to double counting. By using, instead, a cost-based approach rather than methodologies based on stated preferences, we make an attempt to get closer to the SEEA-EEA guidelines, which preferably ask for exchange value estimates; as already mentioned, the choice of adopting a cost-based approach instead of a damage-based approach allows us to deliver more robust and credible figures, even if it might result in an underestimation of the value of the ecosystem services. Finally, we can operationalize the underlying concept that monetary values depend upon biophysical assessments, which is a crucial prerequisite for integrated valuation. The rationale of a cost-based approach to valuation is well known. By cleaning up discharges from human activities, aquatic ecosystems provide for free a valuable ecosystem service and thus avoid a degradation of the ecosystem that would impact on human health and living conditions. Since human activities will not stop, there will always be the need for this ecosystem service even after river bodies are no longer able to provide it. The operational hypothesis of our valuation exercise is that an artificial replacement would be required in order to maintain the water purification service, and replacement would entail a cost. Considering the relevant pollution sources, the best proxy we can use as a replacement cost is constructed wetlands. Wastewater treatment plants would be inappropriate because they are not applicable to the primary sector, and what is discharged by the secondary sector and by households is already treated by wastewater treatment
plants before reaching water bodies.9,Constructed wetlands provide ecosystem functions similar to those delivered by aquatic ecosystems.Their construction cost refers to ecosystem engineering work, which is more objective than values obtained through stated preferences, with a survey questioning citizens on the value they would place on nitrogen retention.The rationale is that artificial wetlands are also able to retain N present in relatively low concentrations, as opposed to urban wastewater treatment plants that need high concentration of the pollutant for efficient removal.A review of the value attributed to nitrogen retention is available from a previous study where it is clearly shown how the choice of replacement costs is very popular among environmental economists.Wastewater treatment plants are much more expensive than CW; moreover, in our valuation exercise we differentiate between typologies of CW in order not to overestimate the cost, in fact the more extensive typology of CW is the less expensive solution.We thus use the cost of CWs as proxy for the valuation of nitrogen retention, which represents a proxy for water purification.Specifically, the amount of nitrogen that is retained and removed by rivers and lakes will be converted to a CW area equivalent, i.e. the total area of CW that is needed to result in the same nitrogen retention as the river network in each sub-catchment.Once we have this CW area equivalent, we calculate the costs of the corresponding typology of CWs based on cost data.Differently from previous applications undertaken on water purification the monetary values here are not derived from other studies but calculated ad hoc for the specific engineering works hypothesized.The typologies of CW are differentiated according to the types of pollutant sources.Free Water Surface CWs are densely vegetated basins that contain open water, floating vegetation and emergent plants.They basically need soil to support the emergent vegetation.The FW constructed wetlands reproduce closely the processes of natural wetlands, attracting a wide variety of wildlife, namely insects, mollusks, fish, amphibians, reptiles, birds and mammals.FWS-CWs are the best choice for the treatment of nutrients from diffuse primary sector activities.Horizontal subsurface Flow CWs consist of waterproofed beds planted with wetland vegetation and filled with gravel.The wastewater is fed by a simple inlet device and flows slowly in and around the root and rhizomes of the plant and through the porous medium under the surface of the bed in a more or less horizontal path until it reaches the outlet zone.HF-CWs represent the best choice for treating point sources.The flow Q is separated in two different sub-flows: a first one containing only nitrogen from diffuse sources, which is calculated as the product of surface basin and annual precipitation; and a second one containing only nitrogen from point sources, whereby the point input sources were converted according to Eq. to a flow value by using population data and by assuming person equivalents.We assumed that the nitrogen load removed by HF and FWS is proportional to the ratio between non-point and point sources discharging into the basin.In order to assess the ratio between ci and ce) we perform the calculations in Eqs. 
and.Once we have the CW area equivalent, we can calculate the costs of the corresponding typology of CWs.Total costs include direct construction costs, indirect construction costs and costs of labour and material.Indirect costs have been included as a standard percentage of construction costs.12,Labour cost values have been extracted from the Eurostat labour statistics, which reports costs from 1997 to 2009.For countries with missing data, we estimate approximate values based on those of adjacent countries with similar economic conditions.The costs of filling materials are obtained by a direct survey conducted among CW designers and builders in different European countries and by data available in the peer-reviewed literature.To account for price differentials across countries, construction costs have been divided in three components: a fixed component; labour costs; filling materials costs.For each country the total cost is obtained as the sum of fixed costs, labour costs and filling material cost for HF and as sum of fixed costs and labour cost for FWS.On the ground of a series of case studies examined, we assume an operating and maintenance cost equal to 3850 € ha-1 for FWS and 7700 € ha−1 for HF.We should take into account on one hand the economy of scale effect, and on the other hand the fact that different countries in Europe have different costs.The two aspects cannot be calculated together because the imposition of fake thresholds would unrealistically affect the final result.We thus calculate separately the economy of scale effect and the price difference effect.After few simulations were run, the most reliable outcomes result from the combination that considers a 70-30 breakdown, i.e. 70% of the cost is based on an assessment of the price difference effect and 30% of the cost is based on the economies of scale model and).14,We present the accounts at two spatial scales: i) at the European scale, to show how service capacity and service flow can be quantified through the accounting tables proposed by the SEEA-EEA, and; ii) at the country scale, to put sustainable and actual flow and valuation estimates into context.We report monetary estimates in constant year 2000 values: valuation is used here as a translation in monetary terms of the biophysical assessment, and including inflation in the estimates would overshadow their comparability over time.Using current rather than constant prices is obviously feasible and may be desirable for different purposes.In Europe, over the 20-year time period considered, total nitrogen input to river basins varies between 50 and 80 million ton, the largest share originating from the agricultural sector and entering the basin as diffuse sources.This total represents the combined input of different nitrogen sources on the land after take up by crops.After basin retention, around 5 million tons reach the river network.Nitrogen emissions from industries and households enter the river network as point sources and amount to 1.1 million ton of nitrogen.15,Tables 1, 2 present stock and flow accounts, respectively, of the delivery of water purification services by the European river network as indicated by in-stream nitrogen retention.We calculate that replacing this ecosystem service capacity would require approximately one million ha of constructed wetland, representing a net present value of between 310 billion € in 1990 and 459 billion € for the year 2005.The flows of total annual service vary between 21 and 31 billion euro assuming sustainable service 
delivery.The actual service flow aggregated at the European scale is worth around 16 billion euro annually.Economic sectors and households are the polluting subjects who actually use the water purification service.The total values aggregated for Europe suggest that the sustainable flow is higher than the actual flow.Relative values disaggregated at the country level will read differently.The separation between the primary sector and other economic activities and households has been determined by the features of the biophysical model that explicitly differentiate retention values for diffuse source and point sources.The possibility to frame the results according to economic sectors offers the possibility to integrate this information with economic accounts, all expressed in monetary terms.In Tables 3, 4 we report estimates expressed respectively in 103 kg km−1 year−1and euro km−1 year−1, so as to assess sustainability independently of the size of the country.Total values are mapped in Figs. 2–4, at the European level as well as for the 34 countries covered by the model extent.Tables 3, 4 account for the ecosystem service flow at a country level, estimated in physical and monetary terms, respectively, for 1985, 1995 and 2005.Table 3 also presents statistics on the total size of river basins and the national river network as well as total nitrogen emissions.These latter statistics can be used to convert the accounts expressed per kilometre into national totals.The results reported in Table 3 demonstrate that for many countries the sustainable flow, measured in physical units, is below the actual flow.Consequently, monetary values based on physical accounts show their same pattern.Please be aware that sustainable flow does not represent the whole possible flow.It does represent the level of the flow that can be used without degrading the capacity of the ecosystem to provide it.Actual flow can indeed be higher than the sustainable flow but this over-exploitation will affect the degradation of the ecosystem and thus future levels of sustainable flow.Furthermore, Table 3 shows that in most countries total nitrogen emissions have gradually declined between 1985 and 2005.Given the positive relation between nitrogen input and actual in-stream nitrogen retention, the physical flow accounts follow this emission trend and show, on average, a decline in the average amount of nitrogen retained per unit length of river network.How far a country is from a sustainable situation depends on the magnitude of past N inputs.Consider the Netherlands: they have substantially decreased N input in the last 15 years, but the difference between actual N emissions and the sustainable limit is nonetheless the largest in Europe.For almost all countries the actual flow is higher than the sustainable flow, which means that river ecosystems in Europe are progressively degrading as a result of nitrogen pressure.Sustainable use is achieved in Estonia, Finland, Norway and Sweden, where actual flows for 2005 were on average lower than the sustainable flows.In all other countries considered, in-stream nitrogen retention occurs at unsustainable levels.The apparently contrasting results between Tables 1–4 offer few lines of thought.Considering absolute values provide a rather different picture than relative values; it is thus important to establish what is the figure we choose to analyse and for what purpose.Moreover, the countries to be included does affect the final value: including or omitting one or few countries can overturn 
the results, if these countries have economic activities with a highly impacting effect and/or a considerable size.A few important points are worth highlighting. The capacity to generate sustainable nitrogen retention, shown in Fig. 2, and the sustainable flow exhibit the same distribution, but with a different order of magnitude.16,Whereas both distribution and order of magnitude differ in the trend of actual flows relative to capacity. Sustainable flow and actual flow exhibit different distributions but same order of magnitude.Interesting observations emerge also from the monetary flow accounts.Firstly, variation between countries is much higher than observed in the physical accounts.This is largely the result of different price levels among different countries in Europe, with highest values for Scandinavian countries and lowest values for Balkan countries.Secondly, the annual variation in actual flow within countries is limited as a result of the high fixed costs relative to variable costs used in the replacement cost model.These points will be discussed more in depth in the Discussion.The accounts reported in Table 1 should always be consistent with those reported in Table 2.Consistency is guaranteed by the use of the same biophysical model to which, in the case of the assessment of sustainable flows, a critical threshold concentration is applied.Finally, it should be recalled that the nitrogen retention takes place in soils, surface water including streams, river and lakes, wetlands, and coastal and marine sediments.Our accounts, however, are limited to the river network.The crucial note we address with this case study is the definition, in accounting terms, of stocks and flows of ecosystem services.Ecosystem services depend on the functioning and health of the ecosystem as a whole.Ecosystem resilience is related to the capacity of ecosystems to generate ecosystem services, now and in the future.However, the notion of capacity is controversial.In Burkhard et al. a difference is made between ‘ecosystem service potential’, defined as the hypothetical maximum yield of selected ecosystem services, and ‘ecosystem service flow’, defined as the used set of ecosystem services.This definition of ecosystems services potential follows the notion of stock, as long as it is clear that ‘ecosystem service potential’ differs from ‘ecosystem service potential supply’.Potential supply versus actual flow is what we define as sustainable flow versus actual flow."In Villamagna et al. service capacity is the ‘ecosystem's potential to deliver services based on biophysical properties, social condition, and ecological functions’.This definition theoretically links ecosystem services to the notion of stock.However, in both Villamagna et al. 
and Schröter et al., examples are provided in which the flow of an ecosystem service can be higher than the capacity. In our approach we suggest that the accounting notion of capacity should be defined as the stock generating the sustainable flow. Thus, the actual flow can be higher than the sustainable flow, but the result is a depletion of the ecosystem's capacity to generate future flows. Our application identifies several challenges that need to be addressed before a standard framework for integrated ecosystem and economic accounting can be proposed. The first is the difference between potential flows and sustainable flows. Potential flow is the maximum flow of a given service that the ecosystem is able to generate; sustainable flow is the flow that does not exceed the regeneration rate. For provisioning services it is possible to quantify the difference between the two. For regulating and maintenance services it is possible to measure the sustainable flow once a sustainability threshold has been identified, but it is unclear whether it would be possible to measure the potential flow. This is a key point that needs to be addressed in order to make the accounting for ecosystem services operational and rigorous. Even establishing a sustainability threshold is not trivial because the conditions and vulnerability of ecosystems vary in space and time. One feature of our application that needs to be highlighted is the use of constructed wetlands for valuing the NPV of the water purification sustainable flow. Ideally the quantification of ecosystem capacity and services should be based on the assessment undertaken in physical terms and not be dependent on the valuation methodology. In many cases, however, this turns out not to be possible. In our case study, for example, the available biophysical model is based on a statistical approach, using regression analysis to build a statistical relation between retention and explanatory variables such as land cover, climate, and so on. The model does not include equations representing the physical functions of the ecosystem. For future applications and wherever possible, process-based models should be used to quantify stock-capacity and flow-service. Directly related to the choice of using CWs as a replacement cost is the choice of the lifetime of the resource and of the discount rate used in calculating the Net Present Value. Moreover, we not only consider operation and maintenance costs but also 'incorporate' building costs considered over the 20 years of the CW life. One important consequence is that fixed costs play the most important role, consistent with our hypothesis that substitute costs have to be incurred once natural processes have been impaired. Underlying assumptions must however be kept in mind when comparing monetary values resulting from different valuation techniques. Finally, although CWs are likely to affect the concentration of other pollutants and to provide other ecosystem services, we only related CWs to nitrogen emissions, the pollutant we used as a proxy for anthropic pressure on the water purification service. In a future application, the attribution of the whole cost to nitrogen retention should perhaps be re-proportioned on the basis of plausible hypotheses. Another point highlighted by our application is the critical role played by the way in which the sustainability threshold is calculated and spatially differentiated according to physical conditions. The latter, in fact, causes the sustainable flow to be very sensitive to changes in emissions. A sensitivity analysis has been conducted and
we demonstrate that the drivers of changes in the final outcome mainly depend on the biophysical assessment: 56% depends on the model input and parameters, 27% depends on the parameters used to size the area of CW necessary to retain the amount of N and only 17% depends on the purely economic figures, i.e. building and O&M costs and their coefficients, the discount rate and life expectancy of the CWs.As a final note, the exercise presented in this paper shows once more that quantifying and valuing ecosystem services often involves complex, fit for purpose models.This imposes heavy reliance on derived data.If developing data standards is generally considered one of the objectives of ecosystem service accounting, we therefore suggest that a standard on metadata settings, in addition to a standard for data, is equally required.Ecosystem service accounting is in its infancy and substantial work is needed to refine and test methods for use in National Accounts.Our case study addresses the issue of capacity in integrated ecosystem and economic accounting by structuring ecosystem services within a consistent stocks and flows perspective – something not fully addressed in previous applications of ecosystem services accounting.The first key message from our application is that capacity can be calculated for ecosystem services one-by-one.The second key message is that capacity can be intended as the stock that provides the sustainable flow of the service.The third key message is that the sustainable flow must be calculated jointly with the actual flow.Current use of any ecosystem service should be assessed against sustainable use.The capacity should describe the sustainable flow of each service that the ecosystem can generate, even if not currently fully exploited.The common underlying element underneath the first two messages is that the physical flow accounts provide the basis upon which other accounting tables are built.By calculating capacity and flow, in our case study, we demonstrate how to make the theory and concepts described in the SEEA-EEA operational for use in upcoming standards of integrated ecosystem and economic accounting. | In this paper we present a case study of integrated ecosystem and economic accounting based on the System of Environmental Economic Accounting — Experimental Ecosystem Accounts (SEEA-EEA). We develop accounts, in physical and monetary terms, for the water purification ecosystem service in Europe over a 20-year time period (1985–2005). The estimation of nitrogen retention is based on the GREEN biophysical model, within which we impose a sustainability threshold to obtain the physical indicators of capacity – the ability of an ecosystem to sustainably supply ecosystem services. Key messages of our paper pertain the notion of capacity, operationalized in accounting terms with reference to individual ecosystem services rather than to the ecosystem as a whole, and intended as the stock that provides the sustainable flow of the service. The study clarifies the difference between sustainable flow and actual flow of the service, which should be calculated jointly so as to enable an assessment of the sustainability of current use of ecosystem services. Finally, by distinguishing the notion of ‘process’ (referred to the ecosystem) from that of ‘capacity’ (pertaining specific services) and proposing a methodology to calculate capacity and flow, we suggest an implementable way to operationalize the SEEA-EEA accounts. |
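The stock-and-flow logic of the accounts described in this case study, a sustainability threshold of 1 mg N l−1, actual versus sustainable flow, and capacity as the net present value of the sustainable flow over the 20-year constructed-wetland lifetime, can be sketched as follows. This is a minimal illustration only: the GREEN model, the study's own equations (including the exponent factor) and its constructed-wetland cost functions are not reproduced here, and the 3% discount rate, the function names and all numbers in the usage example are hypothetical assumptions rather than the study's figures.

```python
def sustainable_retention_kg(annual_flow_m3, n_crit_mg_per_l=1.0):
    """Simplified upper bound on in-stream N retention (kg/yr) compatible with keeping
    total N at or below the 1 mg N/l threshold; the exponent correction of the study's
    equation is omitted, and annual_flow_m3 is a generic annual water volume."""
    return annual_flow_m3 * 1e3 * n_crit_mg_per_l / 1e6  # m3 -> litres -> mg -> kg

def capacity_npv(sustainable_flow_eur_per_yr, years=20, discount_rate=0.03):
    """Capacity treated as a stock: net present value of the sustainable annual flow.
    The 20-year horizon follows the CW lifetime mentioned in the text; the 3% rate is
    purely illustrative."""
    return sum(sustainable_flow_eur_per_yr / (1.0 + discount_rate) ** t
               for t in range(1, years + 1))

def degradation(actual_flow, sustainable_flow):
    """Positive when actual use exceeds the sustainable flow, i.e. the stock is run down."""
    return max(0.0, actual_flow - sustainable_flow)

# Hypothetical sub-catchment: 2.0e8 m3/yr of river flow, actual retention valued at
# 0.9 million EUR/yr, sustainable retention valued at 0.6 million EUR/yr.
print(sustainable_retention_kg(2.0e8))   # 200000.0 kg N per year
print(capacity_npv(0.6e6))               # ~8.93 million EUR (annuity factor ~14.88)
print(degradation(0.9e6, 0.6e6))         # 300000.0 EUR per year of over-use
```

Keeping the physical bound (sustainable_retention_kg) separate from the monetary step (capacity_npv) mirrors the paper's point that monetary values should follow from the biophysical assessment rather than drive it, and the degradation term corresponds to the case where actual flow exceeds sustainable flow and the capacity is eroded.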
323 | A carrier-free multiplexed gene editing system applicable for suspension cells | Harnessing immune system has great merit to fight cancer by boosting anti-tumor immunity.One of the most effective strategy in cancer immunotherapy is the use of inhibitors directed to immune checkpoint molecules such as cytotoxic T lymphocyte-associated molecule-4, programmed cell death-1 and programmed cell death ligand-1 .In particular, monoclonal antibody-based inhibitors targeting PD-1/PD-L1 axis by blocking the interaction between PD-1 and PD-L1 have achieved impressive therapeutic effects in several solid cancers .PD-1/PD-L1 signaling pathway plays crucial roles in tumor immune escape by inhibiting the proliferation, survival and effector functions of T lymphocyte .PD-1, known as CD279, is an inhibitory receptor expressed on immune cells, particularly cytotoxic T cells that is involved in immune tolerance and T cell exhaustion.Together with the PD-1 receptor, PD-L1 and PD-L2 as two PD-1 ligands negatively modulate the immune response .The distribution and expression of PD-L1 and PD-L2 are differentially regulated .Reportedly, PD-L1 is expressed broadly on immune cells as well as numerous cancers, whereas PD-L2 expression is restricted to antigen presenting cells such as macrophages and dendritic cells .However, recent studies have demonstrated that expression of PD-L2 is also detected in many cancers including renal cell carcinoma, bladder carcinoma, melanoma, non-small cell lung cancer, triple-negative breast cancer and gastric carcinoma, and even in PD-L1 negative tumors .Tumor-associated PD-L1 expression has been evaluated as an important indicator of the predictive clinical response to anti-PD-1 antibody therapies .Although PD-L1 is a significant marker in cancers, subsets of some PD-L1 negative patients have often shown a promising response to anti-PD-1 therapies .This clinical response could be explained by the status of PD-L2 expression, nevertheless PD-L1 was negative.Interestingly, a recent paper showed that tumor-associated PD-L2 was associated with clinical response to pembrolizumab as an anti-PD-1 inhibitor in patients with head and neck squamous cell carcinoma, which is independent of PD-L1 expression .Further, PD-L2 expression can correlate with the clinical response to anti-PD-1 therapies.Together with PD-1, PD-L2 would be one of the key factors in the PD-1/PD-L1 axis-targeted therapies.Currently available therapies that disrupt the interaction of PD-1 and PD-L1/PD-L2 are the use of monoclonal antibodies directed at PD-1 .However, systemic administration of PD-1 blocking antibodies carries several drawbacks, with regards to low target specificity, very long half-life, risk of the autoimmune response, and the limitation of antibody manufacturing process .Recent approaches accessing gene editing technology, instead of the use of monoclonal antibodies, showed the possibility and feasibility to lead effective anti-tumor immunity .Based on this idea, we developed the clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9-enabled multiplex gene editing platform simultaneously to disrupt PD-L1 and PD-L2 particularly in suspension cancer.CRISPR/Cas9 system has prevalently been used in a wide range of biological applications .In particular, through the incorporation of multiple sgRNAs, the CRISPR/Cas9 system has achieved multiple gene editing, possibly providing promising opportunities to correct disease .Currently, many alternative approaches are accessible to 
differentially expressed multiple sgRNAs, including the use of multigene cassettes via a gene cloning method and DNA-free direct incorporation of those generated by in vitro transcription .Further, the multiple manipulations of cells via conventional transfection methods cannot be applied to all cell types .The efficacy of transfection depends on several factors including the adherence ability of the cells, leading low transfection efficiency in suspension cells .To enhance editing capacity of multiple genes, several issues regarding delivery of editing components should be addressed for the potent clinical application.Ovalbumin–specific TCR transgenic mice and RAG knockout mice were obtained from Dr. Se-ho Park and Dr. Rho Hyun Seong, respectively.OT-I Tg mice were further crossed with RAG KO mice to generate OT-I Tg/RAG KO mice.All mice used in this study were on the C57BL/6 genetic background, maintained at Sejong University, and at 6–12 weeks of age.They were maintained on a 12-h light/12-h dark cycle in a temperature-controlled barrier facility with free access to food and water.Mice were fed a γ-irradiated sterile diet and provided with autoclaved tap water.Age- and sex-matched mice were used for all experiments.The animal experiments were approved by the Institutional Animal Care and Use Committee at Sejong University.In vivo experimental procedures were performed according to the regulations of the Institutional Animal Care and Use Committee of KIST.5 × 106 EG7 cells were pre-treated with 200 nM of single or multiple Cas9 RNPs, suspended in 50% matrigel solution, and then were subcutaneously inoculated in the left flank of naive C57BL/6 wild-type mice.Four groups of mice were administered a single intravenous injection.Individual tumors were monitored every 3 days for 30 days, and the mice were sacrificed for further experiments.For immunofluorescence staining, tumor sections were embedded in Optimal Cutting Temperature compound, fixed in 4% PFA, and then permeabilized according to standard procedures.To detect PD-L1 and PD-L2 expression, the cells were incubated with PE-conjugated anti-mouse PD-L1 or FITC-conjugated anti-mouse PD-L2.Each sgRNA was prepared via hybridization of sequence-specific crRNA with tracrRNA in IDT duplex buffer at 95 °C for 5 min and cooling down to 20 °C.All crRNAs and tracrRNAs were synthesized by IDT.For generation of multiple Cas9 RNPs, a low-molecular weight protamine-carrying engineered Cas9 proteins and multiple sgRNAs at a weight ratio of 1:5 were mixed in phosphate-buffered saline buffer for 30 min at 37 °C.Each 60 pmol of sgRNAs was used for further experiments, and engineered Cas9 proteins were prepared via the Ni-NTA purification system as described in our previously published paper .For example, in the complexation of Cas9-LMWP proteins with two sgRNAs, Cas9-LMWP:sgRNA#1:sgRNA#2 at a weight ratio of 1:5:5 were incubated in PBS buffer.Finally, multiple Cas9 RNPs-containing solutions were directly treated into cells for 48 h, and then proceeded for further experiments.Using a Zetasizer Nano ZS, the hydrodynamic sizes and zeta potential of Cas9 only and complexed Cas9 RNPs suspended in PBS buffer were determined, and one of each representative result from triplicate measurements was shown.For atomic force microscopy analysis, the single or multiple Cas9 RNP-containing solution was dropped on freshly cleaved mica, and then air-dried.AFM imaging was obtained by contact mode with XE-100 AFM and processed using a PARK system XEI software 
program.OVA-expressing EG7 cells were provided from the American Type Culture Collection and maintained in 10% FBS-containing RPMI 1640 at 37 °C in the presence of 5% CO2.Briefly, 150 nM of single or multiple Cas9 RNPs were treated in EG7 cells at a density of 2 × 105 cells per well.For lipofectamine 2000-mediated transfection experiments, Cas9 RNPs without LMWP were used.For combinatory treatment with electroporation, cells incubated with 100 nM of multiple Cas9 RNPs were subsequently treated with a Neon electroporator set at four different conditions: 1400 mV, 1500 mV, 1600 mV, and 1700 mV for E#1−E#4, respectively, at 1 pulse for 20 ms each.To visualize the localization of intracellular Cas9 proteins on EG7 tumors, the multiple Cas9 RNP-treated EG7 cells were fixed and permeabilized with the BD Cytofix/Cytoperm™ solution kit.The fixed cells were incubated with a His-tagged primary antibody overnight at 4 °C, followed by incubation with an Alexa Fluor 488 secondary antibody for 1 h. DAPI was stained, and then cells were analyzed by a confocal microscopy using a Zeiss LSM 700.To monitor the cellular internalization process for multiple Cas9 RNPs, cells were treated with four different endocytotic inhibitors at 37 °C for 1 h: 40 μM monensin, 4 μM filipin, 10 mg/mL methyl-β-cyclodextrin, and 200 μM chloroquine.Cells were incubated with 200 nM of multiple Cas9 RNPs at 37 °C for 1 h.Then intracellular staining of Cas9 with a FITC-conjugated anti-Cas9 antibody was performed using BD cytofix/cytoperm fixation and permeabilization solution according to standard procedures.As a positive control, the localization of IFNγR using a PE-conjugated anti-IFNγR was observed in the pre-treated cell with individual inhibitors after IFNγ stimulation.Efficacy of gene editing was evaluated by in vitro T7 endonuclease Ι assay, western blotting, flow cytometry and confocal imaging analysis."For detection of indel mutations, genomic DNA was isolated using a QIAamp DNA mini kit according to the manufacturer's instructions.Further, the in vitro T7E1 assay was performed as described previously .The primers used for the specific PCR amplicons are listed in Table S1, Supporting Information.For western blotting, the cell lysates were obtained using RIPA buffer supplemented with a protease inhibitor.A total of 30 μg of proteins was detected using primary antibodies specific for PD-L1, PD-L2, β-actin, and subsequent incubation with secondary immunoglobulin antibodies linked to horseradish peroxidase.For flow cytometry and confocal microscopic imaging analysis, the cells were incubated with PE-conjugated anti-mouse PD-L1 or FITC-conjugated anti-mouse PD-L2.OT-I Tg/RAG KO mice were immunized via subcutaneous injection with 200 μg of the OVA protein emulsified in CFA containing 5 mg/mL of the heat-killed H37Ra strain of Mycobacterium tuberculosis into the lower back.Two weeks after immunization, splenic CD8+ T cells were enriched from OT-I Tg/RAG KO mice by negative selection of CD8+CD11c+ dendritic cells using anti-CD11c MACS and LD column, followed by positive selection with the CD8+ T cell MACS system.Cell populations included >94% CD8+ T cells among all MACS-purified populations."EG7 cells were labeled with 5-carboxyfluorescein diacetate succinimidyl ester using the 7-AAD/CFSE cell-mediated cytotoxicity assay kit according to the manufacturer's instructions.Subsequently, the CFSE-labeled target cells were incubated with the OVA-specific CD8+ T cells as effectors isolated from OT-I Tg/RAG KO mice.At 48 h post 
incubation, the collected whole cells were stained with 7-AAD solution and analyzed by flow cytometry.For assessment of intracellular effector production in OVA-specific CD8+ T cells, brefeldin A was added for the last 6 h of the co-culture.The cells were harvested and fixed with 4% PFA solution.Subsequently, the cells were stained with anti-mouse CD8α-APC plus anti-mouse IFNγ-FITC, anti-mouse TNFα-PE, or anti-mouse perforin-PE for 1 h at 4 °C.After washing, the cells were subjected to flow cytometric and confocal microscopic analysis.For ELISA assay, the supernatants were harvested at 48 h post incubation."Subsequently mouse IFNγ and TNFα were detected using an ELISA kit according to the manufacturer's instructions.To enhance cytotoxic CD8+ T cell-mediated immune response by achieving simultaneous disruption of both PD-L1 and PD-L2 expression in suspension cancers, we established a simplified multiplex gene editing system with simultaneous incorporation of individual sgRNA targeting PD-L1 and PD-L2, designed from a multifunctional Cas9 fusion protein we previously reported as a one-step Cas9 RNP system .The use of our previously developed ternary Cas9 RNPs has proven as an innovative method to treat cancer in vitro and in vivo, by engineering multifunctional Cas9 proteins with complexation reagent abilities and cell penetrating and nuclear translocation properties .In a carrier-free approach, the engineered Cas9 fusion proteins can deliver sgRNAs into the nucleus of cells.For the application of our Cas9 system in immunotherapy, our next goal was to achieve multiplex gene editing via incorporation of individually synthesized sgRNAs in a simple way.Thus, we attempted to simultaneously incorporate individual sgRNAs targeting PD-L1 and PD-L2 into engineered Cas9 proteins via electrostatic interaction.Positively charged LMWP sequence within Cas9 proteins allows self-assembly of multiple sgRNAs and simultaneous delivery into the nucleus.First, we evaluated the efficacy of insertion/deletion mutations triggered by our ternary Cas9 RNP system in OVA-expressing murine EG7 tumor cells.We employed two non-overlapping sgRNAs against PD-L1 and PD-L2 to disrupt PD-1 ligands under IFNγ treatment to induce high expression of both.Flow cytometric analysis and T7E1 assay exhibited that each of sgRNAs targeting PD-L1 and PD-L2 resulted in the dramatic reduction of their gene expression, when complexed with an engineered Cas9 protein.Approximately 70–80% of targeted cells displayed reduced levels of gene expression at 48 h post incubation, especially when treated with PD-L1 or PD-L2 targeting sgRNA#2.Furthermore, levels of PD-L1 or PD-L2 were barely detected in PD-L1 or PD-L2 Cas9 RNPs-treated cells, respectively, using western blotting and confocal imaging analysis.Taken together, each sgRNA#2 targeting PD-L1 and PD-L2 has been chosen for further experiments.Next, to achieve the multiplexed gene editing using our carrier-free Cas9 system for disruption of both PD-L1 and PD-L2, multiple sgRNAs and engineered Cas9 proteins were mixed in PBS buffer for 30 min.Subsequently, the size of complexed Cas9 proteins was determined by using dynamic light scattering.Similar size distribution at approximately 100 nm was observed for the self-assembled Cas9 RNPs with single sgRNA or multiple sgRNAs at the ratio of 1:5, whereas the size of the Cas9 protein only was over 800 nm.Consistently with the DLS results, AFM analysis showed a small size distribution of single and multiple Cas9 RNPs, respectively.We confirmed that 
the incorporation of multiple sgRNAs could be condensed with engineered Cas9 proteins.Additionally, the surface charge of complexed Cas9 RNPs was determined by measuring the zeta potential.After complexation of single or multiple Cas9 RNPs, their surface net charge, which contained a 1:5 ratio of Cas9-LMWP:sgRNAs, decreased from 10.95 mV to 1.52 mV and 1.02 mV for single and multiple Cas9 RNPs, respectively, showing the self-assembly of Cas9 RNPs.To further investigate whether multiple Cas9 RNPs can be internalized, cellular uptake efficacy was measured.Based on flow cytometric analysis, over 62% of multiple Cas9 RNPs were internalized into EG7 cells at 2 h post incubation.Further, Cas9 proteins were nicely detected in multiple Cas9 RNPs-treated cells at 2 h post-treatment.Green signals represent the presence of Cas9 proteins, visualized by confocal microscopy.To define the cellular mechanism of internalization of the multiple Cas9 RNPs into target cells, EG7 cells were treated with several endocytic inhibitors to block the intracellular uptake of the multiple Cas9 RNPs.Chemical compounds including monensin and chloroquine function to inhibit clathrin-dependent endocytotic pathways, whereas filipin is known to block the caveolae-mediated endocytosis .mβCD has been used to deplete cholesterol from the membrane, affecting cholesterol-dependent endocytosis .As a positive control, the endogenous behaviour of the IFNγ receptor, which is localized on the cell surface of EG7 cells, was monitored upon treatment with individual inhibitors plus IFNγ using confocal microscopy.Cell surface-localized IFNγR after ligand binding can be internalized via two different intracellular pathways: caveolae and/or clathrin-coated pits .Indeed, the translocation of IFNγR into intracellular compartments upon treatment with each of the four different endocytotic inhibitors was not detected, whereas IFNγR accumulated in the intercellular compartments without exposure to the inhibitors.Next, we investigated the cellular internalization process of multiple Cas9 RNPs into EG7 cell using confocal microscopy and FACS analysis.After treatment with the individual inhibitors, the cellular uptake of multiple Cas9 RNPs was monitored.Intriguingly, the internalization of the multiple Cas9 RNPs was significantly inhibited upon treatment with all four inhibitors, suggesting multiple intracellular routes for their cellular uptake.The cellular uptake of multiple Cas9 RNPs was mainly inhibited by treatment of monensin, filipin, and chloroquine.However, the treatment with mβCD only mildly affected the uptake of multiple Cas9 RNPs, suggesting the function of cholesterol raft-independent endocytosis pathways for internalization of multiple Cas9 RNPs.In general, cytokine receptors can be internalized via clathrin-mediated endocytosis and/or clathrin-independent endocytosis according to membrane environment.In particular, caveolae-mediated endocytosis of IFNγR contributes to its nuclear localization .In addition, the multiple Cas9 RNPs can be additionally internalized via cholesterol raft-independent endocytosis pathways.Thus, the multiple Cas9 RNPs can traffic to the nucleus via multiple intracellular endocytosis pathways.Next, we determined the multiple gene editing efficacy when treating EG7 cells with multiple Cas9 RNPs.The treatment of multiple Cas9 RNPs targeting both PD-L1 and PD-L2 simultanously disrupted the expression of both PD-1 ligands, showing a significant reduction at the loci of PD-L1 and PD-L2, respectively, which were 
evaluated by T7E1 assay, western blotting, and flow cytometric analysis. In particular, based on flow cytometric analysis, the reduction efficacy of the multiple Cas9 RNPs was 69% and 89% for PD-L1 and PD-L2, respectively. In contrast, transfection with lipofectamine 2000 to deliver multiple sgRNAs and Cas9 revealed lower reduction efficacy than the multiple Cas9 RNPs. Specifically, lipofectamine-mediated delivery of the multiple sgRNAs achieved 64% and 61% reductions in the gene expression of PD-L1 and PD-L2, respectively. Additionally, we tested whether the multiplexed gene editing efficacy of the multiple Cas9 RNPs could be enhanced when combined with electroporation. Electroporation was applied under various conditions to optimize the delivery of multiple Cas9 RNPs targeting both PD-L1 and PD-L2 into target cells. Surprisingly, electroporation-mediated delivery of the multiple Cas9 RNPs showed a substantial increase in gene disruption compared with treatment with the multiple Cas9 RNPs alone. Specifically, electroporation condition #2 produced a 6.5-fold reduction in the PD-L1/PD-L2 double-positive population compared with treatment with the multiple Cas9 RNPs alone. Taken together, treatment with multiple Cas9 RNPs, either alone or in combination with electroporation, is capable of multiplexed gene perturbation at multiple loci with high gene editing efficacy. To determine the feasibility of utilizing multiple Cas9 RNPs in anti-PD-1 cancer therapy, we tested the cytotoxic CD8+ T cell-mediated immune response after disruption of both PD-1 ligands mediated by multiple Cas9 RNPs. We used CD8+ T cells isolated from OVA-immunized OT-I Tg/RAG KO B6 mice as effector cells to evaluate the CTL response to OVA-expressing EG7 tumor cells. CD8+ T cell-mediated cytotoxicity against EG7 cells was determined by CFSE-based flow cytometric analysis. CFSE-labeled EG7 target cells were incubated with OVA-specific CTLs as effector cells at effector:target ratios of 2:1 and 6:1 for 48 h.
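A minimal sketch of how target-cell lysis can be quantified from such CFSE/7-AAD flow cytometry data is given below. It assumes a simple gating in which CFSE+ events mark the EG7 target cells, CFSE+ 7-AAD+ events mark dead targets, and a target-only control provides the spontaneous death fraction; the function name, the correction step, and the example counts are assumptions for illustration rather than details taken from the text above.

def percent_specific_lysis(cfse_events, dead_target_events, spontaneous_fraction=0.0):
    """Estimate target-cell lysis from CFSE/7-AAD flow cytometry counts.

    cfse_events: number of CFSE+ (target) events acquired in the co-culture
    dead_target_events: number of CFSE+ 7-AAD+ (dead target) events
    spontaneous_fraction: fraction of dead targets in a target-only control,
        used here as a hypothetical correction for spontaneous death
    """
    if cfse_events == 0:
        raise ValueError("no CFSE+ target events acquired")
    dead_fraction = dead_target_events / cfse_events
    corrected = (dead_fraction - spontaneous_fraction) / (1.0 - spontaneous_fraction)
    return max(corrected, 0.0) * 100.0

# Hypothetical counts at an effector:target ratio of 6:1
print(percent_specific_lysis(10000, 4200, spontaneous_fraction=0.05))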
Subsequently, cytotoxicity was measured.Depletion of either PD-L1 or PD-L2 enhanced cytotoxicity compared to the negative control.Furthermore, the CTL response against multiple Cas9 RNPs-treated EG7 cells was significantly increased in comparison to each single Cas9 RNPs-treated EG7 cells.Co-suppression against both PD-L1 and PD-L2 synergistically induced a 4- to 6-fold increase than each single suppression in cytotoxic activity, indicating that both PD-L1 and PD-L2 expressions on EG7 cells participate in synergistic inhibition of CTL-mediated anti-tumor immune responses.Next, we monitored the release of cytotoxic effector molecules including IFNγ, TNFα, and perforin, as key regulators of CD8+ T cell-mediated anti-tumor immunity, directly acting on target cell differentiation and killing .The intracellular expression of cytotoxic molecules was evaluated using intracellular staining-based flow cytometry.Simultaneous disruption of PD-1 ligands remarkably enhanced production of effector molecules relative to each single disruption.Notably, the production of IFNγ was most significantly induced ~2.5-fold upon co-suppression of PD-1 ligands, when compared with that of each single suppression.Interestingly, the confocal imaging analysis revealed that green signals representing IFNγ were highly detected in CD8+ T cells as well as in tumor cells, especially when treated with multiple Cas9 RNPs, suggesting direct killing effects of IFNγ on tumor cells.APC-conjugated anti-CD8 antibody was applied, which was specifically detected at the T cell-surrounding membrane, to visualize CD8+ T cells.Furthermore, when treated with multiple Cas9 RNPs, increased amounts of IFNγ in the supernatant were measured by ELISA assay.Compared to the negative controls, treatment with even single Cas9 RNPs showed much more effective cytotoxicity and increased production of cytotoxic molecules.Taken together, our recently developed multiple Cas9 RNPs can be functional in vitro in the regulation of multiple gene expression.We further tested anti-tumor activity to evaluate whether our multiple Cas9 system could be effectively functional in vivo.OVA-expressing EG7 cells, pre-treated with single or multiple Cas9 RNPs, were subcutaneously inoculated in the left flank of naive C57BL/6 wild-type mice.Following, individual tumors were monitored for 30 days.During the first 9 days after tumor injection, there was no significant difference among treatment groups.However, on day 15, each PD-1 ligand-depleted tumors started to grow slowly and finally effectively decreased.Remarkably, blockade of both PD-L1 and PD-L2 via multiple Cas9 RNPs showed a synergistic reduction, considering OVA-specific CD8+ T cell infiltration.To confirm whether the multiple Cas9 RNPs-mediated gene editing functioned properly in vivo, the expression of PD-1 ligands was determined using immuno-fluorescent labeling assay on each tumor sections.Consistent with in vitro experimental results, the expressions of PD-L1 and PD-L2 were hardly detected in indel mutation-induced tumor sections although PD-L1 was slightly detected in tumors, while untreated tumors showed high expression of PD-1 ligands.Taken together, the depletion of PD-1 ligands in tumor cells via multiple Cas9 RNPs could synergistically enhance the function of cytotoxic CD8+ T cells by blocking the interaction between PD-1 and PD-L1/PD-L2, resulting in improved anti-tumor immunity.These studies provide strong evidence to support gene editing technology for the effective development of immune checkpoint 
inhibitors. Finally, we tested the targeted disruption of three genomic loci using our simplified editing system. Using three sgRNAs, we simultaneously knocked out PD-L1, PD-L2, and TIM-3 in EG7 suspension cells upon IFNγ treatment to induce high expression of the immune checkpoint molecules. With this simple approach, we achieved approximately 70–90% deletion efficacy depending on the sgRNA, as evaluated by FACS analysis. Thus, our multiple Cas9 RNP system can provide an alternative approach with high gene editing efficacy that does not rely on viral transduction or electroporation. Several strategies to achieve multiplexed gene editing have been developed. The most widely used method for producing multiple sgRNAs is to generate each individual sgRNA from its own cassette within an expression vector via gene cloning. However, there are several concerns, such as vector size limitations and cloning efficiency, when expressing several sgRNAs simultaneously. Thus, the development of improved gene expression cassettes is essential for efficient multiple gene editing. Blockade of the PD-1/PD-L1 interaction has resulted in promising therapeutic effects for some cancer patients. However, challenges remain in overcoming the immune escape mechanisms of cancer cells for effective cancer treatment. In this study, co-suppression of both PD-L1 and PD-L2 in suspension cancer cells via the simplified multiple gene editing system resulted in improved anti-tumor immunity, enhancing CD8+ T cell-mediated cytolysis and effector secretion. Additionally, combined treatment with multiple Cas9 RNPs and electroporation showed significant enhancement of multiple gene editing in suspension cells. Thus, use of the multiple gene editing system can provide considerable therapeutic advantages. The authors declare no competing financial interest. | Genetically engineered cells via the CRISPR/Cas9 system can serve as powerful sources for cancer immunotherapeutic applications. Furthermore, multiple genetic alterations are necessary to overcome tumor-induced immune-suppressive mechanisms. However, one of the major obstacles is the technical difficulty with efficient multiple gene manipulation of suspension cells due to the low transfection efficacy. Herein, we established a carrier-free multiplexed gene editing platform in a simplified method, which can enhance the function of cytotoxic CD8+ T cells by modulating suspension cancer cells. Our multiple Cas9 ribonucleoproteins (RNPs) enable simultaneous disruption of two programmed cell death 1 (PD-1) ligands, functioning as negative regulators in the immune system, by accessing engineered Cas9 proteins with abilities of complexation and cellular penetration. In addition, combination with electroporation enhanced multiple gene editing efficacy, compared with that by treatment of multiple Cas9 RNPs alone. This procedure resulted in high gene editing at multiple loci of suspension cells. The treatment of multiple Cas9 RNPs targeting both ligands strongly improved Th1-type cytokine production of cytotoxic CD8+ T cells, resulting in synergistic cytotoxic effects against cancer. Simultaneous suppression of PD-L1 and PD-L2 on cancer cells via our developed editing system allows effective anti-tumor immunity. Furthermore, the treatment of multiple Cas9 RNPs targeting PD-L1, PD-L2, and TIM-3 had approximately 70–90% deletion efficacy. Thus, our multiplexed gene editing strategy endows potential clinical utilities in cancer immunotherapy.
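For the T7E1 assays described above, the text does not state how cleavage band intensities were converted into editing efficiencies; a commonly used estimate for T7E1 data, given here as an assumption rather than as the authors' stated method, is

\text{indel fraction} \approx 1 - \sqrt{1 - f_{\mathrm{cleaved}}}, \qquad f_{\mathrm{cleaved}} = \frac{\sum \text{cleaved band intensities}}{\sum \text{cleaved and uncleaved band intensities}}

which follows from assuming that only perfectly matched wild-type duplexes escape cleavage after denaturation and reannealing of the PCR amplicons.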
324 | Serum levels of periostin and exercise-induced bronchoconstriction in asthmatic children | Exercise-induced bronchoconstriction is an acute phenomenon where the airways narrow as a result of physical exertion.Although EIB is not observed in all cases of asthma, a significant number of asthmatic patients experience exercise-induced respiratory symptoms, as exercise is one of the most common triggers of bronchoconstriction in these patients.1,EIB can only be diagnosed when there are changes in lung function induced by exercise, regardless of symptoms.1,2,However, the exercise provocation test needed for diagnosis may be difficult for some patients, particularly young children.The development of several possible surrogates for exercise testing, such as eucapnic voluntary hyperpnea or hyperventilation and dry powder mannitol, has facilitated easier diagnosis of EIB.1,2,The pathophysiology of EIB has been elucidated over the last two decades.2,It is clear that during EIB, inflammatory mediators, including histamine, tryptase, and leukotrienes, are released into the airways from cellular sources, including eosinophils and mast cells.3,4,Serum levels of periostin is a promising biomarker of TH2-induced airway inflammation,5 eosinophilic airway inflammation,6 and response to TH2-targeted therapy.5,Periostin is induced by IL-13, which is a member of the TH2 cytokine family and a product of eosinophils, basophils, activated T cells, macrophages, and mast cells.7,Periostin is induced by IL-13 and can induce proinflammatory cytokines, including thymic stromal lymphopoietin.8,9,Recently, it was suggested that TSLP in combination with IL-33 increases mast cell formation of eicosanoids, which are important in patients with EIB.10,We hypothesized that periostin levels would be higher in asthmatics with EIB than in healthy children.Furthermore, as we reported previously that serum levels of periostin are associated with airway hyperresponsiveness to methacholine and mannitol,11 we also hypothesized that periostin may be correlated with AHR induced by exercise in asthmatic children.Our objective was to evaluate the relationship between serum levels of periostin and EIB in asthmatic children.Subjects were recruited from outpatient clinics at Hallym University Kangdong Sacred Heart Hospital, Seoul, Korea.We systematically recruited all new asthma patients at their first visits because of suspected asthma, and all diagnoses were verified by clinical examination, pulmonary function testing, and methacholine bronchial provocation tests.The patients were newly diagnosed with asthma and had undergone maintenance therapy for 0.5–2 years.Asthma was defined as the presence of symptoms with less than 16.0 mg/mL inhaled methacholine, which induced a 20% decrease in FEV1.12,Severity of asthma was classified according to the guidelines of the Global Initiative for Asthma using an algorithm including medication dose, FEV1, medication adherence, and symptom levels.13,Patients were given inhaled short-acting β2-agonists on demand to relieve symptoms, with or without controller medications.The controls were healthy children matched by age and gender, who had applied for a routine health checkup or vaccination.They had no history of wheezing or infection over the 2 weeks before the study.Exclusion criteria included acute exacerbation of asthma requiring systemic corticosteroids during the previous 6 months and parenchymal lung disease evident in chest radiographs performed 4 weeks before the study.Of the healthy 
controls, those at any risk for atopy or subclinical eosinophilic inflammation were excluded using fractional exhaled nitric oxide of ≥20 parts per billion.14,Atopy was defined as the presence of at least one positive allergen-specific IgE test result or a positive finding in skin prick tests.A schema of the study design is shown in Fig. 1.After a 4-week run-in period, the asthmatic patients made three visits to our clinic at the same time of day.During the observation period, all patients were asked to discontinue controller medications, and were excluded if they experienced asthma exacerbations requiring recommencement of such medication.At the first of the three visits, blood samples were taken and FeNO levels were measured by a physician.Each subject was evaluated using SPTs and pre- and post-bronchodilator spirometry.At the second and third visits, separated by intervals of at least 1 week, BPTs with exercise and mannitol challenge were performed.Healthy controls made two visits to our clinic at the same time of day.At the first of the three visits, FeNO levels were measured by a physician and each subject was evaluated using SPTs.Those at no risk for atopy or eosinophilic inflammation were included and made a second visit.At the second visit, blood samples were taken and exercise challenges were performed.Blood samples were stored at −70 °C before determining the periostin levels in serum.At the third visit, mannitol challenge was performed.The spirometry and exercise challenge tests were performed by a trained technician.All procedures were approved by the Medical Ethics Committee of Hallym University Kangdong Sacred Heart Hospital, Seoul, Korea, and all subjects and/or parents provided written informed consent.Exercise challenges were conducted in accordance with American Thoracic Society standards12 and performed by running on a treadmill with the nose clipped using a standardized protocol.Heart rate was monitored continuously with a radiographic device.The temperature in the laboratory was maintained at 22 °C, with humidity of 40–50%.Inspired air temperature and humidity were measured.The treadmill speed was increased until the heart rate was ∼85% of the predicted maximum and maintained for 6 min.Spirometry was performed 20 and 5 min before each exercise challenge and repeated 0, 3, 6, 10, 15, and 20 min afterwards.The results of exercise challenges were considered positive with a ≥15% decrease in FEV1 after exercise.12, "Dry powder mannitol was administered according to the manufacturer's recommendations, and FEV1 values were recorded as prescribed by current guidelines.15",The FEV1 recorded after inhalation of a placebo capsule served as the baseline value.The challenge was completed when a ≥15% drop in FEV1 from baseline occurred, which was considered a positive response, or when the maximum cumulative dose of mannitol was administered.Asthmatics exhibiting a decrease in FEV1 of at least 15% from baseline after inhalation of ≤635 mg mannitol were enrolled in the positive-mannitol BPT group, and the others were placed in the negative-mannitol BPT group.For positive challenge results, the cumulative provocative dose causing a 15% drop in FEV1 was calculated by log-linear interpolation of the final two data points.The responses to mannitol were expressed as PD15 values.FeNO was measured using a portable nitric oxide analyzer that provided measurements at an exhalation flow rate of 50 mL/s expressed in ppb.16,Determinations made with the device were in clinically acceptable 
agreement with measurements provided by a stationary analyzer according to the guidelines of the American Thoracic Society.17,Blood samples were obtained between 08:00 and 09:00.Periostin levels were measured by Shino Test Corp. using an enzyme-linked immunosorbent assay, as described previously.18,Briefly, The SS18A mAb was incubated overnight at 25 °C on ELISA plates.Then the ELISA plates were blocked by blocking buffer overnight at 4 °C and then washed three times with washing buffer.To measure periostin levels, diluted serum samples or recombinant periostin standards were added and incubated for 18 h at 25 °C.After washing five times, the peroxidase labeled SS17B mAb was added followed by incubation for 90 min at 25 °C.After washing five times to remove excess Ab, reaction solution was added, followed by incubation for 10 min at 25 °C and then the reaction was stopped by adding the stop solution.The values were calculated by subtracting the absorbance at 550 from the absorbance at 450 nm.Periostin concentrations in the serum were calculated simultaneously using the recombinant periostin proteins.The assay was performed in duplicate.The data were analyzed using SPSS ver.21.0.Continuous data are expressed either as means with standard deviations or as medians with interquartile ranges depending on the data distribution.Groups were compared using the Kruskal–Wallis test for continuous variables or χ2 tests for categorical variables.Post hoc pairwise comparisons were performed using the Tamhane test.Numerical parameters with non-normal distributions were log-transformed."Correlations between periostin levels, lung function, total IgE levels, eosinophil counts in peripheral blood, eosinophil cationic protein levels, and FeNO values were evaluated by calculating Spearman's rho.The effects of log-transformed periostin levels on the log-transformed maximum percentage change in FEV1 from baseline to after exercise, and mannitol PD15 data were analyzed by linear regression to allow adjustment for age, sex, atopy, and PB eosinophil count.The estimates obtained were regression slopes for log-transformed periostin levels against the log-transformed maximum percentage change in FEV1 from baseline to after exercise and mannitol PD15 value.The overall test performance of periostin for identifying asthmatic patients with positive exercise BPT and for identifying asthmatic patients with positive mannitol BPT was reviewed based on receiver operating characteristic curve analyses.The overall accuracy of the test was measured as the area under the ROC curve.The prevalence of disease used in the analyses was estimated from the ratio of positive and negative cases in the dataset.The 95% confidence interval for test characteristics was calculated using MedCalc v.14.8.1.A total of 90 subjects were recruited and took part in this study.The patient group consisted of 60 asthmatics and a control group included 30 healthy subjects.During the run-in period, four subjects with asthma dropped out because of a failure to discontinue controller medications due to exacerbation of asthma.Eighty-six subjects who finished the study were enrolled in the final analyses.The 56 asthmatic children were divided into four groups: asthmatics with positive exercise BPT and positive mannitol BPT, asthmatics with positive exercise BPT but negative mannitol BPT, asthmatics with negative exercise BPT but positive mannitol BPT, and asthmatics with negative exercise BPT and negative mannitol BPT.The demographic data and pulmonary 
function parameters of the subjects are summarized in Table 1.There were no differences between the asthmatic and healthy children in age, sex, or body mass index.Of the 56 subjects with asthma, 14 had mild intermittent asthma, 23 had mild persistent asthma, and 19 had moderate asthma according to the GINA guidelines.There were no statistically significant differences in atopy, prior inhaled corticosteroid use, or asthma severity among the four asthmatic groups.As expected, the baseline FEV1 levels and FEV1/forced vital capacity ratios were significantly lower in asthmatics than in healthy controls, while the bronchodilator responses were significantly greater.In group comparisons, the bronchodilator responses were significantly greater in asthmatics with both positive exercise BPT and positive mannitol BPT than in asthmatics with negative exercise BPT and with negative mannitol BPT.There were no differences in methacholine PC20 among the four asthmatic groups.The maximum decrease in FEV1 after exercise was significantly greater in asthmatics with positive exercise BPT and positive mannitol BPT than in the other asthmatic groups.Biomarker levels are shown in Table 1.The total IgE levels, PB eosinophil counts, and FeNO levels were significantly higher in asthmatics than in healthy controls.The total IgE levels and PB eosinophil counts were not significantly different among the four asthma groups.FeNO levels were significantly greater in asthmatic children with positive exercise BPT and positive mannitol BPT than in those with negative exercise BPT and negative mannitol BPT and in controls.Serum levels of periostin were significantly greater in asthmatic children with positive exercise BPT and positive mannitol BPT than in those with negative exercise BPT and negative mannitol BPT and controls."Periostin levels were not significantly correlated with lung function but were significantly correlated with PB eosinophil levels, FeNO, and total IgE levels.There were no significant correlations between serum levels of periostin, age, and sex in any group.After adjusting for age, sex, atopy, and PB eosinophil level, the log was significantly associated with the log, and with the log.Table 3 shows the ROC curve for using periostin levels to predict positive exercise BPT and to predict positive mannitol BPT.To differentiate asthmatic patients with EIB from those without EIB, the ROC curve for using periostin level had an AUC of 0.722.To differentiate asthmatic patients with positive exercise BPT from those with negative exercise BPT, the ROC curve for using FeNO level, eosinophil count, and total IgE had AUCs of 0.625, 0.519, and 0.530, respectively.To discriminate asthmatic patients with positive mannitol BPT from those with negative mannitol BPT, the ROC curve had an AUC of 0.596.The AUCs of periostin level, FeNO level, eosinophil count, and total IgE level did not differ significantly.To discriminate asthmatic patients with positive mannitol BPT from those with negative mannitol BPT, the ROC curve for using FeNO level, eosinophil count, and total IgE had AUCs of 0.733, 0.537, and 0.520, respectively.The AUCs of periostin level, FeNO level, eosinophil count, and total IgE level did not differ significantly among the groups.We investigated the relationship between serum levels of periostin and EIB in pediatric asthma patients.The inflammatory cells most commonly involved in the pathogenesis of EIB are mast cells and eosinophils.10,19,20,Mast cells secrete PGD2, cysteinyl leukotriene receptor, and 
histamine, which are mediators that trigger airway smooth muscle contraction, sensory nerve activation, and mucus secretion.Mast cells and eosinophils also produce IL-13, a pleiotropic TH2 cytokine, which is also secreted by basophils, activated T cells, and macrophages.7,Periostin is induced by IL-13, and we showed previously that serum levels of periostin are significantly higher in asthmatic children than in healthy controls.11,In this study, serum levels of periostin were significantly greater in asthmatic children with both positive exercise BPT and positive mannitol BPT than in those with both negative exercise BPT and negative mannitol BPT and also healthy controls.Several studies have shed light on the possible mechanism by which periostin may be involved in EIB.Masuoka et al.8 reported that periostin acts directly on keratinocytes via αv integrin to induce secretion of proinflammatory cytokines, including TSLP.7,A recent study also showed that periostin is produced by mast cells and can act directly on epithelial cells via integrin-binding activation, resulting in TSLP secretion.9,TSLP, in turn, was shown to intensify the EIB-associated granule phenotype and increase IgE receptor-mediated CysLT production in human cord blood-derived mast cells.10,Based on these reports, we speculated that periostin may be associated with EIB via TSLP, but further studies are required to clarify this association.In addition to mast cells, eosinophils seem to play a major role in the pathogenesis of EIB.Peripheral blood eosinophil counts are associated with severity of EIB,21 and asthmatic patients with EIB are more likely to have a greater concentration of eosinophils in sputum than those without EIB.4,22,In the present study, PB eosinophil counts were significantly higher in asthmatics than in healthy controls, but these levels were not significantly different among the four asthma groups.FeNO is a possible biomarker of airway inflammation in asthma, as it is correlated with eosinophilic activity in the airway.Scollo et al.23 reported that the baseline FeNO value was related to the extent of post-exercise bronchoconstriction, suggesting that the FeNO level may predict AHR to exercise.One study showed that FeNO levels were significantly predictive of EIB in atopic wheezy children,24 while another demonstrated that FeNO level can be used to screen asthmatic children to determine the need for EIB testing.25,In agreement with these observations, we also found that the FeNO levels in asthmatic children with positive exercise BPT and positive mannitol BPT were significantly greater than those in asthmatic children with negative exercise BPT and negative mannitol BPT as well as controls.Serum levels of periostin, eosinophil counts, and FeNO levels all reflect a TH2-driven inflammatory response, but the relationship between these distinct biomarkers may be complex and variable.26,27,Jia et al.28 collected peripheral blood, sputum, and bronchoscopy biopsy samples to identify noninvasive biomarkers of TH2 inflammation in asthmatic patients, and observed that while both FeNO and periostin levels were consistently low in eosinophil-low patients, FeNO showed a greater overlap between eosinophil-low and eosinophil-high subjects.In the present study, periostin levels were significantly correlated with both PB eosinophil and FeNO levels.We found that not only FeNO levels but also periostin were associated with EIB in asthmatic children.After adjusting for age, atopy, and PB eosinophil count, serum levels of 
periostin were significantly associated with EIB.EIB is frequently documented with asthma and reflects insufficient control of underlying asthma.2,In this study, serum periostin levels in the asthmatic children with both positive exercise and mannitol BPT were significantly greater than those in the asthmatic children with both negative exercise and mannitol BPT.Although there was no statistically significant difference, there was more moderate, persistent asthma in asthmatic children with both positive exercise and mannitol BPT than the other groups.However, it is unclear in our study whether periostin is associated only with EIB or with asthma control because we did not have a group of patients exhibiting frequent exacerbations of asthmatic symptoms.A prognostic relationship between periostin and risk of asthma exacerbations has been observed in clinical studies.29,There have been several studies reporting that periostin was associated with poor asthma control.In the omalizumab EXTRA study, in which subjects were required to have experienced at least one exacerbation in the previous year, severe exacerbation rates over 48 weeks in the placebo arm were 0.93 and 0.72, respectively, in the periostin-high and periostin-low subgroups.30,The severe exacerbation rate per year in the LUTE and VERSE lebrikizumab studies was also higher in placebo-treated patients with high serum periostin levels than in periostin-low patients.31,There were several limitations to the present study.First, the sample size was small.Second, we could not discuss how our findings may be linked to poor asthma control, because we did not have a group of patients exhibiting frequent exacerbation of asthmatic symptoms.Third, periostin may not be a dependable biomarker in growing children because it is an extracellular matrix protein secreted by osteoblasts.However, the levels in our study subjects aged 6–15 years old were no higher than published values for adults26,27 and were not significantly associated with age.As few data on periostin levels in infants and children are available, such values should be investigated further in both asthmatics and healthy controls.To the best of our knowledge, this is the first controlled observational study of the relationship between serum levels of periostin and EIB in asthmatic children.In addition, we also assessed AHR by performing both exercise and mannitol challenge tests.Serum levels of periostin were significantly greater in asthmatic children with both positive exercise and positive mannitol BPT than in those with both negative exercise and negative mannitol BPT and controls.Therefore, periostin levels may serve as a clinically useful biomarker for identifying EIB in asthmatic children.All procedures were approved by the Medical Ethics Committee of Hallym University Kangdong Sacred Heart Hospital, Seoul, Korea, and all subjects and/or parents gave written informed consent.The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.This research was supported by Hallym University Kangdong Sacred Heart hospital Research Fund.The authors declare that they have no competing interests.Ju Hwan Cho, PhD: conception and design of the study, collection of the data and analysis and interpretation of the data, and preparation and revision of the manuscript.Kyubo Kim, MD, PhD: collection of the data, interpretation of the data and preparation of the manuscript.Jung Won Yoon, MD: design of the study, interpretation of the 
data and preparation of the manuscript. Sun Hee Choi, MD, PhD: conception and design of the study, interpretation of the data, and preparation of the manuscript. Youn Ho Sheen, MD, PhD: conception and design of the study, interpretation of the data, and preparation of the manuscript. ManYong Han, MD, PhD: conception and design of the study, interpretation of the data, and preparation of the manuscript. Junya Ono, MS: collection of the data, and analysis and interpretation of the data. Kenji Izuhara, MD, PhD: conception and design of the study, interpretation of the data, and preparation of the manuscript. Hey-Sung Baek, MD, PhD: conception and design of the study, collection of the data and analysis and interpretation of the data, and preparation and revision of the manuscript. All authors read and approved the final manuscript. | Background: Periostin is induced by IL-13 and has been studied as a biomarker of asthma. The present study explored the relationship between serum levels of periostin and exercise-induced bronchoconstriction (EIB) in asthmatic children. Methods: The study population consisted of 86 children 6–15 years old divided into an asthmatic group (n = 56) and healthy controls (n = 30). We measured the levels of periostin in serum and performed pulmonary function tests including baseline measurements, post-bronchodilator inhalation tests, exercise bronchial provocation tests (BPTs), and mannitol BPTs. Results: The 56 asthmatic children were divided into four groups: asthmatics with positive exercise BPT and positive mannitol BPT (n = 30), asthmatics with positive exercise BPT but negative mannitol BPT (n = 7), asthmatics with negative exercise BPT but positive mannitol BPT (n = 10), and asthmatics with negative exercise BPT and negative mannitol BPT (n = 9). Serum levels of periostin in asthmatic children with both positive exercise and mannitol BPT were significantly greater than those in asthmatic children with both negative exercise and mannitol BPT (95.0 [75.0–104.0] vs. 79.0 [68.0–82.5] ng/mL, P = 0.008) and controls (74.0 [69.75–80.0] ng/mL, P < 0.001). Periostin levels were significantly correlated with both the maximum decrease in %FEV1 and mannitol PD15 value. Conclusion: Serum levels of periostin in asthmatic children with both positive exercise and mannitol BPT were significantly greater than those in asthmatic children with both negative exercise and mannitol BPT and also greater than in healthy controls.
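The mannitol PD15 values analyzed above were obtained by log-linear interpolation of the final two dose-response points, as described in the methods; a minimal sketch of that calculation is given below (the function name and the example doses and FEV1 falls are illustrative, not values from the study).

import math

def pd15_log_linear(dose_below, fall_below, dose_above, fall_above, threshold=15.0):
    """Interpolate the cumulative provocative dose causing a 15% fall in FEV1.

    dose_below, fall_below: cumulative mannitol dose (mg) and % fall in FEV1
        at the last point below the threshold
    dose_above, fall_above: cumulative dose (mg) and % fall in FEV1 at the
        first point at or above the threshold
    """
    slope = (threshold - fall_below) / (fall_above - fall_below)
    log_pd = math.log10(dose_below) + slope * (math.log10(dose_above) - math.log10(dose_below))
    return 10 ** log_pd

# Hypothetical example: a 12% fall after 315 mg and an 18% fall after 475 mg
print(round(pd15_log_linear(315, 12.0, 475, 18.0), 1))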
325 | A carrier-free multiplexed gene editing system applicable for suspension cells | Islet Amyloid Polypeptide (IAPP), i.e., amylin, is a 37-residue polypeptide that is stored as a prohormone within secretory granules inside the β-cells of the pancreas before it is processed to the mature hormone and secreted extracellularly. Since it is co-localized and co-secreted with insulin in response to nutrient stimuli, the effects of amylin are complementary to the effects of insulin in regulating and maintaining blood glucose levels. Amylin is also a key component of the protein aggregates accumulating in the islets of Langerhans of patients with type 2 diabetes mellitus (T2DM), and it has been implicated in the disruption of the cellular membrane of β-cells by causing membrane-associated stress due to the uncontrolled influx of ions into the cell. Previous preclinical studies suggested that amylin has the ability to bind with high affinity to particular regions of the brain, including the area postrema. Therefore, it is considered a neuroendocrine hormone with an important role in regulating the rate of glucose influx into the circulation after meals. The native form of amylin is amidated at the C-terminus and has a Cys2 to Cys7 disulfide bridge. Recent studies correlated the presence of amylin aggregates with their direct toxicity to β-cells; therefore, subsequent studies investigated the possible causes underlying amylin aggregation. Other studies have shown that human amylin has the ability to generate hydrogen peroxide during amyloid fibril formation in vitro. Electron Spin Resonance detection was used in combination with the Amplex red assay technique to demonstrate that hIAPP, which is amyloidogenic and toxic, generates H2O2 in vitro, whereas rat amylin (rIAPP), which is nontoxic and non-fibrillogenic, does not. It is also an established fact that rats do not develop T2DM. Thus, oxidative stress due to the presence of Reactive Oxygen Species (ROS) has been implicated in the development and progression of T2DM. However, it has to be mentioned that other authors have reported some evidence suggesting that rIAPP may also be harmful to β-cells. For instance, rIAPP was found to be able to produce comparable H2O2 concentrations, indicating that amylin aggregation and amylin-induced ROS are unrelated processes.
,It has to be pointed out that some authors demonstrated that the rIAPP peptide and its short derivate rIAPP are cytotoxic to cultured RIN-M cells, and the previous treatment with antioxidants can revert this effect .These recent studies suggest that the issue of the possible toxicity of rIAPP needs to be clarified during the future studies.Work on the potential role of metals in amylin aggregation has rapidly increased over the last two decades due to its possible biomedical importance .Metals such as zinc, copper, and iron, have been widely associated with T2DM and amylin aggregation .Amylin and Zn2+ play an important role in glycemic regulation.Zn2+ ions are stored and packed along with amylin in the granules of the β-cells of the pancreas ; therefore, these granules contain the highest concentration of zinc ions found in the body, reaching up to ~10–20 mM in the interior of the dense granule core .Zinc deficiency is common in T2DM .Aggregation of amylin and progression of T2DM occur when the balance of the peptide and zinc ions is disrupted .In the case of copper ions, there is controversy about their role in the aggregation of amylin.Some studies mentioned that they can inhibit amylin fibrillation as well as toxicity , while others indicate that they may contribute to the cell toxicity by forming amylin oligomers, which may contribute to cell death more than the fibrils .Iron was shown to enhance amylin β-pleated sheet formation .In form of heme, iron can bind to amylin and form a heme–amylin complex, which leads to the formation of H2O2 via oxidative stress .David et al. conducted the only study on nickel so far, and described its coordination mode with the selected shorter fragment of rat amylin .In addition to metal ions, the pH of solution and the protonation state of histidine 18 also affect the aggregation, misfolding, and fibrillization of amylin .A recent study of four amylin mutants revealed that His18 plays a key role in the binding of hIAPP to the cellular membrane .It also explained that is particularly important for the intra and intermolecular interactions that occur during fibril formation and may involve residue charge, size of the fiber, and hydrophobicity .As mentioned above, the secretory granules of β-cells have high concentrations of Zn2+ ions .Studies have suggested that a missense mutation within the zinc transporter protein ZnT8 of the secretory granule is linked to an increased risk of developing T2DM.Nuclear Magnetic Resonance experiments showed that Zn2+ ions bind to the His18 residue of the monomeric form of amylin .It was also reported that Zn2+ ions bind with the geometry of tetra, penta, and hexahedral coordination sites and coordinate with the amino acid residues within the proteins, with or without the involvement of water molecules .However, it was proven that Zn2+ ions prefer the tetrahedral coordination mode in which four amino acids or water molecules are coordinated to the Zn2+ ion in the complex with proteins ."Previously, Ramamoorthy's group reported that the Zn-hIAPP complex had a much higher X-ray Absorption Near Edge Structure peak than the IAPP fragment.This double peak is characteristic of outer-shell scattering, like the one that is seen for imidazoles, implying that the average Zn–imidazole binding is stronger in the Zn–hIAPP complex than in the Zn–hIAPP complex.Since the hIAPP peptide has only a single histidine, the binding of hIAPP to Zn2+ may promote higher order aggregates, such as three or four hIAPP peptide molecules per zinc, than 
the binding of the shorter hIAPP fragment .Some experiments that have been done at acidic pH, which causes a partial neutralization of the effect of the change in charge upon Zn2+ binding, showed that the observed inhibited aggregation is mainly due to an electrostatic effect that happens at His18 ."Ramamoorthy's group also suggested that Zn2+ ions promote the formation of amylin fibrils, i.e., cross-β structures of amylin.Further investigation is needed to reveal the effect of the coordination mode of Zn2+ ions on the self-assembled cross-β structure of amylin oligomers ."According to Ramamoorthy's group, Zn2+ ions have different effects on amylin aggregation depending on their concentration and the different stages of the amylin aggregation process itself.At high concentrations and in the early stages of aggregation, Zn2+ ions promote the formation of large Zn2+–amylin aggregates compared with amylin aggregation when Zn2+ ions are absent.In the same stages of aggregation, but at low concentrations, Zn2+ ions induce the formation of even larger Zn2+–amylin aggregates than those that are formed at high concentrations of Zn2+ ions.During the last aggregation stages, fiber formation is inhibited at low concentrations of Zn2+ and accelerated at higher concentrations .Although zinc is greatly reducing the total amount of fibers, the overall morphology of the individual amyloid fibers remains almost intact.In conclusion, Zn2+ does not significantly promote a breakage of the fiber or greatly alter the lateral attachment of protofilaments to mature amyloid fibers .The biological importance of the zinc ions and their protective effect against diabetes has been underlined by in vivo studies in diabetes induced rats.Zinc acts against diabetes induced peripheral nerve damage by stimulating metallothionein protein synthesis which has the ability of controlling the oxidative stress. ,It is also known that zinc supplementation or injection can significantly induce the synthesis of anti-oxidant MT in the pancreatic islets, kidneys, liver and heart of diabetes-induced animals .Łoboda et al. used several techniques, including mass spectrometry, potentiometry, NMR and Atomic Force Microscopy, to investigate Zn-pramlintide complexes.Pramlintide is a synthetic analogue of human amylin that is an injectable drug used to lower sugar levels in the blood .It differs from hIAPP in 3 of 37 amino-acid residues, carrying proline residues at positions 25, 28 and 29 of the peptide chain, respectively, replacing one alanine and two serine residues.Zn ions bind to the His18 imidazole ring of pramlintide and the N-terminal amino group of the Lys1 residue, causing loop formation between these residues of the peptide.This complex has much higher stability than the Zn-amylin complex, implying that additional stability of the Zn-pramlintide complex comes from interactions with residues.Region of pramlintide also appears to influence the time-delayed fibrillization of the complex.The initial Zn-pramlintide species that is characterized as well-soluble and non-aggregating forms than oligomeric aggregates after a lag-time of 20 h. David et al. 
performed studies on Zn complexes involving two short fragments of rIAPP to reveal the role of internal asparagine residues in anchoring and stabilizing rIAPP. Low-stability complexes were detected that exclusively bind to the amino terminus. The stability constants of the GGHSSNN-NH2 peptide are much higher than those of SSNA-NH2. The different coordination modes explain this stability difference between the two peptides. In particular, SSNA-NH2 has only a single available binding mode, while in the case of GGHSSNN-NH2 the stability of the complexes can additionally be influenced by the imidazole side chain of the histidine residue. Finally, Luiza et al. focused on the interaction of rat amylin with zinc ions in vitro. The authors stated that the regulation of rat amylin self-assembly is highly associated with the effects of zinc and pH. To investigate the interaction of zinc with both monomeric and oligomeric rIAPP, they used ion-mobility mass spectrometry. The binding of zinc ions to rIAPP was confirmed using NMR. Some residues were affected by the zinc ion addition; the most affected amide groups were those of Asn3, Thr4, Cys7, Ala8, Val17, and Arg18. The signal of Cys2 became invisible, and the signal of Asn3 significantly decreased. In the aliphatic regions of the 13C-1H Heteronuclear Single Quantum Correlation (HSQC) spectrum, Lys1, Cys7, and Ala13 were the most affected residues. The two Cys residues are the only predicted Zn ion binding sites. In oxidized amylin, however, a disulfide bridge is formed between the Cys residues. As a result, one would not expect the presence of a specific canonical binding site for Zn ions in an oxidized amylin peptide. The current results may instead suggest transient interactions with positively charged residues, more specifically Arg18 and the N-terminal Lys1. Zinc accelerated the process of rIAPP aggregation into amyloid fibrils. The ESI-IMS-MS data showed the binding of zinc to a monomer, a dimer, and a trimer occurring in the low micromolar concentration range, reaching saturation at about 500 μM. These data provide an indication of the affinity of rat amylin for zinc ions, below the typical millimolar concentrations found in the secretory granules of pancreatic β-cells. All of these findings provide new information about the role of zinc in diabetes and about how pramlintide can be beneficial as an antidiabetic drug; they also prompt further thought about the formation of the Zn-pramlintide complex, its molecular basis, and the role that Zn ions play in the mechanism of fibril formation. The interactions between copper ions and amylin have been studied by different techniques. A study by Li et al.
using Mass Spectrometry showed that copper disrupts amylin fibril formation by inhibiting the arrangement of amylin dimers.This may involve the creation of a complex from a copper ion and the extended β-hairpin of amylin .Ion Mobility Separation was used to evaluate the influence of Cu2+ ions on the hIAPP conformation in solution and to detect the Cu2+ ion binding sites.These results showed a preferential association between the β-hairpin fragment of the amylin monomer and the Cu2+ ion.Moreover, the Cu2+ ion bound strongly to the –18HSSNN22– fragment of hIAPP.Amylin dimers were observed in the absence of Cu2+ ions, while no dimers were observed in the presence of Cu2+ in the solution.The authors concluded that Cu2+ ions disrupt the association pathway leading to the formation of amylin fibrils rich in β-sheet motifs, most likely due to the preferential binding of Cu2+ ions to the β-hairpin conformer.Riba et al. found that there are two binding sites for copper ions in full-length hIAPP.One site, alanine 25, is known for its importance for the ability of amylin to misfold.The other binding site is most likely positioned between the 32 and 37 residues, near the C-terminus of the peptide.Other studies used the standard Thioflavin-T aggregation assay and Amplex-UltraRed H2O2 detection assays to test the effect of Cu2+ on the hIAPP aggregation process, and the resulting formation of aggregates.They proved that the presence of Cu2+ increased the activation energy of the reaction, thereby prolonging the lag phase and slowing down the rate of hIAPP aggregation by about threefold.These findings provide evidence for the intrinsic ability of Cu2+ ions to stabilize hIAPP in its native, non-toxic random coil conformation, which prevents amylin from aggregating after the β-sheet motifs are formed.The binding of Cu2+ ions to amylin also reduces its ability to induce apoptosis and to form a low affinity complex with hIAPP, displaying low pro-oxidative activity in vitro and in cells .Several different experimental techniques as well as Constant Temperature Molecular Dynamics simulations have been applied to the hIAPP by Sinopoli et al. 
in order to estimate its conformation in the presence and absence of copper. The results showed that both the fiber structure and the aggregation kinetics of hIAPP are strongly affected by Cu ions, as is its tendency to be degraded by proteases. Specifically, MS data show an equilibrium between two conformations of hIAPP, with the more flexible state being the most dominant. On the other hand, the Circular Dichroism (CD) spectra of hIAPP are in accordance with a random coil conformation, and aggregation can occur upon incubation of the peptide alone. The structurally compacted conformer is formed due to the presence of copper ions and eventually becomes the only existing structure. However, based on the spectroscopic patterns, the authors found no clear signs of β-sheet conformation in the presence of copper ions; these observations thus suggest that hIAPP fibril formation can be inhibited by the metal ion. The copper-hIAPP complexes are less exposed to enzyme and metalloprotease degradation, indicating that the binding site of the metal occurs within the hIAPP region. Studies of the coordination modes of this fragment are still in progress. Mixed parallel/antiparallel arrangements are indicated by solid-state NMR as well as CTMD simulations, providing evidence for the randomness of the copper ion-induced aggregation process. The binding stoichiometry of hIAPP–Cu2+ complexes was studied as a function of Cu2+ concentration by using Laser Ablation Electrospray Ionization on samples with different concentrations of Cu2+. The results indicated a binding stoichiometry with a 1:1 ratio between the peptide and the metal ions for all of the Cu2+ concentrations, even when there was a twentyfold excess of Cu2+ ions. Further studies showed that in peptide fragments of rat amylin, inclusion of the –19SSNN22– sequence is necessary for copper ion binding in the neutral pH range. Deprotonated amide nitrogen atoms are exclusively involved in metal binding, with the side chain amide group of asparagine being the primary binding site. The anchoring role of the amino group was dominant in the presence of a free amino terminus, but additional stabilization from the –19SSNN22– sequence was observed. Moreover, it was also found by David et al. that the –17VRSSNN22–NH2 sequence, as an N-terminally free hexapeptide, can form stable dinuclear copper ion complexes in which the amino terminus and the –19SSNN22– sequence are considered the binding sites of the metal. These results provide evidence that internal positions of IAPP, including asparagine and surrounding polar side chains, can act as anchors for the coordination of copper ions, even in the absence of histidine, which is a strongly coordinating side chain. Sanchez-Lopez et al.
performed a spectroscopic study of the binding of Cu2+ ions to hIAPP residues using several techniques, e.g., Electron Paramagnetic Resonance (EPR), NMR, electronic absorption, and CD. Their results showed that Cu2+ ions bind to the imidazole N1 of His18 and to the deprotonated amides of Ser19 and Ser20. There are two ways in which Ser20 can provide an oxygen-based ligand, either via its hydroxyl group or via its backbone carbonyl, while N22 may also play a role as an axial ligand. Ser20 was found to stabilize the coordination of the Cu2+ ions toward the C-terminus. Moreover, the role of copper ions in the aggregation of hIAPP, which is directly connected to histidine binding, is further supported by the fact that in the rIAPP sequence His18 is replaced by an arginine residue, and the rat amylin fragments are not susceptible to aggregation. Mukherjee et al. used different spectroscopic techniques, such as absorption, resonance Raman, and EPR, on hIAPP fragments of various lengths to confirm that iron in the form of heme can bind to hIAPP. They also investigated the active-site environment of heme-hIAPP complexes. To determine the heme binding domain, two different fragments of hIAPP were chosen: the hydrophilic fragment and the amyloidogenic fragment. hIAPP has three residues known to bind heme under native conditions. It has been shown that the His18 residue and the two cysteine residues are able to bind heme in several different proteins, i.e., hemoglobin, myoglobin, cytochrome c oxidase and others, as well as cytochrome P450, nitric oxide synthase, etc. However, since the thiolate groups of both cysteine residues are oxidized and together form a disulfide bridge, they are no longer able to coordinate heme. Further, the amyloidogenic fragment contains another residue that is known to act biologically as a binding site for heme in catalase, the tyrosine residue Tyr37. Thus, several site-specific mutants of hIAPP were considered to determine the residue that acts as the heme binding site. The single mutations His18Gly and His18Asn were used for the examination of histidine coordination, while the single mutation Arg11Asn and the double mutation Arg11Asn/His18Asn were used to identify the effects of both Arg11 and His18. Finally, the hIAPP fragment lacking the disulfide bridge was tested for heme coordination. In the same study, Mukherjee et al. found, by examining the absorption spectra, that heme is able to bind to two peptides, native hIAPP and the hydrophilic fragment. However, the amyloidogenic fragment incubated with heme shows a spectrum that is identical to that of free heme, even after prolonged incubation, which indicates that heme is not able to bind to this fragment. This provides evidence that the hydrophilic part of the peptide sequence is where the heme binding residue lies, and it eliminates the possibility that Tyr37 acts as the heme binding residue. On the other hand, the hydrophilic fragment without the Cys2-Cys7 disulfide bridge also shows spectral changes similar to those of native hIAPP. This possibly indicates that heme does not coordinate to the cysteine residues, since the native peptide-heme complex and the heme complex of the fragment without the disulfide bridge show similar spectral features. Nickel has been investigated in only one study, where David et al.
tested Ni for its ability to bind to rIAPP. Potentiometric, UV–visible, CD, and NMR spectroscopic methods were used to study the Ni complexes of the N-terminally free peptide fragments of rIAPP: SSNN-NH2, SSNA-NH2, AANN-NH2, VRSSNN-NH2, and GGHSSNN-NH2. Their results indicated that the –19SSNN22– residues of rIAPP cannot be the primary site for the anchoring of Ni ions. However, an increased stability of the corresponding complexes was revealed by the NMR measurements performed on the N-terminally free peptide SSNA-NH2, indicating that an equilibrium was reached between the common coordination modes in basic solution. Ni ions are well known for their ability to promote amide deprotonation and coordination in a relatively high pH range. For that reason, peptide fragments that are N-terminally protected cannot form complexes with Ni ions in a biologically relevant pH range. It has also been reported that nickel ions might damage insulin function and induce deregulation of glucose metabolism through the ROS pathway. Moreover, Gupta et al. found that, in rats, nitric oxide synthase levels can be increased by nickel ions, along with cyclic guanosine monophosphate, which might lead to hyperglycemia by stimulating endocrine secretion. However, whether these findings in animal models can explain the association of nickel ion exposure with diabetes in humans needs thorough investigation in the future. Gold complexes have remarkable effects on the aggregation of hIAPP. Inhibitory effects on the fibrillization of hIAPP were exerted by three gold complexes bearing different nitrogen-containing aromatic ligands (referred to below as complexes 1, 2, and 3). The study by Lei et al. applied several experimental techniques, such as the ThT fluorescence assay and AFM, to examine characteristic changes of hIAPP, while Dynamic Light Scattering (DLS) experiments were used to determine the particular effects of these gold complexes on protein aggregation. Electrospray Ionization-Mass Spectrometry (ESI-MS) and an intrinsic fluorescence method were employed to investigate the binding properties between the gold complexes and hIAPP. By exploring the details of the binding site using NMR spectroscopy, Lei et al.
found that complexes 2 and 3 strongly inhibited the aggregation of hIAPP, compared to a fluctuated effect displayed by complex 1 on the fibril formation at high concentration.The effects inhibiting protein aggregation were derived from multiple interactions, including the possible coordination between the gold and the histidine residue, ligand steric effects, and the π–π stacking interaction between the nitrogen-containing aromatic ligands and hIAPP aromatic residues.Moreover, gold complexes showed the ability to inhibit the aggregation of hIAPP through dimerization, stabilize hIAPP as monomers, and thus prevent further fibrillization of the peptide.Furthermore, gold ions showed a non-interchangeable effect on the assembly of the peptide.Another study used similar techniques to investigate the interactions of the amyloid peptides PrP106-126 and hIAPP with two tetra-coordinated gold–sulfur complexes, dichloro diethyl dithiocarbamate gold complex and dichloro pyrrolidine dithio-carbamate gold complex .The results showed that gold complexes bind to amyloid peptides with high affinity via metal coordination and hydrophobic interaction.In that study, the binding was stronger with PrP106-126 than with hIAPP.Histidine residue may play an important role in the metal binding with both PrP106-126 and hIAPP.The metal coordination was notably exhibited by both of the gold complexes, as found in the peptide–Au–ligand adduct.The amyloid peptide fibrils scattered into nanoscale particles when the peptide and gold complexes interacted, decreasing the level of amyloid peptide cytotoxicity.These results suggest that tetra-coordinated gold–sulfur complexes may inhibit amyloidosis-related diseases.Mononuclear ruthenium complexes were recently proven to inhibit the aggregation of hIAPP .However, Gong et al. found that binuclear Ru complexes have a greater ability to inhibit aggregation than the corresponding mononuclear Ru complexes, possibly as a result of the second metal center in the binding of the monomeric species of amylin.The authors used a ThT assay to elucidate the effects of the Ru complexes on the aggregation of hIAPP.The ThT fluorescence intensity was high in the absence of Ru complexes, reflecting the fibrillation of hIAPP.After co-incubation with binuclear Ru complexes, the intensity of the fluorescence was drastically decreased.In addition, the concentration of Ru complexes also affects the inhibition of hIAPP aggregation .A study by He et al. , where aromatic-containing Ru complexes were used, had the same approach as Gong et al. 
in that it used a ThT assay, Transmission Electron Microscopy (TEM), and Atomic Force Microscopy (AFM) to further confirm the inhibition of hIAPP aggregation by Ru complexes. The results showed that the inhibition of hIAPP fibril formation is due to the interaction of Ru complexes with hIAPP, and that Ru complexes also promote the disaggregation of formed fibrils. Furthermore, NMR spectroscopy and Matrix-Assisted Laser Desorption/Ionization Time Of Flight Mass Spectrometry (MALDI-TOF MS) were used to study the interactions between the Ru complexes and hIAPP. The results indicate that Ru complexes induce conformational changes in hIAPP by binding to the peptide, both through metal coordination and through non-bonded interactions. Changes in the amide chemical shifts of several residues, e.g., S20, L27, and S28, indicate that the C-terminus of hIAPP could be involved in the binding of the Ru complexes. These conformational changes show that a lower fraction of β-sheet structure in hIAPP occurs immediately after binding with the Ru complexes, which significantly reverses the aggregation of hIAPP. Zhu et al. investigated ruthenium polypyridyl complexes. Their results agree that Ru complexes cause the disaggregation of hIAPP fibrils into small nanoparticles. Furthermore, using the MTT (3-(4,5-dimethyl-2-thiazolyl)-2,5-diphenyltetrazolium bromide) assay, they found that Ru complexes are also capable of reducing the cytotoxicity induced by hIAPP in the insulinoma cell line INS-1. Ma et al. used ThT assays to investigate the influence of four specific Ru complexes on hIAPP fibrillation. Information obtained from the decreased tyrosine intrinsic fluorescence and ThT fluorescence signals confirmed that Ru complexes inhibit the fibril formation of hIAPP. In this study, the authors did not specify the binding mode between the Ru complexes and hIAPP at the atomic level. Vanadium ions have been known for their in vitro insulin-mimetic effects since 1979. Since then, V ions have been investigated for their potential in the treatment of chronic diabetes. As metal ions can affect the activities of amyloid peptides, some studies have examined the effects of V complexes on hIAPP and whether these complexes are able to inhibit the aggregation of hIAPP. He et al. used different experimental techniques, such as ThT and spectrofluorometric measurements, to test the effects of six V complexes on the hIAPP peptide: an ammonium dioxovanadate (1), two bis-chelated oxovanadium complexes (2 and 3), a potassium oxalato-oxodiperoxovanadate (4), and two ammonium oxodiperoxovanadate salts (5 and 6). These six complexes and their derived active species interact either hydrophobically or electrostatically with hIAPP to significantly inhibit aggregation. The V complexes have strong inhibitory effects on peptide aggregation due to their high binding affinity and large ligands. To confirm that the V complexes affected hIAPP-induced cytotoxicity, which has been associated with T2DM, changes in cell viability were investigated. The tests showed that V complexes protected INS-1 insulinoma cells well from cytotoxicity induced by hIAPP. The clinical drug BMOV had the greatest effect on reversing the aggregation and reducing cytotoxicity.
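Several of the studies discussed above summarize ThT-based aggregation assays in terms of a lag phase and an apparent growth rate. As a supplementary illustration, the short Python sketch below shows one common way of extracting these quantities by fitting a Boltzmann sigmoid to a fluorescence time course; the data are synthetic placeholders and the sigmoid is only one of several empirical models in use, so this is not a re-analysis of any of the cited experiments.

```python
# Minimal sketch: estimating lag time and apparent growth rate from a ThT
# fluorescence time course by fitting a Boltzmann sigmoid.
# The time course below is synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, f0, fmax, t50, k):
    """Sigmoidal ThT curve: baseline f0, plateau fmax, midpoint t50, steepness k."""
    return f0 + (fmax - f0) / (1.0 + np.exp(-k * (t - t50)))

t = np.arange(0.0, 50.0, 0.5)                      # time in hours
rng = np.random.default_rng(0)
y = boltzmann(t, 100.0, 1000.0, 20.0, 0.4) + rng.normal(0.0, 15.0, t.size)

p0 = [y.min(), y.max(), 20.0, 0.1]                 # rough initial guesses
(f0, fmax, t50, k), _ = curve_fit(boltzmann, t, y, p0=p0)

lag_time = t50 - 2.0 / k                           # common operational definition
max_rate = k * (fmax - f0) / 4.0                   # slope of the fitted curve at t50
print(f"t50 = {t50:.1f} h, lag time = {lag_time:.1f} h, max rate = {max_rate:.1f} a.u./h")
```

With this parameterization, a prolonged lag phase appears as a larger t50 - 2/k and a slower aggregation as a smaller maximal slope, which is how statements such as "the lag phase was prolonged and the rate slowed about threefold" are usually quantified.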
Jha et al. showed that His18 acts as an electrostatic switch, inhibiting fibrillization in its charged state, which is heavily pH-dependent. Moreover, His18 plays an intrinsic role in hIAPP binding to the cellular membrane. The primary region responsible for the disruption of the membrane is the N-terminal region of hIAPP. When bound to the membrane, this N-terminal fragment causes membrane disruption to a similar extent as the full-length peptide, but without the formation of amyloid fibers. The truncated rIAPP fragment, which is both non-toxic to the cell and non-amyloid forming, differs from the human peptide by only one amino acid: Arg18 in the rat variant versus His18 in the human variant. A previous study measured the effects of the rIAPP and hIAPP fragments on islets of Langerhans and model membranes to explain the effect of this difference in amino acid residue. The authors noticed that intracellular calcium levels in islet cells significantly increased with the addition of hIAPP, indicating that the cellular membrane had been disrupted. The rIAPP peptide had significantly less effect on the membrane and showed a reduced ability to penetrate β-cell membranes. Dye leakage assays and experiments on model liposomes showed that at low peptide-to-lipid ratios rIAPP is unable to bind to or disrupt lipid membranes, indicating that the aggregate formation necessary for membrane binding and disruption is dramatically lower for rIAPP than for hIAPP. In contrast, at pH 6.0, where His18 is protonated, hIAPP resembles rIAPP in its limited ability to cause membrane disruption. Furthermore, using differential scanning calorimetry, the authors found that rIAPP has a different binding mode to the membrane compared with hIAPP. The latter peptide (hIAPP) shows a minor effect on the phase transition of lipid vesicles, suggesting a membrane-peptide orientation in which the acyl chain mobility of the membrane is practically unaffected. However, at low concentrations, rIAPP shows a strong impact on the phase transition of lipid vesicles, suggesting that it is not easy for the peptide to be inserted into the membrane after binding to the surface. The given results indicate that the modulation of the peptide orientation in the membrane by His18 can be the primary reason for the toxicity of non-amyloidogenic forms of hIAPP. The aggregation properties of amylin are also strongly dependent on the positions of residues other than His18. In 1990, Westermark et al. claimed that positions 25, 29, and especially 28 are important for the aggregation properties of amylin. These findings are in line with the fact that in rIAPP, in contrast to hIAPP, these positions are occupied by proline residues that do not favor β-sheet formation. The results of Ramamoorthy's group indicate that it is not necessary for hIAPP to form amyloid fibers in order to disrupt the membrane. Previous studies proved that amyloid fibers themselves are not particularly toxic, but there is controversy as to whether the process of amyloid formation is necessary for the amyloid peptides to generate toxic intermediates and disrupt the membrane. Ramamoorthy's group suggests that hIAPP disrupts the membrane without the formation of amyloids, and that membrane disruption occurs primarily as a consequence of factors not necessarily related to amyloidogenesis. As mentioned before, rIAPP and non-amyloidogenic peptide variants of amylin are non-toxic, providing evidence that the formation of amyloids is primarily responsible for the cytotoxicity of hIAPP.
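As a rough quantitative note on the His18 "electrostatic switch" described above, the Henderson-Hasselbalch relation gives the fraction of protonated (positively charged) imidazole at a given pH. The pKa of 6.5 used below is a generic value for a solvent-exposed histidine assumed purely for illustration; it is not a measured pKa for His18 in membrane-bound hIAPP.

```python
# Fraction of a histidine side chain in the protonated (charged) state,
# estimated with the Henderson-Hasselbalch relation.
# pKa = 6.5 is an assumed, generic imidazole value (illustration only).
def protonated_fraction(ph: float, pka: float = 6.5) -> float:
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (6.0, 7.4):
    print(f"pH {ph}: ~{100.0 * protonated_fraction(ph):.0f}% protonated")
# With pKa 6.5: ~76% charged at pH 6.0 versus ~11% at pH 7.4.
```

Under this assumption, most His18 side chains would carry a positive charge at mildly acidic pH but only a small fraction near pH 7.4, which is consistent with the pH-dependent membrane behavior described above.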
However, Knight et al. recently showed that rIAPP exhibits some membrane-disrupting activity. Also, diabetic rats show a small degree of β-cell apoptosis, which supports the conclusion that rIAPP can be toxic, though much less so than hIAPP. Other amyloid peptides also have the ability to form channels; therefore, Patel et al. used planar lipid bilayers and Atomic Force Microscopy to test the 441-amino-acid htau40 isoform of the Tau protein for its ability to form ion-permeable channels. The results showed that the Tau protein is only capable of forming ion channels under acidic conditions. These Tau protein channels are remarkably similar to the channels formed by amyloid β-peptides in terms of appearance, physical and electrical size, permanence, lack of ion selectivity, and multiple channel conductance. On the other hand, they show some differences from amyloid channels, such as their voltage dependence and their resistance to blockade by zinc ions: the channels formed by Tau proteins are not blocked by Zn2+ ions, even at higher Zn2+ concentrations, unlike the channels formed by other Aβ-peptides. Furthermore, α-synuclein also has the ability to form membrane ion channels, but further investigation is needed to determine which metal ions may directly influence this process. Some of the metal ions have been tested only with specific short fragments of IAPP peptides; therefore, future studies are required to examine the effects of metal ions on the full-length peptide. Another direction of future study should be in vivo experiments on the impact of metal ions on the formation of soluble amylin oligomers and on the glucose-tolerance mechanisms observed in transgenic mice overexpressing amylin in pancreatic cells. In addition, hIAPP shares similarities with the Aβ amyloid peptide in terms of aggregation features, metal ion interactions, and membrane interactions. It was also reported that there is an interaction between rIAPP and the membrane.
Moreover, the presence of small aggregates can also be confirmed in rIAPP samples; therefore, an rIAPP-associated cytotoxicity is conceivable. These results suggest that the issue of potential rIAPP toxicity needs further investigation. The interactions of metal ions with amylin affect its chemistry and structural properties. The coordination modes and the effects caused on the IAPP peptide by interaction with seven metal ions (zinc, copper, iron, nickel, gold, ruthenium, and vanadium) are discussed in this review. Interaction of these metal ions with amylin may affect its structure, causing the formation of misfolded IAPP, which can undergo fibril formation, finally resulting in the generation of oligomeric forms such as protofibrils and mature fibrils. Fe, Cu, and Zn ions promote the formation of amylin oligomers/protofibrils. Ru complexes, V complexes, and some of the Au complexes examined inhibit this formation. Nevertheless, for Au complexes, the nature of their effect on fibril formation can depend heavily on concentration. Cu ions may also inhibit amylin fibrillation and reduce toxicity. The influence of metal ions on amyloid pore formation by amylin and other amyloidogenic peptides is thus complex and will need to be studied in more detail. The importance of metal ions and their complexes with biomolecules in medical applications has brought much attention to this field, because metal ions can be used for diagnostic and treatment purposes. Moreover, as explained by all of the reviewed studies, they also play a significant role in diabetic patients. This suggests that we can benefit from specific metal ions in the diagnosis or even treatment of diabetes. Exploring the interactions of metal ions with biologically important biomolecules may open new ways of developing alternative medical treatment strategies. | Islet Amyloid Polypeptide (IAPP), also known as amylin, is a 37-amino-acid peptide hormone that is secreted by pancreatic islet β-cells. Amylin is complementary to insulin in regulating and maintaining blood glucose levels in the human body. The misfolding and aggregation of amylin is primarily associated with type 2 diabetes mellitus, which is classified as an amyloid disease. Recently, the interactions between amylin and specific metal ions, e.g., copper(II), zinc(II), and iron(II), were found to impact its performance and aggregation processes. Therefore, the focus in this review will be on how the chemistry and structural properties of amylin are affected by these interactions. In addition, the impact of amylin and other amyloidogenic peptides interacting with metal ions on the cell membranes is discussed. In particular, recent studies on the interactions of amylin with copper, zinc, iron, nickel, gold, ruthenium, and vanadium are discussed. |
326 | FMRI evidence for areas that process surface gloss in the human visual cortex | Surface gloss provides an important cue to an object’s physical material and its microstructure.From a perceptual perspective, it has particularly intriguing properties because there are cases where glossiness is specified only by small image areas containing highlights.Unlike other aspects of material, a slight change in an object can cause huge differences in the perceptual impression of gloss.While a number of image cues have been proposed to modulate gloss perception, it is an open challenge to understand how this information is processed to infer surface material.Psychophysical studies suggest that the brain uses a variety of visual signals to estimate gloss.For instance, low-level factors such as the image luminance histogram skew can bias perceived gloss and cause perceptual aftereffects.Mid-level factors such as specular reflections and surface relief also influence the impression of gloss.Highlights play a particularly important role in affecting judgments of material, and this can relate to their position and orientation, their colour, and their binocular disparity.Here we chose to investigate how manipulating surface appearance through highlights gives rise to changes in brain activity.In particular, we use fMRI to identify the cortical regions that respond preferentially to visual gloss depicted by highlights.Recent studies have suggested candidate areas in macaque brain that may play an important role in processing gloss.For instance, specular objects elicited more fMRI activation along the ventral visual pathway, from V1, V2, V3, V4 to inferior temporal cortex compared to matte objects and phase-scrambled images of the objects.Single-unit recordings from the superior temporal sulcus within IT cortex identified neurons that were selective for gloss uninfluenced by changes in the 3D structure of the viewed object or by changes to the illumination.Further, these gloss-selective responses reflect combinations of reflectance parameters that align to the perceptual dimensions guide judgments of surface properties.These results from the macaque indicate that specular reflectance properties are likely to be encoded in ventral visual areas.Despite this recent progress in the macaque model, we still have rather little insight into how the human brain processes gloss.Human brain imaging work examining the representation of material properties implicated a role of ventral visual areas, especially in fusiform gyrus, inferior occipital gyrus and collateral sulcus.This work employed stimulus changes in multiple image dimensions, meaning that activity related to gloss per se could not be determined.It is likely to be an important distinction as tests of a neuropsychological patient who had deficits in colour and texture discrimination showed that they were unimpaired on gloss judgments.This suggests that the cortical processing of gloss is independent from the processing of other material properties.Recently, Wada and colleagues reported that fMRI activity related to surface gloss is evident in V2, V3, V4, VO-1, VO-2, CoS, LO-1 and V3A/B.In particular, they contrasted glossy and matte objects under bright and dim illumination to exclude the confounding of luminance.Here we use the different approach of perturbing global image arrangement while preserving local image features to target mechanisms of the global synthesis of image cues when judging gloss.It is also different from Okazawa et al. 
who contrasted glossy objects with phase-scrambled versions of these objects.We presented observers with stimuli from four experimental conditions: Glossy, Scrambled Glossy, Matte and Scrambled Matte.Thereby we sought to discriminate Gloss vs. Matte renderings of objects while dissociating the role played by local image features.Fifteen participants who had normal or corrected-to-normal vision were recruited for the experiment.Two were authors and the remainder were naïve participants.All were screened for normal stereoacuity and MRI safety before being invited to participate.All participants had previously participated in other fMRI studies in which fMRI localiser data and a T1-weighted anatomical scans were acquired.The age range was 19–35 years old, and 13 of the 15 were male.All participants gave written informed consent before taking part in the experiment.The study was approved by the STEM Ethical Review Committee of the University of Birmingham.The work was carried out in accordance with The Code of Ethics of the World Medical Association.After completing the experiment, non-lab member participants received monetary compensation.The stimuli comprised 32 2-D renderings of 3-D objects generated in Blender 2.67a.The objects were spheres and tori whose surfaces were perturbed by random radial distortions to produce slightly irregular shapes.The diameter of the stimuli was 12° on average and they were presented on a mid-gray background.We illuminated the objects using a square light source located front and above the objects.We chose this simple light source to be able to increase the influence of our scrambling manipulation.We created versions of the stimuli for each object that made up the four conditions of the experiment: Glossy, Scrambled Glossy, Matte and Scrambled Matte.In the Glossy condition, objects were rendered using a mixed shader with 90% diffuse and 10% glossy components.We rendered objects in the Matte condition by setting the reflectance function to Lambertian.We controlled the luminance of the stimuli so that the mean luminance of the stimuli was 60.54 cd/m2 and the absolute maximum was 103.92 cd/m2 which corresponded to 57.55% and 98.78% of the display maximum luminance, respectively.All the objects were rendered without background then we set background colour to gray before further manipulations as described below.To produce spatial scrambling, we superimposed a 22 × 22 1-pixel black grid over the images and then randomly relocated squares within the grid.This approach differs from phase scrambling as blur, contrast, and luminance are only marginally affected.Moreover, the mosaic spatial scrambling approach we used interrupts object shape, shading, and specular highlights while all the local information is unchanged.Previous work indicates that highlight congruence with surface geometry and shading is crucial for perceived glossiness.Thus our stimuli strongly attenuate the impression of gloss by disrupting the relationship between highlights and global object structure.Note that the superimposed grid was presented for both intact and scrambled versions of the stimuli.This greatly attenuates the amount of additional edge information that results from the spatial scrambling manipulation.Formally, we assessed differences in image structure by computing possible image cues that might drive the fMRI response.In particular, we found that the image statistics of mean luminance, luminance root-mean-square contrast, and luminance histogram skew were matched across the four 
conditions indicating that there was more variation within the same class of stimuli than there was between classes.This is trivial for the scrambled versions of the stimuli, however, it is important that matte and glossy stimuli were well matched.In such a case, although the addition of a grid affects all these measures, it did not create any consistent difference across the four conditions, thus the interpretation of the results should not be affected.Furthermore, the power spectra of the stimuli in the different conditions indicate that the grid is effective in equalizing the spatial frequency content of the images, particularly when contrasted with scrambled images without a superimposed grid.The grid adds high frequency components to intact images creating a pattern that is very similar to the one due to the scrambling procedure.In this way, frequency spectra are made more similar across conditions.Stimulus presentation was controlled using MATLAB and Psychtoolbox.The stimuli were back projected from a JVC DILA SX21 projector onto a translucent screen inside the bore of the magnet.Participants viewed the stimuli binocularly via a mirror fixed on the head coil with a viewing distance of 64 cm.Luminance outputs were linearized and equated for the RGB channels separately with colorimeter measurements.A five-button optic fiber button box was provided to allow responses during the 1-back task.A 3-Tesla Philips scanner and a 32-channel phase-array head coil were used to obtain all MRI images at the Birmingham University Imaging Centre.Functional whole brain scans with echo-planar imaging sequence were obtained for each participant.The EPI images were acquired in an ascending interleaved order for all participants.T1-weighted high-resolution anatomical scans were obtained from previous studies.A block design was used.Each participant took part in 8–10 runs with 368 s length of each run in a 1.5 h session.Each run started with four dummy scans to prevent startup magnetization transients and it consisted of 16 experimental blocks each lasting 16 s.There were 4 block types, repeated four times in a run.During each block, eight objects were presented twice in a pseudo-random order.Stimuli were presented for 500 ms with 500 ms interstimulus interval.Participants were instructed to maintain fixation and perform a 1-back matching task, whereby they pressed a button if the same image was presented twice in a row.They were able to perform this task very well.Five 16 s fixation blocks were interposed after the third, fifth, eighth, eleventh and thirteenth stimulus blocks to measure fMRI signal baseline.In addition, 16 s fixation blocks were interposed at the beginning and at the end of the scan, making a total of seven fixation blocks during one experimental run.An illustration of the scan procedure is provided in Fig. 
3.BrainVoyager QX version 2.6 was used for MRI data processing.Each participant’s left/right cortical surfaces were reconstructed by segmenting gray and white matter, reconstructing the surfaces, inflating, cutting and then unfolding.All functional images were pre-processed with slice scan timing correction, 3D head motion correction, high-pass filtering and linear trend removal.Functional images were co-registered with anatomical images and then transformed to Talairach coordinate space and aligned with each other.We computed the global signal variance of the blood oxygenation level dependent signal for each run using the whole-brain average of activity across volumes.If this exceeded 0.16% the scan run was excluded from further analysis to avoid the influence of scanner drifts, physiological noise or other artifacts.On this basis, 17/146 runs across 15 participants were excluded from further analysis.A 3D Gaussian spatial smoothing kernel with 5 mm full-width-half-maximum was applied before analysing the data using a group-level random effects general linear model.A total of 11 regions of interest were defined.For each participant V1, V2, V3d, V3A, V3v, V4 were drawn by visual inspection of the data obtained from a standard retinotopic mapping scan preceding the experiment.V3B/KO, hMT+/V5 and LO were defined by additional functional localizers respectively in a separate session as in previous studies.For nine of the fifteen participants, V3B/KO and hMT+/V5 were defined according to Talairach coordinates.LO and pFs were defined by a localizer scan for all participants in which intact object images and their spatially-scrambled versions were contrasted.pFs was identified as the more anterior portion of the activation map obtained from this contrast.The average mass centre of LO and pFs across the 15 participants were and for right and and for left hemisphere.The superior temporal sulcus was defined according to Talairach coordinates.We computed percent signal change by subtracting the BOLD signal baseline from each experimental condition and then dividing by the baseline.In addition, voxels used in the PSC analysis were masked with the t-value maps obtained by contrasting all stimulus conditions vs. fixation blocks for each individual participant.PSCs were examined within independently identified ROI under each experiment condition.We then computed the difference in PSC between intact and scrambled versions of Glossy and Matte objects, which we term ΔPSC.Finally, we used random effects Granger causality mapping to probe the information flow between ROIs.Granger causality uses temporal precedence to identify the direction of influence from a reference region to all other brain voxels.The GCMs for each participant were calculated first then they were combined together with a simple t-test and cluster-size thresholding.To identify brain areas that preferentially responded to glossy objects, we used a conjunction analysis to find voxels that were activated more strongly in Glossy condition than in any of the other three conditions across the 15 participants.In particular, Fig. 4 shows the results of a random-effects GLM with statistical significant voxels and cluster-size thresholding.The orange areas demark significantly higher activation in Glossy condition under the three contrasts, respectively: Glossy vs. Scrambled Glossy, Glossy vs. Matte, Glossy vs. 
Scrambled Matte.In general, these areas were distributed along ventral visual pathway in both hemispheres including the ventral occipitotemporal cortex.In addition, we found responses in the area around V3B/KO, which is traditionally thought to belong to the dorsal visual stream.To complement our whole brain contrast analysis, we also examined the percent signal change within independently identified regions of interest.To identify responses to global objects with consistent surface properties, we contrasted the glossy and matte stimuli against their scrambled controls by subtracting PSC in scrambled conditions from their intact counterparts for Glossy and Matte conditions leading to ΔPSC.We first tested whether activation differed for scrambled stimuli and their intact counterparts by testing if the ΔPSC deviated from zero.In early and intermediate visual areas, we found stronger responses to the scrambled stimuli than their intact counterparts, indicating that globally incoherent stimuli drive higher levels of activity.By contrast, in higher visual areas V3A, V3B/KO, hMT+/V5, LO and pFs we found stronger responses for intact versions of the stimuli.Response magnitudes in the STS were low, and not significantly different from zero.We then compared ΔPSC for Glossy against Matte conditions in all the ROIs.A two-way repeated measures ANOVA showed a significant difference between Glossy and Matte conditions, an effect of ROI, and a significant interaction.Thereafter we tested for the differences between conditions in each ROI.Asterisks in Fig. 5 represent significant differences in activation between the two conditions.We found that responses were significantly higher for objects with glossy than with matte surfaces in areas V3B/KO and pFs.Note that to compute ΔPSC we subtracted the activation in scrambled versions of the stimuli, so the glossy selectivity observed in V3B/KO and pFs is unlikely to be explained by low-level differences in the images of the objects.Moreover, we found no significant difference in the percent signal change between Scrambled Glossy and Scrambled Matte conditions, suggesting that the significant differences in ΔPSC between glossy and matte stimuli were mainly due to the PSC difference between Glossy and Matte conditions rather than between their scrambled counterparts.ΔPSC in early visual areas were also significant, however response modulation in these areas was higher for scrambled stimuli than for intact ones.Since the PSC in Scrambled Glossy and Scrambled Matte conditions were similar, we can conclude that the difference is mainly due to intact conditions.It is possible that some neurons in these areas selectively respond to glossy object, however, unlike V3B/KO and pFs, these areas respond prevalently to scrambled images rather than intact ones.This suggests that these areas primarily deal with low-level image features and do not account for overall glossy appearance.As reviewed above, responses in STS were very low and not significantly different across conditions.The preceding analysis indicates two brain areas that appear to be important in processing information about gloss.To quantify how these areas communicate with other parts of the visual cortex, we used a random effects Granger causality mapping analysis to assess how these areas influence and depend on activity elsewhere.Fig. 
6 shows the results using either pFs or V3B/KO as the reference region, respectively.Blue areas indicate brain areas that are significantly influenced by the reference region, while the green colour map identifies locations that have a significant influence on the reference region.We found that activity in pFs had a strong influence on both dorsal and ventral areas.This may reveal that gloss-related activity is used for the processes of object processing in addition to affecting depth estimates.By contrast, the estimated connectivity in V3B/KO was quite different.V3B/KO mainly received information from ventral areas rather than having influence on them, perhaps indicating that gloss information in V3B/KO is inherited from a primary locus in ventral areas.In addition, we observed that V3B/KO also received some information from an area near the STS.Although our other analyses did not suggest the involvement of the STS, this analysis appears consistent with the role of the STS in gloss indicated by electrophysiological recordings.We should note that we could not determine whether the information flow captured by the Granger Causality Mapping is specific to gloss signals.Nevertheless, as the preceding conjunction analysis and PSC results showed the importance of pFs and V3B/KO in processing gloss, it is quite possible that the GCMs show different information flows between pFs and V3B/KO for gloss processing.The aim of this study was to localize the brain areas preferentially responding to glossy objects in the human brain.We did this by rendering glossy and matte versions of three-dimensional objects, and using scrambled images to control for low-level image cues.Our results point to a role for the posterior fusiform sulcus and area V3B/KO in the processing of surface gloss: we found stronger responses to glossy objects than their matte counterparts, and this could not be explained by low-level stimulus differences.By assessing connectivity between brain areas while viewing glossy and matte stimuli, we observed that pFs exerted influence on ventral and dorsal brain areas, while V3B/KO was influenced by activity in midlevel ventral areas, which may indicate a difference between areas in their use of information from gloss as a cue to material vs. object shape.Recent imaging studies in macaques suggest that glossy objects elicit more activation along the ventral visual pathway form V1 to IT cortex.We also found higher activation in the ventral stream, in particular in the pFs.Our results are reassuringly consistent with a very recent fMRI study that used a different image control approach.In particular, that study indicated the role of ventral areas and the combined areas V3A/B.Since the ROI in our study were mapped using independent localisers before the experiment whereas Wada et al. 
considered only one area, our results pinpoint gloss-related activity more precisely, suggesting that the more lateral V3B/KO region is more important in gloss processing than V3A.The involvement of early visual areas is not clear.Although ΔPSC in earlier areas is significant due to higher activation for Glossy than for Matte objects, however, unlike V3B/KO and pFs, response modulation in these early areas is higher for scrambled stimuli.This suggests that these areas primarily deal with low-level features such as the area which occupies visual field, discontinued borders and high spatial frequency information which is more in scrambled than in intact conditions.Note that some low-level features might be affected by our scrambling technique.For example, there are more highlight boundaries on Glossy objects and scrambling decreases the number of these segments and edges.Thus, the PSC difference in V1 to V4 might be caused by such low-level image properties rather than glossiness.Previous human fMRI studies found the modulation of fMRI responses by different object materials perception in the fusiform gyrus and collateral sulcus.This work employed a wide variety of object materials thus creating differences in surface gloss as well as differences in texture and colour.Here we focused on gloss, manipulating surface reflectance of untextured and homogeneously coloured objects.Despite this important difference between the studies, the surface-property-specific region they found is located very close to the area we denote as pFs based on a comparison of Talairach coordinates.Consistent with this, other work showed that a patient with colour and texture discrimination deficit could judge glossiness correctly, indicating that glossiness information does not exclusively depend on colour or texture processing.Taken together, this evidence suggests a dissociation between areas underlying material/texture from gloss.Nevertheless, the proximity of these areas may suggest a close interrelation and connection between material and gloss processing centres.An important finding here is that the brain area V3B/KO seems to be involved in gloss processing.V3B/KO, located in dorsal visual stream, is well known to selectively respond to kinetic boundaries.It was also found to be involved in integrating different depth cues.Our study, together with the recent results by Wada et al., indicate that the activity in V3B/KO is modulated by surface gloss, although previous work has not highlighted the involvement of this area in processing material information.One possibility is that V3B/KO does not actually processes gloss information per se.The causality mapping suggests quite a different pattern of causal relationships in V3B/KO than in pFs, with V3B/KO primarily being influenced by signals from elsewhere, while pFs influences responses in other areas.It is possible that the effect we found in V3B/KO was due to the effect of adding internal boundaries to the shapes corresponding to the locations with highlights.Alternatively, because specular highlights are known to influence the perception of 3D shape, it is possible that differences in activity in V3B/KO for glossy vs. 
matte objects relate to differences in the estimated 3D shape.This appears consistent with the recent work that indicates that V3B/KO integrates different cues to 3D structure.The superior temporal sulcus of the macaque was found to show specific responses to glossy objects based on both fMRI and single-unit recordings.However, in our study we did not find strong evidence for the involvement of human STS in glossiness processing: changes in signals in this area were low, although the causality mapping did indicate some modulation of activity near the STS.It is possible that there are functional differences between human brain and monkey brain.For example, studies found functional differences between the two species in V3A and the intraparietal cortex for three-dimensional structure-from-motion processing.It is also possible that the reasonably large voxel sizes used in our study limited our ability to detect responses to glossy stimuli in the human STS, and/or that the underlying population is spatially limited such that it did not survive the cluster threshold we applied.In our study we chose to generate control stimuli using a scrambling technique applied to a visible grid.The presence of a grid reduces changes in low-level image properties due to scrambling while disrupting global properties of the shapes that are known to modulate the impression of gloss.The use of a superimposed grid over the stimuli was conceived to ensure that the amount of edge information in the stimuli was broadly similar between intact and scrambled conditions.This expedient overcomes the large difference in spatial frequency content that would be produced by scrambling alone.Although there are slight differences in spatial frequency between intact objects and their scrambled counterparts, scrambling had similar effects for Glossy and Matte conditions.Therefore differences in the spatial frequency spectra could not be the only cause for the pattern of results found.Furthermore, image statistics did not differ substantially between Glossy and Matte conditions, ensuring that the results are not due to these properties as well.One could also argue that images with an overlaid grid could be amodally completed behind the occlusions.Such completion would be present for intact objects in both Glossy and Matte conditions.Therefore the completion-related activity would not bias the results.Similarly, even though scrambling clearly makes the stimuli occupy a larger portion of the visual field, our analysis procedures makes it unlikely that such differences contributed to the findings we report in the study.This is because our conjunction analyses were not based only on and on comparisons, but also on the contrast .Overall, the results we presented cannot be explained by local edges, contrast, or configuration changes as these factors were the same for Glossy and Matte conditions.We should also note that during our experiments our participants were not making active perceptual judgments of gloss.It is possible that activations would have been stronger had we asked for concurrent perceptual judgments.However, this would likely have introduced attention-based differences between the intact and scrambled conditions, which we deliberately sought to avoid using a task at the fixation point.Finally, it is interesting to consider whether the areas we identify here would be involved in other aspects of gloss processing.As discussed in the Introduction, gloss perception can be modulated by several factors including low-level image 
cues, image configurations, scene variables including light source direction, light source style and background colour.Moreover, factors related to 3D structure from self motion and object motion and stereo viewing can change perceived gloss.Finally, even non-visual sources such as haptic cues and interactions with objects can lead to changes in surface appearance.It is an open challenge to understand whether these variables involve processing in pFs and V3B/KO, or whether additional areas are recruited.This study reveals that V3B/KO and pFs are selectively active when processing images of glossy objects.This finding is consistent with other recent human fMRI studies and it suggests close but dissociated networks for gloss and material processing in the ventral stream.Our results point to a different role of V3B/KO and pFs, suggesting that V3B/KO may be tuned to processing highlight boundaries or 3D shape properties rather than to glossiness processing.Overall, our study highlights a small network in the fusiform sulcus that may be important in supporting our perception of surface gloss. | Surface gloss is an important cue to the material properties of objects. Recent progress in the study of macaque's brain has increased our understating of the areas involved in processing information about gloss, however the homologies with the human brain are not yet fully understood. Here we used human functional magnetic resonance imaging (fMRI) measurements to localize brain areas preferentially responding to glossy objects. We measured cortical activity for thirty-two rendered three-dimensional objects that had either Lambertian or specular surface properties. To control for differences in image structure, we overlaid a grid on the images and scrambled its cells. We found activations related to gloss in the posterior fusiform sulcus (pFs) and in area V3B/KO. Subsequent analysis with Granger causality mapping indicated that V3B/KO processes gloss information differently than pFs. Our results identify a small network of mid-level visual areas whose activity may be important in supporting the perception of surface gloss. |
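The percent-signal-change (PSC) and ΔPSC measures used in the fMRI study above reduce to simple array arithmetic: PSC is the condition signal minus the fixation baseline, divided by the baseline, and ΔPSC is the intact-minus-scrambled difference computed separately for glossy and matte objects. The Python sketch below shows this computation on invented per-ROI values; the numbers are placeholders and are not taken from the study.

```python
# Sketch of the PSC / delta-PSC computation for one region of interest.
# The BOLD values below are invented placeholders (arbitrary scanner units).
baseline = 1000.0                                   # mean signal during fixation blocks
bold = {"glossy": 1012.0, "scrambled_glossy": 1008.0,
        "matte": 1007.0, "scrambled_matte": 1006.0}

# Percent signal change relative to the fixation baseline.
psc = {cond: 100.0 * (v - baseline) / baseline for cond, v in bold.items()}

# Delta-PSC: intact minus scrambled, separately for glossy and matte objects.
delta_psc_glossy = psc["glossy"] - psc["scrambled_glossy"]
delta_psc_matte = psc["matte"] - psc["scrambled_matte"]

print(psc)
print(f"dPSC glossy = {delta_psc_glossy:.2f}, dPSC matte = {delta_psc_matte:.2f}")
```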
327 | Arborvitae (Thuja plicata) essential oil significantly inhibited critical inflammation- and tissue remodeling-related proteins and genes in human dermal fibroblasts | Arborvitae, also known as western red cedar, and its essential oils have been traditionally used as a natural insect repellent and wood preservative, primarily because of its insecticidal and antimicrobial property .Recently, the topical use of Arborvitae essential oil for skincare has gained popularity.However, a literature search revealed no existing studies of the biological activities of AEO in human cells.Therefore, we evaluated the biological activities of a commercially available AEO in a pre-inflamed human dermal fibroblast culture model, which was designed to model the disease biology of chronic skin inflammation.First, we analyzed the effect of AEO on 17 important protein biomarkers that are closely related to inflammation and tissue remodeling.Then, we studied its effect on genome-wide gene expression in the same cell culture.All experiments were conducted using a BioMAP system HDF3CGF, which was designed to model the pathology of chronic inflammation in a robust and reproducible manner.The system comprises three components: a cell type, stimuli to create the disease environment, and a set of biomarker readouts to examine how the treatments affected the disease environment .Primary human neonatal fibroblasts were prepared as previously described and were plated under low serum conditions for 24 h before stimulation with a mixture of interleukin-1β, tumor necrosis factor-α, interferon-ϒ, basic fibroblast growth factor, epidermal growth factor, and platelet-derived growth factor.The cell culture and stimulation conditions for the HDF3CGF assays have been described in detail elsewhere and were performed in a 96-well plate .Direct enzyme-linked immunosorbent assay was used to measure the biomarker levels of cell-associated and cell membrane targets.Soluble factors in the supernatants were quantified using either homogeneous time-resolved fluorescence detection, bead-based multiplex immunoassay, or capture ELISA.The adverse effects of the test agents on cell proliferation and viability were measured using the sulforhodamine B assay.For proliferation assays, the cells were cultured and measured after 72 h, which is optimal for the HDF3CGF system, and the detailed procedure has been described in a previous study .Measurements were performed in triplicate wells, and a glossary of the biomarkers used in this study is provided in Supplementary Table S1."Total RNA was isolated from cell lysates using the Zymo Quick-RNA MiniPrep kit according to the manufacturer's instructions.RNA concentration was determined using a NanoDrop ND-2000 system.RNA quality was assessed using a Bioanalyzer 2100 and an Agilent RNA 6000 Nano kit.All samples had an A260/A280 ratio between 1.9 and 2.1 and a RIN score >8.0.The effect of 0.011% AEO on the expression of 21,224 genes was evaluated in the HDF3CGF system after a 24-h treatment."Samples for microarray analysis were processed by Asuragen, Inc. 
according to the company's standard operating procedures.Biotin-labeled cRNA was prepared from 200 ng of total RNA using an Illumina TotalPrep RNA Amplification kit and one round of amplification.The cRNA yields were quantified using ultraviolet spectrophotometry, and the distribution of the transcript sizes was assessed using the Agilent Bioanalyzer 2100.Labeled cRNA was used to probe Illumina human HT-12 v4 expression bead chips."Hybridization, washing, staining with streptavidin-conjugated cyanine-3, and scanning of the Illumina arrays were carried out according to the manufacturer's instructions.The Illumina BeadScan software was used to produce the data files for each array; the raw data were extracted using Illumina BeadStudio software.The raw data were uploaded into R and analyzed for quality-control metrics using the beadarray package .The data were normalized using quantile normalization , and then re-annotated and filtered to remove probes that were non-specific or mapped to intronic or intragenic regions .The remaining probe sets comprised the data set for the remainder of the analysis.The fold-change expression for each set was calculated as the log2 ratio of AEO to the vehicle control.These fold-change values were uploaded onto Ingenuity Pathway Analysis to generate the networks and pathway analyses.AEO was diluted in dimethyl sulfoxide to 8× the specified concentrations.Then, 25 μL of each 8× solution was added to the cell culture to obtain a final volume of 200 μL, and DMSO served as the vehicle control.The gas chromatography-mass spectrometry analysis of AEO indicated that it mainly contained methyl thujate and smaller amounts of numerous other aromatic molecules.We analyzed the biological activity of AEO by using an HDF3CGF cell system, which simulated the microenvironment of inflamed human skin cells with already boosted immune responses and inflammatory levels.None of the four studied concentrations was overtly cytotoxic, and therefore, the activity of 0.011% concentration was included for analysis.Key activities of biomarkers were designated if biomarker values were significantly different from those of vehicle controls, outside of the significance envelope, with an effect size of at least 10% and are discussed below.The expressions of several inflammatory biomarkers, such as vascular cell adhesion molecule 1, intracellular cell adhesion molecule 1, interferon gamma-induced protein 10, interferon-inducible T-cell chemoattractant, and monokine induced by interferon gamma, significantly decreased in response to AEO.Specifically, the levels of these protein biomarkers were already highly elevated in the pre-stimulated inflamed dermal fibroblasts.The inhibitory effects of AEO on the increased production of proinflammatory biomarkers suggest that it might possess anti-inflammatory properties.AEO also showed significant antiproliferative activity in dermal fibroblasts, as measured using the SRB proliferation assay 72 h after treatment.The levels of five tissue remodeling molecules—collagen-I, collagen-III, plasminogen activator inhibitor-1, and tissue inhibitor of metalloproteinase 1 and 2—significantly decreased in response to AEO treatment.AEO also significantly inhibited the level of macrophage colony-stimulating factor, a cytokine that mediates macrophage differentiation and thus, immunomodulation.It is noteworthy that the inhibitory effects of AEO on the increased production of these protein biomarkers were concentration-dependent.AEO inhibited all these factors, which 
suggests that it might play important roles in tissue remodeling and immunomodulation, and thus, the wound healing processes.These effects of AEO are presumably mediated by slowing down the tissue repair process, which reduces the chance of scar formation or improper chronic wound healing .Recent studies on the essential oils of T. plicata-related species, their major active components, or both have shown preliminary evidence of their therapeutic efficacy and safety in disease models .We conducted a literature search and found that no study has been conducted on the effects of AEO or its major component methyl thujate in human cells or similar models.Therefore, to the best of our knowledge, the current study is the first evidence of the biological activities of AEO in a human skin disease model, which suggests their anti-inflammatory, immunomodulatory, and tissue-remodeling properties in the human skin.We then analyzed the effect of 0.011% AEO on the RNA expression of 21,224 genes in the same cells.The results showed the significantly diverse regulatory effect of AEO on human genes, with numerous genes being either upregulated or downregulated.Among the 200 most-regulated genes by AEO, the majority were significantly downregulated.A cross-comparison of the protein and gene expression data revealed that AEO significantly inhibited both the protein and gene expression levels of VCAM-1, IP-10, and I-TAC.This suggests that AEO might play a profound role in regulating these three important players.IPA showed that the bioactivity of AEO significantly overlapped with numerous canonical pathways from the literature-validated database analysis.Many of these signaling pathways are closely related to inflammation, immunomodulation, and tissue remodeling.Overall, AEO appeared to inhibit these signaling pathways in the highly inflamed human skin cells, suggesting it has potential anti-inflammatory and immunomodulatory effects.To the best of our knowledge, this study provides the first evidence of the biological activities of AEO in highly inflamed human skin cells.The findings show that AEO significantly inhibited numerous protein and genes involved in inflammation, immune responses, and tissue remodeling.In addition, AEO diversely and significantly modulated global gene expression.Furthermore, AEO robustly affected various important signaling pathways in human cells.These findings provide the first evidence for the therapeutic potential of AEO in human skin cell inflammation.Further studies on the mechanism of action and clinical efficacy of AEO are required before drawing definite conclusions about its therapeutic properties.X.H. and T.P. are employees of dōTERRA, where the study agent AEO was manufactured. | Arborvitae (Thuja plicata) essential oil (AEO) is becoming increasingly popular in skincare, although its biological activity in human skin cells has not been investigated. Therefore, we sought to study AEO's effect on 17 important protein biomarkers that are closely related to inflammation and tissue remodeling by using a pre-inflamed human dermal fibroblast culture model. AEO significantly inhibited the expression of vascular cell adhesion molecule 1 (VCAM-1), intracellular cell adhesion molecule 1 (ICAM-1), interferon gamma-induced protein 10 (IP-10), interferon-inducible T-cell chemoattractant (I-TAC), monokine induced by interferon gamma (MIG), and macrophage colony-stimulating factor (M-CSF). 
It also showed significant antiproliferative activity and robustly inhibited collagen-I, collagen-III, plasminogen activator inhibitor-1 (PAI-1), and tissue inhibitor of metalloproteinase 1 and 2 (TIMP-1 and TIMP-2). The inhibitory effect of AEO on increased production of these protein biomarkers suggests it has anti-inflammatory properties. We then studied the effect of AEO on the genome-wide expression of 21,224 genes in the same cell culture. AEO significantly and diversely modulated global gene expression. Ingenuity pathway analysis (IPA) showed that AEO robustly affected numerous critical genes and signaling pathways closely involved in inflammatory and tissue remodeling processes. The findings of this study provide the first evidence of the biological activity and beneficial action of AEO in human skin cells. |
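The microarray methods in the entry above describe computing fold-change expression as the log2 ratio of AEO-treated to vehicle-control values after quantile normalization. A minimal sketch of that calculation is shown below; the gene symbols and intensity values are hypothetical placeholders, not the study's data.

```python
# Minimal sketch (made-up intensities): log2 fold-change of AEO-treated vs.
# vehicle-control expression, as described for the microarray analysis in the
# entry above. Real pipelines work on quantile-normalized probe intensities
# (e.g. with the beadarray package); gene symbols here are illustrative only.
import numpy as np

genes = ["VCAM1", "CXCL10", "CXCL11"]        # hypothetical probe annotations
aeo = np.array([120.0, 85.0, 60.0])          # normalized intensities, AEO-treated
vehicle = np.array([480.0, 510.0, 240.0])    # normalized intensities, DMSO control

log2_fc = np.log2(aeo / vehicle)             # negative values indicate downregulation
for gene, fc in zip(genes, log2_fc):
    print(f"{gene}: log2 fold-change = {fc:+.2f}")
```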
328 | Morphological alterations induced by the exposure to TiO2 nanoparticles in primary cortical neuron cultures and in the brain of rats | For several years, the use of nanotechnologies, such as nanoparticles, has drastically increased in industrial and emerging countries.Inside the class of nanometric compounds, titanium dioxide NPs are one of the most produced.Over the last decade, nearly 6 million tons of TiO2 were produced worldwide and the percentage of TiO2 under nanoform was estimated to reach 50% of the total production in the year 2023 .TiO2 NPs are used in a large panel of applications such as the cosmetic industry , battery production, pharmaceutical industry and food industry .The rise in the utilization of NPs is due to their small size, large surface area and high reactivity .Despite the wide range of applications, there is a lack of information about the interaction of these NPs with biological systems, including the impact of TiO2 exposure on the nervous system in the short and long term.Many recent studies show that TiO2 NPs are toxic via different routes of exposure such as inhalation, ingestion or injection.After intraperitoneal injection in mice, the main target organs are liver, kidneys, spleen and lungs, causing inflammation, fibrosis and tumors .Chen et al. demonstrated the toxicity of TiO2 NPs in rats after oral intake, which was characterized by an inflammatory process and an alteration of cardiac function.Other studies show that TiO2 NPs also have deleterious effects in vitro on different cell types including epidermal cells , endothelial cells , alveolar macrophages and renal tubular cells , causing oxidative stress, decrease in growth and apoptosis.Regarding the nervous system, nanoparticles have the capacity to reach various major organs including different parts of the brain via systemic circulation .Therefore, the rise of nanotechnology and the environmental pollutants including TiO2 NPs may be an important risk factor of neurological disorders such as Alzheimer's disease, Parkinson's disease and brain tumors .To reach the brain, TiO2 NPs have to cross the blood-brain barrier which protects the brain from chemicals, toxins and pathogens.It is composed of tight junctions strongly connecting endothelial cells surrounded by astrocytes and pericytes.Only substances with low molecular weight are able to pass the BBB by passive diffusion, active transport or endocytosis .It has been demonstrated that the intraperitoneal administration of nanoparticles derived from metals such as Ag, Al or Cu causes the disruption of neuronal cell membranes that enables their entry into the brain .On the other hand, during inhalation, the NPs are directly captured by the ending bulbs of the olfactory and trigeminal nerves and can reach the brain by retrograde axonal transport .Elder et al.
reported that manganese oxide NPs were found in rat brain after intranasal instillation by the olfactory neuronal pathway.Reactive oxygen species and oxidative stress have been implicated in the pathogenesis of neurodegenerative injuries.Oxidative stress is the most important accepted mechanism of nano-neurotoxicity.Inflammatory response, apoptosis, genotoxicity can be the consequences of an oxidative stress.ROS such as superoxide, hydrogen peroxide and hydroxyl radicals are able to interact with lipids, nucleic acids and proteins at the site of particle deposition.The brain is particularly vulnerable to oxidative stress because of its high energy demand, low level of antioxidants and high cellular content of lipids and proteins .Several in vitro and in vivo studies demonstrated the capacity of TiO2 NPs to induce an oxidative stress at the site of accumulation in different parts of the brain.Shrivastava et al. showed that after oral administration, ROS increased and the activities of antioxidant enzymes were disturbed inside the central nervous system.This oxidative stress was accompanied by histopathological injuries as observed after IP administration of TiO2 NPs .Nanoparticles reached the brain and induced histopathological changes and high levels of ROS, malondialdehyde, nitric oxide.Inflammatory reaction is also a major mechanism of neurotoxicity induced by TiO2 NPs.TiO2 NPs can interact with neurons and glial cells including microglia that are immune cells residing in the brain.If microglial cells are activated by NPs, they produce pro-inflammatory cytokines that induce neuro-inflammation .Liu et al. have demonstrated that tracheal exposure to NPs significantly increased the expression of interleukin-1β, TNF-α and IL-10 in the brain.Damage to astrocytes and disruption of the BBB were also observed.In the present study, the toxic effects of TiO2 NPs on the brain were investigated according to two different approaches.In the first part, in vitro experiments were performed on primary cortical cultures of rat embryos.The cells were exposed to different doses of NPs for several exposure times to study the effects on neurons, astrocytes and microglia.Oxidative stress and cell proliferation were also studied.In the second part of the study, rats were exposed to NPs by IP injection.Animals were sacrificed after 4 days, 1 and 2 months.Histopathological injuries, cell proliferation and oxidative stress were investigated by immunohistochemical methods.TiO2 nanoparticles were provided by Sigma-Aldrich Chemical Co.According to the manufacturer specifications, the nanoparticles were composed of titanium oxide, anatase with a purity of 99.7%, based on trace metal analysis.All suspensions were prepared in an isotonic sterile phosphate buffered saline solution.Before use, a stock solution of NPs was sonicated in a probe sonicator (50/60 Hz; 230 V) for 3 runs of 30 min, as detailed in a previous publication .The measurements of the size distribution and the zeta potential of the nanoparticles suspended in aqueous medium were performed on a Zetasizer Nano ZS using a He-Ne laser.The zeta potential was determined directly in a solution containing NaCl.The pH of the aqueous suspension containing the particles was adjusted by adding 0.1-0.001 mM HNO3 or NaOH solution.Animals were treated according to the guidelines specified by the Animal Welfare Unit of the Public Service of Wallonia and under the control of the local UMONS-ethical commission.Gravid rats were anesthetized with an intraperitoneal injection
of Nembutal, and embryos were separated from the uterus.Embryos were decapitated, and heads were placed immediately in iced Hank's Balanced Salt Solution.Cerebral hemispheres were removed and placed in a sterile 100-mm dish containing an excess of cold dissection medium.Under a dissecting microscope, brain hemispheres were separated and the cerebral cortices were carefully dissected, removing the midbrain and meninges.Cerebral cortex samples were transferred in a sterile tube containing 2 ml of HBSS and mechanically dissociated following a procedure detailed in previous publications.After evaluation of cell density using a Bürker hemocytometer, cell suspensions were diluted and plated at a density of 7 × 10⁴ cells per 1.13 cm² on sterile 12 mm diameter round glass coverslips pre-coated with polylysine in 24-well dishes.Cultures were placed in an incubator at 37 °C with a humid atmosphere at 5% CO2.Cells were fed with fresh medium 2 times per week.A volume of 2.5–10 μl of stock suspension of TiO2 was added per ml of culture medium in order to obtain final concentrations of 5 to 20 μg/ml in the well dishes.The cultures were exposed for different time intervals of 6, 18, 24, 72 and 96 h. Equivalent volumes of 2.5–10 μl of vehicle were added per ml of culture medium in control cultures.The culture medium was not changed during the incubation periods.Proliferating cells were evaluated in culture by immunocytochemical detection of 5-Bromo-2′-deoxyuridine as detailed in previous publication .Briefly, culture cells were exposed to BrdU for 2 h before cell fixation.After fixation in paraformaldehyde 4% for 15 min, culture slices were rinsed in distilled water and treated for 30 min with a 3 M HCl solution at 60 °C.After rinsing in PBS, culture cells were preincubated for 20 min in a 0.01% casein solution in PBS buffer.Thereafter, cells were incubated with a mouse monoclonal anti-BrdU antibody for 1 h at room temperature.This step was followed by an exposure of 30 min to anti-mouse/peroxidase complexes.Revelation of bound peroxidase activity was performed by incubation with a solution of 3,3′-diaminobenzidine 0.05% and 0.02% H2O2 in PBS.Finally, culture cells were counterstained with Mayer's hemalum and mounted in permanent medium.The number of S-phase cells was counted on 50 microscopic fields picked at random per slide at high magnification (400X), representing a total scanned surface of 4.2 mm² per culture.For each time of exposure to nanoparticles and for each TiO2 concentration, measures were done on 4 independent cultures; 4 non-treated cultures were analyzed following a similar procedure and were used as controls.For each experimental condition, the mean was calculated on four independent cultures and data presented as histograms +/- SEM.Cell monolayers present on glass coverslips were fixed with 4% paraformaldehyde in PBS.Following fixation, paraformaldehyde was changed for fresh PBS where cell cultures were stored at 4 °C until immunostaining.Before application of antibodies, cell monolayers were rinsed several times with PBS containing 0.1% Triton X-100.Before exposure to primary antibodies, cells were pre-treated for 20 min in PBS containing 0.05 M NH4Cl and 0.05% casein to prevent non-specific adsorption of antibodies.Cells were exposed for 60 min to mouse monoclonal or rabbit polyclonal primary antibodies at an optimal dilution as detailed in.This step was followed by a 30-min exposure to fluorescent secondary antibodies .After final rinses in PBS, the coverslips were mounted on glass slides using commercial
anti-fading medium.Negative controls were produced by omitting the primary antibodies.This modification resulted in a disappearance of the fluorescence signal.For each culture, the number of neurones, the length of neuronal processes and the area occupied by astrocytes were quantified by morphometric analysis at 100× magnification.The procedure utilized software designed for morphometry and colour analysis.For each culture condition, 5 microscopic fields were picked at random, representing a total scanned surface of 1.8 mm².The number of neurones and the mean length of neuronal processes were quantified after MAP2 immunostaining.The surface occupied by astrocytes was calculated on cultures processed for anti-GFAP immunofluorescence.For each time point, measures were done on 4 independent cultures and results were presented as box plots.Results obtained from morphometric analysis were submitted to the non-parametric Mann-Whitney test.All experiments were performed on 2-month-old male Wistar rats weighing 200–250 g originally obtained from Charles River.Animals were treated according to the guidelines specified by the Animal Welfare Unit of the Public Service of Wallonia and under the control of the local UMONS-ethical commission.Upon their arrival, the rats were transferred to an animal facility and submitted to a regular 12:12 h light/dark cycle.Tap water and standard rodent food were provided ad libitum.Experimental animals, distributed in twelve groups of 5 rats, received an intraperitoneal injection of TiO2 NPs prepared in normal saline and administered at four different doses and were sacrificed 4 days, 1 month and 2 months after the beginning of the treatment respectively.Control groups received a saline injection and were sacrificed after the same time intervals.Each animal received an IP injection of BrdU one hour prior to sacrifice in order to detect S-phase cells by immunohistochemistry.All animals were sacrificed by an overdose of Nembutal.Just after sacrifice, the brain was quickly fixed by immersion in Bouin's alcohol for 2 days.Brains were embedded in paraffin according to a standard procedure.Brain parasagittal sections were stained with Masson's Trichrome or with Cresyl violet.Specific antigens present in the tissue were unmasked by microwave pre-treatment in 0.01 M citrate buffer (2 × 5 min at a power of 900 W).
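As a minimal illustration of the morphometric comparison described above (4 independent cultures per condition, compared with the non-parametric Mann-Whitney test), the sketch below uses fabricated neurite-length values; it is not the study's data or software.

```python
# Minimal sketch (fabricated values, not the study's data): comparing mean
# neurite length per culture in control vs. TiO2-exposed conditions with the
# non-parametric Mann-Whitney test mentioned above (n = 4 cultures per group).
from scipy.stats import mannwhitneyu

control_lengths = [148.0, 152.5, 161.2, 139.8]   # mean neurite length per culture (um)
tio2_lengths = [92.4, 101.7, 88.9, 97.3]         # TiO2-exposed cultures (um)

stat, p_value = mannwhitneyu(control_lengths, tio2_lengths, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```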
Tissue sections were incubated overnight at 4 °C with primary antibodies diluted at 1:75 in PBS.After rinsing in PBS, slices were treated with the anti-rabbit/peroxidase complex for 30 min at room temperature.Bound peroxidase activity was visualized by precipitation of 3,3′-diaminobenzidine 0.02% in PBS containing 0.01% H2O2.Preparations were counterstained with hemalum and luxol fast blue, dehydrated and mounted with a permanent medium.The specificity of immunolabeling was ascertained on the basis of several criteria.In each case negative controls were assayed by omitting the primary or secondary antibody or by the substitution of non-immune serum for the primary antibody.No staining was observed on these sections under these conditions.The average aggregate size of TiO2 NPs was analyzed both by electron microscopy and by dynamic light scattering: the size of NP aggregates determined by DLS was 52 ± 15 nm and the mean size of the nanoparticle aggregates evaluated by electron microscopy was 34 ± 9 nm.The zeta potential of the TiO2 nanoparticles is about −20 mV.XPS measurements confirm that there is only titanium oxide and no traces of metallic titanium.Primary cortical cultures of rat embryos were exposed to different doses of TiO2 NPs over time periods ranging from 6 to 96 h to evaluate the effects of these NPs on neurons, astrocytes, microglia as well as their impact on oxidative stress and cell proliferation.To highlight neurons, MAP2, a protein that stabilizes microtubules in the dendrites, was detected by immunofluorescence.An important decrease in neuronal cell density was observed in cultures exposed to TiO2 NPs for 24 h at 20 μg/ml compared to control cultures.Based on the pictures taken with a fluorescence microscope, 2 parameters were quantified using a computer-assisted morphometric approach: the mean number of perikaryons and the mean length of neuronal processes.The effects of TiO2 NPs on the number of perikaryons were assessed after 24, 48, 72 and 96 h of exposure.A significant decrease in the number of neuronal cells was observed after 24 h of exposure compared to controls.This negative effect remained constant up to 96 h.The impact of increasing doses of NPs of TiO2 on the density of neurons in culture was also studied.After 24 h, the number of neurons was significantly reduced compared to the control values for the different concentrations of TiO2.These effects do not seem to increase as a function of the TiO2 concentration.Indeed, the toxic effect was already observed with the lowest dose of TiO2, causing a significant loss of neurons.The length of the neuronal processes, measured using computer software, was significantly reduced after 24 h of exposure to TiO2 NPs compared to controls.The gap between treated and control values increased as a function of culture times.This phenomenon was due to the fact that neurites continue to develop over time in the controls, whereas their growth was largely inhibited in TiO2-exposed cultures.The dose-dependent study revealed that neurons exposed to the lowest dose of TiO2 already exhibited a significant reduction of axonal and dendritic extension length compared to control neurons.A similar reduction was observed in all treated cultures independently of the dose of TiO2 present in the culture medium.Fig.
4A and B illustrate BrdU-positive proliferating cells in control and treated cultures respectively.By comparison to control, treated cells exhibited an accumulation of nanoparticle aggregates inside cytoplasmic inclusions distributed around the nucleus.The proliferating cells were predominantly identified as being of neuroblastic type by co-immunostaining.Indeed, the cells used in our experimental model were derived from rat embryos and, in consequence, contained a large proportion of neuroblasts which are able to divide up to 10 days after seeding.The number of BrdU-positive neuroblasts was drastically reduced in cultures exposed to TiO2 versus controls.These observations were confirmed by a quantitative analysis.The number of BrdU-positive cells was reduced by 45% in the treated cultures compared to the controls.We observed the same negative impact of TiO2 on neuroblast proliferation in vivo.As illustrated in Fig. 4C, neuroblastic cells form a blastema which proliferates actively in the subependymal zone of the control rat brains.The same subependymal zone of animals exposed to a high dose of TiO2 showed a drastic reduction in the number of S-phase cells.The protein used to target astrocytes was GFAP, which is the most abundant intermediate filament of the astrocytic cytoskeleton.Immunofluorescence pictures did not show significant differences after 24 and 48 h of exposure to TiO2, while a significant increase in the area occupied by astrocytes appeared after 72 and 96 h versus controls.The dose-dependent toxic effect was also studied after 72 h.The astrocytic surface showed no difference at the lowest dose but increased significantly for the higher doses.Microglial cells, the macrophages of the brain, have been highlighted by immunodetection of Iba1 protein.This protein is upregulated in activated microglia.No significant difference in the number of microglial cells was observed between control and treated cultures regardless of the concentration of TiO2 used and the exposure time.However, phenotypic differences were observed in microglial cells exposed to TiO2.Cultures exposed to NPs exhibited numerous hypertrophic microglial cells that are totally absent in controls.Moreover, some microglial cells of treated cultures presented pseudopodial extensions characteristic of cell activation and the phagocytosis mechanism.These phenotypic changes induced by TiO2 NPs reflected the transformation of quiescent microglial cells into active microglial cells.The toxic effects of TiO2 NPs in vivo were studied in Wistar rats that received different doses of NPs by intraperitoneal injection.Animals were sacrificed after 4 days, 1 month and 2 months.Aggregates of NPs were observed in different regions of the brains in animals exposed to the highest doses for 4 days, 1 and 2 months.These macroscopic aggregates of several micrometers in diameter were found in particular in the choroid plexus and the cerebellum.In addition to the presence of aggregates, the brain of animals exposed to the highest doses and sacrificed after 1 month showed areas of cell lysis localized within the white matter.These lesions were not observed in rats exposed to lower doses or shorter times.These cell necrosis areas are associated with TiO2 aggregates.These alterations were accompanied, on the one hand, by the presence of numerous pyknotic nuclei characteristic of apoptotic processes and, on the other hand, by the infiltration of polynuclear cells and lymphocytes reflecting an inflammatory response.Fibrous material accumulations were also present in these
necrotic zones.Oxidative stress was assessed using an anti-4-Hydroxynonenal antibody targeting lipid peroxidation.Qualitative observations were realized in different regions of the brain.An abundant oxidative stress was evidenced in different cerebral zones of animals exposed to the high doses for 4 days and 1 month by comparison to equivalent areas of control animals devoid of immunoreactivity.In the hippocampus, many cells were positive in the Ammon’s horn.Another region affected by oxidative stress was the cerebellum and more particularly Purkinje cells.Some of these cerebellar neurons also appeared to have a different morphological appearance from the control Purkinje cells, suggesting an apoptotic degeneration.Finally, in the sub-ependymal areas, populations of neurons characterized by large perikaryons were the site of an important oxidative stress.Differences in astrocytic density occurred in several regions of the brain between control and treated animals with 4 and 16 g/kg BW of TiO2 for 1 month.There were no apparent differences in lower doses and in animals exposed to shorter times.In the plexiform zone, the astrocytic density is lower in the treated animals versus controls.Many astrocytes were also present in the sub-ependymal space bordering the cerebral ventricles of the control rats.A significant decrease in the number of astrocytes in this area was observed in the treated animals.Finally, a high density of astrocytes was immuno-detected in the white matter of the cerebellum of the control rats.TiO2 induced also a strong reduction in the astrocytic population in that zone of the cerebellum.All of these observations reflected a general decline in the number of astrocytes in animals exposed to NPs.In recent years, TiO2 NPs have been widely used in a large panel of industrial products such as candies, toothpastes, pharmaceutical excipients, paper, paints and sunscreens.Despite the increase in the use of NPs, there is a lack of information on the impact of NPs on the environment and human health.Several in vitro and in vivo studies have demonstrated toxic effects associated with exposure to TiO2 NPs.Liu et al. 
observed inflammatory injuries in lungs following exposure to TiO2 NPs.Dysfunctions and histopathological injuries were also observed in kidneys, liver, spleen when animals were exposed to NPs .However, few studies have investigated the toxic effects of these nanoparticles on the central nervous system.The exposure of humans to TiO2 via different consumer products is estimated at 5 mg per person per day.This represents a quotidian dose of 0.07 mg/kg body weight .The toxic effect could result from the cumulative effect of this compound that are not efficiently eliminated once incorporated into cells and tissues.Indeed, the presence of TiO2 aggregates in the brains of rats one and two months after the injection demonstrates the absence of elimination of these particles after their incorporation in the brain.The doses used in the present study were in the range of those most often mentioned in the literature for in vitro and in vivo experiments .Our study attested that NPs have the ability to cross the blood-brain barrier.Macroscopic aggregates of TiO2 particles were observed in the central nervous system of rats exposed to high doses of TiO2 NPs administered by intraperitoneal injection.These aggregates have been found in different brain regions such as cerebellum, choroid plexus, hippocampus, and white matter.In the white matter, these TiO2 accumulations were associated with areas of tissue necrosis and inflammation.Such inflammatory phenomena have been described in the hippocampus of mice exposed to TiO2 with overexpression of different cytokines such as TNF-α and IL-1β.These inflammatory mediators could be released by activated microglial cells .Our in vitro approach has revealed a deleterious effect of NPs on neuronal cells.Nanoparticles were internalized in neuroblasts present in culture and induced a drastic decrease in the number of perikaryons already after 6 h of exposure.These results were in accordance with similar studies mentioned in the literature.Indeed, Hong et al. 
have demonstrated that TiO2 NPs can be internalized by hippocampal neurons of rats in vitro and were distributed in the cell nucleus inducing oxidative stress and apoptosis .Neurite outgrowth is an important process in brain development and is associated with the synaptic structure, characteristics of information transmission efficiency and neuronal synaptic plasticity.We have clearly evidenced a significant reduction in the growth of axonal and dendritic extensions in primary neuron cultures.This phenomenon can lead to a reduction in memory and learning abilities .Oxidative stress characterized by increased ROS production is recognized as the main mechanism of toxicity induced by TiO2 NPs .An oxidative stress was detected by immunohistochemistry in vivo in rats who received the highest doses of NPs.The oxidative stress, evidenced by an anti-4-Hydroxynonenal antibody targeting lipid peroxidation, was present in neuronal populations of different cerebral zones including cerebellum and hippocampus.The hippocampus was particularly affected both at the level of the Ammon’s horn and of the dentate gyrus.By studying the rate of cell proliferation in cultures derived from cerebral cortex of rat embryos, we have evidenced a significant decrease in the number of dividing neuroblasts in cultures exposed to TiO2.Neuroblasts are stem cells able to differentiate into neurons during brain development.These cells have the ability to divide in primary embryonic brain culture several days after seeding .In vivo, BrdU positive cells were detected in the subventricular zone of control animals known to be one of the few areas to still harbor stem cells in adults.The proliferation of neural stem cells in this zone is inhibited in the presence of TiO2.Two hypotheses can be raised regarding the decrease of S-phase cells in treated animals.The first is that TiO2 NPs could interfere with the capture of BrdU from blood, which could explain the decrease in the number of BrdU positive cells.However, obtaining similar results in vitro favors a second hypothesis which consists in an inhibition of the division capacities of the cells related to exposure to TiO2.This inhibition could result from a perturbation of the enzymes involved in the control of the cell cycle and DNA replication process .The toxicity in the neuroblastic cells underlines the risk linked to the TiO2 NPs on the proliferation and the differentiation of these cells during the cerebral development.Takeda et al. 
exposed pregnant mice to TiO2 and found NPs in the brain and testicles of newborn mice.These data indicate their ability to cross the placental barrier.The fetuses do not yet possess all the defenses present in adults, such as, for example, the blood-brain barrier which is still immature at this stage.The impact of NPs on the division ability of neuroblasts could have very detrimental consequences on cerebral development.In the present study, two types of glial cells were also studied: astrocytes and microglial cells.Astrocytes are the most abundant glial cells in the CNS.The increase in the area of the astrocytic network in the cultures exposed to nanoparticles in the short term can be explained by glial cell activation leading to a rapid liberation of growth factors in response to a general stress of the culture in the presence of TiO2.The massive death of neuroblastic cells in TiO2-exposed cultures may also explain a larger extension of the astrocytes, which are more resistant to the toxic action of TiO2 and which could take the space left in the culture by the rapid disappearance of neuroblasts.By contrast, our in vivo study points to a significant decrease in astrocyte density in several cerebral areas such as the plexiform zone, the cerebellum and the sub-ependymal space.In the long term, NPs could induce toxicity to these glial cells leading to a massive apoptotic process as suggested by Liu et al. .The decrease in the number of astrocytes observed after 1 month could also result from the decrease in the rate of proliferation that affects the glial cells of animals exposed to TiO2, as suggested by Márquez-Ramírez .Microglial cells are the resident macrophage-like cells in the CNS that play a pivotal role in the brain's innate immunity .If pathogens or exogenous elements such as metallic nanoparticles are introduced in the brain, microglia respond to this invasion to prevent neuronal damage .Morphological changes of these cells occurred in cultures exposed to TiO2 NPs.A larger size and formation of membrane protrusions typical of phagocytosis were detected.These phenotypes correspond to microglial cell activation caused by the presence of TiO2 NPs.When these cells are activated, they can release mediators that act on other cell types such as astrocytes .The release of these cytokines could explain the activation of astrocytes and their increased proliferation in TiO2-exposed cultures.In conclusion, TiO2 NPs clearly demonstrate a toxic effect on the CNS.The NPs have the ability to cross the BBB.Immunohistochemical analyses show oxidative stress detected in several types of neuronal cells.This toxicity is marked in vitro by a significant reduction in the number of neurons and the size of their neurites as well as an activation of the microglial cells.Inhibition of neuroblast proliferation has also been demonstrated in both in vitro and in vivo studies.The effects of TiO2 NPs on the CNS are not limited to neurons alone but also affect astrocytes and microglia.I attest that our article presents no potential conflict of interest. | Nowadays, nanoparticles (NPs) of titanium dioxide (TiO2) are abundantly produced. TiO2 NPs are present in various food products, in paints, cosmetics, sunscreens and toothpastes. However, the toxicity of TiO2 NPs on the central nervous system has been poorly investigated until now. The aim of this study was to evaluate the toxicity of TiO2 NPs on the central nervous system in vitro and in vivo.
In cell cultures derived from embryonic cortical brain of rats, a significant decrease in neuroblasts was observed after 24 to 96 h of incubation with TiO2 NPs (5 to 20 μg/ml). This phenomenon resulted from an inhibition of neuroblast proliferation and a concomitant increase in apoptosis. At the same time, a gliosis, characterized by an increase in proliferation of astrocytes and the hypertrophy of microglial cells, occurred. The phagocytosis of TiO2 NPs by microgliocytes was also observed. In vivo, after intraperitoneal injection, the TiO2 NPs reached the brain through the blood brain barrier and the nanoparticles promoted various histological injuries such as cellular lysis, neuronal apoptosis, and inflammation. A reduction of the astrocyte population was observed in some brain areas such as the plexiform zone, cerebellum and subependymal area. An oxidative stress was also detected by immunohistochemistry in neurons of the hippocampus, the cerebellum and the subependymal area. In conclusion, our study clearly demonstrated the toxic impact of TiO2 NPs on rat brain and neuronal cells and pointed out not yet reported toxicity impacts of TiO2 such as the reduction of neuroblast proliferation both in vitro and in vivo. |
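Two simple figures quoted in the entry above can be checked with back-of-the-envelope arithmetic: the ~1.13 cm² culture area of a 12-mm round coverslip and the estimated human exposure of 5 mg TiO2 per person per day expressed per kg body weight (~0.07 mg/kg). The sketch below reproduces both; the 70 kg adult body weight is an assumption, not a value stated in the text.

```python
# Back-of-the-envelope checks of two figures quoted in the entry above: the
# culture area of a 12-mm round coverslip (~1.13 cm2) and the estimated human
# exposure of 5 mg TiO2 per person per day expressed per kg body weight
# (~0.07 mg/kg). The 70 kg adult body weight is an assumption.
import math

coverslip_diameter_cm = 1.2
coverslip_area_cm2 = math.pi * (coverslip_diameter_cm / 2) ** 2
print(f"Coverslip area: {coverslip_area_cm2:.2f} cm^2")          # ~1.13 cm^2

seeded_cells = 7e4
print(f"Seeding density: {seeded_cells / coverslip_area_cm2:.0f} cells/cm^2")

daily_intake_mg = 5.0
assumed_body_weight_kg = 70.0
print(f"Daily dose: {daily_intake_mg / assumed_body_weight_kg:.2f} mg/kg bw")  # ~0.07
```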
329 | Life cycle cost and environmental assessment for resource-oriented toilet systems | The discharge of wastewater from the toilet containing human waste and flush water will not only cause the waste of nutrients and clean water, but also increase the difficulty of sewage treatment.On average, an adult produces 1.5 kg of urine and 0.14 kg of feces per day, containing 11.5 g N, 1.5 g P and 3.15 g K, of which about 88% of the N, 67% of the P and 73% of the K are contained in the urine.The urine contributes about 80% of the nitrogen, 50% of the phosphorus and 90% of the potassium in domestic wastewater.A typical secondary sewage treatment plant consumes 0.3–0.6 kWh of electricity in treating 1 m3 of wastewater per day.Additionally, the energy consumption in conveyance is several times more than that in treatment.The massive energy consumption in collection and treatment will finally result in negative environmental impacts, waste of resources and high construction and operation costs.Therefore, there is a paradox in existing sewage treatment processes: large amounts of energy and chemicals are used to treat sewage, yet this eventually results in a huge waste of resources and a heavy burden on both the environment and the economy.In the face of these problems, recovering resources and energy from human waste has become a global consensus.A lot of new treatment processes have been developed.For example, struvite precipitation was applied to recover nitrogen and phosphorus; electrochemical treatment processes were used for nitrogen recovery and disinfection; combustion was used to process human waste into burnable fuel; hydrothermal carbonization technology was used to reuse human biowastes as safe materials and recover nutrients at the same time; dehydration was applied for volume reduction of the urine; adsorption technology was used to ensure the safety of recovered products; membrane technology was used to concentrate the urine.Besides, there are even some hybrid technologies, such as the hybrid process of membrane-based pre-concentration and ion exchange for recovering clean water and nitrogen, the hybrid of flocculation and nutrient precipitation for recovering nutrients from urine, etc.The low recovered nutrient concentration or complicated processes, however, limit the further development and practical application of these technologies.As a sustainable membrane technology, forward osmosis has a significant advantage in the concentration of nutrient-rich wastewater.In an FO system, there is a draw solution side running a high-concentration solution and a feed solution side running the target solution that needs to be concentrated.In the concentration process, water diffuses from the feed solution into the draw solution through a dense, semipermeable FO membrane and the osmotic pressure is the driving force of water transport.The main advantages of FO membrane technology in comparison with other membrane technologies such as reverse osmosis, electrodialysis etc.
are: a) no need for external pressure and low membrane strength requirement, b) low fouling propensity and quick recovery when polluted, c) high rejection to the ions.Based on the advantages of the FO, it enables concentration of a range of challenging, nutrient-rich streams, achieving high enrichment factors for streams.It can be used for the concentration of source-separated urine as well and the concentrated urine can be used as liquid fertilizer for agriculture and forestry.Thus, the aim of wastewater treatment and resource recovery will be both achieved.At present, a number of researchers and institutions have been involved in this field.The FO membrane unit was used to concentrate the synthetic urine in a laboratory-scale.National Aeronautics and Space Administration used a two-stage FO infiltration system to treat urine and recycle water, and the rejection rate can reach more than 95%, and the recovery of water can reach 98%.However, there were few practical sanitation application cases of the FO membrane technology.In view of this problem, a pilot-scale resource-oriented toilet serving 500 persons each day was built in the northwest corner of the playground of Tsinghua Primary School in October 2015 by our research team.Simultaneously, it is also necessary to make a comprehensive assessment from economic and environmental perspective to determine the research priorities for the next step, evaluate the potential trade-offs for future expansion, and improve reliability before full-scale implementation.The main objective of this study is to design toilet systems for different scenarios based on the pilot-scale resource-oriented toilet using forward osmosis technology, and then evaluate economic feasibility and environmental sustainability of each system using the methodology of cost-benefit analysis and life cycle assessment.In the actual operation of the toilet with FO units, the enrichment factor of yellow water was around 2.5.After enrichment, the enriched urine was used as liquid fertilizer for greening.The reclaimed water was used to flush toilets.Both FO and RO membrane had a high rejection rate for trace organic compounds, ensuring safety of using reclaimed water to flush toilets.Additionally, the feces were digested to meet Chinese Sanitary Standard for the Non-hazardous Treatment of Night Soil.The overall performance is shown in Table 1.Considering the differences of the water supply and drainage infrastructure conditions in different regions, in this study, seven toilet systems were designed, as shown in Table 2, to meet different potential requirements from pilot-scale to full-scale application promotion process.Scenario A is a conventional public toilet system and there could be two operating modules applied in urban area and rural area respectively.The system for urban area uses tap water to flush and discharges the wastewater into sewage treatment plants finally.The system for rural area uses clean water to flush, then stores the wastewater in septic tanks, and eventually transports wastewater to STPs or directly reused after simple treatment.Scenario B1 and scenario B2 are partial resource recovery toilet systems.In these two scenarios, vacuum urine diversion toilets are used to get a source separate collection of urine and feces.Then, the FO system is used to concentrate the yellow water to obtain liquid fertilizer.A RO system is applied to recycle the draw solution used in FO system and recover flushing water from recycling process at the same time.The brown water is 
stored in the vacuum valve, and after a few days storage, the upper liquid in the vacuum valve would be discharged into sewer, and the sediment would be transported by trucks.Besides, SB1 relies on photovoltaic cells while SB2 is supported by electricity from power plants.These systems suit for areas with drainage facilities.Scenario C1, scenario C2, scenario C3 and scenario C4 are the complete resource recovery toilet systems.These systems treat brown water by anaerobic digestion on the basis of scenario B, becoming non-sewer systems.They can also be separated from the public water supply system when purify some surface water to produce enough flush water by RO system.Furthermore, this kind of system was divided into four scenarios depending on whether the water supply system is needed or not and what kind of power system is used, photovoltaic cells or power plants.These systems suit for areas where sewage collection and treatment systems need a large number of investment and operating cost.All designed toilet systems were equipped with 6 closet pans for the female and 2 closet pans and 2 urinals for the male which would be built in a park or scenic spot, serving 780 women and 800 men every day based on the standards for design of public toilets, i.e. Chinese Standard for design of urban public toilets; Beijing Specification for construction of public toilets.As all scenarios are designed for daily use, the functional unit was defined as “collecting and treating the human waste of 780 women and 800 men in one-day toilet use”.The system boundaries are shown in Fig. 1.From the pilot-scale application project mentioned in our preliminary research, it can be inferred that scenario A can produce 3344.325 L mixed urine and feces per day while scenario B1-C4 can produce 395.001 L urine and 368.662 L feces per day.More detailed design data is shown in Table A1.The main cost and benefit of these systems during their lifespan were analyzed in Table 3.More detailed calculations can be found in Tables. A3–A6.In addition, since some of the external benefits have little or no influence on the results of the calculation, the cost-benefit flows in Table 3 do not fully calculate all indirect benefit, e.g. benefit of reducing environmental degradation costs caused by water pollution, improvement of global atmospheric environment resulting from water conservation, improvement of local eco-environmental quality by reducing pollutant emissions etc.Life Cycle Assessment is a comprehensive method developed to evaluate the potential environmental impacts of a product system throughout its life cycle.It has been applied to assess sanitation systems to characterize their environmental impacts and evaluate their potential trade-offs, i.e. source-separated systems, rural toilet systems and struvite precipitation.In this study, LCA was used to compare resource-oriented toilet with conventional toilet from an environment perspective to identify the obstacles and limitations of the resource-oriented toilet in order to conduct more specific research to make it more environmentally sustainable in next stage.The system boundaries are shown in Fig. 
1.The inventory for each scenario was established in spreadsheet format and described in the LCA software GaBi 8.0.All data were collected from experimental performance, reasonable assumptions and computer models.More detailed inventory data are shown in Table A2.The life cycle impact assessment was characterized by CML 2001–Apr, in which the comprehensive environmental impacts of all scenarios were described in eleven categories: Global Warming Potential, Acidification Potential, Eutrophication Potential, Ozone Layer Depletion Potential, Abiotic Depletion Elements, Abiotic Depletion Fossil, Freshwater Aquatic Ecotoxicity, Human Toxicity Potential, Marine Aquatic Ecotoxicity, Photochemical Ozone Creation Potential, and Terrestric Ecotoxicity Potential.Based on Table 3, the ENPV results of each scenario were calculated with the ENPV equation to compare the economic benefits of the seven scenarios, and the results are shown in Table 4.As shown in Table 3, the resource-oriented toilet systems have higher cost in construction and equipment but lower cost in the operation phase when compared to the conventional toilet system.Therefore, the ENPV results, shown in Table 4, indicate that SA is not feasible from an economic perspective because of the high expenditure for the usage of tap water to flush toilets and the sewage treatment in STPs.However, SB1-SC4 are more feasible due to the application of FO technology and RO technology to concentrate yellow water, obtaining liquid fertilizer and clean water for flushing.Due to the requirement of more resource recovery facilities and equipment in SC1-SC4, SB1 and SB2, which simply recover nutrients from the urine, are more feasible from an economic perspective.It is also because the benefits of the recovered heat and other resources were not included when using anaerobic digestion to treat brown water in SC1, SC2, SC3, and SC4.Comparing SB1 with SB2, SC1 with SC2, or SC3 with SC4, it can be demonstrated that the use of photovoltaic cells benefits a little, but there are still uncertainties, because it depends on the production processes and the maintenance consumption during the 20-year lifespan.Additionally, the cost of wiring in SB2, SC2 and SC4 is not included in Table 3 because the distance is not easy to estimate and SB1, SC1 and SC3 also need wires in case of rainy seasons in some areas.Because both SC3 and SC4 use an RO system to generate enough flush water from surface water, an additional 10 m3 tank is required in these two scenarios.Therefore, SC3 and SC4 have higher construction costs than SC1 and SC2, resulting in lower ENPV values as Table 4 shows.However, the ENPV results of SC3 and SC4 are still much higher than that of SA.Moreover, SC3 can operate completely independently without the need for grid, water supply and drainage systems.Although its ENPV value is negative, it is still a better choice for rural areas to collect and treat human waste to decrease investment and recover nutrients as fertilizer.With the further upgrading and renovation of related technologies, as well as the scale and standardization of supporting equipment and products, the cost of the resource-oriented toilet systems will decrease gradually.Thus, it is reasonable to expect that some scenarios will have positive benefits when implemented at full scale.Furthermore, SB1 and SB2 are preferred in areas where water supply and sewage treatment infrastructure are available, while SC1 and SC2 have priority in areas where sewage treatment involves a high cost, and SC3 can also be applied in areas without any external facilities.The inventory data
are shown in Table A2.For each scenario, the energy consumption, emissions and environmental benefits were calculated, and the inventory data in all scenarios were estimated with the experimental data conducted in the pilot-scale resource-oriented toilet.The presented results also include fertilizer offsets based on the mass of nitrogen, phosphorus, and potassium, as one kg of N, P, K from the liquid fertilizer production would offset the equivalent kg of N, P, K in the commercial fertilizer.Fig. 2 shows the environmental profiles of different scenarios.The environmental impacts of the resource-oriented toilet systems are lower than the conventional toilet system in almost all aspects as indicated in Fig. 3.In some aspects, there are even positive impacts to the environment, e.g. MAETP of SB1, SC1, and SC3.However, due to electricity consumption in SB2, SC2, and SC4, the burdens on TETP of these three scenarios are much higher than that of SA.Furthermore, SC4 has a higher environment cost in TETP, ADP fossil, HTP, AP, and POCP than other scenarios.Besides, some similar trends can be found between SB1 and SC1.The reason is that the fertilizer offsets are the same in SB1 and SC1, and additional biogas and biowaste offsets, as well as heat and nutrients benefits, in SC1 which have not been fully recovered.Furthermore, this analysis highlights that SB1, SC1 and SC3, especially SC1, are feasible from environmental sustainability.Further analysis was provided in Figs. 3–5.Fig. 3 provides overall impacts for electricity consumption in the resource-oriented toilet systems.In detail, electricity consumption is the main contributor to GWP, AP, EP, ADP fossil, FAETP, HTP, MAETP, POCP and TETP.The results of the environmental impacts include the offsets of the biogas, liquid fertilizer, etc.The offsets reduce the total impacts a lot and even turn them to positive impacts to the environment.As results, the contributors of the electricity consumption would be more than 100%, and even result in negative values.Furthermore, the impacts of the electricity consumption in SB2, SC2, and SC4 are even higher than total impacts of SA in some aspect, e.g. AP, ADP fossil, FAETP, HTP, MAETP and TETP.In particular, RO system used to concentrate draw solution and produce clean water consumes the largest percent of electricity in the resource-oriented toilet systems, and contribute a lot to the environmental impacts.The same result was discussed in other study.However, fertilizer offsets in the resource-oriented toilet systems have more positive environmental impacts as shown in Figs. 3–5.It suggests that the resource-oriented toilets with forward osmosis concentrating urine provide a potential solution to sustainable development and decentralized sanitation success.The benefit of using photovoltaic cells to replace power grid is uncertain because the impacts would be affected by the multi-Si production process and the maintenance consumption during 20 years lifespan, thus, an overall contrast between SB1 with SB2 was conducted in Fig. 4, also a contrast between SC1, SC2 and SC2 in Fig. 5.As these two figures shows, whether use photovoltaic cells or not have obvious impacts on GWP, AP, ADP fossil, HTP, MAETP, POCP, TETP.Both Figs. 3 and 4 show the burdens mainly come from the coal-fired power plants.Figs. 
4 and 5 suggest that fewer treatment processes and emissions in the resource-oriented toilet would result in less environmental harm, as SC1 is better than SB1 and SC3 in almost all aspects.In addition, the transportation of surface water was not included in the inventory of SC3 and SC4, because it is not reasonable to transport water with trucks in areas without any external facilities; instead, workers would replace trucks.Furthermore, as the inventory data mainly come from reasonable estimation based on the existing pilot-scale resource-oriented toilet used in Tsinghua Primary School, a lot of external factors will affect the overall uncertainty.For example, the specialized equipment consumes more material and energy in its production process; the composition of urine and feces depends a lot on personal habits: children may prefer sweet food like cake and ice cream, while the dietary habits and digestion ability of adults may be wider and stronger than those of children.A ±10% deviation was applied to the inventory data, which would be affected by the toilet scale, the composition of urine and feces, maintenance consumption during the 20-year lifespan and instability of the system; the analysis was then conducted and the results were presented as the error bars in Figs. 4 and 5 to show the potential uncertainty ranges.A 10% improvement of water flux in the forward osmosis process was assumed to evaluate the potential impacts of further development of membrane technology, and the new environmental profiles of SB1 and SC3 are presented in Fig. 6.This small improvement of water flux has obvious impacts on MAETP and ODP.High-performance forward osmosis membranes with high water flux and solute rejection rates will greatly benefit the environmental sustainability of resource-oriented toilets.Another source of uncertainty that was not addressed is the septic tank in scenario A.In China, almost all of the toilets are attached to a septic tank to treat urine and feces as a pretreatment process.The environmental impacts of this process are not easy to evaluate.Thus, this process was ignored and an assumption was made that the urine and feces were transported to STPs directly through the sewer.As a result, the estimated environmental cost would decrease, because much more methane would be produced under anaerobic conditions in the septic tank.In contrast, there is no need to build a septic tank in other scenarios.This turns toilets from pollution centers into resource centers.All scenarios were distributed in an X-Y coordinate system according to their ENPV results and environmental impacts in Fig. 7.As shown in Fig.
7, the environmental costs of SB2, SC2 and SC4 are higher than those of SB1, SC1 and SC3. This is mainly attributable to electricity consumption. SB1 has the highest ENPV, while SC1 has the lowest environmental impacts. The expenditure of SC3 and SC4 is higher than that of SC1 and SC2 because an RO system is used to treat surface water to produce flush water. However, as the quality standards for flush water differ significantly from those of RO effluent, both the economic and the environmental costs would decrease if simpler processes such as coagulation, precipitation and chlorine disinfection were applied to purify the surface water. Furthermore, the comparison between SB1 and SC1, or SB2 and SC2, indicates that the complete resource recovery scenarios with more treatment processes result in a lower environmental impact but a higher economic cost. Overall, scenario B1 is the best choice to replace the conventional toilet system, offering both environmental sustainability and economic feasibility. Clean energy sources such as solar and wind energy could support the full-scale application of these resource-oriented toilet systems in further expansion. The uncertainty mainly comes from the use of specialized equipment, fluctuations in urine and feces composition, and the instability of the systems. More research is needed to improve the efficiency of the membrane system; both improved forward osmosis performance and reduced energy consumption would be highly beneficial. Moreover, more effort is needed to investigate the technical feasibility of the resource-oriented toilet systems in rural areas. If the new toilet systems become more widely accepted by users in different areas, the manufacturing cost will fall; at the same time, the more users there are, the more liquid fertilizer offsets are produced. | The rich content of nutrients in human waste provides an outlook for turning it from a pollutant into a potential resource. The pilot-scale resource-oriented toilet with forward osmosis technology was demonstrated to have advantages in recovering clean water, nitrogen, phosphorus, potassium, biogas, and heat from urine and feces. For the possibility of further full-scale implementation in different scenarios, six resource-oriented toilet systems and one conventional toilet system were designed in this study. The methodologies of cost-benefit analysis and life cycle assessment were applied to analyze the life cycle economic feasibility and environmental sustainability of these systems. As the results indicated, resource-oriented toilets with forward osmosis technology concentrating urine proved to have both economic and environmental benefits. The economic net present value results of the new resource-oriented toilets were much better than those of the conventional toilet. The energy consumption in resource-oriented toilets contributes substantially to the environmental impacts, while resource recovery, such as fertilizer production and fresh water harvesting, offsets a large share of them. Taking both life cycle economic feasibility and environmental sustainability into consideration, the partial resource-oriented toilet (only recovering nutrients from urine) is the best choice, and the totally independent resource-oriented toilet could be applied to replace conventional toilets in areas without any external facilities such as sewer and water supply systems. |
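The accounting logic described in the preceding entry, in which recovered products are credited against gross impacts and a ±10% deviation on the inventory data is propagated into the error bars of Figs. 4 and 5, can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' LCA model: the scenario names follow the text, but the function net_impact and every numeric value are placeholders introduced here for demonstration.

# Illustrative sketch (not the authors' model): fertilizer offsets and a
# +/-10% inventory deviation combined into net impact scores per scenario.
def net_impact(gross_impact, offsets, deviation=0.10):
    """Return (net, low, high) for one impact category of one scenario.

    gross_impact: impact from energy/material inputs (e.g. kg CO2-eq for GWP)
    offsets:      credits from recovered products (fertilizer N/P/K, biogas, water)
    deviation:    relative uncertainty applied to the inventory-derived terms
    """
    net = gross_impact - sum(offsets.values())
    low = gross_impact * (1 - deviation) - sum(v * (1 + deviation) for v in offsets.values())
    high = gross_impact * (1 + deviation) - sum(v * (1 - deviation) for v in offsets.values())
    return net, low, high

# Hypothetical GWP inventory for two scenarios (kg CO2-eq per functional unit).
scenarios = {
    "SA":  {"gross": 120.0, "offsets": {}},                               # conventional toilet, no credits
    "SB1": {"gross":  60.0, "offsets": {"N": 14.0, "P": 5.0, "K": 6.0}},  # urine-derived fertilizer credits
}

for name, s in scenarios.items():
    net, low, high = net_impact(s["gross"], s["offsets"])
    # A negative net value corresponds to a net environmental benefit, which is
    # why a single contributor can account for more than 100% of the net total.
    print(f"{name}: net = {net:.1f} (range {low:.1f} to {high:.1f})")

Such a sketch also makes clear why electricity consumption can appear to contribute more than 100% of a net impact once the offsets push that total toward or below zero.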
330 | An Atlantic-Pacific ventilation seesaw across the last deglaciation | The two-step increase in atmospheric CO2 at the end of the last glacial maximum is well documented, yet the source of CO2 and its mechanism of release remain elusive.Synchronous drops in the radiocarbon activity of atmospheric CO2 are observed in numerous records leading to the proposal that CO2 was released from a radiocarbon depleted oceanic abyssal reservoir that had previously been isolated from the atmosphere.Upon release, the radiocarbon-depleted carbon would mix with the atmospheric carbon pool, increasing CO2 whilst reducing its 14C/12C ratio.It is possible that the observed deglacial changes in atmospheric radiocarbon activity could primarily reflect perturbations to the Atlantic overturning that had only a minor impact on atmospheric CO2, which would have responded much more sensitively to relatively small changes in the ventilation of the ocean interior via the deep Southern Ocean and the Pacific.One way of testing these hypotheses is to assess the existence of a significant volume of radiocarbon-depleted water in the ocean interior prior to deglaciation, as well as the occurrence of changes in marine radiocarbon ‘ventilation’ that would be consistent with renewed ocean–atmosphere carbon exchange across the last deglaciation, specifically in the Southern Ocean and/or Pacific.A plethora of recent studies at numerous locations, investigating changes in the distribution of radiocarbon in intermediate and deep waters since the LGM, has yielded contradictory conclusions.Many of these studies may have been hampered by a lack of radiocarbon-independent calendar age control, and therefore the exclusive use of benthic–planktonic age offsets, which can be relatively insensitive to changes in ocean–atmosphere radiocarbon age offsets, particularly during periods of rapid change in atmospheric radiocarbon activity.Nevertheless, extremely large radiocarbon depletions, observed at some shallow/intermediate water locations, have been interpreted as indicating that poorly ventilated waters exited through the Southern Ocean and were transported via Antarctic Intermediate Water into the Atlantic and Pacific Oceans.However radiocarbon data from several other locations that are also believed to have been influenced by AAIW across the last deglaciation have been interpreted as showing no large change in the ventilation age of this water mass since the last glacial period.If the general pattern of ocean circulation seen today is also assumed for the last glacial period, it is hard to see how AAIW could have carried radiocarbon depleted water to the sites where it is reported without leaving any sign at those where it seems not to have been detected.A coherent framework for the evolution of intermediate water ventilation across the last deglaciation therefore has yet to be proposed.We seek to address this question using new and existing radiocarbon data, in combination with intermediate complexity numerical model simulations.Here we present a record of intermediate water radiocarbon-based ventilation change across the last deglaciation, in the equatorial Atlantic off the coast of Brazil.Radiocarbon measurements were conducted on benthic and planktonic foraminifera from core GS07-150-17/1GC-A.This site is currently bathed predominantly in AAIW, with a minor influence of North Atlantic deep water, NADW, which lies immediately below.AAIW is predominantly formed in two locations north of the Subantarctic Front, in the southeast 
Pacific and southwest Atlantic where surface waters are subducted to intermediate depths during austral winter and early spring.The two main formation sites lead to two types of AAIW, one in the South Pacific, and one in the Atlantic that is colder and fresher.In the North Pacific, modified southern-sourced intermediate waters compete for space with North Pacific intermediate water, a low-salinity cold water mass which today forms in the Sea of Okhotsk.Foraminifera were picked from the >212 μm size fraction and where necessary from the 150–212 μm fraction.28 monospecific samples of Globigerinoides ruber and 15 samples of mixed benthic foraminifera were picked and graphitized in the Godwin Laboratory at the University of Cambridge using a standard hydrogen/iron catalyst reduction method.For some samples it was necessary to combine benthic foraminifera from two adjacent samples in order to have enough material to date accurately.AMS-14C dates were obtained at the 14Chrono Centre, Queens University Belfast.All dates are reported as conventional radiocarbon ages following Stuiver and Polach.The age model for this core was constructed based on the radiocarbon ages of 28 planktonic samples.The AMS-14C ages were converted to calendar ages using BChron and the calibration curve IntCal13 with a surface ocean–atmosphere 14C age offset of 458 14C-years,.Since knowledge of changes in the shallow sub-surface reservoir age over the deglaciation is lacking in this context, we assume a constant modern reservoir age.However, this represents a severe limitation and should be seen as a working hypothesis only; reservoir ages must have varied to some extent during deglaciation if only due to changes in the partial pressure of atmospheric CO2, resulting in reservoir ages perhaps ∼200–300 yrs higher than present during the last glacial period, depending on the state of the overturning circulation.The radiocarbon-based ventilation age of the bottom waters, was determined using the difference between paired benthic and planktonic radiocarbon dates, giving the 14C age offset between the bottom of the water column and the top of the water column.Although arguably it is preferable to use radiocarbon age offsets between bottom-water and the atmosphere, our age-model is based on the assumption of a constant surface ocean–atmosphere 14C age offset of 458 14C-years, which means that B-Atm will exhibit the same patterns of variability as B–P, albeit with a constant offset of 458 14C-years.For simplicity we therefore only refer to B–P offsets in this study.During the LGM our site was well ventilated, with a benthic–planktonic ventilation age similar to that of the early Holocene, 226 yrs and 199 yrs respectively.Over the deglaciation however, clear yet subtle changes in the ventilation age occurred.There are two periods, one during Heinrich-Stadial 1, HS1, and the other at the start of the Younger Dryas, YD, where the ventilation age increases by 200–500 yrs.These are transient events lasting around 2200 yrs and 900 yrs in HS1 and the YD respectively.These periods are both associated with cooling in the Northern Hemisphere and a ‘thermal bipolar seesaw’ response in the Southern high latitudes.During these times, the benthic stable carbon isotopic signature, δ13C, also decreased by up to 0.8‰.Our site is currently at the boundary between AAIW and NADW, and therefore would in principle be sensitive to changes in the ventilation state and the relative contribution of both water masses.Nutrient proxies from the North Atlantic 
provide conflicting evidence for changes in AAIW presence at intermediate-depth during the cold stadials.A recent neodymium isotope study, indicates reduced influence of AAIW between 671 m and 1100 m water depth on the Demerara Rise during both HS1 and the YD.Whilst a definitive conclusion regarding changes in AAIW does not emerge from these studies, what is clear is that NADW formation and export were certainly reduced at these times.Therefore, whilst changes in the contribution of AAIW at our site over the deglaciation remain ambiguous, a decrease in ventilation at times of reduced NADW export is evident.This ventilation decrease could be due to a slower overturning rate of NADW, a change in the initial radiocarbon disequilibrium of newly formed NADW, or due to a greater influence of southern-sourced waters at our site as a result of NADW shoaling, or indeed due to a combination of these.One alternative scenario, whereby the radiocarbon-based ventilation age and nutrient content of AAIW increased during HS1 and the YD with no change in the relative contribution of northern versus southern-sourced water at our site, can be ruled out given the evidence for reduced NADW export, as well as observations of low sub-surface reservoir ages in the Southern Ocean by the end of HS1 and the during the YD.Changes in NADW rather than AAIW must therefore be the primary cause of the high ventilation ages at our site during stadials.Radiocarbon-based LGM ventilation ages at our intermediate-depth site are much lower than those in the deep North Atlantic, confirming the existence of well-ventilated Glacial North Atlantic Intermediate Water, GNAIW.This water mass was most likely reduced during HS1 and the YD leading to a greater proportion of southern sourced waters at our site.During the Bølling–Allerød, B–A, both the intermediate and the deep Atlantic are as well ventilated as in the modern ocean, suggesting that a ‘modern-like’ NADW circulation cell was established at that time.Ventilation ages in the early Holocene are around 300 yrs lower than in the modern ocean potentially due to enhanced NADW at the end of the deglaciation or due to changes in the influence of AAIW at our site.Radiocarbon-based ventilation age changes measured in a marine sediment core located on the Chilean Margin, display striking similarities, albeit of opposite sign, to our radiocarbon based ventilation age changes in the intermediate Atlantic.Radiocarbon data from this southeast Pacific AAIW location, initially interpreted as showing no change in ventilation over the deglaciation, in fact shows slightly reduced ventilation ages during both HS1 and the YD.The clear anti-phasing of ventilation ages between these two cores reveals an Atlantic–Pacific seesaw in intermediate water ventilation over the last deglaciation.Numerical modeling experiments performed with Earth System models of intermediate complexity, LOVECLIM and the UVic ESCM also show a Pacific–Atlantic seesaw, although this is most strongly expressed in the Northern Hemisphere.In these idealized model experiments, the North Atlantic is perturbed with freshwater, resulting in a cessation of NADW formation and decreased Δ14C over most of the North Atlantic and below 2000 m in the South Atlantic.These simulations report a coincident decrease in ventilation ages in the intermediate depth Pacific.More specifically, these simulations show increased formation of North Pacific intermediate and/or deep waters and spreading of these waters at depths of 500–1500 m throughout the 
entire Pacific.In both the UVic ESCM and LOVECLIM simulations, lower ventilation ages at intermediate depths in the Pacific are mainly due to the vigorous formation of North Pacific Intermediate and Deep Water, which leads to the southward advection of younger waters to the South Pacific, South Indian Ocean and through the Agulhas leakage.Ventilation age reconstructions from the North Pacific further support the model results, indicating a clear seesaw in the ventilation age of NW Pacific intermediate/deep waters versus deep waters in the North Atlantic.Although a transient pulse of increased ventilation in the North Pacific has also been observed at a depth >3600 m during HS1, the extent of this ventilation anomaly has yet to be confirmed.It is also notable that two records from intermediate depths in the low-latitude Pacific conflict with the inference of widespread decreased ventilation ages in the intermediate Pacific during North Atlantic stadials.These show large increases in radiocarbon-based ventilation age during HS1 and the YD.However data from one of these cores suggests that intermediate water oxygenation improved during Heinrich events, as expected if ventilation improved.These contrasting observations could be reconciled if the observed changes in oxygenation were sufficient to rapidly oxidize buried ‘fossil’ organic carbon at these locations of high export productivity and high sediment accumulation, thus contributing to very high radiocarbon ages in benthic foraminifera.However, this scenario can only work if a very large amount of very old organic carbon is oxidized in this way.The availability of such a large amount of old sedimentary carbon might not be entirely plausible, and its oxidation would likely have caused under-saturation of pore-waters with respect to carbonate, contrary to evidence for enhanced carbonate preservation.An alternative explanation for the very high radiocarbon-based ventilation ages is the input of radiocarbon dead carbon from clathrates as the ocean warms, however this also remains controversial.It is notable that the Atlantic–Pacific seesaw described above would have operated in unison with a previously identified alternation between North Atlantic and Southern Ocean sources of ventilation of the deep Atlantic.While this other ventilation seesaw has been proposed to primarily affect the deep Atlantic, it is possible that intermediate waters in the Pacific and therefore on the Chilean Margin, were affected by changes originating both in the North Pacific and the Southern Ocean.Indeed, Fig. 
3 shows that shallow sub-surface and intermediate depth reservoir/ventilation ages in the Southern Ocean show a similar deglacial pattern to that of the NW Pacific, supporting the proposition of a two-pronged ventilation pulse from the North Pacific and Southern Ocean.This is supported by radiocarbon-based ventilation ages from a core in the South Atlantic, which show an anti-phase relationship to our core from the equatorial Atlantic.Decreases in radiocarbon ventilation ages are thus seen in the intermediate South Atlantic during late HS1 and the YD, synchronous with increases at our site.Whilst the magnitude of radiocarbon-based ventilation changes observed at low- and southern latitude intermediate depths off Brazil, in the South Atlantic and off Chile is larger than seen in the model simulations, the general patterns of change are in good agreement and are further reinforced by ventilation reconstructions from the NW Pacific, North Atlantic, and Southern Ocean.Model and proxy data display a similar pattern of response, but models underestimate the ventilation changes occurring at intermediate depth and do not reproduce the changes in Southern Ocean overturning and ventilation that are suggested by various deglacial records.This indicates that not all processes contributing to enhanced ocean interior ventilation are captured by the highly idealized models.Nevertheless, taken together, the existing data and numerical model simulations provide strong prima facie support for the operation of two ‘ventilation seesaws’, whereby a weakening of NADW formation triggers an increase in North Pacific and Southern Ocean overturning, and therefore opposing changes in ventilation at intermediate depths in the Atlantic and Pacific basins.We therefore propose that when the North Atlantic is not ventilating the ocean interior, the North Pacific and Southern Ocean are.Despite the consistent support provided by data and model simulations for the proposed Atlantic–Pacific ventilation seesaw, many more well-resolved radiocarbon time-series will be needed to completely document the character of intermediate-water circulation changes across the last deglaciation.This is underlined by the fact that not only the magnitude of change but also the direction of change in radiocarbon-based ventilation age will depend sensitively on the location and in particular on the depth of the monitoring/study location.Depth transects at the key time periods will therefore be vital for confirming the details of intermediate depth circulation changes and water mass distributions over the last deglaciation.Intermediate-water radiocarbon-based ventilation ages from the Brazil margin show clear yet relatively low amplitude changes over the last deglaciation.These changes are of the same magnitude as, but anti-correlated with, those reported from the Chilean Margin,.This Atlantic–Pacific “seesaw” behavior is also reported in numerical modeling studies, which see a switch to active NPDW during NADW shutdown.If NPDW was initiated at HS1 and the YD, the resulting exchange of CO2 between the ocean and the atmosphere is likely to have contributed to the deglacial increase in atmospheric CO2 and the simultaneous fall in atmospheric radiocarbon activity.This mechanism would have bolstered a similar effect that appears to have operated via the Southern Ocean at the same time, via a North–South ‘Atlantic ventilation seesaw’.More ventilation age reconstructions are needed to determine how much of the Pacific was affected during HS1 and the YD 
before the amount of CO2 released to the atmosphere through this mechanism can be quantified. Our findings do not support the existence of an extremely radiocarbon-depleted signature conveyed via AAIW during the YD and HS1. Instead they underline the potential importance of relatively subtle yet globally coordinated changes in ocean dynamics and ventilation for the global carbon cycle, and ultimately the deglacial process. | It has been proposed that the rapid rise of atmospheric CO2 across the last deglaciation was driven by the release of carbon from an extremely radiocarbon-depleted abyssal ocean reservoir that was 'vented' to the atmosphere primarily via the deep- and intermediate overturning loops in the Southern Ocean. While some radiocarbon observations from the intermediate ocean appear to confirm this hypothesis, others appear to refute it. Here we use radiocarbon measurements in paired benthic- and planktonic foraminifera to reconstruct the benthic-planktonic 14C age offset (i.e. 'ventilation age') of intermediate waters in the western equatorial Atlantic. Our results show clear increases in local radiocarbon-based ventilation ages during Heinrich-Stadial 1 (HS1) and the Younger Dryas (YD). These are found to coincide with opposite changes of similar magnitude observed in the Pacific, demonstrating a 'seesaw' in the ventilation of the intermediate Atlantic and Pacific Oceans that numerical model simulations of North Atlantic overturning collapse indicate was primarily driven by North Pacific overturning. We propose that this Atlantic-Pacific ventilation seesaw would have combined with a previously identified North Atlantic-Southern Ocean ventilation seesaw to enhance ocean-atmosphere CO2 exchange during a 'collapse' of the North Atlantic deep overturning limb. Whereas previous work has emphasized a more passive role for intermediate waters in deglacial climate change (merely conveying changes originating in the Southern Ocean) we suggest instead that the intermediate water seesaw played a more active role via relatively subtle but globally coordinated changes in ocean dynamics that may have further influenced ocean-atmosphere carbon exchange. |
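The ventilation-age bookkeeping used in the preceding entry, a benthic minus planktonic (B–P) 14C offset, with B-Atm following from the working assumption of a constant 458 14C-yr surface reservoir age, can be written out as a short Python sketch. The helper name ventilation_ages and the sample age pairs are hypothetical; they are not data from core GS07-150-17/1GC-A.

# Minimal sketch of the ventilation-age arithmetic described above.
# Ages are conventional radiocarbon ages in 14C years; the sample values are
# placeholders, not measurements from the core discussed in the text.
RESERVOIR_AGE = 458  # assumed constant surface ocean-atmosphere offset (14C yr)

def ventilation_ages(benthic_age, planktonic_age, reservoir_age=RESERVOIR_AGE):
    """Return (B-P, B-Atm) for one paired benthic/planktonic measurement."""
    b_p = benthic_age - planktonic_age   # bottom water relative to surface water
    b_atm = b_p + reservoir_age          # bottom water relative to the atmosphere,
    return b_p, b_atm                    # valid only while the reservoir age is held constant

# Hypothetical pairs of (benthic 14C age, planktonic 14C age):
pairs = [(10450, 10250), (13900, 13400)]
for benthic, planktonic in pairs:
    b_p, b_atm = ventilation_ages(benthic, planktonic)
    # With a constant reservoir age, B-Atm tracks B-P exactly, offset by 458 yr,
    # which is why the text reports B-P offsets only.
    print(f"B-P = {b_p} 14C yr, B-Atm = {b_atm} 14C yr")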
331 | An assessment of soil erosion prevention by vegetation in Mediterranean Europe: Current trends of ecosystem service provision | Soil erosion is one of the main environmental problems in European Mediterranean agro-forestry systems and for the sustainability of important ecosystems.Several legislative and scientific initiatives have focussed on this issue since the late 1950s and recently the Thematic Strategy for Soil Protection defined a coherent framework for the assessment of European soils.It pointed out the concentration of soil related risks in southern Europe and the absence of a standardized approach to obtain policy relevant indicators.The ecosystem service concept is an effective communication tool to bridge knowledge between science and policy.In the case of soil erosion prevention, the TSSP recognizes the importance and knowledge gaps related to the contribution of specific ecosystems and ecosystem functions to the mitigation of soil erosion.The ES concept also supports guidelines for the development of policy relevant indicators for international monitoring systems because ES indicators that are sensitive to changes in land use, calculated using standardized methods, provide critical sources of information for agro-forestry systems under pressure from policy, environmental or climatic drivers.Several studies and international initiatives) are contributing to the development of a coherent indicator set for the mapping and assessment of ES.Under Action 5 of the European Union Biodiversity Strategy to 2020 the Working Group on Mapping and Assessment of Ecosystems and their Services was set up to develop an assessment approach to be implemented by the EU and its Member States.Supported by a growing scientific literature, this working group identified the need for more consistent methodological approaches to quantify and map ES and underlined the importance of finding indicators of ES provision that are sensitive to measure policy impacts."Vegetation regulates soil erosion and thereby provides a major contribution to Mediterranean agro-forestry system's sustainability.However, the regulation of soil erosion is projected to decrease in the coming decades in the region due to overgrazing, forest fires, land abandonment, climate change, urbanization or the combination of these drivers.And the intensity of these drivers has increased in the last decade.Vegetation acts as an ES provider by preventing soil erosion and therefore mitigating the impact that results from the combination of the erosive power of precipitation and the biophysical conditions of a given area.Consequently, to better represent the impacts related to these drivers it is necessary to map not only the capacity for ES provision but also the actual ES provision and the remaining soil erosion.This paper presents a spatially and temporally explicit assessment of the provision of SEP by vegetation in Mediterranean Europe between 2001 and 2013.It provides insights on past and current trends of ES provision and enables the mapping of vulnerable areas.Finally, it demonstrated the strength of having a coherent and complementary set of ecosystem service indicators to inform policy and land management decisions.The Mediterranean Environmental Zones were used to define the geographic extent of the study, which was constrained to continental Europe and a few larger islands due to data availability.The study area corresponds to 1.06 Million km2 and covers all European Mediterranean countries.It encompasses three major 
environmental zones, i.e. Mediterranean Mountains, which experience more precipitation than elsewhere in the Mediterranean, Mediterranean North and Mediterranean South, both characterized by warm and dry summers and precipitation concentrated in the winter months.Within the region agriculture is generally constrained by water availability and poor soils, and grasslands, vineyards and orchards are important land cover/use features.The conceptual approach for mapping and assessment of regulating services used in this paper has recently been described by Guerra et al., and is summarized in Fig. 2.SEP is provided at the interface between the structural components of the agro-forestry system and its land use/cover dynamics, which help mitigate the potential impacts from soil erosion.This approach combines a strong conceptual framework with the “avoided change” principle, characterizing regulating ES provision as the degradation that does not happen due to the contribution of the regulating ES provider.To assess SEP following this framework it is necessary to first identify the structural impact related to soil erosion, i.e. the erosion that would occur when vegetation is absent and therefore no ES is provided.It determines the potential soil erosion in a given place and time and is related to rainfall erosivity, soil erodibility and local topography.Although external drivers can have an effect on these variables, they are less prone to be changed directly by human action.The actual ES provision is a fraction of the total potential soil erosion, and it is determined by the capacity for ES provision in a given place and time.We can then define the latter as a key component to quantify the fraction of the structural impact that is mitigated and to determine the remaining soil erosion).This capacity for ES provision is influenced by both internal drivers and external drivers.A detailed description of the methodological and conceptual frameworks is given in Guerra et al.To understand the relation between drivers and the provision of ES, it is essential to translate the dynamics of the agro-forestry systems into a set of process related indicators that express system responses.We propose a set of eight indicators that describe the different processes that contribute to SEP, including indicators describing the state and dynamics of the structural impact, the ES mitigated impact, the actual ES provision and the capacity for ES provision.Together, these eight indicators are sensitive to changes in the climatic profile of each region, soil types, topography, management options and environmental drivers.Although all indicators have been produced at a 250 m resolution, these were finally aggregated by summation to a 5 km grid resolution to better communicate changes and trends in ES provision and to avoid false precision related with the different data quality of the input datasets.In the case of the capacity for ES provision the average was used as, considering the adimensional character of this indicator, the sum does not provide any relevant interpretation value.For the ES assessment, the structural impact was calculated using the expression ϒ = R × LS × K, and the gradient of ES mitigated impact was determined by βe = ϒ × α.Technical infrastructure that could reduce impacts locally was not consider given the spatial scale of the study.Following these two expressions the actual ES provision can be calculated by Es = ϒ − βe.Although no absolute measure of soil erosion is obtained, this mathematical 
formulation will generate a spatially explicit gradient of the potential soil loss and the related gradient of ecosystem service provided by vegetation cover.Artificial surfaces were excluded from the evaluation and all parameters were directly resampled to a 250 m resolution using an average filter.The spatial distribution and temporal trends of the indicators were analyzed and mapped, and an overall ES provision profile was calculated for the entire study area.This was done using spatial statistics to obtain a total sum value for the entire study area, and made it possible to isolate vulnerability areas and to pinpoint the periods with higher impact on SEP.The vulnerability areas were identified by superimposing the variation of the capacity for ES provision, with the variation of the ES mitigated impact, both calculated between 2001 and 2013.A breakdown of the total land surface area covered by different combinations of these two variables reveals four groups related to each of the four quadrants.The first group represent areas that, despite their increase in the capacity for ES provision, reveal an increase of ES mitigated impact, i.e. despite the increase of vegetation capacity to halt soil erosion, there was an increase in the remaining soil erosion after the ES provision.The second consisted of areas with a decrease of the capacity for ES provision and an increase of the ES mitigated impact, i.e. this group reflects the expected trend that a decrease in vegetation capacity to halt soil erosion resulted in more soil erosion.In the third group are combined areas with a decrease of both the capacity for ES provision and the ES mitigated impact, i.e. reflecting a reduction in the efficiency of the ES to halt soil erosion, and finally the fourth group included areas with an increase of capacity related to a decrease of the ES mitigated impact.This assessment thus identifies three types of vulnerable areas that require policy action.Following this analysis, two smaller case-studies with contrasting regional ES provision profiles are described.Their specific ES provision profiles were constructed based on the description of the main indicators following the same methodological approach as for the overall ES provision profile described for the entire study area.The structural impact followed the rainfall dynamics during the same period: decreasing between 2001 and 2009 but increasing toward 2013.Overall, a decrease of 7.86% was observed between 2001 and 2013.Using 2013 as a reference year, the distribution of the structural impact showed relatively high values in the north of Italy, south of France, the East coast of the Adriatic Sea and the western and southern areas of the Iberian Peninsula.This spatial distribution remained throughout the period of the analysis with the exception of 2009, when the distribution was less pronounced.Between 2001 and 2013 the areas that experienced an increasing structural impact over the four months in analysis were located in the south of Italy and in the south of the Iberian Peninsula.The results also showed that this increase is mainly related to an increase and higher variability of the structural impact in October following a dip in September.The ES mitigated impact presented a different trend from the structural impact with an increase between 2001 and 2005 followed by a relatively constant decrease in its values until 2013.For 2013 it showed a concentration of high values mainly in the Southeast of the Iberian Peninsula, and in particular areas of 
the North of Italy and South of France.Together with some areas in the East of the Iberian Peninsula, South of Italy, and East of Greece, these areas also corresponded to the regions where this indicator has increased between 2001 and 2013.This trend implies a degradation of the conditions present in a given place as the total amount of soil loss increased.Despite of these degradation areas, the overall result for the entire region showed a decrease of 15.09% of ES mitigated impact between 2001 and 2013.This decrease was mainly located in Greece and in large portions of Italy, Spain and Portugal.As expected, the actual ES provision showed the same spatial and temporal pattern as the structural impact.By contrast, the capacity for ES provision revealed two very different patterns.The first pattern included the Iberian Peninsula and some areas in Southern Italy and in Eastern Greece, which were characterized by lower values and a more differentiated distribution of this indicator.The spatial location of these low values was similar to the spatial distribution of high values of structural impact, particularly in the South of the Iberian Peninsula and in the South coast of Italy.The second pattern concerned areas that showed more homogeneous distribution of higher values of the capacity for ES provision.Examples of these areas are the South of France, the East coast of the Adriatic Sea and the North of Italy.Despite this variable distribution, considering the entire region the overall values of the capacity for ES provision increased slightly between 2001 and 2013, from 0.815 to 0.844.This increase originated mainly from the South and East coast of Italy and from large areas in the North of Iberian Peninsula, while in the South of the Iberian Peninsula the capacity for ES provision decreased between 2001 and 2013.This overall increase is the result of a constant positive trend between 2001 and 2013 that is more substantial between 2009 and 2013.Regarding these areas in the South of the Iberian Peninsula, and using the monthly variation of the capacity for ES provision, we infer that this decreasing trend was related mainly to a decrease of provisioning capacity in October, particularly between 2001 and 2005.These spatial and temporal decrease patterns of the capacity for ES provision were in line with the increase of structural impact in the region.A more detailed analysis of the rate of effective ES provision showed substantial dissimilarities between the different regions that were even more pronounced over the entire period.While in the first period the Iberian Peninsula showed substantial losses, in the following periods these losses were located more toward the North of Italy and the South of France and to the South of Italy and Greece.Overall, although not statistically significant, the entire study region presented a positive trend in terms of the effectiveness of service provision, particularly in the period between 2009 and 2013 where the rate of effective ES provision increased by 1.62%.The vulnerability analysis revealed that 43.5% of the total area is related to one of the three groups of vulnerable areas.The second and the fourth group demonstrated the expected inverse relation between the capacity for ES provision and the ES mitigated impact.Put differently, the increased capacity to prevent soil erosion is generally positively correlated to a decrease in soil erosion.In contrast, the other two groups included areas where despite an increase of capacity there is still an increase 
of impact, as well as areas where a decrease of capacity is followed by a decrease of the ES mitigated impact.Therefore to interpret trends of SEP provision to formulate effective mitigation measures these two different indicators need to be considered.Combined, these two indicators give a clear picture of the underlying questions that rise in each area.Fig. 5 suggests that in 64.7% of the total area the ES mitigated impact decreased, mainly due to an increase of the capacity for ES provision.In contrast, from the 35.3% of areas with an increase of the ES mitigated impact, 53.3% also showed an increase of the capacity for ES provision.The two selected case-study regions illustrate two very different trends.R1, the NUTS 3 Ciudad Real in Spain, presents an overall negative trend of the rate of effective ES provision.This happens despite the slight increase in the capacity for ES provision in the same period and is related to the substantial increase in the ES mitigated impact in the first period, which resulted from a decrease of 15.08% in the rate of effective ES provision for the same period.Despite the recent improvements in the rate of effective ES provision, the regional SEP dynamics resulted in an increase of 43.98% of the ES mitigated impact between 2001 and 2013.In contrast, R2, the NUTS 3 Trikala in Greece, presents an overall negative trend of the rate of effective ES provision accompanied by a decrease of 58.04% of the ES mitigated impact in the same period.Although this region presents a positive development in terms of SEP provision, the general trend of the rate of effective ES provision shows a systematic decrease in the period of analysis, despite the increase of 5.19% in the capacity for ES provision.The analysis of the spatial and temporal distribution of SEP used a diverse set of process indicators that encompass the impacts related to the dynamics of soil erosion and to the service provision generated by vegetation.Compared to other methodological approaches that usually base their assessments on a single indicator, our approach provides more insight and more easily identifies the relations between the underlying landscape processes and their consequences in terms of service provision and of the remaining impacts.Also, although the actual ES provision can be used as an indicator for valuation purposes, it is not a good “stand alone” indicator for trend analysis as it is dependent on the spatial distribution, magnitude and temporal trend of the structural impact.Our results show that the rate of effective ES provision can be a more insightful indicator as it provides a better grasp of the local/regional ES provision performance.This indicator corresponds to the percentual variation of the early time slice in comparison to the following.This means that if a particular area lost a considerable amount of ES provision in a given period, it is probable that in the next period it registers a gain.Although this does not mean that the net provision of ES was positive considering the entire period.This was illustrated in the South of the Iberian Peninsula where in the first period there was a substantial loss of the rate of effective ES provision accompanied by relative gains in the following periods, although, in the same area, there was a cumulative increase of the ES mitigated impact.In this case this dynamic can also be explained by the high variation in the capacity for ES provision registered in the region.SEP alone cannot be used to determine the effectiveness of ES 
provision in a given region.It is also important to consider the interactions and eventual trade-offs between services in more strategic assessment of the net ES provision in a given region to better define local environmental targets.Our results illustrate the value of having a comprehensive and complementary group of process-based ES indicators.They show an overall, non-significant, increase in SEP in the region.A worrying trend becomes apparent when assessing areas that showed a decrease of the capacity for ES provision and an increase of the ES mitigated impact.These areas point to the eventual insufficiency, ineffectiveness or non-existence of soil protection measures and reflect very important regional differences.While in Italy, the Northeast coast of the Baltic Sea and the South of France this dynamic is related to a predominance of forest areas, in the Iberian Peninsula and in Greece it is related to a predominance of agricultural areas.This vulnerability analysis also shows that, between 2001 and 2013, 25% of areas with an increase of the capacity for ES provision were subject to a further increase of soil loss.These results are related to the 18.8% of areas with an increase of both the ES mitigated impact and the capacity for ES provision, revealing a situation where the presence of protective vegetation cover did not result in an enhanced soil protection.The two smaller case-studies illustrate the power of creating a regional ES provision profile for assessing the efficiency of SEP.In R1 we observe that even with an overall increase of 0.88% in the capacity for ES provision, the region had an increase of 43.98% of the ES mitigated impact following a decrease of 4.73% in the rate of effective ES provision.Although there is an improvement in SEP provision in recent years, this exposes the insufficiency of current regional initiatives to halt soil erosion by promoting SEP.By contrast, R2 shows a completely different pattern with constant gains of efficiency, even when there is a decrease of 3.07% in the capacity for ES provision that is reflected in a slight decrease of 0.08% on the rate of effective ES provision.Both examples demonstrate the possibility to define regional targets that can steer regional conservation and economic development policies that aim to minimize these impacts and their effects on human wellbeing.Declines in regulating services provision like SEP can result in declines in ecosystem resilience, and affect the provision of other ES.Our results show that, in total, 43.5% of the entire study area presented some type of vulnerability regarding the mitigation of soil erosion.If this information would be available in national and international monitoring systems, policy and management decisions could be better informed and action could be taken timely.The insight provided by the combination of indicators suggests that current policies and land management fail to safeguard SEP to halt soil erosion.One possible explanation could be that most of the policies that land managers follow correspond to generic top-down sectorial approaches.The spatial patterns and indicator values found here indicate that further disaggregation, consideration of context, and place-based or regional targets could improve SEP in Mediterranean Europe and prevent undesired ES provision trajectories.Finally, in future research, the relative positive trends found in this paper should be contextualized and regionally assessed in relation to regional social, ecological and economic.This means 
that further research should identify whether the observed positive trends correspond to an increase of management efficiency and/or policy implementation or if they are related to land abandonment processes that eventually resulted in an increase in the capacity for SEP. This paper produced a spatially and temporally explicit assessment of the provision of SEP in Mediterranean Europe in the last decade. We found that in general the provision of this service is increasing in Mediterranean Europe, particularly between 2009 and 2013. Despite these positive results, 43.5% of the region is vulnerable and in need of focused attention to identify causes and implement effective mitigation measures. The results suggest that current policy and land management actions are not safeguarding the provision of SEP. This emphasises the need to evaluate and assess regulating ES considering a bundle of process-based ES indicators. Particularly for SEP, this would provide a clear representation of the different dynamics associated with the provision of the service. This study suggests the need for more adaptive policy design that can cope with local trends of ES provision and the definition of regional ES provision targets to mitigate regionally relevant impacts. | The concept of ecosystem services has received increased attention in recent years, and is seen as a useful construct for the development of policy relevant indicators and communication for science, policy and practice. Soil erosion is one of the main environmental problems for European Mediterranean agro-forestry systems, making soil erosion prevention a key ecosystem service to monitor and assess. Here, we present a spatially and temporally explicit assessment of the provision of soil erosion prevention by vegetation in Mediterranean Europe between 2001 and 2013, including maps of vulnerable areas. We follow a recently described conceptual framework for the mapping and assessment of regulating ecosystem services to calculate eight process-based indicators, and an ecosystem service provision profile. Results show a relative increase in the effectiveness of provision of soil erosion prevention in Mediterranean Europe between 2001 and 2013. This increase is particularly noticeable between 2009 and 2013, but it does not represent a general trend across the whole Mediterranean region. Two regional examples describe contrasting trends and illustrate the need for regional assessments and policy targets. Our results demonstrate the strength of having a coherent and complementary set of indicators for regulating services to inform policy and land management decisions. |
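The indicator algebra quoted in the preceding entry (structural impact ϒ = R × LS × K, ES mitigated impact βe = ϒ × α, actual ES provision Es = ϒ − βe, with indicators aggregated by summation and the capacity indicator averaged) can be illustrated with a small numpy sketch. This is a minimal sketch under stated assumptions, not the authors' full workflow: the grids R, LS, K and alpha are random placeholders, and alpha merely stands in for the mitigation coefficient whose derivation the text does not reproduce.

# Sketch of the indicator algebra quoted above (not the full assessment workflow).
import numpy as np

rng = np.random.default_rng(0)
shape = (4, 4)                       # stand-in for a small 250 m raster tile
R = rng.uniform(50, 300, shape)      # rainfall erosivity
LS = rng.uniform(0.1, 5.0, shape)    # slope length/steepness factor
K = rng.uniform(0.01, 0.06, shape)   # soil erodibility
alpha = rng.uniform(0.05, 0.6, shape)  # placeholder mitigation coefficient

gamma = R * LS * K                   # structural impact: potential erosion with no vegetation
beta_e = gamma * alpha               # ES mitigated impact: erosion remaining after provision
Es = gamma - beta_e                  # actual ES provision: erosion prevented by vegetation

# Aggregation as described in the text: sums for the impact/provision indicators.
print(f"structural impact (sum):  {gamma.sum():.1f}")
print(f"ES mitigated impact (sum): {beta_e.sum():.1f}")
print(f"actual ES provision (sum): {Es.sum():.1f}")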
332 | Late onset of neutral lipid storage disease due to novel PNPLA2 mutations causing total loss of lipase activity in a patient with myopathy and slight cardiac involvement | Neutral lipid storage disease with myopathy is an autosomal recessive disorder characterized by abnormal accumulation of triacylglycerols in cytoplasmic lipid droplets in most tissues, including muscle, heart, liver and peripheral blood."A rapid laboratory diagnosis of NLSDM can be easily performed through the detection of lipid vacuoles in peripheral blood leucocytes, also known as Jordans' anomaly .To our best knowledge, forty-six NLSDM patients have been clinically and genetically reported .Clinical symptoms of NLSDM are characterized by progressive myopathy, cardiomyopathy, hepatomegaly, diabetes, chronic pancreatitis, short stature and by high serum creatine kinase levels .The degree of clinical manifestations appears highly variable: from minimal symptoms to a more severe condition, causing physical disability and premature death due to dilated cardiomyopathy.However, since NLSDM is a rare metabolic condition, the pathophysiology of the disease is largely unclear and phenotype–genotype correlations remain incomplete .NLSDM is caused by mutations in PNPLA2 coding for the adipose triglyceride lipase, a member of the patatin-like phospholipase domain-containing proteins .This lipase is a lipid droplet-associated protein that catalyses the first step in the hydrolysis of TAGs, stored within LDs .The human ATGL protein consists of 504 amino acids comprising the patatin domain with catalytic residues S47 and D166, at the N-terminus, and a hydrophobic lipid binding domain at position 315–360 towards the C-terminus .Thirty-five PNPLA2 mutations variably affecting protein function or production have been identified so far in NLSDM patients.Many mutations are expected to generate either null alleles or truncated ATGL proteins with the catalytic domain partially lost, all resulting in dramatic impairment of LD metabolism.The outcome in most patients carrying these mutations has been reported as severe.On the contrary, recent studies showed that missense mutations, resulting in an ATGL protein with residual lipolytic activity, may be associated with slowly progressing myopathy and sparing of myocardial muscle .Here we describe clinical and genetic findings in a woman harbouring two novel mutations in PNPLA2.Although these mutations completely abolish lipase activity, our patient showed slowly progressive skeletal muscle weakness with late presentation, in association with mild cardiac impairment.The proband is a 54-year-old woman, presenting at age 39 with right upper limb proximal weakness, slowly progressing over the years.Her previous medical history had been unremarkable.At age 47 she noticed muscle weakness in lower limbs.Creatine kinase was 579 U/L.Urine organic acids, plasma carnitine and acyl-carnitine profiles were normal.Electromyography performed elsewhere showed a mixed pattern with predominant neurogenic signs and fibrillations in upper limb muscles; nerve conduction studies were normal.The patient, when first admitted in our outpatient clinic at age 49, reported moderate disability due to upper limb muscle weakness.No bulbar symptoms, muscle cramps or pain was reported.Neurological examination showed marked right upper limb weakness with abduction limited to 30 degrees and flexion to 45 degrees, whereas only mild weakness against resistance was observed on the left side.In addition, severe elbow flexion and 
moderate elbow and finger extension weakness were noticed on the right side.No axial or lower limb weakness was found.Mild hypertrophy of calves was observed.Tendon retractions or scapular winging was absent.Cranial nerves were normal.Pyramidal or cerebellar signs were absent; deep tendon reflexes were reduced in lower limbs, while triceps and biceps reflexes were inelicitable; superficial and deep sensibility were normal.Muscle MRI, performed at age 49, showed predominant posterior thigh and leg compartment involvement, as already reported .A quadriceps muscle biopsy, performed at the same age, revealed myogenic features with vacuoles mainly distributed in hypotrophic type I fibres; staining with Oil Red O showed lipid accumulation.No degeneration or regeneration was observed.Electron microscopy confirmed the excessive accumulation of lipid droplets without signs of mitochondrial alteration."Jordans' anomaly was found in the patient leucocytes at age 51.Cultured skin fibroblasts, obtained from a patient dermal biopsy, also revealed an abnormal accumulation of neutral lipids into LDs.The neurological follow-up showed moderate worsening of the condition and development of moderate muscle weakness of lower limbs, mainly in proximal muscles.In addition, neck flexor muscles were also impaired.The patient remained fully ambulant during the follow-up period.Cardiological evaluation through ECG and heart echo scan were normal until the age 53, when mild left ventricular diastolic dysfunction was detected, without any progression over the following 12 months.At age 54, heart MRI and standard spirometry were normal while a cardiopulmonary exercise test showed an exercise limitation.All together the cardiopulmonary exercise test data were suggestive of peripheral muscle deconditioning with normal cardiac function.Family history was negative for neuromuscular disorders.Parents were not consanguineous.In our patient, molecular analysis of PNPLA2 detected the two following novel heterozygous mutations: c.696+4A>G and c.553_565delGTCCCCCTTCTCG.The first mutation, inherited from the father, is localized in intron 5 and predicts in frame skipping of exon 5.The aberrant mRNA loses part of the sequence coding for the catalytic site of ATGL protein.The deletion extends from Arg163 to Leu232, including the Asp166 residue, which is part of the catalytic dyad.Hence, the c.696+4A>G mutation, disrupting the ATGL catalytic site, causes total loss of its enzymatic function, as previously shown by functional studies .The c.553_565delGTCCCCCTTCTCG mutation is localized inside exon 5; extensive RT-PCR analysis showed no PNPLA2 mRNA production from this mutant allele.This variant was carried also by the 81-year-old mother, who, at age 77, was normal on neurological examination and had a normal muscle biopsy.Both PNPLA2 mutations were not observed in >200 control alleles and were submitted to GenBank.To verify whether patient allele 1, showing the in frame skipping of exon 5, was expressed into patient cells, we performed western blotting analysis of ATGL using total protein extracts from patient fibroblasts.As shown in Fig. 
3d, a mutated ATGL protein with lower molecular weight was detected in patient fibroblasts in comparison with control fibroblasts.Informed consent was obtained from the study participants.Patient investigations were conducted in accordance with protocols approved by the institutional review boards of the Carlo Besta Neurological Institute and the Catholic University of the Sacred Heart.The main clinical feature of NLSDM is skeletal muscle myopathy, which is present in 100% of patients.Muscle weakness usually presents in early adult life, between 20 and 30 years.A later onset of the muscle phenotype has been observed in our patient, as well as in some previously reported cases .In these patients, mainly PNPLA2 missense mutations, which partially save lipase activity, have been identified.On the contrary, in our patient with typical muscle phenotype characterized by predominant proximal upper limb muscle weakness, two severe mutations that completely abrogate protein function have been detected.The first mutation causes the skipping of exon 5 and the production of a mutated protein that loses part of the catalytic site, thus abrogating ATGL lipase activity; the second mutation determines complete lack of mRNA expression and protein production.A homozygous PNPLA2 mutation affecting the invariant G of the donor splice-site of intron 5 has previously been described in a Japanese male patient .This mutation caused the production of two aberrant mRNAs: one retaining 93 bp of intron 5 and resulting in a new reading frame shift and a stop at position 162; the other consisting of a PNPLA2 sequence lacking 210 bp due to in frame skipping of exon 5, exactly as the allele 1 of our female patient.Despite the molecular similarity, some important clinical differences emerged between the Japanese and the Italian patients, concerning, in particular, their cardiac involvement.These differences might be due to homozygous versus heterozygous condition or to modifier genes and epigenetic factors possibly involved in such variable phenotypic expression.In the Japanese patient muscle weakness presented earlier than in our patient and was associated with severe heart involvement presenting at age 33 and requiring heart transplantation.On the contrary, our female patient showed only slight cardiac involvement at age 53.Although the presence of PNPLA2 severe mutations is similar in men and women, cardiac damage was reported in almost 20% of NLSDM female patients and in 55% of male patients .Indeed, considering as severe the mutations that cause lack of ATGL protein production or expression of truncated proteins with catalytic site only partially conserved, we note that they represent 25% of total PNPLA2 mutations in female patients and 29% in male patients.The latter observation suggests that gender modulates clinical cardiac phenotype in NLSDM, also beyond the severity of mutations in PNPLA2.To this regard, it is known that oestrogens regulate the expression of peroxisome proliferator-activated receptor family members.PPARs control mitochondrial metabolism and are mainly involved in fatty acid and glucose utilization in heart .Very recently, Higashi et al. 
reported a distinct cardiac phenotype in two NLSDM siblings carrying the same homozygous PNPLA2 mutation; consistent with the hypothesis that there is a gender difference in the phenotypic clinical expression of NLSDM, the male sibling died of heart failure at the age of 31, while his sister was still alive, although presenting with hypertrophic cardiomyopathy. Certainly, additional larger clinical studies are warranted to elucidate whether female gender possibly plays a protective role in NLSDM. NLSDM is an ultra-rare disease; its pathophysiology is largely unclear, phenotype–genotype correlations are incomplete, and a cure is still lacking. In this regard, an international registry for NLSDs, recently established, may help to collect worldwide clinical and genetic data and to develop common therapeutic protocols. In conclusion, we describe a 54-year-old NLSDM female patient showing late onset myopathy in association with slight cardiac involvement, although the identified novel mutations completely abrogate PNPLA2 protein function. Our data expand the allelic spectrum of PNPLA2 mutations, providing further evidence for genetic and clinical NLSDM heterogeneity. This work was supported by grant GGP14066 from Telethon Foundation. | Neutral lipid storage disease with myopathy (NLSDM) presents with skeletal muscle myopathy and severe dilated cardiomyopathy in nearly 40% of cases. NLSDM is caused by mutations in the PNPLA2 gene, which encodes the adipose triglyceride lipase (ATGL). Here we report clinical and genetic findings of a patient carrying two novel PNPLA2 mutations (c.696+4A>G and c.553_565delGTCCCCCTTCTCG). She presented at age 39 with right upper limb abduction weakness slowly progressing over the years, with asymmetric involvement of proximal upper and lower limb muscles. Cardiological evaluation through ECG and heart echo scan was normal until age 53, when mild left ventricular diastolic dysfunction was detected. Molecular analysis revealed that only one type of PNPLA2 transcript, with exon 5 skipping, was expressed in patient cells. This aberrant mRNA causes the production of a shorter ATGL protein lacking part of the catalytic domain. This is an intriguing case, displaying severe PNPLA2 mutations with a clinical presentation characterized by slight cardiac impairment and full expression of severe asymmetric myopathy. |
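The reading-frame reasoning in the preceding entry, where skipping of the 210-bp exon 5 yields an in-frame deletion of 70 residues (Arg163 to Leu232) and hence a shorter ATGL protein, whereas the second allele produces no detectable mRNA at all, can be illustrated with a toy Python check. This is a didactic sketch only; the function frame_effect and the variant dictionary are constructed here for illustration and are not part of any published analysis pipeline.

# Toy illustration of the reading-frame logic discussed above.
# A deletion whose length is a multiple of 3 preserves the downstream reading
# frame (in-frame), as with the 210 bp skipping of PNPLA2 exon 5; 13 bp would
# shift the frame, although here no transcript was detected from that allele.
def frame_effect(deleted_bp):
    return "in-frame deletion" if deleted_bp % 3 == 0 else "frameshift"

variants = {
    "c.696+4A>G (exon 5 skipping)": 210,   # 70 codons removed, catalytic Asp166 lost
    "c.553_565delGTCCCCCTTCTCG": 13,       # allele shown to produce no detectable mRNA
}

for name, deleted_bp in variants.items():
    print(f"{name}: {deleted_bp} bp removed -> {frame_effect(deleted_bp)}")
    if deleted_bp % 3 == 0:
        print(f"  protein shortened by {deleted_bp // 3} amino acids")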
333 | Bacterial community dynamics in a swine wastewater anaerobic reactor revealed by 16S rDNA sequence analysis | Wastewater from industrial, municipal, and agricultural sources has been utilized for microalgal cultivation and nutrients removal, therefore it has been proposed as an alternative to organic carbon sources.In wastewater facilities, the anaerobic digester, wherein anaerobic microorganisms consume organic carbon, is an essential component of wastewater treatment systems.Anaerobic digestion is a versatile technology for processing various organic wastes produced in urban, industrial, and agricultural settings.During the process, the organic matter is decomposed by a complex community of microorganisms and converted into two main end products: digestate and biogas.While the digestate can be used as a fertilizer, the biogas with about 60–70% methane content represents an attractive source of renewable energy.Biogas brings not only socio-economic benefits but also offers the possibility of treating and recycling the agricultural residues and byproducts in a sustainable and environmentally friendly way.Because of these benefits in waste management and in energy production, anaerobic digestion technologies increasingly become popular, particularly with the global emphasis on sustainability.Anaerobic bioprocesses are very sensitive to environmental changes, and this sensitivity makes their maintenance complex.In field applications, maintaining the stability of the microbial community in the anaerobic digesters is one of the major considerations.The balance of the microbial community in the digester might be disrupted by any fluctuations in the operational parameters.For example, a sudden increase in the organic loading rate could lead to accumulation of volatile fatty acids, which results in acidification of the system.Such perturbations decrease the efficiency of wastewater treatment and biogas production and may even lead to digester failure.In the event of system failure, the restoration process is time-consuming and expensive.Previous studies have investigated the effects of organic shock loading on the microbial communities in anaerobic digesters.Based on these studies, specific bacterial and archaeal groups that can tolerate or even take advantage of a higher concentration of acetate and volatile fatty acids increase in dominance and may help to stabilize the system.Unfortunately, the more detailed pictures of overall community dynamics are still lacking, mainly due to the technical limitations of traditional culture-independent methods for characterizing complex microbial communities.Most of the previous studies were based on methods such as terminal restriction fragment length polymorphism, amplified ribosomal DNA restriction analysis, denaturing gradient gel electrophoresis, single-strand conformation polymorphism, or Sanger sequencing of clone libraries.These methods typically provide a resolution of dozens of OTUs in the samples.However, a recent study of 14 sewage treatment plants in Asia and North America revealed that at least 1183–3567 microbial species could be found in each sample.These high levels of microbial diversity suggest that the traditional culture-independent methods could not provide sufficient resolutions to understand the microbial communities in anaerobic digesters.Advances in methodologies are necessary to obtain a more detailed understanding of these complex communities, which is fundamental for the improvement of digester efficiency and stability.In 
this study, high-throughput 454 pyrosequencing technology was utilized as the molecular tool to investigate the microbial community dynamics in response to an organic shock loading.The aim was to investigate whether the community composition returns to its original state or reaches a new equilibrium upon restabilization.To distinguish between these two alternative predictions, time-series samples were collected from a lab-scale anaerobic CSTR and used for characterizing the bacterial community composition at different time points before and after an organic shock loading.The efficiency of anaerobic digestion is influenced by some parameters such as constant temperature, pH-value, supplying of nutrient and stirring intensity, thus it is crucial that appropriate conditions for anaerobic microorganisms are provided.A bench-top anaerobic CSTR fed with swine wastewater was setup in this study, which includes a peristaltic pump for influent and effluent flow and a wet test gas meter for quantification of gas production rate.The reactor was maintained at a constant temperature of 37 ± 1 °C and the void volume was 4.5 L.The swine wastewater obtained from a pig farm in Zaociao Township, Taiwan was used as the substrate.Because methanotrophs in the deeper layers could be utilized as trustworthy inoculum sources for newly constructed biocovers, the seed inoculum was collected from the anaerobic sludge of the same pig farm.After collection, wastewater was stored at 4 °C and mixed thoroughly prior to use.Because the duplication rate of anaerobic bacteria is usually 10 days or more and the retention time must be sufficiently long to ensure that the amount of microorganisms removed with the effluent is not higher than the amount of reproduced microorganisms, the system was started at a 10-day HRT.The amount of 250 mL new swine wastewater influent with an average concentration of 6.5 g COD/L was semi-continuously fed into the reactor once a day.With an organic loading rate of 0.65 g COD/L/day, system was operated about 120 days to ensure the reactor reached a steady state.For simulating an organic shocking loading, HRT was then changed to 5 days and the organic loading rate was 1.3 g COD/L/day.For liquid part, both influent and effluent samples were collected once a day and analyzed including pH and COD according to the Standard Methods.Gas samples were recorded using the wet test gas meter and the methane composition was determined using the GC-TCD equipped with a thermal conductivity detector and a Porapak Q column where helium was used as the carrier gas.To isolate the DNA of microbial community, the digester samples were homogenized using an orbital shaker for 20 min and the microbial cells were suspended.Then, two filtration steps with Calbiochem Miracloth and Whatman Grade 3 filter paper were used to remove the large particles.The filtrate was centrifuged at 17,000 × g for 30 min to collect the microbial cells."The resulting pellet was used for total DNA extraction by the Promega Wizard Genomic DNA Purification Kit following the manufacturer's instruction.To identify the bacterial species contained in each sample, PCR was performed to amplify the 16S rDNA using the universal primers 27F and 511R with the appropriate 454 Life Sciences adaptor sequence.In addition, the forward primer used for each sample contained a unique 6-bp barcode for multiplexed sequencing.To minimize the biases that may occur in individual PCR reactions, three independent reactions were performed for each sample and the 
products were pooled before sequencing.Each 50 μL PCR mixture consisted of 1 μL PfuUltra II Fusion HS DNA polymerase, 5 μL of supplied 10× buffer, 2.5 μL of 5 mM dNTP mix, 0.5 μL of 10 mg/mL BSA, 1 μL of each 10 μM primer, and 50 ng of template DNA.The PCR program included one denaturing step at 95 °C for 3 min, 25 cycles of 95 °C for 40 s, 55 °C for 40 s, and 72 °C for 40 s, followed by a final extension at 72 °C for 7 min.Gel electrophoresis was used to check for the existence of a single band of the expected size for each PCR product.For the positive samples, PCR products were purified with the MinElute PCR Purification Kit.To further confirm the successful amplification of bacterial 16S rDNA in the broad range PCR, the purified PCR products were cloned using the CloneJet™ PCR Cloning Kit and transformed into HIT-JM 109 competent cells.A limited number of clones were sequenced using the BigDye Terminator v3.1 Cycle Sequencing Kit on an ABI Prism 3700 Genetic Analyzer to verify the presence of the expected 16S rDNA fragment, multiplexing barcodes, and the adapters for 454 sequencing.For high-throughput pyrosequencing, the purified PCR products from each sample were pooled in equal proportions and sequenced using a 454 Jr. sequencer.The procedure for sequence analysis is based on that described in our previous studies.The pyrosequencing flowgrams were converted to sequence reads with corresponding quality scores using the standard software provided by 454 Life Sciences.The sequences were quality-trimmed using the default settings of LUCY.After the quality trimming, reads shorter than 400-bp were removed from the data set and the sample-specific barcode and the primer sequence were identified and trimmed from each sequence.Sequences that lacked a recognizable barcode or the forward PCR primer were discarded.To identify the OTUs present in these samples, the partial 16S rDNA sequences were hierarchically clustered at 100%, 99%, and 97% sequence identity using USEARCH version 5.2.32.In this study, the 97% sequence identity threshold was chosen because it is commonly used to define bacterial species.To assess the sampling depth provided by the sequencing reads, 10,000 randomization tests were performed to obtain the rarefaction curve for each sample type.For taxonomic assignment, the representative sequence of each OTU was used as the query for the CLASSIFIER program provided by the Ribosomal Database Project.OTUs that could not be assigned to a particular phylum with at least a 70% confidence level were removed.For verification of the CLASSIFIER results and taxonomic assignment at the species level, a BLASTN similarity search against the NCBI nt database was performed for the representative sequence of each OTU.For community level analysis and comparisons of individual samples, the software package Fast UniFrac was utilized to perform sample clustering and Principal Coordinates Analysis.The OTUs were weighted by abundance and the branch lengths were normalized.To generate a reference tree for the Fast UniFrac analysis, the representative sequences from all OTUs were aligned using the RDP Aligner.The resulting multiple sequence alignment was examined to ensure the 5′-end of each sequence was mapped to the expected location of the 16S rDNA.The program FastTree was then used to infer a maximum likelihood phylogeny of the OTUs.The anaerobic CSTR system was semi-continuously fed with swine wastewater at a concentration of 6.5 g COD/L and operated at an organic loading rate of 0.65 g COD/L/day at the
beginning.The reactor was controlled at 37 ± 1 °C and HRT was maintained at 10 days before the shock.The pH value and COD removal efficiency are presented in Fig. 1A.The organic shock loading is highlighted by a red triangle below the X-axis.For influent and effluent samples, the average pH values in this study were 7.28 and 7.36, respectively.The COD removal efficiency before the shock was in the range of 55–75%.After the shock, HRT was changed to 5 days and the organic loading rate increased to twice that before the shock, since increasing the organic loading corresponds to reducing the HRT.The COD removal efficiency decreased to 20% and then increased to a range of 60–80% after the 130th day.Before simulating an organic shock loading, the biogas production rate was mostly in the range of 1–2 L/L/day.Days on which the digester samples were not used for sequencing are labeled in gray.After the shock, three periods of increased biogas production rates were observed.The first period started immediately after the shock and the biogas production rate was about 4 L/L/day.The second period, with 3.325 L/L/day, occurred on the 13th day after the shock.Then the third peak was observed during days 21–23 and increased the biogas production rate to 3.375–6.085 L/L/day.On days 24–32 after the shock, the biogas production rate returned to its original state as before the shock loading.The methane content was on average 68% prior to the shock and slightly increased to 73% after the shock.The 16S rDNA PCR products from the 21 samples were multiplexed and sequenced using two runs on a 454 Jr. sequencer.After the quality trimming, demultiplexing, and removal of primer sequences, 174,340 usable sequencing reads were obtained with an average length of 454 bp.The quartiles of the length distribution were 427, 464, and 483 bp, respectively.To identify the OTUs present in these samples, these 174,340 reads were hierarchically clustered.The number of OTUs identified at each sequence identity threshold was 118,765 OTUs at 100% identity, 44,783 OTUs at 99% identity, and 10,134 OTUs at 97% identity.After the taxonomic assignment by RDP CLASSIFIER, sequences that could not be assigned at the phylum level with at least 70% confidence were discarded, as they were likely to represent chimeras or other artifacts introduced during the PCR or pyrosequencing process.This quality control step removed 19,598 reads that were assigned to 1761 OTUs.Finally, the occurrence of OTUs in each of the samples was examined and the OTUs found in only one sample were discarded because these OTUs did not provide any information regarding the relatedness among samples and were likely to be results of sequencing artifacts.The final data set that passed all the quality control steps contained 148,216 reads that were assigned to 3339 OTUs.Based on this data set, the number of reads per sample ranged from 5192 to 10,501 and the number of OTUs per sample ranged from 563 to 1008.Supplementary Tables A and B related to this article can be found, in the online version, at http://dx.doi.org/10.1016/j.jbiotec.2014.11.026.Taxonomic assignment of the OTUs.Relative abundance of each OTU in the samples.These OTUs could be confidently assigned at the phylum level.However, taxonomical assignments at lower levels have much lower confidence values.At the genus-level, the median confidence value is only 55%.Similarly, for the best BLAST hit from the NCBI database searches, the median sequence identity is only 90.8%, which appears to be too low for genus-level assignment.This
uncertainty in lower taxonomic level assignments is not surprising given that most of the OTUs found are likely sampled from lineages that have not been cultivated or characterized before.This is a challenge faced by all recent high-throughput sequencing based studies that utilize culture-independent approaches.Thus, even though the community analyses of this study were performed at the species level, we chose to provide summary statements at the phylum level.The sampling depth and the number of OTUs identified in each sample were approximately two orders of magnitude higher than the previous studies that utilized traditional culture-independent methods such as T-RFLP or SSCP.To investigate if the sequencing efforts in this study were adequate to quantify the species richness in the samples, the rarefaction curve for each sample type was inferred.The individual samples of the same type were pooled and resampled 10,000 times to determine the number of OTUs found with different numbers of sequencing reads.The results indicated that the feedstock samples may contain >1000 OTUs and the digester samples may contain >2500 OTUs.These diversity estimates are consistent with previous studies using the 454 or Illumina sequencing technology to characterize the microbial community in anaerobic digesters.Although the sampling depth provided by the 454 pyrosequencing technology is much higher than the traditional culture-independent methods, the experimental design employed in this study appeared to be insufficient to fully quantify the bacterial diversity in the individual samples.A sampling depth of >100,000 reads per individual sample probably would be required to better characterize the diversity of these bacterial communities.For such sampling depth, a sequencing technology that could provide a higher throughput such as Illumina would be required.However, improvement in throughput of the Illumina technology over the 454 pyrosequencing comes at a cost of reducing sequence length.This reduction in sequence length would impact the resolution of OTU identification and taxonomic assignment.The Venn diagram in Fig. 
2C illustrates the numbers of shared and unique OTUs among the sample types.The results showed that 1280 identified OTUs were not found in the feedstock samples but in the digester samples both before and after shock loading.These OTUs might be present in low abundance in the feedstock and were undetected due to the limited number of sequencing reads per sample.Alternatively, these OTUs might be specific to the previous batches of feedstock and were introduced into the digester during its early operation.The digester samples after shock loading contained the highest number of sample type specific OTUs, possibly due to the large number of sequencing reads used.To examine the bacterial community composition, the relative abundance of bacterial phyla in each sample was estimated.The taxonomic assignments were based on the RDP Classifier results and the relative abundance of each phylum was presented as the percentage of reads in each sample.In the three feedstock samples, Proteobacteria is the most dominant phylum.This observation may reflect the bacterial community composition in the pig guts or the environment of the farm.Within the anaerobic digester, Bacteroidetes and Firmicutes increased in relative abundance.This shift in community composition is likely a result of adaptation to the digester environment.Intriguingly, the operation temperature plays a role in determining the community composition.In the study of Zhang et al., Proteobacteria are more dominant in digesters that operate at lower temperatures while Firmicutes are more dominant in thermophilic digesters.The organic shock loading introduced fluctuations in the community composition.Notably, Proteobacteria gained dominance and then declined by day 30.Additional direct comparisons of our findings with previous studies on anaerobic digester microbiota are difficult because of the differences in the broad-range PCR primers used.Depending on the primer design, each study may have different biases for or against certain phylogenetic groups.Furthermore, the differences in the 16S rDNA regions amplified prohibit sequence alignment to find corresponding OTUs across different studies.Thus, we chose a more conservative approach to limit the comparisons to quantitative overview such as OTU richness and the phylum-level abundance presented here.To obtain a higher resolution of the community composition, a heatmap was utilized to illustrate the relative abundance of the 3339 OTUs found in each of the 21 samples.The taxonomic assignments were based on the RDP Classifier results.As in the phylum-level abundance, Proteobacteria was the most dominant phylum in the feedstock samples while Bacteroidetes and Firmicutes were relatively abundant in the digester samples.Interestingly, although Proteobacteria became relatively abundant on day 17 after the shock, this observation was due to the increase of few Proteobacteria OTUs rather than the phylum as a whole.This finding indicates that while the composition analysis at the phylum level is useful for providing an overview, such broad patterns do not provide sufficient resolutions to capture the intricacy of community dynamics.Furthermore, another major limitation of 16S rDNA-based survey is that we could not know the biological roles of these OTUs.From the comparative genomics studies over the past decade, we know that even closely related strains belonging to the same species may occupy different ecological niches due to minor differences in their gene content.Unfortunately, with only the 16S rDNA 
sequences, such inference is unfeasible.Thus, the main aim of this study is to investigate the community-level dynamics through tracking of OTU relative abundance changes.To track the dynamic bacterial community in response to the shock, PCoA plots were used to visualize the dissimilarities among samples.When all 21 samples in this study were included, the three feedstock samples shared a bacterial community composition that was distinct from the digester samples.The PCO1 explained ∼71% of the variance, which might be attributed to the relative abundance of Proteobacteria.The community composition remained relatively stable for about 1× HRT after the shock, gradually deviated from the original steady state represented by the samples before the shock, changed the most on day 17, and re-stabilized by day 30.When the three feedstock samples were excluded, a similar pattern of the restabilization process was observed.The variance explained by the PCO1 was reduced from about 71% to 41%, possibly reflecting the reduced explanatory power of Proteobacteria abundance.Nonetheless, the first two axes explained about 59% of the variance when combined, supporting the strong explanatory power of this analysis.Both sets of analyses show the same pattern that the bacterial community composition remained almost undisturbed immediately after the shock, changed the most on day 17, and gradually became more similar to the initial state by day 30.When comparing the changes in bacterial community composition with the biogas production rate, a lack of correspondence between these two measurements was found.For example, while the biogas production rate was approximately doubled immediately after the shock, the bacterial community composition exhibited little change during this period.Additionally, while the bacterial community exhibited a large change in its composition on day 17 after the shock, the biogas production rate was relatively stable during this period.However, due to the absence of technical replicates, it is unclear if these findings are results of stochastic events.Future studies are required to test the generality of these findings.Furthermore, detailed characterizations of the archaeal community are necessary for providing complementary information to better understand the biological processes involved in anaerobic digesters.The main goal of this study was to determine the bacterial community composition in an anaerobic digester after an organic shock loading.By collecting samples from an anaerobic CSTR and utilizing high-throughput 454 pyrosequencing to characterize the bacterial community in the samples, the results suggested that an organic shock loading induced dynamic responses in the microbial community composition.After approximately three times the HRT, the community returned to a state similar to its original composition rather than reaching a new equilibrium.The mesophilic digester was dominated by three bacterial phyla: Bacteroidetes, Firmicutes, and Proteobacteria. | Anaerobic digestion is a microbiological process of converting organic wastes into digestate and biogas in the absence of oxygen. In practice, disturbance to the system (e.g., organic shock loading) may cause imbalance of the microbial community and lead to digester failure. To examine the bacterial community dynamics after a disturbance, this study simulated an organic shock loading that doubled the chemical oxygen demand (COD) loading using a 4.5 L swine wastewater anaerobic completely stirred tank reactor (CSTR). Before the shock (loading rate = 0.65 g COD/L/day), biogas production rate was about 1-2 L/L/day. After the shock, three periods representing increased biogas production rates were observed during days 1-7 (~4.0 L/L/day), 13 (3.3 L/L/day), and 21-23 (~6.1 L/L/day). For culture-independent assessments of the bacterial community composition, the 454 pyrosequencing results indicated that the community contained >2500 operational taxonomic units (OTUs) and was dominated by three phyla: Bacteroidetes, Firmicutes, and Proteobacteria. The shock induced dynamic changes in the community composition, which was re-stabilized after approximately threefold hydraulic retention time (HRT). Intriguingly, upon restabilization, the community composition became similar to that observed before the shock, rather than reaching a new equilibrium.
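To make the resampling step of the pipeline above concrete, the minimal Python sketch below estimates a rarefaction curve from a pooled OTU count vector by repeated subsampling without replacement. It is an illustration only: the function name, the toy abundance distribution and the reduced iteration count are assumptions, not the study's actual code, which relied on USEARCH clustering, the RDP Classifier and Fast UniFrac.

```python
import numpy as np

def rarefaction_curve(otu_counts, depths, n_iter=100, seed=0):
    """Expected number of OTUs observed when subsampling a pooled sample
    (given as a vector of reads per OTU) to each sequencing depth."""
    rng = np.random.default_rng(seed)
    # expand the count vector into one OTU label per read
    reads = np.repeat(np.arange(len(otu_counts)), otu_counts)
    curve = []
    for depth in depths:
        depth = min(depth, len(reads))
        richness = [np.unique(rng.choice(reads, size=depth, replace=False)).size
                    for _ in range(n_iter)]
        curve.append((depth, float(np.mean(richness))))
    return curve

# toy example: 3339 OTUs with a skewed (log-normal) abundance distribution
rng = np.random.default_rng(1)
counts = rng.lognormal(mean=2.0, sigma=1.5, size=3339).astype(int) + 1
for depth, n_otus in rarefaction_curve(counts, depths=[1000, 5000, 10000]):
    print(f"{depth:>6} reads -> ~{n_otus:.0f} OTUs observed")
```

A curve that is still rising steeply at the maximum sampling depth, as reported for the digester samples, indicates that the sequencing effort did not exhaust the community's richness.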
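The ordination described above used abundance-weighted Fast UniFrac, which requires a phylogenetic tree of the OTUs. As a rough, tree-free stand-in for readers who want to reproduce the general idea, the sketch below performs a classical principal coordinates analysis on Bray-Curtis dissimilarities computed from a samples-by-OTUs count table; the function names and the random toy data are hypothetical and the metric differs from the UniFrac measure used in the study.

```python
import numpy as np

def bray_curtis(X):
    """Pairwise Bray-Curtis dissimilarities between rows of a
    samples-by-OTUs count matrix X."""
    n = X.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = (np.abs(X[i] - X[j]).sum()
                                 / (X[i] + X[j]).sum())
    return D

def pcoa(D, n_axes=2):
    """Classical multidimensional scaling (principal coordinates analysis)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred Gower matrix
    evals, evecs = np.linalg.eigh(B)
    order = np.argsort(evals)[::-1]              # largest eigenvalues first
    evals, evecs = evals[order], evecs[:, order]
    coords = evecs[:, :n_axes] * np.sqrt(np.clip(evals[:n_axes], 0, None))
    explained = evals[:n_axes] / evals[evals > 0].sum()
    return coords, explained

# toy data: 21 samples x 3339 OTUs, matching the size of the final data set
rng = np.random.default_rng(0)
X = rng.poisson(lam=2.0, size=(21, 3339)).astype(float)
coords, explained = pcoa(bray_curtis(X))
print("variance explained by PCO1 and PCO2:", np.round(explained, 3))
```

Plotting the first two coordinates and colouring the points by sampling day would give a figure analogous to the PCoA trajectories described in the text, although the exact variance fractions depend on the chosen dissimilarity measure.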
334 | A platinum nanowire electrocatalyst on single-walled carbon nanotubes to drive hydrogen evolution | A major technological obstacle with the spread of solar and wind energy is its storage for cloudy and windless occasions.Power-to-gas energy storage via water electrolysis is one of the few options for storing excess renewable energy at large scales and over long time periods.In this process, electrical power is converted to chemical energy in the form of hydrogen.Hydrogen is an ideal energy carrier in the sense that it is abundant and has the highest gravimetric energy density among conventional fuels.,However, 90% of the world’s hydrogen is currently produced from fossil fuels with considerable CO2 emissions, and therefore electrolysis of water attracts increasing interest as a sustainable alternative.,The hydrogen evolution reaction, 2H+ + 2e− → H2, is one of the necessary half-reactions in water splitting.By storing energy in hydrogen by splitting water, the energy can subsequently be released back in a fuel cell, for instance to power electric vehicles and portable electronics.,Alternatively, the hydrogen energy carrier can be used in industrial processes such as chemical, fertilizer or steel production.A barrier to large scale deployment of cutting edge polymer electrolyte membrane electrolyzers is a strong dependence on rare and expensive platinum group metals.Decreasing the amount of the PGMs is important for laying the groundwork for the large-scale and long-term deployment of H2 fuel.This has driven research into non-noble metal alternatives such as transition metal phosphides, carbides and nitrides or chalcogenides.For the time being, however, Pt-based HER catalysts are still regarded as the most relevant starting point owing to their better overall performance in activity, stability and integrability to industrial applications such as PEM electrolyzers in acidic environment.As to PEM electrolyzers under acidic operating conditions, the cathode can suffer from high degradation.,On conventional carbon black substrates, Pt nanoclusters tend to agglomerate during operation with the on/off electrochemical cycles of an electrolyzer connected to a renewable energy source.This has hitherto led to an antagonism between ultralow Pt loadings on one hand and the high durability requirements of a PEM electrolyzer on the other.In order to prevent the loss of active Pt via e.g. 
Ostwald ripening, Pt nanoparticle growth and detachment of Pt nanoparticles during catalysis, metal oxide supports with stronger interaction with Pt nanoparticles have been suggested.,However, their electronic conductivity is low and the oxides are prone to reduction under low HER potentials, rendering the support unstable.Promising ultralow-Pt supports for the HER have been reported based on conductive carbon materials such as N-doped graphene, TixW1-xC, and carbon nanospheres that do allow good activities and stabilities but still fall short of successful incorporation in an actual electrolyzer cell.Carbon nanotubes, on the other hand, can assist in bringing together the ideals of high activity, stability and integrability as we show in this article.Especially single-walled carbon nanotubes provide highly conductive and durable supports for accommodating even ultralow amounts of subnanometric Pt catalyst particles because of their unique morphology.,Now that the price of SWNTs has notably decreased in recent years due to the upscaling of their synthesis, their adoption for high-end electrocatalytic applications is favorable over multi-walled carbon nanotubes as has also been concluded elsewhere.In this work, we aim to mitigate PGM dependency by presenting a simple and upscalable synthesis of Pt nanowires on SWNTs.Our approach to improve the adhesion, and thus durability, of PtNWs on SWNT relies on controlled surface oxidation with ozone, whereas electrocatalytic activity and stability are attributed to morphological effects.For an ozone-treated SWNT substrate with 340 ngPt cm-2, we report a HER activity of 10 mA cm-2 at −18 mVRHE, being competitive to a state-of-the-art Pt/C catalyst that attains the same current density at −16 mVRHE but with notably higher Pt mass loading.Equally ultralow and pseudo-atomic Pt on SWNT has earlier been presented by Tavakkoli et al. 
In contrast, Pt/SWNT-O3 with similar Pt loading does not become deactivated under negative HER potentials as observed by Tavakkoli et al.The improved stability is attributed to interactions between the PtNWs and the SWNT support inducing compressive strain in the Pt bonds and resulting optimal hydrogen inaction with PtNWs.The feasibility of our Pt/SWNT-O3 catalyst is verified in a single-cell PEM electrolyzer.In terms of activity, even with one tenth of Pt mass loading at the cathode, Pt/SWNT-O3 performs close to the level of the state-of-the-art Pt/C material.The stability and durability of Pt/SWNT-O3 as a cathode catalyst are confirmed with more than 2,000 h of chronopotentiometric measurements and 10,000 potential cycles simulating on/off operation.We conclude that strain in the Pt bonds, the metallic nature of PtNWs and, with respect to the morphology, the PtNW edges in particular contribute positively to the overall performance.The clear catalytic enhancement achieved by the ozone treatment is attributed not only to morphological effects but also to improved hydrophilicity of the catalyst.Moreover, the simple synthesis of PtNW/SWNT presented in this study allows for both upscaling in volume and testing in industrially relevant applications.First, the SWNTs were functionalized using an ozone generator.An ozone flow of 200 mg/h was directed over the pristine SWNTs for 40 min to increase their surface activity and hydrophilicity, thus assisting the transfer of Pt from H2PtCl6 ∙ 6 H2O onto the SWNT surface.As a reference, a material using pristine SWNTs and H2PtCl6 ∙ 6 H2O was prepared.For this, the same synthesis process was used but omitting the ozonation step.The pretreated or pristine SWNTs were dispersed in relation of 1 mg per 1 ml of i-PrOH by ultrasonication for 15 min.After this, while mixing by magnetic stirring, Pt was introduced by adding H2PtCl6 ∙ 6 H2O in ethanol to obtain the desired concentration.This was again followed by ultrasonication for 15 min and magnetic stirring overnight.Subsequently, the well-dispersed ink was gradually heated up to 300 °C in N2 at a rate of 100 °C/h to first evaporate the solvent and counter ions.At 300 °C, N2 was switched to 5% H2/Ar for 2 h to reduce the Pt.The synthesis procedure is outlined in Fig. 1.After the heat treatment, the sample was cooled down to room temperature under an N2 atmosphere and collected to begin with electrochemical experimentation as detailed in the Supporting Information.The synthesis yield of the catalyst was ca. 
88%.In addition to electrochemical experimentation, the material was thoroughly characterized with physical methods, including scanning transmission electron microscopy, inductively coupled plasma mass spectroscopy, Raman spectroscopy, X-ray photoelectron spectroscopy, X-ray absorption spectroscopy including extended X-ray absorption fine structure, and goniometry.Computational studies are included to shed light on the catalytic properties of the PtNW/SWNT structure.Further details about the experimental methods are discussed in the SI.The highest electrochemical activity is attained when the SWNTs are treated with ozone, before adding the Pt complex.The method of oxidizing the carbon support before depositing metal catalysts has been previously reported.,In our synthesis, the ozone step introduces polar oxygen functional groups on the SWNT surface as revealed by XPS, improving the adhesion of the ionic Pt precursor.Striking Pt nanowire structures are formed by heating the material in 5% H2/Ar atmosphere at 300 °C for two hours.Fig. 1a and b represent STEM images of the catalyst before and after the heat treatment and demonstrate the presence of sub-nanometer Pt clusters before the heat-treatment and appearance of PtNWs only after the heating step.In addition, some Pt still remains as small clusters observed as clear bright spots.Energy dispersive X-ray spectroscopy measurements confirm that both the NWs and the subnanoparticles consist of Pt.Based on our earlier observations, sub-nanometer Pt clusters are formed only on SWNTs , which is attributed to the curved nature of the SWNTs promoting asymmetric diffusion of the adsorbed Pt atoms: In axial direction diffusion is sluggish but in radial direction it is fast and this stabilizes sub-nm Pt cluster.These sub-nm Pt particles are needed to form PtNWs during the heat-treatment procedure because larger spherical agglomerates are more stable and do not form such NW structures.Interestingly, without the ozone modification of the SWNTs, the deposited PtNWs are longer.For Pt/SWNT and Pt/SWNT-O3 with similar Pt contents of 3.2 wt% and 3.9 wt%, respectively, the average lengths of the PtNWs are 65.6 nm and 16.4 nm.The size distributions of the PtNWs on both SWNTs and ozonized SWNTs are obtained from high-magnification STEM images and depicted by histograms in Fig. 2.In terms of width, there is no significant difference between the two samples.In both Pt/SWNT and Pt/SWNT-O3, Pt appears crystalline even though no single lattice orientation can be distinguished.In addition to the nanowires, the high-resolution HAADF/STEM image in Fig. 2f also shows the presence of subnanometric Pt clusters and even individual Pt atoms which apparently form as a result of the ozone treatment of the SWNTs.Such tiny Pt clusters are expected to show a high initial catalytic activity, but become deactivated within a few minutes.,The HER activity of ozone-treated Pt/SWNT-O3 is contrasted with its non-treated counterpart Pt/SWNT and a state-of-the-art Pt/C reference.The electrocatalytic benefit of pretreating the SWNTs with ozone before introducing Pt is clearly shown in Fig. 3.At a potential of −0.05 VRHE in acidic electrolyte, 3.2 wt% Pt/SWNT and 3.9 wt% Pt/SWNT-O3 generate current densities of 10 and 35 mA cm−2, respectively, while the 20 wt% Pt/C reaches 40 mA cm−2.Fig. 
3b displays the HER activities of the catalysts normalized by their corresponding Pt loadings.Evidently, the ozonized Pt/SWNT-O3 catalyst has the highest HER mass activity of the three.A literature comparison with other low-Pt and nanosized Pt cluster catalysts show the good performance of Pt/SWNT-O3 and is presented in Supplementary Information.Fig. 3a also presents the HER polarization curves for a wt% series of the Pt/SWNT-O3 catalyst along with the commercial 20 wt% Pt/C.The activity enhancement achieved by adding even a small amount of Pt on the SWNTs is evident when comparing the HER performance with pristine and ozone-treated SWNTs.The catalyst prepared from SWNT-O3 with 3.9 wt% of Pt outperforms the other Pt/SWNT-O3 catalysts with other Pt contents and almost coincides with the commercial reference.Besides, this Pt loading has the highest electrochemically active surface area among the experimented ones as discussed below.Therefore, the 3.9 wt% Pt/SWNT-O3 catalyst is selected as the optimal candidate for further examination by electrochemical and physical characterization methods.The ECSAs are determined from CO stripping experiments.The differences in the PtNW morphology and the structural changes of the SWNT support are reflected as a tripling of ECSA for Pt/SWNT-O3 compared to Pt/SWNT.The decrease of ECSA with the increase in the Pt loading of the catalysts with ozone treated SWNT support may be attributed to formation of less active spherical agglomerates with a lower area-to-volume ratio.Interestingly, Pt/SWNT-O3 with ultralow Pt loading shows an ECSA of 29 m2 g-1Pt comparable to the ECSA of the commercial Pt/C with a notably higher Pt loading.The HER activities of Fig. 3a normalized with ECSA are presented in Figure S2.Local fluctuations in the linear trends of the HER curves of Fig. 3 are attributed to the vigorous evolution of H2 bubbles.,IR-corrected polarization and Tafel curves are presented in Figures S3 and S4, respectively.Changes in the electrolyte resistance from experiment-to-experiment are found negligible and thus the same conclusions can be derived irrespective of the iR correction.The shorter length of the nanowires on the ozonized SWNTs suggests one plausible cause behind the higher HER activity.As according to Fig. 
3b, the difference in Pt content is not enough to account for the difference in activity, the higher number of active edges and corners for the shorter PtNWs are suggested to promote the HER.Enhanced hydrophilicity, or wetting of the surface, as shown by goniometric studies, may also contribute to the facilitated HER by improving the mass transfer to the Pt/SWNT-O3 electrocatalyst surface.,Higher hydrophilicity of Pt/SWNT-O3 compared to Pt/SWNT is attributed to surface structural changes induced by the ozone treatment.These include formation of functional groups with oxygen on the surface of the SWNTs according to XPS and carbon bonding of SWNTs as observed by Raman.Raman characterization revealed an increase in the amount of disorder carbon in comparison to graphitic sp2 carbon network after the ozone treatment since there is a significant decrease of the IG/ID ratio from 56.7 to 3.3 for the SWNTs.Here, IG refers to the Raman intensity arising from graphitic sp2 carbon whereas ID is attributed to disordered carbon.Similar differences in the Raman spectra are also observed for Pt-SWNT and Pt-SWNT-O3, indicating that the structural changes induced by the ozone treatment remain after the Pt deposition and the reduction step of the synthesis.The structural differences induced by the ozone treatment on the SWNT support and Pt is also reflected as higher currents in the cyclic voltammograms, both for the Pt containing catalysts and for the SWNT supports.These differences induced by the ozone treatment are discussed in more detail below in sections 3.3.and 3.4 Furthermore, the CV of SWNT-O3 is featured by a redox peak pair at the potential range of 0.4…0.8 VRHE, which is absent in pristine SWNT and hence attributed to oxygen containing groups.What is noteworthy at CV potentials lower than ca. 0.3 VRHE, is the absence of adsorption/desorption peaks in the hydrogen fingerprint region.The featureless CVs may simply result from the ultralow Pt loadings of 279 ng cm-2 for Pt/SWNT and 340 ng cm-2 for Pt/SWNT-O3.The stability of Pt/SWNT-O3 is confirmed by an accelerated stress test, consisting of 3400 CV cycles between −0.08 and 0.8 VRHE.The upper limit corresponds to the potential reached for the cathode of an electrolyzer during a shut-down process because of its spontaneous polarization induced by remaining oxygen transferred from the anode to the cathode.,Fig. 4 demonstrates the HER activity of the 3.9 wt% Pt/SWNT-O3 catalyst before and after the AST, where a negligible change in the activity indicates good durability.Throughout the potential range in which the formation of H2 bubbles is moderate enough to maintain the curves linear and comparable, the potential loss is no more than 5 mV.Furthermore, in comparison to what has been earlier presented by Tavakkoli et al. for equally ultralow Pt on SWNT as here, Pt/SWNT-O3 maintains its activity under negative HER potentials.Fig. 
5a presents the performance of Pt/SWNT-O3 as the cathode catalyst of a PEM electrolyzer setup under acidic conditions.Pt/SWNT-O3 has now a ten times lower Pt loading per electrode area than the commercial Pt/C reference.The onset voltages of the polarization curves are close to each other, suggesting similar intrinsic activity towards the pertinent electrochemistry.However, the novel Pt/SWNT-O3 catalyst shows a higher operating voltage with increased current density compared to the standard Pt/C, which is probably related to differing resistances in the membrane electrode assembly.In the impedance spectra measured for the cell equipped with 0.02 mgPt cm-2 of Pt/SWNT-O3 an additional potential independent RC circuit appears when compared to spectra measured for the electrolysis cell with ten times higher loading of commercial Pt/C.This is attributed to the cathode processes becoming limiting because of the low Pt content and suggest that further reduction of Pt is not feasible due the increase of operation voltage.On the other hand, similar cell resistances obtained for both the set-ups suggest that the electronic conductivity of the ozone treated SWNTs is high enough for this application though ozone treatment is known to decrease conductivity of SWNTs .When decreasing the Pt content of the commercial electrode to the level of Pt/SWNT-O3, the polarization curves show similar activity.However, the long-term voltage profile at a constant current density of 1 A cm-2 is better for the novel catalyst most likely due to improved mass transfer.This is attributed to dissimilar surface properties and morphologies of the electrocatalysts plausibly resulting in divergent interactions between the catalyst and the ionomer and the species participating the reactions.The stability of the electrolyzer setup with Pt/SWNT-O3 at the cathode is tested by operating the cell at the constant current density of 1 A cm-2 for 2,000 hours.The MEA with this catalyst shows stable performance with a small voltage drop after 320 h since the contact is enhanced by inserting a GDL in place at the cathode.The stability is also investigated with an AST where the cathode catalyst is sprayed onto a carbon paper with a Nafion content of 30 wt%.This membrane is transferred into a test cell with in-situ reference electrode to carry out the AST to investigate changes in the ECSA as a result of power cycling.It has been shown that when the electrolyzer is switched off, the cathode contributes more to changes at OCV than the anode.,This can lead to various degradation processes such as Pt nanoparticle growth via migration and surface diffusion and Pt detachment or agglomeration due to carbon support corrosion induced by oxygen diffusion from the anode to the cathode during the shut-down phase.The change in ECSA with number of cycles is plotted in Fig. 5c. However, it should be noted that at the chosen catalyst loading, the changes in the ECSA do not have an impact on the overall cell voltage as the HER on Pt is such a facile reaction.The general ECSA trends with on-going voltage cycling for the two materials are displayed in Fig. 
5c and are in agreement with the values measured in the electrochemical cell taking into account the large error margins and different operation environments.The initial ECSA of 44 m2 g-1Pt for the Pt/SWNT-O3 is well in the range of the Pt/C reference.After 9,000 potential cycles, the degradation results in 19 m2 g-1 for the Pt/SWNT-O3 while 21 m2 g-1 for the Pt/C.Although our catalyst does not succeed quite as well as the reference, the end result of the ECSA test still suggests comparable endurance under oxidative shut-down cycles.Overall, the performance of the novel Pt/SWNT-O3 catalyst approaches that of the standard Pt/C even though the Pt loading is only one tenth.XAS analysis was carried out to investigate the overall structural features of PtNWs, which dominate the spectral response over the subnanometer particles and which can be linked with the observed electrocatalytic activity.Figure S14 shows a comparison between the experimental Pt L3-edge X-ray absorption spectra of the ozonized and non-ozonized Pt/SWNT samples and the reference Pt foil.There are no changes in the position of the threshold energy and in the frequency of the spectral oscillations, confirming the metallic nature of the PtNWs.An in-depth EXAFS analysis of Pt/SWNT and Pt/SWNT-O3 was carried out starting from a rigorous fit of the crystalline Pt used as a known reference structure, on the basis of which a comprehensive understanding of the local atomic structure in Pt nanosystems was achieved.The details of the fitting procedure are discussed in SI, while the best-fit analysis and the structural parameters obtained for each Pt sample are presented in Fig. 6 and Table 1.As is shown in Fig. 6, the agreement between the experimental and calculated EXAFS spectra of the Pt foil is very good in the whole energy range, and the structural parameters are, within the statistical errors, in perfect agreement with previous crystallographic determinations, establishing the reliability of the data-analysis method.As far as Pt/SWNT and Pt/SWNT-O3 are concerned, despite the low Pt content, the quality of the EXAFS signal is still very satisfactory, providing the possibility to investigate the structural features up to the fourth coordination shell.The most visible change between the spectra of crystalline Pt and PtNW/SWNTs is the reduction of the EXAFS signal in the nanocrystal systems.This effect became more apparent observing the Fourier transform of the EXAFS spectra depicted in Fig. 
7, and more pronounced for the peaks in the region of 3.5–6 Å, corresponding to the higher coordination shells, that are the most affected by nanoscale effects.This damping is usually due to a decrease in the coordination number, yet the first shell coordination number is practically unaffected in the two PtNW/SWNTs compared to the Pt foil.This can be explained by a simultaneous increase in the structural disorder in the order Pt/SWNT > Pt/SWNT-O3 > Pt foil, which is known to have an impact on the signal amplitude.Therefore, in both PtNW/SWNTs, the first shell signal reduction is mainly due to an increase in the structural disorder, while the higher shells are simultaneously affected by the rise of the structural disorder and a decrease in the coordination number.At the same time, our EXAFS results clearly show a contraction of the average Pt-Pt distances for PtNW/SWNTs relative to bulk Pt, with a larger reduction for the non-ozonized Pt sample.One possible explanation for this phenomenon is an increase in the surface strain as a consequence of the high curvature of the nanowires as is further discussed in the computational part.,Compressive strain is known to downshift the d-band center of the late transition metals, thus weakening the interaction between the adsorbate and the surface.,Consequently, the HER current is enhanced by the observed compressive interaction between the support and Pt if hydrogen bonding is too strong.When comparing the HER current stability on Pt/SWNT-O3 to our earlier study on pseudo-atomic Pt on SWNT, it is obvious that the latter suffers from current decay when subjected to the HER potential region.This phenomenon is attributed to contamination of the pseudo-atomic Pt surface by adsorbed hydrogen, resulting from too strong hydrogen bonding.These observations suggest that the hydrogen bonding is closer to the optimal H-H bonding range for the PtNW morphology and the observed current can be mainly attributed to these structures for Pt/SWNT-O3, and not the individual Pt atom clusters also observed in STEM.Table 1 emphasizes the variation of the most representative structural parameters of the PtNW/SWNT samples relative to bulk Pt.It is important to note that in the case of PtNW/SWNTs, the accuracy related to the structural parameters decreases from the second coordination shell, the error bars depending on the greater noise level of the experimental EXAFS spectra compared to the data of the Pt foil.The elemental composition of the samples was assessed using XPS.Measurements of the SWNTs before and after the ozone treatment showed that the pure SWNTs contained only 0.7 at% oxygen, while after ozone exposure the oxygen content increased to 2.9 at%.When Pt was deposited on ozone-functionalized SWNTs, a higher amount of Pt was present on the surface compared to when Pt was deposited on pristine SWNTs.The Pt/SWNT sample contained 0.05 at% Pt, while for Pt/SWNT-O3 the amount of Pt is double at 0.11 at%.These values are lower than the Pt content obtained using ICP-MS measurements, which may be due to XPS being a surface sensitive technique.As discussed above in connection with the XAS measurements, the Pt is mainly present as Pt° for both Pt/SWNT and Pt/SWNT-O3, as can be observed from the asymmetric peak shape of the Pt 4f peaks.Deconvolution of the Pt 4f peaks for both samples indicates the presence of 72 % metallic Pt, 18 % PtO, and 10 % of oxides such as PtO2.,Most of the Pt surface is thus reduced by the heat treatment at 300 °C in a 5 % H2/Ar flow.Even though the Pt 4f
profiles are similar for the two samples, Pt/SWNT-O3 contains twice as much metallic Pt and PtOx as Pt/SWNT before performing HER.All samples contain elements besides Pt, C and O, originating from the production process of the SWNTs such as Fe and S from the SWNT growth-catalyzing particles.The detected amount of Fe increases after Pt deposition, most likely due to the exposure of carbon-encapsulated Fe particles by the synthesis procedure.However, the presence of Fe particles does not influence the HER activity of the Pt/SWNT samples as the SWNTs alone do not exhibit any catalytic activity.While ozone treatment increased the amount of oxygen functional groups in the surface of the SWNTs, the synthesis procedure for Pt deposition also introduced significant amounts of oxygen.While the pure SWNTs and SWNT-O3 contain 0.7 at% and 2.5 at% oxygen, respectively, the Pt/SWNT and Pt/SWNT-O3 samples contained 3.2 at% and 4.6 at%.Deconvolution of the O 1s region was performed using five peaks.For transition metal oxides such as PtO/PtO2 and Fe2O3, the O 1s peak is usually located around 530.2 eV, which is the value used here.,This peak contributes to only 4% and 3% of the total amount of oxygen in the Pt/SWNT and Pt/SWNT-O3, meaning that most of the oxygen is present as oxygen functional groups and not Pt or Fe oxides.The three main oxygen peaks are identified as carbonyl C = O, C–O groups such as hydroxyl and epoxide and O–C = O groups such as carboxyl and anhydride.,Finally the peak at 534.7 eV is attributed to chemisorbed H2O.,Especially the amount of C = O and O–C = O groups is higher for Pt/SWNT-O3 compared to Pt/SWNT.These polar groups make the sample more hydrophilic, as indicated by goniometry, and they could also improve the catalyst wetting by ionomer, and thereby improve the HER activity of Pt/SWNT-O3 compared to Pt/SWNT.To study the origin of the electrocatalytic activity, the structure and hydrogen adsorption properties of both a continuous PtNW and a truncated PtNW with exposed edge-sites on a SWNT were investigated using periodic density functional theory calculations.As a reference, all calculations were repeated also on a close-packed Pt surface.The reported simulations were performed using the CP2K/Quickstep quantum chemistry code at the RPBE/GGA level of theory.A comprehensive description of the computational methodology is presented in the Supporting Information.Based on the HAADF/STEM images shown in Fig. 
2, both a periodic PtNW wrapped around a SWNT and a truncated PtNW were constructed and their geometries were fully optimized with no applied constraints.We refer to these model structures as Pt/SWNT and Pt/SWNT-O3, considering that the ozone-treated system is proposed to contain shorter PtNWs and hence an increased edge-to-surface ratio.It is emphasized that the adopted naming convention refers exclusively to the experimental catalyst preparation methods and the resulting PtNW lengths, in contrast to explicit consideration of oxygen in the model systems.Although the experimental evidence of fully wrapped PtNWs is not complete, we argue that this structure is justified based on thermodynamic considerations and the highly regular hexagonal structure of the PtNW surfaces.Please see the Supporting Information for further discussion on the employed structural models.Furthermore, the advantage of the chosen angularly periodic model structures is that a complete decoupling of edge-effects from the intrinsic activity of the PtNW bulk surface is possible.Indeed, by deliberately excluding edge-sites along the angular axis, the properties of a fully edge-free and a truncated PtNW can be directly compared, enabling a more categorical analysis of the importance of edge-sites on the catalytic activity.To assess the HER activity of the PtNWs, the hydrogen adsorption affinity of both model systems was systematically characterized using the method of atomistic thermodynamics introduced by Reuter and Scheffler and later extended to the context of electrocatalysis by Nørskov et al.First, the reactivities of the periodic and truncated Pt/SWNT systems were probed by studying the adsorption of a single hydrogen atom and calculating the associated change in free energy.As a reference, the calculations were repeated also for a pristine SWNT and the Pt surface.The obtained free energy diagram is presented in Fig. 
10.We present a robust and upscalable three-step synthesis for preparing an unprecedented HER electrocatalyst with an ultralow Pt loading on SWNTs.The ozone pre-treatment of the SWNTs results in shorter PtNWs and improved electrocatalytic efficiency compared with a non-ozonized counterpart.The high efficiency is attributed to a combined effect of better surface wetting and differences in the PtNW morphology.In particular, the ozone treatment leads to an increased number of PtNW edge-sites with optimal hydrogen affinity.DFT calculations suggest that these edge-sites mitigate repulsions between hydrogen intermediaries on the PtNW at high coverages.Thus, the hydrogen coverage window within which the HER is thermodynamically feasible is broadened, and tentatively also the kinetic barrier is decreased.The proposed high activity of PtNW edges by the DFT calculations is in accordance with the experimental observations demonstrating higher mass activity of the sample containing shorter PtNWs on ozonized SWNTs than the non-ozonized Pt/SWNT catalyst.Furthermore, the ozone treatment also introduces polar surface groups on the nanotubes, turning the catalyst more hydrophilic than its non-ozonized counterpart and therefore more benign to hydrogen ions.Our Pt/SWNT-O3 catalyst outperforms a commercial Pt/C reference in the HER mass activity.In addition, Pt/SWNT-O3 is proven feasible as the cathode catalyst of an electrolyzer, again approaching the activity and exceeding the stability of a commercial state-of-the-art reference, but with ten times less of precious Pt per electrode area.Because Pt/SWNT-O3 can be prepared in a strikingly simple and upscalable manner, this work provides an interesting starting point for both manufacturing less costly PEM electrolyzers and for discovering other high-impact electrochemical reactions to catalyze.T.R. carried out the electrochemical measurements under the supervision of T.K. and R.B. the electrolyzer tests.As for the imaging, M.T. carried out and interpreted HAADF/STEM under the supervision of T.S. and H.J. was responsible for the lower resolution STEM.M.E.M.B. carried out and interpreted XPS and A. Z. XAS/EXAFS.R.K. did the computational studies under the supervision of K.L. F.J. contributed to planning of the electrochemical characterization.The idea was conveyed and developed by T.R. and T.K. All the authors contributed to data interpretation and writing of the manuscript.T.K. supervised the work.The authors declare no competing interests. | Pertinent existing hydrogen technologies for energy storage require unsustainable amounts of scarce platinum group metals. Here, an electrocatalyst comprising high-aspect-ratio platinum nanowires (PtNWs) on single-walled carbon nanotubes (SWNTs) with ultralow Pt content (340 ngPt cm−2) is employed for hydrogen evolution reaction (HER). A comparable activity (10 mA cm−2 at −18 mV vs. RHE) to that of state-of-the-art Pt/C (38,000 ngPt cm−2) is reached in acidic aqueous electrolyte. This is attributed to favorable PtNW interaction with SWNTs and PtNW edge-sites which adsorb hydrogen optimally and aid at alleviating repulsive interactions. Moreover, the metallic nature of Pt, morphological effects and enhanced wetting contribute positively. The PtNW/SWNT relevance is emphasized at a proton-exchange-membrane electrolyzer generating stable voltage for more than 2000 h, successfully competing with the state-of-the-art reference but with one tenth of Pt mass loading. 
Overall, this work presents an unprecedentedly efficient HER catalyst and opens up avenues for PtNW/SWNTs to catalyze other high-impact reactions.
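As a back-of-the-envelope companion to the CO-stripping measurements mentioned above, the short sketch below shows how an electrochemically active surface area is conventionally obtained from a stripping charge. The stripping charge and the 5 mm disk geometry are assumed placeholders; only the 340 ngPt cm-2 loading comes from the text, and the 420 uC cm-2 monolayer charge is the commonly used literature value rather than a number reported in this work.

```python
# Illustrative ECSA estimate from a CO-stripping charge.
Q_CO  = 6.0e-6          # C, charge under the CO-stripping peak (assumed)
q_ML  = 420e-6          # C cm^-2, CO monolayer stripping charge on Pt (2 e- per CO)
load  = 340e-9          # g cm^-2, Pt loading reported for Pt/SWNT-O3
A_geo = 0.196           # cm^2, geometric electrode area (assumed 5 mm disk)

A_Pt = Q_CO / q_ML               # electrochemically active Pt area, cm^2
m_Pt = load * A_geo              # Pt mass on the electrode, g
ecsa = (A_Pt * 1e-4) / m_Pt      # convert cm^2 to m^2, then normalize by mass

print(f"A_Pt = {A_Pt:.3f} cm^2, ECSA = {ecsa:.0f} m^2/gPt")
```

With these placeholder inputs the estimate lands in the few-tens of m2 g-1Pt range, the same order of magnitude as the ECSA values quoted in the text.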
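The free-energy analysis of hydrogen adsorption summarized above follows the computational-hydrogen-electrode approach; a minimal sketch of the bookkeeping is given below. The total energies are invented placeholders, and the +0.24 eV zero-point/entropy correction is the value commonly used for adsorbed H on transition-metal surfaces, not a number derived from this study's calculations.

```python
# Hypothetical DFT total energies in eV (placeholders, not values from this work)
E_slab_H = -1234.87   # PtNW/SWNT model with one adsorbed H
E_slab   = -1231.20   # clean PtNW/SWNT model
E_H2     = -6.77      # isolated H2 molecule

# Differential hydrogen adsorption energy referenced to 1/2 H2
dE_H = E_slab_H - E_slab - 0.5 * E_H2

# Zero-point-energy and entropy correction commonly applied for H* on
# transition-metal surfaces (roughly +0.24 eV at room temperature)
dG_H = dE_H + 0.24

print(f"dE_H = {dE_H:.2f} eV, dG_H = {dG_H:.2f} eV")
# |dG_H| close to zero indicates near-thermoneutral hydrogen adsorption,
# the usual thermodynamic descriptor for a good HER catalyst.
```

At finite coverage the same bookkeeping is repeated for each successively added hydrogen, which is how a coverage window of thermodynamically feasible adsorption, like the one discussed for the PtNW edge-sites, can be mapped out.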
335 | Evaluating the use of HILIC in large-scale, multi dimensional proteomics: Horses for courses? | It is not so long since researchers would have counted themselves lucky to identify a few tens of proteins from a single shotgun proteomics experiment.However spectacular progress has been made in improving the efficiency of protein detection at multiple levels, including experiment design and protocols, sample preparation workflows, LC–MS instrumentation, and in silico analysis.As a result, it is now possible to identify a large proportion of a steady state cell proteome in a single experiment, either with or without, fractionation .Furthermore, it is also possible to describe additional proteome dimensions, such as protein turnover rate, cell cycle-specific changes, post-translational modifications and subcellular localization .A limitation of early shotgun proteomics experiments is that the resulting data were predominantly one dimensional: whether the sample was derived from either a whole organism, tissue, cultured cells or a purified organelle or subcellular fraction, the final result was typically a list of identified protein groups with limited quantitative information.However, to describe a cell proteome in a way that is both accurate and with maximum physiological relevance for understanding biological mechanisms, it is important not only to include quantitation of protein expression levels, but also to resolve protein groups into single isoforms, while also addressing such parameters as the subcellular distribution of proteins and the presence of post-translational modifications.This could also be combined with analysis of additional proteome properties, for example higher order protein complexes, cell-cycle dependent variations of the proteome, and/or the rate of protein turnover.This combined analysis approach has been referred to as either “Next Generation Proteomics” or, perhaps more accurately, “multidimensional proteomics” .A major advantage of the multidimensional characterization of cell proteomes is the ability to mine the resulting data to establish correlations between different properties, for example linking the subcellular location of a protein with either a specific isoform or post-translational modification .This can generate useful hypotheses regarding the functional significance of such correlations that can be evaluated directly in follow-on experiments.The comprehensive description of the proteome has to overcome several analytical challenges, including the inherent complexity of protein types in cell extracts and the wide dynamic range of protein expression levels.Thus, taking into account isoforms and PTMs, a cell proteome can potentially comprise several hundred thousands of protein isotypes, spanning at least five or more orders of magnitude in abundance.As a result, a wide range of fractionation strategies for peptides and proteins have become an integral part of proteomics workflows, with the general aim of reducing the sample complexity to a manageable level prior to tandem mass spectrometry analysis.This in turn reduces ion suppression effects and maximizes the number of peptides that are effectively transferred to the gas phase as gaseous ions, sequenced and successfully identified.The most commonly used multidimensional LC setup involves two chromatographic separation steps, or dimensions, and is referred to as two-dimensional liquid chromatography.In theory, any type of chromatographic separation can be used at either the protein, or peptide level, 
including ion exchange chromatography, standard and high pH reversed phase, hydrophobic interaction chromatography and/or size exclusion chromatography.In bottom up proteomics, however, 2D-LC is commonly a combination of an off-line chromatographic method followed by RP-LC directly coupled to the mass spectrometer.2D-LC has the potential to dramatically improve the separation power of chromatography, with its performance depending both on the peak capacity of the two chromatographic dimensions and their degree of orthogonality.In chromatography, the term ‘orthogonal’ is used to refer to a complementary method of fractionation from the initial fractionation, so that orthogonal chromatographic systems are typically based on the use of different physico-chemical properties to separate peptides.In this way, a more effective overall separation of the original peptide mixture is provided, ultimately allowing more peptides to be identified.A number of studies have previously investigated this concept of orthogonality in 2D-LC separation .For instance, it is common to use ion exchange chromatography prior to RP-LC, as the two techniques are complementary and compatible.In this case the peptides are separated by charge in one dimension and hydrophobicity in the second dimension.In addition to ion exchange chromatography, other approaches also offer orthogonality with RP-LC, such as hydrophilic interaction liquid chromatography , which has recently emerged as a popular chromatographic mode for the separation of hydrophilic analytes.HILIC operates on the basis of hydrophilic interactions between the analytes and the hydrophilic stationary phase, with either highly polar, or hydrophilic compounds interacting most strongly .There are several different HILIC stationary phases , including derivatized silica material, which can be neutral, such as the cation exchanger polysulfoethyl A , the weak cation exchanger Polycat A , the weak anion exchanger PolyWAX , TSKgel amide-80 and zwitterionic ZIC-HILIC .While these supports differ in the exact chromatographic mechanism by which they separate analytes, they all generate a hydrophilic layer around the functional groups, which strongly interacts with either polar, or hydrophilic compounds.Therefore, HILIC can in practice be viewed as “reversed RP”.Gradient elution in HILIC can be achieved by increasing the polarity of the mobile phase, either by reducing the concentration of organic solvent, or by increasing the salt concentration, depending on the stationary phase.When peptides are separated using a non-ionic stationary phase, such as TSKgel Amide-80, an inverse acetonitrile gradient is most convenient.If the separation is carried out using ionic packing, such as that contained in PolyHydroxyethyl A columns, an increasing salt gradient is normally used.When using the TSKgel Amide-80 stationary phase, it is necessary to include a pairing agent, such as TFA, in the mobile phase to prevent ionic interactions between peptide residues and residual silanol groups on the silica surface.The use of weaker acids was reported to negatively affect the chromatography by reducing peptide elution and broadening peaks .In the absence of acid, the separation is based on mixed mode .TFA results in ion suppression when the eluent is directly sprayed at the sampling region of the mass spectrometer, however it is not an issue at all to use in off-line preparative LC, as is the case of a 2D-LC set up.One of the major issues affecting the ability to combine HILIC and RPLC in an 
online setup has been the incompatibility of the solvents used in both dimensions. However, Di Palma et al. recently reported a robust 2D-LC setup allowing the combination of the two separation modes. A number of previous studies have compared the performance of HILIC against other chromatographic separation modes, including strong cation exchange and reversed phase. SCX is commonly used for peptide fractionation in 2D-LC setups. However, SCX suffers from low resolution as well as the additional requirement for desalting, which may result in losses, especially of phosphorylated peptides and hydrophilic peptides in general. Studies that have compared ZIC-HILIC and SCX side by side have reported that the former has higher resolution and results in increased numbers of identifications. HILIC also performed better on iTRAQ-labelled samples and was reported to reduce iTRAQ ratio-compression, a fact which has been attributed to its higher resolution. The efficacy of HILIC for the separation of polar compounds has been effectively exploited in the study of PTMs, including carbohydrates, glycopeptides and phosphopeptides. HILIC has also been used in combination with selective phosphopeptide enrichment methods, such as either immobilized metal affinity chromatography (IMAC) or TiO2 enrichment, in different orders. Phospho-enrichment first: for example, Albuquerque et al. reported the development of a multidimensional chromatography method combining IMAC, HILIC and RP-LC to purify and fractionate phosphopeptides. They showed that HILIC was largely orthogonal to RP-HPLC for phosphopeptide enrichment. Wu et al. combined dimethyl labelling, IMAC separation and HILIC fractionation to identify 2857 unique phosphorylation sites in the MCF7 breast cancer cell line. HILIC first: Annan and McNulty reported the use of HILIC as a pre-enrichment step prior to IMAC-based phospho-enrichment for large-scale proteomics studies. This approach was successfully adopted in other studies. HILIC has also been employed both before and after phosphoenrichment. Thus, Engholm-Keller et al. reported a large-scale phosphoproteomics protocol combining phosphoenrichment, HILIC fractionation and TiO2 enrichment. By using sequential elution from IMAC, mono-phosphorylated peptides are separated from multiphosphorylated peptides. Non-phosphorylated and monophosphorylated peptides were further fractionated using HILIC, followed by TiO2 chromatography of the HILIC fractions. This demonstrated the feasibility of performing large-scale quantitative phosphoproteomics on submilligram amounts of protein, which could be applied to cell material of low abundance. Although early studies on peptide separation by HILIC mainly focused on its resolving power and orthogonality as a first fractionation method in a multidimensional setup, there have also been reports evaluating its signal intensities and its applicability in online HILIC-ES-MS as an alternative to RP-LC-ES-MS. The high organic content used in HILIC results in a peptide signal increase of 2–10 fold in 88% of the cases investigated, compared with RPLC, thus improving the sensitivity of both peptide detection and quantification. Maximum sensitivity was obtained when using amide columns without any salt additives. Yang et al.
meanwhile have evaluated the different stationary phases used in HILIC, addressing the effect of mobile phase composition on peak efficiencies with an online HILIC-ES-MS system using peptide mixtures and protein digests. This showed that HILIC-ES-MS provided complementary separation selectivity to RPLC-ES-MS and offered the capability to identify unique peptides, thus highlighting its potential in proteomic applications. In addition, Horie et al. described the use of a meter-scale monolithic silica capillary column modified with urea functional groups for use in the HILIC mode, which provided highly orthogonal separation to RPLC with sufficient peak capacity, as well as highly sensitive detection of tryptic peptides. In effect, they reported an average ∼5-fold increase in the peak response for commonly identified tryptic peptides, due to the high acetonitrile concentration in the HILIC mobile phase, suggesting its application as a complementary tool to increase proteome coverage in proteomics studies. In this study, we extend the characterization of HILIC to evaluate its applications in proteomics workflows beyond the enrichment of hydrophilic analytes. Specifically, we systematically evaluate the performance of HILIC against the popular hydrophilic strong anion exchange (hSAX) method of peptide fractionation, which separates peptides based on their charge. U2OS osteosarcoma cancer cells were obtained from the European Collection of Cell Cultures and grown in Dulbecco's Modified Eagle Medium supplemented with 10% fetal bovine serum, 50 units/mL penicillin and 50 μg/mL streptomycin for no more than 30 passages at 37 °C and 5% CO2. For protein extraction, cells were washed twice with cold PBS and then lysed in 0.3–1.0 mL urea lysis buffer (pH 8.5, Roche protease inhibitors, Roche PhosStop). Lysates were sonicated on ice. Proteins were reduced with TCEP for 15 min at room temperature and alkylated with iodoacetamide in the dark for 45 min at room temperature. Lysates were diluted with digest buffer to a final concentration of 4 M urea and digested overnight at 37 °C with endoprotease Lys-C using an enzyme to substrate ratio of 1:50. The digest was diluted further using 100 mM TEAB to a final concentration of 0.8 M urea and subjected to a second digestion using trypsin in a 1:50 ratio. Finally, the digestion was quenched by adding trifluoroacetic acid to a final concentration of 1%. Prior to fractionation, the peptide samples were desalted using C18 Sep-Pak cartridges. Cartridges were first activated with acetonitrile and equilibrated with 50% ACN in water according to the manufacturer's protocol. The sample was loaded and washed 4 times with 500 μL water containing 0.1% TFA. The peptides were eluted into a fresh Eppendorf tube with 800 μL 50% ACN. The peptides were then dried in vacuo. HILIC was performed on a Dionex UltiMate 3000 using a similar protocol to the method described previously. The dried peptides were redissolved in 80% ACN incorporating 0.1% TFA. The peptides were resolved on a TSKgel Amide-80 column using an inverted organic gradient of solvent A and solvent B. The fractions were collected in a deep-well 96-well plate. They were dried and redissolved in 5% formic acid. hSAX was performed on a Dionex UltiMate 3000 using a similar protocol to the hSAX method described previously. Briefly, tryptic peptides were desalted using Sep-Pak-C18 SPE cartridges, dried, and dissolved in 50 mM borate, pH 9.3. They were then loaded on an AS24 strong anion exchange column and fractionated using an
exponential elution gradient from 100% solvent A to 100% solvent B using a flow rate of 250 μL min−1.Fractions were collected into a 96-well plate from 5 to 55 min to give 16 fractions.They were acidified and desalted using Sep-Pak-C18 solid-phase extraction plates.The plates were first wetted with 50% acetonitrile in water, washed and equilibrated with water containing 0.1% TFA.The acidified peptide fractions were loaded onto the plates, washed with water containing 0.1% FA and then eluted with 300 μL 50% aqueous ACN containing 0.1% TFA.The desalted hSAX fractions were dried in vacuo and redissolved in 5% FA prior to RP-LC–MS.The elution programme was 100% buffer A for 10 min, continued by a short gradient of 0–3% of buffer B, followed by a gradient of 3–15% for 19 min, a 15–45% gradient for 15 min and a 45–100% gradient for 2 min.At the end of the gradient the column was kept at 100% buffer B for 7 min and then for 10 min in buffer A.The peptide samples were dissolved in 5% FA.Their concentration was determined using CBQCA assay.RP-LC was performed using a Dionex RSLC nano HPLC.Peptides were injected onto a 0.3 mm id × 5 mm PepMap-C18 pre-column and chromatographed on a 75 μm × 15 cm PepMap-C18.Using the following mobile phases: 2% ACN incorporating 0.1% FA and 80% ACN incorporating 0.1% FA, peptides were resolved using a linear gradient from 5% B to 35% B over 156 min with a constant flow of 200 nL min−1.The peptide eluent flowed into a nano-electrospray emitter at the sampling region of a Q-Exactive Orbitrap mass spectrometer.The electrospray process was initiated by applying a 2.5 kV to liquid junction of the emitter and the data were acquired under the control of Xcalibur in data dependent mode.The MS survey scan was performed using a resolution of 60,000.The dependent HCD-MS2 events were performed at a resolution of 17,500.Precursor ion charge state screening was enabled allowing the rejection of singly charged ions as well as ions with all unassigned charge states.The raw MS data from the Q-Exactive Orbitrap were processed with the MaxQuant software package.Proteins and peptides were identified against the UniProt reference proteome database using the Andromeda search engine .The following search parameters were used: mass deviation of 6 ppm on the precursor and 0.5 Da on the fragment ions; Tryp/P for enzyme specificity; two missed cleavages.Carbamidomethylation on cysteine was set as a fixed modification.Oxidation on methionine; phosphorylation on serine, threonine, and tyrosine; hydroxylation on proline and acetylation at the protein N-terminus were set as variable modifications.Thresholds for the identification of phosphopeptides were Delta Score = 6 and Andromeda score = 40.The false discovery rate was set to 5% for positive identification of proteins, peptides, and phosphorylation sites.Most of the subsequent data analysis was done in R version 3.1.3 using Rstudio 0.98.1091 and the package ggplot2 ; the sequence coverage analysis was done using Perseus 1.5.1.6 and the GO-terms enrichment analysis using the Cytoscape app BiNGO.In this study we compared two 2D-LC setups, i.e., HILIC–RP-LC/MS and hSAX–RP-LC/MS, using unfractionated cell lysates from both cultured mammalian cells and from nematodes.To facilitate a meaningful comparison of these two approaches, we took into consideration the differences in scale and practical implementation of both techniques, including sensitivity levels, system volumes/flow rates and fraction collection.We note, for example, that it is not 
possible to use the same amount of starting material for each method without either diluting the sample, or overloading one or other of the systems.Peptide fractionation using hSAX provides good separation when loading relatively low amounts of material.However, this in our experience is below the maximum practical loading capacity of hSAX, allowing us to increase the amount of peptides injected to limit sample dilution.In contrast, HILIC has a maximum loading capacity for peptides in the order of milligrams, while a minimum of ∼500 μg is required to achieve reasonable separation.Given this intrinsic difference in loading capacities for hSAX and HILIC, in the following experiments to allow us to load equal amounts of material on both set-ups, we chose a concentration of 500 μg that was near the lower limit for HILIC separation to avoid overloading the capacity of the hSAX system.To ensure robustness and stability of the RP-LC–MS analysis we have also injected 1 μg of each fraction on the RP-C18 column, determined using a fluorescent assay and used the same standard RP-LC–MS method in each case.To assess the resolution of chromatographic separation by HILIC and hSAX we have taken the approach of measuring the number of peptides that are only identified in a single fraction and measuring the degree of overlap between adjacent fractions.Higher resolution is obtained when a given peptide is only present in one fraction.Thus, for both HILIC and hSAX we compared the number of peptides identified in a single fraction, two fractions and so on and the results are summarized in Fig. 2.This shows a slightly superior resolution for HILIC where >70% of peptides were observed in a single fraction.Next, we compared to what extent the separation properties of HILIC and hSAX were orthogonal with RP-LC.Both methods show good orthogonality with RP-LC as can be seen from the distribution of peptide intensities across the RP-LC chromatogram for HILIC fractionated peptides and hSAX fractionated peptides.From this figure, we can see that there is a broad distribution of ions across the retention time resulting in a wide separation of peptides across the 2D-space.In addition, we note that while both methods offer good orthogonality they are not identical.This suggests that each technique may have intrinsic specificities that would be relevant to their use in proteomics workflows.Not surprisingly, incorporating either the HILIC, or hSAX methods into the MS workflow allowed a substantial increase in the depth of the proteome measured, in comparison with using RP-LC alone.However, as expected, based on the different physico-chemical properties used to fractionate the peptides, HILIC and hSAX favour different subsets of peptides and proteins.For example, when analyzing extracts of U2OS cells, both set-ups allowed the identification of >9,500 proteins, with hSAX identifying 9,935 proteins and HILIC identifying 9,612 proteins,.We note that even though here hSAX identifies slightly more proteins, there is still a subset of specific proteins that are only identified in the HILIC–RP-LC experiment.Interestingly, this HILIC-specific group mainly corresponds to proteins that are identified by post-translationally modified peptides.As discussed further below, this highlights a specific advantage of using HILIC when the identification of PTM-modified proteins is highly relevant to the biological experiment involved.When looking at the total number of peptides identified, as opposed to proteins, hSAX outperforms HILIC, here 
identifying more total peptides.However, even though hSAX identified substantially more peptides than HILIC, there is still a subset of peptides that were exclusively identified by HILIC, corresponding predominantly to hydrophilic and/or heavily modified peptides.Despite the higher overall number of peptides identified in the hSAX–RP-LC setup, it is significant that this does not result in a dramatic increase in the average protein sequence coverage from that measured by HILIC-RP.Instead we observe that the two 2D-LC techniques are on par with each other, with ∼27% average sequence coverage for proteins identified by either hSAX–RP-LC, or HILIC–RP-LC.It should be noted that in this study we have specifically analyzed peptides resulting from the double digestion of the proteome with trypsin + Lys-C, i.e., essentially tryptic peptides.These peptides have the advantage of possessing a basic residue at their C-terminus, which facilitates ionization under the conditions of online RP-LC and aids efficient fragmentation using collision induced dissociation.However, amongst the set of tryptic peptides generated, a significant proportion are too short to be identified reliably by LC–MS/MS based methods .One approach to increase the average protein sequence coverage further could be to employ parallel digestions using several proteases with different cleavage specificities, subsequently combining the results.This approach was reported recently to result in a significant increase in sequence coverage, which is further improved by using different activation methods during the tandem MS experiment .The multiple protease approach will result in peptides that are heterogeneous with regards to the position of basic residues and will not therefore be ideal for CID.For example, peptides that contain internal basic residues will give rise to internal fragments that are usually unassigned by current database search algorithms and their identification would benefit from using alternative activation techniques, such as electron transfer dissociation and more recently ultraviolet photodissociation.As demonstrated above, in addition to a major overlap, hSAX and HILIC favour detection of different subsets of peptides and proteins.To investigate the natures of the differences in protein identifications, we employed Gene Ontology analysis.To do this, the protein lists were submitted for statistical testing to identify the functional categories of enriched genes defined by Gene Ontology using the BiNGO app from Cytoscape as well as using DAVID .The results reveal that hSAX clearly enriches for specific classes of protein sequence features, especially different types of the zinc finger regions C2H2.Amongst the GO-molecular function terms enriched are ion binding, DNA binding and metal binding.Fig. 
4D shows the Cytoscape GO term networks, highlighting biological processes that are significantly enriched in the hSAX protein list. The full results of the GO analysis are shown in Supplementary Table 1. Interestingly, a GO term analysis on a similar number of proteins specifically detected in the HILIC fractions showed no significant enrichment of either sequence motifs or GO terms associated with function. This is consistent with the fact that HILIC uses polarity/hydrophilicity to fractionate peptides, a property that shows little or no specificity for functional classes of proteins. In contrast, hSAX will preferentially enrich classes of proteins that contain highly charged regions, such as nucleic acid binding proteins. We infer that HILIC displays minimal bias relating to GO terms beyond any intrinsic sampling bias inherent to the extract preparation methods. Next, we investigated possible reasons that could explain the lower numbers of peptides identified by HILIC-RP. We started by examining the number of successful peptide identifications per fraction for each of the 2D-LC set-ups. This shows a marked decrease in the number of peptides identified in the later fractions of HILIC. In fact, there is a gradual decrease in the number of peptides identified from fraction 9 to fraction 16, with successful peptide identifications made early in the RP-LC gradient, in keeping with the increased hydrophilic character of these peptides. In contrast, the number of peptide identifications is uniformly distributed across the hSAX fractions and across the RP-LC chromatogram, suggesting that most or all of the hSAX fractions are similar in terms of their hydrophobicity. Analysis of the percentage of successful MS2 identifications across fractions in each experiment shows a dramatic decrease in the number of successful MS2 identifications in HILIC, as compared with hSAX fractionation. A possible explanation is that the later HILIC fractions are largely empty, with very few peptides that can be selected for MS/MS. However, this is not the case, as shown by the numbers of tandem MS spectra acquired across all of the HILIC fractions, which are similar to the numbers of spectra acquired for the hSAX fractions. This shows that the total number of MS/MS spectra acquired is relatively constant across the HILIC fractions. When, for each HILIC and hSAX fraction, the total intensities from the raw chromatogram and from the successfully sequenced spectra are plotted side by side, it becomes apparent that while the later HILIC fractions do appear to be of lower complexity than the earlier ones, they also yield relatively fewer successfully sequenced peptides. We conclude therefore that it is the percentage of spectra leading to successful peptide identifications that has dropped in the later fractions of HILIC. Several factors may contribute to the observed decrease in successful peptide assignments from the spectra recorded in the later HILIC fractions. First, these later fractions may be preferentially enriched in peptides containing one or more post-translational modifications that we have not included in our database searches, so that we were blind to these peptides. For example, HILIC has been reported to successfully enrich for O-GlcNAc-containing peptides and sialic acid-containing glycopeptides, amongst other sugars, which were not used as variable modifications when interrogating
the database in this study .Similarly, these peptides may carry other known modifications that were not searched for and/or rare, or even novel, modifications, which may be unknown to us.Therefore, these peptides could be ideal targets for future analysis by de novo sequencing, rather than seeking to identify them by database matching.Second, there may be a decline in the quality of the spectra in the later HILIC fractions, reducing the number of spectra that are good enough for successful peptide identification.Successful peptide identification can be achieved only when product ions from a complete or nearly complete distribution of amide backbone cleavages are observed in the corresponding MS/MS spectrum.This could arise if these fractions are enriched in peptides that are modified in a way that alters either their behaviour, or fragmentation pattern, when subjected to HCD.Briefly, in this process, peptides that are protonated more or less randomly on backbone amide nitrogen atoms are collided with an inert gas.Imparted kinetic energy is converted to vibrational energy, which is then rapidly distributed throughout all covalent bonds in the peptide.Fragment ions are formed when the internal energy of the ion exceeds the activation barrier required for a particular bond cleavage.Fragmentation of protonated amide bonds affords a series of complementary product ions of types b and y , which allow assignment of a peptide sequence to a precursor ion.In this case the peptides in later HILIC fractions may not produce ideal fragmentation under the CID regime and hence not yield assignable MS2 spectra.For example, they may be heavily modified, by carrying several phosphate groups, or other labile groups, that readily dissociate by a lower energy pathway than that involved in the cleavage of the amide linkage, thus reducing the extent of backbone cleavages and so making the spectra difficult to assign.For example, in the gas phase, the phosphate competes with the peptide backbone as a preferred site of protonation and consequently, after collisional activation, undergoes nucleophilic displacement by a neighboring amide carbonyl group.The resulting product ions often constitute ≥85% of the fragment ions observed under the low-energy CID conditions.Identification of such peptides could benefit from using alternative activation methods, such as electron transfer dissociation , which results in backbone cleavage even in the presence of labile PTMs.This is due to the fact that ETD, like its predecessor ECD, is independent of amide bond protonation and occurs on a shorter time scale compared with internal energy distribution so that heavily modified peptides fragment more or less randomly along the peptide backbone and are easily sequenced.It is also possible that the unassigned peptides are highly charged, so that when subjected to CID they give rise to MS2 spectra that are too complicated for reliable database searching and identification.The presence of multiple basic residues in the sequence inhibits random protonation along the peptide backbone and thus reduces the extent of backbone cleavage, which is commonly accepted to occur predominantly through charge-directed pathways .Again, exploring other activation methods could be beneficial.For example, Coon and co-workers have reported that highly charged species gave more useful sequence information under ETD while lower charged species gave more successful assignments under the CID regime leading to the introduction of decision tree based proteomics 
to improve the sequence coverage of the proteome .We note that in Fig. 5E, the hSAX fractions have a relatively constant distribution of charge states which are consistent with them showing more constant MS2 assignments throughout the fractions in contrast with the unequal distribution of ion charges across the HILIC fractions.Consistent with the possibilities discussed above, we indeed observe more phosphorylated peptides in the later HILIC fractions, where we have increased the polarity of the mobile phase and reduced its organic content.Analysis of extracts prepared from both human cell lines and nematodes shows a consistent trend, with ∼30% of the peptides in the latter fractions having hydrophilic modifications, such as phosphorylation and/or proline hydroxylation.In summary, we observe a gradient of peptide identification efficiency across the HILIC fractions that may reflect the preferential enrichment in the later fractions of classes of hydrophilic peptides that are currently difficult to identify efficiently using conventional database search algorithms.Given the diversity of physico-chemical properties of proteins and their post-translationally modified forms, it is likely that there is no ‘one size fits all’ fractionation method that will allow perfectly efficient detection and measurement of all proteins and peptides in a single experimental setup.If the goal of a given proteomic study is to obtain the most comprehensive measurement of all forms of proteins in a cell, tissue or organism, then it is likely that more than one analytical technique will be required to maximize coverage.There are available now multiple chromatography setups that can be linked with tandem MS analyses and in this study we have compared specifically the performance of combining conventional RP-LC–MS with either HILIC, or hSAX, respectively.Both methods allowed an increase in the depth of proteome coverage as opposed to using RP-LC alone.In addition, the data show that both methods resulted in approximately equal numbers of protein identifications with similar average sequence coverage; despite the higher number of total peptide identifications obtained using hSAX as opposed to HILIC.Overall the data in this study show that hSAX is highly orthogonal with RP-LC and can be easily applied in large-scale proteomics, providing deep proteome analysis with good sequence coverage.The data also show that hSAX has slightly lower resolution than HILIC, in keeping with recent reports by Trost and co-workers , who have shown ∼55% of peptides eluting in one fraction when using hSAX, as compared with 69% for RP-LC.We also find that hSAX displays some bias towards preferential enrichment of peptides from specific classes of proteins, particularly those with highly charged domains.HILIC provides a robust and reproducible separation method for high throughput proteomics.Like hSAX, it helps to increase the depth of the proteome detected and is particularly useful in enhancing the detection of a subset of proteins that may otherwise be underrepresented, especially including proteins with post translational modifications.This can be particularly useful for biological experiments where it is important to detect the roles of specific hydrophilic PTMs, such as phosphorylation and proline hydroxylation, especially when it is not practical to include PTM-enrichment strategies in the experimental workflow.It is likely that the performance of HILIC can be improved even further.For example, in this study, we observed undersampling 
of peptides in the earlier HILIC fractions.By analyzing the hydrophobic portion of the HILIC chromatogram, using standard online RP-LC–MS, no peptides were detected eluting for ∼40 min.As the organic content of the mobile phase increased, peptides of comparable hydrophobicity were then sprayed over a short time period, likely overloading the tandem MS detection events and thus reducing the overall numbers of peptides detected.A potential way to improve performance would thus be to modify the RP-LC–MS gradient according to the hydrophobicity of the HILIC fractions, hence allowing earlier fractions to be analyzed using a shallower gradient that starts with higher organic content, potentially leading to a greater number of peptide and protein identifications. | Despite many recent advances in instrumentation, the sheer complexity of biological samples remains a major challenge in large-scale proteomics experiments, reflecting both the large number of protein isoforms and the wide dynamic range of their expression levels. However, while the dynamic range of expression levels for different components of the proteome is estimated to be ∼107-8, the equivalent dynamic range of LC-MS is currently limited to ∼106. Sample pre-fractionation has therefore become routinely used in large-scale proteomics to reduce sample complexity during MS analysis and thus alleviate the problem of ion suppression and undersampling. There is currently a wide range of chromatographic techniques that can be applied as a first dimension separation. Here, we systematically evaluated the use of hydrophilic interaction liquid chromatography (HILIC), in comparison with hSAX, as a first dimension for peptide fractionation in a bottom-up proteomics workflow. The data indicate that in addition to its role as a useful pre-enrichment method for PTM analysis, HILIC can provide a robust, orthogonal and high-resolution method for increasing the depth of proteome coverage in large-scale proteomics experiments. The data also indicate that the choice of using either HILIC, hSAX, or other methods, is best made taking into account the specific types of biological analyses being performed. |
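A minimal sketch of the fraction-overlap resolution metric used in the article above (the percentage of peptides confined to a single first-dimension fraction) is given below. It assumes a generic peptide-identification table with sequence and fraction columns, such as could be derived from a MaxQuant evidence export; the column names and the small synthetic table are illustrative only, not the study's actual output.

```python
# Sketch of the single-fraction resolution metric described in the text above.
# The DataFrame stands in for a peptide identification table; the column names
# and values are assumptions for illustration only.
import pandas as pd

ids = pd.DataFrame({
    "sequence": ["PEPTIDEA", "PEPTIDEA", "PEPTIDEB", "PEPTIDEC", "PEPTIDEC", "PEPTIDED"],
    "fraction": [3, 4, 7, 10, 10, 12],
})

# In how many distinct first-dimension fractions does each peptide appear?
fractions_per_peptide = ids.groupby("sequence")["fraction"].nunique()

# Distribution over 1, 2, 3, ... fractions and the single-fraction percentage.
distribution = fractions_per_peptide.value_counts().sort_index()
single_fraction_pct = 100 * distribution.get(1, 0) / len(fractions_per_peptide)

print(distribution)
print(f"peptides confined to a single fraction: {single_fraction_pct:.1f}%")
```

Applied to real HILIC and hSAX identification tables, the same few lines reproduce the kind of comparison summarized in the article's Fig. 2.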
336 | Multivariate test based on Hotelling's trace with application to crime rates | Crime assessment is of utmost importance given the high level of crime in the modern world, which results from technological advancements and population growth. Crime, regarded as a strain on societies across the continents, causes people and establishments to have less confidence in the security system of a country. Hence, it disrupts commercial standards and normal human life, and is often influenced by socio-economic issues. Crime is also regarded as a menace to the political and socio-economic security of any country and a foremost factor linked to under-development, because it dampens both local and foreign investment, decreases the worth of human lives, damages relations between inhabitants and the state, and thus undermines democracy, the rule of law and the ability of any country to encourage growth. People who break the law habitually choose to do so, and hence these socio-economic issues become vital in understanding how to reduce crime in any society. According to Louis et al., as cited in Gulumbe et al., crime has always been an upsetting menace to normal daily life, property and legal authority, and a constant problem that has bedeviled human existence. Nigeria is regarded as one of the countries in western Africa with the most disturbing crime rates, where cases of burglary, rape, mugging, car theft, highway/armed robbery and internet fraud have grown exponentially due to a high rate of poverty. In the last few decades, as reported by the CLEEN Foundation, criminal activities in Nigeria have increased, as several studies and reports have confirmed. Among studies on crime analysis, Pepper illustrated the capability of several regression models to predict city crime rates using a panel dataset of crime rates from 1980 to 2004. In that study, a comparison of prediction performance between a homogeneous model and its heterogeneous counterpart was carried out using the city-level panel data provided by CLAJ. The findings revealed the brittleness of the prediction exercise: ostensibly minor changes to a model can lead to different qualitative forecasts, and models which appeared to provide sound predictions in some situations performed poorly in others. The study findings also showed that the naïve random walk model, especially for short-run forecast horizons, performed better than the linear time series model. Gulumbe et al.
analyzed crime datasets on eight major crime types reported to the central police command in Katsina State for the period 2006–2008. The dataset used in the study consists of assault, wounding, armed robbery, grievous hurt, burglary, rape, auto-theft, and stealing. They employed principal component analysis and correlation analysis to describe the relationships between the different types of crime and the distribution of the crime types in the state with respect to the local government areas. The results showed significant relationships between stealing, armed robbery and auto-theft. The local government area with the lowest crime rate was MSW, while the KTN local government area had the highest crime rate. The study findings also showed a high prevalence of armed robbery cases in the DMS and rape cases in the JBA local government areas of the state. The PCA result showed that four components, which described approximately 79% of the total variability in the dataset, were retained. Ansari et al. presented a study on the continuing trends of violent and property crimes in India and also examined whether India's crime trend follows the global crime trend, particularly the decreasing drift in Western Europe and the United States of America. They studied the differences and similarities in long-term trends between the different types of crime by examining the turning points, troughs and peaks. The dataset used in the research was obtained from the crime statistics in India, an annual publication of the Bureau of India. The study findings showed that the rates of housebreaking, rioting, armed robbery, theft, and murder followed a decreasing trend, while the rate of rape showed a growing trend between 1971 and 2011. They stated that the only category of crime in India which followed the global crime trend was homicide. Oguntunde et al. presented a study on crime patterns and crime rates in the ten states with the highest number of crimes in Nigeria. The correlation analysis revealed that relationships do exist among the various crime types in the various states. The study also revealed a decreasing trend in the various crimes across the states within the study period. Furthermore, authors such as Omotor, Odumosu, Akpotu and Jike, Shopeju, Usman et al.
and Kunnuji have carried out various crime studies in Nigeria, but none of these studies focused on a comparison of crime between the two regions of Nigeria using a known statistical procedure, in order to advise the appropriate authorities on where to channel the limited resources in the war against crime. Hence, this study is aimed at giving adequate answers to the following questions: (i) Is there a significant difference in crime rates between the two regions with respect to the crimes used in the study? (ii) Which region has the highest inmate population in the Nigeria prisons? (iii) Which religious group has the highest number of inmates in the Nigeria prisons? (iv) Which age group in the country has the highest number of inmates in the Nigeria prisons? The remainder of this research paper is organized as follows: Section 2 outlines the materials and the multivariate test method based on Hotelling's trace. The study findings and discussion are presented in Section 3, while Section 4 concludes the research paper. Secondary data comprising 37 states from the 2012 annual abstract of statistics were used for this study to determine the region with the highest level of crime in Nigeria. The extracted crime data on armed-robbery attacks, stolen vehicle crimes, inmate population in the Nigeria prisons, and prison admission by age group and religion for the period 2008–2012 were used for this study. The study data were divided into two groups: the southern region with 17 states and the northern region with 20 states. The SPSS version 25 statistical software was used for all the graphs and data analysis in this study. Hotelling's T2 test is the multivariate extension of Student's t-test, used in multivariate analysis to test the difference between the mean vectors of different groups. Here, we are interested in testing that the population mean vectors are equal against the general alternative that the mean vectors are not equal with respect to armed-robbery attacks, stolen vehicle crimes and inmate populations in the Nigeria prisons. In this section, the Hotelling's T-squared statistic is computed for armed-robbery attacks, stolen vehicle crimes and inmate population in the Nigeria prisons. The data on these crimes from seventeen states in the southern region and twenty states in the northern region were used for this analysis. The Hotelling's trace statistic for armed-robbery attacks is provided in Table 1. We observed from the table that the trace value is 0.463 with a p-value of 0.31. The Hotelling's trace statistic for stolen vehicle crimes is provided in Table 2. We observed from the table that the trace value is 0.306 with a p-value of 0.66. The Hotelling's trace statistic for inmate population in the Nigeria prisons is provided in Table 3. We observed from the table that the trace value is 0.094 with a p-value of 0.714. The percentage of prison admissions was computed to determine which religion and age group has the highest number of criminals on record in Nigeria within the period used in this study. The pie chart in Fig.
1 depicts the percentage of prison admission classified by religion and age group. It was observed that Christians and individuals within the 26 – 50 years age group have the highest percentage of individuals in the prisons across the country. The study results showed that the mean armed-robbery attacks for the southern region were significantly greater than those for the northern region, while the mean stolen vehicle crimes and inmate population in the Nigeria prisons for the southern region were not found to be significantly different from those for the northern region. Furthermore, the percentage estimation of prison admission by religion showed that Christians, followed by Muslims, contributed the largest shares of inmates in the Nigeria prisons across the country. The remaining percentage comprised traditionalists, atheists and others without any identified religion. It was also found that the age groups with the most inmates in the Nigeria prisons are 26 – 50 years, 21 – 25 years and 16 – 20 years, having contributed 57.70%, 21.68% and 16.44%, respectively. The contributions by individuals within 0 – 15 years and above 50 years were also found to be 1.20% and 2.98%, respectively. | In this paper, a comparison of crime rates was made between the southern and northern regions of Nigeria using the multivariate test based on Hotelling's trace. Data on armed-robbery attacks, stolen vehicle crimes and inmate population in the prisons from thirty-seven states classified into southern and northern regions were used in this research study. The results showed that armed-robbery attacks are more prevalent in the southern region than in the northern region, while there was no significant difference in the mean stolen vehicle crimes and prison populations between the two regions. The results also showed that Christians and individuals in the age group of 26 – 50 years had the highest percentage contribution to prison admission in Nigeria.
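As an illustration of the two-sample Hotelling's test underlying the regional comparison above, the sketch below computes T-squared and its F approximation directly from two data matrices. It is a generic implementation with made-up numbers, not the SPSS routine or the data used in the study; note that SPSS reports Hotelling's trace, which for two groups works out to T-squared divided by (n1 + n2 − 2).

```python
# Generic two-sample Hotelling's T-squared sketch with synthetic data standing
# in for, e.g., yearly armed-robbery counts per state in the two regions.
import numpy as np
from scipy.stats import f


def hotelling_t2(X, Y):
    n1, p = X.shape
    n2, _ = Y.shape
    diff = X.mean(axis=0) - Y.mean(axis=0)
    # Pooled covariance matrix of the two samples.
    S = ((n1 - 1) * np.cov(X, rowvar=False) +
         (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S, diff)
    # Transform T-squared to an F statistic with (p, n1 + n2 - p - 1) d.f.
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    p_value = f.sf(f_stat, p, n1 + n2 - p - 1)
    return t2, f_stat, p_value


# Illustrative arrays: 17 southern and 20 northern "states", 5 variables each.
rng = np.random.default_rng(1)
south = rng.poisson(30, size=(17, 5))
north = rng.poisson(28, size=(20, 5))
print(hotelling_t2(south, north))
```

With the study's actual state-by-crime matrices in place of the synthetic arrays, the returned p-values would be directly comparable to those quoted from Tables 1–3 above.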
337 | Flood hazard reduction from automatically applied landscaping measures in RiverScape, a Python package coupled to a two-dimensional flow model | Flood risk reduction ranked high on the political agenda over the last two decades, which is warranted given the high and increasing societal cost of flooding, the anticipated ongoing climate change, and economic developments in fluvial and deltaic areas.Here, flood risk is defined as the inundation probability times the inundation effect.The European Flood Directive states that it is feasible and desirable to reduce the risk of adverse consequences associated with floods, and obliges member states to create flood hazard and risk maps, and a flood risk management plan for the implementation.Flood risk management can be summarized by strategy, i.e. protection against floods, living with floods, and retreat to flood-safe areas, and timing of the action relative to the flood event, i.e. pre-flood preparedness, operational flood management and post-flood response.Consequently, river managers are confronted with large challenges in the planning of measures in and around floodplains of embanked alluvial rivers, not only due to the number of stakeholders involved, but also due to the long lasting effect on the landscape, economic development and riparian ecosystems.Flood hazard management at the river basin scale consists of storing water in the headwater of the basin, retaining water instream in the middle parts and discharging the water in the downstream reaches.This is because the propagation of a flood wave, or flood wave celerity, increases with the flow velocity of the water and with the fraction of the discharge conveyed by the main channel.For example, the narrowing of the floodplains by embankments and decreasing the flow resistance of the floodplain vegetation increases the flood wave celerity, which adversely affects the flood hazard downstream.Here we present a flexible tool for quantifying effects and effectiveness of common measures to lower the flood risk with the aim to support stakeholder discussions with evidence-based facts and figures.We develop and apply the tool to a specific case of a lowland deltaic floodplain at the downstream end of the river Rhine, which is a medium-sized river draining part of North-West Europe.Typical measures at the scale of a floodplain section have in common that they increase the water storage, and the conveyance capacity during floods.Two types of measures are considered here to lower the flood hazard, more specifically, the probability of flooding the embanked areas.The first type lowers the flood stage during peak discharges by creating more space for the river within the embankments.The second type comprises raising the main embankment, which enables higher water levels.The flood hazard reductions of these measures have been reported previously, and are routinely evaluated in operational river management.The typical workflow comprises a geodatabase with spatial information that is converted to input data for a hydrodynamic model.Experts, together with stakeholders, choose what measure will be implemented, and manual adjustments are made to the geodatabase and the derived hydrodynamic model.Expert judgment drives this process, which is limited by the amount of manual work required to update the hydrodynamic model with a realistic bathymetry and land cover at the spatial extent of the measure.These processes can take years for simple measures, and more than a decade for complicated projects due to 
the complex and iterative nature of joint decision making.Decision support systems for these long term planning projects in the preparedness phase are scarce, contrary to DSSs for operational flood management.The options for flood hazard management for the lower reaches of the River Rhine in the Netherlands were modelled for individual measures, and the water level lowering at the river axis were made available in a graphical user interface.Interactive planning of some measures was possible using geospatial software.Application at the river-reach scale with realistic measures, however, is tedious and impractical, showing a need for automated procedures to generate these measures in larger areas.Measures can be applied with different gradations and spatial extents, to which we will refer to as ‘intensities of application’.The units of this intensity vary, e.g. small and large side channels, or relocation of embankments over short or large distances.Nonetheless, each measure lowers the flood hazard and their implementation requires material displacement.Our main objectives were to develop a tool to automatically position and parameterize seven flood hazard reduction measures and evaluate these measures on hydrodynamic effects plus the required volume of displaced material.These aims are limited to the physical domain; evaluation on costs was outside the scope of this study, even though it is closely related to transported material.We developed the RiverScape package in Python and applied it to the main distributary of the River Rhine.The results are followed by discussion of the applicability to other alluvial rivers and future perspectives to incorporate values other than material displacement.We developed RiverScape, a Python package, which uses map algebra functions from PCRaster.RiverScape can position and parameterize landscaping measures and update the input data for the two-dimensional flow model Delft3D Flexible Mesh, which is also open source.It requires input on hydrodynamic boundary conditions, a geodatabase with layers of river attributes, and settings to determine the intensity of application for each measure."Once the measures are known, we updated the 2D flow model's input in order to determine the flood hazard reduction and the flow velocities.Here, we present the methods implemented.The case study area is located in the Rhine delta, which consists of three distributaries: the Rivers Waal, Nederrijn and IJssel.We selected the River Waal, which is the main distributary of the River Rhine in the Netherlands.The three main concerns here are flood risk in view of global change, navigability and ecosystem functioning.The study area spans an 94-km-long river reach with an average water surface gradient of 0.10 m/km.The total area of the embanked floodplains amounts to 132 km2.The main channel is around 250 m wide and fixed by groynes.The cross-sectional width between the primary embankments varies between 0.5 and 2.6 km.Meadows dominate the land cover, but recent nature rehabilitation programs led to increased areas with herbaceous vegetation, shrubs and forest.The design discharge for the River Waal is now set to 10,165 m3s-1, which has an average return period of 1250 years.Such a discharge is expected to give a 3.99 m water level above ordnance datum at the downstream end of the study area.The main channel functions as the primary shipping route between the port of Rotterdam and major industrial areas in Germany.The main channel position is fixed in place by groynes, which were 
partly lowered during the ‘Room for the River’ project.In the future, the design discharge will be combined with a risk-based approach that takes the potential damage and casualties within the protected areas into account.The spatial data describing the major rivers in The Netherlands are stored in an ArcGIS file geodatabase according to the Baseline data protocol, version 5.This protocol, specific for the Netherlands, describes the layers in the geodatabase and specifies the required attributes for each of the layers in terms of names, and properties.Baseline schematizations include layers with land cover as a polygon layer of ecotopes, hydrodynamic roughness as point, line, and polygon layers, minor embankments, groynes, and main embankments as 3D lines consisting of routes and events, and river geometry describing the extent of main channel, groyne fields, floodplains as polygons.We adhered to the Baseline schematization in order to allow comparison between our results and existing projects, but in principle any other method of input of the aforementioned data can be used in combination with RiverScape.All spatial river attributes are represented as vector layers, except bathymetry, which is represented as a Triangular Irregular Network for the main channel and the floodplain.The TIN represents the ground level and does not include the groynes and minor embankments.We used the ‘rijn-beno14_5-v2’ schematization of the Rhine branches, which describes the layout after the finalization of the measures of the ‘Room for the River’ program in December 2015.In the areas protected from flooding by the embankments additional data sources were required as Baseline only covers the embanked floodplains and the main channel.The national LiDAR-based Digital Terrain Model provided terrain elevation data.This gridded DTM has a 0.5 m resolution, and a vertical error less than 5 cm in open terrain.Building locations were derived from the national database of addresses and buildings.The Ministry of Infrastructure and Environment provided the computational mesh of the flow model.It consisted of 24 small quadrilateral cells across the main channel and groyne field, sized around 40 by 20 m.These are connected by triangular cells to large quadrilateral cells in the floodplains, sized around 80 by 80 m. 
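One practical detail when coupling a package such as RiverScape to an unstructured mesh like this is mapping between flow cells and regular raster cells. The sketch below shows one way a nearest-neighbour lookup from raster cell centres to flow-cell centres could be built with a k-d tree; the coordinates are synthetic and the whole snippet is an illustration of the general idea rather than RiverScape's actual implementation. In a real DFM setup the face-centre coordinates would be read from the UGRID netcdf output (variable names such as mesh2d_face_x and mesh2d_face_y are an assumption here, not verified).

```python
# Sketch: nearest-neighbour lookup of unstructured flow-cell IDs for a regular
# raster, using synthetic coordinates in place of real UGRID face centres.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
face_xy = rng.uniform(0, 10_000, size=(5_000, 2))   # stand-in flow-cell centres (m)

# Regular 25 m raster cell centres covering the same extent.
xs = np.arange(12.5, 10_000, 25.0)
ys = np.arange(12.5, 10_000, 25.0)
xx, yy = np.meshgrid(xs, ys)
raster_xy = np.column_stack([xx.ravel(), yy.ravel()])

# For every raster cell, store the ID (index) of the nearest flow cell.
tree = cKDTree(face_xy)
_, cell_id = tree.query(raster_xy)
cell_id_map = cell_id.reshape(xx.shape)

print(cell_id_map.shape, cell_id_map[:2, :5])
```

Such a cell-ID raster only needs to be built once per mesh, after which model output stored per flow cell can be looked up for any raster cell by simple indexing.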
No mesh refinement was implemented around the individual groynes to limit the computation time.We extended this mesh with triangular cells for the areas protected from flooding by embankments.Discharge and water level time series between 1989 and 2014 were obtained from the gauging station at Tiel.RiverScape was coupled to a 2D hydrodynamic model, and required a calibrated model as a starting point.In this study, we used DFM, the open source hydrodynamic model that is being developed and maintained by Deltares.The computational core of DFM solves the shallow water equations based on the finite-volume methods on an unstructured grid."DFM's computational mesh and output are stored in netcdf files that follow the UGRID conventions for specifying the topology of unstructured and flexible grids.The computational mesh of the study area consisted of 120,000 cells, of which 71,000 were active with the current location of the major embankments.The spatial DFM input consists of five components in either netcdf or ASCII format.Firstly, the ground level of the bathymetry is derived from the TIN in the Baseline geodatabase, which is converted to netcdf format using a predefined computational mesh.Secondly, linear terrain features, such as groynes, minor embankments, and steep terrain jumps, are defined as ASCII-formatted line elements.These linear features cause additional energy loss when submerged.They are excluded from the bathymetry to limit the number of computational cells and the associated long computation time.These linear elements are called fixed weirs and contain the coordinates, the height difference on the left and right side, and the width and slope of the linear feature.Thirdly, the hydrodynamic roughness is based on trachytopes: spatially-distributed, and stage-dependent roughness values.Trachytopes are based on points for single trees, on lines for hedge rows, and on polygons for land cover derived from the ecotope map.Trachytopes in the main channel are adjusted in the model calibration.For each flow cell in the computational mesh the fractions of each trachytope is given, e.g. 0.7 for trachytope X and 0.3 for trachytope Y. 
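To make the fractional-trachytope idea concrete, the sketch below derives per-flow-cell roughness-class fractions from two aligned rasters (flow-cell ID and trachytope code), here filled with random stand-in values; the arrays and class codes are illustrative assumptions rather than the actual Baseline/DFM data model.

```python
# Sketch: fractional trachytope areas per flow cell from two aligned rasters.
# 'cell_id' and 'trachytope' stand in for rasterized flow-cell IDs and
# roughness-class codes; the class codes used here are arbitrary examples.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cell_id = rng.integers(0, 5, size=10_000)            # flow-cell ID per 25 m raster cell
trachytope = rng.choice([121, 1201, 1804], 10_000)   # roughness class per raster cell

df = pd.DataFrame({"cell": cell_id, "trachytope": trachytope})
counts = df.groupby(["cell", "trachytope"]).size()

# Fraction of each flow cell's area covered by each roughness class.
fractions = counts / counts.groupby(level="cell").transform("sum")
print(fractions.loc[0])   # e.g. cell 0 -> roughly a third for each class
```

The resulting table has exactly the shape of the fractional trachytope input described in the text (e.g. 0.7 of class X and 0.3 of class Y for a given flow cell).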
Chézy roughness is computed at runtime within DFM using the water depth dependent roughness equations developed by Klopstra et al.Fourthly, obstacles that can not be submerged, such as bridge pillars or houses, are implemented as so-called thin dams.These represent infinitely high obstacles to flow that may consist of lines, or polygons.Finally, dry areas are only represented as polygons that render the contained computational cells inactive, whereas for thin dams the cell remains active and water can flow around the obstacle.We generated the DFM spatial input from the Baseline geodatabase using the Baseline plugin for ArcGIS.Further, we extended the trachytope definitions with missing codes, defined the discharge time series on the upstream boundary, and compiled a rating curve at the downstream boundary.This completed the DFM model setup.RiverScape works on a gridded representation of the river to increase computational speed.The basic data consists of a terrain model, land use, measured water levels, a rating curve, and a functioning hydrodynamic model.This makes application in areas that are more data scarce than the Netherlands feasible.For the study area, vector-based data were available in Baseline, which were rasterized to a 25 m raster resolution, to ensure that the cell area of the rasters was smaller than the cell area of the computational mesh of the 2D flow model.Some subgrid roughness information is lost in this way, because a single flow cell may contain multiple trachytopes.In DFM, this information is maintained as each flow cell may store fractional trachytope areas.Thirteen relevant Baseline layers were rasterized to a common map extent and resolution in PCRaster format using the Geospatial Data Abstraction Library.The second set of attributes, needed for the automatic positioning of measures, gave additional information on the river geometry, such as channel curvature, curve direction, and separate floodplains sections.For example, a good location for embankment relocation is an area with a sharp right turn in the river axis, a narrow floodplain on river right, and a low total value in real estate.Floodplain width calculation was challenging as it posed a one-to-many problem: many points on the main embankment in the outer bend could be connected to a limited set of points on the channel bank, and in the inner bend many channel points could be connected to a single cell on the main embankment.Here we pragmatically calculated the distance from each embankment cell to the channel bank on a line perpendicular to the channel center line.Crossing lines were redirected towards the nearest embankment, while maintaining the highest width value.The radius and turning direction of the river were derived from fitting a circle to the river axis at each axis cell.The location of the center point of the fitted circle in river left or right determines the turning direction.The third set of river attributes represented hydrodynamic characteristics derived from a reference run with the 2D flow model.The discharge was increased in a stepwise manner between low flow and design discharge, and each step was maintained for 4 days to create a stationary flow.We used discharges of 698, 1481, 1713, 2157, 2935, 4966 m3 s-1, which are exceeded 363, 150, 100, 50, 20, and 2 days per year, respectively based on the time series of the Tiel gauging station.The discharge values were derived from percentiles derived from the exceedance percentage in days.The choice for these exceedance values was based 
The choice for these exceedance values was based on their ecological significance as implemented in the classification method for the Dutch ecotope map of the large water bodies. This map is periodically made with a standardized method for management purposes. For example, in areas that are inundated less than two days per year the vegetation is considered unaffected by inundation. The low exceedance values are ecologically relevant for vegetated floodplains, and the high exceedance values are related to the water bodies and side channels. In addition, we ran the model with the design discharge for the River Waal of 10,165 m3 s-1, which provided data on water depth, flow velocity and hydrodynamic roughness, amongst others. Cell IDs were required for fast nearest neighbor interpolation of the data stored in an unstructured mesh to regular rasters. We developed automated procedures to determine the flood hazard reduction potential of seven landscaping measures by adjusting the input of the 2D flow model. Six flood stage lowering measures in the floodplain and groyne field were defined, plus raising of the main embankment. Measure positioning was required to determine suitable locations for roughness smoothing, side channel construction, floodplain lowering, and embankment relocation. Table 3 gives a summary of these methods and settings. Groyne lowering, minor embankment lowering, and main embankment raising do not need positioning as their location is predefined in Baseline. Each of the measures was applied with six intensities of application. We omitted deepening of the main channel as a flood hazard adaptation option, because the River Rhine is already deepening due to reduced sediment input from the basin and the narrowing of the main channel with groynes. Bed degradation between Tiel and the downstream model boundary was approximately 5 mm/y (Fig. 5). Bed erosion negatively affects shipping at non-erodible outcrops, infrastructure and ecology due to lower water levels, and exposes entrenched telecom cables and pipelines. Technically, the implementation would be similar to floodplain lowering. Each of the measures requires the transport of material when implemented in the field to create more space for the river. The volume of moved material does not necessarily equal the added volume available for water. For floodplain smoothing, the volume of moved material is larger than the water volume, as the emergent vegetation also needs to be removed. Conversely, in the case of raising the main embankments, the volume of water is much larger than the volume of soil required for raising. To compare the flood stage reduction effect of the different measures, we calculated the material volume that needs to be moved and the water volume created for each of the measures. Separate volumes were calculated for vegetation (based on stem densities and stem diameters per roughness class and mean volume per vegetation class), for material in groynes and minor embankments (derived from the 3D attributes), and for soil transport (based on the differences between the current and the new DTM).
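The exact bookkeeping is not given in the text; the following sketch illustrates one plausible way to compute the displaced soil volume and the added water volume from the difference between the current and the new DTM on the 25 m raster. All array names are placeholders.

```python
import numpy as np

def displaced_soil_volume(dtm_current, dtm_new, cell_size):
    """Volume of soil to be excavated (m3): sum of positive elevation
    differences between the current and the new DTM times the cell area."""
    lowering = np.clip(dtm_current - dtm_new, 0.0, None)
    return float(np.nansum(lowering)) * cell_size ** 2

def extra_water_volume(dtm_current, dtm_new, water_level, cell_size):
    """Additional room for water (m3) created below a reference water level."""
    depth_now = np.clip(water_level - dtm_current, 0.0, None)
    depth_new = np.clip(water_level - dtm_new, 0.0, None)
    return float(np.nansum(depth_new - depth_now)) * cell_size ** 2

# Hypothetical 25 m rasters:
# soil  = displaced_soil_volume(dtm, dtm_lowered, cell_size=25.0)
# water = extra_water_volume(dtm, dtm_lowered, water_level=design_level, cell_size=25.0)
```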
We limited the evaluation to the physical domain, i.e. the conveyance capacity, using the lowered flood levels, and the storage capacity, by calculating the increased water volume. The logical extension of the evaluation on transported material would be the cost of the measures, but this was outside the scope of this study. Flood wave celerity was not considered, since the study area is close to the river mouth and flood waves are long relative to the study reach length. Side channels were created in a two-step approach, which comprised positioning of the channel centerline, followed by parameterization of the cross-section shape and hydrodynamic roughness. Firstly, we positioned side channels only in wide floodplain sections without side channels currently present. Over each section, the start and end point of a side channel were positioned on the river axis alongside the upstream and downstream end of the section. The centerline of the side channel was determined by the path of least resistance between the start and end point. A high resistance value was assigned to the main channel and the groyne field to force the centerline into the floodplain. A low resistance was given to existing floodplain backwaters, and a resistance based on the distance to the main channel and the main embankment was given to the remaining areas. The upstream end of the side channel was disconnected from the main channel to prevent large morphological changes in the main channel. Secondly, we parameterized the side channels with a trapezoidal shape of which the width, depth, and cross-sectional slope can be set with user-specified values. The new side channel is only defined where its depth is below the current bathymetry, leaving existing lakes largely untouched. For the largest side channel, we set the depth as an offset of 2.5 m below the water level at the river axis exceeded 363 days per year, the width to 75 m, and the bank slope to 1:3. In this study, we implemented a series of six side channels in each of the suitable floodplain sections. The six intensities of application were defined by the depth and width values scaled to 10, 20, 40, 60, 80, and 100 percent of the value used for the largest side channel.
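The text does not show the implementation of this least-resistance routing; the sketch below illustrates the idea with scikit-image's route_through_array, using a hypothetical resistance raster built from the rules described above.

```python
import numpy as np
from skimage.graph import route_through_array

def side_channel_centerline(resistance, start_rc, end_rc):
    """Trace a side-channel centerline as the least-resistance path between
    two (row, col) points on a resistance raster."""
    indices, cost = route_through_array(resistance, start_rc, end_rc,
                                        fully_connected=True, geometric=True)
    return np.array(indices), cost

# Hypothetical resistance raster: very high in the main channel and groyne
# field, low in existing backwaters, elsewhere increasing with distance to
# the main channel and the main embankment (all masks/distances are placeholders).
# resistance = np.where(main_channel | groyne_field, 1e6,
#              np.where(backwater, 1.0, 1.0 + dist_to_channel + dist_to_embankment))
# path, cost = side_channel_centerline(resistance, start_rc=(10, 0), end_rc=(12, 199))
```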
A low vegetation roughness increases the conveyance capacity of the floodplain area, lowering the overall water levels. There is no standard procedure for choosing where to lower the roughness; in practice it is based on the judgement of the river manager and contested by nature developers. We developed a method that optimizes roughness smoothing by selecting the areas where lowering the floodplain roughness is most effective in terms of water level lowering. This is the case where a high specific discharge (q) coincides with a high vegetation roughness, expressed as the Nikuradse equivalent roughness length (k; Fig. 4D). For example, a dense forest at the outflow point of a floodplain section would be a big obstruction to flow. We calculated α, the product of the two fields q and k, and determined its cumulative frequency distribution (cfdα). The score of cfdα at a specific percentile of the distribution was used as a threshold for positioning the roughness smoothing. Areas where α exceeded the percentile score were selected for roughness smoothing. The percentile was calculated as 100 minus a user-specified percentage of the terrestrial floodplain area. For example, floodplain smoothing over 10% of the floodplain area is positioned where the score at the 90th percentile of cfdα is exceeded. The vegetation type at the selected areas was changed into production meadow, the vegetation with the lowest roughness. Intensities of application were set to floodplain smoothing over 1, 5, 10, 25, 50, and 99% of the terrestrial floodplain area. The increasing increment was chosen because of the decreasing effectiveness of this measure, as the current land cover also includes production meadows. Floodplain lowering was positioned using a similar method as for roughness lowering. It is most effective where a high flow velocity coincides with a low water depth under peak discharge, for example in case of flow over a natural levee deposit at the upstream end of a floodplain section. We calculated the product of two fields, denoted as β: the water depth, and the flow velocity field subtracted from the maximum flow velocity at design discharge. The inverse of the flow velocity was chosen to prevent equifinality in the selection. Floodplain lowering was positioned where β exceeded the score at the corresponding percentile of cfdβ, where the percentile equals 100 minus a user-specified percentage. The new terrain elevation was set to the height corresponding to the 50 days per year flood duration. We chose the roughness code for production meadows as the new roughness. This smooth land cover adds to the flood level lowering, but RiverScape is flexible in assigning new codes. Like floodplain smoothing, we increased the intensity of application by applying floodplain lowering over 1, 5, 10, 25, 50, and 99 percent of the terrestrial parts of the floodplain. Within the schematization of the flow model, bathymetry, roughness, and fixed weirs were updated.
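The percentile-based positioning described above can be illustrated with a short sketch; the raster names are placeholders and the code is not the authors' implementation.

```python
import numpy as np

def smoothing_mask(q, k, floodplain, area_pct):
    """Select the `area_pct` percent of the terrestrial floodplain where
    alpha = q * k is highest, i.e. where roughness smoothing is most effective."""
    alpha = q * k
    threshold = np.nanpercentile(alpha[floodplain], 100.0 - area_pct)
    return floodplain & (alpha >= threshold)

# Hypothetical rasters: specific discharge q, Nikuradse roughness length k,
# and a boolean mask of the terrestrial floodplain.
# mask_10pct = smoothing_mask(q, k, floodplain, area_pct=10.0)
# The same construction applies to floodplain lowering with beta instead of alpha.
```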
Embankment relocation can be implemented as a buffer around the current main embankment, but it is more efficient when the embankment is straightened locally, especially with a tortuous embankment shape in top view. Therefore, we relocated the embankment using an alpha shape derived from the embanked area. An alpha shape, also known as the concave hull, is based on a Delaunay triangulation of a point set from which long edges are removed based on the alpha value. The lower the alpha value, the more closely it follows the current embankments, while an infinitely high alpha value gives the convex hull. We increased the intensity by using alpha values of 500, 1000, 2000, 3000, 5000, and 7000 m. We took current built-up areas into account by creating ring dikes around areas with high building costs. Relocation was implemented by adjusting the dry areas. The vegetation type of the new floodplain area was set to production meadow. Minor embankments have been constructed to prevent the inundation of agricultural fields during minor floods, and their crest level varies. Likewise, groynes have varying crest levels, as some have been lowered within the ‘Room for the River’ project to lower flood water levels. Lowering of groynes and minor embankments was implemented using their current locations, as stored in the flow model's input. Groynes and minor embankments are both stored as ‘fixed weirs’ in DFM. Fixed weirs are three-dimensional lines that describe location and crest height for each vertex along the line. In addition to the crest height, each vertex contains information on the cross-sectional shape of the fixed weir: the terrain height on the left and right of the crest, the cross-sectional slope, and the crest width. Lowering can be applied as a percentage of the current height, as an absolute change in height, or by using an external height level, such as a water level that is exceeded a fixed number of days per year. Due to the current differences in crest height, we chose to homogenize the differences by applying an external height. The new height consisted of the minimum of the current crest height and the external level, but the crest height should not be lower than the terrain height left or right of the crest. The intensity of application was increased by lowering to flood durations of 50, 100, 150, 200, 250, and 366 d/y for groynes and to 2, 20, 50, 100, 150, and 366 d/y for minor embankments. The 366 d/y flood duration involved the complete removal of the groynes and minor embankments from the fixed weir input of the 2D flow model. We aimed at the evaluation of the flood stage reduction effects of seven landscaping measures with increasing intensity and their relation to displaced material. Calculation times for rasterizing the input data, calculating the derived information, positioning and parameterization of measures, and updating the flow model input required 0.5, 1, 2, and 1.75 h, respectively, on an i7-6700 3.4 GHz processor using a single thread. The calculation time for updating the flow model input includes the conversion of the updated data to GIS-compatible raster and vector layers. The initial hydrodynamic calculation with the stepwise increase in discharge and the ensemble of realizations both took 24 h. We first describe the measures that resulted from the automated methods and then describe their effects on flood water levels and channel flow velocities. Finally, the results are expressed as a function of the required material displacement volume for the measures and the additional space for flood water due to the measures. The spatial layout of the six flood stage lowering measures is given in Fig. 6 for the downstream section, and in Fig.
S1 in the supporting information on an A3-sized figure for the whole study area.Roughness lowering locations coincide with forested areas when applied to 1% and 5% of the terrestrial part of the floodplain area.Examples include the forest on river left at river kilometer 872.5, on river right at rkm 878.5, and on river left at rkm 889.These forest patches are often located in areas with low specific discharge.At higher intensities of applications, also herbaceous vegetation, single trees, and hedgerows are removed and converted to meadows.Groyne lowering was applied to the 797 individual groynes along the main channel.The current height of the groynes is not the same over the river reach.Between rkm 876 and 886, 914 and 922, and 952 and 960 the groynes are around a meter above the water level exceeded 100 d/y, whereas in the remaining stretches the groynes are slightly below this level.The section between rkm 911 and 928 does not contain groynes in the inner curves as they were converted to longitudinal training dams, which are treated as minor embankments in the hydrodynamic input.Not all groynes were equally affected by groyne lowering, due to the spatial differentiation of the current groyne height.When groynes are lowered to the water level exceeded 250 days per year, the median height difference at the endpoint of the groynes between the crest and the highest groyne toe is still 2.93 m, which is marginally higher than the minimal guaranteed depth of 2.8 m.All of the 223 km of minor embankments were lowered to increasingly long flood durations.The maximum lowering of the crest was limited by the maximum toe height, which represents the ground level of the floodplain.Heights of the minor embankments above ground level vary strongly and they showed a 1.25 m interquartile range.The highest minor embankments are found upstream from rkm 883, especially on the river right.New side channels were planned in 16 out of the 29 wide floodplain sections.Positioning of side channels was comparatively demanding computationally as all floodplain sections were addressed sequentially.The centerlines follow the midpoints between the main embankments and the main channel in sections without water bodies present, e.g. at rkm 895.5 on river left.In curved sections, the center line is drawn towards the inner part of the curve within the floodplain section, and when water is present, the centerline is drawn towards existing water.We positioned one new side channel in the floodplain section on river left around rkm 930.In reality, a side channel was created here as well at a similar position.In contrast to our modeling choices, the side channel was connected to the main channel at the upstream end as well.Floodplain lowering at 1 and 5% of the terrestrial floodplain area mainly affected artificially raised industrial areas, such as the shipyard at rkm 897.5 on river left.At 10–25% natural levee deposits are removed, whereas at 50–99% also the low lying sections are lowered to the water level that is exceeded 50 d/y.It should be noted that the difference between floodplain lowering and groyne lowering is defined in Baseline, which states that the maximum width of minor embankments is 10 m. Wider areas are contained in the bathymetry.This is visible in the lowering pattern as elongated lines, e.g. 
at rkm 922 on river right. Embankment relocation was carried out with six increasing alpha shape values, while existing real estate was taken into account. The larger alpha shapes almost doubled the surface area of the embanked floodplains in the tortuous upstream part and around rkm 923 to 934, where existing villages become islands in the floodplain. The straight sections led to elongated new floodplain areas, such as on both sides of the river between rkm 888 and 898. We derived water levels at the river axis and depth-averaged flow velocities from the 2D flow model. The simulation period was set to three days with a 10,165 m3 s-1 discharge, which ensured that the flow became fully stationary. Wall clock time of a single simulation on a single core was around 4 h. Differences in water level between the reference run and runs with measures gave insight into the flood stage reduction of each measure and each intensity. The downstream boundary condition, a fixed water level creating a backwater effect, led to zero change at rkm 961. Changes in flow velocities from the measures indicate possible adverse morphological effects in the main channel that could hamper navigation. Flow velocity differences at 25 and 75% of the main channel width summarize the potential morphological effects. Roughness smoothing increasingly lowered the water levels with larger extents of application. The effectiveness reduces at larger percentages, with the water level reduction between 1 and 5% almost equal to the reduction from 25 to 50%. The 99% intensity showed a negligible effect compared to 50%. The lowering was equally distributed over the area, with a maximum lowering of 0.2 m. Groyne lowering showed three distinct steps in the lowered profiles at rkm 883, 920, and 958 for all lowering intensities except for the lowering ‘366 d/y’. The three steps coincide with the sections where the groyne height exceeds the 100 d/y exceedance level. The maximum reduction is 0.1 m. The ‘366 d/y’ groyne lowering involves the complete removal of the groynes from the flow model. The resulting 0.25 m reduction serves as a reference for a more natural river with active meandering, which is not feasible for the Waal. Minor embankment lowering reduced water levels when lowered to the 2 or 20 d/y flood durations, with a 0.12 m maximum. Lowering to flood durations of 50, 100, and 150 days did not lead to additional water level reduction, which indicates that the lowest ground levels around the minor embankments have an inundation duration of approximately 20 d/y. Similar to groyne lowering, we completely removed the minor embankments from the flow model input, which was labeled as ‘366 d/y’ for consistency. This led to an additional 0.1 m reduction in predicted flood levels. At 150 d/y all minor embankments are effectively reduced to a terrain jump, because the 150 d/y level is lower than the terrain height. The additional 0.1 m reduction is due to the neglect of the energy loss from the terrain jumps that are in the current bathymetry, but that are lost in the relatively coarse bathymetry model. The real-world implementation would be to alter the terrain in such a way that the downstream slope of the jumps is less than 1 to 7 to avoid flow separation. Side channel construction led to flood water level reductions that increased with increasing cross-sectional area. Successive increases in intensity did not lead to equal steps in the lowering. For example, the side channel around rkm 933 on river right showed a 0.06 m lowering from 40 to 60% intensity, which is larger
than for the other steps in intensity.This nonlinearity resulted from the upstream end of the disconnected side channel.At 40% intensity it does not affect the minor embankment here, whereas at 60% the extent is larger and the minor embankment was removed increasing the discharge capacity of the floodplain.Maximum lowering was 0.38 m; small differences in lowering were present between the 10 and 20% intensities.Floodplain lowering and embankment relocation resulted in flood level reductions that differed an order of magnitude with the other measures.Maximum reductions are 1.6 and 2.1 m for lowering and relocation, respectively.Floodplain lowering increased the flood level reduction in upstream direction, whereas relocation led to strong local reductions with their own backwater effects.While the main focus of our work is flood risk, here we also studied changes in flow velocity in the channel.This is important because of the morphodynamic response: a spatial gradient in flow velocity leads to a gradient in sediment transport, and the latter gradient causes erosion and sedimentation in the shipping fairway, which would require dredging.The flow velocity along the channel shows patterns as a result of channel convergence and divergence and of exchange with the floodplain.Fig. 9A–C shows minor changes in flow velocity, but still large gradients in width averaged velocities.Here we averaged flow velocities over the main channel width and compared the velocity against the reference scenario.We assume that the flow velocity in the reference run does not cause erosion and sedimentation that require dredging for fairway maintenance, but this is not strictly true because maintenance dredging is conducted frequently.The key result is that construction of side channels, floodplain lowering and relocation of embankments have significant effects.Zones of reduced flow velocity appear adjacent to the modified floodplain sections.On the other hand, floodplain smoothing, groyne lowering and removal of minor embankments in the floodplain hardly cause changes in flow velocity.The local velocity reduction is up to 0.5 ms−1 for side channels and floodplain lowering upstream from rkm 880, which is significant given that typical flow velocity in the channel is 1.75 ms−1 so considerable sedimentation is expected to result.This trend is even much stronger for embankment relocation, which implies dramatic morphological change in the river channel.The relatively modest reduction of sediment transport in the side channel and floodplain lowering measures should also be evaluated against the sediment balance of the river Waal.Over the past decades, possibly more than a century, the river bed eroded in response to the installation of the groynes, due to dredging and due to reduced upstream sediment supply.Our modelled reduction of sediment transport capacity in the channel counteracts this trend, meaning that floodplain modification potentially has a positive effect.These results point at the need to adapt measures along the river such that changes in the gradients of sediment transport in the channel are minimized.This is not the same as the present strategy to minimize changes in sediment transport magnitude.Each of the seven measures involves the transport of one or more types of materials: vegetation from roughness lowering, stones and soil from the adjustments of groynes, minor embankments and main embankments, and soil from the ground level.The material volume varied strongly per measure type and intensity.The 
vegetation volume for roughness smoothing was 15% larger than the water volume, with a maximum vegetation volume of 1.3 × 10^5 m3. For the lowering of groynes and minor embankments, the material volume equals the water volume. For main embankment raising, the water volume was nine times larger than the material volume. Side channel construction and floodplain lowering involved all three material types, as the vegetation and minor embankments are removed as well within the measure extents. Vegetation volume is at least an order of magnitude smaller than the volume for groynes and embankments, which is again an order of magnitude smaller than soil from the ground level. Interestingly, the vegetation volume triples when floodplain lowering is doubled from 50 to 99%, due to the low-lying vegetated areas. For embankment relocation, the material volume is larger than the water volume at an alpha shape of 500 m. This confirms that relocation over small areas is inefficient, but the material volume barely increases with larger alpha shapes. Relocation with the alpha shape at 7000 m provided the largest water volume, 2.8 × 10^8 m3, which is 23% of the total water volume in the study area during current design water levels. The relation between the flood hazard reduction and the volumetric changes per measure provided a concise overview of the effectiveness of the different river management options. Solid and dashed lines indicate the material and water volume, respectively. A small target of 0.05 m flood stage reduction could be achieved by all different measures, but the required intensity of application differed, as well as the volumes. Roughness smoothing required the least displaced material volume, followed by main embankment raising, groyne lowering and minor embankment lowering. Embankment relocation using alpha shapes required the largest material volume at a 0.05 m flood level reduction. Conversely, an ambitious target of 0.5 m flood stage reduction could only be reached using floodplain lowering, main embankment raising, and relocating the main embankment. Note that the difference between the displaced material volume of main embankment raising and the increased water volume is a factor of 10 for our study area. The factor depends on the mean width of the cross-sectional area, but this clearly shows that embankment raising is an effective method. Surprisingly, the lines of material and water volume for embankment relocation cross each other at a flood level reduction of around 0.1 m, and the material volume increases by 2.5 for larger alpha shapes. Many small embankment relocations require a large material displacement and are ineffective for flood hazard reduction. With RiverScape, primary geospatial data can be used to quickly update a hydrodynamic model and determine the two-dimensional hydrodynamic effect of landscaping measures. This modeling pipeline provides a transparent data stack for a systematically modelled set of specific measures at a range of intensities. The results can be seen as endmembers of river management options, as each of the measures is assessed in isolation. The ranking of the measures in terms of their potential in mean flood level lowering over the whole study area is as follows: minor embankment lowering, floodplain smoothing, groyne lowering, side channels, floodplain lowering, and embankment relocation. The RiverScape routines provide flexibility in the area of application: per river section, per floodplain section, or over the whole reach as presented in this paper. Also the positioning and parameterization
settings can quickly be adjusted in intuitive ways to create new measures.Our tool can be applied to all alluvial rivers, provided the input data are available.Results depend on the initial land cover, the bathymetry, and the position of the main embankments relative to the main channel.However, the major alluvial rivers in densely populated areas share many characteristics with our study area.Floodplains of the Mississippi, Yangtze, Elbe, Danube and San Joaquin Rivers all have floodplains that are around five times the width of the main channel, largely comprise land cover with a low vegetation roughness, and are fixed in position with groynes or riprap.Therefore, we believe that the ranking between flood hazard reduction and volume of displaced material can be upscaled or downscaled with main channel width and discharge.The precise relationship in other areas would require additional modeling.Our results are expected to scale less well with freely meandering rivers, such as the Red River, Paraná River, or the Guaviare River in Colombia as the natural land cover differs strongly from the River Waal.The applicability of the tool depends also on the availability of data and a hydrodynamic model schematization.The minimum requirement is a 2D hydrodynamic model, a land cover map, and a digital terrain model.However, for most countries where the problems of reconciling river functions are urgent, these data are likely to be available in some form.In addition, global hydrodynamic models are getting more detailed by using remote sensing data and open data, e.g. OpenStreetMap.RiverScape now uses 14 layers from a ready-made geodatabase, but most layers could be derived from a hydrodynamic model.For example river axis, river kilometer, left and right shoreline could be derived from the flood extent at low flow.Roughness codes could be extracted from a global land cover dataset and floodplain lakes and channels can be derived from global scale permanent water body products.The preprocessing for RiverScape would need to be tailored to these inputs.Similarly, we used Delft3D Flexible Mesh as our 2D flow model, but interoperability with other models could be created due to the modular setup of RiverScape.This would require a suitable conversion script for each 2D flow model.With Python as the scripting language, the data preprocessing and updating the 2D flow model input is likely to be possible.Within the framework of decision support, we focused on two physical criteria for the evaluation of all measures: transported material and the flood level lowering.The results should be interpreted as the exploration of the parameter space.More detailed measures should be designed by landscape architects in combination with engineers and stakeholders.In reality, measure selection includes a number of additional parameters, which were outside the scope of this study.These aspects are both limitations of the current study and possibilities for other applications and model extensions in the future as follows.Firstly, in the middle and upper reaches of the river, the measures should not increase the flood wave celerity to avoid adverse downstream effects.To determine the effects on the propagation of the flood wave, the stationary design discharge in this study should be replaced with a standardized flood wave with a peak discharge equal to the design discharge.This would particularly be useful in steeper and longer river reaches, while our study reach is situated at the downstream end of a large river, meaning 
that flood waves are relatively low and long. Measures that increase the flow velocity and the fractional discharge through the main channel increase the flood wave celerity, such as roughness lowering and groyne lowering. In contrast, floodplain lowering, embankment relocation and side channel recreation will slow down the flood wave. Flood wave celerity and attenuation could provide additional parameters for measure evaluation outside of the downstream river reaches. This is useful to quantify in a longer reach situated more upstream in a river system. A major flood wave for the River Waal takes around two weeks, which is much longer than the travel time through the area of around 12 h. In the upper parts of the catchment the flood wave length and travel time differ less, and the effects on the flood wave propagation are therefore stronger. Secondly, the set of measures could be extended with deepening of the main channel in addition to measures in the floodplain and in the groyne field. Main channel deepening is technically similar to floodplain lowering: create a mask for the area to be lowered and apply a change in bathymetry over the masked area, either as a fixed value or as spatially distributed values. Contrary to floodplain lowering, channel deepening would require a relative change in bathymetry rather than lowering to an absolute external level based on exceedance levels. For the Waal, the permanent layers that were created to reduce deep scour in sharp bends should also be taken into account. Thirdly, the improvement of the fluvial ecology provides a secondary objective of many flood hazard measures, which is also required by the European Water Framework Directive and the American Clean Water Act. The changes in ecotope composition can be evaluated beforehand on the potential biodiversity for different taxonomic groups. For example, in the parameterization of floodplain lowering, we assigned ‘production meadow’ as the new ecotope, but its biodiversity potential is rather low. Including potential biodiversity scores would provide a more complete evaluation. Fifthly, investment costs, depreciation costs from higher flood frequencies for agriculture in the floodplains, and maintenance costs would provide insight into the financial feasibility. The investment costs include costs for earthwork, treatment or storage of polluted soil, dike raising, groyne lowering, and acquisition and/or demolition costs of buildings and land parcels. Sixthly, under natural land management vegetation succession leads to a shift in vegetation from meadows or agricultural fields to herbaceous vegetation, shrubs and forest over a period of decades. The associated increase in hydrodynamic vegetation roughness lowers the conveyance capacity of the floodplains and increases the water levels. The model could be extended with a vegetation succession model. The resulting time series of vegetation distributions require conversion to trachytopes or roughness values to serve as input for a two-dimensional hydrodynamic model. Lastly, the durability of the measure can also affect the selection. Embankment relocation has a long-lasting effect on the flood levels. In contrast, roughness lowering can be reversed in years due to vegetation succession, and floodplain lowering can be undone in decades due to increased sediment deposition. The seven extensions would add to the completeness of the decision support, but cannot replace stakeholder interaction with experts to make the final decision. In the Netherlands, flood risk management has been
based on a design flood with an average return period of 1,250 years. For the future, we should consider the new risk-based approach, which takes the economic value of the protected hinterland and the number of lives at risk into account in designing the type, size and location of flood protection measures. This should include a cost-benefit assessment of the measures that addresses the flood risk of the protected area as well. In addition, it might be required to increase the conveyance capacity of the River Waal from 10,165 to 11,436 m3/s, which can be combined with ecological restoration. Which measures should be carried out, and in what order? A large-scale set-back of the main embankment clearly had the strongest effect on flood levels. Should we continue with small-scale measures in the floodplain, or would it be more cost-effective and ecologically superior to set back the embankments and rebuild houses on raised mounds? Making these trade-offs visible could revitalize the public debate on flood-proofing river and delta systems. Much of the time during the planning of flood alleviation measures is spent on negotiations between stakeholders, decision-making processes and exploration of alternatives, which is typically done in a sequential manner rather than in parallel. There is no guarantee that the outcome of the policy arena will be the same if the process is repeated. Evolving decisions depend on timing, the individuals involved, interdependencies between the people and organizations involved, and the larger context we are operating in. A systematic inventory of intervention options and their costs and benefits would provide a high-dimensional feature space that makes the choices transparent and numerically underpinned. Pareto-optimal solutions in this feature space represent the rational, numerically optimized cost-benefit solution, which can be compared against solutions driven by the desires of specific stakeholders or by political optimization. Given a fully operational and interactive tool, we believe the decision process can be shortened by years. This approach turns the usual planning process around by starting at the effects and working backwards towards the spatial implementation, thereby transferring more significant information on flood, cost, and ecology while fundamental information on geometry is still available. Nonetheless, the modeling results should always be interpreted by an expert panel. In this study, we aimed to develop a tool to automatically position and parameterize seven flood hazard reduction measures and evaluate these measures on their hydrodynamic effects and the required volume of displaced material. The RiverScape toolbox automatically positions and parameterizes typical landscaping measures and updates a hydrodynamic model accordingly. The ranking of flood hazard reduction in terms of transported material, from high to low effectiveness, is: vegetation roughness smoothing, main embankment raising, groyne lowering, minor embankment lowering, side channel construction, floodplain lowering and relocating the main embankment. This provides an integrated assessment at river-reach scale rather than many disconnected measures for individual floodplains, as is the current practice for the study area. Water level reductions of more than 0.5 m could only be achieved with floodplain lowering or embankment relocation for the Waal River. The modelled reduction in flow velocities in the main channel served as a proxy for morphological tendencies, which suggested that the trend of ongoing bed
degradation could be slowed down due to lower flow velocities in the main channel.We applied all measures in isolation to determine the endmembers of river management options.However, the routines are flexible in their application.Spatial subsets could be used for local planning, or a combination of measures could be tested to optimize specific solutions with respect to biodiversity, or long term flood safety.Given a fully operational and interactive tool and an expert panel to interpret the results, we believe the decision process can be shortened by years. | River managers of alluvial rivers often need to reconcile conflicting objectives, but stakeholder processes are prone to subjectivity, time consuming and therefore limited in scope. Here we present RiverScape, a modeling tool for numerical creation, positioning and implementation of seven common flood hazard reduction measures at any intensity in a 2D hydrodynamic model for a river with embanked floodplains. It evaluates the measures for (1) hydrodynamic effects with the 2D flow model Delft3D Flexible Mesh, and (2) the required landscaping work expressed as the displaced volume of material. The most effective flood hazard reduction in terms of transported material is vegetation roughness smoothing, followed by main embankment raising, groyne lowering, minor embankment lowering, side channel construction, floodplain lowering and relocating the main embankment. Implementation of this tool may speed up decision making considerably. Applications elsewhere could weigh in adverse downstream effects, degradation of the ecology and overly expensive choices. |
338 | Intensification of protein extraction from soybean processing materials using hydrodynamic cavitation | Protein is an important nutrient to be considered when studying food production for human consumption, with major pressure to provide nourishment for an increasing population.The use of vegetable proteins like soy instead of animal derived protein sources is a rapidly increasing consumer trend.Extraction of protein and other soybean components from milled soybeans may happen under alkali aqueous conditions at high temperature to prepare soybase, the soybean extract further processed to soymilk or tofu.After the extraction, insoluble materials are removed from the extract typically by decanting, and the fibrous waste stream, termed okara, is utilised as animal feed.This process requires attention as the current yield in factories is relatively low; improved production methods may yield a greater mass of protein for human consumption.The majority of the soybean structure is made up of cotyledon cells, ranging in length from 70 to 80 μm and 15–20 μm in width.Within the cotyledon cells, the majority of protein is organised in protein bodies that are typically 2–20 μm in diameter.Oil is located within the cytoplasmic network in oil bodies stabilised by low molecular weight proteins termed oleosins.These oil bodies are smaller in size than protein bodies with sizes in the range 0.2–0.5 μm.The main barrier for the extraction of intracellular components of interest is the cell walls.Other limitations include insolubility of materials and entrapment in the continuous phase of the insoluble waste stream.Cavitation is a process responsible for the success of some extraction assistance process technologies.The phenomena of cavitation include air void formation within a treated sample, growth of the voids and their potential violent collapse.Upon microbubble collapse, local regions of high pressure and temperatures result in the regions of 1000–5000 atm and 500–15,000 K, which can aid the extraction process.Another result of cavitation is void collapse near a solid surface: leading to local regions of high shear resulting in solubilisation and also cell disruption.Ultrasound, a processing technology based on acoustic cavitation, has been shown to enhance the extraction of protein and other components during the processing of soybean materials.Ultrasound improved the extraction of protein by up to 19% upon 15 min treatment of okara solution with a lab probe system.The material was examined using confocal laser scanning microscopy; improved solubility was found to be the main factor enhancing the yield, not cell disruption.Unfortunately, when ultrasound was applied at pilot plant scale it was not feasible to give the soy slurry a treatment equivalent to that possible at lab scale.Pilot scale ultrasound treatment of okara was shown to increase protein extraction yield by only 4.2% compared to control samples.Other parameters, including okara solution flow rate and okara concentration, also had a significant impact on the protein extraction yield.During the lab scale sonication treatment an approximately 300 × greater energy intensity was experienced by the samples, compared to the pilot scale sonication.Considering the minimal total extraction yields for soybase production at pilot scale, ultrasound was not considered viable for industrial processing.It was found that the remaining protein within the okara was within intact cells.Therefore, a processing technology that targets intact cells might 
be more beneficial. Hydrodynamic cavitation is widely accepted as a technique for cell disruption of microbes and microalgae, as well as for the recovery of intracellular enzymes. It can be achieved using a high pressure homogeniser at pressures above 35 MPa. HPH has been employed in the food industry for large-scale microbial cell disruption, as well as for other purposes, such as emulsification. Extraction with assistance from high pressure has been studied for several food systems with promising results, such as carotenoid extraction from tomato paste waste and phenolic acid extraction from potato peel, as well as oil extraction from microalgae for use in biodiesel production. High pressure treatment has also been applied to a number of soy-based systems. Typically, pressures greater than 300 MPa have been studied for the formation of soy protein gels. These studies did not include hydrodynamic cavitation; only the effects of high pressure achieved using a pressure cell were investigated. Some studies of the effects of HPH on soy protein, focusing on the microbial stability of products and the production of fine emulsions rather than on extraction, have been published. For the implementation of HPH for extraction in industrial-scale processes, a number of factors have to be considered, including energy consumption, instrument geometry and wear, and productivity. Many examples of the use of HPH within the food industry are available, yet current applications focus on the structuring of products, such as fine emulsion production. Creaming, an unwanted phenomenon seen in the dairy industry, is one such example for the possible industrial use of HPH. A scale-up study by Donsì et al. showed that the scale of HPH operation did not influence microbial cell disruption at a given pressure. This gives confidence in the scalability of HPH for use in extraction at an industrial scale, if positive results are achieved at lab scale for extraction. Extraction of protein from soybeans has been reported previously in the literature as discussed above; however, there are no studies describing the effects of HPH on soybean processing materials and extraction yields. Here we show an investigation of the extraction yields of oil, protein and solids with high pressure treatment compared to the industrial control sample, as well as of the availability of protein and the separation efficiency for soybean processing materials. Particle size measurements, flow behaviour and an investigation into the microstructure using confocal laser scanning microscopy are carried out to identify the mechanisms of HPH. Slurry and okara were freshly prepared in the pilot plant facilities at Unilever R&D Vlaardingen. A process flow diagram can be seen in Fig. 1. Commercially available soybeans went through two wet milling stages to produce a soy slurry under alkaline conditions. The processing input consisted of 28 kg h-1 of soybeans treated with 175 kg h-1 of softened water and 0.2 kg h-1 of sodium bicarbonate, which resulted in a soybean-to-water ratio of 1:7. To prepare soybase and okara for subsequent treatment, the slurry was fed into a decanter centrifuge operating at a g-force-time of 1.5 × 10^5 g-s. Before homogenisation, the okara was diluted approximately 7 times with demineralised water on the day of homogenisation and stirred using a magnetic bar. For each homogenisation study, a fresh 1 L solution was made from okara stored below 5 °C for no longer than 6 days. The composition of slurry and okara can be seen in Table 1. Fig.
1 shows the process flow diagram for experiments conducted on (a) slurry prepared as above, and (b) okara prepared using decanter centrifugation, to identify which effects of HPH can be identified on both materials. All HPH treatments were conducted using a homogeniser, PandaPLUS 2000, equipped with 2 stages as shown schematically in Fig. 2. During the homogenisation treatments, the 2nd stage was always adjusted to 10 MPa using a manual hand wheel actuator on the equipment, and then the pressure was increased to the required total pressure by the 1st stage, using the 1st hand wheel. The approximate flow rate for demineralised water of 150 mL min-1 was recorded prior to each experiment using the homogeniser, with a lower limit set to 142.5 mL min-1. The soy sample was introduced through the feed hopper of the homogeniser. A sample of approximately 100 mL was taken after each pass through the homogeniser for analysis. For the control samples, the samples were heated to their relevant temperatures and stirred; however, they were not passed through the homogeniser. For each trial using slurry, 1 L was heated to 80 °C and stirred using a magnetic stirrer bar. This temperature was chosen to replicate the conditions which would be found during processing in a factory after the milling process. Once the desired temperature was reached, a control sample was taken and the remaining slurry was introduced into the homogeniser, which was preheated using boiling water. For each treatment, the soy slurry was passed through the homogeniser and a sample was collected for analysis and further processing. The remaining slurry was added into the homogeniser for subsequent treatment, up to a maximum of 5 passes in total. The temperature was recorded before and after treatment. Fresh okara solution, prepared as described in Section 2.1, was heated to 50 °C and stirred using a magnetic stirrer bar. This temperature was chosen due to the okara production temperature at pilot scale; when the milling was performed at 85 °C and diluted to 13.7% using room temperature water, a solution temperature of 50 °C was achieved. Once heated, the solution was added to the homogeniser for treatment, and a control sample was collected. Care was taken to ensure particles were dispersed evenly when sampling okara solution. Once the solution was added to the sample hopper, the solution was stirred to prevent particle settling. After each pass through the homogeniser, a sample was taken for analysis and further processing. The remaining solution was recirculated back through the homogeniser for up to 5 passes in total. The temperature was recorded before and after treatment. The following mass fractions were used in the yield calculations: xw,s, the mass fraction of water in soybase; xw,o, the mass fraction of water in okara; xp,s, the mass fraction of protein in soybase; and xp,o, the mass fraction of protein in okara. Please note that the extraction yield is equal to the separation efficiency multiplied by the availability of protein.
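The equations themselves are not reproduced in this text. The sketch below gives one consistent set of definitions that matches the listed mass fractions and the stated identity, under the explicit assumption that the liquid retained in the okara carries protein at the same protein-to-water ratio as the soybase; it should be read as an illustration, not as the authors' exact formulas.

```python
def protein_yields(m_s, m_o, x_p_s, x_p_o, x_w_s, x_w_o):
    """Extraction yield, availability and separation efficiency of protein.

    m_s, m_o     : masses of soybase and okara (kg)
    x_p_s, x_p_o : protein mass fractions of soybase and okara
    x_w_s, x_w_o : water mass fractions of soybase and okara

    Assumption (not stated explicitly in the text): the liquid retained in the
    okara carries protein at the same protein-to-water ratio as the soybase.
    """
    protein_total = m_s * x_p_s + m_o * x_p_o          # protein in the feed
    extraction_yield = m_s * x_p_s / protein_total      # fraction ending up in soybase
    protein_available = m_s * x_p_s + m_o * x_w_o * (x_p_s / x_w_s)
    availability = protein_available / protein_total    # soluble/extractable fraction
    separation_efficiency = extraction_yield / availability
    return extraction_yield, availability, separation_efficiency
```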
Oil and solid contents were measured using a microwave moisture analysis system equipped with NMR for direct detection of fat content. Oil and solid extraction yields were also determined using the same equation, replacing the masses of protein with the respective oil and solid masses. The particle sizes of soy slurries after extraction were determined using laser diffraction. To determine particle size distributions, refractive indices of 1.33 and 1.45 were used for the water and the particles, respectively. Protein, moisture and particle sizes were measured in triplicate for each sample. The D[4,3] and D[3,2] values represent the volume-weighted and surface-weighted mean particle size, respectively. The D90 value gives an indication of the particle size below which 90% of the particles (by volume) fall. An AR G2 rheometer equipped with a sand-blasted steel parallel plate was utilised to study the effects of homogenisation on samples. All experiments were carried out at 20 °C with a gap width of 2500 μm. Flow curves were measured with increasing and decreasing shear rates in the range 0.1–200 s-1 over a time period of 2 min per sweep. The sweep of increasing shear rate was treated as a conditioning step and the shear viscosity measurements were recorded. A Leica TCS-SP5 microscope in conjunction with a DMI6000 inverted microscope was used with the dye nile blue A to visualise the effects of HPH treatment on soy slurries. One drop of dye stock solution was added to 1–1.5 mL of sample and mixed well before adding the sample to the slide. For visualisation using nile blue, sequential scanning was employed to prevent the excitation laser from appearing in the emission signals. Table 2 shows the scans utilised and the corresponding colours assigned to the emission channels. Soybeans were milled and the resulting slurry was separated using a decanter centrifuge to obtain the soybase and the okara fraction, as described in Section 2.1. To increase the protein extraction yield, HPH was applied to either the slurry or the okara solution at various pressures to determine an appropriate pressure for subsequent treatments. Fig. 3 shows the total protein extraction yield, calculated using the extraction yield equation, as a function of HPH pressure after a single pass through the system. On first observation, the total protein extraction yield increased with increasing HPH pressure for both samples after a single pass. Okara solution treatment included a primary extraction of protein during okara preparation, followed by subsequent dilution, HPH treatment and separation of insoluble materials, such as fibres, insoluble proteins and intact cells, if present. A pressure of 100 MPa was chosen for all further experiments to ensure the optimal extraction yield of protein for both slurry and okara solution treatments was achieved. In the following sections, the resultant effects of the homogenisation treatment on the particle size, microstructure and rheology will be studied in order to explain the results from Fig. 3. To identify the optimal treatment conditions, an experiment was carried out to deduce the optimum number of passes through the homogeniser geometry at 100 MPa. For soy slurry, the homogeniser treatment was carried out at 80 °C to replicate the temperature straight from the pilot processing line. Fig.
4 shows the extraction yields of oil, protein and solids from slurry versus the number of treatments at 100 MPa. The control sample represents slurry heated to 80 °C and separated under the same conditions as the treated samples. The extraction yield of protein was approximately 65% without homogenisation. The optimum number of passes for the extraction of oil, protein and solids occurred for a single pass at the pressure investigated. Extraction yields improved by 21%, 16% and 12% for oil, protein and solids, respectively, after 1 pass through the homogeniser. After each subsequent pass through the homogeniser, a stepwise reduction in extraction of all components studied was observed. The effects of homogenisation were also tested on okara, the waste stream from soybase production. The extraction yields of oil, protein and solids were calculated considering the okara treatment only. From the control sample, approximately 55% of oil and protein and 35% of solids were extracted. After 1 pass, oil, protein and solid extraction yields were improved by 36%, 26% and 17%, respectively. Unlike soy slurry treatment, the subsequent extractions did not lead to a reduction in extraction yield: a plateau in extraction yields was reached after 1 pass through the homogeniser, with no significant change for higher numbers of passes. To understand how the homogenisation treatment affected the extraction yield, the separation efficiency and protein availability were calculated. Protein extraction yield is a function of the availability of protein and the separation efficiency. Fig. 6 shows the effects of homogenisation treatment of both the slurry and okara feeds (Fig. 1), compared to a control sample with heating but without HPH treatment. Initially, the availability of protein was considered: protein availability increased by approximately 18% and 30% after a single pass through the homogeniser of slurry and okara solutions, respectively. The increase in the availability of protein suggests that either intact cells were disrupted, or the solubility of protein was improved. After each subsequent pass of homogenisation treatment, there was no change in the availability of protein for either slurry or okara solution. Separation efficiency was affected in both slurry and okara solution homogenisation. The largest effect on separation efficiency was observed for the slurry samples; after 1 pass, the separation efficiency was improved, meaning that less soluble protein was retained in the okara phase after homogenisation treatment. After subsequent passes, a large reduction in separation efficiency of the slurry was observed. This reduction in separation efficiency of slurry correlates well with the stepwise reduction in extraction yields from soy slurry after more than 1 HPH pass, as shown in Fig. 4. In contrast, the okara solution showed little change in separation efficiency after homogeniser passes, suggesting a similar mass of soluble protein resided in the okara after each HPH treatment, compared to the control sample. This coincides well with the plateau in the extraction yield observed for okara solution after 1 HPH pass in Fig. 5. To understand the effects of homogenisation, it was important to study the effects of treatment on the resulting sample characteristics. Particle size measurements were carried out for soy slurry and okara solution samples to study the effects of homogenisation treatment. Fig.
7 shows the effects of number of passes through the homogeniser on the resulting particle size of soy slurry.The control sample represents a soy slurry sample heated to 80 °C; all samples had the same pre-treatment.For soy slurry heated to 80 °C, the D90 value was approximately 760 μm with a D of ca. 350 μm and D of 15 μm.After one high pressure treatment at 100 MPa, the biggest change can be observed in the D90 value; a reduction to a value in the region of 100 μm was observed.This particle size reduction observed after a single pass can be attributed to homogenisation effects.A reduction in D was also observed; however, the D appeared to increase slightly after one pass.The slurry sample without HPH treatment consisted of particles ranging from submicron to 1 mm, seen in the PSD.The largest volume based reduction in particle size occurred after a single pass at 100 MPa compared to the control.The peak at 0.35–3 μm cannot be observed in any of the HPH treated soy samples, suggesting that oil droplets and other components, such as soluble proteins, reduced in volume.After each subsequent pass of slurry, there was a small stepwise increase in the particle size.The distribution of particles in the size range 2–200 μm, observed for the slurry sample after a single pass increased in broadness to 2–350 μm after 5 passes at 100 MPa.In Section 3.2 Separation efficiencies and availability of protein, the availability of protein was not affected by multiple passes for slurry treatment, suggesting protein aggregation did not occur.In a previous study by Lopez-Sanchez et al., an increase in particle size was observed for tomato suspensions due to cell wall swelling caused by a single pass through a HPH at a pressure of 60 MPa.Soybean cell wall fibres could also swell similar to tomato cells after multiple passes through the HPH.HPH treated okara solution was also analysed to examine the changes of particle size upon application of homogenisation.Fig. 
8A shows the effects of the number of passes through the homogeniser at 100 MPa on the particle sizes of the okara solution.The initial particle sizes for okara solution were greater than those of the soy slurry.After 1 pass through the homogeniser geometry: all particle sizes were reduced when compared to the control sample.Focusing on the particle size distribution of the control sample of okara solution, there is a sharper, higher volume peak of particles in the range 100–1000 μm compared to the slurry control sample.Generally, the initial distributions are similar in size ranges.An initial reduction in the larger volume particles is seen upon a single pass, the peak shifts to the range 10–200 μm.There was also a loss of particles with a size of 0.35–6 μm with any number of passes with the homogeniser, as was also observed for the slurry sample.This could be attributed to a reduction in size of the oil droplets.With each subsequent pass after a single pass, a small stepwise increase in the particles size was also observed, similarly to slurry treatment due to the swelling of fibrous materials.For both slurry and okara solution treatments, homogenisation effects caused an initial reduction of the particle size after a single pass through the HPH at 100 MPa.After multiple passes through the homogeniser at 100 MPa, an increase in particle size for both samples was observed in comparison to their respective single pass samples.The resultant increase in particle size with multiple passes through the HPH can be attributed to swelling of the soybean cell wall fibres.To investigate the effects of homogenisation treatment on slurry and okara solution samples, CLSM was employed in the presence of nile blue for visualisation.Nile blue is a dye used to visualise apolar material.In the system settings oil appears green and other, less apolar materials, including protein and fibres, appear red.Initially, the microstructure of the control samples was investigated.Fig. 9 shows the typical structures observed in the soy slurry after milling of soybeans at pilot scale with a thermal pre-treatment of 80 °C.Droplets of oil, depicted in green, were found throughout the continuous phase of the sample, with sizes up to 12 μm in diameter, which were larger compared to those located within soy slurry and were previously reported to be typically less than 0.5 μm.Intact cell wall structures were visible in the soy slurry sample, without HPH treatment.In Fig. 9B, intact cells are observed, and intact protein bodies are visible within these structures, visualised in red.Fibrous structures in red can also be seen, and these are empty cell wall structures from where the contents have been extracted.After a homogenisation treatment at 100 MPa, a sample of slurry was visualised and the results are shown in Fig. 
10.The representative micrographs show changes in many aspects of the slurry sample.The oil droplets, green in these images, are reduced significantly in size.It is nearly impossible to distinguish the individual droplets in the continuous phase.This supports the reduction in particle size and the loss of the volume-based peak at 1 μm to submicron sizes.Intact cells were not observed in any of the samples studied using CLSM, whether after one treatment at 100 MPa or after multiple passes.The fibrous structures observed in the control sample were reduced to shorter lengths with homogenisation treatment, which confirms the results seen in the particle size examination.The microstructure observed by CLSM after 5 passes through the homogeniser was similar to that seen after 1 pass.No aggregation of proteinaceous material could be visualised after 5 passes.The small increase in the particle size upon both slurry homogenisation and okara homogenisation might be due to limited protein aggregation.After 1 pass, the protein extraction yield from slurry increased to 82%.The D value was reduced to approximately 50 μm for all resultant samples, both slurry and okara solution, after homogenisation.Apparently, most of the particles in this size range were still soluble or dispersible, and resided in the soybase during the extraction process.It has been shown previously using transmission electron microscopy and CLSM that hydrated soybean cotyledon cells vary from 70 to 80 μm in length and 20–30 μm in diameter.Assuming a spherical cotyledon cell, its average size is about 45–55 μm.The measured particle size data would suggest that homogenisation disrupted all intact cells.That is indeed what was confirmed by the CLSM measurements in this study: after 1 pass through the homogeniser, no intact cells were present.To understand the differences observed between the slurry and okara solution separation efficiencies, a study of the flow behaviour was conducted.Focusing on the viscosity profiles of the control soy slurry, it is possible to observe shear thinning behaviour: as the shear rate increased, the viscosity decreased.Upon 1 pass through the homogeniser, the slurry viscosity increased, especially at the relatively low shear rates.At shear rates above 3 s−1, the viscosity of slurry decreased after a single pass.Upon 5 passes, the viscosity increased at all shear rates investigated compared to the control sample.This can be attributed to the change in the composition of the sample: intact cells are disrupted, and more intracellular components are solubilised into the soy slurry continuous phase.The drastic change in particle size after homogenisation treatment leads to the formation of a large number of smaller particles from a small number of larger particles.With a greater concentration of particles after homogenisation, particle-particle interactions play a greater role in the viscosity of the resultant sample.Focusing on the okara solution viscosity profile, the first obvious difference compared to the slurry curves is the lower viscosity of all okara samples, treated and non-treated.The control okara solution behaved as a Newtonian fluid.Okara solution is a dispersion of a small volume of insoluble materials, such as intact cells and fibrous particles, in water.In the control okara solution, there are fewer particle-particle interactions due to the low volume fraction of particles.Particle sedimentation in the rheometer could be responsible for this behaviour observed in the okara solution
control sample.At shear rates below approximately 1 s− 1, a shallow plateau in the flow curve was observed.It is believed that this was caused by shear banding in the okara solution samples.This is an artefact and can be neglected.At a shear rate of approximately 100 s− 1, it was possible to see an increase in the dynamic viscosity for all of the okara solution samples; however, this is also an artefact caused by the presence of turbulence within the measurement, which is not assumed during the calculation of viscosity.This can also be neglected in the interpretation.Upon 1 pass through the homogeniser of the okara solution, the behaviour changed from Newtonian behaviour to shear thinning, with an increase of viscosity versus the control.After 5 HPH passes, a viscosity increase was observed compared to the control at all shear rates investigated.Such an increase in viscosity could be beneficial for producing a low solids product, with a similar viscosity profile to soy slurry without homogenisation treatment.The release of intracellular materials, such as proteins and smaller fibrous materials upon cell disruption could lead to enhanced particle-particle interactions and build-up of structure, thus increasing the viscosity.This has also been observed previously by Lopez-Sanchez et al. for tomato cell suspensions.Focusing on the okara solution homogenisation, the lower viscosity of all samples compared to slurries was caused by a reduced solid content of the okara solution, i.e. 2.5 ± 0.1%.The control slurry sample consisted of 12.6 ± 0.1% solids in comparison, which accounts for soluble solids and insoluble solids, such as oil, protein, cell wall fibres and intact cells.As the particle size of the slurry slightly increased and the viscosity increased upon subsequent HPH passes, the slurry particles became increasingly difficult to separate from the bulk solution.For the implementation of high pressure homogenisation within industry, it is also vital to consider the energy density at each scale of treatment.Energy density for the lab-scale system has been calculated for each number of passes investigated in the experimental section.Extraction yields, show that a single pass of slurry or okara solution was able to reach maximal protein extraction yields, equating to an energy density of 740 MJ m− 3.If this technique is considered for implementation at industrial scale, it is necessary to calculate the energy density for a suitable system.Processing at 72 MPa at a flow rate of 2500 L h− 1, it is possible to introduce 108 MJ m− 3.Further studies are necessary to investigate the effects of homogenisation using a pilot-scale homogeniser and its energy density should also be considered.Productivity is another important factor to consider when designing an industrial plant, giving an indication of the efficiency of processing.Fig. 
12 shows the effects of number of homogeniser passes at 100 MPa on the productivity.The greatest productivity was found for 1 HPH pass at 100 MPa for both slurry and okara solution.Comparing the productivity of slurry and okara solution after a single pass at 100 MPa, slurry treatment was found to be a more viable option.The low protein concentration in the resultant soybase after okara solution treatment caused lower productivity in comparison to slurry.After each subsequent pass for both slurry and okara solution, the productivity reduced below that of the control sample, without homogenisation treatment.In conclusion, high pressure treatment was found to improve the extraction of oil, protein and solids from soybean processing materials.The improvement for both slurry and okara solution treatment after one HPH pass was found to be a result of availability of protein and separation efficiency.The improvement in availability can be attributed to the reduction in particle size and cell disruption, as confirmed by particle size measurements and CLSM.A decrease in separation efficiency was observed for slurry treatment with increasing number of homogenisation passes, resulting in a reduction of protein extraction yield contrary to okara solution treatment.This reduction can be attributed to a slight increase in particle size and increase in viscosity upon subsequent HPH passes.High pressure treatment based on the phenomena of hydrodynamic cavitation offers a more viable route of extraction intensification from soybean processing materials in comparison to ultrasound.Based on the productivity of the technology, the best scenario includes the use of HPH on slurry rather than okara solution, for 1 pass at 100 MPa.Further work is required to optimise this processing technology, including scale up to determine the viability for implementation at factory scale as well as sensory evaluation and storage stability of the resulting soy based products.To reduce energy costs, it is also beneficial to study further the effects of lower pressures than 100 MPa. | High pressure homogenisation (HPH) has been investigated for its potential to aid the aqueous extraction of protein and other components from soybeans. HPH treatments (50–125 MPa) were applied to soy slurry and okara, the diluted waste stream from soybase production. Extraction yields of oil, protein and solids were calculated, and the feasibility of the technology was assessed. The most productive HPH treatment investigated improved extraction yields of protein up to 82% with a single pass of soy slurry at 100 MPa. In comparison, a maximal protein extraction yield of 70% has been achieved previously using ultrasound at lab-scale for 15 min (20 kHz, 65 W according to manual, 13 mm probe tip) (Preece et al., in press). Results showed a particle size reduction upon HPH and disruption of intact cells, confirmed via confocal laser scanning microscopy. Multiple HPH passes of soy slurry caused an increase in dynamic viscosity and a small increase in particle size probably due to cell wall swelling, resulting in decreased separation efficiency and consequently a reduced extraction yield. HPH offers extraction assistance, with more promising results reported in comparison to ultrasound-assisted extraction of soybean processing materials. Industrial relevance Improvement of current soybean processing is desirable on an industrial level to better use available raw materials and reduce waste production. 
This study shows the effects of a technology already widely employed in industry for other benefits, such as fine emulsion production and microbial cell disruption. High pressure homogenisation was carried out at lab scale on soybean processing materials which were prepared in a pilot plant, with feed compositions similar to those produced at an industrial scale. |
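The energy-density comparison reported in the homogenisation study above (740 MJ m−3 per lab-scale pass versus 108 MJ m−3 for industrial processing at 72 MPa and 2500 L h−1) can be reproduced with simple arithmetic. The sketch below assumes the usual definition of energy density as power input divided by volumetric throughput; the lab-scale power draw and flow rate used here are hypothetical placeholders chosen only to illustrate the order of magnitude, not values taken from the study.

```python
# Minimal sketch of the energy-density arithmetic, assuming E_V = P / Q
# (power input divided by volumetric flow rate). Lab-scale power and flow
# values below are hypothetical; the industrial figures are from the text.

def energy_density_mj_m3(power_kw: float, flow_l_per_h: float) -> float:
    """Energy density in MJ per m^3 of product."""
    flow_m3_per_s = flow_l_per_h / 1000.0 / 3600.0
    return power_kw * 1e3 / flow_m3_per_s / 1e6

def implied_power_kw(energy_density: float, flow_l_per_h: float) -> float:
    """Power (kW) needed to deliver a given energy density (MJ/m^3) at a given flow."""
    flow_m3_per_s = flow_l_per_h / 1000.0 / 3600.0
    return energy_density * 1e6 * flow_m3_per_s / 1e3

# Hypothetical lab-scale homogeniser: ~2.5 kW at ~12 L/h gives roughly the
# 740 MJ/m^3 per pass quoted for the lab system; multiple passes scale linearly.
print(energy_density_mj_m3(2.5, 12))          # ~750 MJ/m^3 per pass
print(5 * energy_density_mj_m3(2.5, 12))      # ~3750 MJ/m^3 for 5 passes

# Industrial case quoted above: 108 MJ/m^3 at 2500 L/h implies roughly 75 kW.
print(implied_power_kw(108, 2500))            # ~75 kW
```

On this basis the per-volume energy input at lab scale is several times higher than the quoted industrial figure, which is consistent with the study flagging pilot-scale trials and lower pressures as follow-up work.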
339 | Investigating the collocations available to EAP writers | It is widely acknowledged that it is not advisable for language users to learn just single words.Lexical competence involves being able to put words together in texts."This requires knowledge of the world, knowledge of a language's syntactic constraints and knowledge of a language's lexical preferences.Knowledge of the world enables language users to distinguish between plausible and semantically anomalous propositions."Knowledge of a language's syntactic constraints enables users to discriminate between well-formed strings of words and errors. "Knowledge of a language's lexical preferences, in turn, enables users to differentiate between what is acceptable, conventional and idiomatic, and what is not sanctioned by usage.The latter kind of knowledge embraces a range of interconnected concepts that have confusingly come to be referred to in the literature by a variety of often overlapping terms, such as chunks, collocations, fixed expressions, formulaic language, idioms, lexical bundles, multiword units, prefabricated units, set phrases, to name but a few.This is understandable, given the wide body of research from different corners of the world the phenomenon has attracted.The focus of the present study is on collocation, defined here in the Firthian sense of ‘lexical items occurring with a greater frequency than the law of averages would lead you to expect’.According to this definition, collocation can be empirically verified against corpus data.It can include strings of words like auburn hair, where auburn rarely occurs in contexts other than hair, and brown hair, where brown occurs in many other contexts but is nevertheless still exceptionally frequent in the context of hair.Collocation can be contiguous, but also allows for non-contiguous forms.Collocation can cover semantically more transparent associations between words such as cold weather, strings that include words used in the figurative sense like cold war, and idioms like cold feet.Appropriate use of collocations seems to facilitate comprehension."According to Hoey's Lexical Priming theory, people's minds are primed to make automatic connections between words that they have encountered together before, so word combinations that language users are already familiar with tend to be processed with less effort than combinations of words that they may not have seen before.This view is supported by empirical studies such as Conklin and Schmitt and Ellis et al., which indicate that predicting what words are going to be used on the basis of our prior knowledge of how they normally combine facilitates language processing.In terms of language production, the learning difficulties associated with collocations have long been acknowledged by language teachers, lexicographers and linguists.Palmer saw collocation as ‘a succession of two or more words that must be learned as an integral whole, and not pieced together from its component parts’."Hornby's Idiomatic and Syntactic English Dictionary – the precursor to the Oxford Advanced Learners' Dictionary – addressed the problem by introducing phraseological information that could help learners use words in context. "Nowadays, it is standard practice for English learners' dictionaries to provide information on collocation, and there are also specific collocation dictionaries available on the market. 
"In the context of academic writing, resources like the Academic Collocations List and the Oxford Learner's Dictionary of Academic English have been compiled to cater for the particular needs of EAP learners whose first language is not English.In both cases, expert academic English corpora were used to research which target collocations to address."Learner corpora, in turn, have been a rich source of information on learners' difficulties regarding collocations.Learner-corpus research has shown that many of the difficulties learners encounter seem to arise when collocations do not have a word-for-word equivalent in their L1.For example, Nesselhauf observed that around half the inappropriate verb-noun combinations by German learners of English could be traced back to German phraseology.Similarly, Laufer and Waldman found that the majority of English miscollocations by Hebrew learners of English were a result of literal translations from Hebrew.However, the problem of collocations is not just one of linguistic interference leading to error.Studies such as Kaszubski, Nesselhauf, Durrant and Schmitt, Laufer and Waldman, Lu, Paquot and others found that, apart from producing collocation errors, second language learners tend to prefer collocations that are congruent with collocations in their L1, and that less striking combinations of words like very cold are more widely used than more unique collocations like bitterly cold.At the same time, there is also some evidence that learners may exaggerate the use of memorable idiomatic phrases such as far as something is concerned, which Hasselgren referred to as ‘lexical teddy bears’ because they seem like safe choices.Despite the richness of the data generated by learner corpora and the valuable insights we have gained from them, learner-corpus research can only provide a partial picture of collocation.An important limitation is that learner corpora generally consist of a collection of short texts about a restricted set of topics that learners have been asked to write.These texts are not varied or long enough to be representative of all the collocations learners know, and the topics used to elicit the data will have influenced the collocations that surface in their writing.Of course, this problem is not exclusive to learner corpora.An elicited L1 corpus like the Louvain Corpus of Native English Essays, which was designed to be comparable to the International Corpus of Learner English, discloses a very limited set of collocations if we compare it with the collocations present in the much larger and more varied British National Corpus, whose texts were sampled from a wide range of authentic communicative situations.Take the noun people as an example: despite it being the most frequent noun in LOCNESS and having a normalized frequency over four times greater than in the BNC, there is no evidence of people collocating with elderly in LOCNESS, while in the BCN there is little doubt that the two words are very strongly associated.The above limitation is probably one of the key reasons why learner-corpus studies have revolved around the collocates of high-frequency words.For example, Kaszubski examined collocations with the verb be, Gilquin looked at collocations with make, Laufer and Waldman studied collocations surrounding the most frequent nouns in a learner corpus, and Lu examined lexical items related to the composition topics used to elicit the corpus.There is simply not enough data in these studies to enable one to obtain a fuller picture of the 
collocations learners are able to use.Moreover, as the texts that make up learner corpora are usually quite short, many learner-corpus studies draw conclusions from what can be observed in the corpus as a whole, often overlooking individual differences between learners and idiosyncratic behaviour which could skew the data.Even if learner corpora were made up of much longer and more varied texts, however, as noted by Gilquin,What corpus-based studies cannot establish, is the extent to which collocations which are not produced by a learner are part, or not, of his/her mental lexicon.Because a learner does not produce a particular collocation does not mean that s/he does not know it, but this is a side of the coin to which corpus-based approaches have no access.To complement the information that can be gleaned from corpora, it is also possible to collect data on collocation via direct elicitation.Most of the research in this domain revolves around studies that gather introspective data to find out what collocations language users judge as acceptable.However, the fact that language users recognize collocations does not necessarily mean they are able to use them when required."Indeed, it is generally acknowledged that language users' receptive knowledge of lexis is greater than their productive knowledge.In one of the few empirical studies that attempts to look at the two together, Laufer found that the passive vocabulary of Hebrew learners of English was much larger than their active vocabulary."Whereas Laufer's study looked at knowledge of single words, there is no reason why the same principle should not apply to learners' knowledge of collocations.Another way of harnessing empirical data on collocation is via productive elicitation tasks, where participants are typically required to supply target words in gap-filling and/or translation tasks.For example, Gilquin used the following gap-filling plus translation exercise to capture what verb French learners of English would combine with choice in the context below:She ____________ the choice of never seeing her son again.= Elle fit le choix de ne plus jamais revoir son fils.One of the problems of controlled tests like the above, however, is that the words participants are required to provide are not necessarily the words they would want or need to use in more naturalistic settings.Thus, unlike corpora, which ‘allow learners to choose their own wording rather than being requested to produce a particular word or structure’, gap-filling/translations tasks like the above may lack ecological validity.Another problem is that lexical choices are not always black and white.Although gap-filling tasks are normally designed to elicit a single target collocation, poorly-designed tests may not always elicit the intended data.In addition, little has been said about linguistic contexts that evoke a range of possible collocations for users to choose from.Take the noun control as an example.Articulate language users wishing to employ this word as an object should have little difficulty in remembering not just the word control in isolation, but rather collocations like have control, take control, gain control, seize control, exercise control, exert control, or whatever verb-noun collocations fit in with their intended meanings.In contrast, less proficient language users may have a more limited range of readily available collocations to choose from, or may simply not know what verb to use with control.In other words, even when language users know exactly what 
they want to say and what initial words to employ, a limited collocation repertoire may restrict how well they can express themselves.However, there does not yet seem to be much research on the range of collocations available to language users at the moment of language production.As discussed above, neither learner-corpus research nor controlled gap-filling/translation tasks have so far been concerned with data that effectively taps into this aspect of lexical proficiency.In addition, there seems to be insufficient information on L1 difficulties with collocations, since existing studies tend to use L1 data as a benchmark for assessing second language performance.Yet it is important to recognize that, in the context of specific registers like academic writing, collocations could be problematic even to L1 users, who ‘have to take on new roles and engage with knowledge in new ways when they enter university’.The present investigation is an attempt to delve more deeply into the collocation repertoire available to EAP users, where writers often struggle to put complex ideas down on paper, and where not being able to recall a suitable collocation could disrupt writing processes.More specifically, the study seeks to identify patterns in the performance of EAP users of different levels of academic experience whose first language is English and not English in a controlled collocation test.The research questions that guided the present investigation were as follows:Is the number of academic collocations available to L1-English EAP users greater than those available to Other-L1 EAP users?,Is the number of academic collocations available to more experienced EAP users greater than those available to less experienced EAP users?,Are there qualitative differences in the collocation choices by L1-English and Other-L1 EAP users?,Are there qualitative differences in the collocation choices by EAP users of different levels of academic experience?,This section describes the participants taking part in the study, presents the elicitation materials and procedure used, and details how the data was transcribed and processed.The participants in the study were 90 students and members of staff at the Languages Department of a British University, whose details are provided in Table 1.The sampling was opportunistic, as the researcher worked at the Department, which facilitated the data collection.Because it was a Languages Department, it should be noted that the participants were probably more linguistically aware than average.It is also important to bear in mind that the different groups taking part in the experiment were not homogeneous in terms of academic experience – defined here in terms of participant role in higher education, with the undergraduates being regarded as the least experienced group and the academics as the most experienced one1 – or L1.The L1s other than English represented were Mandarin, Spanish, Italian, Polish, Russian, Greek, Arabic, Farsi, French, German, Slovak, Thai and Turkish.The cohort was nevertheless a fair reflection of the population of the Department, and indeed of the mix of backgrounds that is often seen in British universities."The level of English of the students from L1 backgrounds other than English met the university's entry requirements, i.e., undergraduates scored a minimum of 6, and MA and PhD students scored a minimum of 7 in the writing component of the IELTS or provided evidence of equivalent qualifications.The level of English of the academics and EAP tutors with L1s 
other than English was not formally assessed, but can be assumed to be very high, given their roles in higher education.In order to gather data for the present study, ten nouns frequently used in general academic English served as bases for eliciting the collocations available to EAP writers.While it is recognized that there are important lexical variations in different disciplinary fields, this study takes the view supported by Coxhead, Ackermann and Chen, Gardner and Davies and others that there is a common core of academic vocabulary that can be useful across disciplines.The nouns used in the experiment were selected from the Academic Vocabulary List, compiled by Gardner and Davies.AVL is based on the 120-million-word academic component of the Corpus of Contemporary American English.Although COCA_ac is a North-American corpus, researchers from all over the world publish in American journals, and general academic English vocabulary was felt to be sufficiently international to sanction the use of this conveniently open-access corpus as a starting point for the present study.The ten nouns selected as collocation bases are listed in Fig. 1.They were chosen among the fifty most frequent nouns in AVL, so it can be assumed that all the participants taking part in the experiment would be familiar with them.Another important criterion in the selection of those nouns was to ensure that they could activate a range of EAP collocations rather than a single target collocation.In addition, it was determined that the nouns should evoke collocations that could be used across disciplines, rather than being specific to one particular discipline.In order to ascertain these criteria were met, the collocates of each noun were inspected in COCA_ac.For example, the noun system evoked adjectival collocates such as solar, immune, new, political, legal, nervous, public, educational, and so on.Despite the variety of adjectives retrieved, many were only attested in sources pertaining to specific disciplines.On the other hand, COCA_ac rendered a good range of verbal collocates attested in different disciplinary areas within COCA_ac, which made VERB + system a suitable test item for the present study.Having selected the collocation bases to be used, it was important to ensure that the elicitation task would put the participants in the right frame of mind for EAP.The nouns were thus presented within contexts pertaining to gapped academic English concordances from COCA_ac.These were piloted with two experienced academic writers, and a few adjustments were made to ensure the test items elicited the data anticipated.The sentence excerpts used are listed in Fig. 
2, with examples of typical collocations from COCA_ac given in italics.Unlike traditional gap-filling collocation tests in which participants are asked to supply collocations that are not necessarily relevant to their language needs, it can be seen that the elicitation frames of the present study are typical of the kind of texts the participants encounter routinely in their work.As shown, some of the frames in the test are more restrictive than others.For example, while in item 7 practically any adjective that collocates with role in academic English would be acceptable, in item 8 the missing verb needs to collocate with analysis as an object and at the same time be complemented by in two stages.It was thus anticipated that the level of difficulty of the test items would vary.This served our purposes well, as it would help to better discriminate between more and less proficient users of academic collocations.The participants were told they would be presented with ten gapped sentence excerpts and were asked to fill the gaps with as many words as they remembered in the context of academic English without having to stop and think.The idea was to capture only collocations they could retrieve effortlessly, without disrupting their writing processes.It was explained that, apart from using single words, they could also supply a combination of words, such as a verb and a preposition.The participants were instructed to write a question mark,if they could not think of any word for a particular gap and to move on to the next sentence.This was to ensure that the gaps were not left blank because they had been inadvertently skipped, but rather because the participants were not able to retrieve a suitable word.Examples of each of these situations were given in the instructions.After reassuring the participants that the test was entirely anonymous, it was emphasized that they should not dwell on each test item, and should only supply the words that automatically came to their minds, and then move on to the next one.To ensure the test only captured lexis that was recalled effortlessly, the participants were explicitly instructed not to go back and revise their answers.These instructions were supplied in writing and explained orally.No time constraints were imposed so as not to give an advantage to faster-thinking participants over those more deliberate in their writing.What was important was to capture the moment words failed them, rather than how fast they could fill in the gaps.It took no more than 5 min for the participants to complete the task.When transcribing the lexical items supplied, three illegible words could not be processed.Six spelling mistakes were corrected in the transcription, as they were not deemed relevant to an analysis focusing on lexis.However, commonly mistaken words like affect/effect were transcribed literally.Where participants supplied different inflections of a lemma, like An additional factor that affects/affected these results was …, only the first form was transcribed, since the lexical choice remains the same.Five gaps were filled in with entire phrases rather than collocations, like An additional factor that reduced the significance of these results.These phrases were not relevant to the study so were not transferred to the database.One EAP tutor altered the preposition preceding two stages in test item 8 from in to into, and filled in the gap with divided.This response was invalidated.After transcription, the words in the gaps were sorted according to whether they 
qualified as EAP collocations by checking them against a corpus of academic English.At this point in the study, the 37-million-word Pearson International Corpus of Academic English had been kindly made available to the researcher on Sketch Engine.PICAE is made up of texts covering a wide range of academic disciplines from American, Australian, British, Canadian and New Zealand publications, and was preferred over the admittedly larger COCA_ac because collocation look-ups on Sketch Engine are faster and more efficient, which, as shall be seen below, greatly facilitated the analysis.It was determined that for the lexical items supplied to qualify as collocations, they had to score high in terms of strength of association and be sanctioned by a minimum number of analogous co-occurrences in different texts in PICAE.Gablasova, Brezina, and McEnery discuss different methods for establishing whether combinations of words in a corpus can be considered collocations, including traditional strength of association measures like t-score and MI, and more recent ones like logDice.The association measure used in the present study was the logDice statistic favoured in Sketch Engine.It is more robust than the t-score, which is overly sensitive to high-frequency words, and more appropriate than the MI-score, which rewards low frequency items, including very rare or even misspelled words.As Gablasova et al. explain, logDice ‘highlights exclusive but not necessarily rare combinations’.This is exemplified in Fig. 3, which shows the top five lemmas immediately to the left of role in PICAE, ranked according to t-score, MI and logDice.An exploratory investigation of what could be a reasonable cut-off point in terms of logDice and co-occurrence frequencies was conducted by examining the collocates of table and system, the collocation bases of the study that had respectively the lowest and highest frequencies in PICAE.In consultation with an EAP expert, a threshold of logDice ≥3 and a minimum of five analogous co-occurrences in at least five different sources in PICAE was found to work well for both bases.It naturally captured very frequent collocations in academic English like develop a system, but was at the same time sensitive to less common but nevertheless idiomatic collocations like devise a system.On the other hand, the cut-off point left out semantically sensible combinations of words that are arguably not appropriate in an academic register like come up with a system, and other plausible but more open-choice combinations like discover a system.As expected, it also excluded less obvious combinations of words like ?,hypothesize a system.2, "Sketch Engine's Word Sketch option conveniently sorts collocations according to their grammatical relations and ranks them in terms of co-occurrence frequencies and logDice score.This enabled one to validate the main collocates for each gap efficiently and flexibly, since the results are not constrained by contiguous co-occurrence or the exact wording of each test item, but allow for analogous contexts of use, as exemplified in Fig. 
5.Despite the convenience of how collocations are displayed in Sketch Engine, it was nevertheless necessary to carry out complementary concordance queries and inspect them manually in order to check that the lexical items attested in Word Sketches pertained to a minimum of five different sources in PICAE, and verify whether lexical items which did not figure in Word Sketches could have nevertheless satisfied the criteria established to qualify as collocations.Note that when undertaking this analysis, spelling variants like favour and favour were considered together.A breakdown of the overall results is provided in Table 2.The number of blanks was very small, constituting only 1.5% of the total number of responses.Of the 2330 lexical items elicited in the test, 1664 were classified as collocations according to the criteria specified in 2.5."Section 3.1 examines the 1664 elicited collocations in terms of the participants' L1 and their level of academic experience from a quantitative perspective.Section 3.2 reports on the participants’ collocation choices from a more qualitative perspective.The 686 lexical items that did not reach the collocation threshold defined in 2.5 will be submitted to acceptability judgement testing in a follow-up study.As previously shown in Table 2, there was considerable variability in the performance of the cohort.One participant retrieved as many as 44 collocations, while another one was only able to supply 2 collocations in the entire test.This section examines this variability from the perspectives of L1 background and academic experience.Table 3 summarizes performance according to L1.As shown, the L1-English group did on average slightly better in the test.Since the scores were not normally distributed, a non-parametric Mann-Whitney one-tailed test was used to determine the statistical significance of these results.With U = 861.5, p < 0.05, the difference was not statistically significant.This means it is not possible to rule out the possibility that the slightly higher average score of the L1-English participants was due to chance, and therefore it is not possible to make any claims about the performance of the participants on the basis of their L1.A closer look was then taken at performance according to academic experience.The results of this analysis are presented in Table 4.As evident in the means, there is a steady progression in the number of collocations supplied that correlates with experience in academia, with the undergraduates supplying the fewest collocations, and the academics retrieving on average almost twice as many.The EAP tutors positioned themselves between the MA and PhD students, but were excluded from further comparison because their academic qualifications had not been controlled for.A one-way ANOVA was used to investigate whether the differences among the remaining participants were significant.The results of the test were significant, with a high effect size value, meaning the differences observed are substantial and unlikely to be due to chance.A post-hoc Tukey test revealed the academics significantly outperformed the undergraduates and MA students, and the PhD students significantly outperformed the undergraduates.The remaining differences were not statistically significant.These results suggest that academic experience affects the number of collocations available to EAP users, and that the wider the gap in experience the more discernible its effect.At this juncture, it must be remembered that the undergraduates were predominantly 
L1-English speakers, while the PhD students were mostly from other L1 backgrounds.These groups were too unequal to allow for a more fine-grained analysis of the extent to which the L1 variable may have affected the results in Table 4.However, for the two remaining groups – the academics and the MA students – the distribution of L1-English and other L1s was reasonably balanced.Table 5 therefore details a comparison of the performance of the academics in terms of L1, and Table 6 summarizes analogous data for the MA students.The values in Table 5 indicate that the differences between L1-English and other academics seemed negligible.Unsurprisingly, a one-tailed t-test showed the differences detected were not statistically significant.There is therefore no evidence that having L1-English had an effect on the number of collocations the academics were able to recall.Table 6 shows that the L1-English MA students were able to provide on average slightly more collocations than the other MA students.However, a one-tailed t-test indicated that the former did not significantly outperform the latter.Thus, as with the results obtained for the academics, for the MA students too it was not possible to assert that having English as a first language significantly affected the number of EAP collocations remembered.This section examines the participants’ lexical preferences.As the focus was on lexis, different forms of the same lemma and spelling variations were grouped together and represented by their most frequent form.Additionally, only lexical items favoured by at least 20% of each group were taken into account, as below this threshold there was too much idiosyncratic variation for the analysis to be meaningful.Table 7 displays the collocations favoured by the participants in each L1 group.Overall, there were 27 different collocations that at least 20% of the L1-English group agreed on, and 25 among the other group.Despite this slight difference in lexical diversity, it can be seen that the participants of both groups tended to use the same collocations.In eight of the ten test items, the most frequent collocation was actually the same.One noticeable difference, however, was that the L1-English participants were more prone to using high-frequency, general English lexis like give, key, do and take.In contrast, the other participants agreed more often on the use of more specialized words like analyse and propose.When examining the lexis chosen by the participants according to academic experience, there was more variation in the lexical preferences of each group.As shown in Table 8 the academics agreed on the greatest number of different collocations, which can be interpreted as an indication of a more varied and consolidated collocation repertoire.Collocation diversity correlated with academic experience, with the undergraduates agreeing on the fewest different collocations.Interestingly, the EAP tutors positioned themselves between the academics and the PhD students on this scale.The most frequent collocations chosen by each group coincided in only five out of ten test items, suggesting there is again more variability in relation to academic experience than in terms of L1 background.Another interesting finding is that the academics agreed on the greatest number of collocations that were unique to their group, which could be another indication of a more consolidated collocation repertoire.In contrast, none of the collocations agreed upon by at least 20% of the undergraduates were unique to their group.It 
has long been acknowledged that lexical knowledge is not just about understanding words, but also about employing words in context.Corpora have enabled researchers to capture how linguistic communities conventionally put words together, and learner corpora have brought to light problems that are typical among less proficient language users.However, corpora cannot provide information on the lexical choices available to writers at the moment of writing.The present study set out to investigate the collocations available to a group of 90 EAP users in a controlled language production task designed to elicit academic collocations.The lexical items the participants supplied varied in number and in type.This is not unexpected, as the study cohort was not and should not be treated as a homogeneous group.Although the participants were all from a Languages Department, and therefore likely to be more linguistically aware than the average EAP user, their uneven performances serve to underscore the fact that there can be substantial variability in the productive collocational repertoire of regular users of academic English.This heterogeneity should be acknowledged and understood.One factor that could explain the differences observed is that many EAP users do not have English as a first language.However, no significant differences were found in the number of collocations available to participants with and without L1-English.Even though the initial overall results could have been distorted by the fact that the L1 groups in the opportunistic experimental cohort were not balanced, when the two subgroups that were similar in size were compared separately, the results were the same.These quantitative findings were reinforced by the qualitative analysis that followed.The participants of both language groups tended to favour the same collocations.One small but noticeable difference detected, however, was that the collocation repertoire of the L1-English participants tended to be more permeable to less formal, general English lexis.This could be explained by the fact that they will normally have had more exposure to non-academic uses of English.The main variable affecting the number and variety of collocations available to the participants in the study was their level of academic experience.The undergraduates supplied the fewest collocations, the MA students came next, then the PhD students, and finally the academics.Additionally, the EAP tutors outperformed the MA students and undergraduates.Of course, the positive correlation of collocation repertoire and years at university could have been skewed by the imbalanced L1 backgrounds pertaining to different levels of academic experience in the cohort.However, as pointed out above, when the two reasonably balanced groups were compared, their performances did not differ significantly.Moreover, when considering the cohort as a whole, it should be noted that the Other-L1 bias was stronger among the PhD students, while the L1-English bias was more pronounced among the undergraduates.Therefore, it cannot be inferred that the groups of higher academic experience performed better because there were more L1-English participants among them, and neither that the groups of lower academic experience did less well because there were more participants whose first language was not English among them.If anything, quite the opposite was true.These findings indicate that having English as a first language does not automatically give an advantage to users of academic English in terms 
of their productive collocation repertoire, and lend support to the view that there are no native speakers of Academic English."Moreover, in line with Hulstijn's theory of Higher Language Cognition, the present findings suggest that L1 performance should not be indiscriminately used as a benchmark for assessing L2 proficiency, particularly when dealing with a specialized register like EAP.The qualitative analysis showed the academics not only supplied more collocations, but were also more consistent in their lexical choices.They supplied more collocations in common than the other groups, particularly the undergraduates."These findings are consistent with Hoey's Lexical Priming theory, whereby language users take mental notes of how words are used, and learn to make automatic connections between such words once they have encountered them together sufficiently often.In the present case, years of experience in reading and writing academic texts seem to have equipped the academics with a more sophisticated and more consolidated collocation repertoire, regardless of their having L1-English or not.In terms of implications for teaching, the present findings suggest that novice EAP users would benefit from further awareness of and exposure to academic collocations, even when their L1 is English."While extensive reading and writing over the years at university appears to be an effective way of boosting one's collocation repertoire incidentally, it is important to recognize that the writing of novice EAP users may be disrupted by a less than optimal recall of academic collocations, and that L1-English writers too may need support in this respect. "Dictionaries and other collocation resources can jog writers' memories at the moment they need a specific collocation, and it would not be unreasonable to suggest that EAP collocation references like the Academic Collocations List appended to the Longman Collocations Dictionary and the Oxford Learner's Dictionary of Academic English, which have been compiled specifically for EAP users of L1s other than English could in fact also be useful to L1-English undergraduates and secondary school students who have not yet had sufficient opportunities to assimilate the lexical conventions of the register.This does not mean to say that differences in L1 background should go unacknowledged.There is abundant evidence on the negative impact of incongruent L1/L2 collocations.Moreover, the present study generated new data indicating that L1-English EAP writers tend to be more prone to using general English lexis in academic contexts.The planned follow-up investigation where the 686 non-collocations elicited in this study will be subjected to acceptability testing should disclose further insights about the effect of L1.Questions such as whether the words classified as non-collocations were open-choice, errors, too informal or just odd in an EAP context remain to be answered.Another issue that should not be overlooked is the difference between core and discipline-specific collocations.Although the present study did not examine discipline-specific collocations, it pointed to deficiencies in the use of core collocations by novice EAP users.This lends credibility to the pedagogical value of EAP vocabulary resources that cut across different academic domains proposed by Coxhead, Gardner and Davies, Ackermann and Chen, Lea and others.In fact, one must not discard the possibility that general EAP collocations might well be harder to acquire incidentally, since they could be less 
noticeable when compared with the more targeted and concentrated way in which EAP users are exposed to discipline-specific collocations.Having said this, the present findings are exploratory and should be interpreted with caution.A larger investigation, with a more balanced cohort in terms of L1 and academic experience, and that includes participants from different disciplinary areas, is still needed.In future, a computer-delivered test with screen recording would also enable one to better control for the exact moment collocations stop flowing.Notwithstanding these limitations, the study offers important insights into the collocations effortlessly available to EAP writers, and opens the way for further studies.Future research could usefully explore how well writers of different L1s recall congruent and incongruent collocations, and whether there are differences in discipline-specific and core academic collocation recall. | Studies on the productive use of collocations have enabled researchers to harness a wealth of information about the phenomenon. However, most such studies focus on the collocations that come to the surface in finished texts, and have not been able to capture the range of collocational choices available for writers to choose from as they write. The present investigation addresses this gap by examining the collocations users of academic English at a British university were able to recall when presented with a selection of general academic writing frames. The study examined the collocations instinctively available to a group of 90 academics, tutors of English for Academic Purposes (EAP) and students at PhD, MA and undergraduate levels in an academic writing gap-filling test where more than one collocation could be used in each gap. The results indicate that experience of English academic writing plays a more decisive role than having English as a first language (L1) in the collocations effortlessly available to EAP users. |
340 | Data set on interactive service quality in higher education marketing | The context determines the meaning of the word quality to different people.Quality can be described as conformance of output to planned goals, specifications and requirements.Service marketing scholars believe that quality is about exceeding customer expectations.Quality in education has been described as the fitness of educational outcome and experience for use.This paper is premised on the transcendent view of quality by Garvin.Scholars argue that recognition of quality is dependent on experience gathered from repeated exposure to the service.This perspective of quality is consistent with innate excellence, high achievement and uncompromising standards.Interactive quality is one of the essential dimensions of service quality.It refers to the nature of the communication and relationship that exist between the students and the faculty and staff of the University.It is also about the quality of the teaching and learning process in the University.Instruction may be described as the impartation of skills, values and knowledge that come as a result of quality teaching.The education literature presents a good number of teaching strategies, and there are also a good number of research studies that validate them.The issues of teaching quality and teaching effectiveness have been attracting scholarly debates and controversies in the higher education community.As a result, a good number of scholars have focused on teaching quality from different viewpoints.Many researchers agree that teaching quality is one of the major factors that influence student achievement; other school-related factors include financial conditions, class size, leadership and school organization.However, only a limited number of studies have considered the views of university alumni.The data in this article describe the academic specialization of the students across the three categories of universities.The results from these data can be used to assess the level of learning that took place in those universities.They also provide information on the quality of examinations that take place in those universities as perceived by the students.The results show the ratings of universities by their students as regards the quality of assignments given, group discussion, as well as the breadth of the knowledge being imparted to the students.The data can be used to compare the three categories of universities based on their perceived interactive quality.The results can further be categorised based on gender, academic specialization and state of origin.Many studies have been done on the technical and functional quality of higher institutions, especially from the perspective of the regulatory bodies, but limited studies have been done in the area of interactive quality.The data provided shall therefore facilitate further studies on interactive quality in higher education marketing.The data present the academic disciplines of the respondents in the study.Industrial Relations and Human Resource Management, Accounting, Business Administration and Marketing, which are captured as management-related courses, have the largest percentage of representation.The management-related courses were the most subscribed to in those universities, and as such the distribution is a good representation of the population.Other areas of specialization of the alumni and their corresponding percentages are social science courses, science and environmental-based courses, law, engineering courses, education, as well as art and humanities-based courses.The alumni were asked to assess the interactive or instructional quality of their “alma mater”.The dimensions of interactive quality considered in this study included learning, group discussion, breadth of lecture, assignment, examination and social relationship.The bar charts in Figs. 1–6 represent the extent to which these dimensions were rated across the different categories of universities involved in this study.The bar charts reveal the variations in the responses of the alumni of the universities involved in this study to the different dimensions of interactive quality.This paper gathered data on interactive service quality imperatives among Nigerian universities.Scholars have different opinions on the dimensions or components of service quality.Responses were elicited from the alumni to rate their universities based on their level of interactive quality: learning, breadth, assignment, group discussion, examination and social relationships.This paper considered the interactive component of service quality.The questionnaire was adapted from the works of previous scholars.In addition, the questionnaire was subjected to factor analysis in order to ascertain its convergent validity.The results revealed that the lowest loading was 0.261 while the highest loading was 0.730.The adequacy of sampling was ascertained with a KMO measure of 0.748 and a Bartlett's Test result of p=0.000.These results therefore suggest that the instrument passes the test of convergent validity.This data article analyzed the responses of graduates of the selected universities in Nigeria as regards the quality of interaction received during their undergraduate programmes.The data provided will encourage empirical studies that could assess the current trends in the quality of education in Nigeria and how education, as a service, could be improved upon and marketed to both internal and external stakeholders.It is hoped that the empirically based insights gathered from this data article will further contribute to relevant theories, policy formulation and practice in academia. | This paper provides data on the interactive quality of the educational services rendered in south-west Nigeria. Data were gathered based on a conclusive research design. Stratified and convenience sampling techniques were adopted. Responses were elicited from the alumni as regards their perception of interactive quality: learning, group discussion, breadth, assignment, examination as well as social relationships. The interactive quality component of the Student Evaluation of Educational Quality (SEEQ) instrument developed by previous scholars was adapted. The research instrument was confirmed to have all the necessary psychometric values considered appropriate for the study. Some descriptive statistical analyses were carried out to further clarify the data and provide the necessary platform for further analyses. |
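The instrument-validation step reported above (factor loadings between 0.261 and 0.730, KMO = 0.748, Bartlett's test p = 0.000) corresponds to a routine exploratory factor analysis workflow. A minimal sketch is given below, assuming the third-party Python package factor_analyzer and a hypothetical CSV of the six interactive-quality items; the file and column names are placeholders, not part of the original data set.

```python
# Minimal sketch of the sampling-adequacy and convergent-validity checks,
# assuming the factor_analyzer package and a hypothetical item-level CSV.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                              calculate_kmo)

# Hypothetical file with one column per interactive-quality item
# (learning, group_discussion, breadth, assignment, examination, social).
responses = pd.read_csv("interactive_quality_items.csv")

chi_square, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_overall = calculate_kmo(responses)
print(f"Bartlett's test: chi2={chi_square:.1f}, p={p_value:.3f}")
print(f"Overall KMO: {kmo_overall:.3f}")  # the data article reports 0.748

# Single-factor solution; item loadings are then compared against the
# 0.261-0.730 range reported for convergent validity.
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(responses)
print(pd.Series(fa.loadings_[:, 0], index=responses.columns))
```

A KMO value above roughly 0.6 and a significant Bartlett's test are the conventional criteria for judging a correlation matrix suitable for factor analysis, which is how the reported 0.748 and p = 0.000 support the claim of sampling adequacy.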
341 | Repurposing of an old drug: In vitro and in vivo efficacies of buparvaquone against Echinococcus multilocularis | Alveolar Echinococcosis is a life-threatening disease caused by infections with the fox tapeworm Echinococcus multilocularis which is endemic in the Northern hemisphere.The natural life cycle of E. multilocularis typically includes canids as definitive hosts and voles as intermediate hosts.However, a large variety of mammals can be infected as accidental intermediate hosts by ingesting parasite eggs shed by the definitive hosts during defecation.In humans, E. multilocularis forms larval metacestodes which primarily infect the liver, but they can also form metastases and affect other organs, especially at the late stage of infection.Metacestodes grow aggressively and infiltrate the host tissue, thus causing AE.AE has many pathological resemblances with a slow growing, malignant hepatic tumor, and for surgical excision of parasite lesions, the general rules of hepatic tumor surgery are followed accordingly.However, complete surgical removal of the parasitic lesions is often not possible, due to the diffuse and infiltrative nature of the metacestode tissue.In such cases, chemotherapy remains the only widely used treatment option against AE.The current drugs of choice are the benzimidazole derivatives albendazole and mebendazole.However, they have several drawbacks, most importantly they act parasitostatic rather than parasiticidal, hence they have only limited potential to bring about a cure from infection, and massive doses of these drugs usually have to be administered throughout life.Additionally, benzimidazoles are not always well tolerated and can cause severe side effects, such as hepatotoxicity in some patients.All these shortcomings make it urgent to develop alternative chemotherapeutic options against AE.Given the relatively small target population, commercial support for neglected diseases such as echinococcosis is modest.Thus, one of the most promising strategies to find new drugs against AE is the repurposing of substances with already described activities against other pathogens.Open source drug discovery is fundamental to enable drug repurposing in an academic environment, and supported by organizations such as the Medicines for Malaria Venture.MMV is a product development partnership with the declared goal of “ discovering, developing and facilitating the delivery of new, effective and affordable antimalarial drugs”.In 2013, MMV launched the open-access Malaria Box, a collection of 200 drug-like and 200 probe-like molecules with in vitro inhibitory activity against the malaria parasite Plasmodium falciparum.The MMV Malaria Box was since then screened in over 290 assays against a wide range of organisms, including various parasites, bacteria, yeasts, and cancer cell lines.The 400 compounds from the Malaria Box were screened against E. multilocularis metacestodes, seven were found to be active in vitro at 1 μM, and one of them was studied in more detail.Following the success of the Malaria Box, MMV launched the Pathogen Box which contains 400 drug-like molecules with confirmed activity against various pathogens including parasites, bacteria, and viruses.Also included in the Pathogen Box are 26 reference compounds, which are well described drugs that are frequently used in clinical applications against various pathogens.In this study, we screened the compounds from the MMV Pathogen Box in vitro against E. 
multilocularis metacestodes by applying the PGI-assay and the Alamar Blue assay to monitor decreased viability of the metacestode tissue.Four compounds with promising activities were further tested for their cytotoxicity against rat hepatoma cells and human foreskin fibroblasts in vitro.Overall, we found two novel compounds with distinct activities against E. multilocularis metacestodes.One of them is buparvaquone, which is a known anti-theilerial drug that was subsequently also tested in mice experimentally infected with E. multilocularis.To further study the mode of action of BPQ, we performed transmission electron microscopy and established a system to measure its effect on the oxidative phosphorylation in the mitochondria of E. multilocularis cells.All chemicals were purchased from Sigma, unless stated otherwise."Dulbecco's modified Eagle medium and fetal bovine serum were obtained from Biochrom.The solutions containing Trypsin-EDTA, Penicillin/Streptomycin, and amphotericin B were purchased from Gibco-BRL.The 400 compounds from the Pathogen Box were provided by MMV as 10 mM solutions in DMSO and stored at −20 °C.Additional samples of the compounds MMV021013, MMV671636, MMV687807, and BPQ were prepared as 10 mM stocks in DMSO upon arrival and stored at −20 °C.E. multilocularis metacestodes were cultured as described by Spiliotis et al.In short, metacestodes were grown in vivo in intraperitoneally infected Balb/c mice for 3–5 months.The parasite material was subsequently resected, pressed through a conventional tea strainer, and incubated overnight at 4 °C in PBS containing 100 U/ml penicillin, 100 μg/ml streptomycin, and 10 μg/ml tetracycline.To establish a new in vitro culture, up to 2 ml of parasite tissue was co-cultured with 5 × 106 Reuber rat hepatoma feeder cells and incubated at 37 °C with 5% CO2 in DMEM containing 10% FBS, 100 U/ml penicillin, 100 μg/ml streptomycin, and 5 μg/ml tetracycline.Once a week, the culture medium was changed and new RH cells were added.RH cells were cultured in parallel in the same culture medium, under the same conditions as the metacestodes, and they were passaged once a week.The in vivo studies were performed in compliance with the Swiss animal protection law.The study was approved by the Animal Welfare Committee of the Canton of Bern.Balb/c mice, 6 weeks old, were purchased from Charles River Laboratories and used for in vivo experiments when they were 8 weeks old and weighted approximately 20 g.The mice were housed in a type 3 cage containing enrichment in the form of a cardboard house and paper and woodchip bedding with a maximum of seven mice per cage.They were maintained in a 12 h light/dark cycle, controlled temperature of 21 °C–23 °C, and a relative air humidity of 45%–55%.Food and water was provided ad libitum.Experimentally infected mice were treated with BPQ to elucidate the efficacy of the drug in vivo.To infect mice, in vitro grown E. 
multilocularis metacestodes were washed in PBS, were mechanically destroyed by pipetting and the resulting suspension was centrifuged for 5 min at 500 g.The parasite tissue was then taken up in an equal volume of PBS.Each mouse was subsequently infected intraperitoneally with 200 μl of this suspension.32 infected mice were randomly distributed into 3 treatment groups with 4 animals per cage.Group 1 received only the solvent corn oil; group 2 received ABZ; and group 3 received BPQ.Treatments of mice started 2 weeks post-infection and lasted for 4 weeks, with consecutive treatment of mice for 5 days per week, followed by an interruption of treatment for 2 days for recovery.All treatments were administered by oral gavage in a volume of 50 μl, with ABZ and BPQ being suspended in corn oil.After four weeks, all mice were anesthetized with isoflurane and subsequently euthanized by CO2.The parasitic tissue from each mouse was completely resected and weighed.The mass of the resected parasitic tissue was used for statistical analyses of the experiment.The three groups were analyzed by two-sided exact Wilcoxon rank-sum test and p-values were Bonferroni adjusted.The significance level was set to p < 0.05.Figures were prepared in R and Adobe Illustrator 2015.1.0.The 400 compounds of the MMV Pathogen Box were initially screened at 10 μM in singlets by PGI-assay.The positive compounds from this initial screen were re-tested by PGI-assay in triplicates to confirm their activity at 10 μM.Thereafter, positive compounds were further tested at 1 μM in triplicates.Compounds were considered as active if they exceeded 20% PGI activity of the positive control Triton X-100.After this screening cascade, four active compounds remained that were serially diluted from 90 μM in 1:2 or 1:3 dilution steps to assess their EC50 values in triplicates.EC50 values were calculated after logit-log transformation in Microsoft Office Excel.The three screening rounds of the Pathogen Box were each carried out once, and dilution series to assess the EC50 values were tested in three independent experiments.Mean values and standard deviations are given for the EC50 values.In order to assess the activity of compounds from the Pathogen Box on E. 
multilocularis metacestodes, the PGI-assay was employed.The PGI-assay measures the amount of the enzyme phosphoglucose isomerase that metacestode vesicles release into the medium supernatant when their integrity is disrupted.Metacestodes used for the PGI-assay were cultured in vitro for 4–10 weeks, washed in PBS, and mixed 1:1 with DMEM before distribution in 48-well plates.Drugs were pre-diluted in DMSO and then added to the wells.Corresponding amounts of DMSO were used as the negative control, and the nonionic surfactant Tx-100 was applied as positive control.The parasite- and drug-containing plates were incubated at 37 °C and 5% CO2, under humid atmosphere.To assess drug-induced metacestode damage by PGI-assay, 120 μl medium supernatant was collected from each well after 5 and 12 days of incubation and stored at −20 °C until further measurements were performed.The amount of PGI released in these media was measured as described by Stadelmann et al.The activity of PGI was finally calculated from the linear regression of the enzyme reaction over time and expressed as relative activity of the positive control Tx-100 in Microsoft Excel and Figures were prepared in Adobe Illustrator.1.0.After initial screening by PGI-assay, the vesicle viability assay by Alamar Blue was applied to the most active drugs.The setup was the same as for PGI-assay EC50 calculations and it was performed in triplicates.After 12 days of treatment, viability of metacestodes was measured by Alamar Blue assay as previously described.Data was used to calculate the minimal inhibitory concentrations of these compounds on metacestodes.The MIC was defined as the lowest concentration of a drug with no significant difference in viability compared to the Tx-100 control, where all parasites were dead.MICs were tested in three independent experiments and mean values and standard deviations were calculated in Microsoft Office Excel.The in vitro toxicity of selected compounds was tested against confluent and pre-confluent human foreskin fibroblasts as well as RH cells.HFF were kept in DMEM supplemented with 10% FBS, 100 U/ml penicillin, 100 μg/ml streptomycin, and 0.25 μg/ml amphotericin B at 37 °C and 5% CO2 in a humid atmosphere.To start the assay, HFF were seeded in 96-well plates.The cells were incubated in 100 μl HFF cultivation medium at 37 °C and 5% CO2 to attach to the well and let grow for 4 h or 22 h before the drugs were added.Drugs were serially diluted starting at 100 μM in 1:2 or 3:4 dilution steps and added to the cells.The final dilution series was adapted individually for each drug.The cells were subsequently incubated for 5 days at 37 °C and 5% CO2 in humid atmosphere.RH cells were treated the same way as the HFF, with the difference that 50,000 cells were seeded per well to obtain a confluent monolayer, and 5000 cells per well for pre-confluent wells.RH cells were incubated in DMEM containing 10% FBS, 100 U/ml penicillin, 100 μg/ml streptomycin, and 5 μg/ml tetracycline.To measure the viability of the cells after treatment, the Alamar Blue assay was employed.Therefore, the cells were washed three times in PBS and resazurin was added to 10 mg/l.The fluorescence at 595 nm was subsequently measured after 0 h and 2 h with an EnSpire 2300 plate reader.IC50 values were calculated in Microsoft Excel after logit-log transformation of relative growth.Each drug concentration was executed in triplicates for one experiment, and averages and standard deviations of three independent experiments were calculated for each drug.The 
preparation of the samples for transmission electron microscopy was done according to the protocol of Hemphill and Croft.In short, E. multilocularis metacestodes were distributed to 48-well-plates and incubated with DMSO or BPQ as described above.After an incubation period of 5 days, metacestodes were fixed in 2% glutaraldehyde in 0.1 M sodium cacodylate buffer; pH = 7.3 for 1 h. Next, the samples were stained for 2 h in a 2% osmium tetroxide solution cacodylate buffer, and subsequently pre-stained in a saturated uranyl acetate solution for 30 min.After washing the samples with water, they were dehydrated stepwise by washing in increasing concentrations of ethanol.The samples were then embedded in Epon 812 resin with three subsequent resin changes during 2 days and incubated at 65 °C overnight for polymerization.Sections for TEM were cut using an ultramicrotome, and were loaded onto formvar-carbon coated nickel grids.The specimens were finally stained with uranyl acetate and lead citrate, and were viewed on a CM12 transmission electron microscope that operates at 80 kV.In an additional experiment, effect of oxygen on the activity of BPQ on E. multilocularis metacestodes was assessed.BPQ was serially diluted from 30 μM down to 4.57 nM in 1:3 dilution steps and added to metacestodes as described above.Corresponding DMSO controls were included.The plates with the metacestodes were incubated for 5 days either under aerobic conditions in a standard incubator or under anaerobic conditions at 37 °C in a defined gas mixture containing 80% N2, 10% CO2, and 10% H2, humid atmosphere.Subsequently, samples were taken for PGI-assay and processed as described above.The experiment was repeated three times independently.Figures were prepared in Adobe Illustrator 1.0.To obtain germinal layer cells from in vitro grown metacestode vesicles, the protocol described by Spiliotis and Brehm was followed with few modifications.Prior to the isolation process, conditioned medium was prepared as follows: 106 RH cells were seeded in 50 ml DMEM in a T175 cell cultivation flask.These cells were incubated for 6 days at 37 °C with 5% CO2, under humid atmosphere.In addition, 107 RH cells were cultivated the same way but incubated only for 4 days.After the incubation periods, medium supernatants were sterile filtrated, mixed 1:1, and stored at 4 °C until further use.To isolate GL cells, E. multilocularis metacestode vesicles from in vitro culture were harvested and washed in PBS.The vesicles were mechanically disrupted using a pipette.The remaining vesicle tissue was incubated in EDTA-Trypsin and occasionally gently shaken for 20 min.Thereafter the mixture was sieved through a 50 μm polyester tissue sieve and rinsed with PBS.The flow-through containing the GL cells was collected, centrifuged, and the pellet was taken up in cDMEM.To standardize the amount of cells present in the mixture, the O. D. 600 of the cell suspension was measured.An O. D. 600 of 100 was defined as one arbitrary unit per μl of the undiluted cell suspension.700 units of GL cells were then seeded in 5 ml cDMEM and incubated overnight at 37 °C in a humified, oxygen-free environment of N2.A Seahorse XFp Analyzer was used to assess the oxygen consumption rate as an indicator of the mitochondrial respiration of E. 
multilocularis GL cells in real time. A plasma membrane permeabilizer (PMP) was applied to selectively permeabilize only the plasma membrane of the GL cells, thereby exposing the mitochondria directly to the assay medium. The assays were done according to the manufacturer's manuals and to Divakaruni et al. One day prior to the assay, the sensor cartridge was hydrated overnight in XF calibrant solution at 37 °C and a Seahorse XFp miniplate was coated with CellTak according to the manufacturer's protocol to prepare them for cell attachment. The assays were carried out in mitochondria assay solution (MAS), which consisted of 220 mM mannitol, 70 mM sucrose, 10 mM KH2PO4, 5 mM MgCl2, 2 mM HEPES, and 1 mM EGTA, at a pH of 7.2. A stock solution of 3x MAS was prepared as described in the manufacturer's manual and stored at 4 °C, and BSA was added to 1x MAS at a final concentration of 0.2% for each assay. To run an assay, the test compounds to be injected were prepared as ten-times stocks in MAS and then loaded into the delivery ports of the sensor cartridge. The final concentrations of the test compounds after injection were 1 μM, 10 mM, 20 mM and 0.6 mM. GL cells that had been isolated the previous day were washed in MAS and taken up in assay buffer, which consisted of 1x MAS supplemented with 10 mM succinate, 2 μM rotenone, 4 mM ADP, and 3.6 nM PMP. The cells were then distributed to a CellTak-coated XFp miniplate with 50 units of GL cells per well in 180 μl assay medium. The plate was centrifuged at 300 g for 1 min and transferred to the Seahorse XFp Analyzer to start measurements with 30 s mix time, 30 s delay time, and 2 min measure time, without an equilibration step. BPQ was injected after the fourth measurement, and after the seventh measurement the substrates of interest were added to the wells. Measurements were performed in triplicate and data analysis was performed in Wave. The experiment was repeated three times, and one representative figure is shown. The figure was prepared in Adobe Illustrator 2015.1.0. The 400 compounds from the MMV Pathogen Box were initially screened in vitro on E. multilocularis metacestodes at 10 μM. This screen was carried out in singlets and resulted in 13 active compounds after 5 days and 46 active compounds after 12 days of incubation. The 46 compounds that were positive in the initial screen were confirmed in a second screening round at 10 μM in triplicate to exclude false positives. This yielded 8 positive hits after 5 days, and 5 additional active compounds after 12 days. Of these active compounds, four were reference compounds of the Pathogen Box, four compounds were from the tuberculosis disease set, two compounds were from the malaria disease set, and one compound each was from the onchocerciasis, cryptosporidiosis, and kinetoplastid disease sets. In order to assess the efficacies of those 13 active compounds at low concentrations, they were further tested at 1 μM in triplicate. Four compounds were found to exhibit distinct in vitro activities against metacestodes at this lower concentration. The numerical results of the full screening of the compounds from the Pathogen Box are provided in Supplementary Tables 1–3.
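For orientation, the logit-log EC50/IC50 estimation referred to in the methods above can be sketched as follows. The concentration-response values are hypothetical placeholders, and a straight-line fit of the logit of the normalised response against log concentration is one common way to implement the described transformation, not necessarily the exact spreadsheet procedure used by the authors.

import numpy as np
from scipy.stats import linregress

# Hypothetical serial dilution (in µM) and responses expressed as a fraction of the Tx-100 control.
conc = np.array([90, 30, 10, 3.3, 1.1, 0.37])
rel_response = np.array([0.95, 0.88, 0.72, 0.41, 0.16, 0.06])

logit = np.log(rel_response / (1.0 - rel_response))   # logit transform of the normalised response
fit = linregress(np.log10(conc), logit)               # linear fit in log10(concentration)

# The EC50 is the concentration at which the fitted logit crosses zero (50% response).
ec50 = 10 ** (-fit.intercept / fit.slope)
print(f"EC50 = {ec50:.2f} µM (r2 = {fit.rvalue ** 2:.3f})")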
Subsequently, we assessed EC50 and MIC values on E. multilocularis metacestodes for these four compounds. The EC50 values are representative of the activity in the metacestode PGI-assay, and the MIC values of the parasiticidal potential in the vesicle viability assay by Alamar Blue. The compound with the highest activity after 5 and 12 days of incubation was MMV671636, followed by MMV687807, both with 5 day EC50 and MIC values below 1 μM. BPQ was a little less active after 5 days, but its activity also increased to the sub-micromolar range at day 12. MMV021013 did not exhibit a particularly low EC50, although longer exposure of metacestodes to the drug increased its efficacy as well. We determined the IC50 values of BPQ, MMV021013, MMV671636, and MMV687807 on mammalian RH cells and HFF. Large differences between the IC50 values were observed depending on the confluence and type of the host cell, but all four compounds had commonly lower IC50 values against pre-confluent cells than against confluent cells. BPQ was less toxic against all tested host cells than against E. multilocularis metacestodes. MMV021013 was generally as toxic to host cells as it was to E. multilocularis metacestodes; only confluent HFF were more resistant. MMV671636 was less toxic against all tested cell lines than against E. multilocularis, indicating a potential therapeutic window. Additionally, it had a notably lower IC50 for RH cells than for HFF. MMV687807 showed the highest toxicity against HFF, and accordingly this compound could only exhibit a potential therapeutic window for RH cells. Taken together, only BPQ and MMV671636 exhibited specific toxicity against E. multilocularis metacestodes. Since BPQ is an already marketed drug for the treatment of theileriosis in cattle, and other potential applications include leishmaniasis and babesiosis, this compound was chosen for further characterization. The morphological alterations induced by BPQ on E. multilocularis metacestodes were thoroughly investigated by TEM. The E. multilocularis metacestode is composed of two layers: an outer, acellular and protective laminated layer (LL) that is composed of highly glycosylated mucins, and an inner germinal layer (GL), where various cells reside. In between the LL and the GL is the tegument, which is a syncytial tissue containing villi-like microtriches that protrude into the LL. In vitro-cultured E. multilocularis metacestodes were incubated in the presence of different concentrations of BPQ for 5 days. Ultrastructural damage was observed at concentrations as low as 0.3 μM: the most distinct effects at this low concentration were seen within the mitochondria, which appeared less electron dense than those of the untreated control. At 1 μM, membrane stacks were observed, and the GL started to separate from the LL. The metacestode integrity was seriously impaired at 3 μM of BPQ, with the LL being detached completely from the GL. Due to these alterations of the mitochondria upon treatment with BPQ, further studies on the mode of action of BPQ in E. multilocularis focused on oxygen dependence. As assessed by PGI-assay, incubation of E.
multilocularis metacestodes under anaerobic conditions resulted in a reduction of the activity of BPQ. After 5 days of incubation in an oxygen-free atmosphere, the drug did not induce damage to metacestodes at 10 μM or lower concentrations. Only at 30 μM was BPQ active, and even then the effect was less pronounced than in metacestodes that were incubated under aerobic conditions. To further elucidate the mode of action of BPQ, we established an in vitro system using a Seahorse XFp analyzer and isolated, permeabilized GL cells of E. multilocularis that allowed us to monitor mitochondrial respiration. The Seahorse XFp analyzer measures the OCR of cells, which directly correlates with the activity of mitochondrial complex IV. After addition of 1 μM BPQ to E. multilocularis GL cells, the OCR rapidly decreased. Moreover, addition of ascorbate together with TMPD could restore the OCR, and ascorbate/TMPD are generally known to feed electrons directly into complex IV. However, neither the addition of succinate nor of glycerol 3-phosphate could restore the OCR, as they are both taken up upstream of complex III. Taken together, this strongly suggests that BPQ selectively inhibits complex III in the mitochondrial electron transport chain of E. multilocularis GL cells. The in vivo efficacy of BPQ treatment was assessed in experimentally infected Balb/c mice. Mice were treated p.o. for 4 weeks with 100 mg/kg BPQ on 5 days per week; reference mice received ABZ and control mice the solvent alone. None of the mice showed signs of adverse effects due to treatment with BPQ or ABZ during the whole course of treatment. While treatment with ABZ led to a significant reduction in parasite burden when compared to the control or the BPQ-treated group, there was no significant difference between the control group and the BPQ-treated group. Alveolar echinococcosis is a serious and life-threatening disease caused by the cestode E. multilocularis. Current chemotherapies rely on benzimidazole treatment. However, they are insufficient since they can cause severe side effects, and they can only inhibit the growth and dispersion of metacestodes, but do not kill the parasite. Thus, alternative treatment options are urgently needed. In recent years, major advances have been achieved for the E. multilocularis model. These include the development of new in vitro culture methods which allow the large-scale production of metacestode vesicles, as well as the introduction of the PGI-assay as a medium-throughput drug-screening method providing an objective read-out. These breakthroughs enabled the screening of hundreds of compounds against E. multilocularis. An in vitro cascade to screen drug libraries against E. multilocularis has recently been introduced by Stadelmann et al. and it was applied to the MMV Malaria Box. In the present study, we screened the MMV Pathogen Box in vitro for active compounds against E. multilocularis metacestodes. From the 400 compounds, 13 were active at 10 μM and 4 of these also at 1 μM. This is a similar hit ratio compared to the outcome of the MMV Malaria Box, where 24 and 7 compounds were found to be active at 10 μM and 1 μM, respectively. Of the four compounds that were active at 1 μM, only BPQ and MMV671636 exhibited a high specificity against the parasite. MMV021013 showed only a moderate EC50 against E. multilocularis metacestodes and was as toxic to mammalian cells as it was to the parasite. MMV687807 was very effective against E.
multilocularis, but unfortunately also exhibited substantial toxicity against HFF.Interestingly, MMV687807 is structurally very similar to MMV665807, the top hit from the screening of the Malaria Box against E. multilocularis.However, MMV665807 did not exhibit any specific toxicity against HFF, in contrast to the here tested MMV687807.Both, MMV665807 and MMV687807, are salicylanilide-derivatives related to the well-known anthelmintic niclosamide, with the only difference that MMV687807 has an additional trifluoromethyl group attached to the benzene ring.Both BPQ and MMV671636 were highly active against E. multilocularis metacestodes and less against mammalian cells, thus suggesting for a potential therapeutic window and rendering these two compounds suitable for further analyses.MMV671636 belongs to a group of novel anti-malarial compounds called endochin-like quinolones, some of which, including ELQ-400, also exhibit excellent activities against other apicomplexan parasites such as Toxoplasma, Babesia and Neospora.We here further focused on the marketed hydroxynaphthoquinone BPQ, which is related to parvaquone and ubiquinone and currently used in the treatment of theileriosis in cattle.BPQ also has reported in vivo activity against Leishmania spp. in mice and Babesia equi in horses.It has been shown that BPQ acts via a mechanism involving the inhibition of cytochrome bc1 complex in the mitochondria of Theileria.Another study in Theileria annulata suggested that BPQ is also targeting the peptidyl-prolyl isomerase PIN1.According to our TEM observations, the mitochondria of E. multilocularis metacestodes are among the first structures to be affected when treated with BPQ.Moreover, we confirmed the mitochondrial cytochrome bc1 complex as a molecular target of BPQ in E. multilocularis.The Seahorse technology that was applied to perform these experiments has already been employed to study the metabolism of the trematode Schistosoma mansoni, the nematodes Caenorhabditis elegans and Haemonchus contortus, but so far never for any cestode or isolated helminth cells.The cytochrome bc1 complex has already before proven its value as a valid antiparasitic drug target: Atovaquone for example is another hydroxynaphthoquinone and a potent inhibitor of the cytochrome bc1 complex.It is currently widely used to treat and prevent malaria, especially in chloroquine resistant patients.In E. multilocularis metacestodes, the in vitro activity of BPQ decreased under anaerobic conditions.E. multilocularis can perform fermentation under anaerobic conditions.In addition, as for many other parasitic flatworms, Echinococcus can perform malate dismutation to ferment carbohydrates under anaerobic conditions, and is thus not totally dependent on the mitochondrial respiration chain.This could explain, why BPQ is not highly active under anaerobic conditions.However, as for the in vivo situation, it is expected that the parasite is depending on a combination of aerobic and anaerobic energy generating pathways and that it encounters at least microaerobic conditions in the liver.Our in vivo trial in experimentally infected mice demonstrated that there was no statistically significant reduction in parasite burden upon treating E. multilocularis infected mice p.o. 
with BPQ.One important reason for this discrepancy between in vitro and in vivo activity could be explained by the fact that in vitro screening was performed in the absence of any serum, as the assay was initially established without FBS due to interference with the test.Another reason for failure of the drug against murine AE could be the experimental model, which is based on artificial injection of parasite metacestodes into the peritoneal cavity of mice, and thus growth of parasites occurs primarily there.Upon natural infection of mice with E. multilocularis eggs, where the parasite grows primarily in the liver, higher oxygen concentrations might be reached, and thus also higher effectiveness of BPQ would be expected.A further explanation for the different outcome of in vitro and in vivo treatment of the parasite with BPQ could lay in its mode of action: Blocking the electron transport chain in the mitochondria is expected to lead to the generation of toxic reactive oxygen species.Whereas the parasite E. multilocularis is known to be sensitive against ROS as it is lacking some of the key enzymes for ROS detoxification, E. multilocularis metacestodes might be better protected from ROS in an in vivo setting where detoxifying host cells are closely surrounding the parasite.However, the topic of ROS in echinococcosis awaits further investigation in the future.A third drawback of BPQ is its poor solubility and consequently poor bioavailability, and in particular poor entry into the parasitic tissue, which might be a further explanation for lack of in vivo efficacy thus far.Within the present study, neither plasma levels nor BPQ concentrations within the metacestodes were determined.Only one study so far measured BPQ levels in orally treated mice, and reached a Cmax of 1.2 μM when treating with a single dose of 6 mg/kg.Assuming linear correlation, extrapolation of this dosage to the here applied 100 mg/kg would result in a Cmax of 20 μM, which is above the EC50 of BPQ against E. multilocularis metacestodes in vitro.Some attempts to increase the bioavailability of BPQ were made in the past, such as formulation of better soluble oxime- and phosphate derivatives, which show higher efficacies against leishmaniasis in vivo.Solid lipid nanoparticles loaded with BPQ were also generated, but these nanoparticles were never tested against parasites.More recently, Smith and colleagues presented a BPQ loaded self-nanoemulsifying drug delivery system, which showed a slightly increased bioavailability, compared to an aqueous dispersion of BPQ, after oral administration in mice.Such formulations of BPQ should be tested in the future also for their efficacy against AE in mice.Several compounds from the MMV Pathogen Box were already tested before against E. multilocularis or E. granulosus in vitro and/or in vivo.Pentamidine, alpha-difluoromethylornithine, and suramine were all tested in vivo against E. granulosus, but did not show any effects.Rifampicin and miltefosine were both tested in vitro against E. multilocularis metacestodes and rifampicin was also tested in vivo.However, both compounds were ineffective in these studies.In accordance to these findings, the compounds were also inactive in the present in vitro screen against E. multilocularis.Praziquantel, despite its wide use against intestinal infections with adult cestodes and other parasites, is not active against the metacestode stage of E. 
multilocularis, neither in vivo, nor in vitro, as confirmed in this study.This could be explained by the fact that praziquantel causes paralysis of the parasite musculature, which then only affects actively moving, adult worms but not sessile metacestode larvae.The antifungal agent amphotericin B was shown to destroy E. multilocularis metacestodes in vitro at 2.7 μM.Amphotericin B was also tested for treatment of human AE patients, but with limited success as the drug acted only parasitostatic and was accompanied with severe side effects.Amphotericin B was not active in our screen at 10 μM, as Reuter et al. employed a different cultivation system that required medium change three times a week.Additionally, a different parasite strain and assay readout was employed.Another compound with known activity against E. multilocularis is nitazoxanide.It was previously shown to be active in vitro against E. multilocularis metacestodes at 3.3 μM, as well as against E. granulosus metacestodes and protoscoleces.Nitazoxanide was also tested in vivo in mice and in human patients suffering from CE or AE, but virtually no beneficial effects were observed.Congruently, nitazoxanide was also among the 13 compounds from the Pathogen Box that were active at 10 μM in the present study, but it did not maintain its activity at 1 μM and was not further followed here.Mebendazole, together with ABZ, is the current standard chemotherapeutic treatment for AE patients.One of the first in vitro studies with E. multilocularis metacestodes demonstrated an inhibition of parasite proliferation over the course of three weeks treatment with mebendazole at 1 μM.Mebendazole was not active in our screen with a threshold of 20% relative activity compared to Tx-100, as the PGI-assay only identifies compounds that are active within a shorter time-span.This finding is line with our previous observations, where benzimidazoles only induced a slow release of PGI.However, comparisons of benzimidazoles by electron microscopy showed that the drugs are having a clear effect on the metacestode ultrastructure early on.Auranofin is a thioredoxin-glutathione reductase inhibitor that was shown to kill E. granulosus protoscoleces at 2.5 μM after 48 h.Consistent with these findings, the drug was also active against E. multilocularis metacestodes at 10 μM, but not at 1 μM.Mefloquine, originally developed and used against Plasmodium, has recently been found to be active against E. multilocularis both in vitro, as well as in vivo.Mefloquine has a rather high IC50 value against this parasite in vitro, but nevertheless it was identified in our screening at 10 μM.Taken together, the results of our present screening of the Pathogen Box correlate well with already known activities of specific drugs, underlining the power of the here employed screening cascade.Moreover, we identified four novel compounds with distinct in vitro activity against E. multilocularis.So far, the Pathogen Box has been screened against the nematode H. contortus, the fungi Candida albicans and Cryptococcus neoformans, Plasmodium and the kinetoplastids Leishmania and Trypanosoma, Neospora caninum, Mycobacterium abscessus and M. avium, Toxoplasma gondii, C. elegans, Entamoeba histolytica, and Giardia lamblia and Cryptosporidium parvum.Interestingly, all compounds that exhibited activity against E. 
multilocularis were also active against at least one more pathogen other than the one it was selected for by MMV, thus underlining the importance and potential of the concept of drug repurposing.We identified two compounds within the 400 compounds of the MMV Pathogen Box with potent in vitro activities against E. multilocularis metacestodes.Moreover, we studied mitochondrial function in the parasite using a Seahorse XFp Analyzer and proved the cytochrome bc1 complex as a molecular target of BPQ in E. multilocularis GL cells.BPQ failed to be active in vivo in the murine model of AE.New, enhanced formulations of BPQ with increased bioavailability could overcome this problem in the future and hence lead to improved prognosis of patients suffering from echinococcosis.This study underlines that the repurposing of drugs has great potential when developing alternative treatment options against neglected diseases. | The metacestode stage of the fox tapeworm Echinococcus multilocularis causes the lethal disease alveolar echinococcosis. Current chemotherapeutic treatment options are based on benzimidazoles (albendazole and mebendazole), which are insufficient and hence alternative drugs are needed. In this study, we screened the 400 compounds of the Medicines for Malaria Venture (MMV) Pathogen Box against E. multilocularis metacestodes. For the screen, we employed the phosphoglucose isomerase (PGI) assay which assesses drug-induced damage on metacestodes, and identified ten new compounds with activity against the parasite. The anti-theilerial drug MMV689480 (buparvaquone) and MMV671636 (ELQ-400) were the most promising compounds, with an IC50 of 2.87 μM and 0.02 μM respectively against in vitro cultured E. multilocularis metacestodes. Both drugs suggested a therapeutic window based on their cytotoxicity against mammalian cells. Transmission electron microscopy revealed that treatment with buparvaquone impaired parasite mitochondria early on and additional tests showed that buparvaquone had a reduced activity under anaerobic conditions. Furthermore, we established a system to assess mitochondrial respiration in isolated E. multilocularis cells in real time using the Seahorse XFp Analyzer and demonstrated inhibition of the cytochrome bc1 complex by buparvaquone. Mice with secondary alveolar echinococcosis were treated with buparvaquone (100 mg/kg per dose, three doses per week, four weeks of treatment), but the drug failed to reduce the parasite burden in vivo. Future studies will reveal whether improved formulations of buparvaquone could increase its effectivity. |
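As an aside to the in vivo analysis described above (two-sided exact Wilcoxon rank-sum tests between the three treatment groups with Bonferroni adjustment), the comparison could be run along the lines sketched below. The parasite-mass values are hypothetical placeholders, and SciPy's Mann-Whitney U implementation is used as the equivalent of the Wilcoxon rank-sum test rather than the software used by the authors.

from itertools import combinations
from scipy.stats import mannwhitneyu  # the Wilcoxon rank-sum test is equivalent to Mann-Whitney U

# Hypothetical resected parasite masses (g) per treatment group.
groups = {
    "corn oil (control)": [4.1, 3.2, 5.0, 2.8, 3.9, 4.4],
    "ABZ":                [0.9, 1.4, 0.6, 1.1, 0.8, 1.3],
    "BPQ":                [3.8, 4.6, 2.9, 3.3, 5.1, 4.0],
}

pairs = list(combinations(groups, 2))
for a, b in pairs:
    u_stat, p = mannwhitneyu(groups[a], groups[b], alternative="two-sided", method="exact")
    p_adjusted = min(1.0, p * len(pairs))   # Bonferroni correction over the three pairwise comparisons
    print(f"{a} vs {b}: U = {u_stat:.1f}, Bonferroni-adjusted p = {p_adjusted:.4f}")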
342 | TEN-YEAR follow-up of treatment with zygomatic implants and replacement of hybrid dental prosthesis by ceramic teeth: A case report | Complete or partial zygomatic implant supported prosthetic rehabilitation has become a safe and predictable treatment over the last 30 years. Initially, zygomatic implants were placed with the original technique, leading to various problems because the implant head was palatinized. Soft tissue inflammation around the microunit abutments, sinusitis, phonetic and cleaning problems, and hybrid prostheses with enormous palatal cantilevers were common occurrences. Nevertheless, the success rates were always high, above 97%. The advent of this approach to treatment began by means of variations in the surgical techniques and the ZAGA philosophy, in which the zygomatic implants are placed according to the patient's facial anatomy. Their trajectory could be intra- or extra-maxillary sinus, with the implant head localized closer to the crest of the alveolar ridge. The new implant designs also contributed to improving the contact with the peri-implant soft tissue and to minimizing possible sinus treatments when the trajectory of the implant was within the maxillary sinus and at least had a smooth middle third, without threads. The aim of this case report was to show the 10-year follow-up of a zygomatic implant supported rehabilitation treatment and the replacement, one by one, of the resin teeth of the hybrid dental prosthesis with ceramic teeth. Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. Not commissioned, externally peer reviewed. The patient, a 48-year-old man, wearer of a removable partial denture in the maxillary arch, presented to a private clinic in October 2009 with the intention of undergoing rehabilitation of the maxilla with a fixed implant supported dental prosthesis. After analyzing the radiographic exams, the treatment options were presented to the patient, who chose the placement of standard dental implants by the all-on-4 technique, on which there would be a hybrid denture with resin teeth. After reverse planning and fabrication of the surgical guide, the surgery was scheduled for February 2010. After extraction of all the remaining teeth in the maxilla, 3 conventional internal hexagon implants were placed. These were a Conect Conico 4.3 × 11.5 mm in the region of tooth 24, with insertion torque of 40 N; a Conect AR 3.75 × 13 mm in the region of tooth 21, with torque of 45 N; and a Conect Conico 5.0 × 13 mm in the region of tooth 11, with torque of 80 N. Because teeth 14 and 15 were ankylosed, the vestibular bone wall in their region was lost after extraction, making it unfeasible to place standard dental implants, even if they were inclined/tilted. At this time, after obtaining the patient's consent, we opted for the placement of an internal hexagon zygomatic implant Zigomax 4.0 × 40 mm with an insertion torque of 80 N.
Straight microunit abutments were placed with a 2 mm band in the region of teeth 24 and 11; at an angle of 17° with a 2 mm band in the region of tooth 21; and at an angle of 30° with a 3 mm band on the zygomatic implant, screw-retained with a torque of 20 N. The index was made with acrylic resin together with the impression transfer, using light condensation silicone with the surgical guide. Three days after surgery, a post-operative session of soft tissue laser therapy was performed, the suture was removed, and the implant supported hybrid denture with resin teeth and a bar cast in chrome/cobalt was delivered to the patient. The patient presented a hematoma on the lower eyelid on the right side. Hirudoid ointment was prescribed for application on the eyelid 3 times a day for 7 days. After delivery of the hybrid prosthesis, the patient had 7 consultations for evaluation, during which occlusal adjustments were made, and radiographic follow-up and prophylaxis of the maxillary hybrid prosthesis and of the teeth in the mandibular arch were performed. In October 2010, the hybrid prosthesis was removed for relining of the spaces where there had been peri-implant soft tissue resorption due to the extractions. In addition, 4 microunit prosthesis fixation screws were changed. After this date, the patient returned for preventive consultations another 7 times up to November 2014. In this period prophylaxes were performed. Teeth 11 and 12 of the hybrid prosthesis, which had become loose, were replaced. As we observed that natural wear of the teeth had occurred due to bruxism, the patient was instructed to have a functional myorelaxant plate made and to have a new prosthesis made. In 2016, after contact by telephone, the patient said he had had the prosthesis remade by another dentist. In February 2019, the patient returned to our private clinic as tooth 11 of the hybrid prosthesis had become loose. A temporary repair of the prosthesis was made, and we proposed to the patient that he should have a new hybrid prosthesis with a personalized cast bar made to receive 12 ceramic teeth cemented to it. After the patient accepted the proposed treatment, the prosthesis was removed, and the microunit abutment of the zygomatic implant was found to be loose. It was replaced with a torque of 20 N. The peri-implant tissues were affected by mucositis, due to the patient's poor oral hygiene and lack of periodic preventive control. Prophylaxis was performed and a mouthwash was prescribed. After this, the impression was taken for fabricating the occlusal orientation plane. After the register and esthetic try-in of the resin teeth, a personalized bar was fabricated. After this stage and try-in of the bar, a try-in was made of 12 wax teeth, copying the same shapes as those of the resin teeth approved by the patient, on acrylic resin casings to enable a new esthetic try-in and checking of the occlusion on the bar. The next step was to try in the 12 ceramic crowns that were ready and to make occlusal adjustments. After approval by the patient, the crowns were cemented with Relyx resin cement onto the chrome/cobalt metal structure, and the prosthesis with artificial gingiva made of resin was acrylized. The new hybrid prosthesis was delivered to the patient in April 2019. A panoramic radiograph was taken to verify the seating of the prosthesis on the microunit abutments. The patient was given instructions about oral hygiene care and prevention. The placement of zygomatic implants is a challenging treatment. In the clinical report presented, a change in
the surgical planning occurred due to the loss of the vestibular bone walls, which led the surgeon to opt for placement of the zygomatic implant under local anesthesia and oral sedation. Only very experienced surgeons should perform this procedure in this type of situation. Over the course of the ten years after the beginning of treatment, the patient attended preventive consultations, with a view to maintaining the health of the peri-implant tissues and of the prosthetic system used. All patients who have zygomatic implants inserted must participate in a preventive program, which contributes to the success of this type of therapy over the course of years. Right from the beginning of treatment, problems occurred, such as post-operative hematoma in the lower eyelid and loosening of teeth in the denture and of the zygomatic implant microunit abutment, considered common in this type of clinical approach. The last prosthesis fabricated for the patient not only improved his self-esteem psychologically, because he was able to use dental floss between the teeth, but also prevented embarrassment in daily contact with other persons, which made him extremely happy. A satisfaction survey recently conducted between two groups of patients revealed that this type of treatment improved their self-esteem and that their general level of satisfaction was high. Oral hygiene was the only item that received a score below 7 in one of the groups, however, without a significant difference. In this clinical report, the patient was observed to experience difficulty with oral hygiene right from the beginning of treatment. This concerned both cleaning of the hybrid prosthesis and of the natural teeth in the mandibular arch, which was found lacking at various evaluations. Thus, it reinforced the idea that all patients must return for preventive consultations more frequently. No sinusitis occurred in this patient. Sinusitis is considered the major cause of complications, particularly when the trajectory of the zygomatic implant lies within the maxillary sinus and there are threads along its entire surface. In this clinical report, the zygomatic implant was of the internal hexagon type with threads on its entire body. At present, the use of implants with threads on the head and apex only is recommended, or even only on the apex, with a view to minimizing problems with sinusitis and dehiscence of the soft tissue over the head of the zygomatic implant. This is particularly the case when there is bone around the head, as occurs in the extra-maxillary technique. The microunit abutment of the zygomatic implant was loose when the patient's old hybrid dental prosthesis was removed for taking the impression. This is in agreement with a recent finite element analysis study, in which the highest stress was localized on the posterior zygomatic implant and its microunit screw. This shows the importance of periodic clinical consultations for checking the tightening of all the screws. In spite of the small problems verified in this clinical report, treatment with zygomatic implants continues to be an excellent treatment option, with high success, survival and patient satisfaction rates reported in the literature worldwide. Therapy with zygomatic implants must be part of the treatment options presented to patients. Surgery may occur in the private clinic with local anesthesia and oral sedation when performed by experienced professionals. These treatments have shown high success and patient satisfaction rates, with improvement in
quality of life.All patients must participate in a maintenance and oral hygiene program.Study design: Paulo H. T. Almeida.Data collection: Paulo H. T. Almeida.Data interpretation: Paulo H. T. Almeida.Manuscript preparation: Paulo H. T. Almeida.Critical revision: Paulo H. T. Almeida, Sergio H. Cacciacane, Ayrton Arcazas Junior.Paulo Henrique Teles de Almeida. | The aim of this case report was to show the 10-year follow-up of a zygomatic implant supported rehabilitation treatment to replace the hybrid dental prosthesis with resin teeth - one by one - with ceramic teeth. The complications that occurred were described right from the time when the first implant supported prosthesis with immediate loading was placed, through the fabrication of a personalized dental prosthesis with twelve ceramic crowns, with a view to achieving esthetic excellence and restoring the patient's self-esteem. It was concluded that all patients with zygomatic implants must participate in a preventive maintenance program to assure the predictability of this type of treatment. |
343 | How changes in column geometry and packing ratio can increase sample load and throughput by a factor of fifty in Counter-Current Chromatography | The inspiration for this paper comes from the first phase clinical trial of Honokiol as an anti-lung cancer therapy .Honokiol was manufactured using preparative countercurrent chromatography.For countercurrent chromatography to become competitive when scaling up for 2nd and 3rd phase clinical trials and final manufacture then the process must become more efficient.Honokiol and its scale up for manufacture was first reported by Chen in 2007 when a DE-Mini Instrument was used to maximize sample loading in preparation for quickly scaling up 3500x to the DE-Maxi-CCC Instrument .This paper utilizes the principle that increasing the aspect ratio of rectilinear tubing can double column efficiency compared to convention circular tubing .It further increases column efficiency by using thin wall rectilinear tubing which increases the capacity of the column by having a higher packing efficiency and has the additional advantage of producing a lighter column and therefore less wear and tear.The objective of this paper is to see how far the new more efficient column geometry combined with better packing efficiency with thinner wall tubing can increase sample loading and throughput from what was possible with the standard 18 ml, 0.8 mm bore analytical instrument in 2007.Berthod has shown that increasing the bore of circular tubing can greatly increase stationary phase volume retention and allow faster separations with similar resolution despite the number of theoretical plates reducing dramatically.It will be interesting here to see if similar behaviour is observed with the more efficient rectilinear tubing.The hexane ethyl acetate-methanol-water phase system and the bobbin spool geometry were the same as used in .Two experimental rectilinear columns were constructed, both wound on identical bobbins with the same internal and external dimensions.The first column has a rectangular section with its wide section horizontal relative to the radial force field which is perpendicular to it; the second column also has a rectangular section.The outside cross-sectional dimensions of both rectangular sections are the same the difference is in the wall thickness which is 0.8 mm for the thick wall tubing and 0.4 mm for the thin wall tubing.The cross-sectional area of the thick wall tubing is 2.0mm2 equivalent to a bore in circular tubing of 1.6 mm.The cross-sectional area of the thin-wall tubing is 3.96 mm2 equivalent to a bore of 3.2 mm.A bobbin wound with circular cross-sectional area tubing of 1.6 mm internal diameter is used for reference.The original commercial bobbin with circular cross-sectional area tubing of 0.8 mm internal diameter is also used for reference.Rectilinear cross-sectional tubing was manufactured with the above dimensions by Adtech Polymer Engineering Ltd,in the UK and by Hongfa in China.Bobbins were constructed using 3D printing technology in the Advanced Bioprocessing Centre at Brunel University London.The columns were mounted on a Mini-DE CCC centrifuge, with a rotational planetary radius of 50 mm and a β value ranging from 0.54 to 0.76.A Knauer K-1800 HPLC pump was used to pump solvent into columns.A Knauer K-2501 spectrophotometer with a preparative flow cell was operated at 254 nm and 280 nm to monitor RP and NP elution respectively.For flows above 20 mL/min an Agilent 1200 Prep Pump was used.The solvent system used for the two-phase flow 
experiments was a HEMWat system 21. The flow direction was Normal Phase. All solvents were of analytical grade from Fisher Chemicals. HPLC water was purified by a Purite Select Fusion pure water system. The sample is a crude extract of Magnolia officinalis Rehd. et Wils. The bark of magnolia was extracted with 95% ethanol. After recycling of the ethanol, the residue was re-dissolved in NaOH solution and, after filtration, the solution was precipitated with HCl solution. The suspension was then filtered again and the residue was collected and washed with water before being dried in a vacuum at 40 °C. Four grams of the above was made up to 50 mL with the upper mobile phase to give 80 mg/mL. This solution was used in 2.5%, 5%, 7.5% and 10% of column volume sample loops. The column was initially filled with the lower stationary phase, then the rotor speed was set at 2100 rpm and the mobile phase pumped into the column to establish hydrodynamic equilibrium at the required flow in normal phase mode. Then the sample solution was injected and elution started, which was monitored with a UV detector at 254 nm. The volume of stationary phase eluted was collected so that the volume retention of stationary phase could be calculated in the usual way. As the sample loading could get quite high, any stripping of stationary phase was noted, and at the end of the run the contents of the column were collected so that a check could be made on the final stationary phase volume. The resolution between honokiol and magnolol was used to assess separation efficiency. Throughput was calculated as the weight of sample loaded divided by the separation time. So that comparisons could be made as flow and geometry varied, all throughput results were normalized to a resolution of 1.5 by hypothetically changing the length of the column using the Rs ∼ √L relationship. The values of g-field given in this paper are based on the g-field measured at the centre of the planetary rotor (g = Rω²), where R is the rotational radius of the planetary axis and ω is the angular rotation of the main centrifugal rotor. The acceleration field measured at the centre and periphery of where the tubing is wound on the bobbin will be much larger, as described by van den Heuvel and Konig. Peak height variations were within 6–12%, whereas peak elution time/position was within ±2%. The variation of stationary phase volume retention with the square root of mobile phase flow for normal phase is shown in Fig. 1 for both the thick wall and thin-wall columns. A 10% column volume injection of sample can be seen to have little to no effect on stationary phase retention. Berthod found for circular tubing, when changing from 0.8 mm bore tubing to 1.6 mm bore tubing wound on the same bobbin spool, that there was a huge increase in stationary phase volume retention. The same was found here when changing from the thick wall tubing to the thin wall tubing. The effect of increasing sample volume from 2.5% of column volume to 10% of column volume for rectangular thick wall horizontal tubing is shown in Fig. 2. It can be seen that baseline resolution is preserved for 2.5%CV, 5%CV and 7.5%CV but at 10%CV this starts to be lost. Based on these results it was decided to use 10%CV for the optimization of mobile phase flow tests that follow. The effect of increasing mobile phase flow rate from 2 ml/min to 12 ml/min for rectangular thick wall horizontal tubing is shown in Fig. 3.
Baseline separation is maintained for flows of 2, 4 and 8 ml/min, but at 12 ml/min baseline resolution is lost. Baseline separation can be maintained for thick-wall tubing up to a mobile phase flow rate of 8 ml/min, when the stationary phase volume retention is 67% (Fig. 1). This suggests flow rates of 30 ml/min should be possible for thin-wall tubing, as Sf = 69%, whereas at 40 ml/min Sf drops dramatically to 33%. This is shown to be the case in practice when the mobile phase flow rate is increased from 5 to 40 ml/min for the thin wall rectangular tubing column in Fig. 4. Near baseline separations are maintained up until a mobile phase flow of 30 ml/min but lost at 40 ml/min. The variation of resolution between honokiol and magnolol is plotted against mean linear flow in Fig. 5. Resolution can be held above one to a much higher linear flow for the larger cross-sectional area thin-walled column than with the thick-walled column. The variation in sample loading/throughput in g/hour is shown in Fig. 6, plotted against mean linear mobile phase flow. Note that these results are normalized to a resolution of Rs = 1.5. Throughput is proportional to mean linear flow for both the thick and thin-walled columns until, at high linear flow, a maximum is reached. The optimum throughput conditions for the thick and thin-wall tube columns are compared to the original standard column in Table 1. The thick and thin-wall columns give a 22x and 55x improvement over the original standard commercial column. The objective of this paper was to see how far the new, more efficient column geometry combined with better packing efficiency with thinner wall tubing could increase sample loading and throughput from what was possible with the original commercial standard 18 ml column. Berthod had already shown that increasing the bore of circular tubing could greatly increase stationary phase volume retention and allow faster separations with similar resolution, despite the number of theoretical plates reducing dramatically. It should be emphasised that the same column bobbin/spool geometry was used so that the same analytical DE-Mini Instrument could still be used, but with columns that would enable semi-preparative operation with the same resolution power. This was done firstly by changing the geometry of the tubing from Berthod's 1.6 mm circular tubing column to a similar wall thickness rectangular column with an aspect ratio of 3.125 and internal dimensions 0.8 mm x 2.5 mm. Peng had already shown that this arrangement could double column efficiency. It was found that flows could be increased to 8 ml/min with a throughput of 0.84 g/h and a separation time of 10 min, which was a 22x improvement over the original sample loading optimisation of 0.038 g/h at 2.5 mL/min with a separation time of 45 min. Reducing the wall thickness of the tubing to 0.4 mm while maintaining the same outer dimensions gives a column of the same length but increases the volume from 24.3 mL to 56 mL and effectively doubles the cross-sectional area from 2 mm2 to 3.96 mm2, equivalent to a circular tubing column bore of 3.2 mm. It was found that the optimum flow of 30 mL/min could be predicted from the retention curves in Fig.
1.The throughput was 2.1 g/h with a separation time of only 6.5 min giving an overall improvement of 55x the original sample loading optimisation of 0.038 g/h with a separation time that is 7x quicker.What is amazing is that good resolution between Honokiol and Magnolol can be preserved as column volume increases and separation times are reduced.The original resolution was 0.7 with the 0.8 mm bore circular tubing column and a flow of 2.5 mL/min.This increased to 1.28 for the thick-wall rectangular column at 8 mL/min and only reduced to 1.07 for the thin-wall rectangular column at 30 mL/min.This research opens the possibility of small analytical instruments being developed with high aspect ratio rectilinear columns, already shown to double column efficiency compared to conventional circular columns, with a range of columns.These would vary in cross-sectional area depending on whether long column, small cross-sectional area analytical results are required where sample volumes are limited or shorter, large cross-sectional area semi-preparative columns where gram quantities of material can be harvested for further analysis.It should be noted that, at this stage, the optimization has been demonstrated on analytical columns.The next step will be to examine whether similar improvements can be made at the preparative and industrial scale where further sample optimization strategies, as outlined by Kostanian , can be applied.This paper demonstrates that changes in column geometry from conventional circular tubing to rectangular horizontal tubing of high aspect ratio where the wider side is perpendicular to the ‘g’ field, not only doubles column efficiency but also increases sample loading capacity or throughput by up to 55 times the optimum conditions obtained using the conventional 0.8 mm 18 mL commercial column.Furthermore, this was achieved without changing the outside dimensions of the column and as large volumes of PTFE have been replaced by solvent system, the overall weight of the rotating bobbin has become lighter opening up the prospect of higher ‘g’ more efficient instruments being produced by manufacturers. | This paper builds on the fact that high aspect ratio rectilinear tubing columns of the same length and outside dimensions can double column efficiency. It demonstrates that further improvements in efficiency can be made by using rectilinear tubing columns with half the wall thickness thus replacing heavy PTFE with light solvent systems and producing lighter higher capacity columns. Increases in sample loading/throughput of up to 55x are demonstrated by comparing the separation of Honokiol and Magnolol using a Hexane: Ethyl Acetate: Methanol: Water (5:2:5:2) phase system with the new thin wall rectilinear column (56 mL, 30 mL/min, 2.1 g/h in 6.5 min.) with the original optimization performed using a conventional DE-Mini column (18 mL, 0.8 mm bore circular PTFE tubing, 2.5 mL/min, 0.038 g/h in 45 min.). Honokiol is currently going through first phase clinical trials as an anti-lung cancer therapy where preparative countercurrent chromatography was used for its manufacture. To be competitive in the future it is important for the technology to become more efficient. This is the first big step in that direction. |
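To make the throughput bookkeeping of the chromatography study above concrete, the short sketch below recomputes the thin-wall optimum from the figures quoted in that text (56 mL column, 10% column-volume loop of an 80 mg/mL sample, 6.5 min separation, Rs = 1.07). The one assumption, flagged in the comments, is that the normalisation to Rs = 1.5 via Rs ∼ √L scales the run time with the hypothetical column length at fixed load; under that reading the calculation lands close to the quoted 2.1 g/h, but it is an interpretation rather than the authors' stated formula.

# Re-deriving the normalised throughput of the thin-wall column from the quoted numbers.
column_volume_ml = 56.0                 # thin-wall column volume
sample_conc_mg_per_ml = 80.0            # crude magnolia extract made up to 80 mg/mL
sample_load_g = 0.10 * column_volume_ml * sample_conc_mg_per_ml / 1000.0   # 10% CV loop -> 0.448 g

separation_time_h = 6.5 / 60.0          # quoted separation time at 30 mL/min
raw_throughput = sample_load_g / separation_time_h        # about 4.1 g/h before normalisation

rs_measured = 1.07                      # quoted honokiol/magnolol resolution at 30 mL/min
# Assumption: Rs ~ sqrt(L), so Rs = 1.5 would need a column longer by (1.5/Rs)^2, and the
# run time (hence the throughput penalty) is taken to scale with that hypothetical length.
normalised_throughput = raw_throughput * (rs_measured / 1.5) ** 2
print(f"{raw_throughput:.2f} g/h raw -> {normalised_throughput:.2f} g/h normalised to Rs = 1.5")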
344 | A new method for defining balance: Promising Short-Term Clinical Outcomes of Sensor-Guided TKA | Traditionally, soft-tissue “balance” during TKA has been determined exclusively by the subjective assessment of each surgeon.However, these imprecise methods contribute to 35% of reasons for revision, based on imbalance-related complications .In light of the anticipated five-fold increase of annual revision procedures projected by 2030, it is desirable to prevent these premature failures .Thus, it is imperative that balance be defined, and corrections executed, based on empirical data.With recent technological advances, it is now possible to dynamically track joint kinetics via tibial inserts embedded with microelectronics.One such wireless tool, the VERASENSE Knee System, allows the surgeon to observe kinetics across the bearing surface, through dynamic motion, and with the capsule closed.This multicenter study, using VERASENSE intraoperatively, has provided a unique opportunity to observe the short-term clinical outcomes of patients with a quantifiably balanced knee versus those who have quantifiably unbalanced knees.The results of these clinical outcomes, at 6 months, are promising.Postoperatively, balanced patients showed greater improvement in mean values than unbalanced patients, in both KSS and WOMAC scores.KSS scores, at 6 months, were 172.4 versus 145.3 for balanced and unbalanced patients, respectively.For WOMAC, at 6 months, balanced patients averaged 14.5 and unbalanced averaged 23.8.The results of the linear regression analysis, with respect to KSS score, suggest that balanced knees not only improve postoperative outcomes, but do so predictably on a pound-for-pound basis.Step-wise multivariate logistic regression analyses show that, when calculating the effect of all possible confounding variables, and joint state), balanced soft-tissue is the most highly significant variable that has contributed to the vast improvement in patient-reported outcomes.Not only is this variable the most significant, but its significance is consistent throughout all combinations of variables tested.Activity level was an anomalous variable during the regression analyses.While it showed significance when paired with balanced knees alone, it was highly non-significant on its own, or when combined with any other variable.This is suggestive that there is a relationship between activity level and joint balance, though perhaps not as two variables.Based on the significance values associated with activity level and balanced joint state, activity level was found to be best represented as a dependent measure.As such, joint balance was also the most highly significant variable in improving activity level.This relationship between activity level and a balanced knee may be part of a cascade effect among clinical outcomes.A balanced joint may contribute to more favorable biomechanics.This, in turn, may lead a patient to perform better in postoperative physical therapy than an unbalanced patient.This improved performance may also decrease pain levels, which would potentially lead to the increased activity levels observed in this study.Furthermore, increased activity levels may lead to higher patient satisfaction, manifesting as the significantly improved KSS and WOMAC scores also observed in this study.The odds ratios observed in this study are also promising.When defining a “meaningful improvement” in clinical outcomes, balanced patients were 2.5 times, 1.3 times, and 1.8 times more likely to obtain meaningful 
improvement than unbalanced patients in KSS, WOMAC, and activity level respectively.There were limitations to this study.Firstly, we did not have a control group.The primary design of the multicenter evaluation was intended to be observational.Because 13% of patients remained unbalanced we were given the opportunity to compare the two groups.Whether or not this unbalanced group is representative of traditional, non-sensor guided TKA is unknown.Secondly, the number of unbalanced patients was much smaller than balanced patients.While power analyses did confirm that comparisons could be reasonably made, an equal proportion of patients in each group would have been more favorable.Thirdly, none of the 8 surgeons participating in this study are experienced Stryker Triathlon users.Despite an inherent learning curve associated with using unfamiliar components and instrumentation, there is a chance that clinical outcomes may be better improved with seasoned users.However, the highly favorable clinical results achieved with balanced knees suggest that the learning curve for surgeons may be compressed when using the VERASENSE system.It also holds promise as a technical aid for lower-volume surgeons in whom a subjective feel may be less refined and also as a teaching tool in the academic setting.As the numbers of primary TKA patients continue to increase, so, too, will the need for less experienced surgeons to perform TKAs, leading to a larger potential for surgeon error.Soft-tissue balancing is one of the only remaining aspects of TKA that has not yet benefitted from quantified metrics.The effects of implant design, rotation, and alignment on soft-tissue balance can now be defined and their effects on short and long-term outcomes can be evaluated.This study has begun to elucidate aspects of what has, thus far, only been based on intuition: a balanced knee leads to better clinical outcomes.Conflict of Interest Statements,For the purposes of this evaluation, a definition of soft-tissue “balance” was quantitatively assessed using GUI feedback from the VERASENSE system.In order to classify a knee joint as being “balanced” two criteria must have necessarily been met.Firstly, the joint must have exhibited stability in the sagittal plane tension leading to excessive rollback or anterior lift off of the tibial component).Secondly, a difference in pounds of pressure between the medial and lateral compartments of the tibial plateau must not have exceeded 15 lb.The decision to choose 15 lb as the upper limit for balance was made based on: 1) Biomechanical research on condylar pressures in a passive state ; 2) Intraoperative observations by experienced surgeons that quantified 2 mm of opening with varus/valgus stress and load changes coupled with navigation; 3) Significant drop-offs, observed in this study, in postoperative, patient-reported outcome scores in patients with intercompartmental loading differences exceeding 20 lb.A total of 176 patients enrolled in the multicenter study had reached the 6-month follow-up interval when outcome data were evaluated.Of the full cohort 13% were “unbalanced”; 87% were “balanced”.All unbalanced patients remained so due to intraoperative surgeon discretion: In many cases, patients exhibited excessive loading in the medial compartment, lateral compartment, or both.Oftentimes, after a succession of ligamentous release, the surgeon chose to keep the patient in an unbalanced state, rather than compromise stability as a result of further release.The mean age at surgery for the 
unbalanced cohort was 72 ± 7 years; mean age at surgery for the balanced cohort was 69 ± 8 years.The average BMI for the unbalanced group was 31 ± 6.4; the average BMI for the balanced group was 30 ± 5.3.The average female-to-male ratio for both groups was approximately 2:1.An ANOVA comparison of means for demographic variables showed that there was no significant difference, in any of the above categories, between the two group profiles.Of the 176 patients who underwent sensor-assisted TKA, 97% had a primary diagnosis of osteoarthritis.The average preoperative ROM for all patients was 114°, and 63% exhibited a preoperative varus alignment with an average anatomic alignment of 5.1°.In order to measure improvement in clinical performance of the balanced versus unbalanced groups, patient-reported outcomes scores were used for comparison.All statistics that follow are based on this comparison method.The short-term follow-up interval for all patients is 6-months.All improvements in score are based on preoperative reports, the means of which were approximately the same in both groups, with no statistical difference: total KSS = 105 ± 24.6; total WOMAC = 47 ± 14.8.ANOVA means comparison of KSS score at 6 months yielded 172.4 for the balanced group; 145.3 for the unbalanced group.The 95% confidence intervals were 168–177 and 123–168 for the balanced group and unbalanced group, respectively.The change between preoperative and 6-month KSS score was 63.8 for the balanced group; 42.6 for the unbalanced group.ANOVA means comparison for WOMAC did not reach significance.While the means between the two groups were markedly different, the balanced and unbalanced patients exhibited high standard deviations which contributed to the non-significant P-value.Because KSS scores exhibited a highly significant difference in means comparison, a linear regression model was applied and yielded a predictive value of P = 0.032.Multivariate binary logistic regression analyses were performed for both KSS and WOMAC scores at 6 months.Variables run in these analyses included: age at surgery, BMI, gender, preoperative ROM, preoperative alignment, change in activity level, and joint state.For KSS and WOMAC, both step-wise and backward multivariate logistic regression analyses were calculated to be best fit models with similar significance.Ultimately, the step-wise model was used.The binary model revealed that the variable exhibiting the most significant effect of improvement on KSS and WOMAC score was balanced joint state.Joint state was the most highly significant variable; this demonstrated similar levels of significance throughout all possible combinations of variables included in the model.Joint state was also observed to be the sole significant factor in patient-reported outcome score improvement.Interestingly, there was also a concurrent significance observed with activity level.However, activity level was not significant on its own.This leads to the conclusion that a balanced joint state results in a higher activity level.This would make activity level more of a dependent variable, rather than a predictor.Thus, it was pulled from the regression and evaluated, along with KSS and WOMAC scores at 6 months, with odds ratios.Odds ratios were calculated based on meaningful clinical improvement in KSS scores, WOMAC scores, and activity levels at 6 months.Based on literature review, “meaningful improvement” for KSS scores were anything greater than 50 points; WOMAC scores greater than 30 points; and gains in activity level 
greater than or equal two 2 lifestyle levels .Scores from the unbalanced group were used as the reference point.The odds ratio for balanced joint state and improved KSS score was 2.5, with a positive coefficient.This suggests a high probability of obtaining a meaningful improvement in KSS with a balanced knee joint, over those who do not have a balanced knee.The odds ratio for balanced joint state and improved WOMAC score was 1.3, with a positive coefficient.Again, this suggests a favorable probability that patients with a balanced state will achieve a meaningful improvement in WOMAC score, over those that do not have a balanced knee.Finally, the odds ratio for balanced joint state and improved activity level was 1.8, with a positive coefficient.This also suggests a favorable probability of meaningful gains in activity level in those with a balanced knee, versus those with an unbalanced knee.One hundred and seventy six patients, from eight sites in the United States, have had a PCL retaining or sacrificing TKA performed with the use of the VERASENSE Knee System, used in conjunction with the Triathlon Knee System.Baseline data were obtained, and all patients have subsequently returned for a scheduled 6-month postoperative assessment.Each site received Institutional Review Board approval to enroll patients and all subjects signed a written informed consent document prior to enrollment in the study.Patients were considered for enrollment in this study if they were eligible for primary TKA, with a diagnosis of: osteoarthritis, avascular necrosis, rheumatoid or other inflammatory arthritis, or posttraumatic arthritis.Patients less than 50 years of age were excluded.Other exclusion criteria included: prior TKA, ligament insufficiencies, prior surgeries such as ACL or PCL reconstructions, posterolateral reconstructions, osteotomies, or repair of tibia plateau fractures.For this evaluation, patients were evaluated preoperatively, intraoperatively, at 6 weeks, and at 6 months postoperatively.Two patient-reported outcomes measures were inventoried at each clinical evaluation point, including the American Knee Society Score, and the Western Ontario and McMaster Universities Osteoarthritis Index.For all patients, at all intervals, standard weight-bearing plain radiographs were taken, including anteroposterior, lateral, and sunrise patellar or merchant views.At all intervals, varus/valgus and anteroposterior stability, extension lag, anatomic alignment, and ROM were also recorded.Intraoperatively, knee joints were accessed through a medial parapateller, subvastus or midvastus approach.The surgeons performed standard cuts for the distal femur and proximal tibia, either with or without the use navigation, at their discretion.Some surgeons used a measured resection technique for femoral cuts; others used a gap balancing technique for femoral rotation.With the trial components for the tibia and femur in place, the standard polyethylene trial was inserted and the knee was reduced.The knee was assessed manually to confirm that the joint was not excessively tight or loose in the coronal or sagittal planes, in extension and flexion.Once the appropriate tibial insert size was determined, the corresponding VERASENSE sensor was activated, and registration was verified.During the activation process, the patella was cut and patellar button applied.The VERASENSE sensor was then inserted after the appropriate shim was affixed to its undersurface to replicate the thickness of the standard trial that was used during 
initial assessment.The VERASENSE Knee System replicates the exact geometry of the standard tibial trial insert in order to obtain information related to the knee design and to minimize any error introduced by nonconforming geometry.It also allows closure of the medial capsule to ensure appropriate soft tissue tension during evaluation of the knee joint.Prior to soft-tissue evaluation, tibial tray rotation was visually quantified using the sensors.The mid to medial third of the tibial tubercle was used as a reference to set initial tibial tray rotation.As per surgeon preference a pin was placed in either an anteromedial or anterolateral position to stabilize any translational motion during rotational correction.With the VERASENSE sensor inserted, the knee was taken into extension.The tibial baseplate was rotated until the medial and lateral femoral contact points were seen as parallel on the graphic user interface and a second pin was added.This was a critical step, as malrotation can significantly impact soft-tissue tension.Once appropriate rotation was achieved, balance of the knee was assessed in three positions: full extension, mid-flexion, and in 90 degrees of flexion.Visible varus-valgus stress testing was performed in extension, as well as at 10 and 45 degrees of flexion to assess any laxity present in the collateral ligaments.With the capsule closed, medial and lateral load measurements and center of load were documented at 10, 45, and 90 degrees of flexion.It is important while assessing compartment pressure that no axial compression is applied across the joint.A posterior drawer was applied in 90 degrees of flexion with the hip in neutral rotation to evaluate stability of the posterior cruciate ligament.Flexion balance was achieved when femoral contact points were within the mid-posterior third of the tibial insert, symmetrical rollback was seen through ROM, intercompartmental loads were balanced, and central contact points displayed less than 10 mm of excursion across the bearing surface during a posterior drawer test.A tight flexion gap during surgery creates excessive pressures in flexion and the peak contact point resided more posteriorly on the tibial insert.This was corrected through recession of the PCL or, in some instances, by increasing the tibial slope.PCL laxity was identified via the excessive anteroposterior excursion of the femoral contact points across the bearing surface, during a posterior drawer test.Surgical correction required a thicker tibial insert, anterior-constrained insert, or a conversion to a posterior-stabilized knee design.Soft-tissue releases and/or “pie crusting” techniques were performed for coronal asymmetric imbalance, as necessary, until the desired balance was achieved.Generally, soft-tissue releases were performed using “pie crusting” techniques, as described by Bellemans, et al, to correct coronal asymmetric imbalance, as necessary, until balance was achieved .With this technique, multiple punctures were made to the medial collateral ligament, using a 19-gauge needle or #11 blade, to progressively stretch the MCL or the lateral structures until the intercompartmental pressures were deemed acceptable by the individual surgeon.This technique is performed gradually, allowing the knee to flex and extend after several punctures to allow the ligament to stretch and re-tension.The surgeons documented all soft-tissue releases performed.Final load measurements were recorded prior to cementing the components.Analysis of the data was performed using 
SPSS version 21.Comparative statistics were run between outcomes data stratified by two groups: those with a “balanced” joint, and those with an “unbalanced” joint.Analysis of variance was used to assess the difference between each group, with post-hoc t-tests to demonstrate significance.Separate analyses were performed to evaluate power of sample sizes and any correlative effect that demographic/clinical variables may have had on patient outcomes.All variables that could have contributed to improved postoperative outcomes were combined in a multivariate logistic regression model, as per best fit analyses.This allowed us to control for any simultaneous confounding effects.Odds ratios were calculated for each group of patients to evaluate the probability of influence in post-operative outcomes.Significance was defined as a P-value < 0.05. | Recently, technological advances have made it possible to quantify pounds of pressure across the bearing surface during TKA. This multicenter evaluation, using intraoperative sensors, was performed for two reasons: 1) to define "balance" 2) to determine if patients with balanced knees exhibit improved short-term clinical outcomes. Outcomes scores were compared between "balanced" and "unbalanced" patients. At 6-months, the balanced cohort scored 172.4 and 14.5 in KSS and WOMAC, respectively; the unbalanced cohort scored 145.3 and 23.8 in KSS and WOMAC (P<. 0.001). Out of all confounding variables, balanced joints were the most significant contributing factor to improved postoperative outcomes (P<. 0.001). Odds ratios demonstrate that balanced joints are 2.5, 1.3, and 1.8 times more likely to achieve meaningful improvement in KSS, WOMAC, and activity level, respectively. © 2014 The Authors. |
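The odds ratios reported above (for example, 2.5 for a meaningful KSS improvement of more than 50 points) follow from a standard 2x2 calculation on counts of patients reaching the improvement threshold in the balanced versus unbalanced groups. The Python sketch below shows the form of that calculation. The group totals (153 balanced, 23 unbalanced) follow from the 87%/13% split of the 176-patient cohort described above, but the improved/not-improved split within each group is hypothetical, chosen only to show how an odds ratio near 2.5 would arise; the confidence-interval step is a standard addition, not something reported in the paper.

# Sketch of the odds-ratio calculation for "meaningful improvement" (>50-point KSS gain).
# Group totals match the cohort split reported above; the improved/not-improved
# counts inside each group are hypothetical illustrations, not study data.
from math import exp, log, sqrt

balanced_improved, balanced_not = 100, 53      # hypothetical split of 153 balanced patients
unbalanced_improved, unbalanced_not = 10, 13   # hypothetical split of 23 unbalanced patients

odds_ratio = (balanced_improved / balanced_not) / (unbalanced_improved / unbalanced_not)

# Standard 95% confidence interval on the log-odds scale (not reported in the paper).
se = sqrt(1 / balanced_improved + 1 / balanced_not + 1 / unbalanced_improved + 1 / unbalanced_not)
ci_low, ci_high = exp(log(odds_ratio) - 1.96 * se), exp(log(odds_ratio) + 1.96 * se)

print(f"odds ratio = {odds_ratio:.2f}, 95% CI = {ci_low:.2f} to {ci_high:.2f}")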
345 | Environmental Conservation and Social Benefits of Charcoal Production in Mozambique | Charcoal production and trade provides work for millions of people in Africa, is the main cooking fuel in many African urban centres and its demand is increasing because of population growth and migration from rural to urban areas.In rural areas of Sub-Saharan African countries more than 90% of the population uses firewood for cooking and less than 5% use charcoal; in urban areas the figures change to 25% relying on firewood and nearly 50% on charcoal.Charcoal is a provisioning ecosystem service, and increasing evidence suggests ecosystem services, i.e. the benefits people obtain from ecosystems, contribute to the well-being of the rural population in Africa, e.g. provisioning services, regulating services and cultural services.As such, charcoal can be an important woodland based provisioning ES for African rural populations, but at the same time can be a driver of deforestation and forest degradation through intensive and selective wood extraction.Therefore, the land use and land cover change produced by charcoal production is a major driver affecting future provisioning of ES and consequently can have important consequences for human well-being.Despite growing socio-ecological systems understanding, the resulting complexities of charcoal production and trade for sustainable land management and local livelihoods remain poorly understood.For example, not only ecosystem services supply is key for the well-being of local populations, but also the way the services are used and distributed.In Mozambique, 15% of the population participates in the charcoal market, which is estimated to have an annual value of 250 million USD.Around 70–80% of the urban population uses charcoal as primary energy source and demand is rising with rapid urban population growth.Consequent woodland depletion results in a shifting charcoal production frontier that rapidly extends into more remote areas.Charcoal production in Mozambique is affected by a range of factors that apply to most sub-Saharan countries.Policy effectiveness suffers from limited institutional cooperation, integration and coordination between related sectors.At the same time, the government lacks capacity for effective legislation implementation and enforcement."Concerning the distribution of benefits from the charcoal value chain, large part of charcoal derived income goes to non-local individuals due to communities' lack of technical, institutional, and financial capacity, limiting the success of community-managed projects in Mozambique.In this paper we analyse the consequences of charcoal production on local well-being in Mabalane District.Specifically, we analyse and evaluate the influence of LULCC on how the villagers use three woodland based provisioning ES and on local well-being, and identify and evaluate policy interventions that could contribute towards a charcoal production system that alleviates poverty, improves environmental sustainability, and provides a reliable charcoal supply.We also evaluate social factors that can act as access mechanisms to ES.We chose Mozambique as a case study because despite high degradation and deforestation rates, there is still abundant woodland, and a progressive land use policy, so Mozambique can still make a choice about its future before it is too late.The method presented allows the use of a social-ecological perspective to develop an integrated analysis of both biophysical and social consequences of charcoal 
production and its associated LULCC.It allows at the same time the evaluation of potential interventions aimed to improve the studied situation.Mabalane District, in Gaza Province, covers 8922 km2.Its semi-arid climate, erratic rainfalls and poor soils lead to low agricultural yields, and land cover is dominated by woodlands with minor extension of other land cover classes.In Mabalane District, 300 km from Maputo, charcoal production started to increase in early 2000, and has now become the main charcoal supply area of Maputo.Since 2007, large-scale commercial charcoal production has been evident in the Mopane woodlands) of Mabalane.Mopane is the preferred tree species used for charcoal production in the study area, followed by Combretum sp., because it produces the highest quality charcoal: it burns slowly and produces low smoke and little sparks.There are two main charcoal value chains in Mabalane: one run by local producers and one by large-scale operators.The latter is responsible for the largest amount of wood extraction for charcoal production, with only 8% of its monetary benefits remaining in the local communities.Vollmer et al. found unequal charcoal production patterns at the community level and they could not find a direct relation between charcoal production and alleviation of acute multidimensional poverty.Both findings suggest that most benefits are not reaching the rural poor in Mabalane, yet the direct consequences of forest degradation are felt locally.Our research was carried in seven villages, each with fewer than 70 households, distributed along a forest degradation gradient, from high to low degradation as described in Baumert et al.Approximately 85% of the investigated sample of HH are farmers and up to 70% also produce charcoal.A HH was defined as a unit based on members who “eat from the same pot”.Subsistence agriculture is the most predominant farming system, practiced on a small scale).Main crops are maize, cow peas, peanuts and sesame.Sixty percent of the HH keep livestock as insurance and production gains are not targeted.The objective of the paper is articulated through a series of specific research questions designed to query a newly developed Bayesian Belief Network of the charcoal production system in Mabalane.We used a BBN to conceptualize the charcoal production in Mabalane as an integrated system to compare the consequences of policy interventions on woodland based provisioning ES supply and on the well-being of the local population.The BBN and three alternative future scenarios were developed in a participatory process involving a broad range of stakeholders and experts to increase the saliency and relevance of research.The process followed eight main steps that are described in the next paragraphs.BBNs have been used in participatory approaches in the environmental sector and several different guidelines have been produced.We assimilated the most pertinent aspects of those guidelines for a participatory BBN construction in Mabalane, using stakeholders to help design the BBN structure.Participatory workshops are often structured around a topic, which typically emphasises a specific theme or subject that can be explored in depth.In our case, the focus was on the construction of a causal diagram by the participants.We asked them to link aspects of rural wellbeing, ecosystem services, land use change and possible interventions so that well-being of rural habitants and natural conservation could be improved at the same time.We conducted five workshops at 
different levels: 1) one with stakeholders working in institutions at national level held in Maputo; 2) one with stakeholders working at provincial and district levels held in Xai Xai; and 3) three with local communities of the study area.The objectives of the workshops were: a) to ensure that all important aspects were considered during the process of construction of the BBN structure; b) to get a local perspective of issues related to land use, ES and rural well-being; and c) to learn how these are influenced by interventions and other factors.We were also interested in the new variables that were generated from the discussion among participants, as these workshops provide an excellent means for knowledge exchange and discussion.The method used in the village workshops followed a similar pattern as Maputo and Xai-Xai workshops, adapted to the local circumstances; e.g., as some of the participants cannot read and write, photos and pictures were used to represent the variables.In each village one group built one causal diagram.Details of the methods followed can be found in Appendix A.A BBN is a statistical multivariate model defined as a directed acyclic graph where the nodes represent the variables of the model and the links indicate a statistical dependence between them, defined through a conditional distribution based on Bayesian probability or conditional probability.BBNs have been burgeoning in environmental sciences in recent years.BBNs can explicitly accommodate uncertainty and variability in the model predictions; are useful in situations where it is necessary to integrate qualitative and quantitative data; are a useful tool for dealing with complex systems; for integrating multiple knowledge domains and combining different sources of knowledge; and for analysing trade-offs.The resulting nine causal diagrams from the workshops were digitised using the software yEd Graph Editor.From them we obtained three lists with a) the most repeated variables, b) the betweenness centrality1 of each variable and c) their number of links.After this analysis, a common diagram was designed including the variables more repeated, connected and central, and the more repeated links between them.The result was set as our reference BBN for Mabalane District.Finally, this BBN was adapted: a) to focus the BBN on the production of charcoal, considered the most important driver of LULCC in Mabalane; b) to introduce the most meaningful interventions from a set of 74 proposed by the stakeholders, based on qualitative information gained through participatory rural appraisals activities); c) to introduce the most relevant variables involved in the access mechanisms to ES using non-parametric Mann-Whitney and Kruskal-Wallis tests and Poisson linear regression models; and d) to adapt the BBN to data availability.The BBN was constructed using Netica software.Scenarios are plausible and often simplified descriptions of how the future may develop based on a coherent and internally consistent set of assumptions about key driving forces and relationships.Scenarios are used to assist in the understanding of possible future developments in complex systems that typically have high levels of scientific uncertainty.Scenarios of future LULCC in Mozambique were constructed for the year 2035 using input from stakeholders collected at the national and provincial workshops mentioned above.In a first round of workshops, stakeholders provided information about the most important drivers of LULCC in Mozambique; in a second round, 
stakeholders evaluated and corrected three sets of previously prepared narratives."Three scenarios were constructed: a) Large private investments characterised by low socio-economic development of small farmers and as a consequence high rural-to-urban migration; b) Small holder promotion with successful improvement of small farmers' situation and, as a consequence, lower rural-to-urban migration; and c) Balanced situation with intermediate circumstances.The qualitative scenario narratives were incorporated into the BBN by defining different combinations of interventions and different levels of urban charcoal demand.This last variable is not an intervention, but a driver of change in charcoal production that has great uncertainty and cannot be directly controlled by the government.In Large private investments scenario, none of the interventions introduced in the BBN are applied and we consider urban charcoal demand increases greatly as a result of the great migration to urban centres.The government attempts to trigger development by promoting large investments, resulting in limited change in the participation of the rural population in charcoal production.In Small holder promotion all proposed interventions are applied successfully, as the government seeks to improve local rural capacities and nature protection.Urban charcoal demand remains constant, a result of low migration from rural areas to urban centres and of an increase in the use of other types of energies.In Balanced scenario, charcoal demand suffers an increase but not as high as in the first scenario, and three interventions are applied: facilitated access to licences; development of a forest management plan by the communities; and improved forest control.The data used to build the conditional probability tables, which are tables where statistical relationships between different nodes are defined, were collected from various sources:Data for the ES/well-being relationships, influenced by the access mechanisms, were derived from a HH survey in the studied villages and from an Acute Multidimensional Poverty Index constructed using this data.An extensive HH survey was done on 261 HH, with questions about poverty indicators and use of woodland based provisioning ES.Details of the HH survey method can be consulted in Vollmer et al.Poverty is a complex notion and there is not an international consensus on its definition or measurement.It is widely measured in income or consumption expenditure deficiencies, but due to the complexity of the phenomenon, multidimensional measurements of poverty are increasingly being used.To assess poverty at HH level we use a multidimensional poverty index based on three domains and nine dimensions.Data for land cover/ES relationships came from a model of potential ES supply assessment that uses field data of a) type of ES used from each tree species, b) tree species biomass present in each land cover category, c) production functions that set the proportion of the biomass in each land cover class that delivers each ES.Data for the current land cover map was obtained from satellite images and field data.LULCC related to each intervention is based on government reports, stakeholder opinions, research team knowledge, and results from the literature, specifically Del Gatto, Kasparek, and SEI.We prepared a table with data for each HH about well-being, social factors and use of ES, and data about ES supply for each village.Then, the data were introduced in the BBN with the “Incorporate case file” command in 
Netica software.The software calculates the conditional relationships between the variables of the BBN based on the data incorporated in this way.The land cover map was introduced as a variable, where each state represented a land cover category and the probabilities represented the proportion of the area occupied by each.The BBN uses as spatial and social boundaries the seven villages studied in Mabalane District.The BBN assumes the proposed interventions will be applied during the next 20 years, and the outputs represent the results of those interventions for the year 2035.For the well-being indicators, the data used come from the household survey, so the units are HH.The results are more easily interpreted if we understand the probabilities as proportion of the HH in the studied area.The BBN has six main types of variables: interventions, land cover, ES supply, ES use, access mechanisms and human well-being.Fig. 3 shows the final BBN, which is described in full detail in Appendix D.The land cover map was introduced as a variable, where each state represents a land cover category and the probabilities represent the proportion of the area occupied by each.We focused on the following provisioning ES: charcoal supply, firewood supply and grass supply because they were closely related with local well-being and because were the woodland based provisioning ES most repeated in the causal diagrams constructed during the village workshops.To assess poverty at HH level we use the AMPI based on nine indicators.Three of the indicators have also been included disaggregated from the index as well-being indicators: Food security, Housing and Assets owned.The variables selection was based on the stakeholder classification.The objectives of the interventions included in the BBN are to increase poverty alleviation based on charcoal production, to achieve environmentally sustainable charcoal production and to address a reliable charcoal supply.Descriptions of the interventions are shown in Table 4.Interventions related to decreasing charcoal demand have not been included, due to the focus on the production side of the value chain.Cautions concerning the interventions can be found in Appendix F.Fig. 
3 refers to the Balanced scenario and will be used to describe the BBN rational: Some but not all interventions have been applied from 2015 up until 2030: improved facilitated access to charcoal licences, development of a forest management plan and an improvement of forest control.These interventions reduce the rate of charcoal production by large non-local operators and there is a 63% chance that the total charcoal production remains high along the time period.The effect has been an increase of degraded woodland from 9% to 35%, a decrease of high charcoal supply areas down to 33%, and an increase of HH producing very high quantities of charcoal up to 24%.The consequences for the locals are that 55% of the villagers remain in multidimensional poverty.The BBN was used to investigate six pertinent questions about the charcoal production in Mozambique by evaluating the effects of alternative combinations of the states of the variables on the probability distributions of woodland based provisioning ES and livelihood indicators.We set 100% probabilities for each of the different states of the ES use variables, and checked the resulting changes in the probabilities of the well-being indicator variables.Charcoal production has a low influence on housing and multidimensional poverty and a positive effect on assets owned and food security.This is in accordance with the qualitative data from the BBN workshops: HH primarily use income from charcoal to buy food and some small assets.The quartile of HH producing the least charcoal have higher rates of food security than those producing Low and High.This can occur since some of the better-off HH are not producing charcoal because they do not need it to achieve a successful livelihood.Livestock owned shows a stronger influence than charcoal production on the four well-being indicators analysed: the more livestock owned, the lower rates of multidimensional poverty HH, the higher assets owned and the higher food security.Farmland area has a positive influence on multidimensional poverty, assets owned and food security, but not on housing.The time spent on the collection of firewood has little influence on the four well-being indicators.We wanted to know if a decrease of ES supply would have a big effect on how the villagers use those ES.To find it out, we classified the villages as having low, medium or large ES supply and compared the mean quantity used per HH in each type of village.In the case of charcoal, low supply of charcoal leads to higher charcoal production.To understand the results, it is important to know that the “Low charcoal supply” situation was fed with data from the villages with the longest charcoal production period and thus with fewer trees suitable for charcoal.In those villages, large-scale operators have driven the biggest part of woodland degradation, and the long prevalence of charcoal production led to a high number of HH producing charcoal and having means to obtain assets.Meanwhile in the “High charcoal supply” villages, the big operators have not yet arrived, charcoal has been produced for a shorter period of time and only some of the villagers produce charcoal.In this highly forested study area, changes in firewood and grass supply produce little changes on the variables that represent its ES use: Time spent collecting firewood and Livestock owned.These results suggest that the supply of these two ES is not a limiting factor in the study area.The effects of ES supply on well-being act through the “ES use” variables, and 
knowing the small effects explained in the previous paragraph, it is normal to obtain no differences in HH well-being under different ES supply situations owned.Gender is the strongest barrier to ES use: HH headed by a woman produce smaller quantities of charcoal, own fewer livestock, spend more time collecting firewood and work smaller farmlands than male headed HH.On the contrary the strategies for producing more ES seems to be participating in a farming or forest association, having highest level of formal education and having more than two income streams: those HH produce more charcoal, own more livestock and work bigger farmlands.Finally, poverty affects differently the quantity of ES used by the villagers: poor HH have smaller farms and less livestock but produce slightly more charcoal than non-poor HH.The “access mechanisms” variables ordered from high to low influence on charcoal production are: Gender > Being member of association > Income diversification > Multidimensional poverty > Formal education.In the case of livestock and farmland, the influence is different: Income diversification > Multidimensional Poverty > Being member of association > Gender > Formal Education.The effects of the “access mechanisms” variables on Time collecting firewood are small.The influence of individual “access mechanism” variables on the well-being components is limited.However, the combination of different factors has greater effects than a single factor: female headed, non-associated and little diversified HH are associated with food insecurity, few assets owned and high rates of multidimensional poverty.Taking out the gender factor, the influence of Association and Diversification follows a similar pattern although not so sharp.Looking deeper into the HH data, only 5% of the female headed HH are members of an association, compared to 25% of the male headed HH.We analysed the effects of the access mechanisms to ES use under different situations of ES supply.The most important interaction was detected between charcoal supply and diversification and association: in the high charcoal supply villages associated and diversified HH produce much more charcoal than non-associated and little diversified HH, the effect being greater than in the low charcoal supply villages.Belonging to a farmer or forest association has a bigger influence than diversification.However, there are not such clear differences with education, gender or poverty.A similar effect is observed with livestock, what reveals some kind of interaction that should be further analysed.The effect on LULCC of individually applied intervention is small and increases when more than one intervention is applied.Improve Forest Control is estimated to produce the highest effect because it could reduce both small-scale community and large-scale non-local charcoal production, while the other interventions would only directly affect local small-scale production.The simulations show that under successful application of all the proposed interventions, the land affected by forest degradation could be reduced by approximate 20%–30%, and that the interventions have a higher impact with lower urban charcoal demand than with a high charcoal demand.Higher urban charcoal demand situation increases forest degradation by 13% while lower demand decreases it by 14% if taking “current charcoal demand” as baseline.We tested the consequences of the scenarios introducing different combinations of interventions and urban charcoal demand.Large private investments 
scenario produces the biggest change, with a reduction of Mopane woodland cover from 23% to 8% of the study area, and Small holder promotion the smallest, with a decrease of Mopane woodland to 18%. The Balanced scenario produces an intermediate LULCC. The LULCC under the three scenarios have different consequences for the supply of ES, with some trade-offs occurring. Under Large private investments, areas with high charcoal and firewood supply would diminish while areas with high grass supply would increase. The consequences of the LULCC scenarios for the quantity of ES used by villagers differ for each ES. The largest changes occur in the production of charcoal: compared to the current situation, under Large private investments the proportion of HH producing high amounts of charcoal would increase. The quantity of livestock owned and the time spent collecting firewood would not change significantly, showing that the supply of those ES is not a limiting factor for their use in Mabalane. Finally, and in accordance with the results explained previously, the LULCC scenarios have little influence on the well-being variables. Deforestation and woodland degradation reduced woodland based provisioning ES supply but, surprisingly, there is little change in its use. For example, under the Large private investments scenario more HH produce very high and high quantities of charcoal than currently. There seem to be two reasons for this. Firstly, the data show that villages with degraded forests and low charcoal supply have higher charcoal production, because these villages have become specialised in and accustomed to producing charcoal. Second, most of the Mopane woodlands in the study area are degraded rather than deforested, and villagers can still keep producing charcoal from smaller trees and other types of woodlands. Furthermore, enough woodland remains and therefore the degradation does not seem to be greatly affecting livelihoods, the quantity of charcoal produced and livestock owned, or the time spent in firewood collection. Nevertheless, continued charcoal production at current rates will ultimately deplete Mopane and other woodland types and affect other ES. Therefore, the question about the future of Mabalane woodlands is not how much land will be degraded, but what the intensity of degradation will be. The analysis revealed only a weak effect of charcoal production on multidimensional poverty alleviation. The majority of HH produce only a small amount of charcoal, with a value of less than 1 USD per day. While this can improve food security and the assets owned by the HH, it has a limited effect on other aspects of the AMPI, including sanitation, education, health, social relationships or housing. In some of these components of well-being, public policies have a greater impact. These results are consistent with various studies showing that forest resources have a small role to play in poverty alleviation, although they are in opposition to some studies that use a poverty indicator based on income in Uganda. Our data and methodology are unable to test whether the most prosperous HH are able to use charcoal as a pathway out of poverty. Nevertheless, they do show that forest resources are important for covering basic needs, and therefore can work as safety nets, especially for the poorest HH. As suggested by previous studies, we have found that there are access mechanisms to ES use and that their effects seem to drive ES use more strongly than ES supply does. The most important are the following. Gender: female-headed HH produce
smaller quantities than male-headed HH, consistent with Khundi et al., 2008 in Uganda but unlike Smith et al. in Malawi. Diversification: HH with a high number of income streams produce higher quantities of charcoal, as also noticed by Smith et al. in Malawi and by Jones et al. in a different part of Mozambique. Association: a higher proportion of HH that belong to forest or farmers associations produce charcoal. Vollmer et al. showed that individual factors have a small effect on AMPI, and that only a combination of different factors results in clear differences between HH. We found similar results: the combination of several access mechanisms produces bigger differences than individual mechanisms in food security, assets owned and multidimensional poverty. Ethnicity and religion seem to have some relationship with livestock and agriculture, but not with charcoal production. This, therefore, could be analysed in more detail in future studies. Other clear access barriers to charcoal production are the difficulty locals face in selling charcoal directly in Maputo and in obtaining charcoal licences, as previously highlighted by Schure et al., 2013 in other African countries, and similar to the financial barriers found by Khundi et al. Fig. 4B shows that associated and diversified HH have a greater capacity to adopt new production activities, such as charcoal production, than other HH. This is in line with previous findings of how technology, skills and capital may be required to initiate and capture benefits from forest products. During wealth-ranking focus groups, the villagers explained that wealth and poverty are related to work ethic, social networks, farm size, gender, livestock and housing. Work ethic could be related to diversifying income streams, and social networks to being part of associations, so the quantitative results from the HH survey are aligned with the qualitative results obtained from the focus groups. The high woodland cover in the study area meant that, although we selected villages at different woodland degradation stages, woodland based provisioning ES supply is currently not a critical factor restricting ES use by local communities. Therefore, with the data available, the different scenarios of LULCC simulated only small consequences for local well-being. The biggest influence observed is due to social factors more than to ecological limitations; e.g., in the villages where large-scale charcoal production started earlier, the proportion of villagers producing charcoal is higher, and it is produced in larger quantities. Even when the woodlands are degraded, villagers continue producing charcoal from lower quality natural resources. The gender of the HH head is the most influential social factor for a lower use of ES, while diversification of income activities and participating in associations are associated with increases in the use of those ES by villagers. Nevertheless, higher production of charcoal does not directly result in a decrease of multidimensional poverty. The results show that charcoal production works more as a safety net that helps villagers prevent their situation from worsening. Improvements to social services such as education, health care, drinking water and infrastructure are clear actions to decrease multidimensional poverty in Mabalane. Analyses of social-ecological systems often lack important data, and stakeholder involvement has been proposed as needed and appropriate. The involvement of stakeholders and local people in the construction process of the presented BBN has proven to be
key because it provided information that could not be obtained by consulting the literature or by collecting field data. Stakeholder involvement reduces the time necessary to understand the situation from publications alone and legitimises the model by ensuring that the facts critical to the people involved in the issue have been included. Together with the difficulty of analysing land use change in this type of ecosystem, the weak policy implementation in the study area prevented us from using local data to construct the relationships between land use change and policy interventions. To overcome this lack of data we used the scarce data found in similar case studies under similar types of interventions, plus expert opinion and stakeholder involvement. Another important data gap concerns long-term information on the extent, type and intensity of woodland degradation and its recovery rate, which was overcome with short-term data from a biomass change study. Therefore, the presented BBN illustrates how BBNs can deal with uncertainties and data scarcity in social-ecological systems. Our results do not show dramatic changes in woodland based provisioning ES resulting from the policy interventions. We argue that this is due to the high woodland cover in the study area, which ensures that ES supply is not a limiting factor for its use, and due to the small differences in land cover across the studied villages. For future studies we propose to use the HH, and not the village, as the reference unit to study the relation of well-being with woodland based ES. In that way, differences in the distance from individual houses to forest can show clearer effects of land use change on the use of ES and clearer effects of woodland scarcity on local well-being. The process followed to build the BBN provided a holistic understanding of the case study in a systematic way, and therefore facilitates the detection of the most crucial variables involved and of data gaps. This is useful when complex systems make it difficult to distill the key management strategies that can deal with trade-offs and benefit a wider range of actors. This paper is one of the first to analyse with field data the impacts of different management options simultaneously on ES supply and on well-being using a multidimensional approach to poverty. Using BBNs to explore scenarios of the future quantitatively has proved to be a very appropriate approach for analysing complex systems. Local data and direct input from stakeholders and locals have been used to describe the multiple relationships between charcoal production, LULCC, woodland provisioning ES and well-being in social-ecological systems. The method allowed us to deal with the complexity of the case studied and with the uncertainties and lack of data that these kinds of cases confront. The existence of two main value chains, one run by local producers and one by large operators, results in a greater part of the forest degradation being caused by the large operators, with the villagers obtaining lower revenues and dealing with the consequences of deforestation. Woodland degradation means a decrease in the supply of some provisioning woodland based ES. Nevertheless, due to the selective tree harvesting for charcoal production and to the remaining high forest cover, current woodland degradation in the case study has limited impact on human well-being. Due to the government's lack of capacity and to the rising charcoal demand in the coming decades, increasing local capacities will be an important alternative for improving charcoal
production, with the objectives of alleviating poverty, improving environmental sustainability, and providing a reliable charcoal supply. Support for increasing local capacities and facilitating the access of locals to the licence scheme have been proposed as important actions in this paper, acknowledging the considerable difficulties that community-based natural resource management faces in succeeding. Improving the control of illegal charcoal production has proved to be effective in reducing charcoal production, as that measure affects the two existing value chains. Other interventions proposed, but not analysed with the BBN, are improving land ownership and promoting a more transparent relationship between the large operators and the locals. | Charcoal is an important source of energy and income for millions of people in Africa. Its production often drives forest degradation and deforestation which have impacts on the local people that remain poorly understood. We present a novel methodology for analysing the contribution of woodland ecosystem services (ES) to rural well-being and poverty alleviation, which takes into account access mechanisms to ES, trade-offs between ES, and human response options. Using a participatory approach, a set of land use change scenarios were translated into a probabilistic model that integrates biophysical and social data. Our findings suggest that in highly forested areas woodland degradation does not have a critical impact on the local use of the three ES studied: charcoal, firewood and grass. Social factors show the largest impact on the quantity of charcoal produced, e.g. female-headed households experience the greatest barriers to access charcoal production. Participating in forest associations and diversifying income activities lead to greater charcoal production. Results show that charcoal production increases some aspects of well-being (e.g. household assets), but does not decrease acute multidimensional poverty. Great efforts are required to reach a charcoal production system that alleviates poverty, improves environmental sustainability, and provides a reliable charcoal supply. |
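As described in the methods of the study above, the workshop causal diagrams were digitised and the variables ranked by how often they appeared, their betweenness centrality and their number of links before the common diagram was assembled into the BBN. The Python sketch below illustrates that ranking step for a single digitised diagram using networkx; the node and edge names are invented stand-ins for the workshop variables, not the actual diagrams.

# Illustrative sketch of the causal-diagram ranking step: betweenness centrality
# and number of links per variable. Nodes and edges are invented stand-ins.
import networkx as nx

edges = [
    ("urban charcoal demand", "charcoal production"),
    ("forest control", "charcoal production"),
    ("charcoal production", "woodland degradation"),
    ("woodland degradation", "charcoal supply"),
    ("charcoal production", "household income"),
    ("household income", "food security"),
]
g = nx.DiGraph(edges)

centrality = nx.betweenness_centrality(g)   # how often a variable lies on paths between others
links = dict(g.degree())                    # number of links per variable

for node in sorted(g.nodes, key=centrality.get, reverse=True):
    print(f"{node}: betweenness = {centrality[node]:.2f}, links = {links[node]}")

In the study, the most repeated, most connected and most central variables across the nine workshop diagrams were the ones retained for the reference BBN; a ranking of this form is one plausible way to operationalise that selection.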
346 | Treatment with the MAO-A inhibitor clorgyline elevates monoamine neurotransmitter levels and improves affective phenotypes in a mouse model of Huntington disease | Monoamine oxidases play an important role in brain function via the metabolic regulation of monoamine neurotransmitters, such as dopamine, norepinephrine, and serotonin.Alterations in MAO activity, which are associated with changes in monoamine neurotransmitter levels as well as with the production of toxic reactive oxygen species, have been implicated in the pathobiology of neuropsychiatric and neurodegenerative disorders."Interestingly, psychiatric manifestations are a common feature of many neurodegenerative disorders including Parkinson's and Huntington disease.Indeed, depression is the most prevalent symptom in PD and HD, occurring in approximately 40–60% of patients.Two isoforms of MAO exist, MAO-A and MAO-B, which share approximately 70% amino acid identity.Both proteins are expressed in most mammalian tissues, associated with the outer membrane of the mitochondria; however, the ratio of MAO-A and MAO-B isoforms and the levels of MAO activity vary between regions of the human brain.Enzymatically, MAO-A and MAO-B differ in their substrate selectivity.NE and 5-HT are specific substrates for MAO-A, whereas phenylethylamine and benzylamine are only degraded by MAO-B.DA is a common substrate for both isozymes.Abnormal MAO-A and MAO-B activity are therefore involved in distinct clinical presentations.Dysregulation in MAO-A activity has been implicated in a variety of neuropsychiatric disorders including depression, anxiety, autism, and attention deficit hyperactivity disorder, whereas MAO-B activity, which has been described to increase with ageing, is associated with neurodegenerative disorders, such as PD.Moreover, alterations in both MAO-A and B activity have been observed in brain regions that undergo neurodegeneration in HD patients, such as the basal ganglia and the pons.Monoamine oxidase inhibitors have long been used for treatment of psychiatric disorders and, more recently, have shown therapeutic benefits in the treatment of neurodegenerative disorders.MAO inhibitors can be classified as reversible or irreversible inhibitors of MAO-A, MAO-B, or both.Inhibition of MAO-A results in antidepressant and anxiolytic effects, whereas selective MAO-B inhibitors are useful in movement disorders such as PD.Although MAO inhibitors have yet to be tested in the treatment of HD, the presence of psychiatric manifestations and increased MAO-A and MAO-B activity in HD patients suggest that MAO inhibitors may be of therapeutic benefit.Recently, we showed that inhibition of excessive MAO activity in mouse and human HD neural cells using clorgyline, an irreversible MAO-A inhibitor, reduces oxidative stress and improves cellular viability.These observations prompted us to evaluate the effects of clorgyline further in the YAC128 mouse model of HD.The YAC128 HD mice express a full-length human mutant HTT transgene and exhibit neuropathological and behavioural phenotypes that mimic symptoms of patients with HD, including affective phenotypes such depressive- and anxiety-like behaviour.After establishing the appropriate dose of clorgyline in wild-type FVB/N mice, we sought to determine the effect of MAO-A inhibition on monoamine neurotransmitter levels and affective phenotypes in YAC128 HD mice.Three-month-old male FVB/N mice, purchased from InVivos, used for the initial clorgyline dosing study, and four-month-old YAC128 HD and littermate 
wild-type mice were group-housed on a reverse 12-h light/dark cycle.The mice had ad libitum access to water and food throughout the study.Clorgyline hydrochloride was diluted in phosphate buffered saline and was delivered by intraperitoneal injection once per day.The diluted drug solution was free of precipitates.All animal procedures were reviewed and approved by the Institutional Animal Care and Use Committee of the Biological Resource Centre, ASTAR.For the clorgyline dosing study, 3 month old WT mice were divided into four groups of 10 mice each.Three groups received clorgyline at a dose of 0.5, 1.5, or 3 mg/kg and the fourth group received an equivalent volume of PBS.For the YAC128 treatment study, three independent cohorts of 4 month old mice were used, giving a total of 17–22 mice per treatment/group.For each cohort, mice were divided into three groups.One group of YAC128 HD mice received clorgyline at a dose of 1.5 mg/kg and two groups, one of WT mice and another one of YAC128 HD mice, received an equivalent volume of PBS.Treatments were administered daily before noon at a volume of 10 mL/kg i.p. for 21 days and were continued throughout the behavioural testing phase, which commenced on day 22.During the behavioural testing phase, mice were treated in the afternoon following completion of the corresponding behavioural test.Mice were sacrificed once the behavioural testing was finished at day 26 or 28.The MAO-Glo Assay System was used to measure MAO-A activity.Brain tissue was homogenized in lysis buffer.Protein lysates were diluted to 1 mg/mL using lysis buffer.Diluted lysate was incubated with an equal volume of MAO substrate solution for 2 h at room temperature.Luciferin detection reagent was added and luminescence was measured using FLUOstar Omega plate reader.Except for the lysis buffer, all solutions and reagents were provided by the MAO-Glo Assay System.Following completion of behavioural testing, mice were sacrificed by carbon dioxide asphyxiation followed by rapid removal of the brains.Brains were micro-dissected on ice and immediately snap-frozen in liquid nitrogen.DA, NE, and 5-HT were determined by Brains On-line using established methods.Briefly, for the preparation of LC–MS samples, 6 mL of 0.5 M perchloric acid was added to each mg of striatal tissue and the samples were homogenized by sonication.The homogenates were centrifuged and the supernatants were stored as brain extracts at − 80 °C until analysis.For analysis, an aliquot of internal standard solution was mixed with a diluted aliquot of each brain extract sample.The mixture was centrifuged and the supernatant was transferred to a vial suitable for use in the autosampler.Concentrations of 5-HT, DA, NE, and DOPAC were determined by HPLC with tandem mass spectrometry detection, using deuterated internal standards of the analytes.For each LC–MS sample, an aliquot was injected onto the HPLC column by an automated sample injector.Chromatographic separation was performed on a SynergiMax column held at a temperature of 35 °C.The mobile phases consisted of A: ultra-purified H2O + 0.1% formic, and B: acetonitrile: ultra-purified H2O + 0.1% formic acid.Elution of the compounds proceeded using a suitable linear gradient at a flow rate of 0.3 mL/min.The MS analyses were performed using an API 4000 MS/MS system consisting of an API 4000 MS/MS detector and a Turbo Ion Spray interface.The acquisitions on API 4000 were performed in positive ionisation mode, with optimised settings for the analytes.The instrument was operated in 
multiple-reaction-monitoring mode.Data were calibrated and quantified using the Analyst data system.Concentrations in experimental samples were calculated based on the calibration curve in the corresponding matrix.All behavioural tests were performed in the morning during the dark phase of the reverse light/dark-cycle.The open-field test is commonly used to assess anxiety in rodents.The testing apparatus is a 50 × 50 cm open, grey, acrylic box with 20-cm high walls.Because rodents have an innate fear of open and bright spaces, they preferentially spend more time at the perimeter rather than the centre of the open field.The time spent in the centre versus the perimeter is taken as a measure of anxiety-like behaviour.Test sessions lasted 10 min and the time spent in the centre versus perimeter was recorded using an automated video-based tracking system.The Elevated Plus Maze is a well-established test of anxiety.The testing apparatus is shaped like a ‘+’ with two open arms perpendicular to two closed arms of equal dimensions.The closed arms are enclosed by three 10-cm high walls.Because rodents have an innate fear of elevated open spaces, they tend to spend less time in the open arms.Time spent in the open versus closed arms is taken as a measure of anxiety-like behaviour.Generally, treatment of rodents with anxiolytic drugs that reduce anxiety increases both the amount of time spent in and the number of entries into the open arms.Test sessions lasted 5 min and the number of entries into the open arms and time spent in the open versus closed arms were recorded using an automated video-based tracking system.The TST was first developed as a rodent screening test for potential human antidepressant drugs, and was performed as previously described.Briefly, mice were suspended by their tails with adhesive tape attached to a suspension bar.The test sessions were recorded by a video camera and each session lasted 6 min.Immobility scores for each mouse were determined by manual scoring.The experimenters scoring the videos were blinded to the treatment and genotype.Reduced mobility is considered a measure of depressive-like behaviour.The Porsolt FST was performed as described previously.Briefly, mice were placed in individual cylinders filled with room temperature water to a depth of 15 cm for a period of 6 min.The test sessions were recorded by a video camera placed directly above the cylinders.The sessions were examined blinded and the last 4 min of the test session was scored using a time-sampling technique to rate the predominant behaviour over 5-s intervals.The following behaviours were measured and recorded at the end of every 5 s: swimming/climbing and immobility.Data are expressed as means ± SEM."Statistical significance was determined by one- or two-way ANOVA with appropriate post hoc testing, or by Student's t test.Differences were considered statistically significant when p < 0.05.We first assessed the effect of different doses of clorgyline treatment on the enzymatic activity of MAO-A in the cortex of WT mice."Clorgyline treatment for 21 days resulted in a significant inhibition of MAO-A enzymatic activity compared with vehicle-treated animals. 
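The statistical testing described above (one- or two-way ANOVA with post hoc tests, or Student's t test, with significance taken at p < 0.05) can be illustrated with a minimal R sketch; the study does not state which software was used for these tests, and the data frame, column names and simulated values below are hypothetical.

```r
# Hypothetical TST immobility data: 2 genotypes x 2 treatments, 10 mice per cell
set.seed(1)
tst <- data.frame(
  genotype   = factor(rep(c("WT", "YAC128"), each = 20)),
  treatment  = factor(rep(c("vehicle", "clorgyline"), times = 20)),
  immobility = rnorm(40, mean = 120, sd = 25)  # immobility time (s), simulated
)

# Two-way ANOVA: main effects of genotype and treatment plus their interaction
fit <- aov(immobility ~ genotype * treatment, data = tst)
summary(fit)

# Tukey's HSD shown as one common post hoc choice; the paper only states
# "appropriate post hoc testing" without naming the procedure
TukeyHSD(fit)

# Student's t test for a simple two-group comparison at p < 0.05
t.test(immobility ~ treatment, data = subset(tst, genotype == "YAC128"))
```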
"All three doses tested resulted in a reduction of approximately 80% in enzymatic activity, and no significant differences were observed between the three doses.To examine the effect of MAO-A inhibition on the metabolism of monoamine neurotransmitters, we assessed the levels of 5-HT, NE, and DA, as well as the levels of 3,4-Dihydroxyphenylacetic acid, the degradation product of dopamine, in the striatum of vehicle-treated WT mice."Treatment with clorgyline at all tested doses significantly elevated striatal levels of 5-HT and NE compared with vehicle-treated animals. "Interestingly, DA levels remained unchanged, whereas DOPAC levels were significantly decreased in clorgyline-treated WT mice, suggesting that inhibition of MAO-A activity decreases the metabolism of DA.Having assessed the effect of different doses of clorgyline on the levels of MAO-A activity and its monoamine neurotransmitter substrates, we sought to evaluate the effect of clorgyline treatment on affective function in WT mice.First, we examined possible detrimental effects of clorgyline on WT mice by measuring the body weight of all mice at the end of the 21 days of treatment."No significant difference in body weight was observed in mice treated with the low and intermediate doses of clorgyline. "However, treatment with the high dose resulted in a significant loss of body weight when compared with vehicle-treated mice, suggesting a potentially detrimental effect of the highest dose of clorgyline on WT mice.We then assessed the effect of clorgyline on mouse anxiety levels using the open field and elevated plus maze tests of anxiety-like behaviour.Clorgyline treatment had no effect on performance in these tests; time spent in the centre of the arena of the OF and in the open arms of the EPM were similar between the clorgyline- and vehicle-treated mice.Next, we assessed depressive-like phenotypes using the tail suspension test and Porsolt forced swim test.In the TST, mice treated with the high dose of clorgyline showed a significant decrease in depressive-like behaviour compared with vehicle-treated mice.Clorgyline had no effect on performance in the FST at any of the doses tested.These data indicate that clorgyline did not have a detrimental effect on anxiety- and depressive-like behaviour at any of the doses tested.However, because the highest dose of clorgyline resulted in body weight loss, we chose the intermediate dose, the highest dose tested that did not cause detrimental effects, to evaluate the effect of clorgyline in the YAC128 mouse model of HD.To ensure similar pharmacokinetics of clorgyline in WT and YAC128 mice, we measured plasma levels of clorgyline following treatment and found no difference between the genotypes.Clorgyline treatment inhibited MAO-A enzymatic activity in cortical tissue from YAC128 HD mice by approximately 90%.Striatal DA and NE levels were also significantly reduced in YAC128 HD mice compared with WT mice however striatal 5-HT was not significantly altered in YAC128 HD mice compared with WT mice.In clorgyline-treated YAC128 HD mice, striatal levels of all three neurotransmitters were significantly elevated in comparison with vehicle-treated YAC128 HD mice.Striatal 5-HT and NE levels were significantly elevated in treated YAC128 HD mice compared to vehicle-treated WT mice.To test the specificity of MAO-A inhibition by clorgyline, striatal levels of PEA, a specific MAO-B substrate, were also measured.No differences in striatal PEA were observed between vehicle- and clorgyline-treated YAC128 HD 
mice, suggesting lack of MAO-B inhibition.Mouse models of HD, including the YAC128 HD model, show affective phenotypes, such as anxiety and depression.Given the well-established relationship between changes in monoamine neurotransmitter levels and affective phenotypes, we evaluated the effect of clorgyline on behavioural measures of depression and anxiety in YAC128 HD mice.We found no difference in the body weight of clorgyline- and vehicle-treated YAC128 HD mice at the end of the 21-day treatment, suggesting no detrimental effects of clorgyline treatment.YAC128 HD mice showed an increased anxiety-like phenotype, signified by shorter time periods spent in the centre of the arena in the OF test and in the open arms of the EPM, compared with WT mice.YAC128 HD mice also displayed depressive-like behaviour in the Porsolt FST, as shown by the increased immobility observed compared with WT mice.Performance in the TST was unaltered in HD mice compared with WT.Clorgyline treatment improved anxiety-like phenotypes, significantly increasing the amount of time spent in the centre of the arena in the OF test and in the open arms of the EPM.Decreased immobility times were also observed in the TST, but not the FST, indicating a reduction in depressive-like behaviour.Psychiatric manifestations are a common feature of HD.While social and psychological factors are thought to play a role, findings from human and animal studies strongly implicate neurobiological alterations related to the HD mutation in the aetiology of psychiatric disturbances of HD."Although the aetiology of depression in HD remains poorly understood, several pathogenic mechanisms have been proposed, including deficient brain-derived neurotrophic factor and neurotrophin signalling, a hyperactive hypothalamic–pituitary–adrenal axis, and impaired hippocampal neurogenesis.In addition to these abnormalities, we now demonstrate that YAC128 HD mice exhibit deficits in monoamine neurotransmitters known to be tightly associated with affective disorders.We further show that elevating the levels of these monoamines, namely serotonin, norepinephrine, and dopamine, by pharmacological inhibition of MAO-A improves affective phenotypes of YAC128 HD mice.These findings suggest that monoaminergic impairments may be a contributing factor to the psychiatric manifestations of HD.Deficits in dopaminergic signalling have been demonstrated in presymptomatic gene carriers, symptomatic patients, and animal models of HD.Similarly, there is evidence for serotonergic dysfunction in rodent models and in patients with HD.Treatments targeting the dopamine-norepinephrine and the serotonin systems have been shown to improve affective function in rodent models of HD, indicating that these deficits contribute to affective abnormalities.Interestingly, improvements in affective phenotypes resulting from targeting the serotonergic system have been observed in rodents expressing an N-terminus fragment of mutant HTT but not those expressing the full length HTT protein.Serotonergic deficits may therefore be more important in the development of affective phenotypes in HTT N-terminus fragment models, which generally have a more rapid onset.In addition to our findings with clorgyline, an MAO-A inhibitor, treatments with the mood stabilizers, lithium and valproate, have also been reported to improve affective phenotypes in YAC128 HD mice.Indeed, valproate treatment improved performance in the FST and TST tests of depression, and combined valproate and lithium treatment improved 
performance in the FST, TST, as well as the open-field test of anxiety.These improvements were accompanied by altered GSK-3beta activation, histone H3 acetylation, and improved expression of HSP70, BDNF, and its cognate receptor TrkB, which may have contributed to the observed improvements.Thus, multiple pathogenic mechanisms may underlie affective dysfunction in HD.While it would be tempting to speculate about the relative contribution of these pathways and hence the therapeutic value of engaging each in isolation on affective phenotypes in HD, a combinatorial treatment strategy is likely to yield greater benefit, and determining the optimal combination of therapeutic agents should be the subject of future studies.It should be noted that at the dose chosen in this study, clorgyline treatment elevated serotonin and norepinephrine in YAC128 HD mice to levels higher than those seen in WT mice.This observation highlights the need for appropriate dose titration to ensure that neurotransmitters affected in HD are restored without overtly exceeding the normal range and risking undesirable side effects such as the serotonin syndrome.Several factors may contribute to the deficits in the levels of striatal monoamine neurotransmitters observed in YAC128 HD mice.The rescue of neurotransmitter levels we observed following clorgyline treatment supports a role for MAO-A.Dysregulation of MAO-A/B activity has been linked not only to depression but also to neurodegenerative disorders.Indeed, increased MAO-A/B activity has been observed in patients with HD in brain regions that undergo neurodegeneration and in human neural cells differentiated from HD patient-derived induced pluripotent stem cells.It is plausible that monoamine deficits reflect altered biosynthesis, although this requires further investigation.There is considerable variability in the nature, severity, and timing of appearance of psychiatric symptoms in HD.While this variability may, in part, reflect poor standardisation of the diagnostic criteria used to assess psychiatric features, it also points to a likely role for environmental, epigenetic, and genetic modifiers.Genetic polymorphisms and epigenetic variation in MAOA have been associated with altered MAO-A levels and activity as well as psychiatric disorders, including depression, aggression, and anxiety.Furthermore, MAO-A expression has been shown to be responsive to certain environmental factors.Thus, both genetic and epigenetic variation in MAOA may contribute to psychiatric manifestations in HD.The results from our study support a role for monoaminergic impairments in the affective phenotypes observed in YAC128 HD mice, and suggest potential therapeutic benefits of MAO-A inhibitors for the treatment of psychiatric abnormalities in HD.Our findings also raise the question of whether targeting monoaminergic impairments in HD may improve symptoms beyond psychiatric abnormalities, including deficits in motor function and cognition. | Abnormal monoamine oxidase A and B (MAO-A/B) activity and an imbalance in monoamine neurotransmitters have been suggested to underlie the pathobiology of depression, a major psychiatric symptom observed in patients with neurodegenerative diseases, such as Huntington disease (HD). Increased MAO-A/B activity has been observed in brain tissue from patients with HD and in human and rodent HD neural cells. 
Using the YAC128 mouse model of HD, we studied the effect of an irreversible MAO-A inhibitor, clorgyline, on the levels of select monoamine neurotransmitters associated with affective function. We observed a decrease in striatal levels of the MAO-A/B substrates, dopamine and norepinephrine, in YAC128 HD mice compared with wild-type mice, which was accompanied by increased anxiety- and depressive-like behaviour at five months of age. Treatment for 26 days with clorgyline restored dopamine, serotonin, and norepinephrine neurotransmitter levels in the striatum and reduced anxiety- and depressive-like behaviour in YAC128 HD mice. This study supports a potential therapeutic use for MAO-A inhibitors in the treatment of depression and anxiety in patients with HD. |
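As a side note to the LC–MS methods described in the study above, the calibration-curve quantification step (analyte/deuterated internal-standard peak-area ratios regressed against known standard concentrations, then inverted for the unknowns) can be sketched as follows. The study performed this step in the Analyst data system; the R code, object names and all numbers below are purely illustrative.

```r
# Hypothetical calibration standards: known concentration vs. analyte/IS peak-area ratio
calib <- data.frame(
  conc_ng_ml = c(0, 5, 10, 25, 50, 100),
  area_ratio = c(0.01, 0.11, 0.21, 0.52, 1.05, 2.08)
)

# Linear calibration curve fitted by least squares
cal_fit <- lm(area_ratio ~ conc_ng_ml, data = calib)
summary(cal_fit)$r.squared                     # check linearity of the curve

# Back-calculate concentrations of experimental samples from their area ratios
sample_ratio <- c(0.34, 0.78, 1.40)            # hypothetical sample readings
b <- coef(cal_fit)                             # intercept and slope
(sample_ratio - b[[1]]) / b[[2]]               # estimated concentrations (ng/mL)
```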
347 | Optimisation and validation of high-temperature oxidation of Cyclopia intermedia (honeybush) – From laboratory to factory | Globally consumers are increasingly health-conscious, leading to greater appreciation for natural products, including herbal teas.As a result, the market for honeybush tea, produced from a number of Cyclopia species, has seen rapid growth since its “re-discovery” in the early 1990s.Demand is currently exceeding supply, necessitating that each production batch should meet optimum quality standards.With the transition from a cottage industry to a formalised industry supplying mainly an export market, attention was given to improve inconsistent and poor product quality.Du Toit and Joubert investigated the high-temperature oxidation step of C. intermedia as this species provided the bulk of production at that stage, and which is still currently the case.Optimum quality was defined by the development of a sweet-associated flavour, obtained when C. intermedia was fermented at 70 °C/60 h or 90 °C/36 h.At that stage no attempt was made to further characterise this positive broad-based sensory attribute and to identify negative attributes.Given the long fermentation period, especially at 70 °C, some processors have chosen not to adhere to the recommended conditions, choosing instead to use lower temperatures or shorter times in an attempt to either increase throughput, save energy or accommodate limitations of processing equipment.Erasmus et al., investigating the fermentation of C. genistoides, C. subternata, C. maculata and C. longifolia, and using the high temperature-short time regime applied by some processors as starting point, demonstrated that a good quality herbal tea could be obtained at 80 °C/24 h or 90 °C/16 h, depending on the specific species and aroma profile required.Given the possibility that a high temperature/short time fermentation regime could be suitable for producing a good quality herbal tea from C. intermedia, in addition to the progress we have made to date to characterise the evolution of the aroma profile of a number of Cyclopia species during fermentation, the present study revisited the optimisation of the fermentation conditions of C. intermedia.The aim of the present study was to confirm 70 °C/60 h or 90 °C/36 h as optimum fermentation conditions or to establish a new set of optimum conditions for C. intermedia.A large number of commercial production samples of C. intermedia were collected from processors and retail outlets in an attempt to fully define its sensory profile and determine the extent of variation in quality.Following determination of the optimum fermentation temperature–time combination for C. intermedia on laboratory-scale, validation of these combinations was carried out on factory-scale.A total of 54 production batches of fermented C. intermedia were sourced from commercial processors, farm stalls and supermarkets throughout the Western and Eastern Cape provinces of South Africa.The sample set served to capture sensory variation in terms of attributes and intensities, particularly to identify the major aroma attributes associated with C. intermedia.Another aim was to gain insight into the presence of negative attributes and taints that could point to sub-optimal fermentation conditions.The samples were stored at room temperature in glass jars until analysis.Three batches of C. 
intermedia plant material, with each batch representing an independent replicate, were harvested at different times over a period of 6 weeks from two commercial plantations on a farm near Barrydale and a natural stand on a farm in the Langkloof, respectively.Shoots of several plants were harvested and pooled to form a batch.Thick stems, largely devoid of thin side branches with leaves, were removed before processing.The plant material was mechanically shredded, moistened to ca 60% moisture content and mixed thoroughly before sub-division.Each sub-batch was placed in a stainless steel container and covered with a double layer of heavy duty aluminium foil to prevent excessive moisture loss during fermentation.The sub-batches were randomly allocated to three pre-heated laboratory ovens at 70, 80 and 90 °C, respectively.One sub-batch was removed from each oven after predetermined time intervals and the fermented plant material spread out on four drying trays.The trays were placed in a cross-flow temperature-controlled dehydration tunnel at 40 °C for 6 h to dry the fermented plant material to ca 10% moisture content.Each dry sub-batch was mechanically sieved as described by Theron et al. and the “tea bag” fraction was collected and stored at room temperature in sealed glass jars until analysis.Four batches of C. intermedia, representing replicates, were harvested over a period of 10 days from natural stands at locations near the factory, situated in the Langkloof.The individual batches were mechanically shredded and deposited directly into steam-heated, double-walled stainless steel fermentation tanks.Each tank has a capacity of 500 kg plant material and is equipped with rotating paddles to ensure adequate mixing and heat transfer during fermentation.A steam injector was used to rapidly increase the temperature of the plant material to ca 90 °C.The plant material had a high inherent moisture content and required only superficial wetting to aid the fermentation process.Samples were collected after 16, 24, 36 and 48 h, spread out on drying trays and dried in a cross-flow dehydration tunnel at 40 °C for 6 h to less than 10% moisture content.Laboratory-scale fermentation, as described for the previous experiment, was executed concurrently on the same plant material batches to allow direct comparison.As soon as the bulk plant material was shredded and mixed, four samples per batch were collected, superficially wetted as for the bulk plant material and fermented in stainless steel containers in a laboratory oven at 90 °C.The samples were removed after 16, 24, 36, and 48 h and dried as described for the factory-scale samples.All dried samples were mechanically sieved, their “tea bag” fraction collected and stored in sealed glass jars until analysis.Infusions were prepared by pouring 1 L freshly boiled distilled water on 12.5 g tea leaves in a glass jug to infuse for 5 min, whereafter it was decanted through a tea strainer into a 1 L preheated stainless steel thermos flask.Approximately 100 mL of each infusion was served in preheated white porcelain mugs, covered with plastic lids and kept warm during sensory analysis in water baths at 65 °C.The infusions were prepared, served and analysed as described for previous sensory analysis of honeybush to ensure consistency in methodology so that comparison of the sensory attributes of C. 
intermedia with those of other Cyclopia species are valid.The panel consisted of nine female panellists with extensive experience in descriptive sensory analysis of honeybush tea.They have been previously screened for their ability to discriminate between similar samples, rate products for intensity and identify tastes and aromas, as advised by Drake and Civille.For each set of samples training sessions were held prior to descriptive sensory analysis.Training served to familiarise the panellists with the respective aroma, flavour, taste and mouthfeel attributes, as well as taints present in the samples and to calibrate them in terms of the range of intensities.Reference samples, to re-familiarise the panellists with specific honeybush aroma attributes, consisted of samples produced from Cyclopia species, analysed by Theron et al.These reference samples exhibited high intensities of specific attributes and thus served to “calibrate” the panellists.The commercial samples were analysed first to generate a comprehensive list of descriptors that best described the aroma, flavour, taste and mouthfeel of C. intermedia.Specific attention was given to the presence of taints.The list of 68 aroma and 51 flavour, taste and mouthfeel descriptors, generated by Theron et al. for development of a generic honeybush sensory wheel and lexicon, served as basis.During further discussion redundant descriptors, including those present infrequently, were removed to simplify the list to 28 aroma and flavour attributes, 3 taste modalities and 1 mouthfeel attribute, i.e. astringency.The samples generated during the fermentation optimisation and validation experiments were analysed separately, following training sessions.Since these sample sets included samples that were fermented for a short or very long period, and most likely resulted in under- and over-fermentation, attributes such as “green grass”, “cooked vegetables” and “dusty” aroma and flavour, identified during training, were included to accurately describe the change in the sensory profile during fermentation.The quantitative aspect of the descriptive sensory analysis of C. 
intermedia infusions entailed scoring the intensity of each attribute by assigning a value on a scale.Attribute intensities were scored on an unstructured line scale with verbal anchors on each end, using Compusense® five software.Six samples were evaluated per day with each sample analysed in triplicate during three consecutive sessions.The order of presentation was randomised and samples were labelled using random three-digit codes to ensure blind tasting.Panellists cleansed their palates between samples with water and unsalted fat-free biscuits and were given a 10 min break between sessions to reduce panel fatigue.All analyses were conducted in individual tasting booths situated in a light- and temperature-controlled room.The infusions of the commercial samples and those generated during the validation experiment were analysed for soluble solids content, colour and turbidity to provide additional "cup-of-tea" parameters for comparing infusions.Soluble solids content and colour give an indication of the strength of the infusion, while high turbidity is associated with poor quality.An aliquot of each infusion, prepared for sensory analysis, was filtered prior to analysis.The soluble solids content was determined gravimetrically on 20 mL aliquots.The absorbance of the infusion was measured from 370 nm to 510 nm and integrated to obtain the area under the curve, representing "total colour".The absorbance was measured at 10 nm intervals, using a Biotek Synergy HT microplate reader.The turbidity of a 25 mL aliquot of each sample was measured using a Thermo Scientific Orion AQUAfast AQ3010 Turbidity Meter (Cape Town, South Africa), auto-ranging from 0 to 1000 nephelometric turbidity units.The AQ3010 meter was calibrated using four EPA-approved styrene-divinylbenzene primary standards.All analyses were carried out in triplicate.For descriptive sensory analysis of the commercial samples a completely randomised design was used, presenting each sample and its three replicate infusions in a randomised order to panellists.For the laboratory-scale fermentation experiment the experimental design was a randomised block, with each of the 18 treatment combinations replicated on three batches of plant material.The treatment design was a 3 × 6 factorial with three fermentation temperatures and six fermentation times.For the factory-scale experiment the experimental design was also a randomised block, with each of the 8 treatment combinations replicated on four batches of plant material.The treatment design was a 2 × 4 factorial with two fermentation scales and four fermentation times.Panel performance was monitored using PanelCheck® software.Descriptive sensory analysis data were pre-processed to test for panel reliability using a model that includes panellist, replicate and sample effects and interactions.The Shapiro–Wilk test for normality was performed on the standardised residuals from the model.If there was significant deviation from normality, outliers were removed.Following the confirmation of panel reliability and normality, subsequent statistical analyses were conducted on means over triplicate infusions and panellists.Descriptive sensory analysis and instrumental data were subjected to ANOVA according to the experimental design of each trial to test for sample/treatment differences.Where the F-test indicated significant differences, Fisher's least significant difference was calculated at the 5% level to compare treatment means.A probability level of 5% was considered significant for all 
significance tests.Univariate analyses were performed using SAS software.Principal component analysis was also performed, using the correlation matrix, by means of XLStat to graphically illustrate the association between the samples and sensory attributes.Previously, Theron et al. profiled the sensory characteristics of the infusions of several Cyclopia species to develop a generic sensory wheel for honeybush.Prominent to all species were “fynbos-floral”, “fynbos-sweet” and “plant-like” attributes, yet subtle differences between species resulted in three distinct groups according to discriminant analysis.A limited number of C. intermedia samples were included in their sample set.The list of aroma, flavour, taste and mouthfeel descriptors generated by Theron et al. therefore served as basis for the comprehensive sensory profiling of C. intermedia in the present study.The large number of production batches sourced from commercial processors and retail outlets for the present study included samples varying in quality, some with prominent taints.The intensity scores for the aroma and flavour attributes followed similar trends, however, flavour attributes were perceived at lower intensities.This is in agreement with the intensity data obtained for other Cyclopia species.Aubrey et al. demonstrated that some attributes of wine are evaluated more effectively by the nose, which explains the reduced retronasal perception of attributes present in C. intermedia.Only aroma attributes, the taste modalities and astringency were therefore used to describe the sensory profile of C. intermedia.The PCA loadings and scores plots show the association of the commercial samples with the aroma attributes, tastes and astringency.A large number of samples associated with taints, indicative of poor quality.Taints identified were “medicinal”, “smokey” and “wet fur/farm animals”.Some samples scored high intensities for these taints.“Wet fur/farm animals” is most likely an indication of poor process control and/or practice.Low temperatures, combined with excessive long fermentation periods are conducive to development of off-odours or taints associated with poor quality.Fermentation at less than 60 °C leads to mould growth.Negative aroma attributes included “dusty”, “cooked vegetables”, “green grass” and “burnt caramel”.“Green grass” is associated with under-fermented honeybush tea, while “burnt caramel” may indicate over-fermentation or uneven heating, creating hot spots in the fermentation tank.Considering attribute intensity and occurrence frequency in the sample set, box plots of selected aroma attributes are provided in Fig. 3.The positive aroma attributes, “fynbos-floral”, “fynbos-sweet” and “woody”, present in all samples, were consistently high with minimum intensities of 28, 25 and 34, respectively, highlighting their prominence in C. intermedia infusions.Both Theron et al. and Erasmus et al. identified “fynbos-floral” and “fynbos-sweet” as very prominent in Cyclopia, both in terms of intensity and percentage occurrence frequency, and thus as defining aroma attributes of honeybush.Theron et al. also noted “woody” as a defining aroma attribute, because it was consistently present in the samples analysed.“Woody” aroma, however, presents an interesting case.Whilst one of the most prominent aroma attributes in the present C. intermedia sample set, the samples analysed by Theron et al., including those of C. intermedia and C. longifolia, had low intensity scores for “woody”.Erasmus et al. 
found this aroma attribute to be prominent only in C. longifolia, increasing with fermentation temperature and fermentation time.Its intensity was low in C. genistoides, C. subternata and C. maculata.The present C. intermedia sample set had mean intensities of 17 and 8 for “fruity-sweet” and “apricot/apricot jam”, respectively.These attributes were present in 100% and 77% of the samples, respectively.“Honey”, “caramel/vanilla” and “cooked apple” were noted in 75, 60 and 51% of the samples, respectively.Some samples gave relatively high intensity scores, but the mean scores for these attributes were low.Other fruity and floral attributes such as “rose geranium”, “lemon” and “raisin” were present in less than 7, 32 and 21% of the samples, respectively, at low mean intensities.In terms of negative attributes, only “hay/dried grass” had a mean intensity score higher than just perceptible and it was noted in more than 90% of the samples.Given the consistent presence of “woody” and “hay/dried grass” in the present sample set, as well as in other sample sets, it can be postulated that their intensities would be decisive in their classification as negative or positive.For insight into the intensities determining a positive or negative contribution to the characteristic aroma profile of C. intermedia, further investigation, including consumer testing, is needed.Sweet taste and astringency are considered characteristic of honeybush and were perceived in all samples with a minimal difference between the minimum and maximum intensity scores.Bitter taste was negligible in C. intermedia infusions.Sour taste was detected in less than 50% of the samples at low intensities.When perceptible, sour taste can be another indication that fermentation conditions were sub-optimal.The intensity scores for sweet, bitter and sour were similar to those obtained by Theron et al., but the samples of the present study were more astringent than previously demonstrated for C. intermedia.Although reference standards, originating from the sample set used by Theron et al. were used to calibrate panel members, panel drift cannot be excluded.Considering intensity, percentage occurrence frequency and the importance of an attribute in terms of the sensory profile of C. intermedia, an aroma wheel, accompanied by a bar graph, was compiled of selected attributes.The relative size of a slice in the aroma wheel represents the relative perceived intensity of an attribute, while the bar graph represents the percentage occurrence frequency of each attribute in the sample set.Combined, the wheel and bar graph provide the user with a “snap shot” of the aroma profile of C. 
intermedia and thus a tool suitable for use in quality control by industry.This aroma wheel complements the generic honeybush wheel, developed by Theron et al.Whereas the latter wheel gives a more comprehensive list of positive and negative attributes that could be expected in honeybush, no indication of their relative intensities and percentage occurrence frequencies are provided.The PCA loadings and scores plots indicate that the negative aroma attributes associated with the shorter fermentation times for plant material fermented at 70 and 80 °C.As fermentation progressed, the samples associated with floral, fruity and sweet-associated attributes, but also “dusty” and “earthy”.The latter two attributes were, however, present at very low intensities.For greater insight into the evolution of the aroma notes during fermentation, the intensities of the three major aroma attributes, “fynbos-floral”, fynbos-sweet” and “woody”, were plotted over time for each temperature.At 90 °C their intensities did not increase significantly over time and were higher than the intensities at 70 °C when fermented for ≤ 36 h.At both 70 and 80 °C the intensities of these attributes increased over time, with the effect of time being more prominent for 70 °C.“Rose-geranium” presented a different scenario in that the lower fermentation temperature was more conducive to its development, however, very small changes in intensity took place over time.Previously, a similar trend was observed for “rose geranium” when C. genistoides was fermented at 80° and 90 °C, with 80 °C resulting in a higher intensity than 90 °C after 32 h. Conversely, “rose geranium” was more prominent in C. longifolia when fermented at 90 °C than at 80 °C.The negative aroma attributes, “hay/dried grass” and “green grass”, were reduced in intensity during fermentation during the first 24–36 h, confirming their association with under-fermented honeybush.The decrease was most prominent when C. intermedia was fermented at 70 °C.Erasmus et al. also showed that their intensities in C. 
longifolia were reduced within the first 24 and 16 h of fermentation at 80° and 90 °C, respectively, whereafter fermentation time had no effect.The PCA loadings and scores plots were again employed to provide a broad overview of the association between fermentation conditions and attributes.Significant main and interaction effects are summarised in Table 1.“Rose-geranium” associated exclusively with factory-scale samples, while “fruity-sweet”, “fynbos-sweet”, “fynbos-floral” and “apricot jam” associated almost exclusively with laboratory- and factory-scale samples fermented for 24 and 48 h.A significant interaction for fermentation scale × time was obtained for “rose-geranium”, with its intensity in factory-scale samples decreasing significantly from 16 to 24 h to a level not significantly different from those of the laboratory-scale samples.“Woody” also showed a significant fermentation scale × time interaction, with the laboratory-scale samples having a consistently higher intensity than the factory-scale samples, except at 48 h.“Raisin” and “caramel/vanilla” were scored slightly higher in laboratory-scale samples and their intensity increased with time.“Honey”, on the other hand, was scored higher in factory-scale samples and its intensity decreased with fermentation time.Of these three aroma attributes, “raisin” received the highest intensity score.A significant interaction between fermentation scale and time was also obtained for “hay/dried grass”.Its intensity was reduced during laboratory-scale fermentation to a level significantly lower than that of the factory-scale samples at 48 h, but fermentation scale had no effect when C. intermedia was fermented for 24 and 36 h.“Dusty” scored slightly higher in laboratory-scale samples and increased with fermentation time.The intensity scores for “dusty” of samples fermented for 36 h or less did not differ significantly and only at 48 h was its intensity significantly increased.Other parameters, i.e. 
soluble solids content, colour and turbidity of the infusions, were also considered to compare the effect of laboratory- and factory-scale fermentation on "cup-of-tea" characteristics.Additionally, the samples were compared to the commercial samples.Most notable was the difference in colour between the laboratory- and factory-scale samples with the latter giving slightly lower values than when fermented on laboratory-scale.The commercial samples showed large variation for the respective parameters.This was to be expected as the samples also encompassed the inherent variation in plant material harvested in different areas, etc., however, many outliers were observed for turbidity with some samples giving NTU values > 150.High turbidity is not acceptable in honeybush tea and is most likely the result of sub-optimal fermentation conditions.This is the first attempt at quantifying the turbidity of honeybush infusions.Future research should determine specifications in terms of acceptable NTU values before turbidity could be included as a quality parameter in a quality grading system for honeybush tea.Overall in terms of sensory profile and physicochemical parameters, laboratory- and factory-scale fermentation delivered more or less the same product.Some attributes were favoured by laboratory-scale fermentation, while others developed at higher intensities during factory-scale fermentation.Although the shredded plant material was thoroughly mixed before small sub-batches for laboratory-scale fermentation were taken, complete homogenisation was not possible and variation in plant material could contribute to these differences.Another factor is heating during fermentation.Temperature logs provide insight into the difference in actual temperature and control that exists between laboratory- and factory-scale fermentation.Fermentation temperature increased to > 85 °C in ca 1 h on factory-scale achieved by steam injection, while it took ca 6 h to reach this point on laboratory-scale.Despite this slow heating time, fermentation temperature remained relatively constant between 88 and 90 °C on laboratory-scale, whereas temperature fluctuations of up to 10 °C occurred during factory-scale fermentation.Additionally, the temperature did not reach 90 °C and when the steam generator failed to operate overnight for 6 h during fermentation of batch 1, the temperature decreased to ca 50 °C.Large variation in the sensory quality, presence of taints and high turbidity levels of infusions of C. intermedia samples sourced from processors and retail outlets confirmed the need to re-visit the optimisation of its fermentation temperature and time.Fermentation at 90 °C for 36 h, or 48 h at 70 °C and 80 °C, was required to effectively increase positive aroma attributes, while fermentation for 12 h at 90 °C and 24 h at 70 °C and 80 °C was effective to reduce the negative attributes to negligible levels on laboratory-scale.Fermentation performed concurrently on the same batches of plant material on laboratory- and factory-scale delivered more or less the same product, validating the optimum fermentation conditions determined on laboratory-scale. | Inconsistent and poor quality honeybush herbal tea, produced from Cyclopia intermedia, required that comprehensive sensory profiling of this Cyclopia species be undertaken to identify not only its characteristic sensory attributes, but also to identify negative attributes, including taints responsible for poor quality. 
This was achieved by descriptive sensory analysis of infusions, prepared at “cup-of-tea” strength of a large sample set sourced from processors and retail outlets. The aroma attributes, “fynbos-floral”, “fynbos-sweet” and “woody”, and to a lesser extent, “fruity-sweet” and “apricot jam”, were the most prominent. The presence of taints such as “smokey” and “wet fur/farm animals” at relative high intensities in some samples indicated poor processing practices. The presence of “green grass” and “dusty” aroma notes is most likely attributable to under- and over-fermentation, respectively. Fermentation is the high-temperature oxidation step essential for the development of the characteristic sensory attributes of traditional honeybush tea. High turbidity levels of some infusions further confirmed sub-optimal processing of plant material. The effects of fermentation temperature (70, 80 and 90 °C) and time (12, 16, 24, 36, 48, and 60 h) on the sensory characteristics of C. intermedia infusions were thus investigated on laboratory-scale to establish optimum conditions. Different fermentation temperatures produced teas with slightly different sensory profiles, with infusions of plant material fermented at 70 °C predominantly floral, at 80 °C predominantly fruity and at 90 °C overall most characteristic of C. intermedia. Fermentation at 90 °C for 24 h or 36 h proved effective to increase the major positive aroma attributes to prominent levels, while decreasing the negative aroma attributes to negligible levels. These conditions were thereafter validated on factory-scale. Fermentation performed concurrently on the same batches of plant material on laboratory- and factory-scale delivered more or less the same product in terms of aroma profile and “cup-of-tea” strength, with the latter assessed according to the soluble solids content, colour and turbidity of the infusion. For optimum quality, inherent batch-to-batch variation in plant material may require careful monitoring of aroma development during fermentation between 24 and 36 h. Application of the optimum fermentation temperature-time combination by industry will contribute towards improved and consistent product quality. |
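The univariate and multivariate treatment of the descriptive sensory data in the study above (ANOVA per attribute followed by Fisher's least significant difference at the 5% level, and PCA on the correlation matrix) was carried out in SAS and XLStat; a minimal R sketch of the same workflow, with hypothetical panel-mean data and treatment labels, might look like this.

```r
# Hypothetical panel-mean intensity scores: one row per sample, one column per attribute
set.seed(2)
sensory <- data.frame(
  treatment       = factor(rep(c("70C_60h", "80C_48h", "90C_36h"), each = 3)),
  fynbos_floral   = rnorm(9, 35, 3),
  fynbos_sweet    = rnorm(9, 30, 3),
  woody           = rnorm(9, 40, 3),
  hay_dried_grass = rnorm(9, 10, 2)
)

# ANOVA for one attribute ("fynbos-floral") across fermentation treatments
fit <- lm(fynbos_floral ~ treatment, data = sensory)
anova(fit)

# Fisher's least significant difference at the 5% level (equal replication assumed)
n_rep <- 3
mse   <- deviance(fit) / df.residual(fit)        # residual mean square
lsd   <- qt(0.975, df.residual(fit)) * sqrt(2 * mse / n_rep)
lsd    # treatment means further apart than this differ at p < 0.05

# PCA on the correlation matrix (scale. = TRUE) of the attribute intensities
pca <- prcomp(sensory[, -1], scale. = TRUE)
summary(pca)   # variance explained per component
biplot(pca)    # association between samples and attributes, cf. the PCA plots in the study
```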
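The "cup-of-tea" parameters described above likewise reduce to simple calculations: "total colour" as the area under the absorbance curve between 370 and 510 nm read at 10 nm intervals, and soluble solids determined gravimetrically on a 20 mL aliquot. The sketch below assumes trapezoidal integration and uses hypothetical readings; the paper does not state the exact integration rule applied.

```r
# "Total colour": absorbance read from 370 to 510 nm at 10 nm intervals,
# integrated to an area under the curve (trapezoidal rule assumed here)
wavelength <- seq(370, 510, by = 10)                 # nm
absorbance <- exp(-(wavelength - 370) / 80)          # hypothetical, decreasing spectrum
total_colour <- sum(diff(wavelength) *
                    (head(absorbance, -1) + tail(absorbance, -1)) / 2)
total_colour

# Soluble solids determined gravimetrically on a 20 mL aliquot (hypothetical dry mass)
dry_residue_g <- 0.055
soluble_solids_g_per_100mL <- dry_residue_g / 20 * 100
soluble_solids_g_per_100mL
```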
348 | A comparison of Landsat 8, RapidEye and Pleiades products for improving empirical predictions of satellite-derived bathymetry | The rapid expansion of the Irish economy is putting unprecedented pressure on the coastal marine area and its resources.The Census 2016 summary results showed that in Ireland 40% of the total population reside within 5 km of the coast.These circumstances demand efficient coastal management procedures able to protect the sustainable use of these environments.Timely and accurate environmental information such as bathymetry is necessary to support effective resource policy and management for coastal areas and assure human security and welfare.Several techniques have been developed to derive depth values used to produce bathymetric maps.Globally, single and multibeam echo sounders provide the most accurate and reliable method to derive depth.However, this technique is costly, slow, weather-dependent and large survey vessels are unsuited for operations in shallow waters.Airborne bathymetric LiDAR represents an alternative to vessel campaigns and its suitability has been demonstrated in coastal areas.This method is rapid, unhindered by maritime restrictions but performs poorly in turbid waters, as demonstrated by tests performed by the national marine mapping programme, INFOMAR.Satellite-derived bathymetry is emerging as a cost-effective alternative methodology that provides high resolution mapping over a wide area, rapidly and efficiently.Multispectral satellites of several spectral and spatial resolutions have been assessed for this purpose worldwide.Deriving bathymetry from multispectral satellite imagery applies the principle that light penetration of the water column at different wavelengths is a function of the properties of sea-water and was first proposed as a potential optical alternative for bathymetric surveys in the 70s.However, it is noted that depth penetration is limited by water turbidity and these methods require calibration, particularly in areas of variable seabed type.Coastal environments are highly dynamic and heterogeneous, and therefore more studies are needed to ensure robust methodologies.Development of applications utilising satellite imagery will help EU member states like Ireland to leverage their investment in this technology.The frontier of SDB research has advanced from basic linear functions into band ratios of log transformed models, non-linear inverse models and physics-based methods similar to radiative transfer models.Empirical SDB prediction methods have been assessed for deriving bathymetry in Irish waters in previous tests and although SDB performance varies depending on the approach, the prediction differences were approximately 10% of water depth, and were influenced by water type and by sensor types.In this study, we apply and extend a proven empirical approach to a selection of multi-resolution imagery products: Landsat 8, RapidEye and Pleiades.The potential of these sensors has been individually reported in other studies.However, in this study, we incorporate multiple spatial, spectral and radiometric resolutions to ascertain their influence on bathymetric accuracy both prior and post image-corrections.In particular, satellite-derived relative depth was determined using the three satellite-based products where different spatial filters, pre-processing steps, atmospheric corrections and multispectral band combinations were investigated."This operation resulted in 23 different formulations of SDRD, each of which was 
assessed for use as a potential predictor variable in the study's SDB predictions models.The ground-reference water depth was provided via airborne bathymetric LiDAR.Bathymetry has an inherent, under-utilised spatial element that can be exploited to improve SDB accuracy through application of spatial prediction techniques.We complement a previous empirical SDB prediction study for Irish coastal waters, through the application of linear regression, a non-spatial predictor and regression kriging, a spatial predictor.Here prediction accuracy is the focus, as similarly assessed in Monteys et al.Prediction uncertainty accuracy assessments require more sophisticated predictors, using say, Bayesian constructions.Thus, for this study our key objectives can be summarized as follows:Statistically determine the best satellite-derived predictors of bathymetry for each satellite product through linear correlation and regression analyses."Compare LR and RK for their SDB prediction accuracy, together with the significance of each model's parameters found from.Evaluate the importance of integrating seabed type and turbidity on prediction accuracy.Suggest steps to upscale to encompass an entire coastal bay in North Atlantic waters.The paper is structured as follows.Section 2 introduces the study area and the study data sets.Section 3 defines the image processing approach and the formation of image-based predictor variables, the statistical analyses to retain only the most informative predictor variables, and describes the two study prediction models, LR and RK.Section 4 presents the results of the statistical analyses for the predictor variables and the SDB predictive performances of LR and RK.Section 5 highlights the main findings, the implications of this study and is followed by our conclusion.Tralee Bay is located on the west coast of County Kerry, Ireland.Several small rivers feed into the bay through the town of Tralee and the River Lee, a large river also feeds into the bay increasing turbidity in the area surrounding the mouth.Tralee bay is representative of many of the coastal embayments on the Irish west coast where rivers enter the North Atlantic.Incorporating imagery from different satellite platforms enabled an investigation of the influence of image resolution, offering a range of spatial resolutions, spectral resolutions, temporal resolutions and radiometric resolutions.As this research was initiated prior to Sentinel 2a becoming fully operational, Landsat 8 was the primary open data set utilised in our tests.Third Party Mission imagery was provided by the European Space Agency, which was available under license, and this enabled more tests than utilising the open data alone.The final satellite datasets selected for the project were Landsat 8, RapidEye and Pleiades - all multispectral satellite data sets with extensive archive coverage for Ireland.For each Satellite data source, the choice of image was based on the following criteria:Extent of cloud cover over and near the study area,Visible effects of sun glint over water,Visible effects of turbidity within bay.Date of Image acquisition.Tidal level during image acquisition.Cloud cover was the most significant limiting factor in the selection of satellite data.For example, in 2015 of the 69 Landsat 8 scenes captured over the survey area, only two dates were considered ‘cloud free’ and warranted further consideration.Data on tidal level was obtained from Castletownbere Tide Gauge which is part of the Irish National Tide Gauge Network and 
located approximately 65 km south of Tralee bay.Considering the above criteria, an optimal image from each satellite was chosen, the details of which are listed in Table 2.The Optimal images are displayed in Fig. 1.Satellite imagery available from free and commercial sources are generally available with varying degrees of pre-processing.The degree of processing applied to each image can range from raw, uncorrected data up to a level where all possible corrections have been applied and the secondary data generated.To ensure fair comparisons between each multispectral image source, it was important that each image was processed using the same technique.For this reason, the Landsat 8 data used in this report was processed to ‘Level 1 T’, RapidEye was processed to ‘3A’ and Pleiades to ‘ORTHO’ level – all of which are prior to application of atmospheric correction.Each data source used a differing naming convention to indicate the processing level, however, processing levels can be assumed as equivalent in terms of how raw data from each source was converted to absolute radiance with precision terrain-corrections.During radiometric correction for each data source, radiometric artefacts and relative differences between bands were detected and corrected.For each satellite source, the data was converted to absolute radiometric values using calibration coefficients developed specifically for that satellite.Each data source was geometrically corrected using accurate Digital Surface Models and ground control points.The only difference in the methodology used for geometric correction between the data sources was in the kernel type used during resampling.Unlike Landsat 8 Level 1 T data and RapidEye 3A data which use a Cubic Convolution Resampling Kernel, Pleiades ORTHO is resampled using a spline kernel.Ground reference bathymetry data for Tralee Bay was acquired between 2008 and 2014 by the INFOMAR program.In 2008 Tenix Laser Airborne Depth Sounder carried out a LiDAR survey of the bay covering most of the bay and at 200% coverage to allow multiple passes over the same area.Data was processed using Tenix LADS proprietary hydrographic software and tidally corrected using local tide gauges.Category Zone of Confidence values are used to indicate the accuracy of data presented on navigation charts.The resulting dataset can be classified to CATZOC B type and the survey report is provided as supplemental material with this paper.The seabed between Kerry Head and Brandon Point was mapped via multibeam SONAR in 2009, 2011 and 2014 by the Celtic Voyager.The RV Geo, RV Keary and Cosantóir Bradán mapped the shallower waters along the coast of Tralee Bay in 2014.Sonar data meets IHO order 1 specifications with an overall vertical error of <2% water depth and is classified as CATZOC type A1.Further data is available on seabed-type, where four classes characterizing different seabed properties in Tralee Bay may help explain the variation in water depth.Seabed information was derived from the seabed geological maps and databases published by Geological Survey Ireland.These maps, produced by interpreting multibeam bathymetry and backscatter data, inform about seabed-type and geomorphological factors.Data points for Tralee bay showing similar characteristics were grouped into four discrete classes.Sediment samples were used to label these classes with geological descriptors."Hardground and Coarse account for 35% and 35.1%, respectively of the Bay's seafloor.Fine-grained sediments account for over 29.9%.The difference 
in fine-grained sediment can primarily be attributed to two distinct backscatter acoustic signatures, which are typically related to sediment properties.“Fine sediments II” could be finer grained sediments than “Fine sediments I”, but due to insufficient sediment samples to confirm this trend these were left as undifferentiated.In addition, it is always possible to use the coordinate data to help explain variation in water depth.Thus, seabed-type and the coordinates, together with satellite derived data are all employed as potential predictors of bathymetry using the study models.Each of the following steps used to process the data were undertaken in the open source R statistical programming language version R 3.5.1.In this paper we applied Dark Object Subtraction atmospheric correction, since the absence of thick clouds casting shadows over deep water eliminated the possibility of applying the method proposed by Hernandez and Armstrong – previously identified as optimal when atmospherically correcting imagery for deriving bathymetry.This empirical method applies a LR to relate known depth measurements to the SDRD values.Using this method, SDRD maps were generated by calculating the log ratio of the blue and green bands of the recorded image.Here multiple derivations of SDRD were found using the Landsat 8, RapidEye or Pleiades imagery, where in turn, different spatial filters, different pre-processing steps, different atmospheric corrections and different multispectral band combinations were used.This resulted in a total of 23 different formulations of SDRD for use as potential predictors of bathymetry in each of the study prediction models.Water turbidity can have a significant impact on water leaving radiance and thus the derived depth.Water turbidity can result in higher water-leaving radiances across the visible and Near Infrared portions of the spectrum, overestimating depths in shallower areas and underestimating depths in deep areas.The geo-referenced SDRD and NDTI values were then combined with the INFOMAR bathymetric LiDAR data.To further ensure no anomalies were introduced into the analysis by including data over land or in areas prone to high degrees of turbidity such as the river mouth, all data above the high-water mark as defined by Ordnance Survey Ireland vector shapefiles and all ground reference data with elevation values above ground level were removed.The full study data set thus consisted of a single response variable together with six distinct predictor variables, as detailed in Table 4.The spatial resolution of the full study data was 5 m with n = 4,464,329 observations.Fitting prediction models to such a massive spatial dataset presents a problem computationally, and for this reason, the study prediction models were specifically chosen to reflect this.To better approximate a smaller ground-truth dataset and demonstrate the utility of SDB for application anywhere, use of the full data set was unnecessary, and as such, the full data set was sub-sampled via random sampling to a smaller, more manageable size.Furthermore, as it was necessary to objectively evaluate the prediction models, the decimated data set was split into a calibration and validation data set, with a 40:60 split, where the specified split was judged to provide reasonably well-informed model calibrations but not at the expense of too few validation sites.This resulted in a decimated data set size of n = 4462, a calibration data set of size of n = 1768 and a validation data set size of n = 2678.In 
addition, it was considered inappropriate to attempt to predict LiDAR-B at depths below 12 m because the imagery data will not accurately represent such depths in all the satellite scenes evaluated for a valid cross-comparison exercise.In this respect, the calibration and validation data sets were further processed to remove observations with LiDAR-B values deeper than 12 m.This resulted in a revised decimation data set size of n = 3041, a revised calibration data set of size of n = 1214 and a revised validation data set size of n = 1827.These final data sets are mapped in Fig. 3."It is stressed that this study's reported results were representative of numerous explorations with different data decimations and different randomly-sampled calibration and validation data sets, where the decimated data sets were allowed to vary in size from 0.05% to 1% of the full data.For the statistical analyses, objectives were to determine the strongest relationships between LiDAR-B and: each of the SDRD variables derived from the Landsat 8, RapidEye or Pleiades products; the NDTI from the satellite products; seabed-type; and the coordinates.This was achieved through basic assessments of: normality, to gauge where a Box-Cox transform is appropriate; linear correlations and associated scatterplots; conditional boxplots for categorical variables, LS8-NDTI, RE-NDTI, PL-NDTI and Seabed-type; and ‘in-sample’ non-spatial and spatial LR fits.For assessment, ‘in-sample’ LR fits are opposed to ‘out of sample’ LR fits where they are calibrated with the calibration data to fit ‘out-of-sample’ at the validation data sites.Parameters of the non-spatial LR were estimated through ordinary least squares, whilst the parameters of the spatial LR were estimated using restricted maximum likelihood to account for a spatially-autocorrelated error term.The OLS and REML LR fits were conducted using the linear mixed model function in the R nlme package, where model fit statistics of R2 and AIC are reported for comparison.The results of these statistical analyses were used to determine the final predictor variable sub-sets for retention in the two ‘out-of-sample’ prediction models, where comparisons of prediction accuracy with respect to the predictor variables from different imagery products were undertaken.Observe that REML LR fits are computationally intensive, but with small calibration data sets they can provide directions and insights for predictor variable retention for much larger calibration data sets, leading up to the full data set.The study prediction models consist of LR and RK only, both of which were calibrated to predict LiDAR-B informed by some combination of the SDRD data, the NDTI data, seabed-type and the coordinates.For computational reasons, RK has been chosen over its close counterpart of kriging with an external drift, where RK and KED have the following properties.Both RK and KED are LR-based geostatistical predictors, designed to account for spatial autocorrelation effects in the error term via the residuals of a LR trend fit, where RK in this study, is viewed as a statistically sub-optimal but computationally simpler version of KED.RK is an explicit two-stage procedure, where the LR predictions are found first, then added to the ordinary kriging predictions of the LR residuals.For this study, RK is statistically sub-optimal since: its LR trend component is estimated via OLS, its residual variogram parameters are estimated via a weighted least squares model fit to the empirical variogram, and a local 
kriging neighbourhood of the nearest 20% of the residual data is specified.Conversely, KED can be viewed as statistically optimal, where the LR trend and residual variogram parameters are estimated concurrently using REML and provided a global kriging neighbourhood is specified.In this form, KED presents a problem computationally but is required for best linear unbiased prediction, whereas the chosen specifications of RK are each chosen to alleviate computational burden, whilst still providing tolerable levels of prediction accuracy.Theoretical details, equivalents and comparisons for explicit RK models and implicit KED models can be found in Bailey and Gatrell; Hengl et al."For this study's RK models, an isotropic exponential variogram model was specified; and the various components of an RK calibration were achieved using gstat and geoR R packages.We first report the results of the statistical analyses using the study calibration data set only.All variables displayed reasonable normality, so in the interest of model parsimony, no variables were transformed to such.For data relationships, the linear correlation coefficients and associated scatterplots are given in Figs. 5 and 6, while conditional boxplots are given in Fig. 6.The exploratory analysis also allowed us to assess the impact of the atmospheric correction on the relationship of LiDAR-B to SDRD - and in each case the atmospheric correction provided no worthwhile improvement in the r values.Additionally, the decrease in reflectance from single bands provided the weakest relationship.It was found that LS8_DOS_L_BG was the most strongly correlated Landsat 8 SDRD variable with LiDAR-B.Similarly, RE_DOS_3X3_L_BG and PL_DOS_3X3_L_BG provided the strongest correlations for RapidEye and Pleiades, respectively.As there was a high degree of collinearity among the SDRD predictors from each satellite product group, only the above variables were retained and used in any one LR/RK fit.In addition, the Northing coordinate was negatively and moderately correlated to LiDAR-B, and was also retained, reflecting the north-south orientation of Tralee Bay.From the conditional boxplots, LS8-NDTI, RE-NDTI, PL-NDTI and Seabed-type, could all strongly discriminate across the range of LiDAR-B values; and were thus, all worthy of retention.The results of the ‘in-sample’ OLS and REML LR fits are given in Table 5.The residual variograms for the three REML LR fits are given in Fig. 
7, which all displayed clear spatial dependence.These are also given with the WLS estimated residual variograms used in RK, where some degree of similarity between the WLS and REML variogram fits is expected.Observe that the parameters from the REML variogram could have been used in the RK fit instead of those from the WLS variogram, but the objective here is ultimately to provide computationally feasible solutions for large data sets.There was no evidence to suggest a non-spatial LR would suffice over a spatial LR for inference, or over RK for prediction.From Table 5, the RapidEye product provided the best SDRD and NDTI predictors in terms of the best fitting OLS LR model, but conversely yielded an increase in AIC of 1824.0–1732.8 = 91.2 units over the Pleiades REML model, which provided the most parsimonious LR fit.Landsat 8 provided the weakest OLS fit, but not the weakest AIC results.Lending weight to pursuing a spatial analysis, all REML LR fits provided large reductions in AIC over their OLS LR counterparts."Observe that Northings and LS8_DOS_L_BG were retained as they were still considered informative to ‘out-of-sample’ prediction, and it would still be considered a significant predictor in RK as RK's trend component is the ‘in-sample’ OLS LR fit.The prediction accuracy performance of the two prediction models and the three satellite products are summarized via the single-figure diagnostics in Table 6, together with plots and maps in Figs. 8 to 11.On viewing the diagnostics in Table 6, some clear trends emerge, where the most accurate model was RK with Landsat 8 products.For all three satellite products, RK always out-performed LR, both in terms of average bias and average prediction accuracy.Interestingly, with LR only, prediction using RapidEye followed by Pleiades, both outperformed prediction with the Landsat 8 products.However, when residual spatial information was considered with RK, this behaviour was reversed, where prediction was most accurate using Landsat 8 products, then Pleiades, then RapidEye.This behaviour appears unusual, but can in part, be explained by high prediction differences with RK informed by the Pleiades and RapidEye products - higher than that found with the corresponding LR model.However, for prediction using the Landsat 8 products, RK significantly reduced high prediction differences over its LR counterpart.Tentatively, this suggests that prediction using the relatively high-resolution Pleiades and RapidEye products is more prone to spatial anomalies, than that found with the comparatively low-resolution Landsat 8 products.These effects are more clearly seen in the observed versus predicted scatterplots of Fig. 8, where obvious outlying points were evident for RK using the RapidEye and Pleiades products.Prediction with the RapidEye products also resulted in impossible LiDAR-B predictions with both LR and RK.RK clearly performed better than LR as points are more clustered around the 45° line.These results were contrary to the in-sample results of Section 4.1 with the OLS and REML LR fits but should not be viewed as unusual given the assessment here was out-of-sample and the results were not always linear in behaviour.Further, differences in the in-sample LR results were often marginal.Our study therefore demonstrates that clearly, prediction model choice was always of more importance than satellite product choice.In terms of spatial performance, the observed data and the predictions are mapped in Fig. 
9, and the corresponding prediction differences are mapped in Fig. 10 for all six model/satellite product combinations.From Fig. 9, all three RK models appear to reflect the spatial characteristics of the observed LiDAR-B data reasonably well, but some impossible predictions occur in the shallows.The prediction difference maps clearly depicts where the RK models out-perform the LR models, especially in the western shallow areas.Finally, Fig. 11 plots the observed LiDAR-B data versus the prediction differences, where all three LR fits tend to over-predict in shallow waters but tend to under-predict in deep water.This characteristic disappears with all three RK fits.The extraction of bathymetric information from optical remote sensing data can be generally divided into two main approaches: empirical approaches and “physics-based” model-inversion approaches.Among the empirical approaches, one of the most commonly used is the band ratio regression model.However, in recent years, new studies have focused on enhancing empirical model performance, for example, through spatial rather than standard, non-spatial modelling.Following this trend, this study assessed an empirical modelling framework through the incorporation of spatial autocorrelation effects.This was specifically achieved via a REML estimated LR model for inference and also by an RK model for prediction.Both models were applied to a selection of satellite products each with different spatial and spectral resolutions in order to better constrain SDB prediction accuracy.The temporal offset between the images and the ground reference LiDAR ranged from a few months to a number of years and therefore this represents a potential error source and does not allow for a definitive comparison.Delays in finding cloud free images with minimal evidence of turbidity also influenced the temporal offset.However, the bathymetric LiDAR dataset was selected for this study as it provided complete coverage of the whole bay with overlaps for verification.It was considered sufficiently accurate for SDB comparison as similar temporal offsets between satellite imagery and reference LiDAR have been incorporated successfully in SDB studies before and when compared with subsequent localised SONAR surveys in later years for Tralee bay, it displayed no significant variation.The LiDAR dataset also enables testing of consistency across images, particularly regarding water depth intervals.In terms of SDB prediction performance, LR models using the RapidEye and Pleiades products showed smaller and more consistent prediction differences than that found with Landsat 8; however, Landsat 8 models seemed to work better than RapidEye and Pleiades models locally in the deeper parts of the bay.All three LR models tended to over-predict in shallow waters but tended to under-predict in deeper waters, but importantly, this was not the case for RK, where this prediction bias was not present.Conversely, for the RK models, Landsat 8 marginally outperforms RapidEye and Pleiades based on the prediction accuracy diagnostics and on the prediction difference plots.Performance was also assessed at different water intervals and as a general indication of the success of the methodology for the whole test site.In very shallow water depths the trends observed across all three satellite images indicate a similar over-prediction pattern generally increasing with depth.The pattern observed is inverse to the 4 to 12 m interval.The most plausible explanation for this effect is the degree of 
influence in the observed seafloor reflectance values.Reflectance values from the visible bands can carry significant reflected light from the seafloor contribution.Seafloor variability at the pixel scale can occur primarily due to changes in seafloor type, variations in slope or aspect; or when it is covered by algae or other non-geological factors.Local seafloor variation is present in the study area as observed, for instance, in the high resolution bathymetry images of the seafloor that appear with glacially shaped terrain characteristics.The reflectance response, at the pixel scale, from the three platforms are expected to differ substantially and are difficult to quantify.The same issue was reported by other studies using only LR models.Algae and rocky bottoms present a darker signal compared to deep water areas having an influence on the performance of the model.As the bay deepens the trend in model performance gradually changes from over-prediction to prediction values clustered around the 45° trend line with minimal prediction difference.The influence of the seafloor gradually diminishes and other factors linked to water properties might now play a more important role.In deeper waters the results show prediction differences gradually increasing towards negative values.This trend towards under-prediction possibly reflects a depth threshold where the contribution of the seafloor is negligible or absent.A similar depth limit has been reported in other studies carried out on the Irish coast using empirical methods but without a spatial component and also using Sentinel-2 data.This confirmatory evidence suggests that around this depth lies a critical limit for SDB prediction using non-spatial LR models in similar regions on the Irish coast.The central and deepest part of the study area, where water depths ranged between 8 and 12 m had generally low prediction differences for all three RK models, whereas for the LR models, relatively high prediction differences were present, representing a continued tendency to under-predict.This behaviour is reinforced in the observed versus predicted scatterplots for LR, where the scatterplot trends have a slope change when compared to that at 4 to 8 m depths.This slope change is most pronounced in the Landsat 8 LR model.This change can be attributed to the non-linear relationship between reflectance versus water depth as the plateau reflects a water depth threshold caused by the combination of an absence of seafloor component and maximum light penetration.The distribution of the prediction differences in the North East corner of the bay displayed high spatial variability, both with large negative and large positive prediction differences.This was true both for LR and RK.This high variability in prediction accuracy can be attributed primarily to local changes in seabed type between rock outcrops and fine-grained sediments.The influence of hardgrounds has already been described in other studies carried out on the Irish coast as a source of high prediction difference.The inclusion of seabed class in the prediction models helps to understand its influence on prediction accuracy and the local limitations in the overall bathymetry results.In general, the LR models exhibited large positive prediction differences around the edges of the bay, particularly in areas characterised by hardgrounds and coarse gravel.Fine-grained sediments presented lower prediction differences.For the spatial RK models, this non-conformity was partially addressed, however large 
prediction differences were still present due to local variability driven by seabed type. For further avenues of research, firstly an investigation making SDRD itself more spatially explicit would be worthwhile. Secondly, for upscaling the study results to the whole bay, tools providing cloud-based computing such as Google Earth Engine should be explored further, as demonstrated in Traganos et al. Computational savings could also be achieved via mathematical adjustments to the LR and RK models. On the other hand, Sentinel-2 data, with improved technical capabilities in comparison to Landsat-8 (from the visible to the Shortwave Infrared), becomes a potential dataset that could provide new advancements in the performance of SDB and in the generation of more detailed and accurate satellite derived bathymetry maps. In this study, methods for improving the accuracy of satellite derived bathymetry were explored using three satellite datasets and two linear prediction models, one non-spatial, the other spatial. For the satellite derived relative depth predictor variables, a total of 23 different constructions were evaluated, with different spectral band combinations, spatial filters and log ratios. Turbidity and seabed type were also assessed as predictors of bathymetry. By using LiDAR derived bathymetric maps as ground reference data, we can conclude that: all three satellite products provide robust and meaningful results to assess SDB prediction accuracy at different spatial and spectral resolutions in the test area, Tralee Bay. SDB predictions using Landsat 8 products showed the most accurate results when using the spatial RK model, but returned the largest prediction differences with the non-spatial LR model. Pleiades products returned good results with both the LR and the RK models, suggesting a certain suitability for SDB at high spatial resolutions. In all cases, the spatial RK model was able to constrain SDB prediction differences as water depth increased, whereas the non-spatial LR performed poorly in this respect. | Satellite derived bathymetry (SDB) enables rapid mapping of large coastal areas through measurement of optical penetration of the water column. The resolution of bathymetric mapping and achievable horizontal and vertical accuracies vary, but generally all SDB outputs are constrained by sensor type, water quality and other environmental conditions. Efforts to improve accuracy include physics-based methods (similar to radiative transfer models e.g. for atmospheric/vegetation studies) or detailed in-situ sampling of the seabed and water column, but the spatial component of SDB measurements is often under-utilised in SDB workflows despite promising results suggesting potential to improve accuracy significantly. In this study, a selection of satellite datasets (Landsat 8, RapidEye and Pleiades) at different spatial and spectral resolutions was tested using a log ratio transform to derive bathymetry in an Atlantic coastal embayment. A series of non-spatial and spatial linear analyses was then conducted and their influence on SDB prediction accuracy was assessed, in addition to the significance of each model's parameters. Landsat 8 (30 m pixel size) performed relatively weakly with the non-spatial model, but showed the best results with the spatial model. However, the highest spatial resolution imagery used – Pleiades (2 m pixel size) – showed good results across both non-spatial and spatial models, which suggests a suitability for SDB prediction at a higher spatial resolution than the others.
In all cases, the spatial models were able to constrain the prediction differences at increased water depths. |
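As a compact illustration of the empirical workflow described in the study above (dark-object-subtracted blue/green log-ratio SDRD as a predictor, a linear regression trend, and regression kriging of the LR residuals with a WLS-fitted exponential variogram), the following R sketch shows how such a pipeline could be assembled with the nlme and gstat packages. The data frames and column names (calib, valid, blue, green, LiDAR_B, NDTI, Northing, Seabed, and the projected coordinates x and y), the NDTI column being precomputed, and the initial variogram values are all illustrative assumptions rather than the study's actual code or settings.

```r
library(sp); library(gstat); library(nlme)

# Satellite-derived relative depth (SDRD): crude dark-object subtraction
# followed by a blue/green log ratio (one of many possible formulations).
dos <- function(band) pmax(band - quantile(band, 0.01, na.rm = TRUE), 1e-6)
calib$SDRD <- log(dos(calib$blue) / dos(calib$green))
valid$SDRD <- log(dos(valid$blue) / dos(valid$green))

# 'In-sample' spatial LR via REML, sketched with gls() and an exponential
# spatial correlation structure (the exact nlme call used in the study may differ).
spatial_lr <- gls(LiDAR_B ~ SDRD + NDTI + Northing + Seabed, data = calib,
                  correlation = corExp(form = ~x + y), method = "REML")
AIC(spatial_lr)

# Regression kriging, stage 1: OLS trend fitted on the calibration data.
trend <- lm(LiDAR_B ~ SDRD + NDTI + Northing + Seabed, data = calib)
calib$res <- residuals(trend)

# Promote both sets to spatial objects for the variogram and kriging steps.
coordinates(calib) <- ~x + y
coordinates(valid) <- ~x + y

# Stage 2: WLS-fitted isotropic exponential variogram of the LR residuals, then
# ordinary kriging of the residuals with a local neighbourhood of the nearest
# 20% of the calibration points. Initial variogram values are placeholders.
v_emp <- variogram(res ~ 1, calib)
v_mod <- fit.variogram(v_emp, vgm(psill = var(calib$res), model = "Exp",
                                  range = 1000, nugget = 0))
ok_res <- krige(res ~ 1, calib, valid, model = v_mod,
                nmax = ceiling(0.2 * nrow(calib)))

# RK prediction = LR trend + kriged residual; simple prediction-difference diagnostics.
valid$RK_pred <- predict(trend, newdata = as.data.frame(valid)) + ok_res$var1.pred
pred_diff <- valid$LiDAR_B - valid$RK_pred
c(bias = mean(pred_diff), rmse = sqrt(mean(pred_diff^2)))
```

Replacing the WLS-fitted residual variogram with one estimated jointly with the trend by REML would move this sketch from the computationally cheaper RK formulation towards the KED formulation discussed in the study.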
349 | Extracting actionable knowledge from social networks with node attributes | Data mining systems seek to extract interesting patterns and models from data. However, such systems suffer from two major drawbacks, namely interpretation and quality: the former means that the operability of the induced results is not transparent to domain experts, while the latter means that the results cannot be integrated seamlessly into the business domain. Actionable Knowledge Discovery (AKD) is a paradigm shift from data-driven data mining to domain-driven data mining and aims to discover knowledge which not only is of technical significance, but also satisfies business expectations and can be immediately applied to operations in the corresponding domain. The AKD concept can be illustrated by an example in CRM involving a bank loan system. Data mining may answer the question, "What is the probability that a customer pays back his loan?", but AKD may answer the question, "How can we increase the probability that a customer pays back his loan at the lowest possible cost?". The action is a new tool in the AKD area that explicitly describes how changes in data that can be influenced divert instances from an undesired status to the desired one. In social networks, meaningful actions are needed to help companies in decision-making. As an instance, consider a social network of millions of individuals who explicitly or implicitly are members of different groups. Extracting actions from such a network could suggest potential changes in the data which, when applied, could affect the group membership of individuals. To illustrate this concept, consider the sample network shown in Fig. 1, in which individuals wearing a white shirt are buyers of a specific product. Assume a predictive method predicts that Jack has only a 20 percent chance of buying the product, i.e.
he is not a member of the buyers' group. An action can be like this one: if the attribute value of gem for node u changes from 2 to 1, then Jack's chance of buying the product will increase to 60 percent. To explain in more detail, the suggested change will strengthen the relationship between Jack and u and could thereby affect Jack's group membership. This effect is a result of the homophily phenomenon in social networks. In particular, labels can be propagated as a result of changes in the attributes of individuals. It is obvious that such knowledge is more desirable in business environments, where domain experts usually become confused about what they are supposed to do with DM patterns. Existing action mining methods rely on simple data such as tables describing a collection of independent instances. However, in social networks, relationships enable one individual to influence another, so ignoring them in the action mining process would lead to missing some profitable actions. In this paper, we develop effective methods for mining actions from a social network. The problem is as follows: given a social graph including a set of labeled nodes and nodes' attributes, a labeling method A over the graph, a desired class as the goal label, the space of possible changes and their associated costs, and a specific node x, the aim is to find a cost-effective action to change x's label to the goal value. The action identifies an optimal set of changes in the attribute values of nodes. To solve the problem, we need to apply the given labeling method A over the graph and then vary the attribute values of the input graph such that the changes incur a minimum cost while A predicts the goal label for x. In this regard, we develop an algorithm based on random walks that combines the information from the network structure with node attributes. The proposed approach is as follows: in the first phase, we apply Zhou's method to learn a random-walk-based model which assigns class labels to nodes with associated probabilities. In the second phase, to extract cost-effective actions, we need to explore the space of changes in the graph. We formulate the problem as an optimization function, where the objective is to learn changes in nodes' attributes such that a random walk starting at x is more likely to visit the nodes which have the goal label, while minimizing the cost of the changes. We develop an algorithm, MANA, which exploits the stochastic gradient descent approach to optimize the objective function iteratively. We also provide two extensions to improve the efficiency and scalability of the proposed algorithm. Our experiments on Facebook, Google+, DBLP and Hep-th networks show that our approach clearly outperforms state-of-the-art approaches. The rest of the paper is organized as follows: Section 2 contains related work. Section 3 presents the preliminaries, definitions, and terminology needed for later sections. In Section 4 we introduce our basic algorithm MANA and two extensions to improve the algorithm. Section 5 presents the experimental results obtained on several real datasets. Finally, we conclude the paper in Section 6. Action mining is part of a subdomain of data mining called Actionable Knowledge Discovery, which is concerned with finding knowledge which not only is of technical significance but also satisfies domain expectations, and can be applied to operations with minimal further effort from domain experts. Action mining methods find an optimal action for a given instance or find rules for different groups of similar instances
which can afterwards be used to predict actions. Ras and Wieczorkowska defined the concept of action rules and proposed a method to produce the rules using pairs of classification rules. Afterwards, this work was developed to mine action rules without pre-existing classification rules through an Apriori-like algorithm, and to handle big data with an algorithm based on the Map-Reduce framework as well as an algorithm based on the Hadoop Map-Reduce and Apache Spark frameworks. Su, Mao, Zeng, and Zhao introduced actionable behavioral rule mining, which aims to extract action rules from object-based data for affecting an entity's behavior. The data includes observations of an entity instead of members of an entity. In addition, there is a current observation, and each change of a proposed action rule needs to change an attribute value of the entity from the current observation. Then, they extended the definition of a rule's support to consider a non-uniform contribution for each instance which supports a rule. Su et al. presented MABR to find action rules from object-based data in a framework of support and profit. Zeng et al. proposed a linear-function-based observation-weighting method which handled the problem of non-uniform contributions of different instances to the support of an action rule. In mining such action rules, it often occurs that different rules suggest the same acts with different expected profits; these are called conflicting rules. To resolve the conflicts, Su, Zhu, and Zeng utilized a rule ranking procedure for selecting the rule with the highest profit. To guarantee the reliability of the actionable behavioral rules, the mentioned approaches need to find frequent action sets. However, this results in high time complexity. To handle this problem, they proposed a decision-tree-classifier-based mining method. While an action rule is a set of changes that need to be made for achieving the desired result, meta-actions are the actions that need to be executed in order to trigger the corresponding changes. Ranganathan, Allen, Arunkumar, and Angelina proposed a new efficient system to generate meta-actions by implementing specific action rule discovery based on a Grabbing Strategy and applying it to Twitter data for semantic analysis. Ras, Tarnowska, Kuang, Daniel, and Fowler proposed a strategy for automatic meta-action mining from text data. Yang, Yin, Ling, and Pan proposed a method which first learns a decision tree from data, then for each object finds the leaf node in which the object falls, and, for every other leaf, computes the net profit of moving the object to that leaf. Finally, the leaf node with the maximum net profit is selected, and the changes necessary for the transition of the object from the current node to that node are returned as the recommended action. Afterwards, to find the optimal action, the method was extended to post-processing an ensemble of trees and a fuzzy decision tree. Cui, Chen, He, and Chen presented a framework to post-process any Additive Tree Model classifier to extract the optimal action and formulated the problem as an integer linear program. Lu, Zhicheng, Yixin, and Xiaoping presented a state space graph formulation to model the problem as a well-studied combinatorial optimization problem that can be solved by graph search. To strike a good balance between search time and action quality, they presented a sub-optimal heuristic search. All of the proposed methods assume that instances are independent. However, in many real-world domains, there is a network of relationships between the
instances. Ignoring such relationships in the action mining process would lead to missing some profitable actions. In this paper, to discover actions from social networks, we incorporate the relationships into the action extraction process. In this section, we present the notations used throughout the paper and then explain a well-known node classification method. We represent the graph G as an adjacency matrix W, where wuv is the weight of the edge (u, v) if (u, v) ∈ E, and 0 otherwise. We are given a set of labeled nodes L ⊂ V. We represent labels by a vector y such that yu = 1 or −1 if node u is labeled with the goal label or another label, respectively, and 0 if u is unlabeled. An action Γ is a set of potential changes in the values of nodes' attributes, where each attribute of a node occurs at most once. A change in the value of attribute i ∈ Φ of the node u from fui to f′ui is a structure where f, f′ ∈ Dom and f is the observed value for i. It corresponds to changing the value of attribute i from f to f′ by means of an external intervention. The problem aims to find attribute data F′ such that the change from F to F′ incurs the minimum cost, measured by the cost function C, while applying A to F′ results in the desired model, i.e. A predicts the goal label for x. Based on the above problem definition, we denote an action by Γ: F → F′, which suggests a set of changes in the input attribute data. In the problem statement, any learner with a closed-form A(F′) can be plugged into the objective function. For cases with a continuous closed-form function, the optimal action can be extracted by gradient-based methods. It is argued that classification in real applications is not enough and that actions are needed to reclassify some instances to the desired class, which could be interesting in the corresponding domain. In the area of social network research, the wide range of applications of node classification motivated us to focus on the extraction of useful actions based on node classification. Based on such considerations, our problem can briefly be defined as follows: given the graph G including a set of labeled nodes and nodes' attributes as an input network, a node classification method A, a node x, the space of changes and their associated costs, and a goal label g, we aim to find a cost-effective action for the node x which maximizes its membership probability in a more desirable group. We use Zhou's method to classify nodes over the network. There are several considerations for choosing this method: it is one of the most successful algorithms for node classification; it is random-walk based; it is based on label propagation; and it learns a global labeling function over the graph with provable convergence guarantees. The method is based on the idea that the probability of labeling a node u with label l is the total probability that a random walk starting at u visits a node labeled l. Assume parameter r specifies the relative amount of label information taken from a node's neighbors versus its initial label information. Let Q be a transition matrix defined as D^−1/2 W D^−1/2, in which D is a diagonal matrix whose i-th diagonal element equals the sum of the elements in the i-th row of W. The method is summarized in Algorithm 1. The model constructed by the node classification method is exploited in the action extraction process, as we present in the next section. A simple solution for our problem is searching the possible change space exhaustively. Consider the sample network shown in Fig. 2, in which the labels of nodes a and e are known. Edges' weights are computed by the equation in Fig. 2 and the network is classified by Zhou's method, shown in Fig. 2.
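As a point of reference for how such a classification can be computed, the following is a minimal R sketch of the iterative label-propagation scheme described above (Zhou et al.'s local-and-global-consistency method); the function and variable names are illustrative rather than taken from the paper.

```r
# Minimal sketch of Zhou et al.'s label-propagation classifier, assuming a
# weighted adjacency matrix W (no isolated nodes) and a label vector y with
# entries +1 (goal label), -1 (other labels) and 0 (unlabeled).
zhou_label_scores <- function(W, y, r = 0.5, n_iter = 100) {
  d <- rowSums(W)
  Q <- diag(1 / sqrt(d)) %*% W %*% diag(1 / sqrt(d))   # Q = D^-1/2 W D^-1/2
  f <- y
  for (t in seq_len(n_iter)) {
    # each node blends label information received from its neighbours (weight r)
    # with its initial label information (weight 1 - r)
    f <- r * (Q %*% f) + (1 - r) * y
  }
  as.vector(f)   # sign(f) gives the predicted label, |f| a confidence score
}
```

Running this on the toy network of Fig. 2 yields a score per node whose sign is the predicted label; the paper's Algorithm 1 may differ in details such as the final normalisation of the scores.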
The number in the square box near each node is its label probability. The sign of the number is the predicted label of that node. Assume that the aim is to explore the space of changes in the attribute data to extract the optimal action for node c. It can be seen that node c is predicted to belong to the positive class with probability 0.4. Let |f′ − f| be the cost function for each attribute. A candidate action is Γ: (…), which increases the label score of node c to 0.506 and costs 2. This formulation is suitable for our problem, in which dropping the constraint makes the resulting relaxed problem easier to solve. The objective function is non-convex. We solve it using a stochastic gradient descent (SGD) based approach, by first calculating the derivative of L with respect to F′ and then updating the attribute data, and hence the adjacency matrix W, along the negative direction of the derivative. The above optimization solution for Mining Actions from social Networks with node Attributes (MANA) is summarized in Algorithm 2. We present some iterations of the MANA algorithm in Fig. 3; it directly corresponds to the case depicted in Fig. 2. The time complexity of the guiding-set algorithm is O(…). Line 1, which uses the iterative method to extract column x of P, takes O(…) time. The first loop, line 2, is executed n times. The rest of the algorithm takes O(…) time to find the top-ns largest values among the nodes. The time complexity of the rest of Algorithm 2 is O(…), where k1 is the number of iterations to convergence and k2 is the number of iterations of the iterative method for computing the stationary distribution of a random walk starting at node x. The inner loop, line 6, is executed |S| times. For line 7, it would be too expensive (requiring a full matrix inversion) to compute P = (I − rQ)^−1; instead, we can iteratively extract the corresponding column of P, and this line then takes O(…) time, where k2 is the number of iterations of this method and m is the number of edges. As a result, the overall time complexity is O(…). In practice, it is often not necessary to update the whole feature data of the network; instead, we can focus on the parts of the network which play a more important role in guiding random walks starting at x. In particular, the updates can be restricted to the neighborhood of node x as well as the neighborhoods of the nodes in the guiding set S. More precisely, to update the feature data F′ and compute the derivative of L, we need to operate only over the intended neighborhood described above. Algorithm MANA_N takes O(… + |N|^2) time, where |N| is the number of nodes in the intended neighborhood. Therefore, the overall time complexity of MANA_N is O(…). Since the algorithm operates on only part of the network, it requires nearly a constant amount of additional space compared with MANA; that is, the overall space complexity of the algorithm is O(…).
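To make the gradient-based search described above more concrete, below is a heavily simplified R sketch of this kind of update loop, reusing zhou_label_scores() from the previous sketch. It is not the paper's Algorithm 2: the true objective is not reproduced here, so a generic surrogate is used in which the label score of the target node x is traded off against a quadratic change cost via a parameter lambda; the Gaussian edge-weight function, the names and the numerical gradient are all illustrative assumptions.

```r
# Illustrative gradient-style search over node attributes for a target node x.
edge_weights <- function(Feat, E, sigma = 1) {
  # Gaussian similarity of attribute rows for each edge in the edge list E.
  W <- matrix(0, nrow(Feat), nrow(Feat))
  for (k in seq_len(nrow(E))) {
    u <- E[k, 1]; v <- E[k, 2]
    w <- exp(-sum((Feat[u, ] - Feat[v, ])^2) / (2 * sigma^2))
    W[u, v] <- w; W[v, u] <- w
  }
  W
}

surrogate_objective <- function(F_new, F_old, E, y, x, lambda = 1) {
  score_x <- zhou_label_scores(edge_weights(F_new, E), y)[x]
  cost    <- sum((F_new - F_old)^2)        # quadratic change cost
  cost - lambda * score_x                  # minimise cost minus weighted label score
}

mana_sketch <- function(F0, E, y, x, lambda = 1, step = 0.05, iters = 50, eps = 1e-3) {
  F_cur <- F0
  for (it in seq_len(iters)) {
    grad <- matrix(0, nrow(F0), ncol(F0))
    for (i in seq_len(nrow(F0))) {
      for (j in seq_len(ncol(F0))) {
        # forward-difference approximation of the gradient (illustration only)
        F_eps <- F_cur
        F_eps[i, j] <- F_eps[i, j] + eps
        grad[i, j] <- (surrogate_objective(F_eps, F0, E, y, x, lambda) -
                       surrogate_objective(F_cur, F0, E, y, x, lambda)) / eps
      }
    }
    F_cur <- F_cur - step * grad           # move against the gradient
  }
  F_cur                                    # suggested new attribute values F'
}
```

In the actual MANA and MANA_N algorithms the derivative of L is computed analytically and the updates are restricted to the guiding set S and an intended neighbourhood, which is what keeps the method tractable on large graphs.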
We performed experiments on real-world network datasets to evaluate the performance of the proposed algorithms, not only with respect to the parameters but also in comparison to the major competing methods, in terms of effectiveness, efficiency, scalability and other measures, as we describe in the following. Datasets: we used the network datasets described in Table 1. Friendship networks: in these networks, nodes are users and edges indicate friendship relations. We consider the following networks: Facebook, where labels are locales, and Google+, where labels are places. Co-authorship networks: in these networks, the nodes are authors and an edge exists between two authors if they have co-authored the same paper. We consider the following networks: High energy physics theory (Hep-th) and DBLP. These networks consist of several disconnected components; we report our experiments on the two largest connected components. In addition, for every node u of the networks we generate the following features: the number of papers u authored; the number of papers u authored in the goal conference; the number of papers in which u is the first author; the time since u authored the last paper; the time since u last authored a paper in the goal conference; the number of time slices in which u authored a paper; the number of conferences/journals in which u authored a paper; the number of conferences/journals in which u was the first author; the number of citations of u; the number of citations of u in the goal conference; the number of papers cited by u; and the number of papers in the goal conference cited by u. The datasets are all available on the web. In order to produce a binary input label, in our experiments we consider one label value as positive and the others as negative. As explained before, Pr′ is computed using the modified network W′, whereas Pr is computed using the input network; both are computed using the iterative form of Zhou's method. Effectiveness estimates how effective an action is at changing a node's label to the goal label. Cost: the optimal action is the one whose cost (as defined in Section 3.2) is minimal. Validity: how likely a node is to be correctly labeled as the class suggested by an action. To evaluate this, the underlying classifier can be rebuilt, incorporating the structural changes in the network suggested by the action. Time: this measures the average running time of the algorithms per action. Cost function: as is the case with most action mining algorithms, the cost function plays an important role. We experiment with the following cost functions: ck =2, … The experiments are performed on a Windows machine with four 2.5 GHz Intel cores and 3.9 GB of RAM. In our experiments, we hide the labels of 70% of the labeled nodes in each network. First, we perform a classification method to predict label probabilities; then we choose 50 negative nodes randomly and apply the proposed method to infer actions. We evaluate several aspects of our algorithm. Parameters λ and γ: we examine how different choices of the parameters affect the performance. We only report the results on Facebook because the results on the rest of the networks show the same trend. We vary the parameter λ and examine how the parameter affects the performance of algorithm MANA_N.
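As an illustration of how the effectiveness and cost measures defined above can be computed for a candidate action, the following small R helper contrasts the label score of the treated node under the modified and the original networks, reusing the zhou_label_scores() and edge_weights() sketches above; as before, the names and the exact normalisation are assumptions rather than the paper's code.

```r
# Effectiveness of an action for node x: the gain in x's label score when the
# suggested attributes F_new replace the original attributes F_old, together
# with a simple absolute-difference cost. E is the edge list, y the label vector.
action_effectiveness <- function(F_new, F_old, E, y, x, r = 0.5) {
  p_new <- zhou_label_scores(edge_weights(F_new, E), y, r)[x]
  p_old <- zhou_label_scores(edge_weights(F_old, E), y, r)[x]
  p_new - p_old
}
action_cost <- function(F_new, F_old) sum(abs(F_new - F_old))
```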
The average effectiveness of the actions increases as the parameter λ increases. This is because a high λ ensures that random walks on the graph will be more likely to visit the nodes which have the goal label. In addition, an increase in λ leads to an increase in cost. The larger the λ, the more dominant the average effectiveness becomes compared to the cost. We also compare the proposed methods with Yang et al.'s approach, ILP and Feature Tweaking. To our knowledge, these methods are the state of the art in cost-effective transductive action mining. Since the competing methods do not use any relations at all, we apply them to a dataset consisting of nodes described by a set of attributes including node information. Tables 2 and 3 show a comprehensive comparison in terms of running time and solution quality, measured by average cost, effectiveness, and validity. The reported results are the average performance over the different settings stated before. From Tables 2 and 3, we make the following observation: our methods always attain the minimum cost and largely outperform all of the competing methods in terms of quality, which is in line with our expectations. In fact, the proposed methods achieve this for the following reasons: first, we incorporate structural properties of the social networks as well as node features in the action extraction process; second, by formulating the learning problem in terms of random walks on the graph, we directly use the structure of the graph to aid the action mining task; third, using matrix methods, our method is guaranteed to find the optimal solution. Action mining is an effective way for actionable knowledge discovery. Up to now, many methods have been developed in this field. Decision trees, classification rules and association rules are three types of learning machines that have been used for this task so far. Current state-of-the-art research in the action extraction field mainly focuses on simple data, where inherent relationships between data objects are ignored. In this paper we introduce action mining in social networks, where such relationships are incorporated in the action mining process. Action mining is formulated as an optimization problem which in turn is solved using a stochastic gradient descent approach, where the underlying node classification is a random-walk-based method. The aim is to explore cost-effective changes in node attributes which result in desired changes in the labels of intended nodes. To find a cost-effective action for an intended node, we take a random walk from the node and decide how to vary the values of attributes, based on how nodes in its neighborhood are labeled, in order to guide the random walk on the graph to be more likely to visit the nodes which have the desired label. We develop the algorithm MANA to directly learn an estimation function for the values of nodes' attributes. We also provide two heuristic algorithms to improve the efficiency of the proposed algorithm. Experiments on real-world social data demonstrate that the results are significant and that our approach outperforms state-of-the-art competing methods in terms of several measures. Mining social networks generates descriptive as well as predictive models/patterns that cannot directly be applied in the related domain. Extracting actionable patterns from social networks could emerge as an explicit response to this need. As our current paper is a first step towards extracting actionable knowledge from social networks, there are several issues which
could be discussed from different perspectives. First, even though our focus is on the adoption of a selected node classification method, many other learning methods in social networks, including community detection, link prediction, influential user detection, and so on, could potentially be adopted and plugged into the framework with the corresponding objective functions, which in turn could be solved using stochastic gradient descent or alternative optimization methods. Second, since the underlying data is a graph, the proposed approach could be extended to a variety of non-social complex networks, including transportation, electrical transmission, wireless sensor, and wired/wireless signal transmission networks, as well as many others. Third, considering the problem space, in the current work we extract actions for a single intended node, although the problem space might be very large. The approach could be extended to more general situations where actions are extracted for a group of intended/selected nodes. Fourth, we assume that the network structure is static; that is, nodes and edges are not added or removed, and the values of attributes as well as the weights of edges are not independently changed. Obviously, dynamism in the network imposes several challenging issues which require independent research investigation as future work. Finally, action extraction from social networks has its own limitations. Evaluating the quality of the extracted actions in a real-world scenario requires that the actions be applied in the real network. However, this is only possible for the network's owner. Moreover, exhaustive exploration of the space of possible changes in the network data is practically infeasible. That is, considerable research investigation is required to devise heuristics that reduce the problem search space while preserving the quality of actions for real-world network applications. Nasrin Kalanat: Conceptualization, Funding acquisition, Formal analysis, Writing - original draft. Eynollah Khanjari: Conceptualization, Formal analysis, Writing - original draft, Writing - review & editing. | Actionable Knowledge Discovery has attracted much interest lately. It is almost a new paradigm shift toward mining more usable and more applicable knowledge in each specific domain. An action is a new tool in this research area that suggests some changes to the user to gain a profit in his/her domain. Currently, most action mining methods rely on simple data which describes each object independently. Since social data has a more complex structure due to the relationships between individuals, a major problem is that such structural information is not taken into account in the action mining process. This leads to missing some useful knowledge and profitable actions. Consequently, more effective methods are needed for mining actions. The main focus of this work is to extract cost-effective actions from social networks in which nodes have attributes. The actions suggest optimal changes in nodes' attributes that are likely to result in changing the labels of users to a more desired one when they are applied. We develop an action mining method based on Random Walks that naturally combines the information from the network structure with node attributes.
We formulate action mining as an optimization problem where the goal is to learn a function that varies the values of nodes’ attributes which in turn affect edges’ weights in the network so that the labels of intended individuals are likely to take the desired label while minimizing the cost of incurring the changes. Experiments confirm that the proposed approach outperforms the current state-of-the-art in action mining. |
350 | Weekends-off efavirenz-based antiretroviral therapy in HIV-infected children, adolescents, and young adults (BREATHER): a randomised, open-label, non-inferiority, phase 2/3 trial | Antiretroviral therapy has substantially improved the prognosis for HIV-infected children, transforming HIV-1 infection from a life-threatening disease to a chronic infection.Furthermore, with new evidence,1 universal ART is now recommended2,3 for all people living with HIV, including children and adolescents, even without major immunosuppression or HIV-related symptoms.Therefore, the population of children, adolescents, and young adults on life-long ART is growing.4,For this population, innovative treatment strategies are needed to address their lifestyle needs, to help maintain long-term retention-in-care, and to improve adherence to ART, which is particularly problematic during adolescence.4–6,Short cycle therapy aims to maintain suppression of HIV-1 RNA during planned short breaks from ART, thereby reducing ART intake, long-term toxic effects, and costs.First proof-of-concept studies suggested the feasibility of a 7 days on and 7 days off ART strategy;7–9 however, this approach proved inferior to continuous therapy in two randomised controlled trials in adults.10,11,Single-arm studies with shorter breaks in ART reported inconsistent results.12,13,However, two small randomised controlled trials confirmed that a short cycle therapy strategy of 5 days on and 2 days off ART is achievable: in the FOTO trial, including 60 US adults,14,15 and in a larger randomised controlled trial in 103 Ugandan adults,10 short cycle therapy was non-inferior to continuous therapy in terms of maintained viral load suppression over 48 weeks with the added benefit of less toxicity.Most participants in both trials were on efavirenz, which has a long plasma half-life, and lamivudine, which has an intermediate long intracellular half-life.16,However, whereas participants in the US study received tenofovir disoproxil fumarate as the third drug,16 those in the Ugandan trial received shorter-acting stavudine or zidovudine.Evidence before this study,We searched PubMed up to March 1, 2016, with the search terms “HIV” AND AND “therapy” and the references from the retrieved manuscripts.More than a decade ago, small proof-of-concept studies in adults suggested that structured treatment interruptions with 7 days on and 7 days off cycles of antiretroviral therapy could maintain virological suppression, particularly if drugs with long half-lives were used.However, this strategy proved inferior to continuous therapy in two randomised controlled trials in adults.Single-arm studies of a short-cycle therapy strategy with 4 days on and 3 days off showed inconsistent results: although there was no confirmed viral rebound in adults on different ART regimens in a French study, a study in highly treated adolescents and young adults on protease inhibitor-based therapy in the USA showed high rates of viral rebound.Adult studies of short cycle therapy with 2 days per week off efavirenz-based ART showed promising results: following a single arm study of 5 days on and 2 days off ART, which showed rates of virological suppression of about 90% over 48 weeks, two small randomised controlled trials in adults confirmed non-inferiority of maintaining virological suppression with this short cycle therapy strategy compared with continuous therapy.No published trials have assessed 5 days on and 2 days off ART in children or adolescents.Added value of this study,To 
our knowledge, this is the first randomised controlled trial to investigate the feasibility and acceptability of efavirenz-based short cycle therapy in a geographically diverse group of children, adolescents, and young adults with no previous treatment failure.The short cycle therapy was acceptable and non-inferior in terms of maintaining virological suppression.No significant differences were noted in immune activation, total HIV-1 DNA, or development of resistance, and the short cycle therapy group had fewer ART-related adverse events than did the continuous therapy group.Additionally, participants expressed a strong preference for this short cycle therapy compared with continuous treatment, once they had adapted to the new routine.Implications of all the available evidence,The findings of this trial, supported by previous adult studies, show that a short cycle therapy strategy with 5 days on and 2 days off efavirenz -based ART with a standard dose of efavirenz is a viable option for virologically suppressed children, adolescents, and young adults with 29% reduction in the cost of drugs.2 year extended follow-up of the trial is ongoing to address sustainability of this strategy over a longer duration and results will be available in 2017.Further studies are warranted to assess short cycle therapy with lower doses of efavirenz and other long-acting ART regimens in settings with less frequent viral load testing than the quarterly monitoring done in trials reporting to date.No randomised trials of short cycle therapy have been done in children or adolescents, who face longer-term ART than adults.We aimed to assess whether short cycle therapy on first-line efavirenz-based ART in children, adolescents, and young adults was non-inferior to continous therapy in terms of maintaining virological suppression and adherence to ART, while improving quality of life.We found no evidence that short cycle therapy was inferior to continuous therapy in maintaining viral load suppression with a very small non-significant difference between the groups favouring short cycle therapy.Further, five of six participants on short cycle therapy who had low level viraemia resuppressed on returning to daily ART.Results were essentially unchanged in further analyses that adjusted for small differences in CDC stage at baseline, and were done per protocol.Our results have broad generalisability because we recruited participants from diverse geographical, ethnic, and sociocultural backgrounds in 11 countries, including 21% who were young adults in their early twenties.There were fewer major resistance mutations among children failing on short cycle therapy than in those on continuous therapy, although this was not statistically significant.In both groups and similarly to the PENPACT-1 trial, which assessed timing of switch to second-line ART, NNRTI and Met184Val mutations emerged rapidly even at low level viraemia.20,Although virological suppression to less than 50 copies per mL was the primary endpoint, we further investigated the safety of short cycle therapy by assessing its effect on very low level viraemia and HIV reservoir and showed no differences between the short cycle therapy and continuous therapy groups.Methods of varying technical difficulty and biological meaning have been suggested to quantify the HIV reservoir, which is responsible for viral rebound following treatment interruption.21,We measured HIV-1 DNA because it is a surrogate for reservoir size in acute and chronic HIV infection.22,23,Increases in 
chronic immune activation and inflammation have been reported in adult interruption trials designed to allow rebounds in viral load, and have been associated with adverse HIV-related outcomes.24,Immune activation with raised concentrations of biomarkers of inflammation and coagulation has also been reported in patients with virological suppression,25 both among elite controllers not on ART and ART recipients with supressed viral load, albeit at low levels.26,Therefore, we also measured the effect of the short cycle therapy strategy on 19 potentially important biomarkers and found no evidence of any differences between groups, with the exception of D-dimer which, by contrast with expectation, was lower in short cycle therapy than in continuous therapy, which could be a chance finding.The absence of a signal suggestive of any increased immune activation and inflammation adds further confidence that the short cycle therapy strategy was not causing subclinical injury.Furthermore, we recorded no differences in cellular markers previously shown to be rapidly deranged during treatment interruption.27,Most safety profiles were similar between randomised groups, and there were more ART-related adverse events reported in the continuous therapy group.However, in an open-label trial, potential for reporting bias exists.Assuring adherence to the randomised strategy is crucial to the integrity of trial results.If participants randomly assigned to the continuous therapy group elected, of their own accord, to take breaks in therapy, non-inferiority of short cycle therapy and continuous therapy might be shown, because both groups could be taking similar breaks off-ART.Three independent indicators of adherence to assigned strategy all showed that participants on short cycle therapy had appropriately less ART exposure than those on continuous therapy.As well as being the first randomised trial in children, our results build on those from two adult trials with similar design, showing non-inferiority of short cycle therapy versus continuous therapy on efavirenz-based ART.10,15,Only one non-randomised study of short cycle therapy in US adolescents and young adults has been reported in heavily ART-experienced participants taking a 3 day weekend break from protease inhibitor-based ART regimens.12,This study differed substantially from our study and the adult short cycle therapy trials in both design and ART used.More than a third of participants had viral rebound and more than half changed to continuous treatment for other reasons; with no control group and multiple previous ART regimens, viral load, and resistance test results are hard to interpret or compare with our trial.Protease inhibitor ART might not be ideal for short cycle therapy because half-lives are shorter than NNRTIs and might not protect against viral replication during days off.Furthermore, participants in the US study had breaks of 3 days, whereas those in our trial had breaks of only 2 days.Acceptability of the short cycle therapy strategy was shown among participants from all backgrounds; in particular, it was valued because it allowed for more socialising with friends at weekends.Similar results were reported in the associated qualitative substudy, during which participants also discussed liking short cycle therapy because of perceived reduction of previously unreported and unrecognised ART side-effects, such as dizziness and reduced energy.The qualitative substudy17 provided insights into the complexities of physician–patient interactions, 
particularly relating to non-adherence.In particular, participants who are virologically suppressed might elect not to disclose adherence lapses because of a desire not to fail or disappoint their physician."Overall, the qualitative findings endorsed participants' enthusiasm for short cycle therapy, but also highlighted the need for support with early adaptation to weekend breaks off-ART.17",The overall reduction in drug exposure could reduce long-term toxicity for individuals and, at a population level, result in cost savings, enabling more participants to receive treatment.The ENCORE1 trial28 showed that daily 400 mg efavirenz was non-inferior to 600 mg, with less toxicity; in both groups, efavirenz was given with daily tenofovir and emtricitabine.Efavirenz 400 mg daily is included as an alternative option to 600 mg efavirenz-based ART in revised WHO 2016 adult guidelines.29,Of note, the weekly cumulative dose of daily 400 mg efavirenz is almost the same as in our trial.Both strategies seem to be more acceptable to patients than 600 mg daily efavirenz and provide the possibility of individualisation of ART regimens to suit life situations.The results from this study show that short cycle therapy might be a promising strategy for adherent children and adolescents well established on ART.However, follow-up is relatively short.A 2-year trial extension is ongoing, which will provide further data on longer-term sustainability.More than 90% of participants have reconsented to stay on their randomised strategy and we expect results in 2017.Of note, this short cycle therapy strategy can be generalised only to children and young people taking efavirenz-based ART who have not had treatment failure, and where there is availability of viral load monitoring.Appropriate counselling and support is needed to explain that there should be a maximum of 2 days per week breaks in therapy.Furthermore, results presented here cannot necessarily be extrapolated to ART containing the reduced dose of efavirenz or other ART regimens, or to settings where viral load monitoring is unavailable or infrequent.Further research is needed to address this, and could also assess short cycle therapy with other suitable long-acting drugs or drugs with a higher barrier to resistance such as tenofovir alafenamide and dolutegravir .30,In conclusion, in an adherent and geographically diverse population of HIV-infected 8–24 year-olds on 600 mg efavirenz-based ART, a short cycle therapy strategy with 2-day weekend breaks was non-inferior to continuous therapy in terms of virological, immunological, inflammatory effects, and resulted in fewer adverse events.Treatment with ART 5 days per week instead of 7 provides potential for cost savings.Short cycle therapy was liked by participants; in particular, it improved their social lives.This short cycle therapy strategy is a viable option for adherent HIV-infected young people who are stable on efavirenz-based ART.Ongoing longer-term follow-up will further inform sustainability and further research is required for settings where viral load monitoring is less accessible.In this open-label, randomised, parallel group non-inferiority phase 2/3 trial, participants aged 8–24 years were eligible if they had a CD4 cell count 350 cells per μL or higher, suppressed viral load less than 50 copies per mL for at least 12 months on an efavirenz based regimen with two or three nucleoside or nucleotide reverse transcriptase inhibitors and no previous treatment failure.Children on nevirapine or boosted 
protease inhibitor ART who had not had treatment failure and with undetectable viral load could be enrolled if they substituted efavirenz and viral load remained undetectable for 12 weeks or longer before enrolment.Previous two-drug ART, substitution of NRTIs, or both were allowed, provided these were not for regimen failure.Previous monotherapy was only allowed if taken perinatally for prevention of mother-to-child-transmission.Participants were not eligible if they were pregnant, on concomitant medications for acute illness, or if their creatinine or liver transaminases results were grade 3 or higher at screening.Parents or guardians and older participants provided written consent; young children gave assent appropriate for age and knowledge of HIV status, as per guidelines for each participating country.The trial protocol was approved by the ethics committees in participating centres in Europe, Africa, and the Americas, and is available online.Patients were randomly assigned to remain on continuous therapy or change to short cycle therapy and randomisation was done centrally by the MRC Clinical Trials Unit at UCL, according to a computer-generated randomisation list, using permuted blocks of varying size, stratified by age and site.The randomisation list was prepared by the trial statistician and securely incorporated within the database.Randomisation of study participants was done via a web service accessed by site clinician or one of the three coordinating trials units.An initial 3 week randomised pilot safety phase in selected clinical centres was done in 32 participants to ensure those in the short cycle therapy group maintained undetectable viral load after the 2 day break and before resuming weekday ART on Monday.Recruitment to the main trial commenced after review of three consecutive Monday morning viral load results per participant by the Independent Data Monitoring Committee.In the main trial, participants randomly assigned to short cycle therapy chose 2 consecutive days off ART, and continued this cycle throughout.Participants on continuous therapy remained on continuous efavirenz-based ART.Substitutions for simplification or toxicity were allowed."Participants were randomised 2–4 weeks after screening and assessed clinically at weeks 4 and 12, then every 12 weeks until the last participant had completed 48 weeks' follow-up.Examination for lipodystrophy, Tanner stage, and a pregnancy test were done at randomisation and repeated every 24 weeks.Viral load and T lymphocytes were measured at every visit; participants with viral load of 50 copies per mL or higher had a repeat test within 1 week; those on short cycle therapy with confirmed viral rebound recommenced continuous ART.Additional assessment of treatment adherence and a stored sample for resistance testing were requested for all participants with viral rebound.Haematology and biochemistry tests were done at screening and randomisation; thereafter, haematology was done every 12 weeks and biochemistry as per local practice.Blood lipids, including total cholesterol, high density lipoprotein, low density lipoprotein, and very low density lipoprotein, were measured at weeks 0, 24, and 48.Plasma and cells were stored for additional immunology and virology tests at baseline and weeks 4, 8, and 12, and then every 12 weeks for plasma and 24 weeks for cells.Questions on compliance to the strategy were asked at every follow-up visit.Adherence questionnaires were completed by carers and participants at weeks 0, 4, 12, 24, and 
48.Acceptability questionnaires for those randomised to short cycle therapy were completed at randomisation and at final visit.The trial incorporated three substudies.The virology and immunology substudy assessed low level viraemia, total HIV-1 DNA, and 19 biomarkers of inflammation, vascular injury, and disordered thrombogenesis; all were measured retrospectively on stored plasma and cell samples.The ultrasensitive quantitative HIV-1 RNA and DNA assays used the Qiagen QIAsymphony SP for nucleic acid extraction.An ABI Prism 7500 real-time thermal cycler was used for amplification of HIV-1 RNA and DNA using Invitrogen RT-PCR and Qiagen Multiplex PCR reagents, respectively.An in-house standard curve calibrated against the WHO HIV International standard in IU per mL was used for HIV-1 RNA quantification.The quantitation of HIV-1 DNA was based on a standard curve using the 8E5 cell line, which carries one HIV proviral genome per cell; cell numbers were estimated with the single copy gene for pyruvate dehydrogenase; results were reported as copies of HIV-1 DNA per million cells.19 biomarkers were analysed with Meso Scale Discovery or by ELISA kits.CD4 and CD8 lymphocyte subsets were quantified locally on fresh samples; CD45RA and CD45RO subpopulations of CD4 and CD8 cells were assessed on fresh or stored frozen cell samples at selected sites.The adherence substudy assessed adherence in participants from selected sites by recording bottle openings using a Medication Event Monitoring System capped container.MEMS caps were placed on the container with most frequently taken antiretrovirals."The longitudinal qualitative substudy focused on participants' experiences of the trial and acceptability of short cycle therapy.17",The primary endpoint was confirmed viral load of 50 copies per mL or higher by week 48.Secondary outcomes were: confirmed viral load of 400 copies per mL or higher by week 48; cumulative number and type of major HIV-1 RNA resistance mutations in those with viral rebound; change in CD4% and CD4 cell count, glucose, blood lipids from baseline to week 48; changes in ART regimen; change back to continuous therapy; adherence; acceptability; division of AIDS grade 3 or 4 clinical or laboratory adverse events,18 and treatment-modifying adverse events of any grade; and new US Centers for Disease Control stage B or C diagnoses or death.160 participants provided 80% power to exclude a non-inferiority margin of 12% for the difference in proportion of participants reaching the primary endpoint, assuming 10% of participants have confirmed viral load 50 copies per mL or higher in the continuous therapy group and a one-sided α of 0·05.The Trial Steering Committee decided to continue recruitment until the end of the planned randomisation period to allow sites to recruit patients already invited for screening and to avoid the study being underpowered if the proportion of participants reaching the primary endpoint in the continuous therapy group was lower than expected.In the primary, intent-to-treat analysis, the proportion of participants who had viral rebound was estimated with Kaplan-Meier methods, with adjustment for baseline stratification factors, censoring at week 54 or last follow-up date if not seen at week 48.The difference in proportion of participants who had viral rebound was estimated and two-sided 90% CIs of the difference was obtained with bootstrap SE.19,In a prespecified sensitivity analysis on the per-protocol population, individuals were censored if they had a break in treatment 
for longer than 7 days, discontinued efavirenz for longer than 7 days, or changed strategy to continuous therapy for reasons other than viral rebound.The intent-to-treat analysis was also repeated without adjustment for stratification factors.Confirmed viral load of 400 copies per mL or higher was estimated by the same approach.Major resistance mutations were summarised.Immunology, HIV-DNA, haematology, biochemistry, and lipids were assessed at week 48 by fitting normal regression models with adjustment for randomised group and baseline values.Natural log transformations were applied as appropriate.Change from baseline is presented as change from mean at baseline in all participants."Categorical variables were compared with Fisher's exact tests, or McNemar's tests for paired data; rates used Poisson regression.Generalised estimating equations were used to compare self-reported adherence across randomised groups over time.Stata version 13.1 was used for all analyses.To assess adherence to allocated strategy, the number of days that MEMS cap was opened at least once divided by number of days that MEMS cap was in use during the trial was calculated for each day of the week.Pilot phase data were included in the analysis.The IDMC reviewed full interim data on three occasions, viral load and enrolment data at a fourth meeting, and analyses of viral load results alone on six further occasions during the trial.The trial was registered with EudraCT, number 2009-012947-40), ISRCTN, number 97755073, and CTA, number 27505/0005/001-0001.The funders had no direct role in the study design, data collection, data analysis, data interpretation, report writing, or decision to submit the report for publication.The corresponding author had access to all data and responsibility for submission for publication.Between April 1, 2011, and June 28, 2013, 227 participants were screened, of whom 199 from 24 sites were randomly assigned.One participant in the continuous therapy group moved location and withdrew consent at week 24; the remaining 198 were followed up to at least week 48.Of those patients randomly assigned, 70 were recruited from Uganda, 48 from western Europe, 36 from Thailand, 20 from Ukraine, 14 from the USA, and 11 from Argentina.Baseline characteristics were similar between the groups.Although CD4% and count were high and well matched between groups, fewer participants had CDC stage C disease in the short cycle therapy group than in the continuous therapy group.Pre-trial ART exposure was comparable between groups: median time on ART at randomisation was 6·1 years, 82 were on their initial ART regimen at baseline, 29 had previously substituted a protease inhibitor, but following the exclusion criteria, none had switched ART for failure.13 participants had a confirmed viral load 50 of copies per mL or higher at any time up to 48 weeks, an estimated probability of viral rebound of 6·1% in short cycle therapy versus 7·3% in continuous therapy.Thus, the 4·9% upper band of the two-sided 90% confidence limit was well within the 12% non-inferiority margin.The per-protocol analysis gave a similar estimated difference of −1·1%, as did analysis without adjustment for stratification factors.After viral rebound, five of six participants in the short cycle therapy group resuppressed viral load compared with only three of seven participants in the continuous therapy group.The remaining five participants remained non-suppressed; three on first-line ART and two after switching to second-line ART.Results repeating the 
primary analysis, adjusted for CDC stage at baseline were qualitatively unchanged: −1·3% difference between groups, in favour of short cycle therapy.To determine whether the risk of reaching the primary endpoint was related to type of NRTI, a Cox model adjusted for randomised group and NRTI received was fitted; results showed no significant differences between continuous therapy and short cycle therapy.Six participants had confirmed viral load of 400 copies per mL or higher by week 48; estimated probability 2·1% in the short cycle therapy group versus 4·2% in the continuous therapy group."12 participants changed ART regimen during the first 48 weeks, five because of toxic effects.Of 13 participants reaching the primary endpoint, resistance results were available for nine; the remaining four patients had samples with low viral load, insufficient to obtain a result.All four participants suppressed again after these blips, suggesting drug resistance was unlikely.Seven of nine participants with available results had resistance mutations: all seven had NNRTI mutations and two had Met184Val.No new CDC stage C and two CDC stage B events were recorded and no significant differences were noted between groups in CD4% or CD4 cell count.With the exception of lower mean corpuscular volume in those on zidovudine and lower platelet levels in the short cycle therapy group compared with the continuous therapy group; haematological variables did not differ.Concentration of low density lipoproteins was higher at week 24 in the short cycle therapy group than in the continuous therapy group, but we observed no difference at week 48.By week 48, eight participants in the short cycle therapy group had reverted to continuous therapy: six participants reached the primary endpoint, one developed gynaecomastia leading to efavirenz discontinuation and resumption of daily ART, and one had ART changed for poor adherence."By 48 weeks, 20 participants had 27 grade 3 or 4 adverse events, with decreased neutrophil count being the most common. 
"Two ART-related adverse events were reported in two participants in the short cycle therapy group compared with 14 events in ten participants in the continuous therapy group; this was the only significant difference in adverse events between groups).Lipodystrophy and gynaecomastia were the most common ART-related events.13 serious adverse events were reported in nine participants.There were five pregnancies.Among 192 children in the immunology and virology substudy, values for viral load less than 20 copies per mL, total HIV-1 DNA, and inflammatory markers, including interleukin 6 and D-dimer, were similar between randomised groups at baseline."At week 48, 13 children in the short cycle therapy group and 14 in the continuous therapy group had viral load 20 copies per mL or higher and there were no significant differences between groups in total HIV-1 DNA, including after adjustment for differences at baseline or after exclusion of participants with evidence of viral rebound.No differences between groups were noted at week 48 in the 19 biomarkers of inflammation, vascular injury, and disordered thrombogenesis, with the exception of D-dimer, which was lower in the short cycle therapy group than in the continuous therapy group by log 0·5.No differences were identified in CD8 cells, ratios of CD45RA:CD45RO cells, and CD8RA:CD8RO cells between groups at week 48.In the short cycle therapy group, 95% of weekend breaks were reported as taken.The MEMS cap substudy data supported these results.Among 61 participants enrolled in the substudy, 56 continued to use MEMS caps until 36 weeks and 46 were still using MEMS caps at week 48.The median number of cap openings per week was five in the short cycle therapy group and seven in the continuous therapy group.MEMS caps were opened at least once daily from Monday to Friday more than 80% of the time in both groups, with the percentage of bottle openings remaining high in the continuous therapy group at weekends, but dropping to less than 20% for those on short cycle therapy."Based on ART logs, updated at each visit, one participant in the short cycle therapy group and seven participants in the continuous therapy group had a treatment interruption of 3 days or more.Adherence questionnaires were completed by 91 participants in the short cycle therapy group and 93 participants in the continuous therapy group at one or more visit to 48 weeks.Adherence was similar in both groups with 7% of reports in the short cycle therapy group versus 10% of reports in the continuous therapy group of missing ART in the week prior to the assessment visit."Adherence based on carers' questionnaires was also similar between the two groups.In acceptability questionnaires completed at baseline, 70 of 80 participants in the short cycle therapy group thought the approach would be easier than staying on continuous therapy.At end of follow-up 81 of 90 participants in the short cycle therapy group reported that weekend breaks made life easier than daily ART, mainly because going out with friends was easier: 15 of 76 participants who completed both questionnaires reported this was difficult pre-trial compared with only two of 76 during the trial.The acceptability of short cycle therapy as further explored in the qualitative substudy will be reported elsewhere.17 | Background For HIV-1-infected young people facing lifelong antiretroviral therapy (ART), short cycle therapy with long-acting drugs offers potential for drug-free weekends, less toxicity, and better quality-of-life. 
We aimed to compare short cycle therapy (5 days on, 2 days off ART) versus continuous therapy (continuous ART). Methods In this open-label, non-inferiority trial (BREATHER), eligible participants were aged 8–24 years, were stable on first-line efavirenz with two nucleoside reverse transcriptase inhibitors, and had HIV-1 RNA viral load less than 50 copies per mL for 12 months or longer. Patients were randomly assigned (1:1) to remain on continuous therapy or change to short cycle therapy according to a computer-generated randomisation list, with permuted blocks of varying size, stratified by age and African versus non-African sites; the list was prepared by the trial statistician and randomisation was done via a web service accessed by a site clinician or one of the three coordinating trials units. The primary outcome was the proportion of participants with confirmed viral load 50 copies per mL or higher at any time up to the 48 week assessment, estimated with the Kaplan-Meier method. The trial was powered to exclude a non-inferiority margin of 12%. Analyses were intention to treat. The trial was registered with EudraCT, number 2009-012947-40, ISRCTN, number 97755073, and CTA, number 27505/0005/001-0001. Findings Between April 1, 2011, and June 28, 2013, 199 participants from 11 countries worldwide were randomly assigned, 99 to short cycle therapy and 100 to continuous therapy, and were followed up until the last patient reached 48 weeks. 105 (53%) were men, median age was 14 years (IQR 12–18), and median CD4 cell count was 735 cells per μL (IQR 576–968). Six (6%) patients assigned to short cycle therapy versus seven (7%) assigned to continuous therapy had confirmed viral load 50 copies per mL or higher (difference −1.2%, 90% CI −7.3 to 4.9, non-inferiority shown). 13 grade 3 or 4 events occurred in the short cycle therapy group and 14 in the continuous therapy group (p=0.89). Two ART-related adverse events (one gynaecomastia and one spontaneous abortion) occurred in the short cycle therapy group compared with 14 (p=0.02) in the continuous therapy group (five lipodystrophy, two gynaecomastia, one suicidal ideation, one dizziness, one headache and syncope, one spontaneous abortion, one neutropenia, and two raised transaminases). Interpretation Non-inferiority of maintaining virological suppression in children, adolescents, and young adults was shown for short cycle therapy versus continuous therapy at 48 weeks, with similar resistance and a better safety profile. This short cycle therapy strategy is a viable option for adherent HIV-infected young people who are stable on efavirenz-based ART. Funding UK National Institute for Health Research Health Technology Assessment; UK Medical Research Council; European Commission; PENTA Foundation; INSERM SC10-US19, France. |
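The sample-size justification quoted above (160 participants giving 80% power to exclude a 12% non-inferiority margin, assuming roughly 10% of the continuous therapy group reach the primary endpoint, with a one-sided α of 0·05) can be checked with a simple normal-approximation calculation. The sketch below is illustrative only and is not the trial's statistical code; the function name, the equal-allocation assumption, and the use of SciPy are mine.

```python
from scipy.stats import norm

def noninferiority_power(n_per_group, p_control, p_experimental,
                         margin, alpha_one_sided=0.05):
    """Approximate power of a non-inferiority test for a difference in
    proportions (normal approximation, equal allocation).

    H0: p_experimental - p_control >= margin  (experimental worse by >= margin)
    H1: p_experimental - p_control <  margin
    """
    # Standard error of the difference in proportions
    se = (p_control * (1 - p_control) / n_per_group
          + p_experimental * (1 - p_experimental) / n_per_group) ** 0.5
    z_alpha = norm.ppf(1 - alpha_one_sided)
    true_diff = p_experimental - p_control
    # Probability of rejecting H0 when the true difference is `true_diff`
    return norm.cdf((margin - true_diff) / se - z_alpha)

# 160 participants (80 per group), 10% event rate in both groups,
# 12% margin, one-sided alpha of 0.05
print(round(noninferiority_power(80, 0.10, 0.10, 0.12), 2))
```

With 80 participants per group and both event rates at 10%, this returns approximately 0·81, in line with the 80% power quoted for 160 participants. The trial's primary analysis itself, as described above, estimated the rebound probabilities with Kaplan-Meier methods and used a bootstrap standard error for the 90% CI of the group difference rather than this simple approximation.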
351 | The effect of reward on listening effort as reflected by the pupil dilation response | Talking to a friend is often considered to be rewarding, and this is what motivates people to initiate a conversation and what usually keeps them motivated to stay engaged while continuing talking, even when listening is effortful.According to the Framework for Understanding Effortful Listening, ‘when and how much effort we expend during listening in everyday life depends on our motivation to achieve goals and attain rewards of personal and/or social value’.When listening becomes too demanding or when we do not recover from high levels of effort while listening we may lose motivation."The motivational intensity theory states that motivational arousal occurs when a task is sufficiently difficult, within one's capacity, and is justified by the magnitude of reward.When a task becomes too difficult, there will be little or no mobilization of energy.The greater the reward, the greater the amount of energy a person is willing to mobilize.A high or low reward assigned before the start of a block can affect the level of motivation.Richter examined the effect of monetary reward on effort-related cardiovascular reactivity, indexing sympathetic nervous system activity, while participants performed an auditory discrimination task.The results showed an effect of reward on pre-ejection period reactivity, an indicator of sympathetic activity, in the difficult task condition.The pupil dilation response, also reflecting autonomic nervous system activation, is similarly sensitive to reward.In a study by Bijleveld et al. participants had to listen to, memorize, and report back 2 or 5 digits, while each trial was preceded by high or low monetary reward.Their pupil response was significantly larger for the high reward, but only for the difficult 5-digit condition.These outcomes indicate that the effect of reward can be measured objectively by the assessment of autonomic cardiac responses and pupil dilation.Listening effort is defined by FUEL as the deliberate allocation of resources, as reflected by pupil dilation, to overcome obstacles in goal pursuit when carrying out a listening task.The allocation of more task related resources results in a larger pupil dilation response.The pupil response is an autonomic response related to activity balance of the sympathetic and parasympathetic nervous systems.The pupil response to speech-in-noise processing is widely used as an objective measure of speech processing load.Mean pupil dilation reflects the average processing load in a specified time window while peak pupil dilation reflects the maximum processing load.Hence, both MPD and PPD reflect changes in listening effort, but theoretically the MPD has higher sensitivity to changes in duration of effortful listening."PPD latency has been found to be related to the speed of cognitive processing and the baseline pupil size prior to the pupil response is considered to reflect an autonomic response that provides information about an individual's state of arousal in anticipation of the amount of cognitive resources needed for the task at hand.Research shows that high levels of fatigue are associated with a smaller pupil dilation response.Wang et al. 
showed a negative correlation between need for recovery and the PPD as measured during processing of speech in noise, indicating that lower NFR was associated with a larger pupil response. As explained by Wang et al., NFR can be regarded as an intermediate state between exposure to stressful situations at work and daily life fatigue. According to FUEL, fatigue can affect how we evaluate our task demands, which could affect the available capacity of cognitive resources. In line with Wang et al., FUEL predicts that a high level of fatigue results in a decreased available capacity of resources in order to preserve energy. Additionally, FUEL predicts that high levels of fatigue lower the motivation to achieve goals. Hence, the NFR questionnaire as introduced by van Veldhoven and Broersen, shown in Table 1 of their study, was included in this study. Finally, according to the motivational intensity theory, the level of motivational arousal has to occur within one's capacity. WMC and the ability to inhibit irrelevant linguistic information, associated with speech performance, can be measured with the size-comparison span task. Interestingly, individual differences in SICspan performance have also been shown to affect the pupil response. Participants with a larger WMC and better ability to inhibit irrelevant information showed a larger PPD when processing speech masked by speech. Hence, we were interested in whether WMC is related to the effect of reward. To investigate this, the SICspan task was included in this study. The main purpose of this study was to investigate whether motivation has a mediating effect on listening effort as reflected by the pupil dilation response. Therefore, we tested the effect of reward on the perception of speech masked by a single talker at relatively easy and difficult intelligibility levels, and in a control condition using speech in quiet. We also examined the effect of reward on the simultaneously recorded pupil dilation response. We hypothesized improved performance and a larger PPD in the high reward than in the low reward condition. In addition, based on the results of Richter, we hypothesized that the effect of reward would be strongest in the hard listening condition. Note that, based on the motivational intensity theory, a ‘hard’ condition, although resulting in performance below a required minimum score, should not be so difficult that participants give up on the task, while an ‘easy’ condition resulting in higher performance levels should also not be too easy, as people tend to become easily distracted. Ohlenforst et al.
showed an inverted U-shape curve with the largest PPD around 50% intelligibility and an intermediate PPD at approximately 85% intelligibility for speech masked by a single talker.Speech processing in quiet should result in a small PPD and should show no effect of reward.Additionally, in line with FUEL we expected participants with a relatively high NFR and/or a smaller WMC to show a smaller PPD and a smaller effect of reward, on the PPD.This is the case because low capacity and high fatigue can lower our motivation to put effort into a task when demands are high.Twenty-four normal hearing adults, recruited at the VU University and VU Medical Center, participated in the study.Ages of the participants ranged from 18 to 52 years with a median age of 21 years.The sample size was based on a moderate effect size of attention related processes on the PPD, as observed in a previous study.Normal hearing was defined as pure-tone thresholds less than or equal to 20 dB HL at the octave frequencies 0.25–4 kHz."Participants' pure-tone hearing thresholds averaged over both ears and over the octave frequencies 1–4 kHz, ranged from −2.5–10 dB hearing level = 3.6 dB).Participants had no history of neurological diseases and reported normal or corrected-to-normal vision.They were native Dutch speakers and provided written informed consent in accordance with the Ethics Committee of the VU University Medical Center, Amsterdam.Speech perception was measured using the adaptive speech reception threshold task.Target sentences were everyday Dutch sentences, uttered by a female talker.An example of an everyday sentence is ‘Hij maakte de brief snel open’, which directly translates to ‘He quickly opened the letter’.The sentences were masked by a single male talker, at two difficulty levels, or were presented in quiet.Participants were asked to repeat each sentence.For the ‘easy’ and ‘hard’ conditions, the target intelligibility was 85% and 50% correct sentence recognition, respectively.These conditions were presented in a blocked fashion.Importantly, while participants were unaware of the intelligibility levels at the start of each block, they were informed about the difficulty of each condition, and were told that they could earn a high or low reward when repeating 70% or more of the sentences correctly.A within-subject design with six blocks was applied: intelligibility condition x reward.Each block contained 25 trials and during each trial, the pupil diameter was recorded.The single-talker masker contained concatenated sentences from another set uttered by a male talker.The masker had a long-term average frequency spectrum identical to that of the target speech signal.The value of the SRT, was estimated separately for each reward level, for speech presented at 50% and at 85% target intelligibility levels using a weighted up-down method.The sentence was scored as correct only if each word was repeated correctly and in the right order.For each condition, the target speech level was fixed at 55 dB SPL.The onset of the masker was 3 s prior to the onset of the target sentence and continued for 3 s after the offset of the target sentence.The length of each trial co-varied with the length of the presented sentence, which had a mean duration of 1.84 s.At the end of the trial a 1000-Hz prompt tone was presented for 0.5 s after which participants were instructed to respond.Manipulation of intelligibility level and reward level resulted in a total of six conditions that were presented in a blocked fashion.Each block contained 
25 trials and the order of the blocks was counterbalanced over participants.Prior to the experiment, participants were familiarized with the easy and hard listening conditions by listening and responding to 10 practice sentences each.During and after performing the SRT tasks, listeners did not receive any feedback.After each block, participants were asked to rate their effort, performance, and tendency to quit performing the task.For the effort rating, participants indicated how much effort it took on average to perceive the speech during the last block.This was rated on a visual analogue scale from 0 to 10.For the performance rating, they were asked to estimate the percentage of sentences they had perceived correctly.This was rated from 0 to 10.Finally, participants were requested to indicate how often during the last block they had abandoned the listening task because the task was too difficult.This was rated from 0 to 10.The SICspan, is a visual task that measures WMC and the ability to inhibit irrelevant linguistic information.During this task participants were asked to make relative size judgments between of items,by pressing the ‘J’ key for yes and ‘N’ for no on a QWERTY keyboard.Each question was followed by a single word they had to remember, which was semantically related to the object items in the sentence.Sentences and words were presented on screen in black on a light grey background.Ten sets containing two to six size comparison questions were presented in ascending order.After completion of a set, participants were asked to verbally recall the to-be-remembered words in order of presentation.Because the size comparison items and to-be-remembered words were from the same semantic category, the size-judgment items from the questions had to be inhibited while recalling the to-be-remembered words.Between sets, the semantic categories differed.The SICspan score used in this study was the total number of correctly remembered items independent of order, which leads to a maximum score of 40.The higher the score the better the performance.The NFR scale was used to assess NFR after work.Participants had to respond ‘yes’ or ’no’ to 11 statements related to how they feel at the end of a working day.For example, “I find it difficult to relax at the end of a working day” or “When I get home from work, I need to be left in peace for a while”.The total NFR score was calculated by dividing the number of ‘yes’ responses by the total number, after which the outcome was multiplied by one hundred.A higher score represents a higher NFR.All testing was performed in a sound treated room."After recording the participant's audiogram and testing near vision acuity, participants filled in the NFR questionnaire, and they performed the SIC-span and SRT tasks.During the SIC-span and the SRT tasks, participants were seated in front of a computer screen at 65 cm viewing distance.During the SRT test, the pupil diameter of both eyes was measured at a 60 Hz sampling rate using an infrared eye tracker.The light intensity of the LEDs attached to the ceiling of the room was adjusted by a dimmer switch such that, for each participant, the pupil diameter was around the middle of its dynamic range as measured by examination of the pupil size at 0 lx and 750 lx.For the SRT task, audio in the form of wave files was presented diotically by an external soundcard through headphones.All tests were presented by a lap-top computer running Windows 10.The whole procedure, including measurement of pure-tone hearing thresholds, 
near vision acuity, performing the SIC-span task, calibrating the eye-tracker, practicing and performing the SRT tasks, and a 10-min break halfway through the SRT task took 2 h.At the end of the session participants were informed about their performance on the SRT task.They received 10.40 euros reward in addition to the 7.50 euros hourly rate.The first five trials of each block were excluded from the analyses.For the pupil diameter traces for the remaining 20 trials per condition, zero values within the time window of 1 s before and 4.3 s after sentence onset were coded as blinks.Traces in which more than 20% of their duration consisted of blinks were excluded from further analysis.For the remaining traces, blinks were removed by linear interpolation between the fifth sample before and eighth sample after the blinks.The x- and y-coordinate traces of the pupil center were “deblinked” by application of the same procedure.Trials for which these coordinate traces contained eye movements within the time window of 1 s before and 4.3 s after sentence onset and deviating more than 10° from fixation on the x- or y-axis were removed from analysis.A five-point moving average smoothing filter was passed over the de-blinked pupil traces to remove any high-frequency artifacts."All remaining traces were baseline corrected by subtracting the trial's baseline value from the value for each time point within that trace.This baseline value was the mean pupil size within the 1-s period prior to the onset of the sentence, when either listening to the speech masker alone or no sound.The baseline period is shown by the left and middle dotted vertical lines in both plots in Fig. 2.Average traces in each condition were calculated separately for each participant.Within the average trace, MPD was defined as the average pupil dilation relative to baseline within a time window ranging from the start of the sentence to the start of the response prompt, shown by the middle and right dotted vertical lines in both plots in Fig. 
2.Within this same time window, the PPD was defined as the largest value relative to the baseline.The latency of the PPD was defined relative to the sentence onset.Finally, for each participant and each condition the average pupil diameter at baseline was calculated.For all dependent behavioral, pupil, and self-rated variables we performed 2 × 2 analyses of variance with condition and reward as the repeated measures within-subject variables.Since no SRTs were measured for the control conditions, the dependent variables that were measured in quiet were analyzed separately by means of two-sided paired-samples t-tests.For the correlation analysis between the SRTs, PPDs, and the self-rated variables, these were first averaged over all masked conditions.Additionally, for the PPD difference score was calculated for the effect of reward by subtracting the average score for the low reward conditions from the average score for the high reward conditions.Control conditions were excluded from these calculations due to ceiling effects on the rating scores.Pearson correlation coefficients were calculated to assess the relationships between the resulting average rating scores, PPD difference scores, and the SICspan scores."Finally, a non-parametric Spearman's ρ was calculated to examine each relationship between the resulting average values and difference scores and the NFR scores, as the distribution of the NFR scores was skewed.For the SICspan task, the average score was 30.0.The average NFR score was 17.8%.Average SRTs and subjective ratings as a function of reward for all conditions are presented in Table 1 and average pupil measures as a function of reward for all conditions are presented in Table 2.Average SRTs are plotted in Fig. 1, and average pupil traces over participants for each condition are plotted in Fig. 
2.No effect of reward was observed for any of the parameters for the control conditions.Mean performance for speech reception in quiet was 99.8% whole sentences correct for both reward conditions.Analysis of the SRTs revealed a significant main effect of task difficulty, as indicated by the lower SRTs for the hard than for the easy condition.No significant main effect of reward or interaction effect between reward and task difficulty was found.Analysis of the MPDs revealed a significant main effect of task difficulty.No significant main effect of reward or interaction was found.Analysis of the PPDs revealed a significant main effect of task difficulty and a main effect of reward.No interaction was found.A larger PPD for the high than for the low reward condition was observed.Apart from the trend for an effect of task difficulty on the pupil baseline, there were no significant main effects or interactions for the PPD latency and pupil baseline.Self-rated effort, performance, and quitting all showed a significant main effect of task difficulty, indicating that the hard conditions were rated as more effortful and resulted in lower performance and a higher quitting rate than the easy conditions.No significant main effect of reward or interaction effect was found.The SICspan and NFR scores were not correlated with one another or with the average SRT and PPD."For the self-rated scores, there was a positive correlation between NFR and quitting rate, such that participants with a higher NFR reported a higher quitting rate.The PPD reward difference score was not correlated with the SICspan or NFR scores.The results showed a significantly higher PPD for the high reward than for the low reward when participants processed speech masked by a single talker.This effect occurred in the absence of an effect of reward on the SRT.This means that reward led to an increase in effort without any measured behavioral change.The effect of task difficulty was reflected by larger MPD and PPD values for the hard than for the easy condition.In contradiction to the Motivational Intensity Theory, the results showed no significant interactions, i.e. 
stronger effect of high reward than low reward in the ‘hard’ listening condition than in the ‘easy’ condition.Self-rated effort, performance, and quitting were all affected by task difficulty but not by reward.Interestingly, the correlation analysis revealed that participants with a low NFR reported a lower quitting rate.However, no relation was found between NFR and the PPD, while this relationship was observed by Wang et al.The current results demonstrate that monetary reward influences the pupil response.Monetary reward is known to affect motivation, and according to FUEL, listening effort can be modulated by changes in motivation.Despite the effect of reward on the PPD, no behavioral effect of reward was observed.Speech perception is largely automatic and highly efficient, so trying even harder will not result in improved performance.Still, the control system responsible for the allocation of resources could be increasingly activated."However, Carver made a distinction between the fulfillment of goals, which is driven by motivation and may apply to finishing the task at a sufficient performance level, and feelings related to sensing one's rate of progress.Based on this, an alternative explanation of the observed effect is that the effect of monetary reward on the PPD reflects arousal partly related to positive feelings rather than just motivation.However, for positive feelings to occur during the task, one needs trial-by-trial feedback in order to monitor performance and perceive that the task is done better than required.Since in this study the level of reward was only mentioned at the start of a block, and no feedback on performance was provided, participants received no information about their progress.Additionally, there was no effect of reward on self-rated performance or quitting."Hence, we don't consider the effect of reward on the PPD as resulting from positive feelings, but rather from motivation.Still, positive emotions instead of motivation cannot be ruled out as an explanation for the current results and this is something to take into account in future research.Note, the lack of feedback, in contrast to the study of Richter that provided feedback on a trial-by-trial basis, might also explain the absence of reward-related behavioral change in the current results.There was an effect of reward on the PPD for speech in a background talker but not for speech in quiet.Less expected, and not consistent with Richter, there was an effect of reward on the PPD for the 85% intelligibility condition, which was clearly above the 70% required to obtain the reward.The fact that participants underestimated their performance, as shown by their average performance rating of 7 on a 10-point scale, suggests that the easy condition was perceived as more difficult than it actually was.This, may have warranted more motivational arousal and therefore no interaction between reward and task difficulty for the masked conditions.Note, that early pupillometry research by Kahneman et al. did show an effect of reward on the pupil response during performance of an easy task.However, in the study of Kahneman et al. 
participants were rewarded on a trial-by-trial basis and therefore the observed response might have reflected positive feelings rather than motivation.As anticipated, reward was not reflected by the pupil baseline, as measured before sentence onset."This suggests that reward does not necessarily affect an individual's state of arousal.Still, there was a trend for the effect of difficulty on the pupil baseline, which is in line with previous studies showing an increased baseline for difficult listening conditions.Although we did not observe a behavioral benefit for high compared to low reward on the SRT, other aspects of performance not captured by the SRT could have been affected.This is an issue that deserves exploration in future research."Importantly, we now know that in sufficiently difficult listening conditions the PPD during speech processing is affected by the participants' level of motivation.We also know that when conditions become too difficult participants tend to give up.This should be considered both when designing an experiment and when interpreting the results.For instance, the level of listening effort can be modulated by motivation when a task is either too easy or too difficult, and differences in the pupil response between participants could be partly explained by differences in motivation.There was a positive correlation between NFR and quitting rate.This suggests that people with a higher NFR are more likely to quit the task they are performing.According to FUEL, when demands get too high, one might no longer put effort into a task.The evaluation of demands can be affected by the level of fatigue.However, the expected decrease in PPD, as hypothesized and shown by Wang et al., was not observed in the current results.The absence of this effect can be explained by the fact that the NFR scale was validated for people who were occupationally active, as was the case for the participants in the study of Wang et al., and the scale might be less valid for the students included in the current study.To conclude, consistent with the motivational intensity theory, we showed an effect of reward on listening effort when the tasks were sufficiently difficult, and NFR scores were correlated with quitting rate.SICspan scores were not correlated with any of the other outcome measures, suggesting that cognitive capacity for this homogeneous sample of participants did not influence the impact of reward.Importantly, one consequence of the current outcome, as pointed out by Richter, is that in order to explain changes in the pupil response in terms of changes in listening effort or resource allocation, we need be aware of and acknowledge the mediating effects of motivation on resource allocation, that itself can be affected by manipulations of the independent variable under investigation.In other words, changes in motivation can account for changes in the pupil response and also for part of the observed variance in pupil size between people.Future research should investigate whether motivation, when affected by other factors than monetary reward, also has an impact on listening effort. | Listening to speech in noise can be effortful but when motivated people seem to be more persevering. Previous research showed effects of monetary reward on autonomic responses like cardiovascular reactivity and pupil dilation while participants processed auditory information. 
The current study examined the effects of monetary reward on the processing of speech in noise and related listening effort as reflected by the pupil dilation response. Twenty-four participants (median age 21 yrs) performed two speech reception threshold (SRT) tasks, one tracking 50% correct (hard) and one tracking 85% correct (easy), both of which they listened to and repeated sentences uttered by a female talker. The sentences were presented with a single male talker or, in a control condition, in quiet. Participants were told that they could earn a high (5 euros) or low (0.20 euro) reward when repeating 70% or more of the sentences correctly. Conditions were presented in a blocked fashion and during each trial, pupil diameter was recorded. At the end of each block, participants rated the effort they had experienced, their performance, and their tendency to quit listening. Additionally, participants performed a working memory capacity task and filled in a need-for-recovery questionnaire as these tap into factors that influence the pupil dilation response. The results showed no effect of reward on speech perception performance as reflected by the SRT. The peak pupil dilation showed a significantly larger response for high than for low reward, for the easy and hard conditions, but not the control condition. Higher need for recovery was associated with a higher subjective tendency to quit listening. Consistent with the Framework for Understanding Effortful Listening, we conclude that listening effort as reflected by the peak pupil dilation is sensitive to the amount of monetary reward. |
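The trace-level pupil preprocessing described in the Methods above (zero samples coded as blinks within the window from 1 s before to 4.3 s after sentence onset, rejection of traces with more than 20% blink samples, interpolation across blinks, five-point moving-average smoothing, baseline correction over the 1 s before sentence onset, and extraction of mean and peak dilation between sentence onset and the response prompt) can be sketched as follows. This is a simplified illustration that assumes a single 60 Hz pupil trace stored as a NumPy array; the function name and the plain linear interpolation, rather than the five-samples-before/eight-samples-after rule used by the authors, are my own simplifications and not the authors' code.

```python
import numpy as np

FS = 60  # sampling rate (Hz), as reported for the eye tracker

def preprocess_trial(pupil, sentence_onset_s, prompt_onset_s):
    """Return (mean_dilation, peak_dilation, peak_latency_s) for one trial,
    or None if more than 20% of the analysis window consists of blinks."""
    t = np.arange(pupil.size) / FS
    win = (t >= sentence_onset_s - 1.0) & (t <= sentence_onset_s + 4.3)

    blinks = pupil == 0                      # zero samples treated as blinks
    if blinks[win].mean() > 0.20:            # reject trials with >20% blink
        return None

    # Interpolate across blink samples (simplified: straight linear fill)
    clean = pupil.astype(float)
    if blinks.any():
        clean[blinks] = np.interp(t[blinks], t[~blinks], clean[~blinks])

    # Five-point moving-average smoothing
    clean = np.convolve(clean, np.ones(5) / 5, mode="same")

    # Baseline = mean pupil size in the 1 s before sentence onset
    base = clean[(t >= sentence_onset_s - 1.0) & (t < sentence_onset_s)].mean()
    rel = clean - base

    # Analysis window: sentence onset to response prompt
    seg = (t >= sentence_onset_s) & (t <= prompt_onset_s)
    mpd = rel[seg].mean()                    # mean pupil dilation (MPD)
    ppd = rel[seg].max()                     # peak pupil dilation (PPD)
    latency = t[seg][np.argmax(rel[seg])] - sentence_onset_s
    return mpd, ppd, latency
```

In the study itself, average traces per condition and participant were computed after this kind of trial-level cleaning, and the MPD, PPD, and PPD latency were then taken from those average traces; the sketch above operates on a single trial and would be applied per trial before averaging.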
352 | The control of alternative splicing by SRSF1 in myelinated afferents contributes to the development of neuropathic pain | Insults to the peripheral nervous system usually result in pain and hypersensitivity to noxious and innocuous stimuli.These abnormal sensations arise due to neuronal plasticity leading to alterations in sensory neuronal excitability.These alterations include peripheral sensitization, with enhanced evoked and on-going activity in primary afferents, and central sensitization, responsible for the generation and maintenance of chronic pain.The most widely accepted model for establishment of central sensitization is that ectopic firing/increased activity in C-nociceptive afferents drives altered spinal sensory processing, particularly the processing of A-fiber inputs, resulting in secondary hyperalgesia and allodynia.C-nociceptor changes are reported in the majority of studies of animal or human neuropathies).Central sensitization can also occur through neuro-immune interactions, following injury-induced local immune cell infiltration and cytokine production/release.After nerve injury there is activation of spinal glia, disruption of the blood-spinal cord barrier, and consequent infiltration of immune cells.These events can alter the central processing of peripheral inputs, implicated in the development of chronic pain.There is, however still debate on how the processing of A or C fiber inputs is differentially regulated to form the neuronal basis of chronic pain.During chronic pain, changes in the complement of proteins result in alterations in sensory neuron excitability, as recently demonstrated whereby expression of voltage gated potassium channels in the DRG is altered in ATF3 positive sensory neurons following nerve injury.Furthermore, alternative mRNA splicing allows for functionally distinct proteins to arise from a single gene.This provides a vast repertoire of actions from a limited source of transcripts, allowing for cell-specific and stimulus-induced alteration in cellular function.Targeting regulation and expression of alternative RNA transcripts, and hence proteins, has been proposed as a potential route for novel drug discovery, but this has not been widely investigated with respect to nociception/analgesia.We recently demonstrated the analgesic effect of targeting alternative mRNA splicing, by inhibition of peripheral serine-arginine rich protein kinase 1, SRPK1.SRPK1 controls phosphorylation of serine-arginine rich splice factor 1, which is fundamental to the control of the vascular endothelial growth factor A family alternative splicing.Inactive SRSF1 is located in the cytoplasm, but when phosphorylated by SRPK1 it translocates to the nucleus.There are two VEGF-A isoform families, VEGF-Axxxa and VEGF-Axxxb where xxx refers to the number of amino acids encoded, and a and b denote the terminal amino acid sequence.SRSF1 phosphorylation results in preferential production of the proximal splice site isoforms, VEGF-Axxxa.Little is understood about the contribution of VEGF-A proteins to nociceptive processing.VEGF receptor-2, the principal receptor activated by both isoform families, has been implicated in nociceptive processing in animal, and clinical studies.VEGF-A isoforms and VEGFR2 are present in the spinal cord, and contribute to neuroregeneration and neuroprotection.We therefore tested the hypothesis that the SRPK1/SRSF1 system contributes to spinal nociceptive processing in rodent models of neuropathic pain, concentrating on the effects of SRPK1 
inhibition, and VEGF-Axxxa/VEGFR2 signaling in central terminals of myelinated afferents.Adult male Wistar rats and adult male 129Ola mice were used.Animals were provided food and water ad libitum.All animal procedures were carried out in laboratories at the University of Bristol in accordance with the U.K. Animals Act 1986 plus associated U.K. Home Office guidance, EU Directive 2010/63/EU, with the approval of the University of Bristol Ethical Review Group.Nociceptive behavioral testing was carried out as previously described.All animals were habituated to both handling by the tester and the testing environment on the day prior to testing.Two days of baseline testing were carried out prior to any intervention followed by testing post-intervention at discrete time-points as detailed in each experiment.Stimuli were applied to the partially innervated medial aspect of the plantar surface of the hind paw, an area innervated by the saphenous nerve.Mechanical withdrawal thresholds were calculated from von Frey hair force response curves.Animals were housed in Perspex holding chambers with metal mesh floors and allowed to habituate for 10 min.A range of calibrated von Frey hairs were applied to the plantar surface of the hind paw, with a total of five applications per weighted hair.From these data, force response curves were generated and withdrawal values were calculated as the weight at which withdrawal frequency = 50%.Tactile allodynia was assessed in the metal mesh floored enclosures using a brush moved across the plantar surface of the hind paw where a withdrawal scored one, with no response zero.This was repeated a total of five times giving a maximum score of five per session.Cold allodynia: a single drop of acetone was applied to the plantar surface of the hind paw using a 1 ml syringe a maximum of five times giving a maximum score of five if the animal exhibited licking/shaking behavior in response to each application.Thermal hyperalgesia: animals were held in Perspex enclosures with a glass floor.A radiant heat source was positioned under the hind paw, and the latency was recorded for the time taken for the animal to move the hind paw away from the stimulus.This was repeated three times and a mean value calculated for each test.Formalin Testing: animals were habituated to glass floored testing enclosures as above.A single 50 μl injection of 5% formalin was administered to the plantar surface of the right hind paw by intradermal injection.Immediately following formalin injection, animals were placed into the testing enclosures.Time spent exhibiting pain-like behaviors and the total number of pain-like behaviors was recorded in five minute bins for sixty minutes.Data are shown as the classical biphasic response with behavioral responses pooled for the first phase 0–15 min and second phase 20–60 min.Blinding of nociceptive behavioral studies are routine in the laboratory however where animal welfare/experimental design prohibits this, it cannot be implemented.For instance, in nerve-injured animals blinding is not possible as controls are naïve.The lack of blinding may have introduced some subjective bias into these experiments, which is in part mitigated by behavioral data is supported by the inclusion of experiments in which measurements are not subjective.A well-defined method for minimally invasive preferential selection of either C- or A- fiber mediated nociceptive pathways was used.Noxious withdrawal responses to A- and C-nociceptor selective stimulation were carried out as 
previously described, by measurement of electromyographic activity in biceps femoris.Animals were anesthetized using isoflurane induction, and the external jugular vein and trachea were cannulated to allow maintenance of airway and anesthesia.Following surgery, anesthesia was switched to alfaxalone, and animals were maintained at a steady level of anesthesia by continuous pump perfusion via the jugular vein for the remainder of the experiment.Bipolar electrodes were made with Teflon coated stainless steel wire implanted into the bicep femoris.EMG recordings were amplified and filtered by a combination of in-house built and Neurolog preamplifier and band pass filters.Animals were maintained at a depth of anesthesia where a weak withdrawal to noxious pinch could be elicited for the duration of the experiment.A- and C-cutaneous nociceptors were preferentially activated to elicit withdrawal reflex EMGs using a well-characterized contact heating protocol.Two different rates of heating were applied to the dorsal surface of the left hind paw as these are known to preferentially activate slow/C-nociceptors and fast/A nociceptors respectively.Contact skin temperature at the time of onset of the EMG response was taken as the threshold.A cutoff of 58 °C for A-nociceptors, 55 °C for C-nociceptors was put in place to prevent sensitization if no response was elicited.If a withdrawal response was not elicited, threshold was taken as cut-off + 2 °C.Three baseline recordings were performed before i.t. drug injection with a minimum 8 min inter-stimulus interval, and alternating heating rates, to prevent sensitization or damage to the paw.Digitized data acquisition, digital to analogue conversion, and offline analyses were performed using a CED Micro1401 Mark III and Spike2 version 7 software.The partial saphenous nerve ligation injury model was used to induce mechanical and cold allodynia, as described previously.Under isoflurane anesthesia, the saphenous nerve was exposed via an incision made along the inguinal fossa region of the right hind leg.Approximately 50% of the nerve was isolated and tightly ligated using 4.0 silk suture, and the incision was closed using size 4.0 sterile silk suture.I.t. injections were carried out under isoflurane anesthesia, using 0.5 ml insulin syringes in rats and mice.For i.t. administration, 10 μl injections were made in the midline of the vertebral column through the intervertebral space between lumbar vertebrae five and six.The injection was deemed to be in the correct place when it evoked a tail flick response.Rats were used for i.t. anti-VEGF-Axxxb experiments, as the 56/1 mouse monoclonal antibody had not been validated in mice at that time.All nociceptive behavioral testing was carried out one hour after intrathecal injection as initial experiments indicated that responses to i.t. 
PTK787 peaked at 1 h, and returned to normal by 2 h after injection. All drugs were made up as stock concentrations and then diluted to working concentration in phosphate buffered saline as described in each experiment. Vehicle controls were used for each drug. PTK787 was dissolved in polyethylene glycol (PEG) 300/PBS, with the final PEG 300 concentration at 0.002%. ZM323881 was made up in DMSO/PBS and given intrathecally at a final concentration of 100 nM ZM323881/0.001% DMSO. Mouse monoclonal VEGF-A165b antibody 56/1, recombinant human VEGF-A165a (rhVEGF-A165a) and rhVEGF-A165b were all dissolved in PBS. SRPIN340 (N-[2-(1-piperidinyl)-5-(trifluoromethyl)phenyl]isonicotinamide; an SRPK inhibitor purchased from Ascent Scientific, Bristol, UK) was dissolved in DMSO and diluted to final concentrations in PBS. All peptides and concentrations used have been previously shown to exert functional effects in neurons and/or other biological systems. SRPIN340 has been used in several other studies and different pathological states, and was used at a known functional concentration, as previously described. Rats were terminally anesthetized with sodium pentobarbital overdose and were perfused transcardially with saline followed by 4% paraformaldehyde. The L3-4 segments of the lumbar enlargement, containing the central terminals of saphenous nerve neurons, and the L3-L4 dorsal root ganglia were removed, post-fixed in 4% paraformaldehyde for 2 h and cryoprotected in 30% sucrose for 12 h. Tissue was stored in OCT embedding medium at − 80 °C until processing. A cryostat was used to cut spinal cord and dorsal root ganglia sections that were thaw-mounted onto electrostatic glass slides. Slides were washed in phosphate buffered saline solution 3 times for 5 min per incubation, and incubated in PBS with 0.2% Triton X-100 for 5 min. Sections were blocked for 2 h at room temperature, and then incubated in primary antibodies diluted in blocking solution overnight at 4 °C. Sections were washed three times in PBS and incubated for 2 h in secondary antibody. For the third stage, incubations and washes were as described for the secondary antibody. Slides were washed in PBS 3 times prior to coverslipping in Vectashield. Images were acquired on either a Nikon Eclipse E400 with a DN100 camera or a Leica TCS SPE confocal microscope using the Leica Application Suite. Primary antibodies used were as previously reported: anti-ATF3, anti-c-fos, anti-SRSF1, anti-vGLUT1, anti-NF200, anti-NeuN. Use of the anti-VEGF-A and SRSF1 antibodies for both immunolocalization and immunoblotting has been previously reported. Secondary antibodies: Alexafluor 488 goat anti-mouse, Alexafluor 488 chicken anti-goat, Alexafluor 555 donkey anti-goat, Alexafluor 555 donkey anti-rabbit; biotinylated anti-rabbit, Extravidin CY3. Dorsal root ganglia neuronal cell counts were performed using ImageJ analysis to measure neuronal area. The saphenous nerve is approximately equally derived from lumbar DRGs 3 and 4 in rat and human; the mean number of neurons per section was quantified from 10 non-sequential random L4 DRG sections per animal. Data are presented as the mean number of neurons per section, and the experimental unit is the animal. The number of activated SRSF1-positive neurons was calculated as a percentage of total neurons as designated by size. The total number of DRG neurons quantified was ~ 5000. SRSF1 spinal cord expression/localization was determined from 5 non-sequential random spinal cord sections per animal using ImageJ analysis. Images were converted to an 8-bit/grayscale image, then thresholding was applied across all
acquired images to determine the area of positive staining. Areas of positive staining were then quantified across all sections and groups. Colocalization was determined via the coloc2 plugin in ImageJ. Controls for VEGF-A and SRSF1 immunofluorescence consisted of incubation with only secondary antibody or substitution of the primary antibody with a species-matched IgG. Naïve and PSNI rats were terminally anesthetized and perfused with saline solution. The lumbar region of the spinal cord was extracted and frozen immediately on dry ice, then stored at − 80 °C. Protein lysates were prepared using lysis buffer with protease inhibitors and samples were homogenized. Protein extracts were stored at − 80 °C until required. Samples were run on a 4% stacking gel/12% running SDS-PAGE gel and transferred to nitrocellulose membrane for 1 h at 100 V. Membranes were then incubated with either α-SRPK1, α-SRSF1, α-SRSF1, α-Actin, α-VEGF-A165b, α-pan-VEGF-A or α-tubulin antibodies and visualized with a Femto chemiluminescence kit or Licor IRdye secondary antibodies. All data are represented as means ± SEM. Data were extracted and analyzed using Microsoft Excel 2010, GraphPad Prism v6 and ImageJ. Nociceptive behavioral analyses were between-subjects designs comparing the effects of drugs by two-way ANOVA with post hoc Bonferroni tests. In those experiments involving intrathecal and intraperitoneal administration of drugs in naïve animals, both hind paws were included in the analysis as replicates. EMG experiments used a within-subjects design and immunofluorescence experiments a between-subjects design, with the effects of drug treatment compared to baseline values using one-way ANOVA with post hoc Bonferroni tests. Immunofluorescence analysis of the spinal cord was taken from the entirety of the dorsal horn. DRG and spinal cord neuron counts were ascertained from multiple representative images, at least 10 per animal, and the mean value of those 10 calculated. Coloc2 analysis was used to ascertain the pixel intensity spatial correlation of SRSF1 and vGLUT1 staining in the spinal cord. This provides an automated measure of the correlation of pixel intensity for the two independent immunofluorescence channels for each sample, given as the Pearson's correlation coefficient. Western blot analyses of SRSF1 and VEGF-A family expression were determined from ImageJ densitometry analysis and compared using Mann-Whitney U tests. All F test statistics are described as a column factor with reference to drug/experimental grouping. NS designates not significant. SRPK1 and SRSF1 are key factors in the control of VEGF-Axxxa preferential splicing, particularly in disease. SRSF1 is expressed in the cytoplasm of dorsal root ganglia neurons in naïve animals. Upon activation, SRSF1 is known to translocate from the cytoplasm to the nucleus, where it is involved in pre-mRNA processing. Following PSNI, SRSF1 immunoreactivity in sensory DRG neurons was found to be nuclear in some but not all neurons. Matched IgG and omission of primary antibody controls showed no signal. PSNI injury induces activating transcription factor 3 (ATF3) expression in injured DRG sensory neurons. There was an increase in ATF3-positive DRG neurons after PSNI, with 43% of DRG neurons expressing ATF3 post-PSNI compared to only 1% in naïve animals. After PSNI, all nuclear-localized SRSF1-positive DRG neurons were also ATF3 positive, indicating nuclear SRSF1 was exclusively found in damaged neurons. Thus 45% of ATF3-positive neurons were also SRSF1 positive, with the remaining 55% of ATF3-positive
neurons negative for SRSF1. SRSF1 was expressed predominantly in the cytoplasm of 96% of larger neurofilament-200 (NF200) positive DRG neurons in naïve animals, and 71% of medium neurons, but was present in only a small proportion of neurons of area < 600 μm2. NF200 is a marker for myelinated neurons, indicating that SRSF1 expression is principally found in the somata of the A-fiber DRG neuronal population, but it was also found in peripheral sensory nerve fibers in PSNI animals. Following PSNI, activated SRSF1 co-localized with ATF3 and NF200 in DRG sensory neurons. The size distribution of activated SRSF1 in injured neurons was similar to that in naïve animals (69% of large cells and 21.5% of medium cells, but only a small proportion of small neurons). In contrast, only a minority of the IB4-binding, largely unmyelinated DRG neurons from nerve-injured animals were positive for SRSF1. The size distribution profile of DRG sensory neurons indicated that SRSF1-positive neurons are medium/large in size. SRSF1 immunofluorescence was also identified in the lumbar region of the spinal cord of PSNI rats, where it was co-localized with the marker of myelinated primary afferent central terminals, the vesicular glutamate transporter 1 (vGLUT1). There was an increase in SRSF1 expression in the central sensory terminals 2 days after PSNI, as assessed by immunofluorescence and quantified by Western blot. Co-localization analysis of vGLUT1 and SRSF1 staining showed a stronger colocalization in the PSNI animals. vGLUT1 is found in large diameter myelinated neurons, and is not found in either the peptidergic or IB4-binding C-nociceptor populations. Furthermore, SRSF1 was co-localized with vGLUT1 in DRG sensory neurons. There was no SRSF1 expression in the contralateral dorsal horn of either naïve or PSNI rats, although vGLUT1 expression was evident, indicating that the increased spinal SRSF1 expression was associated with injury to peripheral neurons and not a systemic response. The increased SRSF1 immunoreactivity in vGLUT1-positive central terminals after PSNI was accompanied by an increase in total VEGF-A expression in the spinal cord, assessed with the pan-VEGF-A antibody A20. VEGF-A was also co-localized with SRSF1 in some, but not all, central terminals. VEGF-Axxxb remained unchanged in the spinal cord after PSNI whereas total VEGF-A significantly increased. This indicates an increase in the expression of VEGF-Axxxa isoforms, resulting in a decrease in VEGF-Axxxb as a proportion of total VEGF-A. These results suggest that SRSF1 phosphorylation and activation at the level of the spinal cord are induced by PSNI, and are accompanied by a change in the balance of VEGF isoforms toward VEGF-Axxxa. As VEGF-A165a has been shown to be pro-nociceptive, and VEGF-A165b anti-nociceptive, it is therefore possible that changes in SRSF1 and VEGF-A expression at the level of the spinal cord are associated with the development of neuropathic pain behaviors. SRSF1 is activated through phosphorylation by the serine-arginine-rich protein kinase SRPK1. To test the hypothesis that PSNI neuropathic pain is dependent upon SRSF1 activation, we inhibited SRPK1 in the spinal cord of rats with intrathecal injection of the SRPK1 antagonist SRPIN340 (Ascent Scientific, Bristol, UK) at the time of nerve injury surgery. SRPIN340 has been used extensively to inhibit SRPK1 activity, and a multitude of studies have demonstrated its involvement in controlling alternative splicing of VEGF-A isoforms, through suppression of SR protein phosphorylation and
stabilization.SRPIN340 inhibits both SRPK1 and SRPK2 at concentrations equal or < 10 μM, and this has been shown previously to inhibit VEGF-Axxxa production in vitro and in vivo.PSNI induced a reduction in mechanical withdrawal thresholds in the ipsilateral hind paw as expected, and this was blocked by i.t. SRPIN340.Tactile and cooling allodynia which also developed in the ipsilateral hind paw were also inhibited by SRPIN340.Contralateral hind paws from vehicle and SRPIN340 treated groups did not differ from each other, indicating no effect of central SRPK1 inhibition on noxious processing from uninjured tissue.The PSNI model does not in itself lead to the development of heat hyperalgesia, but Hargreaves latencies did increase as a result of SRPIN340 treatment compared to vehicle treated PSNI animals, both ipsilateral and contralateral to the nerve injury, indicating a possible contribution of SRPK1/SRSF1 in normal nociceptive processing.SRPIN340 treatment also resulted in a significant inhibition of the increase in SRSF1 immunoreactivity in the central terminals of the dorsal horn of the spinal cord induced by PSNI.Furthermore, the administration of SRPIN340 resulted in increased distal splice site, anti-nociceptive isoform VEGF-Axxxb with no overall change in total VEGF-A expression, indicating a switch from proximal to distal splice site transcripts following SRPIN treatment in peripheral nerve injury.Intrathecal SRPIN340 not only blocked the development of nociceptive behaviors and altered alternative splicing in the dorsal horn, it also blocked indicators of central sensitization.The number of c-fos positive neurons in the spinal cord, a marker of central sensitization as assessed by immunofluorescent staining, was increased after PSNI and was significantly reduced by i.t. 
SRPIN340. SRPK1 protein expression within the spinal cord was not significantly altered following nerve injury alone. VEGF-Axxxa and VEGF-Axxxb differ only in their terminal 6 amino acids. The C-terminal sequence determines the efficacy of VEGFR2 signaling of the isoforms and their functional properties. On binding to VEGFR2, VEGF-Axxxa leads to full phosphorylation and activation of VEGFR2, whereas VEGF-Axxxb activates only partial VEGFR2 phosphorylation, leading to receptor degradation. VEGF-A165b also antagonizes VEGF-Axxxa binding. The different C-terminal sequences also determine the anti- or pro-nociceptive effects of the VEGF-A165b and VEGF-A165a isoforms, respectively, but both isoforms promote neuroprotection. Our findings above show that VEGF-A alternative splicing is altered in neuropathic states, and this is associated with pain behaviors. These results suggest that spinal cord VEGFR2 activation by different VEGF isoforms could contribute to nociceptive processing. Despite evidence from clinical studies that demonstrate an involvement of VEGF receptors in pain, and experimental evidence showing that spinal VEGF levels are associated with pain, there are few published findings on the effects of VEGF-A in spinal nociceptive processing. As spinal VEGF-A splicing and isoform expression, and therefore by inference VEGFR2 activation, were altered in PSNI, we determined the effect of VEGFR antagonism on central nociceptive processing. PTK787 is a tyrosine kinase inhibitor that has non-selective inhibitory actions on VEGFR1 and 2. It is 18-fold more selective for VEGFR1 and 2 over VEGFR3, and has slight selectivity for VEGFR2 over VEGFR1. In naïve rats, systemic VEGFR antagonism with PTK787 increased thermal withdrawal latencies to heat, indicating an analgesic effect. To determine the effect of PTK787 on one aspect of central nociceptive processing, we used the formalin test. Injection of formalin into the hind paw allows for the investigation of two distinct phases of acute nociceptive behavior. The initial phase is largely mediated by peripheral nerve activation, whereas the second has both a peripheral and a central component. One hour prior to formalin injection, rats were treated with either vehicle or PTK787. The acute phase was unaffected by PTK787 treatment. In contrast, the second phase was significantly reduced by systemic PTK787 treatment, for both the time spent flinching and the number of flinches. These results suggest a central component to the effect of VEGFR inhibition. To determine the targets of VEGF-A/VEGFR signaling in naïve rats, given the effects of the VEGFR antagonist on the second phase of the formalin test, we recorded electromyographic nociceptive withdrawals to selective nociceptor activation. Fast heating preferentially activates myelinated A-nociceptors and slow heating activates unmyelinated C-nociceptors, both inducing a withdrawal from the stimulus. To determine VEGFR2-specific actions, ZM323881 (5-[[7-(benzyloxy)quinazolin-4-yl]amino]-4-fluoro-2-methylphenol) was used locally. ZM323881 has sub-nanomolar potency and specificity for VEGFR2, with an IC50 > 50 μM for VEGFR1 and PDGFR. I.t.
ZM323881 led to a prolonged increase in the temperature at which the rats withdrew during A-nociceptor stimulation. ZM323881 did not have a significant effect on C-nociceptor withdrawals. These results show that VEGFR2 signaling is mediated, at least in part, by A-nociceptor activation in the spinal cord. Taken together, these results are consistent with the hypothesis that the VEGF-A isoforms may have different functions in the spinal cord, as in the periphery. We tested this by giving VEGF agonists and antagonists intrathecally, and measuring pain behaviors in mice and rats. PTK787 increased both mechanical withdrawal thresholds and heat nociceptive withdrawal time compared with vehicle-treated mice. In contrast, injection of 2.5 nM VEGF-A165a reduced mechanical withdrawal thresholds and heat withdrawal latencies, indicating a central pro-nociceptive action of VEGF-A165a in naïve mice. Conversely, 2.5 nM VEGF-A165b increased mechanical thresholds and heat withdrawal latencies, indicating a central anti-nociceptive effect. In rats, administration of a neutralizing antibody against VEGF-Axxxb had a similar effect to that of VEGF-A165a, decreasing withdrawal thresholds to mechanical stimulation and the time taken for withdrawal from heat, indicating that loss of endogenous VEGF-Axxxb from the spinal cord is painful in naïve animals. We mimicked the effect of spinal SRPK1 inhibition by increasing the proportion of spinal VEGF-A165b with exogenous protein, 2 days after the onset of neuropathic pain behavior in rats. Intrathecal VEGF-A165b reversed both mechanical and cold allodynia and increased thermal withdrawal latencies both ipsilaterally and contralaterally. Intraperitoneal PTK787 led to an increase in withdrawal latencies to heat both ipsilaterally and contralaterally in PSNI-injured rats. We show that the splicing factor kinase SRPK1 is a key regulator of spinal nociceptive processing in naïve and nerve-injured animals. We present evidence for a novel mechanism in which altered SRSF1 localization/function in neuropathic pain results in sensitization of spinal cord neurons. Inhibiting the splicing factor kinase SRPK1 can control alternative splicing of VEGF-A isoforms in the spinal cord, and can prevent the development of neuropathic pain. The development of neuropathic pain and associated neuronal excitation results from alterations in neuromodulatory protein function, leading to sensitization of peripheral and central nociceptive systems. Both short- and long-term changes occur in the expression and function of ion channels, receptors, excitatory and inhibitory neurotransmitters/modulators and second/third messenger systems, leading to the regulation of neuronal excitability through modulation of excitatory and/or inhibitory networks. Many of these alterations can be attributed to altered protein expression. Alternative pre-mRNA splicing is a rapid, dynamic process, recognised to be important in many physiological processes, including nociception. Such splicing of many channels and receptors, particularly calcium channels, is altered in pain states, but prior to our studies the control of alternative pre-mRNA splicing mechanisms had not been considered as a contributory factor in nociceptive processing. The splicing kinase SRPK1, a member of the serine-arginine-rich kinases, controls alternative pre-mRNA splicing of a relatively small number of identified RNAs. To date, there is strong evidence for the involvement of only one of these, VEGF-A, in nociception. SRPK1 controls the activity of splice factor SRSF1 that is
fundamental to the processing of pre-mRNA transcripts, their cellular localization/transport, and it may also be involved in translational repression.Phosphorylation and activation of SRSF1 results in nuclear translocation in a number of cell types.After nerve injury activated SRSF1 was only found in the nuclei of injured large excitatory neurofilament-rich DRG neurons whereas it was found in the cytoplasm of uninjured DRG neurons.Interestingly, SRSF1 was also seen in the central terminals of myelinated neurons after injury, but was not in central terminals in naïve animals.The nuclear localization suggests that neuronal SRSF1 is activated in mRNA processing in injured myelinated neurons.The redistribution of cytoplasmic SRSF1 to central terminals may reflect a change in neuronal function or mRNA transport.Little is understood of this function of SRSF1 in sensory neurons, although mRNA transport is closely linked to splicing, and specific mRNA splice variants can be targeted to axons.After traumatic nerve injury, injured DRG neurons demonstrate ectopic and/or increased evoked activity.These neuronal phenomena arise due to expression changes in key mediators of sensory neuronal excitability, ultimately underlying chronic pain phenotypes.Local neuro-immune interactions resulting from damage to neurons alter the properties of adjacent ‘uninjured’ afferents, including sensitization of A-fiber afferents, and together these drive excitability changes in the spinal cord.Mechanisms such as SRPK1/SRSF1-mediated alternative pre-mRNA splicing could underpin this ‘phenotypic switch’ change in properties, for example by controlling relative expression of ion channel splice variants in damaged neurons.Increased release of neurotransmitters and modulators from primary afferent central terminals is seen in the spinal cord following nerve injury.The cellular SRSF1 redistribution also suggests that phosphorylated SRSF1 could act to transport RNAs to the central terminals in nerve injury, and hence enable translation of specific isoforms in the nerve terminals.This reduction in the amount of SRSF1 present in afferent central terminals following intrathecal SRPK1 inhibition could be due to increased degradation of the SRPK1-SRSF1 complex and/or reductions in transport of mRNA to the central terminals of primary afferents.In addition to peripheral sensitization, PSNI results in mechanical and cold hypersensitivity and central sensitization.Intrathecal administration of the SRPK1 inhibitor SRPIN340 abolished pain behaviors including mechanical allodynia and hyperalgesia, and cold allodynia, and the central sensitization indicated by spinal c-fos expression.Central hyperalgesic priming of primary afferent nociceptors is dependent on local protein translation in central terminals, so we speculate that SRPK1/SRSF1 actions on RNA localization or protein translation may also contribute to this sensitization mechanism.As heat hyperalgesia was also reduced but PSNI animals did not display sensitization to radiant heat, this suggests that central SRPK1 inhibition not only prevents central sensitization, but also reduces activation of non-sensitized spinal nociceptive networks.SRPK1/SRSF1 controls the splice site choice in the alternative splicing of the vascular endothelial growth factor A family, leading to increased expression of VEGF-Axxxa isoforms.VEGF-Axxxa isoforms are widely known as pro-angiogenic/cytoprotective factors and this splicing pathway is strongly associated with solid tumor development.Peripheral 
administration of VEGF-A165a resulted in pain, as did, somewhat surprisingly, VEGFR2 blockade.These findings are supported by observations that systemic VEGF-A receptor blockers result in pain in clinical studies and painful experimental neuropathy.In contrast, given intrathecally, the VEGF-R2 antagonist, PTK787 decreased hypersensitivity in naïve and neuropathic rodents, but VEGF-A165a again increased hypersensitivity in naïve and spinal cord injury rats.This latter increase in pain was associated with aberrant myelinated fiber sprouting in dorsal horn and dorsal columns that may be VEGF-A dependent.In contrast, van Neervan and colleagues found only very small anti-nociceptive effects of intrathecal VEGF-A165a on pain, and no effect on neuronal function.Observed differences in VEGF-A effects could be attributable to different concentrations used, the source of VEGF-A165a, the degree of injury, or different endogenous isoform complement.Clinically, elevated levels of VEGF-A in the spinal cord of neuropathic pain patients correlate with reported pain.VEGF-A and VEGF-A receptor 2 are present in both peripheral and central nervous systems including spinal cord.rhVEGF-A165a has consistent pro-nociceptive actions peripherally and centrally, and our findings demonstrate that the different VEGF-A isoform subtypes have opposing actions on nociception in the spinal cord, as they do in the periphery.We are the first to show that the alternatively spliced isoform, VEGF-A165b has anti-nociceptive actions in the spinal cord.Taken together our observations of: increased spinal splicing factor expression, increased spinal pro-nociceptive VEGF-A165a but unchanged VEGF-A165b expression, and blockade of pain behavior and VEGF-A expression changes by SPRK1 inhibition, suggest that exogenous and endogenous VEGF-A isoforms modulate spinal nociceptive processing in naïve animals and after peripheral nerve injury.The sites of ligand/receptor expression, the differences in peripheral and central administration, and the current clinical use of many anti-VEGF treatments to treat varied diseases highlight the importance of recognizing the different functions and sites of action of the alternative VEGF-A isoforms.We found that VEGFR2 blockade resulted in inhibition of A fiber nociceptor-mediated nociception, suggesting that endogenous VEGF is involved in spinal processing of A fiber nociceptor inputs.Irrespective of the animal model or human condition of neuropathic pain, the prevailing evidence is that afferents are sensitized both C-fiber and A-fiber nociceptors, increasing the afferent barrage to the spinal cord through enhanced stimulus-evoked responses and/or increases in spontaneous/ongoing firing.Other mechanisms, such as neuro-immune interactions, can also contribute to changes in spinal excitability.The result of increased input to and excitability of spinal neurons is central sensitization leading to hyperalgesia and allodynia.It has been hypothesized that central sensitization allows low threshold A-fiber afferents to “access” pain pathways although precise mechanisms are unknown.Early reports of low threshold Aβ fiber mechanoreceptors sprouting into superficial laminae are still debated.A-fiber nociceptive afferents, as opposed to LTMs, have similar central terminals in superficial dorsal horn laminae in both naïve and nerve injured animals and may represent the afferents expressing SRSF1.What is clear is that altered central processing of myelinated nociceptor information contributes to neuropathic pain, 
such as secondary dynamic allodynia.Both C-fiber and A-fiber pathways can contribute to chronic pain, but this is the first time that VEGFR2 has been implicated in the processing of information in these pathways.If VEGFR2 is involved in A-fiber nociceptive pathways, then this provides a potential new mechanism for the modulation of nociception.Here we identify a novel pathway of nociceptive processing through a SRPK1-SRSF1-VEGF-Axxxa axis in myelinated nociceptors that is involved in nociception at the level of the spinal cord.During neuropathic pain development SRPK1 drives expression of pro-nociceptive VEGF-Axxxa at the level of the spinal cord.Therefore the development of SRPK1 targeted therapy, or other controls for alternative splicing, would be interesting targets for novel analgesic agent development.These findings highlight the importance of understanding control of RNA function, including alternative splicing in relation to pain, and considering specific interactions of splice factors in excitatory networks following peripheral nerve trauma. | Neuropathic pain results from neuroplasticity in nociceptive neuronal networks. Here we demonstrate that control of alternative pre-mRNA splicing, through the splice factor serine-arginine splice factor 1 (SRSF1), is integral to the processing of nociceptive information in the spinal cord. Neuropathic pain develops following a partial saphenous nerve ligation injury, at which time SRSF1 is activated in damaged myelinated primary afferent neurons, with minimal found in small diameter (IB4 positive) dorsal root ganglia neurons. Serine arginine protein kinase 1 (SRPK1) is the principal route of SRSF1 activation. Spinal SRPK1 inhibition attenuated SRSF1 activity, abolished neuropathic pain behaviors and suppressed central sensitization. SRSF1 was principally expressed in large diameter myelinated (NF200-rich) dorsal root ganglia sensory neurons and their excitatory central terminals (vGLUT1 + ve) within the dorsal horn of the lumbar spinal cord. Expression of pro-nociceptive VEGF-Axxxa within the spinal cord was increased after nerve injury, and this was prevented by SRPK1 inhibition. Additionally, expression of anti-nociceptive VEGF-Axxxb isoforms was elevated, and this was associated with reduced neuropathic pain behaviors. Inhibition of VEGF receptor-2 signaling in the spinal cord attenuated behavioral nociceptive responses to mechanical, heat and formalin stimuli, indicating that spinal VEGF receptor-2 activation has potent pro-nociceptive actions. Furthermore, intrathecal VEGF-A165a resulted in mechanical and heat hyperalgesia, whereas the sister inhibitory isoform VEGF-A165b resulted in anti-nociception. These results support a role for myelinated fiber pathways, and alternative pre-mRNA splicing of factors such as VEGF-A in the spinal processing of neuropathic pain. They also indicate that targeting pre-mRNA splicing at the spinal level could lead to a novel target for analgesic development. |
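The mechanical withdrawal threshold used throughout the study above is defined as the weight at which the von Frey force-response curve crosses 50% withdrawal frequency, but the interpolation itself is not specified in the text. The following is a minimal, illustrative sketch of one way such a threshold could be read off a force-response curve; the hair weights and withdrawal counts are hypothetical, and this is not the authors' analysis code.

```python
# Minimal sketch: read the 50% withdrawal threshold off a von Frey
# force-response curve by log-linear interpolation between the two
# bracketing hairs. All weights (g) and withdrawal counts (out of 5
# applications per hair) are hypothetical examples.
import numpy as np

weights = np.array([0.4, 0.6, 1.0, 1.4, 2.0, 4.0, 6.0, 8.0])  # hair forces (g)
withdrawals = np.array([0, 0, 1, 2, 3, 4, 5, 5])               # responses out of 5
freq = withdrawals / 5.0                                       # withdrawal frequency

def threshold_50(weights, freq, target=0.5):
    """Weight at which the response curve first crosses `target` frequency,
    interpolated linearly in log10(weight)."""
    for i in range(1, len(freq)):
        if freq[i - 1] < target <= freq[i]:
            logw = np.log10(weights[i - 1:i + 1])
            slope = (freq[i] - freq[i - 1]) / (logw[1] - logw[0])
            return 10 ** (logw[0] + (target - freq[i - 1]) / slope)
    return np.nan  # the curve never crosses the target frequency

print(f"50% withdrawal threshold ≈ {threshold_50(weights, freq):.2f} g")
```

With these made-up values the interpolated threshold is roughly 1.7 g; a logistic fit to the full curve would be an equally reasonable choice.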
353 | Strong TCRγδ Signaling Prohibits Thymic Development of IL-17A-Secreting γδ T Cells | γδ T cells make rapid non-redundant contributions in numerous disease settings that include malaria and tuberculosis infections, as well as immunopathologies such as psoriasis.In addition, γδ T cells display potent anti-tumor capabilities, such that a tumor-associated γδ T cell expression signature was the most favorable immune-related positive prognostic indicator in analyses of more than 18,000 tumors.Murine γδ T cells execute their effector capacities through provision of cytokines.Anti-tumor function is associated with IFNγ production, whereas IL-17A drives γδ T cell responses to extracellular bacteria and fungi.This delivery of IFNγ or IL-17A mirrors that of αβ T helper cell clones that acquire cytokine-secreting functions only at the point of peripheral activation in secondary lymphoid tissue.By contrast, γδ T cells largely acquire their effector potential in the thymus, well before their participation in subsequent immune responses.The mechanisms that drive thymic commitment to γδ T cell effector function are still unclear.“Strong” ligand-dependent signaling through the γδ T cell receptor was suggested to promote commitment to an IFNγ-secreting fate, with weaker, possibly ligand-independent TCR signaling being required for IL-17A production.However, recent studies have also implicated “strong” TCRγδ signals in commitment to an IL-17A-secreting fate.Alternatively, evidence exists for TCR-independent commitment to effector potentials.For example, IL-17A-secreting γδ T cells develop exclusively in a perinatal window, such that adoptive transfer of adult bone marrow will not reconstitute the IL-17A-secreting γδ T cell compartment.IL-17A-producing γδ T cells are also suggested to preferentially develop from CD4−CD8− double-negative 2 cells.And certain γδ T cell subsets may inherently require certain transcription factors.Clearly, a better understanding of γδ T cell development is required that will provide critical insight into γδ T cell biology.There is presently no accepted approach for stage-wise assessment of thymic γδ T cell development.Indeed, although studies have analyzed Vγ usage, acquisition of effector potential, gene transcription, and surface marker expression, a methodology that combines these parameters, akin to that for αβ T cells, is still lacking.Here, using precursor/product relationships, we identify thymic stages in two distinct developmental pathways that generate γδ T cells committed to subsequent secretion of IL-17A or IFNγ.This exposes a temporal disconnect between thymic commitment to effector fate and immediate capacity to display effector function.Cytokine-independent identification of fate-committed γδ T cells reveals the full contribution of Vγ-chain-expressing progenitors to both cytokine-producing pathways through ontogeny, highlighting sizable numbers of IL-17A-committed cells expressing Vγ1 and Vγ2/3 chains.Importantly, these analyses also permit definitive assessment of TCRγδ signal strength in commitment to γδ T cell effector fate; increased TCRγδ signal strength profoundly prohibits the development of all IL-17A-secreting γδ T cells, regardless of Vγ usage but promoted the development of γδ progenitors along the IFNγ pathway.These observations provide important insights into functional γδ T cell biology.There is no consensus for describing stages in murine γδ T cell development.Thus, we re-assessed, on perinatal, neonatal, and post-natal thymic γδ T cells, the 
expression of γδ T cell surface markers combined with intracellular staining for IFNγ and IL-17A.This revealed that staining for CD24, CD44, and CD45RB neatly segregated both thymic and peripheral γδ T cells, throughout ontogeny, into two apparent “pathways”; CD24− cells that expressed high CD44 but not CD45RB were committed to IL-17A secretion, but did not make IFNγ, whereas cells that had upregulated CD45RB had potential to secrete IFNγ but not IL-17A.CD45RBhi γδ T cells can also upregulate CD44, which correlates with NK1.1 and CD122 expression and robust peripheral commitment to IFNγ secretion.Consistent with IL-17A-secreting potential, CD44hiCD45RB− γδ T cells were RORγt+T-betlo and expressed significant CD127 that appeared to follow upregulation of CD44.By contrast, CD44+CD45RB+ γδ cells were T-bet+RORγtlo and displayed little CD127.Finally, although we could not detect IL-4-secreting γδ T cells directly ex vivo, a small fraction of the CD44+CD45RB+ subset from both post-natal thymus and adult spleen produced IL-4 after 18 hr culture in PMA/ionomycin.Thus, in the thymus and periphery, CD24, CD44, and CD45RB neatly segregate γδ T cells into subsets with IL-17A- or IFNγ-secreting potential.CD44 and CD45RB appear to segregate CD24− γδ T cells into two developmental pathways, whereby CD44−CD45RB− cells develop as either CD44hiCD45RB− IL-17A-committed γδ T cells or CD45RB+ IFNγ-committed γδ T cells.To formally investigate this hypothesis, we used fetal thymic organ culture that re-capitulates thymic T cell development in vitro and is suited to studying γδ T cell development that occurs predominantly in the perinatal period.Indeed, E15 thymic lobes cultured for 7 days generate γδ T cell subsets similar to those observed ex vivo.To show precursor/product relationships, we first took E14 lobes and cultured them in FTOC for either 1 or 2 days.Ex vivo, γδ T cells from E14 lobes are all CD24+, with a sizable proportion also CD25+.Consistent with CD25+ γδ T cells’ being the earliest γδ T cell subset in the thymus, the proportion of these cells is notably reduced over a 2-day culture period.On day 1, CD24+CD25− cells were the dominate subset, whereas by day 2, a substantial proportion of cells became CD24−; this suggests a developmental progression from CD25+CD24+ to CD25−CD24+ to CD25−CD24−.We next fluorescence-activated cell sorting-purified the four CD24− γδ T cell populations from 7-day FTOC of E15 thymic lobes.These were CD44−CD45RB− a cells, CD44−CD45RB+ b cells, CD44+CD45RB+ c cells, and CD44hiCD45RB− d cells.Sorted cells were then cultured for a further 5 days on OP9-DL1 stromal cells, which also support thymic T cell development, and subsequently re-assessed.On re-analysis, both CD44+CD45RB+ and CD44hiCD45RB− subsets displayed characteristics of terminally differentiated cells, retaining both their CD44/CD45RB expression and complete and full commitment to IFNγ- and IL-17A-secreting potential, respectively.In contrast, the CD44−CD45RB− subset differentiated to all other phenotypes, with their CD45RB+ products displaying expected IFNγ-secreting potential and their CD44hiCD45RB− products appearing committed to IL-17A.Finally, CD44−CD45RB+ cells gave rise to a significant number of CD44+CD45RB+ products, suggesting a developmental pathway from CD44−CD45RB− to CD44−CD45RB+ to CD44+CD45RB+ for an IFNγ-secreting fate.Thus, CD24, CD44, and CD45RB identify two distinct γδ T cell development pathways that segregate commitment to either IFNγ- or IL-17A-secreting potential.The preferential use of 
γδTCRs that incorporate certain Vγ-regions has been frequently correlated with peripheral cytokine-secreting potential: Vγ4+ and Vγ6+ cells being linked to IL-17A production, with Vγ1+ and Vγ5+ cells linked to IFNγ.However, this is difficult to study in the early thymus, as only a minority of neonatal CD24− γδ T cells display immediate cytokine-secreting capacity after 4 hr stimulation with PMA/ionomycin.In contrast, the vast majority of these CD24− cells have already entered one of the two developmental pathways described above and are thus already committed to a cytokine-secreting fate.To use this extra sensitivity to observe cytokine-committed TCRγδ+ thymocytes, we assessed through ontogeny, from E17 to day 8 post-birth, Vγ usage of γδ T cells committed to either the IL-17A or IFNγ pathway using staining strategies that detect Vγ1+, Vγ2/3+, Vγ4+, Vγ5+, Vγ6+, and Vγ7+ cells.Before birth, Vγ5+ and Vγ6+ cells dominated the IFNγ-committed and IL-17A-committed pathways, respectively.Indeed, at E17 almost complete segregation of Vγ5+ cells to a CD45RB+ fate and Vγ6+ cells to a CD44hiCD45RB− fate was observed, which corresponded to T-bet expression in Vγ5+ cells and RORγt expression in Vγ6+ cells.However, such precise mapping of Vγ staining to one of the two pathways was not observed for other Vγ regions, as Vγ1+, Vγ2/3+, and Vγ4+ cells were clearly represented in both routes of development.Indeed, Vγ2/3+ cells, which have been overlooked in murine γδ T cell studies to date, make sizable contributions to both pathways and are as capable as either Vγ4+ or Vγ6+ cells of making IL-17A.Finally, Vγ7+ cells, which are readily identifiable in early CD24+ subsets, are barely detected in either of the mature CD24− pathways, supporting the view that these cells leave the thymus at an early stage of thymic development to seed the murine intestine.The factors that dictate commitment to an IL-17A- or IFNγ-secreting fate are still unclear.Central to this is the role of TCRγδ signaling, as although consensus suggests that “strong” TCRγδ signals favor development of IFNγ-committed cells, conflicting views exist as to the strength of TCRγδ signal required for an IL-17A-secreting fate.In 7-day FTOC of E15 thymic lobes, addition of anti-TCRγδ antibody GL3, which increases TCRγδ signal strength, clearly reduced the generation of IL-17A-committed cells while significantly increasing the number of CD44+CD45RB+ cells.The effect on those cells capable of immediate IL-17A secretion was particularly dramatic, reducing both absolute cell number and the amount of IL-17A produced per cell.This effect was GL3 dose dependent, was not the result of TCR signaling-induced apoptosis, and resulted in a complete absence of all Vγ-expressing cells in the IL-17A pathway if GL3 was added to 7- to 14-day FTOC of E14 thymic lobes.Moreover, intraperitoneal administration to pregnant wild-type mice at 13-days post-conception of the anti-CD3ε antibody 2C11, which induces similar developmental changes as GL3 in vitro, also resulted in profound reduction of IL-17A-commited γδ T cells in pups at day 2 after birth.Finally, and consistent with these findings, cells from the IFNγ pathway, from either 7-day FTOC or day 2 pups ex vivo, displayed significantly more CD73, than cells from the IL-17A pathway, regardless of Vγ usage.TCR signals are transduced, in part, by signals through the ERK/MAP kinase cascade.Hence, to assess the consequences of weaker TCRγδ signaling, the MEK1/2 inhibitor of ERK signaling UO126 was added to 7-day E15 
FTOC.Compared with control cultures, UO126 significantly increased cell number in the IL-17A-committed pathway while reducing the ratio of terminally differentiated CD44+CD45RB+ cells to less mature CD44−CD45RB+ cells in the IFNγ-committed pathway.Importantly, UO126 could also rescue the number of IL-17A-committed cells in FTOC containing GL3 and improved the ratio of CD44+CD45RB+ to CD44−CD45RB+ cells.Thus, manipulation of TCRγδ signal strength with either crosslinking anti-TCRγδ antibody or an ERK pathway inhibitor demonstrates that strong TCRγδ signals are prohibitive for the generation of γδ T cells destined to secrete IL-17A, regardless of the Vγ chain they use.Here, we describe a straightforward methodology to study the sequential thymic development of murine γδ T cells.TCRδ+CD25+ cells, which are considered the earliest thymic γδ T cell subset, begin development by downregulating CD25, followed by CD24.How this is triggered remains to be elucidated, but TCRγδ signaling was shown to be necessary to pass beyond a TCRγδloCD25+ stage.When cultured as a population, CD24− γδ thymocytes that are CD44−CD45RB− give rise to either IL-17A-committed CD44hiCD45RB− cells that express RORγt but not T-bet or to IFNγ-committed CD45RB+ cells that express T-bet but not RORγt.Interestingly, our CD44/CD45RB plots show overlap with CD44/Ly-6C plots suggested to identify naive-like and memory-like peripheral γδ T cell subsets.Thus, combination staining of CD44 with both Ly-6C and CD45RB may prove particularly insightful.Importantly, our analyses identify two thymic pathways of functional γδ T cell differentiation that diverge from a common CD24−CD44−CD45RB− phenotype.Whether each CD24−CD44−CD45RB− cell has potential to enter both pathways, or whether the subset instead contains both IL-17A- and IFNγ-committed progenitors, is still uncertain.However, that some CD24−CD44−CD45RB− γδ T cells can already make either IL-17A or IFNγ supports a model in which commitment to an IL-17A- or IFNγ-secreting fate, with initial expression of corresponding “master” transcriptional regulators, spans an early window of development that includes CD24+ subsets.Nonetheless, commitment appears fully established by the time cells upregulate either CD44 or CD45RB from the CD24−CD44−CD45RB− stage.Notably, these committed cells do not necessarily display immediate capacity to secrete cytokine.This is particularly evident for CD45RB+ cells in the IFNγ pathway as only a minority secrete IFNγ ex vivo.However, when isolated and cultured on OP9-DL1 cells for a further 5 days, virtually all then secrete IFNγ.These observations suggest thymic commitment of γδ progenitors to distinct effector fates is distinguishable from actual capacity to secrete cytokine.The identification of surface marker-defined, cytokine secretion-independent developmental pathways for γδ T cell generation facilitated re-examination of TCRγδ signal strength requirements for thymic commitment of γδ progenitors to specific effector fates.Strong antibody-induced TCRγδ signaling favored the IFNγ pathway.This was consistent with significantly higher expression of CD73 on cells committed to secrete IFNγ compared with those in the IL-17A pathway.Cells in the IFNγ pathway express CD45RB that is upregulated on developing Vγ5+Vδ1+ cells in the presence of Skint1, a possible ligand for the Vγ5Vδ1 TCR.In the absence of Skint1, Vγ5+ cells instead adopt characteristics of Vγ6+ cells, including capacity to secrete IL-17A.In our studies, strong antibody-induced TCRγδ signaling 
prevented the development of all cells destined for the IL-17A pathway, which included a sizable number of Vγ1+ and Vγ2/3+ cells, as well as Vγ4+ and Vγ6+ cells.This appears at odds with a recent report that revealed an absence of IL-17A-committed γδ T cells in SKG mice that have severely reduced Zap-70 activity.Although interpreted as showing that strong TCRγδ signaling is required for commitment to an IL-17A-secreting fate, we instead prefer the explanation that generation of IL-17A-producing γδ T cells is simply Zap-70 dependent.Importantly however, our data show that this Zap-70 dependence cannot equate to transducing a strong TCRγδ signal.Our results indicate that at least one downstream mediator of strong TCRγδ signaling is the ERK/MAP kinase pathway, as its inhibition promoted the IL-17A pathway while reducing progression through the IFNγ pathway.Moreover, it reversed many effects of increased TCRγδ signal strength mediated by anti-TCRδ antibody.Thus, activation of the ERK/MAP kinase pathway by strong TCRγδ signaling is a key limiter of progression to an IL-17A-secreting fate.As mentioned above, such strong signaling may reflect engagement of TCR ligand, as supported by complete segregation, in the prenatal thymus, of Vγ5+ cells to the IFNγ pathway and Vγ6+ cells to the IL-17A pathway.However, γδ T cells bearing Vγ1+, Vγ2/3+, or Vγ4+ TCRs were readily detected in both pathways.This could imply that only some of these TCRs engage ligand.Alternatively, ligand-independent signaling that depends on surface expression levels and/or features of particular Vγ regions may dictate the proportion of cells that successfully engage the ERK/MAP kinase pathway.Finally, Vγ7+ cells, which largely seed the murine intestine, are not present in either pathway, suggesting that factors other than TCRγδ signaling should also be considered.These ideas, and the involvement of other downstream signaling cascades, are currently under investigation.Additional details are available in Supplemental Experimental Procedures.C57BL/6 mice were purchased from Charles River Laboratories.All mice were fetal, neonatal, post-natal, or adult.All experiments involving animals were performed in full compliance with UK Home Office regulations and institutional guidelines.Thymic lobes from B6 mice were cultured on Nuclepore membrane filter discs in complete RPMI-1640 medium plus 10% fetal calf serum for 7–14 days.OP9-DL1 cells were provided by J.C. Zúniga-Pflücker.For detection of Vγ5Vδ1 and Vγ6Vδ1, cells were pre-stained with GL3 followed by 17D1.For i.c. cytokine staining, cells were stimulated with 50 ng/ml phorbol 12-myristate 13-acetate and 1 μg/ml ionomycin for 4 hr at 37°C.Acquisition was performed with an LSR-II or a Canto II.Analysis was performed using FlowJo.GraphPad Prism software was used to analyze data, which are presented as mean ± SD.Two-tailed Student’s unpaired t test was used when only two groups were compared, and one-way ANOVA with Tukey’s test was used for multiple comparisons.Significance was determined at p ≤ 0.05.N.S. and C.L.G. performed experiments.N.S. and D.J.P. analyzed the data.B.S.-S.and D.J.P. designed the study.D.J.P. and N.S. wrote the paper. | Despite a growing appreciation of γδ T cell contributions to numerous immune responses, the mechanisms that underpin their thymic development remain poorly understood. 
Here, using precursor/product relationships, we identify thymic stages in two distinct developmental pathways that generate γδ T cells pre-committed to subsequent secretion of either IL-17A or IFNγ. Importantly, this framework for tracking γδ T cell development has permitted definitive assessment of TCRγδ signal strength in commitment to γδ T cell effector fate; increased TCRγδ signal strength profoundly prohibited the development of all IL-17A-secreting γδ T cells, regardless of Vγ usage, but promoted the development of γδ progenitors along the IFNγ pathway. This clarifies the recently debated role of TCRγδ signal strength in commitment to distinct γδ T cell effector fates and proposes an alternate methodology for the study of γδ T cell development. |
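The statistics for the study above were run in GraphPad Prism, using a two-tailed unpaired t test for two-group comparisons and one-way ANOVA with Tukey's test for multiple comparisons, with significance at p ≤ 0.05. A minimal sketch of the equivalent ANOVA/Tukey workflow in Python is shown below; the group labels and per-lobe cell counts are invented purely for illustration and do not reproduce the study's data.

```python
# Minimal sketch of one-way ANOVA followed by Tukey's HSD, mirroring the
# GraphPad Prism analysis described in the text. The counts below are
# hypothetical IL-17A-committed cells per FTOC lobe, invented for illustration.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([410, 455, 390, 430, 470])    # vehicle FTOC
gl3 = np.array([120, 95, 140, 110, 105])         # + anti-TCRdelta (GL3)
gl3_uo126 = np.array([350, 380, 330, 400, 365])  # + GL3 + UO126

f_stat, p_val = stats.f_oneway(control, gl3, gl3_uo126)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

counts = np.concatenate([control, gl3, gl3_uo126])
groups = (["control"] * len(control) + ["GL3"] * len(gl3)
          + ["GL3+UO126"] * len(gl3_uo126))
print(pairwise_tukeyhsd(counts, groups, alpha=0.05))  # pairwise comparisons
```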
354 | Cirrhosis of liver: Interference of serpins in quantification of SERPINA4 – A preliminary study | Cirrhosis of liver is a pathological condition characterized by diffuse fibrosis, severe disruption of intra hepatic arterial and venous flow, portal hypertension and finally liver failure resulting from varied etiologies of chronic liver diseases .Despite different etiological factors, pathological characteristics, degeneration, necrosis of hepatocytes, replacement of parenchyma by fibrotic tissue, regenerative nodules; loss of liver functions are common .Liver is a major organ with synthetic capacity to produce plasma proteins.Reduction in concentration of plasma proteins is reflected as decreased hepatic synthesis .Serpins are class of plasma proteins that have similar structure and diverse functions.Serpins are divided into clades based on sequence similarities.In humans, 36 serpin coding genes and 5 pseudogenes are identified based on phylogenetic relationship .Extracellular clade A molecules are localized on chromosomes 1, 14 and X. Intracellular clade B serpins are localized on chromosome 6 and 18 .Serpins are interrelated due to highly conserved core structure .Majority of clade A serpins are localized on chromosome 14 which are expressed from liver.SERPINA1, is an inhibitor of neutrophil elastase .Pseudogene SERPINA2 indicates an ongoing process of pseudogenization .Antichymotrypsin, SERPINA3 is an inhibitor of chymotrypsin and cathepsin G found in blood, liver, kidney and lungs .SERPINA5 inhibits active C protein and are expressed from liver .Non inhibitory hormone binding protein, SERPINA6 is a cortisol transporter .SERPINA9 which is expressed from liver plays an important role in maintaining native B cell .The inhibitory protein of activated coagulation factors Z and XI is SERPINA10 .SERPINA11 is a pseudogene and uncharacterized .SERPINA12 is an inhibitory protein of kallikrein and plays a role in insulin sensitivity .Kallistatin, belongs to clade A serpins encoded by the SERPINA4 gene with 5 exons and 4 introns mapped to chromosome 14q31-32.1 in humans and expressed from liver cell lines.It is an acidic glycoprotein with a molecular weight of 58kD and isoelectric pH ranges from 4.6 to 5.2 .Apart from inhibitory action on human tissue kallikrein, it is a potent vasodilatory protein .Kallistatin is involved in prevention of cancer, cardiovascular disease and arthritis through the effects of antiangiogenic, anti-inflammatory, antiapoptotic and antioxidative properties .Kallistatin concentration in serum depends on the degree of severity of different chronic liver diseases .Interference of other serpins with antibodies may give a significant false positive/negative value in quantitative estimations of kallistatin, which may mislead in assessment of extent of the disease.Hence, in the present study, an attempt has been made to identify immunological cross reactivity between kallistatin and other serpins in cirrhotic liver and compared with healthy subjects.Blood samples were collected from 20 subjects: 10 clinically and diagnostically proven cirrhotic liver subjects with varying degree, age and gender matched 10 healthy subjects from R. L. 
Jalappa Hospital and Research Centre, Kolar, Karnataka, India. Collection of blood samples from cirrhotic liver and healthy subjects was carried out after obtaining informed consent, and the study was approved by the Institutional Ethical Committee. Serum was collected from clotted blood using serum separator tubes centrifuged at 4000 rpm for 10 min. Serum was stored at − 20 °C for further analysis. All the samples were used to find cross reactivity of other serpins with kallistatin by western blot after protein segregation by sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE). Primary monoclonal antibodies specific for kallistatin, along with secondary antibodies and recombinant kallistatin, were procured from R&D Systems, USA. Other chemicals of analytical grade were procured from Bio-Rad and Sigma-Aldrich, USA. SDS gels were prepared as per standard protocol. Cirrhotic liver and healthy subjects' serum samples were loaded in different gels and SDS-PAGE was carried out in duplicate at 25 mA in 1X SDS running buffer. After electrophoresis, gels were incubated in fixing solution at room temperature for 20 min. At this point, one set of gels was used for transfer onto PVDF membranes for western blotting, and the duplicate gels were stained with colloidal Coomassie brilliant blue on a shaker at room temperature for 2 h. Excess staining solution was removed and the gels were washed with 10% acetic acid and placed in deionized water for destaining until bands appeared. Proteins separated by SDS-PAGE were transferred onto PVDF membranes using a Trans-Blot SD semi-dry transfer cell at 15 V for 2 h. After transfer, PVDF membranes were placed in blocking buffer and incubated overnight at 4 °C. After overnight blocking, PVDF membranes were washed with 1X PBST thrice for 3 min each. Primary antibodies were diluted and PVDF membranes were incubated in the diluted primary antibody solution at room temperature with slow shaking on a rocker for 2–3 h.
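Band identity in the gels described above is judged against a molecular weight marker (see the Results that follow), exploiting the roughly log-linear relation between molecular weight and relative migration (Rf) in SDS-PAGE. The sketch below illustrates that calculation only; the ladder sizes and Rf values are hypothetical and are not measurements from this study.

```python
# Minimal sketch: estimate a band's apparent molecular weight from an SDS-PAGE
# marker standard curve (log10(MW) ~ linear in relative migration, Rf).
# The ladder sizes and Rf values below are hypothetical examples.
import numpy as np

marker_kda = np.array([250, 150, 100, 75, 50, 37, 25])            # hypothetical ladder
marker_rf = np.array([0.10, 0.18, 0.28, 0.36, 0.50, 0.62, 0.80])  # hypothetical migration

slope, intercept = np.polyfit(marker_rf, np.log10(marker_kda), 1)

def apparent_mw(rf: float) -> float:
    """Apparent molecular weight (kDa) of a band at relative migration rf."""
    return 10 ** (slope * rf + intercept)

# With these made-up values, a band at Rf ~0.48 comes out near 59 kDa,
# i.e. in the range reported for kallistatin (~58 kDa).
print(f"band at Rf 0.48 ≈ {apparent_mw(0.48):.0f} kDa")
```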
PVDF membranes were washed with 1X PBST thrice for 3 min each.Secondary antibody was diluted and PVDF membranes were incubated in diluted secondary antibody solution at room temperature with slow shaking on rocker for 2–3 h.After incubation, PVDF membranes were washed with 1X PBST thrice for 3 min each.12.5 mL Tris buffer, 30 µl of 30% H2O2, a pinch of DAB were added into detection tray, mixed well and PVDF membranes were kept into the tray.The tray was gently shaken for a period of 10 min until the colour developed in the control lane .SDS-PAGE and western blot were repeated with pooled and concentrated cirrhotic liver and healthy serum samples along with recombinant kallistatin.Dialyzing tube containing serum to be concentrated is coiled up in a beaker and covered with commercial sucrose for 4 h.The liquid accumulated outside the dialyzing bag was poured off.Tubing was removed from the sugar at the end of 4 h and is tied off above the solution placed in water to dialyze away the sugar .Since SDS-PAGE is an efficient tool for separation of proteins based on molecular weight, proteins in serum were separated in both diseased and healthy gels along with corresponding molecular weight marker.Recombinant kallistatin was spotted on another SDS-PAGE with pooled and concentrated samples of cirrhotic liver and healthy subjects.Western blot analysis allowed identification of cross reactivity of serpins in diseased and healthy samples using monoclonal antibodies specific for kallistatin followed by secondary antibodies conjugated with HRP.No bands were observed on PVDF membranes of diseased as well as healthy samples.However, a significant band was observed with recombinant kallistatin.There was no band detection with pooled and concentrated samples of diseased and healthy indicating that there is no cross reactivity of other serpins with kallistatin.Serpins are broadly distributed family of protease inhibitors which circulates in blood and are mainly expressed from liver .Highly conserved similar structure of serpins are crucial for their inhibitory function and play an important role in haemostasis and fibrinolysis .These proteins are suicide or single use inhibitors that use conformational changes to inhibit target enzymes .Inhibitor binds tightly to a protease by incorporating reactive centre loop of inhibitor into β sheet of the enzyme by forming SDS and heat stable complex .A highly conserved secondary and tertiary structure is the main criteria for the classification with modest amino acid similarities .Despite chromosomal proximity, these genes have divergent function .Serpin genes are present in clusters on same chromosome with common precursor.The human genes encoding α1-antitrypsin, corticosteroid-binding globulin, α1-antichymotrypsin and protein C inhibitor are mapped to the chromosome 14q32.1.Kallistatin is also mapped within the region on the same chromosome .In spite of similarity in chemical properties having minor amino acids sequence resemblance and mapped on same gene, our study did not show any cross reactivity between serpin class proteins in cirrhotic liver and healthy subjects which may be attributed due to absence of identical epitope among serpins.Cross reactivity occurs when two different serpins share an identical epitope.Epitope comprises approximately 15 amino acids of which 5 amino acids influence strongly for binding to definite paratope of Fab region on variable domain of antibody .Due to the absence of identical epitope among serpins might be reason for no cross 
The absence of an identical epitope among serpins may therefore be the reason why no cross reactivity was observed in cirrhotic liver and healthy subjects. Expression of serpin proteins into the blood stream is reduced in cirrhotic liver subjects because of the decreased synthetic function of the liver. Polymerization is induced by mutations or mild denaturation, a molecular basis that is common to all serpins. The conformational change in the serpin structure is crucial for its function but is also what makes the protein susceptible to the effects of mutations. Mutations that bring about polymerization can occur anywhere in the serpin and lead to the formation and accumulation of stable polymers with similar properties. Serpin polymerization can also occur through domain swapping, as recorded in antithrombin, α-1 antitrypsin and neuroserpin; further studies are needed to evaluate domain-swapping polymerization across the entire serpin family. Polymerization leads to a reduction in serpin secretion together with qualitative changes in protein structure. The etiological factors of cirrhosis of the liver may not induce the kind of polymerization that would lead serpin family proteins to share an identical epitope. This may be the reason why no cross reactivity was observed in cirrhotic liver subjects in our study. Even though the incidence of diseases caused by serpin polymerization is rare, homozygous mutations in the SERPINA1 gene are associated with liver disease, including cirrhosis. A large number of human serpin gene variants have been found as a result of mutations, and these are associated with many diseases. SERPINA1 alone has 1411 SNPs, and there are 906 SNPs for SERPINA4 in NCBI's dbSNP database. Mutational studies of cross reactivity, aimed at identifying an identical epitope, might be difficult at this point because of the huge diversity of serpins. The concentration of kallistatin is low in cirrhotic liver as well as in healthy subjects. Hence, the sensitivity of the monoclonal antibodies might not be sufficient to detect kallistatin. In the case of any cross reactivity, these antibodies may detect other serpins whose serum concentrations are in the nanogram range. Use of more sensitive antibodies might detect kallistatin in cirrhotic liver as well as in healthy subjects and enhance successful immunological interactions with other serpins. For the separation of proteins, two-dimensional electrophoresis might be a better option than single-dimensional SDS-PAGE. In the present study, no immunological cross reactivity was observed between other serpins and SERPINA4, owing to the absence of an identical epitope, in cirrhotic liver and healthy subjects. Because of the enormous diversity of serpins, validation of the quantitative ELISA should be carried out to check for interference from other factors along with cross reactivity, using different types of antibodies. Further quantitative studies of kallistatin may provide insights into potential diagnostic options for chronic liver diseases.
Materials and methods Blood samples were collected from 20 subjects (10 cirrhotic liver, 10 healthy) from R.L. Jalappa Hospital and Research Centre, Kolar, Karnataka, India. Separation of proteins was carried out by SDS-PAGE. Cross reactivity study was analyzed using western blot. Results Proteins present in cirrhotic liver and healthy subject's serum were separated by SDS PAGE. There was no band detection on both (cirrhotic liver and healthy) PVDF (polyvinylidene diflouride) membranes. However, a significant band was observed with recombinant kallistatin. Conclusion Structurally similar serpins with minor amino acid sequence similarities did not show any immunological cross reactivity with SERPINA4 due to non identical epitope in cirrhotic liver and healthy subjects. Present study revealed that there is no interference of serpins for immunological reactions in quantitative estimation of kallistatin which needs further validation. |
355 | A multi-criteria GIS model for suitability analysis of locations of decentralized wastewater treatment units: case study in Sulaimania, Iraq | Lack of sufficient available water resources to cover the requirements of a city is one of the crucial problems all over the world nowadays. A decentralized wastewater treatment system is considered a powerful solution to the problem of water shortage. The treated wastewater could be reused for many purposes such as irrigation, groundwater recharging, car washing, industrial uses, and firefighting. The locations of the treatment units are critical to getting the best benefit from the reused water. Selecting the treatment unit's position should also not create any health hazard to the community. Some essential criteria should be considered when selecting the best location, such as environmental standards, social aspects, cost, and other technical details. Moreover, land availability inside the city is another important factor that affects the selection of the location of the treatment units. First, a careful study and the collection of information are required to select appropriate sites for the decentralized treatment units. After collecting the required data, the right techniques should be used to find suitable locations. One of the methods is the Analytical Hierarchy Process (AHP), which is used widely in decision analysis. AHP, as originally presented, is based on the comparison of the importance of two elements. Combining GIS with Multi-Criteria Decision Analysis is a powerful method in land evaluation, and many factors are usually considered when using AHP to select a suitable land location. One study used GIS with AHP to build a multi-criterion model to select appropriate locations for decentralized wastewater units in the Chennai area in India. Six thematic layers were selected: land use, population density, soil type, land slope, cost and technology. Another study adopted AHP, using the Expert Choice 11 software, for selecting suitable locations of decentralized wastewater units in Qom city in Iran. The criteria selected were population density, land slope, land use, and reuse, with respect to the environmental, economic, and social conditions of Qom. A further study used GIS for siting areas for a stabilization pond (SP) system to be used for the treatment of wastewater from rural regions in Thrace. The factors considered in that selection methodology were environmental criteria, land topography, land use, geological formation, distance from the SP units to the major rivers, distance to the existing cities and villages, and effluent characteristics. Suitable locations for using the Land Application Method for treated sewage produced from a wastewater treatment plant in Christchurch city in New Zealand were also selected using GIS. That selection was based on many factors: social acceptability, soil type, economics, weather, land slope and environmental factors. Other researchers used other methods, such as the Genetic Algorithm (GA), to select suitable locations for decentralized treatment units. One such study developed an optimization model using a Genetic Algorithm to find optimum design configurations of a decentralized wastewater treatment system with regard to the best locations and number of treatment units so as to obtain minimum cost and highest benefit. This work aims to find the optimal locations for the decentralized treatment units in Sulaimania City in Iraq, whose treated wastewater is to be reused for irrigation, by using AHP combined with GIS. Specifying those locations inside a city like Sulaimania needs a careful study as most of the districts are residential and
the population densities are different from region to another region.Moreover, the city land is mountainous, and there is a big difference in the land levels.Finally, the locations of the existing sewer network are also considered as one of the significant factors.This study is carried out in Sulaimania City, Iraq - Kurdistan.The city has a mountainous topographic with elevations ranged from.The latitudes are between, and the longitudes are between.Sulaimania City divided into four suburbs, which are the main suburbs, Bakrajo, Rapareen and Tasluja.The research focused on Sulaimania Main suburbs only, and the study area named as Sulaimania City.The case study total area is 114 km2 with 156 districts as shown in Fig. 1.The sewer system of the city is combined and concrete box sewers used as main trunk sewers.The arrangements of the main sewer networks consist of 10 separate groups.The groups were named as; Lines A, B, C, D, E, F, G, H, J, and I. Each sewer line is divided into branches as shown in Fig. 2.At the end of each main sewer boxes, the wastewater is currently discharged to open areas through separate outlets then to Qilyasan Stream.The arrangements of the sewer networks of Sulaimania City are suitable to be used in decentralized wastewater treatment systems.The city suffers from a lack of water because of the rapid expansion of the city, climate changes and immigration from surrounding areas.There are many green zones in Sulaimania City like green parks of different sizes, green zones in the road medians and the green regions inside many residential compounds.Fig. 3 shows the locations of the green areas of Sulaimania City.The total green land size is about 6.58 km2, excluding the green areas inside the residential compounds.The study methodology and technique consisted of multiple works such as site visits to collect data and information, GIS works for mapping and modeling, Multi-Criteria Decision Model and statistical analysis to solve the model.The details are explained in the following sections;,Site visits and many interviews with authority representatives to collect information about the study area were done.Preliminary Selection of the Nominated Lands was done and selection was made based on many criteria, which are explained hereafter: Size of the selected lands was more than 1,000 m2, Locations are not at the beginning of the sewer network as there will not be enough flow to treated, Selected locations have accessibility to the roads,Selected lands are not located on a high level area in compare to the sewer box level, andSelected lands are located inside or close to the green regions.Based on the mentioned criteria preliminary selections of 134 nominated lands are obtained.The areas gathered into 10 groups which are; NA, NB, NC, ND, NE, NF, NG, NH, NI, and NJ and they are located on sewer lines A, B, C, D, E, F, G, H, I, and J respectively.Fig. 
4 shows part of the selected, nominated areas on lines A and B. The overall land suitability index is calculated as S = (Σ Wi × Ci) × (Π Bj), where S is the Land Suitability Index, Wi is the weight of criterion i, Ci is the suitability score of criterion i, Bj is the Boolean value of restriction j, and n and m are the numbers of criteria and restrictions, respectively. The five criteria are measured on different scales; therefore, they are standardized using the GIS Reclassify Tool. Each criterion is weighted based on its significance level, and the weights are applied in the suitability equation above. GIS software is not capable of finding those weights; therefore, the Analytic Hierarchy Process (AHP) is used, which is one of the Multi-Criteria Decision Making methods. Each criterion is evaluated using a pairwise comparison matrix with the scales shown in Table 1. In this method, the magnitude of preference between factors is reflected. The influence of the factors is specified based on experience and judgment, as described below. The area size criterion is the preferred factor in comparison with the other factors, as land values are high inside the city. Moreover, obtaining lands inside the study area is difficult. Distance to the green lands is the second preferred factor, as it has a significant effect on the cost of reusing the treated wastewater for irrigation. The city has large differences in elevation, and far distances will need pumping to convey the treated wastewater. The slope factor has less effect than the other suitability criteria, as it is not difficult to change the nominated area's level and make it flat. The cost of leveling the field is less than the land value and less than the cost of water conveyance. Population density is also essential, as treatment units in crowded areas may not be accepted by the people, and they need additional precautions and expenses. From practical experience, the additional precaution cost is still less than the value of the land and the cost of the distance to the green areas. The depth of the sewer box is evaluated from practical experience, and it is clear that for deep sewers pumps will be required to lift the sewage to the treatment units, which is not preferred. The cost of pumps is almost the same as the cost of conveying the treated wastewater to the green areas, but less than the value of the lands and more than the cost of land flattening. Table 2 shows the pairwise comparison matrix for the five mentioned criteria. The consistency of the matrix is checked using the Consistency Index CI = (λmax − n)/(n − 1) and the Consistency Ratio CR = CI/RI, where n is the number of criteria, RI is the Random Index value referred from Table 3, and λmax is the largest eigenvalue of the comparison matrix. The nominated areas should be close to the main sewer box to avoid high costs of connection works from the proposed decentralized units to the sewer box and to keep construction work as far as possible from the residential areas. The distance from the nominated areas to the residential buildings is considered based on the characteristics of the area, such as average street widths and the distribution of the buildings. The width of the city's main street is 20 m, while the street widths inside residential areas ranged from m or less in some places, and the buildings are arranged close to each other. Therefore, a distance of more than 50 m would cause a significant cost of excavation, construction, and destruction of the surrounding area. In the GIS, the sewer box line is buffered with a distance of 50 m from each side. Cells within the buffer area take a Boolean value of one, while the area outside the buffer is the restricted area and takes a Boolean value equal to zero. Fig. 10 shows the restricted area around the sewer box.
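The AHP weighting, consistency check and constrained weighted overlay described above can be reproduced with a few lines of code. The sketch below is not the authors' implementation: the pairwise judgments, reclassified criterion scores and Boolean restriction values are hypothetical placeholders, and only the CI/CR definitions and the standard Saaty random index for five criteria follow the AHP procedure the paper applies (the actual judgments are those in Table 2).

```python
# Minimal AHP + constrained weighted-overlay sketch (illustrative values only).
import numpy as np

# Criteria order: area size, distance to green areas, population density,
# land slope, sewer-box depth. Hypothetical Saaty-scale pairwise judgments.
A = np.array([
    [1,   2,   3,   5,   4],
    [1/2, 1,   2,   4,   3],
    [1/3, 1/2, 1,   3,   2],
    [1/5, 1/4, 1/3, 1,   1/2],
    [1/4, 1/3, 1/2, 2,   1],
], dtype=float)

# Principal eigenvector of the comparison matrix gives the criterion weights Wi.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                      # normalise so the weights sum to 1

# Consistency check: CI = (lambda_max - n)/(n - 1), CR = CI / RI.
n = A.shape[0]
RI = 1.12                            # Saaty random index for n = 5 criteria
CI = (lam_max - n) / (n - 1)
CR = CI / RI                         # judgments acceptable if CR < 0.10

# Suitability of one nominated area / raster cell:
# S = sum_i(Wi * Ci) * prod_j(Bj), with Ci the reclassified criterion scores
# and Bj the Boolean restriction layers (sewer buffer, building buffer).
C = np.array([4, 3, 5, 4, 2])        # example reclassified scores
B = np.array([1, 1])                 # inside 50 m sewer buffer, outside 10 m building buffer
S = float(w @ C) * B.prod()

print(f"weights = {np.round(w, 3)}, lambda_max = {lam_max:.3f}, CR = {CR:.3f}")
print(f"suitability index S = {S:.2f}")
```

In the study itself the weights (35% for area size) and the consistency ratio of 1.63% come from the Table 2 judgments, and the overlay is evaluated cell by cell inside the GIS model rather than in a script; the sketch only shows how the pieces of the calculation fit together.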
According to the environmental restrictions, the proposed decentralized unit should be at least 10 m away from the residential buildings. Therefore, the building layer is buffered with a range of 10 m in the GIS model. The restricted areas are inside the buffer region and take a Boolean value of zero. The area outside the buffered region is the allowable area and takes a Boolean value equal to one. Fig. 11 shows the details of the buffered areas around the buildings. The results obtained from the AHP and GIS models only classify the suitability of each of the 134 nominated areas. A further selection from those areas is required to find the optimum locations of the treatment units. The Normalized Weighted Average Value (NWAV) of the final GIS model's result for each area is calculated as NWAV = (WAV − WAVmin)/(WAVmax − WAVmin), where WAVmin and WAVmax are the minimum and maximum values of WAV among the nominated areas located on each sewer box. The values of NWAV range from 0.0 to 1.0. The selection of the final best locations is based on the NWAV value. For each of the 10 nominated area groups, the sites that have the highest values of NWAV are selected. Extended aeration package plants are used, as they are recommended for small residential communities. The extended aeration method is a modified activated sludge process used to remove biodegradable organic wastes under aerobic conditions. This type of plant is recommended as it is efficient, does not need a big footprint, and produces a small amount of sludge. In the present study, the results of the criterion weights obtained with the AHP method show that the weight W of the size of the nominated area has the largest effect, equal to 35%; the other results are shown in Table 5. The Consistency Ratio is found to be equal to 1.63% < 10%, which is acceptable, and it means that the judgment of the criteria ranking was sound. The results of the GIS model showed the percentages of the suitability classification of the total nominated areas, as presented in Table 6. The results also showed that most of the nominated areas could not be classified under a single suitability class with a 100% ratio. For example, nominated area NE18 has 3.4% restricted area, 1.5% suitable, and 95.1% very suitable. Fig. 15 shows the suitability of nominated areas NA1, NA2, NA3, NA4, NA5, NA6, NA7, NB3, and NB4.
The analysis showed that 58 nominated areas have some restricted parts. Most of those restricted parts are located at the outer edge of the nominated land, and others are located along one side of the area or at one corner of the land. The restricted percentages in the nominated areas are as follows: 20 areas have %, 31 areas have <5 %, 6 areas have >15 %, and the remaining areas have no restricted part. The size of the restricted parts of the remaining 43 nominated areas varied from 393 m2 to 10 m2, and the restricted locations are mostly at the outer part of the lands. The other nominated areas have suitability percentages ranging from the suitable class to extremely suitable. The results also showed that nominated locations have suitability classifications ranging from very suitable to highly suitable with a percentage >75%. In addition, it is found that only 3 areas are extremely suitable with a percentage >90%. From the final GIS suitability result for each nominated area, the NWAV is calculated. The values of NWAV reflect the level of suitability of the location for installing a decentralized treatment unit. For instance, for nominated area NB7 the total area is 5544 m2, of which 262 m2 is restricted, 161 m2 is suitable and 5121 m2 is very suitable, with no other classification levels. Thus R% = (262/5544) × 100 = 4.7 %, M.S.% = (0/5544) × 100 = 0.0 %, S% = (161/5544) × 100 = 2.9 %, V.S.% = (5121/5544) × 100 = 92 %, H.S.% = 0.0 %, and E.S.% = 0.0 %, giving WAV = 13 and NWAV = 0.3. The NWAV values range from 0.0 to 1.0 with an average equal to 0.50. The optimum locations among the 134 nominated areas are the areas that have the highest NWAV. From each group, the nominated regions whose NWAV ≥ 0.50 are selected. In total, 30 optimum locations are selected, as shown in Table 7. Figs. show the suitability classifications of the 30 optimum nominated areas. The final 30 optimum locations are distributed in organized and strategic positions in the study area and are spread over the 10 main sewer box lines. The number of selected regions per sewer box ranged from 1 to 5. Line A has only one suitable area, as only 7 preliminary areas were chosen from the beginning, because line A is short and covers a small part of the city's districts. Figs. 17a and show the 30 optimum locations of the proposed decentralized treatment units. Decentralized wastewater treatment units are an effective solution to the problem of water shortage in a city like Sulaimania. Selecting the locations of the units is crucial and should be done according to the required standards. This study was conducted to find suitable sites for decentralized wastewater treatment units in Sulaimania City by using GIS and the Analytical Hierarchy Process. Five suitability criteria were taken: area size, distance to green areas, population density, land slope and depth of the sewer pipes at the nominated locations. Also, two restrictions were used: the distance from the decentralized unit to buildings ≥10 m and the distance from the decentralized units to the sewer box ≤50 m.
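The NB7 class-percentage calculation and the NWAV ≥ 0.50 selection rule described above can be checked with a short script. The sketch below is illustrative rather than the authors' code: the mapping from class percentages to the weighted average WAV is not spelled out here, so the helper simply takes per-area WAV values as given (the group values other than NB7's are invented) and applies the min–max normalisation and the selection threshold.

```python
# Illustrative check of the class-percentage and NWAV selection steps
# (not the authors' implementation; group WAV values are hypothetical).

def class_percentages(class_areas_m2: dict, total_m2: float) -> dict:
    """Percentage of each suitability class within one nominated area."""
    return {name: round(area / total_m2 * 100, 1) for name, area in class_areas_m2.items()}

def nwav(wav_values: dict) -> dict:
    """Min-max normalisation of WAV within one sewer-box group."""
    lo, hi = min(wav_values.values()), max(wav_values.values())
    return {area: (v - lo) / (hi - lo) for area, v in wav_values.items()}

# Nominated area NB7: 5544 m2 total (262 restricted, 161 suitable, 5121 very suitable).
print(class_percentages({"R": 262, "S": 161, "V.S": 5121}, 5544))
# -> {'R': 4.7, 'S': 2.9, 'V.S': 92.4}

# Hypothetical WAV values for the areas on one sewer line; areas whose
# normalised value is >= 0.50 are kept as optimum locations.
group_wav = {"NB3": 9.0, "NB4": 11.0, "NB7": 13.0, "NB9": 21.0}
selected = [area for area, v in nwav(group_wav).items() if v >= 0.50]
print(selected)
```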
Selections of 134 locations were made in Sulaimania City to test their suitability. Previous studies on the same topic aimed to find the best locations of a certain facility directly using GIS. In this research the work was done in two stages. First, a preliminary selection of candidate locations for the DTUs was made. The second stage was to evaluate the preselected locations and then select the best areas. The results of the model classified the selected areas into 6 suitability classes, ranging from restricted to extremely suitable. Moreover, from the suitability results of GIS and AHP, further analyses have been done and 30 final optimum locations were found, located beside the 10 sewer lines. Ako Rashed Hama, Zeren Jamal Ghafoor: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Rafea Hashim Al-Suhili: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors declare no conflict of interest. No additional information is available for this paper. | Sulaimania is a city located in the Kurdistan region in the north of Iraq. The city is facing a lack of water, and it will reach a very critical condition shortly. One of the potential solutions is to reuse the treated wastewater for non-direct human uses such as irrigation, washing, firefighting, groundwater recharging, and others. There is no sewage treatment plant in the city. The wastewater flows into a stream through some sewer outlets, and that causes big environmental issues. Decentralized wastewater treatment units (DTUs) are suggested to solve the issue. The treated wastewater will be used for the irrigation of the green areas of the city. The selected plant type is an Extended Aeration treatment system, which is recommended for residential areas. Specifying the locations of the treatment units is very important from environmental, social and technical aspects. The main objective of this study is to select the most suitable places for the DTUs. Preliminary selections of 134 nominated areas for DTU locations were made in different places in the city. The locations are distributed into 10 groups near the main sewer pipes of the city. A model is created to evaluate those selected locations and eliminate the non-suitable locations by using GIS software integrated with the Analytical Hierarchy Process (AHP). Five criteria were used in the model, which are: (1) the size of the available lands, (2) the distance from the decentralized units to the green areas, (3) population density around the decentralized treatment unit locations, (4) the slope of the land and (5) depth of the main sewer pipe at the nominated area. In addition, the model adopted two restriction factors: (1) the distance from the decentralized treatment unit to the buildings should not be less than 10 m and (2) the distance between the main sewer pipes and the treatment units is taken to be <50 m. The results of the suitability analysis produced six classes of suitability levels of the nominated areas, ranging from restricted to extremely suitable.
The suitability percentages of the 6 classes of the total nominated areas were found to be; 8.5% (6.95 ha) restricted, 0.4 % (0.23 ha) moderately suitable, 12.8% (10.50 ha) suitable, 38.8% very suitable (31.60 ha), 32.2% (26.33 ha) highly suitable and 7.3% (5.92 ha) extremely suitable. Each nominated area has more than one suitability class. Normalized Weighted Average (NWAV) of the suitability level percentage of each nominated area is found. The values of the NWAV are ranged from 0.0 to 1.0, and the selection of final DTUs locations will be for areas that have NWAV larger than 0.5. Optimum 30 suitable locations are selected out of the 134 nominated areas. |
356 | Cycle life of lithium ion batteries after flash cryogenic freezing | Market adoption of hybrid electric vehicles, plug-in hybrid electric vehicles and battery electric vehicles containing lithium ion batteries continue to accelerate; for example over 2 million plug-in vehicles were sold worldwide in 2018, adding to the 5 million already on the road .Despite improvements in battery durability , ageing mechanisms, such as solid electrolyte interphase layer growth , cause LIBs to eventually no longer store sufficient energy capacity for their original automotive application .End of life for automotive applications is commonly defined as when the battery capacity has reduced to 80% capacity when compared to new or when the internal impedance has doubled .In-field failures or road traffic accidents which damage the battery pack also result in premature EoL for the LIB.The Circular Energy Storage, a London based research and consulting group, estimates the global EoL LIB market to be currently worth $1.3 billion, with the LIB second life market expected to reach $4.2 billion whereas the recycling market is predicted to grow to $3.5 billion by 2025 .Despite the growing size of the EoL markets, protocols and procedures for LIB EoL, such as recycling, remanufacturing and re-use, e.g. , are not well established .As discussed in , there are a number of barriers opposing the reverse logistics required to support second life applications.The current legislation such as the UK Batteries Directive and the European agreement concerning the International Carriage of Dangerous Goods by Road make it challenging to transport damaged or defective batteries, which is a prerequisite for LIB second life.Further, higher costs are incurred and only specialised logistics firms are typically able to provide this service.Unless these LIBs can be shown to be safe, which is defined as not being able to explode, vent dangerous gases, catch fire, or go into thermal runaway, they must be transported in accordance with transport category 0 as per ADR SP376 .Practically, this means using specialised transport service providers, e.g. using explosion proof steel containers, which cost circa €10,000 for a typical Tesla sized pack and a further circa €10,000 for the UN accreditation .In this paper, the authors have not established if cryogenic freezing will be more cost effective since the aim of this body of work is to first establish if the process of making LIBs safe through cryogenic freezing is at all viable from a LIB performance point of view.Once proof-of-concept has been established, as discussed in Section 3.3, the authors recommend a full economic assessment of the methodology that will further underpin commercialisation and adoption by industry.In reality, it is normally impractical to establish whether a damaged or defective LIB is safe because the LIB is not conforming to the type tested according to UN38.3 .Additional testing would be required to ensure the LIB is safe, which is not practical to perform at a road traffic accident site.Effectively, LIB packs with no or only relatively minor damage will end up discarded because it is not economically viable to transport them to a battery remanufacturing or re-use facility.A large LIB, which can contain thousands of individual lithium-ion cells, can be rendered damaged or defective by a proportion of the cells being damaged or defective or by ancillary failures, e.g. 
a failure of the BMS to report battery status.Therefore, depending on the failure mode, most of the cells in a damaged or defective pack could still be reusable.In addition to LIBs being transported from first usage in motor vehicles to a battery treatment centre for remanufacturing, re-use or second life applications, the research is also relevant to new batteries being delivered to a car manufacturers.The authors have recently demonstrated that lithium ion cells are safe when they are cryogenically frozen , which should comply with the requirements of ADR SP376 and permit their transport without the use of expensive explosion proof steel containers.Furthermore, results presented in reveal that the cryogenic freezing has little to no-effect on the electrical performance of the two different cell chemistries and form factors tested.A recent study confirmed this finding on 18,650 lithium-ion cells after a 14-day cryogenic freezing period .This solution would therefore facilitate the potential repair or remanufacturing of individual cells and modules, prolonging the useful life, as well as support second life applications for LIBs ; improving the environmental sustainability of EVs.In this paper, for the first time, the authors present the capacity degradation of cryogenically frozen LiBs cycled over several months compared to LiBs that was not frozen.The effects of flash cryogenic freezing on the life cycle of LIBs has not yet been reported in the literature.This initial study therefore aims to establish if cells that have been cryogenically frozen age and degrade normally when cycled over several months, in order to establish if there are any effects of the life expectancy of LIBs.The experimental method consists of cycling lithium ion cells in an Espec thermal chamber set to 25 °C that had been cryogenically frozen prior to the experiment and comparing the results against a control group of cells that have not been subject to the same frozen conditions.The full detailed methodology for cryogenically freezing the cells and evaluating the electrical performance of the cells using the energy capacity and Hybrid Pulse Power Characterisation measurement is described in full detail in .Briefly, the HPPC test consisted of 10 pulses applied at 90%, 50% and 20% SOC at 25 °C after leaving the cells to equilibrate electrochemically and thermally for three hours as per the method defined in IEC-62660 .This short communication only describes the cycling performed.For consistency, the same cell selection is used as in our previous study : i.e. 
Dow Kokam (DK) 5 Ah 100 x 106 mm nickel manganese cobalt oxide (NMC) pouch cells and Panasonic 3 Ah 18,650 nickel cobalt aluminium oxide (NCA) cylindrical cells. Although the DK 5 Ah is not a cell currently used in EV traction batteries, NMC is the most popular of the chemistries currently used by EV manufacturers, such as Kia, Hyundai, BMW and Mercedes-Benz. Six DK 5 Ah cells were used and divided into two groups, where n is the number of cells in each group: DK01, cryogenically frozen before cycling, and DK02, the control. Both groups were cycled 105 times before energy capacity measurements were performed, and 210 times before HPPC measurements. One cycle consists of fully discharging the cells at a constant current of 4C to their lower voltage threshold. The cells were rested for 10 min prior to being fully charged using a constant current of 1C to the upper voltage defined by the manufacturer, followed by a constant voltage phase until the current reduced to 0.1 A. There is another 10 min rest before the next cycle begins. In order to maintain safety throughout the experiments, a maximum surface temperature of 60 °C was established. After the 150th cycle, since the surface temperatures of the cells were below the threshold temperature, as shown in Fig. 1, the cycle rates were increased to the maximum C-rates on the manufacturer's datasheet for both groups for the remainder of the experiment. A paired t-test with a significance level of p = 0.05 for each measurement is performed in order to establish if there is any statistically significant difference between the two battery groups, DK01 and DK02. Likewise, six Panasonic 18,650 cells were also divided into two groups, where n is the number of cells in each group: PAN01, cryogenically frozen before cycling, and PAN02, the control. Both groups are cycled 45 times before energy capacity measurements are performed, and 90 times before HPPC measurements. One cycle consists of fully discharging the cells at a constant current of 1C to their lower voltage threshold. The cells were rested for 10 min prior to being fully charged using a constant current of 0.5C to the upper voltage defined by the manufacturer, followed by a constant voltage phase until the current reduced to 0.1 A. There is another 10 min rest before the next cycle begins. After the 90th cycle, since the surface temperatures of the cells were below 50 °C, as shown in Fig. 1, the cycle rates were increased to the maximum C-rates on the manufacturer's datasheet for both groups for the remainder of the experiment. A paired t-test with a significance level of p = 0.05 for each measurement is performed in order to establish if there is any statistically significant difference between the two groups, i.e. to test the null hypothesis that both groups have the same mean. The DK 5 Ah and Panasonic 3 Ah capacity average measurements ± standard error are presented in Fig. 2. The standard error is the sample standard deviation divided by the square root of the sample size. Fig.
2 shows that in the beginning there is very little variation between the energy capacities of the DK5 Ah cells that were cryogenically frozen and the control group.After approximately 600 cycles, the two groups appear to diverge, with the control group having lower average capacity than the cryogenically frozen one.The DK 5 Ah capacity t-test results are summarised in Table 1, which are all > 0.05, therefore the difference is not statistically significant.The results obtained support accepting the null hypothesis that both groups have the same means.This finding should be confirmed with a larger sample size.Similarly, Fig. 2 shows that the Panasonic 3 Ah capacity measurements that are very similar for the group that were cryogenically frozen and the control group.Table 2 summarises the Panasonic 3 Ah capacity t-test results, with a confidence level >0.05.As for the previous case, therefore we accept the null hypothesis and conclude that both groups have the same mean value of energy capacity.It can therefore be concluded that for both the DK 5 Ah and the Panasonic 3 Ah, the flash cryogenic freezing does not affect the cycle life.Fig. 2 shows that the energy capacities of the DK5 Ah cells monotonically decrease as the cells are electrically and thermally cycled.This is a well-documented phenomenon within the literature: the energy capacity of LIBs deteriorates due to ageing mechanisms such as solid electrolyte interphase layer growth and structural changes to the electrode .However, Fig. 2 shows that the Panasonic 3 Ah capacity measurements increase at four points in the cycling process, i.e. the average capacity measurements at 360, 540, 810, and 900 cycles are greater than the corresponding previous measurement.This is unusual since the additional cycles would normally reduce the overall cell capacity.Each of these instances occurred after the cycling was paused, e.g. after the capacity measurement at 765 cycles, the testing did not resume immediately due to facilities constraints.All tests were conducted with the same cyclers that was calibrated to the manufacturer’s recommendations.The cells were stored for 2 weeks before they were cycled and the capacity measurement at 810 cycles was performed.It is interesting to note that the DK5 Ah were mounted on the same jig and their cycling was also paused four times like the Panasonic 3 Ah cells, however they do not show this apparent capacity increase).This phenomenon was only observed in aged Panasonic 3 Ah cells, which are comprised of NCA with a LiC6 anode.Conversely, the internal chemistry of the DK5 Ah cell is NMC.This reversible capacity effect has been investigated in automotive 25 Ah prismatic NMC cells and attributed to anode overhang .However, Epding et al. 2019 refutes anode overhang as the mechanism responsible for the phenomenon since a capacity increase is measured in cells stored at 100% state of charge during the rest period, and suggests it is linked to the occurrence of lithium plating.Regardless of the mechanism, the experimental method was robust: energy capacity measurements were performed as per the recognised protocols defined in IEC-62660 , which includes long relaxation times, i.e. 720 min for the thermal stabilisation before fully electrically charging the cells and 180 min prior to the cells being discharged to measure their capacity.Further testing revealed that once the cells were sufficiently aged, the capacity measurements would creep up for ˜175 h. 
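For reference, the per-measurement group comparison underlying Tables 1 and 2 (group mean ± standard error and a paired t-test at p = 0.05) can be sketched in a few lines. This is not the authors' analysis script, and the capacity values below are invented placeholders; only the standard-error definition and the paired t-test itself follow the method described in the text.

```python
# Minimal sketch of the group comparison (illustrative data, not measured capacities):
# mean +/- standard error per measurement point and a paired t-test between the
# cryogenically frozen and control groups at alpha = 0.05.
import numpy as np
from scipy import stats

# Hypothetical discharge capacities (Ah) of three frozen and three control
# DK 5 Ah cells at one capacity-measurement point in the cycling schedule.
frozen  = np.array([4.62, 4.58, 4.65])
control = np.array([4.60, 4.55, 4.63])

def mean_se(x: np.ndarray) -> tuple:
    """Mean and standard error (sample std / sqrt(n)), as plotted in Fig. 2."""
    return float(x.mean()), float(x.std(ddof=1) / np.sqrt(x.size))

t_stat, p_value = stats.ttest_rel(frozen, control)   # paired t-test
alpha = 0.05
same_mean = p_value > alpha   # True -> no statistically significant difference

print(f"frozen  mean, SE: {mean_se(frozen)}")
print(f"control mean, SE: {mean_se(control)}")
print(f"t = {t_stat:.3f}, p = {p_value:.3f}, accept null (same mean): {same_mean}")
```

The pairing here follows the comparison as the paper describes it (each frozen cell set against a control cell at the same measurement point); with groups treated as unpaired, an independent-samples test such as scipy's ttest_ind would be the usual alternative.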
Further work is being undertaken to investigate the very slow electrochemical effects that influence capacity measurements in aged Panasonic 3 Ah.However, since the cryogenically frozen cells and the control both display this phenomenon, it does not detract from the findings reported here, that is to say the flash cryogenic freezing does not adversely affect the cycle life of DK 5 Ah and Panasonic 3 Ah cells.The DK 5 Ah and Panasonic 3 Ah internal resistance average measurements ± standard error are presented in Fig. 3.Fig. 3– show that in the beginning there is very little variation between the internal resistances of the DK5 Ah cells that were cryogenically frozen and the control group.After approximately 600 cycles, the two groups appear to diverge, with the control group having higher internal resistance than the cryogenically frozen one.The DK 5 Ah internal resistance t-test results are summarised in Table 3, for a confidence interval of 0.05, therefore the difference is not statistically significant.We accept the null hypothesis that both groups have the same means.Similarly Fig. 3– show that the Panasonic 3 Ah internal resistance measurements are very similar for the group that were cryogenically frozen and the control group.Table 4 summarises the Panasonic 3 Ah internal resistance t-test results, which are all > 0.05, therefore we accept the null hypothesis and conclude that both groups have the same means.Flash cryogenic freezing does not affect the internal resistance of both DK 5 Ah and Panasonic 3 Ah throughout the automotive useful life.The six plots in Fig. 3 show an apparent reduction in internal resistance during the cycling at the fifth measurement.This corresponds with the pause in testing and is exacerbated by the HPPC measurementsnot all being performed at the desired SOCs.Table 5 shows the estimated SOC that the HPPC measurements were carried out for both the cryogenically frozen cells and the control group.The SOCs are estimated since the capacity measurements used to establish the remaining capacity are affected by reversible effects.Since the cryogenically frozen cells and the control group were both measured at the same SOCs, the authors assert that the SOC variation does not detract from the findings reported, that is to say the flash cryogenic freezing does not affect the internal resistance of DK 5Ah and Panasonic 3Ah cells.No detrimental effect to cell performance was found on the flash frozen cells for two cell chemistries and form factors.It is expected this applies to other Li-ion chemistries, such as lithium Manganese Oxide and lithium iron phosphate.However, the transferability of these results to other LIB technologies requires further experimentation before this can be more fully understood.Cell autopsies where the electrodes are removed from the cell in order for the electrode surface to be analysed with a scanning electron microscope are to be performed in order to confirm the results.Further work is being undertaken to scale the work from cell to module and pack level.Since most state-of-the-art electrolytes crystallise at temperatures below −40 °C , it is expected that it is not necessary to maintain LIB packs at cryogenic temperatures in order to prevent thermal runaway.As such, further experiments are being undertaken to establish the minimum temperature to prevent thermal runaway within a complete battery installation to facilitate safe transportation.Different freezing rates will be investigated in order to establish how it affects the 
results.Finally, once proof-of-concept has been established, the authors recommend a full economic assessment of the methodology that will further underpin commercialisation and adoption by industry.The experiment evidences presented implies that flash freezing Li-ion cells causes no significant detrimental effects on cell electrical performance throughout the whole automotive life.The cell performance was determined by its impedance and energy capacity throughout cycling, as these dictate the power delivery capability and the amount of energy that can be stored, which in turn, defines the EV range, acceleration, and charging performance.Cell impedance and capacity were measured at regular intervals during cycling to quantify any effects of flash freezing on cell ageing and performance degradation.No statistical difference, with a 95% confidence level, in cell performance was found on two cell chemistries and form factors between the cells that were flash frozen and the control groups.This provides initial confidence that flash cryogenic freezing will not affect battery lifetime and ageing. | Growing global sales of electric vehicles (EVs) are raising concerns about the reverse logistics challenge of transporting damaged, defective and waste lithium ion battery (LIB) packs. The European Union Battery Directive stipulates that 50% of LIBs must be recycled and EV manufacturers are responsible for collection, treatment and recycling. The International Carriage of Dangerous Goods by Road requirement to transport damaged or defective LIB packs in approved explosion proof steel containers imposes expensive certification. Further, the physical weight and volume of LIB packaging increases transport costs of damaged or defective packs as part of a complete recycling or repurposing strategy. Cryogenic flash freezing (CFF) removes the possibility of thermal runaway in LIBs even in extreme abuse conditions. Meaning damaged or defective LIBs may be transported safely whilst cryogenically frozen. Herein, LIBs are cycled until 20% capacity fade to establish that CFF does not affect electrical performance (energy capacity and impedance) during ageing. This is demonstrated on two different cell chemistries and form factors. The potential to remanufacture or reuse cells/modules subjected to CFF supports circular economy principles through extending useful life and reducing raw material usage. Thereby improving the environmental sustainability of transitioning from internal combustion engines to EVs. |
357 | Choose and Book: A sociological analysis of 'resistance' to an expert system | Healthcare depends increasingly on information and communication technologies, whose introduction is often characterised by limited adoption or adoption followed by abandonment, especially when part of a large, top-down change programme).The health informatics literature tends to explain such ‘failed’ projects in terms of resistance and to couch solutions in terms of securing behavioural compliance without questioning ends.For example:“…the major challenges to system success are often more behavioral than technical.Successfully introducing such systems into complex health care organizations requires an effective blend of good technical and good organizational skills.People who have low psychological ownership in a system and who vigorously resist its implementation can bring a ‘technically best’ system to its knees.However, effective leadership can sharply reduce the behavioral resistance to change–including to new technologies–to achieve a more rapid and productive introduction of informatics technology.,Healthcare IT policy typically reflects this behaviourist framing by focusing on incentives, sanctions and training.In contrast, socio-technical systems theory proposes that technologies and work practices are best co-designed using participatory methods in the workplace setting, drawing on such common-sense guiding principles as staff being ‘able to access and control the resources they need to do their jobs’, and insisting that ‘processes should be minimally-specified to support adaptive local solutions’.Socio-technical theory frames resistance to ICTs in terms of poor fit between the micro-detail of work practices and the practicalities of using technology.Brown and Duguid have shown how technologies in the workplace are embedded in networks of social relationships that make their use meaningful.The detail of how to use, adapt, repair or work round technologies is learned through membership of a community of practice; this social infrastructure strongly influences whether and how particular technologies ‘work’ in particular conditions of use.In this paper we acknowledge this perspective and seek to complement it with a multi-level theoretical analysis that considers macro forces emanating from government and state agencies; meso-level networks that mediate these forces; and micro-level sites of acquiescence or resistance by human agents.We incorporate selected insights from actor-network theory, which conceptualises networks of humans and technologies that are dynamic and unstable."ANT usefully considers human actors' behaviour as a consequence of the overall pattern of influences generated across the network.In this study our preferred analytic lens is structuration theory, developed by Anthony Giddens.We adopt a layered ontology, finding it productive to make distinctions between structure and agency, and between macro, meso and micro, which ANT rejects."We integrate technology into the picture, a dimension that is missing from Giddens' work. "We also seek to go beyond Giddens' abstract concern with social structures in general and use an empirical case study approach to look at particular fields of social relations. 
"This emphasis has many parallels with ANT's notion of networks.Taking a layered approach to the study of relations or networks highlights the ways in which the interdependencies and interactions that constitute these networks are embedded in hierarchical power relations, both near and distant.Unlike ANT, this ‘strong’ version of structuration theory carefully distinguishes the agency of humans from that of technologies.Strong structuration theory also considers how the values and knowledge possessed by both individual and organisational actors are influenced by external structures, and how this value-knowledge nexus informs and influences their actions, with or without technologies, in particular social situations.For strong structuration theorists, resistance to ICTs stems from the human agent, who is positioned in a particular network of social relations; has a particular identity, organisational role, set of moral principles, beliefs, capabilities, and so on; and accords significance to technologies in particular contexts.We sought to apply strong structuration theory to explore resistance to a nationally mandated healthcare ICT, ‘Choose and Book’, in terms of the reasoning and actions of human agents and how this was influenced both by social structures and by the material capabilities and constraints of the technology.To that end, we undertook a secondary analysis of a rich ethnographic dataset on the use and non-use of electronic records in UK general practice.As with many healthcare ICTs, Choose and Book was linked to a specific national policy, described in detail by others."The first government commitment to providing patients with a choice of time and date of their hospital appointment was in 2001 with the Labour government's landmark NHS Plan.Choice of hospital was promised a year later.In 2004, The NHS Improvement Plan promised all patients a choice of hospital at the point of referral."It predicted that introduction of ‘choice’ would reduce waiting times, make the service more responsive to patients' needs, promote quality improvement and increase efficiency.The plan sought to introduce competition between providers via a new reimbursement system that paid hospitals a fixed tariff price per patient seen.From January 2006 all National Health Service patients referred to hospital for elective care were to be offered a choice of four or five ‘clinically appropriate’ local providers."In April 2008, patients became eligible to choose any provider nationally who offered care at the national tariff rate and met standards set by the Care Quality Commission.The assumption underpinning the introduction of patient choice in referrals was that the option for patients to take their custom elsewhere was a significantly more effective quality driver than the possibility that they might complain – and indeed, that the potential for ‘exit’ added weight to ‘voice’.It is worth keeping in mind the abstract, decontextualized nature of these assumptions.Development of Choose and Book, intended to support patient choice at the point of referral in England, was funded in 2003 via a 5-year, £64.5 million contract to the commercial supplier ATOS.Its national implementation was the responsibility of a designated lead within the Department of Health.At the time of this study, its local implementation was formally the responsibility of primary care trusts.It was anticipated that by replacing the traditional paper referral with an ‘integrated’ electronic system, Choose and Book would also be more 
convenient for patients and GPs; reduce the number of referral-based enquiries GPs and their staff had to deal with; lessen the bureaucracy associated with referrals; reduce ‘did not attend’ and cancellation rates in outpatients departments; and encourage a more standardised format for referrals.Recognising that ‘choice’ would be effective only if patients were informed of the key differences between local providers in a particular service, in 2007 the Department of Health launched a website giving details of these services to allow comparison between providers prior to referral and a ‘Choosing Your Hospital’ booklet."This booklet emphasised patients' right to choice and encouraged them to report services to their GP if they were not satisfied.Choose and Book was part of a wider socio-technical network."This included the National Programme for IT; the machinery of the New Labour government; the Care Quality Commission and other national regulatory bodies; civil servants who created the performance metrics for choice and Choose and Book, monitored performance of healthcare organisations against these and linked them to financial incentives; professional bodies; and local managers in PCTs.This socio-technical network was distinctly unstable during our data collection period.The Department of Health continued to produce reports and electronic updates purporting that Choose and Book was improving ‘choice’.These were countered by letters and articles published by doctors in academic journals that documented increased workload and a rise in ‘did not attend’ rates following the introduction of Choose and Book; patients referred under Choose and Book who had no recollection of being offered a choice of provider; and a widespread perception that the technology was inefficient, inflexible, complicated and politically-driven.NHS policy changes in the 2000s reflected the mindset of late modernity, with its emphasis on an abstract blueprint for control that lacked grounding in, or sensitivity to, the details and variety of local contexts.The predominant frame of reference was rationalist; there was a strong sense that innovation and change represented progress, and a particular confidence in the value of expert systems – defined by Giddens as “ system of technical accomplishment or professional expertise that organize large areas of the material and social environments in which we live today”; the possible negative consequences of such technical systems, including their impact on social interaction, was rarely systematically considered; and designers and policymakers were orientated to an imagined ‘proximate future’ – a time almost upon us when the technology is fully functional and all technical, ethical and political challenges have been smoothed out.The expert system is a relatively recent phenomenon, resulting from the powerful triad of classificatory systems, bureaucracy and information technology in the age of globalisation.Such systems, which now range far and wide, are driven by abstract rules and procedures designed to co-ordinate social relations across large distances.Giddens proposed that these expert systems, using technology to encode information and store formal knowledge, have an inherent tendency to ‘empty out’ the content of local interactions because the technical knowledge they contain is assumed to have validity independently of any particular interaction, and to have the authority to override situational contingencies.They are designed to exert control and order – measurable, 
quantifiable – over distance in a way that seeks to remove the ability of distinctive people, relations and contexts to upset the uniform application of the rules and classificatory system embedded in the system.There is a powerful momentum towards general and universalising rules and processes, and away from the application of practical wisdom in specific contexts.Expert systems capture professional expertise by formalisation – deploying impersonal knowledge, classificatory systems and procedures to shape, monitor, standardise and render calculable the work they support.Anthropologist Mary Douglas, developing earlier insights from Durkheim, argued that producing lists, rankings and other classification systems helps establish and then sustain social institutions by introducing conventions that “describe the way things are”.Classification systems are fiercely negotiated and defended for precisely this reason.They have long been combined with those bureaucratic forms of instrumental rationality carefully analysed by Weber.It is the interweaving of these two systems with powerful information technologies that is new.The classificatory rules and procedures embedded in Choose and Book software and its networks assumed that the sick patient functioned primarily as a rational chooser, able and willing to weigh up information about potential options and decide between them if provided with high-quality information and decision support.Managing illness was assumed to consist, more or less, of making a series of objective decisions based on a limited number of decontextualised indicators and then following through on these.It follows from these assumptions that provision of statistical information on the ‘quality’ of services in a standardised format will prompt the ‘right’ choices and that these choices will lead to the ‘best’ services winning out in a competitive market.Choose and Book was also influenced by the abstract ideology of competition, and by the salutary effects that the competition blueprint was said to have on cost efficiency, patient satisfaction and patient outcomes.When the policy of choice was introduced, much attention was paid to the expert system but there was little exploration of the meso- and micro-level social interactions and processes that would convert the policy idea into the reality of a more efficient, effective and responsive healthcare system that improved patient satisfaction and outcomes."Notably, the over-riding influence of national policy meant that Cherns' principles of socio-technical design at local level were not recognised or applied.With a view to redressing this imbalance, we sought to analyse the micro processes and interactions involved in the practice of referral using the Choose and Book technology.To this end, we used the theoretical lens of strong structuration theory introduced above.This focuses on actors who are sited within a field of position–practice relations that has a powerful presence external to them, and which imposes itself upon them in various ways.These external structures pose constraints, provide resources and possibilities for action, and are the source of pressures and forces, including those of socialisation and induction into cultural meanings and values.Strong structuration theory takes seriously the hermeneutic, interpretative frames of the actors, the ways these are built up over time, and the way these mediate perceptions of external reality.But it departs from many forms of social constructionism in framing this in terms 
of the ways in which external structures are internalised within the interpretive frames of actors.Key to our current argument is the further division of these internal structures into two interacting aspects."First, an individual actor's generalised dispositions, or habitus, which refers to durable and deeply socialised aspects of embodied skills, culture, moral values and principles, and so on, built up over time as an actor is exposed to, and interacts with, their social contexts.This provides the phenomenological perspective by which events in the world are framed and perceived."Second, the actor's knowledge of the immediate strategic terrain of position–practice relations facing them at any particular time, including knowledge of the potential functionality of technologies and the sense of how this fits with other aspects of the terrain.Such conjuncturally-specific knowledge may be informed and fine-grained, or it may be ill-informed and broad-brush, risking unintended and unwanted consequences.Internal structures are an important part of the capabilities of actors, drawn from or worked upon – and in compliance or resistance – by actors as they engage with the everyday flow of practices.The emphasis of strong structuration theory on values and norms within the habitus of actors means that humans are viewed not primarily as rational actors, nodes in a network or members of a socio-technical system but as moral beings who have commitments, desires and values.It views work – especially the work of doctors – not merely as a series of coordinated tasks but as having symbolic significance in society.As Sayer put it in the title of his book, “things matter to people” – objects, actions, experiences and relationships have personal and moral significance as well as economic or instrumental worth.With this in mind, our analysis set out to explore the tensions between professional morals and values on the one hand and the demands made on the GP or other actor in the here and now by the remote, disembedded expert system of Choose and Book on the other."In the language of strong structuration theory, this is a tension between key aspects of the GP's value-dispositions and his or her conjuncturally-specific knowledge of the social forces and sanctions embedded in the proximate structures of the Choose and Book technology.The nexus of ethical values embedded within the habitus of a healthcare professional is not static or unproblematic.Indeed, it may be variously ambivalent, fragmented or conflicting, reflecting the ethical tensions and inherent conflicts of healthcare practice.In this paper, we show how an understanding of these values and how they inform professional notions of excellence are a useful point of departure for illuminating those practices that are framed as ‘resistance’.If resistance is to be investigated at the micro level in terms of what matters to human agents,we need to consider what macro-level influences, shared among members of a professional community, shape these values and perceptions.MacIntyre depicted these influences as the ‘internal goods’ of a domain.The internal goods of medicine would include the Aristotelian virtues, along with the dispositions and capacities, that are valued by doctors and which they believe are necessary to sustain standards of excellence in their profession.It has long been argued by sociologists that because they bear a commitment to the refined knowledge, ethics and values of their specialised community, professionals act as a bulwark against 
the impersonal march of capitalist and bureaucratic forces.More recently, French sociological theorist Luc Boltanski has called for policymakers to go beyond ‘neomanagerialism’ and engage with the moral and normative positions taken by individuals and groups on particular issues, notably the ethically-motivated concerns of professionals and lobbyists."Medicine's internal goods are clustered, broadly speaking, around the themes of caring, curing, and comforting, and are embedded in the formal and informal codes of practice of the medical, nursing and other related professions.In the analysis that follows, we use these internal goods as a benchmark against which to consider not merely the means by which a referral to hospital is made but also the ends that are in mind when it is made."A GP's judgements about referral to hospital have traditionally been directed towards a range of ends such as access to restricted tests or procedures, specialist advice in diagnosis or treatment, confirmation that nothing has been missed, symbolic affirmation of a serious illness, and respite from a patient whose chronic incurable illness has become wearing. "GP referrals are informed by knowledge of the patient's personal history, knowledge of the workings of their own health system, and knowledge of local social relations, including the character of local hospitals and the clinical interests and personal style of particular consultants.The ‘expert system’ character of Choose and Book militates against using such knowledge, placing constraints on the scope of professional judgements."A professional framing of medical work sees doctors as wielding their symbolic power with integrity and commitment with the patient's best interests in mind. "Patient empowerment notwithstanding, there are aspects of the unequal relation between doctors and patients whose legitimacy is socially conferred, due largely to the fact that illness makes people vulnerable and in need of society's help.Referral decisions are not merely ‘rational’ but also practical and ethical, asking whether this referral, to this specialist at this hospital, is the right thing to do."As Dixon et al showed in both UK and Netherlands, patients' choices do not follow the narrow economic rationality that policymakers anticipated, but reflect practical and symbolic influences that are perceived to matter in particular circumstances.Professional judgement, particularly in primary health care, relies on being rooted in the immediacies of context.As a strikingly top-down form of expert system, Choose and Book imposes abstract and generalised protocols that have limited capacity to take account of local circumstances and contingencies."As Boltanski's critique highlights, this socio-technical network crystallises a tendency to ignore or dismiss the skills, concerns and situated judgements of professionals. 
"This is especially troubling in healthcare, since medicine is inherently exception-filled and medicine's internal goods are not, in large part, reducible to formulaic rules and protocols.Neither technologies, nor the policies and processes in which they become embedded, are morally neutral, and to be able to judge the appropriateness and adequacy of particular policy initiatives and linked technologies, it is necessary to assess how well they allow patients to receive the levels of care, cure, comforts and so on they can reasonably expect from the healthcare system – and support doctors to provide them.Having a clear sense of ‘what good might look like’ allows us to begin to open up policies and their socio-technical networks to critical scrutiny.Our research questions were: When referring patients to hospital, how does the tension between the systemic demands of Choose and Book as an expert system and the situated application of local knowledge through practical and professional judgement play out? To what extent can ‘resistance’ to Choose and Book be explained in terms of the structure-agency dynamic?,The idea for this secondary analysis emerged during the Healthcare Electronic Records in Organisations study, conducted in 2007–10 and funded by the Medical Research Council, which explored the use of electronic records in English general practice.It occurred at a time when a number of networked technologies were being introduced as part of national IT policy.The original HERO dataset covered four practices and included around 200 clinical consultations either directly observed or videotaped, as well as ethnographic field notes, documents and naturalistic interviews – that is, asking people what they were doing and why as part of ethnographic observation.Naturalistic interviews provide particularly useful data in the study of work, since people can best describe and reflect on their work when doing it.Our original analysis produced findings relating to how the work of GPs, nurses and receptionists is shaped and constrained by technologies in use and by prevailing expectations about who should use them and how.One technology in particular – Choose and Book – was rarely if ever used as its designers had intended.Its embedded scripts) were ignored and/or deliberately subverted.When it was used, the consequences were not as predicted.Most referrals were still dictated, typed and sent in the traditional way.Furthermore, GPs and administrators often had strong feelings towards Choose and Book and there was much talk of ‘clinician resistance’.We decided to seek funding for a secondary analysis of our dataset to explore these impressions further; we obtained this from the National Institute for Health Research Health Services and Delivery Research Programme.Commencing with the entire HERO dataset, we selected a much smaller dataset for further analysis.This mainly comprised direct observations of work practices relating to referral, whether undertaken manually or with Choose and Book).The extracted dataset also included some videotaped consultations, and documentation produced nationally and locally over the time period of the original study.In an initial familiarisation phase, we produced first-order interpretations in which we sought to describe and offer preliminary explanations of observed practice and to summarise the assumptions that underpinned policy and were embedded in the Choose and Book technology.In a further analytic phase, we used strong structuration theory, whose application to 
the use and non-use of ICTs has been described previously.We focused on the conjuncture – that is, a critical combination of events and circumstances in which the human agent draws on both habitus and knowledge of the here-and-now situation, and is supported or constrained by the available technologies, to inform, execute and justify a particular course of action.We considered two kinds of conjuncture: clinical consultations in which outpatient referrals were initiated and administrative activities in which staff sought to follow through on such referrals.In both kinds, the actor either used or chose not to use Choose and Book to support the referral in particular ways."We considered how the actor's habitus combined with their assessment of external circumstances.The purpose of this was to focus in detail on the extent to which, and the ways in which, actors felt enabled and constrained by the material properties and capabilities of Choose and Book.Development of theory and analysis of data occurred concurrently, each feeding into the other and informed by interdisciplinary discussions.Two authors are medical doctors with an interest in the sociology of professional practice and clinical interaction; the third is a professor of sociology who has developed strong structuration theory, and is an acknowledged authority on the work of Giddens.At the time our primary study began, Choose and Book had been available for two years.Many practices had invested in training and additional staff and begun to use it for referrals but had subsequently reduced their use of it.Of the four GP practices we observed, anonymised as Dale, Beech, Elm and Clover, the percentage of referrals being submitted via Choose and Book was reported to us as 50–60%, 0%, 25% and 80–90% respectively.We never saw a GP use Choose and Book directly during a consultation – even in Clover practice, which described itself as ‘top of the league table’ for percentage of referrals made using the system.Our secondary analysis revealed four analytically distinct but empirically overlapping foci of active or passive resistance to adopting the scripts, implicit in the system design, that actors were required to follow if Choose and Book was to be a success."Each focus highlights a different aspect of GPs' refusal fully to comply, because they believed that Choose and Book threatened to subvert a dimension of valued professional commitments. "The four foci of resistance were: to the policy of choice that Choose and Book symbolised and purported to deliver; to finding ways to accommodate the technology's socio-material constraints and implications; to interference with doctors' contextual judgements; and to adjusting dutifully to the altered social relations consequent on its use.We consider these below."In each case, we summarise and illustrate data from our direct observations of situated action, then consider the relevant internal structures of human actors in conjunction with what was manifest in the material properties of the technology-in-use, and also the corresponding external structures. 
"One of our most consistent findings when observing GP-patient consultations was that choice of hospital was either not offered at all or was presented to the patient as an external requirement, with GPs often highlighting the perceived absurdity of the situation by expressing humour or exasperation.We observed a number of examples in which the offer of choice introduced a distinct note of confusion into an otherwise smooth conversation, since the patient could not understand why they were being given the option of travelling to a distant and unfamiliar hospital.Indeed, GPs appeared to invoke “the government” or “the computer” as a third party in an attempt to reduce this confusion."In all cases where choice of provider was offered, it was recorded on the electronic record using a distinct code that could later be used to audit the practice's performance.Our informal discussions and naturalistic interviews with GPs suggested that this recurring pattern appeared to be driven by three things that GPs ‘knew’: first, that the overwhelming majority of patients wished to attend their local hospital; second, that the government was mistaken in assuming that choice of hospital would act as an effective mechanism to promote competition and efficiency in the NHS; and third, that offering choice was linked to a financial incentive, embedded within the technology, for the practice.Thus, GPs were ‘resisting’ the policy of choice by presenting it to patients as an absurd demand of the system, at odds with their judgement, and refraining from the active investment of energy that its design relied upon, while most were also ‘complying’ with it at a superficial, pragmatic level in order to gain the reward."GPs' perceptions were, broadly speaking, borne out by our data.We did not encounter a single example of any patient choosing to go anywhere except their local hospital, and only one example of a member of staff who recalled such a choice being made.Neither did we encounter any examples of either doctors or patients seeking or using comparative performance data when considering their referral preferences.Tellingly, the capacity of the technology to generate ‘personalised’ lists of options depending on whether patients wished to choose by distance, car parking, food quality and so on was never instantiated.A facility for patients to access such data in their local library in the district where Dale and Elm practice were located had no takers in six months.Beech practice stopped using Choose and Book when financial incentives ceased.In terms of the wider social structures impacting on choice of hospital, our dataset included substantial evidence of attempts to lever political authority.Locally, PCT managers described the PCT as being “beaten up” by the Strategic Health Authority, which in turn was being “hammered”, “bashed” and “kicked” by the Department of Health.Monthly bulletins from the Department of Health reported on progress in implementing the policy, and annual large-scale National Patient Choice Surveys were commissioned by the Department of Health in an attempt to demonstrate that the technology had been instrumental in achieving the policy goal.In these, around half of responding individuals recalled being offered ‘choice’, but the response rate was very low.The process of referral was severely constrained in real time by the material functionality of the Choose and Book technology, whose operation at the time of our data collection was cumbersome, unreliable and time-consuming.Our ethnographic 
observations confirmed estimates of practice staff that a Choose and Book referral took, on average, twice as long as a manual referral.They recounted numerous examples of the technology freezing, crashing, running slowly, failing to supply the necessary password for the patient, requiring manual data entry for some fields and failing to identify a suitable appointment slot at the preferred hospital.Choose and Book referrals were far from ‘paperless’.On the contrary, they generated large amounts of printed paper, including internal memos and request sheets, sticky notes, protocols, flowsheets, instructions and passwords for patients and – in three participating practices – a paper ledger of all ‘paperless’ referrals sent.In this and other ways, Choose and Book was viewed by practice staff as worsening the service problems it had been introduced to solve and as generating negative knock-on effects.Administrative staff considered Choose and Book highly temperamental, and spoke of having to get to know the system through accumulated experience and trial and error.We observed many examples of staff helping one another across a shared office in this regard.They spoke of not trusting the electronic system, and of being unable to navigate the system comfortably even when highly experienced in using it.They spent considerable time on the telephone to a helpdesk or to their counterparts in the hospital service trying to over-ride or work around glitches in the system.An aspect of this sociomateriality was resistance to the expense of the Choose and Book system."A few GPs in our sample identified positive aspects of Choose and Book but commented that the technology was a cumbersome and expensive way of achieving that goal.In this focus of resistance, the key external structures impacting on human actors were, on the one hand, the modernist ideal of a reliable, touch-of-a-button automated system and, on the other, the reality of technologies-in-use: invariably messy and less than ideally fit for purpose."Our observations showed that when considering whether and where to refer a patient, the GP routinely drew on his or her personal knowledge of that patient, both clinical and social, and of local services, including the scope of particular clinics in particular localities; the patient's own history of being treated at a particular hospital; transport services and the patient's ability and willingness to use these; the expertise and interests of local consultants; personal experience of referring patients to that service previously; and even – as in the above quote – personal experience of being treated by particular consultants themselves.In the single example we observed of a GP attempting to use Choose and Book, he abandoned the attempt because he could not find a suitable service.GPs and administrative staff explained to us that services in other localities tended to be organised differently – for example, they called the ‘same’ clinic by a different name or subdivided the work of the specialty in a different way, so a GP was typically very knowledgeable about a local hospital service but much less knowledgeable about comparable services in other localities.Importantly, the kind of knowledge the GP needed to select the best option for the patient was not the kind available on the ‘NHS Choices’ website.The policy of offering choice of hospital assumed that in different localities, similar service models with similar names would be available for a limited menu of diseases or conditions, allowing the 
‘best’ service to be selected easily using a dashboard of performance metrics.The reality was that patients invariably presented not – or not merely – with a ‘textbook’ clinical condition but with a unique illness along with a unique set of comorbidities, personal priorities and social circumstances.The abstracted criteria embedded in the Choose and Book software and NHS Choices website were far less nuanced.Many GPs described how they gave up using Choose and Book because it rendered them unable to apply their knowledge and skills to obtain the best outcome for their patient.As one observed, “the choice is only of the crudest kind”."In terms of external structures, this locus of resistance reflected a wider mismatch between what we have called ‘medicine's internal goods’ and neoliberal policy.The pressure from policymakers on the medical profession to comply with a restricted taxonomy of readily classifiable disease states that map unproblematically to particular investigations or treatments reflects the kind of technology-work mismatch described by Brown and Duguid in a range of work settings.In a more layered sociological analysis, such mismatch in relation to medical work is depicted as having political origins and been termed ‘conceptual commodification’:“External control over medical care requires something more than literal commodification.Rather, it requires conceptual commodification of the output of the medical labour process: that is, its conceptualization in a standardized manner.Such commodification facilitates control over the production of services, not just over the arrangements for their exchange….The basic strategy of commodification is to establish a classification system into which unique cases can be grouped in order to provide a definition of medical output or workload.,In the consultations we observed directly, most discussions about referral took a traditional format, with the GP suggesting a consultant and a course of action, and the patient accepting the suggestion.One reason why they did not use Choose and Book during consultations was a reluctance to take on what they viewed as a more technical role.As one GP put it on an email exchange about Choose and Book, “We seem to be moving away from curing, caring and comforting to robotic automata”.As with the other forms of resistance described above, this can be explained in terms of internal social structures – specifically, professional identity.GPs considered it their professional duty to recommend a clinically appropriate outpatient clinic, including any necessary dialogue with the patient about their needs and preferences.But they defined the technicalities of booking appointments as outside their scope of practice and associated these with a loss of status and autonomy that was often deeply held and strongly expressed.This resistance was played out at locality level, between GPs and PCT staff."PCT managers did not question the ends of Choose and Book but presupposed that it was fit for purpose, attributing its low uptake to Luddism and even “spite”.But when the PCT sent GPs a letter that spoke of “failure” against the “standard” of Choose and Book and described low uptake as a “threat to good quality care”, the GPs responded vociferously by challenging this definition of quality and the legitimacy of the metrics being applied.On the contrary, they claimed, they had abandoned Choose and Book because it was a threat to professional standards.Our observational data revealed that the tension between the professional 
and the technical was perceived by some administrative staff as well as by most doctors.One administrator in Clover practice, BN, told us she had decided to take early retirement as a direct result of Choose and Book.She associated professionalism in her role with qualities such as knowledge of the services available locally, and with the ‘family doctor’ relationship that was built between patients and particular staff through continuity of care: “The patients have always been my main concern here."I don't know where patients are these days – lost under piles of paper and in the Choose and Book system”.BN was concerned that patients often phoned the practice because they did not understand instructions for booking their appointment.She bemoaned the introduction of a standard accompanying letter sent to patients with Choose and Book paperwork as impersonal, and insisted on adding her own name and signature to it.But the new system discouraged such personal touches."As BN commented while doing a Choose and Book referral: “I need to save this in Choose and Book …now what I'm going to do in my capacity as ‘absolutely nothing’, I'm going to attach it….”.A few administrative staff, however, were positive about Choose and Book.The lead administrator at Clover practice, XY, for example, was a ‘super user’ of the system: skilled, confident and keen to help others learn it."She saw Choose and Book's technical idiosyncracies as a challenge and felt that its complexity made her job more interesting.She took particular pride that the practice was outperforming all other practices locally for use of Choose and Book."When some GPs in the practice had advised her to “hold off a bit” on using the Choose and Book technology because of its questionable cost effectiveness, her response was “I can't do my job 50%”. 
"In terms of dispositional values, BN aligned strongly with the values of the traditional family doctor service.In contrast, XY could be viewed as having positioned herself as a bureaucratic cog within the expert system, reflecting the ‘new professionalism’ of what Harrison has called scientific-bureaucratic medicine, overly detached from the professional values of the locally embedded general practice, and focused primarily on the efficiency of means rather than the value of the ends.Both national and local policymakers were characterised by a striking lack of engagement with the values, identities and relationships of general practice."The PCT managers we interviewed, for example, saw referral as the same administrative process whether achieved via Choose and Book or a traditional referral.This framing did not take account of the wider changes in roles, responsibilities or identities associated with the Choose and Book system.This case study of referral to hospital in the English NHS in 2007–10 has revealed a contested social practice driven by national policy and linked to the use of a nationally mandated technology."The combination of strong structuration theory, Giddens' conceptualisation of expert systems, and a hermeneutic and ethical sensitivity to professional values has allowed us to do the following.Firstly, we have theorised this phenomenon in relation to wider social changes in late modernity, as resistance to an expert system.Secondly, we have constructed an ideal typical conception of the professional values of those involved in general practice that articulates the moral bases for their resistance.Thirdly, we have explored the tensions between these value-dispositions and the specific forces and pressures introduced by the abstract system of Choose and Book.The various hierarchical orderings inscribed within Choose and Book and on the NHS Choices website created potential for policymakers to influence social relations and practices beyond immediate face-to-face interaction.But expert systems can produce such action only to the extent that the people intended to use them actually do so.If they refuse, or are prevented from doing so, the intended action at a distance does not occur.Our findings reveal a mismatch between the model of clinical work underpinning the ‘choice’ policy and inscribed in the Choose and Book technology and the more complex, granular and exception-filled nature of real-world clinical practice."The choice policy pursued by the English Department of Health depicted clinical care in transactional rather than relationally situated terms: it harboured a model of GPs' input as taking place within artificially bounded, unconnected episodes and in relation to overly simple scenarios.It also used the term ‘quality’ mainly in relation to discrete and abstractly conceived structures, processes and procedures.A contextual and professional framing, in contrast, would emphasise the quality of relationships between patient and doctor and between GP and consultant and the value of continuity of these relationships over time."A striking finding in this study was policymakers' and managers' limited understanding of the detail of clinical work and the knowledge that informs referral practice.It was assumed that GPs could be prompted to use the system through two behaviourist mechanisms: financial incentives and disclosure of performance data.Policymakers were either unaware of, or dismissed, the influence of institutional structures such as the norms of professional 
practice, which defined quality in ethical and relational terms rather than in terms of a state-imposed metric of compliance with a policy."They also under-estimated the extent to which the technology's material properties would prove limiting. "The framing by PCT staff of Choose and Book use as a ‘quality standard’, and their refusal to engage with the GPs' concerns about threats to quality, is an example of the silencing effects that Boltanski writes about in criticising neo-managerialism's concerted, ill-advised, constriction of the space for meaningful conversation and debate about the role of normative values in guiding policy. "The managers' perspective reflects a situated frame of meaning, in which their role is defined in such a way that they deal solely with implementing means.Theirs is a bureaucratic form of professionalism, which entails a refusal to question ends and the values that inform these.We conclude that overly top-down, abstracted approaches to reducing resistance to information technology are not the best way forward.Rather, resistance to such technologies and the expert systems of which they are part would be reduced if there was, firstly, a greater recognition and dialogue with the world of professional values within its design and implementation, and secondly, a greater willingness to seek degrees of balance between such virtual, remote, systems and the exigencies of the local sites in which professional values are performed.Choose and Book is one of many expert systems being introduced, top down, in the English NHS."It is surely time for academics and policymakers to heed Boltanski's call to open up debate with a view to acknowledging the tension between normative values and forms of order and authority. "While coming from a different theoretical perspective, such an approach would align with socio-technical theorists' longstanding call for technologies to support rather than over-ride the micro-detail of professional work. | In 2004, the English Department of Health introduced a technology (Choose and Book) designed to help general practitioners and patients book hospital outpatient appointments. It was anticipated that remote booking would become standard practice once technical challenges were overcome. But despite political pressure and financial incentives, Choose and Book remained unpopular and was generally used reluctantly if at all. Policymakers framed this as a problem of 'clinician resistance'. We considered Choose and Book from a sociological perspective. Our dataset, drawn from a qualitative study of computer use in general practice, comprised background documents, field notes, interviews, clinical consultations (directly observed and videotaped) and naturally occurring talk relating to referral to hospital in four general practices. We used strong structuration theory, Giddens' conceptualisation of expert systems, and sensitivity to other sociological perspectives on technology, institutions and professional values to examine the relationship between the external environment, the evolving technology and actions of human agents (GPs, administrators, managers and patients). Choose and Book had the characteristics of an expert system. It served to 'empty out' the content of the consultation as the abstract knowledge it contained was assumed to have universal validity and to over-ride the clinician's application of local knowledge and practical wisdom. 
Sick patients were incorrectly assumed to behave as rational choosers, able and willing to decide between potential options using abstracted codified information. Our analysis revealed four foci of resistance: to the policy of choice that Choose and Book symbolised and purported to deliver; to accommodating the technology's socio-material constraints; to interference with doctors' contextual judgements; and to adjusting to the altered social relations consequent on its use. We conclude that 'resistance' is a complex phenomenon with socio-material and normative components; it is unlikely to be overcome using the behaviourist techniques recommended in some health informatics and policy literature. © 2013 The Authors. |
358 | Bright clumps in the D68 ringlet near the end of the Cassini Mission | The D ring is the innermost component of Saturn's ring system, and it is a very complex region with structures on a broad range of scales. One of the more perplexing features in this region is a narrow ringlet found around 67,630 km from Saturn's center. This ringlet, designated D68, was first observed in a small number of images obtained by the Voyager spacecraft, and more recently has been imaged repeatedly by the cameras onboard the Cassini spacecraft, enabling several aspects of its structure and composition to be documented. These images show that the ringlet is very faint in back-scattered light, and that its brightness increases dramatically at higher phase angles. This implies that the visible material in this ringlet is very tenuous, and composed primarily of dust-sized particles in the 1–100 micron size range. Meanwhile, high-resolution images show that D68 has a full-width at half-maximum of only around 10 km, while lower-resolution images reveal that D68 has a substantial orbital eccentricity and that its mean radial position appeared to oscillate ± 10 km around 67,627 km with a period of order 15 years. Finally, these studies found that prior to 2014 the ringlet exhibited broad and subtle brightness variations that revolved around the planet at around 1751.65°/day, consistent with the expected rate for material orbiting at the ringlet's observed mean radius. After 2014, the Cassini spacecraft continued to monitor D68 until the end of its mission in 2017. This was not only because D68 is scientifically interesting, but also because Cassini's final orbits around Saturn took it between the planet and the D ring, causing the spacecraft to pass within a few thousand kilometers of D68.
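The claim that a pattern speed near 1751.65°/day matches the expected orbital rate at D68's mean radius can be checked with a short calculation. The sketch below is not taken from the paper; it assumes approximate published values for Saturn's gravitational parameter, the zonal harmonics J2, J4 and J6, and the 60,330 km reference radius (none of which are quoted in the text above), and it uses the standard low-order expression for the mean motion of a circular orbit around an oblate planet.

```python
import math

# Assumed (approximate) published values for Saturn's gravity field; not quoted in the text above.
GM = 3.7931187e16          # gravitational parameter, m^3 s^-2
R_REF = 60330e3            # reference radius for the zonal harmonics, m
J2, J4, J6 = 1.629071e-2, -9.3583e-4, 8.614e-5

def mean_motion_deg_per_day(a_m):
    """Mean motion of a circular orbit about an oblate planet, to O(J6), in degrees per day."""
    x = (R_REF / a_m) ** 2
    n_sq = (GM / a_m**3) * (1.0 + 1.5*J2*x - 1.875*J4*x**2 + 2.1875*J6*x**3)
    return math.degrees(math.sqrt(n_sq)) * 86400.0

a_d68 = 67627e3            # mean radius of D68 quoted in the text, m
print(mean_motion_deg_per_day(a_d68))   # ~1751.5 deg/day, close to the quoted 1751.65-1751.7 deg/day
```

With these assumed constants the expression returns roughly 1751–1752°/day at a = 67,627 km, in line with the quoted rates; the exact figure shifts slightly with the adopted gravity solution. Returning to the final-orbit geometry described above, in which the spacecraft passed within a few thousand kilometers of D68: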
"Hence it was important to know how this ringlet was behaving in case it could either pose a hazard to the spacecraft or have any interesting effects on the in-situ measurements during Cassini's close encounters with Saturn.These images revealed unexpected and rather dramatic changes in the brightness structure of this ringlet.Whereas the brightness variations in D68 prior to 2014 could not be clearly discerned in individual images, images taken after 2015 showed a series of bright “clumps” that were several times brighter than the rest of the ringlet.These clumps were observed multiple times over the last two years of the Cassini mission, enabling their motion and slow evolution to be documented.Localized brightness enhancements have previously been observed in a number of other dusty rings."Some, like the arcs in Saturn's G ring and Neptune's Adams ring, persist for decades and therefore probably represent material actively confined by either mean-motion resonances or co-orbiting moons.Others, like the bright features seen in the F ring and the dusty ringlets in the Encke Gap, are more transient and therefore probably consist of material released by collisions and/or concentrated by interparticle interactions.The relatively sudden appearance of the clumps in D68, as well as their evolution over the last two years of the Cassini mission, are more consistent with the latter scenario.Hence this work will explore the possibility that these clumps consist of material released by collisions among larger objects within D68.The relevant aspects of the observational data used here are provided in Section 2, while Section 3 describes the properties of the D68 clumps, including their motions and brightness evolution.Section 4 then discusses how these features might have been generated from repeated collisions among objects orbiting within or close to D68.Finally, Section 5 provides estimates of where these clumps were located relative to the Cassini spacecraft during its final orbits, and Section 6 summarizes the results of this analysis.The data on the D68 ringlet considered here come from the Imaging Science Subsystem onboard the Cassini Spacecraft.Table 1 summarizes the images used in this analysis.All of these images were calibrated using the standard CISSCAL routines that remove dark currents, apply flatfield corrections, and convert the observed brightness data to I/F, a standardized measure of reflectance that is unity for a Lambertian surface illuminated and viewed at normal incidence.These calibrated images were geometrically navigated with the appropriate SPICE kernels, and the pointing was refined as needed based on the observed locations of stars in the field of view.Note that the long exposure durations used for many images caused the images of stars to be smeared into streaks.The algorithms for navigating images based on star streaks are described in Hedman et al.In previous analyses of D68 images, the brightness data from each image would be averaged over longitude to produce a radial brightness profile.This was sensible when the ringlet showed only weak longitudinal brightness variations, but is no longer appropriate now that the ringlet possesses clumps that are smaller than the longitude range spanned by a single image.Hence, for this analysis the image data were instead re-projected onto regular grids in radii r and inertial longitudes λi1.Each column of the re-projected maps then provides a radial profile of D68 at a single inertial longitude, which can be co-added as needed to 
generate longitudinal profiles with sufficient resolution to document the clumps. Since the ring material orbits the planet, these profiles are constructed in a co-rotating longitude system λc = λi − n0(t − t0), where n0 is the mean motion of the ring material, t is the observation time, and t0 is a reference time. This study uses a reference time of 300,000,000 TDB or 2009-185T17:18:54 UTC, which is the same value used in prior investigations of this ringlet's structure. Also, the mean motion is taken to be 1751.7°/day, a value that ensures the most prominent clump remains at nearly the same co-rotating longitude in the available data. This rate is also consistent with the expected mean motion of particles orbiting within D68. Note that material moving at this rate will smear over 0.02°–0.4° in longitude over the 1–20 second exposure times of the relevant images. Fortunately, this longitudinal smear is small compared to the scale of the clumps that form the focus of this study. Of course, the observed brightness of D68 not only depends on the amount of material in the ringlet, but also on the viewing geometry, which is parameterized by the incidence, emission and phase angles. Fortunately, in this case the dependence on incidence and emission angles is relatively simple because D68 has a very low optical depth. While D68's optical depth has not yet been directly measured because it has so far eluded detection in occultations, the overall brightness of the ringlet is consistent with the peak optical depth of the visible material being of order 0.001. This means that any individual particle is unlikely to either cast a shadow on or block the light from any other particle. In this limit, the surface brightness is independent of the incidence angle and is proportional to 1/|μ|, where μ is the cosine of the emission angle. Hence the above estimates of the ringlet's equivalent width are multiplied by |μ| to obtain the so-called normal equivalent width. The profiles of D68's phase-corrected normal equivalent width versus co-rotating longitude are shown in Figs. 2, 5 and 6 and are provided in three supplemental tables to this work. Note that no error bars are provided on these profiles because systematic uncertainties due to phenomena like variations in the background level dominate over statistical uncertainties, and such systematic uncertainties are difficult to calculate a priori. Instead, rough estimates of these errors are computed based on the rms variations in the profiles after applying a 3°-wide high-pass filter to suppress features like the clumps. These noise estimates are provided in Table 1. This section summarizes the observable properties of the D68 clumps derived from the above profiles of the ringlet's brightness. Section 3.1 describes the distribution and evolution of the bright clumps observed in the last 18 months of the Cassini mission. Section 3.2 compares these clumps with the previously-observed brightness variations in the ring and uses the sporadic observations in 2014 and 2015 to constrain when these bright clumps may have originally formed. Finally, Section 3.3 documents the trends in the locations and brightnesses of these clumps. Longitudinal brightness variations can be seen in every observation of D68 obtained in 2016 and 2017. Fig.
2 shows profiles derived from a sub-set of those observations that covered most of the clumps, were obtained at phase angles above 140° and had ring opening angles above 5°.These seven profiles provide the clearest picture of D68’s structure during this time.The estimated noise levels for all these profiles are less than 3 m, which is consistent with their generally smooth appearance outside of a few sharp excursions that can be attributed to background stars and cosmic rays.Hence statistical noise and most systematic variations associated with instrumental phenomena like stray-light artifacts are less than 10% of the signal for all of these profiles.Observations at lower phase and/or ring opening angles also captured these brightness variations, but either had lower signal-to-noise or did not provide such clean profiles due to the greatly degraded radial resolution away from the ansa.In all seven of these profiles the bright clumps are clearly restricted to a range of longitudes between ± 90°.Figs. 3 and 4 provide closer looks at these clumps and more clearly shows how they evolved over the last two years of the Cassini mission."In early 2016, there are four clear peaks in the ringlet's brightness.These four features are here designated with the letters T, M, L and LL.The M clump is the brightest of these features, being 4–5 times brighter than the background ring, while the T, L and LL clumps are more subtle features.All four clumps appear to be superimposed on a broad brightness maximum that is centered between the M and L clumps.Over the course of 2016, each of these clumps evolved significantly.The L and LL clumps became progressively broader and less distinct.By contrast, the M clump became somewhat brighter and slightly more sharply peaked, while the T clump became much brighter and developed a strongly asymmetric shape with a very sharp leading edge.In 2017, the L and LL clumps continued to become less and less distinct, with the LL clump becoming practically invisible by the latter half of 2017.The T and M clumps also started to become progressively broader over the course of 2017.At the same time, two new features appear in the profiles.First, a small bright feature, designated ML emerges from the leading edge of the M clump and drifts ahead of the clump over the course of 2017.More dramatically, a new brightness maximum appears behind the T clump.Designated TN, this feature is first seen on Day 123 of 2017, where it appears as very faint peak on the trailing flank of the T clump.On Day 170, it appears as a narrow feature with a peak brightness intermediate between the T and M clumps.Finally, on Day 229 this clump has brightened and broadened dramatically, becoming the brightest feature in the ringlet.The bright clumps described above are completely different from the brightness variations seen in D68 prior to 2014.Fig. 5 shows profiles of the rings derived from six observations obtained prior to early 2014.Five of these are a subset of observations from Hedman et al., but have been processed using the techniques described above to ensure that any narrow structures would not be missed.Also note that the brightness profiles presented in Hedman et al. 
used a slightly different co-rotating longitude system with a rotation rate of 1751.65°/day, while here a rate of 1751.70°/day is used to facilitate comparisons with the later data. Since there were no obvious sharp features in these profiles, these profiles were made with larger longitudinal bins in order to improve signal-to-noise. None of these early observations show any of the bright clumps seen in 2016 and 2017. Instead, any real brightness variations are much more subtle and can be comparable to variations associated with instrumental noise. Note that these profiles are more heterogeneous in their noise properties than those illustrated in Fig. 2 because they were obtained under a broader range of viewing geometries and employed a larger range of exposure times. The Rev 039 HIPHAMOVD, Rev 168 DRCLOSE and Rev 173 DRNGMOV profiles are comparable in quality to the 2016–2017 profiles, exhibiting random fine-scale variations of around 5 m. The Rev 037 AZDKMRHP profile shows somewhat higher scatter in its profile, despite having a similar noise level in Table 1. This is because the variations seen here are primarily at the few-degree scale due to instrumental background phenomena like stray light artifacts. Still, all four of these early profiles show the broad, low peak that was first identified in Hedman et al. and appears to be a real ring structure. In 2007, the most visible aspect of this modulation is a falling slope in brightness around +90° in the new co-rotating coordinate system. Between 2007 and 2012, the shape and location of the peak shifted, such that in 2012–2013 the most obvious brightness variation is now a rising slope centered around −45°, near the location where the clumps would later appear. This same brightness maximum is also present in the Rev 198 DRNGMOV profile, but it is somewhat harder to discern because this profile shows a periodic modulation with a wavelength of order 20°, which arises because there is a particularly bright stray-light artifact running across part of these images that is not entirely removed by the background-subtraction procedures. The Rev 201 DRNGMOV profile is interesting because on fine scales it appears to be of comparable quality to the Rev 039 HIPHAMOVD and Rev 168 DRCLOSE profiles, but it also exhibits some novel brightness structure. Specifically, in addition to the broad hump, this profile also has brightness variations on scales of a few tens of degrees between −45° and 0° longitude, where some of the brightest clumps would later be found. Stray light artifacts are far less prominent in this dataset than in Rev 198 DRNGMOV, so these could be real features in the ring. Unfortunately, there are no other profiles of comparably good quality from this time period that can be used to confirm the existence of these weak peaks, and so all that can be said with confidence at this time is that nothing like the bright clumps seen in 2016 were present in early 2014. Observations of D68 were very limited between early 2014 and early 2016, in part because the spacecraft was close to Saturn's equatorial plane for part of that time. Table 1 includes a list of every observation of D68 with resolution better than 30 km/pixel during this time period, and Fig.
6 shows longitudinal brightness profiles for those image sequences which captured more than 10° of the region between co-rotating longitudes of ± 90°.These observations, which generally have lower signal-to-noise or less longitudinal coverage than those discussed above, still provide slightly improved constraints on when the clumps might have formed.Working backwards from when the clumps clearly existed, we can first note that the T, M, L and probably LL clumps can all be seen in the Rev 231 FNTHPMOV observation obtained in early 2016, albeit at lower signal-to-noise.Portions of the M clump are also clearly visible in the Rev 231 DRCLOSE and the Rev 228 HPLELR observations."The latter was obtained at very low ring opening angles, and therefore only provides limited snapshots of the ring's brightness.Also, these snapshots often show noticeable trends within the data derived from each image, which likely arise because the extreme foreshortening away from the ansa can lead to slight inaccuracies when the observed brightness values are interpolated onto maps of brightness versus radius and longitude.Despite these complications, there are hints of the T, L and LL clumps in these data.The T, M, L and LL clumps therefore all probably existed by the end of 2015.Unfortunately, there are no observations of the longitudes that would contain the M clump between mid-2014 and late 2015.One observation from mid-2015 is another low-ring-opening angle observation which just missed all four clumps.Prior to this, in early 2015, there are two DRCLOSE observations.The Rev 211 DRCLOSE observation on Day 9 of 2015 shows a brightness maximum that could be the L clump, while the Rev 212 DRCLOSE observation on Day 42 shows a small brightness variation that could be the T clump.Finally, in mid-2014 there was a low-phase DRLPMOV observation in Rev 206.The signal-to-noise for this observation is quite low because it was obtained at low phase angles where the ring is comparatively faint, and so it is hazardous to interpret any of the brightness variations in this profile as evidence for any of the later clumps, Still, it is worth noting that there are no features that are as bright as the M clump.It is therefore reasonable to conclude that the bright M clump most likely formed sometime after mid-2014 and before late 2015, but that the fainter L and T clumps probably began to appear sometime in 2014.To better quantify the temporal evolution of these clumps, Tables 2–4 and Figs. 7–8 provide summaries of how their positions and integrated brightnesses changed over the course of the Cassini mission.Table 2 provides the observed locations of several clump features.In this table, “Peak” locations correspond to brightness maxima and “Edge” locations correspond to minima found just ahead of the T and M/ML clumps.The highly variable morphology of the clumps made automatic algorithms for locating these features impractical, so the locations given in Table 2 were instead determined by visual inspection of the profiles.Uncertainties on such numbers cannot be reliably computed a priori, and so no errors are provided in the table.However, position estimates obtained at roughly the same time differ by only a few tenths of a degree, which suggests that the uncertainties in these parameters are less than half a degree.Fig. 
7 plots these position estimates as functions of time, along with quadratic model fits where the clump is allowed to have a drift rate that varies linearly with time. The parameters derived from these fits are provided in Table 3, along with uncertainties derived from the scatter in the data points around the trend. As shown in the top panel of Fig. 7, prior to 2017 all of the clumps were drifting forwards at rates between 2°/year and 8°/year. However, these drift rates gradually slowed down, with both the M and T clumps beginning to drift backwards at a rate of around −2°/year by the end of the Cassini mission. Both the average drift rates and the accelerations of these features contain information about the clumps' dynamics. For one, the small dispersion in the drift rates associated with these clumps indicates that the clump material is tightly confined in semi-major axis. The dispersion of clump drift rates at any given time is always of order 5°/year, which corresponds to a fractional spread in mean motions δn/n ∼ 8 × 10⁻⁶, which in turn implies a fractional spread in semi-major axes δa/a = (2/3)(δn/n) ∼ 5 × 10⁻⁶, or δa ∼ 0.4 km. We can also note that the difference in mean motions between the peak of the M clump and its leading edge is about 2°/year. If this spreading is due to Keplerian shear, then it implies that the material in this part of the brightest clump has δa ∼ 0.2 km. The widths of the L and LL clumps are harder to quantify, but we may note that between the Rev 233 FNTHPMOV observation on Day 2016-071 and the Rev 256 HPMONITOR observation on Day 2017-011 the L clump increased from roughly 5° wide to about 10° wide, which implies the two ends sheared apart at a rate close to 5°/year, comparable to the dispersion of the clumps as a whole and the spreading rate of the M clump. All these findings suggest that the clumps consist of material with sub-kilometer spreads in semi-major axis. Turning to the slow accelerations of the clumps, these correspond to relatively slow changes in the material's average semi-major axes. For example, the drift rates for the M and T clumps changed by roughly 4°/year². This corresponds to an outward radial migration rate of roughly 0.3 km/year. The radial accelerations of the L and LL clumps are smaller, more like 1°/year², but also suggest slow outward migration. Recall that the mean radius of D68 appears to oscillate with an amplitude of ∼10 km over a period of roughly 15 years, which would correspond to maximum radial drift rates of order 3–4 km/year. Note that during this particular time period the ringlet should be moving outward, which is consistent with the observed accelerations of the clumps, but the magnitudes of the radial drift rates are roughly an order of magnitude slower than one would expect for the ringlet as a whole. Hence the connections between the acceleration of the clumps and the overall radial migration of D68 remain obscure. Table 4 and Fig.
8 summarize the total brightness estimates for each of the clumps.Note that for the T clump we also provide the values with the contribution from the superimposed TN clump removed.The clumps show a variety of brightness trends.The L and LL clumps show roughly constant brightness in 2016.Thus these two clumps appear to consist of roughly constant amounts of material that are gradually spreading out over larger and larger longitude ranges, and therefore becoming more indistinct.By contrast, the M, T and TN clumps clearly show initial increases in their total brightness over time."The TN clump's brightening actually accelerated over time, with its PC-NEA going up by 20 km2 in the 47 days between the first two observations, and by another 100 km2 in the following 59 days. "Clump T's increase in brightness could also have accelerated in 2016 if the feature seen in 2015 is really that same clump, but throughout 2016 its brightness increased at a roughly steady rate of 150 km2/year, which is substantially slower than TN's brightening rate.Interestingly, T appears to have stopped brightening around the time TN formed."Clump M's brightening is probably the least well documented, but the early 2016 measurements are roughly one-half the brightness observed in late 2016 and 2017.Assuming a linear brightness increase between early and mid 2016, this would imply a brightening rate of around 500 km2/year, which is comparable to the brightening rate for the TN clump.Clearly, something happened to D68 in 2014 or 2015 to create the clumps seen at the end of the Cassini Mission.The most obvious explanation for such dramatic and localized increases in brightness is that fine material was released by collisions into or among larger objects located within or nearby D68, similar to the way bright features are thought to form in the F ring.This scenario will be examined in some detail below.Section 4.1 examines how much material is needed to produce the visible clumps, and whether suitable source bodies for this material could be lurking in or around D68.Next, Section 4.2 argues that the clumps in D68 are primarily due to collisions among particles orbiting close to D68, rather than impacts from interplanetary objects.These arguments are based on comparisons between the properties of the clumps in D68 and those found in the F ring.Finally, Section 4.3 examines the spatial and temporal distribution of the clump forming events and what they may imply about the distribution of source bodies within D68.Of course, the above calculation is the minimum mass required to produce the visible dust clumps, and does not include material released in the form of vapor, particles much smaller than 1 μm in radius or particles much larger than 10 μm in radius.Still, the above calculations imply that the observed clumps do not require kilometer-scale objects to supply the observed material.Instead the source objects could be comparable in size to the largest typical particles in the C ring.The relatively small amount of material required to produce these clumps is also generally compatible with the lack of any direct evidence for any particles larger than the visible dust grains in the available images of D68.Objects with radii between 1 mm and 1 km can be very difficult to see because their surface-area-to-volume ratios are smaller than small dust grains and because they are too small to be easily resolved as discrete objects in most images of D68.Indirect evidence for such source bodies comes from two high-resolution observations 
of D68 obtained in 2005 that contained a secondary peak on the inner flank of this ringlet, displaced inwards by 10–20 km from the main D68 ringlet.Unfortunately, no later high-resolution D68 observations obtained during the remainder of the mission covered the same co-rotating longitude range, and no other images showed clear evidence for additional ringlets near D68, so the connections between these secondary peaks and the clumps are rather obscure.However, the two observations where the secondary peaks are clear do appear to have occurred around 0° in the above co-rotating longitude system, and so they could potentially represent material scattered out of D68 by larger objects that later gave rise to the clump material.More detailed analysis would be needed to ascertain whether similar-sized objects could produce both the clumps and the displaced ring material.Further evidence for a population of larger particles in the vicinity of D68 comes from the in-situ measurements made by the Cassini spacecraft when it passed between the planet and its rings.During this time, the Low Energy Magnetospheric Measurement System component of the Magnetosphere Imaging Instrument detected a clear reduction in the intensity of protons and electrons when the spacecraft crossed magnetic field lines that passed near D68.This localized reduction in charged particle flux implies that there is a concentration of material capable of absorbing charged particles around D68.The total mass of this material is still being investigated.However, it is worth noting that D68 is the only feature in the D ring interior to D73 that significantly affects the measured plasma densities, despite the fact that D68’s brightness relative to its surroundings is not exceptionally high.Hence it is reasonable to conclude that the reduction in the plasma density around D68 is due to a population of larger particles orbiting in the vicinity of D68, which are invisible to the cameras but efficient absorbers of charged particles.If the clumps found in D68 are collisional debris, then their closest analogs would be the bright clumps in the F ring, which have also been interpreted as the results of collisions either into or among larger objects within that rings.Indeed, by comparing the overall brightness, motions and evolution of these two different types of clumps, we can gain some insights into what sorts of collisions could be responsible for producing the clumps in D68.First of all, the clumps in D68 appear to involve much less material than the clumps in the F ring.French et al. provides the most extensive survey of F-ring clumps to date, which include phase-normalized integrated brightness values.However, some care is needed in comparing the two sets of brightness estimates because French et al. 
normalized the observed brightness by the ratio of the phase function at the observed phase angle to the phase function at 0° phase.This differs from the normalization used here by a factor of the phase function at 0° phase, which is 0.0095 for the phase function given by Eq.Hence the values of PC-NEA need to be multiplied by this factor to obtain “Phase-Normalized Normal Equivalent Areas” that can be compared with the values given in French et al.The range of 10–700 km2 in the PC-NEA values for the D68 clumps therefore corresponds to PN-NEA values in the range of 0.1–7 km2.By contrast, the F-ring clumps have PN-NEA values ranging between 100 and 20,000 km2, and a few clumps are even brighter than this.Each of the F-ring clumps therefore includes hundreds to thousands of times more material than the D68 clumps.The above comparisons imply that the collisions release more material in the F ring than they do in D68.This suggests that the F ring has more abundant and/or larger potential dust sources than D68.This is a reasonable supposition, since there is abundant evidence from images, occultations and charged-particle data for a population of kilometer-scale moonlets within the F ring.By contrast, the generally homogeneous structure of D68 prior to 2015 strongly suggests that such large objects are not common in the vicinity of that ringlet.The F ring probably contains more large source bodies because it lies close to Saturn's Roche limit for ice-rich objects, where larger objects can more easily survive and grow, while D68 is located very close to the planet, where tidal forces will inhibit any accumulation of material into larger objects.Hence it is reasonable to expect that there is more source material for clumps in the F ring than in D68.Next, consider the range of drift rates and spreading timescales of the clumps in D68 and the F-ring.The clumps in the F ring have drift rates that vary by ∼ 0.2°/day, or ∼ 100°/year, and the lengths of individual clumps change at comparable rates.These rates are over an order of magnitude larger than the range of drift rates and spreading rates observed in D68.This implies that the material in the F-ring clumps has a much larger spread in semi-major axes than the material in the D68 clumps.Indeed, the spread of drift rates in the F-ring clumps implies that this material spans a semi-major axis range of order ten kilometers, compared to the sub-kilometer range spanned by D68’s clumps.These differences in semi-major axis spreads are also probably responsible for the differences in how long it takes these clumps to fade away.For example, French et al. showed that an exceptionally bright F-ring clump brightened at a roughly constant rate for about 5 months before fading in a quasi-exponential manner with a half-life of roughly 100 days.The fading timescale for this clump is short compared to the brightness evolution timescales of the D68 clumps, whose integrated brightness could remain constant for over a year.This is consistent with the F ring clumps having shorter spreading timescales than the D68 clumps due to the particles' broader semi-major axis range.The above differences in the clumps' evolution rates strongly suggest that the material in both these ringlets is primarily released by collisions among objects within the ring, rather than collisions into those objects by meteoroids on interplanetary trajectories.
If the clumps were created by interplanetary impactors, the velocity dispersion of the debris would be similar for the two rings, which is clearly not the case.However, if the collisions involve interactions among objects within the ring, then the relative velocities would depend on the velocity dispersion of those objects.While we do not have direct measurements of the orbit parameter dispersion for all the potential source bodies in either the F ring or D68, we may note that while D68 typically appears to be about ten kilometers wide, the F ring has multiple components that span hundreds of kilometers.While the visible material is mostly dust, it is reasonable to expect that the larger particles in the F ring are also more dispersed than the ones in D68.In that case, collisions among objects in the F ring happen at higher relative speeds than those within D68, which could more naturally explain the different ranges of drift rates and spreading timescales for the two rings.Finally, we should note that the extended time it takes for clumps in both D68 and the F ring to reach their maximum brightness is more easily explained if they are both created by collisions among multiple objects within the same ring.A collision involving an interplanetary meteoroid would release dust in a short period of time, which is not what is observed either in D68 or in the F ring.However, if the objects involved in the collision are on nearly the same orbit, then the larger bits of debris from the collision would also be in roughly the same region of phase space, increasing the possibility of repeated collisions, and a gradual release of fine material.If the above arguments are correct, then the debris seen in the D68 clumps probably arose from collisions among larger objects orbiting close to or within the ringlet.Of course, this immediately raises questions about both the timing of the clump formation and the distribution of the source bodies that gave rise to the clumps.At the moment, there are no clear answers to these questions, but we can at least examine some aspects of these clumps that might be relevant to understanding their origin.First of all, it is reasonable to ask why clumps only appeared in D68 after 2014.If these clumps are due to collisions among larger objects within the ring, then something must have happened at that time that increased the probability of such collisions.There are two different potential explanations for what could have happened at this time, one involving the internal evolution of D68 itself, and the other involving impacts by objects from outside the Saturn system.If one wishes to attribute the timing of clump formation to processes internal to D68, the aspect of this ringlet's structure that is most likely to be relevant is the slow evolution of its mean radius.Prior observations of D68 showed that its mean radius slowly declined by 20 km between 2006 and 2012.Later observations, combined with earlier Voyager images, suggest that the mean radius of this ringlet oscillates back and forth with a period of order 15 years.The visible ringlet's mean radius was therefore moving outwards during 2014–2015.Since the origin of this oscillation is still unclear, the visible dust and larger source bodies could oscillate differently, and so perhaps 2014 corresponded to a critical time where dust was more likely to collide with the source bodies and thereby release additional material.The major problem with such an idea is that in 2014 the visible ringlet was not near either its
maximum or its minimum mean radius, and was instead close to the same radius it was in 2011, a time when no obvious clumps appeared.If one wants to consider scenarios where something exterior to D68 initiated the clump formation process, then the most likely option would be that one or more interplanetary objects collided with bodies near D68, initiating the collisional cascades that generated the clumps.A challenge for this sort of scenario is that the LL, L, M and T clumps all appear to have formed around the same time, and did not move substantially further apart during the 18 months they were observed.This either means that debris from the original event was able to spread across a region 90° wide, or that multiple impacts struck different parts of D68 around the same time.In practice, the former option appears unlikely, since the observed clumps are well separated and do not appear to be parts of a continuum of debris.The idea that multiple objects could have struck multiple source bodies at about the same time might at first seem equally unlikely.However, there is evidence that Saturn's rings have not only been struck by discrete objects, but also by more extensive debris clouds analogous to meteor storms.In particular, corrugations found in the C and D rings appear to have been generated by such debris fields, which probably represent material released from an object that was torn apart by either tidal forces or a prior impact with the main rings during a previous passage through the Saturn system.One could therefore posit that a similar debris cloud passed through D68 in 2014–2015, impacting at least 4 source bodies and so initiating the formation of clumps LL, L, M and T.In principle, such a debris cloud could have also had effects on other parts of Saturn's ring system, but at the moment there is no clear evidence for such a recent event elsewhere in the rings.Hence we cannot yet place strong constraints on exactly what event initiated the formation of D68’s clumps.Turning to the spatial distribution of the dust sources, the first thing worth noting is that objects large enough to generate the clump debris must have orbits very close to that of D68.Since D68 is a uniquely narrow, isolated ringlet in the otherwise rather broad and smooth inner D ring, this strongly implies that some process is confining material near this ringlet.In principle, the visible dust could become trapped by a variety of non-gravitational processes, such as resonances with asymmetries in Saturn's electromagnetic field.However, if each clump is debris from collisions involving larger objects, then all those objects would need to have very similar orbits to D68, which strongly suggests that the relevant confining force is not size-dependent.The forces responsible for confining D68 are therefore most likely gravitational.Given D68’s narrowness, the confinement mechanism most likely involves some sort of resonance with either one of Saturn's moons or some asymmetry in the planet's gravitational field.However, at the moment there is no known resonance with any of Saturn's moons that could explain the observed properties of D68.
More generally, the ringlet's lack of strong azimuthal brightness variations prior to 2014, as well as its simple eccentric shape, is not consistent with the orbital perturbations associated with most resonances.It is also worth noting that the clumps in D68 do not appear to be randomly distributed.For one, the clumps are all confined to a region roughly 120° across.Furthermore, the TN/T, M/ML, L and LL clumps appear to maintain a suspiciously regular spacing of roughly 30° for the entire time they are observed.Specifically, the separation between the peaks of the T and M clumps was 26° ± 1°, the separation between the M and L clumps was 32° ± 1°, and the separation between the L and LL clumps was 29° ± 1°.This suggests that the source bodies are not randomly distributed around D68, but that there is something selecting out particular locations for either the source bodies or the dust release.The lack of strong azimuthal brightness variations in the dust prior to 2014 would be difficult to reconcile with any external force confining dusty material in longitude.Hence it seems more likely that this spacing reflects something about the distribution of the source bodies.Interestingly, the stable solution for roughly 4 equal-mass bodies in the same orbit also has the four objects spanning roughly 120° and being spaced by between 30° and ∼ 40°.We may therefore posit that there is some outside force that is trapping material at a particular semi-major axis, which includes both large source bodies and dust.This trapping potential would need to be longitude-independent, allowing the few large bodies in this region to arrange themselves into a stable co-orbital configuration.At the moment, I am not aware of any phenomenon that can satisfy all these requirements, so more work needs to be done to develop a plausible dynamical explanation for the confinement and structure of D68.During its Grande Finale, the Cassini spacecraft passed between the planet and D68, enabling it to make in-situ measurements of the material in this region.Some of these measurements revealed azimuthal variations that might be correlated with the D68 clump locations.While the visible material in D68 appears to be strongly confined in semi-major axis, the clump-forming events could potentially release smaller particles and molecules that could more easily reach the spacecraft.In addition, the event that triggered clump formation in D68 could have had larger-scale effects on the planet and/or its rings that might have influenced these measurements.Hence, for the sake of completeness, Fig.
9 shows where the Cassini spacecraft passed relative to these clumps on all of the relevant orbits.The spacecraft clearly sampled a wide range of longitudes relative to the clumps during its final orbits, allowing a variety of hypotheses to be tested regarding connections between D68’s clumps and the in-situ measurements.The main results of the above analysis of D68’s longitudinal structure are the following:Sometime in 2014 or 2015 a series of four bright clumps appeared in D68.The material in two of these clumps slowly spread over time, making the clumps less distinct.The two other clumps became progressively brighter over the course of 2016, and appeared to give rise to additional structures in 2017.The spreading rates and dispersion in drift velocities suggest that the material in all these clumps spans less than a kilometer in semi-major axis.The total amount of material visible in the clumps could come from a few objects less than 10 m in radius.These clumps could have been produced by collisions among larger objects orbiting within or very close to D68.The spatial distribution of these clumps may provide new insights into how material is confined around D68. | The D68 ringlet is the innermost narrow feature in Saturn's rings. Prior to 2014, the brightness of this ringlet did not vary much with longitude, but sometime in 2014 or 2015 a series of bright clumps appeared within D68. These clumps were up to four times brighter than the typical ringlet, occurred within a span of ∼ 120° in corotating longitude, and moved at an average rate of 1751.7°/day during the last year of the Cassini mission. The slow evolution and relative motions of these clumps suggest that they are composed of particles with a narrow (sub-kilometer) spread in semi-major axis. The clumps therefore probably consist of fine material released by collisions among larger (up to 20 m wide) objects orbiting close to D68. The event that triggered the formation of these bright clumps is still unclear, but it could have some connection to the material observed when the Cassini spacecraft passed between the planet and the rings. |
359 | Topic detection using paragraph vectors to support active learning in systematic reviews | Systematic reviews involve searching, screening and synthesising research evidence from multiple sources, in order to inform policy studies and guideline development.In evidence-based medicine, systematic reviews are vital in guiding and informing clinical decisions, and in developing clinical and public health guidance.In carrying out systematic reviews, it is critical to minimise potential bias by identifying all studies relevant to the review.This requires reviewers to exhaustively and systematically screen articles for pertinent research evidence, which can be extremely time-consuming and resource intensive.To reduce the time and cost needed to complete the screening phase of a systematic review, researchers have explored the use of active learning text classification to semi-automatically exclude irrelevant studies while keeping a high proportion of eligible studies in the final review.Active learning text classification is an iterative process that incrementally learns to discriminate eligible from ineligible studies.The process starts with a small seed of manually labelled citations that is used to train an initial text classification model.The active learner will then iterate through several learning cycles to optimise its prediction accuracy.At each learning cycle, the active learner automatically classifies the remaining unlabelled citations.A sample of the automatically labelled citations is validated by an expert reviewer.Finally, the validated sample is used to update the classification model.The process terminates when a convergence criterion is satisfied.Key to the success of the active learning approach is the feature extraction method that encodes documents into a vector representation that is subsequently used to train the text classification model.Wallace et al. proposed a multi-view active learning approach that represents documents using different feature spaces, e.g., words that appear in the title and in the abstract, keywords and MeSH terms.Each distinct feature space is used to train a sub-classifier, e.g., Support Vector Machines.Multiple sub-classifiers are then combined into an ensemble classifier using a heuristic.With regard to the active learning selection criterion, the authors employed uncertainty sampling.The uncertainty selection criterion selects those instances for which the classifier is least certain of their classification label.To enhance the performance of the active learner, they introduced an aggressive undersampling technique that removes ineligible studies from the training set which convey little information.The aggressive undersampling technique aims at reducing the negative effect of class imbalance that occurs in systematic reviews, i.e., a high percentage of ineligible studies tends to overwhelm the training process.For experimentation, they applied the proposed method to three clinical systematic review datasets.They showed that the uncertainty-based active learner with aggressive undersampling is able to decrease the human workload involved in the screening phase of a systematic review by 40–50%.
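The screening loop just described can be made concrete with a short sketch. This is an illustration only, not the implementation used in the cited studies: it assumes scikit-learn's LinearSVC as the classifier, a generic document-by-feature matrix, gold-standard labels standing in for the expert reviewer, and simplified batch-size and stopping rules. The criterion parameter covers both the uncertainty sampling described above and the certainty-based selection discussed further below.

```python
# Illustrative sketch of an active learning screening loop (not the implementation
# used in the studies cited here). Assumes X is a NumPy feature matrix and
# oracle_labels is a NumPy 0/1 array standing in for the expert reviewer.
import numpy as np
from sklearn.svm import LinearSVC

def screen(X, oracle_labels, seed_idx, criterion="uncertainty", batch_size=25):
    labelled = list(seed_idx)                 # small manually labelled seed (must contain both classes)
    pool = [i for i in range(len(oracle_labels)) if i not in labelled]
    while pool:
        clf = LinearSVC()                     # linear SVM classifier
        clf.fit(X[labelled], oracle_labels[labelled])
        margins = clf.decision_function(X[pool])
        if criterion == "uncertainty":        # select the least certain instances
            order = np.argsort(np.abs(margins))
        else:                                 # "certainty": most confidently relevant instances
            order = np.argsort(-margins)
        batch = [pool[i] for i in order[:batch_size]]
        labelled += batch                     # the reviewer validates this sample
        pool = [i for i in pool if i not in batch]
        if oracle_labels[labelled].sum() >= 0.95 * oracle_labels.sum():
            break                             # e.g. stop once ~95% of eligible studies are found
    return labelled
```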
Whilst good results are obtained in the clinical domain, Miwa et al. demonstrated that the active learning approach yields a significantly lower performance when applied to public health reviews.The authors argued that the identification of relevant studies is more challenging in this domain compared to others, e.g., clinical documents.This can be attributed to the fact that the public health literature extends across a wide range of disciplines covering diverse topics.To alleviate problems introduced by challenging public health articles, the authors proposed to learn a topic-based representation of studies by employing the widely used Latent Dirichlet Allocation, a probabilistic and fully generative topic model.They further investigated the use of a certainty-based selection criterion that determines a validation sample consisting of instances with a high probability of being relevant to the review.Experimental results determined that topic-based features can improve the performance of the active learner.Moreover, the certainty-based active learner that uses topic features induced by LDA exceeded state-of-the-art performance and outperformed the uncertainty-based active learner.Topic models are machine learning methods that aim to uncover thematic structures hidden in text.One of the earliest topic modelling methods is probabilistic Latent Semantic Indexing.PLSI associates a set of latent topics Z with a set of documents D and a set of words W.The goal is to determine those latent topics that best describe the observed data.In PLSI the probability distribution of latent topics is estimated independently for each document.In practice, this means that the complexity of the model grows linearly with the size of the collection.A further disadvantage of PLSI is the inability of the underlying model to generalise to new, unseen documents.Extending upon PLSI, LDA assumes that topic distributions are drawn from the same prior distribution, which allows the model to scale up to large datasets and better generalise to unseen documents.In this article, we present a novel topic detection model to accelerate the performance of the active learning text classification model used for citation screening.Our topic detection method can be used as an alternative approach to the LDA topic model to generate a topic-based feature representation of documents.The proposed method uses a neural network model, i.e., paragraph vectors, to learn a low dimensional, but informative, vector representation of both words and documents, which allows detection of semantic similarities between them.Previous work has demonstrated that paragraph vector models can accurately compute semantic relatedness between textual units of varying lengths, i.e., words, phrases and longer sequences, e.g., sentences, paragraphs and documents.While the standard bag-of-words approach has been frequently employed in various natural language processing tasks, paragraph vectors, which take into account factors such as word ordering within text, have been shown to yield superior performance.To our knowledge, our work is the first that utilises the vector representations of documents produced by the paragraph vector model for topic detection.We hypothesise that documents lying close to each other in the vector space form topically coherent clusters.Based on this, our approach clusters the paragraph vector representations of documents by applying the k-means clustering algorithm and treats the centroids of the clusters as representatives of latent topics, assuming that each cluster corresponds to a
latent topic inherent in the texts.After detecting latent topics in a collection of documents, we represent each document as a k-dimensional feature vector by calculating the distance of the document to the k cluster centroids.Additionally, our topic detection model computes the conditional probability that a word is generated by a given topic and thus readily determines a set of representative keywords to describe each topic.The topic-based representation of documents is used to train an active learning text classification model to more efficiently identify eligible studies for inclusion in a review.The contributions that we make in this paper can be summarised in the following points: We propose a novel topic detection method that builds upon the paragraph vector model.We introduce various adaptations to the paragraph vector method that enable the underlying model to discover latent topics in a collection of documents and summarise the content of each topic by meaningful and comprehensive text labels.We integrate the new topic detection method with an active learning strategy to support the screening process of systematic reviews.We conduct experiments, demonstrating that our topic detection method outperforms an existing topic modelling approach when applied to semi-automatic citation screening of clinical and public health reviews.In this section, we detail our proposed topic detection method.We then provide an overview of the active learning process used in our experiments and discuss the evaluation protocol that we follow to assess the paragraph vector-based topic detection method.Topic models assume that a set of documents has a specific number of latent topics, and words in a document are probabilistically generated, given the document’s topics.For example, if a topic assigns high probabilities to the words “alcohol”, “drunk”, and “accidents”, we can infer that the topic is about alcohol-related accidents.Our novel contribution is the development of a topic detection method using the paragraph vector model.To aid the study identification process of systematic reviews, it is useful to capture semantic similarities between articles and group studies according to the latent topics within them.Since typical approaches to topic models are based on bags-of-words, important information that can be used to calculate semantic similarity, e.g., word order, is lost.In contrast, the paragraph vectors approach allows us to incorporate more detailed contextual information into our topic detection method.
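The core of the proposed method — embed documents with a paragraph vector model, cluster the embeddings with k-means, and use each document's distances to the cluster centroids as its topic-based features — can be sketched roughly as follows. This is an illustrative approximation rather than the authors' implementation: it relies on gensim's Doc2Vec and scikit-learn's KMeans, gives word and document vectors a single shared dimensionality, and labels each topic simply by the words whose vectors lie closest to its centroid.

```python
# Rough sketch of paragraph-vector topic detection (illustrative; not the authors' code).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans

def pv_topic_features(tokenised_docs, n_topics=300, dim=300, epochs=50):
    """tokenised_docs: list of token lists. Returns an (n_docs, n_topics) matrix of
    distances to the k-means centroids plus a crude keyword label for each topic."""
    corpus = [TaggedDocument(words, [i]) for i, words in enumerate(tokenised_docs)]
    pv = Doc2Vec(corpus, vector_size=dim, window=5, min_count=2, epochs=epochs)
    doc_vecs = [pv.dv[i] for i in range(len(corpus))]       # learned paragraph vectors

    km = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit(doc_vecs)
    # Each centroid stands in for a latent topic; transform() returns every document's
    # distance to every centroid, i.e. an n_topics-dimensional feature vector.
    topic_features = km.transform(doc_vecs)

    # Crude topic labels: the words whose vectors are nearest to each centroid
    # (meaningful because the default PV-DM mode trains word and document vectors
    # in a shared space).
    topic_labels = [pv.wv.similar_by_vector(c, topn=5) for c in km.cluster_centers_]
    return topic_features, topic_labels
```

In the experiments reported below, the clustering step uses 300 clusters, so each citation is ultimately represented by a 300-dimensional vector of centroid distances that is passed to the SVM classifier.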
Fig. 2 shows an example abstract from the Cooking Skills dataset with the 4 most important topics induced by the proposed topic detection method and the LDA topic model.The two topic detection methods are trained by setting the number of topics to 300.Moreover, each topic is characterised by the top 5 words with the highest probability of being relevant to that topic.An exact match between words that occur in the abstract and the topic descriptors is highlighted by a solid green line for the PV topic detection method and with a dashed blue line for the LDA topic model.The automatically assigned topic descriptions show that the two topic detection methods tend to induce thematically coherent topics which are also representative of the underlying abstract.For example, topics 3 and 4 extracted by the paragraph vector-based topic detection method seem to be related to two of the key points discussed in the abstract.Moreover, it can be noted that both models capture synonymous or semantically related words that occur as keywords in the same topic.To evaluate the proposed topic detection method, we investigate the performance of a certainty-based active learning classifier using topic-based features extracted by our paragraph vector-based method and the baseline LDA model.We employ a certainty-based active learning classifier, previously presented in Miwa et al.A high-level view of the active learning strategy is illustrated in Fig. 3.In our approach, citations are represented as a mixture of topics induced by a topic modelling approach.The two topic models used in this work are unsupervised methods.Thus, we extract topics from the complete set of citations.An expert reviewer initiates the active learning process by manually labelling a small sample of citations.This labelled sample, encoded into a topic-based representation, is then used to train an SVM text classification model.The trained model automatically classifies the remaining unlabelled citations and determines the next sample of citations to be validated by the reviewer according to a certainty-based criterion, i.e., instances for which the classifier has assigned a high confidence value of being relevant to the review.The certainty selection criterion has been previously shown to better address the class imbalance that occurs in systematic reviews.In a succeeding iteration, the reviewer validates the next sample of citations, which is used to augment the training set with additional labelled instances.The iterative process terminates when at least 95% of eligible studies are identified by the active learner, ideally without needing to manually label the entire list of citations.In our experiments, we simulate a human feedback active learning strategy given that the employed datasets are already manually coded with gold standard classification labels.At each learning iteration, we construct a sample of 25 studies and we validate the sample against the gold standard.The validation sample is subsequently used to re-train the text classification model.Following previous approaches, we repeat learning iterations until the active learner has screened the complete list of citations.We report the performance of the active learner when applied to the first stage of the screening process.Cross-validation experiments are performed on two publicly available clinical datasets and three public health datasets, previously used in Miwa et al.
Table 1 summarises the five datasets that we use for experimentation, along with the underlying domain, the number of citations and the percentage of eligible studies.It is noted that the size of the five employed datasets varies significantly, from a small clinical review of approximately 1600 citations to a large public health review of more than 15,000 citations.Additionally, all five datasets contain a very low percentage of eligible studies, ranging between 2% and 12%.In order to maximise the performance of the active learner, we tune the parameters of the topic modelling methods.Specifically, we train the paragraph vector-based topic detection method by setting the dimensionality of word vectors to 300, the dimensionality of document vectors to 1000 and the number of training epochs to 500.We then applied the k-means algorithm to cluster the paragraph vectors into 300 clusters, which resulted in a topic-based representation of 300 dimensions.With regard to the baseline LDA topic model, we used the freely available MALLET toolkit.Additionally, we performed hyperparameter optimisation every 10 Gibbs sampling iterations and set the total number of iterations to 500.As in the case of the proposed topic detection method, we used 300 LDA topics to represent documents.To train an SVM text classification model, we used the LIBLINEAR library with a dual L2-regularised L2-loss support vector classification solver.We investigate the performance of active learning, in terms of yield and burden, over an increasing number of manually labelled instances that are used for training.During the last iteration of the active learning process, both yield and burden are 100% since the active learner has identified all eligible studies but with the maximum manual annotation cost.
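For reference, the two evaluation measures can be computed along the following lines. The definitions used here are simplified assumptions for illustration — yield as the proportion of all eligible studies identified so far and burden as the proportion of the collection screened manually — and the cited papers formulate them more precisely.

```python
# Illustrative yield/burden computation under simplified, assumed definitions.
def screening_metrics(n_relevant_total, n_relevant_found, n_manually_screened, n_total):
    """yield: fraction of all eligible studies identified so far.
    burden: fraction of the collection that had to be screened manually."""
    return n_relevant_found / n_relevant_total, n_manually_screened / n_total

# Hypothetical example: a 2,000-citation review with 100 eligible studies, of which
# 96 have been found after manually screening 400 citations.
print(screening_metrics(100, 96, 400, 2000))   # (0.96, 0.2), i.e. 96% yield at 20% burden
```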
Figs. 4 and 5 show the yield and burden performance achieved by the active learning models when applied to the COPD and Cooking Skills datasets, respectively.We denote with AL_PV an active learning model that uses topic features extracted by our proposed paragraph vector-based topic detection method and with AL_LDA the baseline active learning model that employs LDA topic features.The dashed vertical lines indicate when an optimal yield performance of 95% is reached.In all cases, the burden performance follows a U-shaped pattern.This can be explained by the fact that during the initial learning iterations, where a small number of instances is available for training, the active learner erroneously predicts that the majority of studies are relevant to the review, which results in an increased screening burden.As we extend the training set with more labelled instances, the burden performance decreases since the active learner obtains a more stable classification performance.Finally, the screening burden increases again, but this time linearly with the number of labelled instances.In the clinical COPD dataset, the AL_PV method shows approximately the same burden performance as the AL_LDA model.However, our active learning strategy converged faster to a high yield value when compared to the baseline AL_LDA method.The AL_PV method improved the yield performance of the baseline model by approximately 3–7% in the COPD dataset.For a given manual annotation workload of 17%, the AL_PV method automatically identified 91% of relevant studies compared to 87% of relevant instances retrieved by the AL_LDA method.By increasing the manual annotation workload to 20%, the AL_PV method achieved a yield performance of 96% while the baseline AL_LDA achieved a yield performance of 89%.With regard to the Cooking Skills dataset, we observe that during the early learning iterations the performance obtained by the AL_PV model slightly fluctuated and in some cases the model obtained a lower yield and burden performance than the AL_LDA.In subsequent learning iterations, the AL_PV achieved a superior yield and burden performance compared to the baseline.The experiments that we conducted demonstrate that the proposed topic detection method can improve upon a state-of-the-art semi-automatic citation screening method that employs the standard LDA topic model.In clinical reviews, our topic detection method outperformed the LDA-based model by 1–5%, while in public health reviews we observed larger performance gains of between 5% and 15% in yield.These results suggest that the paragraph vector-based topic detection model can substantially reduce the manual annotation workload involved in both clinical and public health systematic reviews.In our approach, we followed a retrospective evaluation protocol where automatic screening predictions were compared against completed systematic reviews.This retrospective evaluation assumes that human reviewers screen at a constant rate, which is not always the case in live systematic reviews.For example, O’Mara-Eves et al.
outlined that reviewers tend to make faster screening decisions once they have processed the majority of the important studies.Based upon this, we plan to integrate our topic detection method with bespoke systematic review systems and assess the performance of active learning in real application scenarios.Moreover, we will investigate alternative uses of topic modelling techniques that can further facilitate the study identification phase in systematic reviews.Specifically, although the literature of some disciplines is indexed using well-organised bibliographic databases, e.g., MEDLINE or EMBASE, this is not so for all disciplines, which can result in decreased performance of search strategies.Additionally, the PICO framework (Population, Intervention, Comparison, Outcome; e.g., is this intervention effective for this population compared with this other intervention?), which is commonly used to structure pre-defined questions matching clinical needs, ill suits public health reviews.Unlike clinical questions, public health questions are complex and may be described using abstract, fuzzy terminology, precluding the a priori definition of an adequate PICO question.Thus, topic modelling approaches that automatically discover groups of semantically related words and documents can be used to organise the most relevant evidence in a dynamic, interactive way that supports how public health reviews are conducted.In this paper, we presented a new topic detection method to support the screening phase of systematic reviews.Our proposed method uses a neural network model to identify clusters of semantically related documents.By treating the cluster centroids as representatives of latent topics, we enable the model to learn an informative and discriminative feature representation of studies.This new topic-based representation of studies is utilised by an active learning text classification model to semi-automatically identify citations for inclusion in a review and thus directly reduce the human workload involved in the screening phase.We evaluated our approach against an active learning strategy that employs topic-based features extracted by Latent Dirichlet Allocation in both clinical and public health reviews.Experimental evidence showed that the neural network-based topic detection method obtained an improved yield and burden performance when compared to the baseline method.Additionally, we demonstrated that in four out of five reviews, the proposed method drastically reduced the manual annotation cost while retaining 95% of eligible studies in the final review.The authors declare that they have no conflict of interest. | Systematic reviews require expert reviewers to manually screen thousands of citations in order to identify all relevant articles to the review.Active learning text classification is a supervised machine learning approach that has been shown to significantly reduce the manual annotation workload by semi-automating the citation screening process of systematic reviews.In this paper, we present a new topic detection method that induces an informative representation of studies, to improve the performance of the underlying active learner.Our proposed topic detection method uses a neural network-based vector space model to capture semantic similarities between documents.We firstly represent documents within the vector space, and cluster the documents into a predefined number of clusters.The centroids of the clusters are treated as latent topics.We then represent each document as a mixture of latent topics.
For evaluation purposes, we employ the active learning strategy using both our novel topic detection method and a baseline topic model (i.e., Latent Dirichlet Allocation). Results obtained demonstrate that our method is able to achieve a high sensitivity of eligible studies and a significantly reduced manual annotation cost when compared to the baseline method. This observation is consistent across two clinical and three public health reviews. The tool introduced in this work is available from https://nactem.ac.uk/pvtopic/. |
360 | Do resource constraints affect lexical processing? Evidence from eye movements | During normal reading, readers make use of contextual information to help them resolve ambiguities inherent in the text.However, not all information relevant for disambiguation is available immediately, such that, in some cases readers encounter ambiguities for which the intended meaning is unknown given the lack of contextual cues.One such type of ambiguity is lexical ambiguity – for example, one sense of the word wire could be paraphrased “thin metal filament”, another as “telegram”.Although leading models of lexical ambiguity resolution agree that readers are able to make use of available contextual information to help them activate and integrate the appropriate meaning of an ambiguous word, they disagree on exactly what readers do in situations where the intended meaning is unknown.In the absence of contextual disambiguating information, readers must either maintain multiple meanings of an ambiguous word, or select one meaning to elaborate and integrate.In the latter case, if they select the meaning ultimately intended in the discourse, reading can continue without disruption, but if they have chosen to integrate the incorrect meaning, disruption and reanalysis may occur when later context disambiguates toward the unselected meaning.Thus, it may be advantageous for a reader to maintain multiple meanings in parallel if they are able.However, maintaining multiple meanings may be substantially subject to resource constraints: maintaining multiple meanings might require or tax scarce cognitive resources such as limited working memory, and prolonged maintenance may well be impossible.To the extent that readers do attempt to maintain multiple meanings of ambiguous words encountered in neutral contexts and maintenance of multiple meanings is subject to resource constraints, we might expect to find that readers are unable to maintain all meanings at a steady level of activation over time.The activation of less preferred meanings might decrease, making resolution to less preferred meanings more effortful the longer the ambiguity persists before disambiguation—so called digging-in effects.Such digging-in effects have primarily been documented and discussed in the context of syntactic ambiguity resolution.It is less clear, however, whether lexical ambiguity resolution is subject to such resource constraints.Here we report two experiments bearing on this question.Depending on the nature of and constraints on the processing of lexical ambiguities, three distinct possibilities emerge for the resolution of ambiguous words encountered in neutral contexts: readers do not attempt to maintain multiple meanings of an ambiguous word, readers maintain multiple meanings of an ambiguous word and such maintenance is not subject to resource constraints, or readers maintain multiple meanings of an ambiguous word, but such maintenance is subject to resource constraints.The third possibility predicts that digging-in effects should be observed at disambiguation, whereas the first two possibilities predict that digging-in effects should not be observed.The question of whether or not digging-in effects are observed during lexical ambiguity resolution is important for distinguishing between leading models of lexical ambiguity resolution, which make different predictions regarding the presence of such effects.Therefore we next review relevant data and theory in lexical ambiguity resolution, contrasting two classes of models: 
exhaustive access models that do not predict that digging-in effects should be observed and memory-based models that do.Exhaustive access models assume that all meanings of a word are initially activated.In many exhaustive access models, it is additionally assumed that one meaning is rapidly integrated into the sentence.Indeed, there is evidence from offline tasks that immediately after encountering an ambiguous word, multiple meanings may be active, but one meaning is selected relatively quickly with other meanings decaying or being actively inhibited.Even without strongly biasing context preceding an ambiguous word, evidence from cross-modal priming suggests that a weakly dispreferred meaning may not be maintained for more than a few syllables after the word is encountered.Further evidence for exhaustive access comes from online reading, where, in the absence of prior disambiguating context, readers look longer at balanced homographs than at unambiguous control words matched for length and frequency.The reordered access model, a more specific exhaustive access model, further specifies the interaction of sentence context and relative meaning frequency.According to this model, upon first encountering an ambiguous word there is immediate access to the alternative meanings based on their frequency and the bias of the preceding context, competition among the meanings ensues, and resolving that competition takes longer the more equal the initial footing among the alternative meanings.Hence, in a neutral prior context a balanced homograph will be read more slowly than a matched control word, but a biased homograph is read more or less as quickly as a matched control word, since the homograph’s dominant meaning is activated first and easily integrated into the sentence.Following a strongly biasing prior context, a biased homograph will be read quickly when its dominant meaning is the one more consistent with the context, but will be read more slowly when its subordinate meaning is the one more consistent with the context, as competition among the two meanings is prolonged.In sum, in the absence of contextual information, the two meanings of a homograph are accessed in the order of their meaning frequency, with the dominant meaning being accessed first and integrated into the sentence very rapidly.In the reordered access model, readers do not attempt to maintain multiple meanings even following neutral contexts, so lexical ambiguity does not increase cognitive resource requirements (the first possibility above).If a homograph is preceded by neutral context and later disambiguated, this model predicts that disambiguation to the subordinate meaning will be more difficult than disambiguation to the dominant meaning regardless of how much material intervenes between the homograph and disambiguation.Indeed, Rayner and Frazier reported this pattern of results for highly biased homographs preceded by neutral context and disambiguated either immediately or three to four words later, further suggesting that for highly biased homographs, meaning selection is immediate and multiple meanings are not maintained.Examples of the short and long subordinate-resolution conditions from Rayner and Frazier are shown below, followed by the short and long dominant-resolution conditions.In these examples, the homograph is underlined and the disambiguating word is italicized.George said that the wire informed John that his aunt would arrive on Monday.George said that the wire was supposed to inform John that his aunt would arrive on Monday.George
said that the wire surrounded the entire barracks including the rifle range.George said that the wire was supposed to surround the entire barracks including the rifle range.Rayner and Frazier reported that gaze durations, measured in milliseconds per character, were consistently longer at the disambiguating word in the resolution-to-subordinate-meaning conditions than in the resolution-to-dominant-meaning conditions, and did not change with the presence or absence of an intermediate neutral region.Although this is initial suggestive evidence against digging-in effects in lexical ambiguity resolution, its interpretation is limited by the fact that the disambiguating word has a different form in each of the four conditions.A different class of exhaustive access models, probabilistic ranked-parallel models, also predicts the absence of digging-in effects, not because only one meaning is maintained, but because under such models, readers are able to maintain multiple meanings without significantly taxing scarce cognitive resources (the second possibility above).For example, in syntactic parsing, many models propose probabilistic, parallel disambiguation, such as the surprisal model of Hale and Levy.In this model, multiple syntactic parses are maintained, ranked according to the likelihood of each parse given the preceding sentence context, which is updated as new information is read.Such parallel, probabilistic models can easily be extended to lexical ambiguity resolution.The simplest instantiations of these models allow unlimited maintenance in parallel, without cost, of all interpretations that have not been ruled out by incompatible input.These simplest instantiations thus predict that, following neutral context, disambiguation will be equally effortful regardless of how much neutral material intervenes between the homograph and disambiguation.When an ambiguous word is encountered in a neutral context, each meaning will be maintained with strength roughly proportional to its frequency.Upon encountering disambiguating material, the different probabilities for the ambiguous word’s meaning will affect processing difficulty: all else being equal, the disambiguating material’s surprisal will generally be higher the lower the probability of the compatible meaning.Thus disambiguation to a subordinate meaning will have higher surprisal, and therefore be more difficult, than disambiguation to a dominant meaning, but the simplest instantiations of these models do not predict digging-in effects since multiple meaning maintenance is not subject to appreciable resource constraints.
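The surprisal logic invoked here can be stated compactly. The formulation below is a standard one and the notation is ours rather than that of the cited work: the difficulty of the disambiguating material w is taken to scale with its surprisal given the prior context, obtained by marginalising over the set of meanings M of the homograph.

```latex
\mathrm{surprisal}(w) \;=\; -\log P(w \mid \mathrm{context}),
\qquad
P(w \mid \mathrm{context}) \;=\; \sum_{m \in \mathcal{M}} P(w \mid m, \mathrm{context})\, P(m \mid \mathrm{context}).
```

With a neutral prior context, P(m | context) is roughly the meaning's frequency, so material compatible only with the subordinate meaning receives a smaller P(w | context) and hence a higher surprisal; and if all meanings are maintained without cost, this penalty does not grow with the amount of intervening neutral material, which is why these models predict no digging-in effect.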
Alternatively, the memory-oriented capacity-constrained model of Miyake et al. assumes that readers might attempt to maintain multiple meanings of an ambiguous word, and that their working memory capacity for language constrains the degree to which they are able to do so (the third possibility above).According to the capacity-constrained model, working memory is a general “computational arena” that subserves both lexical processing and storage of intermediate and final products of comprehension.Meanings of ambiguous words are activated in parallel, with more frequent meanings being activated more quickly and to a greater extent.Following a strong biasing context, only the supported meaning is integrated into the sentence and the other meaning either decays or is actively suppressed.In the absence of preceding disambiguating information, readers may create dual mental representations, elaborating them in parallel until disambiguating information is reached.This maintenance and elaboration is subject to resource constraints, such that activation of the different meanings is reduced as memory capacity fills, so that with time the activation of subordinate meanings may fall below threshold.This model thus predicts an interaction of meaning dominance and resource availability: subordinate meanings should be especially difficult to process when few resources are available.Furthermore, since initial meaning activation varies as a function of meaning frequency, the activation of the subordinate meanings will persist longest when the homograph in question is balanced, and will fall below threshold faster with decreasing subordinate meaning frequency.Indeed, Miyake et al. supported this prediction for moderately biased homographs in two self-paced reading experiments with separate manipulations of memory and resource availability.First, they showed that readers with low working memory span are especially slow to process subordinate resolutions of ambiguous words.Second, they manipulated the length of material intervening between homographs and eventual subordinate disambiguating material, creating short and long subordinate conditions, examples of which are shown below, with the homograph underlined and the disambiguating region italicized.Since Ken liked the boxer, he went to the pet store to buy the animal.Since Ken liked the boxer very much, he went to the nearest pet store to buy the animal.With this length manipulation they showed that mid-span readers exhibit digging-in effects in lexical ambiguity resolution, processing subordinate resolutions especially slowly when additional words intervene before disambiguation.Thus, their results run counter to those of Rayner and Frazier, who found only an effect of meaning at disambiguation.There are a few notable differences between the stimuli and methods used by Rayner and Frazier and Miyake et al.First, the homographs used by Rayner and Frazier were highly biased, such that the average probability of picking the dominant meaning in offline norming was .92.In contrast, Miyake et al.
used moderately biased homographs where the dominant to subordinate frequency ratio was judged to be 7.8:2.2.As already stated, according to the capacity-constrained model, the subordinate meaning falls below threshold faster with increasing frequency disparity.Thus, the failure of Rayner and Frazier to find an interaction of meaning frequency and length at disambiguation may have been due to the nature of their targets.In other words, it may be the case that the subordinate meaning frequency for their targets was so low that the subordinate meaning fell below threshold as early as the next word in the sentence.In contrast, for Miyake et al.’s moderately biased homographs, the subordinate meaning may have persisted until disambiguation in their shorter sentences, but fell below threshold before the point of disambiguation in the longer sentence versions.Second, in Miyake et al.’s Experiment 2, where memory capacity was held constant, critical comparisons were between the reading of a homograph and the same sentence frame with the homograph replaced with an unambiguous semantic associate.This allowed for easy comparison of the disambiguating region, which was identical across conditions, but made comparisons of the critical word difficult, as they were not matched on lexical variables known to influence reading time.The words differed in length, and the authors did not specify whether these controls were frequency-matched, and if so, whether to the frequency of the homograph overall, the frequency of its dominant sense, or the frequency of its subordinate sense, three scenarios that can produce markedly different results.In contrast, Rayner and Frazier compared reading on the same critical homograph later disambiguated to either its dominant or subordinate sense.This avoided the issue of deciding which frequency to match the control word to and allowed for easy comparison of reading times on the homograph in different disambiguation conditions, but made comparisons of the disambiguating region slightly more challenging as it contained different words across conditions.Third, Rayner and Frazier’s results were obtained using eye tracking, which allowed for millisecond precision in sampling the location of the eyes during reading—providing a very sensitive measure of online processing.In contrast, Miyake et al.’s use of self-paced reading obscured whether their reported length-based digging-in effect arose instantaneously upon reaching disambiguation, or was instead associated with clause wrap-up, since effects were observed primarily in sentence-final words that were within the spillover region following the disambiguating word.Finally, the lexical digging-in effect reported by Miyake et al. 
was significant by subjects only, raising questions about the reliability of the effect.Thus it remains an open question whether digging-in effects manifest in lexical ambiguity resolution within a sentence.Interestingly, despite their differing architectures and assumptions, these models make similar behavioral predictions regarding how lexical ambiguity resolution proceeds in other situations.For example, both the reordered access model and the capacity-constrained model agree that, following strong biasing context, only the contextually appropriate meaning of a homograph is integrated and maintained.These models only differ in their predictions surrounding behavior in the absence of prior disambiguating context, making it the critical test case to adjudicate between them.In the current study, we sought to further test the predictions of the reordered access model and the capacity-constrained model while also attempting to reconcile the different results obtained by Rayner and Frazier and Miyake et al.Consistent with Rayner and Frazier, we collected eye movement data to allow for a fine-grained investigation of potential digging-in effects in lexical ambiguity resolution.We used moderately biased homographs, consistent with Miyake et al., to determine whether digging-in effects might only be observed for less highly biased homographs, for which the capacity-constrained model predicts both meanings can be maintained for some amount of time.We constructed our stimuli similarly to Rayner and Frazier, such that critical comparisons were between conditions where the homograph was ultimately disambiguated to either its dominant or subordinate meaning after a short or long intervening region of text.The reordered access model and the capacity-constrained model agree that, at the point of disambiguation, readers should not have trouble disambiguating a moderately biased homograph to its dominant meaning.They differ in their predictions regarding disambiguation to the subordinate meaning.The reordered access model predicts immediate meaning selection for the homograph even following a neutral prior context, such that the dominant meaning will be selected and integrated, making disambiguation to the subordinate meaning more difficult than disambiguation to the dominant meaning regardless of how much text intervenes between the homograph and subsequent disambiguation.In contrast, the capacity-constrained model predicts that both meanings will be maintained until resource limitations cause the subordinate meaning to fall below threshold, such that disambiguation to either meaning will be easy with short regions of intervening text between the homograph and disambiguation, but disambiguation to the subordinate meaning becomes increasingly more difficult with increasing length of intervening text.Sixty native English speakers from the University of California, San Diego received course credit for their participation in the study.All participants had normal or corrected-to-normal vision.Participants’ eye movements were monitored using an Eyelink 1000 eyetracker, which sampled and recorded eye position every millisecond.Subjects were seated 61 cm away from a 19-in. ViewSonic LCD monitor.Text was displayed in 14-point, fixed-width Consolas font, and 4 characters equaled 1° of visual angle.Viewing was binocular with eye location sampled from the right eye.Prior to material creation, thirty-three native English speakers from the United States participated in online norming through Amazon’s Mechanical Turk
service for monetary compensation.They were given a list of words, one at a time, and asked to construct sentences containing each word.Prior to the start of the norming, each participant was shown two examples where sentences were constructed using each word in its noun sense.In this way we hoped to covertly bias the participants to compose sentences using the noun senses of our homographs without expressly instructing them to do so and potentially highlighting the ambiguous nature of our stimuli.80 homographs and 64 unambiguous words were included for a total of 144 words, and it took participants approximately forty minutes to compose sentences for all of the words.Sentences were then coded for which meaning was expressed, and the overall bias of each homograph was computed as the proportion of participants expressing one versus the other meaning for the homograph.Based on the results of this norming, we selected thirty-two ambiguous words for which the probability of generating the dominant meaning ranged from .56 to .87 such that the homographs were moderately biased.We compared the bias ratings that we obtained via Amazon’s Mechanical Turk with previous norms collected at the University of Alberta; only 26 of the homographs used in the current study are contained in the Alberta norms, but for those 26 homographs, our norms and the Alberta norms are highly correlated with each other (r = .47, p = .015; Twilley, Dixon, Taylor, & Clark, 1994).Four sentence versions were created for each of the thirty-two biased homographs, resulting in a total of 128 experimental sentences.Both long and short sentence versions contained regions of text between the homograph and disambiguation.We refer to this region of text in the short conditions as the intermediate region; this region averaged 4.16 words in length.In the long conditions, this region of text consisted of the intermediate region plus a lengthening region that was inserted either before the intermediate region or in the middle of the intermediate region, to increase the amount of material that subjects read prior to disambiguation.Material appearing before the lengthening region in the long conditions we refer to as Intermediate 1; material appearing after the lengthening region we refer to as Intermediate 2.Since one region, Intermediate 1, is not present in all items, we additionally defined an Intermediate Super-region comprising Intermediate 1, Lengthener, and Intermediate 2.The Intermediate Super-region thus reflects, across all items, the totality of material intervening between the homograph and disambiguation.We defined the disambiguating region as extending from the first word that differed between the dominant and subordinate versions of a given sentence pair, to the end of the sentence.We also identified a disambiguating word post hoc, to facilitate comparisons with previous research.In order to do so, we gave an additional set of six Mechanical Turk subjects, who did not participate in the other set of online norming, the short versions of our stimuli and asked them to select the first word in each sentence that they believed disambiguated the homograph.This norming revealed that, for the majority of our stimuli, subjects did not unanimously agree on which word was the first to disambiguate the homograph.We defined the disambiguating word as the most commonly selected word in our norming.Across all items, 77.3% of subject responses were in agreement with the word analyzed as the disambiguating word.
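The norming computations described above are simple proportions; a minimal sketch is given below for concreteness. It is an illustration, not the authors' analysis code, and the 'dominant'/'subordinate' codes and per-subject word selections are assumed input formats.

```python
# Minimal sketch of the norming computations (illustrative; not the authors' code).
from collections import Counter

def meaning_bias(sense_codes):
    """sense_codes: one 'dominant' or 'subordinate' code per normed sentence for a homograph.
    Returns the proportion of responses expressing the dominant meaning."""
    counts = Counter(sense_codes)
    return counts["dominant"] / sum(counts.values())

def modal_disambiguating_word(selections):
    """selections: the word each norming subject picked as the first disambiguating word.
    Returns the most commonly selected word and the proportion of subjects agreeing with it."""
    counts = Counter(selections)
    word, n = counts.most_common(1)[0]
    return word, n / sum(counts.values())
```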
these words were not intentionally matched in the design of the experiment; however, across all items, there were no significant differences between the dominant and subordinate disambiguating words in length or log-transformed HAL frequency3.Sample stimuli appear in Table 1, and the full set of stimuli is listed in Appendix A.Four experimental lists were constructed and counterbalanced such that within each list each condition appeared an equal number of times, and across experimental lists each sentence appeared once in each of the four conditions.The thirty-two experimental sentences were presented along with seventy filler sentences.Simple comprehension questions appeared after 16 of the critical items and 34 of the filler items.Meaning condition and length condition were tested within participants; however, each participant saw only one sentence for each homograph.The beginning of the sentence and the lengthening region were always neutral, and the disambiguating region always supported only one meaning of the homograph.Each participant was run individually in a session that lasted approximately thirty minutes.At the start of the experiment, participants completed a calibration procedure by looking at a random sequence of three fixation points presented horizontally across the middle of the computer screen.Each trial required the participant to fixate a point in the center of the screen before moving their eyes to a black square, which appeared on the left side of the screen after the central fixation mark disappeared.This box coincided with the left side of the first character of the sentence and once a stable fixation was detected within the box, the sentence replaced it on the screen.Prior to the experimental portion, 10 practice sentences were presented.All sentences were randomized for each participant and vertically centered on the screen.Participants were instructed to read silently at their normal pace for comprehension, and to press a button on a keypad when they finished reading.When comprehension questions appeared on the screen after a sentence, participants were required to respond yes or no via button press.Following incorrect answers, the word incorrect was displayed for 3 s before the next trial was initiated.Following correct answers, there was no feedback and the experiment continued with the next trial.Participants were correct on an average of 95.9% of questions.Both early and late eye movement measures for the target homograph, the neutral pre-disambiguating region, and the disambiguating region were assessed.We present separate analyses of pre-disambiguation measures and post- disambiguation measures.Both the reordered access model and the capacity-constrained model predict that initial reading of the homograph, and any re-reading prior to disambiguation, should not differ as a function of meaning, since no disambiguating information has yet been encountered.Therefore, for pre-disambiguation we report first pass time, second pass time, and rereading time on the homograph, first and second pass time on the lengthening region, and first pass time on the second intermediate region.4,We also report the probability of making a first-pass regression out of the intermediate super-region, as well as the probability that a first-pass regression out of this intermediate region leads to a fixation on the homograph before the first fixation rightward of the intermediate region or the end of reading.Following disambiguation, the reordered access model predicts more difficulty 
processing subordinate resolutions regardless of length, whereas the capacity-constrained model predicts more difficulty processing subordinate resolutions, which increases with the length of intervening material.Processing difficulty could manifest as either longer first pass reading of the disambiguating region or more regressions back to and longer re-reading of the homograph or beginning of the sentence.Post-disambiguation, we thus report first pass time on the disambiguating region, go-past time on the disambiguating region, second pass time, rereading time, and total time on the homograph, the probability of making a regression out of the disambiguating region, and the probability of an X ← Y regression from the disambiguating region to the homograph.We also report three post hoc analyses of first pass time, go-past time, and total time on the disambiguating word.Digging-in effects could manifest as interactions in any of these post-disambiguation measures between meaning and length conditions.Furthermore, main effects of meaning condition on second-pass time on the homograph, probability of regressions out of the disambiguating region, and/or probability of an X ← Y regression from the disambiguating region to the homograph that emerged only after disambiguation, would suggest that disambiguating to one meaning was more difficult than the other.Prior to analysis, fixations under 81 ms were deleted, or pooled if they were within 1 character of another fixation, and fixations over 800 ms were deleted.For analyses of the homograph, we deleted any trials in which subjects blinked during first-pass reading of the target homograph, resulting in 11% data loss.5,Because the disambiguating region was a long, multi-word region, and trial exclusion based on blinks in the region resulted in the loss of a substantial percentage of the data, we did not exclude trials based on blinks for analysis of the disambiguating region.Mean fixation durations and regression probabilities by condition are summarized in Table 2.Linear mixed-effects models were fitted with the maximal random effects structure justified by the design of the experiment, which included random item and participant intercepts and slopes for sense, length, and their interaction.In order to fit the models, we used the lmer function from the lme4 package within the R Environment for Statistical Computing.We used sum coding for the fixed effects of these predictors.Following Barr, Levy, Scheepers, and Tily, we assess significance of each predictor via a likelihood ratio test between a model with a fixed effect of the predictor and one without it, maintaining identical random effects structure across models.6,Results of these models are summarized in Table 3.Values near 0 indicate similar marginal likelihoods under two models, positive values indicate support for the non-null model, and negative values indicate support for the null model.When trying to determine the degree of support for a given model over another, it has been suggested that a Bayes factor in log base 10 space whose absolute value is greater than 0.5 should be interpreted as providing “substantial” evidence, greater than 1 as providing “strong” evidence, and greater than 2 as providing “decisive” evidence.There were no significant effects of meaning or length on first pass times on the homograph, lengthener, or intermediate regions.8,There was a significant effect of length in second pass time on the homograph pre-disambiguation and a significant effect of length on the 
probability of making a regression out of the intermediate super-region as well as the probability of making an X ← Y regression from the intermediate super-region to the homograph, such that, prior to disambiguation, readers were more likely to make regressions to the homograph and had longer second pass reading times on the homograph in the long conditions.The Bayes factor analyses confirmed the results we obtained in the linear mixed effects models.For second pass time on the homograph, first pass regressions out of the intermediate super-region, and X ← Y regression from the intermediate super-region to the homograph, Bayes factor analyses revealed that the models with an effect of length were favored over the reduced models without.This effect of length on second-pass reading times for the homograph might seem to suggest that maintaining multiple word meanings from a homograph over longer periods of time without disambiguation is effortful.However, analysis of rereading time revealed no significant difference in the amount of time spent rereading the homograph across conditions.Taken together, these results suggest that the effect of length that we observed in second pass time was primarily driven by a tendency for readers to make more regressions into the homograph in the long conditions, rather than a tendency to spend significantly longer actually rereading the homograph following a regression.Furthermore, there was also a significant effect of length on the probability of making an X ← Y regression from the second intermediate region to the homograph, such that readers were more likely to regress from intermediate region 2 to the homograph in the short conditions than when lengthening material intervened).No other effects of length were significant, and no effects of meaning or the interaction of meaning and length were observed across any measures prior to disambiguation.For all pre-disambiguation measures, Bayes factors computed for the maximal model compared to the model without an effect of meaning were all less than −0.36, and without an interaction of meaning and length were all less than −0.28, demonstrating that the reduced models were favored over models with effects of meaning or the interaction of meaning and length.There was a main effect of meaning in second pass time on the homograph post disambiguation, such that readers had longer second pass time on the homograph when it was disambiguated to its subordinate meaning.The Bayes analyses for the maximal model compared to one without an effect of meaning confirmed that the maximal model was favored.Again, we computed a measure of pure rereading time that did not average in zeros when no second pass time occurred.Although this measure showed numerically longer rereading times following subordinate resolutions, the effect was not significant, demonstrating that the effect we observed in second pass time was primarily driven by a tendency for readers to make more regressions into the homograph in the subordinate conditions, rather than a tendency to spend a significantly longer time actually rereading the homograph following a regression.Additionally, there was a main effect of meaning in the probability of making an X ← Y regression from the disambiguating region to the homograph, such that regression paths targeting the homograph were more likely following subordinate disambiguating material.The Bayes analyses confirmed this result; the maximal model with an effect of meaning was favored over one without for the 
probability of making an X ← Y regression from the disambiguating region to the homograph.The maximal model with an effect of meaning was also favored over one without for the probability of making a regression out of the disambiguating region in general, though this effect was only marginal in the results of the linear mixed-effects models.Post-hoc analyses of first pass time, go-past time, and total time on the disambiguating word revealed no significant effects of meaning, length, or the interaction of meaning and length, suggesting that disambiguation unfolded over time as participants read the disambiguating regions of our stimuli, rather than being driven by encountering a specific word.No other effects of meaning were significant, and no effects of length or the interaction of meaning and length were observed across any measures after disambiguation.For all post-disambiguation measures, Bayes factors computed for the maximal model compared to a model without an effect of length were all less than −0.23, and the Bayes factors computed for the maximal model compared to a model without an interaction of meaning and length were all less than −0.26, demonstrating that the reduced models were favored over models with effects of length or the interaction of meaning and length—critically, providing support for the model with a null interaction of meaning and length.In Experiment 1, we investigated whether readers attempt to maintain multiple meanings of a moderately biased homograph encountered in neutral context, and if so, whether this maintenance is subject to resource constraints.To do so, we tested for the presence of digging-in effects in the eye movement record as a function of increasing amounts of intervening sentence material before disambiguation.The capacity-constrained model of lexical ambiguity resolution predicted an interaction of meaning and length, such that processing of the subordinate resolution would be especially difficult when additional material intervened before disambiguation, whereas the reordered access model predicted that the subordinate resolution would be harder to process independent of the amount of intervening material.Consistent with the predictions of both models, which assume that the subordinate meaning is less active or less readily available than the dominant meaning, we found that readers experienced difficulty disambiguating to the subordinate meaning of the homograph.Critically, this effect was not modulated by the amount of intervening material before disambiguation as the capacity-constrained model would predict—readers were more likely to regress to the homograph, and had longer second pass times on the homograph when it had been disambiguated to its subordinate meaning, regardless of the amount of text between the homograph and disambiguation.Bayes factor analyses confirmed that the model with a null interaction of meaning and length was favored over a model with a non-null interaction for all critical measures.This lack of an interaction of meaning and length is consistent with the predictions of the reordered access model and the results of Rayner and Frazier, and lends no support to models specifying resource-constraints on multiple meaning maintenance, such as the capacity-constrained model, suggesting instead that readers can maintain multiple meanings without significantly taxing cognitive resources, or that they are not attempting to maintain multiple meanings at all, instead selecting only one meaning to maintain.Critically, the 
effects of meaning that we observed only arose at disambiguation.Prior to making any fixations in the disambiguating region of the text, we only observed effects of sentence length on the eye movement record, and no effects of the to-be-disambiguated meaning.These effects of length are likely explicable in theoretically uninteresting ways.First, subjects were more likely to make regressions from the intermediate super-region to the homograph, and spent longer re-reading the homograph pre-disambiguation in the long conditions.This can most straightforwardly be explained as a result of the increased opportunities for regressions provided by the longer ambiguous region.Second, the probability of making a regression from the second intermediate region to the homograph pre-disambiguation was greater in the short conditions.In these conditions, there was usually no intervening material between the regression launch site and the homograph, so it is plausible that these short regressions were just regressions between neighboring words rather than the longer regression paths we observed post-disambiguation.Since we found only main effects of length prior to disambiguation and of meaning at disambiguation, and no evidence of the interaction predicted by the capacity-constrained model, the results of this experiment are consistent with the reordered access model of lexical ambiguity resolution.Although these results are consistent with those reported by Rayner and Frazier for highly biased homographs, they stand in contrast to those of Miyake et al., who reported lexical digging-in effects for moderately biased homographs.This is striking given the close similarity of the design of Experiment 1 to Miyake et al.’s Experiment 2; both used moderately biased homographs and had comparable length manipulations.Aside from our use of eyetracking, the key difference between the designs of the experiments was the choice of controls for the subordinate resolutions of the homographs: our Experiment 1 used dominant resolutions of the same homographs, while Miyake et al. used unambiguous semantic associates.Additionally, because we compared dominant and subordinate resolutions in Experiment 1, our disambiguating material was necessarily different.Since the critical interaction of meaning and length reported in Miyake et al. 
emerged only after disambiguation, it is important to be able to directly compare reading times in this region.In order to test whether these factors explained the contrasting results of the two experiments, we designed a second experiment using unambiguous controls, thereby more directly replicating Miyake et al.’s design.Again, the capacity-constrained model predicts an interaction of meaning and length, whereas the reordered access model predicts only a main effect of meaning at disambiguation.An additional sixty native English speakers from the University of California, San Diego received course credit for their participation in the study.All participants had normal or corrected-to-normal vision.The apparatus was identical to Experiment 1.Materials were adapted from Experiment 1 by replacing the dominant conditions with unambiguous conditions: homographs in the dominant conditions were replaced with unambiguous semantic associates of the homograph’s subordinate sense.These semantic associates were roughly matched to the homograph’s overall word form frequency, but differed in length.Lexical frequencies for all stimuli were computed via log-transformed HAL frequency norms using the English Lexicon Project.The homographs had an average word form log frequency of 9.08,9 and the unambiguous semantic associates had an average log frequency of 9.3.Homographs and semantic associates were on average 4.75 and 5.31 characters long respectively.Critically, across all conditions, the disambiguating regions were now identical and instantiated the homograph’s subordinate resolution.This facilitated comparison of reading measures in the disambiguating regions across all condition, as the lexical content of the regions was identical.Sample stimuli appear in Table 4, and the full set of stimuli is listed in Appendix A.The procedure was identical to Experiment 1.Data pooling and exclusion criteria were identical to Experiment 1.For analyses of the homograph, deletion of trials for blinks during first-pass reading of the target homograph resulted in 7.8% data loss.10,Participants were correct on an average of 93.8% of comprehension questions.Mean fixation durations and regression probabilities by condition are summarized in Table 5.Results of model comparisons and Bayes analyses are summarized in Table 6.11,As with Experiment 1, the ANOVA results paralleled the results we obtained with linear mixed effects models, but for transparency they are reported in Appendix C.There were no significant effects of meaning or length on first pass times on the homograph/control, lengthener, or intermediate regions.12,There were significant effects of length in second pass time on the homograph/control pre-disambiguation, first pass regressions out of the intermediate super-region, and the probability of making an X ← Y regression from the intermediate super-region to the homograph/control, such that, prior to disambiguation, readers were more likely to make regressions to the homograph/control and had longer second pass times on the homograph/control in the long conditions.Confirming these results, Bayes analyses favored the maximal model over the model without an effect of length for second pass time on the homograph, first pass regressions out of the intermediate super-region, and the probability of making an X ← Y regression from the intermediate super-region to the homograph/control.As with Experiment 1, we also computed a measure of pure rereading time that did not average in zeros when no regression 
occurred.There was a marginal effect of length on rereading times, such that people spent numerically longer rereading the homograph/control in the long conditions, but again, this demonstrates that the effect we observed in second pass time was primarily driven by a tendency for readers to make more regressions into the homograph/control in the long conditions.As in Experiment 1, there was also a significant effect of length on the probability of making an X ← Y regression specifically from the second intermediate region to the homograph/control, such that readers were more likely to regress from intermediate region 2 to the homograph/control in the short conditions when those two regions were often adjacent.Unlike Experiment 1, we found an effect of meaning in the likelihood of making a regression out of the intermediate super-region, with readers more likely to make a regression out of the intermediate super-region when it was preceded by the homograph than when it was preceded by an unambiguous control.Indeed, the Bayes analyses for the maximal model compared to a model without an effect of meaning, favored the maximal model.This result confirms that our eye movement measures are picking up meaningful correlates of processing difficulty due to disambiguation of lexical meaning.No other effects of length or meaning were significant, and no interactions of meaning and length were observed across any measures prior to disambiguation.For all other pre-disambiguation measures, Bayes factors computed for the maximal model compared to the model without an effect of meaning were all less than −0.01, and without an interaction of meaning and length were all less than −0.1, demonstrating that the reduced models were favored over models with effects of meaning or the interaction of meaning and length.There were main effects of meaning in second pass time and total time on the homograph/control following disambiguation, such that readers had longer second pass times on the homograph once it was disambiguated to its subordinate meaning than they had on the unambiguous control word.The Bayes analyses confirmed that the maximal models were favored over models without effects of meaning for both second pass time and total time.We again computed a measure of pure rereading which patterned like second-pass time numerically, but was not significant.There was also a main effect of meaning on the probability of making an X ← Y regression from the disambiguating region to the homograph/control, such that readers were more likely to make regressions from the disambiguating region to the subordinately disambiguated homograph than to the unambiguous control word.Indeed, the Bayes analyses for the probability of making an X ← Y regression from the disambiguating region to the homograph/control confirmed that the maximal model was preferred over a model without an effect of meaning.Consistent with these effects, we also found a main effect of meaning in go-past time for the disambiguating region.Unlike Experiment 1, post hoc analyses of the disambiguating word revealed a significant effect of meaning on the total time spent reading the disambiguating word, such that participants spent longer total time reading the disambiguating word in the subordinate condition than the unambiguous condition.No other effects of meaning were observed and no effects of length or the interaction of meaning and length were observed following disambiguation.For all post-disambiguation measures, Bayes factors computed for the maximal 
model compared to a model without an effect of length were all less than −0.08, and Bayes factors computed for the maximal model compared to a model without an interaction of meaning and length were all less than −0.1, demonstrating that the reduced models were favored over models with effects of length or the interaction of meaning and length—providing support for the model with a null interaction of meaning and length.In Experiment 2, we attempted a more direct replication of Miyake et al.Following their design, we compared subordinate resolutions of moderately biased homographs to identical sentence frames with unambiguous controls, rather than the dominant resolutions of the same homographs as in Experiment 1.The results of Experiment 2 were parallel to those of Experiment 1 in all key respects.As in Experiment 1, prior to reaching the disambiguating region, eye movements exhibited effects of sentence length, which are not central to our current question and likely theoretically uninteresting.Interestingly, an effect of meaning emerged prior to disambiguation that we did not observe in Experiment 1.Prior to reaching the disambiguating region, readers made more regressions out of the intermediate regions to reread the beginning of the sentence when it contained a homograph than when it contained an unambiguous control.Since we did not observe a difference in the pre-disambiguation regression rates between sentences containing dominant and subordinate homographs in Experiment 1, the difference in Experiment 2 likely reflects more effortful processing of an ambiguous word relative to an unambiguous word roughly matched on word form frequency.Although initial processing of our homographs and unambiguous controls did not significantly differ, this difficulty in later measures suggests that processing difficulty for our moderately-biased homographs fell somewhere in between that of highly-biased and balanced homographs.After disambiguation, consistent with Experiment 1, we only observed effects of meaning—readers spent longer total time reading the disambiguating word in the ambiguous conditions, and were more likely to regress to and spent longer second pass time on ambiguous homographs than unambiguous controls.These effects are again predicted by both the reordered access model and the capacity-constrained model, since, in both models, the subordinate meaning of a homograph is less readily available than the single meaning of an unambiguous control.However, this effect of meaning was not modulated by the amount of intervening material prior to disambiguation as the capacity-constrained model would predict, and as Miyake et al. 
found.Indeed, as with Experiment 1, the Bayes factors computed for all critical post-disambiguation measures favored a model with a null interaction of meaning and length over one with a non-null interaction.The fact that we again found a main effect of meaning at disambiguation that did not interact with length lends further support to the reordered access model of lexical ambiguity resolution.Experiment 2 therefore rules out the possibility that different control conditions are responsible for the differences between our Experiment 1 results and those of Miyake et al.In two experiments, we investigated the processing of moderately biased homographs embedded in neutral preceding contexts.By varying the length of sentence material that intervened between the homograph and subsequent disambiguation, we sought to determine whether readers attempt to maintain multiple meanings of an ambiguous word presented without prior disambiguating information, and whether this meaning maintenance is subject to resource constraints.Consistent with both the reordered access model and the capacity-constrained model, we found that disambiguating to the subordinate meaning was more difficult than disambiguating to the dominant meaning.In neither experiment did we find evidence for resource constraints on lexical ambiguity resolution: disambiguating to the subordinate meaning never become more difficult with increasing material.This second result runs counter to the predictions of the capacity-constrained model of lexical ambiguity resolution, and the previous results reported by Miyake et al.They found that increasing the distance between an ambiguous word and its disambiguation, indeed made dispreferred resolutions especially difficult to process.The design of our experiments differed minimally from theirs, featuring moderately biased homographs, approximately the same additional distance to disambiguation between long and short sentence versions, and the same choice of controls for subordinately-resolved homographs, namely unambiguous words.The key remaining difference is the task itself: Miyake et al. used self-paced reading, while we used eyetracking.It is plausible that, given the generally lower resolution of self-paced reading and the fact that the crucial interaction was significant at p < .05 by subjects only, Miyake et al. 
observed a false positive.Indeed, careful inspection of their total reading time data following disambiguation seems to show that the effect is being driven by effects of length when processing the unambiguous conditions.They report a difference in total reading time between the long and short ambiguous conditions of 22 ms, and a difference between the long and short unambiguous conditions of −122 ms.While this is still an interaction of meaning and length at disambiguation, their capacity-constrained model would specifically predict an interaction driven by increased reading time following disambiguation to the subordinate meaning in the long condition relative to the short condition, whereas they show a larger effect of length in the unambiguous conditions.Finally, most of their effects did not emerge immediately at disambiguation, but rather in spillover, and were pushed toward the end of the sentence, potentially further obscuring their results with sentence wrap-up effects.While our results are inconsistent with the results of Miyake et al., they are consistent with the results of Rayner and Frazier.They found that increasing the distance between an ambiguous word and its disambiguation had no effect on resolutions to the subordinate meaning—subordinate resolutions were more difficult than dominant resolutions, but the magnitude of this main effect of meaning did not vary as a function of length.They used highly biased homographs, for which, theoretically in the scope of the capacity-constrained model, initial activation of the subordinate meaning might have been so low that further effects of length could not be observed.However, the fact that we extended their results to moderately biased homographs demonstrates that their lack of an interaction was not likely due to floor effects in subordinate activation.Our results also go beyond those of Rayner and Frazier in using more tightly controlled disambiguating-region material, in drawing evidence from a wider range of eye movement measures, and in quantifying evidence in favor of the null hypothesis of no interaction between meaning and length by computing Bayes factors.One could argue that perhaps our failure to find an interaction of meaning and length was due to the fact that even our short sentence versions were too long to show multiple-meaning maintenance.That is, perhaps the distance between the homographs and disambiguation in our short conditions was not short enough to provide evidence for the maintenance of multiple meanings.We think this is unlikely given the converging results of Rayner and Frazier.In their short condition, the homograph was immediately followed by the disambiguating word and they still failed to find a difference between subordinate resolutions in that condition and their long condition, where 3–4 words intervened between the homograph and disambiguation.They argued that this suggested immediate resolution of lexical ambiguities even without contextual disambiguating information.Alternatively, one might instead question whether our length manipulation was simply too limited to detect any digging-in effects.Digging-in effects should manifest as positive correlations between the length of intervening material and any of our critical, post-disambiguation measures for subordinate resolutions.However, we find no evidence for a relationship between length of intervening material for a given item and any of our critical, post-disambiguation measures.Overall, then, the bulk of pertinent results on lexical 
disambiguation suggest one of two theoretical possibilities.First, readers may not attempt to maintain multiple meanings of an ambiguous word that they encounter in a neutral context, instead committing to one interpretation very rapidly—in the case of biased homographs, typically the dominant interpretation, as suggested by the reordered-access model.Under this explanation, since readers never attempt to maintain multiple meanings, whether or not cognitive resources are depleted during sentence comprehension, should have no effect on the reader’s ultimate resolution—if they initially selected the correct meaning, reading will proceed easily, and if they initially selected the incorrect meaning, reading will likely be disrupted, but the degree of disruption will not increase with more material.Second, readers may be able to maintain multiple meanings without significantly taxing available cognitive resources as suggested by the simplest probabilistic ranked-parallel models such as surprisal.However, the idea that multiple meanings can be maintained without taxing cognitive resources may not be psychologically plausible.The number of possible interpretations of a sentence generally grows exponentially with its length, and no known algorithm can exhaustively explore all possible interpretations in time linear in sentence length.Recently, more cognitively realistic models of probabilistic sentence comprehension have been proposed which involve approximation algorithms intended to bring inferences close to the resource-unconstrained “ideal”.Levy et al. proposed one such algorithm, the particle filter, in which limited resources are used to efficiently search the space of possible interpretations by repeated stochastic sampling as each incremental input word accrues.The stochasticity of the search gives rise to drift in the probabilities of alternative analyses, and the longer an ambiguity goes unresolved the more likely one of the interpretations is to be lost altogether.Thus, these more cognitively realistic models of probabilistic sentence comprehension predict that digging-in effects might arise during lexical ambiguity resolution, therefore making behavioral predictions analogous to the capacity-constrained model.However, we found no evidence for digging-in effects in lexical ambiguity resolution, and therefore no evidence for the capacity-constrained model or particle filter.In principle, the processing of lexical ambiguity and syntactic ambiguity could well be fundamentally similar: both types of ambiguity might require the maintenance of multiple alternative representations, and multiple sources of ambiguity create a combinatorial explosion of overall possible interpretations that poses fundamental computational challenges.Arguments both for and against this view have been advanced in the literature.The results from syntactic ambiguity resolution during the reading of garden-path sentences, suggest that readers are subject to resource constraints when resolving syntactic ambiguities, which gives rise to digging-in effects.For example, consider the reading of garden-path sentences as in &:While the man hunted the deer ran into the woods.While the man hunted the deer that was brown and graceful ran into the woods.Although the ambiguity in these two sentences is structurally identical—in particular, the noun phrase containing the deer should be parsed as a new clause subject, not as the object of hunted—readers experience substantially more difficulty recovering from the ambiguity in than 
in.If the processing of both types of ambiguity were fundamentally similar, then we would expect similar digging-in effects to emerge in lexical ambiguity resolution.However, that is not what we found.Our results are unlike those for syntactic ambiguity resolution, as we found no evidence for digging-in effects in lexical ambiguity resolution.These differences in how lexical and syntactic ambiguities are managed pose challenges for accounts of ambiguity resolution that characterize syntactic ambiguity resolution purely as a type of lexical ambiguity resolution, suggesting instead that the two types of ambiguity may be represented differently, may impose different resource demands, and/or may be managed differently in human sentence comprehension.Across two studies, we found no evidence for digging-in effects in lexical ambiguity resolution, and therefore no evidence for the capacity-constrained model.The ease with which readers disambiguate to each meaning did not increase with intervening material as the capacity-constrained model or the particle filter would have predicted.Instead, taken together with the results of Rayner and Frazier, our results suggest that, in the absence of prior disambiguating context, either readers are able to maintain multiple meanings without significantly taxing cognitive recourses, or readers commit to one interpretation very rapidly—typically the more frequent interpretation—as suggested by the reordered-access model. | Human language is massively ambiguous, yet we are generally able to identify the intended meanings of the sentences we hear and read quickly and accurately. How we manage and resolve ambiguity incrementally during real-time language comprehension given our cognitive resources and constraints is a major question in human cognition. Previous research investigating resource constraints on lexical ambiguity resolution has yielded conflicting results. Here we present results from two experiments in which we recorded eye movements to test for evidence of resource constraints during lexical ambiguity resolution. We embedded moderately biased homographs in sentences with neutral prior context and either long or short regions of text before disambiguation to the dominant or subordinate interpretation. The length of intervening material had no effect on ease of disambiguation. Instead, we found only a main effect of meaning at disambiguation, such that disambiguating to the subordinate meaning of the homograph was more difficult—results consistent with the reordered access model and contemporary probabilistic models, but inconsistent with the capacity-constrained model. |
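To make the mixed-effects analysis described in the Methods above concrete, the sketch below shows how a maximal model, the likelihood ratio test for the meaning-by-length interaction, and an approximate Bayes factor could be set up in R with lme4. It is only an illustration: the input file and column names (subj, item, meaning, length, rt) are hypothetical, and the Bayes factor is approximated from BIC, which is one common shortcut and not necessarily the exact procedure used in the study.

library(lme4)

d <- read.csv("fixation_measures.csv")   # hypothetical file; one row per trial
d$meaning <- factor(d$meaning)           # dominant vs. subordinate resolution
d$length  <- factor(d$length)            # short vs. long intervening region
contrasts(d$meaning) <- contr.sum(2)     # sum coding of the fixed effects
contrasts(d$length)  <- contr.sum(2)

# Maximal random-effects structure justified by the design: by-subject and
# by-item intercepts and slopes for meaning, length, and their interaction.
m_full <- lmer(rt ~ meaning * length +
                 (1 + meaning * length | subj) +
                 (1 + meaning * length | item),
               data = d, REML = FALSE)

# Same random-effects structure, but without the fixed interaction term.
m_noint <- lmer(rt ~ meaning + length +
                  (1 + meaning * length | subj) +
                  (1 + meaning * length | item),
                data = d, REML = FALSE)

anova(m_noint, m_full)   # likelihood ratio test for the interaction

# BIC-approximated Bayes factor in log10 space; positive values favour the
# model with the interaction, negative values favour the null model.
log10_BF <- (BIC(m_noint) - BIC(m_full)) / (2 * log(10))
log10_BF

The same comparison can be repeated for the fixed effects of meaning and length by dropping each term in turn while keeping the random-effects structure identical, mirroring the procedure reported for all measures; binary outcomes such as regression probabilities could be fitted analogously with glmer() and a binomial family.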
361 | A case of aggressive giant dermatofibrosarcoma protuberance occurring in the parotid gland | Although DFSP may have been reported in the literature as early as 1890, Darier and Ferrand first described it in 1924 as a distinct cutaneous disease entity called progressive and recurring dermatofibroma. Hoffman officially coined the term dermatofibrosarcoma protuberans in 1925. Dermatofibrosarcoma protuberans is a relatively uncommon soft tissue neoplasm of intermediate- to low-grade malignancy. The incidence of sarcomas of the major salivary gland appears to be about one-tenth that of benign mesenchymal tumors. Metastasis occurs rarely. DFSP is a locally aggressive tumor with a high recurrence rate. The treatment of choice is surgical resection with wide margins or Mohs micrographic surgery. In cases of large, unresectable or metastatic tumors, adjuvant radiotherapy or chemotherapy is implemented, as described in several papers. Tyrosine-kinase inhibitors such as imatinib or sorafenib are more efficient compared with other classical agents. We report a very rare case of a large DFSP located in the parotid region with involvement of the parotid salivary gland and external ear, requiring a large resection volume and immediate reconstruction with a free ALT flap. The work has been reported in line with the SCARE criteria. A 38-year-old Russian woman presented with the complaint of a slow growing, painless pretragal swelling of eight years duration. The patient had undergone a resection of the parotid gland under local anesthesia eight years earlier in the local regional hospital. Microscopic examination showed signs of angiofibrosis. Within 1 year there was a recurrence of the tumor, and histological examination did not reveal any malignant cells. After seven years the patient noticed fast growth of the tumor. A general practitioner referred the patient to our institute, a tertiary research cancer center. Clinical examination found a dense mass, slightly painful on palpation, in the right parotid region with exophytic growth on a wide base, spreading to the temporal and mastoid regions and extending to the auricle cartilage, measuring 18.0 × 8.0 × 9.0 cm. There was no facial paralysis or cervical lymph node enlargement. An MRI scan revealed a homogeneous, well-demarcated, unmineralized, nodular soft-tissue mass in the right parotid region involving the skin and subcutaneous adipose tissue. The features were suggestive of a sarcoma. According to the decision of the tumor board, wide resection of the tumor including right total parotidectomy and auriculectomy, with reconstruction of the postoperative defect with an anterolateral thigh flap on a microvascular anastomosis, was performed. Histopathological examination revealed short spindle cells, hypercellularity, moderate to marked atypia, nuclear pleomorphism and high mitotic activity with a tight storiform pattern. On immunohistochemistry the tumor was diffusely CD34 positive. The final histopathological diagnosis was confirmed as dermatofibrosarcoma protuberans. In the postoperative period she decided to receive treatment in a regional cancer hospital. Four cycles of dacarbazine 400 mg on days 1–5 and doxorubicin 100 mg, together with 40 Gy of radiotherapy delivered to the primary tumor bed, were given by her regional cancer hospital. The tumor recurred after a ten-month follow-up. PET-CT examination found a 4 cm mass in the right parotid area. Resection of the recurrent tumor was performed without complications. The microscopic picture showed signs of dermatofibrosarcoma. Adjuvant radiotherapy up to 80 Gy was performed after the second resection. TKIs such as imatinib or sorafenib were not used, as they were unavailable in the regional cancer hospital. She refused to continue treatment in Almaty for her own reasons. The patient is under follow-up and has been disease-free for 28 months after the last treatment. DFSP is a fibrohistiocytic tumor of low to intermediate malignancy, with infiltrative margins, high local recurrence, and rare distant metastasis. These tumors occur mainly over the trunk and proximal extremities and tend to recur after wide local excision. The parotid gland or parotid region is a very rare site, with few published case reports. Routine workup consists of MRI or CT and biopsy with histological examination. On computed tomography, these tumors appear as well-defined masses that are hypointense to muscle and demonstrate homogeneous contrast enhancement. On magnetic resonance imaging, DFSP is homogeneous and iso- or hypo-intense to muscle. The lesions are strongly enhanced post-contrast on T2-weighted images. The macroscopic characteristics of the tumor include a well-defined and encapsulated mass which may be accompanied by bone destruction, though it is normally free of infiltration. Histological features suggesting malignancy include a high mitotic rate, hypercellularity, moderate to marked atypia and nuclear pleomorphism, tumor necrosis and infiltrative borders. Histologically, a diagnosis of DFSP is also difficult because many tumors display similar findings. DFSP, unlike a solitary fibrous tumor, shows remarkable uniformity, a lack of the hemangiopericytic pattern and a distinct storiform pattern around an inconspicuous vasculature. Because of overlapping immunohistochemical results for DFSP and solitary fibrous tumor, caution is required in the differential diagnosis. Benign fibrous histiocytoma may resemble DFSP, but it is usually negative for CD34 and Bcl-2. Schwannoma contains Antoni A and B areas, and it is S-100 protein positive, which is not seen in DFSP. Molecularly, DFSP is characterized by a specific t(17;22) translocation leading to the formation of COL1A1-PDGFB fusion transcripts. The main treatment of DFSP is surgical resection with wide negative margins. Due to the giant size of the tumor in our case, we decided to carry out a one-stage reconstruction of the postoperative defect with an anterolateral thigh flap on a microvascular anastomosis. The Mohs technique, recommended by several publications, makes it possible to work with narrow margins. The 5-year survival rate for head and neck sarcomas is approximately 50%. Most authors agree that the same prognostic factors – grade, size, and depth – apply to sarcomas no matter where they arise. In the head and neck, however, local recurrence has more significant consequences because of the difficulty of subsequent management. In general, salivary gland sarcomas are aggressive neoplasms with recurrence in about 40–64% of cases, hematogenous metastasis, and mortality rates ranging from 36 to 64%. The prognosis for parotid DFSP is not clear due to the scarcity of cases reported in the literature. In our case recurrence was observed within 11 months after treatment, despite aggressive postoperative treatment including 4 cycles of chemotherapy and 40 Gy radiotherapy. After the second surgery only radiotherapy was given as adjuvant therapy. Some studies also recommend adjuvant tyrosine-kinase inhibitor therapy, with good response rates published by several authors. Only the two surgeries were performed by our team; all postoperative treatment was delivered by the regional cancer hospital without access to TKIs. Since recurrence and metastasis can take place after several years, regular lifelong clinical and imaging follow-up is compulsory. Dermatofibrosarcoma protuberans is a rare tumor with infiltrative margins, a high local recurrence rate, and rare distant metastasis. The parotid gland or parotid region is a very rare site with few published case reports, and parotid DFSP shares the common features of DFSP of the trunk and extremities. Complete resection is the most important prognostic factor, and no evidence supports the efficacy of any therapy other than surgery. Radiotherapy or chemoradiotherapy can be applied to large and recurrent cases, but with unclear benefit. Due to the frequent local recurrence even after many years of remission, long-term follow-up is warranted. No conflicts of interest to declare. This is a case report study. No funding was obtained to perform the study. Not applicable; observational case reports are exempt from ethical approval in our institution. Written consent was obtained from the patient, although there is no possibility of identifying the patient from the provided pictures. DA – study concept and final approval. DA, FK, ST – acquired and interpreted the data and drafted the manuscript. DA, DAh – performed the operation and perioperative management of the patient, and revised the manuscript. All authors read and approved the final manuscript. Not commissioned, externally peer-reviewed. | Introduction: Dermatofibrosarcoma protuberans (DFSP) is a cutaneous malignancy that arises from the dermis and invades deeper tissue. The cellular origin of DFSP is not clear. Evidence supports the cellular origin being fibroblastic, histiocytic, or neuroectodermal. Presentation of case: A 38-year-old woman presented with a slow-growing, large right parotid mass. A total parotidectomy was performed with auriculectomy and reconstruction using an ALT flap. The diagnosis was confirmed by pathology and immunohistochemistry. The tumor recurred in 10 months, and a second surgery with subsequent chemoradiotherapy was performed. The patient was initially treated with wide resection, 4 cycles of chemotherapy and postoperative radiotherapy of 40 Gy, with recurrence in 10 months. We performed a second surgery followed by radiotherapy. She has been disease-free for more than two years under follow-up. Discussion: The main treatment of DFSP is surgical resection with wide negative margins or Mohs surgery. Advanced cases are treated with the addition of radiotherapy or chemoradiotherapy, but with unclear benefits. In our case, a huge tumor located in the parotid region recurred after initial surgery and adjuvant treatment. Conclusion: Clinically, DFSP usually manifests as a well circumscribed, slow-growing, smooth, and painless mass. In cases with an advanced tumor in the parotid region, recurrence may occur despite aggressive initial treatment with wide resection and chemoradiotherapy.
362 | Pharmacological activities of selected wild mushrooms in South Waziristan (FATA), Pakistan | The use of wild mushrooms in the diet has increased worldwide, enhancing their marketability and economic contribution by approximately two billion dollars and medicine.Mushrooms are non-timber forest products that are important for both their nutritional as well as pharmacological effects.They are sources of many biologically active compounds that can help in strengthening the immune system and shielding against carcinogens.Previous studies have shown that mushrooms contain active ingredients, which have numerous therapeutic effects such as antitumor, immune-modulating and amelioration of chronic bronchitis.Several genera of mushrooms are edible providing sources of proteins, carbohydrates, vitamins, minerals and amino acids.Calvatia gigantea, the largest edible mushroom species, belongs to family Lycoperdaceae.Morchella esculenta is an economically important mushroom species largely collected in the wild.The fruiting body of this mushroom species is edible and is usually used as a flavoring agent in soups and gravies.Astraeus hygrometricus is another wild growing mushroom species in South Waziristan that is used as food and possesses antimicrobial properties.South Waziristan extends over an area of 6,500 sq km, located about 580 km northeast of Islamabad, Pakistan.South Waziristan shares a 300 km border with Afghanistan.It is among the federally administered tribal areas of Pakistan, in which poverty is widespread and people source their food and medicine from the wild.The present study aimed to investigate the content and biological activities: total phenolics, protein content, antioxidant activity and cytotoxicity of selected mushroom species viz., M. esculenta, A. hygrometricus and C. gigantea collected from South Waziristan, Pakistan, using methanolic extracts on radish seed growth.The mushroom samples were collected from Tehsil Makeen, Wana and Birmal of South Waziristan Agency during April–May 2010 and identified at Pakistan Museum of Natural History, Islamabad.The fresh mushroom samples were washed with sterile distilled water, cut into slices and shade dried.The dried material was ground finely with an electric grinder and stored at 4 °C.A total of 20 g of each mushroom sample, ground and powdered, was soaked in a flask in 200 ml of methanol for 3 days with occasional shaking.The mixture was filtered through Whatman filter paper No. 1 in a Buchner funnel using suction pump.To the residue left in the flask, the same amount of the solvent was added and the process was repeated.The extracts were concentrated to dryness using a rotary evaporator at 40 °C under reduced pressure and was further stored at 4 °C in a refrigerator.The free radical scavenging activity of methanolic extract of M. esculenta, A. hygrometricus and C. 
gigantea was measured in terms of hydrogen donating or radical scavenging ability using the stable radical 1,1-diphenyl-2-picrylhydrazyl (DPPH), prepared by dissolving 3.2 mg DPPH in 100 ml of methanol. DPPH solution (2.8 ml) was added to a glass vial followed by the addition of 0.2 ml of test sample solution in methanol, leading to final concentrations of 1 μg/ml, 5 μg/ml, 10 μg/ml, 25 μg/ml, 50 μg/ml and 100 μg/ml. These solution mixtures were kept in the dark for 30 min at room temperature and absorbance was measured at 517 nm. A lower absorbance value of the reaction mixture indicates higher free radical scavenging activity. Ascorbic acid was used as standard in 1–100 μg/ml solution. All the tests were carried out in triplicate. The radical scavenging activity was calculated as the percentage of DPPH discoloration using the following equation: % scavenging of the DPPH free radical = 100 × (AD − AE)/AD, where AE is the absorbance of the solution when extract was added, and AD is the absorbance of the DPPH solution with no addition. Thus, the IC50 value was calculated as the concentration of sample required to inhibit 50% of the DPPH free radical, using GraphPad Prism v.5.0. Total phenolic contents were estimated using the method of Oyetayo, Nieto-Camacho, Ramírez-Apana, Baldomero, and Jimenez. A 200 μl aliquot of each fraction was added to 10 ml of 1:10 Folin–Ciocalteu reagent. The mixture was mixed and incubated for 5 min before the addition of 7 ml of 0.115 mg/ml Na2CO3. The solution was incubated for 2 h before absorbance readings were taken at 765 nm. Gallic acid was used for the calibration curve. Results were expressed as milligram gallic acid equivalent per gram of dried fraction. The brine shrimp cytotoxicity assay was performed according to Meyer-Alber, Hartmann, Sumpel, and Creutzfeldt. Samples were prepared by dissolving 10 mg of extract in methanol to form a stock solution that was used to prepare further dilutions. Brine shrimps were incubated in a two-compartment rectangular tray containing sea salt saline. The sea saline was prepared by dissolving 38 g sea salt in 1 L of deionized H2O with continuous stirring for 2 h.
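As a worked illustration of the scavenging formula above, the short R sketch below computes the percentage of DPPH discoloration and a rough IC50 by linear interpolation; the absorbance values are invented for the example, and the study itself obtained its IC50 values with GraphPad Prism v.5.0.

conc <- c(1, 5, 10, 25, 50, 100)                 # extract concentration, ug/ml
AD   <- 0.92                                     # absorbance of DPPH solution alone (example value)
AE   <- c(0.88, 0.74, 0.48, 0.27, 0.15, 0.09)    # absorbance with extract added (example values)

scavenging <- 100 * (AD - AE) / AD               # % scavenging of the DPPH free radical
scavenging

# Approximate IC50: the concentration at which scavenging reaches 50%,
# taken here by linear interpolation between the surrounding data points.
IC50 <- approx(x = scavenging, y = conc, xout = 50)$y
IC50

With these invented absorbances the interpolated value falls between 10 and 25 μg/ml; the IC50 values actually obtained for the three extracts are those reported in Table 1.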
Eggs were sprinkled in a dark compartment of the tray and after 24 h the hatched larvae were collected by pipette from the lighted side.For shrimp treatment, 0.5 ml of each concentration was placed in vials and the solvents were evaporated.Residues were re-dissolved in 2 ml saline.Ten shrimps were transferred to each vial and the volume was made up to 5 ml by incubating the vials at 25–28 °C.The same assay was performed for the standard Ampicillin trihydrate.After 24 h of incubation, the survivors were counted and recorded for calculating the LC50 values using GraphPad Prism v.5.0.Following the method described by Rehman and Khan, Tariq, and A, Khan, the methanolic extract was dissolved in 50 ml methanol to make a stock solution of 10 mg/ml, i.e., 10,000 mg/L or 10,000 ppm concentration.The stock solution was further diluted to 1000 ppm with methanol.Autoclaved distilled water and pure methanol were used as control and vehicle control, respectively.An aliquot of each sterilized concentration was placed onto sterilized filter paper in a 10 cm Petri dish.Methanol was vacuum evaporated separately, and then 5 ml autoclaved distilled water was added to each Petri plate.Three replicates were prepared for each concentration.For negative control, 5 ml methanol was added to the plate, it was vacuum evaporated and then 5 ml autoclaved distilled water was added to it.For positive control, only 5 ml autoclaved distilled water was added to each plate.Three replicates were prepared for each control.Ten sterilized radish seeds were placed at sufficient distances with a sterilized forceps in each plate.For the sterilization of radish seeds, 0.1% of mercuric chloride solution was prepared in a beaker, where radish seeds were put in for 3 min.Furthermore, it was rinsed with autoclaved distilled water followed by drying on sterilized blotting paper.Petri plates were incubated in a light of 350 μmol m− 2 s− 1 at 25 °C."The data for the effect of methanolic extracts on radish growth were analyzed by analysis of variance according to Steel and Torrie, and comparison among treatment means was made by Duncan's multiple range test using MSTAT-C version 1.4.2.Antioxidant activity of selected mushrooms measured as DDPH radical scavenging activity tests has been summarized in Table 1.Among the methanolic crude extracts prepared and standard tested for in vitro antioxidant activity using the DPPH method, the crude methanolic extracts of A. hygrometricus, C. gigantea and M. esculenta showed antioxidant activity with IC50 values of 9.3 ± 0.32 μg/ml, 22.2 ± 0.3 2 μg/ml and 18.0 ± 0.1 2 μg/ml, respectively.The IC50 value recorded for ascorbic acid was 7.5 ± 0.2 μg/ml.The results showed that IC50 value of A. hygrometricus is close to ascorbic acid.These results are in agreement with those of Fui, Shieh, and Ho, who found that various cultivated and wild mushrooms possess significant antioxidant and free radical scavenging activities.Huangs also found excellent scavenging effects with methanolic extracts from Antrodia camphorata and Brazilian mushroom at 2.5 mg/ml, respectively.The brine shrimp lethality test was performed to assess the cytotoxic effects of methanolic extract prepared from fruiting bodies of A. hygrometricus, C. gigantea and M. esculenta as presented in Table 2.A higher LC50 value was recorded for methanolic extract of A. hygrometricus followed by M. esculenta.However, the methanolic extract of C. 
gigantea exhibited relatively lower LC50, which was comparable to LC50 of antibiotic Ampicillin trihydrate.Cytotoxicity-based screening for identification of compounds has been previously shown to be successful in the discovery of many clinically useful anticancer natural products.Previous studies showed that C. gigantea produced an antitumor compound calvacin.Although during the present investigations emphasis was not given to anticancer activity of selected mushrooms collected from South Waziristan, these native mushroom species indicated their cytotoxic potential against brine shrimp.Several studies have shown that brine shrimp assay has been an excellent method to screen cytotoxic activity in plant and mushroom species and also for the isolation of biologically active compounds.The method provides a preliminary toxicity screening basis for further experiments on mammalian models.The lethal effect of wild mushroom methanolic extracts on brine shrimp depicts the presence of potential cytotoxic compounds which warrants further investigation as anticancer agents.The extracts obtained from natural products with LC50 value less than 100 μg/ml, as observed in the brine shrimp lethality assays, are considered toxic.However, the mushroom species used in this study are edible and used by tribal population as a food since ages.This toxicity revealed may not be seen to human possibly because the toxins are heat labile and must be detoxified during cooking and toxins may be inactivated by gastric juices and proteolytic enzymes in the gastrointestinal tract.The effect of methanolic extracts and of selected mushrooms on radish seeds have been presented in Tables 3 and 4, respectively.Our results revealed that methanolic extracts of A. hygrometricus, C. gigantea and M. esculenta at 1 mg/ml did not show any significant effect on seed germination and shoot length of radish.However, root length and root/shoot ratio was significantly increased by C. gigantea as compared to control.The methanol extract of C. gigantea at 10 mg/ml decreased the seed germination and shoot length of radish by 16%.In contrast, a methanolic extract of M. esculenta significantly increased the shoot length as compared to control.The effect of methanolic extract of A. hygrometricus on seed germination and shoot length was not significant.Similarly, the root length and root/shoot was not significantly affected by methanolic extract of all three mushroom as compared to that of control.The application of higher concentration of M. esculenta methanol extract showed stimulatory effects on shoot length of radish.In contrast, C. gigantea extract inhibited seed germination and shoot length of radish.Allelopathy is a biological phenomenon by which an organism produces one or more biochemicals that influences the growth, survival, and reproduction of other organisms.Biochemical is known as allelochemical and can have beneficial or detrimental effects on the target organisms.Allelochemicals are a subset of secondary metabolites which are not required for metabolism.Yet these may help plants improve their growth and may also help reduce the risk of pathogen attacks on crop plants.Maximum phenolic content was recorded in the methanolic extract of A. hygrometricus followed by M. esculanta.Our results are congruent with the findings of Seng, Chy, Kheng, and Wai, who studied several wild mushrooms for their phenolic contents.Similarly, Kim et al. 
reported 28 phenolic compounds in mushrooms.According to this study, the average concentration of phenolic compounds was 326 μg/g; for edible mushrooms 174 μg/g and 477 μg/g for medicinal mushrooms.Phenolic compounds have significant biological and pharmacological properties and some also demonstrate remarkable ability to alter sulfate conjugation.The bioactivity of phenolic compounds may be related to their ability to chelate metals, inhibit lipoxygenase and scavenge free radicals.The mushrooms studied here possessed the phenolic compounds and therefore seem to be potential source of useful biological drugs.The protein content of M. esculenta was higher when compared with other mushroom species.The Morchella species growing naturally in South Waziristan is being collected by poor people and thus plays a significant role in the rural livelihood.It is sold in the local market in huge quantities and the production is mainly from natural stands.In rural communities, there is no adoption of mushroom cultivation or conservation resulting in its decline from these localities.The mushroom species growing naturally in South Waziristan are a source of natural antioxidants, phenolics and proteins.M. esculenta has been a rich source of protein and contributes towards the food requirement of people of remote tribal belt of Pakistan.Similarly, A. hygrometricus and C. gigantea also carry higher pharmacological importance in treating various ailments.However, there is a dire need for documenting and conserving these economically important mushroom species. | This study investigates the pharmacological importance of selected wild mushrooms viz., Morchella esculenta (common morel), Calvatia gigantea (Giant puffball) and Astraeus hygrometricus (False earthstar) collected from South Waziristan Agency, Federally Administered Tribal Areas (FATA), Pakistan. The selected mushrooms were collected from Tehsil Makeen, Wana and Birmal of South Waziristan Agency during a sampling survey conducted in April-May 2010. The dry fruiting bodies of mushrooms were methanol extracted and evaluated for total phenolic, protein, antioxidant activity, cytotoxicity and their effects on radish seed growth. The results revealed that methanol crude extracts prepared from fruiting bodies of A. hygrometricus and C. gigantea have higher phenolic content and total antioxidant activity as well as greater brine shrimp cytotoxicity. On the other hand, M. esculenta has a high level of protein content and promoted seedling growth in radish. Antioxidant activity for A. hygrometricus, C. gigantea and M. esculenta at IC50 values were: 9.3±0.3μg/ml, 22.2±0.3μg/ml and 18.0±0.1μg/ml, respectively. Methanolic extract of C. gigantea (10mg/ml) reduced the seed germination and shoot length of radish by 16%. In contrast, the methanolic extract (10mg/ml) of M. esculenta and C. gigantea enhanced the shoot length, root length and root/shoot ratio of radish. Higher LC50 value was recorded for methanolic extract of A. hygrometricus (19.0±0.3μg/ml) followed by M. esculenta (17±0.19μg/ml), whereas methanolic extract of C. gigantea showed lower LC50 value (16±0.23μg/ml). It is inferred from the present investigation that mushrooms collected from South Waziristan could be potential source of compounds with beneficial biological activities. |
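As a side note to the brine shrimp assay described above, the LC50 values were computed in GraphPad Prism v.5.0; the snippet below is a minimal open-source sketch of an equivalent dose-response fit. The concentrations and survivor counts are hypothetical placeholders (the actual assay data are in Table 2), and the two-parameter log-logistic model is only one of the curve families such software can fit.

```python
# Hedged sketch: fitting a two-parameter log-logistic dose-response curve to
# brine shrimp mortality data. Concentrations and survivor counts below are
# hypothetical; the real assay data are reported in Table 2.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])   # extract concentration (ug/ml)
survivors = np.array([10, 9, 7, 3, 1])            # survivors out of 10 shrimp per vial
mortality = 1.0 - survivors / 10.0                # observed mortality fraction

def log_logistic(c, lc50, slope):
    # Mortality rises from 0 towards 1 around c = LC50, with steepness 'slope'.
    return 1.0 / (1.0 + (lc50 / c) ** slope)

(lc50, slope), _ = curve_fit(log_logistic, conc, mortality, p0=(10.0, 1.0))
print(f"estimated LC50 = {lc50:.1f} ug/ml, slope = {slope:.2f}")
```

The same fitting call can be reused for the IC50 estimation of the DPPH assay by replacing the mortality fraction with the fraction of radicals scavenged at each extract concentration.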
363 | Osmotic dehydration of mango: Effect of vacuum impregnation, high pressure, pectin methylesterase and ripeness on quality | Mango is the second most important tropical fruit in the world after banana.Mango, like most fruits, is an important source of macro- and micronutrients, and a broad range of phytochemicals.There has been an increase in world demand for minimally processed mango products with a prolonged shelf-life, while maintaining the healthy and tasty experience.Osmotic dehydration can provide such added value to the product.OD can be applied before drying or freezing to create new, less perishable food products or ingredients with high nutritional and sensory properties.OD is a mass transfer process which partially removes water and simultaneously increases the soluble solid content of fruit in an osmotic solution.The process results in modification of the fruit tissue which can be tailored towards the compositional, textural and sensorial quality of dehydrated fruit.The OD mass transfer can be influenced by the fruit properties.As mango is often harvested at the mature-green stage, a full sized but unripe state, it has different characteristics from ripe mango, a fully developed, ripe and ready to eat product.During ripening mango undergoes biochemical changes causing tissue softening because of extensive pectin solubilization and progressive depolymerization in the middle lamella of cell walls, involving cell wall hydrolases.Several process variables affect OD mass transfer rates, such as pretreatments, temperature, OS properties, agitation, fruit to OS ratio, and additives.Combination of OD with pretreatments, such as vacuum impregnation and high pressure, has been shown to enhance the mass transfer.The main driving forces during OD of fruit are illustrated in Fig. 1.In OD, flux of OS into cellular tissue is induced by initial capillary pressure.Meanwhile, three other mechanisms concurrently occur throughout the process: cell dehydration caused by aw gradients leading to water loss; and both soluble solid diffusion and cell impregnation caused by cellular volume changes that generate pressure gradients related to mechanical deformation.In OD-VI, the capillary impregnation is combined with VI-induced impregnation, which expands internal gas and liquid in pores, followed by compression.The gas partially flows out causing additional internal volume changes.In OD-HP, instead, the imposed pressure changes are caused by high pressure, and the subsequent decompression increases cell membrane permeability.In this study, the response was also influenced by the tissue features of mango which change during ripening.Mass transfer and a mild temperature applied during OD result in tissue modification without damaging the fruit structure.Product firmness can be improved by incorporating texture modifying agents, such as pectin methylesterase, calcium or a combination of these agents.PME is able to de-methylate fruit pectin, which can subsequently be bound by the available endogenous and/or added calcium into a calcium-pectin gel.Incorporation of these agents can be enhanced by applying VI or HP prior to OD.Understanding the effect of fruit ripeness is essential to improve the process as well as the sensory and nutritional quality of osmotic dehydrated fruit.Many studies have investigated OD mass transfer rates and efficiency in different fruits under different conditions, a. o.
Torreggiani and Bertolo; Rastogi et al.The effects of OD process variables have been thoroughly investigated and reviewed.However, the effect of fruit ripeness with PME addition and using different pretreatments for OD has not been reported.Therefore, the objective of this study is to investigate the effects of pretreatments and PME as additives in the presence of calcium on OD efficiency and on quality parameters of osmotic dehydrated mango of different ripeness.Unripe and ripe mango from Brazil was provided by Nature's Pride, stored at 11 °C and used within three days after arrival.After being selected based on firmness, the mango was peeled and the flesh was cut into cubes using a potato cutter and a knife.Approximately 150 g of mango cubes were used for each replicate per treatment.Pectin methylesterase from a recombinant Aspergillus oryzae with a declared activity of 10 Pectin Esterase Units/ml, and calcium-l-lactate pentahydrate were used.Osmotic solutions were prepared with commercial sucrose in demineralized water.OD was carried out with a 1:4 mango to OS ratio at 50 °C in 60 °Brix sucrose, 2 g calcium lactate/100 g and 0 or 0.48 mL PME/100 g for 0.5, 2 and 4 h under continuous stirring.OD time started from the immersion in the OS and included the pretreatment time.Afterward, the cubes were separated, quickly rinsed with demineralized water, gently blotted with tissue paper and kept at 4 °C prior to analysis.Untreated mango was used as a control.Each treatment was performed in duplicate.VI was carried out in a vacuum chamber with a pump at 30 °C and 5 kPa for 15 min, with 10 min for pressure recovery.After VI, samples completed the OD time.HP was carried out in a Resato FPU-100-50.OS at 35 °C and mango cubes were packed in a sealed polyethylene bag after removal of air.The packed sample was subjected to the HP condition using water as the pressure medium.The pressure build-up rate was about 6.7 MPa/s.The processing time was counted after the solutions reached 300 MPa.Due to adiabatic heating, a maximum temperature of 50 °C was obtained.Decompression time was about 10 s.After HP, samples completed the OD time.The firmness of intact mango was measured at four equatorial points using a penetrometer with an 8 mm tip.Titratable acidity was measured by titrating the supernatant of mango juice with 0.1 mol/L NaOH to pH 8.1 using a pH meter.Total soluble solids were determined with a digital refractometer.Water activity was measured with a LabMaster-aw.Moisture content was measured by drying in an oven at 103 ± 2 °C until reaching a constant weight.All physicochemical analyses were measured in duplicate before and after each treatment.A bulk shear test was performed in a TA-TX2 texture analyzer equipped with Texture Exponent 32, a mini-Kramer shear cell and a 50 kg load cell.Maximum force (firmness, N) and total force to shear the bulk sample (work of shear, which is the total area under the curve of firmness, N.s) of a single layer of four and a double layer of eight mango cubes were measured before and after each treatment for unripe and ripe mango, respectively.A test speed of 1.5 mm/s, post-test speed of 1.5 mm/s and target distance of 39 mm were used.Data were statistically evaluated using analysis of variance combined with Duncan's multiple range test.The main mass transfer phenomena during osmotic dehydration involve water loss and soluble solid gain; the effects of pretreatments are shown in Table 2 (a computational sketch of these indices is given further below).OD resulted in a WL of 45–51 g water/100 g in unripe and ripe mango.VI and HP clearly led to a lower WL in unripe mango
compared to OD alone, while no clear effect of pretreatments was observed in ripe mango.This could be caused by the strong cellular structure of unripe mango, in which the imposed pressure changes removed most of the gas from the pores or increased cell permeability, causing SSG to exceed WL.Similar results were obtained for OD ripe mango compared with OD-VI and OD-HP.The observed OD-HP results on WL in mango differed from other fruits, a. o. banana and tomato, where OD-HP maintained or promoted WL compared with OD alone.The SSG of unripe OD-treated mango was 11–13 g solid/100 g and 4.3–5 g solid/100 g for the ripe mango.Also for unripe mango, the pretreatments showed a substantial effect; the SSG was up to 17.7–19.8 g solid/100 g for OD-HP and even up to 26.2–26.5 g solid/100 g for OD-VI mango.The imposed pressure changes by the pretreatments seem to enhance the mass transfer mechanisms for unripe mango.For ripe mango, effects on SSG were only marginal, with 6.5 g solid/100 g SSG for OD-VI as the highest value.A higher SSG is favorable for obtaining sweetened fruit but less favorable for obtaining a product with a lower sugar content.SSG of unripe mango was two to five-fold higher compared to ripe mango for all treatments.A similar result was also reported by Rincon and Kerr.Previous studies reported that OD-VI of ripe mango resulted in a more pronounced increase in SSG compared to OD.OD-HP was also reported to increase SSG in OD ripe mango about 1.5-fold compared to OD alone.OD-VI applied in the present study led to a remarkable enhancement of SSG in unripe mango, as was also observed in other fruits.The higher SSG in OD unripe mango compared to ripe mango could be explained by the fact that unripe mango has a stronger cellular structure and a lower sugar content.The stronger cell walls are because polysaccharide modification and turgor reduction have not occurred yet.Hence, when water is removed from the fruit tissue, structural changes might be limited, leading to a higher SSG.In ripe mango, cell wall softening has already occurred; when water leaves the fruit, the tissue structure could collapse, thereby physically hindering OS penetration into the fruit.The lower initial sugar content in unripe mango results in higher concentration gradients during OD, facilitating a higher SSG.In addition, ripe mango has a decreased intercellular pore size, which could limit OS penetration into the fruit.The OD efficiency of ripe mango was higher than that of unripe mango, as shown in Fig.
3, P < 0.05.This is in line with the lower SSG of ripe mango compared to unripe, while they had a similar WL.In unripe mango, the pretreatments significantly decreased OD efficiency from 3.5–4.8 to 1.1 and 1.5–1.6, respectively.Pretreated ripe mango without PME resulted in a 25% and 18% lower OD efficiency.Similar results of a 20–30% OD efficiency reduction when mango was pretreated with VI were reported by Ito, Tonon, Park, and Hubinger; Torres et al.There was no clear effect of PME addition on OD efficiency at 0.5 and 2 h for unripe and ripe mango.However, after 4 h OD, PME addition significantly increased OD efficiency in both pretreatments of ripe mango.This result is due to an increased WL and reduced SSG, which led to a significant efficiency increase surpassing that of OD.Both pretreatments could facilitate more rapid and homogeneous penetration of PME and calcium into cells, forming a calcium-pectin gel and leading to SSG reduction by these modified cells.In addition, the applied OD-HP seems to be a suitable combination of pressure and temperature to stimulate fungal PME activity and to lower baro-sensitive polygalacturonase activity.Weight reduction of mango after OD was comparable to the observed WL, although with a much lower value, especially for OD-VI and OD-HP of unripe mango.This difference could be due to the higher SSG of OD pretreated unripe mango compared to OD alone.In unripe mango, the highest WR occurred in OD, while pretreatment with HP and VI lowered the value.Thus, HP and VI could be advantageous to limit a large WR in unripe mango, but this is associated with a higher sugar content in the product.In Fig. 5, a composite overview is presented of the main OD effects on unripe/ripe mango and combined pretreatments on relevant variables, caused by different mass transfer phenomena.This overview gives an indication of suitable combinations of mango ripeness and pretreatment to produce desired quality characteristics.A complete overview of data representing the effects of each pretreatment and ripeness of mango on WL, SSG, OD efficiency, and WR is given in Table S1.In all treatments, after 4 h, h* values were generally slightly reduced for both ripeness stages, except for OD-HP of unripe mango.C* values were generally maintained between 50 and 60 in unripe mango, implying carotenoid pigment stability, except for OD-VI of unripe mango, but were slightly increased in ripe mango.The OD setting seems to be suitable for preservation of the color quality of mango.An unchanged C* value was also found in OD pineapple.After 4 h OD, L* values of unripe mango were greatly reduced in all treatments, but those of ripe mango remained consistent between 48 and 51.For unripe mango, OD-VI mango had the highest L* reduction, followed by OD and OD-HP.This reduction is in good agreement with a previous study on OD-VI mango.It could be attributed to increased translucency resulting from internal gas loss triggered by VI or from OS penetration during OD, as was also reported for other fruits.The L* value of OD-VI unripe mango was similar to fresh ripe mango, which might be preferred as it could contribute to the fresh-like appearance of the product.This OD-VI effect was not observed in ripe mango, as the L* value of fresh ripe mango is already lower because of a lower internal gas content.Total color differences were calculated.Changes in ΔE* reflected the changes in L*, due to the much smaller changes in a* and b*.Firmness and work of shear after 4 h OD of all treatments were consistently more reduced in ripe mango compared to unripe mango.Fresh unripe mango
had higher values than ripe mango.Effect of pretreatments on texture was only clear for WOS of unripe mango which for all pretreatments resulted in a lower WOS after OD.Although VI and HP promote penetration of added calcium into fruit, it seems the treatments reduced endogenous PME activity and the stronger structure of unripe mango might limit fungal PME penetration into cells.In addition, we found that the lower WR, the higher the WOS.The cellular structure of fruit seems to be more preserved when the shrinkage due to the WR is minimized.Addition of PME with calcium had some influences on firmness and work of shear of ripe mango which generally increased, except for WOS of OD-HP mango.Similar firming effects on ripe mango treated with HP were also reported in mango and pineapple.Conversely, no textural effect of adding PME on OD unripe mango was observed, Figs. 7a and 8a.In the presence of calcium, ripe mango of all treatments showed a more pronounced firming effect with added PME.Similar firming effects of VI with calcium and PME were also obtained in other pectin containing fruits such as papaya and apple.The effect of added PME on textural attributes might be enhanced by increasing the concentration of added PME and calcium to result in more pectin-calcium gels in the tissue.Osmodehydrated unripe mango resulted in a remarkable two to five-fold higher SSG compared to ripe mango for all treatments.Unripe mango pretreated with OD-VI had the lowest WL and the highest SSG, while OD-HP had a similar but less pronounced effect.Fungal PME addition increased OD efficiency for OD-HP and OD-VI of ripe mango.Generally, h* were preserved and C* were maintained or only slightly increased in both ripeness in all treatments.Nevertheless, L* values of OD-VI unripe mango were greatly reduced and became similar to fresh ripe mango.A general trend in increasing firmness and WOS with added PME in OD unripe and ripe mango was observed.This study thus demonstrates that using different ripeness of mango resulted in different quality of OD mango upon pretreatments in the presence of calcium.The observed effects of VI and HP applied to unripe and ripe mango are valuable to tailor OD efficiency and achieve the desired quality of OD mango.Two contrasting product of OD mango from this study could be candied mango and OD mango with minimal sugar uptake.For candied mango, it is favorable to use unripe mango and apply OD-HP without PME to result in high SSG, the lowest WL, and WR, while minimizing color changes and maintaining fresh-like texture.To produce OD mango with minimal sugar uptake, it is favorable to use ripe mango and apply OD without PME to result in the lowest SSG and fresh-like C* and h* values as well as texture. | The effects of pretreatment with vacuum impregnation (VI) and high pressure (HP) and adding pectin methylesterase (PME) with calcium on the quality of osmotic dehydrated mango of different ripeness were investigated. Unripe and ripe ‘Kent’ mango cubes were osmotic dehydrated (OD at 50 °C in 60 °Brix sucrose solution containing 2 g calcium lactate/100 g and 0 or 0.48 mL PME/100 g), preceded either by VI (OD-VI) or HP (OD-HP). Use of unripe mango in OD showed two to five-fold higher soluble solid gain (SSG) compared to ripe mango for all treatments. Unripe mango pretreated with OD-VI showed the lowest water loss (WL) and the highest SSG. OD-HP had a similar but less pronounced effect as OD-VI on WL and SSG. 
Hue (h*) was generally preserved and color intensity (C*) was maintained or only slightly increased at both ripeness stages in all treatments. Lightness (L*) was greatly reduced in unripe mango but stable in ripe mango. In general, firmness and work of shear slightly increased when adding PME. |
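For readers who want to reproduce the mass-balance indices used throughout the mango study above, the sketch below implements one common convention for water loss (WL), soluble solid gain (SSG) and weight reduction (WR), expressed per 100 g of initial sample, together with the WL/SSG ratio used as OD efficiency and the usual CIE76 total colour difference. The study's own equations are not reproduced in the text, so the function names, formulas and example numbers are illustrative assumptions rather than the authors' exact definitions.

```python
# Hedged sketch of the osmotic-dehydration mass-balance indices and the CIE76
# colour difference discussed above. Formulas follow a common convention from
# the OD literature and are expressed per 100 g of initial sample; the example
# numbers are illustrative only.

def od_indices(m0, mt, xw0, xwt, xs0, xst):
    """m0/mt: sample mass before/after OD (g); xw*/xs*: water and soluble-solid
    mass fractions before/after OD."""
    wl = (m0 * xw0 - mt * xwt) * 100.0 / m0    # water loss, g water / 100 g
    ssg = (mt * xst - m0 * xs0) * 100.0 / m0   # soluble solid gain, g solid / 100 g
    wr = (m0 - mt) * 100.0 / m0                # weight reduction, g / 100 g
    return wl, ssg, wr, wl / ssg               # last value: WL/SSG "efficiency" ratio

def delta_e(lab_ref, lab_sample):
    """CIE76 total colour difference: Euclidean distance in L*a*b* space."""
    return sum((r - s) ** 2 for r, s in zip(lab_ref, lab_sample)) ** 0.5

# Illustrative numbers, roughly in the range reported for OD mango:
wl, ssg, wr, eff = od_indices(m0=100.0, mt=65.0, xw0=0.83, xwt=0.55, xs0=0.15, xst=0.38)
print(f"WL={wl:.1f}, SSG={ssg:.1f}, WR={wr:.1f}, WL/SSG={eff:.1f}")
print(f"dE* = {delta_e((62.0, 8.0, 45.0), (50.0, 9.0, 47.0)):.1f}")
```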
364 | Spermidine ameliorates liver ischaemia-reperfusion injury through the regulation of autophagy by the AMPK-mTOR-ULK1 signalling pathway | In recent years, liver disease has become an increasingly frequent cause of mortality."Up to 25% of the world's population has risk factors for liver disease, including viral infection, alcohol abuse, and non-alcoholic steatohepatitis .Liver transplantation is the only effective therapy for end-stage liver disease, and it has been extensively applied worldwide.Ischaemia-reperfusion injury is an as yet unavoidable complication of liver transplantation that is challenging for clinicians.Due to a sharp increase in the number of recipients who suffer from emergency situations following transplantation, the gap between the demand for livers and suitable living donors is gradually increasing.Therefore, the transplantation community has focused on expanding the donor pool by using marginal grafts, which present at high risk of IR injury and may lead to potentially catastrophic scenarios.Although significant efforts have been made towards investigating the mechanism of IR injury and developing effective pre-treatment strategies, unsatisfactorily high probabilities of post-operative allograft failure, morbidity, and mortality remain .The main pathophysiological processes underlying hepatic IR injury involve ischaemia-induced cell damage and reperfusion-induced inflammation, which can result in severe inflammation and cell death .Hence, hepatic IR injury needs to be addressed urgently to understand the underlying pathogenesis and find sustainable solutions.Spermidine, a natural polyamine extracted from animals and plants, has crucial roles in various cellular processes, including DNA replication, transcription, and translation .Spermidine also exhibits anti-inflammatory and antioxidant properties, enhances mitochondrial metabolic function and respiration, promotes chaperone activity, and improves proteostasis .Intriguingly, external supplementation with spermidine exerts various beneficial effects on ageing and age-related diseases in a variety of model organisms .Importantly, the longevity-promoting activities of spermidine have been causally linked to its ability to maintain proteostasis through the stimulation of cytoprotective autophagy, which leads to the removal of the intracellular accumulation of toxic debris caused by ageing and disease .Autophagy is a strictly regulated cellular degradative pathway that controls the delivery of a wide range of proteins and organelles into the lysosome for catabolic degradation .Under basal conditions, autophagy is triggered to maintain homeostatic functions and is upregulated upon nutrient deprivation or cellular stress, such as IR injury, to provide amino acids and generate energy.Under pathophysiological conditions, autophagy dysfunction is generally characterized by the inability to remove damaged organelles or debris .Studies on autophagy in liver tissue following IR injury have shown that the consequent accumulation of damaged mitochondria, which are normally sequestered and degraded via autophagy, leads to the enhanced generation of reactive oxygen species and results in increased hepatocellular necrosis and tissue damage .Thus, improvement in the autophagic response to liver IR may reduce the levels of apoptosis and necrosis and protect against damage.To date, it is unclear whether spermidine can protect against hepatic IR injury by regulating autophagy.Given that pre-treatment with spermidine constitutes a 
well-documented avenue for protection against IR injury in multiple organs, we hypothesized that spermidine might alleviate liver IR injury by activating autophagy.To test this hypothesis, we investigated whether spermidine can protect the liver from IR injury and the mechanisms underlying this protection.Male C57BL/6 mice were purchased from Joint Ventures Sipper BK Experimental Animal.All animal protocols used in this study were in accordance with the guidelines for the Care and Use of Laboratory Animals of the National Institutes of Health, and all animal experiments were approved by the Scientific Investigation Board of Second Military Medical University, Shanghai, China.The mice were randomly divided into different groups as follows: sham-operated, sham-operated treated with spermidine, IR, and IR treated with spermidine.For spermidine pre-treatment, the mice were administered 2 ml of water containing 3 mM spermidine by gavage once a day for 4 weeks before IR.This dosing strategy has been previously shown to protect the heart and extend lifespan.For compound C pre-treatment, mice received intraperitoneal injections for 4 days before IR.Rapamycin was administered intraperitoneally 1 h before IR.The mice were housed in standard animal rooms, and during the post-operative period, the mice were kept in a clean, warm, and quiet environment.All efforts were made to minimize animal suffering and the number of animals used.Western blotting was performed to measure the levels of the target proteins in liver samples collected 8 h after reperfusion.The following antibodies were used: polyclonal rabbit anti-mouse Beclin-1, LC3, Caspase-3, Cleaved-caspase-3, Bax, AMPK, p-AMPK, mTOR, p-mTOR, ULK1, p-ULK1, and GAPDH.The relative amount of each protein was determined with densitometry software.The data are expressed as the mean ± SD.GraphPad Prism 7 was used for data analysis.Two groups were compared using an unpaired Student's t-test or Mann-Whitney test.ANOVA followed by Bonferroni's multiple comparison was used to compare the four groups to assess the statistical significance between the treated and untreated groups in all experiments.A p value less than 0.05 was considered statistically significant.Firstly, the mice were anaesthetized with sodium pentobarbital.Then, a midline laparotomy was performed to expose the hepatic portal blood vessels, and an atraumatic clamp was used to occlude the blood supply to the left lateral and median lobes of the liver.After 60 min of partial hepatic ischaemia, the clamp was removed to initiate hepatic reperfusion.Mice that underwent sham surgery served as the control group.After the indicated period of reperfusion, blood and liver samples were harvested for further analysis.A standard modular auto analyser was used to assess liver function by measuring serum AST and ALT concentrations 0, 4, 8, 12, and 24 h after reperfusion according to the manufacturer's protocol at the Central Laboratory of Changzheng Hospital, Shanghai, China.Tissue samples were fixed in 10% formaldehyde solution for 24 h and then embedded in paraffin.The tissues were cut into sections, and three were stained with haematoxylin and eosin and scored.For each tissue sample, at least 10 high-power fields per section were examined.Suzuki's histological grading was used to evaluate the histopathological damage.
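The group comparisons described in the methods above (unpaired Student's t-test or Mann-Whitney test for two groups, ANOVA with Bonferroni's multiple comparison for the four groups) were run in GraphPad Prism 7; a minimal SciPy sketch of the same workflow is given below. The serum ALT values are hypothetical placeholders, and the Bonferroni correction is applied by simply multiplying each raw p value by the number of pairwise comparisons.

```python
# Minimal sketch of the group comparisons described above, using hypothetical
# serum ALT values (U/L) for the four groups; the study itself used GraphPad Prism 7.
from itertools import combinations
import numpy as np
from scipy import stats

groups = {
    "sham":            np.array([38, 42, 35, 40, 37, 41]),
    "sham+spermidine": np.array([36, 39, 41, 37, 40, 38]),
    "IR":              np.array([820, 910, 770, 880, 940, 860]),
    "IR+spermidine":   np.array([430, 510, 390, 460, 480, 440]),
}

# Two-group comparison: unpaired t-test (or Mann-Whitney for non-normal data).
t, p = stats.ttest_ind(groups["IR"], groups["IR+spermidine"])
u, p_mw = stats.mannwhitneyu(groups["IR"], groups["IR+spermidine"])
print(f"IR vs IR+spermidine: t-test p={p:.3g}, Mann-Whitney p={p_mw:.3g}")

# Four-group comparison: one-way ANOVA followed by Bonferroni-corrected
# pairwise t-tests (raw p multiplied by the number of comparisons, capped at 1).
f, p_anova = stats.f_oneway(*groups.values())
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_raw = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: Bonferroni p = {min(1.0, p_raw * len(pairs)):.4g}")
```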
"Total liver RNA was extracted with TRIzol reagent according to the manufacturer's instructions, and then cDNA was synthesized with oligo d and the Superscript III Reverse Transcriptase Kit.Quantitative real-time RT-PCR analysis was performed using a StepOne Real-Time PCR System and the SYBR RT-PCR kit.All reactions were performed in a 20-μl volume in triplicate.Relative expression levels were normalized to those of GAPDH.Specificity was verified by melting curve analysis and agarose gel electrophoresis.The sequences of the primers are as follows:TNF-α, IL-6, IL-10, and GAPDH.The data were analysed by the comparative Ct method.Hepatic MPO activity is used as an index of neutrophil infiltration.Liver tissues were air-dried and fixed with acetone at −20 °C, and after blocking non-specific binding with foetal calf serum for 1 h at room temperature, the sections were incubated with a primary antibody against MPO overnight at 4 °C.After washing, the liver sections were immunostained with secondary antibodies for 1 h at 37 °C.The nuclei in the sections were stained with DAPI.Liver cryostat sections were prepared as previously reported , and then the slides were incubated with primary antibodies against LC3 overnight at 4 °C.Next, the liver sections were immunostained with secondary antibodies for 1 h at 37 °C.Finally, the immunostained and DAPI-stained tissue sections were observed under a fluorescence microscope.Liver samples were fixed with 2.5% glutaraldehyde in 0.1 mol/L PBS for 2 h and then sectioned and viewed under a transmission electron microscope.For autophagic vacuole quantification, 20 micrographs at a primary magnification of 15,000× from each sample were obtained by systematic random sampling."TUNEL staining assays were performed with an in situ cell death detection kit according to the manufacturer's instructions.The number of TUNEL-positive nuclei was counted in six randomly chosen images from non-overlapping areas of each group.The data are presented as the percentage of TUNEL-positive cells.To investigate the effects of spermidine on hepatic IR injury, we first induced warm hepatic IR injury using a well-established two-lobe IR model, and different groups were subjected to 60 min of partial hepatic ischaemia.At different time points post-reperfusion, serum ALT and AST concentrations were collected, and the results are shown.The serum levels of ALT and AST were significantly higher in the IR group than in the IR + spermidine group and peaked 8 h post-reperfusion.In addition, tissue sections were generated to observe histopathological liver damage under the microscope, and the results confirmed the above findings.Large necrotic areas as well as haemorrhagic changes were evident in the livers from the IR group.In contrast, the hepatic architecture of the livers from the IR + spermidine group was much better preserved, with only small and nonconfluent necrotic areas and attenuated haemorrhagic changes."Hepatocellular damage was graded according to Suzuki's criteria.These observations suggest that spermidine plays a protective role during hepatic IR-induced liver damage.It is well known that cytokines play critical roles in IR-induced hepatic injury, especially IL-6, IL-10, and TNF-α.TNF-α and IL-6 are key pro-inflammatory factors involved in the inflammatory response, which can aggravate liver inflammatory damage.Therefore, we compared the expression of these factors in IR + spermidine mice and IR mice.Notably, the mRNA levels of TNF-α and IL-6 were significantly higher in IR mice 
than in IR + spermidine mice.In sharp contrast, IL-10 was significantly higher in the IR + spermidine group mice.In addition, indirect immunohistochemical labelling of MPO was performed to assess the severity of neutrophil activation.Consistently, we detected a decrease in the number of MPO-positive cells in mice pre-treated with spermidine.These data indicate that spermidine can relieve IR-induced inflammation in the liver.LC3 is involved in phagophore formation and has been characterized as a distinctive autophagosome marker .As shown in Fig. 3A, LC3 expression was examined by fluorescence immunostaining, and the number of LC3-positive cells was notably increased in mice pre-treated with spermidine before IR injury compared to that in the IR group.The autophagic response was examined after spermidine treatment by assessing the accumulation of autophagosomes in hepatocytes by counting the number of autophagic vacuoles under a transmission electron microscope.Compared with the level in the liver tissues of IR mice, the level of autophagic vacuoles was much higher in the liver tissues of IR + spermidine mice.To verify these observations, LC3 and Beclin-1 were evaluated by Western blotting, and the results showed that the levels of Beclin-1 and LC3-II/LC3-I were remarkably elevated after IR injury.Notably, this elevation was much more apparent in mice treated with spermidine prior to IR injury.Taken together, these results show that autophagy is induced by spermidine to relieve IR injury.Mammalian target of rapamycin is a conserved serine/threonine kinase that regulates cell growth and autophagy.To achieve a better understanding of the molecular mechanisms underlying spermidine-mediated autophagy, we determined the possible involvement of signalling pathways.As shown, compared with the IR groups, the expression levels of p-AMPK and p-ULK1 were obviously increased in spermidine pre-treated mice upon IR insult, whereas the expression level of p-mTOR was significantly lower.To further confirm that spermidine-induced autophagy was activated via the AMPK-mTOR-ULK1 pathway, we treated IR mice with spermidine in the presence or absence of compound C or rapamycin.As expected, the compound C attenuated spermidine-induced p-AMPK and p-ULK1 upregulation, and upregulated p-mTOR level.On the contrary, rapamycin further potentiated spermidine-induced downregulation of p-mTOR level but increased ULK1 upregulation.Collectively, these results suggest that spermidine pre-treatment activates autophagy via the AMPK-mTOR-ULK1 pathway upon liver IR injury.Apoptosis, also called programmed cell death, plays an important role in the liver after IR injury.In the present experiment, hepatocellular apoptosis was detected in the liver by TUNEL staining, and the results showed that the number of TUNEL-positive cells was significantly lower in the IR + spermidine group than in the IR group, suggesting that hepatocellular apoptosis was significantly reduced by spermidine pre-treatment.Western blotting was used to assess the expression of the pro-apoptotic proteins Bax and cleaved caspase-3 during liver IR injury.The results showed that spermidine pre-treatment effectively inhibited the expression of Bax and cleaved caspase-3, which further indicated that spermidine attenuated IR-induced apoptosis.Hepatic IR injury is a pathological process that is magnified by the ensuing inflammatory response and persists throughout an increase in cell death, which can result in severe liver injury and the failure of liver transplant 
surgery .Unfortunately, preventing and attenuating IR injury is an unmet clinical need.Thus, identifying effective therapeutic approaches to interrupt this vicious cycle between the pro-inflammatory response and cell death is clinically significant for ameliorating IR-related liver damage .It is now well recognized that a protective stimulus can be applied at the onset of reperfusion to attenuate reperfusion injury.In this study, for the first time, we showed that spermidine pre-treatment may play a protective role during liver IR and may attenuate hepatocyte injury by regulating autophagy.Spermidine is involved in a considerable number of biological processes in organisms, such as the regulation of the immune response, cell proliferation, tissue regeneration, and organ development .Furthermore, it has been reported that spermidine extends lifespan and healthspan by maintaining proteostasis through the stimulation of cytoprotective autophagy .Although tissue concentrations of spermidine decline in an age-dependent manner in both model organisms and humans, the levels of spermidine can be upregulated through oral supplementation, leading to suppressed inflammation and improved longevity .It is worth noting that dietary spermidine can be rapidly resorbed from the intestine and distributed throughout the body without being degraded, and spermidine shows no adverse effects in mice due to life-long administration; thus, we chose to administered spermidine orally to our experimental mice .Increasing evidence has indicated that spermidine changes protein acetylation to regulate autophagy and confers protection in different organs and model systems , while its role in liver IR still remains poorly understood.In our study, the lower levels of histological damage and reduced serum enzyme levels demonstrated that spermidine pre-treatment significantly preserved liver function; this was in parallel with a decrease in neutrophil accumulation and pro-inflammatory cytokines, indicating that spermidine inhibits the inflammatory response, which is consistent with previous studies .In addition, electron microscopy and immunofluorescence staining were used to detect autophagy generation and confirmed that spermidine pre-treatment further promoted autophagy in the damaged liver.Autophagy, as a degradative pathway that is conserved in eukaryotic cells, is responsible for the turnover of damaged organelles and long-lived proteins.Once living organisms are exposed to radical environmental changes such as nutrient starvation, autophagy is rapidly activated, and autophagosomes fuse with lysosomes to degrade long-lived intracellular proteins and remove harmful cellular components .Previous reports have found that the protective roles of spermidine are strongly linked to the activation of autophagy .Our study revealed that spermidine administration distinctly induces autophagy in the damaged liver, suggesting that spermidine may restrain inflammatory factor generation through the regulation of autophagy.Furthermore, the main mode of hepatocyte death during IR is apoptosis and necrosis, while autophagy and apoptosis are in dynamic equilibrium, and autophagy usually precedes and inhibits apoptosis to protect cells.Consistently, our other studies demonstrated that spermidine administration before IR insult significantly reduces apoptosis in the liver, as evidenced by the decreased number of TUNEL-positive hepatocytes and the expression of pro-apoptotic proteins, which were suppressed by autophagy induction.To date, the 
detailed molecular mechanisms of autophagy in liver IR injury are unclear.Among the numerous components involved in the regulation of autophagy, mTOR, especially mTOR complex 1, is acknowledged as a key inhibitor that regulates autophagy in response to cellular physiological conditions and environmental stress in a coordinated manner.Notably, it has been reported that, like rapamycin, an immunosuppressant with protective and autophagy-stimulating properties, spermidine has the potential to inhibit mTOR .Moreover, it is well known that AMPK is a negatively regulated factor that is upstream negatively regulated factor of the mTOR pathway.Similar to autophagy, AMPK is initiated by nutritional deficiency and plays a vital role in maintaining glucose balance, promoting anti-inflammation, and regulating senescence .It has been reported that both the activation of AMPK and the suppression of mTOR can result in the dephosphorylation of ULK1, while ULK1 constantly associates with autophagy-related genes and subsequently induces autophagy .In this study, we showed that spermidine treatment increased the phosphorylation of AMPK and ULK1 but remarkably decreased the phosphorylation of mTOR in the liver after IR, but has the opposite effect in the presence of compound C, suggesting that spermidine can induce autophagy via the AMPK-mTOR-ULK1 pathway following liver IR injury.In summary, our research provides the first evidence that spermidine can ameliorate liver IR injury, and that this is dependent on the regulation of autophagy via the AMPK-mTOR-ULK1 pathway.This evidence may provide a novel therapeutic strategy for IR injury.The authors declare no conflict of interest. | Background: Hepatic ischaemia-reperfusion (IR) injury is a common clinical challenge lacking effective therapy. The aim of this study was to investigate whether spermidine has protective effects against hepatic IR injury through autophagy. Methods: Liver ischaemia reperfusion was induced in male C57BL/6 mice. Then, liver function, histopathology, cytokine production and immunofluorescence were evaluated to assess the impact of spermidine pre-treatment on IR-induced liver injury. Autophagosome formation was observed by transmission electron microscopy. Western blotting was used to explore the underlying mechanism and its relationship with autophagy, and TUNEL staining was conducted to determine the relationship between apoptosis and autophagy in the ischaemic liver. Results: The results of the transaminase assay, histopathological examination, and pro-inflammatory cytokine production and immunofluorescence evaluations demonstrated that mice pre-treated with spermidine showed significantly preserved liver function. Further experiments demonstrated that mice administered spermidine before the induction of IR exhibited increased autophagy via the AMPK-mTOR-ULK1 pathway, and TUNEL staining revealed that spermidine attenuated IR-induced apoptosis in the liver. Conclusions: Our results provide the first line of evidence that spermidine provides protection against IR-induced injury in the liver by regulating autophagy through the AMPK-mTOR-ULK1 signalling pathway. These results suggest that spermidine may be beneficial for hepatic IR injury. |
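The comparative Ct analysis mentioned in the qRT-PCR methods of the study above normalizes each target gene to GAPDH and expresses it relative to a calibrator group; the sketch below shows the standard 2^-ΔΔCt (Livak) calculation with hypothetical Ct values, using the sham group as calibrator. The numbers are illustrative only and do not correspond to the measured data.

```python
# Hedged sketch of the comparative Ct (2^-ddCt, Livak) calculation used for the
# qRT-PCR data; Ct values below are hypothetical and serve only to illustrate
# normalization to GAPDH and to the sham (calibrator) group.

def relative_expression(ct_target, ct_gapdh, ct_target_cal, ct_gapdh_cal):
    d_ct_sample = ct_target - ct_gapdh            # normalize to reference gene
    d_ct_calibrator = ct_target_cal - ct_gapdh_cal
    dd_ct = d_ct_sample - d_ct_calibrator         # compare to calibrator group
    return 2.0 ** (-dd_ct)                        # fold change vs calibrator

# Hypothetical mean Ct values for TNF-alpha:
fold_ir = relative_expression(ct_target=24.1, ct_gapdh=17.9,
                              ct_target_cal=27.6, ct_gapdh_cal=18.0)
fold_ir_spd = relative_expression(ct_target=25.8, ct_gapdh=18.1,
                                  ct_target_cal=27.6, ct_gapdh_cal=18.0)
print(f"TNF-alpha fold change: IR={fold_ir:.1f}, IR+spermidine={fold_ir_spd:.1f}")
```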
365 | Stability metrics for optic radiation tractography: Towards damage prediction after resective surgery | With diffusion tensor imaging the morphology of brain tissue, and especially the white matter fiber bundles, can be investigated in vivo, offering new possibilities for the evaluation of brain disorders and preoperative counseling.The optic radiation is a collection of white matter fiber bundles which carries visual information from the thalamus to the visual cortex.Numerous studies have succeeded in reconstructing the OR with DTI, by tracking pathways between the lateral geniculate nucleus and the primary visual cortex.In the curved region of the OR, configurations with multiple fiber orientations appear, such as crossings, because white matter tracts of the temporal stem intermingle with the fibers of the Meyer's loop.Therefore, it is especially challenging to reconstruct the Meyer's loop, which is the most vulnerable bundle of the OR in case of surgical treatment of epilepsy in which part of the temporal lobe is removed.However, a limitation of DTI is that it can extract only a single fiber direction from the diffusion MRI data.With the advent of multi-fiber diffusion models it has become possible to describe regions of crossing fibers such as the highly curved Meyer's loop.Tractography based on constrained spherical deconvolution has been shown to have good fiber detection rates and has been applied in several studies to reconstruct the OR.Furthermore, probabilistic tractography is considered superior in comparison to deterministic tractography for resolving the problem of crossing fibers in the Meyer's loop.The probabilistic tracking results between the LGN and the visual cortex for a healthy volunteer are illustrated in Fig. 1.The tracking results are shown in a composite image along with other brain structures such as the ventricular system.In the current study the validity of the distance measurements is evaluated based on pre- and post-operative comparisons of the reconstructed OR of patients who underwent a TLR.It is investigated whether it is feasible to assess pre-operatively for each individual patient the potential damage to the OR as an adverse event of the planned TLR.The deviation between the prediction of the damage to the OR and the measured damage in a post-operative image is compared, giving an indication of the overall error in distance measurement.The main contributions of this paper are: (i) quantification of spurious streamlines, by providing FBC measures that quantify how well-aligned a streamline is with respect to neighboring streamlines; (ii) stability metrics for the standardized removal of spurious streamlines near the anterior tip of the Meyer's loop; (iii) robust estimation of the variability in ML-TP distance by a test–retest evaluation; and (iv) demonstration of the importance of the FBC measures by retrospective prediction of the damage to the OR based on pre- and post-operative reconstructions of the OR of epilepsy surgery candidates.Eight healthy volunteers without any history of neurological or psychiatric disorders were included in our study.All volunteers were male and in the age range of 21–25 years.Furthermore, three patients were included who were candidates for temporal lobe epilepsy surgery.For each patient a standard pre- and post-operative T1-weighted anatomical 3D-MRI was acquired.Patient 1 was diagnosed with a right mesiotemporal sclerosis and had a right TLR, including an amygdalohippocampectomy.Patient 2 was diagnosed with a left mesiotemporal sclerosis
and had an extended resection of the left temporal pole.Lastly, Patient 3 was diagnosed with a cavernoma located in the basal, anterior part of the left temporal lobe and had an extended lesionectomy.All patients had pre- and post-operative perimetry carried out by consultant ophthalmologists.The study was approved by the Medical Ethical Committee of Kempenhaeghe, and informed written consent was obtained from all subjects.Data was acquired on a 3.0 T magnetic resonance scanner, using an eight-element SENSE head coil.A T1-weighted scan was obtained for anatomical reference using a Turbo Field Echo sequence with timing parameters for echo time and repetition time.A total of 160 slices were scanned with an acquisition matrix of 224 × 224 with isotropic voxels of 1 × 1 ×1 mm, leading to a field of view of 224 × 224 × 160 mm.Diffusion-weighted imaging was performed using the Single-Shot Spin-Echo Echo-Planar Imaging sequence.Diffusion sensitizing gradients were applied, according to the DTI protocol, in 32 directions with a b-value of 1000 s/mm2 in addition to an image without diffusion weighting.A total of 60 slices were scanned with an acquisition matrix of 112 × 112 with isotropic voxels of 2 × 2 ×2 mm, leading to a field of view of 224 × 224 × 120 mm.A SENSE factor of 2 and a halfscan factor of 0.678 were used.Acquisition time was about 8 min for the DWI scan and 5 min for the T1-weighted scan.The maximal total study time including survey images was 20 min.The preprocessing of the T1-weighted scan and DWI data is outlined in Fig. 2.All data preprocessing is performed using a pipeline created with NiPype, which allows for large-scale batch processing and provides interfaces to neuroimaging packages.The T1-weighted scan was first aligned to the AC-PC axis by affine coregistration to the MNI152 template using the FMRIB Software Library v5.0.Secondly, affine coregistration, considered suitable for within-subject image registration, was applied between the DWI volumes to correct for motion.Eddy current induced distortions were corrected within the Philips Achieva scanning software and did not require further post-processing.The DWI b=0 volume was subsequently affinely coregistered to the axis-aligned T1-weighted scan using normalized mutual information, and the resulting transformation was applied to the other DWI volumes.The DWI volumes were resampled using linear interpolation.After coregistration, the diffusion orientations were reoriented using the corresponding transformation matrices.Probabilistic tractography of the OR is based on the Fiber Orientation Density function, first described by Descoteaux et al.With probabilistic tractography, streamlines are generated between two regions of interest: the LGN, located in the thalamus, and the primary visual cortex.The LGN was defined manually on the axial T1-weighted image using anatomical references using a sphere of 4 mm radius, corresponding to a volume of 268 mm3.The ipsilateral primary visual cortex was manually delineated on the axial and coronal T1-weighted image."The primary visual cortex ROI's used in this study have an average volume of 1844 mm3.The FOD function describes the probability of finding a fiber at a certain position and orientation.In the current study the FOD function is estimated using CSD, which is implemented in the MRtrix software package.During tracking, the local fiber orientation is estimated by random sampling of the FOD function.In the MRtrix software package, rejection sampling is used to sample the FOD 
function in a range of directions restricted by a curvature constraint imposed on the streamlines.Streamlines are iteratively grown until no FOD function peak can be identified with an amplitude of 10% of the maximum amplitude of the FOD function.In MRtrix tracking, 20,000 streamlines are generated, which provides a good balance between computation time and reconstruction ability.A step size of 0.2 mm and a radius of curvature of 1 mm were used.These settings are reasonable for our application of reconstructing the OR and are recommended by Tournier et al.The FOD function was fitted with six spherical harmonic coefficients, which is suitable for the DTI scanning protocol used in this study.Anatomical constraints are applied when reconstructing the OR in order to prevent the need for manual pruning of streamlines and to reduce a subjective bias.Firstly, streamlines are restricted within the ipsilateral hemisphere.Secondly, fibers of the OR are expected to pass over the temporal horn of the ventricular system.The ventricular system is manually delineated using ITK-SNAP image segmentation software.Streamlines that cross through the area superior-laterally to the temporal horn are retained."Thirdly, an exclusion ROI is created manually of the fornix to remove streamlines that cross this region, which is in close proximity to the LGN and Meyer's loop.Furthermore, in order to remove long anatomically implausible streamlines, the maximum length of the streamlines is set to 114 mm based on a fiber-dissection study of the OR by Peltier et al.The stability metrics to identify spurious streamlines are outlined in Fig. 2, top-right box."These metrics are used to provide a reconstruction of the OR that is robust against the presence of spurious streamlines, which occur especially near the anterior tip of the Meyer's loop as shown in Fig. 1.The application of these metrics is important to obtain a stable measurement of the ML-TP distance as indicated in Fig. 1.To control the removal of spurious streamlines the threshold parameter ϵ is introduced, which is defined as the lower bound criterion on RFBC that retains a streamline.More precisely, every streamline γi that meets the condition RFBCα ≥ ϵ is retained."However, a careful selection of this threshold is required in order to prevent an underestimation of the full extent of the Meyer's loop.A method is introduced for the standardized selection of the minimal threshold ϵselected through test–retest evaluation of the variability in ML-TP distance.To this end, probabilistic tractography of the OR is performed multiple times, followed by the computation of the RFBC measure in each repetition.Subsequently, a parameter sweep is performed in which ϵ is varied between 0 ≤ ϵ ≤ ϵmax where ϵmax corresponds to the state where all streamlines are removed from Γ.During every step of the parameter sweep, the ML-TP distance is calculated for all test–retest repetitions by computing the Hausdorff distance between the temporal pole and the OR.Using these distance measurements, the mean and the standard deviation of the ML-TP distance are determined for each value of ϵ.For the patients studied, the distance measurement outcomes are compared to the predicted damage of the OR after surgery, as outlined in Fig. 
2.The resection area is manually delineated in the post-operative T1-weighted image using ITK-SNAP.The resection length is measured from the temporal pole, at the anterior tip of the middle sphenoid fossa, up to the posterior margin of the resection.The predicted damage is determined by the distance between the pre-operative ML-TP distance and the resection length.The difference between the predicted damage and the observed damage, given by the distance between pre- and post-operative ML-TP distances, is named the margin of error.The margin of error indicates the maximal error in distance measurements, which includes both the variability in probabilistic tractography and unaccounted sources of error such as brain shift or distortions.The methodology for the robust reconstruction of the OR is available as an open source software package.The NiPype based pipeline for the basic processing of DW-MRI data, tractography, and FBC measures is available at https://github.com/stephanmeesters/DWI-preprocessing.An open source implementation of the FBC measures for the reduction of spurious streamlines described in Appendices A and B is available in the DIPY framework or as a C++ stand-alone application at https://github.com/stephanmeesters/spuriousfibers.Visualization was performed in the open source vIST/e tool.The effect of the removal of spurious streamlines on the ML-TP distance measurement using the FBC measures is demonstrated for eight healthy volunteers.For each volunteer the mean ML-TP distance and its standard deviation are listed in Table 1 for the left and right hemisphere, together with its corresponding test–retest variability.The additional value of the FBC measures for a robust ML-TP distance measurement is further evaluated for three patients who underwent a TLR.The parameter estimation based on test–retest evaluation is illustrated in Fig. 5 for the reconstructed OR of the left hemisphere for the eight healthy volunteers studied, showing for a range of parameter ϵ the standard deviation and the mean of the estimated ML-TP distance.The test–retest evaluation was performed with 10 repeated tractograms of the OR, which was empirically determined to be a good balance between group size and computation time.For all volunteers evaluated, a high standard deviation of the ML-TP distance was observed at low values of ϵ, which indicates the presence of spurious streamlines with a very low RFBC.The corresponding mean ML-TP distance reflects large jumps for an increase of the value of ϵ from 0 to 0.05, showing an average increase for the eight healthy volunteers of 8 mm.For each healthy volunteer the ϵselected is selected according to Eq.The ϵselected corresponds to a mean ML-TP distance that is depicted by the arrows in Fig. 
5 for the eight healthy volunteers studied.After the initial high variability of the ML-TP distance, a stable region occurred for all healthy volunteers in which the standard deviation was below 2 mm.The healthy volunteers 1, 5 and 4 indicated regions of instability for relatively high values of ϵ.This can be attributed to gaps within the reconstructed OR with a lower number of streamlines compared to the main streamline bundle.Lastly, it can be observed that for volunteer 4 the selected ϵ is large compared to the other healthy volunteers.However, for this volunteer the mean ML-TP distance is stable from ϵ = 0.15 onward and therefore does not reflect an overestimation of the ML-TP distance.On the group level the ML-TP distances listed in Table 1 are on average 31.7 ± 4.7 mm for the left hemisphere and 28.4 ± 3.8 mm for the right hemisphere.The mean variability in probabilistic tractography on the individual level for the group of healthy volunteers is 1.0 mm and 0.9 mm for the left and right hemispheres, respectively.Large deviations in ML-TP distance were observed between the left and right hemispheres, especially, for volunteers 3, 7 and 8.The importance of the robust ML-TP distance measurement is illustrated for three patients who underwent resective epilepsy surgery.Fig. 6 displays the pre-operative and post-operative reconstructions of the OR and indicates for both hemispheres the estimated ML-TP distances.Given is also the resection length and the pre-operative reconstruction of the OR along with the predicted damage, indicated by the red colored streamlines.The pre- and post-operative distance measurements and the corresponding values of ϵ are listed for both the left and right hemisphere in Table 2.Furthermore, the predicted damage is listed in Table 2 and reflects the distance between the pre-operative ML-TP distance and the resection length.Finally, the margin of error is indicated, defined as the difference between the predicted damage and the observed damage."The tractography results indicate that for patients 1 and 2 the OR is damaged, likely resulting in a disrupted Meyer's loop for both patients.The perimetry results of these patients indicated a visual field deficit of 60 degrees for patient 2, which was smaller than the VFD measured for patient 1 at 90 degrees despite the larger resection of patient 2.Note, that for patient 3, for whom there was no damage to the OR, the reconstruction of the OR is well reproducible for both hemispheres, with a difference of maximally 3.0 mm including the variability in ML-TP distance.The difference between the predicted damage and the observed damage was small for these patients, indicating an maximum error of the predicted damage of the OR of 5.6 mm or less.The reproducibility of the reconstruction results obtained following the procedures as here described is further confirmed by the unaffected hemispheres of each individual patient, which show a similar anterior extent for both pre- and post-operative reconstructions of the OR.The ML-TP distance of the OR reconstructed for the OR of the non-pathologic hemisphere showed deviations for the two different scans of maximally 3.1 mm, 2.7 mm and 3.0 mm for Patient 1, Patient 2 and Patient 3, respectively, including the variability measure.The overall mean ML-TP distance pre-operatively is 31.4 ± 3.5 mm for the left hemisphere and 30.4 ± 1.4 mm for the right hemisphere.The mean variability in probabilistic tractography is 0.5 mm and 0.7 mm for the left and right hemispheres, 
respectively."Stability metrics were introduced for a robust estimation of the distance between the tip of the Meyer's loop and temporal pole.Standardized removal of spurious fibers was achieved, firstly by quantification of spurious streamlines using the FBC measures, and secondly by a procedure for the automatic selection of the minimal threshold ϵselected on the FBC measures."The results presented indicate that a reliable localization of the tip of the Meyer's loop is possible and that it is feasible to predict the damage to the OR as result of a TLR performed to render patients seizure free.For the estimation of the FOD function, CSD was applied on diffusion data obtained with the prevalent DTI acquisition scheme, thus allowing for a broad clinical applicability.In the current study, the DTI acquisition scheme has a relatively low number of directions of diffusion."Since the tip of the Meyer's loop has a high curvature, its reconstruction could especially benefit from the HARDI acquisition scheme, which measures a larger number of directions of diffusion such as 64 or 128 directions.However, unlike DTI, HARDI is not commonly applied within a medical MRI diagnosis.Instead, the DTI data may be improved by applying contextual enhancement, such as the one available in the DIPY framework.Additionally, in order to improve the image quality of the diffusion measurements it may be beneficial to apply denoising.This may, for example, be achieved by a recently proposed denoising approach based on non-local principal component analysis.The MRtrix software package was employed for the estimation of the FOD function and for performing probabilistic tractography.As an alternative to the rejection sampling method that is implemented in MRtrix for sampling the FOD during tracking, the importance sampling method as introduced in Friman et al. could be used.In contrast to the hard constraints used in rejection sampling, the importance sampling method provides a soft constraint on the space of positions and orientations, which is in line with the mathematical framework introduced in this paper.The seed regions of the LGN and visual cortex are highly influential for the tractography results.It may be possible to improve the fiber orientation estimation at the white matter to gray matter interface, such as near the LGN and visual cortex ROIs, by applying the recently introduced informed constrained spherical deconvolution.iCSD improves the FOD by modifying the response function to account for non-white matter partial volume effects, which may improve the reconstruction of the OR.In the current study, the LGN was identified manually and could possibly be improved by using a semi-automatic method such as presented by Winston et al.Another approach proposed by Benjamin et al. is to place different ROIs around the LGN and within the sagittal stratum, or by seeding from the optic chiasm."A recent study suggested using seeding around the Meyer's loop with an a-priori fiber orientation. 
"In order to remove spurious fibers while preventing an underestimation of the full extent of the Meyer's loop, a procedure for estimating ϵselected was introduced based on the test–retest evaluation of the variability in ML-TP distance.Using this methodology, a robust measurement of the ML-TP distance was achieved in the left and right hemispheres of eight healthy volunteers.The variability in the reconstruction results of the OR stems mostly from data acquisition.Therefore, ϵselected may vary between pre- and post-operative scans in the non-affected hemisphere.The mean ML-TP distances for both brain hemispheres, measured to be 30.0 ± 4.5 mm for the healthy volunteer group and 30.9 ± 2.4 mm for the patient group, are within the range of the ML-TP distance reported on by Ebeling et al. and outcomes from other OR reconstruction methodologies.For example, ConTrack showing 28 ± 3.0 mm, Streamlines Tracer technique showing 37 ± 2.5 mm and 44 ± 4.9 mm, Probability Index of Connectivity showing 36.2 ± 0.7 mm, tractography on Human Connective Project multi-shell data showing 30.7 ± 4.0 mm, and MAGNET showing 36.0 ± 3.8 mm.It appeared, furthermore, that the mean ML-TP for both the healthy volunteers and the patients was larger in the left hemisphere compared to the right hemisphere, which is not consistent with a recent study by James et al. that indicated a significantly higher ML-TP in the right hemisphere.A possible limitation of the parameter estimation procedure is that its application is tailored towards OR tractography.Unlike the FBC measures, which can be used for any tractogram, the parameter estimation procedure may not be generally applicable for other fiber bundles since a distance measurement between well-defined landmarks is required.However, a possible approach for generalized parameter selection is to fit the streamline bundle on a manifold such as used by BundleMAP and optimize ϵselected by minimizing the spread on the manifold.The methodology for the estimation of the ML-TP distance is applied for the surgical candidates, firstly to assess the validity of the distance measurements, and secondly to indicate its additional value for resective epilepsy surgery.An indication of the validity of distance measurements was given by the margin of error, which was the largest for patient 2 amounting to 5.6 mm.The margin of error observed for the three patients can be lowered, e.g. 
by correcting for brain shifts that occur due to resection and CSF loss and by correcting for distortions present in MR echo-planar imaging.The measurement of the ML-TP distance may be further complicated due to a shifted location of the temporal pole, or even its complete absence.However, the reproducibility of the pre- and post-operative reconstructions of the OR in the non-pathological hemisphere indicates that the effects of brain shift and imaging distortions may be limited.Small deviations in the ML-TP distance were seen, which suggests a good reproducibility, albeit for a limited number of patients.In the standardized estimation procedure of ϵselected the maximal variability was set at 2 mm, both for the OR reconstructions of the healthy volunteers and the patients, which is based on the maximal surgical accuracy that can be achieved during standard or tailored anterior temporal lobectomy before the leakage of cerebrospinal fluid.A surgical accuracy below 2 mm has been reported if a stereotactic frame is used or robotic assistance is involved.After the leakage of CSF however, cortical displacement up to 24 mm may be seen, while other sources of inaccuracy are likely present such as echo-planar imaging distortion, partial volume effects, and image noise.However, despite these inaccuracies the pre- and post-operative comparison of the OR reconstructions indicates that the procedures developed in this study are a valid tool to assess the robustness of the distance measurements.It appeared that the robust estimation of the ML-TP distance enabled to predict the damage of the OR after surgery, which was concordant with the actual damage for the three patients studied.Based on the damage prediction the margin of error was estimated, giving an indication of the overall error in distance measurement."The perimetry results of two of the patients studied indicated damage of either the left or right visual field, corresponding to a disruption of the Meyer's loop.A relatively small VFD was indicated for patient 2 despite the large temporal lobe resection.This result may be indicative of the large inter-patient variability in OR anatomy and function, but may also be the result of the non-standardized procedures for visual field testing in-between hospitals.It is recommended to evaluate the developed methodology further in a clinical trial including a sizable group of patients who are candidate for a TLR in order to be able to assess what the relation is between a VFD and the damage to the OR after a TLR."It was shown for a group of healthy volunteers included in this study that standardized removal of spurious streamlines provides a reliable estimation of the distance from the tip of the Meyer's loop to the temporal pole that is stable under the stochastic realizations of probabilistic tractography.Pre- and post-operative comparisons of the reconstructed OR indicated, furthermore, the validity of a robust ML-TP distance measurement to predict the damage to the OR as result of resective surgery, and the high reproducibility of the reconstructions of the non-pathological hemisphere.In conclusion, the developed methodology based on diffusion-weighted MRI tractography is a step towards applying optic radiation tractography for pre-operative planning of resective surgery and for providing insight in the possible adverse events related to this type of surgery.The authors declare that the research was conducted in absence of any commercial or financial relationships that could be construed as a possible 
conflict of interest. | Background An accurate delineation of the optic radiation (OR) using diffusion MR tractography may reduce the risk of a visual field deficit after temporal lobe resection. However, tractography is prone to generate spurious streamlines, which deviate strongly from neighboring streamlines and hinder a reliable distance measurement between the temporal pole and the Meyer's loop (ML-TP distance). New method Stability metrics are introduced for the automated removal of spurious streamlines near the Meyer's loop. Firstly, fiber-to-bundle coherence (FBC) measures can identify spurious streamlines by estimating their alignment with the surrounding streamline bundle. Secondly, robust threshold selection removes spurious streamlines while preventing an underestimation of the extent of the Meyer's loop. Standardized parameter selection is realized through test–retest evaluation of the variability in ML-TP distance. Results The variability in ML-TP distance after parameter selection was below 2 mm for each of the healthy volunteers studied (N = 8). The importance of the stability metrics is illustrated for epilepsy surgery candidates (N = 3) for whom the damage to the Meyer's loop was evaluated by comparing the pre- and post-operative OR reconstruction. The difference between predicted and observed damage is in the order of a few millimeters, which is the error in measured ML-TP distance. Comparison with existing method(s) The stability metrics are a novel method for the robust estimate of the ML-TP distance. Conclusions The stability metrics are a promising tool for clinical trial studies, in which the damage to the OR can be related to the visual field deficit that may occur after epilepsy surgery. |
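The threshold-selection rule summarized above (take the smallest RFBC threshold for which the test–retest variability of the ML-TP distance stays within the 2 mm bound) could be expressed along the following lines. The exact selection equation from the paper is not reproduced here, so this sketch reflects only the criterion described in the text; the array names and the synthetic usage data are hypothetical.

```python
import numpy as np


def select_epsilon(mltp, eps_grid, max_std_mm=2.0):
    """Pick the minimal RFBC threshold giving a stable ML-TP distance.

    mltp     : array of shape (n_repeats, n_eps); ML-TP distance in mm measured
               on repeated probabilistic tractograms for each candidate epsilon.
    eps_grid : array of shape (n_eps,) with the candidate epsilon values.
    """
    std = mltp.std(axis=0)                      # test-retest variability per epsilon
    stable = np.flatnonzero(std <= max_std_mm)  # indices meeting the 2 mm bound
    if stable.size == 0:
        raise ValueError("no epsilon satisfies the variability bound")
    i = stable[0]                               # smallest epsilon in the stable region
    return eps_grid[i], mltp[:, i].mean(), std[i]


# Hypothetical usage: 10 repeated tractograms, epsilon swept from 0 to 0.3.
eps_grid = np.linspace(0.0, 0.3, 31)
mltp = np.random.default_rng(0).normal(31.0, 1.0, size=(10, eps_grid.size))
eps_sel, mean_mltp, std_mltp = select_epsilon(mltp, eps_grid)
```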
366 | Mind the gap! Extraluminal percutaneous-endoscopic rendezvous with a self-expanding metal stent for restoring continuity in major bile duct injury: A case series | The majority of BDIs occur as iatrogenic injury during cholecystectomy.Non-iatrogenic BDI after penetrating or blunt abdominal trauma is rare, reported in 0.1% of trauma admissions .The standard approach to BDI involving disruption or complete transection of the common bile or hepatic ducts is hepaticojejunostomy .However, surgical repair may result in significant morbidity and mortality .In patients with iatrogenic BDI, reconstruction may be deferred for uncontrolled sepsis and to optimize the patient’s condition .In this interval, the biliary fistula is controlled with percutaneous drains resulting in external loss of bile production, and possible fluid and electrolyte imbalances .Maintaining or improving nutritional status during this period can be difficult due to the luminal absence of bile .Over 90% of patients with non-iatrogenic BDI have other associated intra-abdominal injuries .In a substantial percentage of these patients, initial surgery will be according to damage control principles.In this setting, the initial management of the BDI is similar to iatrogenic BDI with external drainage of bile and definitive repair at a later stage.Several case reports and small case series have suggested a minimally invasive approach using percutaneous transhepatic and/or endoscopic access for establishing biliary continuity .The use of a single modality to internalize bile drainage requires bridging the defect with a guidewire enabling placement of a PTBC or transpapillary stent.Rendezvous PTC/ERC has been described in patients where transpapillary cannulation of the bile duct is unsuccessful using ERC alone .A guide-wire is passed transhepatic with the rendezvous occurring in the duodenum once the guidewire passes the papilla allowing ERC interventions .We describe an extraluminal PTC/ERC rendezvous technique with placement of a fully covered SEMS for the acute management BDIs with substantial substance loss.The following presentation of two patient cases has been reported according to the PROCESS guidelines .Two patients with BDIs and substantial tissue loss, one iatrogenic and one non-iatrogenic, were included.Patient data were retrospectively retrieved from a prospective ERCP registry consisting of patients treated at a public academic hospital in Cape Town, South Africa.Additional information was collected from patient records.The demographic and clinical characteristics of the patients are presented in Table 1 and described below.In accordance with the declaration of Helsinki this case series was retrospectively registered in a publicly accessible database.Institutional ethics approval was obtained.This intervention was performed by the same senior interventional radiologist and experienced endoscopists.After establishing the extent of the injury on cross-sectional imaging, a PTC is performed to confirm the anatomy, specifically assessing whether the biliary confluence is intact and if the length of remaining proximal CHD will allow placement of a fully covered SEMS.At PTC, a PTBC is passed through the severed bile duct into the subhepatic space for drainage of collections.Step 1 of the rendezvous intervention is performing an ERC with a distal cholangiogram.Matching the PTC and ERC imaging, the extent of substance loss is determined.In Step 2, a stone retrieval basket is passed endoscopically through the distal end of 
the bile duct and opened in the subhepatic space.In Step 3, a standard 420 cm ERCP guidewire is passed transhepatic via the PTBC into the subhepatic space.In Step 4, the wire is caught in the basket under fluoroscopic guidance and pulled through the working channel of the duodenoscope out of the oral cavity after which the PTBC is removed.In Step 5, a fully covered SEMS is deployed endoscopically, bridging the defect.Care is taken to have the proximal stent border distal to the biliary confluence, ensuring bilateral biliary drainage.In Step 6, using the guidewire already in place, a new antegrade placed PTBC is deployed through the stent to prevent bile leak from the puncture on the liver surface, maintain percutaneous access for further intervention if needed and catch a migrating stent, should it occur.A 33-year-old morbidly obese female underwent an elective LC and was diagnosed with an iatrogenic BDI on post-operative day 22.She was taken for an exploratory laparotomy with washout and drainage and referred to our unit for further management five days later.Cross-sectional imaging confirmed a complete transection of the extrahepatic bile duct with 10 mm loss of substance.Due to uncontrolled sepsis the decision was made to defer definitive treatment.An ERC was performed that showed extravasation of contrast into the subhepatic space and no filling of the proximal bile ducts.After placement of a transhepatic drain an extraluminal rendezvous procedure was performed and a 10 × 80 mm SEMS placed, bridging the defect.A 40-year-old male presented with a trans-axial thoraco-abdominal gunshot wound.He was hemodynamically unstable, and a damage control laparotomy was performed.Gastric and diaphragmatic injuries were repaired and a grade IV liver injury was packed .The packs were removed after 24 h and a closed suction drain was left in the subhepatic space.Six days after laparotomy, CT abdomen showed non-perfusion of liver segments 2 and 3, a large central intrahepatic hematoma and a subhepatic collection.A percutaneous ultrasound-guided puncture of the collection returned bile and an 8 Fr pigtail drain was placed.He subsequently developed a persistent bile leak and rising serum bilirubin.ERC demonstrated extravasation of contrast into the subhepatic space and no filling of the proximal bile ducts.MRCP showed complete disruption of the extrahepatic bile duct but an intact confluence.A PTC was performed noting a porto-biliary fistula and an 8 Fr PTBC was positioned into the subhepatic space.At extraluminal PTC/ERC rendezvous a 10 × 80 mm fully covered SEMS was placed, bridging the defect.The patient developed haemobilia 48 h later.Angiography showed a bleeding right hepatic artery false aneurism successfully managed with an endovascular stent.Patient 1 had no post-procedural complications and following embolization, Patient 2 made an uneventful recovery requiring no additional intervention.Pre- and post-rendezvous blood tests are shown in Table 2.Notably, albumin normalized within 3 months in both patients.The PTBC for Patient 1 was removed after 4 months and after 3 months for Patient 2.Both continue to be followed with regular outpatient visits and liver function tests.They remain asymptomatic after 12 and 18 months of follow-up respectively, with no long-term complications.Although the initial intention was internalization of bile flow while surgical repair was delayed, we have subsequently embarked on a definitive endoscopic strategy with stent changes every 3 months.In the absence of any 
previous experience or guidelines for this situation, an arbitrary total stent time of 24 months was chosen.The described technique may serve as a “bridge to surgery” strategy for patients where definitive management of BDIs are deferred.PTC/ERC rendezvous is particularly useful for patients who have previously failed ERC and/or PTC alone and in whom immediate surgical repair is not an option.An important advantage of internalization in this approach is that it prevents external bile fluid loss, optimizing nutritional status, and preventing electrolyte abnormalities and dehydration .As forthcoming in the trauma patient, although rare, extrahepatic BDIs in trauma can present a formidable challenge due to the high incidence of associated injuries, especially vascular injuries.Often, definitive surgery is delayed for several weeks to months and even after delay can be technically difficult.In this group of patients, the definitive PTC/ERC rendezvous approach has the potential to minimize short- and long-term complications associated with non-iatrogenic BDIs.The technique of extraluminal rendezvous has been described previously.Odemis et al. performed an intraperitoneal rendezvous procedure with placement of a single plastic stent into the right biliary system at the time of rendezvous in a patient with a complex BDI .Over the next year, multiple additional plastic stents were place with resolution of the stricture.However, evidence of long-term success using a definitive stent strategy in these patients is lacking, especially long-term results for SEMS.In a series of 22 patients with complete bile duct transection after LC, 18 patients were asymptomatic and 4 underwent surgical repair after a mean follow-up period of 5 years .Schreduer et al. found a long-term success rate of 55% in 47 patients after a median follow-up of 40 months .Of note, only 31 of 47 patients had a complete transection of the bile duct and both studies report only patients with iatrogenic BDIs.Additionally, in both studies plastic stents were used.The use of SEMS for major BDIs in the acute setting has to our knowledge not been described previously.Although long-term results need to be confirmed the creation of a lumen substantially larger than plastic stents may contribute to better long-term results in these patients.This is the first study to report the use of intraperitoneal PTC/ERC rendezvous with placement of a fully-covered SEMS to immediately bridge the gap in BDI, allowing internal biliary drainage in the acute management of patients with BDI and substance loss.The authors have no financial or personal relationships resulting in conflicts of interest to disclose.There was no funding provided to complete this research.This study was approved by the Human Research Ethics Committee at the University of Cape Town.The Human Research Ethics Committee requires written consent to be obtained from all patients included in institutional databases, from which data for this report was retrieved.Written informed consent was obtained from the patients specifically for publication of this case report, including imaging used in the case presentations.A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.Jessica Lindemann: Data collection and analysis, writing the paper, critical revisions, final approval.Christo Kloppers: Study concept, data collection, critical revisions, final approval.Sean Burmeister: Study concept, critical revisions, final approval.Marc Bernon: Study 
concept, data collection, critical revisions, final approval.Eduard Jonas: Study concept, data analysis, writing the paper, critical revisions, final approval.This research was retrospectively registered at researchregistry.com with the UIN: 4868.Not commissioned, externally peer-reviewed. | Introduction: Treatment of major iatrogenic and non-iatrogenic bile duct injury (BDI) often requires delayed surgery with interim external biliary drainage. Percutaneous transhepatic cholangiography (PTC) with biliary catheter placement and endoscopic retrograde cholangiography (ERC) with stent placement have been used to bridge defects. In some patients, bridging the defect cannot be achieved through ERC or PTC alone. Materials and methods: Two patients with major BDIs, one iatrogenic and one non-iatrogenic underwent an extraluminal PTC/ERC rendezvous with placement of a fully covered self-expandable metal stent (SEMS) for the acute management of BDI with substantial loss of bile duct length. Results: In both patients the intraperitoneal PTC/ERC rendezvous with SEMS placement was successful with no complications after 12 and 18 months follow-up, respectively. Discussion: This study is the first to report a standardized approach to the acute management of iatrogenic and non-iatrogenic major BDIs using extraluminal intraperitoneal PTC/ERC rendezvous with placement of a fully covered SEMS. The described technique may serve as a “bridge to surgery” strategy for patients where definitive management of BDIs are deferred. However, long-term data of the success of this technique, specifically the use of a SEMS to bridge the defect, are lacking and further investigation is required to determine its role as a definitive treatment of BDIs with substance loss. Conclusion: PTC/ERC rendezvous with restoration of biliary continuity and internalization of bile flow is particularly useful for patients who have previously failed ERC and/or PTC alone, and in whom immediate surgical repair is not an option. |
367 | Modeling of compression pressure of heated raw fish during pressing liquid | The reform of the Common Fisheries Policy (CFP) of 2013 aimed to eliminate the discarding of unwanted catches at sea by introducing a landing obligation. The goal was to improve fishing behavior through better selectivity. The CFP implementation was planned from 2015 to 2019 for all commercial fisheries in European waters and for European vessels fishing in the high seas. The landing obligation requires all catches of regulated commercial species on board to be landed and counted against quota. These are species under TAC or, in the Mediterranean, species which have an MLS. Undersized fish cannot be marketed for direct human consumption, whilst prohibited species cannot be retained on board and must be returned to the sea. The discarding of prohibited species should be recorded in the logbook and forms an important part of the science base for the monitoring of these species. The production of animal feeds or fertilizers is an option that ensures that fish are not lost to the food chain. This concept refers to sprats in commercial Baltic Sea catches. The consumption demand for this species is not limited. Most of the Danish and Swedish sprat landings from the Baltic Sea are used for the production of fish meal and oil. Hence, in some countries, including Poland, the lack of fish meal processing plants has led to concerns about managing this type of agri-food waste. Therefore, the Ministry of Science and Higher Education of Poland commissioned the development of a simplified technology for using sprats not destined for human consumption, discards, and by-catch for fish feed. This is because of the following reasons: it is possible to replace as much as 80% of the fish meal component, which has so far been the dominant fish feed ingredient, with alternative protein sources including soy meal, black soldier fly larvae, algae, or other plant protein concentrates; it is also possible to produce various types of fish feed by extrusion of mixtures with a higher moisture content. A technology was developed to produce plant and fish feed by extrusion based on partially dewatered raw fish, which allows the fish meal to be replaced. In this process, the fish meal is replaced initially with dewatered fish raw material. The fish meal content of the feed is determined not only by the quantity of initially dewatered fish raw material but also by its degree of dewatering. If the initial dewatering of the fish raw material is low, its maximum share in the feed corresponds to a smaller quantity of replaced fish meal. The advantage of this technology is not only its simplicity, stemming from substituting the fish meal with the dewatered material; it is also more energy efficient. A simple calculation shows that if the extrusion moisture is equal to 30%, excluding the fish meal production step of drying the dehydrated raw material saves 190 kWh of energy per 1 t of product. Further, the exclusion of this step in fish meal production does not affect the microbiological purity of the product. The extrusion temperature is sufficiently high to eliminate any pathogens present in the raw material that could possibly contaminate the feed, especially if this raw material has previously been subjected to thermal processing. Dewatering can be performed with thermal, mechanical, or thermal–mechanical methods. Thermal dewatering is the least cost-effective method because of its high energy costs. The
mechanical dewatering method requires the least amount of energy, but it is the least effective method because most of the water in the fish raw material is strongly linked to proteins.A compromise solution is the thermal–mechanical method, which is frequently applied, when feasible, to the expression of fluid from biological solids.In this method, the fish raw material is dewatered following initial preheating.For sprat fish analyzed in this paper, the application of thermal–mechanical dewatering method is possible only when the fish raw material is subjected to extrusion cooking in a later stage of processing.Numerous theoretical and empirical relations that describe the expression of fluid from biological materials can be found in the literature.Only a very few of these relations are generalized models that are not related to any particular material; however, none of them took into account initial preheating.Preliminary studies on expressing liquid from preheated fish material indicated that these models did not fit the experimental data.The study by Pérez-Gálvez et al., which described the dewatering of fresh sardines with a hydraulic press under different compression speeds and final pressures, is the only study similar to the present study.Authors state that the maximum yield of the expressed liquid increases the dry matter of dewatered cake by only 4%.A higher dewatering degree requires preheating of the fish raw material.Pérez-Gálvez et al. optimized pressing conditions of fresh sardines by means of a statistically designed experiment.The goal was to achieve maximum dehydration of the sardines with a minimum destruction of the solids.Authors used a multiobjective optimisation technique with a Pareto Front.The aim of the present study is to evaluate experimentally the influence of preheating and uniaxial pressing parameters on expression pressure and the dewatering degree expressed as the dry matter content after pressing.The subject of the study is sprat, which is used as a model for waste and by-catch.The test stand is presented in Fig. 1.The compression force F acting on the piston is exerted by the MA 25 screw motor.The drive system of the screw motor consists of the following components manufactured by Parker Hannifin Corporation:MD 3475 servo motor with constant torque,the SVAHX2500S electronic control unit, which controls the servo motor,The system allows adjusting of the piston movement with an accuracy of 1 mm and speed in the range from 0.25 to 8 mm·s−1.The effects of input variables on the dewatering degree during uniaxial pressing were investigated using the test stand as illustrated in Fig. 
1.The dewatering degree depends on the following factors:compression ratio r—the ratio of the initial and final volume of pressed material,the initial thickness of pressed material h,geometrical parameters of the screen—mesh diameter ϕ and screen mesh area A,the speed of compression piston u,die filling ratio b.Preliminary tests were conducted to limit the number of input parameters.During tests, the effects of the mesh diameter, its working surface area, and the height of the initial samples on the dewatering degree were determined by repeated measurements for 6 times.Geometrical screen parameters and the initial thickness of the pressed material were limited by the equipment used for the study.The results of the standard analysis of variance indicated no statistically significant difference between individual dewatering degrees at 95% confidence level.This allowed limiting the input parameters to three assumed values as follows:preheating temperature T = 40, 65, and 90ºC;,compression ratio r = 2, 3.5, and 5,speed of compression piston u = 0.00025, 0.00050, and 0.00075 m·s−1.The values of the remaining parameters were set as constants:initial thickness of material h = 0.08 m,die filling ratio b = 0.9,screen mesh diameter ?,= 1 mm,screen mesh area = 30%.The output parameter is the dewatering degree expressed as the dry matter content after pressing.Baltic sprats were cut into 5−10 mm pieces and were heated indirectly to a given temperature, according to the Box–Behnken experimental design, in a water bath that allowed controlled heating to 100°C with an accuracy of±1°C.After heating, the thermal leak was drained gravitationally through a screen with mesh size ϕ = 1 mm in time t= 60 s. Then, 100 g of sample was pressed three times in the test stand as illustrated in Fig. 1.The quantity of dry matter in the pressed samples was determined.During the experiment for the compression ratio of r = 5, the compression force F acting on the piston was measured with a force transducer and converted into pressure as follows:Because the dewatering degree dm and compression ratio r are expressed by dimensionless numbers, the input parameters u and T were also expressed by such numbers.Preheating temperature T was replaced by its ratio to the water boiling point—Image 3, and the speed of the compression piston u—by the Cauchy number Image 4, which is used in the study of compressible flow.It was hypothesized that the relation between the dewatering degree and the pressing parameters expressed by dimensionless numbers could be described by the power function:As the value of compressed density ρ and the bulk modulus K are invariable under the experimental conditions, the Cauchy number can be expressed as Image 6, where Image 7,Then, equation can be written as follows:While equation can be simplified as follows:where:The results of the measurements were analyzed statistically to obtain the parameters for power function.By performing statistical analysis of measurements, the equation for dewatering efficiency was obtained as follows:The observed values for dry matter versus the values predicted by equation are illustrated in Fig. 
2.This is a visualization of the precision of the dewatering efficiency predicted with equation.The various initial parameters that impact the dry matter value correspond to the anticipated results.According to equation, dm increases along with an increase in initial thermal processing temperature T and compression ratio r, and it decreases with increasing speed of compression u.Within the range of the tested initial parameters, preheating temperature had the greatest impact on the dewatering degree, while the speed of the compression piston had the least impact.The significant effect of the thermal pretreatment stems from the previously mentioned nature of the biological raw material.The thermal denaturation of proteins not only allows for the higher expression of constitutional water but also for the higher preheating temperature that is linked with greater natural water loss.The result shows that the dewatering degree dm in preheated raw material increased by 9.6%.The analysis of models presented in previous research indicated that the compression pressure of heated sprat during pressing liquid at particular preheating temperatures and speed of compression most precisely describe the model originally formulated by Faborode and O’Callaghan, which was built for the agglomeration by compression of fibrous agricultural materials.where:K= bulk modulus ,L = h =0.08 ,b = 0.9 ,S = piston displacement .The exemplary results of the test with sprat are presented in Fig. 3.However, preliminary studies indicated that the speed of compression and the preheating temperature significantly affect the compression pressure.Therefore, when expressing fluid from the preheated fish material, these two parameters need to be considered in the model.Both parameters can be expressed using the Cauchy number Nc and the ratio of preheating temperature to water boiling point Image 14,By introducing the Cauchy number and preheating index in equation, the following equation is obtained.Faborode and O’Callaghan transformed the Cauchy number into a function of the compression parameters, i.e., the compression ratio and speed of compression:Using equation, the following was obtained:Considering the differences between agglomeration compression and expressing liquid compression, the empirical constant c3 was introduced in equation:The values of K, b, and c3 for a given material and experimental conditions are constant and can be replaced with the empirical constant Image 19:The measured values of pressure p were analyzed statistically to obtain the parameters for equation 10.The following values were obtained:The plot of the p values that were observed versus those predicted with equation and the value of the correlation coefficient suggest that the experimental values fit the model poorly.This most probably results from the componentImage 22, which describes a rapid increase in compression pressure p during piston displacement S in the closed die.In the current study, compression occurs in a half-open die.Therefore, the increase in pressure due to an increase in piston displacement is much smaller and can be replaced with the appropriate power of the dimensionless component Image 23that binds the same parameters with each other.After making the changes in equation, the following equation is obtained:The following equation was obtained through repeated statistical analyses:Here, the plot of the p values that were observed versus those predicted with equation and the high value of the coefficient of determination R2 
suggest that the model fits well with the experimental values. Therefore, the proposed equation represents the compression pressure during the expression of fluid from fish material in terms of the compression ratio r, the speed of compression u, and the temperature of the preheated material, expressed by the ratio of the preheating temperature to the water boiling point. The influence of the process parameters on the dry mass was evaluated by the ANOVA method. Dry mass is a parameter that determines the value of the final product and indicates the efficiency of the dewatering process. Table 2 presents the estimated effects of the input parameters on the dry mass. The linear effects of the three input parameters are significant; however, non-linear effects are significant only for the compression ratio and for the interaction between preheating temperature and compression ratio. Fig. 8a shows a Pareto chart presenting the standardized effect of each input parameter. Compression speed had a negative effect on the dry mass both alone and in interaction with the compression ratio and the preheating temperature. Temperature had the greatest linear influence, followed by the compression ratio, whose effect was also linear. Figs. 9 and 10 show that increasing the temperature and the compression ratio increases the dry mass of the final product. Moreover, Fig. 10 shows that intermediate values of the compression speed are more beneficial for the output than the highest values. The highest dry mass was obtained for 90°C, a compression ratio of 5, and a compression speed of 0.0005 m·s−1. On the basis of the experiments performed, a modified Faborode and O'Callaghan equation is proposed to describe the pressure during fluid expression from the compression of preheated fish raw material in terms of the compression ratio, compression speed, and preheating temperature. Further, an equation describing the dewatering effectiveness of the preheated fish raw material as a function of these parameters was developed. The modified Faborode and O'Callaghan equation allows estimation of the compaction energy. At particular compression speeds and preheating temperatures, the compaction energy is represented by the area under the pressure versus piston displacement curve. These values are equal to the product of the pertinent area and the scale conversion coefficient. Compaction energy values can be calculated with this procedure after converting the compression pressure into the compression force. The highest dry mass values can be obtained by preheating the fish material to 90°C and by employing the highest compression ratio, as these two process parameters have the greatest effect on the output. As the fat fully melts, it can easily be pressed out; in addition, the more the material is compressed, the more water can escape. However, compression speed has a negative effect. Dewatered fish material is more stable microbiologically and requires less storage space. The water–oil mix removed from the fish material could be processed to retrieve the fats, a process that is much easier to perform on liquids than on solids. The remaining dry mass could be processed into fish protein hydrolysate or fish collagen/gelatin. Moreover, fish waste could be turned into fish silage. Fish silage can be used as an animal feed, which is especially beneficial for pigs, or as a fertilizer. Andrzej Dowgiałło: Conceptualization, Methodology, Investigation, Writing - Original draft preparation; Marta Stachnik: Software, Investigation, Validation, Resources, Writing - Original draft preparation, Visualization, Writing - Reviewing and Editing; Józef Grochowicz: Conceptualization, Methodology; Marek Jakubowski:
Conceptualization, Methodology, Investigation, Resources, Validation, Writing - Original draft preparation, Visualization, Writing - Reviewing and Editing; | The aim of the present study was to develop models describing the pressure and the dewatering rate of preheated fish raw material in terms of the expression parameters (i.e., compression ratio, compression speed, and preheating temperature of the material). The effects of the independent variables on the dependent variables were studied using the Box–Behnken experimental design. The obtained results showed that the proposed power law models fit the experimental data well, with a coefficient of determination (R2) of 90.3% for dewatering efficiency and 97.8% for pressure, and that dewatering efficiency and pressure were significantly (p < 0.05) correlated with the expression parameters. The proposed models allow estimation of the efficiency of pressing liquid from heated raw fish material as well as the associated energy and compression pressure. |
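A sketch of how the power-law dewatering model described in the row above (dry-matter content as a power function of the preheating-temperature ratio, the compression ratio, and the piston speed) could be fitted to Box–Behnken design data is shown below. The fitted coefficients reported in the original are not reproduced here, so the design points, the normalization by the water boiling point, and the starting values are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit


# Assumed model form: dm = c0 * (T / T_boil)**c1 * r**c2 * u**c3
def dewatering_model(X, c0, c1, c2, c3):
    T_ratio, r, u = X
    return c0 * T_ratio**c1 * r**c2 * u**c3


# Hypothetical design points: T in deg C, r dimensionless, u in m/s, dm as a
# dry-matter fraction; none of these numbers come from the study.
T = np.array([40, 65, 90, 40, 90, 65, 65, 40, 90], dtype=float)
r = np.array([2, 3.5, 5, 5, 2, 3.5, 2, 3.5, 5], dtype=float)
u = np.array([5e-4, 2.5e-4, 7.5e-4, 5e-4, 5e-4, 5e-4, 2.5e-4, 7.5e-4, 2.5e-4])
dm = np.array([0.30, 0.35, 0.44, 0.37, 0.33, 0.36, 0.32, 0.31, 0.45])

X = (T / 100.0, r, u)  # preheating temperature normalized by the boiling point
popt, _ = curve_fit(dewatering_model, X, dm, p0=[0.5, 1.0, 0.2, -0.05])
print("fitted coefficients c0..c3:", popt)
```

An equivalent fit could also be obtained by ordinary least squares on the logarithms of the dimensionless variables, since the assumed model is linear in log space.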
368 | Artificial synapses with photoelectric plasticity and memory behaviors based on charge trapping memristive system | Development of the digital computer based on the Von Neumann architecture has progressed over the past few decades and has propelled humans into the information age.However, computers with this structure are not suitable for processing massive amounts of data in the era of big data because their data processing and data storage systems are physically separated and they have a high power consumption .Conversely, the human brain, consisting of about 1012 neurons and 1015 synapses, can store and process data simultaneously in the same place .Therefore the brain is able to process a massive amount of data ultrafast in parallel processes with a low power consumption .A synapse, which is a node that connects two neurons, plays a key role in delivering the signal between two neurons .Thus, to promote the development of a computer with an artificial neural network for ultra-fast big data processing, a way of fabricating a device that mimics bio-synapses is an urgent requirement .Devices with different structures have been explored to simulate artificial synapses, such as two-terminal structures and three-terminal structures.The two-terminal structures mainly simulate the “point-to-point” connected synapse, while the three terminal structures mainly simulate the “point-to-line” connected dendrite synapse .The memristor, one of the most promising two terminal structure devices for simulating synapses, has attracted much attention owing to its compact structure and similarity to the structure of an actual synapse .Many kinds of two terminal devices based on organic materials, 2D Materials and quantum dots have been explored to mimic synaptic functions ."Han's group reported an artificial synaptic device with a solution-processed small moleculephenyl) phenylphosphine oxide based resistive random switching memory.The device shows resistive switching behavior and several synaptic functions ."Liu's group have demonstrated metaplasticity can be mimicked by metallic oxide based memristor .In those reports, most of artificial synapses are stimulated by electrical signals.In this case, the transmission speed is limited because of the low bandwidth .In contrast, photonic signals have a much higher bandwidth, faster propagation speed and lower consumption than electric signals .Further, in biological systems, photonic stimulation is more selective for some special neurons than electrical stimulation .Therefore, it is meaningful to develop artificial synapses based on photonic or combined photoelectric stimulation .As such, the active materials of memristive systems should have excellent photoelectric properties."Recent, Guo's group reported a photoelectric memristive synapses based on the mechanism of electrons trapping and detrapping at the MoS2/SiO2 interface . 
"Kim's group fabricated an all oxide based transparent photonic synapse by reactive sputtering .Those device were fabricated by high power consumption methods.Colloidal quantum dot, for example CdSe/ZnS core-shell quantum dots and all-inorganic perovskite CsPbBr3 quantum dots, can be fabricated chemical synthesis, which an easy and cost-effective method.Furthermore, CSQDs, which are type-I QDs, can trap electrons or holes in their cores owing to their quantum well structure, where the conduction band of the core is lower than that of the shell and the valence band of the core is higher than that of the shell ."These quantum well structures naturally form trapping centers and are therefore very conducive to modulating a device's resistance .All-inorganic perovskite CsPbBr3 quantum dots, have very unique electronic and optical properties that can be precisely modulated by adjusting the diameter and thickness of the QDs .Thus, utilizing the low-cost methods of solution process, fabricating memristors based on quantum dots can enable synaptic functions with stimulation by photoelectric signals to be achieved.Motivated by the above considerations, a photoelectric memristive artificial synapses based on CSQDs and CsPbBr3 QDs was fabricated by solution process.The quantum well structure of CdSe/ZnS quantum dots was designed as the trapping centers to modulate the resistance of the device.The memristive synapse shows electric plasticity potentiation and depression behaviors.Various synaptic functions, such as photoelectric excitatory postsynaptic current behavior, short-term memory, long-term memory, short- to long-term memory transition, and learning-forgetting-relearning behaviors, were all mimicked by applied pulses of light or electricity.Moreover, the device also has the potential to be used in flexible applications.It is suggested that the photoelectric plasticity and memory phenomenon can be attributed to charge trapping and detrapping, since the quantum well structure of the CdSe/ZnS quantum dots acts as a trapping center.This work provides a cost-effective method to develop artificial synapse devices, neural networks and computers with photoelectric operations.First, indium tin oxide coated glass was sequentially cleaned using ultrasonication in propanol, acetone, and deionized water for 10 min.Then a film of CsPbbr3 QDs was deposited onto the ITO film by spin-coating method from toluene solution at 2500 rpm for 60 s.The details of the synthesis method for the CsPbBr3 QDs are provided in the Supplementary materials.Then, bi-layer CdSe/ZnS QDs films were obtained on the top of the CsPbBr3 QD layer by spin-coating from n-hexane at a speed of 2500 rpm for 60 s.It should be note that the CdSe/ZnS QDs n-hexane solution was purchased from Wuhan Jiayuan Quantum Dots Co., Ltd.To enhance the contact stability between the active layer and the electrode, the solution of PMMA was spin-coated at a speed of 6000 rpm for 60 s and annealed at 90 °C for 20 min .Finally, the top Au electrodes were thermally evaporated onto the PMMA layer.The X-ray diffraction data of the CdSe/ZnS core-shell quantum dots and CsPbBr3 quantum dots were obtained using the Rigaku D/MAX-2500 diffractometer and the transmission electron microscope images of the material were carried out via the JEM-2100f.A cross-sectional image of the device was carried out employing a field-emission scanning electron microscope.Atomic force microscopy characterization was performed using a CSPM5500 in a tapping mode.To measure the 
current–voltage characteristics of the device, a source measurement unit was used to supply the bias voltage between the top and bottom electrodes.Ultraviolet–visible photoluminescence and absorption spectra of the prepared CSQDs and CsPbBr3 QDs were obtained using a Zolix-λ300 spectrometer.It should be noted that the device fabrication and characteristics measurements were implemented at room temperature.As an analogy to a synapse in the human brain, a Au/PMMA/CSQDs/CsPbBr3 QDs/ITO artificial synaptic device was fabricated by spin-coating all layers except the electrodes onto ITO glass, as shown in Fig. 1.Fig. 1 shows a cross-sectional scanning electron microscopy image of the artificial synaptic device stack.The thicknesses of the PMMA/CSQDs layer and CsPbBr3 QDs layer were about 200 and 150 nm, respectively.The PMMA layer was not distinguishable in the SEM image, because the thickness of the PMMA layer was only about 8 nm).The surfaces of the CsPbBr3 QDs and CSQDS layers were confirmed to be smooth by studying them with atomic force microscopy, as shown in Fig. S1 and.Fig. S1 shows an AFM image of the morphology of the CsPbBr3 QDs monolayer; the average height of the quantum dots was about 15 nm.Figs. 1 and S1 show the PL and UV–Vis absorption spectra, respectively, of the CsPbBr3 QDs and CSQDs.The sharp PL peaks of the CsPbBr3 QDs and CSQDs can be observed at 517 and 628 nm, respectively.The XRD pattern of the CsPbBr3 QDs and CSQDs samples are displayed in Fig. S1 and.A characteristic diffraction peak was found at 30.12° corresponding to the crystal plane of cubic CsPbBr3.The diffraction peaks of the CSQDs were at 26.3°, 43.6°, and 51.7°.As such, the diffraction peaks were a mixture of those of the standard diffraction peaks of pure CdSe and pure ZnS, indicating that the synthesized quantum dots were core-shell structures.In Fig. 1, panels and show transmission electron microscope images of the CSQDs and CsPbBr3 QDs, respectively.As can be seen from the images, the average size of the CsPbBr3 QDs and CSQDS was about 12 nm and 6 nm, respectively.As shown in Fig. 2 and, the current–voltage characteristics of the memristor device in the dark were investigated by applying positive or negative voltage sweeps.To prevent the device from suffering a hard breakdown, a compliance current of 1 mA was applied during the sweep.When the positive sweep voltages were applied to the memristor device, the current gradually increased.Then, when the negative sweep voltages were applied to the memristor device, the current level gradually decreased.Fig. 
2 and shows the current variation of the device under a series of positive and negative bias voltages.The amplitude and interval of the positive and negative electrical pulses were 2 V, 1 s and −2 V, 1 s, respectively.The current can be increased or decreased by applying positive or negative pulses.These I–V response characteristics are similar to the synaptic potentiation and synaptic depression phenomenon that occurs in biological synapses.Stimulated presynaptic neurons cause neurotransmitters to be transmitted into the postsynaptic neurons, resulting in a postsynaptic excitation potential .In this paper, the top electrode and bottom electrode can be regarded as the presynaptic and postsynaptic terminals, respectively.The active materials should be regarded as representing the synaptic cleft."The synapses' characteristics can be simulated by controlling the conductivity of the active materials.The response current of the memristor is defined as an EPSC.The EPSC is evoked by a single presynaptic spike applied to the electrode.A schematic diagram of the excitatory postsynaptic current is shown in the inset of Fig. 3.During the measurement of the EPSC, an electric pulse was applied at the Au electrode.The result of the EPSC measurement is shown in Fig. 3.It is obvious that the postsynaptic current slowly decays to a steady state after the stimuli has been removed.The EPSC property triggered by the electric signal is responsible for electron trapping in the CdSe/ZnS core-shell QD layer, which will be explained later.The memristor device can not only mimic the postsynaptic excitation potential property via electrical stimuli, it can also simulate other essential synaptic properties, for example, short-term plasticity and long-term plasticity.In neuromorphic systems, STP is a temporary potentiation of the synaptic connection lasting a few seconds or minutes that subsequently gradually decays back to its initial state.Conversely, LTP refers to permanent changes in the synaptic connections that can last hours, years, or even a lifetime .Correspondingly, in psychology, memory can be classified into STM and LTM depending on the amount of time for which the memory is retained.The STP and LTP govern the two types of memory in the human brain .It should be noted that STP and LTP are used in neuroscience, but the STM and LTM are used in psychology.The psychological terms was used in this paper.The result of the imitation of STM stimulated by electrical pulses is displayed in Fig. 3.The intensity and duration of the electrical pulse are 4 V and 1 s, respectively.The read voltage was 0.5 V.It is evident that the current of the memristor is rapidly increased by applying the electrical pulse and decays back to an equilibrium value within 50 s after removing the electrical pulse.This suggests that STM behavior is effectively mimicked.Fig. 3 shows the “multistore model” proposed by Atkinson and Shiffrin in 1968, which is the most accepted memory model in psychology.In this model, Atkinson and Shiffrin suggested that STM can transit to LTM through a process of rehearsal .Here, such a transition from STM to LTM can also be mimicked in the memristor.First, an electrical stimulus consisting of five continuous pulses were applied to the memristor.As shown in Fig. 
3, the output current of the device increases rapidly, then slowly decays to a steady value, before remaining unchanged for more than 1000 s.At last, the EPSC induced by presynaptic spikes with different strength and number was demonstrated, as depicted in Fig. 3 and.The amplitude and duration of the pulse was 4 V and 0.2 s, respectively.The read voltage was 0.1 V.The maximum output EPSC increases along with the applied voltage and number of pulses.It is obvious that the current decays slowly instead of disappearing immediately when the presynaptic spikes is removed.The maximum EPSC increase is attributed to electron trapping in the CdSe/ZnS core-shell QD layer, which will be explained later.In neuroscience, the synaptic weight is the strength or amplitude of the connection between two synapse nodes .In this paper, the currents refer to the connection strength between the presynaptic and postsynaptic terminals.Therefore, this EPSC behaviors of the device can be used to imitate the potentiation behavior of the synaptic strength."Next, the device's response to optical signals was demonstrated.CsPbBr3 QDs have excellent photoelectric properties; the I–V characteristics of the device under 405 nm laser illumination with different intensities are shown in Fig. 4.At the same read voltage of 0.1 V. the currents increased along with the increases in the laser intensity).Hence, synaptic plasticity in our device can be triggered not only by electric stimuli but also by light pulses.To measure the photonic EPSC, a photonic pulse was applied to the ITO electrode.As shown in Fig. S2, the postsynaptic current slowly decays to a steady state once the stimulus was removed.Thus, the light-stimulated STM function was mimicked, and the result is displayed in Fig. 4.The read voltage was 0.1 V.The intensity of the laser pulse was either 65 or 100 mW/cm2.As shown in Fig. 4, the current stimulated by the photonic-pulse increased and then decreased until it was close to the initial value.This indicates that STM behavior could be successfully mimicked.Further, when the intensity of the applied laser was increased, the output current curve shifted upward, but the current still returned to its initial state in a short time period.A possible reason for this is that no LTM was formed in such a short time.A STM to LTM transition via light illumination was demonstrated subsequently, as shown in Fig. 4.When the device was illuminated by a laser with an intensity of 120 mW/cm2 for 4 s, the output current increased.After removing the laser, the current decreased rapidly but the final steady state current was greater than the initial state, which is similar to LTM behavior.Subsequently learning, forgetting, and relearning behavior can be mimicked in the memristor device that was triggered by an optical signal.First, the device was illuminated by a 405 nm laser with an intensity of 100 mW/cm2 for about 6 s).After removing the laser, a spontaneous decay of the photonic current was observed.It was found that for illumination for a maximum of about 3 s roughly the same photonic current was obtained as before the illumination.It should be noted that a decrease in the photonic current after the second illumination signal was less than after the first one.This phenomenon is similar to learning and forgetting behavior in the brain .Finally, the data retention characteristics for the light-stimulated LTM ability of the memristor were measured under a reading voltage of 0.1 V. 
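The EPSC decay and the STM/LTM behaviors described above can be illustrated with a simple trap-occupancy rate equation that is consistent with the charge trapping and detrapping picture discussed later in this row; the time constants, pulse scheme, and current scaling below are assumptions chosen for illustration only, not parameters extracted from the device.

```python
import numpy as np


def simulate_epsc(pulses, dt=0.01, t_end=100.0,
                  tau_capture=0.5, tau_release=20.0, i_base=1e-9, di=5e-8):
    """Euler integration of the trap occupancy n(t) in [0, 1].

    During a stimulus pulse, empty traps fill with time constant tau_capture;
    afterwards, the trapped charge is released with time constant tau_release.
    The read current is taken to be proportional to the occupancy.
    """
    t = np.arange(0.0, t_end, dt)
    stim = np.zeros_like(t)
    for t_on, t_off in pulses:                     # (start, stop) of each pulse
        stim[(t >= t_on) & (t < t_off)] = 1.0

    n = np.zeros_like(t)
    for k in range(1, t.size):
        dn = stim[k - 1] * (1.0 - n[k - 1]) / tau_capture - n[k - 1] / tau_release
        n[k] = np.clip(n[k - 1] + dn * dt, 0.0, 1.0)
    return t, i_base + di * n


# A single short pulse gives a current that decays back toward baseline (STM-like),
# while a train of pulses accumulates occupancy and leaves a higher, slowly
# decaying current within the simulated window (LTM-like accumulation effect).
t, i_stm = simulate_epsc([(5.0, 6.0)])
t, i_ltm = simulate_epsc([(5.0 + 2 * k, 6.0 + 2 * k) for k in range(5)])
```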
For that, the device was illuminated with a 405 nm laser for 20 and 40 s respectively.The intensity of the 405 nm laser was 120 mW/cm2.Next, the output current under a voltage bias of 0.1 V was measured).As shown in the figure, three current states could be demonstrated with the use of different illumination times, 20 s, and 40 s), and the retention time of all states was longer than 1000 s.To verify the mechanical flexibility of the memristor, the electrical characteristics of the device fabricated onto a PET substrate were studied.To achieve different bending angles and demonstrate flexibility, the device was bent to different bending lengths for 100 bending cycles, as shown in Fig. 5, and."After bending, the device's I–V characteristics was measured.The electrical characteristics of the device for different bending lengths are shown in Fig. 5 and.Although the I–V curves were not exactly the same as those of a flat device and), the gradual switching behaviors were also observed after the bending tests.These results suggest that the device has the potential to be used in flexible applications.To explore the mechanism behind the phenomenon, the I–V curve as logI–logV for positive and negative biases) were replotted to analyze the charge carrier transport mechanism.The fitted curves at positive and negative voltages region are shown in Fig. 6– and–."As shown in those figures, the slopes of the fitted curves are approximately 1 in the low voltage region, which indicates that the current and voltage follow Ohm's law, while the slopes of the fitted curves were about 2 in the high voltage region, which suggests that space-charge-limited-current dominates the carrier transport process in this region.The SCLC is attributed to the trapping centers of the active layer; in the high voltage region, the trapping centers in the active layer are gradually occupied by the injected carriers as the voltage is increased.The trapping centers in the active layer are potentially vacancies, interstitials, and antisities .In the CsPbBr3 QD layer, the main trapping centers are Br vacancies, and the resistance switching behavior caused by those vacancies are nonvolatile .However, the resistance switching in the device is volatile, as shown in Fig. S3.Further, the I–V characteristics and the synaptic potentiation and synaptic depression phenomenon were also observed in the memristor device based on CSQDs.Thus, the trapping centers are formed in the CdSe/ZnS core-shell QD layer.The initial energy band diagrams of the device are given in Fig. 7 and the conduction band energy of CdSe and ZnS is at −3.8 and −2.4 eV, respectively .Therefore, the CdSe/ZnS QDs can act as the trapping centers in the device because of the low energy level of CdSe, which lies between that of ZnS and the injected electrons are captured in the conduction band of CdSe.As a result, SCLC dominates the carrier transport mechanism in the high voltage region, as the trapping centers are gradually occupied by the carriers .Based on the aforementioned discussion, energy band diagrams for the above-described experimentally observed phenomenon was proposed, as shown in Fig. 7–.In the low voltage region, as shown in Fig. 
The SCLC is attributed to the trapping centers of the active layer; in the high-voltage region, the trapping centers in the active layer are gradually occupied by the injected carriers as the voltage is increased. The trapping centers in the active layer are potentially vacancies, interstitials, and antisites. In the CsPbBr3 QD layer, the main trapping centers are Br vacancies, and the resistance switching behavior caused by those vacancies is nonvolatile. However, the resistance switching in the device is volatile, as shown in Fig. S3. Further, the I–V characteristics and the synaptic potentiation and depression phenomena were also observed in the memristor device based on CSQDs. Thus, the trapping centers are formed in the CdSe/ZnS core-shell QD layer. The initial energy band diagrams of the device are given in Fig. 7; the conduction-band energies of CdSe and ZnS are −3.8 and −2.4 eV, respectively. Therefore, the CdSe/ZnS QDs can act as trapping centers in the device: because the conduction band of CdSe lies lower than that of ZnS, the injected electrons are captured in the conduction band of CdSe. As a result, SCLC dominates the carrier transport mechanism in the high-voltage region, as the trapping centers are gradually occupied by the carriers. Based on the aforementioned discussion, energy band diagrams for the experimentally observed phenomena were proposed, as shown in Fig. 7. In the low-voltage region, as shown in Fig. 7, there were fewer injected charges than thermally generated free carriers inside the QD films, and a linear I–V curve is observed. When the applied voltage gradually increased, the injected excess carriers significantly outnumbered the thermally generated carriers; the electrons injected from the ITO electrodes were captured in the conduction band of CdSe and the conduction obeyed the square law. The diameter of the CdSe/ZnS core-shell QDs was about 4 nm and the thickness of the shell was only about 2 nm. Therefore, the electrons in the conduction band of CdSe were gradually released through direct tunneling, as shown in Fig. 7. We propose that the active layer of the memristor device contains a conducting layer with occupied trapping centers and an insulating layer with unoccupied trapping centers. The more pulses are applied and the stronger those pulses are, the more trapping centers are filled and the higher the output currents are. As a result, the gradual increase of the current over consecutive positive pulses is caused by the gradual increase in the number of occupied trapping centers. By contrast, the gradual decrease of the current after the applied signal is removed is related to the progressive increase in the number of unoccupied trapping centers. When the pulse interval is longer than the recovery time, the number of occupied trapping centers decays to its initial state, and thus the current also decays back to its initial value. But if the pulse interval is shorter than the recovery time of the trapping centers, the following pulse may cause a larger number of trapping centers to be occupied than the first pulse, corresponding to a higher current. Applying the signal multiple times causes an accumulation effect such that nearly all the trapping centers become occupied, and the steady output current may be larger than its initial value.
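The pulse-number and pulse-interval dependence just described is given only qualitatively. The following minimal sketch — an assumption of this edit, not the authors' model — treats each pulse as filling a fixed fraction of the empty traps and lets the occupancy relax exponentially between pulses, with the read current taken to be proportional to occupancy; it reproduces the accumulation effect for short intervals and its absence for long ones.

```python
# Minimal sketch of the qualitative trap-filling picture described above; not the
# authors' model. Assumptions: each pulse fills a fixed fraction of the empty traps,
# the occupancy relaxes exponentially between pulses, and the read current is taken
# to be proportional to the occupied-trap fraction. All parameter values are illustrative.
import numpy as np

def pulse_train_occupancy(n_pulses, interval_s, fill_frac=0.3, tau_recovery_s=5.0):
    """Return the trap occupancy (0..1) sampled just after each pulse."""
    occupancy = 0.0
    history = []
    for _ in range(n_pulses):
        occupancy += fill_frac * (1.0 - occupancy)         # filling during a pulse
        history.append(occupancy)
        occupancy *= np.exp(-interval_s / tau_recovery_s)  # recovery between pulses
    return np.array(history)

# Short intervals accumulate occupancy (potentiation); long intervals do not.
print(pulse_train_occupancy(5, interval_s=1.0).round(2))   # rises toward saturation
print(pulse_train_occupancy(5, interval_s=30.0).round(2))  # stays near the single-pulse value
```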
When the device was illuminated by the 405 nm laser, the low-energy photo-generated electrons were trapped by the trapping centers and spatially separated from the holes. Therefore, the electron–hole recombination time became very long, and when the laser signal was removed the current decayed only gradually over time. When the device was illuminated for a long time, the trapping centers were all occupied by photo-generated electrons and the device showed a high electrical conductivity. If a negative voltage pulse is applied, the trapped electrons are released and the device returns to a low electrical conductivity. In summary, a two-terminal artificial synaptic device was fabricated by spin-coating the active layers at room temperature. The quantum-well structure of the CdSe/ZnS quantum dots was designed to provide the trapping centers that modulate the resistance of the device. Synaptic functions, including potentiation and depression as well as EPSC, STM, LTM, the STM-to-LTM transition, and learning and forgetting behaviors, were imitated well by applying photonic and electric stimuli. Additionally, the device showed stable operation after bending tests, indicating its potential for flexible applications. The photoelectric plasticity and memory phenomena can be attributed to charge trapping and detrapping, since the CdSe/ZnS quantum dots have a quantum-well structure and thus act as trapping centers. This work provides a cost-effective way toward developing multi-functional artificial synapse devices, neural networks, and computers with photoelectric operations. Zhiliang Chen: Conceptualization, Methodology, Writing - original draft, Writing - review & editing, Formal analysis, Investigation. Yu Yu: Methodology, Software, Writing - review & editing, Formal analysis. Lufan Jin: Writing - review & editing, Formal analysis. Yifan Li: Writing - review & editing, Formal analysis. Qingyan Li: Writing - review & editing, Formal analysis. Tengteng Li: Writing - review & editing, Formal analysis. Yating Zhang: Software, Validation, Project administration, Funding acquisition, Supervision, Writing - review & editing, Resources, Formal analysis. Haitao Dai: Supervision, Resources. Jianquan Yao: Validation, Project administration, Funding acquisition, Supervision, Resources. | Imitation of the memory and learning behaviors of the nervous system by nanoscale photoelectric devices is highly desirable for building neuromorphic systems or even artificial neural networks. In this work, artificial synapses with photoelectric plasticity and memory behaviors based on a charge-trapping memristive system were fabricated. Versatile synaptic functions, such as photoelectric excitatory postsynaptic current behavior, short-term memory, long-term memory, the short- to long-term memory transition, and photonic learning and forgetting behaviors, were all mimicked by applied pulses of light and electricity. Moreover, the device also has the potential to be used in flexible applications. The photoelectric plasticity and memory phenomena can be attributed to charge trapping and detrapping, since the CdSe/ZnS quantum dots used have a quantum well structure and act as trapping centers. This work provides a cost-effective method to develop artificial synapse devices, neural networks, and computers with photoelectric operations. |
369 | STA-MCA bypass following sphenoid wing meningioma resection: A case report | Complex skull base tumors have always been a challenge for complete resection because of their invasion of critical structures such as the internal carotid arteries, the main cerebral arteries, the optic nerve, and the cavernous sinus. Vessel injury may be unavoidable when performing a complete resection. Vessel revascularization should be considered if vessel sacrifice would cause symptomatic cerebral ischemia. There are numerous cerebral revascularization techniques that can be used for reconstruction after tumor resection. Choosing appropriate graft and bypass techniques depends on the indications and on variations in patient anatomy. Factors to consider include the size of the recipient's vessel, the desired amount of blood flow, the size and availability of donor vessels, the nature of the operation, the anatomy of the revascularization site and the pathology being treated. There have not yet been any clinical reports on this issue in Vietnam, because cerebral revascularization is a difficult technique for our country and the blood supply provided by revascularization after tumor removal must be evaluated carefully in each patient. This article aims to report the first case of emergent STA-MCA bypass due to MCA injury during sphenoid wing meningioma resection in Vietnam. The work has been reported in line with the SCARE criteria. A 22-year-old man with a history of head trauma a week earlier was admitted to our hospital with complaints of headache for one week. He had no nausea, vomiting or blurred vision. On examination, he was alert and oriented but reported malaise. He denied paralysis and cranial nerve palsies. His muscle strength was grade V. The preoperative MRI showed a hypervascular left sphenoid wing meningioma, which enhanced heterogeneously and embedded the intracranial portion of the left internal carotid artery and the proximal segment of the middle cerebral artery. On DSA, branches of the ICA and ECA on the same side fed the meningioma. A frontotemporal approach craniotomy was used. During the operation, the tumor was hypervascular and infiltrated the dura of the inferior orbital fissure, the temporal fossa and the MCA. A branch of the MCA was divided when dissecting the tumor. We clipped the MCA, but it was still difficult to dissect its ends in the Sylvian fissure. We decided to extend the craniotomy and performed a superficial temporal artery to M4 segment of the MCA bypass. The MCA was clipped for 45 min. Intraoperative blood loss was 1000 ml. The surgery took 7 h. Afterwards, the patient was cared for in the surgical high-dependency unit for 3 days. Pathological findings proved a transitional meningioma, WHO grade I.
The surgical outcome at one year postoperatively was good, with a KPS of 90 out of 100 points. No neurological deficits were reported. On MRA, the STA-MCA bypass showed acceptable flow. There have been only a few reported cases of skull base tumors requiring vessel revascularization. In a recent systematic review, Wolfswinskel et al. showed that only about 368 cases of EC-IC bypass due to vessel injury during skull base tumor resection were reported from 1950 to 2018. According to Sekhar, there were 130 revascularization cases for tumors from 1988 to 2006. The reason may be the increasing use of radiotherapy for tumor remnants left around vessels during surgery. Skull base tumors include pituitary tumors, sellar/parasellar tumors, meningiomas, chordomas, chondrosarcomas, and squamous cell carcinoma. Most of the vessel revascularization cases were meningiomas. This is because of the variety in their behaviors, resulting in different degrees of vessel encasement. Therefore, meningioma resections often need revascularization. Most chordomas and chondrosarcomas can be dissected away from the vessel. However, complete resection of slow-growing malignant tumors, such as adenoid cystic carcinomas, usually damages blood vessels. Indications for cerebral revascularization in skull base tumors are still controversial. They depend on the nature of the tumor, history of radiotherapy, tumor recurrence and the relation to nearby critical structures. Sekhar et al. recommended four criteria for EC-IC bypass in patients with skull base tumors. In our case, there was an accidental intraoperative injury of the MCA and the artery could not be repaired directly. Despite vessel sacrifice and revascularization, gross total resection was only achieved in 63%. In Champagne's study, complete resection occurred in 2 out of 12 cases. In previous studies, the saphenous vein graft was the most commonly reported graft, followed by the radial artery graft. However, in our case, we could not dissect the vessel ends because the tumor had infiltrated deep into the vascular wall in the Sylvian fissure. Therefore, STA-MCA bypass was another good choice because of its safety and usefulness. Another reason why we decided to choose the STA-MCA bypass was that collateral vessels were present and the need for blood flow augmentation was minimal. Meningiomas, especially huge sphenoid wing ones, were the most common skull base tumors requiring revascularization. Despite the popularity of SVGs and RAGs, STA-MCA bypass was a safe and effective surgical management for vessel injury in sphenoid meningioma resection. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Nothing to declare, as this is a single case report. At our center, we do not require ethical review by the Institutional Review Board for single case report studies. Written informed consent was obtained from the patient and his wife for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request. Not applicable – this is a single case report, not a systematic review or meta-analysis. Moreover, we attest that it is not a ‘first in man’ study, either. Not commissioned, externally peer-reviewed. | Introduction: Sphenoid meningioma engulfing cerebral arteries has always been a challenge. To achieve a gross total resection, vessel sacrifice may be unavoidable. Presentation of case: A 22-year-old man with a history of head trauma a week earlier complained of a headache for one week.
On examination, he was alert and denied paralysis and cranial nerve palsies. Preoperative MRI showed a hypervascular left sphenoid wing meningioma embedding the left internal carotid artery and the proximal segment of the middle cerebral artery. During the operation, a branch of the MCA was divided when dissecting the tumor. The MCA was clipped, but it was still difficult to dissect the vessel ends in the Sylvian fissure. We decided to extend the craniotomy and performed a superficial temporal artery to M4 segment of the MCA bypass. The patient was then cared for in the surgical high-dependency unit for 3 days. The surgical outcome at one year postoperatively was good, with a KPS of 90 out of 100 points and no neurological deficits. On postoperative MRA, the STA-MCA bypass showed acceptable flow. Discussion: There have been only a few reported cases of skull base tumors requiring vessel revascularization. Most of the revascularization cases were meningiomas. Saphenous vein grafts (SVGs) were the most commonly reported grafts, followed by radial artery grafts (RAGs). In cases where dissecting the vessel ends is difficult due to tumor infiltration, STA-MCA bypass was a safe and helpful choice, especially when the collateral vessels were present and the need for blood flow augmentation was minimal. Conclusion: STA-MCA bypass was effective surgical management for MCA injury in sphenoid wing meningioma resection. |
370 | Phytochemical and nutritional properties of underutilised fruits in the southern African region | Indigenous fruits possess the potential to contribute to food, nutrition security, health and the income of rural communities in southern Africa, especially in dry areas where cultivation of exotic fruit species is often not possible.These indigenous fruits yield a harvest even during drought, since they are well adapted to their local environment, while staple crops fail.Thus, they may be used as emergency food during times of food shortage.Nonetheless, the indigenous fruits are relatively unknown in the global market because they remain undomesticated.This is mainly due to the lack of knowledge and bias of research and development in profit-driven large-scale agriculture.The information about indigenous fruits to the livelihood of farmers and food nutrition is often neither documented in science nor acknowledged in poverty reduction strategies.Underutilised fruits may possess essential nutrients, but studies on their composition and consumption are limited and fragmented.This renders it difficult to evaluate the contribution of underutilised fruits to dietary adequacy.The information about the phytochemical, nutritional and functional characteristics is important to promoting and expanding the utilisation of these indigenous fruits, thereby facilitating the fruit tree domestication programme as well as enhancing food and nutrition security and income generation.The development of functional products from the underutilised fruits in the southern African region offers another alternative for the exploitation of these resources, to supplement their nutritional value and promote the new export channels.In this context, being the fruits used in Andean folk medicine since ancient times, one can establish a niche for future scientific research.Since most of the existing information is not confirmed by scientific studies, the collection of data is mainly necessary to filter the most essential and reliable information.This review highlights the information on agro-food, phytochemical, mechanical, physicochemical, nutritional, functional properties of the underutilised fruits of the southern African region, for example, kei apple, cape gooseberry, marula and monkey orange.Known as a deciduous fruit, kei apple is considered to be native to the Kei River in Namibia; thus the name “Kei apple”.It belongs to the flocourtiaceae family and is added to the other two Dovyalis cultivars, including wild apricot and common sour berry.According to DAFF, kei apple is predominantly found in the wild in the Limpopo, Mpumalanga, Eastern Cape and Kwazulu Natal provinces in South Africa.It grows in a wide range of soils such as those with high salinity.Despite being known as a subtropical fruit, it also grows well in the areas where the temperatures can drop to − 6 °C; hence it qualifies as a frost tolerant tree.The kei apple tree is described by Orwa et al. 
as an evergreen shrub with a vigorous growth pattern and strong thorns.Its common names are: kei apple; wild Apricot, kei-appel; appelkoosdoring, incagolo, umqokolo, motlhono, mutunu, amaqokolo and mukokolo.According to Loots et al., kei apple fruit is almost round, bright yellow and has a tough skin.The harvesting maturity of kei apple is reached after 90 days from full flower opening and its ripeness is determined by the full development of colour.The average sugar content of the ripe fruit is estimated to be 15–18%.The kei apple is harvested by selecting the fruit with a stalk in order to avoid tearing its skin.Then, a cluster is cut from the branch with a sharp knife or clippers.Thereafter, the fruits are detached from both the cluster and the stalk and the fruit is graded according to size and colour to ensure that the packs are uniform.Cape gooseberry, also known as golden berry, is a plant native to tropical South America and belongs to the Solanaceae family.Cape gooseberry includes many cultivars from different regions and countries.The cultivars are differentiated by size, colour, taste, flower shape, plant height and plant size.Of these cultivars, three types originated from Colombia, Kenya and South Africa and are currently cultivated in these regions.The Colombian type has an intense yellow colour and higher sugar content compared to the ecotypes found in Kenya and South Africa.These characteristics make this type more palatable to consumers.The annual production of cape gooseberry in Columbia is estimated at 12,000 tons; thereby rendering Columbia to be the largest cape gooseberry exporter in the world.Cape gooseberry is described as a domed shrub that can grow to 1 m.The flowers are yellow with purple blotches in winter.According to Ramadan, cape gooseberry is also described as an herbaceous, semi shrub, which is upright, perennial in subtropical zones, and can grow until it reaches 0.9 m.The fruit weighs approximatively 4–5 g, is protected by an accrescent calyx and covered by a brilliant yellow peel.Yahia reveals that the ripe cape gooseberry fruit suitable for processing consists of total soluble solids containing 14 °Brix and 1.3% acidity, resulting in a maturity index of 10.8.The plant is moderately adaptable to wide types of soils and good crops are obtained in poor sandy ground.Cape gooseberry also grows in Egypt, South Africa, India, New Zealand, Australia and Great Britain.The marula tree also known as marula, maroela, umGanu, nkanyi, morula, mufula indigenously grows in the savannah regions of sub-Saharan Africa.The marula tree is generally found in South African game parks and in the rural areas of Limpopo, KwaZulu-Natal, the Eastern Cape and Mpumalanga and broadens northwards through tropical Africa into Ethiopia and Sudan.The tree is more dominant in Phalaborwa in the Limpopo Province and Mpumalanga.One single tree can produce up to 500 kg of fruit per year as a prolific fruit bearer.Marula tree is an indigenous tree adapted to poor soils and naturally occurs in numerous types of woodland, in sandy soil or, occasionally, in sandy loam.The marula tree prefers a warm frost-free climate and is drought resistant.It produces flowers from September to November and bears fruit from January to March.In the middle of the rainy season, the marula fruit starts to drop from the trees in large quantities.On the ground, the pale green fruit ripens to a pale, waxy yellow colour around January to March or April.The marula fruit is borne on female trees, plum-sized, each with a 
thick peel; a translucent, white, highly aromatic sweet sour pulp; and a woody endocarp protecting the seed.In addition, the marula fruit is the size of a small plum and it is pale-yellow, thick and very juicy.Marula fruit is considered to be ripe when the pH range reaches 4.2 to 4.4 and it has a sugar content of approximatively of 11 °Brix.When ripe, the fruit exhibits a light yellow skin, with white, succulent flesh and a strong, distinctive and turpentine flavour.The stone is walnut-sized and possesses a thick wall; the flesh clings to its brown stone and is very fibrous and juicy.Each kernel is guarded by a small bony “lid” that detaches when the stone is cracked.The ripe marula is traditionally collected from the ground by hand.Monkey orange belongs to the Loganiaceae family, indigenous to tropical and subtropical Africa.Up to 75 Strychnos species exist in Africa, of which 20 species produce edible fruits, in drought prone areas and semi-arid regions in southern tropical Africa where the tree is dormant while water is not available.Monkey orange is found in the wild in the Eastern Cape, KwaZulu-Natal and Limpopo provinces in South Africa, inland of Swaziland, northern Botswana and Namibia.The most common species in the woodlands of Southern Africa are: S. innocua, S. cocculoides, S. pungens and S. spinosa.Monkey orange species are indehiscent, oval shaped, yellow or orange, and have a thick woody shell.The tree is characterised as being small, 1–7 m in height, and bears edible, balled-shaped fruits, 6–12 cm in diameter.Unripe fruits possess a bright green woody peel that turns yellow-brown upon ripening.The pulp is described as being edible, bright yellow or brown, juicy, and sweet or sour with few to numerous hard seeds seeds imbedded in the fleshy pulp.The entire fruit weighs between 145 and 383 g and a single tree produces between 300 and 700 fruits, translating to approximatively 40–100 kg of fruit per tree.The fruit is seasonal and manually harvested by hand-picking between August and December, the so-called “lean season”, a time of cultivated food shortage in Zimbabwe.Fresh monkey orange is immediately consumed after cracking because it is commonly believed that ripe fruits cannot be stored.Kei apple juice contains a larger amount of total polyphenols compared with the grape, strawberry and orange juices prepared under identical conditions.The findings of Loots et al. clearly showed that the total polyphenol content of the kei apple juice is approximatively twice that of strawberry juice and approximatively four times that of grape and orange juice.Loots et al. reported that the phenolic acids of kei apple juice contribute to 66.3% of the total phenolic fractions, followed by the procyanidin, catechin, and anthocyanin monomers and then by anthocyanin polymers and flavonols.Furthermore, these findings of Loots et al. are in accordance with the report of Miller and Begona Ruiz-Larrea.The low flavonol concentration of the grape juice might be explained by the fact that these polyphenols are concentrated in the juice.High levels of polyphenols, namely anthocyanins, are observed in the strawberry juice portions and are correlated with the intense red observed in these fractions.In comparison with grape, strawberry and orange juices, the kei apple juice shows a significantly higher content of phenolic acids.Also, a relatively higher procyanidin, catechin, and anthocyanin monomer fraction was also reported by Loots et al. 
and the concentrations were more or less similar to those observed in strawberry juice.On the other hand, the flavonols and anthocyanidin polymer content were reported to be lower than grape, strawberry and orange juices.Loots et al. determined the antioxidant capacity of the kei apple fruit juice and its fractions are determined by Oxygen Radical Asorbance Capacity and Ferric Reducing Antioxidant Power.The kei apple juice showed significantly greater total ORAC and FRAP values compared with the other fruit juices.ORAC and FRAP analyses of the fractionated samples highlight that the kei apple juice phenolic acid fraction is the highest contributor to its total antioxidant capacity, followed by the procyanidin, catechin, anthocyanin fraction > flavonol > anthocyanin polymers.However, in grape, strawberry and orange juices, the procyanidin, catechin, and anthocyanin fractions are the greatest contributors to antioxidant capacity, excluding the anthocyanin polymer fraction in orange juice.Therefore, based on the Loots et al. reports, both the ORAC and the FRAP analyses of the unfractionated kei apple juice showed significantly higher antioxidant capacity compared with that of grape, strawberry and orange juices.Derailed polyphenol compounds in the kei apple juice were analysed by Loots et al. using the GC–MS.Contributions of individual polyphenols to the total antioxidant activity and Trolox equivalent antioxidant capacity of the sample were determined by Loots et al.Caffeic acid was the most outstanding polyphenol in the kei-apple juice.The concentrations of nonflavonoid compounds were found in the following order: s p-coumaric acid > p-hydroxyphenylacetic acid > protocatechuic acid > 3-methoxy-4-hydroxyphenylacetic acid.While studying the antioxidant potential of these components using their TEAC values, caffeic acid prevailed because of its high concentrations.This was succeded by p-coumaric acid > protocatechuic acid > 3-methoxy-4-hydroxyphenylacetic acid.Despite gallic acid being identified at much lower levels, it is also a great contributor to the TAA of the mixture because of its high antioxidant capacity.Large amounts of flavonoids such as quercetin, myricetin and kaempferol, are reported in cape gooseberry by Bravo et al.Quercetin is the main flavonoid, followed by myricetin and kaempferol in gooseberry.The total phenolic compound content of cape gooseberry samples, however, varies from 0.06 to 0.74 mg gallic acid equivalent/100 g fruit.Considering that the presence of phenolics and ascorbic acid in cape gooseberry fruit might contribute to the high level of antioxidant capacity, it is necessary to study the variations in the attributes of fruit quality, bioactive phytochemical levels and functional profile.Phenolics in fruits and vegetables are of notable interest attributable to their essential health characteristics.A good quantity of phenolics is measured in cape gooseberry juice, wherein the level of total phenols is 6.30 mg/100 g juice as caffeic acid equivalent.Cape goosebeery juice contains 0.2% oil, wherein linoleic acid, oleic acid, palmitic acid, γ-linolenic acid and palmitoleic acid are identified as the main fatty acids.Given the aforesaid, the cape gooseberry may be a source of polyunsaturated fatty acids.Attention accorded to polyunsaturated fatty acids as health-promoting nutrients has recently broadened intensely with much literature describing their benefits.The major phytosterols of cape gooseberry are Δ5-avenasterol and campesterol.Vitamin E content is high, and 
γ- and α-tocopherols are the main components.High levels of β-carotene are also identified in the cape gooseberry juice.Hence, cape gooseberry juice could be used as a novel source of functional beverages without any demand for fortification with fat-soluble bioactives.The antioxidant activity of cape gooseberry juice was measured by a 1,1-diphenyl-2-picrylhydrazyl method.The findings indicate that fresh juice produces a 78% decrease and the absorbance of DPPH radical control solution when enzyme-treated juice leads to an 82% decrease.Miller and Rice-Evans highlights the notable contributory function of phenols to the antioxidant activity of orange juice, even though vitamin C is the most sufficient antioxidant.Therefore, the existence of a good content of phenolics in cape gooseberry juice might contribute to the high level of antioxidant capacity.A positive relationship is noted between the consumption of vegetables and fruits possessing carotenoids and the prevention of numerous chronic degenerative diseases.Carotenoids from cape gooseberry were assessed by HPLC-PDA-MS/MS and 22 compounds were identified.Trans-β-carotene was the main carotenoid, representing 76.8% of the total carotenoid content, followed by 9-cis-β-carotene and all-trans-α-cryptoxanthin, representing approximativaly 3.6 and 3.4% respectively.The content of carotenoid esters estimated to be lutein dimyristate equivalents was less than 0.5 mg/100 g.Marula juice contains 56 mg/100 mL of pyrogallol equivalence of phenols and an antioxidant capacity of 382 mg/100 mL of vitamin C equivalence.Hillman et al. demonstrate that the antioxidant capacity of marula juice is higher than that of orange and pomegranate juices.The total antioxidant capacity of marula fruit in terms of an equivalent concentration of L-ascorbic acid,is 2960 mg/100 g L ASC-eq and 1872 mg/100 g L-ASC-eq.Vitamin C represents approximatively 70% of the TAC of the marula fruit, which is 20 to 40 times greater than most common fruits.It is clear that marula fruit and its juice possesses higher antioxidant activity compared with the juice of other fruits such as pomegranate and orange.Nonetheless, further investigation of this aspect is important since the various analysis procedures employed renders it difficult to draw a valuable conclusion regarding the antioxidants obtained from the various fruits.The effect of thermal treatment on the antioxidant capacity of marula appears not to have been studied, even though in the food industry it is always applied as an essential processing stage to inhibit spoilage caused by microorganisms and enzymes in order to increase the shelf life.In addition, there is no literature available on the effect of storage conditions which could reveal many variations in the unprocessed or processed marula juice.Seven marula juice products contained large amounts of polyphenols, ranging from 226 to 414 mg/100 mL tannic acid equivalence in a study conducted for three consecutive years.The evaluation of polyphenol content from 17 clones using gallic acid as a standard showed that the content varied from 700 to 2500 mg GAE/100 g dry weight, whereas the phenolic content of banana and guava varied from 24 to 72 mg/100 g and from 109 to 191 mg/100 g respectively.The content of soluble phenolics of marula fruit juice was estimated at 56 g/100 g.The flavonoid content of pineapple, banana and guava varied from 1 to 4 mg/100 g, 5 to 24 mg/100 g and 14 to 45 mg/100 g respectively.The variation among the phenolic content might be explained by the 
various extraction methods employed, the various clones and the fruit quality of the selected clones.The recovery of phenols was dependent on the fruit type and the extraction solution employed, illustrating that some fruit molecules are efficiently extracted using 100% methanol or acetone when others are extracted using 50% of the same extraction solution.Therefore, optimising the extraction method is necessary.Hence, marula has a higher phenolic content compared with other fruits.Monkey orange exhibits a phenolic content radical quenching capacity and flavonoids expressed as catechin equivalents, with levels comparable to baobab nectar.Proanthocyanidins, expressed as percentage leucocyanidin equivalence, are similarly comparable to mobola plum.A high colour intensity is generally due to the high total antioxidant capacity of a product; this relationship requires further investigation for the monkey orange species, since they are bright orange and orange–brown, which indicates that the fruits are high in phytochemicals.Therefore, there is a need for research to quantify and classify the phenols of monkey orange.Loots et al. demonstrated that the total ASC concentration of kei-apple juice is comparable to that of strawberry juice and it is 100 mg/L greater than that of orange juice.It is noteworthy that the ascorbic acid in kei-apple juice is significantly higher than grape, strawberry and orange juices, with a similar low dehydroascorbate content as shown by Loots et al. and Materechera and Swanepol.However, the ascorbic content of the selection called Dovyalis caffra mananga is estimated at 347 mg/100 g.Kei apple is a good source of ascorbic acid compared to guava and citrus.It is also revealed from the findings of Loots et al. that the ASC in kei apple juice shows exceptional stability with very little oxidation of DHA.This is an important characteristic for both the industries of product development and health.Despite DHA being conveniently grasped by erythrocytes and other cells in vivo and reduces to ASC, which is the active form of vitamin C, it is not easily absorbed across the intestinal mucosa and possesses little antiscorbutic activity.Although very little is known about the nutritional composition of the Kei apple, it incorporates both macronutrients such as protein, fat, fibre and carbohydrates and micronutrients such as iron, sodium, calcium, magnesium, zinc and vitamins.Compared to Engenelerophytum magalismontanum, Vangueria infausta, Berchemia discolour, Ximenia caffra and Ximenia americana indigenous fruits, Kei apple also exhibits a high content of sodium and phosphorus.It also shows a content of Total Soluble Solids between 10% and 18%.It is reported that kei apple contains 3.7% pectin.The findings of Wehmeyer indicate that kei has a higher moisture content compared to some wild fruits including wild plum, wild apricot, baobab and sour plum.However, kei apple exhibits a lower protein, fat and carbohydrate content compared to wild plum, baobab and sour plum.The nutritional composition of kei apple shows that it is consisted up of: carbohydrates, protein, vitamin A, iron and calcium.Although the ash content of kei apple is lower than those of other edible wild fruits, it was discovered to be comparative to that reported for the wild apricot.The ascorbic acid level of cape gooseberry is higher than the commercial fruits such as pear, apple, peach, and is nearly comparable with orange and strawberry.However, there are significant differences in ascorbic acid levels among 
reports, which vary from 20 to 95 mg/100 g.The fruit is used as a rich source of provitamin A, minerals, vitamin C and vitamin B-complex.Cape gooseberry has 15% soluble solids and its high content of fructose renders it valuable for diabetics.The phosphorus content is high for a fruit.Its high level of dietary fibre is a necessity wherein fruit pectin plays a role as an intestinal regulator.The quantity of alcohol insoluble solids in fresh juice is 0.62 g/100 g.The total acid content is 0.9–1.0% and the pH is low in cape gooseberry juice.Total sugar content is estimated at 4.9 g/100 g in the juice and the predominant compounds are sucrose and fructose, which are similar to the sugar levels in common juices.The sugar levels in common juices are: 9.8% in pear, 7.0% in orange, 11.1% in apple, 8.5% in peach, and 5.7% in strawberry.The lipid composition of the Colombian cultivar of cape gooseberry shows that its pulp/peel oil is characterised by a large amount of saturated fatty acids.Saturated fatty acids are identified in high quantities in all lipid classes, especially monoacylglycerols which are characterised by a high content of palmitic acid.The cape gooseberry seed possesses 200 mg/100 g oil, which consists of numerous fatty acids, namely linoleic acid, oleic acid, palmitic acid, ??-,linolenic acid and palmitoleic acid.Hence the fatty acid composition and high amounts of polyunsaturated fatty acids found in the fruit oil render it ideal for the diet.The plant sterol levels are high and there are no remarkable differences between that of whole berry oils and seed oil and the composition when the pulp/peel oil which is characterised by a higher content of sterols.Cape gooseberry oil is characterised by a high level of vitamin K1, and consists of more than 0.2% of the total lipids in the pulp/peel oil.The content of vitamin K1 content is very low in most foods, and the majority of the vitamin is found in a few green and leafy vegetables.The fruit is also used as a good source of vitamins A, C, E and B complex in addition to minerals, tocopherols and carotenoids, as suggested by Bravo et al.Marula fruit juice has a very high content of vitamin C in the fresh fruit, thereby serving as an essential source of vitamin C for many rural people.Although the lowest values of vitamin C in marula were similar to the level of vitamin C in some fruits including orange juice, they are still greater compared to other citrus juices.Hiwilepo-van Hal et al. reported a very high level of ascorbic acid in marula fruit juice, between 700 and 2100 mg/100 g, which was 10 times higher compared to orange and pomegranate juices.Furthermore, Hiwilepo-van Hal et al. point out that the vitamin C content of marula fruits in Nigeria was twice that of those found in Botswana.The vitamin C composition of the same fruit from southern Africa shows some variations which could be due to a variation in the genotypes or environmental conditions during production, the place of origin, soil, climate, ripening stage of the fruits and the time that lapsed after harvesting.The origins of this variation are still undocumented and need to be investigated, since genetic change of this magnitude could be of great significance for the domestication programmes.However, studies conducted by Hiwilepo-van Hal et al. 
reveal that the variation in vitamin C content is due to differences among the clones of marula and the fruit ripening stages.Except for guava, most of the fruits, including grapes, oranges, apple, lemon and papaya have a lower vitamin C content than marula fruit as highlighted in Table 4."The nutritional quality of several selctions of marula has been reported by Thiong'o et al. "Vitamin C values in the pulp of marula varies from 90 to 300 mg/100 g of fresh matter when sugar level ranges from 7 to 11% sucrose. "Total acidity is approximatively 2%, and the nuts consisted of more than 50% fat. "Most minerals are present only in small quantities. "The seeds of marula are high in protein and fat and constitute an essential emergency supplement because of their high nutritive value and high oil level with a very good nutritional ratio of saturated to unsaturated fatty acids. "The endocarp consists of 28% protein, 57% oil, and an energy value of 2700 kJ per 100 g. "Fatty acid and amino acid profiles of marula nuts showed a high content of both oleic and palmitic acids, and good stability of the oil. "The amino acid of marula is rich in glutamic acid and low in lysine. "Nutritional significance is related to the higher protein content in the skin and pulp portion, although the variations occur only up to two-fold.The vitamin C content of monkey orange fruits varies from 34.2 mg/100 g to 88 mg/100 g.In comparison with other fruits, the reported maximum vitamin C content for monkey orange is similar to that of marula, baobab, oranges and strawberries.The mean vitamin C content of monkey orange exhibits a higher content than that of other species.The noted minimum and maximum macronutrient and micronutrient composition of monkey orange species differs between and within species.There is a significant difference between the minimum and the maximum carbohydrate levels within a species of monkey orange.S. innocua exhibits the highest total carbohydrate level variation: 15.4 g/100 g to 61 g/100 g.This variation appears to result from inaccuracies of the applied methodology as carbohydrates are measured by an indirect method, which is the difference method.Total sugars are estimated at 28.2 g/100 g and the most predominant sugar is sucrose, a disaccharide, followed by the monosaccharides, glucose and fructose.However, the protein content varied from 0.3 g/100 g to 12.8 g/100 g in S. innocua and the outlier values are attributed to small sample sizes used during the analysis.There are no reports available on the amino acid profile of monkey orange species.A remarkable variability in fat content was also noted for all the cultivars ranging from 0.3 g/100 g to 20 g/100 g.S. spinosa contains 31.2 g/100 g fat content, which is the maximum value for fruits of the same species.The large variation in fat level between and within S. spinosa and other species could be due to the sample size and the adopted analytical methodology.The energy values differed from 1315.4 kJ/100 g to 2083.6 kJ/100 g for all four species.The variations in values among the studies and from species to species could be attributed to the use of different coefficients for computing the energy values.S. spinosais ranked superior to other monkey orange varieties in a comparative study.S. spinosa contained higher energy levels 1923 kJ/100 g than those of ber and baobab.The crude fibre levels varied from 2.5 g/100 g to 22.2 g/100 g in the monkey orange species.The fibre level of S. spinosa is comparable to that of S. 
innocua.Higher fibre content and micronutrient levels could be attributed to the high micronutrient levels of S. innocua and S. spinosa.The ash level of the four monkey orange species were reported to be 0.5 g/100 g for S. innocua, 33.34 g/100 g for S. cocculoides and 4.7 g/100 g for S. innocua.The low ash content for S. spinosa, S. cocculoides and S. pungens could be linked to the soil composition and micro climate at the sampling location.The variation in moisture level was high in S. innocua: 60% to 91%.The information on sample preparation is not documented and the changes in moisture content could also be explained by the difficulty in obtaining the juice and or flesh due to the stickability of the mesocarp to the endocarp.According to Ngadze et al., the high fibre and fat content of monkey orange can be attributed to the contamination of the edible portion with seed material.Although the variation in moisture content levels was lower within the three species than between the cultivars of monkey orange, the higher moisture content impacts the shelf life if the fruits are inappropriately stored.The mineral content of monkey orange showed that the S. innocua is rich in Cu, Na and Zn, while S. cocculoides showed the highest Fe content.The reason for the wide variation in mineral content between and within the species was explained as being due to the phenotypic changes, climatic differences and soil composition.The data currently available are not sufficient to effectively compare the mineral content among the species.S. spinosa was identified as the highest source of Fe and Zn when compared to the other indigenous fruits such as baobab, marula, and the medlar.Although the mineral levels showed extreme differences in minimum and maximum values, in many studies the data often stem from only one source without a description of the collection methods.Based on the available data, it is recommended that the S. innocua and S. cocculoides are considered essential sources of Zn and Fe respectively.This requires further research to validate the mineral content and bioaccessibility studies of minerals of S. innocua and S. cocculoides since they possess the potential to complement local diets that are deficient in minerals.Thereafter, research on the bioavailability of Fe and Zn is required to assess the extent to which the species could contribute to enhancing human nutrition.Other vitamins, including thiamine and riboflavin are assayed.Wide variations between thiamine and riboflavin levels of S. pungens flesh adjacent to the shell, on the one hand, and on the other hand, adjacent to the seed, were observed.However, a thiamine content of 2.74 mg/100 g and riboflavin, 1.85 mg/100 g, were found in the flesh inside the shell, whereas a thiamine content of 0.10 mg/100 g and riboflavin, 0.74 mg/100 g, were found in the flesh surrounding the seeds.The reasons for the differences between the content of thiamine and riboflavin in the flesh around the seed and the flesh around the shell were not clear However, a lower thiamine level is observed in the pulp of S. cocculoides, S. pungens and S. 
spinosa.No information was reported regarding the riboflavin content for the other three species.Kei apple has an apricot-textured, juicy, highly acidic flesh with 5–15 seeds arranged in double rings in the centre and a frank taste.A few selections of Kei apple exhibited high Total Soluble Solids/Total Titrable Acidity ratios indicating that the taste of these selections is more acceptable.The maturity of the kei apple is reached after 90 days from full flower opening.Its ripeness is characterised by the full development of colour.Kei apple nectar is acceptable in terms of the sensorial attributes on a scale from 1 to 5 such as taste, colour and appearance.While varying the level of added sugar from 0 to 26% during the drying, the taste and colour of dried kei apple fruit remain acceptable using the scale from 1 to 5.Cape gooseberry is tomato-like in flavour and appearance, although the taste is much richer with a hint of tropical luxuriance.Although the cape gooseberry is generally accepted by consumers, knowledge about its flavour is ambiguous.Nonetheless, only a few studies exist concerning the volatile composition and aroma precursors of the cape gooseberry.The sensorial study of cape gooseberry confirmed a total of 83 volatile compounds identified and quantified in the fruit pulp, namely 23 esters, 21 alcohols, 11 terpenes, 8 ketones, 8 acids, 6 lactones, 4 aldehydes, and 2 miscellaneous.The main aroma components of the cape gooseberry comprise γ-hexalactone, benzyl alcohol, dimethylvinylcarbinol, 1-butanol, 2-methyl-1-butanol, cuminol, γ-octalactone, and 1-hexanol.The odour activity values showed that γ-octalactone, γ-hexalactone, ethyl octanoate, 2-heptanone, nonanal, hexanal, citronellol, 2-methyl-1-butanol, benzyl alcohol, phenethyl alcohol, 1-heptanol, ethyl decanoate, and 1-butanol were the dominant aroma components of cape gooseberry.Among these, γ-octalactone was the most powerful contributor to the aroma of the cape gooseberry.Therefore, cape gooseberry possesses indicator odorants that contribute to the overall aroma that can also be used as quality-freshness markers of this fruit.Although research studies have been undertaken regarding the composition of marula fruit, very little is known about the acceptability and preference for the texture and flavour characteristics of the product.Thus, the marula, which forms part of the diet of the Pedi people in South Africa, was classified as three different types: sweet and palatable; sour and palatable; undesirable due to its objectionable odour.The findings reveal that the marula juices of different selections vary little in respect of odour, flavour and aftertaste.In addition, the main characteristic regarding the flavour is the extreme sourness of all the samples in combination with an absence of sweetness.However, only the Namibian selections of marula fruit growing in sandy soil in a low rainfall region are known to possess an acceptable sweet/sour balance.Besides the fact that the fruitiness is described as the combination of odour and flavour, the odour and flavour of marula juice received an average score of 27 and 30 respectively on a 60 point scale.The mouthfeel attributes of the juice from the Pretoria selection are namely thick, grainy and smooth, whereas other selections are not.The juice from the Pretoria selection of marula has a grainy, pear-like structure and is thicker in texture than the others while its smoothness is notably lower than the other selections.Therefore, the texture of marula juice from the 
Pretoria selection was considered to be more desirable by the panellists than those of the other selections, as confirmed by the findings of the study conducted by Shäfer and McGill.According to Sitrit et al., the fruit exhibits a delicate complex of aroma volatiles, which are identified as a mixture of pineapple, apricot, melon, clove, and citrus.The ripe monkey orange species is characterised by a fleshy, sweet, yellow, very aromatic pulp and contains many hard brown seeds.Wide variations exist in the general description of taste, colour, texture and flavour between and within species.Fruit sweetness is highly related to the sugar composition.It was reported that the degree of monkey orange ripeness based on the sugar profile, influences the taste, which is associated with environmental factors such as soil, geographical location and climatic differences.The presence of organic acids is attributed to the acidic constituents that blend with sugars and lead to the cultivar characteristic blended acid-sweet taste.For instance, S. spinosa contains a malic acid 1.9 g/100 g, succinic acid 0.5 g/100 g and citric acid content of 2.4 g/100 g.Compared to other indigenous sour fruits, S. spinosa showed less citric acid, indicating a more palatable and less sour taste.S. innocua has a bitter taste which could be explained by the presence of tannins; however, tannin concentrations change between and among trees of the same provenance.Ngadze et al. report that the average acidity of S. cocculoides processed juice is 1.13%.The relatively high acidity of monkey orange contributed to the long shelf life of fresh fruits in comparison to other fruits.The partial solubilisation of pectin and cellulose by the endogenous plant enzymes, polygalacturonase, pectinmethylesterase, lyase, and rhamnogalacturonase, during ripening, affects the texture and juiciness of the fruit.Even the consistency varies among cultivars and at the stage of ripeness, monkey orange fruits have a thick gel or juicy texture.The level of pectin solubilisation affects the textural profile of the monkey orange juice: the higher the pectin hydrolysis, the juicier the resulting fruit juice.Ngadze et al. reported the comparison of the sensory properties of sugar plum, baobab, mango and monkey orange juice and concluded that the monkey orange juice was the most preferred by the sensory panellists.On the other hand, monkey orange jam is highly preferred by consumers due to its taste, and the general consumer acceptance of the other monkey orange fruit products is high, according to the available literature searched.Therefore, the sensory studies reveal that potential exists for product development and commercialisation of the species.Fresh monkey orange fruit has a special and delicious blend of complex aroma volatiles, which are observed by consumers as a blend of pineapple, apricot, melon, clove, and citrus.Monkey orange juice is incorporated into cereal porridge to improve the flavour and for vitamin enrichment, according to the data of surveys carried out in Zimbabwe.The most predominant volatiles of ripe S. cocculoides pulp of Malawian provenances are acetate and butyrate esters which highlight a fruity sweet flavour, concurring with consumer descriptions of fruit flavours.The main volatile flavour constituents in the peel of ripe S. 
spinosa fruits are identified as trans-isoeugenol and eugenol, which possess a pungent clove aroma, and p-transanol , while the unripe fruit lacks volatile constituents.There has been a growing interest in the value of polyphenols among researchers and food manufacturers during the past 10 years.This interest is predominantly due to the antioxidant characteristics of polyphenols, their abundance in our diet, and their probable function in the prevention of numerous diseases associated with oxidative stress, including cancer, cardiovascular disease, and neurodegeneration.Although a very limited knowledge exists regarding the pharmacological properties of Kei apple, the phenolic acids are known as the catalyst of the nutrient uptake, enzyme activity, protein synthesis and structural components, as mentioned in Table 8.Thus, caffeic acid blocks the biosynthesis of leukotrienes which are the constituents involved in immune regulation diseases, asthma, and allergic reactions.Kei apple contains 5.4 mM/L of flavonoids, which is high in comparison to that of grape, strawberry, and orange juices.This suggests that Kei apple has antimicrobial and anti-inflammatory properties, since the flavonoids exhibit numerous biological properties.Kei apple can be used in treating cytotoxicity and antitumor activity.The best characteristic of almost every group of flavonoids is their ability to act as powerful antioxidants, which safeguards the human body from free radicals and reactive oxygen species, thereby preventing colon cancer.The health benefits of cape gooseberry are mentioned in Table 8.A high content of vitamin K1 confers the most unique health promoting characteristic of cape gooseberry oil.Vitamin K acts as a coenzyme and catalyses the synthesis of a number of proteins participating in blood clotting and bone metabolism.This means that the vitamin K of cape gooseberry decreases the risk of heart disease, kills cancer cells, and improves skin health, thereby possessing antioxidant properties.High phylloquinone consumption is a marker of a dietary and lifestyle pattern that reduces the risk of coronary heart disease.Cape gooseberry oil seems to be nutritionally valuable since the high level of linoleic acid prevents cardiovascular diseases and linoleic acid is considered to be the precursor of the structural constituents of plasma membranes and some metabolic regulatory constituents.The level of tocopherols renders the oil of cape gooseberry to be nutritionally valuable.Cape gooseberry is used in folk medicine for treating disease, including malaria, asthma, hepatitis, dermatitis, and rheumatism, while it serves as a diuretic.Several medicinal properties are attributed to cape gooseberry, for example, it is antiasthmatic, antiseptic, and a strengthener of the optic nerve, while it is used in the treatment of throat infections and the elimination of intestinal parasites, amoebas and albumin from the kidneys.The aforesaid is in agreement with the findings of Bravo et al.It also promotes anti-ulcer activity and is powerful in lowering cholesterol content.Puente et al. 
report that cape gooseberry also exhibits antidiabetic properties, and recommend the consumption of five fruits a day.Studies indicate that eating cape gooseberry lowers blood glucose after 90 min postprandial in young adults, resulting in a greater hypoglycaemic impact after this period.So far, there are no studies that indicate the possible adverse impacts of cape gooseberry.The health benefits of marula fruit are summarised in Table 8.The stem-bark decoctions of marula are commonly used by the Zulu people in South Africa as enemas for diarrhoea treatment.Thus, 300 ml doses of stem-bark decoctions are taken for the treatment of dysentery and diarrhoea.In addition, the Zulu traditional practitioners wash in marula stem-bark decoctions before treating patients infected with gangrenous rectitis; they also give the decoction to their patients.The chewing of fresh leaves of marula and the swallowing of the astringent juice are reported to help with indigestion in many rural African communities.The stem-bark of marula is also used in the treatment of proctitis.Ojewole et al. also report that the Venda people of South Africa use the stem-bark of marula for the treatment of fever, stomach ailments and ulcers.The roots have been used for an array of human ailments such as sore eyes in Zimbabwe.In East Africa, the roots are used as an ingredient in an alcoholic phytomedicine administered to treat an internal ailment called ‘kati’, when the bark is used for stomach disorders.The Hausas of West Africa administer a cold infusion of marula stem-bark as a medication for dysentery.The leaves and roots are used in Tanzanian folk medicine to treat fungal infections and snake bites.Marula is used in Cameroon as a traditional medicine for diabetes mellitus.The leaves are used in Ghana for snake bites and filarial pruritus; while the stem-bark, roots and fruit of the plant are used for the treatment of pharyngitis, splenomegaly and goitre, respectively.Marula is used as an ethnoveterinary remedy to treat diarrhoea and fractures in South Africa.Monkey orange fruits are employed in traditional medicine for the treatment of sexually transmitted diseases.The Zulus use the monkey orange green fruits as a snakebite antidote.The pulp of monkey orange is nutritious, containing particularly high contents of Cu, thiamin, and nicotinic acid, all of which are 20% higher than the mean daily requirement, as reported by Sitrit et al.The different uses of kei apple are summarised in Table 9.Although the fruit is too acidic to be eaten directly from the tree, it is traditionally served as a dessert, where it is cut in half, peeled, seeded, sprinkled with sugar and allowed to stand for a few hours before serving.It is also incorporated into fruit salads, made into syrups, shortcake, jam, jelly, drinks and pickles and dried fruit.Gore attempted to manufacture a functional ready-to-drink beverage, kei apple flavoured juice.However, the problems associated with this process are mainly the bitter taste and the high level of acidity of Kei apple fruit.Since Kei apple itself has a distinct smell, it is required that either the consumer becomes accustomed to it and likes it for its uniqueness or it must be masked, using a masking agent such as a flavour/aroma.As it has been established that food and beverages that are bitter, acidic, or astringent tend to be rejected by consumers, this may have a detrimental economic effect.This explains why Gore used flavouring agents.As a result, a kei apple flavoured beverage is feasible and 
acceptable by the consumers, as revealed by Gore.While there is no difference between vanilla and mint-vanilla flavours, a slight decrease of total polyphenols, greater acidity and loss of vitamin C in the resulting beverage is noted.However, there is a significant gap in the knowledge about the processing of 100% kei apple juice without the addition of flavouring agents.Kei apple trees are cultivated along a border or used to form an impenetrable hedge around a garden to keep unwanted animals and people out.DAFF also reports that the leaves are used as fodder and animals such as monkeys, antelope and baboons also like the fruit.Cape gooseberry juice represents 72.6% of the berry weight.Currently, various products are processed from the fruit of cape gooseberry, including jams, raisins and chocolate-covered candies.The fruit can also be processed for juice, pomace and other products sweetened with sugar as a snack.In European markets, this fruit is used as ornaments in meals, salads, desserts and cakes.The juice of the cape gooseberry ripe fruit is high in pectinase, thus reducing the cost of manufacturing jams and other similar preparations.Numerous products, including beer, juice, jam and jelly, have been developed from the mesocarp of marula and are favourably marketed, the most current being a marula liqueur, conserves, dry fruit rolls and alcoholic beverages.Despite the traditional and commercial utilisation, the flavour compounds, and specifically the pericarp, remain poorly studied to date.Ripe marula fruit is consumed by biting or cutting through the thick leathery skin and sucking the juice or chewing the mucilaginous flesh after removing the skin.Certain tribes, such as the Pedi, produce a relish from the leaves of marula.The Zulu people of South Africa consider marula fruit to be a potent insecticide.In several parts of southern Africa, the fruit is used for brewing beer and distilling spirits.In Mozambique, South Africa and Zambia, marula is used to flavour liqueur.The gum acquired from the marula tree is rich in tannins, and thus, it is used in the manufacture of an ink substitute.Zimbabwean and South African villagers benefit from the customary use of S. 
birrea wood in the manufacture of dishes, mealie stamping mortars, drums, toys, curios, divining bowls and carvings. The peel from marula fruit is very useful in the manufacture of oil for cosmetic purposes. Marula is liked by many game animals, and it is fed to livestock such as goats and sheep. It is also used as a sweetener of local foods and a curdling agent for milk. Its seed is rich in protein, oil, magnesium, phosphorus and potassium, rendering it essential to nutrition in Africa. Petje reports that the seeds are eaten, dried or ground and incorporated into soups, stews and vegetables, to which they are reputed to give a delicious flavour. Furthermore, fresh seeds are also incorporated into freshly-boiled meat, which is then eaten immediately. The seeds are eaten raw or roasted as nuts, particularly by children. They taste delicious, and many indigenous people consider them to be a delicacy, a ‘Food of Kings’. The seeds are high in protein and fat and represent an essential emergency supplement. Thus, the seeds are generally used to supplement nutrition during winter or periods of drought, being pounded and mixed with vegetables or meat. The seed, also known as the nut, possesses a high nutritive content and a high oil level with a very good nutritional ratio of saturated to unsaturated fatty acids. The antioxidant activity remains after pasteurisation, and only 14% is lost after 4 weeks of storage at −18 °C. Phytochemicals are non-nutritive, biologically active compounds, for instance phenolic acids, flavonoids and carotenoids, to which health-protective characteristics are attributed, including preventative actions against ageing, inflammation and certain cancers. The protective impacts of phytochemicals are mainly attributed to their action as free radical scavengers, hydrogen-donating compounds, singlet oxygen quenchers and/or metal chelators. In Southern Africa, monkey orange fruits are generally dried by means of fire and direct sunlight for fruit rolls and leathers, or pounded into flour used to make porridge (known locally as “bozo” in Mozambique) or re-cooked as a sauce. Ngadze et al.
highlight that sun-dried monkey orange pulp can be stored for two months to five years, which renders thermal drying an ideal preservation method because of its affordability for rural communities and as a means to secure continuous fruit availability into the next season. Due to their high sugar level, monkey oranges are sticky and difficult to handle in dryers, and they possess the potential for caramelisation, which turns the fruit a brown or darker colour and has a negative effect on the sensory quality characteristics. Hence, appropriate drying methods are required to obtain a product with sufficiently acceptable sensory characteristics for the consumer. To the best of our knowledge, there is limited scientific information about the appropriateness of drying methods for monkey orange fruit, as confirmed by Ngadze et al. Monkey orange juice is extracted and processed manually by mashing with a handheld whisk or wooden spoon. The pulp is diluted with water, then heated to 92 °C for 3 min to initiate the precipitation of colloidal constituents, which can be removed later by filtration. The filtrate is acquired by sieving the pulp through a muslin cloth and is used in the processing of the juice, while the residue is retained for jam making. The juice is expected to retain a notable quantity of antioxidants after processing. Clear monkey orange juice is relished by consumers compared with other indigenous fruit juices on the basis of taste, mouth feel, flavour, and sweetness. To date, few studies have been carried out on nutritional and sensorial profiling of fresh and pasteurised monkey orange juice; thus further work is suggested in this regard. In fruit processing, preserves such as jams, sauces, pickles and chutneys are paramount to decreasing the loss of fresh produce. Jams and marmalades are traditional delicacies and are processed from monkey orange fruit on a small scale, where the type of preserve depends on the species. Jam processes vary from author to author in the literature reviewed. No pectin is added, since monkey orange pectin depolymerisation contributes to jams setting and spreading well, a characteristic that contributes to the sensory quality of monkey orange jam. In comparison with other indigenous jams, consumers prefer monkey orange jam because of its sweet taste and delicate flavour. Nonetheless, thermal processing of the fruit can cause undesirable changes in the sensorial, functional and nutritional profile through the destruction of phenolic antioxidants. Therefore, the retention or loss of phenolic compounds, nutrients and organoleptic properties during jam processing should be further investigated. Although no reports on the toxicological properties of kei apple and cape gooseberry exist to date, some authors have examined the toxicological properties of marula and monkey orange fruits. Ojewole et al.
reveal that the intraperitoneal injection of graded doses of marula stem-bark aqueous and methanolic extracts in mice is safe and/or non-toxic in comparison with some known poisonous plant extracts. Hexane, methanol and water extracts of marula stem-bark do not show any toxic effects in the brine shrimp lethality test. Acute toxicity testing in animals shows that the LD50 values of marula extracts exceed 5000 mg/kg body weight, and the animals remain alive throughout the 4-day suppressive test. In addition, the Ames test of marula extracts gives negative results, indicating that the extracts are not mutagenic. Nevertheless, in vitro toxicity assays of chronic marula extract use raise some concerns. In renal epithelial cell culture studies, marula stem-bark extracts induce marked reductions in cell viability after 48 and 72 h of treatment. LLC-PK1 cells are observed to be more sensitive to marula stem-bark extract treatment than MDBK cells. Marula stem-bark extract reduces cell viability, with proximal tubule cells showing greater susceptibility. This reduction is attributed to weak acids and phenolic compounds in the plant's stem-bark extract, which tend to decrease mitochondrial activity through mitochondrial depolarisation. However, whereas weak acids and phenolic compounds are secreted by the kidneys in an in vivo model, in the cultured in vitro system the proximal and distal tubule cells are continuously exposed to phenolic components for up to 72 h. This explains why chronic treatment with marula stem-bark extract does not have any notable impact on renal fluid and electrolyte handling in non-diabetic and STZ-treated diabetic rats. Although the pulp of some ripe monkey oranges is not toxic, some wild cultivars such as S.
stuhlmanii can be toxic even when ripe, which is why local people cook the pulp before eating it, denaturing any toxins. The crushed pulp of this cultivar is used in the Kruger National Park of South Africa as a fish poison due to its saponin component. Attempts to cultivate kei apple, cape gooseberry, marula and monkey orange in the southern African region have not produced any tangible results, and they are therefore identified as underutilised fruits. This is due to the failure of their domestication. According to key informants on crop domestication, the cultivated plots of these underutilised fruits are not properly managed and secured. To date, no local cultivation trials of these fruits have been carried out, and they are known only as wild crops. The predominant problems experienced by rural populations concerning the cultivation of the underutilised fruits as crops are: soil quality; labour inputs; availability of water and land; slow growth cycles and low yields; and the prevalent instant-cash economic culture. In some villages the soil is described as inadequate because it is too sandy. The cultivation of the underutilised fruits is also considered to be very labour-intensive, as it needs continual weeding. Water availability is also considered to be a challenge, even though the underutilised plants are mainly characterised as drought-tolerant or drought-resistant. According to key informants on fruit domestication, the establishment of irrigation infrastructure needs to be associated with other cultivated crops such as vegetable gardens. Predominantly in Botswana and Namibia, land availability is also an essential concern owing to the large size of the tuber. Furthermore, the processes of continual relocation have decreased land availability in the areas surrounding the villages, thereby increasing competition for arable land, which is used mainly for higher-yielding crops such as sorghum and maize. The slow growth cycles of the underutilised plants and their low yields are also considered constraints on the domestication of kei apple, cape gooseberry, marula and monkey orange, given that these are food crops and that communities are usually anxious to see immediate results. Economic interests are also mentioned as a constraint on fruit domestication. According to a key informant in Botswana, domestication of the underutilised fruits means that people would have to wait longer for income in comparison with other cash crops. The current predominance of occasional work in place of permanent employment has created a culture in which people expect instant cash upon completing a manual task. For this reason, underutilised fruits that grow spontaneously and provide instant payment are viewed more favourably than their domesticated counterparts. Domestication may also encourage the theft of underutilised fruits for instant cash. Kei apple, cape gooseberry, marula, and monkey orange fruits possess the potential to impart health benefits and improve the nutritional status of the rural population, thanks to their micronutrient and macronutrient content. These underutilised fruits have desirable functional properties, and their nutritional composition is comparable to, and in some cases better than, that of their exotic and indigenous counterparts. This review demonstrates that these fruit trees are able to supplement the diet of many rural families by providing essential micronutrients and health benefits as well as
serve as an alternative for a cash income, especially in times of famine.In this light, the wide distribution of the kei apple, cape gooseberry, marula, and monkey orange trees in drought prone areas and semi-arid regions, coupled with the fruit nutritional quality, renders the fruit an essential food source, particularly for children and pregnant women.Nonetheless, very little research work has been conducted regarding the added value of processing kei apple, cape gooseberry, marula, monkey orange fruits in southern Africa, in comparison with many exotic fruit species.Decreasing the loss of nutrients, antioxidants, organoleptic properties and reducing the content of potential toxic alkaloids during processing is essential in order to obtain a nutritious fruit product, since processing affects the overall quality of fruit products.The evidence collected in this review highlights that the effect of processing is not well documented, and that the assessment of the contribution by kei apple, cape gooseberry, marula, and monkey orange to the nutrient intake of regular consumers is inaccurate.Thus, the optimisation of the state of the art processing techniques and assays of the nutritional and sensorial quality of kei apple, cape gooseberry, marula, and monkey orange products is crucial for the implementation of preservation procedures and the consequent promotion of kei apple, cape gooseberry, marula and monkey orange consumption.Therefore, improving the production processes of foods through the optimisation of preservation techniques as a sustainable solution to malnutrition in rural areas of transition, needs to be investigated. | The underutilised fruits including kei apple, cape gooseberry, marula and monkey orange are fruits widely found in the southern African region. These fruits have the potential to cut to the heart of Africa's great problems in rural development, hunger, malnutrition, and gender inequality. Kei apple, cape gooseberry, marula and monkey orange trees are drought resistant or tolerant plants. Therefore, the domestication of the underutilised fruits found in the southern African region could be considered to be a sustainable solution to enhance the fruit availability, thereby increasing the food security since the global warming currently affects the food production. The fruits are rich in macronutrients, micronutrients, and dietary phytochemicals and have several health benefits. Despite this and the existence of a broad and unlimited niche in terms of the use of these fruits in new product development (food products, medicinal products, etc.), they are mainly processed on a small scale for the production of a few food products. This review also covers food product development from these fruits based on their functional characteristics. |
371 | Complete resection of an anterior mediastinal tumor by total arch replacement and pulmonary artery trunk plasty with a pericardial patch: A case report | Surgery is commonly indicated for both diagnosis and treatment of anterior mediastinal tumors.However, determining the optimal therapeutic strategy is difficult for tumors with substantial invasion, especially lesions adjoining the aortic arch.Total arch replacement is rarely performed for anterior mediastinal tumors, but we previously described a patient with an anterior mediastinal tumor who experienced long-term survival after TAR .Additionally, a few reports have indicated that malignant lymphoma may mimic various diseases such as malignant tumors and aortic aneurysms .In the present case, we performed complete resection of an ML with substantial invasion into the anterior mediastinum.The work has been reported in line with the SCARE criteria .Fumihiro Tanaka, M.D., Ph.D.A 76-year-old man of Asian descent presented to our hospital because of an abnormal chest computed tomography scan showing a 50- × 40-mm anterior mediastinal tumor.This tumor surrounded the left subclavian vein and touched the aortic arch and main pulmonary artery.Fluorodeoxyglucose positron emission tomography showed FDG uptake in the mass, with a maximum standardized uptake value of 36.7.The patient had only a persistent cough with no remarkable medical history.His interleukin-2 receptor level was slightly elevated at 757 U/ml.Although a definite pathological diagnosis of the tumor was difficult to obtain preoperatively, we suspected the tumor to be malignant, such as thymoma or thymic cancer, based on the CT and PET findings.Because of the tumor location, diagnostic procedures were associated with various risks such as dissemination, pneumothorax, and bleeding.We therefore decided to resect the tumor with preparation for TAR for both diagnosis and therapy.The operation was performed in three steps.First, we performed a mediastinal sternotomy.We observed no dissemination.However, the tumor had invaded the subclavian vein, so we resected this vein after adding a transmanubrial approach.The tumor had also invaded the aortic arch and PA trunk.We decided to perform tumor resection under cardiopulmonary bypass.Exfoliation of the distal aorta did not appear possible from the ventral side; therefore, we used a lateral approach to exfoliate the distal side of the aortic invasion.Second, we shifted the patient to the right lateral decubitus position and performed an anterior lateral incision.We performed exfoliation on the distal side of the aortic arch, securing the tumor margin, and partially resected the left upper lobe to treat the tumor invasion.Third, we shifted the patient to the dorsal position and implanted an artificial cardiopulmonary device.We resected the ascending aorta at the proximal site of the tumor.We then sequentially anastomosed the proximal site of an aortic graft with a four-branched graft.The descending aorta was resected at the distal site of tumor invasion.We performed PA trunk resection, securing the tumor margin.Complete en bloc resection of the PA trunk and aortic arch was performed.PA trunk reconstruction was performed using a pericardial patch.We then anastomosed the distal site of the aortic graft with the four-branched graft.Antegrade cerebral perfusion was performed through the graft as the distal anastomosis was completed.We performed TAR and PA trunk plasty with a pericardial patch.The operation was successful, with no major adverse 
events. However, two minor adverse events occurred: anesthesia of the left hand caused by congestion after resection of the left subclavian vein, and an intestinal peristalsis disorder induced by cutting of the vagus nerve. The anesthesia of the left hand lasted about 3 months, and the intestinal peristalsis disorder lasted 1.5 months. The patient was discharged 2.5 months postoperatively. Pathologically, immunohistochemical staining showed that the malignant cells were positive for CD20, CD30, and CD79a but poorly stained for CD3, AE1/AE3, and CAM 5.2. The MIB-1 labeling index was approximately 80%. The pathologic examination provided a diagnosis of diffuse large B-cell lymphoma. Six months postoperatively, we detected local recurrence by PET and CT. Chemotherapy was started; however, only one course was administered because of the development of pneumonia. The pneumonia was treated with antibiotics for 2 weeks in the hospital. The patient then underwent radiation therapy for the local recurrence. After this treatment, we performed PET and CT examinations every 6 months. Twenty months postoperatively, the local recurrence was controlled and the patient had no distant metastasis. This case illustrates two important points. The first is that we performed complete resection of a malignant tumor through TAR and PA trunk plasty with a pericardial patch. Although rare, some reports have described TAR for malignant tumors, such as lung cancer or sarcoma. However, few reports have described TAR with main PA trunk plasty with a pericardial patch. In the present case, we performed this technique with no major adverse events. If oncologically complete resection is preferable for tumors with substantial invasion, as in the present study, complete resection should be attempted even if the surgery involves replacement of substantial vascular tissue or combined resection of other organs. The second important point is that this case involved ML. This patient appeared to have primary B-cell lymphoma because no distant metastasis was detected, although primary B-cell lymphoma constitutes only 2%–4% of non-Hodgkin lymphomas. Major surgery, especially TAR, should be carefully performed in such cases. Treatment of ML generally involves chemotherapy. If we had known that this patient had lymphoma rather than another pathology, we would not have operated. However, preoperative diagnosis was impossible in our case. Moreover, even if frozen section is performed for diagnosis, discrimination between ML and thymoma is frequently difficult. We speculated that a second operation performed to distinguish between these two conditions would have been very difficult in our patient because we had to exfoliate substantial blood vessels, such as the aortic arch and main PA. Although rare, a few reports have described TAR for ML. In this case, we performed complete resection of an anterior mediastinal ML with TAR and PA trunk plasty using a pericardial patch. Great effort is required to achieve a correct diagnosis in such cases, although this was impossible in the present case. Fortunately, however, this patient remained alive for 20 months postoperatively with controlled disease despite having undergone a radical operation. Even if a patient seems to have no indication for an operation, we must keep in mind that long-term survival can be achieved in rare cases, as described in the present study. Not commissioned, externally peer reviewed. The Ethics Committee of the University of Occupational and Environmental Health Japan approved this study. The authors
declare no financial support. Yasuhiro Chikaishi; Writing the paper, Study design. Hiroki Matsumiya; Other, Masatoshi Kanayama; Other, Akihiro Taira; Other, Yusuke Nabe; Other, Shinji Shinohara; Other, Taiji Kuwata; Other, Masaru Takenaka; Other, Soichi Oka; Other, Ayako Hirai; Other, Koji Kuroda; Other, Naoko Imanishi; Other, Yoshinobu Ichiki; Other, Yosuke Nishimura; Other, Fumihiro Tanaka; Study design, and all authors read and approved the final manuscript. | Introduction: Patients with undiagnosed anterior mediastinal tumors commonly undergo surgery for diagnosis and treatment. However, determining the optimal therapeutic strategy is difficult for tumors with substantial invasion, such as lesions touching the aortic arch (AA). Case presentation: A 76-year-old man of Asian descent presented to our hospital because chest computed tomography (CT) revealed an anterior mediastinal tumor. This tumor surrounded the left subclavian vein and touched the AA. We suspected the tumor to be malignant. We therefore decided to resect the tumor with preparation for total arch replacement (TAR). The operation was performed in three steps. First, we performed a mediastinal sternotomy. However, the tumor had invaded the subclavian vein, so we resected this vein after adding a transmanubrial approach. Because the tumor had also invaded the AA, a further step was needed. Second, we shifted the patient to the right lateral decubitus position. We performed partial resection of the left upper lobe and exfoliated the distal AA. Third, we shifted the patient to the dorsal position and implanted an artificial cardiopulmonary device, after which we performed TAR and pulmonary artery (PA) trunk plasty with a pericardial patch. The operation was successful, with no major adverse events. Pathologically, the tumor was diagnosed as diffuse large B-cell lymphoma. Discussion: If oncologically complete resection is preferable for tumors with substantial invasion, complete resection should be attempted even if the surgery is difficult. Conclusion: We performed complete resection of an anterior mediastinal tumor with TAR and PA trunk plasty using a pericardial patch. |
372 | Somatosensory function and pain in extremely preterm young adults from the UK EPICure cohort: sex-dependent differences and impact of neonatal surgery | Participants were recruited from the UK EPICure population-based cohort of infants born extremely preterm in the UK and Ireland from March to December 1995.Although extreme preterm birth is defined as <28 weeks gestation, the EPICure cohort restricted recruitment to earlier high-risk births at <26 weeks gestation.Of 811 infants of the correct gestational age admitted to neonatal intensive care, 497 died in hospital and 314 were discharged home.25,Participation in longitudinal evaluation at 30 months,25 6 yr,26 11 yr,27 and at 19 yr has been previously described.22,The current study was approved by the National Research Ethics Committee Hampshire ‘A’, described on the cohort website, and potential participants received written information.Non-participants had previously asked not to be contacted, declined participation, or were uncontactable.EP participants in EPICure@19 did not differ in birth weight, gestational age, or sex from those lost to follow-up, but had higher mean full-scale intelligence quotient scores at earlier assessments and higher socio-economic backgrounds than non-participants.22,After giving written consent, participants underwent a 2 day evaluation at the University College London Hospital, Clinical Research Facility between February 2014 and October 2015.Pain and somatosensory function were evaluated in 102 EP and 48 term-born control young adults in a dedicated sensory testing facility at University College London Great Ormond Street Institute of Child Health.Additional data related to neonatal variables, participant characteristics, and questionnaires at 18–20 yr were extracted from the main EPICure database.Data related to conditioned pain modulation are reported in the companion manuscript.Reporting is in accordance with the STROBE Checklist for cohort studies.A standardised clinical pain history included: site, intensity, frequency, and duration of recurrent pain; impact on function and activity; interference with usual activity due to recurrent pain; and analgesic use.Overall pain report was graded by a pain clinician.Participants used visual analogue scales to report current pain intensity, interference with usual activities because of pain, and anticipatory anxiety before testing.29,Somatosensory function was assessed with a standardised protocol30,31 adapted to match previous preterm-born cohort studies.11,13,Evaluation was performed by a single investigator in the same temperature-controlled room with standardised verbal instructions.Before data acquisition, tests were demonstrated and participants advised they could decline or cease testing at any point.Testing was performed on the thenar eminence of the self-reported non-dominant hand to evaluate generalised thresholds and then on the chest wall.Localised testing adjacent to neonatal scars was restricted to thoracic dermatomes.Participants without scars had testing on the lateral chest wall within the second to sixth thoracic dermatomes.Thermal thresholds were not obtained in two of 38 EP females because of equipment malfunction.The need to ask about prior surgery, and the site and nature of neonatal scars, precluded the investigator being blinded to group.Modalities included: i) cool and warm detection, cold and heat pain thresholds using a handheld 18×18 mm contact thermode to match testing at 11 yr;13 ii) mechanical detection threshold with von 
Frey hairs; iii) mechanical pricking pain threshold with ascending PinPrick Stimulators until discomfort/pain was rated (0–10), then after a 1 s⁻¹ train of 10 repeated stimuli to calculate the wind-up ratio;11 and iv) pressure pain threshold (mean of three values) on the middle phalanx of the middle finger with a hand-held 1 cm² algometer and optical feedback. As static thermal thresholds demonstrated reduced sensitivity in children after preterm birth, but a prolonged thermal stimulus unmasked increased sensitivity,11 cold pressor testing was also evaluated. The hand was immersed to the wrist with the fingers spread into a 5°C circulating water bath and immersion duration recorded. Self-report questionnaires included: i) Pain Catastrophizing Scale32; ii) Diagnostic and Statistical Manual (DSM) anxiety t-score and internalising problems t-score extracted from the Achenbach Adult Self-Report Questionnaire33; and iii) FSIQ using the Wechsler Abbreviated Scale of Intelligence Second Edition.34, We acquired 3D T1-weighted MPRAGE volumes at 1 mm isotropic resolution on a Philips 3T Achieva MRI scanner and carried out a multi-class tissue segmentation of the white matter volume using combined multi-atlas and Gaussian mixture model segmentation routines.35, This method produces a state-of-the-art segmentation and region labelling by voxel-wise voting between several propagated atlases guided by the local image similarity. This algorithm automatically estimates thalamus and amygdala volumes. See Supplementary material for pathway-specific tissue properties. As this descriptive cohort study aimed to recruit the maximum available subjects, no a priori power calculation was performed. Statistically significant group differences in thermal thresholds were found when 43 EP and 44 TC participants from the current cohort were tested at age 11 yr.13, Statistical analyses included: group-wise comparisons with Mann–Whitney U-test or two-tailed Student's t-test; two-way ANOVA with group and sex as variables for normally-distributed or log-transformed mechanical data36; two-sided χ2 test for categorical data; two-tailed Spearman's rho for bivariate correlations; and log rank Mantel–Cox for survival curves. Truncated regression models evaluated generalised thermal sensitivity with higher values reflecting increased thermal tolerance. For quantitative sensory testing profiles, sex-matched Z-transformed scores were calculated (z = [individual participant value − mean of the sex-matched TC group]/SD of the sex-matched TC group) and adjusted so >0 indicates increased sensitivity and <0 decreased sensitivity.30, An illustrative sketch of this calculation is given after this entry. Analyses were performed with SPSS Version 23 and Prism Version 7. P values are reported with Bonferroni adjustment for multiple comparisons. One hundred and two EP and 48 age- and sex-matched TC participants underwent pain and somatosensory assessment. EP participants had lower height and weight, but the same BMI as TC. FSIQ scores were lower in the EP group, but did not differ between QST and remaining EPICure@19 participants.22, Thirty EP participants had required neonatal surgery. The surgery subgroup had longer initial hospitalisation, but did not differ in birth weight, gestational age or risk index score on neonatal ICU admission. QST results were excluded because of variability in three EP males. Chest wall testing was declined in three EP subjects, and one EP female with Raynaud's symptoms declined cold evaluation. No participant reported distress during testing. Thenar eminence sensitivity for all thermal modalities was reduced in the EP vs TC group. Consistent with previous group differences at 11 yr,13 median CPT was lower and HPT was higher in EP
vs TC participants.This was on a background of age-related increase in threshold in both TC and EP participants.Within-subject sensitivity to heat and cold was inversely correlated in both TC and EP participants.When evaluating static thermal thresholds, more EP participants reached thermal test limits without experiencing discomfort/pain.Twenty-six EP and 2 TC had HPT >49°C, and 26 EP and 5 TC had CPT <11°C.Survival curves evaluated subgroup effects at the limits of testing, with failure to reach HPT or CPT most common in EP males with neonatal surgery.Raw data analyses also identify sex-dependent differences related to EP status and neonatal surgery.In response to a more prolonged noxious cold stimulus, EP participants were more likely to withdraw the hand before 30 s of cold pressor testing, particularly EP surgery females.In EP males, cold pressor tolerance did not differ from TC, and there was a relative left-shift compared with threshold survival curves.GTS provided a summary measure incorporating time to HPT and CPT and duration of cold pressor tolerance, with higher scores representing reduced sensitivity.Truncated regression modelling identified significant interactions between EP surgery and sex, with decreased sensitivity in EP surgery males but increased sensitivity in EP surgery females.Imaging data were available for 39 TC and 72 EP QST participants, including 16 of 30 EP neonatal surgery participants.The volume of pain-relevant brain regions was influenced by preterm status, sex, or both, with significant correlations with thermal sensitivity for the thalamus and amygdala.Amygdala volume was lower in EP than TC participants, with a significant main effect of EP status and sex.Amygdalothalamic tract fractional anisotropy differed between TC females and EP females, but there were no differences in axonal volume across groups and no difference in tissue composition using T2 relaxometry has been reported in this cohort.37,Lower amygdala volume sex-dependently correlated with reduced thermal sensitivity in males, but increased sensitivity in females.In EP participants, amygdala volume was negatively correlated with HPT in males but positively in females.Adjusting for amygdala volume increased effect sizes in the GTS model.FSIQ was not a significant predictor and therefore excluded.Differences from TC data are expressed as z-scores to illustrate sensory profiles across thermal and mechanical modalities.Decreases in thermal mechanical detection and pressure pain sensitivity in EP males were statistically significant in the neonatal surgery subgroup.Sensory thresholds on the unscarred chest wall are consistent with thenar values.Testing on the unscarred lateral chest wall was performed in all TC and 63 EP participants.Thirty-three EP participants had clearly visible thoracic dermatome scars related to open surgery or surgical vascular access and chest drain insertion.Localised decreases in static thermal and mechanical detection thresholds adjacent to neonatal thoracic scars were apparent in EP females but were more marked and on a background of generalised differences in EP males.Mechanical detection threshold was higher on the chest than the hand, with good correlation between the sites.Normalised data show a main effect of group, but not sex, with thresholds adjacent to scars higher than TC in both females and males.This is consistent with the scar-related localised decrease in static mechanical and thermal sensitivity in this cohort at 11 yr.13,A small number of participants in 
all groups reported either rapid change in perceived thermal intensity or paradoxical hot/cold sensations.Mechanical perceptual sensitisation was more common adjacent to scars .Allodynia to brush was reported over thoracic and other neonatal scars.Within the surgery subgroup, higher scar-related brush allodynia correlated with a lower GTS score.Three EP participants declined testing adjacent to scars because of persistent sensitivity.No participants reported brush allodynia on the unscarred chest wall or thenar eminence.There was a significant effect of group on FSIQ score, but no main effect of sex.Neonatal surgery had a similar added effect in both males and females.Lower FSIQ correlated with lower brain region volumes in both males and females, but not with sensory thresholds.Regular pain was common, particularly mild musculoskeletal pain related to work or sporting activity.Moderate-severe pain requiring analgesia or impairing function was more common in EP than TC participants.For those with regular pain, self-reported interference with activity because of pain was higher in EP participants.Higher anxiety and pain catastrophising scores correlated weakly with thermal pain thresholds and more strongly with increased pain severity in EP participants.No participants had taken analgesia on the test day.More females than males reported headache and use of analgesia, but these outcomes were not influenced by EP status.Prevalence data exclude menstruation pain as many did not spontaneously report this or were taking hormone treatment for symptom management or contraception.In those specifically asked, the mean intensity of period pain was 7.1, 2.3 with 12/30 EP and 5/18 TC females reporting problematic pain that reduced activity.After demonstration of sensory tests, pretest anxiety was low and did not correlate with thermal thresholds.DSM anxiety scores were higher in EP participants with clinically significant scores ≥70 in one of 38 EP males, five of 57 EP females, and two of 28 TC females."All pain catastrophising subscales had high internal consistency in TC and EP participants.Overall, pain catastrophising scores were influenced by female sex, and current pain experience, but not EP status or FSIQ.This is the first comprehensive evaluation of sex- and modality-dependent somatosensory function in young adults who had been born extremely preterm.Sensitivity to static thermal thresholds was reduced in EP males, but prolonged noxious cold unmasked increased sensitivity in EP females, with the greatest difference in neonatal surgery subgroups.The degree and sex-dependent directionality of altered thermal sensitivity in EP participants correlated with reduced amygdala volume but not with current cognitive function, suggesting the amygdala plays a sex-dependent role in central modulation of experimental pain stimuli.In contrast to these generalised changes, a mixed pattern of sensory loss and sensory gain was localised to neonatal scars in both males and females.EP participants were more likely to report current pain of at least moderate severity, with increased pain intensity also associated with higher anxiety and pain catastrophising scores.Extremely preterm babies undergo repeated procedural interventions as part of intensive care management and up to a third require surgery to manage complications or congenital anomalies.8,38,Cumulative pain exposure is difficult to quantify and is confounded by comorbidity.Duration of mechanical ventilation or NICU stay have been used as proxy measures 
of pain exposure39,40 and higher numbers of tissue breaking procedures correlate with worse outcome.9,We used neonatal surgery as an indicator of increased tissue injury, although this may also be confounded by disease severity or perioperative instability,41 and specific effects of analgesia or anaesthesia42 cannot be determined from the available data.As also seen here, surgery during initial hospitalisation has a persistent impact on cognitive outcome.8,However, FSIQ scores did not differ between our male and female EP surgical participants, and do not account for differences in the degree or directionality of altered thermal sensitivity in males and females.Temperature detection is mediated by multiple thermosensitive channels responsive to both stimulus intensity and duration.43,In children born very preterm thermal threshold sensitivity was no different39,44 or decreased.11,Our EP participants were born at an earlier gestational age and required longer hospital admission, and the reduced thermal threshold sensitivity and added impact of neonatal surgery noted at 11 yr13 had persisted.This was on a background of expected age-related increase in threshold,31 but clear sex-dependent differences had now emerged.The interindividual variability in thermal pain thresholds is consistent with previous reports,24 but within-subject consistencies included: discrimination of stimulus intensity; reduced sensitivity to both hot and cold; and correlations across different body sites.In contrast to these measures of static thermal thresholds, more prolonged and noxious thermal stimuli activate descending modulatory pathways that can shift the balance between inhibition or facilitation of spinal inputs and influence perceived pain intensity.45,Therefore, in addition to measures of static thermal threshold, we also performed cold pressor testing to assess sensitivity to a more prolonged and intense thermal stimulus.Previously, VP children were shown to have reduced threshold sensitivity, but prolonged heat unmasked increased perceptual sensitisation11 and increased activation in pain-relevant brain regions, including primary somatosensory cortex, thalamus, and basal ganglia.46,Reduced cold pressor tolerance has also been previously reported in EP young adults.40,Routine QST profiles do not include prolonged thermal stimuli, but a composite measure including time to thermal thresholds and cold tolerance highlighted decreased sensitivity in EP males, increased sensitivity in EP females, and the added impact of neonatal surgery in both.We postulate that increased tissue injury and pain in early life contributes to activity-dependent alterations in thermal nociceptive signalling, that are also influenced by sex-dependent differences in central modulation.Experimental pain sensitivity has been correlated with altered structure and connectivity in central sensory-discriminative and emotional/affective pathways, with sex differences in fMRI response predominantly in regions encoding affective pain response.49,In EP participants, thermal sensitivity correlated with amygdala volume.The amygdala attaches emotional significance to sensory information relayed from the thalamus, and altered amygdala connectivity has been associated with pain-related fear in adolescents50 and pain catastrophising in adults.51,Importantly for evaluation of future risk, alterations in amygdala volume and connectivity also predicted the transition from acute to chronic back pain in adults.52,After preterm birth, alterations in brain 
structure and connectivity persist beyond adolescence,2,37 and functional correlates include reduced cognitive ability53 and poorer psychosocial functioning.54,More specifically, differences in amygdala volume and connectivity influenced fear processing and emotion recognition after preterm birth.55–58,Here, amygdala volume correlated with both the degree and directionality of altered thermal sensitivity.As sex-dependent differences in amygdala activation also emerge during adolescence,59,60 divergence in thermal sensitivity between males and females may be clearer in early adulthood than at younger ages.Alterations in socio-emotional circuits, which are influenced by biological vulnerability, early life adversity, and parenting, have been proposed as a link between preterm birth and subsequent psychosocial and emotional outcomes,56 and we suggest extending this model to include effects on experimental pain sensitivity in EP young adults.These exploratory associations require further evaluation in functional imaging studies.Neonatal scars were associated with decreased static thresholds but increased dynamic mechanical sensitivity in both males and females, suggesting a different localised effect related to peripheral tissue injury.Comparison across multiple modalities is facilitated by conversion to z-scores, and differences from large reference control datasets identify specific sensory profiles in adults with peripheral neuropathic pain.30,61,Here, we restricted comparison to contemporaneous age- and sex-matched controls and used a protocol that facilitated comparison with previous preterm cohorts.Despite the relatively small subgroups and limited effect size for some modalities, the sensory profiles illustrate sex-dependent effects, the added impact of neonatal surgery, and a different pattern of generalised and localised sensory change adjacent to neonatal scars.Similar mixed patterns of sensory gain, loss, or both have been reported after inguinal or thoracic surgery in children62,63 and adults.64,65,While scar-related sensory changes do not always correlate with reported pain,66,67 several EP participants had marked brush allodynia or declined testing because of scar-related sensitivity, which may predispose to increased pain after re-injury.68,Repeat surgery in the same dermatome as prior neonatal surgery increased pain scores and analgesic requirements in infants.69,Our laboratory studies in rodents identified long-term alterations after neonatal hindpaw incision that include enhanced re-incision hyperalgesia in adulthood.70,71,Importantly, prevention by peri-incision local anaesthetic suggests activity-dependent mechanisms that can be modulated by clinically-relevant analgesic interventions.3,72,Although UK paediatric anaesthetists in 1995 reported regular use of opioids and local anaesthetic techniques for neonates requiring surgery,73 specific data for preterm neonates and this cohort are not available.Additional clinical studies are required to compare the ability of different systemic or regional analgesic techniques to modulate the long-term impact of neonatal surgery.Pain is a complex sensory and emotional experience, requiring a biopsychosocial approach to evaluation and management.74,Psychological comorbidities are common and are effective targets for intervention in adolescents and adults with chronic pain.75,76,While some psychosocial factors can increase resilience or be protective, others increase vulnerability,77,78 and contribute to sex differences in experimental 
pain sensitivity.79,After preterm birth, children reported higher pain catastrophising,12 and increased anxiety persists into early adulthood.16,Here, higher anxiety and catastrophising scores in EP young adults correlated with both increased thermal sensitivity and more intense current pain.Detailed pain phenotyping, which incorporates history, QST, anxiety, and pain catastrophising has been suggested for clinical trials,80 and along with neuroimaging,52,81 may enhance prediction of persistent pain risk and improve personalised pain management.Epidemiological studies associate early life adversity and childhood somatic symptoms with increased risk of chronic pain in adulthood.82,While preterm birth in 1958 had a minor impact on prevalence of widespread pain at 45 yr,83 EP survivors now reaching adulthood had more invasive NICU management at much earlier gestational ages.Longitudinal evaluations in extreme preterm cohorts have identified persistent effects on cognitive, mental health and system-specific health outcomes,16,84 but pain experience is not consistently reported.Based on quality of life or general health care questionnaires, current pain prevalence in VP or EP young adults has been reported as no different,17,19,85 decreased,86 or increased.87,Here, we found no difference in overall prevalence, as mild pain was common and the study was not adequately powered for this outcome.However, an increased proportion of EP participants reported moderate–severe recurrent pain that required analgesia and influenced activity.In VP and very low birth weight cohorts, self-reported pain increased throughout the third decade18,20,88 when chronic pain generally becomes more prevalent, particularly in women.23,Psychological interventions that encourage adaptive coping and improve self-management of pain have been suggested for preterm-born adults,18 and may be particularly advantageous if high-risk subgroups can be identified, such as females with both altered pain coping style and enhanced sensitivity to noxious stimuli.Standardised use of outcomes that incorporate type of pain, impact on function, and use of health resources by males and females would facilitate comparison across cohorts and more clearly delineate the impact of differing neonatal exposures and preterm birth on subsequent pain experience.Study limitations include potential selection bias as not all eligible EPICure subjects attended.As long-term follow-up tends to recruit NICU survivors with a relatively favourable outcome89 and EPICure@19 participants had higher mean FSIQ and socioeconomic status than non-participants,22 results may under-estimate overall effects.Some participants did not complete all tests, either because of participant preference, time or test availability, but sample sizes for analyses based on available data are noted.Only half of the neonatal surgery group underwent MRI, which limited the ability to analyse subgroup effects for this outcome.Fewer EP males were tested but with a matched proportion of controls.The vast majority of subjects were Caucasian and differences related to ethnicity were not assessed.As subjects were not asked to self-report gender, dichotomous sex-differences are reported for males and females.Extreme preterm birth affects 0.5–1% of the population1 and in the postsurfactant era more survivors are now reaching adulthood.For this vulnerable group, even modest increases in risk for future illness may represent significant healthcare burdens.84,90,Understanding persistent biological 
changes in nociceptive pathways and the psychosocial factors that modulate the risk and impact of persistent pain in later life will enhance awareness and recognition of targets for intervention84,90 to improve outcome throughout the lifespan. Early life experience and sex should be considered during clinical evaluations of somatosensory function or chronic pain, and when evaluating risk factors for persistent pain. Study design/planning: S.M.W., S.O., N.M. Study conduct and data acquisition: S.M.W., A.M., H.O'R., J.B., Z.E.-R. Data analysis: S.M.W., H.O'R., A.M. Writing paper: S.M.W. with review: N.M. Review and approval of final manuscript: all authors. Overall planning and conduct of evaluations: EPICure@19 Study Group. | Background: Surgery or multiple procedural interventions in extremely preterm neonates influence neurodevelopmental outcome and may be associated with long-term changes in somatosensory function or pain response. Methods: This observational study recruited extremely preterm (EP, <26 weeks' gestation; n=102, 60% female) and term-born controls (TC; n=48) aged 18–20 yr from the UK EPICure cohort. Thirty EP but no TC participants had neonatal surgery. Evaluation included: quantitative sensory testing (thenar eminence, chest wall); clinical pain history; questionnaires (intelligence quotient; pain catastrophising; anxiety); and structural brain imaging. Results: Reduced thermal threshold sensitivity in EP vs TC participants persisted at age 18–20 yr. Sex-dependent effects varied with stimulus intensity and were enhanced by neonatal surgery, with reduced threshold sensitivity in EP surgery males but increased sensitivity to prolonged noxious cold in EP surgery females (P<0.01). Sex-dependent differences in thermal sensitivity correlated with smaller amygdala volume (P<0.05) but not current intelligence quotient. While generalised decreased sensitivity encompassed mechanical and thermal modalities in EP surgery males, a mixed pattern of sensory loss and sensory gain persisted adjacent to neonatal scars in males and females. More EP participants reported moderate–severe recurrent pain (22/101 vs 4/48; χ2 test, P=0.04) and increased pain intensity correlated with higher anxiety and pain catastrophising. Conclusions: After preterm birth and neonatal surgery, different patterns of generalised and local scar-related alterations in somatosensory function persist into early adulthood. Sex-dependent changes in generalised sensitivity may reflect central modulation by affective circuits. Early life experience and sex/gender should be considered when evaluating somatosensory function, pain experience, or future chronic pain risk. |
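Note: the sex-matched z-transformation described in the entry above (z = [participant value − mean of sex-matched term-born controls]/SD of controls, with the sign adjusted so that positive scores denote increased sensitivity) can be illustrated with a minimal sketch. This is not the authors' SPSS/Prism analysis; the modality names, sign flags, and control values below are hypothetical placeholders chosen only to show the arithmetic.

```python
# Minimal illustrative sketch of a sex-matched QST z-transformation.
# NOT the study's analysis code; modality names, sign flags, and the control
# values are hypothetical examples.
import numpy as np

# For these modalities a LOWER raw value means GREATER sensitivity, so the
# z-score sign is flipped; for others (e.g. cold pain threshold in deg C,
# where a HIGHER threshold means greater sensitivity) it is left unchanged.
FLIP_SIGN = {
    "heat_pain_threshold_C": True,    # pain evoked at a lower temperature = more sensitive
    "mechanical_detection_mN": True,  # detecting a weaker filament = more sensitive
    "cold_pain_threshold_C": False,   # pain evoked at a higher temperature = more sensitive
}

def sex_matched_z(value, sex, modality, controls):
    """Z-transform one participant's value against sex-matched term-born controls,
    returning a score where >0 indicates increased and <0 decreased sensitivity."""
    ref = np.asarray(controls[(sex, modality)], dtype=float)
    z = (value - ref.mean()) / ref.std(ddof=1)
    return -z if FLIP_SIGN[modality] else z

# Hypothetical usage: control values grouped by (sex, modality).
controls = {
    ("M", "heat_pain_threshold_C"): [45.2, 46.8, 47.1, 44.9, 46.3],
    ("F", "heat_pain_threshold_C"): [44.1, 45.3, 46.0, 43.8, 45.9],
}
print(round(sex_matched_z(48.5, "M", "heat_pain_threshold_C", controls), 2))  # -2.51: reduced sensitivity
```

This sketch mirrors only the z-score definition; the truncated regression and generalised thermal sensitivity modelling reported in the entry above were carried out separately by the authors in SPSS and Prism.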
373 | Impact of a quadrivalent inactivated influenza vaccine on influenza-associated complications and health care use in children aged 6 to 35 months: Analysis of data from a phase III trial in the Northern and Southern Hemispheres | Children, especially those aged <5 years, are at the highest risk of suffering from serious complications from influenza infection, including acute otitis media,1 bacterial co-infections, acute respiratory infection, hospitalisation, and death .Severe outcomes of influenza are frequently associated with underlying conditions but occur even in children without risk factors .Influenza illness is caused by A and B virus subtypes, both of which can cause epidemics and lead to hospitalisation and death in all age groups .Efforts to reduce influenza B illness have been complicated since the 1980s, when two immunologically distinct lineages of B virus, Victoria and Yamagata, began co-circulating worldwide .The distribution of these two lineages varies greatly between and even within seasons and regions, resulting in frequent mismatches between the B strain in trivalent influenza vaccines and the circulating B strains .Due to uncertainty about cross-lineage protection and the potential for decreased vaccine efficacy , quadrivalent influenza vaccines containing both B lineages have been developed and, since the 2013–2014 influenza season, have been included in World Health Organization recommendations.A quadrivalent split-virion inactivated influenza vaccine is licensed for individuals aged ≥6 months.A recently completed multi-season placebo-controlled phase III trial conducted in the Northern and Southern Hemispheres demonstrated the efficacy of IIV4 in children 6–35 months of age .Overall VE to prevent laboratory-confirmed influenza was 50.98% against any A- or B-type influenza and 68.40% against influenza caused by vaccine-similar strains.The trial also showed that safety profiles were similar for IIV4, the placebo, and comparator trivalent split-virion inactivated influenza vaccines.As part of the phase III trial, data were collected on healthcare use, antibiotic use, parental absenteeism from work, and the occurrence of severe outcomes of influenza, including AOM, acute lower respiratory infection, and inpatient hospitalisation.Here, we describe the efficacy of IIV4 based on these additional endpoints.Furthermore, to add to the evidence for efficacy of IIV4 in the youngest children, we determined VE for different age subgroups.This was an analysis of data from the phase III, randomised, multi-centre, placebo-controlled trial of IIV4 in healthy children aged 6–35 months.2,The participants were randomised to receive two full doses 28 days apart of IIV4; the licensed trivalent split-virion inactivated influenza vaccine, an investigational trivalent split-virion inactivated influenza vaccine containing the World Health Organization-recommended A strains and a strain from the alternate B lineage; or a placebo.Further details of the study design and the primary efficacy, immunogenicity, and safety results are described elsewhere .The objective of the current analysis was to examine the VE of IIV4 in preventing laboratory-confirmed influenza in age subgroups; and to determine the relative risk for IIV4 vs. placebo for severe outcomes, healthcare medical visits, and parental absenteeism from work associated with laboratory-confirmed influenza within 15 days after the onset of the influenza-like illness.VE was calculated for the co-primary endpoints of the trial, i.e. 
the occurrence of influenza-like illness starting ≥14 days after last vaccination and laboratory-confirmed as positive for any circulating influenza A or B types or vaccine-similar strains. Briefly, influenza was confirmed by reverse transcription-polymerase chain reaction or viral culture of nasal swabs, and subtypes and strains were identified by Sanger sequencing, ferret antigenicity testing, or both. Genetic sequences identified by Sanger sequencing were compared with a database of known sequences corresponding to the vaccine and major circulating strains from 2005 up to the time of testing. AOM, ALRI, and healthcare utilisation were recorded during ILI-associated visits occurring within 10 days of the onset of ILI and during follow-up phone calls 15 days after the onset of ILI. AOM was defined as a visually abnormal tympanic membrane suggesting an effusion in the middle ear cavity, concomitant with at least one of the following symptoms: fever, earache, irritability, diarrhoea, vomiting, acute otorrhea not caused by external otitis, or other symptoms of respiratory infection. ALRI was defined as chest X-ray-confirmed pneumonia, bronchiolitis, bronchitis, or croup. Inpatient hospitalisation was defined as a hospital admission resulting in an overnight stay. Outpatient hospitalisation was defined as hospitalisation without an overnight stay. An outpatient visit was defined as an unscheduled ambulatory visit with a physician or other health professional. The phase III trial was approved by the independent ethics committee or institutional review board for each study site and was conducted in accordance with Good Clinical Practice and the Declaration of Helsinki. Written informed consent was provided by the parents or legal representatives of all participating children. VE in preventing laboratory-confirmed influenza caused by any A or B strain or by vaccine-similar strains was examined by age subgroup. The analysis was performed according to randomisation in the full analysis set for efficacy, defined as all randomised participants who received two doses of study vaccine and had at least one successful surveillance contact at least 14 days after the last dose. Relative risk (RR) analyses for laboratory-confirmed influenza associated with AOM and ALRI were performed in the per-protocol analysis set for efficacy, defined as all randomised participants without significant protocol deviations. RRs for laboratory-confirmed influenza associated with healthcare medical visits, inpatient hospitalisation, parent absenteeism, and antibiotic use were calculated in the full analysis set for efficacy. RR was calculated as 100% × (attack rate in the IIV4 group/attack rate in the placebo group). The 95% CIs for VE and RR were calculated by an exact method conditional on the total number of cases in both groups. The study protocol did not include statistical tests for these endpoints, so no assessment of statistical significance was made. Missing data were not replaced. Statistical analysis was performed using SAS® version 9.4. This analysis included the 5436 participants in the phase III trial who were randomised to receive IIV4 or placebo, as described previously. The IIV4 and placebo groups were balanced for sex, age, and prevalence of at-risk conditions, regions, and ethnicities. Five participants in the IIV4 group and 16 in the placebo group had AOM associated with laboratory-confirmed influenza, and five participants in the IIV4 group and 23 in the placebo group had ALRI associated with laboratory-confirmed influenza. The RR of IIV4 vs.
placebo was 31.28% for AOM and 21.76% for ALRI.Compared to placebo, IIV4 reduced the risk of healthcare medical visits, parent absenteeism from work, and antibiotic use associated with laboratory-confirmed influenza.Inpatient hospitalisation associated with laboratory-confirmed influenza occurred for three participants in each group, resulting in no difference in risk between IIV4 and placebo.VE against any A or B strain was 54.76% for participants aged 6–23 months and 46.91% for participants aged 24–35 months.For vaccine-similar strains, VE was 74.51% for participants aged 6–23 months and 59.78% for participants aged 24–35 months.Further exploration of the 6–23 month age group showed a VE against any A or B strain of 35.06% for participants aged 6–11 months and 63.13% for participants aged 12–23 months and a VE against vaccine-similar strains of 43.63% for participants aged 6–11 months and 80.54% for participants aged 12–23 months.A recent phase III trial conducted over four influenza seasons in the Northern and Southern Hemispheres demonstrated the efficacy and safety of two full doses of IIV4 in children 6–35 months of age in preventing laboratory-confirmed influenza .The current analysis, based on exploratory endpoints in the phase III trial, demonstrated similar efficacy of IIV4 in reducing the risk of severe outcomes of influenza in these children as well as on the burden of influenza for their parents and the healthcare system.The World Health Organization stated in 2012 that they had only moderate confidence in the efficacy of inactivated influenza vaccines in children aged 6 months to <2 years due to limited evidence .In the current study, we confirmed that IIV4 can protect children aged 6–23 months against laboratory-confirmed influenza.Efficacy was also confirmed in the subgroup of children aged 12–23 months but not in children aged 6–11 months, most likely because of insufficient numbers.Efficacy of another full-dose split-virion quadrivalent influenza vaccine in children aged 6–35 months was also demonstrated in a multinational randomised placebo-controlled trial across five influenza seasons .The VE was reported to be 50% against RT-PCR-confirmed influenza, which is similar to the overall VE in the current trial.Although age subgroups were different, they also demonstrated efficacy in children aged <2 years.Our analysis also demonstrated that IIV4 reduced antibiotic use associated with influenza.Despite current guidelines, unnecessary antibiotic use in influenza remains common and is an important cause of antibiotic drug resistance .A retrospective analysis of the US Impact National Benchmark Database from 2005–2009 found that antibiotics were prescribed for about 22% of patients with influenza, 79% of which was judged to be inappropriate because the patient had neither a secondary infection nor evidence of comorbidity .Another study in Europe showed that influenza results in antibiotic prescriptions in 7–55% of cases .This may be because both influenza and bacterial infections can cause high fever, AOM, and ALRI in young children .Thus, although influenza accounts for a relatively small proportion of antibiotic use, IIV4 can help reduce their inappropriate use in young children.The findings of this analysis should be widely applicable because they are based on a large study conducted over a wide geographical area in both hemispheres and over several influenza seasons.However, there are some limitations.Most importantly, the trial was not powered for the calculations included 
in this analysis.Indeed, insufficient numbers likely precluded efficacy from being confirmed in children aged 6–11 months.This also can explain the failure to confirm an effect on influenza-associated inpatient hospitalisation.Another limitation, shared by all influenza vaccines, is that efficacy depends on the specific strains circulating, so care should be taken when applying results to a specific region or season.The analysis showed that in children aged 6–35 months, vaccination with two full doses of IIV4 can protect against influenza and reduces the frequency of severe outcomes of influenza.IIV4 thereby helps reduce the burden of influenza in young children, their parents, and the healthcare system.These findings reinforce evidence that influenza vaccination can protect and can be used for infants and young children aged 6–35 months.This work was supported by Sanofi Pasteur.The sponsor participated in study design, the collection, analysis and interpretation of data; in the writing of the report; and in the decision to submit the article for publication. | Background: A multi-season phase III trial conducted in the Northern and Southern Hemispheres demonstrated the efficacy of a quadrivalent split-virion inactivated influenza vaccine (IIV4) in children 6–35 months of age. Methods: Data collected during the phase III trial were analysed to examine the vaccine efficacy (VE) of IIV4 in preventing laboratory-confirmed influenza in age subgroups and to determine the relative risk for IIV4 vs. placebo for severe outcomes, healthcare use, and parental absenteeism from work associated with laboratory-confirmed influenza. Results: VE (95% confidence interval [CI]) to prevent laboratory-confirmed influenza due to any A or B strain was 54.76% (40.24–66.03%) for participants aged 6–23 months and 46.91% (23.57–63.53%) for participants aged 24–35 months. VE (95% CI) to prevent laboratory-confirmed influenza due to vaccine-similar strains was 74.51% (53.55–86.91%) for participants aged 6–23 months and 59.78% (19.11–81.25%) for participants aged 24–35 months. Compared to placebo, IIV4 reduced the risk (95% CI) by 31.28% (8.96–89.34%) for acute otitis media, 21.76% (6.46–58.51%) for acute lower respiratory infection, 40.80% (29.62–55.59%) for healthcare medical visits, 29.71% (11.66–67.23%) for parent absenteeism from work, and 39.20% (26.89–56.24%) for antibiotic use. Conclusion: In children aged 6–35 months, vaccination with IIV4 reduces severe outcomes of influenza as well as the associated burden for their parents and the healthcare system. In addition, vaccination with IIV4 is effective at preventing against influenza in children aged 6–23 and 24–35 months. Trial registration: EudraCT no. 2013-001231-51. |
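To make the relative-risk calculation described in the methods above concrete, the following is a minimal Python/SciPy sketch of RR = 100% × (attack rate with IIV4)/(attack rate with placebo) together with an exact confidence interval conditional on the total number of cases. It is not the trial's SAS code; the conditional-binomial inversion is one standard way to implement such an exact method, and the group sizes in the example are placeholders rather than the trial's per-protocol denominators.

    from scipy.stats import binom

    def exact_rr_ci(c1, n1, c0, n0, alpha=0.05):
        """RR (%) and an exact CI conditional on the total number of cases.

        c1, n1: cases and group size for IIV4; c0, n0: for placebo.
        """
        rr = 100.0 * (c1 / n1) / (c0 / n0)
        k = c1 + c0

        # Conditional on k total cases, c1 ~ Binomial(k, p) with
        # p = n1*r / (n1*r + n0) when the true rate ratio is r.
        def p_of(r):
            return n1 * r / (n1 * r + n0)

        # Invert the binomial tail probabilities over a grid of rate ratios r.
        grid = [i / 1000.0 for i in range(1, 20001)]
        lo = min((r for r in grid if binom.sf(c1 - 1, k, p_of(r)) >= alpha / 2), default=0.0)
        hi = max((r for r in grid if binom.cdf(c1, k, p_of(r)) >= alpha / 2), default=float("inf"))
        return rr, 100.0 * lo, 100.0 * hi

    # AOM example with the case counts reported above: 5 with IIV4 vs 16 with placebo.
    # The group sizes 2718/2718 are illustrative placeholders only.
    print(exact_rr_ci(5, 2718, 16, 2718))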
374 | The process of pair formation mediated by substrate-borne vibrations in a small insect | Substrate-borne vibrational signalling is an ancient communication channel that is widely used by both invertebrates and vertebrates.In insects alone, it is used by an estimated 195,000 species, often exclusively, but has so far received much less attention than airborne sound communication.The first step of mating sequences in sexually reproducing insects is pair formation that is achieved by identification and localization of a potential partner in the habitat.Species-specific vibrational signals used in sexual communication enable identification of the emitter and provide directional information.In some insects that rely on vibrational communication, the searching for a mating partner has been described as “trial and error” while in others, individuals travelled a shorter path than they would during pure random search, suggesting that they extracted directional information from signals themselves.Plants are the most common signalling substrate for invertebrates; however, they are complex structures and due to signal degradation and frequency filtering during transmission, signals may be distorted in the frequency and time domains.Differences in amplitude and time of arrival of the vibrational signal to spatially separated vibration receptors in legs are the most obvious directional cues that insects may use.In insects, most vibration receptors are located in the legs and therefore the size of the insect is an essential factor for creating time or amplitude differences large enough to be directly used in orientation.Amplitude differences at distances as short as 2 cm are large enough to be detected in the nervous system of insects.On the other hand, the intensity gradient on plants may not be a reliable cue due to amplitude oscillations of vibrational signals during transmission and the role of amplitude in orientation behaviour is still under debate.Furthermore, the majority of insects that rely on vibrational communication are smaller than 1 cm.In this case, deriving directional cues by directly comparing amplitude or time differences between sensory inputs may not be possible.Some small insects may instead be able to extract directional information from the mechanical response of the whole body, but solutions have been insufficiently studied.In the present work, we describe pair formation and searching in a small plant-dwelling insect for which obtaining directional information may be difficult.We used the Nearctic leafhopper Scaphoideus titanus Ball, which communicates with substrate-borne signals, as a model species.The body length of this leafhopper is around 5 mm, with a leg span that is probably too small to enable orientation by direct comparison of sensory inputs.Like other leafhoppers, S. titanus does not rely on chemical signals, which allowed us to focus on vibrational cues alone.In S. 
titanus, the male is searching for the female, and mating sequence is always initiated by the male emitting a calling signal to which the stationary females respond with pulses emitted in gaps between the male pulses.A successful copulation is preceded by a male–female courtship duet, which can be disrupted by a rival male emitting a disturbance noise and taking over the duet."Duetting systems are common in arthropod communication, often involving complex interactions where signalling is modified by the perception of the partner's reply.In such a system, replies by the stationary individual provide information needed for localization by the searching partner, but also by potential eavesdropping competitors.Therefore, a male should optimize the process of gathering the necessary information from the female signals in order to reduce both the energetic costs and competition.To achieve this, a male should perform accurate identification and rapid localization, and should only begin with more complex and demanding courtship after these tasks have been accomplished.The same general principle has been recognized in numerous other animals before, but the apparent monomodality of sexual communication in leafhoppers and the ability to accurately measure signals using laser vibrometry allow us to identify the cues that guide behaviour in these stages and trigger transitions between them.Understanding this process may then shed light on the problem of extracting information from vibrational signals by small insects.In the present work, we therefore tested the following assumption: during pair formation, sexual behaviour progresses through different stages that are characterized and triggered by specific vibrational cues, favouring reliability of recognition and speed of localization before the onset of the most complex advertising stage of courtship.The process is facilitated by the ability of S. titanus males to use information in female signals to make directional decisions and detect female proximity despite their small body size.Rearing of S. titanus from egg to adult followed the method described in Eriksson et al.All experiments were done with virgin and sexually mature males and females at least 8 days after their emergence.Each leafhopper was tested only once.We used grapevine cuttings with two different geometries as substrate.In each case the bottom of the stem was put in a glass vial filled with water to prevent withering and the vial was placed on an anti-vibration table.In one case the cutting had two leaves with petioles separated by a 10-cm long stem section, while in the other the cutting had three leaves with petioles separated by 5-cm long stem sections.For the purpose of analysis, the cuttings were divided into sections, each with a measuring point in the middle.For cuttings with two leaves, those were: basal leaf, basal petiole, stem between the two leaves, apical petiole and apical leaf.For cuttings with three leaves, the section labelling was equivalent, but followed the position of male and female.The cuttings were replaced with equivalently shaped fresh ones as they wilted.To prevent the insects from escaping, the setup was contained within a clear Plexiglass cylinder.Mating behaviour was observed for 20 min or until the male reached the female, whichever came first.The experiments were performed at 23 ± 1 °C between 5 pm and 9 pm local time to obtain highest sexual activity from S. 
titanus, except in tests 1.2 and 1.4 where the temperature was 28 ± 1 °C. Movement was recorded with a Canon MV1 miniDV camera. Vibrational signals were recorded with a laser vibrometer and digitized at a 48 kHz sample rate and 16-bit resolution, then stored directly onto a hard drive through a LANXI data acquisition device. The laser beam was focused on a small piece of reflective tape glued to each measuring point. Spectral and temporal parameters of the recorded signals were analyzed with Pulse 14.0 after applying a Fast Fourier Transform with a window length of 400 samples and 66.7% overlap. The equipment was calibrated, which enabled direct measurements of the actual substrate velocity. The terminology used for the description of vibrational signals in S. titanus follows Mazzoni et al. Vibrational signals not previously described were labelled according to their behavioural context. Pair formation was studied using a male and a female of S. titanus, each placed on a different leaf of the same grapevine cutting with two leaves. Vibrational signals were registered from the measuring point on the lamina of the basal leaf with the male. To analyze the synchrony of male–female pulses within vibrational duets, we measured the pulse repetition time in the male signal in the presence and absence of the female reply, and the female pulse latency. Each of these parameters was analyzed throughout the whole male–female communication sequence, from the starting position when the male was on a different leaf than the female, through the male's searching phase, to his arrival at the leaf with the female. To quantify the effect of the female reply on the period, we calculated the male response phase (MRP) and the female latency phase (FLP). The MRP was equivalent to ((T′ − T)/T) × 360°, where T and T′ were the average pulse period in the male signal in the absence and in the presence of the female pulse, respectively. The FLP was equivalent to (female pulse latency/T) × 360°. The value of the response phase delay was α = MRP/FLP; α = 1 indicated a delay of an entire pulse period. A one-tailed paired t-test was used to compare the difference between T and T′ in order to evaluate whether the period increased during each behavioural stage. To determine whether female pulse latency and α values differed among the behavioural stages, we performed the Kruskal–Wallis test followed by the Steel-Dwass pairwise multiple comparison test, using the statistics software KyPlot 5.0. Test 1.1 clearly revealed a change of male signalling behaviour during the approach to a stationary female, detectable by the emission of a harmonic "buzz" that signifies progression from the location duet to the courtship duet. Such a behavioural switch in a male may be elicited by a significant change in the perception of the female response; however, the first question is whether the change of female signals is passive, i.e. a cue created by the transmission properties of the substrate alone, or active. The latter would imply that the static female reacts to cues about distance in male signals and makes a change in some parameter of her reply that was previously overlooked in creating the signal classification described in Mazzoni et al. To answer this question, we recorded female signals at the source and checked whether the female emits a constant signal throughout the whole male mating approach or whether there is an effective variation of any parameter in the female response that could prompt the male to progress from the Location to the Courtship signal. We recorded mating behaviour of S.
titanus pairs on a grapevine cutting with two leaves, the female being always placed on the apical leaf.Male positions during each trial were noted, and the vibratory emissions were recorded from the centre of the apical leaf in close vicinity to the female.Up to 12 female replies were analyzed per male location per trial, and all the measurements from each plant section were pooled.The number of analyzed pulses was limited to prevent giving larger weight to trials where male search took longer.Sections were then compared using the Kruskal–Wallis test followed by the Steel-Dwass pairwise multiple comparison test.Test 1.2 showed that the female pulses remain relatively constant throughout the pre-copulative phase of the mating behaviour.This let us formulate a further hypothesis that males switch from location to courtship behaviour according to a cue in the female reply, related to their own perception of passive signal changes that indicate proximity of the female."We predicted that if the switch in behaviour normally occurs at a certain distance from the female, we should detect a significant difference in any parameter's value between the measuring point where the change first occurred and all measuring points further away.To test this hypothesis, a male and a female were put on separate leaves of the grapevine cutting with three leaves.The male and the female were randomly placed on either of the basal, middle or apical leaf, thus obtaining six different combinations, and leaves were labelled “female leaf”, “empty leaf” and “male leaf” accordingly; therefore, males had to distinguish not only the leaf with the female from their starting position, but also from another, empty leaf.Prior to the start of each trial, we used a minishaker to vibrate the plant with playback of pre-recorded MCS in order to initiate mating behaviour.The female replied to the playback and, as a result of such duet, the male responded with rivalry.The playback was then stopped to allow the male to establish a duet with the female.Location of the male during each phase of mating behaviour was determined from video recordings to synchronize the audio record with male movement.Since females remain stationary, the amplitude of their vibrational signals, measured as vibrational substrate velocity and its spectral components, could be measured directly at 8 measuring points distributed on the grapevine cutting by moving the laser beam during male search."The goal was to record the signals at all the measuring points during each trial, so we did not necessarily follow the male's movement with the laser beam.We calculated signal amplitude by averaging all the spectral components within the range 40–250 Hz where the majority of acoustic energy was concentrated.This measure, while absolute, should still be regarded as a proxy for the “true” amplitude because it is not yet certain which property of a broad-band signal the animals actually respond to."Additionally, we tested whether any particular frequency component of a signal, including low-amplitude ones, might also contribute to detection, by splitting the signal's frequency spectrum to 10 ranges with one dominant or subdominant spectral peak each, ranging up to 1 kHz.Peak velocity value within each of those frequency ranges was averaged from three female pulses per measuring point per trial.To test for statistical differences of the female pulse amplitudes and frequency ranges between the measuring points, we performed the Kruskal–Wallis test followed by the 
Steel-Dwass multiple comparison test.Finally, with the help of the videos, we took note of the position on the plant where the switch of male behaviour from location to courtship occurred.Thus, we associated the spectral analysis of the female pulse at each measuring point with the occurrence of behavioural switches on the plant.To further confirm the results of Test 1.3, we performed an additional playback test.We used playback to reduce variability caused by the male–female interaction, which enabled us to study transmission alone.The cutting for this test was the same as for Test 1.2.Again, the female was always placed on the apical leaf.A representative male calling signal was played back to females using a static loudspeaker placed parallel to the empty basal leaf, at the distance of 3 cm.The amplitude of resulting vibrations near the female was adjusted beforehand to a level naturally experienced by females when listening to a male signalling from a different leaf, as determined from recordings made in Test 1.2.A loudspeaker was used for stimulation in order not to influence signal transmission near the point of excitation."Each female's responses were recorded by a laser vibrometer at all the recording points. "We examined response latency and two spectral parameters: peak amplitude, and frequency of the dominant spectral peak in each signal's frequency spectrum.Only relative peak amplitude in dB was considered because the absolute values were of less importance for the purpose of this test.Values were compared between plant sections with the Kruskal–Wallis test followed by the Steel-Dwass pairwise multiple comparison test."To test the hypothesis that males make directional decisions on the basis of female reply, video recordings taken from Test 1.1 were also analyzed and the male's directional choices annotated.Three parameters were evaluated.First, we annotated whether the male started to walk towards the female, if yes, it was recorded as a right decision, if not, as a wrong decision.When males reversed the direction, it was annotated as a wrong decision if it turned away from the female and right if turned towards her.When males reached a fork between stem and leaf the right or wrong decisions at the branching point were also annotated.To evaluate if males were able to make directional decisions other than by chance, the numbers of right and wrong decisions were compared in a one-tailed t-test for dependent samples for each parameter.The main steps of the mating behaviour of S. titanus are summarized in Fig. 
3.As described previously in Mazzoni et al., in all trials males initiated vibrational communication with emission of a male calling signal.When females were not responding, males either remained stationary or exhibited “call-fly” behaviour.When females responded, most males emitted pulse trains with an irregular rhythm and with an increased pulse period.The calculated male response phase delay was 0.85, which indicates that female response resets the emission of male pulse for almost a complete pulse period.Such delayed exchange of male and female pulses was termed identification duet and was observed only when a male and a female were placed on separate leaves.During IdD, males walked randomly on the leaf.Seven females also emitted short pulse trains in reply to the male signal.As a result, males either walked randomly and called again, or emitted disturbance signals.Following IdD, males moved towards the petiole and walked to stem and towards the leaf hosting the female.In this stage female reply had a small but significant effect on the pulse period in male signal.This phase of male–female vibrational interaction was named localization duet and was recorded from the beginning of the directional search until reaching the female leaf.LoD was composed of two sections repeated continuously.In section 1, males were stationary and emitted short series of pulses.In section 2, males walked for a few centimetres before stopping and often emitted a single strong pulse.Females were sometimes observed to emit multiple pulse trains after the last male pulse.The male behavioural response to the multiple female trains was either a directional search followed by another LoD, or emission of disturbance signal and a restart of the communication with an IdD, however, in the latter cases, the re-identification was limited to exchange of a few pulses between male and female – characterized by α value close to 1 – that immediately progressed into a LoD.The durations of IdD and LoD were similar.When the male arrived at the leaf hosting the female, courtship duet was established.During CrD, males emitted pulses at a regular rhythm and female reply had again a small effect on the pulse period in the male signal.The phase delays during two sections of the CrD were similar to one determined for LoD and significantly lower than in IdD.The female pulse latency was constant throughout all stages of male–female vibrational interaction, with values significantly lower only in section 1 of CrD.Measurements from the apical leaf revealed no consistent changes in female reply during the course of the male approach.Out of the initial 18 pairs tested, N = 18 sets of measurements were obtained with the male calling from the basal leaf, N = 6 from the basal petiole, N = 10 from the stem, N = 12 from the apical petiole, and N = 10 from the apical leaf.All the males switched to CrD after reaching the apical leaf lamina, at the distance of less than 10 cm from the female.A significant difference in the dominant frequency of female signals was measured only when males were calling from the basal petiole and the apical leaf.Conversely, female dominant peak amplitude was significantly higher when males were calling from the apical petiole than from the basal leaf.Female response latency decreased from 210 ± 23 ms to 182 ± 19 ms, then remained stable, with 184 ± 26 ms when the male reached the apical leaf, however, the Steel-Dwass test only showed significant difference between the basal leaf and the three sections closest to the 
female, i.e. stem, apical petiole, and apical leaf.Because female response latency changed only after the start of CrD in Test 1.1 and because we did not measure any changes in female reply consistent with behaviour in Test 1.2, we focused on spectral parameters and amplitude of female signals as perceived by the male.Most energy of the female signal was concentrated in the chosen frequency range 40–250 Hz.Amplitude of the female reply perceived by the male along the grapevine cutting is summarized in Fig. 5.There was no statistical difference in measured amplitude of female signals between male lamina, male petiole, empty lamina, empty petiole and stem, whereas the amplitude level of female pulse was significantly increased when perceived at the female leaf petiole and female leaf lamina.Twenty-five out of 27 courtship duets started on the section of the plant closest to the female, most commonly on the petiole.In the other two cases the courtship duet started on the stem, but it was always near the female leaf petiole.No difference in dominant frequency was found between sections.When peak frequencies within individual frequency ranges of the female reply were compared, up to three recording points differed significantly from the rest, nevertheless, differences were never related to the distance from the female.On the amplitude axis, peak amplitude within the following individual frequency ranges changed consistently with the distance from the female: 110–150 Hz, 160–200 Hz, 310–350 Hz, 360–400 Hz, and 410–550 Hz.Measurements of female replies recorded from different points on the plant revealed changes caused by transmission along the substrate.The stimulus elicited replies by each female, so we analyzed all the female signals from all the recording points in each trial.The baseline latency value, as measured on the apical leaf closest to the female, did not increase significantly until the basal leaf at 232 ± 37 ms, n = 147.A small, but statistically significant, difference occurred only between the apical petiole and the stem.Relative amplitude of the dominant peak decreased consistently, with all the differences between points significant, except between the stem and the basal petiole, and between the basal petiole and the basal leaf.Dominant frequency differed significantly between recording points, but no consistent changes related to the distance from the emitter was observed: frequencies at basal petiole, apical petiole and apical leaf were not statistically different, but all were higher than the frequency at the basal leaf, which was in turn higher than the frequency at the stem.To validate the average substrate velocity at which the switch in behaviour was observed in Test 1.3, we combined behavioural data from Test 1.2 and recordings from Test 1.4, both obtained on a cutting with two leaves.For this purpose, the amplitude was expressed as the average velocity in the frequency range 40–250 Hz, the same as in Test 1.3, and compared between tests using the Mann–Whitney test with Bonferroni correction.Average velocity increased from 0.049 ± 0.037 at female petiole to 0.124 ± 0.101 at female leaf lamina.Average velocity at the female leaf lamina on the cutting with two leaves was not different from the average velocity at the female petiole on the cutting with three leaves, while it was significantly higher than at the stem on the cutting with three leaves.Measured differences between plant sections for other analyzed parameters did not correspond to the behavioural switch in Test 
1.2.The number of right or wrong directional decisions made by males moving towards a female after a female response is shown in Fig. 7.Significantly more decisions were towards the female and when reversing the direction, significantly more males made a correct rather than wrong directional decision.On the other hand, no difference between correct and wrong directional decisions was observed at branching points.A male that turned in the wrong direction made on average two additional moves in this direction before turning around.Pair formation in S. titanus starts with identification of the mating partner and continues with a localization stage until a final courtship stage before copulation."In general, signals should first inform the receiver about the sender's identity,",then the quality,and the location,.Our results indicate that the first act of pair formation in S. titanus is male identification of a conspecific female through a strict synchronization with his own pulses.Female reply has to arrive within a specific time window, like in several other leafhopper species, and should not overlap with the next male pulse, because overlapping would be mistaken by both partners for a disruptive signal emitted by a rival male; however, while it was previously thought that female pulses are emitted only in-between the male pulses, we also found in the present study that female pulses may be emitted as pulse trains, most often after the last male pulse.Such multiple replies occurred when the male was identifying or locating the female from distant plant parts, which may represent female adaptation for increasing detectability and/or traceability.Male calling song pulses are emitted with regular rhythm, indicating that they are generated by an endogenous oscillator.Females do not reply to all male pulses, suggesting that they listen out for each male pulse and reply to it.Resetting of the male endogenous oscillator by the central nervous system is comparable to signal interactions among chorusing males; however, S. titanus males do not form choruses, engaging instead in rival behaviour if a competing male is detected in the vicinity.The change in rhythm could in this case help the male to distinguish the female signal from conspecific male emissions in the first stage of mating behaviour, when recognition has not yet been achieved.Without a change in rhythm, two males emitting MCS out of phase might mistake each other for a conspecific female, while if a reply triggers prolongation of the pulse period, overlapping will necessarily ensue.The effect of female reply on male pulse period at later stages was small, suggesting that mate recognition is the main function of pulse period resetting in the calling phase.This observation indicates a complex and situation-dependent neuronal control of signal production in males.After identification has been achieved, accuracy of recognition became secondary and speed was the key, with male LoD signals three times shorter than IdD on average.As noted before in this and other species of small plant-dwelling insects, males interrupt walking bouts with calls after which directional decisions are made.Although plants constitute complex structures with branching points, leaves and stems, of woody and green tissues, males of S. titanus were able to make correct directional decisions when walking towards a stationary female, seemingly based on a continuous process of evaluating the perceived information.According to our results, S. 
titanus males are able to extract directional information from female reply during the search, since significantly more males walked towards the females; however, males made many mistakes at branching points.Males of the larger stink bug Nezara viridula, which can orient reliably at branching points, stop and stretch their legs between branches, thus extending the leg span.We never observed such behaviour in S. titanus.Instead, we showed that males are able to correct the direction after they had made a wrong decision, despite their short leg span.The correction process was rapid, with males making on average less than two moves away from the female before reversing their direction.This indicates a perception of a directional cue even in the orientation of such small insects, perhaps by comparing the relevant parameter of the female response between consecutive locations where they stop to signal.Decisions by a male were made as he walked after every identified female response, while in absence of female reply, he remained stationary.Such a search tactic would require short-term memory for comparison of signals between neighbouring locations, but not the capacity for direct stereo or multi-channel comparison, bearing more resemblance to triangulating search behaviour of some beetles and stoneflies on 2-D surfaces than to direct orientation of the more closely related Pentatomid bugs.In the leafhopper Graminella nigrifrons, searching is facilitated by positive phototaxis that, coupled with female preference for perching on the top of the plant, enables localization even without the need to extract exact directional information from vibrational signals.We observed no such preference in S. titanus.Directional decisions at petiole-stem crossings appeared random and males found females perched on leaves below their starting location without difficulty, further confirming that vibrational signals alone provide the information about both identity and location in this species.The change in behaviour as males progressed from identification to localization and ultimately to courtship suggests that a male is aware of whether the female is in close proximity or not.According to our results, female response is all-or-nothing and signal changes due to transmission through the plant act as a trigger for male courtship behaviour.Significantly higher female signal amplitude was detected on or near the female leaf compared to other sections of the plant.We demonstrated that the switch to courtship behaviour occurred when the average substrate velocity of female signals, as measured from calibrated frequency spectra of individual signals, increased to approximately 10 μm/s.Behavioural switch corresponded to the amplitude threshold, while exact geometry was not important.This is supported by previous results when pairs were placed on the same leaf at the start.In such a situation, males did not perform neither IdD nor LoD, and MCS immediately progressed into a courtship duet.It could be beneficial to restrain the emission of courtship signals, which are enriched with additional elements and thus likely more energetically demanding, to the stage when a female is already nearby.The complexity of the CrD contrasts with the relative simplicity of the LoD, which is formed by only one type of pulse and is shorter in duration than both IdD and CRS.In the present study, none of the males expressed “call-fly” behaviour after the duet was established, which may be due to the small size of the experimental substrate.The 
“call-fly” behaviour is usually associated with a strategy to increase signalling space and the amplitude in our experiments was probably high enough for the males to perform a more localized search by walking, unlike in the case of inter-plant communication.Previous work suggested that frequency-dependent attenuation may provide more reliable information about distance than total amplitude.We found several individual frequency components whose peak amplitude change reflected the change in the average velocity of the whole range 40–250 Hz.For this reason, they might have a role as a proximity cue, perhaps complementing the change in amplitude of the whole signal, which is the most likely trigger to switch localization stage into courtship; however, while we found significant changes on the frequency axis, no pattern consistent with behaviour or distance from the emitter was obvious, neither in the dominant frequency, nor the frequencies of individual subdominant peaks.Initially we also considered female response latency as a potentially relevant parameter, but while latency increase was consistent with distance, we found no abrupt significant change in this parameter between sections where the switch in behaviour occurred, so we did not consider it further.Hence, amplitude is the parameter we believe the males use, as opposed to response latency and frequency structure of a female reply.Polajnar et al. previously demonstrated that due to resonance, amplitude is an unreliable cue for assessing distance from the emitter in species that use almost pure-tone signals.In broad-band signals, on the other hand, amplitude oscillations average out between frequency components and lead to monotonous attenuation with distance, as also observed in the present study.Unpredictable propagation of pure-tone signals because of resonance may therefore be an additional reason why the emission of a harmonic “buzz” as an element of male vibrational signals is restricted to the final stage, emitted when the male is already close to the female.Another confounding factor is eccentricity of stem motion where perceived signal amplitude might depend on the angle relative to the emitter, but this is only noticeable very close to an emitter standing on a petiole.While visual or chemical cues may also be involved in eliciting courtship behaviour at short distances, these seem to be less likely possibilities.Our video recordings show that females were not visible from the petiole of the female leaf.Until now there has been no evidence that chemical communication plays a role in reproductive behaviour of leafhoppers, and adults rely exclusively on substrate-borne vibrations for intraspecific communication.The antennae of S. 
titanus adults in particular have a reduced number of olfactory sensillae and only the nymphs have been shown to use olfaction for recognition of the host plant.To summarize, pair formation in leafhoppers is a dynamic process where the identification stage seems to be optimized for reliability and the localization stage for speed, while energetic demands may be rationalized by starting the costliest and most complex advertising stage only after the first two tasks have been accomplished.Secondly, leafhopper males are able to interpret relevant information contained in female signals’ perceived amplitude and temporal synchrony with males’ own signals, therefore utilizing not only female reply per se, but also transmission properties of the substrate to guide their behaviour.It is the authors’ belief that behaviour in our chosen model species may, because of its simplicity, provide further insights in the insect mating process, either through observation or through manipulation of signals.Building upon these basic insights, similarly optimized strategies for mate recognition and localization should also be searched for in other species of animals. | The ability to identify and locate conspecifics depends on reliable transfer of information between emitter and receiver. For a majority of plant-dwelling insects communicating with substrate-borne vibrations, localization of a potential partner may be a difficult task due to their small body size and complex transmission properties of plants. In the present study, we used the leafhopper Scaphoideus titanus as a model to investigate duetting and mate searching associated with pair formation. Studying these insects on a natural substrate, we showed that the spatio-temporal structure of a vibrational duet and the perceived intensity of partner's signals influence the mating behaviour. Identification, localization and courtship stages were each characterized by a specific duet structure. In particular, the duet structure differed in synchronization between male and female pulses, which enables identification of the partner, while the switch between behavioural stages was associated with the male-perceived intensity of vibrational signals. This suggests that males obtain the information about their distance from the female and optimize their strategy accordingly. More broadly, our results show that even in insects smaller than 1. cm, vibrational signals provide reliable information needed to find a mating partner. © 2014 The Authors. |
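As an illustration of the amplitude measure used above (not the Pulse 14.0 analysis itself), the following Python/NumPy sketch averages the magnitude spectrum of a calibrated velocity trace over the 40–250 Hz band, a proxy for the male-perceived intensity of the female reply, and compares it with the roughly 10 μm/s level at which males switched from the location duet to the courtship duet. The function names, normalisation, and threshold handling are illustrative assumptions, not the analysis pipeline of the study.

    import numpy as np

    def band_average_velocity(pulse, fs=48000, f_lo=40.0, f_hi=250.0):
        """Average one-sided spectral magnitude (m/s) of a velocity pulse in [f_lo, f_hi]."""
        window = np.hanning(len(pulse))
        spectrum = 2.0 * np.abs(np.fft.rfft(pulse * window)) / len(pulse)
        freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        return spectrum[band].mean()

    def exceeds_courtship_level(pulse, threshold=10e-6):
        # ~10 um/s: the approximate substrate velocity at which the switch
        # from location to courtship behaviour was observed.
        return band_average_velocity(pulse) >= threshold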
375 | Genome sequence analysis of Zooshikella ganghwensis strain VG4 and its potential for the synthesis of antimicrobial metabolites | The emergence and spread of resistance against known antimicrobials has renewed interest in the discovery of microbial natural products with antimicrobial properties.Recent studies have revealed that microbes found in the Red Sea can produce a variety of antimicrobial compounds .The sequencing of microbial genomes has revealed the immense genetic potential of microbes to synthesize bioactive secondary metabolites ; however, the vast majority of secondary metabolites has remained unidentified .In a recent study, we isolated bacteria, from the Red Sea sediments, in the vicinity of seagrass, and tested their ability to degrade Acyl Homoserine Lactone molecules .While doing the initial screening, we observed that the culture supernatant of one isolate could kill the biosensor strain Chromobacter violaceum CV026 used in the assay.We hypothesized that this isolate produced secondary metabolites with antimicrobial properties.Therefore, we sequenced the genome of this isolate in order to investigate the genetic potential of this bacterium to synthesize such metabolites.The 16S-rRNA gene sequence showed a high homology to the Z. ganghwensis strain JC2044, which was isolated from sediments samples from Getbol in Korea .Similarly, to other Zooshikella isolates, this isolate also produced a red pigment that gave a red color to the colony.The red pigment was identified as Prodigiosin, which has shown anticancer and antimicrobial properties .Red Sea sediments were collected at a depth of 1–2 m, from the coastal area 12 km North of Thuwal, Saudi Arabia.Sediments were acquired using a 30-cm-long acrylic cylindrical tube.Sampled sediments were stored at 30 °C, and bacteria were isolated at the earliest to avoid any negative effect due to storage.For bacterial isolation, approximately 1 g of sea sediments were suspended in 1 mL of 0.2-μm filtered autoclaved seawater, and vortexed.This mixture was left to stand for 1–2 min to allow the bigger particles to settle down.The supernatant was then serially diluted, and plated on Marine Agar.The plates were incubated at 30 °C, for 1 week.Selected bacterial colonies were further sub-cultured onto fresh agar plates.Single colonies were subsequently streaked twice to obtain pure cultures.Quorum-quenching assay was conducted as described previously .Briefly, the isolates were grown in 0.5 mL of Marine broth and incubated at 30 °C with shaking.C6-AHLs were added to this bacterial culture to reach a final concentration of 10 μM and further incubated for 24 h at 30 °C with shaking.The bacterial cultures were centrifuged to pellet the cells, and the remaining C6-AHLs in the culture supernatant were detected by adding it to the wells of the LB agar plate overlaid with C. 
violaceum.The plate was incubated further for 24 h at 37 °C A purple halo indicated an absence of QQ activity, whereas no halo indicated a degradation of C6-AHLs.For the genome sequencing, the genomic DNA was extracted, using a DNA blood and tissue kit from Qiagen.The library for the whole genome sequencing was prepared by following the Pacific Biosciences 20-kb Template preparation protocol, also using the BluePippin Size Selection System protocol, and subsequently sequenced on PacBio RS platform.The PacBio chemistry resulted in 49,247 reads and 7.9 Gb of data.The PacBio sequence reads were assembled, using a CANU WGS assembler version 1.4 with default parameters.Assembly of the whole genome yielded 12 contigs with N50 of 5.9Mb and a total genome size of 6.6 Mbp.GC content of the genome was 41.09%.Functional annotation of this bacterium was performed using the Automatic Annotation of Microbial Genomes pipeline .Briefly, this annotation pipeline first validated the sequence quality using prinseq .The RNA prediction was then carried out using RNAmmer , tRNAscan-SE and Infernal .Open Reading Frames genes that were identified, 74% were annotated.NCBI annotations of the genome are available online at URL: https://bit.ly/2w6lelI,It has been suggested that large enzyme complexes, such as polyketide synthases and nonribosomal peptide synthetases, synthesize the majority of the bioactive natural products .Different bioinformatic approaches have been developed for identifying such enzymes in the genomes, and for predicting the structures of polyketides and nonribosomal peptides produced by these enzyme .These bioinformatic tools search for protein domains such as thiolation, condensation, acyltransferase, and adenylation domains that are involved in the biosynthesis of natural products.For the prediction of PK synthases and NRP synthetases, we used an open-source web application called PRISM 3.This computational resource is a valuable tool for the prediction of gene clusters involved in the biosynthesis of bioactive secondary metabolites such as type I and type II PK and NRP and their structures .An analysis of the Z. ganghwensis genome sequence, using PRISM, resulted in the identification of 5 gene clusters that could potentially synthesize NRP and PK.Two of the gene clusters were capable of synthesizing both NRP and PK.It is not clear if such gene clusters can produce both PK and NRP secondary metabolites, or a molecule that is a hybrid of both.Two gene clusters synthesized only NRP, and one gene cluster synthesized only PK.Cluster 1 consists of four open reading frames, two of which encode the antimicrobial resistance genes, a third one, VG4_000000308, that carry five domains involved in the synthesis of NRP, and a fourth ORF, VG4_000000309, that encodes a protein containing 14 domains, involved in the biosynthesis of both NRP and PK.We found that the modular structure of these ORFs was typical to that found in NRP and PK synthases .The predicted structure of the secondary metabolite produced by this cluster is presented in Fig. 2A. Predicted cluster 2 contains only one ORF, and its protein product is predicted to consist of 7 domains, involved in the production of NRP.The predicted structure of NRPs produced by this ORF is shown in Fig. 2B. Cluster 3 consists of three ORFs, and can only synthesize NRP.The predicted structure of NRPs synthesized by this cluster is shown in Fig. 2C. 
Cluster 4 consists of four ORFs each with one domain.This cluster is capable of synthesizing only PKs.PRISM was unable to predict the structure of PKs synthesized by this cluster.Lastly, we found that cluster 5 contained three ORFs and that the protein product of VG4_000004243 contained 9 domains, usually involved in the biosynthesis of NRP and PK.The protein product of VG4_000004245 is predicted to contain 6 domains involved in the synthesis of NRP only.We note that AAMG annotations for PRISM detected genes are in good agreement.In this study, a phenotypic and genomic analysis showed that Z. ganghwensis strain VG4 produced secondary metabolites with potential antimicrobial activity.This antimicrobial activity could be the result of Prodigiosin or other secondary metabolites, such as PK and NRP, that are potentially produced by this bacterium.In future studies, our goals will be to confirm the production of these metabolites and to investigate their bioactivity.The BioProject ID for this genome submission is PRJNA383317.This Whole Genome Shotgun project was deposited at DDBJ/ENA/GenBank, under the accession number NDXW00000000.The version described in this paper is version NDXW01000000.The authors declare no competing financial interests. | With antimicrobial resistance on the rise, the discovery of new compounds with novel structural scaffolds exhibiting antimicrobial properties has become an important area of research. Such compounds can serve as starting points for the development of new antimicrobials. In this report, we present the draft genome sequence of the Zooshikella ganghwensis strain VG4, isolated from Red Sea sediments, that produces metabolites with antimicrobial properties. A genomic analysis reveals that it carries at least five gene clusters that have the potential to direct biosynthesis of bioactive secondary metabolites such as polyketides and nonribosomal peptides. By using in-silico approaches, we predict the structure of these metabolites. |
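As a small worked example of the assembly statistics quoted above (contig count, total size, N50, and GC content), the following Python sketch computes them from an assembly FASTA. Biopython is assumed to be available and the file name is hypothetical, so this is an illustration rather than the pipeline actually used for strain VG4.

    from Bio import SeqIO

    def assembly_stats(fasta_path):
        lengths, gc = [], 0
        for rec in SeqIO.parse(fasta_path, "fasta"):
            seq = str(rec.seq).upper()
            lengths.append(len(seq))
            gc += seq.count("G") + seq.count("C")
        total = sum(lengths)
        running, n50 = 0, 0
        for length in sorted(lengths, reverse=True):
            running += length  # N50: contig length at which half the assembly is reached
            if running >= total / 2:
                n50 = length
                break
        return {"contigs": len(lengths), "total_bp": total,
                "N50": n50, "GC_percent": 100.0 * gc / total}

    print(assembly_stats("VG4_contigs.fasta"))  # hypothetical file name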
376 | Influence of workpiece constituents and cutting speed on the cutting forces developed in the conventional drilling of CFRP composites | Understanding the cutting forces developed in the drilling of fibre reinforced polymer composites, especially those reinforced with carbon fibres, is a fundamental task towards further exploring other phenomena related to the machining of composites, such as chip formation, machining dynamics, heat generation, machining-induced damage, or tool wear/tool life.This last factor, tool wear, is one of the major concerns in aerospace industry, provided that a better understanding will allow optimising tool life models and tool replacement management, thus reducing the manufacturing costs .Investigations carried out by Davim et al. studied the impact of drill geometry, cutting speed and feed speed on the thrust force, amongst other factors, in the drilling of glass fibre reinforced plastic composites using cemented carbide tools.The analysis of variance data analysis showed that the specific cutting pressure decreased with increasing penetration speed and cutting speed, whereas thrust force increased with penetration feed speed.The spur geometry developed a lower thrust force than the 118° point angle drill when comparing the same cutting parameters and for the considered geometries, penetration speed was the factor having the highest impact on both the cutting pressure and the thrust force.Following on from their previous work, Davim et al. studied the impact of cutting speed, feed and of type of resin on the specific cutting force, delamination factor and surface roughness in the drilling of two different GFRP composites using a cemented carbide tool.The authors considered two composites having a 65% fibre volume and different polymer matrices: unsaturated polyester and propoxylated bisphenol A-fumarate.Based on the analysis average and ANOVA of the data collected, the authors reported that the unsaturated polyester-based composite showed a lower specific cutting pressure than the propoxylated bisphenol A-fumarate-based one, feed rate being the parameter having the most significant influence on it for both composite systems.The delamination factor increased with both cutting parameters; however the unsaturated polyester-based composite exhibited the lowest damage.On the other hand, the surface roughness increased with both the increasing feed rate and the cutting speed; however the cutting speed showed a higher impact on the surface roughness than on the feed rate.Tsao assessed and compared the influence of machining parameters on the drilling-induced thrust force and delamination in CFRP drilling using compound core and core-saw drill bits.A compound core-special drill consists of an outer core drill and a conventional drill bit of a varying geometry within the core drill.In these tools, the inner and the outer parts of core-special drills can rotate at different speeds and directions.Results obtained from applying the Taguchi method showed that core-saw drills yielded better results than core drills.Feed rate and spindle speed were the factors having the highest impact on thrust force and delamination, whereas the effect of diameter ratio was negligible.Tsao and Chiu expanded the previous work conducted by Tsao in this area and assessed the impact of drilling parameters on thrust force in the drilling of CFRP composites using compound core-special and step-core-special drills.In a compound step core-special drill, the height of the inner drill is higher 
than that of the outer drill.The ANOVA analysis conducted in this study revealed that cutting velocity ratio, feed rate and inner drill type were the factors having higher impact on the developed thrust force.A high negative cutting velocity ratio showed a significant reduction of thrust force.Recent work by Khashaba reviewed the work carried out on the correlation between drilling parameters and machinability outputs such as thrust force, torque, residual strength, surface roughness and thermo-mechanical damage in machining polymer matrix composite materials.According to this review, the contribution of the chisel edge to thrust force ranged between 40% and 60% of the total thrust.In order to reduce the thrust force and produce delamination-free holes, this work advised to consider tool geometries that are able to distribute the thrust towards the drill periphery instead of concentrating them at the hole centre, as well as applying pre-machining techniques, such as step drills, pilot holes or backing plates, when possible.Decreasing feed rate towards the hole exit was a strategy reported to yield delamination-free holes.Khashaba also reported that high drilling temperatures combined with the low thermal conductivity and glass transition temperature of the composite lead to matrix pyrolysis, composite damage and enhanced tool wear; whilst the stress concentration due to the softening and subsequent solidification of the matrix yielded a reduction in the residual mechanical properties around the holes drilled.Feito et al. carried out an investigation on the influence of the drill geometry on the drilling of CFRPs, focusing on the drill point angle and tool wear since these factors greatly affect cutting forces and hole quality.Fresh tools showed negligible influence of point angle on thrust force, however significant influence was observed when combined with the effects of wear.Entry and exit delamination factors increased with the increasing point angle; however entry delamination diminished with wear progression, although exit delamination increased.ANOVA analysis showed that wear, point angle and feed rate are the parameters having most influence on thrust force, on the other hand the effect of cutting speed was found to be negligible.A number of authors also applied analytical and numerical models to investigate the cutting parameters, machining forces and delamination in the drilling of carbon/epoxy composites.Phadnis et al. investigated the effect of cutting parameters on thrust force and torque in drilling CFRP composites both experimentally and numerically using a 3D finite elements model accounting for complex kinematics at the drill-workpiece interface.Experimental assessment of the drilling-induced damaged was performed using X-ray tomography.Results showed good agreement between experimental and numerical data.Thrust force, torque and delamination damage increased with the increasing feed rate, however reduced gradually with increasing cutting speed.The FE model showed that low feed rates and high cutting speeds yielded the best results in CFRP drilling for the considered parameters.Feito et al. 
studied the delamination prediction in CFRP drilling by comparing two numerical models.The complex model included the rotatory movement of the drill, penetration in the composite plate and element erosion.The simple model considered the drill acting as a punch that pierced the workpiece and whilst overestimating the predicted delamination factor, had a much lower computational cost compared to the complex model.The influence of thrust force on the delamination factor was studied using the simplified model.Results showed that the maximum level of delamination reached a plateau at a certain thrust force level, which matched that induced by complete perforation of the composite with a punch featuring geometry similar to the drill; therefore it can be used as an upper limit for conventional drilling.A recent review by Kumar and Singh compared the work reported on conventional and unconventional machining of CFRP and GFRP composite.Regarding the cutting forces, conventional drilling yielded higher values of thrust force and torque compared to rotatory ultrasonic conventional machining.Thrust force increased with the increasing feed rate, however tool geometry showed significant impact on thrust force and allowed it to reduce.As shown above, an important part of the work carried out in the study of the cutting forces in the drilling of FRPs focused on optimising a number of factors, such as drill bit geometry and cutting settings, in order to minimise delamination and improve analytical models to predict it.However, there is a gap with respect to the specific influence of the workpiece constituents on the cutting forces in the drilling of woven CFRP composites.The aim of this work focuses on investigating the impact of workpiece constituents, number of consecutive holes and cutting speed on the cutting forces developed in order to give further insight into the effect of resin on tool wear in the drilling of CFRP composites.This investigation captured the cutting forces and the damage around the boreholes developed in drilling three CFRP composites in dry conditions using a spindle dynamometer, scanning electron microscopy and X-ray microcomputed tomography; and discussed the impact of each factor on the forces developed based on the results obtained from graphical analyses and hypothesis tests.The carbon fibre/epoxy composite plates considered in this study were manufactured at the Composites Centre of the Advanced Manufacturing Research Centre with Boeing, The University of Sheffield.These plates were made from prepregs plies having a 55% Vf supplied by Cytec Engineered Materials following the vacuum bag moulding method.40 plies were laid-up to an approximate thickness of 10 mm in 5s stacking sequence.Curing of the plates then took place following cure cycles according to the specifications indicated by the supplier to develop full mechanical properties and maximum Tg.Finally, the cured panels were cut down to 150×150 mm using water-jet cutting technology.This investigation considered three different composite systems combining two types of woven carbon fibre fabrics and two types of toughened thermosetting resins:MTM28B is a toughened automotive grade DGEBA-based resin having a Tg of approximately 100 °C, whilst MTM44-1 is a toughened high-end aerospace grade resin having an approximate Tg of 180 °C.CF0300 is a high strength woven carbon fibre fabric featuring 3000 filaments/tow and an approximate tensile modulus of 230–240 GPa whilst CF2216 is a high modulus woven CF fabric having 6000 
filaments/tow and an approximate tensile modulus of 340 GPa. Both carbon fibre fabrics feature a 2/2 twill weave and a density of 199 g/m2. Table 1 summarises the thermomechanical properties of the resins utilised. Additional information about the thermal behaviour and heat dissipation of the CFRP systems considered can be found in previous work carried out by the authors. The tool utilised in this investigation was an uncoated Ø6.35 mm WC-10%Co CoroDrill 856 drill bit, supplied by Sandvik Coromant, which features a 120° point angle, two-flute, double angle geometry. This geometry was utilised since it provided intermediate values of tool life, tool wear and hole quality amongst the two other geometries considered in previous investigations carried out by Sandvik Coromant. Full details on the tool geometry are available in previous research carried out by the authors. The CNC machine utilised was a three-axis DMG MORI DMU 60monoBLOCK fitted with a Kistler spindle dynamometer having an average measured static run-out of 12 μm. The experiments were performed in dry conditions, as this is the common practice in the aerospace industry when machining large-size parts, where the operation cannot be performed inside a CNC machine. Each cutting condition used a new tool in order to minimise any effect derived from accumulated tool wear. The force measurement equipment consisted of a Kistler 9123C1 spindle dynamometer and a Kistler 5221B1 DAQ data acquisition unit, which captured the developed thrust force and torque at a 10 kHz sampling rate. This acquisition rate exceeds the minimum suggested by the Nyquist-Shannon theorem, which indicates that the minimum acquisition frequency must be higher than the revolution frequency of the tool multiplied by the number of cutting edges. In this investigation, the minimum sampling rate for the maximum cutting speed considered is ∼333 Hz (a minimal numerical check of this calculation is sketched after this entry). At the same time, a Micro-Epsilon thermoIMAGER TIM 160 infra-red camera captured the temperatures developed in the drilling operation by measuring the temperature of the chips generated. The peaks marked with numbered balloons correspond to the temperature of the chips generated during the drilling of each hole, whilst the sharp peaks correspond to the subsequent tool retractions and were not computed. It can be observed that after approximately 5 holes, the maximum temperature reached a steady state. This infra-red camera features a 160 × 120 pixel optical resolution, a spectral range of 7.6–13 μm, a thermal sensitivity of 0.08 K and an acquisition rate of 120 Hz. The temperature measurement window utilised was 0 °C/250 °C and the emissivity of the chip was set to ε = 0.90. This work studied the impact of the workpiece constituents, cutting speed and number of consecutive holes drilled on the forces developed in CFRP drilling in a range of cutting speeds between 2500 and 10,000 rpm. Despite cutting speeds above 6000 rpm usually being utilised in conventional drilling of CFRPs, these have been considered in this investigation in order to test the influence of cutting speed over a wider range and to assess the statistical impact of cutting speed and its interaction with the other factors considered. Twelve consecutive holes per condition were drilled in order to reach a steady state temperature, as shown in Fig.
2, plus an additional blind hole in order to inspect the drilled surface using scanning electron microscopy and X-ray microcomputed tomography techniques. The SEM equipment utilised was a Carl Zeiss EVO LS25 microscope in variable pressure mode. This configuration allows scanning non-conductive materials, such as the epoxy resin, without coating the specimen. The X-ray μCT device utilised was a Bruker Skyscan scanner featuring a 20–100 kV X-ray source, an 11 MP X-ray detector and a maximum detail detectability of 0.5 μm. Table 3 summarises the scanning parameters utilised to perform the X-ray μCT scans. The data obtained were analysed and processed using Bruker’s NRecon, CTAn, CTVox and DataViewer software. A low pass filter and linear drift compensations were applied using Kistler DynoWare and Sandvik Coromant proprietary software, as illustrated in Fig. 3. The scatter in the original data, which was removed by the low pass filter applied, corresponds to the characteristic cutting of each fibre orientation, as studied by Wang et al. However, as shown in that study, by filtering the signal the maximum cutting forces can be investigated, making it a relevant signal processing approach for this investigation. After conditioning the signal, the software divided each hole into drilling steps corresponding to the hole entry, tool fully engaged and tool exit, and extracted the maximum values of thrust force and torque for each hole. The impact of each factor and their main effects on the cutting forces were assessed by means of hypothesis testing and graphical analysis using Minitab 17 software. Fig. 4 illustrates the maximum thrust force with an increasing number of holes for the three CFRP systems studied. As shown in the figure, the maximum thrust force curves obtained in drilling the composites with the same resin and different CF fabrics (CF0300 and CF2216, Fig. 4) did not differ significantly, whilst those corresponding to the composites with the same CF fabric and different matrices (MTM44-1 and MTM28B, Fig.
4) presented noticeable differences in their respective slopes and maximum values, thus indicating a strong effect of the resin on the maximum thrust force. This suggests that, in the drilling of CF composites, the mechanical strength of the composite in the feed direction depends on the mechanical strength of the resin. Moreover, the curves corresponding to each cutting speed consistently overlapped for all the composites, suggesting a negligible influence of the cutting speed on the maximum thrust force, which will be further assessed in the statistical analysis. The observed effect of the resin on the maximum thrust force in the drilling of composites is in good agreement with the reported literature. As reported in the available literature, the thermomechanical properties of the resin, such as Tg and elastic modulus, are closely related to its degree of cross-linking and molecular structure. Hence, more highly cross-linked resins, such as MTM44-1, feature improved mechanical strength and stiffness compared to less cross-linked systems like MTM28B, therefore explaining the behaviour described. The effect of the other factor having a significant impact on the maximum thrust force, the consecutive number of holes drilled, can be explained by two contrasting subfactors: tool wear and machining temperature. Table 4 shows the tool wear developed after drilling 12 consecutive holes for each CFRP system and cutting speed. As shown in the table, although tool wear was low, it accounted for the increase in the maximum thrust force. Furthermore, the MTM44-1 CF0300 composite yielded flank wear values ∼25% higher than MTM28B CF0300, which suggests that the resin, despite not being an abrasive element itself, plays an important role in the abrasive properties of the CFRP system. As highlighted, temperature is also a factor with respect to the maximum thrust force results observed. Table 5 presents the maximum temperatures developed in drilling the three considered CFRP composite systems, which are often above the glass transition temperature of the composite. Above Tg the storage modulus decreases to a great extent, which in itself should lower the maximum thrust force. However, as seen above, the maximum thrust force increased with the increasing number of holes drilled. Therefore, the effect of temperature was outweighed and tool wear is the key factor: the increase of flank wear implies the growth of the contact surface between the tool and the workpiece and, therefore, the increase of the maximum thrust force. The impact of the factors considered on the thrust force was studied by ANOVA and factor main-effects analyses, as shown in Table 6 and Fig. 5, respectively. The hypothesis test indicated that the type of resin and the number of holes have a significant impact on the maximum thrust force developed, whilst the effects of the type of CF fabric and cutting speed were found to be negligible. Regarding the behaviour of the maximum torque, Fig.
6 presents the results obtained at cutting speeds between 50 and 200 m/min for the three CFRP systems studied. The maximum torque curves obtained exhibited common characteristic features for all the drilled composite systems. The influence of resin and number of consecutive holes drilled on the maximum torque can be explained in the same terms as described above for the thrust force. However, unlike in the case of the maximum thrust force, cutting speed exhibited a significant impact on the maximum torque developed. Cutting speeds within the low-mid range yielded higher torque than those in the mid-high range, therefore indicating an inverse cutting speed-torque correlation. Furthermore, Fig. 6 shows that the composite reinforced with HM CF fabric was more sensitive to the changing cutting speed compared to those featuring HS CF fabrics. This behaviour can be explained by the impact of cutting speed on the drilling temperatures, strain rate and CF fabric failure. Lower cutting speeds, which imply lower strain rates and longer machining times, allow improved matrix-fibre load transfer compared to higher cutting speeds, thus yielding higher torque values. As explained in previous work carried out by the authors, at Tg the resin maintains its nominal elastic modulus, thus preserving its ability to transfer the load to the reinforcement. Table 5 showed that whilst the systems reinforced with HS CF fabrics developed maximum drilling temperatures above Tg, the composite reinforced with HM CF fabric developed maximum drilling temperatures around the Tg of the resin. This suggests that those CF composites developing maximum drilling temperatures less than or equal to Tg are more sensitive to the effects of strain rate on torque. Inefficient matrix-fibre load transfer, combined with elevated drilling temperatures and the characteristic failure behaviour of each type of CF and fibre orientation, contributes to machining-induced surface defects, as illustrated in Fig. 7. Regions with ±45° fibre-cutting edge orientations developed extensive fibre pull-out, which created crater-type surface defects, as depicted in Fig.
7. The brittle fracture behaviour of CFRP composites, especially at or below Tg, promotes the generation of fine debris formed by a mixture of crushed resin and micron/sub-micron size CF segments. This debris can act as a highly abrasive element in a three-body abrasion tribosystem, causing further damage on the softer resin-rich regions, which can also develop cracking due to the cutting forces applied and the limited load transfer in the machining direction. The through-thickness damage assessment using X-ray μCT showed damage induced at different stages during the drilling operation. The initial stage, which corresponds to the engagement of the primary cutting edge into the workpiece, initiated fibre pull-out at the ±45° fibre-cutting relative angles. The following cutting stage, which corresponds to the engagement of the secondary cutting edge into the composite, generated the same type of damage to a lesser extent. This lower damage generation can be explained by the higher reinforcement-cutting edge relative angle compared to the earlier stage, which promotes a cleaner cutting of the fibres, thus reducing the fibre pull-out. Once the tool is fully engaged into the workpiece, fibre pull-out is still present but limited in depth, forming localised crater-type damage. However, in the vicinity of the surface, the extent of the induced damage increases due to the lack of support and stability, generating further sub-surface damage and, ultimately, delamination. Further details about this damage inspection using X-ray μCT are available in the Supplementary Video file provided. The statistical analysis and factor main-effects assessment of the results discussed above found that all the factors considered had a significant influence on the maximum torque and confirmed the inverse and direct effects of cutting speed and number of holes on the maximum torque, respectively. As in the case of the maximum thrust force, the model fitted can explain this relationship with good precision. The results discussed agree with the reported literature about the failure mechanics of CFRP composites. Due to the nature of the drilling operation, the rotating cutting edges are constantly changing their relative position with respect to the reinforcement while advancing through the thickness helically, and the cutting edges apply the load off-axis and off-plane, where matrix failure modes dominate. Therefore, although CF fabrics and yarns are considered strain-insensitive, the mechanical properties of CF fabrics can also exhibit strain-rate dependency. This work investigated the impact of workpiece constituents, number of consecutive holes drilled and cutting speed on the cutting forces developed in the drilling of carbon/epoxy composites. The most significant finding of this investigation is that the resin was the factor exhibiting the highest impact on the maximum thrust force, the maximum torque and the machining temperatures. Given that load, friction, abrasiveness and wear are closely correlated, this ultimately suggests that resin will also have a significant impact on the abrasiveness of the composite and tool wear. The main reason given to explain the impact of resin was that in the drilling of CFRP composites the forces are always applied in off-axis directions to the reinforcement. This prevents the matrix from properly transferring the load to the fibres, thus reducing the strength of the composite in machining. In the case of torque, the ability of the resin to transfer the load to the reinforcement is also
affected by the drilling temperature and the changing strain rate, which exhibited an inverse strain rate-torque correlation. In this situation, a part of the load is still transferred to the fibres, therefore explaining both the dissimilar strain rate sensitivity exhibited by the CF fabrics considered and the correlation between CF fabric and torque. As mentioned above, these results indicate that the type of resin also influences the abrasiveness of the composite, the machining temperatures and the tool wear in drilling. Based on the outcomes obtained in this investigation, cutting speed selection has to be considered together with other factors, such as the number of consecutive holes drilled and the plate thickness. Cutting speeds around 100 m/min provide a good balance between machining times, tool wear rate and torque. Higher cutting speeds up to 150 m/min can also be considered for laminates thinner than those considered in this study or for short drilling intervals, as the impact of tool wear is minimised. The results obtained in this investigation present valuable insights into the influence of resin on the machining temperatures, the tool-composite friction and the abrasiveness of the composite, factors that will be further explored in future work. This paper investigated the impact of cutting speed and workpiece constituents on the forces developed in CFRP drilling by machining three carbon/epoxy systems, combining two types of thermosetting resins and two types of woven CF fabrics. From the results obtained in this investigation and their analyses, the following conclusions can be drawn: The type of resin and the number of consecutive holes drilled exhibited a significant impact on the maximum thrust force, whereas the influence of both the type of CF fabric and cutting speed was found to be negligible. The contribution of resin is explained by the dominance of the thermomechanical properties of the resin in the feed direction, which are higher for those resins with higher degrees of cross-linking. As drilling temperatures rise above Tg, the storage modulus of the resin decreases, and therefore the maximum thrust force would also be expected to decrease. However, this is exceeded by the increase of the maximum thrust force produced by the increasing tool wear, which implies an increase of the tool-workpiece contact surface. All of the factors considered showed a significant impact on the maximum torque. The contributions of resin and number of holes can be explained in the same terms as in the case of thrust force. However, part of the contribution of resin to the maximum torque is related to the impact of the resin properties on the chip formation and the tool-composite friction, which will be investigated in future work. The maximum torque exhibited significant sensitivity to cutting speed, especially for the composite reinforced with HM CF fabric. This behaviour can be explained by the characteristic helical cutting direction followed in drilling, the strain rate sensitivity of CFRP laminates in off-axis directions, the machining temperatures and the mechanical behaviour of thermosetting resins at different temperatures. In these directions, where the composite failure behaviour is mostly dominated by the matrix failure mechanisms, high strain rates prevent a proper load transfer from the matrix to the fibres, thus explaining the inverse cutting speed-maximum torque correlation.
| This work investigates the influence of cutting speed and workpiece constituents on the thrust force and torque developed in the conventional dry drilling of woven carbon fibre reinforced polymer (CFRP) composites using uncoated WC-Co tools, by applying experimental techniques and statistical test methods. The type of thermosetting matrix showed a significant impact on both the maximum thrust force and torque developed, whilst the type of carbon fibre fabric and cutting speed showed negligible effects on the maximum thrust force. Cutting speed exhibited a strong influence on the maximum torque developed, and high modulus CFRP composites showed increased sensitivity to cutting speed and strain rate compared with intermediate modulus composites. In the characteristic helical machining and feed directions in drilling, the strength and failure behaviour of the composite is dominated by the mechanical properties and failure mechanisms of the matrix, which explains the significant impact of resin on the cutting forces. On the other hand, the impact of cutting speed on torque is justified by the negative impact of strain rate on the ability of the matrix to transfer the load to the reinforcement, thus explaining the decrease in the maximum torque with increasing cutting speed. |
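The sampling-rate argument in the entry above states that the acquisition frequency must exceed the revolution frequency of the tool multiplied by the number of cutting edges, and quotes a minimum of ∼333 Hz for the maximum cutting speed. A minimal numerical check of that calculation is sketched below, assuming (as stated in the entry) a two-flute drill, a maximum spindle speed of 10,000 rpm and a 10 kHz acquisition rate; the function name is illustrative and not taken from the original work.

```python
# Minimal check of the minimum acquisition rate for cutting-force measurement,
# following the rule quoted above: the sampling frequency must exceed the tool
# revolution frequency multiplied by the number of cutting edges.

def min_sampling_rate_hz(spindle_speed_rpm: float, n_cutting_edges: int) -> float:
    """Lower bound on the DAQ rate (tooth-passing frequency) in Hz."""
    revolution_freq_hz = spindle_speed_rpm / 60.0
    return revolution_freq_hz * n_cutting_edges

if __name__ == "__main__":
    daq_rate_hz = 10_000       # sampling rate of the Kistler DAQ used in the study
    max_speed_rpm = 10_000     # highest spindle speed considered
    n_edges = 2                # two-flute drill geometry

    f_min = min_sampling_rate_hz(max_speed_rpm, n_edges)
    print(f"Minimum required rate: {f_min:.0f} Hz")                  # ~333 Hz
    print(f"Margin of the 10 kHz DAQ rate: {daq_rate_hz / f_min:.0f}x")
```

Running the sketch reproduces the ∼333 Hz lower bound quoted in the entry and shows that the 10 kHz acquisition rate provides roughly a 30-fold margin.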
377 | Insights into the biodegradation of weathered hydrocarbons in contaminated soils by bioaugmentation and nutrient stimulation | Land contamination from poor historical industrial practices or incidents is a widespread and well recognised environmental issue. In the UK alone, it has been estimated that ca. 300,000 ha of land could be affected by industrial activity leading to contamination. Petroleum hydrocarbons are one of the most common contaminants, though a wide range of chemicals may be present. Once released into the environment, petroleum hydrocarbons are subject to abiotic and biotic weathering reactions, e.g. physical and biochemical transformations and interactions with soils, that will change their composition and toxicity, and will influence their fate and biodegradation. The extent of these transformations will vary according to the type of petroleum products present, the soil conditions, and the bioavailability and susceptibility of the different compounds. Bioremediation has become the preferred method for the remediation of petroleum hydrocarbon contaminated soils, because it is considered cost effective and sustainable, and can accelerate naturally occurring biodegradation processes through the optimisation of limiting parameters. To be effective, it is important to investigate and understand all factors that might affect the efficacy of the process. For example, aliphatic hydrocarbons of intermediate length tend to be readily degradable by microorganisms despite their low solubility, whereas longer chain alkanes, especially those with branched or cyclic chain structures, are more resistant to biological degradation. Heavily weathered hydrocarbons are difficult to biodegrade and have relatively low toxicity, but high residual concentrations can severely alter the physical and chemical properties of the soils, thus reducing soil fertility. Remediation outcomes using biological methods for the treatment of weathered hydrocarbons are often unpredictable, and in some instances contaminated soil may be regarded as ‘untreatable’ via bioremediation. Debate continues around the benefits of bioaugmentation and its capacity to increase the microbial degradation of weathered hydrocarbons after indigenous microorganisms are no longer effective, and only a few studies demonstrate continued biodegradation after the introduction of specific hydrocarbon degraders. Biodegradative performance via bioaugmentation can be further improved by the addition of appropriate nutrients, a process referred to as biostimulation. Due to the limited number of studies on the subject, and the complexity of weathered petroleum hydrocarbon products, there is a need for investigation into the potential for bioaugmentation coupled with biostimulation to enhance biotransformation and reduce residual toxicity. In this study, we investigated the potential for biotransformation of weathered hydrocarbon residues in soil. To do so, we determined whether it was possible to improve biodegradation with the simultaneous application of bioaugmentation and biostimulation on two soil types. Soil A was taken from a windrow where bioremediation had been completed and soil B was taken from a site prior to remediation where oil drums had leaked, contaminating the soil. Soil A treatment was deemed complete as no further degradation could be achieved. This research provides valuable knowledge concerning chemical and toxicological changes in a soil type not previously investigated and could be used to support the development of bioremediation
strategies.Finally, we discuss the relationship between chemical change, toxicity, and total petroleum hydrocarbons measurements in the context of risk assessment, highlighting the effects that remediation might have on soil toxicity.Two different soils collected at a depth of 5–20 cm from two commercial oil refinery sites located in the UK were labelled A and B.Soil A is a sandy soil which was heavily contaminated with weathered hydrocarbons.After 6 month windrow treatment, TPH concentration was decreased to 22,700 mg kg−1 where it was believed no further degradation was possible.Soil B is predominantly clay soil contaminated with weathered hydrocarbons taken from a more recently contaminated site where there was no history of any remedial activity.The soils were air-dried for 24 h and sieved through 2 mm mesh to remove stones, plant material, and to facilitate mixing.Prior to air drying the field moisture content was determined in triplicate by oven drying at 105 °C for 24 h. Soils were then stored at 4 °C in the dark before use.For both soil samples, a routine set of characterisation was carried out.Soil pH was measured using a pH meter in a distilled water slurry after a 30 min equilibration period.Maximum water holding capacity was determined in duplicate by flooding the wet weight equivalent of 100 g of dry soil in a filter funnel and allowing it to drain overnight.Particle size analysis was performed by a combination of wet sieving and sedimentation, as described by Gee and Baude.The organic matter content as indicated by loss on ignition of each soil was measured by combustion at 450 °C in a furnace for 24 h, according to ASTM Method D297487.Total organic carbon was analysed by potassium dichromate oxidation, as described by Schnitzer.For nitrate, phosphate, and ammonium determination, 10 g of soil was first extracted in 0.5 M potassium bicarbonate.The extractant was then analysed by high-performance liquid chromatography for nitrate and phosphate as described by Brenner and Mulvaney and Olsen and Sommers, respectively.Ammonium was analysed using the colorimetric test described by Reardon et al.Soil microcosms were established using 700 g of either soil A and B in sterile 1 L, wide-mouth amber glass jars.Four different microcosm conditions for each soil were established and tested in triplicate.Soil grinding was done using mortars and pestles made from hard chemical-porcelain ware.The mortars had a lip and were glazed on the outside.The pestles were glazed to the grinding surface.Soil aliquots of about 70 g were ground for about 15 min in the mortar with the pestle to pass the soil through a 42.5 μm sieve.The ground soil aliquots were then combined for additional sample preparation.Nutrients were added in the form of ammonium nitrate and potassium orthophosphate to obtain a C:N:P ratio of 100:1:0.1.The hydrocarbon-degrading inoculum was composed of three bacterial isolates supplied by Remedios Limited.The bacterial isolates were isolated from an attenuated enrichment culture from No.6 oil impacted soil.Two bacterial isolates were related to Pseudomonas sp. 
and one to Klebsiella sp.Inoculum was grown using a minimal medium supplemented with diesel as carbon source.The cell concentration added to each microcosm was such as to give 5 × 107 CFU g−1 soil.For each amendment, a few woodchips were added to 10 ml of Bushnell-Haas broth supplemented with 1 g l−1 salicylic acid and 1% ethanol, adjusted to pH 7.The mixture was placed in an orbital shaker at 150 rpm in the dark at 20 °C and left overnight, after which 1 ml was added to 100 ml of fresh medium and grown to a stationary phase.The cell number at stationary phase was 108 cells ml−1.The inoculum solution was then added to the soils at 0.01 ml g−1 dry wt soil to achieve 106 cells g−1 dry wt soil.The moisture content of each microcosm was adjusted to 80% of the soil’s water holding capacity using deionised water.The microcosms were incubated in the dark at 15 °C.High humidity was maintained using damp cotton wool and moisture checked periodically.Each microcosm was mixed weekly and capped loosely to allow oxygen transfer.Soil from each microcosm was sampled at 0, 7, 14, 28, 56, and 112 days for subsequent microorganism respiration monitoring and hydrocarbon analysis.A soil sample from each microcosm was collected and sealed in a headspace vial.All vials containing microcosm samples were incubated under the same conditions as the microcosms for 24 h. For CO2 analysis, the headspace gas was sampled with a gastight syringe and manually injected into a Cambridge Scientific 200 series gas chromatograph with thermal conductivity detector using helium as carrier gas at 20 psi.The GC was fitted with a CTR1 concentric packed column.The column oven and injector temperature were 110 °C and 125 °C, respectively.The GC was calibrated using a standard CO2.Respiration values were determined on a mg CO2 kg soil−1 day−1 basis following subtraction of a blank vial containing atmospheric CO2 only.Homogenized soil was weighed into a glass Universal bottle and 20 ml of ¼ strength Ringer’s solution was added.Samples were then vortexed for 30 s and sonicated for 1 min and allowed to stand for a further 2 min.A 100 ml aliquot of soil suspension was removed and serially diluted in ¼ strength Ringer’s solution to the appropriate dilution factor.An aliquot of 10 ml of each dilution series was added in triplicate to ¼ strength Luria Bertani medium to determine heterotrophs and Bushnell-Hass with 1% diesel as the sole carbon source for hydrocarbon-degraders.Samples were incubated at 25 °C for 24–48 h thereafter and colony-forming units enumerated.Results are expressed as CFU g−1 of dry soil.Hydrocarbon extraction was performed as described by Risdon et al.Briefly soils were chemically dried with 5 g anhydrous Na2SO4 in 50 ml Teflon centrifuge tubes.Acetone was added and sonicated for 2 min at 20 °C.Acetone and hexane were added to the samples and sonicated for 10 min, followed by manually shaking to mix the solvent and soil.This step was repeated twice followed by centrifugation for 5 min at 750 rpm.After passing the supernatant through a filter column fitted with glass receiver tube, a sequential step series, including resuspension of the samples in 10 ml of acetone/hexane, sonicated for 15 min at 20 °C, centrifugation for 5 min at 750 rpm, and decantation into a filter column, was repeated twice.The final extract volume was adjusted to 40 ml with a mixture of acetone/hexane and homogenized by manual shaking.The silica gel column clean-up was performed by passing the extracts through a column filled with florisil.Total 
extractable and recoverable petroleum hydrocarbons (TERPH), aliphatic and aromatic fractions were identified and quantified using a Perkin Elmer AutoSystem XL gas chromatograph coupled with a Perkin Elmer Turbomass Gold mass spectrometer operated at 70 eV in positive ion mode. The GC was fitted with a Restek RTX-5MS capillary column. Splitless injection with a sample volume of 1 μl was applied. The oven temperature was increased from 60 °C to 220 °C at 20 °C min−1, then to 310 °C at 6 °C/min and held at this temperature for 15 min. The mass spectrometer was operated in full scan mode for quantitative analysis of target alkanes and PAHs. For each compound, quantification was performed by integrating the peak at a specific m/z. External multilevel calibrations were carried out for both oil fractions, with quantification ranging from 0.5 to 2500 μg ml−1 and from 1 to 5 μg ml−1, respectively. Internal standards were nonadecane-d40 and triacontane-d62 for the alkanes, and naphthalene-d8, phenanthrene-d10, chrysene-d12 and perylene-d12 for the PAHs. For quality control, a 500 μg ml−1 diesel standard and mineral oil were analysed every 20 samples. In addition, duplicate blank controls were also performed by going through the same extraction procedure but containing no soil. The reagent control was treated following the same procedure as the samples without adding a soil sample. The reference material was an uncontaminated soil of known characteristics, and was spiked with a diesel and mineral oil standard at a concentration equivalent to 16,000 mg kg−1. Seed germination and Microtox® assays were carried out at the start and the end of the microcosm experiment. The selection of the ecotoxicity assays was based on their ease of execution and representation of different ecological soil organisms. Seed germination tests were performed according to Saterbak et al. Ten white mustard seeds were planted into 20 g of test soil in a 120 ml bottle. This was repeated 10 times. The seeds were left to germinate for 4 days at 25 °C in darkness. At the end of each test, if a root was visible, the seed was scored as germinated. The Microtox® solid phase test (SPT) assay was carried out according to Azur Environmental. Tests were done in triplicate. The soil dilution that inhibits 50% of the light output relative to oil-free soil collected near the sampling sites was calculated for each oiled sample and expressed as a percentage of the pristine sample. Note that Microtox EC50 values decline as toxicity increases. A standard 100 g l−1 phenol solution was used to check the performance of both the operator and the analytical system, and the 95% confidence range was maintained below 15% variation throughout the study. Statistical analyses of the results, such as mean, standard deviation, standard error and analysis of variance (ANOVA), were performed using Excel and SPSS. Differences in the TERPH, alkane and PAH concentrations between treatments were compared using ANOVA with Fisher's Least Significant Difference test. The difference was recognised as significant where P < 0.05. Soil characterisation provided baseline physical and chemical properties of the two soil samples used in this study. The TPH concentration in Soil A was measured at 22,700 mg kg−1, and for Soil B, at 31,500 mg kg−1. Both values indicate that the two soils contained elevated concentrations of petroleum hydrocarbons. The ammonium and nitrate levels were undetectable in both soils; phosphate was undetectable in Soil B and measured at a low level in Soil A. These conditions suggest that both soils could benefit from
biostimulation with nutrients. Biodegradation within each soil was not seen to be limited by carbon, nitrogen, pH or moisture conditions as these remained within acceptable ranges. Microbial respiration tests indicated that an active microbial population was present within both soils prior to the addition of the microbial inoculum. Other soil properties are shown in Table 2. More detailed soil analysis results, including metal concentrations, have been reported in a previous study. All of the microcosms contained a viable microbial community, as demonstrated by respiration rates that were measured by CO2 production from the soil samples. Soil A had a lower respiration rate than Soil B and this was likely due to the higher concentration of longer chain hydrocarbons and other recalcitrant fractions, as a result of a longer exposure to contamination and weathering. In contrast, Soil B was contaminated more recently and the presence of short chain hydrocarbons and other readily biodegradable compounds could be responsible for the initially higher level of CO2 production. Analysis of the initial soil hydrocarbon concentrations and hydrocarbon fractions supports this claim. The amendment strategies applied to both soils, except soil grinding + biostimulation, increased the numbers of culturable heterotrophs and hydrocarbon degraders by two and three orders of magnitude, and this translated into enhanced CO2 production when compared to natural attenuation processes alone. A significant increase in hydrocarbon degraders was observed within 42 days in both soils. At the end of the experiment, the numbers of hydrocarbon degraders in both soils were three orders of magnitude higher than those in control soils and in soil where grinding and biostimulation were applied. In addition, CO2 production in Soil B stabilised at ∼250 mg CO2 kg soil d−1 for 56 days and then after 100 days gradually stabilised at ∼200 mg CO2 kg soil d−1. A similar trend was observed in Soil A, where CO2 production stabilised at ∼150 mg CO2 kg soil d−1 after 28 days. Where grinding and biostimulation were applied to Soil A, respiration levels stabilised at ∼120 mg CO2 kg soil d−1 and after 98 days were closer to 100 mg CO2 kg soil d−1, in line with respiration levels measured in the microcosms where natural attenuation was applied. The TPH levels for the Soil A microcosms were similar across the four studied conditions. This suggests that grinding changed the intrinsic physico-chemistry of the soil and hydrocarbons, which affected both the microbial activity and the microbial abundance. The Soil B hydrocarbon distribution showed a well-developed series of n-alkanes. The distribution is heavy-end skewed and bi-modal, with a higher proportion of C28-C40 n-alkanes. In contrast, the Soil A hydrocarbon distribution confirms that the hydrocarbon source is weathered. More specifically, the concentration of aliphatic compounds in Soil A was 2.2 times higher than in Soil B. In contrast, the concentration of aromatic compounds in Soil B was 100 times higher than in Soil A. After treatment, the most prominent residual hydrocarbon fractions in Soil A and Soil B were the aliphatic fractions C16-C35 and C35-C40, and the aromatic fractions C16-C21 and C12-C16, respectively. The largest reductions in both the aliphatic and aromatic fractions were obtained in the BioS and BioS + BioA microcosms. While the BioS + BioA combination improved bioremediation performance, the addition of microbes did not necessarily provide additional improvement to the biodegradation process compared to the addition of
nutrients alone, suggesting that nutrient addition is a key parameter for promoting biodegradation. Similar percentage hydrocarbon losses were observed in Soil B, which had not previously undergone remediation. Further to this, the percentage of degradation and the degradation constant suggest that the higher concentrations of aromatics did not limit the extent of bioremediation performance in soils (a minimal sketch of how such a first-order degradation constant can be estimated from the TPH time series is given after this entry). Even though Soil A was previously remediated and the soil had undergone considerable weathering, further degradation of the residual hydrocarbons was possible when suitable conditions were provided. As a result, bioremediation end points are variable and will depend on a range of factors, most notably the nutrient levels and the availability of microbes. In weathered soils, residual hydrocarbons are tightly bound to the soil matrix and can form rigid soil aggregates that can effectively entrap hydrocarbons and limit bioaccessibility. By grinding the soil, the contact surface area and oxygen transfer rates are increased and this increases the chance for microbes and hydrocarbons to come into contact. However, the combination of grinding and BioS was not observed to enhance the mineralisation of the aliphatic and aromatic hydrocarbon fractions compared to natural attenuation. This unexpected result may be due to the disruption of the indigenous microbial consortium caused by soil grinding, as previously suggested by Powlson, or, more likely, to grinding facilitating the release of bound fractions of hydrocarbons that proved toxic to the microbes. This finding reinforces previous findings reported by Wu et al. and Huesemann et al. that state it is incorrect to assume that residual hydrocarbons after bioremediation treatment are recalcitrant and can therefore be left in place without posing an environmental risk. Overall, the extent of biodegradation in both soils was less significant for longer chain hydrocarbons than for shorter chain hydrocarbons. This effect became more pronounced for hydrocarbon fractions with an equivalent carbon number over C35, as these compounds are the most recalcitrant to degradation. It is important to evaluate soil ecotoxicology during and after any remediation treatment, as it has been shown that a reduction in contaminants alone does not imply a reduction in toxicity. Further to this, ecotoxicological tests can provide information on the bioavailability of contaminants present in soil. Results of the seed germination tests were normalised using a clean uncontaminated soil to take into account the germination rate of the seeds used. Visual observations showed that seeds germinated quickly in the uncontaminated soil, with an incidence of >90% seed germination over the experimental period. Whilst seed germination was observed in both contaminated soils and for each treatment, it should be noted that visual observations during the course of the experiment showed that the rate of germination, and subsequent seedling growth, were reduced compared to the uncontaminated soil. Thus, even though Soil B, without any specific treatment, achieved 100% germination at the end of the experiment, the rate and degree of growth were less than those seen in the uncontaminated soil, thus suggesting an ecotoxicological effect. Overall, seed germination and Microtox® SPT showed that the toxicity of treated soils was higher, while the natural attenuation approach showed the least change in toxicity. The seed germination tests showed an almost 50% reduction in germination for all three treatments for Soil A
and a 40% reduction in the two Soil B treatments.Therefore, although there was a considerable reduction in TERPH, the toxicity of the soils increased.Such findings have been reported previously, for example, Dorn and Salanitro reported in a 360-day lab scale bioremediation trial of soils contaminated with crude oil that there was no improvement of seed germination after bioremediation, although a significant degradation of contaminants had occurred.Grinding Soil A for the biostimulation condition offers one explanation for this increased toxicity.It is possible that the grinding process released toxic contaminants that were originally enclosed in soil aggregates or pores, enhancing their bioavailability.The Microtox® SPT measurements are in good agreement with the findings of the seed germination in terms of ecotoxicity ranking for both soils.The EC50 values decreased with a decline in TERPH, confirming that remediated soils have higher toxicity compared to original soil conditions and/or soil left to natural attenuation.These results suggest that the negative shift is due to the compositional changes observed during the active treatments and the by-product of biodegradation.In a recent study, Mamindy-Pajany et al. evaluated the ecotoxicological effects of four contaminated sediments treated with mineral additives using Microtox® assay.In all treated samples, a decrease of contaminants was observed.However, the rank of toxicity was not in accordance with the rank of contamination level, and in two of the less contaminated samples an increased toxicity level was observed.Similar findings have also been reported by Xu and Lu when using Microtox® SPT to evaluate the ecotoxicity of crude oil contaminated soil after bioremediation.In sum, the results suggest that there is no direct correlation between a decrease in total extractable hydrocarbons and a reduction in toxicity.There are several causes for these discrepancies including hydrocarbon bioavailability change during bioremediation treatment, complex soil-contaminant interactions as well as interactions between residual hydrocarbons rendering them more or less toxic than expected based on additive independent behaviour of toxicants, and sensitivity of the bioassay to the bioavailable fraction of the residual hydrocarbons as compared to the tightly bound fractions that could be a prominent fraction of the residual hydrocarbons.This is likely due to the more toxic intermediates formed during the biodegradation.This study confirms it is possible to improve the treatability of weathered hydrocarbons in soil by applying different bioremediation strategies such as bioaugmentation and biostimulation individually or in combination.The rates of biodegradation, however, may be affected by grinding, suggesting that the tightly bound weathered, hydrocarbon fraction can be disrupted, possibly leading to the release of toxic compounds.This observation was supported by monitoring of respiration rates and analysis of soil ecotoxicity, which confirmed that the reduction of the hydrocarbon content in soil, even for weathered hydrocarbons, does not necessarily lower the toxicity of the soil.Thus, assessing the potential biotransformation of weathered hydrocarbons in soil requires careful consideration of a wide range of factors including bioavailability change, and increased concentration of intermediates or biodegradation products during bioremediation treatment.Monitoring TPH alone is therefore not sufficient for determining the environmental risk posed by 
a contaminated site after remediation. Also, bioavailability is an important factor that can influence the extent of mass reduction achievable by bioremediation. However, the objective of bioremediation should not be mass reduction per se, but risk reduction and management. As such, it is important to consider these aspects in future research. | The potential for biotransformation of weathered hydrocarbon residues in soils collected from two commercial oil refinery sites (Soil A and B) was studied in microcosm experiments. Soil A had previously been subjected to on-site bioremediation and it was believed that no further degradation was possible, while Soil B had not been subjected to any treatment. A number of amendment strategies, including bioaugmentation with hydrocarbon degraders, biostimulation with nutrients and soil grinding, were applied to the microcosms as putative biodegradation improvement strategies. The hydrocarbon concentrations in each amendment group were monitored throughout 112 days of incubation. Microcosms treated with biostimulation (BS) and biostimulation/bioaugmentation (BS + BA) showed the most significant reductions in the aliphatic and aromatic hydrocarbon fractions. However, soil grinding was shown to reduce the effectiveness of a nutrient treatment on the extent of biotransformation by up to 25% and 20% for the aliphatic and aromatic hydrocarbon fractions, respectively. This is likely due to the disruption of the indigenous microbial community in the soil caused by grinding. Further, ecotoxicological responses (mustard seed germination and Microtox assays) showed that a reduction of total petroleum hydrocarbon (TPH) concentration in soil was not directly correlated with a reduction in toxicity; thus monitoring TPH alone is not sufficient for assessing the environmental risk of a contaminated site after remediation. |
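The entry above refers to percentage degradation and a degradation constant derived from TPH measurements taken at 0, 7, 14, 28, 56 and 112 days. A minimal sketch of how such a first-order rate constant could be estimated by log-linear regression is given below; apart from the initial Soil B concentration of 31,500 mg kg−1 quoted in the entry, the TPH values and the function name are hypothetical placeholders rather than data or code from the study.

```python
# Sketch: estimate a first-order biodegradation rate constant k from a TPH time
# series, assuming C(t) = C0 * exp(-k * t), via a log-linear least-squares fit.
import numpy as np

def first_order_k(days, tph_mg_per_kg):
    """Return (k in d^-1, fitted C0) from a regression of ln(TPH) against time."""
    t = np.asarray(days, dtype=float)
    ln_c = np.log(np.asarray(tph_mg_per_kg, dtype=float))
    slope, intercept = np.polyfit(t, ln_c, 1)   # ln C = ln C0 - k * t
    return -slope, float(np.exp(intercept))

if __name__ == "__main__":
    sampling_days = [0, 7, 14, 28, 56, 112]            # sampling points used in the microcosms
    tph = [31500, 29000, 26500, 22500, 18000, 14500]   # hypothetical TPH values (mg/kg)

    k, c0 = first_order_k(sampling_days, tph)
    half_life = np.log(2) / k
    removal_pct = 100 * (1 - tph[-1] / tph[0])
    print(f"k = {k:.4f} d^-1, half-life = {half_life:.0f} d, removal = {removal_pct:.0f}%")
```

The same fit, applied to the measured TPH series of each microcosm, would give the treatment-specific degradation constants referred to in the text.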
378 | Mechanisms and risk assessment of steroid resistance in acute kidney transplant rejection | Since the first successful transplantation performed in 1954 , kidney allograft transplantation has become the preferred renal replacement therapy for patients suffering from end-stage renal disease.At present, more than 36,500 kidney transplantations are performed annually in Europe and the United States .Despite the success of renal transplantation, approximately 40% of renal allografts fail within the first 10 years, and a shortage of donors hampers the number of transplants performed each year .These limitations stress the need to improve long-term graft survival and prevent adverse graft outcome.The occurrence of acute rejection is a dominant risk factor for adverse graft outcome.AR is primarily a cellular immune response directed against mismatched donor antigens present on the cells of the allograft , which generally occurs during the early post-transplant period, with the highest risk in the first 3 months.Reliable and timely detection of AR episodes is important for the prevention of adverse graft outcome.Diagnosis of AR episodes relies on clinical parameters and histopathologic assessment of kidney biopsy samples .Most patients who develop an AR episode are asymptomatic and present only with an increase in serum creatinine levels as an indicator of a decline in renal function .The cause of graft dysfunction is determined based on nephropathologic criteria and histological assessment of a renal allograft biopsy.The Banff classification is used to identify and designate the severity of rejection episodes on the basis of the site and degree of inflammation in the transplanted kidney .However, once AR is diagnosed, it is difficult to predict the response to anti-rejection treatment based on clinical parameters and histopathologic assessment.Availability of biomarkers could provide complementary parameters for assessing the risk of adverse graft outcome.The present review provides an overview of biomarkers of steroid resistant rejection in kidney transplantation.In the 1960s, AR was the most important cause of kidney transplant loss.Only 40% of renal allograft recipients had a functioning graft at one year after transplantation .The introduction of immunosuppressive medications and refinement in treatment regimens during the following decades has reduced the incidence of AR from over 80% in the 1960s to below 15% nowadays .Over the same period, the short-term survival of kidney grafts has substantially improved, with one-year graft survival rates in excess of 90% in current daily practice .Despite these advances in short-term outcome, long-term graft outcome improved only marginally over the past two decades .Approximately 50% of grafts from deceased donors and 30% of grafts from living donors fail within ten years after kidney transplantation .The graft attrition rates after the first year are between 3% and 5% annually.This is mainly due to death with a functioning graft and chronic allograft failure .Even after the introduction of immunosuppressive medication, AR continues to be a primary cause of renal allograft failure.Approximately 10% of all graft losses are directly caused by acute renal allograft rejection .In addition, the occurrence of AR correlates with a significant reduction in long-term allograft survival .Beside this association with risk of graft loss, AR is also correlated with the development of chronic allograft failure.Renal interstitial fibrosis and tubular 
atrophy—which was formerly known as chronic allograft nephropathy—is the most prevalent cause of chronic allograft failure after the first post-transplant year .Analyses of factors related to the development of IFTA revealed AR as one of the primary risk factors .Although the incidence of AR has decreased during the last decades, the negative impact of AR on subsequent development of IFTA and the risk for chronic transplant failure have become more prominent .Various parameters of AR determine the level of risk for adverse graft outcome including the timing, recurrence, severity, and therapy sensitivity of the AR episode .Risk of graft failure increases as the time between engraftment and occurrence of AR increases, and is most pronounced with late AR episodes .Similarly, patients experiencing repeated AR episodes are at greater risk of adverse graft outcome than those with no or only one episode .In addition, patients with acute vascular rejection have a higher risk of graft failure compared to patients with acute tubulointerstitial rejection .Furthermore, rejection episodes unresponsive to AR treatment have been associated with increased risk of allograft failure .Immunosuppressive medication has become a cornerstone in the transplantation field.Investigation of the use of immunosuppression for prevention of transplant rejection started in the early 1950s, when Medawar and colleagues revealed that AR is driven by an immunological process .The first tested therapies, i.e., total body irradiation and adrenal cortical steroids , led to prolonged skin graft survival.These early findings set the stage for the development of the current immunosuppressive drug therapies.Nowadays, almost all transplant recipients are treated with immunosuppressive drugs to minimize the chance of AR, which act by inhibiting the activation and/or effector functions of T cells.Renal transplant recipients can still develop episodes of acute allograft rejection despite optimization of human leukocyte antigen compatibility and application of induction therapy and maintenance immunosuppression.Several therapeutic options are available for the reversal of AR episodes.The first report on the use of immunosuppressive drugs for the treatment of acute renal allograft rejection appeared in 1960 ."A young female recipient of her mother's kidney developed multiple rejection episodes, which were temporarily reversed with the synthetic corticosteroid drug prednisone.This case sparked the interest in synthetic glucocorticoids for both the prevention and treatment of AR episodes.In 1963, Starzl and colleagues demonstrated that acute renal allograft rejection could readily be reversed by temporarily adding high doses of prednisone to the patients maintenance therapy .Ten renal allograft recipients showed an essentially complete recovery of their renal function.Based on these early findings, increased dosages of the daily maintenance regimen with oral prednisone became the main therapy for AR .Treatment of AR with high doses of oral prednisone was found to potentially induce toxic side effects, such as gastrointestinal hemorrhage and increased susceptibility to infection.To prevent these complications, the treatment was switched from oral prednisone to intravenous application of methylprednisolone during the early 1970s .Comparison of the two regimens revealed that both GCs are equally successful in reversing AR .However, pulse therapy with intravenous methylprednisolone is associated with fewer side effects than oral prednisone 
therapy .Nowadays, intravenous pulse therapy with high-dose methylprednisolone has become the first-line therapy for AR in most medical centers.The first report on antibody-based immunosuppression was by Metchnikoff in 1899 .His observations on the lymphocyte-depleting activity of heterologous anti-lymphocyte serum were validated in the 1960s .These findings resulted in the introduction of anti-thymocyte globulin, which represents serum-derived polyclonal antibodies obtained from horses or rabbits immunized with human lymphocytes, as a treatment of allograft rejection .ATG therapy causes depletion of circulating T cells and other leukocytes through various mechanisms, including antibody- and complement-dependent lysis and the induction of apoptosis .ATG is an effective treatment of AR with high graft survival rates .However, ATG can induce complications, such as leukopenia, cytokine release syndrome, and viral infections .Due to the risk of complications, ATG is mainly used for the treatment of steroid-resistant AR and recurrent AR.The development of cell-hybridization techniques provided the opportunity to produce monospecific antibodies .The murine-derived Muromonab-CD3 is a monoclonal antibody based treatment directed against the CD3 molecule, which is closely associated with the T cell receptor.OKT3 therapy modulates the TCR, resulting in the depletion of circulating T cells.OKT3 has been used as primary treatment of AR as well as a rescue therapy of steroid-resistant AR episodes .Due to its lower efficacy and higher incidence of side effects compared to ATG treatment, OKT3 has been withdrawn from the market and is no longer in clinical use .The therapeutic effects of synthetic GCs for the treatment of acute renal allograft rejection are attributed to their anti-inflammatory and immunosuppressive effects.These protective effects on the allograft are mainly obtained through direct and indirect regulation of immune-related gene transcription.GCs regulate approximately 20% of all genes expressed in leukocytes .Depending on the cell type, the estimated number of genes directly regulated by corticosteroids lies between 10 and 100 .However, the majority of inflammatory genes are indirectly regulated through interference with transcription factors and their co-activators.The major action of GCs is the suppression of inflammatory genes that are activated during AR, including genes encoding for cytokines, chemokines, adhesion molecules, and inflammatory enzymes .Besides the downregulation of pro-inflammatory genes, GCs increase the expression of anti-inflammatory cytokines and transcription mediators .In addition, glucocorticoid therapy can suppress AR through a variety of other mechanisms, including the prevention of leukocyte migration, induction of cell death in lymphocytes, and effects on the growth and lineage commitment of T cells .The actions of GCs are mediated by the intracellular glucocorticoid receptor, a ligand-dependent transcription factor of the nuclear receptor superfamily, which is ubiquitously expressed in most human cells .The genomic structure of the GR consists of 9 exons .Alternative splicing in exon 9 generates two C-terminal receptor isoforms, termed GRα and GRβ .The predominantly expressed GRα is activated by GC binding and mediates most of the known immunomodulatory effects, whereas the GRβ isoform expresses a different C-terminal region which inhibits GC binding .The expression of GRβ is induced by cytokines .GRβ can exert a dominant negative effect upon GRα-induced 
transcription.However, the functional importance of the GRβ isoform has not yet been determined.The GR protein structure consists of three domains: an N-terminal domain that directs target gene activation and the interaction with other transcription factors; a central DNA-binding domain, responsible for binding with glucocorticoid response elements in the promoter region of target genes; and a ligand-binding domain, which contains specific GC- and heat shock protein binding sites .After administration, GCs diffuse across the cell membrane and bind to the cytoplasmic GR.In its ligand-free state, the cytoplasmic GRα is associated with an inhibitory complex, in which two Hsp90 molecules and one molecule each of Hsp70 and FK506-binding protein 52 are included .This association stabilizes the hormone-responsive form of the receptor and inhibits nuclear localization.Upon ligand-induced activation, the GR undergoes conformational changes and dissociates from the molecular chaperone proteins , which enables rapid translocation of the GC-GR complex to the nucleus.In this location the complex regulates gene transcription through direct and indirect signaling pathways.GR dimers bind via two zinc finger motifs in the DBD to GRE in the promoter region of target genes .To initiate gene transcription, the GR uses transcriptional activation domains located in the NTD and LBD.The GR interacts with the promoter region and recruits transcriptional co-activators and basal transcription machinery to the transcription start site .This group of co-activators includes steroid receptor co-activator-1, CREB-binding protein, and GR-interacting protein 1, which induce histone acetylation and subsequent transcription of anti-inflammatory genes .Less commonly, the GC-GR complex interacts with negative GRE resulting in the repression of pro-inflammatory genes that contain GR-binding sites .The major action of corticosteroids is the indirect suppression of pro-inflammatory genes that are activated during AR .The GC-GR complex interferes with activating transcription factors, such as nuclear factor-κB, activator protein-1, and cyclic AMP-responsive element-binding, and the transcriptional co-activator molecules of these transcription factors.In addition, the GC-GR increases the transcription of inhibitor of κB and MAP kinase phosphatase-1, which inhibit NF-κB and mitogen-activated protein kinase, respectively .Furthermore, the GC-GR complex recruits histone deacetylase-2 to the activated inflammatory gene complex, resulting in deacetylation of nuclear histones and inhibition of pro-inflammatory gene transcription .The signaling pathways of the GC-GR complex inhibit the transcription of pro-inflammatory molecules, including cytokines, chemokines, adhesion molecules, inflammatory enzymes, and receptors.Alterations in the molecular mechanisms of GR signaling may lead to steroid resistance.The majority of acute renal allograft rejection episodes can be adequately treated with high-dose corticosteroids.However, in approximately 25 to 30% of the patients the rejection episode cannot be reversed with corticosteroid therapy alone .Similarly, poor or no response to steroid therapy for AR reversal also occurs in recipients of other solid organ transplants, including liver, lung and cardiac allografts .In such cases of steroid resistance, the patient requires more rigorous immunotherapy to reverse the AR episode.Renal allograft recipients with steroid-refractory rejection are generally treated with ATG, which results in a salvage 
rate of 70 to 90% .Diagnosis of steroid resistance primarily relies on post-transplantation follow-up of clinical parameters reflecting renal allograft function.An AR episode is considered steroid resistant when the patient's serum creatinine levels do not return to within 120% of the pre-rejection baseline value after pulse therapy with corticosteroids within 14 days after the start of the steroid therapy .At that point, ATG treatment is generally required.The first few days after the start of the steroid treatment are crucial.Analysis of creatinine courses of steroid-resistant and steroid-responsive cases revealed that the minimal time period for assessment of the response to steroids is five days after initiation of the pulse therapy .Changes in serum creatinine levels were similar between patients with steroid-responsive and steroid-resistant AR until day 5, at which time the responders showed a significant decrease in serum creatinine, while the creatinine level of non-responders remained high.This 5-day period is also the average time delay used by clinicians before considering a rejection as being steroid resistant .Incomplete restoration of graft function in steroid-resistant rejection may lead to progression of chronic damage to the graft and has a detrimental effect on graft outcome .Prediction of steroid resistance at the time of biopsy could prevent unnecessary exposure to high-dose corticosteroid therapy.More importantly, the development and progression of irreversible nephron loss during the period that steroid-resistant AR is undertreated with steroids alone could be avoided.This impact of steroid-refractory rejection on graft integrity stresses the need for tools to assess the response to AR treatment at an early stage.At present, clinical parameters and histopathologic assessment of kidney biopsies remain the gold standard for evaluating short- and long-term graft outcome.Several parameters have been associated with response to steroid treatment.Acute vascular rejection is related to resistance to high-dose steroid therapy and a subsequent higher chance of graft failure .In addition, unresponsiveness to steroid therapy has been associated with the presence of mononuclear cells at endothelial cells of large and small vessels in the graft .Another aspect associated with steroid resistance is the presence of an immune response directed against the microvasculature.Patients with moderate to severe microvascular destruction respond less adequately to steroid therapy compared to patients with only mild destruction of the microvascular endothelium .Steroid-refractory AR has been associated with more extensive leukocyte infiltration into the peritubular capillaries .Circulating leukocytes target HLA molecules expressed on the PTC, which results in cellular rejection.In addition, the HLA molecules can also be targeted by donor-specific antibodies, leading to local complement activation and humoral rejection.The activation of the complement cascade leads to the formation of complement degradation factor C4d, which can covalently bind to the PTC endothelium .C4d deposition in PTC has been associated with steroid resistance , although this association could not be confirmed in a recent study .It remains difficult to predict the risk of graft loss and the response to anti-rejection treatment on the basis of histopathologic assessment and clinical parameters.
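To make the clinical definition above concrete, the following is a minimal sketch in Python of how the creatinine-based criterion might be operationalized. The function name, data structure and example values are hypothetical; the thresholds (return to within 120% of the pre-rejection baseline within 14 days, with day 5 as the earliest reliable assessment point) follow the description above, and the sketch is purely illustrative rather than a validated clinical decision rule.

```python
from typing import Dict


def classify_steroid_response(
    baseline_creatinine: float,
    creatinine_by_day: Dict[int, float],
    return_threshold: float = 1.20,   # within 120% of the pre-rejection baseline
    min_assessment_day: int = 5,      # earliest reliable assessment after pulse therapy
    max_follow_up_day: int = 14,      # response window after start of steroid therapy
) -> str:
    """Classify an AR episode as 'responsive', 'resistant' or 'indeterminate'.

    Illustrative only: the thresholds follow the narrative definition above,
    not a validated clinical decision rule.
    """
    cutoff = return_threshold * baseline_creatinine

    # Any measurement within the follow-up window that has returned to within
    # 120% of baseline counts as a steroid response.
    for day, value in sorted(creatinine_by_day.items()):
        if day <= max_follow_up_day and value <= cutoff:
            return "responsive"

    last_day = max(creatinine_by_day) if creatinine_by_day else 0
    if last_day < min_assessment_day:
        # Too early to call: responders and non-responders show similar
        # creatinine courses during the first few days of pulse therapy.
        return "indeterminate"
    if last_day >= max_follow_up_day:
        return "resistant"
    # Creatinine still elevated but the 14-day window has not yet elapsed.
    return "indeterminate"


# Hypothetical example: creatinine stays well above baseline through day 14.
print(classify_steroid_response(
    baseline_creatinine=110.0,  # e.g. in µmol/L, pre-rejection baseline
    creatinine_by_day={0: 260.0, 3: 250.0, 5: 245.0, 10: 240.0, 14: 235.0},
))  # -> "resistant"
```

In practice such a rule would sit alongside biopsy findings and the other clinical and histopathologic parameters discussed below, rather than replace them.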
Biomarkers for molecular and cellular mechanisms involved in graft survival and medication responsiveness could provide complementary parameters for assessing the risk of adverse graft outcome.Indeed, expression levels of various markers, particularly those of allograft-infiltrating inflammatory cell types, were found to be informative with respect to therapy response.Analyses of AR biopsies obtained from kidney transplant recipients have provided insight into the lymphocyte populations that are associated with poor graft outcome.Resistance to GCs has been associated with increased expression of cytotoxic T lymphocyte-, natural killer cell-, B lymphocyte-, and macrophage signatures .The first immune component linked with resistance to anti-rejection treatment and graft outcome was the presence of T lymphocytes.The extent of CD8+ T cell infiltration within the allograft was correlated with response to AR treatment with GCs .These findings are in line with data from gene expression studies in rejection biopsies of renal allografts in children, which revealed increased mRNA expression levels of cytotoxic T cell and NK cell markers in steroid-refractory AR samples .Furthermore, relatively high FasL mRNA expression and dense granulysin staining in renal allograft biopsies, as well as low FoxP3 expression in urinary sediments , have all been described to be associated with steroid-resistant rejection.The extent of B cell infiltration within the renal allograft was shown to discriminate between steroid-resistant and steroid-responsive AR episodes.Patients with steroid-refractory AR expressed increased intragraft levels of the B cell marker CD20 and B-lymphocyte associated immunoglobulins .In line with these findings, immunostainings for CD20 revealed significant differences in the level of B cell infiltration during AR, which correlated with response to steroid therapy and long-term graft outcome .Besides their essential role in the humoral immune response, infiltrating B cells may function as antigen presenting cells and amplify the alloimmune response by donor-specific T cells.However, more recent studies failed to confirm that the presence of intragraft B cells is related to therapy response and/or graft function after rejection .Macrophages derived from the transplant recipient are an important aspect of the immune infiltrate during AR .Macrophage infiltration within the kidney transplant has been related to response to GC therapy.Immunostainings for CD68 revealed the presence of intraglomerular and interstitial macrophages during AR as prognostic markers for steroid resistance and graft outcome .In addition, macrophage infiltration has been shown to associate with intimal arteritis and C4d deposition in PTC , which in themselves represent risk factors for resistance to anti-rejection treatment.Even though various immune markers have been proposed as prognostic biomarkers for graft outcome, clinical interpretation of these findings has proven difficult.The interpretation and validation of published data is complicated by diversity in clinical endpoint definitions and patient cohort characteristics.The definition of steroid resistance varies widely between studies, and reported cases of steroid-refractory rejection are frequently poorly defined .Other aspects that influence the inter-study reproducibility of biomarkers are differences in patient cohort characteristics, such as type of immune suppression and the time between transplantation and AR, as well as the techniques used for sample processing and expression analysis.As a result, the prognostic value of proposed immune biomarkers for steroid-refractory AR
could not be verified in later studies .In an attempt to overcome some of these challenges, we evaluated the expression levels of a broad panel of immunological markers within renal allografts with steroid responsive or steroid resistant AR , including the previously reported markers associated with response to steroid therapy .The selected panel reflected the full immune repertoire that may be present in the grafts.In addition, a combination of strict inclusion criteria, stringent clinical endpoint definitions, and quality controls for sample processing and expression assays were employed to ensure reliable and sensitive identification of prognostic biomarkers for response to GCs.The study showed that a combination of T cell activation markers CD25:CD3ε ratio and lymphocyte activation gene-3 offers an improved prognostic value for assessing steroid response, compared to conventional clinical parameters and histopathologic assessment.These two signal transduction molecules are involved in the regulation of T cells: CD25, the α-subunit of the IL-2 receptor, is an important regulator of T cell survival and proliferation ; while the activation-induced LAG-3 is involved in the negative regulation of homeostasis and T cell function .High expression of cytotoxic T cells has been associated with resistance to steroid treatment of acute renal allograft rejection .In addition, T cell characteristics, through disparities in IL-2 responses, have been correlated with steroid resistance .The findings described in the previous sections suggest that steroid resistance may reside in specific lymphocyte populations, with activated T cell populations as the prime candidate.However, the prognostic value of immune biomarkers is hampered by molecular heterogeneity among kidney biopsy samples with AR.This observation may be a reflection of the complexity of the mechanisms involved in response to steroid therapy.A relatively novel finding is the link between zinc regulation and resistance to anti-rejection treatment with steroids.Relatively high intragraft expression of metallothioneins and tissue inhibitor of metalloproteinase-1 during acute renal allograft rejection is associated with steroid resistance .Seven members of the MT-1 gene family were expressed to a significantly higher extent in steroid-refractory AR.MT expression was mainly detected in activated macrophages and tubular epithelial cells within the kidney.These findings are in line with findings in lung allograft recipients, where increased percentages of MT-positive macrophages were found in transbronchial biopsy samples of lung allograft recipients with steroid-refractory AR .MT are cysteine-rich proteins involved in the homeostasis of biologically essential metals, of which the regulation of zinc ions is the most important .By functioning as a zinc-donor or zinc-acceptor, MT can control cellular zinc distribution .Increased intragraft MT expression may lead to removal of zinc ions that are normally used in GC signaling.The binding of the activated GC-GR complex to GREs relies on two zinc finger motifs located in the DBD .Increased MT levels may lead to removal of zinc ions that are normally complexed in the zinc finger motifs , thereby preventing GR binding to GREs and inhibiting the immunomodulatory effects of trans-activation and cis-repression.Another GC signaling pathway that may be affected by MT is the zinc-dependent recruitment of HDAC-2 by the GC-GR complex .Increased expression of MT may lead to inhibition of the anti-inflammatory 
effects of this process.Interestingly, several studies in the oncological research field have also demonstrated that elevated MT expression is related to treatment resistance .TIMP1 has been identified as an endogenous inhibitor of matrix metalloproteinases .TIMP1 inhibits MMP activity through coordination of the zinc ions of the MMP active site by the conserved cysteine residues in its N- and C-terminal domains .Similar to MT, TIMP1 may diminish the zinc-requiring anti-inflammatory effects of the GR through regulation of the intracellular zinc concentrations.In addition, more recent studies have implicated TIMP-1 in the regulation of cell growth and apoptosis , which may influence the effects of GC signaling.Coagulation factor II receptor is a regulator of numerous intracellular signaling pathways, which include NF-κB and MAP kinase pathways .Differences in F2R may influence the pro- and anti-inflammatory effects of GCs .Further research will be needed to unravel the mechanisms through which F2R affects the response to steroid therapy."Even though the biomarkers described in the previous sections provide a strong prognostic value for predicting a patient's response to GC therapy, no single biomarker has been able to predict the response to steroid treatment with both high sensitivity and high specificity.This restricted power of single markers is most likely caused by the presence of multiple mechanisms underlying steroid resistance, which is reflected by the observed heterogeneity in transcriptional regulation among AR biopsy samples .Combination of biomarkers in a multivariate prediction model could enhance sensitivity and specificity, and facilitate risk assessment of steroid resistance in patients suffering from AR.Multivariate analysis of the proposed biomarkers revealed a prediction model that contains both immune and non-immune biomarkers as independent covariates .This multivariate model offers a superior prognostic value for assessing responsiveness to GC therapy compared to both conventional clinical and histopathologic indicators as well as single biomarkers.As described in a previous section, the anti-inflammatory and immunosuppressive actions of GCs are mediated by the ligand-dependent GR.Consequently, differences in response to treatment with GCs may be explained by variations in GR expression.Decreased GR expression has been implicated as a cause of steroid resistance in a wide variety of diseases, including nephrotic syndrome , acute lymphoblastic leukemia , and asthma .The observed differences in GR expression between responders and nonresponders might be a reflection of varying levels of GR autoregulation.This process, in which the presence of GCs can lead to down-regulation of the steady-state expression levels of GR, has been observed in both cell lines and tissues .However, more recent studies were unable to confirm this as the sole cause of GC resistance , indicating that other mechanisms may be involved in the varying GR expression levels.Response to GCs might also be related to the ratio of primary GRα and cytokine-induced GRβ isoforms .Upregulation of GRβ levels has been associated with resistance to steroid therapy in different diseases, such as asthma , inflammatory bowel disease , and ulcerative colitis ."GRβ's dominant negative inhibition of GRα-induced gene transcription may provide enhanced resistance to the effects of GCs.However, the role of GRβ in steroid resistance remains controversial, as other studies were unable to confirm a link between GRβ 
expression levels and responsiveness to GCs .In addition to GR expression levels, differences in treatment response may also be explained by genetic variability.Mutations in the GR gene can affect the functionality of the GR and result in steroid resistance.Although GR mutations leading to loss of function and generalized GC resistance are rare , single nucleotide polymorphisms in NR3C1 may alter the GC binding affinity or the downstream signaling of the receptor .A large number of NR3C1 SNPs have been described in the dbSNP database, but only a few polymorphisms are functionally relevant.The two adjacent and linked ER22/23EK polymorphisms are located in the NTD of the GR.The resulting amino acid substitution can affect the receptor's trans-activational and trans-repressional activity on target genes .The ER22/23EK polymorphisms have been associated with reduced sensitivity to GCs , although the responsiveness of R22/23K carriers may vary, ranging from asymptomatic to severely GC resistant .Similarly, the GR-9β polymorphism has been correlated with GC resistance in rheumatoid arthritis .This SNP, located in the 3′-untranslated region of exon 9β, has a stabilizing effect on the mRNA of the GRβ isoform, which subsequently leads to enhanced expression of the inactive GRβ protein.In contrast, two other NR3C1 SNPs have been associated with enhanced sensitivity to GCs.One of the most common functional polymorphisms in NR3C1 is the BclI polymorphism .This SNP consists of a C > G nucleotide substitution, 646 nucleotides downstream from exon 2.The BclI polymorphism is associated with hypersensitivity to GCs in both heterozygous and homozygous carriers of the G allele.A significantly higher frequency of the BclI mutated genotype was observed in GC-responsive patients with IBD compared to nonresponder IBD patients .The N363S polymorphism is located in exon 2 of the NR3C1 gene, which corresponds with the N-terminal domain of the GR.This mutation was shown to increase the receptor's trans-activating capacity .The N363S polymorphism is associated with increased sensitivity to GCs, as was shown by increased cortisol suppression after dexamethasone suppression tests in a group of elderly individuals .While scientists in various disease fields have been interested in the potential correlation between GR expression and the response to GCs, very little is known about this relationship in the transplantation field.Recent data from our group revealed no correlation between steroid-refractory AR and the GR.Both the NR3C1 genotype distribution and GR expression levels in the renal allograft were similar between kidney transplant recipients with response and resistance to GC treatment of AR.Further studies will be needed to confirm the role of the GR in transplant recipients with GC resistance.The data discussed in this review demonstrate that steroid resistance is a complex and multifactorial condition, in which both immunological and non-immunological factors can be involved.Investigations of immune-related biomarkers revealed that both T cells and macrophages play an important role in the response to steroid therapy.Combined, these findings indicate that steroid resistance resides in specific cell populations.This may guide the therapeutic approaches for treatment of steroid-refractory AR episodes.Furthermore, zinc regulation may play a role in the response to steroid therapy during AR.Kidney transplant recipients who express high intragraft levels of MT and TIMP1 during AR might benefit from extra zinc intake for optimal
GC signaling.The presence of multiple mechanisms underlying steroid resistance probably accounts for the restricted predictive power of single markers.Molecular heterogeneity among biopsy samples may explain the difficulties in validating the prognostic value of previously proposed biomarkers for steroid resistance.In addition, it demonstrates the importance of using internal and external validation techniques to verify the robustness of potential biomarkers.We found that a multivariate prediction model, containing biomarkers related to different aspects of GC signaling, offers a superior prognostic value for assessing steroid response compared to conventional clinical parameters and histopathologic assessment, and to single biomarkers .Such a multivariate approach could identify patients with insufficient response to anti-rejection treatment with GCs, who would benefit from immediate ATG treatment.Availability of a multivariate biomarker model in the clinic may lead to reduced exposure to high-dose corticosteroid therapy.More importantly, it may help avoid the development and progression of irreversible nephron loss during the period that steroid-refractory AR is undertreated with steroids alone.In addition, multivariate models provide insight into the causative mechanisms in steroid-refractory AR episodes, which may guide the development of novel therapeutic approaches.Our recently proposed multivariate model, which includes T cell activation markers and zinc regulating molecules as independent covariates, does not reach 100% specificity and sensitivity for the prediction of steroid-refractory AR.This suggests that additional, yet unidentified factors influence the response to high-dose steroid treatment of acute renal allograft rejection.An essential requirement in biomarker discovery is validation and verification of the predictive value of proposed biomarkers.Validation techniques, such as cross-validation and the use of discovery and validation cohorts, ensure accurate and appropriate data collection and verify the clinical usefulness of proposed biomarkers.In addition, the prognostic value of the biomarkers should be confirmed in a prospective study before they can be introduced into the clinic.A relatively novel target for the identification of potentially informative biomarkers is the expression of microRNA transcripts.MicroRNAs are a class of small, non-coding RNA molecules that negatively regulate mRNA expression by degradation or translational repression .Since the initial discovery in the early 1990s, over 1000 microRNAs have been identified.It is estimated that expression of more than one third of all genes is regulated by microRNAs .In recent years, microRNAs have gained interest for their involvement in hematopoiesis and immune cell function , and for their role in allograft rejection .Because of their relatively stable expression, microRNAs are emerging as potential biomarkers.Analysis of microRNA expression profiles in renal biopsies may lead to the identification of novel prognostic biomarkers for the outcome of acute renal allograft rejection.Although evaluation of renal biopsies remains the gold standard for the diagnosis of graft outcome, its usefulness is slightly limited by the invasive nature of the biopsy procedure.Due to the associated risk of procedural complications, renal biopsies are mainly performed on clinical indication .An alternative to intragraft assessment could be the use of less invasive or non-invasive sources of patient material, such as peripheral 
blood and urine samples.Identification of molecular markers in blood and urine may provide a means to monitor graft function more frequently, which could lead to earlier detection of graft dysfunction and timely intervention in the immune process.However, it is not yet clear whether expression levels of molecular markers in blood or urine are as reliable as measurements of molecular markers in renal biopsies.Peripheral blood samples may provide a suitable means to assess if a renal allograft recipient is responsive to steroid therapy.In vitro tests with peripheral blood mononuclear cells exposed to GCs have been used to correlate gene expression profiles with clinical disorders, including steroid responsiveness .However, the experimental design of these in vitro tests varied between studies, and the findings could not always be reproduced.Further studies will be needed to confirm whether in vitro cultures of patient PBMC represent a useful indicator of the patient's response to steroid treatment in vivo.Resistance to steroid therapy is a complex and multifactorial condition, in which both immunological and non-immunological factors can be involved.The response to high-dose corticosteroid therapy for the treatment of acute renal allograft rejection correlates with the expression level and characteristics of T cells and macrophages infiltrating into the renal allograft.These findings indicate that steroid resistance resides in specific cell populations and is not a feature of all lymphocytes.Zinc regulation and drug metabolism may play a role in the response to steroid therapy during acute renal allograft rejection.Increased expression of zinc-regulating molecules may diminish the zinc-requiring anti-inflammatory effects of corticosteroid therapy.Therefore, kidney transplant recipients may benefit from additional zinc intake to optimize GC signaling.A multivariate prediction model, containing biomarkers related to different aspects of GC signaling, offers the best prognostic value for assessing steroid response.It is expected that the use of such a model contributes to clinical risk assessment of steroid resistance and helps in applying more individualized anti-rejection therapy.NVR is affiliated with Novo Nordisk, Inc. | Ever since the first successful kidney transplantation, the occurrence of acute rejection has been a dominant risk factor for adverse graft outcome, as it is associated with reduced graft survival and the development of chronic transplant dysfunction. Although the majority of acute renal allograft rejection episodes can be adequately treated with glucocorticoid therapy, 25 to 30% of rejection episodes cannot be reversed with glucocorticoids alone. At present, the diagnosis of steroid resistance primarily relies on post-transplantation follow-up of clinical parameters reflecting renal allograft function. However, it remains difficult to predict the response to anti-rejection treatment. Prediction of steroid resistance could prevent unnecessary exposure to high-dose corticosteroid therapy and avoid the development and progression of irreversible nephron loss. This impact of steroid-refractory rejection on graft integrity stresses the need for tools to assess the response to AR treatment at an early stage. Here, we discuss our current understanding of resistance to anti-rejection treatment with glucocorticoids, and provide an overview of biomarkers for the detection and/or prediction of steroid resistance in kidney transplantation. |
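To illustrate the multivariate prediction and validation approach discussed above, the sketch below fits a logistic regression to simulated intragraft marker levels (loosely modelled on the reported covariates, such as a CD25:CD3ε ratio, LAG-3, MT and TIMP1 expression) and evaluates it with stratified cross-validation using scikit-learn. All data, feature effects and performance figures are simulated for illustration only and do not reproduce the published model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical expression data for 120 AR biopsies:
# columns = CD25:CD3e ratio, LAG-3, MT-1, TIMP1 (arbitrary simulated units).
n = 120
X = rng.normal(size=(n, 4))

# Simulated outcome: 1 = steroid-resistant AR, 0 = steroid-responsive AR,
# loosely driven by the T cell activation and zinc-regulation markers.
logit = 0.9 * X[:, 0] + 0.7 * X[:, 1] + 0.8 * X[:, 2] + 0.5 * X[:, 3]
y = (logit + rng.normal(scale=1.0, size=n) > 0).astype(int)

# Multivariate model: standardize the markers, then fit a logistic regression.
model = make_pipeline(StandardScaler(), LogisticRegression())

# Internal validation via stratified cross-validation, as a stand-in for the
# discovery/validation-cohort designs discussed in the text.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```

A real analysis would additionally require an independent validation cohort and prospective confirmation of prognostic value before clinical use, as argued in the text.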
379 | Promoting novelty, rigor, and style in energy social science: Towards codes of practice for appropriate methods and research design | It is surely a “fool’s errand” to try to define quality research in academia, especially in a field as diverse as energy social science—a term which we use to describe the broad set of literatures that apply social science disciplines, perspectives and approaches to the study of energy, including production, distribution, conversion and consumption.Studies in this area draw upon concepts, methods and theories from a range of specializations and aim to produce insights that are relevant to many social problems.For energy social science is not only a collection of disciplines, but also a social or epistemic community of scholars, a compendium of methods or ways of doing research, a collection of related concepts or theories, and a wide set of interrelated topics.Clearly, with such diversity and complexity, there is no one-size-fits-all approach, no “ten easy steps to quality”.However, there are practices and guidelines that can improve the quality of research, and increase the probability of positive impact.And the applied and socially-relevant nature of the field is all the more reason to be sure that published research answers useful research questions, is rigorous, and is effectively communicated.In an effort to encourage improvements in research practice, this Review aims to review and provide guidelines for enhancing quality under the headings of novelty, rigor, and style.The field of energy social science aims to address some of our most urgent and threatening global problems.For example, the International Energy Agency estimates that, if society is to have a reasonable chance of avoiding dangerous climate change, global energy-related carbon emissions must peak by 2020 and fall by more than 70% over the next 35 years, despite growing populations and increasing affluence around the world .Such deep decarbonisation will require transformational changes in most of the systems on which industrial society depends .At the same time, society must address other challenges such as air and water pollution , fuel poverty , energy insecurity and energy injustice .With so much on the line, it is worthwhile to pause and reflect on the state of research—are we producing high-quality studies and are they contributing to the solution of these real-world problems?,A number of recent papers across fields as diverse as energy, buildings, transportation, sustainability, the life sciences and geography have asked similar questions, arguing that while social sciences must play a larger role in research on these issues , this research also needs to improve in terms of rigor, interdisciplinary reach, policy-relevance, and the communication of results .Unfortunately, evidence suggests that energy social science research is falling short of the social goal of promoting effective decarbonisation and frequently falling short of the professional goal of excellence.For a start, many published studies do not make novel contributions to the literature, have uninteresting research questions, and do not rigorously apply a research design or method.In their survey of sustainability science, Brandt et al. 
noted that methods were often chosen based on the researcher’s familiarity or specialization, rather than the method’s suitability for a chosen research question .Schmidt and Weight further observe that, within energy studies more broadly, interdisciplinary work remains rare: “despite the predominately socio-economic nature of energy demand, such interdisciplinary viewpoints – albeit on the rise – are still the minority within energy-related research” .More generally, an independent review of the Research Excellence Framework in the United Kingdom noted that the academic community needed to deliver far more “game-changing” research that was both policy relevant and high quality .Other more severe critics have attacked academia for publishing “nonsense” or “utterly redundant, mere quantitative ‘productivity’” - owing in part to the “publish or perish” incentives created by the research funding system and the criteria for professional promotion .These conditions risk creating “vast amounts of commodified but disposable knowledge,” a sort of “fast food research” void of quality and nutrition .Aside from lack of relevance or excellence, criticisms have also been levied at the lack of rigor in academic research.By this, we mean a mix of carefulness and thoroughness.The simple Oxford definition of rigor is “the quality of being extremely thorough and careful.,This definition does not favor a particular research design, objective, discipline or method.Rather, this definition represents the practice of taking great care in establishing and articulating research objectives, selecting and implementing appropriate research methods and interpreting research results - while at the same time acknowledging omissions and limitations.Donnelly et al. thus define rigor in research as “identifying all relevant evidence” within the available resources or timeframe .A critique of lacking rigor seems particularly justified in energy social science, given that an examination of 15 years of peer-reviewed publications in this field found that almost one-third of the 4,444 studies examined had no description of an explicit research design—or method—whatsoever .In the related field of global environmental governance and politics, a review of 298 articles published over 12 years noted that only 35% included a discussion of, or a justification for, the research methods employed .Even articles with explicit research designs can still suffer from flaws.Hamilton et al. 
note that in the domain of energy efficiency and buildings: “analysis is often limited to small datasets and results are not applicable more broadly due to an absence of context or baselines” .Finally, drawing from our own experience as editors, peer-reviewers and readers of energy social science, we observe that many articles are stymied by bad “style” – that is, poor structure, unclear analysis and difficulties in expression.Even when they make a novel contribution and employ a rigorous research design, many authors struggle to communicate clearly due to a lack of care in writing or a lack of fluency in language.Their papers often lack persuasive or cohesive elements such as signposts, roadmaps, figures and tables; have many grammatical mistakes and typos; and exhibit a poor standard of written English.Put another way: many submitted articles are poorly written, and if they are published they seem destined to have a low impact—even if the research itself is novel and/or rigorous.To remedy these tripartite limitations of novelty, rigor, and style, this Review offers a guide for researchers so they can improve the quality of their research.We have four objectives:Bring attention to the importance of clearly articulating research questions, objectives, and designs.Provide a framework for conceptualizing novelty.Suggest codes of practice to improve the quality and rigor of research.Provide guidelines for improving the style and communication of results.Our hope is that this Review will contribute to more coherent, creative, rigorous and effectively communicated research that will enhance the contribution that energy social scientists make to both theory and practice.Our primary audience is researchers in energy social science, as well as readers who want to evaluate such research.Using our collective experience, we focus our suggestions on how social science research has been applied to energy-related research questions—though much of this content is relevant to other social science applications, especially to societal issues such as transport and mobility, or environmental and resource management.Further, while this Review is intended to be useful for early career researchers, we believe that researchers of all levels can benefit from an ongoing dialogue about what makes high quality, novel, rigorous and effective research in our field.Although the later parts of this Review will explore how to improve aspects of novelty, rigor, and style, a useful starting point is to consider four core elements: 1) asking concise, interesting, socially relevant, and answerable research questions; 2) applying and testing theoretical constructs or conceptual frameworks; 3) clearly stating research objectives and intended contributions; and 4) developing an appropriate research design.Although it is not always a linear process, our flow has a researcher starting with their research question, moving to discuss how they will approach it or filter data, identifying specific aims, and explicating a research design.Although there is a large element of subjectivity in the sections to come, our contention is that all good papers should include clear research questions, a clear conceptual or theoretical basis, precise objectives and an explicit research design.We start with these steps because, in our experience, their absence is often a fatal flaw.With some overstatement, getting the research question right could be half the work of writing a good paper.The research question guides a literature review or collection 
of data, suggests the type of answers a study can give and provides a strong disciplining device when writing.Bellemare proposes that good papers contain interesting ideas when they do one of three things: ask a question that has not been asked before; ask a "Big Question" that affects the welfare of many people; or ask a question that has been asked before but can be answered in a better way.For more detailed suggestions for how to craft research questions, we suggest Hancke's Intelligent Research Design .Here, we summarize three tips.First, build your question from empirical or conceptual material—do "pre-search".No research question can be constructed without reading.All good research questions are the product of prior engagement with empirical and/or theoretical material.Second, ensure that your research question is researchable.Is there reliable and accessible evidence that you can use to answer your question, or is there scope for producing such evidence?Will this evidence be available to others?Is your question limited in time or space?Does it have clear enough boundaries and a logical "end" that you work towards and explain or answer?Or are you chasing a moving target?Third, ensure that your research question is answerable.A research question needs to be asked in such a way that your expectations can be wrong and that you can be surprised.When confronted with reliable evidence, the answer to the question should be apparent.Even better is a question that both advances theory and addresses a relevant social problem, meaning that your question matters to academia, practitioners and other stakeholders.The typology in Fig. 1 depicts four broad categories of research contribution.Stern et al. warn that too little research in energy social science falls into "Pasteur's Quadrant" of both advancing scientific or theoretical understanding and being immediately useful at addressing a pressing energy- or climate-related problem .As Mourik put it recently, "We need scientists that are allowed to work in this in-between space, a boundary space between research and practice" .Similarly, O'Neil and her colleagues write that more problem-driven research is needed that confronts social or environmental issues, rather than merely describing them .Thus, asking socially relevant questions can facilitate broader social impact, something elaborated more in Box 1.Crafting research questions in this way can make a study socially- and policy-relevant by design, helping to ensure relevant insights for policymakers, practitioners, managers and/or other stakeholder groups.Under this logic, research is not only an art or craft, but a civic duty.We argue that more applied research is needed in the field of energy social science, that researchers should think about policy/practitioner applications when developing their research objectives, and that, where appropriate, researchers should seek to integrate practitioners directly into the research process.Separate from an abundance of possible research questions, there is no shortage of conceptual frameworks, analytical frameworks and theories available to the scholar.The selection of theory can also flow from a "paradigm," a worldview or way of interpreting reality .There are many excellent reviews of these theories available.For example, reviews relevant to energy social science include:Edomah et al.'s comparison of the theoretical perspectives related to energy infrastructure ;,Jackson's review of theories for consumer behavior and behavioral change ;,Kern
and Rogge’s survey of theories of the policy process and their relevance to sustainability transitions ;,Peattie’s catalogue of theories relating to values, norms, and habits associated with “green consumption” ;,Scheller and Urry’s survey of sociotechnical transitions, social practice theory and complexity theory for transport and mobility researchers ;,Sovacool and Hess’s survey of 96 theories, concepts, and analytical frameworks for sociotechnical change ;,Wilson and Dowlatabadi’s analysis of decision-making frameworks relevant to energy consumption .As these theoretical reviews emphasize, different theories may be more or less suitable for different types of research question and may also have varying and sometimes incompatible foundational assumptions.Rather than dive into the many specific theories relevant to energy social science, we instead indicate some of the most important dimensions and features of those theories, and how these shape research questions, objectives and designs.One way of classifying theories is to identify their underlying paradigm, that is, their assumptions about the nature of reality, the status of knowledge claims about that reality and the appropriate choice of research methods.For example, Table 1 highlights the assumptions associated with three broad paradigms or philosophies of science - positivism, interpretivism and critical realism.Theories in the positivist paradigm assume that reality is objective, focus upon generating and testing hypotheses and are well suited to quantitative research methods such as multivariate regression.In contrast, theories in the interpretive paradigm assume that reality is subjective, focus upon uncovering the meaning actors give to events and are well suited to qualitative research methods such as participant observation.Critical realism is a more recent philosophy of science that partly reconciles these different perspectives and is consistent with both quantitative and qualitative research methods.Some theories align closely and explicitly with one of the paradigms in Table 1, while others are more ambiguous, or combine elements from more than one perspective.A second way of classifying theories is to identify their primary focus, namely: agency, structure or discourse - or a hybrid of these.As Table 2 indicates, agency-based theories prioritize the autonomy of the individual, and thus tend to emphasize individual behaviors and beliefs.For example, Karl Popper famously recommended that “…all social phenomena, and especially the function of all social institutions, should always be understood as resulting from the decisions, actions, attitudes etc. 
of human individuals, and….we should never be satisfied by an explanation in terms of so-called ‘collectives’….,In contrast, structural theories emphasize the opposite: macro-social relationships and technological infrastructures that constrain the autonomy of people and organizations.In contrast to both, discursive theories shift the focus away from individual choice and social structure and towards more cultural factors such as language and meaning.Again, different theories give differing degrees of emphasis to these factors, and many occupy a hybrid space, emphasizing the complex interactions among agency, structure, and discourse .Multilevel frameworks often sit within this hybrid category.Whereas the first four types of theories in Table 2 are inherently descriptive or explanatory, a fifth type of theory is normative and attempts to assess whether a technology, practice, policy or other unit of analysis is a net positive or negative for society or individuals.To do so, normative theories often rely on criteria set by ethics, moral studies, social justice or political ecology.Put another way, the first four theories are about explanation, whereas normative theories are about evaluation.A third way of classifying theories is to identify their particular assumptions about human behavior and decision-making.These approaches range from those subscribing to a rational actor model that sees people as basing decisions on reasons, utility or logic to more complex theories incorporating broader dimensions such as attitudes, beliefs, morals, habits, and lifestyles.These dimensions are not mutually exclusive, but different theories vary in the relative emphasis given to each and hence may be more or less useful in explaining particular behaviours and decisions.A final caveat to engaging with theory—especially within the positivist paradigm—is managing the tension between specificity and generalizability, as well as between parsimony and complexity .Jackson notes that more complex theories can aid conceptual understanding but can be difficult to use in practice—for instance they are poorly structured for empirical quantification or surveys .Less complex theories can be easier to test but may hinder comprehension by omitting key variables and relationships.Sartori found this to be the case in politics and international relations: as one moved up a ladder of abstraction, scope, purpose and concepts change to become more general but less robust .Azjen adds that theories have scopes—some have to be adapted to each study or application, whereas others can use concepts and measures that apply across a large range of dependent variables .The point here is that good studies not only employ a relevant theory or conceptual framework; they acknowledge its analytical emphasis, its underlying ontological and epistemological assumptions, its degree of complexity or abstraction and the strengths and limitations that result.In addition to selecting research questions and theoretical framework, the rigorous researcher must also clearly articulate the research objectives.As concisely summarized by Babbie , researchers should aim to specify as clearly as possible what they want to find out, and determine the best way to do it.This entails providing a concise statement of exactly what the researcher aims to do in a particular study—what should prove to be the guiding statement for the eventual considerations and details of the specific research design.In our experience, one to four objectives are appropriate for a 
standard journal article, and we encourage researchers to clearly state these objectives at the end of their introduction section, and to continually reflect back on them throughout the article.We distinguish objectives from more general research questions, and more specific hypotheses.Consider these oversimplified examples that draw from an application of value theory:Research Question: What consumer traits or motivations are associated with interest in electric vehicles?Research Objective: Determine which values are associated with interest in electric vehicles by estimating discrete choice models using choice data collected from a sample of UK car buyers.Research Hypothesis: Interest in electric vehicles is positively associated with higher levels of biospheric and altruistic values.Well-articulated research objectives will communicate the type of analysis that is needed and the intended novelty of the contribution.As described by Babbie , the objective may be to: "explore" new research categories or relationships; "describe" or observe the state of something; or "explain", typically meaning looking for causality through statistical analysis, experimental design or perhaps narrative analysis.Similarly, the research objectives must also communicate the intended scholarly contribution of the research, which might be theoretical, methodological or empirical—issues we explore in Section 3.A given study can be publishable if there is clear novelty in at least one of these categories, and sometimes in two.Only rare and exceptional papers make contributions across all three—and attempts to do so can lead to confusion or incoherence.Further, in an interdisciplinary field, rigorous researchers know that their objectives must somehow communicate the paradigm that is guiding their inquiry, that is, the underlying assumptions about the nature of reality, how the researcher interacts with reality and the appropriate methods to use .While numerous paradigms exist, we focus here on the very broad dichotomy between the positivist paradigm, which emphasizes quantitative research methods, and the interpretivist paradigm, which emphasizes qualitative research methods.As noted above, quantitative methods are not just about numbers, but rather stem from a paradigm that emphasizes hypothesis testing, large and representative sample sizes, statistical analyses, prediction, generalization and the objectivity of the researcher—notions dominant in disciplines such as social psychology, economics, and American political science.In contrast, qualitative approaches could be characterized as theory or hypothesis generating, rather than hypothesis testing, and focus more upon understanding, meanings, interpretation, social construction and the subjectivity of the researcher .These notions are dominant in disciplines such as anthropology, sociology, and European political science.These two broad paradigms are associated with different rules, standards and guidelines so it is important for researchers to communicate the nature of their research objectives—at a minimum whether they intend to generate theory or hypotheses, or to test theories and hypotheses.In short, the nature of the objectives will determine what types of methods, analysis and interpretations are appropriate—which leads into research design which we discuss next.
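Returning to the oversimplified electric vehicle example above, the sketch below shows one way the stated hypothesis might be tested once survey data have been collected: a logistic regression of a binary indicator of interest in electric vehicles on respondents' value scores. The variable names and data are simulated and purely illustrative; a fuller treatment of the stated research objective would estimate a discrete choice model (e.g., a multinomial or mixed logit) on the actual choice data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Hypothetical survey of 500 UK car buyers: value scores on 1-7 scales and a
# binary indicator of stated interest in electric vehicles.
n = 500
df = pd.DataFrame({
    "biospheric": rng.uniform(1, 7, n),
    "altruistic": rng.uniform(1, 7, n),
    "egoistic": rng.uniform(1, 7, n),
})
# Simulated outcome, loosely consistent with the hypothesis for illustration.
eta = -3.0 + 0.45 * df["biospheric"] + 0.25 * df["altruistic"]
df["ev_interest"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# Hypothesis test: are biospheric and altruistic values positively associated
# with interest in electric vehicles, controlling for egoistic values?
model = smf.logit("ev_interest ~ biospheric + altruistic + egoistic", data=df).fit(disp=False)
print(model.summary())
```

The point is not the specific estimator, but that the hypothesis, the data collected and the analysis all line up with the stated research question and objective.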
Our final suggestion is that every article ought to have a clearly articulated research design—this ensures the conceptual frameworks are operationalized, research questions are answered, objectives are met and/or hypotheses are tested.In very simple terms: a research method refers to a technique for gathering or analyzing data, while a research design is how exactly such a method, or methods, become executed in a particular study.The goal of a research design should be to provide enough detail to make the study transparent, helping readers to assess the study in light of the stated research objectives, while facilitating replicability.In energy social science, most research designs use one of the seven categories summarized in Table 3 – or some combination thereof.Note that any taxonomy of research methods will give inadequate attention to some methods while missing others altogether—our taxonomy is merely an attempt to summarize the dominant categories within energy social science.Table 3 identifies the disciplines most associated with each of the seven research methods, describes their key elements, summarizes their research cultures and sketches out codes of practice for rigor.The table omits research designs using multiple or mixed methods, which we discuss further in Section 3.2.However, even where multiple methods are used in a single study, each individual method ought to follow the codes of practice summarized in Section 4.As Table 3 implies, the two general "classes" of research method—quantitative and qualitative—have different strengths and weaknesses.Quantitative methods are best for testing hypotheses or quantifying relationships, while qualitative methods are best for exploratory studies or accessing more in-depth information, such as how social actors construct meaning.Different methods may in turn be associated with different degrees of consensus or debate about what constitutes rigor.These tradeoffs and tensions will become more apparent as we examine codes of practice in Section 4, but here we offer a brief summary of each method category.Experiments involve human participants and seek to test for causal relationships between variables, while isolating the study or relationship from other potentially influential variables ."True experimental designs" are distinguished by: a) random selection and/or assignment of participants; and b) researchers having control over extraneous variables .In contrast, quasi-experimental designs seek to identify the causal effect of some treatment or effect, but lack random assignment to treatment groups .In some cases the experimental conditions are outside the control of the investigators, but nevertheless provide a sufficient degree of control to permit causal inference.Experimental and quasi-experimental designs can be implemented in "lab" or "field" settings, as well as via trials, games, and simulations.A literature review is a compilation and integration of existing research, typically with the aim of identifying the current state of knowledge and specific research gaps.The relevant evidence may include both peer-reviewed and grey literature.Reviews typically involve repeated searches of databases using specific keywords in order to identify large bodies of evidence.Depending upon the research question, the search may impose relatively narrow criteria for inclusion, or much wider criteria that allow consideration of different research designs and types of evidence .As discussed later, we distinguish between three broad types of literature review: meta-analysis, systematic review, and narrative review.Survey methods involve data collection using a survey instrument or structured questionnaire with a sample of respondents
from a relevant target population.Surveys are used extensively within many social science disciplines, but both the practices and norms associated with implementing surveys and the interpretation of results can differ between those disciplines.Quantitative data analysis typically utilizes statistical techniques, though norms of implementation can again vary between social science disciplines, as can the relative use of specific techniques.This divergence results in part from variations in the type of data that is commonly used.For example, social psychology relies heavily upon primary data collected via experiments or surveys, which provide good controls for confounding variables.In contrast, economics makes greater use of secondary data sources such as government statistics, which can be incomplete or non-existent for some variables, and can be prone to measurement and other errors.Energy modeling includes techniques that quantitatively represent and analyze the technical, economic and social aspects of energy systems, typically in a forward-looking manner .These models may focus upon energy demand, energy supply or whole energy systems; their scope may range from the very narrow to the very wide; they may utilize a range of behavioral assumptions and mathematical techniques; and they may be integrated to a greater or lesser degree with broader economic models.Energy models are widely used to explore socially-relevant questions, such as how changes in income, technology or policy may shape energy consumption and carbon emissions over time, and what future energy systems may look like .For the most part, all modeling exercises boil down to translating a series of assumptions into mathematical form and then testing the logical consequences of those assumptions.Qualitative research designs cover a range of techniques for collecting and analyzing data about the opinions, attitudes, perceptions and understandings of people and groups in different contexts.Qualitative research methods differ according to the nature of data collection, as well as the means of analyzing that data.In energy social science, the most popular approaches to qualitative data collection tend to be semi-structured interviews, focus groups, direct observation, participant observation and document analysis .What each of these methods has in common is that they are inductive and exploratory by nature, seeking to access a particular perspective in depth, rather than to test a specific hypothesis.A final common research design is a case study, which is an in-depth examination of one or more subjects of study and associated contextual conditions.Case studies can use both quantitative and qualitative research techniques.George and Bennet define a case study as a “detailed examination of an aspect of a historical episode to develop or test historical explanations that may be generalizable to other events” , while Yin defines it as “an investigation of a contemporary phenomenon within its real-life context when the boundaries between phenomenon and context are not clearly evident” .Rather than using statistical analysis of data from a large sample, case study methods often involve detailed, longitudinal assessments of single or multiple cases - which may be individuals, groups, organizations, policies or even countries .This section of the review focuses on novelty: how to produce research that is original, fresh, or even exciting and unexpected.Studies can typically be classified by their primary form of novelty or contribution 
to the literature.Although this will vary, studies generally fall into one of three types:Theoretically-novel articles contribute to creating, testing, critiquing, or revising some type of academic concept, framework or theory;Methodologically-novel articles focus on the research process itself and include testing, revising or developing new research methods;Empirically-novel articles reveal new insights through new applications of existing methods and theories, as well as through analysis of new types of evidence or data.For the most part, articles that fit into the third category are more numerous—there tend to be far more applications of existing theories and methods than developments of new ones.Further, there is clearly overlap in these categories; e.g., a theoretically-novel article will frequently include some empirical novelty as well.The following sections describe each of these categories in turn.In our experience, an article that does one of these three things well is sufficient.Seeking objectives that cross two can be better, but doing all three is overambitious and likely to lead to confusion rather than clarity.Theoretically novel studies can create, apply, advance, test, compare or critique concepts or theories.Here we briefly demarcate three types of theoretical novelty: inventing theories, synthesizing theories, and triangulating theories.Perhaps the rarest is theoretical invention or innovation.Scholars can sometimes develop new frameworks or further elaborate and advance existing theories.Prominent examples relevant to energy social science would be the initial papers that presented "technological innovation systems" and "social practice theory" .In both cases, the motivation for doing so was the perceived limitations of existing theories for explaining the phenomena in question.Theoretical synthesis attempts to integrate existing theories or concepts into a new conceptual framework.For example, the Unified Theory of Acceptance and Use of Technology model integrates concepts from psychology, technology studies, economics, and innovation studies .Similarly, the "Multi-Level Perspective" on sociotechnical transitions integrates ideas from evolutionary economics, science and technology studies and various traditions within sociology .At a more conceptually focused level, Axsen and Kurani integrate aspects of Rogers' Diffusion of Innovations with theories of social networks, conformity, and translation to create a "reflexive layers of influence" heuristic to assess low-carbon consumer behaviour and social networks .One must take care when synthesizing, however, to ensure that the theories being integrated are complementary and have commensurate underlying assumptions , and that the resulting framework is not overly complex.Theoretical triangulation refers to the comparison, evaluation and/or testing of multiple theories or concepts .This involves comparing a number of theories to see which best explain a particular set of empirical observations.One classic example from political science explained a single event, the Cuban Missile Crisis, through three different theories: Realism or Rationalism; Organizational or Institutional Theory; and Bureaucratic Politics and Negotiation .A more recent study in the domain of energy and social science sought to explain the consumer adoption of residential solar PV systems in the United States by testing the validity of concepts from Rogers' Diffusion of Innovation theory, Ajzen's Theory of Planned Behavior, and Dietz and Stern's
Value-Belief-Norm Theory .Similarly, Ryghaug and Toftaker triangulate Social Practice Theory with Domestication Theory to explain electric vehicle adoption in Norway ; while Sovacool et al. compare the MLP, the Dialectical Issue Lifecycle Theory, and Design-Driven Innovation to explain the obstacles to electric vehicle diffusion in Denmark and Israel .Such theoretical triangulation can reduce bias in theory selection and improve theoretical constructions through critical reflection .It can also help researchers select the most appropriate analytical tools for their research question, properly credit those who contributed towards the development of theory, and avoid dogmatic adherence to particular ideas that can stifle both conceptual advancement and communication between disciplines .Another category of novelty applies to papers where the primary contribution is to develop a research method that is new, or modified from a conventional version or combined with other methods in a new way.Given the size and diversity of the energy social science research community, together with the dynamic nature of research methodology, it is impossible to present an exhaustive or even representative list of state-of-the-art methods.In some cases, this type of novelty can involve taking methods from one discipline or area and attempting to make it “better,” such as mixing it with other methods.In other cases, novelty can involve utilizing methods that are “new” and only beginning to emerge among academics more generally.To illustrate, we summarize three examples of novel methods in our field: multiple methods, longitudinal research and behavioral realism.A first example of methodological novelty is the use of “multiple methods” or “mixed-methods”.The first term is more general and refers simply to any research design that uses or blends several different methods.The second term is more specific and refers to the integration of quantitative and qualitative research methods in a single study .There is much debate about how to best implement mixed-methods , though in practice the most popular approach has been to combine quantitative surveys with qualitative interviews .Creswell provides a typology of mixed-methods approaches, which vary in the sequence and intention of integration, with the most suitable approach being one that is best matched to the research objective .The term “methodological triangulation” is used to describe the use of multiple methods to view a given social phenomenon through multiple perspectives , though the term triangulation has become controversial in some disciplines due to the potential implication that there is a single reality to “see” rather than multiple valid, and potentially very different, perspectives .Effective implementation of multiple methods can lead to more sophisticated answers to research questions and can help overcome the limitations of individual research approaches .A second example of methodological novelty is the addition of behavioral realism to quantitative energy models.A broad range of such models have been criticized for lacking realistic assumptions about behavior, including optimization models that assume that actors are hyper-rational and fully informed ; and agent-based models that lack an empirical foundation for their assumptions .Behavioral realism broadly refers to improvements in the representation of agents or decision-makers in these models, especially consumers, to better match real-world behavior in the target population—which of course can 
vary by region and culture and over time.This realism can come from the use of empirical data, representation of both financial and non-financial motivations, and representation of diversity or heterogeneity in behaviors and motives.Improving the behavioral realism of energy models typically involves the combining of methods in some form, for example via translation of insights from an empirical method to the model in question .As examples, some recent studies have sought to improve optimization models by using meta-analysis of behavioral studies to estimate parameters representing processes of social influence ; by representing heterogeneity in consumer valuation of product attributes ; and by incorporating “decision-making heuristics” such as present bias, habit formation and loss-aversion .For agent-based models, innovative research is exploring how to use results from surveys, laboratory experiments, case studies and other sources to inform the selection of model parameters .A third example of methodological novelty is approaches to repeated data collection and longitudinal research design.While most surveys and interviews are cross-sectional, longitudinal approaches offer the opportunity to improve the depth and reliability of collected data, as they aim to study changes in a sample of participants over time.Here, one can distinguish between “panel” studies that repeatedly survey the same participants, and “pooled cross-sectional” studies that repeatedly sample the same population but analyze different cross-sections over time .Such approaches can allow more accurate inference of relevant parameters; provide greater control of confounding variables; facilitate the testing of more complicated behavioural hypotheses; and permit more reliable investigation of dynamic relationships .For example, studies have shown that interview participants and survey respondents are able to express more stable preferences for electric vehicles if they have been given a multi-day trial of that vehicle .Panel-type survey research is benefiting from improvements in information and communication technologies that make it easier to follow a given respondent over time .The panel approach in particular comes with many challenges, including how to minimize and address attrition over time, and how to mitigate the behavioural effects of repeated surveying, such as conditioning—which can be costly and time consuming to overcome .Another novel future direction is the meshing of qualitative narrative analysis with quantitative longitudinal data .The final type of research novelty is empirical—where we distinguish between new applications, new data, and new types of evidence.This category represents the majority of studies in our field: those that apply existing theories and methods to new applications, such as new regions, case studies, contexts, or research questions.While such studies can provide incremental contributions to the testing of theories or the development of methods, their primary contribution is empirical, in improving understanding of the relevant topic or application.Such studies frequently score high on practicality, or the “immediate usefulness” dimension of Fig. 
1, but trend towards the “Thomas Edison” rather than “Louis Pasteur” quadrant.Examples are highly diverse, including: using surveys to apply identity theory to different types of pro-environmental behaviors ; applying an existing technology adoption models to simulate compliance with US fuel economy standards ; using transaction cost economics to understand the conditions for success of energy service contracts ; and applying the MLP to the case of Norwegian electric vehicle policy .Some empirically-novel studies have no strong theoretical framework, being primarily descriptive, exploratory, or grounded in data.For example, such studies may ask: how many English citizens would support a carbon tax?,Or how have financial incentives influenced the uptake of household solar panels and electric vehicles?,Many empirically-novel studies also tend to be socially-relevant by design, seeking to generate immediate insights for policymakers, practitioners, managers and other stakeholders.Empirical novelty also includes collecting and/or analysing new types of data; typically such data are either difficult to collect or access, challenging to analyse, or neglected for some other reason.To illustrate, we identify four types of “exceptional” stakeholder groups that often prove difficult to access: elites, experts, small populations, and vulnerable populations.In some cases, collecting data from such populations can be a novelty itself.Perhaps the most common example of this approach is data collection from elites: people in a position of power, influence or expertise regarding energy decision-making .Examples of elites include business executives, heads of state, senior ministers, or senior directors and managers of energy programs .Elite interviews are especially useful for revealing the motivations and actions behind policy formation and adoption, although access to the highest levels of politics or policymaking is often restricted and confidentiality concerns abound .A second category is experts in a particular topic area, which may include inventors, entrepreneurs, researchers or intellectuals.Sampling or accessing such experts can be challenging, in particular because it may not be clear who makes up the target population, how to draw a sample, and how to best engage the sample.The perspective of experts can be accessed using “Delphi” techniques that can facilitate convergence towards a consensus view on a topic .Small populations include, for example, pioneer adopters of low-carbon technologies or venture capitalists .These can be difficult to access due to small or non-existent sampling frames, yet their viewpoints can provide an important, often missing contribution to a given literature.Finally, sensitive or vulnerable populations can include the survivors of energy accidents such as those at Chernobyl or Fukushima , indigenous peoples , children , the elderly or ill , and the chronically poor .Understandably, strategies for accessing these groups will be completely different from those for elites and experts, and will require cultural sensitivity and careful attention to ethics.Nevertheless, despite these added steps and challenges, it is often critically important for the perspectives of these groups to be considered in broader theory, research and decision-making.A third category of empirical novelty is new forms of evidence.Here we use the example of big data - interpreted as “extremely large data sets that may be analyzed computationally to reveal patterns, trends, and associations, 
especially relating to human behavior and interactions” .These datasets may cover large populations, or achieve high temporal resolution, or both .The data may be generated by people themselves, but is more commonly measured automatically by digital technologies such as smart meters , load-monitoring devices , and GPS devices .Although not widely used in energy social science, other sources of big data that could yield empirical insights are telematics in automobiles , online shopping profiles , and social media content such as Facebook and Twitter .Some applications combine data sources: for example, Chatterton et al. aggregate data from 70 million domestic energy meters and vehicle odometers, with the aim of identifying areas in the United Kingdom with high household and vehicle energy consumption .Hamilton et al use the term “energy epidemiology” to describe the use of such data to measure and explain energy demand patterns, and to predict future changes in energy demand from policy and other interventions .As they write:Energy epidemiology is the study of energy demand to improve the understanding of variation and causes of difference among the energy-consuming population.It considers the complex interactions between the physical and engineered systems, socio-economic and environmental conditions, and individual interactions and practices of occupants.Energy epidemiology provides an over-arching approach for all the disciplines involved, where findings from large-scale studies both inform energy policy while providing a context for conventional small-scale studies and information input for predictive models .Big data and energy epidemiology therefore open up new opportunities for exploring the relationships between consumer behavior and energy use.For example, automatically collected data can avoid the errors of self-reported behavior, while data on consumer purchases can provide insights into consumer preferences while avoiding the limitations of hypothetical, stated choice experiments.But such applications raise complex and important questions about data privacy, transparency, security and accountability as well as third-party verification of data quality .In this section, we focus on rigor: how to strive for careful and thorough research designs that ensure the research objectives are achieved.This definition relates to concepts of validity, which are defined in Box 2.We focus our discussion on three lessons:The usefulness of codes of practice for our seven research designs, where we advocate a “fit for purpose” approach.The limitation of hierarchies of evidence, where some disciplines emphasize a ladder of approaches.The need for appropriateness and balance, where studies need not excel across all criteria.Here, we propose some basic “codes of practice” for different research designs—recognizing that the strength of a particular approach will depend on the context, objectives and research questions.Rather than offering a definitive checklist, this is more of a “toolbox,” “horses for courses,” or “fit for purpose” approach to rigor.More detailed guidelines for each of the research designs can be found in the cited sources.To be clear, these codes of practice are intended to emphasize which research designs or methods might be appropriate in particular settings, but the choice is dictated not only by the codes of practice, but also by the logic of inquiry and the research objectives.Experiments have a long history in disciplines such as social psychology, but have been adopted more 
slowly in other areas of social science .In short, they aim to isolate and establish evidence for the causes of particular effects of interest.“True experiments” and “randomised controlled trials” in particular are defined by the randomized assignment of subjects to treatment conditions.Such designs are appropriate for research questions that seek to establish causal relationships between variables, such as: “do time-of-use electricity tariffs lead to reductions in electricity consumption?” ,; or “does the format and color of energy efficiency labels affect the adoption of efficient appliances?” .While such relationships are frequently inferred from non-experimental or “associational” studies, those inferences may be invalid .For example, survey data may indicate a positive correlation between reported happiness and reported engagement in pro-environmental behavior, but the causality may be in the opposite direction or the correlation may result from a third variable that is not observed.In order to provide stronger evidence of causation, the defining characteristic of true experiments is that the subjects or participants are randomly assigned to treatment or control groups.This minimizes the risk of selection bias and isolates both the magnitude and direction of the treatment effect.Experiments are most easily conducted in laboratory conditions, but extension to the field can allow for exploration of a broader range of research questions and may provide greater realism.True experiments are becoming increasingly popular in social science , and are commonly seen as the “gold standard” for determining causality .They also benefit from broad consensus on what constitutes best practice.For example, Bloom provides a useful overview of experimental designs for different contexts, including differing research questions.However, true experiments are not widely used within energy social science, even in areas where they appear feasible - such as the evaluation of energy efficiency programs .This is partly because energy social science asks a wide range of research questions, only a portion of which can be answered through experimental designs.But it is also because experiments can be time-consuming and expensive to conduct and can raise practical and ethical difficulties.For instance, it may not be possible to randomly withhold subsidies for energy efficiency improvements from qualifying applicants.True experiments can also have limitations, such as usage of small or unrepresentative samples, vulnerability to the Hawthorne effect, difficulties incentivizing replication studies, and a lack of guidelines for how to increase the reproducibility of results .Indeed, some argue that experiments must move beyond the bias towards Western, educated, industrialized, rich and democratic societies .Furthermore, the laboratory setting of most true experiments is rather artificial, and the results may be difficult to transpose to real-world settings.Those defending experiments counter that many of these limitations can be mitigated through either careful research design or the integration of experiments with other, complementary research methods .Where true experiments are impractical, it may be feasible to employ a “natural” or “quasi-experimental” research design, that includes treatment and control groups, but where allocation to those groups is determined by factors beyond the researchers’ control .The key to success in a quasi-experimental design is to ensure that the assignment to treatment or control 
group is not related to other determinants of the relevant outcome. If successful, this can obviate the need to specify and control for all confounding variables. Some of the most common approaches to quasi-experiments are summarized in Box 3. Quasi-experiments encompass a range of research designs, of varying degrees of robustness and sophistication. A recent variant utilizes “living laboratories” to provide user-centered social experiments with the aim of testing a particular technology, solution, idea or policy in a real-world environment. Distantly related examples include “transition experiments” and “governance experiments”. Still other designs utilize more complex simulations, games, or competitions to understand bargaining strategies, including those using the labels of “serious games”, “adaptable simulations”, and “gamification”. The codes of practice we recommend for experiments and quasi-experiments include:
- Clearly specify the experiment’s objectives, type and predicted result or effect;
- Follow best practice for experimental design that aligns with the research objectives, including selection of sample size, choice of setting and management of control groups;
- Ensure that the recruitment of participants is as representative as possible for the purpose at hand;
- Utilize random assignment where feasible and appropriate, and where not, follow best practice for quasi-experimental design;
- Acknowledge limitations in external validity, and, where possible, use a multi-method approach to mitigate those limitations;
- Where possible, consider replication or repeated experiments to gain stronger evidence of causality.
A literature review is a study or compilation of other research—typically of peer-reviewed literature, though non-academic studies can also be included. We consider three types of review here, flowing from most to least structured: meta-analysis, systematic reviews, and narrative reviews. A meta-analysis combines quantitative results across a set of studies to draw conclusions about a specific topic of interest. A systematic review aims to provide a comprehensive, unbiased and replicable summary of the state of knowledge on a well-defined issue. A narrative review provides an exploratory evaluation of the literature or a subset of literature in a particular area. Meta-analyses and systematic reviews can each be further distinguished between a priori reviews that start with fixed criteria or search strings that do not change once the search begins, and iterative reviews that modify search strings based on ongoing results, leading to repeated searches. Meta-analysis is usually quantitative in nature, involving statistical analysis of the quantitative results from a series of comparable studies. Aggregate results can be pooled and analyzed with a meta-regression technique that estimates an overall effect size, while also explaining variations across studies. There are several comprehensive guides to meta-analysis, which is now an established technique in many fields. While the method is powerful, it is only appropriate for clear and precise research questions that have previously been addressed by a large pool of comparable quantitative studies. Put another way, meta-analyses may not be possible for some study types, and they do not always yield more useful results. Meta-analyses are common in fields such as medicine, but much less common within energy social science. There are exceptions, however, such as estimates of energy price elasticities, social influence effects for alternative fuel vehicle purchases, and the success of demand response programs.
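To make the pooling step concrete, the following is a minimal sketch of a fixed-effect (inverse-variance) meta-analysis in Python. The study names, effect sizes and standard errors are hypothetical and invented purely for illustration; a full meta-regression would additionally model between-study heterogeneity with random effects or study-level covariates.

```python
import math

# Hypothetical (invented) price-elasticity estimates and standard errors
# from three comparable studies; not real data.
studies = [
    {"name": "Study A", "effect": -0.35, "se": 0.10},
    {"name": "Study B", "effect": -0.28, "se": 0.08},
    {"name": "Study C", "effect": -0.41, "se": 0.12},
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2.
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```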
Systematic reviews are also very structured, but are more descriptive and can include both quantitative and qualitative evidence. Such a review usually works in phases, such as: crafting of explicit research questions; systematically searching the available literature using defined search terms; using explicit criteria for including or excluding studies; determining and then executing a coding strategy or analytical protocol; and analyzing or synthesizing the collected evidence. Compared to a typical narrative review, a systematic review aims to use an explicit and replicable research design, ensure comprehensiveness in the literature search, and reduce bias in the selection of studies. Further, most systematic reviews give greater weight to methodologically rigorous studies, although not all meet this criterion. Some researchers even suggest that systematic reviews belong at the top of a list of most rigorous methods. For instance, when discussing reviews, Khalid et al. state that “reviews should never be done in any other way”. Further, Huebner et al. suggest that there may even be a continuum of “systematic-ness” in literature reviews, moving up from purely narrative reviews to systematic reviews, and finally meta-analysis at the top. Fig. 2 is our own conceptualization of how such a continuum may look. Systematic reviews can be applied to topics where both quantitative and qualitative evidence is relevant, experiments may or may not be feasible, researchers are concerned with “what works” in what context, and multiple and competing factors are at play. Examples of systematic reviews in energy social science include: an assessment of the cost impacts of intermittent generation on the UK electricity system; a review of the evidence for a near-term peak in global oil production; an analysis of the social acceptance of wind energy in North America; and an analysis of the barriers to and opportunities of smart meter deployment in the UK. The main drawback of systematic reviews is that they are resource intensive and time consuming. Systematic reviews are therefore not optimal in circumstances where resources are limited or for fields where evidence is sparse or patchy. Also, they are more suited to relatively narrow research questions rather than multidimensional problems; and they tend to employ an “additive” approach to synthesizing research results that can neglect the complementary nature of different studies and perspectives. Further, a systematic review is not guaranteed to be comprehensive or unbiased—the inclusion and coding of articles is still sensitive to the researcher’s selection of criteria and concepts. Narrative reviews are the least structured and most common type of review, appearing in both review papers and the literature review sections of research papers. A narrative review synthesizes evidence familiar to an author on a given topic or theme, and is typified by the reviews published in Annual Reviews of Environment and Resources. Good narrative reviews will be comprehensive, and typically require an experienced author to uncover the nuances and themes of the relevant literature. The narrative review approach can be particularly useful for exploratory reviews that seek to synthesize insights from a variety of perspectives and disciplines, or areas where insufficient data exists to conduct a systematic review or meta-analysis. Further, a good narrative review will be organized in a way that is useful and easy to
read: for example, by concept, theme, theory or discipline; or, if appropriate, by publication date .However, narrative reviews typically lack transparency and replicability, especially if the author uses a “convenience” sample with no explicit criteria for inclusion .Thus, narrative reviews can be more subject to bias compared to other methods, mainly in the inclusion and exclusion of research and in the weighting of research evidence - or at least, that bias might be better hidden.A final point relevant to all literature reviews is the need for careful use of citations.Many authors have had the experience of seeing their work cited, only to discover that their study has been misinterpreted, or mixed-up with another study.Researchers thus need to be careful with the documentation and organization of papers and citations, treating these as carefully as their own data or analyses .The codes of practice we recommend for literature reviews include:Be as explicit as possible about the process of the review you use, explaining your rationale and approach;,Employ meta-analysis when there is a large number of comparable quantitative studies of the topic and the research questions are specific, clear and consistent;,Utilize systematic reviews to comprehensively summarize and interpret large bodies of quantitative or qualitative evidence on well-defined research questions, and when sufficient time and resources are available;,Undertake narrative reviews for exploratory and/or multidimensional research questions or when resources are more limited;,In all three approaches, be transparent: if applicable, report the sources/databases covered, the dates and time period examined, the search term used, the languages searched, and whether any sampling of results was done;,Know your citations and references and ensure that you accurately utilize them.Surveys are a cornerstone of research in a range of disciplines, some of which have established criteria for best practice—though these are not always consistent with each other.Dillman’s “tailored design method” provides one of the most accepted guides to survey research and is now in its fourth edition .To set up this discussion, we first distinguish between the target population, the sampling frame, the invited sample, and the realized sample.For example, a researcher might want to study a city of one million people, and have a list of 100,000 motor vehicle owners.They randomly select and invite 5000 of these vehicle owners, and of those, 1000 end up completing the survey.In this example, the response rate is 20%—though researchers can vary in how they define and calculate response rate, so this should always be explained.One key consideration for survey design is the mode employed to conduct the survey, which can include phone, internet, mail or in-person, or some blend of these.A number of publications outline the relative strengths and weaknesses of each, which vary for different research questions and target populations .Internet surveys have become increasingly popular owing to their low-cost and versatility.Regardless of the survey mode, for many target populations it is difficult to find an appropriate sampling frame, and to recruit a realized sample of sufficient size and representativeness to achieve one’s research objectives.Dillman argues that researchers need to consider and minimize four types of error that threaten validity, namely: sampling error, coverage error, non-response error and measurement error.Unfortunately, many researchers focus 
almost exclusively on sampling error, which only describes the lack of precision resulting from selecting a sample rather than surveying the entire population—often leading to the erroneous perception that a large sample size is the primary or only indication of a rigorous survey method. Table 5 illustrates the relationship between population size, sample size and sampling error. For example, consider a researcher who aims to draw a random sample from a population of one million, and wants to estimate the proportion of responses to a binary question. If the researcher expects a 50/50 split in responses among respondents, and wants to know these observed proportions within a precision level of +/- 3%, the study would need a minimum random sample of 1067 respondents. It is this calculation that often leads to 1000 being considered the “magic number” for desired sample size among survey researchers. However, the choice of appropriate sample size depends upon the research question. Studies with descriptive research questions may use Table 5 to anticipate the degree of precision a given sample size will attain regarding survey responses. Studies focusing upon tests of association or causality may employ more complex calculations, where the appropriate sample size depends upon the anticipated effect size, the desired significance level, the desired statistical power of the test and the expected variance of the explained variable. For some causal or experimental studies, a very small sample size may be sufficient. Modest sample sizes may also be acceptable for studies trying to access a small population or the exceptional groups mentioned in Section 3.3.2. For example, if you want to assess the percentage of Russian citizens that support nuclear power, you will need a large, nationally representative sample of respondents. If, however, you want to undertake an exploratory study of how early adopters of smart homes in Wales feel about those technologies, a much smaller sample could be appropriate. In all cases, the sample size needs to be considered in the context of the research objectives and the intended method of statistical analysis.
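To make this sample-size arithmetic explicit, the following is a minimal sketch in Python, assuming simple random sampling and a 95% confidence level. The function name is ours, and the calculation covers only precision for a proportion estimate, not the power calculations needed for tests of association or causality.

```python
import math

def required_sample_size(population, margin=0.03, p=0.5, z=1.96):
    """Minimum simple random sample needed to estimate a proportion.

    population: size of the target population
    margin: desired precision (half-width of the confidence interval)
    p: expected proportion (0.5 is the most conservative assumption)
    z: z-score for the confidence level (1.96 for 95%)
    """
    # Sample size for an effectively infinite population.
    n_infinite = (z ** 2) * p * (1 - p) / margin ** 2
    # Finite population correction.
    n_adjusted = n_infinite / (1 + (n_infinite - 1) / population)
    return math.ceil(n_adjusted)

# A 50/50 split, +/- 3% precision and a population of one million
# gives roughly the sample size of about 1,067 cited in the text.
print(required_sample_size(population=1_000_000))
```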
Despite the importance of sample considerations, we urge survey researchers to consider and balance efforts to mitigate sampling error with efforts to minimize the other three categories of error identified by Dillman. The second category is coverage error, where the sampling frame is not fully aligned with the target population, i.e. it misses certain types of people and/or oversamples others. For example, a sampling frame of household telephone numbers would miss households without a telephone, and a traditional phone book could miss households that only use a cell phone. The third category is non-response error, where those who respond to the invitation are systematically biased relative to the target population—say being higher income, older, or having a higher level of education. For example, a market survey of car buyers interested in electric vehicles could be more attractive to electric vehicle enthusiasts—since these are more likely to respond to the survey invitation, the realized sample may be biased. Survey results would then overestimate consumer interest in electric vehicles. Related to this is item non-response error, where a particular survey question is neglected by some subset of the realized sample – such as higher income households being more likely to refuse to report their income. The final category is measurement error, where the survey instrument does not record the information that the researcher thinks it is recording, typically as a result of poor or confusing wording of questions or response categories. This final category moves beyond the sample to highlight the importance of careful design and pre-testing of the survey instrument itself. In short, a rigorous survey research design should have an appropriate sample size, be representative of the target population and be effective in communicating questions and eliciting responses. The complexity of real-world research questions usually means that all four errors will be present in a survey project to some degree. However, rigorous survey researchers must address and manage such risks in their research design, and report how they have done so in their article. Thus, we propose the following codes of practice for survey data collection:
- Consider and acknowledge the strengths and weaknesses of different survey implementation and sample recruitment modes;
- Aim to collect an appropriate sample size for the research objectives and context;
- Examine and report how well the sample represents the target population—especially for descriptive research objectives;
- Carefully design and pre-test the survey instrument to maximize the accuracy of responses;
- Carefully interpret results according to the limitations of the realized sample.
Many studies will require statistical analysis of collected data, so researchers must be able to select the most appropriate statistical methods, apply those methods effectively and interpret the results correctly. This requires a firm grounding in statistical methods. The appropriate choice of method will depend upon:
- The nature of the research objective, which can be exploratory, descriptive, or explanatory. Exploratory research does not have clear hypotheses and rarely requires statistical methods. Descriptive research simply summarizes the characteristics of the data and only requires basic statistics. Explanatory research searches for relationships among variables, typically starting with clear hypotheses about those relationships and often requiring sophisticated statistical analysis. Most analysts caution against “data-mining,” “p-hacking,” or “reverse-engineering” a paper, where the researcher tests a large number of models and variables and works backwards to focus on relationships they find significant. But some traditions – such as the general-to-specific methodology in econometrics – view such approaches more favorably.
- Whether a relationship is analyzed and which type: univariate analyses confine attention to single variables, including estimates of means, standard deviations and confidence intervals; bivariate analyses estimate the relationship between two variables, through correlation, ANOVA or a chi-square test of association; and multivariate analyses estimate relationships among many variables via multiple regression and other techniques.
- The types of variables to be analyzed, be they continuous, ordinal or nominal—which in some cases can be transformed from one type to another.
- The type of data to be analyzed, which can be cross-sectional, time-series, pooled cross-section or panel. Further distinctions include aggregate versus disaggregate data and different periodicities of time-series data.
Table 6 lists some major data analysis methods by their typical application and main limitations. For some research objectives, particularly descriptive research, a simple procedure might be warranted. For example, a survey of citizen support for a given climate policy might only require the reporting of the proportion of respondents in favor, along with a confidence interval. However, most statistical studies in energy social science are interested in the relationships between two or more variables. Bivariate analysis explores relationships between two variables, but typically provides only limited insight due to the potential for the identified relationships to be spurious, owing to omitted variables. The exception is data from a true experiment, where bivariate analysis of the relationship between treatment and outcome can be interpreted as causal, due to the process used to generate the data. Some research texts present data analysis methods from least to most rigorous. Fig. 3, for example, proposes such an arrangement of data analysis techniques. For most studies, multivariate analysis will be required to produce meaningful insights, although the rigor of individual applications may vary widely depending upon both the nature of the data and the care taken by the analyst - for example, in conducting model specification tests. Among multivariate analyses, the most common approach is multiple regression, which explores how a number of independent variables are associated with a single dependent variable. Techniques such as MANOVA are simply a subset of multiple regression, but are widely used in disciplines that employ true experiments, such as social psychology. In contrast, economics relies almost exclusively upon multiple regression. Linear or non-linear regression is used for continuous dependent variables, while logistic regression is used for categorical dependent variables. The primary advantage of multiple regression is that researchers can explore hypotheses about the relationship between two variables, while controlling for other variables that might also matter, such as respondent age, gender and political affiliation. Although such analyses can be powerful, researchers frequently pay insufficient attention to the various assumptions that must hold for different methods to give unbiased results. Nearly any introductory statistics or econometrics textbook will explain these assumptions, together with the tests required and strategies available when those assumptions do not hold.
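To illustrate what a basic multivariate analysis might look like in practice, the following is a minimal sketch using Python and the statsmodels library. The dataset, variable names and hypothesized relationships are entirely fabricated for illustration, and a real analysis would involve far more careful specification testing than the single diagnostic shown here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Hypothetical survey data: support for a carbon tax (0-100 scale)
# as a function of income, age and environmental concern.
rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "income": rng.normal(50, 15, n),   # thousands per year
    "age": rng.integers(18, 80, n),
    "concern": rng.uniform(1, 7, n),   # 7-point Likert-style scale
})
df["support"] = (20 + 0.1 * df["income"] - 0.05 * df["age"]
                 + 8 * df["concern"] + rng.normal(0, 10, n))

# Multiple regression: support explained by income, age and concern.
X = sm.add_constant(df[["income", "age", "concern"]])
model = sm.OLS(df["support"], X).fit()
print(model.summary())

# One example diagnostic: Breusch-Pagan test for heteroskedasticity.
bp_stat, bp_pvalue, _, _ = het_breuschpagan(model.resid, X)
print(f"Breusch-Pagan p-value: {bp_pvalue:.3f}")
```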
These issues are particularly important when using secondary data sources, since these have multiple limitations that are largely beyond the researchers’ control – such as short time series, measurement error and missing or endogenous variables. Much of the sophistication within econometrics results from attempts to overcome such problems – for example, econometricians have developed “cointegration” techniques to extract the relationship between variables that share a time trend. However, since no amount of analytical sophistication can adequately compensate for poor quality data, there is an increasing trend towards the use of panel data and quasi-experimental techniques. Table 6 also lists some more advanced techniques, along with their main limitations. We cannot possibly mention all methods, so we only highlight a few that have proven popular in energy social science. For example, structural equation models can be used to explore complex relationships among variables, particularly when a theory or hypothesis proposes several layers of causation. For instance, it may be hypothesized that a person’s values influence their beliefs about a particular energy technology, which in turn influence their likelihood of purchasing that technology. While this approach is powerful, rigorous analysts need to use theory carefully to guide their inquiry. Factor analysis includes methods that collapse or group similar variables into a single measure, and is used extensively within social psychology. Cluster analysis groups agents or cases in such a way that members of the group are more similar to each other than to those in other groups, but the most popular technique cannot be used for tests of statistical significance, and there is no universally accepted method to select the “best” number of clusters. Discrete choice modeling is a particular form of logistic regression that explains and predicts choices between two or more discrete alternatives, such as between an energy efficient and inefficient appliance, based upon the characteristics of the different choices, the characteristics of the relevant actors and other relevant variables. This approach has proven particularly popular in economics and transportation studies. Discrete choice models were originally informed by expected utility theory, but increasingly use other social theories as well. Finally, latent-class models are a particular type of discrete choice model that explicitly represents heterogeneity among individuals, splitting respondents into a number of similar classes or segments, and estimating choice models for each segment (a minimal illustration of a simple binary choice model is sketched after the summary list below). Appropriate applications of each of these methods must consider many more issues than we can cover here, and the rigorous analyst will need to become familiar with textbooks and papers relating to their chosen method. In summary, the practices of the rigorous data analyst include:
- Effectively match the data analysis technique to the research question and type of data available;
- Where multiple methods are appropriate, consider and acknowledge their individual strengths and weaknesses;
- Where data are available, conduct more sophisticated and robust analysis of association;
- For explanatory or comparative research questions, state hypotheses clearly up front, informed by theory, and avoid re-working hypotheses to fit the results;
- Balance the objectives of statistical performance with the interpretability and usefulness of the results;
- Carefully distinguish between analyses of association versus causation;
- Distinguish clearly between statistical significance and practical significance—where the latter relates to whether the difference is large enough to be of real-world importance.
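As a bridge between the data-analysis techniques above and the energy models discussed next, the following is a minimal sketch of a binary discrete choice (logit) model of appliance purchase in Python using statsmodels. The data, variable names and coefficient values are invented purely for illustration; a real application would use observed or stated choices and a carefully specified utility function.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical choice data: whether a household buys the energy-efficient
# model of an appliance, given its price premium and the household's
# stated environmental concern. All values are fabricated.
rng = np.random.default_rng(7)
n = 400
df = pd.DataFrame({
    "price_premium": rng.uniform(0, 300, n),   # extra cost in dollars
    "concern": rng.uniform(1, 7, n),           # 7-point scale
})
utility = 1.0 - 0.01 * df["price_premium"] + 0.4 * df["concern"]
prob = 1 / (1 + np.exp(-utility))
df["chose_efficient"] = rng.random(n) < prob

# Binary logit: the simplest discrete choice specification.
X = sm.add_constant(df[["price_premium", "concern"]])
logit = sm.Logit(df["chose_efficient"].astype(int), X).fit()
print(logit.summary())

# Implied willingness to pay for a one-point increase in concern:
# the ratio of coefficients, a standard discrete choice result.
wtp = -logit.params["concern"] / logit.params["price_premium"]
print(f"Implied WTP per point of concern: ${wtp:.0f}")
```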
Quantitative energy models have held a central place in energy research for decades. Such models are computer-based, and are used for a variety of purposes, including exploring the range of possible futures under different assumptions and assessing the impact of particular policy interventions. The different types of energy models can be classified in a variety of ways, including: geographical coverage; sectoral coverage; scope; methodology; and time horizon. For simplicity, Table 7 distinguishes four broad categories of model and highlights their main strengths and weaknesses. As with other research methods, the appropriate choice of model depends upon the research question, and therefore it is important to acknowledge the limitations of each model type – though model-based articles often neglect such acknowledgement and comparison. Given our focus on energy social science, we place particular weight on behavioral realism: that is, better energy models will have a strong empirical basis for their parameters, include some degree of heterogeneity between relevant groups, and/or represent the potential for a broad range of actor motivations. We first distinguish “bottom-up” from “top-down” models, a distinction that represents the historical basis of many models. Although these categories have blurred in the last two decades, we believe the broad distinction is still a useful starting point. First are “bottom-up” models, a term that is often equated with optimization models that have their origin in engineering and operations management. The term “bottom-up” is used because these models explicitly simulate the operation of individual energy-using technologies, which are aggregated across individual sectors or the energy system as a whole to give total energy use and emissions. These models simulate the ageing and replacement of technologies, with investment decisions being determined by capital costs, fuel prices, policy interventions and other factors. Bottom-up models usually include a large number of current and potential future technologies and simulate the “optimal” means of attaining some goal subject to constraints. However, this optimization assumption is also the main weakness of conventional bottom-up models, as consumers, energy suppliers and other actors are frequently depicted as hyper-rational decision makers operating with perfect information and foresight and motivated purely by financial costs – assumptions contradicted by empirical research on human behavior. Nevertheless, significant efforts have been made to improve the behavioral realism of such models, including attempts to incorporate “myopic” decision-making, heterogeneity, intangible costs and benefits and social influences. In contrast, “top-down” models are macroeconomic and aggregated in nature, and are commonly used to simulate how changes, or “shocks”, in one sector impact the entire economy, including changes in prices, investment, employment and GDP. Most common are computable general equilibrium (CGE) models, which simulate regional or national economies by combining a social accounting matrix with equations for the behavior of each sector, under the assumption that the economy tends towards an equilibrium. CGE models are calibrated to the economic transactions in a base year and make the assumption that firms maximize profits and consumers maximize utility. System responses in a CGE model are strongly influenced by the assumed elasticities of substitution between factor inputs and different types of consumption good. Although the results are highly sensitive to these assumptions, their empirical basis is typically weak.
The aggregate nature of top-down models means that they do not represent specific technologies or actors, but instead use abstract relationships such as production functions. This abstraction leads to the common perception of CGE models as “black boxes”, lacking transparency regarding the assumptions and processes that lead to a given finding – though admittedly, most complex energy-economy models can suffer a similar problem. In most cases, the “black box” issue can be mitigated in part by comprehensive sensitivity tests and by documenting the economic mechanisms contributing to the observed results. This category also includes input-output (I-O) models, which can be seen as simplified CGE models with a fixed production structure and no scope for substitution. I-O models benefit from simplicity and transparency, but are unable to model price changes, supply constraints and other market feedbacks, and are only suitable for investigating the impact of relatively small system shocks over the short term. A third category may be called simulation models, grouping a variety of models that do not seek to optimize a system according to goals or macroeconomic assumptions—but instead seek to “simulate” real-world patterns of behavior. These models vary widely in structure and assumptions, making it particularly important for modelers to communicate those assumptions. In recent decades, so-called “hybrid” approaches have emerged, integrating aspects of top-down and bottom-up models, and attempting to balance the strengths of technological detail, behavioral realism and macroeconomic feedbacks. Indeed, most widely used energy-economy models have either a bottom-up or top-down origin, but have since moved to some degree of hybridization. Methods have also been developed to improve the representation of consumer behavior and preference change in such models; for example, the CIMS model draws from stated and revealed preference choice models to assign behavioral parameters representing car buyer preferences. In turn, CIMS has been shown to produce more realistic estimates of the costs of emission reductions. Similarly, the REPAC-IESD model pairs empirically-derived discrete choice models with an electricity-utility dispatch model, finding that the societal benefits of vehicle-grid integration are lower than indicated by optimization models. Another type of simulation model – system dynamics – represents complex systems by means of stocks, flows, feedback loops, and time delays. It simulates the non-linear behavior of those systems over time – including phenomena such as increasing returns, path dependence and tipping points. The systems modelled can range in scope from individual organizations to the global biosphere and can incorporate a wide range of assumptions about system behavior. However, despite their long history, system dynamics models have not been widely used in energy social science, in part due to their complexity and the lack of a firm empirical basis for the relevant assumptions. We also include agent-based models in this category, which are highly disaggregated models that simulate the behavior and interactions of multiple individual agents. Behavioral realism can vary widely in agent-based models, depending on how the modeler chooses to represent the determinants of decision-making, and whether there is an empirical basis for the parameters used. In contrast to system dynamics models, agent-based models are becoming increasingly prominent in the energy field.
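To give a flavor of the agent-based approach, the following is a minimal sketch in Python of a toy adoption model in which heterogeneous agents decide whether to adopt a technology based on a willingness-to-pay threshold and social influence from a handful of peers. Every parameter here is invented purely for illustration and has no empirical basis; as discussed above, a research-grade model would ground the thresholds, network structure and influence weight in empirical data.

```python
import random

random.seed(1)

N_AGENTS = 200
N_PERIODS = 20
PRICE_DECLINE = 0.03   # technology price falls 3% per period
SOCIAL_WEIGHT = 0.5    # how strongly peer adoption raises willingness to pay

# Each agent has a willingness-to-pay threshold and a few random "peers".
agents = [{
    "threshold": random.uniform(0.5, 1.5),
    "peers": random.sample(range(N_AGENTS), 5),
    "adopted": False,
} for _ in range(N_AGENTS)]

price = 1.5
for period in range(N_PERIODS):
    price *= (1 - PRICE_DECLINE)
    for agent in agents:
        if agent["adopted"]:
            continue
        # Social influence: the share of peers who have already adopted
        # effectively raises the agent's willingness to pay.
        peer_share = sum(agents[p]["adopted"] for p in agent["peers"]) / 5
        effective_threshold = agent["threshold"] * (1 + SOCIAL_WEIGHT * peer_share)
        if price <= effective_threshold:
            agent["adopted"] = True
    adopters = sum(a["adopted"] for a in agents)
    print(f"Period {period + 1:2d}: price={price:.2f}, adopters={adopters}")
```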
A final category is integrated assessment models (IAMs), a term that is sometimes applied loosely to any approach that combines more than one model—making it important to communicate what exactly is “integrated”. Here we refer mainly to climate change IAMs, which can be further split between relatively simple cost-benefit IAMs and the more complex cost-effectiveness IAMs. The cost-benefit IAMs rely on very simplistic representations of both social and natural systems, and in some cases can be run with a single spreadsheet. Such IAMs have been widely used to estimate and monetize the damage caused by climate change and thereby to estimate the welfare impacts of different mitigation options. Specifically, they can explore the interlinkages and feedbacks between natural and social systems: for example, how economic activities lead to increased greenhouse gas emissions, which warm the climate and in turn create damages that impact the economy. But this approach is controversial, owing to the highly simplified assumptions required, the enormous uncertainties about the magnitude of climate damages, the philosophical difficulties associated with monetizing those damages and the unresolved debates about the appropriate choice of discount rate. In contrast, the more complex cost-effectiveness IAMs integrate one of the previously mentioned categories of socio-economic model with one or more natural science models – usually a climate model, and sometimes other ecological or land-use models as well. Due to this integration, such IAMs tend to be highly complex, and are typically constructed and maintained by large groups that specialize in such models, such as the International Institute for Applied Systems Analysis or the researcher teams informing the Intergovernmental Panel on Climate Change. The unique strength of such IAMs is that they are globally comprehensive, accounting for all types of greenhouse gas emissions from all emitting sectors—which can then provide useful inputs into climate models of radiative forcing and temperature change. However, since the social science component of complex IAMs is equivalent to one of the modeling types noted above, they suffer the same drawbacks. Further, because integrating several sub-models requires substantial computing power, the natural science models used in these IAMs tend to be more simplistic than a dedicated climate model. Based on the summary of energy models detailed above, we conclude that good practices of the rigorous modeler include:
- Carefully select a model type based on its suitability for the research objectives, rather than prior familiarity;
- Consider and acknowledge the strengths and weaknesses of different model types, even if only one is used;
- Aim for a parsimonious and useful model that avoids excessive complexity;
- Maximize transparency in the structure and operation of the model and in the selection of model parameters;
- Seek a firm empirical basis for model assumptions and, where appropriate, strive towards behavioral realism;
- Conduct sensitivity tests and investigate and acknowledge uncertainties in the results.
Qualitative research methods are particularly suited to inductive and interpretive approaches. Inductive approaches begin with empirical observations and seek to identify new insights and categories, and to generate rather than test hypotheses. Interpretive approaches aim to interpret the experience of individuals and to identify the meanings that those experiences hold, rather than looking only to establish causal inferences. However, qualitative methods can
also support other forms of enquiry.Qualitative methods are sometimes attacked for lacking the widely-accepted standards of rigor associated with some quantitative disciplines and methods.However, this need not make qualitative research less rigorous and there have been multiple efforts to establish more robust standards for qualitative rigor .As with all research methods, qualitative research needs to be designed to suit the intended research objectives , and these objectives often differ in fundamental ways to those addressed by quantitative methods.Table 8 summarizes four approaches to collecting qualitative data and three approaches to analyzing that data.The most common approach to data collection is qualitative interviews, which may be either semi-structured or unstructured; implemented with individuals or small groups; and targeted at either the general population or particular stakeholders."Interviews provide access to people's experience, motivations, beliefs, understandings and meanings -- often providing a deeper understanding than surveys and allowing follow-up and more probing questions .These attributes apply equally to stakeholder interviews, but these raise the additional challenge of determining how the interviewees’ perspective relates to that of the organization they represent.While interviews are generally effective at eliciting individual perspectives, focus groups allow the elicitation of perspectives from groups of individuals, leading to more socially negotiated responses.Perhaps due to their association with market research, focus groups are often seen primarily as a low-cost method or an initial step in a larger study .However, focus groups offer their own unique strengths, namely by constructing a social context in which participants can collectively generate, negotiate and express perceptions and meanings—though of course, a rigorous researcher must understand and acknowledge the limitations of that context .The qualitative nature of both interviews and focus groups makes it difficult to code answers, and responses will vary significantly between different persons and groups.As with any face-to-face data collection method there is also the risk of bias, including a tendency for participants to provide responses that they see as socially desirable, or desirable by the interviewer.Also, as with surveys, interview participants may find it difficult to describe their behaviors, responses or motivations.More generally, effective implementation of qualitative interviews and focus groups requires the interviewer to develop a very different set of skills to those required for quantitative data collection methods .The three remaining methods of qualitative data collection can avoid or mitigate the challenges of interviewer-participant interaction.The first two, direct observation and participant observation, involve the witnessing of relevant behaviors of individuals or groups .Direct observation is unobtrusive by design, and might occur, as examples, in a study of environmental conditions at facilities, buildings, and other institutions .In contrast, participant observation is more in-depth, describing studies where the researcher participates and becomes somewhat immersed in the relevant culture or practices over a long period of time.Researchers will interact directly with subjects, typically in day-to-day contexts, in a sense combining aspects of direct observation with unstructured or semi-structured interviews.However, such participant observation can be resource 
intensive, requiring months or even years of the researcher’s time.The final category we consider is analysis of documents, such as reports, letters, websites and news media.Such data sources can provide insight into the information, frames and storylines presented by different actors, as well as the social interactions among them .Qualitative data collection also raises questions of “sample” size—but sample is in quotations because the objective is rarely to draw a random sample from the population.Qualitative samples tend to be “purposive”, that is, intending to access a variety of experiences to fit the purposes of the study .Unfortunately, there are few guidelines on how many cases is “enough” and no equivalent to the calculations of sampling error used for quantitative survey research.Some qualitative researchers argue that “less is more” in terms of sample size, since depth is more important than breadth .But there can also be value in larger samples, especially if that increases the breadth of perspectives, since this can strengthen both internal and external validity.Further, qualitative studies that compare samples from different cases, regions or settings can frequently produce more useful results.But that said, qualitative “sample” size needs to be examined and explained for each study’s unique research objectives.As with data collection, the analysis of qualitative data can take a range of forms – a feature that may have contributed to the perception that qualitative research lacks clear standards for analytical rigor.Here we mention three broad types of data analysis that represent different degrees of structure—acknowledging that the diversity is greater than we can demonstrate here, and that many qualitative studies use no formal methods of data analysis at all.The most structured approach is content analysis, which involves coding samples of interview or focus group transcripts, documents and communication records with the aim of systematically identifying categories, themes and patterns and reporting these numerically or graphically .Content analysis is most useful for studies that start with a clear theoretical framework or set of expected categories.However, it is not always effective for richer, deeper analysis or narrative description .Richer analysis can be achieved through narrative analyses which seek to analyze text or utterances with the aim of identifying “storylines” that particular actors or groups use to frame a topic or experience .The objective here can be interpretive, or explanatory in the sense of linking cause and effect.Narratives can be identified at an individual level , or more broadly for formal or informal social groups .Discourse analysis can be even more sophisticated, attempting to capture how narratives and rhetoric coalesce into stable meaning systems, institutional practices, and power structures that can constrain or shape agency .Finally, an example of the least structured analytical approach is grounded theory, which seeks to integrate the formulation of theory with the analysis of data, typically iteratively .This research is called “grounded” because researchers seek to avoid wedding themselves to a particular theory before they begin their investigation, instead “grounding” their analysis inductively in the data itself .One particular challenge for grounded approaches is that they appear in a number of forms, each with different descriptions and guidelines, across several sub-disciplines .In summary, the practices of the rigorous 
qualitative researcher include:Effectively match research objectives to the appropriate means of data collection;,Also match research objectives to the type of analysis;,Provide detail about the methods used - such as sample size, questions asked, interview duration, demographic details of respondents, whether results were transcribed, whether data is anonymized or attributed, etc.;,Clearly explain and justify the strengths of the chosen methods;,Include more data when interviews or focus groups are meant to access a wide range of experiences in a diverse and/or large population;,Use the qualitative data in an effective way within the manuscript - for example, by providing illustrative quotations or explaining example observations.Case studies involve in-depth examination of particular subjects or phenomena as well as related contextual conditions, often using multiple sources of evidence .The most cited guide to case study research is by Yin , who recommends the use of case studies for “how or why” questions about contemporary phenomena where the researcher has little control over events.However, case studies are equally appropriate for historical investigations.Case studies are commonly employed within energy social science, but the standards of rigor vary widely .We start by considering several dimensions: type, single versus comparative, temporal variation and spatial variation.Table 9 summarizes six broad types of case study .Typical case studies investigate common, frequently observed, representative, and/or illustrative cases.Examples include case studies of the energy transition in Germany , renewable portfolio standards in the United States and climate change adaptation in Bangladesh .Diverse cases attempt to demonstrate maximum variance along a relevant dimension, so they illuminate the full range of important differences.These capture the full variation of the population, but do not mirror the distribution of that variation.Examples include the nuclear phase out in Germany contrasted with the rebuild of nuclear in the UK , or a comparison of energy transitions in Mexico, South Africa, and Thailand .Extreme cases look for deviant, outlier, or unusual values of some explanatory or explained variable, or an example that illustrates a rare but important occurrence.Essentially, they look for “surprises.,Examples include case studies of the Chernobyl nuclear accident in 1986 or the Fukushima accident in 2011 , Iceland’s adoption of geothermal energy ; Denmark’s ambitious wind energy program ; Brazil’s ethanol program ; and the Deepwater Horizon oil spill .Influential cases seek to challenge or test the assumptions behind a popular or well-established case in the academic literature, say by challenging typical cases.Sticking with our examples, this would include critiques or alternative explanations for the energy transition in Germany , renewable portfolio standards in the United States or climate change adaptation in Bangladesh .The most similar method chooses a pair of cases that are similar on all measured explanatory variables, except the variable of interest.An example would be the progression of the Canadian and American nuclear power programs, which began around the same time in similar market economies but resulted in entirely different designs .The most different approach is the inverse, and refers to cases where just one independent variable as well as the dependent variable co-vary, and other independent variables show different values.An example is contrasting the Chinese 
nuclear program with that of India .The second dimension to consider is single versus comparative case studies.Single cases are useful for exploration and for generating hypotheses - for creating new conjectures in a sort of “light bulb” moment.Single case studies tend to be evidence-rich, allowing a range of relevant factors to be measured and assessed and allowing a consistent and coherent narrative and argument.A good example would be Geels’ historical analysis of the transition from sailing ships to steamships .By contrast, comparative cases are confirmatory and good for testing a hypothesis, or for refuting some of the conjectures arising out of single cases.A good example would be Oteman et al.’s comparative study of the conditions for success in community energy .External consistency is dominant, and comparative cases are useful for examining causal effects beyond a single instance.Empirically, comparative cases must be similar enough to permit meaningful analysis.Comparative case studies thus have greater variation but frequently also less depth since not all relevant factors can be examined.The third dimension to consider is whether a cross-case comparison requires temporal or spatial variation .Spatial variation can provide diversity but also challenge comparability of results.Temporal variation can permit more natural boundaries around analysis as researchers can include as many relevant temporal events as needed, but may require more complex analysis to capture the greater complexity of data.Combinations of spatial and temporal variation can only enhance these strengths and weaknesses.These thoughts lead us to the following codes of practice for case study research:Carefully consider whether to use a single case or comparative cases, as well as whether and how the latter will vary spatially or temporally;,Have a well-defined unit of analysis, with clear boundaries, consistent propositions and measurable dependent and independent variables;,Specify and justify the type of case study chosen, and justify single case studies to warrant publication;,Acknowledge the uniqueness of the chosen case or cases;,Carefully interpret results according to the limitations of the evidence and acknowledge rival hypotheses and explanations.Although we recommend a “codes of practice” approach to rigor, there are some disciplines, communities, and approaches where “hierarchies of evidence” are utilized to determine the strength of a particular study.The concept of hierarchies is most prominent in the health and medical literatures as part of developing concepts of “evidence-based research” or “evidence based policy and practice” and has since expanded to other fields such as social psychology and behavioral economics.The initial hierarchy is most relevant to research based on experimental designs, and it epitomizes a positivist view, placing personal experience at the bottom moving up through uncontrolled experiments to cohort studies and then multiple double blind experiments and randomized controlled trials, and with meta-analysis of randomized controlled trails as the “gold standard” .Similarly, although less prominent, Daly et al. have proposed another hierarchy for qualitative research and case studies with personal experience or a single qualitative case study at the bottom, descriptive studies in the middle, and conceptual or generalizable summarizes or analyses of cases at the top.We have modified this hierarchy in Fig. 
5 by adding more details about types and variation within case studies.These hierarchies of evidence have at least two strengths.They are transparent about expectations in a given field, being exceptionally clear about what constitutes “good” or “better” research among peers in that discipline.Second, the implication that different methods can lead to cumulative impact, where studies can serve as the building blocks for others, can be useful and perhaps effective in moving towards a common understanding of certain, specific phenomena in a given field.For communities and disciplines that subscribe to such hierarchies, research methods at the lower levels—notably anecdotal experience, uncontrolled experiments, pilots, or single case studies—are not necessarily seen as being inferior to “higher” methods or having no value.Indeed, moving up the hierarchy is not possible unless others lay the bricks at the base of the period; meta-analysis for instance depends on the single cases or cohort studies placed lower in a hierarchy.However, these hierarchies are positivist by nature, and tend to reflect and propagate the narrow views of a particular discipline.Some disciplines have been known to rigidly subscribe to such hierarchies, systematically rejecting work that uses methods from a “lower level”.On a related note, the hierarchical view may reinforce the unfortunate notion that quantitative research is necessarily more rigorous, valid, or just plain “better” than qualitative research.As we argue throughout this paper, we favor a more neutral perspective on rigor—identifying codes or principles that improve the quality of each type of social research method.Ultimately, researchers will have to decide which view better aligns with their perspective - taking into account their objectives and disciplinary affiliations.But in general, we advise caution with regard to hierarchies of evidence and recommend the broader codes of practice summarized above.Excellent or at least effective research requires a balance between the codes of practice we mention above.By balance, we mean that studies should not focus solely on maximizing one criteria of rigor, e.g. 
having an enormous sample size, using a particularly sophisticated simulation model, or providing a particularly “thick” description of a case study—at least not just for the sake of doing so.More generally, and perhaps contradictorily, academic research has been criticized for placing too much emphasis on rigor at the expense of impact or creativity—leading to careful but boring research with little social relevance .Instead, the effective use of each method requires tradeoffs.For example: large sample sizes can be costly, and are not necessarily representative; complex energy models can lack transparency, be difficult to parameterise and add uncertainty; and in-depth analysis of a case study might be too detailed to permit extraction of practical, generalizable insights.In short, there are always tensions in research design, which rigorous researchers will consider, and effectively communicate in their research.Another theme that runs throughout our proposed codes of practice is appropriateness: the methods used must be well-suited to the research questions and research objectives.This consideration applies to the overall mode of inquiry, the research method applied, and the specific research design, including level of sophistication and depth of analysis.It is not possible to produce a complete guide of how to work through this “matching” process—though we provide some guidelines here.Overall, we argue that no method itself is necessarily “best”, or “good” or “bad” – rather it all depends on the context and goals of the project.That said, we have identified certain principles or codes that should lead to higher quality research.In considering balance and appropriateness, we emphasize that some studies can involve more than one research method.A paper could start with a narrative review to determine a gap and justify or frame a research question before attempting to answer it with a case study that draws from data collected via qualitative interviews.Another study could begin by surveying a group of actors to solicit their perceptions and expectations, then conduct semi-structured interviews with a subset of that sample to elicit richer, in-depth narratives of how those actors connect those perceptions with their identity and lifestyles.Mixed-method approaches hold particular promise, given that the two rough classes of inquiry—quantitative and qualitative—have particular advantages and disadvantages.Quantitative methods are very good at validating theories about how and why phenomena occur, testing hypotheses, eliminating variables and assessing correlations.However, weaknesses include the fact that a researcher’s categories may not reflect local understanding or context, may miss phenomena because of the focus on testing rather than generating new ideas or insights, and may focus inappropriately on measurable variables rather than underlying causal mechanisms .In contrast, qualitative methods enable data to be based on a participant’s own categories of meaning, are useful for studying a limited number of cases in depth, can be effective in describing complex phenomena or cases, and can better reveal how social actors “construct” different viewpoints .The drawbacks are that qualitative knowledge may not be generalizable to other people or settings, may be of no help in making quantitative predictions, may take more time to collect, and may be more easily influenced by the researchers’ own bias.Thus, there is much to be gained by mixing quantitative and qualitative methods, to avoid the 
weaknesses and to capitalize on the strengths of each.In this way, our definition of rigor is about being “careful and thorough” in one’s research, but not necessarily using the most advanced, sophisticated or complicated method.All methods have their strengths and limitations, so an effective definition of rigor is more of a “good balance across multiple criteria.,In fact, overly complex research designs can be counterproductive, due to limited resources, lack of transparency in the process or results, or diminishing marginal returns for the added effort.In short, temper ambition and do not become paralyzed by seeking perfection.We now turn to perhaps the most prosaic of our three dimensions of what makes good research: style.Although novelty and rigorous research designs are incredibly important, it can be equally important to effectively package and present your ideas to journal editors, peer reviewers, and eventual readers .In that vein, we have three suggestions:Seek a coherent and cohesive macrostructure to an article, including elements such as titles, sub-headings, placement of paragraphs and regular signposting;,Pursue clarity of expression in microstructure;,Aim for transparency, think critically and examine and communicate the limitations of the analysis, especially insofar as you can explicitly preempt objections, and bring humility to your research.These components of style amount to conveying information in a meaningful, accessible and well-reasoned manner.They remind researchers that producing research—asking research questions, designing a study, collecting data, analyzing data—is still very distinct from reporting that research on paper .This first element of style emphasizes the “big picture” of how a manuscript looks and reads.An effective writing structure boxes your analysis and funnels information.To assist researchers in developing better macrostructure, we offer a few tips.First, although the “standard” IMRAD structure of “Introduction,” “Materials/Methods,” and “Results and Discussion” can work well for many manuscripts, authors can deviate from parts of it.For instance, both the “Literature Review” and “Results and “Discussion” of the paper can be organized in numerous creative ways .For example:A chronological structure portrays events, or presents cases, as they happened over time, aiming to provide an overview or history of the relevant topic .A conceptual structure adheres to the units of analysis, components, or sub-components of a particular academic theory .A cross-disciplinary structure presents data according to the specific disciplines or domains of knowledge it comes from, e.g. 
linguistics, sociology, history, mathematics, or anthropology .A hypothesis-testing structure first introduces various hypotheses or suppositions and then organizes the results around testing, validating, or challenging them .A spatial or country structure organizes results by the countries or geographic case studies being examined .A technological structure organizes results by the specific systems, technologies, or energy services being analyzed .A thematic structure organizes results around the themes emerging from the analysis, from different dimensions to recurring topics .A narrative structure organizes the data and results around a compelling storyline .A hybrid structure combines some of the structures above, such as: laying out a theory alongside country case studies , by summarizing country case study results by theme , or by presenting propositions from within the disciplines they originate .Indeed, a compelling case has been made for greater use of narrative structures as an effective form of communication given that human beings are dramatic creatures at heart .That said, many students and novice writers may want to start with a more conventional structure.In any case, papers should aim to tell a good story, and the structure needs to be decided before writing commences—and in most cases will be adjusted as the writing proceeds.We also recommend beginning a paper by generating a high-level outline, to help plan the structure and to assess how it all fits together.Once a structure has been chosen and a condensed outline generated, we have a few other tips for structuring a manuscript .Authors should carefully select their title, headings and sub-headings, as these will help signpost an article.Titles are especially important, and should mention not only the topic but also findings and case studies.Provide roadmaps and textual bridges that connect the different sections of a manuscript; at times, summative tables and figures that preview or synthesize an article’s findings or structure can be useful.By leafing or scrolling through an article, a reader should be able to spot the main findings easily, as well as figure out how the research was conducted, and locate any crucial definitions needed to understand its results.Aim for similarity of length between the comparable sections of a manuscript—for example, cases or sub-sections should be roughly the same size.At the same time, do not force this, as in some instances there can be a good reason to have different sizes.Maintain paragraph cohesion and a clear flow of logic: paragraphs need to be tied together in a smooth manner, otherwise it appears as if an author is simply throwing facts at the reader.Some find particular success with the use of a “topic sentence outline” that specifies each section title, and a single, topic sentence to represent each paragraph of the manuscript.Such an exercise helps to initially map out the article, and can be adjusted iteratively with the eventual manuscript throughout the drafting process.Such outlines can be particularly effective for planning and organizing expectations among a set of co-authors.Recognizing there is a strong subjective element to “good” structural writing, we nevertheless recommend the list in Table 10 as a starting point.It contrasts a generically “good” paper with a “bad” paper across the constituent components of a typical manuscript.If an article’s overall macrostructure is the foundation on which a manuscript is built, then the microstructure—sentences, words, 
diagrams, tables, figures, references—are its mortar and bricks.Although there is no universal approach to the mechanics of microstructure, most well-written manuscripts maintain the following :Paragraph unity, or “one idea per paragraph.,Each paragraph should have one topic sentence.That is, a sentence that contains a subject, verb and object that define what the paragraph is all about.In most cases, the topic sentence is the first sentence but it can appear elsewhere.All other sentences are support sentences - intended to support the claim made in the topic sentence.So in this case, one would expect to see evidence that demonstrate the price is increasing.The paragraphs should not have any other information.So, if an author wants to explain why the price of oil is increasing, it should be either done in a separate paragraph with a new topic sentence or the topic sentence for the original paragraph should be rewritten.Paragraph parsimony.Authors should keep most paragraphs to a reasonable length; avoid excessive support sentences or examples, and let a paragraph rest when the point has been made.Subject or verb/object congruence.Authors should ensure analysis or examples are coherent.For example, if one writes that “the price of oil is booming,” this is incongruent as prices cannot boom, however often reported as such in the media.Idioms and colloquialisms work only when compatible.Comprehensive referencing.Authors should properly reference every factual claim, statistic, direct quote, or study/finding that influenced your argument.Always err on the side of referencing, and always go to the original source.Further, authors should strive to put others’ work into their own words—and be sure to use quotation marks in those rare instances where it is appropriate to use the source’s original words.Appropriate length.As a general rule, authors should aim for brevity.If a researcher can say it in fewer words, or with fewer examples, do so.As the saying goes, “I would have written you a shorter letter but ran out of time.,Conveying information via a condensed number of words is often more difficult than lengthy exposition—yet the condensed version can be much more readable and useful to a target audience.Minimal jargon and acronyms.Arguably, any piece of writing should seek to be accessible to a wide audience, and this is especially true for the interdisciplinary and applied work in our field.Authors should thus take the time to identify and carefully define any pieces of “jargon” used in the paper, and to minimize the use of such jargon where possible.Similarly, acronyms should be used sparingly, and when used should be carefully spelled out when first introduced, or summarized in a list of abbreviations at the beginning of the manuscript.Admittedly, the above tips are mostly about the mechanics of writing.What about the stylistic elements—adding vim, vigor, flair, and character to your writing so the words sparkle and the manuscript keeps readers riveted?,Here, although it is even more difficult to distil lessons, we advocate a few.Aristotle believed that effective communication rested not only on logic but also emotional connection and credibility—good manuscripts often possess all three.Writing more than a half century ago, George Orwell critiqued writing for being prone to dying metaphors that have worn out and lost all power; for using phrases instead of verbs; and for dressing up simple statements with big or foreign words.To counter these trends, Orwell offered six general rules that we 
find helpful:Never use a metaphor, simile, or other figure of speech which you are used to seeing;,Never use a long word where a short one will do;,If it is possible to cut a word out, always cut it out;,Never use the passive voice where you can use the active;,Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent;,Break any of these rules sooner than say anything outright barbarous.And, because Orwell was talking about writing in general, we have a few more tips tailored especially for academic articles:Effectively utilize visual aids to enhance the impact of your writing;,Use rhetorical devices to enhance the appeal of your writing;,Have fun, be creative, and don’t be afraid to experiment .Writing is too important a part of the academic career to not enjoy at least part of it.Our last suggestion is to be transparent about assumptions, to think critically and to actively acknowledge and explain limitations.Although such an exercise could fall partly under rigor, we have put it in style because it is an important stylistic technique that we wish every manuscript employed.One way of systematically being critical is to always consider the five “tests” for a manuscript .Do the assumptions of a model or a theory fit?,Do the conclusions follow from the premises?,Do the implications of the argument find confirmation in the data?,How much better is the argument than other, competing explanations?,How useful is the explanation for understanding or explaining other cases?,Considering these tests may mean explicitly adding text to your manuscript that acknowledges the key limitations in method, theory, generalizability of findings and so on.Furthermore, part of aiming for transparency, reflection, and humility is to appreciate the necessity of the process of revising and editing.Experienced writers commonly report that only 20% of their writing time is on the first draft, with the remaining 80% on revisions, edits and re-writes.Kazuo Ishiguro, who won the 2017 Nobel Prize in Literature, remarks that good writing requires “a willingness to be terrible” the first time around, before people see it .Feedback from others—colleagues, peers, editors, even expected critics—is always good before submission.Actively seek comments and criticism on a manuscript, since these are far more helpful than praise.To conclude, we’ve thrown a capacious amount of recommendations at readers.As such, it is difficult to offer any type of definitive guidance or checklist for how to design, implement and write more novel, rigorous, and stylistic studies.After all, in many ways research itself is a “method of discovery” or a “craft of inquiry” with no predetermined answers or fully agreed upon processes.Albert Einstein is reputed to have said that “if we knew what we were looking for, it wouldn’t be called ‘re-search’.,In particular, the codes of practice and hierarchies of evidence that we identify reveal a diversity of research designs and very different approaches, goals, and aims.All too often, when one moves away from the limits of a single disciplinary idea of novelty, rigor, or style, then the guidelines disappear, so we end up with an abundance of low quality work, and in some cases a lack of appreciation for high quality work.Thus, given the clear importance of interdisciplinarity in energy social science, we argue that guidelines are strongly needed.This is not to say that a rigorous researcher needs to be completely interdisciplinary, fully trained in all 
relevant research methods—but at a minimum they need to have a basic awareness and appreciation of alternative paradigms, viewpoints, and methods.Such appreciation will inject an appropriate level of humility into their work and will improve their ability to conduct and comprehend literature reviews, identify research gaps and effectively build collaborative, interdisciplinary research teams.In this admittedly lengthy but hopefully holistic review, we have sought to establish a comprehensive and clear set of guidelines for the interdisciplinary field of energy social science.These are not dogmatic, but instead highlight general principles that are often missing or implied.We therefore posit that stronger research tends to:Clearly state objectives.Good papers explicitly ask a research question and/or set out to achieve particular aims and objectives.Be empirically grounded in evidence.Good research is data-driven, based on a foundation of empirical data rather than opinion.Have and communicate a research design.Good papers are as explicit as possible about the research design and methods employed, cognizant of codes of practice, and appropriate and balanced in their execution.Appreciate multiple methods.Rigorous researchers will explain how their method compares to alternative methods and approaches.Even better, novel and rigorous research designs can combine at least two complementary methods.Theorize.Many good papers connect themselves to social science concepts or theories.They test concepts, engage in debates, and elaborate on conceptual findings about the relationship between energy and society.Address generalizability.Comparative research can have broader impact.Research in one region, such as a survey conducted in one country, or a single case study, needs to make a strong argument for how the results contribute to theoretical development or are applicable beyond that case.Be stylistically strong.Good papers utilize a coherent macrostructure and microstructure, and are written in a way that is crisp, clear and creative and fun.Emphasize strengths and weaknesses.Rigorous researchers fully acknowledge, explain, and preempt limitations in design, case study selection, methods or analysis.These principles suggest that energy social science research is enhanced by the principles of diversity, inclusion, creativity and reflection.Such research is clearly conveyed so assumptions are apparent as well as strengths and weaknesses.It may require teams of researchers and years of hard work to make a significant contribution, thus requiring both persistence and patience.There is value to smaller-scale, incremental contributions, where the guidelines we provide above apply just as well.Each new published insight can contribute to the broader body of knowledge, in particular through eventual literature reviews on the subject.Similarly, in more positivist, quantitative disciplines, individual experiments and statistical analyses are the building blocks for a later systematic review or meta-analysis.That said, as much as we want to offer tips and guidance, we must also remember that energy social science is both a science and an art .It must be not only logical but emotionally impactful and credible.It is not only dialectic but rhetoric.It is not only analysis but argument – the effective presentation of ideas to an audience.While energy social science remains a collective endeavor, outstanding research shines when it excels across the three dimensions of novelty, rigor, and style. 
| A series of weaknesses in creativity, research design, and quality of writing continue to handicap energy social science. Many studies ask uninteresting research questions, make only marginal contributions, and lack innovative methods or application to theory. Many studies also have no explicit research design, lack rigor, or suffer from mangled structure and poor quality of writing. To help remedy these shortcomings, this Review offers suggestions for how to construct research questions; thoughtfully engage with concepts; state objectives; and appropriately select research methods. Then, the Review offers suggestions for enhancing theoretical, methodological, and empirical novelty. In terms of rigor, codes of practice are presented across seven method categories: experiments, literature reviews, data collection, data analysis, quantitative energy modeling, qualitative analysis, and case studies. We also recommend that researchers beware of hierarchies of evidence utilized in some disciplines, and that researchers place more emphasis on balance and appropriateness in research design. In terms of style, we offer tips regarding macro and microstructure and analysis, as well as coherent writing. Our hope is that this Review will inspire more interesting, robust, multi-method, comparative, interdisciplinary and impactful research that will accelerate the contribution that energy social science can make to both theory and practice. |
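The content-analysis step described in the review above (coding interview or focus-group transcripts and then reporting categories, themes and patterns numerically) can be illustrated with a minimal sketch. The coded segments and theme labels below are invented for illustration only and are not taken from any study cited in the review.

```python
# Minimal content-analysis tally: count how often each coded theme occurs
# across transcript segments, and how often themes co-occur in a segment.
# Segments and theme labels are illustrative assumptions, not real data.
from collections import Counter
from itertools import combinations

# Each transcript segment has already been assigned one or more theme codes.
coded_segments = [
    {"segment": "We switched because the bills kept rising.", "codes": ["cost", "trust"]},
    {"segment": "The installer explained everything clearly.", "codes": ["trust"]},
    {"segment": "Neighbours doing it made it feel normal.", "codes": ["social_norms"]},
    {"segment": "It was cheaper than I expected.", "codes": ["cost"]},
]

# Frequency of each theme (the "reporting numerically" step).
theme_counts = Counter(code for seg in coded_segments for code in seg["codes"])

# Co-occurrence of themes within the same segment, useful for spotting patterns.
pair_counts = Counter(
    pair
    for seg in coded_segments
    for pair in combinations(sorted(set(seg["codes"])), 2)
)

print(theme_counts)  # e.g. Counter({'cost': 2, 'trust': 2, 'social_norms': 1})
print(pair_counts)   # e.g. Counter({('cost', 'trust'): 1})
```

The resulting tallies could then be reported graphically or compared across respondent groups, in line with the review's description of structured content analysis.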
380 | Hydro-geometrical data analyses of River Atuwara at Ado-Odo/Otta, Ogun State | The dataset comprises hydro-geometric analyses of selected sampling points on the River Atuwara, located in Ado-Odo/Otta, in southwest Nigeria. The hydro-geometric data was collected with equipment such as a depth meter, a paddled boat, a tape measure, and a global positioning system (GPS). Fig. 3 illustrates the hydro-geometric data collection process. Geometric values are shown in Table 1, with their respective unit standards. Relationships between various units of measurement were derived statistically and are presented in Figs. 4–6. Hydro-geometric data of the Atuwara River were collected at sixteen referenced points. Measurements at the sixteen referenced points were taken with the use of a boat and a Speedtech portable depth sounder. A global positioning system unit was used to record the location of each of the sixteen referenced points within the Atuwara river. Fig. 2 shows the River Atuwara watershed and built-up areas, while Fig. 1 is a plot of cross-sections within the Atuwara river system with their respective hydro-geometric channel labels. The dispersion D was analysed as a function of Eq. . River Atuwara is located in the Ado-Odo/Otta local government area of Ogun State, at co-ordinates 523883N, 745372E. The river moves transversely toward other neighboring villages and serves as a water source. Fig. 2 shows the river and other built-up areas. The course of River Atuwara flows westward toward the Atlantic Ocean. After collection, the hydro-geometric cross-sectional data was analyzed with the use of Microsoft Office. The study assumes that irregular channel cross-sections can be represented by hydraulically equivalent trapezoidal cross-sections, as shown in Fig. 1. The hydro-geometric data was processed to determine the average depth of each cross-section, assuming the top width of each cross-section remained unchanged. Methods and processes of measurement, data collection and recording employed along the river course are shown in Fig. 3. Statistical analyses, such as comparisons between the various measured variables, were applied. The statistical summaries are shown in Figs. 4–6. The relationship between two compared variables can be obtained from the coefficient of the x-variable in the regression equations indicated in Figs. 4–6. A negative gradient indicates an inverse relationship, while a positive gradient indicates a direct relationship. The authors received no direct funding for this research. | The dataset analyzed in this article contains spatial and temporal values of the hydro-geometric parameters of River Atuwara. Hydro-geometrical data from various sampling points on River Atuwara were examined, and the geometric properties were taken with the use of a paddled boat, depth meter and global positioning system (GPS). The co-ordinates, width, depth, slopes, area, velocity and flow were obtained in-situ while the area and wetted perimeter were computed ex-situ. The statistical relationships between separate variables were considered using scatter plots and regression line equations. Inferences drawn from various variable comparisons can be used to validate predictive models for various time seasons. |
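As a rough illustration of the trapezoidal-equivalence and regression steps described in the data article above, the sketch below computes the geometry of a hydraulically equivalent trapezoidal section and fits a simple regression between two hydro-geometric variables. The channel dimensions and the depth/velocity values are invented example numbers, not measurements from the Atuwara dataset, and a symmetric trapezoid is assumed.

```python
# Hydraulically equivalent trapezoidal cross-section and a simple regression
# between two hydro-geometric variables. All numbers are illustrative only.
import numpy as np

def trapezoid_geometry(top_width, bottom_width, depth):
    """Area, wetted perimeter and average depth of a symmetric trapezoidal section."""
    area = 0.5 * (top_width + bottom_width) * depth
    side = np.hypot((top_width - bottom_width) / 2.0, depth)  # length of each sloping bank
    wetted_perimeter = bottom_width + 2.0 * side
    avg_depth = area / top_width  # average depth for an unchanged top width
    return area, wetted_perimeter, avg_depth

area, wp, d_avg = trapezoid_geometry(top_width=12.0, bottom_width=8.0, depth=2.5)
print(f"A = {area:.2f} m^2, P = {wp:.2f} m, mean depth = {d_avg:.2f} m")

# Regression between two measured variables (e.g. depth vs. velocity): the sign
# of the fitted gradient indicates a direct (positive) or inverse (negative)
# relationship, as in the scatter plots and regression line equations.
depth_m = np.array([1.2, 1.8, 2.1, 2.6, 3.0])
velocity_ms = np.array([0.65, 0.58, 0.55, 0.49, 0.45])
gradient, intercept = np.polyfit(depth_m, velocity_ms, 1)
print(f"v ~= {gradient:.3f} * d + {intercept:.3f} (negative gradient -> inverse relationship)")
```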
381 | Prospective techno-economic and environmental assessment of carbon capture at a refinery and CO2 utilisation in polyol synthesis | Carbon dioxide can be used as feedstock in the synthesis of fuels, chemicals and materials .CO2 utilisation has recently gained interest and is, for instance, part of the latest European Union strategy to mitigate climate change .Identifying and understanding the challenges and performance of CO2 utilisation technologies, however, is complex.There is no current consensus on what role these technologies can play in realising large reductions in CO2 emissions .To play a major role, the environmental performance of utilisation options should lead to extensive net CO2 emission reductions.However, studies have shown that, depending on the process and system boundaries, net emissions could in fact increase .Besides reducing net CO2 emissions, CO2 utilisation needs to be a viable candidate for upscaling and offer sufficient revenue to become a realistic solution to climate change.Although most literature sources link CO2 utilisation to the power sector, CO2 utilisation can and probably will be implemented in industrial clusters .It is therefore important to assess how such concepts could also be integrated in industrial CO2 mitigation strategies.The refinery sector is responsible for 10% of industrial emissions, of which 20% originates from the production of hydrogen .Hydrogen production processes have the advantage that CO2 separation facilities are already available on-site .Furthermore, CO2 capture can be implemented in hydrogen manufacturing units using commercially available technology in a cost-effective manner since the CO2 stream is emitted at relatively high pressure.Moreover, waste heat integration from nearby facilities may reduce the energy penalty of the capture unit .One utilisation route that has drawn attention is the synthesis of polyethercarbonate polyol for polyurethanes .Different research groups have studied the feasibility of CO2-based polyol synthesis , and the manufacturing process has been described in several patents .Moreover, Covestro started a demonstration production line in 2016 in Dormagen, Germany, with a capacity of 5 kt/a of polyol for application in polyurethane flexible foams Bio-based News, 2016 indicating the technical feasibility of the option.Environmental assessments have shown that polyol synthesis based on a pilot plant for CO2 capture from a power plant had lower global warming impacts than traditional polyol manufacturing routes .However, the environmental assessment of this study was carried out at a demonstration scale rather than at commercial scale.Also, the mismatch between the CO2 amounts emitted by the source and the amounts used by the CO2 sink were not addressed.An integrated assessment of the technology, costs, and elaborate environmental impacts of CO2 utilisation for polyol production at full commercial scale with system boundaries including an alternative CO2 source and steam production, is yet to be carried out.Polyols are already included in the chemicals product portfolio of some refinery companies.Therefore, the use of the large amounts of CO2 emitted at a refinery for on-site polyol synthesis may benefit from synergies.With a current global polyols market of about 6.7 Mt/a, a demand of 0.12 Mt/a of CO2 for polymer application is estimated if the European polyol market continues to grow at the expected rates .However, this amount is small compared to the CO2 emissions from industrial hydrogen units 
technologies is characterized by large uncertainties and limited information due to confidentiality or the lack of process data.Therefore, a comprehensive uncertainty analysis that allows a better understanding of the knowledge gaps and robustness of the results must accompany an evaluation of the technology performance.In this study, an integrated techno-economic and environmental assessment in combination with uncertainty analysis is conducted of CO2 utilisation for polyol production at a refinery.The goal of this study is to investigate whether the implementation of CCU in combination with partial carbon storage is a cost-effective mitigation option for this industrial sector.The structure of this paper is as follows: the integrated approach applied is presented in Section 2.The three different case studies developed are described in Section 2.1.The technical modelling is explained in Section 2.2.Based on the results of the technical models, an economic evaluation is carried out.Technical and economic models are used to develop a life cycle inventory and perform an environmental assessment.Section 2.5 describes the uncertainty analysis.In Section 3, the outcomes and key indicators of the technical, economic, environmental and uncertainty assessments are presented and discussed.Finally, in Section 4, the limitations and the major implications of this research are addressed.This research uses the environmental due diligence framework developed as part of the European EDDiCCUT project .The framework provides a systematic assessment of existing and emerging carbon capture, storage and utilisation technologies by integrating technical performance, cost estimation and life cycle inventory data with uncertainty analysis.The key elements of the framework and their application to the case study are described in this section.To assess whether the implementation of this CO2 utilisation option in combination with partial carbon storage has advantages with respect to the common practice in industry, a reference case was designed: a refinery with a hydrogen unit without CO2 capture and a conventional polyol synthesis process.Additionally, a case with carbon capture and storage but without CO2 utilisation was investigated to understand potential benefits of CCUS over CCS.To ensure system equivalence, in the reference and CCS cases, the same amount of hydrogen, polyol are produced as in the CCUS case.The three different systems investigated are:Reference case, Fig. 1a: refinery with H2 manufacturing unit without CO2 capture; conventional polyol synthesis.Storage case, Fig. 1b: refinery with H2 manufacturing unit with CO2 capture and storage; conventional polyol synthesis.Utilisation and partial storage case, Fig. 1c: refinery with H2 manufacturing unit with CO2 capture and utilisation for CO2-based polyol synthesis.The captured CO2 that cannot be used in polyol synthesis is stored, similar to case ii.The temporal scope for all cases is 2015 and the geographical location is Northwestern Europe.The same process sizes were defined for the three cases: 77 kt/a of H2 production and 250 kt/a of polyol production.The different processes that are part of the value chains have been combined in interconnected system areas taking into account sequence, location and similarities.In this way, data is consistently organized and easily shared among the different research disciplines.Fig. 
1 presents the SAs of each case study.A more detailed description of each process is provided in the Supplementary material.The reference case is based on data from a real refinery in Asia, which produces 77 kt/a of H2 at 99.99% purity via naphtha steam reforming followed by a water gas shift reaction and pressure swing adsorption.In this refinery, desulphurised naphtha and steam are pre-heated to 520 °C and fed to the reformer.After heat recovery, the reformer products flow to the WGS reactor.The WGS product stream contains 43 wt% water, which is removed in a process condensate separator unit.After water removal, H2 is recovered in a pressure swing adsorption unit with an overall yield of 89 wt%.The offgas of the PSA unit is fed to the furnace section of the reformer and burned with air for heat recovery.The energy provided by burning the PSA offgas is not enough to drive the endothermic steam reforming reactions, so additional naphtha is used as fuel to achieve the reformer temperature and duty requirements.Hot flue gases and process gas from the reformer are cooled by preheating the reformer feed and by generating steam.In the reference case, there is no carbon capture, thus 890 kt/a of CO2 are emitted to the atmosphere.Alternatively, CO2 can be captured in the H2 unit.As in the previous case, H2 is produced via naphtha steam reforming followed by a WGS reaction.The most efficient CO2 capture point in steam reforming facilities is upstream the PSA unit .Chemical absorption with ADIP-X solvent and piperazine) leads to a capture efficiency of 95% of the total CO2 emissions, which corresponds to 552 kt/a of CO2.Also in this case, the offgas of the PSA unit is burned in the furnace of the reformer.Since the CO2 is captured upstream the PSA, the PSA offgas has higher calorific value, and consequently naphtha fuel requirements for the furnace are lowered with respect to the reference case.The reduced CO2 content in the feed gas will affect the PSA cycles and time, which should be adjusted so the separation targets are met despite the CO2 feed variation.Note that the impact of CO2 capture on the performance of the PSA unit is however not covered in the scope of this study.H2 recovery efficiency in the PSA was assumed 89% for all cases.The captured CO2 can be either transported for storage or be partially utilised in polyol synthesis and partially stored.In the CCS case, a compression train formed by four compression stages with intercoolers and a final pump is applied to reach 110 bar.At that pressure, CO2 is in a supercritical state for transport 2.5 km onshore and 95 km to an offshore aquifer, where it is stored.In the CCUS case, the CO2 stream is split after the second compression stage.The required amount of CO2 is used in polyol synthesis while the rest is further compressed to 110 bar and sent to storage.In the CCS and CCUS cases, CO2 emissions are reduced to 271 kt/a.Further details are provided in the Supplementary material.Propylene oxide, glycerol and monopropylene glycol are the starting materials in the synthesis route of conventional polyether polyol,.The reaction takes place at 135 °C and 3 bar .Double metal cyanide is used as catalyst, recovered via filtration after the reaction step and disposed as waste.Odours and other impurities are removed from the polyol product in a vacuum-stripping step.The key difference is that part of the PO used in the conventional route is substituted by CO2.Reaction conditions are 135 °C and 20 bar .After the reaction, the excess CO2 is recovered 
in a flash step and recycled back to the reaction.Cyclic propylene carbonate is produced as a by-product .In this study, we assume it is removed in the vacuum stripper together with the odours .The CO2 content in the polyol is 20 wt% because at higher shares, the polyol viscosity increases to the point of making it unsuitable for flexible PU foam application .In the Supplementary material, a more detailed description of each process is provided.Process models were developed for the H2 unit with and without CO2 capture and for the conventional and CO2-based polyol synthesis.The H2 unit was modelled in Aspen Plus V8.4.Using process data from a refinery hydrogen manufacturing unit in Asia, the model of the H2 unit was validated with good accuracy.The process streams, pieces of equipment and the efficiencies of the reformer, WGS and PSA unit are equal regardless the location of the H2 unit.However, cooling water temperature, cooling requirements and availability vary depending on the local ambient temperature.Since the geographical scope of this study is Northwestern Europe, sea filtered water at 15 °C with no limited availability is used to fulfill the cooling requirements.The model of the capture unit was based on a previous in-house study at Utrecht University .The H2 concentration entering the PSA must be equal to or greater than 70 mol% for an economical PSA process that achieves 85% per-pass H2 separation .The H2 concentration entering the PSA was 72 mol% in the model of the H2 unit without capture and 91 mol% in the model of the H2 unit with capture.The conventional polyol production process was assessed with a spreadsheet model using reaction parameters, polyol properties and process line-ups described by experts in polyol R&D and manufacturing .The CO2-based polyol model was based on several literature sources and patents and also specified in a spreadsheet.Following consultation with experts from the polyol manufacturing sector , the heat of reaction of the CO2-based polyol is reduced compared to the heat of reaction of the conventional polyol, by the amount of CO2 introduced into the polyol.The PO ring opening reaction is exothermic and the CO2 bond breaking is an endothermic reaction .Since in the CO2-based polyol synthesis, CO2 substitutes part of the PO that reacts, the total heat released in the CO2-based polyol is lower than that of conventional polyol.The overall polymerization reaction in both conventional and CO2-based polyol synthesis is exothermic, but the energy released in the CO2-based polyol is lower.Although an external cooler is required in both exothermic reaction steps, the cooling requirement of the CO2-based polyol synthesis is lower than that of the conventional polyol.Details on the data used in the polyols models are reported in Appendix A and the Supplementary material.Using these models, the mass and heat balances and the equipment sizes of the three cases were calculated.Key performance indicators were selected to compare the technical performance of the three alternatives.CO2 flows were chosen to evaluate the emissions reduction and potential for utilization.Naphtha fuel consumption was selected to quantify the savings in the cases with CO2 capture, due to an enhanced heating value of the PSA offgas burned in the furnace of the reformer.PO is the main feedstock for polyol synthesis, and replaced by CO2 in the utilization case.The primary energy use indicator reflects the increase in energy demand due to the capture unit and compression train in the capture 
cases and the additional energy of the CO2-based polyol production. To carry out the cost estimation, it was assumed that the H2 unit and the polyol plant are extensions to an existing plant located in Northwestern Europe. They are built in an existing industrial area with all utilities and support in place. Specific control rooms or buildings were excluded. The host site was assumed to deliver the utilities and therefore facilities such as cooling towers or steam production were excluded from the cost estimates. The same level of detail was implemented for each case study, allowing a fair comparison of the results. To estimate the capital costs, a detailed equipment list was derived from the technical models. The Capex of SA 2 was based on a previous detailed in-house economic evaluation of a H2 unit with the same pieces of equipment and stream compositions, but with smaller capacity. The different sizes of the equipment were adjusted to the equipment sizes required in this study using the exponent method, in which cost is scaled with the capacity ratio raised to a scale exponent (C2 = C1·(S2/S1)^n). The scale exponent varies for different types of plants. As a typical value for petrochemical processes, 0.65 was chosen. For estimating capital costs of SA 4, design conditions and equipment size from the technical models were used as input to the Aspen Capital Cost Estimator. The Aspen software provided the purchased equipment costs (PEC). Based on the PEC, the bare erected costs of the equipment and the engineering, procurement and construction costs were estimated applying typical factors for project capital cost items. Maintenance costs were assumed to be 4% of the capital costs. Appendix B provides further details on the values assumed for the Opex estimation. Data from the European Zero Emission Platform (ZEP) was used as the basis for estimating CO2 transport and storage costs. It was assumed that the number of injection wells drilled is proportional to the amount of CO2 stored and that the field has constant injectivity and permeability. Therefore, the storage costs provided in the ZEP report were proportionally adjusted to the amount of CO2 stored in each case study. Transport costs were estimated based on the pipeline diameter, length and pressure drop using an in-house pipeline model. The levelised cost of product (LCOP) was used as the main economic indicator and is defined as LCOP = [Σi (Ii + Oi)·(1 + r)^(-i)] / [Σi Pi·(1 + r)^(-i)], where Ii is the investment cost in year i, Oi the operational costs in year i, r the real discount rate, and Pi the product production in year i. This indicator allows the comparison of the economic performance of H2 and polyol synthesis following different routes, as in the three cases investigated. In the CCUS case, the LCOP per kg of polyol included the costs of polyol production and a share of the costs of CO2 capture and compression. This fraction was estimated using the mass percentage of the captured CO2 that was used for polyol production. The LCOP per MJ of H2 included the costs of SA 1, SA 2, SA 3 and the remaining capture and compression costs. A break-even analysis was carried out based on the LCOP of H2 and polyol, their annual production capacities and the amount of CO2 emitted in each case study. The break-even analysis shows the minimum cost of CO2 that would make the CCS and CCUS cases, including CO2 capture, transport and storage, economically more attractive than the reference case. The payback period was also estimated to compare the time needed to recover the investment in each case study. A H2 market price of 1135 €/t was assumed based on crude prices of about 45 US$/barrel, since naphtha derived from crude is the source of H2. The market price of the polyol was estimated based on the values reported in Shen et al.
, which are specific for flexible polyols for polyurethane foam application.The value was updated to 2015 using the chemical products price index so a value of 1700 €/t of polyol was used for the payback period calculation.In this framework, a hybrid life cycle assessment was used.Hybrid life cycle approaches combine economic and process data to develop life cycle inventories with high detail from process flows and improved completeness by addition of cost data.This allows for input of plant-specific production and capital expenses data that can improve LCA modelling as conventional LCA comprises a high resolution of bottom-up physical processes but suffers from incomplete system boundaries .The environmental assessment comprises the inventory development and impact quantification for the whole value chain.A hybrid approach was applied to assess the environmental performance of the H2 production, CO2 capture, CO2 compression and both conventional and CO2-based polyol units.For these units, process data from the technical assessment was supplemented with the capital cost data to model the infrastructure.The value chains for naphtha and precursor chemicals and CO2 transport and storage, were modelled entirely using a process LCA approach.Key assumptions taken and the full LCI are in the Supplementary material.Advanced contribution analysis and structural path analysis were used to determine key processes and process chains responsible for environmental impacts.Seven environmental impact indicators were evaluated applying the ReCiPe 1.11 characterization methodology with the hierarchist approach .The complete list of the environmental indicators evaluated is presented in Appendix C.The ecoinvent v.3.2 database was used to characterise the physical background of the production systems.The 2011 dataset from the EXIOBASE 3.3 environmentally extended, multi-regional supply-use/input-output database was used to model the economic background for infrastructure of some SAs for hybrid modelling.Since a key driver of applying CCU is to reduce CO2 emissions and to substitute fossil feedstock by CO2, from the seven indicators included in the environmental assessment, climate change and fossil depletion were selected as key environmental performance indicators to compare the CCUS system with the reference and CCS system.Photochemical oxidant formation was also selected as a key indicator to capture the differences in impact from the H2 unit with and without carbon capture due to a different composition of the PSA offgas.As the goal of the study is to assess the co-production of hydrogen and polyols, the system expansion approach is used for fair comparison of the three systems.The functional unit for all three cases is thus the production of 1 MJ H2, 0.03 kg polyols and 0.187 kg low pressure steam.In the REF and CCS cases, the polyols are produced through conventional synthesis, while in the CCUS case some of the captured CO2 is used as a feedstock to the novel polyol synthesis.Annual product output, or plant capacity, remained constant for all three cases, at 77 kt/a H2 production and 250 kt/a polyol production.To allow a fair comparison, the same net output of 1727 kt/a of low pressure steam from heat integration is assumed in the three case studies.As a result, additional low pressure steam, which is produced in a natural gas boiler, is required to meet this output in the CCS and CCUS cases.Qualitative and quantitative uncertainties were identified performing pedigree analysis and sensitivity 
analysis, respectively.Pedigree analysis addresses the strengths and weaknesses in the knowledge base underlying a parameter and/or model by carefully reviewing the background of that parameter/model .In combination with sensitivity analysis, pedigree analysis allows understanding the limitations of the prospective assessment carried out for the CCS and CCUS technologies studied in this work.Uncertainties, strengths and weaknesses of particular areas are identified at an early stage, which is added value information for researchers, companies and policy makers when assessing the performance of emerging CCS/U technologies.To minimize subjectivity, pre-defined pedigree matrices were used.For each research discipline, a different pedigree matrix was applied, reflecting the specific characteristics of technical, economic, or environmental data and models.An ordinal scale from 0 to 4 was used to evaluate the knowledge strength of each parameter or model.The scores were expressed with a colour code to aid the easy interpretation of the uncertainty status.Sensitivity analysis was conducted for the technical and economic models of the CCUS case.A contribution analysis of the SAs to the environmental impacts was carried out for the environmental assessment.Six technical input parameters were varied to assess their impact on the primary energy requirements of the total production system and the CO2-polyol synthesis,.All these parameters are reaction parameters of the CO2-polyol synthesis.They were selected because the CO2-polyol is the most novel part of the system and therefore the level of uncertainty of those input parameters is intrinsically higher.The economic parameters chosen for the sensitivity analysis were the prices of the major feedstocks, the Capex of the H2 unit and polyol SA and the discount rate.The Capex was varied −30% to +50% because this is the inaccuracy range of the estimated baseline values .The effect of varying these parameters on the LCOP of H2 and polyol was calculated to identify in which scenarios CCUS for polyols is still an interesting business case.The results of the technical, economic and environmental models developed for the reference, CCS and CCUS cases are discussed in the next sections.The technical model outputs are presented first because the cost estimation built upon them.Since both the technical and economic results were used for the life cycle assessment, the environmental results are presented last.Uncertainty analysis outcomes are discussed within each research area.Table 3 shows the key results of the technical models.Further details of the energy and mass balances are shown in Appendix A.The combination of H2 and polyol production processes is interesting from both the refinery and polyol manufacturing perspectives.CO2 capture in the refinery leads to lower emissions.In the CCS and CCUS cases, there is a reduction of 65% of the CO2 emissions with respect to the reference case.The remaining 35% of CO2 is emitted to the atmosphere as part of the reformer furnace flue gas.More specifically, since the commercial-scale polyol plant can only use 10% of the CO2 captured from the typically sized hydrogen plant, the rest of the captured CO2 is sent to storage in the CCUS case.This is a relevant finding because it shows the limitations of this CO2 utilisation option in mitigating the CO2 emissions of an industrial source.Alternatively, the implementation of CO2 utilisation to larger markets such as transport fuels have been investigated .The production of fuels 
from CO2 would not contribute to the mitigation of CO2 emissions through a long storage time before the CO2 is re-emitted to the atmosphere, as in the polyol case, but rather by integrating renewable energy into the fuel value chain. From a refinery perspective, 14 wt% less naphtha is needed as fuel for the reformer furnace in the CCS and CCUS cases. Since CO2 is captured upstream of the PSA, the PSA offgas has an enhanced heating value and contributes more heat to the reformer furnace. The use of CO2 as feedstock for the polyol synthesis reduces the demand for fossil resources. CO2-based polyol benefits from a 17 wt% lower PO feedstock requirement. Although the reduction in the amounts of naphtha and PO feedstock seems small, it has a substantial positive impact on the economic and environmental performance of the CCUS case. So although the CO2 utilisation capability of CO2-based polyols is small, there is added value in a significant replacement of fossil feedstock. Note also that the introduction of a capture unit and a compression train requires extra utilities. In all case studies, low-pressure steam is produced from heat integration. However, in the CCS and CCUS cases, part of the produced steam is required in the CO2 capture unit; the net steam production is reduced by 35% as compared to the reference case. Cooling water and electricity requirements are larger in the CCS and CCUS cases than in the reference case because of the capture unit and compression train. Therefore, the primary energy use increases in the CCS and CCUS cases with respect to the reference case. The CCUS case shows slightly more primary energy use than the CCS case due to the additional steam and electricity needed in the pre-heater of the stripper and in the compressor for recycled CO2. The knowledge base uncertainty of the different research areas was systematically assessed using pre-defined pedigree matrices. Scores for the pedigree criteria of the technical input data and submodels are presented in Tables 5 and 6. The input parameters have a high score for the Proxy criterion since they were based on data from the refinery and information from industrial experts in carbon capture and polyol synthesis. The Theoretical understanding is also of good quality. The Empirical basis and the Methodological rigour show a higher level of uncertainty. The input data of the conventional polyol process were provided by experts of a polyol R&D and manufacturing plant. However, the input data of the CO2-based polyol were derived from conventional polyol data, and thereby the level of uncertainty increased. The Validation process is the criterion with the lowest scores, especially for the polyol SAs. The values of the conventional polyol were validated against data from experts of a polyol manufacturing site. However, this was not done for the CO2-based polyols. Although there is experimental work and a demonstration plant has been built for CO2-based polyols, publicly available peer-reviewed or independent industrial information that could be used for validation purposes was unavailable. The pedigree assessment of the technical submodels shows good Theoretical understanding and Methodological rigour. The CO2-based polyol system area presents higher uncertainty in the Methodological rigour since the model was derived from the conventional polyol system. The Modelling resources scored 2 for all SAs except for the conventional polyol synthesis, which scored 3. Most of the technical models were developed by a single modeller with
limited expertise in this area but with enough time to build skills for the specific purpose.However, for the conventional polyol, senior and junior polyol technologists contributed to the development of the model, and therefore the Modelling resources present higher scores.As for the input data, the Validation process shows the highest uncertainties.The models of the H2 unit were validated by comparing them with data from a real refinery, and therefore they have the highest score.Although there is no information on a real refinery with the specific CO2 capture unit included in the models, CO2 capture by chemical absorption has been applied to other systems.The model of the capture unit could thus be validated although the measurements included proxy variables or spanned a limited domain.In the case of the polyol processes, the flowsheets were validated by personal communication with experts from a polyol manufacturing site .This information was not peer-reviewed, and therefore lower pedigree scores were given to these SAs.There was no validation performed for the thermodynamics, chemistry, and kinetics included in the models, resulting in the low scores.Besides the qualitative pedigree analysis, a sensitivity analysis was carried out to quantify the effect that six selected input parameters have on the primary energy requirements for the polyol system area, and for the overall system.Fig. 3 shows that the reaction temperature is the parameter with the largest influence on the primary energy requirement of the polyol system area.The temperature of the reaction products flowing into the pre-heater of the stripper varies and therefore affects the amount of steam consumed in the pre-heater.However, variations in the reaction pressure have the most important effect on the overall system.This is because the reaction temperature only affects the polyol system area, while the reaction pressure also affects the CO2 compression train.Therefore, changes in the reaction pressure have larger implications in the primary energy requirements of the overall system.Nevertheless, the primary energy use of the overall system only shows minor changes because only 10% of the total amount of CO2 captured is used in the polyol synthesis.This shows that changes to input parameters in the polyol SA only have a minor impact on the energy use of the overall system.It also means that the higher uncertainty in the knowledge base of the polyol model has little impact on the performance of the total system.The higher uncertainty of the polyol SA is therefore justified for the purpose of assessing the technical performance on the integrated system.Table 7 displays the economic performance indicators of the reference, CCS and CCUS cases.The Capex is 60% lower in the reference case because it does not include a capture unit nor a compression train.The cost of PO is the main driver of the differences among the total cost in the three cases.In the CCUS case, the CO2 captured from the refinery replaces 17 wt% of the PO used as feedstock for polyol synthesis.Thus, whilst the LCOP per GJ of H2 is higher in the CCS and CCUS cases, the LCOP per kg polyol produced is the lowest in the utilisation case.The break-even analysis shows that 47 €/t is the minimum CO2 cost that would make the CCS case economically more attractive than the reference case.However, a negative CO2 cost would be required to make the reference case more cost-effective than the CCUS case.This shows that at system level, CCUS is the most economically interesting 
alternative.The reduction in the polyol costs in the CCUS case compensates for the higher LCOP of H2.Implementation of carbon capture at refineries sets a business case when CO2 is partially utilised as in the CCUS case, but not when there is only CO2 storage, as in the CCS case.Assuming a H2 market price of 1135 €/t and a polyol market price of 1700 €/tonne, the payback period is 5 years in the reference case, 8 years in the CCS case and 6 years in the CCUS case.The high PBP of the CCS case can be explained by the additional capital investment required for the capture and compression unit and the lack of economic benefits from CO2 utilisation due to lower PO feedstock demand, both of which are present in the CCUS case.In Tables 8 and 9, the Capex and Opex are presented per system area.The Capex of the H2 unit is larger when it includes a CO2 capture unit and compression.There is a small difference in the Capex of CO2 transport and storage between the CCS and the CCUS cases.In the CCUS case, 10% of the captured CO2 is used in polyol synthesis, and therefore the CO2 transported and stored is 90% of the CO2 transported and stored in the CCS case.The CO2 flow determines the costs of storage and the pipeline diameter.However, its length and materials are the main drivers of the Capex.Since the pipeline diameter is very similar and the length is the same in both cases, the Capex for transport is only slightly lower in the CCUS case.Details on transport and storage costs are available in the Supplementary material.The CO2-based polyol route has only slightly higher Capex than the conventional route.The difference is caused by an additional flash vessel and a compressor, which are required to separate and recycle the excess of CO2 after the reaction.Note, however, that additional costs of PPC and cPC separation are not included in this study, meaning that in a real plant, the capital costs of the CO2-based polyol process may be higher.The operational costs are mainly caused by the feedstock and chemicals.The Opex of SA 1 is 15% lower in the CCS and CCUS cases.This is due to naphtha fuel savings as a result of the enhanced heat content of the PSA offgas burned in the reformer furnace.This partially compensates for the operational costs of CO2 capture and compression in the CCS and CCUS cases.The Opex of the H2 unit of the CCS and CCUS cases is a factor of 3.7 higher than in the reference case because of the energy penalty of CO2 capture and compression.Replacing part of the PO by CO2 reduces the Opex of CO2-polyol production by 14% with respect to the conventional route.The savings in PO feedstock in the CCUS case compensate for the extra operational costs due to CO2 capture, transport and storage.Consequently, the CCUS case has the lowest total Opex among all cases.The Opex for storage in the CCUS case is 10% lower than in the CCS case, which is proportional to the amount of CO2 stored.However, the Opex for transport in the CCUS case is higher than in the CCS case because it requires additional pump work due to higher pressure drop.Details on transport and storage costs are available in the Supplementary material.The pedigree scores assigned to the economic input data are shown in Table 10.The scores of the criterion Proxy were the highest and the ones for the Reliability of source were intermediate.The Capex input data was derived from independent open literature and therefore scored a 2.The Opex input data sources were qualified estimates by industrial experts supported by industry data and 
therefore it scored a 3.However, CO2 transport and storage scores a 2 since the Opex was based on the ZEP reports , which include inputs from industrial partners, but do not explicitly constitute an industrial quote and assumptions are not fully documented.Completeness of equipment scored relatively low since only the major units were included in the equipment list.This is typical practice in the early phases of a project, when the initial feasibility is evaluated and rough choices about design alternatives are made.Input data for the other parameters included in the Capex estimation was mostly complete.As already indicated in the technical assessment, the Validation criterion had the largest uncertainties.The Capex of the H2 and the capture units were validated against independent cost estimation of the same equipment and scope.However, due to scarce availability of real project data on polyol systems, they scored 1 in the Capex validation.Opex data was taken from only one source and not compared with other independent data.Although the Reliability of the sources is appropriate, the values were not validated and therefore they scored a 0.The sensitivity analysis shows that the economic parameters have different impact in the LCOP of the hydrogen and the polyol.Both product costs are largely affected by the price of their respective major feedstocks, although the LCOP of the polyol is twice as sensitive as the hydrogen one.Whereas the LCOP of H2 is also affected by changes in the Capex and the discount rate, the LCOP of the polyol is stable against variations in these economic parameters.Although the accuracy of the baseline value of the Capex for the polyol plant was −30% to +50%, the sensitivity analysis shows that those inaccuracies have no impact on the final product costs.The LCOP of polyol is directly influenced by the PO price but this does not negatively affect the competitiveness of the CO2-polyols.Since PO is also the feedstock for the synthesis of conventional polyol, at higher prices of PO, the CO2-polyol process will have a larger economic advantage over the traditional route.Selected key environmental indicators are shown in Table 11.The full list of results for the seven impact categories assessed are in Appendix C.Fig. 
5 shows the environmental burdens of the three cases, broken into contributions from the system areas, relative to the reference case. Typical trends of CCS scenarios are observed: climate change advantages are identified for CCS over the reference scenario, but moderate increases occur in other environmental impact categories. Overall, an improvement of the CCUS case over the reference case is observed in all but one impact category, i.e., photochemical oxidant formation. CCUS therefore appears to have an improved environmental performance over both the REF and the CCS cases for the impact categories evaluated. However, the differences range between a 2 and 14% improvement over the REF case and may in some cases fall within uncertainty margins. From the figure, the REF and CCS cases show similar impacts in terrestrial acidification, freshwater eutrophication, particulate matter formation and human toxicity. These impacts are dominated by the higher demand for polyol precursors in the conventional polyol synthesis used in both of these cases. The use of these polyol precursors is reduced by the use of captured CO2 in the CCUS case, which is evident in the lower SA 5 impacts in these categories. On the other hand, the carbon capture process induces similar trends in CC, POF and FD impacts for the CCS and CCUS cases: it reduces the CC impact relative to the REF case, but increases the relative POF impact because the PSA offgas, which is released to the atmosphere, is richer in CO. As shown in Fig. 5, the naphtha value chain, the H2 production unit, and the chemicals value chain dominate the impacts. Within each of these system areas, a few key processes contribute the majority of the environmental impacts. From the contribution analysis and structural path analysis, the production of the propylene oxide reactant in SA 5 is a major source of emissions for conventional polyol synthesis in the REF and CCS cases. In particular, these methods indicate that important contributions to all of the impact categories for SA 5 include the direct emissions from the production of propylene oxide and its precursors and their required energy of production, which is partially sourced from coal. Naphtha production and transport is also a key contributor in all cases, particularly to PMF and FD, while the combustion of naphtha and steam reforming in SA 2 are the dominant processes contributing to CC and POF. The CCS case presents a slight increase in most of the impact categories relative to the reference case. The reduction in naphtha fuel consumption in the furnace, due to the higher heating value of the PSA off-gas, does not fully compensate for the increase in the impacts associated with the extra fuel required for the capture unit and the electricity needed for CO2 compression. In the CCUS case, CO2 replaces part of the energy-intensive PO feedstock for polyol synthesis, offsetting the increase in energy consumption due to the capture unit and compression train. As a consequence, all of the investigated impacts in the CCUS case decrease relative to the reference and CCS cases, with the exception of POF. A complete list of the seven indicators included in the environmental assessment can be found in Appendix C. The uncertainty of the LCI is evaluated in Table 12 below. The evaluation criteria can be found in Appendix D. Infrastructure for SA 1, 3 and 5 is modelled from ecoinvent and therefore not as highly rated due to differences in some of
the correlation parameters and some missing flows. Similarly, the operations part of the LCI for SA 1 and SA 5, modelled from ecoinvent, is not completely representative of the cases studied here. From the table, it can be seen that the chemicals used in the facility show the lowest scores, reflecting a lack of available and representative data to model the required chemicals. In particular, proxy chemicals were necessary to model the DMC catalyst and the ADIP-X solvent, and the database processes used for the propylene oxide, monopropylene glycol and glycerol are somewhat outdated and incomplete. This same SA is a significant contributor to all of the studied impact categories, which indicates an incentive to obtain higher quality data for the chemicals used in this system. Due to the novelty of the technology, the CO2-polyol system area received low scores in Reliability. This is a reflection of the low scores received for this system area in the technical and economic performance parameters. However, the results presented in Fig. 5 indicate a negligible contribution of the CO2-polyol system area to the overall impact in the investigated categories, so the low scores for this system area are of less concern. The remaining system areas score fairly high, as these were based on the technical modelling, which was specific to the plants studied. The differences in results between the REF, CCS and CCUS cases are generally small and, given the uncertainty assessment, the conclusion that CCUS is the environmentally superior option should be treated with caution. A detailed technical, economic, and environmental impact assessment combined with uncertainty analysis was carried out to evaluate the feasibility of using CO2 captured from a hydrogen manufacturing unit at a refinery complex. In the CCUS case, 10% of the total captured CO2 is utilised in polyol synthesis while the remainder of the CO2 is stored. The results show that this combination of CCUS and CCS can provide a feasible option to reduce the CO2 emissions associated with this type of refinery operation while improving the business case. From an economic point of view, a refinery could choose to build a small capture unit to satisfy the CO2 demand for polyol synthesis. In this case, all of the captured CO2 would be used and partial storage would not be needed. The capture unit would be significantly smaller, and there would be no transport and storage costs. However, economies of scale might have a negative impact on the costs, and the refinery would not profit from naphtha savings. This alternative case was not included in the present study because it would effectively represent only a 10% reduction in total CO2 emissions for the system, and the cases were defined with large CO2 emission reduction goals. When capturing all CO2 emitted at an H2 unit of a refinery, both the CO2 emissions and the amount of naphtha fuel used in the reformer furnace decrease. By utilising the captured CO2 in polyol synthesis, propylene oxide demand decreases by 17 wt% compared to the conventional polyol synthesis. These factors have a large impact on the comparison of the economic and environmental performance of the three cases included in this research. From the H2 unit perspective, the savings in naphtha fuel are not large enough to compensate for the extra costs of the capture unit and compression train required in the CCS and CCUS cases. The LCOP of H2 is 7.8 and 7.7 €/GJ H2 in the CCS and CCUS cases, respectively. These values are 58% and 55% higher with respect to the reference case in
which no CO2 is captured.However, the levelised costs of polyol decrease to 1.2 €/kg polyol in the CCUS case, 16% lower than in the conventional process.A break-even analysis carried out at the system level showed that the reduced costs of the CO2-polyol in the CCUS case compensate for the increase in H2 costs, thus making the CCUS case more economically attractive than the reference case.However, a minimum CO2 cost of 47 €/t would be required for making the CCS case more cost-effective than the reference case.The results indicate that using 10% of the total CO2 captured from the refinery and storing the rest of the CO2 presents an interesting business case for refineries because expensive PO feedstock is replaced by waste CO2.CO2 utilisation in combination with partial storage provides an economic advantage compared to storage alone and to a reference case without CO2 capture.The uncertainty analysis shows that these economic results are robust because the most uncertain system areas have low impact on the overall economics.The environmental assessment revealed that the introduction of the CCUS process in the hydrogen unit in combination with storage of the remaining CO2 reduces the climate change impacts by 23% compared to the reference case.Of the other 6 environmental impact categories included in the LCA, all but one present slightly better performance in the utilisation case than in the reference case where no CO2 is captured.However, the differences between the three cases are approximately 15%, indicating relatively small differences in environmental performance outside of CC.Propylene oxide feedstock used in the polyol synthesis, and its precursors, the naphtha value chain and naphtha combustion are identified as a particularly environmentally intensive contributors in this system.Given the uncertainties in the model, the environmental determination of the investigated systems remains inconclusive.The integrated techno-economic and environmental assessment performed in this study indicates that CO2 utilisation in combination with CO2 storage can become a cost-effective mitigation option that still provides environmental advantages.Implementation of CCS alone reduces the CO2 emissions with respect to a reference case without capture.As compared to the reference and CCUS cases, CCS alone increases the costs and other environmental impact categories analysed. | CO2 utilisation is gaining interest as a potential element towards a sustainable economy. CO2 can be used as feedstock in the synthesis of fuels, chemicals and polymers. This study presents a prospective assessment of carbon capture from a hydrogen unit at a refinery, where the CO2 is either stored, or partly stored and partly utilised for polyols production. A methodology integrating technical, economic and environmental models with uncertainty analysis is used to assess the performance of carbon capture and storage or utilisation at the refinery. Results show that only 10% of the CO2 captured from an industrial hydrogen unit can be utilised in a commercial-scale polyol plant. This option has limited potential for large scale CO2 mitigation from industrial sources. However, CO2 capture from a hydrogen unit and its utilisation for the synthesis of polyols provides an interesting alternative from an economic perspective. The costs of CO2-based polyol are estimated at 1200 €/t polyol, 16% lower than those of conventional polyol. 
Furthermore, the costs of storing the remaining CO2 are offset by the benefits of cheaper polyol production. Therefore, the combination of CO2 capture and partial utilisation provides an improved business case over capture and storage alone. The environmental assessment shows that the climate change potential of this CO2 utilisation system is 23% lower compared to a reference case in which no CO2 is captured at the refinery. Five other environmental impact categories included in this study present slightly better performance for the utilisation case than for the reference case. |
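The levelized-cost and sensitivity figures reported in the study above (LCOP of H2 and polyol, Capex varied over its −30%/+50% inaccuracy range, and variations in the discount rate) rest on standard annualized-cost arithmetic. The sketch below illustrates only that generic calculation; the Capex, Opex, output and financing values in it are placeholder assumptions for demonstration, not inputs or results from the paper.

```python
# Minimal sketch of a one-at-a-time sensitivity check on a levelized cost of
# product (LCOP), of the kind applied to the H2 and polyol cases above.
# All numbers below are illustrative placeholders, not the study's inputs.

def crf(rate: float, years: int) -> float:
    """Capital recovery factor used to annualize the Capex."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcop(capex: float, opex: float, output: float,
         rate: float = 0.08, years: int = 25) -> float:
    """Levelized cost = (annualized Capex + annual Opex) / annual output."""
    return (capex * crf(rate, years) + opex) / output

# Hypothetical baseline plant (EUR, EUR/yr, product units per year).
base = dict(capex=100e6, opex=20e6, output=4e6)
print(f"baseline LCOP: {lcop(**base):.2f} EUR/unit")

# Capex varied over its -30% / +50% inaccuracy range.
for factor in (0.7, 1.5):
    varied = dict(base, capex=base["capex"] * factor)
    print(f"Capex x{factor:>4}: {lcop(**varied):.2f} EUR/unit")

# Discount rate varied around the baseline assumption.
for r in (0.06, 0.10):
    print(f"discount rate {r:.0%}: {lcop(**base, rate=r):.2f} EUR/unit")
```

The same spreadsheet-level arithmetic underlies the reported break-even CO2 cost and payback periods; a simple payback, for instance, is the additional capital outlay divided by the resulting annual net benefit.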
382 | Marine plastic pollution and affordable housing challenge: Shredded waste plastic stabilized soil for producing compressed earth bricks | There has been a general increase in housing prices worldwide and in many countries, the range of housing prices has also continued to widen.The average property prices in many countries have increased.The average property price increase across all capital cities in Australia was said to be by nearly 30% from 2008 to 2018 .Glaeser et al. reported a 72% increase in average housing prices and a 247% increase in the standard deviation of prices in the United States of America, after comparing housing data of the year 1970 with those of 30 years after.Housing rent, which has been described as a better indicator of housing affordability than property prices , has also increased over the years.Martin and Troy stated that housing can be said to be unaffordable if an individual’s or household’s median rent is greater than 30% of its income.According to Carliner and Marya , an average of 32.3%, 31.1% and 30.1% of the income of persons renting houses in Spain, the United States and the United Kingdom, respectively, is spent on rent.In the report of Szekely that ranked 30 cities, considered to present the best deal of opportunities to its residents, based on their rent-to-income ratio, it was found that cities with >30% rent-to-income ratio include Tokyo, Japan, Hong Kong, Hong Kong, Madrid, Spain, Stockholm, Sweden, Amsterdam, Netherlands, Jakarta, Indonesia, Chicago, United States, Dubai, United Arab Emirate, London, United Kingdom, San Francisco, US, Mumbai, India, Singapore, Singapore, Paris, France, Los Angeles, US, Lagos, Nigeria, Manhattan, New York, US, and Mexico City, Mexico.The overall cost of property prices, housing rent and consequently, housing affordability seem to be influenced by the cost of the materials used for building construction .The prices of frequently used conventional building materials like cement and steel have been on the increase.Consequently, requiring the search for alternative materials that are cheap and affordable.Some researchers have proposed the modified use of indigenous earth building technologies and the use of sustainable materials .Some of these indigenous technologies include cob, earth or adobe bricks, rammed earth and, wattle and daub construction.On the other hand, agricultural and industrial wastes have been receiving the attention of researchers for improving earth materials .Geyer et al. 
estimated the total weight of virgin plastic that has ever been produced globally to be 8.3 billion tonnes, about 9% of which has been recycled, and stated that, except for about 12% that has been incinerated, almost all the plastic ever produced still exists today. Consequently, there is a growing campaign to reduce plastic waste because of the environmental nuisance its disposal creates in society and the risk that waste plastic entering the ocean poses to marine life. Improperly disposed plastic waste has resulted in the blockage of waterways and drainages, causing floods in some cities. The United Kingdom Government Office for Science, in its report titled “Foresight Future of the Sea”, stated that if nothing is quickly done, ten years from now the current amount of plastic in our oceans will have tripled. Recently, plastic pollution was suspected to be the cause of the death of a whale in Indonesia, after plastic bags, bottles and 115 plastic cups were found in the stomach of the dead whale. Some researchers have investigated the use of plastic waste or recycled plastic as a construction material in the built environment industry. Some found that the incorporation of waste plastic reduced the self-weight of concrete and increased its resistance to corrosion and sulphuric acid attack. The concrete blocks containing waste plastic produced by Mondal et al. contained 0–10% waste plastic, 15% Portland cement, 15% fly ash and 60–70% sand, and it was found that they were suitable for constructing energy-efficient buildings. Some researchers have also investigated the use of waste plastic in the production of asphalt for road pavement construction. Waste plastic bottles filled with earth or other dried solid waste have also been used to construct plastic bottle brick houses. This study presents an exploratory investigation of the effects of stabilizing a soil with shredded waste plastic on the suitability of using the stabilized soil to produce compressed earth bricks. This study is unique in that no peer-reviewed article in the open literature, as far as the authors know, has reported the use of waste plastic for the production of CEB. It was hypothesized that using low-cost building materials, such as CEB stabilized with plastic waste, will reduce housing prices and make housing more affordable. A disturbed soil sample was obtained from a location in Ota, Ogun State, Nigeria corresponding to latitude 6°40'52” N and longitude 3°9'11” E. The soil sample collected was initially air-dried in the Geotechnical laboratory of Covenant University, Ota, Nigeria immediately after its collection and transportation to the laboratory. Soil lumps of the air-dried sample were broken using a mortar and pestle. Only particles passing the 4.75 mm sieve were used, and nearly all the soil sample particles passed through this sieve opening. Polyethylene terephthalate bottles were collected within Covenant University, Ota, Ogun State, Nigeria. They were crushed using an industrial crushing machine to sizes less than 6.3 mm and greater than 9.6 mm. Some of the geotechnical properties of the natural soil were investigated. The tests conducted include sieve analysis, specific gravity, Atterberg limits and compaction. Sieve and hydrometer analyses were carried out in accordance with ASTM D422-63, while the specific gravity test was done in accordance with ASTM D854-00. Atterberg limits tests were conducted in accordance with ASTM D4318-00, while the standard Proctor compaction test was done in accordance with ASTM D698-07. The soil with varying percentages
of waste plastic was proportioned by mass and mixed at optimum moisture content.The bricks were formed using a hydraulic compacting machine to produce the CEB.The Australian Bullet 5 Spray Erosion test was modified in this study to determine the durability of the test specimen in accordance with Obonyo et al. and similar to the set up used by Arooz and Halwatura .The specimens were placed 470 mm away from the nozzle of the setup and water at a pressure of between 2.07 and 4.14 MPa allowed to penetrate the specimens.Readings were taken after every 15 min to establish the depth of penetration, using a 10 mm diameter flat ended rod.The soil sample used in this study is brown.Table 1 presents a summary of the physical properties of the soil.It has a specific gravity of 2.67 and its plasticity index is 15%.From Fig. 5, it can be seen that more than 50% of the particles of the soil sample was retained on the 75 μm opening sieve.Based on the Unified Soil Classification system, the soil was classified as SC – clayey sand.The results of the Atterberg limits indicate that the fines of the soil sample are of low plasticity.The maximum dry unit weight and the optimum moisture content of the soil was found to be 17.1 kN/m3 and 15.6%, respectively.The compressive strength of earth bricks is one of its most important properties.The compressive strengths of bricks are a function of the soil type and the percentage of fiber in the bricks .The variation of the compressive strength of the CEB with varying shredded waste plastic content is shown in Fig. 7.The compressive strength of the CEB was low having a value of 0.45 MPa.Irrespective of the particle size of the shredded plastic that was studied, the application of increasing percentage of shredded plastic resulted in an initial increase in the compressive strength before a progressive decrease.No brick could be formed for the soil sample containing 7% of shredded waste plastic of sizes greater than 9.6 mm.The highest compressive strength was obtained for CEB containing 1% shredded waste plastic, whose particle sizes were less than 6.3 mm.This compressive strength amounts to an increase of 244.4% when compared with that of the CEB containing no shredded waste plastic.Mondal et al. 
, who investigated the progressive replacement of sand with 0–10% waste plastic in a concrete containing 15% Portland cement and 15% fly ash, found that the compressive strength of the concrete bricks decreased with increasing percentage of waste plastic. The strength development of the CEB containing shredded waste plastic was a result of the adhesion between the plastic fibers and the soil matrix, facilitated by the application of heat for 24 h after the dense packing of the CEB brought about by compaction. However, the strength increase due to the application of the shredded waste plastic was limited to the 1% level. With a progressive increase in the shredded waste plastic content beyond the 1% level, the CEBs contained more shredded waste plastic embedded in them that may not have melted, consequently creating more slip surfaces that the soil could slide over when subjected to compressive load. This, therefore, induced failure at progressively lower compressive strength with increasing content of shredded waste plastic. Similarly, the CEBs containing shredded waste plastic with larger particle sizes had more slip surfaces and potential weak points for strength failure. According to the Turkish Standards Institution, the minimum compressive strength for an unfired clay brick should be 1 MPa. Only the CEB samples containing 1% shredded waste plastic, irrespective of the size of the plastic particles, satisfied this requirement. This implies that the CEB samples containing 1% shredded waste plastic can be used in the construction of earth brick walls that are lightly loaded or non-load bearing. For use in constructing heavily loaded walls, the earth bricks will need to be stabilized. The erosion rate test results were considered an indication of the durability of the CEB. Fig.
8 presents the erosion rates of the compressed earth bricks containing various percentages of the shredded waste plastic.The erosion rate of the CEB with no shredded waste plastic is 2 mm/min.The erosion rate increased with increasing percentage of shredded waste plastic.For the CEB containing 7% of shredded waste plastic of particle sizes of less than 6.3 mm, the erosion rate increased by 389.8%, when compared with the erosion rate of the CEB containing no shredded waste plastic.These results indicate that the durability of the CEB decreased with the increasing percent of shredded waste plastic.The decreased durability of the CEB with shredded waste plastic was attributed to the weak interface between the shredded waste plastic and the soil.Having the shredded waste plastic pulverized to finer particles and the use of a binder may improve the erosion rate.However, if the CEB containing shredded waste plastic is to be used for building walls of houses, the external surfaces of the walls may need to be covered with mortar or stucco installed.The purpose of this research work was to investigate the effects of stabilizing a soil with shredded waste plastic on the suitability of using the stabilized soil to produce compressed earth bricks.The soil used was classified as SC – clayey sand, based on the Unified Soil Classification system.Two categories of shredded waste plastic were used, one with particle sizes less than 6.3 mm and the other having particle sizes greater than 9.6 mm.The shredded waste plastic was applied to the soil in varying percentages and CEBs were produced.The effects of the application of the shredded waste plastic on the strength and durability of the CEB were investigated.It was found that the compressive strength of the CEB with no additive was low.There was an initial increase in the compressive strength of the CEB with increasing content of shredded waste plastic before a progressive decrease was experienced.An optimal compressive strength for this study was obtained for CEB containing 1% shredded waste plastic, whose particle sizes were less than 6.3 mm.The compressive strength increase was by 244.4% when compared with that of the CEB containing no shredded waste plastic.Also, only the CEB samples containing 1% shredded waste plastic satisfied the Turkish Standards Institution required minimum compressive strength of 1 MPa for an unfired clay brick.The erosion rate of the CEB was found to increase with an increasing percentage of shredded waste plastic content.For the CEB containing 1% of shredded waste plastic of particle sizes of less than 6.3 mm, the erosion rate increased by 50%, when compared with the erosion rate of the CEB containing no shredded waste plastic.Since the erosion rate gives a measure of the durability of the CEB, it was recommended that exterior walls made using CEB containing shredded waste plastic be covered with mortar or installed stucco.Based on these results, the use of an optimal shredded waste plastic content of 1% was recommended for the production of CEB of higher compressive strength.The use of waste plastic in the built environment industry has the potential of minimising the waste plastic that would have been improperly disposed of in public spaces blocking drainages and causing flood or washed away into water bodies causing marine pollution and endangering marine life.Also, since waste plastic is cheap to get and its inclusion at the optimal level in the CEB improved its compressive strength, it may become valuable for the provision 
of affordable housing in developing countries.To improve the compressive strength and durability of CEB containing shredded waste plastic, a binder such as cement, lime or another additive with adhesive properties may be mixed with the soil and shredded waste plastic during the production of the CEB.However, a binder that is cheap, environmentally-friendly and readily-available locally should be preferred.The authors declare that there are no conflict of interest. | This research work was aimed at investigating the suitability of making compressed earth bricks (CEB) with a mixture of soil and varying percentages (0, 1, 3, and 7%) of shredded waste plastic. Specific gravity, particle size distribution, Atterberg limits and compaction tests were carried out on the soil to determine the engineering properties of the soil. The compressive strengths and erosion rates of the CEB made with the soil and the mixture of soil and varying proportions of shredded waste plastic of two size-categories (<6.3 mm and >9.6 mm) were determined. The soil was classified as clayey sand (SC). The highest compressive strength was obtained for the CEB containing 1% waste plastic of sizes <6.3 mm and its compressive strength amounted to a 244.4% increase. Of the CEB samples stabilized with shredded waste plastic, the sample containing 1% waste plastic of sizes <6.3 mm also had the least erosion rate. Provided the exterior surfaces of walls produced using the CEB are protected from erosion, the use of 1% shredded waste plastic with particle sizes <6.3 mm was recommended. The use of waste plastic that would have constituted an environmental nuisance has the potential to produce stronger and affordable bricks for providing affordable housing. |
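The percentage changes reported for the brick study above can be translated back into approximate absolute values from the stated baselines (0.45 MPa compressive strength and 2 mm/min erosion rate for the CEB without plastic). The snippet below is only that back-of-envelope arithmetic; the derived absolute values are inferences from the reported percentages rather than measurements quoted in the paper.

```python
# Back-of-envelope conversion of the reported percentage changes into
# approximate absolute values, using the unstabilized-CEB baselines given in
# the text (0.45 MPa compressive strength, 2 mm/min erosion rate).

def apply_change(baseline: float, percent_increase: float) -> float:
    """Value after a stated percentage increase relative to the baseline."""
    return baseline * (1 + percent_increase / 100.0)

strength_0 = 0.45   # MPa, CEB with no shredded plastic
erosion_0 = 2.0     # mm/min, CEB with no shredded plastic

# 1% plastic (<6.3 mm): +244.4% compressive strength, +50% erosion rate
print(f"1% plastic: ~{apply_change(strength_0, 244.4):.2f} MPa, "
      f"~{apply_change(erosion_0, 50):.1f} mm/min")

# 7% plastic (<6.3 mm): +389.8% erosion rate
print(f"7% plastic: erosion ~{apply_change(erosion_0, 389.8):.1f} mm/min")
```

The roughly 1.55 MPa implied for the 1% mix is consistent with the statement that only this mix meets the 1 MPa minimum required for unfired clay bricks.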
383 | OTX2 Transcription Factor Controls Regional Patterning within the Medial Ganglionic Eminence and Regional Identity of the Septum | Otx2 is one of two mammalian orthologs of the Drosophila homeodomain transcription factor, Orthodenticle.Otx2 is essential in the visceral endoderm during gastrulation for specification of anterior neuroectoderm, where it induces early forebrain-specific genes.Telencephalic expression of Otx2 continues in the rostral patterning center, ventral telencephalon, and caudodorsal telencephalon.Abrogation of Otx2 function after early neurulation through enhancer deletion showed that Otx2 maintains forebrain identity after its specification and is required later in the caudodorsal telencephalon for medial pallial morphogenesis.Otx2 roles in the RPC and ventral telencephalon have not been reported.The RPC is a source of fibroblast growth factors with essential roles in forebrain growth and patterning.Notably, Otx2 expression also abuts the midbrain/hindbrain boundary patterning center, which secretes the same array of Fgfs to help instruct development of midbrain/hindbrain structures.Otx2 may be involved in regulating and responding to FGFs from the RPC and MHB.FGF8 bead implantation in prosencephalon, mesencephalon, and optic vesicle tissue indicated that FGF8 represses Otx2 expression.Conversely, there is evidence for Otx2 regulation of Fgf8 expression: reduced Otx2 expression results in anterior expansion of Fgf8 expression in the MHB.Otx2 is expressed in the septal primordium, the GEs, and the preoptic area.The MGE gives rise to the globus pallidus and striatal and cortical interneurons.The LGE gives rise to striatal neurons and olfactory bulb interneurons.The CGE gives rise to cortical interneurons.The POA gives rise to preoptic nuclei and some of the amygdala and cortical interneurons.Fate mapping demonstrated that Fgf8+ ventral MGE and ventral Se progenitors generate cholinergic neurons of the basal ganglia.In addition to their neuronal derivatives, the embryonic MGE, LGE, and POA also generate oligodendrocytes that populate the basal ganglia and cortex.The GEs are subdivided into molecularly and functionally distinct progenitor subdomains along the dorsoventral axis that give rise to different subclasses of neurons.Factors that establish these progenitor subdomains are unknown.We used conditional mutagenesis to elucidate roles of Otx2 in the ventral forebrain and septum.These analyses revealed novel telencephalic roles of Otx2.In the RPC, Otx2 restricted Fgf8 and 17 expression along the rostral-caudal and D-V axes and controlled Fgf signaling through feedback regulation of Sprouty expression in the RPC, MGE, and POA.In the GEs, Otx2 promoted MGE neurogenesis and oligodendrogenesis and controlled MGE sub-regional identity.Based on mRNA expression changes and OTX2 chromatin immunoprecipitation sequencing data, we propose molecular mechanisms how Otx2 patterns MGE regional subdivisions.In the embryonic day 11.5 telencephalon, Otx2 is expressed in the ventricular zone of the Se, GEs, POA, choroid plexus, and hippocampal anlage.To investigate Otx2 functions in the telencephalon, we conducted conditional mutagenesis experiments using Otx2f mice.We used two Cre deleter alleles to abrogate Otx2 expression after gastrulation: RxCre, in which Cre is expressed in the telencephalic neural plate anlage, and Nkx2.1Cre, in which Cre is expressed in the MGE, POA, and vSe beginning around E9.5.We used immunohistochemistry and in situ hybridization to evaluate levels of 
protein and mRNA in Otx2 conditional knockouts. Anti-OTX2 IHC showed that OTX2 protein expression was lost throughout the telencephalon in RxCre cKOs by E11.5, except in caudal dorsomedial structures. Anti-OTX2 IHC confirmed that Nkx2.1Cre cKOs lacked OTX2 expression in the E11.5 MGE. ISH using a full-length Otx2 probe detected increased levels of the mutant transcript in RxCre cKOs, suggesting that conditional deletion of Otx2 leads to increased steady-state levels of Otx2 transcripts, particularly in the MGE VZ and in the caudal MGE subventricular zone. We performed OTX2 ChIP-seq three times from E12.5 wild-type subpallium. A ChIP-seq peak is presumptive evidence for OTX2 in vivo binding and possible function at this locus. We observed multiple ChIP-seq peaks near the Otx2 locus. These data suggest that Otx2 negatively autoregulates its expression. Otx1 is also expressed in the developing forebrain. Otx1 and Otx2 had complementary expression patterns in the E11.5 telencephalon: Otx2 is expressed strongly in the subpallial VZ, whereas Otx1 is expressed predominantly in the pallial VZ and dorsal LGE but is expressed at lower levels in the GEs. Notably, Otx1 and Otx2 are expressed in the Se, LGE, caudal MGE, and dorsomedial cortical structures. Otx1 mRNA expression was not demonstrably altered in Otx2f/-; RxCre embryos at E11.5. RxCre E11.5 cKOs had hypoplastic MGEs, with reductions in the SVZ and marginal zone. We conducted an RNA expression microarray experiment using RNA from control and RxCre cKO Se, MGE, and LGE. This analysis identified 139 significantly deregulated genes. In parallel, we used microdissected subpallium from wild-type E12.5 embryos in three independent anti-OTX2 ChIP-seq experiments. By comparing the microarray and ChIP-seq datasets, we developed hypotheses as to which genes are direct targets of OTX2 that mediate its functions in the RPC and GEs. RxCre cKO telencephalons are also hypoplastic along the rostrocaudal axis at E11.5; the rostral pole > caudal septum distance was 528 ± 32 μm in Otx2f/+ embryos and 294 ± 97 μm in Otx2f/−; RxCre embryos. Fgf8 and Fgf17 mRNA expression domains expanded rostrally and ventrolaterally into the MGE at E11.5. Thus, Otx2 restricts RPC Fgf signaling. Fgf8 hypomorphs had reduced Otx2 expression in the rostral telencephalon. Thus, Otx2 and Fgf8 function and expression are tightly coordinated. FGF signaling induces the expression of negative feedback inhibitors, including Sprouty1, Sprouty2, and MAP kinase phosphatase 3. These genes were deregulated in RxCre cKOs. Sprouty1 expression was diminished in the POA. Sprouty2 was upregulated in the RPC, MGE, and LGE, and Mkp3 was upregulated in the MGE. Anti-OTX2 ChIP-seq data provided evidence that Fgf8, Sprouty1, Sprouty2, and Mkp3 were direct targets of OTX2. We did not observe reproducible ChIP-seq peaks for Fgf17. Thus, Otx2 regulates FGF signaling in the Se and GEs in several ways. Fgf8 and Fgf17 establish gradients of gene expression that pattern the cortical primordium. RxCre cKOs at E13.5 had altered D-V gradients of COUP-TF1 and Sp8. COUP-TF1 expression was downregulated in the dorsal cortex. Conversely, SP8 was upregulated in the dorsal and medial cortex, and its graded expression extended further ventrally. These changes are predictable consequences of increased Fgf8 signaling. The RNA expression microarray identified two midbrain/hindbrain genes, En2 and Pax3, which were upregulated in RxCre cKO telencephalons. Normally, En2 is highly expressed around the MHB. Pax3 is expressed in the midbrain and hindbrain at E11.5. Both
En2 and Pax3 were ectopically expressed in the RPC/Se and medial PFC.Notably, we detected low levels of Pax3 in a small subdomain of the caudal RPC in control forebrains.We did not detect ChIP-seq peaks near the En2 locus, but Pax3 had two strong intragenic peaks, suggesting that this is a direct OTX2 target.Ectopic/elevated expression of En2 and Pax3 suggests that Se progenitors are mis-specified.The bHLH TFs Olig1 and Olig2 were downregulated in the E12.5 RNA microarray experiment.ISH at E11.5 confirmed that Olig1 was reduced throughout the caudal MGE and POA VZ, and Olig2 was selectively downregulated in the ventral subdomains of these structures.At E13.5, RxCre cKOs had reduced Olig1 expression in the Se and MGE VZ and fewer scattered Olig1+ immature oligodendrocytes in the MZ.E12.5 ChIP-seq data revealed numerous OTX2 binding sites in the vicinity of the Olig1/Olig2 locus, suggesting these two genes are direct targets of OTX2 regulation.Olig1 expression in Nkx2.1Cre cKOs was slightly reduced in the vMGE and POA at E11.5.At E13.5, Olig1 expression in the Se VZ appeared normal but was strongly reduced in the caudal MGE VZ, and there were fewer scattered Olig1+ cells in the MZ.Thus, Otx2 is required in the early MGE VZ for oligodendrogenesis.The GEs of RxCre cKOs were hypoplastic at E11.5.We examined E11.5 expression of Dlx1, a homeobox gene expressed mosaically in the MGE and LGE VZ, homogeneously in their SVZs, and in differentiating neurons.Dlx1 expression was reduced in the VZ, SVZ and MZ.OTX2 binds to two enhancers in the Dlx1/2 locus that are active in the embryonic subpallium.This phenotype was also observed in Nkx2.1Cre cKOs, suggesting that Otx2 participates in MGE neurogenesis.In support of this model, several markers of differentiating MGE neurons were downregulated.For example Arx RNA was reduced ∼1.7-fold in the microarray.OTX2 ChIP-seq peaks were identified near Arx, including at two enhancers with subpallial activity.At E11.5, Arx was expressed in MGE VZ, SVZ, and MZ but was reduced in the RxCre cKOs.Similarly, Shh, PlxnA4, and Gbx2 expression was reduced.All three OTX2 ChIP-seq experiments detected OTX2 peaks on a Shh enhancer that promotes MGE expression.Markers of immature MGE-derived interneurons, including Lhx6, c-maf, Somatostatin, NPY, and Gad1, were also downregulated.Furthermore, neurogenesis in the E11.5 MGE was reduced, as indicated by IHC to the pan-neuronal marker, β-III-tubulin.Unlike Arx, β-III-tubulin and Dlx1, which were moderately reduced in the cKOs, expression of Robo2 was barely detectable.Nkx2.1Cre cKOs exhibited a similar phenotype.Several ChIP-seq peaks were identified ∼1 Mb from the Robo2 locus, within the Robo1 locus.We next examined proliferation in the E11.5 MGE of the RxCre cKO using IHC to phospho-histone H3.pH3+ cells were strongly reduced in the SVZ, while the VZ did not show a clear phenotype.In considering the mutant’s neurogenesis deficit, and the reduction of mitotic SVZ cells, we were intrigued that anti-neurogenic factors Hes1 and Id4 were both upregulated in the E12.5 RxCre microarray.Hes1 and Id4 are expressed in dorsal > ventral gradients in the MGE VZ at E11.5; these gradients were lost, as these genes were upregulated throughout the MGE.ChIP-seq data indicated that Otx2 binds genomic DNA near the Hes1 locus and may also bind near the Id4 locus.Together, these data support a model in which Otx2 directly regulates genes that control the generation and differentiation of MGE-derived neurons.Otx2 represses inhibitors of MGE 
differentiation, and thus, loss of Otx2 function results in reduced production of SVZ progenitors and neurons.Furthermore, Otx2 promotes neuronal maturation by positively regulating Arx and Dlx1 expression, genes that support the differentiation of MGE-derived neurons.Olig2, Fgf, and Sprouty1 expression changes in RxCre cKOs indicated that the vMGE and POA were particularly affected by the loss of Otx2.Thus, we hypothesized that Otx2 may play a role in regional specification of the basal ganglia.To investigate this, we examined microarray data for expression changes relevant to MGE and POA patterning, and performed ISHs at E11.5.Multiple genes that had restricted expression in the POA were upregulated; ISH revealed that their expression expanded rostrally and/or dorsally into the ventral MGE.These “POA genes” included Nkx5.2, Dbx1, Slit2, Arhgap22, Sox3, Sox14, and mShisa.Conversely, several MGE markers were identified as downregulated on the microarray.ISHs validated these results and demonstrated that the RxCre cKO MGE VZ failed to express Tal2 and Tll2, which are novel markers of the vMGE.Furthermore, Sall3 was downregulated within the vMGE VZ, and Tgfb3 was downregulated in the vMGE SVZ.COUP-TF2, which is strongly expressed in the CGE and dorsal MGE of wild-type E11.5 embryos, was overexpressed in the vMGE VZ and SVZ of Otx2 cKOs; this could reflect a rostral and/or ventral shift in this gene’s expression domain.Together, these findings provide evidence that Otx2 patterns MGE regional identity by specifying vMGE properties and by repressing POA identity.Importantly, ChIP-seq data revealed OTX2 binding peaks in or near several downregulated MGE genes.In contrast, most upregulated POA genes, including Nkx5.2, Slit2, Arhgap22, and Sox3, did not have nearby OTX2 binding peaks.One notable exception was Dbx1, a POA marker that had OTX2 ChIP-seq peaks.Furthermore, OTX2 occupied enhancer elements, with subpallial activity at E11.5, near genes with reduced expression in the Otx2 RxCre cKOs.These data support a model in which Otx2 patterns the basal ganglia by direct, positive transcriptional regulation of MGE genes and by repressive effects on POA gene expression that are predominantly indirect.At E13.5, RxCre cKOs phenotypes continue to demonstrate expansion of POA identity into the MGE.Nkx5.1 expression, which is normally restricted to a POA SVZ subdomain and a subset of POA MZ cells, expanded dorsally and rostrally.COUP-TF1 is normally expressed in the LGE VZ, the POA VZ and MZ, and in a dorsal > ventral gradient in the caudal MGE VZ.In cKOs COUP-TF1 had increased expression MGE MZ and was ectopically expressed in the caudal MGE MZ.ChIP-seq data suggest that COUP-TF1 may be directly regulated by OTX2.Zic1 is a marker of the vMGE VZ, the GP, and a subset of other MGE MZ cells at E13.5.In cKOs Zic1 was not detectable in the vMGE VZ or GP and labeled fewer MGE MZ cells.ChIP-seq data revealed multiple peaks near the Zic1 locus, suggesting direct regulation by OTX2.GP hypoplasia in RxCre cKOs was confirmed by Nkx2.1 and ER81 ISH at E13.5 and with ER81 and NPAS1 ISH at postnatal day 0.Nkx2.1Cre cKOs at E13.5 exhibited similar, though less severe, POA and MGE phenotypes.While the ventral MGE had the clearest patterning defects in RxCre cKOs, we also observed subtle deficits in the dMGE.At E13.5, transcriptional co-activator Sizn1 is expressed in the VZ of the ventral LGE.In cKOs, Sizn1 expression extended ventrally into dMGE; we did not observe OTX2 ChIP-seq peaks near Sizn1, suggesting this was an 
indirect effect of Otx2 functions.Although Otx2 is expressed in the VZ of the LGE and MGE, the LGE had only a mild phenotype in RxCre cKOs at E11.5, E13.5 and P0.For example, Ikaros, which labels neurons in the MZ of the LGE, was expressed in the normal domain, albeit at lower levels, at E15.5.The MGE and POA give rise to cortical and striatal interneurons and basal ganglia cholinergic neurons; as such, RxCre cKOs affected these neurons.Lhx6 and Lhx8 expression were reduced in these regions.Lhx8+ striatal interneurons were reduced.Lhx6 and c-maf ISHs suggested that RxCre cKOs may have reduced numbers of cortical interneurons at P0.However, at P13–P15, we did not observe significant changes in somatostatin and parvalbumin IHC-positive cortical interneuron numbers.Approximately 80% of cholinergic neurons in the basal ganglia originate in the vMGE and septum from RPC-derived progenitors.As this domain was severely affected in RxCre cKOs, we examined cholinergic neurons numbers.Gbx1 and TrkA are expressed in basal ganglia cholinergic neurons at P0; both markers were reduced in the mutants, as were ChAT+ neurons at P13–P15.Nkx2.1Cre cKOs exhibited a milder reduction of ChAT+ neurons.OTX2 ChIP-seq was performed three times from E12.5 subpallium.Replica 1 had 995 peaks, replica 2 had 1,416 peaks, and replica 3 had 19,881 peaks.We focused on ChIP-seq peaks present in all three replicas, yielding 590 regions, which were analyzed Regulatory Sequence Analysis Tools.Four variations of a frequently detected motif were assigned to CRX by the JASPAR database.CRX and OTX2 both have bicoid homeodomains that bind to the same motif.269 out of 590 OTX2 ChIP-seq peaks had the core binding sequence GGATTA.These regulatory domains also had motifs for other homeodomain proteins and for high-mobility group box proteins.53% of OTX2 motif-containing enhancers had either the other homeodomain or HMG box motifs; 61% of OTX2 motif-negative enhancers did not have either of these motifs.Several of the dysregulated genes in the Otx2 mutants had OTX2 ChIP-seq peaks; those with OTX2 motifs have yellow stars in Figure 2.In some cases, broad domains of OTX2 binding lacked the OTX2 motif, suggesting that OTX2 binding in these cases may be through protein-protein interactions.Gene ontologies were computed using the Genomic Regions Enrichment of Annotations Tool.The most frequent GO molecular function terms showed that OTX2 target genes were highly enriched for transcription regulators.The most frequent GO biological function terms showed OTX2 target genes were highly enriched for regulators of neural development.Thus, ChIP-seq analysis on E12.5 ganglionic eminences provided strong support for OTX2 binding in vivo to regulatory elements containing the OTX consensus sequence near genes that regulate transcription.Using transcriptional profiling and conditional mutagenesis with two Cre alleles, we demonstrated that Otx2 regulates RPC identity and signaling and specification of the vMGE and promotes MGE neurogenesis and oligodendrogenesis.OTX2 ChIP-seq provided evidence that a subset of genes deregulated in cKOs were direct transcriptional targets of OTX2.To our knowledge, this is the first report of a genome-wide TF ChIP-seq analysis from embryonic basal ganglia.It enabled us to deduce the in vivo binding site motifs for OTX2, provide evidence for the other TFs that bind in adjacent regions, and make predictions about which domains have OTX2 binding that do not depend on its association with the OTX2 core motif.Otx2 plays pivotal 
Otx2 plays pivotal roles in the early specification of the forebrain and midbrain and, at later stages, in midbrain/hindbrain patterning and differentiation. Here, we show, using RxCre, that Otx2 expression after E8.5 is required for specifying RPC function and identity, based on the findings that Fgf expression domains were expanded and genes expressed in the MHB patterning center were ectopically expressed in the RPC. In the developing midbrain, ectopic Pax3 can induce transcription of Fgf8, En2, and Pax3. OTX2 ChIP-seq identified two Pax3 intragenic peaks; no peaks were found near En2. These data suggest that direct repression of Pax3 by OTX2 is required to inhibit En2 expression and restrict Fgf8 and Pax3 expression in the RPC. This may be a crucial step in defining the identity of the Se and/or in distinguishing forebrain and midbrain/hindbrain fates. Misspecification of the Se likely contributes to Se hypoplasia and reduction in Se cholinergic neurons. FGF8-bead experiments demonstrated that FGF8 represses Otx2 expression. Furthermore, Fgf8 gain-of-function studies show that increased Fgf8 represses growth, consistent with the hypoplastic rostral telencephalon. In addition, Otx2 cKO and Fgf8 hypomorph analyses revealed that Otx2 spatially restricts Fgf8 to the RPC, whereas Fgf8 positively regulates Otx2 in the early rostral telencephalon. These data support a model in which Fgf8 and Otx2 regulation are interdependent. Indeed, Otx2 controls feedback regulators of Fgf signaling. ChIP-seq data suggest that Fgf8, Spry1, Spry2, and Mkp3 are direct OTX2 targets. Deregulated Fgf signaling in Otx2 mutants likely contributes to deficits in cell populations that arise from, or are adjacent to, the RPC. Otx2 expression in the E9.5–E12.5 subpallium is restricted to the VZ and SVZ, where in the MGE it is required to generate normal numbers of SVZ progenitors and MZ neurons. Otx2 cKOs overexpress anti-neurogenic TFs that inhibit neurogenic TFs such as Ascl1. Furthermore, Otx2 promotes oligodendrogenesis through positive regulation of Olig1 and Olig2. These mechanisms for neurogenesis and oligodendrogenesis appear to be mediated by direct binding of OTX2 at genomic loci of key regulatory TFs. Later, compensatory mechanisms may rescue these phenotypes, as neurogenesis has improved by E13.5–E15.5. This compensation may in part be mediated by Otx1, which is expressed at low levels in the MGE. There is evidence that the vMGE generates most of the GP, whereas the dMGE may principally generate interneurons. Whereas Nkx2.1 function is required throughout the MGE, Otx2 preferentially controls the identity of the vMGE. This is a surprising result, given that Otx2 mRNA and protein are expressed throughout the MGE. Analysis of gene expression changes and OTX2 genomic binding sites provides three lines of evidence that Otx2 specifies vMGE identity through direct regulation of TF genes expressed in the vMGE and POA: (1) Otx2 autoregulates its transcription in the MGE and POA. (2) Otx2 drives vMGE expression of Sall3, Tal2, and Tll2; OTX2 has binding sites near these genes.
(3) Otx2 represses POA identity in the MGE by blocking Dbx1, Slit2, and Sox3 expression; there is OTX2 binding near Dbx1. Note that Nkx2.1 expression persists in vMGE progenitors in Otx2 mutants; thus, Otx2 and Nkx2.1 may specify vMGE identity via parallel pathways. vMGE respecification in Otx2 mutants has consequences for subpallial development, including GP agenesis. There is dorsal and rostral expansion of POA progenitor and neuronal properties. Other vMGE neuronal cell types are reduced, including Lhx6+ neurons in the ventral pallidum and cholinergic neurons in the nucleus basalis, diagonal band, and striatum. Otx2 also impacts patterning of the telencephalon along the rostrocaudal axis. RPC Fgf expression domains are expanded rostrally by E11.5 in Otx2 mutants. COUP-TF1, a TF expressed in the caudal MGE, is repressed by Otx2, as COUP-TF1 is ectopically expressed rostrally by E13.5 in the mutants. In addition, several POA markers expand rostrally as well as dorsally into the MGE. Thus, Otx2 controls both rostrocaudal and D-V MGE patterning. In summary, Otx2 is essential in the E8.5–E13.5 telencephalon for regional specification of the RPC and vMGE and for MGE neurogenesis and oligodendrogenesis. In the absence of Otx2, the RPC takes on MHB properties and the vMGE takes on POA properties, leading to Se, GP, and cholinergic deficits. OTX2 ChIP-seq provided evidence for direct mechanisms through which Otx2 controls regional and cell-type identity in the subpallium. We used the following published mouse lines: Fgf8neo, non-hypomorphic Otx2f, RxCre, Nkx2.1Cre, Fgf8CreER. Otx2f/+ mice were crossed to βactin::Cre to generate Otx2+/− mice. Unless otherwise specified, conditional knockouts were of the genotype Otx2f/−; Cre+, generated by crossing Otx2f/f mice to Cre lines maintained on an Otx2+/− background. Mice were maintained in social cages in a specific-pathogen-free barrier facility at the University of California, San Francisco on a 12-hr light/dark cycle with free access to food and water. All animal care and procedures were performed according to the University of California, San Francisco Laboratory Animal Research Center guidelines. For embryonic experiments, day 0.5 was designated as noon on the day a vaginal plug was observed. At the time of the experiment, mice were euthanized by CO2 inhalation followed by cervical dislocation. Embryonic heads or isolated brains were fixed overnight in 4% paraformaldehyde (PFA), transferred to 30% sucrose for cryoprotection, and then embedded and frozen in OCT for cryosectioning. Section thickness ranged from 10 to 20 μm depending on stage. For postnatal experiments, animals were anesthetized with intraperitoneal Avertin and perfused transcardially with 1× PBS and with 4% PFA, followed by brain isolation, fixation, cryoprotection, and freezing/embedding. We used the following antibodies: ChAT, Otx2, Tuj1, pH3, parvalbumin, and somatostatin. Cryosections were rinsed in PBS, blocked in 10% normal serum/PBST, incubated in primary antibody overnight, washed in PBST, incubated in secondary antibody for 1–3 hr, and washed in PBS. For fluorescence detection, we used Alexa-488- and Alexa-Fluor-594-conjugated secondary antibodies. For colorimetric detection, biotinylated secondary antibodies were used with the ABC/diaminobenzidine detection method. For ChAT IHC, antigen retrieval was achieved by incubating slides in 2.94 g/l trisodium citrate dihydrate, 0.05% Tween-20 for 15 min at 90°C. Blocking and antibody incubations were done in 1% BSA in PBST.
Sections were incubated for two days at 4°C with primary antibody, and signal was amplified with biotinylated anti-goat prior to fluorescent detection with streptavidin-594. For OTX2 IHC, we modified the IHC protocol according to the recommendations of Yuki Muranishi in the Furukawa laboratory. Briefly, antigen retrieval was achieved as for ChAT IHC, and samples were blocked in 4% donkey serum in PBST. We performed ISH on a minimum of n = 2 and n = 3 biological replicates for controls and mutants, respectively. In each case, a rostrocaudal series of at least ten sections was examined. Reduced expression was interpreted as reduced RNA per cell, unless otherwise stated. Section ISHs were performed using digoxigenin-labeled riboprobes as described previously, with the following modifications. Prior to acetylation, sections were incubated with proteinase K and post-fixed in 4% PFA. Slides were equilibrated in NTT prior to antibody incubation and then washed in NTT 3 × 30 min at room temperature. They were then washed three times in NTTML and transferred to BM purple for colorimetric detection. Slides were rinsed in water, then postfixed, dehydrated, incubated briefly in xylene, and coverslipped using Permount. Acetylation buffer consisted of 1.33% triethanolamine, 0.065% HCl, and 0.375% acetic anhydride. Riboprobe block/hybridization buffer consisted of 50% formamide, 5× SSC, 1% SDS, 50 μg/ml yeast tRNA, and 50 μg/ml heparin. Antibody blocking buffer consisted of 0.15 M NaCl, 0.1 M Tris, and 0.1% Tween-20. Subpallial tissue was microdissected from E12.5 female brains, snap frozen, and stored at −80°C. Total RNA was isolated using the QIAGEN RNeasy kit. RNA was amplified with Agilent low RNA input fluorescent linear amplification kits, and cRNA was assessed using the NanoDrop ND-100. Equal amounts of Cy3-labeled target were hybridized to Agilent whole mouse genome 4 × 44K Ink-jet arrays by the UCSF Genomics Core, who then performed the differential gene expression analysis. Significant changes in gene expression were defined as a B value greater than zero. B, the log10 posterior odds ratio, is the base-10 logarithm of the ratio of the probability that a given gene is differentially expressed to the probability that it is not. B ≥ 0 therefore means an equal or greater probability that a gene is DE versus non-DE. ChIP was performed using anti-OTX2. E12.5 CD1 GEs were fixed in 1.5% formaldehyde for 20 min and neutralized with glycine. Fixed chromatin was lysed and sheared into 200- to 1,000-bp fragments using a Bioruptor. Immunoprecipitation reactions were performed in duplicate using goat immunoglobulin G as negative controls. Precipitated fractions were purified using Dynabeads. Libraries were prepared using an Ovation Ultralow DR Multiplex System, size selected in the range of 200–300 bp on a LabChip, quality-control tested on a Bioanalyzer, and sequenced on a HiSeq. Reads from ChIP, input, and negative control libraries were mapped to the mouse genome using BWA, and peaks were called using model-based analysis for ChIP-seq (MACS), considering both input and IgG as the control sample, with filtering to remove peaks in repeat regions. For downstream analysis of ChIP-seq data, only peaks that overlapped in each of the three OTX2 ChIP-seq replicates were selected. Nucleotide motifs were identified using the Regulatory Sequence Analysis Tools peak-motifs tool. Gene Ontology terms for biological process and molecular function were computed using GREAT.
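As a concrete illustration of the replicate-filtering step just described (keeping only peaks that overlapped in each of the three OTX2 ChIP-seq replicates), the sketch below retains a replicate-1 peak only if it overlaps at least one peak in each of the other two replicates. It is a simplified stand-in for standard interval tools such as bedtools intersect; the intervals shown are hypothetical.

```python
# Minimal sketch of the replicate-overlap filter: retain a replicate-1 peak only
# if it overlaps a peak in replicate 2 AND a peak in replicate 3. In practice
# this is typically done with bedtools/pybedtools; intervals here are hypothetical.
from collections import defaultdict

def overlaps(a, b):
    """a, b are (chrom, start, end) tuples; half-open interval overlap test."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def index_by_chrom(peaks):
    idx = defaultdict(list)
    for p in peaks:
        idx[p[0]].append(p)
    return idx

def consensus_peaks(rep1, rep2, rep3):
    idx2, idx3 = index_by_chrom(rep2), index_by_chrom(rep3)
    kept = []
    for p in rep1:
        in2 = any(overlaps(p, q) for q in idx2[p[0]])
        in3 = any(overlaps(p, q) for q in idx3[p[0]])
        if in2 and in3:
            kept.append(p)
    return kept

# Hypothetical example intervals (chrom, start, end):
rep1 = [("chr2", 100, 300), ("chr2", 5000, 5200)]
rep2 = [("chr2", 250, 450), ("chr2", 9000, 9100)]
rep3 = [("chr2", 120, 280)]
print(consensus_peaks(rep1, rep2, rep3))  # -> [('chr2', 100, 300)]
```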
R.V.H. designed, conducted, and analyzed data for all experiments described in this manuscript, except the ChIP-seq study. S.L. performed and helped to analyze ChIP-seq experiments and provided comments on the manuscript. J.D.P. performed informatics analyses of the ChIP-seq data. J.L.R.R. provided funding and laboratory resources for this study and helped guide the project and analyze results. R.V.H. and J.L.R.R. prepared the manuscript. | The Otx2 homeodomain transcription factor is essential for gastrulation and early neural development. We generated Otx2 conditional knockout (cKO) mice to investigate its roles in telencephalon development after neurulation (approximately embryonic day 9.0). We conducted transcriptional profiling and in situ hybridization to identify genes de-regulated in the Otx2 cKO ventral forebrain. In parallel, we used chromatin immunoprecipitation sequencing to identify enhancer elements, the OTX2 binding motif, and de-regulated genes that are likely direct targets of OTX2 transcriptional regulation. We found that Otx2 was essential in septum specification, regulation of Fgf signaling in the rostral telencephalon, and medial ganglionic eminence (MGE) patterning, neurogenesis, and oligodendrogenesis. Within the MGE, Otx2 was required for ventral, but not dorsal, identity, thus controlling the production of specific MGE derivatives. |
384 | The combining effects of ausforming and below-Ms or above-Ms austempering on the transformation kinetics, microstructure and mechanical properties of low-carbon bainitic steel | A superfine bainitic structure, composed of nano-scale bainitic ferrite laths and thin films of retained austenite between the laths, can be obtained in high-C rich-Si steels. Such a nano-scale superfine bainitic structure possesses an ultimate tensile strength in excess of 2.0 GPa and noticeable uniform elongation in the range of 5–20%, making it a likely candidate to satisfy industrial demands for advanced steels. This superfine bainitic structure, i.e. so-called low-temperature bainite, is generally obtained by isothermal treatment at an extremely low temperature, usually below 300 °C but above the martensite start temperature (Ms). Therefore, nano-structured bainitic steels are often designed to contain a higher carbon content and thus to have a lower Ms, so that austempering can be performed at temperatures as low as possible. However, this in turn leads to a much prolonged bainite transformation duration, which is a hindrance for industrial applications of such steels. Recently, researchers have attempted to transfer the nano-scale bainite concept to low-carbon steels by designing low-C bainitic microstructures. A decrease in carbon content is known to increase the driving force and thus accelerate the bainitic transformation rate. Besides, a steel with a low carbon content has the advantage of improved weldability. However, low-C steels usually have a higher Ms; thus, it is not feasible to produce nano-scale low-temperature bainite via conventional above-Ms austempering. For example, a series of low-C high-Ni steels have been designed by adding substitutional solutes of Ni to decrease the Ms. Unfortunately, the bainite obtained was coarsened due to the coalescence of fine bainitic laths during austempering. Furthermore, multi-step austempering has been attempted to produce a superfine bainitic structure in low-C steels. However, the complex multi-step procedure unfavorably prolongs the total heat treatment time, and the thicknesses of the resulting bainitic laths, on the order of submicrons, remain not satisfactorily refined. As a consequence, it was claimed by Bhadeshia that the prospects for the design of low-C nano-scale bainitic structure do not look promising. Ausforming is a thermo-mechanical process for microstructure refinement, by which supercooled austenite is plastically deformed prior to bainite or martensite transformation. A number of studies have been devoted to the effects of ausforming on the kinetics of bainite transformations and the amount, morphology and crystallography of the transformation products. It has been shown that prior ausforming can effectively enhance the yield strength of supercooled austenite, which in turn hinders the growth of bainitic ferrite and leads to refined bainitic laths in the final microstructure. Additionally, ausforming may increase or decrease the volume fraction of transformed bainitic sheaves, depending on the steel, strain level, and deformation temperature. Nevertheless, most of the previous studies on ausforming dealt with medium/high-carbon steels, and only a few addressed steels containing a lower carbon content of ∼0.2 wt% or below. Furthermore, conventional austempering treatment for bainite transformation is often performed at temperatures above Ms.
Alternatively, according to the literature, isothermal transformation may also occur below Ms, and the resulting microstructures are mainly bainite mixed with tempered martensite. For low-C steels, it was observed that below-Ms austempering notably refines the bainitic structure compared with above-Ms austempering; however, the thicknesses of bainitic laths obtained by below-Ms austempering alone are still not satisfactorily fine. It is understood that the features of bainitic laths, retained austenite and/or martensite/austenite blocks are all important factors affecting the mechanical properties of bainitic steels. These microstructural features are in turn affected by carbon content, austempering temperature and ausforming of austenite. According to the T0-curve theory, in high-C bainitic steels there will be more supercooled austenite untransformed at the finish of isothermal treatment at a given austempering temperature because of the increased incompleteness of bainitic transformation. In addition, there are more carbon atoms available in high-C steels to partition from the formed bainitic ferrite to the untransformed austenite and accordingly increase its thermal stability. Consequently, more austenite, mostly in large blocks, will be retained at room temperature. However, for low-C bainitic steels, there will be less supercooled austenite untransformed at the finish of isothermal treatment because of the decreased incompleteness of bainite transformation. As a consequence, less austenite, with fewer large blocks, will be retained at room temperature. Furthermore, prior ausforming may further complicate the bainite transformation of, in particular, low-C steels, since it may affect the amount of untransformed austenite at the finish of austempering through mechanical stabilization of austenite. In this circumstance, if larger amounts of austenite are untransformed on austempering, most of the austenite grains, especially in the central region, may not be sufficiently enriched with carbon, because the total amount of carbon in low-C steels is limited or because the surrounding carbon atoms are not sufficiently partitioned to the austenite. As such, these austenite grains, especially in the central region, tend to transform into martensite due to their low stability, with less austenite retained at room temperature. Moreover, with prior ausforming, whether the subsequent austempering is carried out below Ms or above Ms further increases the unpredictability of the final transformation products. The purpose of the present paper is to systematically investigate the isothermal transformation kinetics, microstructure and mechanical properties of a low-C Si-rich bainitic steel subjected to below-Ms or above-Ms austempering with or without prior ausforming, and to explore the possibility of producing a superfine bainitic structure with improved mechanical properties, with emphasis on understanding the different combining effects of ausforming and below- or above-Ms austempering. The outcome of this study is expected to provide an approach for designing and preparing superfine bainitic steels with low carbon content, facilitating the application of such advanced steels. The steel employed in this study was melted in a vacuum induction furnace. The steel billet was homogenized at 1200 °C for ∼6 h and subsequently hot-forged into square bars at a finishing temperature of 960 °C. The chemical composition of the steel is shown in Table 1.
The addition of Si was to suppress the precipitation of cementite during bainitic transformation so as to obtain carbide-free bainite; the alloying elements Mn, Cr, Ni and Mo were added to increase the hardenability, hence ensuring that the supercooled austenite does not decompose into diffusion-controlled products during the cooling and ausforming processes. According to the thermo-mechanical processes shown in Fig. 1, dilatometric measurements were carried out on a Gleeble-3500 thermo-mechanical simulator to examine the bainitic transformation behavior and to measure the Ms temperature, which is probably affected by ausforming. The samples for dilatometric measurements were 5 mm in diameter and 10 mm in length in the heated parallel section; dilation was measured by attaching a C-gauge to record the dimensional change in the radial direction. Specifically, for the austempering process without prior ausforming, after full austenitization at 960 °C for 20 min, samples were quickly cooled down to the austempering temperature of 400 °C or 355 °C with high-pressure air and held isothermally for 20 min. For the combined processes of prior ausforming and austempering, after full austenitization the samples were cooled to 600 °C or 400 °C immediately and deformed compressively at a thickness reduction of 50% and a strain rate of 1 s−1 before cooling down to the designated temperatures for austempering. Tensile tests were performed on a Shimadzu test machine at a crosshead speed of 0.3 mm/min. Plate tensile specimens, with a parallel section of 8 × 2 × 2 mm3, were taken from the austempered samples with or without prior ausforming, and the surfaces of the specimens were carefully ground and polished. It should be noted that for ausformed samples, the specimen length is along the rolling direction. An axial extensometer with a gauge length of 5 mm was attached to the specimens for measuring strain. Impact toughness was measured using non-standard Charpy U-notched specimens with dimensions of 5 × 10 × 55 mm3 according to ASTM E23. Specimens for impact toughness tests were taken with the specimen length along the rolling direction. For each process, two tensile specimens and three impact toughness specimens were tested at room temperature. Longitudinal-section microstructures were examined by optical microscopy (OM), scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Samples for OM and SEM observations were prepared by careful mechanical polishing to smooth, mirror-like surfaces in successive stages with silicon carbide papers from 320 to 2000 grit. OM was used to examine the prior austenite grain boundaries of ausformed and non-ausformed samples. After polishing, OM samples were etched using an etchant composed of saturated picric acid, sodium dodecyl benzene sulphonate and distilled water, heated at 60 °C for 40–60 s in a water bath. SEM samples were etched with 3% nital solution after polishing. TEM samples were prepared by cutting samples into 300 μm thick slices, followed by mechanical grinding down to ∼30 μm in thickness. Thin disks of 3 mm in diameter were twin-jet electropolished to perforation on a TenuPol-5 twin-jet unit at a voltage of 27 V. The electrolyte for electropolishing consisted of 7% perchloric acid and 93% glacial acetic acid. The perforated foils were examined using TEM at an operating voltage of 200 kV. Fracture surfaces of the impacted samples were observed by SEM to identify the fracture mode. Constituent phases were analyzed by X-ray diffraction (XRD) using an X-ray diffractometer with unfiltered Cu Kα radiation. All XRD samples were mechanically polished carefully, and a thin surface layer was subsequently removed by chemical etching with 2% nital solution to avoid martensite possibly induced by the mechanical grinding operation, so as to obtain more precise data. Specimens were step scanned at a rate of 2°/min over the range 20–120°. The volume fraction of retained austenite was determined using the integrated intensities of three austenite (γ) peaks and three ferrite (α) peaks. Three XRD specimens were tested for each condition to minimize experimental error.
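A minimal sketch of the retained-austenite quantification just described is given below: the volume fraction is estimated from the integrated intensities of several austenite and ferrite reflections, each normalized by its theoretical intensity factor R. The intensity values and R factors shown are hypothetical placeholders (R depends on the reflection, radiation and structure factors, e.g. as tabulated in ASTM E975), so the sketch illustrates only the form of the calculation, not the paper's numbers.

```python
# Sketch of the standard multi-peak XRD estimate of retained austenite:
#   V_gamma = mean(I_g/R_g) / [ mean(I_g/R_g) + mean(I_a/R_a) ]
# Integrated intensities I and theoretical factors R below are hypothetical.

def retained_austenite_fraction(austenite, ferrite):
    """austenite/ferrite: lists of (integrated_intensity, R_factor) pairs."""
    g = sum(i / r for i, r in austenite) / len(austenite)
    a = sum(i / r for i, r in ferrite) / len(ferrite)
    return g / (g + a)

# Hypothetical measured intensities (arbitrary units) and R factors:
austenite_peaks = [(30.0, 35.0), (12.0, 15.0), (15.0, 20.0)]      # three gamma reflections
ferrite_peaks = [(900.0, 100.0), (160.0, 18.0), (140.0, 16.0)]    # three alpha reflections

v_gamma = retained_austenite_fraction(austenite_peaks, ferrite_peaks)
print(f"Retained austenite ~ {100 * v_gamma:.1f} vol.%")
```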
Fig. 2 shows the dilation-temperature curves of the specimens during cooling and austempering with or without prior ausforming. All the curves appear to be linear throughout the cooling process, until the temperature of 400 °C for above-Ms austempering or until the Ms temperature for below-Ms austempering. For the above-Ms austempering, large isothermal dilation occurs at 400 °C, corresponding to the conventional bainite transformation. For the below-Ms austempering, gradual dilation occurs as the temperature decreases from Ms to 355 °C; this is associated with the formation of athermal martensite. This gradual dilation is then followed by a sudden increase in dilation at 355 °C, which is ascribed to the below-Ms isothermal bainite transformation. The Ms temperatures for specimens with or without prior ausforming were determined from the dilation curves in Fig. 2 using the tangent method. The results are given in Table 2. The Ms for the specimen without prior ausforming is 383 °C. With prior ausforming, the Ms drops; as the ausforming temperature decreases from 600 to 400 °C, the Ms drops from 378 to 367 °C. These observations are consistent with the literature. As is known, the martensite transformation takes place in a displacive mode and can be suppressed by an increase in the strength of the supercooled austenite. Ausforming increases the strength of the supercooled austenite, which in turn increases the shear resistance of the austenite-to-martensite transformation and accordingly decreases the Ms. Furthermore, decreasing the ausforming temperature further strengthens the supercooled austenite and thus further decreases the Ms. During the continuous cooling to even lower temperatures, all dilation curves deviate upwards from the straight lines. These upward deviations indicate further decomposition of the untransformed austenite into martensite; the larger the deviation, the more martensite is formed from the untransformed austenite. The above-Ms austempered samples exhibit significantly larger deviations than the below-Ms austempered samples, indicating that more martensite is formed in the former than in the latter during final cooling from the isothermal treatment to room temperature.
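The Ms values in Table 2 were obtained with the tangent method on these cooling curves; a simplified version of that idea is sketched below: fit the linear thermal-contraction segment above the transformation and take Ms as the temperature at which the measured dilation first departs from the extrapolated line by more than a small offset. The synthetic data, the fitting window and the offset threshold are illustrative assumptions, not the authors' exact procedure.

```python
# Simplified offset/tangent-style estimate of Ms from a dilation-temperature curve:
# fit the linear contraction above the transformation, then find where the measured
# dilation departs from the extrapolated line. Synthetic data and the thresholds
# below are illustrative assumptions only.
import numpy as np

def estimate_ms(temp, dil, fit_window=(450.0, 600.0), offset=5e-5):
    temp = np.asarray(temp, dtype=float)
    dil = np.asarray(dil, dtype=float)
    mask = (temp >= fit_window[0]) & (temp <= fit_window[1])
    slope, intercept = np.polyfit(temp[mask], dil[mask], 1)
    residual = dil - (slope * temp + intercept)
    for i in np.argsort(temp)[::-1]:          # scan from high to low temperature
        if temp[i] < fit_window[0] and residual[i] > offset:
            return temp[i]
    return None

# Synthetic cooling curve: linear contraction plus a transformation "knee" near 383 degC
T = np.linspace(700.0, 300.0, 401)
dilation = 1.2e-5 * (T - 700.0)                                    # thermal contraction
dilation = dilation + np.where(T < 383.0, 1e-4 * (383.0 - T), 0.0)  # expansion below Ms

print(f"Estimated Ms ~ {estimate_ms(T, dilation):.0f} degC")
```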
Fig. 3 shows the dilatation-time curves of the specimens treated by the various processes. It should be noted that the dilatometric curves here show only the initial period of isothermal treatment rather than the entire austempering process. The total bainite transformation times were obtained using the tangential line method on the whole dilation-time curves, as shown in Table 2. It is observed that the bainite transformation in all specimens is finished in less than 7 min, and that the isothermal transformation behaviors, including the incubation period, finishing time and transformation rate, are influenced largely by the thermo-mechanical processes. For the above-Ms austempering, regardless of prior ausforming, the dilatation curves are characterized by the presence of an incubation period and a gradual dilation, indicating the conventional time-dependent bainite transformation. For the below-Ms austempering, the initial explosive expansion, which is caused by the athermal martensite transformation, is followed by the time-dependent bainite transformation; however, no obvious incubation period is observed. This indicates that below-Ms austempering accelerates the bainite transformation by shortening the incubation period. Furthermore, the incubation period for bainite transformation is also shortened by ausforming, and decreasing the ausforming temperature shortens it further. This is apparent, in particular, for the above-Ms austempered specimens. However, such shortening of the incubation period by prior ausforming can hardly be observed for the below-Ms austempered specimens, probably because the incubation time is already nearly zero in the absence of prior ausforming, so that the shortening effect of ausforming is obscured. Moreover, as shown in Table 2, the total bainite transformation time is clearly reduced by ausforming, and by decreasing the ausforming temperature, regardless of above- or below-Ms austempering. The dilation rate curves of the various specimens are shown in Fig. 4. In the case of above-Ms austempering, a larger maximum transformation rate and a shorter time to reach the maximum transformation rate are observed for specimens with prior ausforming. With decreasing ausforming temperature, the maximum transformation rate is further increased and the time to reach it is further reduced. In sharp contrast, in the case of below-Ms austempering, the maximum transformation rate always occurs at the very start of isothermal holding, regardless of prior ausforming. Fig. 5 shows the OM micrographs of austenite grain boundaries in the samples without ausforming and with ausforming at 600 °C or 400 °C. Clearly, equiaxed austenite grains are observable in the non-ausformed samples, whereas ausforming causes pancaking of the supercooled austenite grains. This means that the austenite grain boundary area per unit volume is increased by prior ausforming.
Fig. 6 shows the SEM micrographs of the specimens treated by the different processes. The microstructures are all composed of lath bainite sheaves, retained austenite, and/or martensite/austenite (M/A) blocks. The M/A blocks result from blocky austenite which has not transformed into bainite at the finish of isothermal transformation but partially transforms into martensite during cooling to room temperature because of its low stability. The formation of such athermal martensite after isothermal transformation is confirmed by the upward deviations of the dilation curves in the later stage of cooling. The martensite formed in the samples is further verified by TEM images, as typically shown in Fig. 7. The area fraction and size distribution of the M/A blocks, measured from multiple SEM images, are shown in Figs. 8 and 9, respectively. It is observed that much smaller amounts of M/A blocks are present in the below-Ms than in the above-Ms austempered samples, regardless of prior ausforming. Furthermore, for the above-Ms austempered samples, prior ausforming largely increases the amount of large M/A blocks; however, for the below-Ms austempered samples, prior ausforming significantly decreases the amount and size of M/A blocks. It should be noted that the correction factor of 2/π proposed by Mack was based on the assumption of isotropic growth, which does not apply to cases where there is a higher probability of bainitic growth in certain directions. Therefore, considering the possible anisotropy of bainite sheaves in the ausformed samples, the lath thicknesses in both transverse and longitudinal sections were measured and their average values calculated, aiming to obtain values close to the true ones. For each thermo-mechanical process, and for either transverse or longitudinal sections, at least 300 measurements were made on multiple images to obtain statistically representative results. The obtained results are shown in Table 3. The tB-L is slightly smaller than the tB-T for all the ausformed samples. These results are consistent with the literature. Judging from the average values of tB, prior ausforming at either 600 °C or 400 °C largely reduces the thickness of the bainitic laths in both the above-Ms and below-Ms austempered samples. Furthermore, under the same condition of ausforming or non-ausforming, the tB values in the below-Ms austempered samples are much smaller than in the above-Ms austempered samples. In particular, a superfine bainitic structure with bainite laths of ∼100 nm in thickness is obtained by the coupled process of prior ausforming at 400 °C and below-Ms austempering at 355 °C. The XRD patterns of the various samples are shown in Fig. 11. Only bcc- and fcc-structured phases, without diffraction peaks of carbides, are observable. These XRD results indicate that carbides are either absent or present in amounts too small to detect, consistent with the TEM observations. The volume fractions of retained austenite (Vγ), as calculated from the XRD patterns, are given in Table 3. In the case of non-ausforming, the Vγ in the below-Ms austempered samples is 7.5%, which is slightly higher than that in the above-Ms austempered samples, 6.3%. With prior ausforming at either 600 °C or 400 °C, the Vγ in the below-Ms austempered samples is largely increased to as high as ∼12%; in sharp contrast, in the above-Ms austempered samples, the Vγ is significantly decreased to as low as ∼4%.
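The stereological correction mentioned earlier in this passage can be made concrete with a short worked example: under Mack's isotropic assumption, the true lath thickness is t = (2/π) times the mean lineal intercept measured on the section. The intercept value below is hypothetical and chosen only to show the arithmetic.

```python
# Worked example of the 2/pi stereological correction (Mack) for lath thickness:
# true thickness t = (2/pi) * mean lineal intercept measured on the section.
# The intercept value is a hypothetical illustration.
import math

mean_intercept_nm = 157.0                              # hypothetical mean lineal intercept
true_thickness_nm = (2.0 / math.pi) * mean_intercept_nm
print(f"Corrected lath thickness ~ {true_thickness_nm:.0f} nm")   # ~100 nm
```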
The stress-strain curves of the various samples are shown in Fig. 12, and the corresponding mechanical properties are summarized in Table 4. Without prior ausforming, the below-Ms and above-Ms austempered samples show similar tensile properties, though the former exhibit slightly higher yield strength and tensile ductility. With prior ausforming, the yield and tensile strengths of both the below- and above-Ms austempered samples increase markedly; however, the ductility of the former samples remains almost unchanged while the latter show significantly decreased ductility. As a consequence, the product of strength and ductility (PSD) of all the below-Ms austempered samples with prior ausforming reaches ∼43 GPa%, whereas the above-Ms austempered samples with prior ausforming exhibit a smaller PSD of ∼33 GPa%. The impact toughnesses of the various samples are shown in Fig. 13. Without prior ausforming, the below-Ms austempered samples exhibit an impact toughness of 152 J/cm2, which is much higher than that of the above-Ms austempered samples, 109 J/cm2. With prior ausforming at 400 °C and 600 °C, the impact toughnesses of the below-Ms austempered samples are 149 and 178 J/cm2, respectively; however, for the above-Ms austempered samples, the impact toughnesses decrease to 74 and 80 J/cm2, respectively. These findings clearly demonstrate that prior ausforming largely decreases the impact toughness of the above-Ms austempered samples, but maintains or even increases that of the below-Ms austempered samples. Typical SEM fractographs of the impact toughness specimens are shown in Fig. 14. Regardless of prior ausforming, the fracture surfaces of the above-Ms austempered samples are characterized by quasi-cleavage facets, indicative of mainly brittle fracture, whereas the fractographs of the below-Ms austempered samples feature many dimples, typical of predominantly ductile fracture. Specifically, for the above-Ms austempered samples without prior ausforming, plenty of quasi-cleavage facets, with only a few small shallow dimples, are visible. With prior ausforming, the fractographs of the above-Ms austempered samples become much flatter, with fewer dimples observable. This observation corresponds well with their poor impact toughness. By contrast, in the below-Ms austempered samples, a large number of dimples are visible, regardless of prior ausforming. Furthermore, the dimples in the below-Ms austempered samples with prior ausforming at 600 °C are even finer and denser. However, in the below-Ms austempered samples with prior ausforming at 400 °C, bands of dimples are distributed alternately with bands of quasi-cleavage facets. These dimple bands and quasi-cleavage facet bands, both parallel to the rolling plane, may result from the non-uniformly deformed microstructure caused by ausforming. Because ausforming is performed at the relatively low temperature of 400 °C, the large number of crystal defects generated in the supercooled austenite are difficult to recover, thus causing severely non-uniform deformation bands. Microcracks are thus more likely to initiate from these micro-bands and propagate along them, which deteriorates the impact toughness. The different combining effects of ausforming and below-Ms or above-Ms austempering on the kinetics of isothermal transformation, microstructural evolution and mechanical properties of a low-C rich-Si steel have been demonstrated above.
The affecting mechanisms for the various thermo-mechanical processes are summarized in Fig. 15 and discussed in detail in the following sections. It has been demonstrated that prior ausforming refines the bainitic laths in both the above- and below-Ms austempered samples. These observations are associated with two major factors: the number of nucleation sites and the growth of bainitic ferrite. According to the literature, ausforming increases the boundary area of austenite grains per unit volume and generates more crystal defects in the supercooled austenite, both of which provide more nucleation sites for bainite transformation. An increased number of nucleation sites leads to refinement of the final bainite size through the impingement effect. Besides, ausforming, i.e. plastic deformation, can strengthen the supercooled austenite, and decreasing the ausforming temperature further increases the degree of strengthening. An increase in yield strength by ausforming means a higher mechanical stabilization of the supercooled austenite against bainite transformation, which has been proven effective in reducing the size of bainitic laths. Thus, strengthening of the supercooled austenite by ausforming results in thinner bainitic laths. In this scenario, the plates of bainitic ferrite are also expected to become even thinner when ausforming is performed at a lower temperature, because decreasing the ausforming temperature tends to further increase the strength of the supercooled austenite. Under the same condition of ausforming or non-ausforming, below-Ms austempering, relative to above-Ms austempering, plays an additional significant role in refining the bainitic laths. This may be understood as follows. Firstly, the athermal martensite, which forms during cooling from the Ms to the designated isothermal temperature, produces interfaces between the martensitic laths and the remaining supercooled austenite. Such interfaces hence provide abundant heterogeneous nucleation sites for bainite transformation. Secondly, relative to above-Ms austempering, below-Ms austempering causes an increase in the driving force for bainite transformation, associated with a larger undercooling, and thus provides a larger number of bainite nucleation sites. Therefore, the combined process of prior ausforming and below-Ms austempering has a favorable coupling effect on refining the bainitic structure, via increasing the number of nucleation sites for bainite transformation and suppressing the growth of bainitic laths. As demonstrated in Section 3.3, the volume fraction of retained austenite in the above-Ms austempered samples with prior ausforming decreases from 6.3% to as low as ∼4%; however, for the below-Ms austempered samples with prior ausforming, the Vγ increases from 7.5% to as large as ∼12%. As is known, prior ausforming increases the nucleation sites for bainite transformation, which may aid in increasing the amount of bainite. Alternatively, ausforming also leads to an increase in the strength of the supercooled austenite, which in turn increases the shear resistance of the austenite-to-bainite transformation, hence suppressing the displacive growth of bainite and increasing the mechanical stabilization of austenite against bainite transformation. If the latter plays the controlling role, prior ausforming tends to decrease the amount of transformed bainite and leave more of the supercooled austenite untransformed at the finish of bainite transformation.
Hence, most of the untransformed austenite (UA), especially in its central region, may not be sufficiently enriched with carbon, given that insufficient carbon is available in the low-C steel. Thus, a lower carbon concentration is expected in the UA due to prior ausforming. To confirm this supposition, the portion of the XRD patterns containing the γ peaks in Fig. 11 is magnified and re-examined, as shown in Fig. 11b. Clearly, the γ peaks of all the ausformed samples, relative to those of the non-ausformed samples, shift noticeably to the right (towards higher 2θ angles), indicating that prior ausforming significantly decreases the carbon concentration of the retained austenite, although elastic strains due to prior ausforming may also be partly responsible for the displacement of the XRD peaks. It is known that a lower carbon concentration results in lower thermal stability of austenite. As such, with prior ausforming, the UA at the finish of bainite transformation, especially in its central region, is more likely to transform into martensite when cooled to room temperature because of its low carbon content and low stability, leaving less of the austenite retained. Indeed, this is true for the above-Ms austempered samples with prior ausforming. Alternatively, it has been recognized that enhanced thermal stability also results from refinement of the austenite grain size, which is another strong austenite stabilizer. As discussed in Section 4.1.1, for the below-Ms austempered samples with prior ausforming, the resultant bainitic laths are much refined, associated with below-Ms austempering. This, in turn, greatly refines the UA at the finish of isothermal transformation. Thus, with prior ausforming, more of the greatly refined UA in the below-Ms austempered samples tends to be retained at room temperature, with less of it transformed into martensite, although the carbon concentration in the UA is also not as high. It has been shown that prior ausforming largely decreases the amount and size of M/A blocks in the below-Ms austempered samples, but significantly increases the amount of large M/A blocks in the above-Ms austempered samples. These observations are closely associated with the nucleation rate of bainite and the incompleteness of bainite transformation. For the above-Ms austempered samples, prior ausforming tends to increase the incompleteness of bainite transformation, thus increasing the amount and size of UA blocks at the finish of bainite transformation. These UA blocks, exhibiting lower stability due to their larger size and lower carbon concentration, are prone to transform into larger M/A blocks when cooled to room temperature. Therefore, many large M/A blocks are clearly visible in the above-Ms austempered samples with prior ausforming. However, for the below-Ms austempered samples, in spite of the similarly enhanced incompleteness of bainite transformation by ausforming, the UA blocks are much refined because of several factors. These factors include the increased number of crystal defects produced by prior ausforming and the increased driving force for bainite transformation due to the relatively lower isothermal temperature. Additionally, the martensitic laths that form during cooling to the below-Ms isothermal temperature produce more interfaces between the formed martensite laths and the supercooled austenite. The combined effects of these factors greatly increase the number of heterogeneous nucleation sites for bainite transformation, which favors dividing the larger blocks of UA into smaller ones, leaving fewer large M/A blocks visible at room temperature.
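The inference drawn from the peak shift in Fig. 11b — that a rightward (higher-2θ) shift of the austenite peaks implies a smaller lattice parameter and hence a lower carbon content — can be illustrated with Bragg's law. The peak positions, the (111) indexing and the empirical lattice-parameter/carbon relation used below are assumptions for illustration only; the coefficients of such calibrations vary between published sources and are not taken from this paper.

```python
# Illustration of why a higher-2theta austenite peak implies lower carbon:
# Bragg's law gives d = lambda / (2 sin(theta)), and for a cubic cell
# a = d * sqrt(h^2 + k^2 + l^2). Empirical calibrations of the approximate form
# a_gamma [A] ~ a0 + k * wC (wt.%) then link a smaller a_gamma to less carbon.
# Peak positions, the (111) indexing and the calibration constants are
# illustrative assumptions, not values from the paper.
import math

LAMBDA_CU_KALPHA = 1.5406  # angstrom

def lattice_parameter(two_theta_deg, hkl=(1, 1, 1), wavelength=LAMBDA_CU_KALPHA):
    theta = math.radians(two_theta_deg / 2.0)
    d = wavelength / (2.0 * math.sin(theta))
    return d * math.sqrt(sum(i * i for i in hkl))

def carbon_from_a(a, a0=3.578, k=0.033):
    """Hypothetical calibration a_gamma = a0 + k * wC (angstrom, wC in wt.%)."""
    return (a - a0) / k

# Hypothetical (111)gamma peak positions before/after ausforming:
a_non_ausformed = lattice_parameter(43.55)
a_ausformed = lattice_parameter(43.70)      # shifted to higher 2theta

print(f"a_gamma: {a_non_ausformed:.4f} A -> {a_ausformed:.4f} A")
print(f"estimated wC: {carbon_from_a(a_non_ausformed):.2f} -> "
      f"{carbon_from_a(a_ausformed):.2f} wt.%")
```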
Furthermore, it is noted that the area fractions of the M/A blocks measured from the SEM micrographs in the above-Ms austempered samples with prior ausforming exceed ∼30%, which is far more than the amount of retained austenite in these samples. This is because the M/A blocks in the above-Ms austempered samples with prior ausforming consist of a larger amount of fresh martensite and a smaller amount of retained austenite, as confirmed by the larger upward deviations of the dilation curves at the later stage of cooling and further verified by the TEM micrographs. In sharp contrast, the area fractions of the M/A blocks in the below-Ms austempered samples with prior ausforming are measured to be only ∼8%; this is even smaller than the amount of retained austenite, ∼12%. This observation may be associated with the resolution limit of the SEM micrographs: thin films of retained austenite between bainitic laths are not clearly measurable by SEM, and the sizes of some M/A blocks in the below-Ms austempered samples with prior ausforming are so small that they cannot be detected by SEM. The below-Ms austempered samples with prior ausforming have been demonstrated to show tensile properties superior to those of the other samples. Such superior tensile properties are associated with their unique microstructures and work-hardening behavior. The work-hardening exponent curves of the various samples during tensile deformation, as a function of true strain, are shown in Fig. 12. Obviously, the below-Ms austempered samples with prior ausforming show a more enhanced and sustained work-hardening exponent than all other samples. The coupled process of below-Ms austempering and prior ausforming not only greatly reduces the sizes of the bainitic laths and M/A blocks but also significantly increases the amount of retained austenite. The superfine bainitic laths should be one major reason for the improved yield strength, while the considerable amount of retained austenite gives rise to much enhanced and persistent strain-hardening via the improved transformation-induced plasticity (TRIP) effect during deformation, which in turn increases both the tensile strength and the ductility. In addition, the presence and tempering of prior athermal martensite in below-Ms samples has been reported to increase the yield strength but slightly decrease the strain-hardening capacity; however, the mechanisms by which tempered martensite affects the mechanical properties remain unclear and need further investigation. In comparison, above-Ms austempering combined with prior ausforming similarly reduces the size of the bainitic laths, but unfavorably increases the amount of large, hard M/A blocks and decreases the volume fraction of retained austenite. That is, the microstructure of the above-Ms austempered samples with prior ausforming consists mainly of fine laths of bainite and coarse blocks of M/A, along with less retained austenite. The fine bainitic laths together with the hard, coarse M/A blocks are responsible for the higher yield and tensile strengths of the above-Ms austempered samples with prior ausforming. However, the small volume fraction of retained austenite of Vγ = 4% gives rise to a limited TRIP effect, which should be a major reason for the poor ductility of these samples. The impact toughness of bainitic steels can be affected significantly by the amount, size and morphology of retained austenite and/or M/A blocks. It is generally accepted that thin films or small blocks of retained austenite play a positive role in relieving stress concentration and delaying crack initiation and growth, hence improving the impact toughness.
Conversely, large blocks of retained austenite or M/A have poor stability, tending to transform into martensite at small strains. These newly transformed martensite blocks have a high carbon content; they are untempered and thus hard and brittle. Hence, these martensite blocks favor the initiation and propagation of microcracks, tending to decrease the impact toughness. Therefore, it would be beneficial for the impact toughness to replace large blocks of retained austenite or M/A with thin films or small blocks of retained austenite as much as possible. In the below-Ms austempered samples with prior ausforming, the relatively larger amount of fine retained austenite, along with fewer large M/A blocks, should be one major reason for their superb impact toughness. By contrast, in the above-Ms austempered samples with prior ausforming, plenty of large M/A blocks along with less retained austenite are responsible for their poor impact toughness. Furthermore, the characteristics of the bainitic laths also significantly influence the impact toughness in various ways, in particular by varying the crack growth resistance. As claimed in the literature, bainitic laths with a variety of crystallographic orientations have a propensity for improving the toughness by frequently arresting or deflecting cracks and thus increasing the crack growth resistance. In the present study, prior ausforming at 400 °C, versus prior ausforming at 600 °C, leads to stronger orientation selection of the bainitic laths rather than multiple variants of bainite, as confirmed by Gong et al. Such stronger orientation selection is unfavorable for changing the crack path and suppressing crack growth. As a consequence, the below-Ms austempered samples with prior ausforming at 400 °C show lower impact toughness than in the case of prior ausforming at 600 °C. It has been demonstrated that the below-Ms austempered samples with prior ausforming exhibit an excellent combination of strength, ductility and impact toughness, as compared with the above-Ms austempered samples with or without prior ausforming. Since the formation of athermal martensite when cooling from Ms to the austempering temperature is inevitable, and this athermal martensite tends to be autotempered during the subsequent isothermal treatment, it is necessary to understand how this athermal martensite and its amount affect such properties. Furthermore, as reported in the literature, the volume fraction of athermal martensite formed at 20 °C below Ms is 16%, and it reaches 77% when austempering is performed at 50 °C below Ms, indicating that the amount of athermal martensite is sensitive to the austempering temperature below Ms.
Hence, controlling the below-Ms austempering temperature, and thus the amount of athermal martensite, is a critical issue and requires further study. The different combining effects of ausforming with below-Ms or above-Ms austempering on the isothermal transformation kinetics, microstructures and mechanical properties of a low-C bainitic steel have been investigated. The conclusions drawn are as follows: As compared with above-Ms austempering, below-Ms austempering, which is preceded by the formation of small amounts of athermal martensite during cooling from the Ms to the isothermal temperature, tends to shorten the incubation period and accelerate the bainite transformation rate. Ausforming, as well as decreasing the ausforming temperature, plays a role in accelerating the bainite transformation for both above- and below-Ms austempering. Below-Ms austempering, ausforming and decreasing the ausforming temperature all tend to reduce the thickness of the bainitic laths, thus refining the bainitic structure. The combined process of prior ausforming and above-Ms austempering increases the amount of large, unwanted brittle M/A blocks and decreases the volume fraction of retained austenite to as low as ∼4%. In contrast, the combined process of prior ausforming and below-Ms austempering decreases the amount and size of M/A blocks and increases the volume fraction of retained austenite to as high as ∼12%. The below-Ms austempered samples with prior ausforming exhibit an excellent combination of strength and ductility, with a product of strength and ductility as high as ∼43 GPa%, in sharp contrast to the ausformed and above-Ms austempered samples showing a PSD of ∼33 GPa%. Furthermore, the impact toughness of the below-Ms austempered samples with prior ausforming approaches ∼180 J/cm2, more than twice that of the above-Ms austempered samples with prior ausforming, ∼80 J/cm2. The excellent mechanical properties of the below-Ms austempered samples with prior ausforming are exclusively due to the favorable effects of the coupled process on the microstructure. The synergetic role of prior ausforming and below-Ms austempering not only largely refines the bainitic laths, but also effectively reduces the size of unwanted brittle M/A blocks and increases the amount of retained austenite for improved TRIP effects. In particular, when austempering was performed at a below-Ms temperature of 355 °C with prior ausforming at 400 °C, a superfine microstructure with bainitic laths ∼100 nm in thickness, of the same order as in medium-carbon bainitic steels, was obtained. Such a superfine bainitic structure, obtained in a low-C steel at a much reduced transformation time, shows the potential of the coupled process of ausforming and below-Ms austempering for wide industrial applications. L. Zhao: Data curation; Formal analysis; Investigation; Writing - original draft; Writing - review & editing. L. Qian: Conceptualization; Funding acquisition; Investigation; Writing - original draft; Writing - review & editing. Q. Zhou: Data curation; Formal analysis; Investigation. D. Li: Data curation; Formal analysis. T. Wang: Data curation; Formal analysis. F. Zhang: Conceptualization; Investigation. J. Meng: Formal analysis; Investigation. | The isothermal transformation kinetics, microstructure and mechanical properties of a low-carbon bainitic steel, subjected to below-Ms/above-Ms austempering with or without prior ausforming, have been investigated via dilatometric measurements, microstructural characterization and mechanical tests.
The results show that for all austempered samples, prior ausforming largely refines the bainitic laths and enhances the mechanical stability of supercooled austenite, the latter leaving more austenite untransformed at the finish of isothermal treatment. Nevertheless, divergent consequences on the final microstructure arise: for the above-Ms austempered samples, prior ausforming increases the amount of large, unwanted brittle martensite/austenite blocks and decreases the volume fraction of retained austenite at room temperature; however, for the below-Ms austempered samples, prior ausforming decreases the size and amount of martensite/austenite blocks and increases the volume fraction of retained austenite. Accordingly, the below-Ms austempered samples with prior ausforming exhibit a product of strength and ductility of ~ 43 GPa% and impact toughness of ~180 J/cm2, in sharp contrast with those of the above-Ms austempered samples with prior ausforming, ~33 GPa% and ~80 J/cm2, respectively. The present results clearly demonstrate a favorable effect of the combined process of ausforming and below-Ms austempering against the adverse combining effect of ausforming and above-Ms austempering. |
385 | Varying cognitive targets and response rates to enhance the question-behaviour effect: An 8-arm Randomized Controlled Trial on influenza vaccination uptake | Using the effect size from Conner et al.'s study of the QBE and influenza vaccination, G*Power indicated that 1539 participants per condition would provide 95% power to detect a significant effect at an alpha of 0.05 using a two-tailed test. We recruited seven General Practices in northern England that were not taking part in a centralized influenza vaccination invitation scheme in Fall/Autumn 2012. The study population consisted of all patients in each practice eligible for an influenza vaccination that year by being aged 65 years or over at their next birthday. Patients were randomized individually to one of eight conditions by the second author using a random number generator but were not blinded to condition. A total of 15 patients were excluded to leave a final sample of 13,803. A total of 5095 completed questionnaires were returned from 12,076 distributed. Fig. 1 details the randomization, exclusions, and questionnaire return rates by condition. Examination of the sample sizes per condition indicates that our intention-to-treat analyses based on all respondents were appropriately powered. However, per-protocol analyses based on participants who completed and returned the questionnaires were underpowered. This study received ethical approval from NHS Ethics, was registered retrospectively, and all standard ethical procedures were applied. Participants in control condition 1 did not receive a questionnaire. Participants in control condition 2 received a questionnaire tapping whether they had children, their occupation, marital status, and ethnic origin. Participants in the other six conditions received questionnaires tapping the same demographic questions plus questions about influenza vaccination: intention + attitude questions; anticipated regret + intention + attitude questions; or beneficence + intention + attitude questions. Conditions 4, 6 and 8 additionally had a sticky note attached to the front that included a message, printed in blue on a yellow sticky note but with the message appearing to be hand-written, as used in previous research. Fig. 1 summarizes the differences between the conditions.
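The a priori power calculation reported at the start of this section (about 1539 participants per condition for 95% power at a two-tailed alpha of 0.05) can be approximated with standard two-proportion power routines. The sketch below uses statsmodels; the vaccination proportions are hypothetical placeholders, since the effect size taken from Conner et al. is not restated in the text, so the output will not necessarily match the reported figure.

```python
# Approximate sample-size calculation for comparing two vaccination proportions,
# mirroring the G*Power setup described in the text (alpha = .05, two-tailed,
# power = .95). The two proportions are hypothetical placeholders; the study's
# actual effect size came from Conner et al.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control, p_questionnaire = 0.74, 0.78                # hypothetical uptake rates
effect = proportion_effectsize(p_questionnaire, p_control)   # Cohen's h

n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.95, alternative="two-sided"
)
print(f"Cohen's h = {effect:.3f}; required n per condition ~ {n_per_group:.0f}")
```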
The QBE does not fit easily into extant taxonomies of behaviour change techniques. The closest categories from Michie et al.'s taxonomy for the QBE would seem to be prompts/cues and review of behavioural goals, and these would apply to conditions 3–8, with no behaviour change techniques applied in conditions 1 and 2. The sticky note manipulation does not appear to fit any of the specified behaviour change technique categories. Participants in all conditions received a letter from their General Practice informing them of the upcoming influenza drive and their eligibility to take part. In conditions 2–8, participants also received a letter requesting them to complete the enclosed questionnaire and return it in the stamped addressed envelope. Those returning a questionnaire could tick a box to opt into a prize draw for £200. A code number on each questionnaire allowed questionnaire data to be matched to patient records. After matching, the data were anonymized. Materials were sent out by each General Practice approximately one month before the influenza vaccinations were made available. Vaccination behaviour over the next four months was the primary outcome variable and was obtained from patient records in a database maintained by each General Practice. Demographic questions tapped whether participants had children, whether they supervised other employees, their occupation, marital status, and ethnic origin. Cognition items were generated based on published recommendations concerning the principle of correspondence. Intentions were tapped by two items and attitudes by three items. Anticipated regret was tapped by two items. Beneficence was tapped by four items. All these items were responded to on 7-point scales, with higher numbers indicating more positive reactions to influenza vaccination. Demographic questions appeared first, followed by anticipated regret or beneficence questions, then intentions and finally attitude questions. Sex, age, deprivation status, and influenza vaccination during the current drive were retrieved from patient records. Our deprivation measure used the Townsend index derived from postcode data linked to the 2011 UK Census. The Townsend index taps material deprivation, which has been shown to be related to vaccination rates. Higher scores indicate greater deprivation. Data were analyzed in SPSS and HLM. Our analyses focus on the full sample, but we also report per-protocol analyses on the sub-sample returning questionnaires. First, a randomization check compared the eight conditions on sex, age, deprivation status, and previous influenza vaccination taken from GP records. No missing data imputation was performed since the primary outcome was assessed objectively. Second, multilevel modelling analyses that controlled for the fact that participants were clustered within one of seven General Practices examined the impact of condition on rates of vaccination, controlling for any differences across conditions. For each predictor we report unstandardized coefficients, standard errors, odds ratios and 95% confidence intervals. We initially examined whether receiving a demographics questionnaire compared to no questionnaire increased vaccination rates. We then examined whether receiving a questionnaire on vaccination compared to control increased vaccination rates. Next we examined differences in vaccination rates among the six conditions receiving questionnaires on vaccination. We dummy coded whether the condition included only questions about intention and attitudes or not; anticipated regret, intentions and attitudes or not; beneficence, intentions and attitudes or not; whether a questionnaire was sent with a sticky note or not; and
interactions between different sets of questions and inclusion of a sticky note.These dummy coded variables were included as predictors of vaccination rates.The final analyses assessed the effect of condition on questionnaire return rates.Our per-protocol analyses focused on the sub-sample who returned questionnaires and broadly replicated the intention-to-treat analyses.A randomization check compared the seven questionnaire conditions on sex, age, deprivation status, previous influenza vaccination, self-reported having children, being retired or not, being married or not, and being white British or not.Subsequent per-protocol analyses examined the impact of condition on rates of vaccination controlling for any differences across conditions again using multilevel modelling.We assessed whether receiving, completing and returning a questionnaire on vaccination compared to a demographics questionnaire increased vaccination rates.We then examined whether different sets of questions and inclusion of a sticky note or not and the interactions between the two influenced vaccination rates.Finally, we examined variations in intentions, attitudes, anticipated regret and beneficence among participants who completed and returned the questionnaires about vaccination.The sample was 56.3% female with a mean age of 75.7 years, mainly lived in areas of low deprivation, and the majority had previously received an influenza vaccination.The 8 different conditions were equivalent on sex, age, and previous influenza vaccination rates but significantly different on deprivation.Subsequent analyses of condition on influenza vaccination rates for the full sample therefore controlled for deprivation.In total, 10,598 participants were vaccinated against influenza during the vaccination campaign.Multilevel modelling controlling for deprivation indicated that vaccination rates did not differ between the two control conditions, B = 0.058, SE = 0.081, p = 0.50, OR = 1.06, 95% CI = 0.87, 1.29.Thus, receiving a demographics questionnaire was not sufficient to increase behaviour.Multilevel modelling controlling for deprivation indicated that vaccination rates were significantly higher when participants received an influenza vaccination questionnaire compared to when participants did not receive a questionnaire, B = 0.160, p = 0.04.Vaccination rates were also significantly higher in the flu questionnaire conditions compared to the two control conditions that did not receive a questionnaire about influenza vaccination, B = 0.119, p = 0.04.Using the conversion formula suggested by Chinn, this effect represents a QBE of small magnitude.Multilevel modelling controlling for deprivation indicated that neither the cognitive target manipulation nor the response rate manipulation influenced vaccination rates.The interaction terms also were not significant.Multilevel modelling controlling for deprivation indicated that receiving an influenza vaccination questionnaire did not influence return rates compared to a demographics only questionnaire, B = 0.014, SE = 0.057, p = 0.81, OR = 1.01, 95% CI = 0.88, 1.17.The response rate manipulation significantly increased questionnaire returns, B = 0.222, SE = 0.075, p = 0.03, OR = 1.25, 95% CI = 1.04, 1.50.Questionnaire return rates were not significantly influenced by the cognitive target manipulations, Bs = −0.015, 0.086, SEs = 0.085, 0.095, ps = 0.40, 0.86, OR = 0.98, 1.09, 95% CI = 0.80, 1.38, nor by the interaction between the cognitive target and response rate manipulations, Bs = −0.016, 
−0.153, SEs = 0.107, 0.108, ps = 0.20, 0.89, ORs = 0.86, 0.98, 95% CI = 0.66, 1.28.The sub-sample returning questionnaires was 55.9% female with a mean age of 75.1 years.The sub-sample was mostly retired, white British, married, had children and mainly lived in areas of low deprivation.A majority of the sub-sample had previously received an influenza vaccination.The 7 different conditions were equivalent on sex, being white British, being married, age, previous influenza vaccination rates and deprivation status but significantly different on being retired and having children.Subsequent analyses examining the effect of condition on influenza vaccination rates in the sample returning questionnaires therefore all controlled for having children and retired status.Multilevel modelling indicated that vaccination rates were not significantly higher in the influenza vaccination questionnaire conditions compared to the demographics questionnaire condition, B = 0.068, SE = 0.137, p = 0.64, OR = 1.07, 95% CI = 0.77, 1.50.Findings were equivalent controlling for having children or not and being retired or not.Multilevel modelling indicated that vaccination rates were not influenced by the cognitive target or response rate manipulations or their interaction.Not covarying for having children or being retired did not alter these findings.Table 3 indicates that vaccination rates were substantially lower among participants not completing questionnaires compared to those who completed a questionnaire.Receiving an influenza vaccination questionnaire or not, varying the cognitive target, or the presence versus absence of a sticky note had no effect on vaccination rates among participants who did not return the questionnaire.Examination of the mean scores on the measured variables for participants that returned completed questionnaires revealed positive overall reactions to influenza vaccination on all measured variables.There was no evidence that the cognitive target or response rate manipulations influenced scores on these cognitive measures.The intention-to-treat analyses demonstrated that sending a questionnaire tapping cognitions about influenza vaccination significantly increased influenza vaccination in older adults compared to two control conditions.The observed QBE was equivalent to increasing vaccination rates by approximately 3%, or 414 additional vaccinations among our sample size of 13,806 participants.Sending a demographics questionnaire did not generate a significant increase in vaccination rates compared to not sending a questionnaire.Importantly, there was no evidence that including questions about different cognitive targets enhanced the QBE.In addition, although our manipulation of questionnaire response rates produced a significant increase in response rates, that increase in response rates did not generate a reliable increase in influenza vaccination rates.There was also no interaction between our manipulation of cognitive targets and response rates on influenza vaccination rates.Thus, the present study indicates that the QBE can be used to improve influenza vaccination rates among older adults, but also shows that asking questions about anticipated regret or beneficence, or including a sticky note that increases response rate, does not enhance the magnitude of the QBE for influenza vaccination.The present findings replicate and extend previous work on using the QBE to promote influenza vaccination in health professionals, although the effects observed here were smaller.In the UK context, 
such improvements in vaccination rates could ensure that the current influenza vaccination programme achieves the target of at least 75% vaccinated despite only 71% being vaccinated in 2015/16.Only in the no-questionnaire control condition did vaccination rates fall below this 75% target.Although the effect size for the QBE intervention observed here was small, the practical importance of even a small effect can be substantial given the reduction in episodes of severe illness, hospitalization and deaths that might be avoided in this high-risk group through even a modest increase in influenza vaccination rates.The effect size observed here is comparable to that reported in a review of 57 RCTs designed to increase influenza vaccination rates in the over 60s.It is worth noting that, in general, these other interventions to increase influenza vaccination rates were more intensive and expensive to administer.The relatively modest costs and simplicity of the present QBE intervention may add to the appeal of the QBE as an additional behaviour change strategy for improving public health.Although no formal cost-effectiveness analyses were conducted, it is notable that the additional costs would be relatively modest if the questionnaires were sent out with screening invitations.The lack of significant differences in vaccination rates between conditions with different cognitive targets suggests that the QBE is mainly driven by asking intention and attitude questions.Adding anticipated regret or beneficence questions to intention and attitude questions did not affect the magnitude of the QBE.Our research also indicated that attaching a sticky note with a request for help to the front of a questionnaire significantly increased questionnaire return rates.This finding supports Garner's analysis of "the sticky note effect", though the increase in return rates observed here was much smaller than the improvement in return rates reported by Garner.This difference may be due at least in part to our using a printed request for help rather than the hand-written request that Garner used.It appears that the modest increase in return rate obtained here was not sufficient to increase the overall magnitude of the QBE.Thus, in our per-protocol analyses, we failed to find support for a key implication of previous analyses showing that the QBE is greater among participants who complete and return the questionnaire.Even a statistically significant increase in response rate did not serve to improve vaccination rates here in the intention-to-treat analyses, despite respondents being generally positive about influenza vaccination.Research that attempts to simultaneously increase positive reactions to the target behaviour and promote questionnaire completion and return in those with positive reactions may be more likely to promote a QBE.The present research has several strengths and weaknesses.Strengths include the use of a strong RCT design in a large sample that was powered a priori and included an objective primary outcome measure.One important weakness was the fact that the per-protocol analyses were underpowered in relation to the small effect size that we expected to observe, which limited the conclusions that can be drawn from the present data and our ability to explore how the use of a sticky note could increase questionnaire return rates but not affect vaccination rates.The present research suggests that manipulating return rates was not sufficient to increase vaccination rates, although it would be
useful to confirm this with manipulations that produced larger effects on return rates.It may also be the case that it is necessary to increase questionnaire return rates mainly among participants who are favourably disposed towards the behaviour to observe an impact on behaviour.Increasing rates of return among those less favourably disposed may have no effect on the behaviour or could even lead to less behaviour.Further research might usefully explore different means of manipulating questionnaire response rates, especially when it is known that a substantial proportion of the sample favour performing the behaviour.Testing manipulations that increase response rates to on-line surveys would be another fruitful direction for QBE research given that postal questionnaires are becoming less frequently used.Another limitation of the present research is that the sample already had a high rate of influenza vaccination.Improving vaccination rates for such a group may be more difficult than for groups with lower rates and, perhaps, offers a stern test of the QBE.Nevertheless, it is just such groups that are routinely offered influenza vaccination in the UK and elsewhere.Combining the QBE with other effective methods such as messages to promote intentions to vaccinate or financial incentives to get vaccinated may be a useful direction for research to promote influenza vaccination.A final limitation of the present work is that it provides little contribution to our understanding of the mechanisms underlying the QBE.For example, Wood et al.'s review presented evidence in relation to attitude accessibility and cognitive dissonance as the main mechanisms underlying the QBE.Nonetheless, it is notable that neither mechanism has received unequivocal support across studies included in that review, and the present research did not offer evidence either way concerning these mechanisms.In conclusion, the present study targeted an important preventive health behaviour and offered a strong test of the QBE by recruiting a large, at-risk sample, using an RCT design, and deploying objective measures of behaviour and intention-to-treat analyses.Findings indicated that survey questions about influenza vaccination improve vaccination rates, supporting the QBE.The present research thus corroborates previous studies that used the QBE to change influenza vaccination rates in health professionals but also offers novel evidence that adding questions tapping anticipated regret and beneficence or improving questionnaire return rates does not enhance the QBE.Although the manipulations of cognitive targets and response rates tested here did not improve vaccination rates, the present study offers insights that should prove valuable in informing future efforts to enhance the QBE in behavioural medicine settings. | Rationale The question-behaviour effect (QBE) refers to the finding that survey questions about a behaviour can change that behaviour. However, little research has tested how the QBE can be maximized in behavioural medicine settings. The present research tested manipulations of cognitive targets (questions about anticipated regret or beneficence) and survey return rates (presence vs. absence of a sticky note requesting completion of the questionnaire) on the magnitude of the QBE for influenza vaccination in older adults.
Method Participants (N = 13,803) were recruited from general practice and randomly allocated to one of eight conditions: control 1 (no questionnaire); control 2 (demographics questionnaire); intention and attitude questionnaire (with or without a sticky note); intention and attitude plus anticipated regret questionnaire (with or without a sticky note); intention and attitude plus beneficence questionnaire (with or without a sticky note). Objective records of subsequent influenza vaccination from general practice records formed the dependent variable. Results Intention-to-treat analyses indicated that receiving an influenza vaccination questionnaire significantly increased vaccination rates compared to the no questionnaire, OR = 1.17, 95% CI = 1.01, 1.36 and combined control conditions, OR = 1.13, 95% CI = 1.01, 1.25. Including the sticky note significantly increased questionnaire return rates, OR = 1.25, 95% CI = 1.04, 1.50. However, there were no differences in vaccination rates between questionnaires containing different cognitive targets, a sticky note or not, and no interactions. There were no significant differences in the per-protocol analyses, i.e. among respondents who completed and returned the questionnaires. Conclusion The QBE is a simple, low-cost intervention to increase influenza vaccination rates. Increasing questionnaire return rates or asking anticipated regret or beneficence questions in addition to intention and attitude questions did not enhance the QBE. |
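The trial's headline analyses are multilevel logistic regressions of vaccination status on trial condition, controlling for deprivation, with patients clustered within General Practices. The paper fitted these models in HLM; the sketch below is only an illustrative stand-in in Python that uses a GEE with an exchangeable working correlation instead of random intercepts, and all file and column names (qbe_trial.csv, vaccinated, condition, deprivation, practice) are hypothetical.

```python
# Sketch of the trial's condition-effect analysis: logistic model of vaccination
# status on trial condition, controlling for deprivation, with patients clustered
# within General Practices. A GEE with exchangeable working correlation stands in
# for the paper's HLM multilevel model; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("qbe_trial.csv")  # hypothetical file: one row per patient

model = smf.gee(
    "vaccinated ~ C(condition, Treatment(reference='control_no_questionnaire')) + deprivation",
    groups="practice",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()

# Report unstandardized coefficients, SEs, odds ratios and 95% CIs, as in the paper.
ci = result.conf_int()
summary = pd.DataFrame({
    "B": result.params,
    "SE": result.bse,
    "OR": np.exp(result.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
})
print(summary.round(3))
```

Exponentiating the coefficients gives the odds ratios and 95% confidence intervals reported in the text; a population-averaged GEE will not reproduce the paper's conditional (random-intercept) estimates exactly, but the workflow is the same.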
386 | Part 2: Physicochemical characterization of bevacizumab in 2 mg/mL antibody solutions as used in human i.v. administration: Comparison of originator with a biosimilar candidate | A good similarity of the physicochemical properties of undiluted, 25 mg/mL, Avastin® originator drug products with a candidate biosimilar monoclonal antibody was recently reported .In the drug product formulation bevacizumab is stable for up to 2 years storage at 2–8 °C .Avastin® is administered to patients by i.v. infusion after dilution with 0.9% NaCl.The use of 5% dextrose as infusion solution is not allowed in the drug product prescribing information, a prohibition supported by our observations of the formation of aggregates in mixtures of human plasma with bevacizumab diluted in 5% dextrose .Studies of the compatibility of biopharmaceuticals with excipients and diluents for i.v. solutions are required by regulatory bodies and the package inserts of such products should contain “essential information on drug and diluent compatibilities and incompatibilities” .Bevacizumab diluted in 0.9% NaCl i.v. solution is administered in a concentration range of 1.4 mg/mL to 16.5 mg/mL .To study bevacizumab stability after dilution in 0.9% NaCl and for comparing originator drug products with ABX-BEV, we selected a monoclonal antibody concentration of 2 mg/mL, which is at the lower end of the therapeutic i.v. administration range.A very good similarity in the physical and chemical properties of Originators USA and EU with ABX-BEV was observed using 14 orthogonal analytical methods and 20 parameters.Besides methods used for the characterization of the undiluted drug products at 25 mg/mL mAb two other orthogonal methods were used: main flow fractionation and asymmetrical flow field-flow fractionation.As discussed in our previous paper the use of orthogonal methods is strongly recommended by regulatory guidance .Fluorescence spectroscopy, mFF, FFF, nanoparticle tracking analysis, Nile red fluorescence microscopy, particle flow imaging, UV-visible absorption, 90° light-scattering and ultrasound resonance technology showed that antibody secondary structure and aggregation states were similar for the three bevacizumab diluted products.Very few aggregates were observed in the 2 mg/mL bevacizumab i.v. saline diluted solutions within 5 h at 24 °C after dilution of the drug products.Unexpectedly, aggregates were formed in originators as well as in the biosimilar candidate after overnight storage at 2–8 °C.These data show that bevacizumab at 2 mg/mL after dilution with 0.9% NaCl i.v. 
solution is not as stable as in the drug product formulation at 25 mg/mL.ABX-BEV, a bevacizumab candidate biosimilar manufactured by Apobiologix Inc., was provided as a drug product at 23.6 mg/mL mAb in the same formulation as the originator: 60 mg/mL trehalose dihydrate, 5.8 mg/mL sodium phosphate monobasic monohydrate, 1.2 mg/mL sodium phosphate dibasic anhydrous, and 0.04% polysorbate 20, pH 6.2.Thermally stressed ABX-BEV, prepared by incubation for 4 weeks at 40 °C and 75% relative humidity, was supplied by Apobiologix Inc.Originator drug products were Avastin® manufactured in Europe, 25 mg/mL, batch B7100B16, 4 mL vials, and Avastin® manufactured in the United States, 25 mg/mL, batch 522210, 4 mL vials.Originator samples were analyzed about one year prior to expiration date.The concentration of undiluted drug products was determined by UV absorbance at 280 nm : 23.6 mg/mL ABX-BEV; 25.0 mg/mL Originator USA; 25.1 mg/mL Originator EU.The drug products were diluted to 2.0 mg/mL with 0.9% NaCl, final volume 9 mL, in 15 mL Falcon tubes; the 2.0 mg/mL concentrations of the bevacizumab solutions were confirmed by UV absorbance at 280 nm.The samples were analyzed within 5 h at 24 °C after dilution and within 5 h at 24 °C after 24 h storage at 2–8 °C.The 5-hour interval was the time required to perform all the measurements.Nile red was dissolved in ethanol to prepare a 100 μM stock solution.A 5 mM stock solution of ANS was prepared in Milli-Q water.D--trehalose dihydrate, polysorbate 20, sodium azide, sodium chloride, sodium phosphate monobasic, and sodium phosphate dibasic were of analytical grade.Absorbance of the 2 mg/mL bevacizumab samples was monitored with a double beam, two monochromator Cary 300 Bio spectrophotometer.The absorbance spectra were recorded at 25 °C between 240 and 400 nm with a pathlength of 4 mm in a 1 cm × 4 mm quartz cuvette.The estimated instrumental and experimental error was 2%.90° static light-scatter spectra were measured at 25 °C with a photon counting FluoroMax spectrofluorometer between 400 and 750 nm.The measurements were performed in a 10 mm × 4 mm quartz cuvette.The spectra were recorded with an 0.01 s integration time per 1 nm increment.Excitation and emission slits with bandpass of 4.25 nm and 2.12 nm, respectively, were used.The 90° light-scattering intensities are given in counts per second.The estimated experimental error of the method was 2%.Steady-state fluorescence emission and anisotropy spectra were recorded with the photon counting FluoroMax spectrofluorometer thermostated at 25 °C.The emission intensities are in cps.The measurements were performed in a 10 mm × 4 mm quartz cuvette.The excitation was on the 10 mm-wide side of the cuvette and the emission was measured from the 4 mm-wide side of the cuvette.The use of this cuvette and the excitation on the large surface permitted measurements of fluorescence properties for 2 mg/mL bevacizumab.Fluorescence emission was monitored between 290 and 450 nm with an excitation wavelength of 280 nm.Spectra were recorded with an 0.05 s integration time per 1 nm increment, excitation and emission slits with bandpass of 4.25 nm.The estimated error, instrumental and experimental, was 2%.Fluorescence anisotropy spectra were monitored in an L-format configuration with Glan-Thompson prism polarizers.The wavelength of excitation was 280 nm.Intrinsic fluorescence anisotropy was monitored between 330 and 352 nm with excitation / emission slits with bandpass of 8.50 nm / 19.00 nm.All spectra were recorded with a 
2-second integration time per 1 nm increment.Fluorescence anisotropy was calculated from the equation:A = (I0/0 − G I0/90)/(I0/0 + 2 G I0/90), where G is a correction factor, G = I90/0 / I90/90.Im/n is the fluorescence intensity at a given wavelength and the subscripts refer to the positions of polarizers in the excitation and emission beams relative to the vertical axis.The error was estimated to be 2%.Fluorescence lifetime was measured at 25 °C using time-correlated single-photon counting on a FluoroCube 5000 U equipped with LED sources and time-gating.The NanoLED excitation wavelength was 279 nm and the emission wavelength was set to 338 nm.A synchronous delay of 20 ns, calibration time of 0.113 ns/channel, coaxial delay of 95 ns with an emission slit with bandpass of 32 nm were used as instrumental parameters.The error was estimated to be 2%.The fluorescence intensity decay was deconvoluted with the instrument response function, as measured using a diluted suspension of colloidal silica.The calculated fluorescence intensity decay with time was fitted with a multi-exponential model using DAS6 software and the equation:I(λ,t) = α1(λ) exp(−t/τ1) + α2(λ) exp(−t/τ2) + α3(λ) exp(−t/τ3), where τ1, τ2 and τ3 are the decay times of the three components, and α1, α2 and α3 are the exponential factors at the emission wavelength λ.χ2 reflects the quality of the mathematical fit of the fluorescence lifetime decay.The best theoretical fit is χ2 = 1 .The best fit for the intrinsic fluorescence lifetime decay of diluted bevacizumab samples was obtained with three exponentials.The mean weighted average lifetime, τF, was calculated from the individual fluorescence decay times, τ, and the normalized pre-exponential values, α, using the equation:τF = ∑i αiτi² / ∑i αiτi.The steady-state fluorescence anisotropy, A, and the mean fluorescence lifetime, τF, are related to the average rotational correlation time, τc, and the viscosity of the chromophore environment, η, by the Perrin equation:A = A0/(1 + τF/τc) = A0/(1 + τF k T/(η Vh)), where A0 is the limit anisotropy, k the Boltzmann constant, T the absolute temperature, and Vh the molecular volume of the chromophores.This correlates the observed fluorescence changes with changes in the chromophore environment.It follows that the average rotational correlation time, τc, can be calculated:τc = τF A/(A0 − A); η = τc k T/Vh.For intrinsic fluorescence, A0 is equal to 0.3 .The estimated error was 2%.10 μL of 5 mM ANS in water were added to 1 mL of the protein samples, which were characterized immediately after addition of the dye.Emission spectra were monitored between 390 and 650 nm with an excitation wavelength of 375 nm.They were recorded with an 0.02-second integration time per 1 nm increment.Excitation and emission slits with bandpass of 4.25 nm were used.The error was estimated to be 2%.ANS anisotropy values were calculated from fluorescence spectra between 482 and 516 nm using an excitation wavelength of 375 nm, with two seconds integration time per nm increment.Excitation and emission slits with bandpass of 4.25 nm and 8.50 nm, respectively, were used.ANS anisotropy was calculated using equation.The estimated error, instrumental and experimental, was 3%.Fluorescence lifetimes were measured using time-correlated single-photon counting on the FluoroCube 5000 U equipped with an LED source with time-gating.Measurements were performed at 25 °C.The NanoLED excitation wavelength was 371 nm, and the emission wavelength was set to 498 nm.Data analysis was performed using DAS6 software.The following instrumental parameters were used for the ANS fluorescence lifetime measurements:
synchronous delay, 20 ns for unstressed bevacizumab samples, 140 ns for thermally stressed ABX-BEV; calibration time, 0.113 ns/channel for unstressed bevacizumab samples, 0.225 ns/channel for thermally stressed ABX-BEV; coaxial delay, 95 ns; emission slit, 16 nm bandpass.The best fit for the ANS fluorescence lifetime of diluted bevacizumab samples was obtained with three exponential terms, equation.ANS average lifetimes, τF, were calculated using equation.The estimated error was 0.5%.τc is related to both ANS fluorescence anisotropy and lifetime.ANS rotational correlation times were calculated using equation with A0 = 0.4 .The estimated error was 2%.FFF measurements were performed using a Wyatt Eclipse 3 separation system coupled with an in-line UV detector followed by a Dawn Heleos II multi-angle static light-scattering instrument and an in-line refractometer.The separation channel consisted of a porous frit covered with a Microdyn Nadir polyethersulfone ultrafiltration membrane with a molecular mass cut-off of 10 kDa and was separated from the top block by a 350 μm-wide trapezoidal spacer.The Agilent instruments were synchronized using the ChemStation for LC systems software.The carrier solution for bevacizumab samples was 0.9% sodium chloride and 0.02% sodium azide.For separation, the main flow rate was set at 1 mL/min, the cross flow to 2 mL/min and the focus flow to 1 mL/min.The autosampler temperature was 6 ± 2 °C and the injection time was 3 min.2 mg/mL bevacizumab samples were analyzed by FFF between 7 h and 10 h after dilution.Data analysis was performed with the Astra software using a dn/dc value of 0.185 and a second virial coefficient of 1 × 10-4.The UV absorbance was measured at 280 nm; the extinction coefficient employed was 1.7 × 103 mL·g-1·cm-1.The error of the molar mass determined by FFF was 5%.The mFF measurements were performed with the FFF system .Both the channel and injection flows were set to 0.2 mL/min; neither focus flow nor cross flow was used for this method.The injection time was 5 min.The error of the molar mass determined by mFF was 5%.NTA measurements were performed using the NanoSight LM20 instrument.A laser beam with cross-section of 80 μm × 12 μm illuminated the field of view having dimensions of 100 μm × 80 μm with a depth of 12.5–15 μm.NanoSight software was used for both capturing and analysis of the movies.For each bevacizumab sample analysis, five movies of 2 min each were recorded; the sample was at 25 °C.Each movie was recorded using a fresh volume of the sample.Movies were recorded with parameter settings of 1500 for the shutter, camera gain of 680, a detection threshold of 5, a minimum track length of 6 and the minimum expected size in automatic.The estimated error was 10%.Immediately after addition of 1 μL of 100 μM Nile red in ethanol into 50 μL of antibody solution, 5 to 10 μL of the solution were placed in the counting chambers of FastRead 102® slides.Inside the chamber, the particles present in a volume of 1 μL were counted.A Leica DM RXE microscope equipped with a mercury lamp and a bandpass filter cube with an excitation filter BP515-560 nm and suppression filter LP 590 nm was used.From the arithmetic mean of three particle-counts in 1 μL volume each the number of particles per mL was calculated.The lower size limit of the aggregates which could be observed by this method was 0.5 μm .The estimated error of the method was 20%.Bevacizumab samples were analyzed at 25 °C using the Occhio FC200S particle counter, with a background of filtered 
Milli-Q water and a 150 μm spacer.For each sample, the flow cell was rinsed with 200 μL of that sample and then 250 μL were analyzed once.The samples were analyzed after equilibration for about 30 min at room temperature to remove possible air bubbles.A camera objective of 2X and a threshold of 160 were used.The size of particles was calculated by the instrument software.The size is the maximum distance between two points in a particle.Air bubbles were eliminated from the counting based on their specific circular shape and black appearance with, in some cases, a small white dot in the center.Silicon oil particles are also circular but are gray and have a large white center.The estimated error was 10%.The URT measurements were performed with the ResoScan system.200 μL of bevacizumab were analyzed.The optimal resonance peak for the measurement was set to 8.1 MHz.Samples were equilibrated for 5 min at 25 °C, then the difference in ultrasound velocity, delta U, between the sample and Milli-Q water was measured.Three separate replicates of each sample were analyzed.The estimated error was 1%.The data wheel representation was developed by Therapeomic to compare the physicochemical properties of samples.In the data wheel the results from multiple analytical methods are plotted in one figure.On the wheel, the methods are grouped into regions describing different properties, such as shown in Fig. 7: I) secondary structure, conformation and chemical degradation; II) hydrophobic and electrostatic pockets, ANS binding; III) particles from monomer to 1 μm; IV) particles from 1 μm to visible; and V) changes in aggregation states.Other regions can be added depending on the methods that are used, for example for chemical degradation.For each parameter a data range is defined.This data range is considered to be and the percentage value of each parameter is calculated and plotted in the wheel.Due to differences in the analytical methods three groups of data ranges were defined:Data range : X is equal to 0 and Ymax is the maximum measured parameter value.The majority of measured parameters belong to this group.An example is Trp fluorescence emission intensity data where Ymax is the maximum measured intensity, the value on the rim of the wheel.Data range : Xsmall is defined to be smaller than the smallest measured parameter value and Ymax is the maximum measured parameter value.ANS maximum fluorescence emission wavelengths belong to this group.For unstressed bevacizumab, ANS fluorescence λmax was 498 nm.The dye λmax for thermally stressed ABX-BEV was 491 nm.The data range for ANS fluorescence λmax was defined between 470 nm and 498 nm.The value Xsmall = 470 nm was selected to obtain a wheel value showing that after thermal stress an important change in ANS λmax was observed for ABX-BEV.Had we selected Xsmall = 0 and Ymax = 498, the change from 498 nm to 491 nm would not be visible on the wheel and the significance of the change in ANS λmax would have been lost.Data range : X is equal to 0 and Ylarge is defined to be larger than the maximum measured value.This group contains the parameters related to particle concentration.For example, for unstressed 2 mg/mL bevacizumab solutions the number of particles larger than 25 μm detected by PFI were 0, 0, 8 particles/mL for ABX-BEV, Originator EU and Originator USA, respectively.The Ylarge value was set to 150 as the permitted limit of < USP 788> for a 4 mL drug product vial .The data wheel representation permits the visualization and assessment of full datasets.The 
differences that can be observed in the data wheel reflect relevant differences in the measured parameters since the radial representation of parameters is based on their numerical values and their errors.The wheel representation was performed using Radar chart type in Microsoft Excel software.The physicochemical properties of ABX-BEV biosimilar candidate were compared with those of bevacizumab in Avastin® drug products manufactured in the USA and in Europe after dilution with 0.9% NaCl to a clinically relevant concentration of 2 mg/mL.A 25 mg/mL ABX-BEV solution that was stressed for 4 weeks at 40 °C was also analyzed after dilution to 2 mg/mL in 0.9% NaCl: we will refer to this sample as “thermally stressed ABX-BEV”.The samples were characterized within 5 h at 24 °C after dilution and after 24 h storage at 2–8 °C.As was the case for the undiluted products , a good similarity was shown between the three diluted unstressed mAb samples.After 24 h storage at 2–8 °C of the 2 mg/mL bevacizumab solutions, a comparable increase of the number of aggregates was observed for ABX-BEV and the two originator products indicating similar behavior under this mild stress condition.Absorbance between 310 and 400 nm provides information on the aggregated state of proteins in solution .After incubation for 4 weeks at 40 °C and dilution to 2 mg/mL, ABX-BEV had an absorbance at 350 nm which was higher than that of the other diluted mAb samples: 0.089 for thermally stressed ABX-BEV and 0.039-0.042 for the unstressed samples.There was no difference in the 350 nm absorbance of ABX-BEV, Originator EU and Originator USA, within the error of the experiment.After 24 h storage at 2–8 °C the UV-Vis absorbance at 350 nm of the unstressed bevacizumab samples diluted to 2 mg/mL in 0.9% NaCl did not change.The UV-Vis absorbance background of thermally stressed ABX-BEV diluted to 2 mg/mL in 0.9% NaCl increased slightly after 24 h storage at 2–8 °C, noticeable at 250 nm, 280 nm and in the region between 310 nm and 400 nm.A 90° light-scatter spectrum consists of 90° light-scatter intensities measured at different wavelengths and may be used to monitor small changes in particle size and concentration.The light-scatter signal increases with higher aggregate content .ABX-BEV incubated at 40 °C for 4 weeks exhibited a strong light-scatter signal after dilution to 2 mg/mL, 11.0 × 106 cps at 550 nm.The light-scatter of 2 mg/mL ABX-BEV and Avastin® originators was the same, within the 2% error of the method, 3.5–3.6 × 106 cps at 550 nm.After 24 h at 2–8 °C an increase in 90° light-scatter was observed for all unstressed 2 mg/mL bevacizumab samples: at 550 nm from 3.6 × 106 cps to 4.1 × 106 cps for both ABX-BEV and Originator EU, and from 3.5 × 106 cps to 4.9 × 106 cps for Originator USA.The fluorescence of 2 mg/mL solutions of bevacizumab, dominated by tryptophan , had an emission maximum at 338 nm.After dilution to 2 mg/mL, the Trp fluorescence intensity of thermally stressed ABX-BEV was slightly lower than that of unstressed antibody, 4.4 × 107 cps for thermally stressed ABX-BEV and 4.6-4.7 × 107 cps for the unstressed samples.Originator and biosimilar bevacizumab had similar intrinsic fluorescence emission intensity values.For all samples, the intrinsic fluorescence emission of bevacizumab remained the same after 24 h storage at 2–8 °C.The local mobility of Trp in the bevacizumab samples can be assessed by fluorescence anisotropy, with an increase in anisotropy indicating less mobility, a more rigid environment .Thermally 
stressed ABX-BEV diluted to 2 mg/mL had a higher fluorescence anisotropy than ABX-BEV, Originator EU and Originator USA: 0.118 for thermally stressed mAb compared to 0.102, 0.100, 0.101 for unstressed ABX-BEV, Originator EU and Originator USA, respectively.After 24 h incubation at 2–8 °C, the intrinsic anisotropy of Originator EU increased slightly, from 0.100 to 0.105.The anisotropy of the other samples remained the same after 24 h incubation.The local environment of a fluorophore in a molecule can be assessed by its fluorescence lifetime, an increase indicating more rigid environments .Thermally stressed ABX-BEV diluted to 2 mg/mL had a mean intrinsic fluorescence lifetime of 2.50 ns, longer than those of unstressed bevacizumab samples which were about 2 ns: 2.08 ns for unstressed ABX-BEV, 2.06 ns and 2.03 ns for Originator EU and Originator USA, respectively.The biosimilar and originators had similar intrinsic fluorescence lifetimes within 5 h at 24 °C and after 24 h storage at 2–8 °C.The rotational correlation time, τc, is determined by the conformation and flexibility of a molecule and measures local movements of the fluorophore, which are related to local viscosity in the secondary structure.τc is calculated from the anisotropy and fluorescence lifetime using equation, a high value indicating a more rigid environment.Thermally stressed 2 mg/mL ABX-BEV had a τc of 1.62 ns, which was larger than the τc values of 1.03–1.07 ns of the unstressed 2 mg/mL mAb samples.After storage for 24 h at 2–8 °C the τc of thermally stressed ABX-BEV decreased from 1.62 ns to 1.52 ns; the τc of 1.07 ns of unstressed 2 mg/mL ABX-BEV did not change, and small increases in the τc of originators were observed, from 1.03 ns to 1.08 ns for Originator USA and from 1.03 ns to 1.10 ns for Originator EU.Interestingly, the rotational correlation times of the bevacizumab samples at 25 mg/mL antibody, τc of 1.01–1.02 ns , were similar to the τc of 1.03–1.07 ns for the 2 mg/mL antibody solutions.These data indicate that dilution in 0.9% NaCl of the drug products did not induce changes in the antibody conformation in the vicinities of Trp residues.ANS is used to probe changes in protein electrostatic pockets .ANS fluorescence in the 2 mg/mL thermally stressed ABX-BEV was 29% stronger than that in the unstressed 2 mg/mL ABX-BEV.There was also a 7 nm blue shift for the ANS λmax in 2 mg/mL thermally stressed ABX-BEV.ANS fluorescence intensity of 2 mg/mL stressed ABX-BEX increased by about 10% after storage at 2–8 °C for 24 h.ANS fluorescence intensity of the 2 mg/mL ABX-BEV unstressed sample was slightly smaller than the fluorescence intensities of 2 mg/mL bevacizumab Originators EU and USA.The origin of this difference could be either the presence of more electrostatic / hydrophobic pockets for ANS binding in the originator samples and / or differences in polysorbate 20 properties.All unstressed samples had about a 10% increase in ANS fluorescence intensity after 24 h at 2–8 °C, indicating structural / aggregation changes in the antibodies resulting in the creation of new ANS binding sites after 24 h of refrigeration.ANS anisotropy was higher for the thermally stressed 2 mg/mL ABX-BEV sample compared to unstressed ABX-BEV, likely due to more rigid environments for the dye after thermal stress.There was no difference between 2 mg/mL ABX-BEV and originator products: 0.141 for ABX-BEV, 0.137 for Originator EU and 0.139 for Originator USA.The thermally stressed 2 mg/mL ABX-BEV sample had an increased ANS anisotropy of 0.181 
after incubation for 24 h at 2–8 °C compared to 0.172 measured after dilution, within 5 h at 24 °C.For the unstressed 2 mg/mL bevacizumab samples, ANS anisotropy did not change after incubation for 24 h at 2–8 °C.ANS mean lifetime was longer in thermally stressed ABX-BEV than in the three unstressed antibodies, indicating that the dye was less mobile: 8.21 ns for thermally stressed mAb and 5.44–5.49 ns for unstressed samples.The small differences in ANS lifetimes between diluted ABX-BEV and the originator products were within the 0.5% error of the method.After 24 h storage at 2–8 °C of the diluted mAb samples, the ANS lifetime for ABX-BEV increased slightly from 5.44 ns to 5.60 ns; for the originator products the ANS lifetime remained stable within the error of the method.ANS rotational correlation time was longer for ABX-BEV incubated at 40 °C for 4 weeks than for the unstressed bevacizumab samples.After 24 h storage at 2–8 °C of the diluted samples, the ANS rotational correlation time for thermally stressed ABX-BEV increased about 12%.The minor difference observed within 5 h at 24 °C after dilution between τc values of diluted ABX-BEV and originator products were within the 2% error of the method: 2.96 ns, 2.86 ns and 2.92 ns for ABX-BEV, Originator EU and Originator USA, respectively.After 24 h storage at 2–8 °C, the ANS rotational correlation time for 2 mg/mL Originator USA did not change; small increases within the error were observed for ABX-BEV and Originator EU.mFF is a separation method developed by Therapeomic which uses field flow fractionation equipment.In the mFF method the sample is injected into a flowing buffer solution and neither a focusing step nor cross flow is used.Under these conditions, shear forces are much reduced, permitting, besides the detection of monomers and multimers, also the detection of loose, large aggregates.No monomeric mAb population was detected by mFF in thermally stressed ABX-BEV diluted to 2 mg/mL; the sample contained only aggregates with molecular weights between 300 kDa and 5000 kDa.The aggregates in the thermally stressed 2 mg/mL ABX-BEV with molecular weights larger than 300 kDa, at the beginning of the chromatogram elution peak, represented 54% of the eluted area; the remaining 46% of the eluted area had molecular weights between 300 kDa and 400 kDa, which are likely mixtures of mAb dimers and trimers.ABX-BEV and originator products diluted to 2 mg/mL were similar in their monomer and aggregate size distributions.The elution peaks of these 2 mg/mL unstressed bevacizumab samples contained a population of monomeric antibodies of about 40% to 60% followed by populations with mean molecular weights of about 200 kDa, likely mixtures of monomers and dimers.The mFF data presented in Fig. 
3a for unstressed bevacizumab are similar to those published by our group .During the standard FFF analysis, protein loose aggregates are disassembled by the mechanical stress that occurs during focus flow and cross flow steps, providing results similar to size exclusion chromatography .The large structures measured in the main mFF peak for the thermally stressed 2 mg/mL ABX-BEV were not present in the FFF chromatogram.The FFF chromatogram of the thermally stressed 2 mg/mL ABX-BEV sample had a small degradation peak between 7 and 8 min, a main peak of antibody monomers, a third peak consisting of aggregates and a tailing of higher molecular weight structures up to 3000 kDa,.The high content of monomeric antibodies, 76%, shows that a large majority of the aggregates detected by mFF in the thermally stressed 2 mg/mL ABX-BEV sample were disassembled to mAb monomers during the FFF measurement.FFF analysis of unstressed 2 mg/mL bevacizumab samples showed a good similarity between ABX-BEV and the originator products.Their chromatogram profiles consisted of one main monomeric peak, a minor subsequent peak and a tailing,.NTA characterizes particle size distributions in the range 20–1000 nm.ABX-BEV incubated for 4 weeks at 40 °C and diluted to 2 mg/mL contained a much higher concentration of particles, 55.3 × 106 particles per mL, compared to the unstressed bevacizumab products.The concentration of particles in thermally stressed ABX-BEV decreased after 24 h at 2–8 °C, from 55.3 to 43.0 × 106 particles/mL.This decrease in the number of particles of sizes below 1 μm may be related to the increase in the number of particles larger than 2 μm measured by PFI: particles larger than 2 μm may have formed by aggregation of particles smaller than 1 μm.The number of particles in unstressed 2 mg/mL samples changed slightly after 24 h incubation at 2–8 °C: it increased for ABX-BEV and for Originator EU, and decreased for Originator USA.These changes are likely related to the formation of the large particles measured by PFI, indicative of complex aggregation mechanisms.Staining of particles with the hydrophobic fluorescent probe Nile red permits the microscope detection of particles larger than 1 μm which contain hydrophobic regions .Thermally stressed ABX-BEV diluted to 2 mg/mL contained more particles than the three unstressed antibodies, 8.0 × 104 particles/mL compared to 1.2 × 104 particles/mL, respectively.After 24 h storage at 2–8 °C of thermally stressed ABX-BEV the number of particles stained by Nile red increased from 8.0 × 104 particles/mL to 2.5 × 105 particles/mL.There was no difference in the number of particles detected by Nile red between ABX-BEV and the two originator products within 5 h after dilution to 2 mg/mL.After 24 h storage at 2–8 °C there was no increase in the number of Nile red stained particles in ABX-BEV.A small increase in the number of Nile red stained particles was observed in originator products: from 1.2 × 104 particles/mL to 1.8 × 104 particles/mL and 2.0 × 104 particles/mL for Originator EU and Originator USA, respectively.PFI characterizes particles in the size range 0.5–500 μm.As expected, thermally stressed 2 mg/mL ABX-BEV sample contained more and larger particles when compared to the three unstressed bevacizumab samples.The particle size distribution was very similar for the unstressed 2 mg/mL ABX-BEV and the originator products.Interestingly, after 24 h at 2–8 °C, more and larger particles were found in all unstressed 2 mg/mL bevacizumab samples.The increase in 
concentration of particles larger than 10 μm was from 32-40 particles/mL after dilution in 0.9% NaCl of the 25 mg/mL bevacizumab drug products, to 176-248 particles/mL after storage of the 2 mg/mL solutions for 24 h at 2–8 °C.An increase was also observed in the number of particles larger than 25 μm: from 0–8 particles/mL to 40–66 particles/mL.The morphologies of the largest particles in the 2 mg/mL unstressed bevacizumab samples were similar before and after 24 h storage at 2–8 °C.The large particles measured in the thermally stressed 2 mg/mL ABX-BEV had different morphologies, a more fragmented border surface, suggestive of less compact structures.The unexpected aggregate formation detected by Nile red microscopy and PFI within 24 h storage at 2–8 °C after dilution to 2 mg/mL in 0.9% NaCl of the 25 mg/mL bevacizumab drug product will be discussed in Section 3.3.URT measures the speed of sound waves in solutions and permits a sensitive characterization of protein solutions since the sound speed is influenced by protein hydration and electrostatic surfaces, protein concentration, protein conformation as well as by the formulation ingredients, their concentrations and their chemical stabilities .The ultrasonic velocities of the 2 mg/mL thermally stressed ABX-BEV and unstressed bevacizumab solutions measured within 5 h after dilution were similar.No significant changes in the velocities were observed in the samples after storage for 24 h at 2–8 °C.The fact that the sound velocities were very similar in the thermally stressed and unstressed 2 mg/mL and 25 mg/mL bevacizumab samples indicate that the thermal stress of 4 weeks at 40 °C did not change the hydration and electrostatic surfaces of the antibody structures at a URT-detectable level.The data wheel representation in Fig. 7 summarizes and compares the differences in the physicochemical properties of unstressed, and thermally stressed, 2 mg/mL ABX-BEV solutions.Changes in Trp fluorescence parameters indicated a structural degradation in the antibodies incubated at 40 °C for 4 weeks leading to a more rigid environment for Trp residues.The changes in ANS fluorescence parameters show that, compared to the unstressed ABX-BEV, the thermally stressed ABX-BEV contained more electrostatic/hydrophobic binding sites for ANS and that ANS molecules were in more rigid environments.Thermally stressed ABX-BEV contained more and larger particles, as detected by mFF, FFF, NTA, Nile red microscopy, PFI, absorbance at 350 nm and 90° light scattering.Comparison of the mFF and FFF data shows that the aggregates present in the 2 mg/mL ABX-BEV sample stressed for 4 weeks at 40 °C and measured by mFF, are disassembled to mAb monomers during FFF measurements.The data wheel in Fig. 
8 shows the very good similarity of the physicochemical properties for the unstressed 2 mg/mL bevacizumab samples.PFI detected slightly more particles greater than 2 μm in ABX-BEV compared to Originator EU and Originator USA, 1.6 × 103, 0.6 × 103 and 0.9 × 103 particles/mL, respectively.For particles larger than 10 μm, PFI measured 32 particles/mL for ABX-BEV and Originator EU, and 40 particles/mL for Originator USA.These values correspond to 3200 and 4000 particles/100 mL bevacizumab infusion solution, which are below the <USP 788> limits of 6000 particles equal to or greater than 10 μm per container for volumes equal to or smaller than 100 mL .The <USP 788> limits are based on measurements using a light obscuration particle count test or a microscopic particle count test, methods which are less sensitive than PFI in detecting aggregates .The data wheels in Fig. 9 compare the 2 mg/mL unstressed bevacizumab solutions within 5 h at 24 °C after dilution and after 24 h incubation at 2–8 °C.After 24 h at 2–8 °C an increase in the total number of particles was observed by Nile red microscopy and PFI: the increase observed in the 90° light-scatter is likely due to these particles.Small increases in intrinsic and ANS fluorescence parameters 3, 5, 6, 9, 10 indicated that the conformation of bevacizumab after incubation at 2–8 °C changed slightly.After 24 h overnight refrigeration at 2–8 °C all 2 mg/mL bevacizumab solutions showed an increase in the number and sizes of aggregates, especially the particles larger than 10 μm and 25 μm.PFI measurements showed that the particles larger than 10 μm increased from 40, 32, 32 particles/mL measured within 5 h at 24 °C after dilution, to 232, 176, 248 particles/mL after 24 h at 2–8 °C.These values correspond to 23200, 17600, 24800 particles /100 mL bevacizumab infusion solution, which are above the <USP 788> limit of 6000 particles equal to or greater than 10 μm per container for volumes equal to or smaller than 100 mL .The number of particles larger than 25 μm detected by PFI also increased: from 8, 0, 0 particles/mL measured within 5 h after dilution to 64, 40, 66 particles/mL after storage for 24 h at 2–8 °C.We performed the 24 h, 2–8 °C incubation experiments in Falcon tubes and not in infusion bags.The use of Falcon tubes was preferred since it is reported that saline-containing infusion bags may shed leachables that can induce antibody aggregation .The Falcon tubes were found not to shed particles when the formulation buffer without antibody was incubated for 24 h at 2–8 °C.Thus, the increase we observed in the number of particles in the 2 mg/mL bevacizumab solutions after storage in Falcon tubes for 24 h at 2–8 °C is very likely related only to changes in the formulation after dilution of the drug products in 0.9% NaCl.The formulation of the 25 mg/mL bevacizumab drug product which is stable for 2 years under refrigerated conditions consists of 50 mM sodium phosphate buffer pH 6.2, 159 mM trehalose, 0.04% polysorbate 20; the i.v. 
solution of 2 mg/mL bevacizumab consists of 4 mM phosphate buffer pH 6.1, 12.7 mM trehalose, 0.003% polysorbate 20, and 0.83% NaCl.One possible origin for the aggregate formation in the 2 mg/mL bevacizumab sample after 24 h at 2–8 °C is the dilution of polysorbate 20 below its critical micellar concentration, from 0.04% to 0.003%.Other possibilities are the individual or combined effects of the presence of NaCl, reductions in buffer strength and trehalose concentration but not the pH since the undiluted and diluted bevacizumab solutions had pH values of 6.2 and 6.1, respectively.Our data support more the prescribing information from FDA, stating that "Diluted Avastin solutions may be stored at 2–8 °C for up to 8 hours” .The data do not support the usage guidance for originator in Europe which is less restrictive: “Chemical and physical in-use stability has been demonstrated for 48 hours at 2 °C to 30 °C in sodium chloride 9 mg/mL solution for injection.. If not used immediately, in-use storage times and conditions are the responsibility of the user and would normally not be longer than 24 hours at 2 °C to 8 °C, unless dilution has taken place in controlled and validated aseptic conditions” .In summary, the 2 mg/mL bevacizumab solutions prepared as used in i.v. administration, by dilution in 0.9% NaCl of 25 mg/mL Originator EU and Originator USA drug products and ABX-BEV biosimilar drug product candidate, showed a very good similarity of the tested physicochemical properties: these data complement the previous similarity study of undiluted, 25 mg/mL bevacizumab products . | The physicochemical properties of Avastin® manufactured in the USA (Originator USA) and in Europe (Originator EU) and ABX-BEV, a bevacizumab biosimilar drug product candidate produced by Apobiologix Inc., were characterized at a clinically relevant concentration of 2 mg/mL following dilution of the 25 mg/mL drug products with 0.9% NaCl. Measurements using 14 orthogonal analytical methods performed within 5 h after dilution showed good similarity of the three antibodies as regards secondary structure, conformation, aggregation properties, subvisible and visible particles. No significant protein aggregation was observed within 5 h at 24 °C in the 2 mg/mL bevacizumab solutions. The same solutions that were measured within 5 h after dilution were analyzed again after 24 h overnight refrigeration at 2–8 °C: all 2 mg/mL bevacizumab solutions showed an increase in the number and sizes of aggregates, especially the particles larger than 10 μm and 25 μm. The data show the very good similarity in the physicochemical properties of ABX-BEV with Originators USA and EU at a clinically relevant concentration. |
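A small numerical helper can make the fluorescence relations used in the Methods above concrete: steady-state anisotropy with G-factor correction, the intensity-weighted mean lifetime from a three-exponential fit, and the rotational correlation time from the Perrin equation. The sketch below is a minimal illustration in Python; the intensities and fit parameters in the example are invented, and only the limit-anisotropy values stated in the paper (A0 = 0.3 for intrinsic fluorescence, 0.4 for ANS) come from the text.

```python
import numpy as np

def anisotropy(i_00, i_090, i_900, i_9090):
    """Steady-state anisotropy with G-factor correction (L-format).
    Subscripts are excitation/emission polarizer angles relative to vertical."""
    g = i_900 / i_9090                      # G = I(90/0) / I(90/90)
    return (i_00 - g * i_090) / (i_00 + 2 * g * i_090)

def mean_lifetime(alphas, taus):
    """Intensity-weighted mean lifetime from a multi-exponential decay fit:
    tau_F = sum(alpha_i * tau_i^2) / sum(alpha_i * tau_i)."""
    alphas, taus = np.asarray(alphas), np.asarray(taus)
    return np.sum(alphas * taus**2) / np.sum(alphas * taus)

def rotational_correlation_time(a, tau_f, a0=0.3):
    """Perrin relation rearranged: tau_c = tau_F * A / (A0 - A).
    a0 = 0.3 for intrinsic (Trp) fluorescence, 0.4 for ANS."""
    return tau_f * a / (a0 - a)

# Example with invented polarized intensities (cps) and fit parameters (seconds)
a = anisotropy(1.00e6, 0.72e6, 0.95e6, 0.98e6)
tau_f = mean_lifetime([0.5, 0.3, 0.2], [0.8e-9, 2.1e-9, 4.5e-9])
tau_c = rotational_correlation_time(a, tau_f)
print(f"A = {a:.3f}, tau_F = {tau_f*1e9:.2f} ns, tau_c = {tau_c*1e9:.2f} ns")
```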
387 | FCA based ontology development for data integration | Business productivity and competitiveness are increasingly being driven by the effective access and use of data.Data provides a mine of information that can help us spot undiscovered patterns of business importance and create the knowledge that will be needed to tackle the challenges of the future.However, with data becoming available and growing at unprecedented rates, organisations struggle to take full advantage of valuable data.One main reason for this is that data is usually created and maintained by a range of organisations.This results in mismatches between datasets, i.e., datasets differ from one organisation to another not only in what is encoded but also in how it is encoded.In order for organisations to use and digest heterogeneous data and uncover the untold business patterns, there is growing interest in developing techniques that investigate complex data phenomena and facilitate better data interoperability.Among the various techniques developed, ontology research is one discipline that can deal with data heterogeneity and improve data sharing.Ontology-based integration systems are usually characterised by a global ontology which represents a reconciled, integrated view of the underlying data sources.Systems taking this approach usually provide users with a uniform interface—all queries made to source data are expressed in terms of a global ontology, as are the query results.This frees the user from the need to understand each individual data source.Unfortunately, in many domains one faces the problem of either having no established ontology that can be readily employed in the integration work, or of existing ontologies that do not fit the purpose.In this paper we contribute a formal and semi-automated approach for ontology development.Rather than starting from scratch, we build an ontology by effectively discovering and using the knowledge that is buried in the datasets to be integrated.The method is based on Formal Concept Analysis (FCA), which is a mathematical approach for data analysis.FCA supports ontology development by abstracting conceptual structures from attribute-based object descriptions, and it enables considerable ontology development activities to be automated.Our research extends classical FCA theory to support ontology development for integrating datasets that exhibit implicit and ambiguous information.Implicit information is caused by the fact that some organisations tend to take some domain knowledge for granted, and do not explicitly specify it in their design documents or datasets.This can lead to an ontology that is ill-formed, and does not correctly capture critical concepts and the semantics of the domain.Ambiguous information is due to the fact that organisations differ from each other in culture, conventions and requirements in system development, hence they may vary in how they choose to represent a business object, and at what levels of granularity such information is encoded.This causes inconsistencies between the datasets of different organisations.We consider that overcoming this implicitness and ambiguity is an important step in ontology development.The work reported here is follow-on research to Beck et al., Fu and Cohn and Fu and Cohn.In this paper we report further technical advances we have made.To restore implicit information, we introduce a rule-based method.We discuss how rules are derived and deployed for recovering implicit information.To resolve ambiguous information, we define a set of primitive
operations to deal with simple matches in data alignment.These operations are then composed to deal with more complicated matches.Finally, we report on our experiments that are carried out to construct an ontology for integrating non-trivial datasets from several UK water companies.We measure the quality of the developed ontology by utilising the metrics of classical information theory and also in terms of its fitness to the application domain.Our experimental results demonstrate that techniques described in this paper provide an effective mechanism for reconciling and harmonising heterogeneous data from disparate sources, and they support development of ontologies that better fit and respect the underlying knowledge structures of domains.The remaining part of the paper is organised as follows.Section 2 reviews related research.Section 3 recalls relevant notions of FCA and briefs our framework for ontology development.Sections 4 and 5 present techniques that deal with implicit and ambiguous information.Section 6 discusses how to derive an ontology by using results generated from Sections 4 and 5.Section 7 reports our experimental results.Section 8 concludes the paper and suggests future research.Several areas of research are interesting to this work.Firstly, integration techniques investigated in database and information integration are quite relevant.Various topics have been studied by these communities and the ones that are the most interesting here are mapping discovery and schema integration, and techniques have been developed to support these.Mapping discovery takes two or more database schemas as input and produces a mapping between elements of the input schemas that correspond semantically to each other.Many of the early as well as current mapping solutions employ hand-crafted rules or heuristics to match schemas.Examples of such heuristics include linguistic matching of schema element names, detecting similarity of structures of schema elements, and considering the patterns in relationships of the schema elements.Techniques have also been proposed to use learning based methods.Schema integration constructs a global schema based on the inter-schema relationships produced in mapping discovery.Each mapping element is analysed to decide which representation of related elements should be included in the global schema.When a mapping describes the corresponding schema elements as identical, their integration is straightforward—simply includes one of schema elements into the global schema.More frequently, the corresponding schema elements are not the same but are mutually related by some semantic properties, and schema merging is performed manually or semi-automatically with the assistance of domain engineers to guide the designers in their resolution.Ontology research is another discipline that deals with data integration.A common definition of an ontology is that it is a formal, explicit specification of a domain of discourse.As it provides a shared understanding and explicit specification of a domain, an ontology is considered to have a key role to play in data integration.Unfortunately, for many domains one faces the need to develop ontologies from scratch, and a growing number of methods have been proposed in recent years to address the issues of ontology design and development.Most methods are based on the traditional knowledge engineering approach.These methods usually start with defining the domain and scope of ontologies.This is followed by a data acquisition process: important 
concepts are collected; a concept hierarchy is derived, and properties and semantic constraints are attached to concepts.As developing ontologies from scratch is an expensive process to perform, there has been increasing interest in reusing or merging existing ontologies that are developed independently in different applications.Central to these studies is research on ontology mapping and ontology integration.Approaches to ontology mapping are similar to ones for matching database schemas and other structured data, and they use lexical and structural components of definitions to find correspondences.However, as an ontology captures richer data semantics than traditional database schemas, the methods for finding mappings tend to exploit these extra data semantics.For example, in Noy and Musen a tool has been developed to use linguistic similarity matches between concepts for initiating mappings, and then use the underlying ontological structures to suggest a set of heuristics for identifying further matches between the ontologies.In Duong and Jo, a method has been proposed for mapping ontological concepts using propagating Priorly Matchable Concepts.The method exploits information such as concept types, relations and constraints to provide suggestions for possible concept matches.The method guides how to check the similarity between concepts in a prioritised order, and it reduces computational complexity by avoiding checking similarity among unmatchable concepts.In Nguyen, an approach has been proposed to resolve three levels of ontology conflicts: instance level, concept level and relation level, using a consensus method.The techniques developed in Doan, Madhavan, Domingos, & Halevy and Spohr, Hollink, and Cimiano employ learning-based methods to find ontology mappings.They exploit information in data instances and the taxonomic structure of ontologies, and then use a probabilistic model to combine the results of different learners.Based on the inter-ontology mappings derived in mapping discovery, a merging process integrates the source ontologies and generates a global ontology.However, deriving a meaningful ontology is a hard problem even with the ground set of inter-ontology mappings provided, and most methods that support the merging process are performed in an interactive manner with the assistance of human users, as is done in database and information integration research.Another branch of research studies ontology development and integration with formal methods.Of particular interest here is research based on Formal Concept Analysis.FCA is a formal method for concept classification and conceptual structure derivation.FCA-related tools enable considerable knowledge processing activities to be automated, particularly concept generation and hierarchy derivation.As a result, FCA has been attracting great interest as a means to support systematic, semi-automated development and integration of ontologies.For example, in Rouane, Valtchev, Sahraoui, & Huchard ontological hierarchy merging is studied in the framework of FCA by taking into account both taxonomic and other semantic relationships of ontologies.A method, FCA-MERGE, has been developed in Stumme and Maedche to use FCA to support ontology integration.FCA-MERGE takes as input the two ontologies and a set of natural language documents, and computes a concept lattice from the two source ontologies using FCA techniques.The concept lattice is then exploited by domain experts to derive a merged ontology.In Zhao, Wang, and Halang a similarity method has been introduced to
map ontology concepts based on Rough Set and Formal Concept Analysis theory.The idea is to construct a concept lattice from two source ontologies with FCA, and the similarity of two concepts is then computed using Rough Set theory.In Chen, Bau, and Yeh the authors proposed a method that combines WordNet and Fuzzy Formal Concept Analysis techniques for merging ontologies.WordNet is firstly used to align concepts from a source ontology to concepts in a base ontology, and the remaining unmapped concepts are then aligned to the base ontology using a similarity measure based on fuzzy FCA.Our approach is in line with FCA based research.Yet it differs from previous studies in several aspects.Firstly, while most research focuses on similarity measures of ontology concepts, we contribute an integrated framework that offers a structured and systematic description of the ontology merging process.Secondly, with FCA as the backbone we investigate how to resolve implicit and ambiguous information.Previous research is either implicit about how these problems are resolved, or only addresses particular types of these problems.For example, in Rouane et al. there is an interesting discussion on attribute conflicts, but the authors do not address in detail how these problems are resolved.Thirdly, while most previous research considers one-to-one mappings between concepts, our method is able to deal with more complicated issues, i.e., an ontology concept may have multiple mappings from another ontology, which has not been investigated sufficiently in the literature.Finally, we applied the proposed techniques to non-trivial industrial datasets, and examined how effectively the proposed method can help with improving data interoperability.This has rarely been reported in other FCA-based works.In this section, we introduce the basic concepts of FCA and brief our framework for ontology development.We will use data and examples from the water infrastructure domain to present the techniques developed in this research.FCA theory was developed in Wille and a typical task that FCA can perform is data analysis, making the conceptual structure of the data visible and accessible.Central to FCA is the notion of a formal context, which is defined as a triple K:=〈G, M, I〉, where G is a set of objects, M is a set of attributes, and I ⊆ G × M is a binary relation between G and M.A relation 〈g, m〉 ∈ I is read as "object g has the attribute m".A formal context can be depicted by a cross table as shown in Fig. 1, where the elements on the left side are objects; the elements at the top are attributes; and the relations between them are represented by the crosses.A formal concept of a context K:= 〈G, M, I〉 is defined as a pair (A, B), where A ⊆ G, B ⊆ M, A′ = B and B′ = A.Here A′ is the set of attributes common to all the objects in A and B′ is the set of objects having all the attributes in B.The extent of the concept (A, B) is A and its intent is B.The formal concepts of a context are ordered by the sub- and super-concept relations.The set of all formal concepts ordered by sub- and super-concept relations forms a concept lattice.Fig. 1 shows the concept lattice for the context in Fig. 1, where a node represents a concept labelled with its intensional and extensional description.The links represent the sub- and super-concept relations.
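To make these definitions concrete, the sketch below enumerates the formal concepts of a small one valued context directly from the two derivation operators (A′ on sets of objects and B′ on sets of attributes). The context and its attribute names are hypothetical stand-ins rather than the context of Fig. 1, and the brute-force enumeration only mirrors the definition; it is not the algorithm used by FCA tools such as Galicia.

from itertools import chain, combinations

# Hypothetical one valued context: object -> set of attributes it has.
context = {
    "pipeType1": {"sewage", "pressurised"},
    "pipeType2": {"sewage", "gravity"},
    "pipeType3": {"sludge", "pressurised"},
}
objects = set(context)
attributes = set().union(*context.values())

def common_attributes(objs):
    # A': the attributes shared by all objects in the set (all attributes for the empty set)
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def common_objects(attrs):
    # B': the objects that have every attribute in the set
    return {o for o in objects if attrs <= context[o]}

def formal_concepts():
    # Brute force: close every subset of objects and keep the distinct (extent, intent) pairs.
    concepts = []
    for subset in chain.from_iterable(combinations(objects, r) for r in range(len(objects) + 1)):
        intent = common_attributes(set(subset))   # A' = B
        extent = common_objects(intent)           # B' = A
        if (extent, intent) not in concepts:
            concepts.append((extent, intent))
    return concepts

for extent, intent in formal_concepts():
    print(sorted(extent), sorted(intent))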
The formal contexts introduced above are not the ones that occur most frequently in applications of FCA.Most often data is encoded in many valued contexts.A many valued context K:= 〈G, M, W, I〉 consists of a set of objects G, a set of attributes M, a set of attribute values W, and a ternary relation I ⊆ G × M × W.A relation 〈g, m, w〉 ∈ I is read as "object g has the attribute m and its value is w".Fig. 2 shows a many valued context which lists different water pipes having different attribute values.In order for FCA theory to be applied to a many valued context, it needs to be unfolded into a one valued context through conceptual scaling.Fig. 2 shows the one valued context for the many valued context in Fig. 2 after conceptual scaling.As the extent and intent of a concept overlap with those of its super- and sub-concepts, redundancy exists in a concept lattice.To prevent this, reduced labelling is introduced.A lattice with reduced labelling is obtained by replacing the label of each concept (A, B) with (N(A), N(B)), where N(A) contains the non-redundant elements in A, and N(B) contains the non-redundant elements in B.An object o will appear in N(A) if the corresponding concept is the greatest lower bound of all concepts containing o.An attribute a will appear in N(B) if the corresponding concept is the least upper bound of all concepts containing a. Fig. 3 shows the lattice derived from the one in Fig. 1 with reduced labelling.Furthermore, we can eliminate from a lattice the concepts which do not possess their own attributes or objects.This leads to a structure called a Galois Sub Hierarchy.A GSH only consists of so-called attribute concepts and object concepts.An object concept represents the smallest concept with a given object in its extension, and an attribute concept represents the largest concept with a given attribute in its intension.The GSH of the lattice in Fig. 3 is depicted in Fig. 3, where concepts 1, 5, 8 and 12 are removed due to empty N(A) and N(B).Concept 2 is an attribute concept and concept 9 is an object concept.With FCA theory as the backbone, we have developed a framework to support ontology development.The framework essentially consists of three components: Context Formation, Context Composition and Ontology Derivation, as illustrated in Fig. 4.To generate an integrated ontology for two datasets, Context Formation takes the datasets as inputs and generates a one valued context for each of them.The generated contexts are then fed to Context Composition to produce an integrated GSH.Ontology Derivation takes the GSH generated in Context Composition and generates an integrated ontology as well as concept mappings between the two datasets.We will describe Context Formation in Section 4, and elaborate on Context Composition and Ontology Derivation in Sections 5 and 6.Fig.
5 shows the components of Context formation.Given a dataset, Data Acquisition derives concepts encoded in the dataset as well as their attribute definitions, and the result is a many valued context for the dataset.The component looks at sources where various feature types and their definitions can be extracted.The most common sources here are text/web documents created by system designers/developers for specifying system requirements and design.Other important sources are conceptual/logical data models of the concerned dataset.The generated context is then fed to the Information Explication component to restore implicit information.The component Conceptual Scaling transforms a many valued context into a one valued context, in order for classic FCA techniques to be applicable.The main challenge here is to deal with implicit information.Implicit information is caused by several factors.As an example in water infrastructure domain, when defining a feature type, organisations tend to explicitly state specific properties, but leave common ones unarticulated in their design documents.For instance, a sewer pipe is characterised by how it conveys sewage: either by gravity or by pressure, with the gravity distribution employed more often than the pressurised form.Most water companies explicitly specify the pressurised characteristic of a sewer pipe, but not the gravity one.Furthermore, many organisations take some domain knowledge as granted, and do not encode it explicitly.For example, a sludge sewer is usually pressurised rather than gravity.As this is well understood in the domain, many water companies choose not to encode this information explicitly.Table 1 shows a portion of a many valued context that is generated for a sewerage dataset, where many blank cells exist due to implicit or unarticulated domain knowledge.The main consequence of this is that it can lead to an ontology that is ill-formed, and does not correctly capture critical concepts and semantics of the domain.Fig. 
6 shows the GSH for the context in Table 1.Due to implicit information, many important concepts, such as gravity sewer and underground sewer, are missing from the hierarchy and therefore from the resultant ontology.Furthermore, different organisations may choose what not to articulate in their datasets.We believe this hidden knowledge is one of the main reasons that hinder data compatibility or interoperability across organisations.We classify implicit information into two groups: attribute-specific and object-specific.Attribute-specific implicit information is concerned with a particular attribute, and is applicable to all objects having that attribute.Object-specific implicit information is concerned with an attribute of particular objects only.An example of the former is the how attribute in Table 1.The unarticulated domain knowledge here is that a sewer pipe carries sewage by gravity if not explicitly specified, and this applies to all sewerage pipes having the how attribute.An example of object-specific implicit information is the how attribute of pipeType3.The implicit information here is that if a pipe carries sludge sewage, by default it carries it by pressure.This is relevant to the how attribute, but applies to pipeType3 only and therefore is classified as object-specific implicit information.We use a rule-based approach to recover implicit information.As implicit information is largely unarticulated domain knowledge, we need to work closely with domain experts to acquire these rules.We have two types of rules, attribute rules dealing with attribute-specific implicit information, and object rules dealing with object-specific implicit information.To elicit attribute rules, we iterate over each attribute.An attribute has implicit information if it has missing values for some objects.Each attribute with implicit information in a context table gives rise to a rule.Involvement of domain experts is required at this point to generate such a rule.For example, Rule 1 in Fig. 7 is collected for the how attribute in Table 1.To elicit object rules, we iterate over each object in the context, and examine each of its attributes that do not have a value.If an attribute has implicit information which cannot be recovered with an attribute rule, an object rule is elicited to recover the implicit information with the help of domain experts.For example, for the object pipeType3, the attribute how has implicit information.As the implicit information for how in this case is pressurised, it cannot be recovered with Rule 1 discussed above.An object rule, Rule 4 in Fig. 7, is acquired in this case for pipeType3.Fig.
7 shows a set of rules elicited for the context table in Table 1, where Rules 1, 2 and 3 are attribute rules.Rule 4 is an object rule, which works for the how attribute of the object pipeType3 only.This step is concerned with how a context table can be manipulated to restore implicit information.To recover implicit information for an object, we first identify a set of rules applicable to it.This includes all relevant attribute rules and object rules for the object.Each attribute of the object is examined to see if it has implicit information.If the answer is yes, the relevant attribute rule is identified.The identification of an object rule is straightforward as it is linked to the concerned object directly.For an object, if both an attribute rule and an object rule are identified as relevant to an attribute, the object rule overrides the attribute rule when restoring implicit information.For example, for PipeType3, both Rule 1 and Rule 4 deal with the how attribute, but only Rule 4 is applied when restoring implicit information for the how attribute of this object.Once the applicable rules have been identified, we generate new objects by applying different combinations of the rules.This allows objects with different combinations of attributes to be identified.Each derived object retains the existing object-attribute relationships of the original object and derives new ones by applying the corresponding rules.For example, for sewerPipeType1, there are two attributes that have implicit information, what and location.Accordingly, two attribute rules are identified: Rule 1 for the what attribute and Rule 2 for the location attribute.There is no object rule identified for pipeType1.By applying different combinations of the rules, three new objects are derived from pipeType1: pipeType1_object1 by applying Rule 1, pipeType1_object2 by applying Rule 2, and pipeType1_object3 by applying Rules 1 and 2.All new objects retain the existing object-attribute relationships of pipeType1, with different relationships derived due to the different rules applied.Depending on the number of applicable rules, each original context object derives a different number of new objects.For example, there are 2 applicable rules for PipeType1, PipeType2 and PipeType3.The combination of these rules generated 3 derived objects for each original object.PipeType4 has 3 applicable rules and 7 new objects have been derived.Table 2 lists the many valued context after implicit information has been restored with the rules.This many valued context is then fed to the Conceptual Scaling component to generate a one valued context table.Table 3 lists the one valued context table after the conceptual scaling of the context in Table 2.
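The explication step just described can be read as a pure function from a many valued context plus a rule set to an enriched context: object rules take precedence over attribute rules for the same attribute, and every non-empty combination of the applicable rules yields a derived object. The Python sketch below illustrates this under those assumptions, followed by a simple nominal scaling into a one valued context; the rule contents and attribute values are hypothetical stand-ins, not the actual Rules 1–4 of Fig. 7.

from itertools import chain, combinations

# Hypothetical many valued context (None marks a missing, i.e. implicit, value).
context = {
    "pipeType1": {"how": "pressurised", "what": None, "location": None},
    "pipeType3": {"how": None, "what": "sludge", "location": None},
}

# Stand-ins for the kind of rules elicited from domain experts.
attribute_rules = {"what": "sewage", "location": "underground", "how": "gravity"}
object_rules = {("pipeType3", "how"): "pressurised"}   # overrides the attribute rule for this object

def applicable_rules(obj, values):
    # One rule per attribute with a missing value; object rules take precedence.
    rules = {}
    for attr, value in values.items():
        if value is None:
            if (obj, attr) in object_rules:
                rules[attr] = object_rules[(obj, attr)]
            elif attr in attribute_rules:
                rules[attr] = attribute_rules[attr]
    return rules

def explicate(context):
    enriched = dict(context)
    for obj, values in context.items():
        rules = applicable_rules(obj, values)
        attrs = sorted(rules)
        # every non-empty combination of the applicable rules yields one derived object
        combos = chain.from_iterable(combinations(attrs, r) for r in range(1, len(attrs) + 1))
        for i, combo in enumerate(combos, start=1):
            derived = dict(values)
            derived.update({a: rules[a] for a in combo})
            enriched[f"{obj}_object{i}"] = derived
    return enriched

def scale(context):
    # Nominal scaling: one boolean attribute per (attribute, value) pair.
    return {obj: {f"{a}={v}" for a, v in values.items() if v is not None}
            for obj, values in context.items()}

one_valued = scale(explicate(context))   # two applicable rules per object -> three derived objects each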
Context composition takes two formal contexts as input, and generates an integrated GSH.The main components of Context Composition are Context Integration and Hierarchy Generation, as shown in Fig. 8.The main challenge here is to deal with ambiguous information during context integration, i.e., different terms may be employed to refer to the same attribute, and attributes may be modelled at different levels of granularity.An example here is that one dataset may model a sewerage pipe as either main or lateral and another may classify it as trunk main, non-trunk main, or private pipe.Attribute disambiguation is a process to match attributes from different datasets.In this research we use a pre-defined data dictionary developed in Fu and Cohn to disambiguate attributes.The data dictionary maintains a set of terms that describe concepts in a domain, as well as their terminological relationships, e.g. BT/NT etc.Using the data dictionary, we can decide the semantic relationships of two attributes.In what follows, we will use the context tables K1 and K2 shown in Tables 4 and 5 to illustrate the context integration process.Adding these into K results in the formal context shown in Table 12, which is also the final integrated context table.The GSH constructed from this integrated context is illustrated in Fig. 9.The ontology derivation component of our framework takes the GSH generated in Section 5 and generates an ontological structure.Fig. 10 shows the components of ontology derivation.The GSH is exploited to derive several types of information, including ontological concepts, subsumption relationships between concepts, and attributes of concepts.The information identified forms an ontological structure from which a full ontology can be developed.The mappings between concepts of different datasets can also be identified from the GSH.This subcomponent derives mappings between concepts of the two datasets.Given a formal concept in a GSH, if its extent contains more than one object, then it indicates a potential mapping between these source concepts.Validation by domain engineers is requested at the evaluation stage to judge whether an identified mapping is correct.If the answer is negative, features need to be identified to differentiate one concept from another.This often involves the identification of new attributes or relationships of the concerned concepts.The existence of incorrect matches triggers the need to iterate the context composition or integration operations.As we employ a GSH in this research, intermediate, abstract concepts are reduced in the context integration step and the resulting hierarchy consists only of object concepts and attribute concepts.Object concepts have to be kept in the resultant ontological hierarchy as they correspond to the initial concepts of the datasets and therefore need to remain in the ontological structure to respect the initial class specification of the datasets.For an attribute concept, the assistance of domain engineers is required to decide whether it should be kept or discarded by taking into account its significance or interest to the application.When an attribute concept is discarded in a GSH, all elements in its intent are passed on to its sub-concepts, and super-/sub-concept relationships are established between its super-concepts and sub-concepts.After a decision has been made on which concepts are to be kept in the resultant ontology, the rules for identifying relationships and attributes of a concept are straightforward: all elements in the intent of a formal concept are declared as attributes of the ontological concept, and sub-/super-concept relations between two formal concepts are identified as is-a relationships between the corresponding ontological concepts.
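Read as pseudocode, the derivation rules just stated amount to a direct traversal of the retained GSH nodes: each node becomes an ontological concept, its intent becomes the concept's attribute set, its parent links become is-a relationships, and any extent covering more than one source object is flagged as a candidate mapping for domain engineers to confirm. The sketch below is a minimal illustration under those assumptions; the GSH is represented simply as nodes with parent references, which is not the internal data structure of Galicia, and the example node names are invented.

from dataclasses import dataclass, field

@dataclass
class GSHNode:
    name: str
    extent: set                 # source concepts (objects) covered by this formal concept
    intent: set                 # attributes carried by this formal concept
    parents: list = field(default_factory=list)   # super-concepts kept in the GSH

def derive_ontology(nodes):
    # Ontological concepts with their attributes, is-a links, and candidate concept mappings.
    concepts = {n.name: sorted(n.intent) for n in nodes}
    is_a = [(n.name, p.name) for n in nodes for p in n.parents]          # sub -> super
    candidate_mappings = [sorted(n.extent) for n in nodes if len(n.extent) > 1]
    return concepts, is_a, candidate_mappings

# Hypothetical two-node fragment of an integrated GSH.
top = GSHNode("SewerPipe", {"K1:pipe1", "K2:pipeA"}, {"carries=sewage"})
sub = GSHNode("PressurisedSewerPipe", {"K1:pipe1"}, {"carries=sewage", "how=pressurised"}, [top])
print(derive_ontology([top, sub]))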
An evaluation of the proposed techniques has been performed on several industrial datasets.We first describe the experimental setup and the ontology similarity measures employed in the evaluation.We then report on the evaluation results.The datasets we used for our experiments were sourced from four UK water companies.These datasets essentially encode the same types of information, including various water pipes, metering and treatment facilities for transporting freshwater/wastewater for customers across the UK.However, each organisation records its information with little thought towards interoperability with others.This results in data heterogeneities.Due to the data confidentiality agreements we have with our industrial partners, we cannot publish these datasets.Nevertheless, we have listed in Table 13 the statistics on the datasets.The mapping and integration were carried out in a semi-automated manner: data acquisition and attribute disambiguation were conducted manually, the open source tool Galicia was employed for context manipulation and GSH generation, and all other processes such as information explication and conceptual scaling were completed with Java and SQL code.The evaluation was performed in three phases.Phase I experiments constructed local ontologies for each dataset involved.Pairwise comparison was conducted to measure the similarity of these local ontologies, and the results then served as benchmarks for the subsequent evaluation.Phase II experiments studied how implicit information impacts on ontology interoperability, and demonstrated how information explication can help with ontology alignment.Phase III compared an ontology developed in this research with a handcrafted ontology developed with a traditional knowledge engineering approach.The performance of the two ontologies was evaluated by studying how well the two ontologies fit and respect the knowledge structures of the datasets to be integrated.Experiments were first performed to restore implicit information for the ontologies generated in Phase I; the resultant ontologies were then compared with each other using the same measures.To do this, rules for restoring implicit information were acquired for each dataset with the help of domain engineers.Table 16 shows the statistics on these rule sets.The rules were applied to the formal contexts generated in the Phase I experiments to restore implicit information as well as derive new feature types.The resultant contexts were used to generate ontologies in the same way as was done in the Phase I experiments.The similarity measures were calculated for these ontologies, and the results were compared with the ones we obtained in the Phase I experiments, which are shown in Figs. 11 and 12.
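The comparisons that follow are reported in terms of lexical and taxonomic similarity. As a rough illustration of the lexical side only, the sketch below computes a lexical precision as the proportion of concept labels of one ontology that also occur in another; this formulation is an assumption made here for illustration, since the exact definitions of LP and the taxonomic precision are given in Section 7.1 of the paper rather than in the text reproduced in this excerpt.

def lexical_precision(candidate_labels, reference_labels):
    # Assumed formulation: share of candidate concept labels that are also found in the reference.
    candidate = {label.lower() for label in candidate_labels}
    reference = {label.lower() for label in reference_labels}
    return len(candidate & reference) / len(candidate) if candidate else 0.0

# Hypothetical label sets for two local ontologies.
o1 = {"SewerPipe", "PressurisedSewerPipe", "Manhole"}
o2 = {"SewerPipe", "TrunkMain", "LateralDrain", "Manhole"}
print(round(lexical_precision(o2, o1), 2))   # 0.5 for this invented example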
Compared with the baseline similarity scores, we can see a substantial improvement in the similarity of these ontologies, both at the lexical level and at the taxonomic level.The average lexical precision increased to around 60%, from below 20% in the Phase I study.This was mainly due to the increase in common feature types which were restored in the information explication process.Taxonomic precision was improved similarly: from around 20% to around 60% on average.This improvement was mainly due to the resulting ontologies bearing a similar level of detail in their hierarchies once they were enriched with the derived objects generated with the rules.A concept in one ontology had an increased number of common super- and sub-concepts with its matching concept in another ontology.This resulted in improved local taxonomic similarity and therefore improved global taxonomic similarity.This led to the conclusion that implicit information impacts greatly on the similarity of the local ontologies, and that the similarity of these ontologies can be improved significantly if implicit information is restored.The concepts and attributes from the four datasets were identified and used to generate context tables.The context tables were then fed to Galicia to derive GSHs.An ontology was generated from a GSH by discarding all attribute concepts and keeping the object concepts.Four ontologies were generated, one for each dataset.Metrics described in Section 7.1 were used to measure the similarity of these ontologies.We observed that the four water companies differ greatly from each other in what business objects they record in their systems, which leads to ontologies that are incompatible with each other both lexically and taxonomically.These local ontologies only agreed with each other to a small extent: only a relatively small percentage of terms in one ontology were also found in another ontology.This was measured with the lexical precision LP.Ontology O2 is the one that has the fewest common terms with the other ontologies.Manual inspection of these ontologies found that this lexical disagreement was mainly due to the different aspects of the domain that an organisation chose to encode in its data management systems, and this resulted in different ontology concepts.The poor performance of the O2 ontology was due to granularity issues: it encoded concepts at a finer level than the other ontologies, which resulted in lexical mismatches with other ontologies.The taxonomic-level similarity of these ontologies was slightly better but the scores were still quite low, as shown in Table 15.The presence of different concepts in the hierarchies of these ontologies led to disappointing results.Again, ontology O2 performed the worst, with a much lower taxonomic precision when compared to the other ontologies.Examination revealed that the granularity mismatch was again the main cause for this.As the O2 ontology encoded business objects at a finer granularity than the others, it had a very different hierarchy from those of the other ontologies.The availability of vast quantities of data presents organisations with both opportunities and challenges.Data integration techniques offer a promising way of addressing the issue of data heterogeneities and promoting data sharing and interoperability across organisations.In this paper we present a formal and semi-automated method for ontology development, with the aim of reconciling heterogeneous data and supporting data integration.The research extends classical FCA theory to address the issues of implicit and
ambiguous information, which, we consider, are important but have not been sufficiently investigated by previous studies.The research enables considerable ontology engineering activities to be automated, including concept derivation and hierarchy generation.In contrast to studies that draw upon either small or simplified datasets, we evaluate the proposed techniques on non-trivial industrial datasets.Our experimental results demonstrate that the techniques described in this paper can help curate and fuse data from disparate sources, and support the development of ontologies that better fit and respect the underlying knowledge structures of the domain.There are a number of works which we plan to undertake in the future, including developing techniques to deal with incomplete information in data integration, and validating the proposed techniques on datasets in other application domains.The four local ontologies, which had implicit information restored in the Phase II experiments, were then integrated to build a global ontology.This was achieved by first performing the context integration as described in Section 5.The contexts of O1 and O2 were integrated first, and the resultant context was then integrated with the O3 context and so on, as shown in Table 17.The main activity performed here was attribute disambiguation.Table 17 shows the types of attribute matches found during the various stages of the integration process.For example, of the 21 attributes of the O2 context, 12 found an exact match in the O1 context, 2 found narrower matches, 5 did not find any match, and 2 found multiple matches.After attribute disambiguation, the integrated context was used to generate a GSH, from which an integrated ontology was derived.The total number of concepts in the integrated ontology was 248 and the depth of the hierarchy was 6.To evaluate the quality of this integrated ontology, we compared it against a handcrafted ontology that was developed with a traditional knowledge engineering approach as described in Fu and Cohn.Both the FCA ontology and the KE ontology had the same local ontologies as major inputs, but they differ from each other in how the ontological hierarchies were built and how implicit/unarticulated information was recovered.The hierarchy of the FCA ontology was generated automatically with the FCA tool Galicia based on the attribute definitions of objects, and the hierarchy of the KE ontology was generated manually based on domain knowledge from domain experts.The FCA ontology achieved information explication via the domain rules as discussed in Section 4.The KE ontology did this through a manual semantic enrichment process.Extra data semantics of the KE ontology were manually derived from both system design documents and domain engineers.The resultant KE ontology consists of 216 concepts organised in 5 hierarchical levels.We evaluate the two ontologies in a similar fashion as done in.We consider that an ontology is of good quality when it conforms to and has a good coverage of the knowledge structures of the datasets to be integrated.This was performed by comparing the FCA ontology and the KE ontology against the local ontologies O1, O2, O3 and O4 as developed in the Phase II experiments.Table 18 summarises the results.Both ontologies had similar scores for the lexical precision LP when compared against these ontologies.This can be largely explained by the fact that both the FCA ontology and the KE ontology had these local ontologies as input, i.e., concepts in these ontologies were major lexical sources of both ontologies.FCA ontology outperformed KE ontology on its
similarity to the local ontologies at the taxonomic level.This is because FCA ontology was generated systematically based on attribute definitions of input feature types, and sub- and super-concept relationships between concepts were identified in the same fashion as the local ontologies.This led to the improved taxonomic precision of the FCA ontology.However the ontological hierarchy generated with KE method is rather subjective, i.e. depending upon human judgement on what intermediate concept to add, and when a sub-/super-concept relationship should be established.The hierarchy tends to be distorted with missing sub- and super-concept links when the number of concepts increases.FCA ontology also outperformed KE ontology on the overall similarity measure GF.This leads to the conclusions that FCA ontology fits and respects the local ontologies better and therefore better serves the integration purpose in this case. | Data is a valuable asset to our society. Effective use of data can enhance productivity of business and create economic benefit to customers. However with data growing at unprecedented rates, organisations are struggling to take full advantage of available data. One main reason for this is that data is usually originated from disparate sources. This can result in data heterogeneity, and prevent data from being digested easily. Among other techniques developed, ontology based approaches is one promising method for overcoming heterogeneity and improving data interoperability. This paper contributes a formal and semi-automated approach for ontology development based on Formal Concept Analysis (FCA), with the aim to integrate data that exhibits implicit and ambiguous information. A case study has been carried out on several non-trivial industrial datasets, and our experimental results demonstrate that proposed method offers an effective mechanism that enables organisations to interrogate and curate heterogeneous data, and to create the knowledge that meets the need of business. |
388 | Auto-HPGe, an autosampler for gamma-ray spectroscopy using high-purity germanium (HPGe) detectors and heavy shields | High-purity germanium gamma ray detectors have proven to be essential for environmental, geological, and atmospheric research.High-resolution γ-spectrometry using HPGe provides a nondestructive, multi-elemental analysis, enabling the simultaneous measurement of uranium, thorium, and their decay products in a range of sample types.Germanium semiconductor detectors were first introduced in 1962 and HPGe crystals were first developed in the mid-1970s.HPGe detectors are now the most used for high energy resolution gamma ray research, and in the last ∼15 years, large improvements in efficiency, sensitivity of energy resolution, and access to liquid nitrogen have led to this equipment being widely used around the globe, which has led to important discoveries in environmental and geological sciences.Automation is routine in most chemical analyses, with instruments provided by manufacturers often coming together with autosamplers.However, autosamplers can be expensive, and some laboratories opt not to employ autosamplers for some analyses as a result of these high costs.In particular, gamma ray detection using HPGe is a kind of analysis for which autosamplers can be prohibitively expensive, exceeding the price of the analyzers themselves.An important reason for the high price of the autosamplers for HPGe detection is the need to maneuver the thick, heavy lead shields that prevent interference from background γ rays on the detectors.This specific lead shielding minimizes background natural environmental and cosmic radiation, especially for samples with low radiation levels.However, this reason alone is not enough to justify the high price of the currently available autosamplers from a technological standpoint: while the analyzers themselves almost always employ cutting edge technology based on advanced concepts in physics and chemistry, autosamplers are merely mechanical devices that substitute for a human operator.Therefore, there have been recent examples of the substitution of standard and expensive autosamplers with low-cost devices including robotic arms and Cartesian machines similar to 3D printers.Here we present auto-HPGe, an autosampler that costs a minuscule fraction of commercial models.Like the other low-cost autosamplers previously mentioned, auto-HPGe can be easily integrated with any analytical instrument using AutoIt, a scripting language for the Windows operating system.Also, auto-HPGe is easy to build and operate, without the need for knowledge of electronics or low-level computing.Auto-HPGe can be broadly described as a cage mounted around an HPGe detector, on top of which a moveable gantry is placed.The gantry moves a suction cup holder that brings the samples into the analyzer body.The movement is done on the X and Y axes, both horizontal, and the Z axis, the vertical one.Additionally, a syringe attached to the suction cup has its plunger moved vertically.The analyzer lids are opened using two linear mechanisms attached to the cage around the analyzer.A sample tray is fixed to the cage, sideways to the detector, where samples in petri dishes are placed.A video showing the sampling procedure is available.The moveable parts of auto-HPGe are all controlled using stepper motors, which enable movement reproducibility <1 mm.Five of these motors deal with petri dish manipulation, which is done by the gantry and syringe portion of the autosampler placed above
the analyzer.These five motors are controlled by an MKS Gen-L control board, commonly used to control 3D printers.Another two motors are used to open and close the lids, and are controlled using a second MKS Gen-L board, since such boards can only control up to five motors each.The motors dealing with the lids have higher torque compared to those in the gantry.Marlin, the software package used to control the motors, is fully open source, and enables the use of G-code, the standard language used to operate all sorts of machines like 3D printers and CNC routers.Auto-HPGe can be synchronized to an HPGe detection device using AutoIt, in a similar fashion to other low-cost autosamplers.Auto-HPGe cages the analyzer and uses a gantry to transport petri dishes into and out of the analyzer; it employs a suction cup to move petri dishes; it opens and closes the lids on the analyzer using a leadscrew-based linear mechanism powered by relatively high-torque stepper motors; it is controlled using G-code, the standard language for 3D printer control; and it can be synchronized to an HPGe detector using AutoIt.Most parts used to build auto-HPGe consisted of general purpose parts available from suppliers of 3D printer parts and related equipment.A few parts consisted of 3D-printed parts available from online repositories, or from a previous publication.These 3D printed parts were slightly modified to fit the needs of auto-HPGe.When available, the OpenSCAD codes for each 3D-printed part are given.If not available, only a link to the STL files is listed in the table below.This bill of materials is an expansion of the one presented in a previous paper, which amounts to ∼AU$ 700.Thus, it is necessary to obtain all the items in that bill of materials in addition to the ones listed here, except the microsyringe, which costs ∼AU$ 100, and the hose clamps, ∼AU$ 5.Auto-HPGe is supported by a cage that surrounds the HPGe detector.The cage consists of twelve t-slots connected: four placed horizontally making a square at the base, another four vertically connected to the ones at the base, one 80 cm t-slot placed horizontally at the front, about 5 cm lower than the lids of the HPGe analyzer, one 65 cm one placed at the same height but making 90° with the 80 cm one, and two 80 cm ones at the top of the 80 cm vertical ones.These steps are the same as those listed in a paper describing the building of a different autosampler; see Sections 5.1–5.5 in that paper.The main differences are that the Z axis is now built using longer t-slots, rods, and leadscrew, that legs are no longer needed, and that a guide for a syringe needle is no longer needed.The suction cup mechanism is very simple, consisting of a suction cup connected to a 20 mL syringe with a luer slip tip actuated by a stepper motor.Make a hole with a diameter narrower than that of the luer slip tip at the center of the neck of the suction cup, which measures 5.5 cm in diameter.Then, press the syringe tip inside the hole.Place the syringe in the syringe holder and connect it to the lower end of a 24 cm aluminium slot.Place the stepper motor in the 3D printed motor mount, connect the 10 cm leadscrew, the sliding nut to the screw, and the plunger driver to the nut, and place the motor mount at a distance from the syringe so that when the plunger driver is at the bottom of the screw the syringe has no dead volume.Place the plunger clip around the plunger and plunger driver.Replace the normal handles of the lids with longer 1/2 in. screws.The mechanisms to open and close the
lids are similar to that of the Z axis on the gantry.Each mechanism consists of a leadscrew connected to a stepper motor, and 2 guiding rods, all horizontally oriented.The leadscrew is connected to the shaft of the gearbox-equipped stepper motor using a shaft coupler with both ends for 8 mm shafts.The motors are fixed on the t-slot using the motor mount described in Section 3.It is necessary to replace the original 3 mm screws with longer ones.Each door pusher is fixed to the leadscrew and slides on the parallel rods using the anti-backlash nut and the linear bearing holder, both described in Section 3.The slot that pushes the door to open it must be shorter than the one used to close it.They must not touch the lid itself, only the handles.Door pushers must be placed so that they do not hit each other when closing the door: for example, the slots on the door pusher of the right door can be mounted differently from those for the left door.Details about this part can be found elsewhere; see Section 5.6 in that paper.The only difference is that for auto-HPGe two control boards, and not only one, are necessary.The sampling station consists of three 20 × 80 cm aluminum slots connected to the side of the cage.Samples are kept in between 5 mm screws, attached to the 20 × 80 mm aluminum slots using hammer nuts.It is necessary that samples are aligned on the Y line so that the code used to control auto-HPGe works properly.Supplementary data associated with this article can be found, in the online version, at https://doi.org/10.1016/j.ohx.2018.e00040.The dumping station can be any soft surface onto which the measured samples are discarded.Here we used a cardboard box filled with bubble-wrap plastic.Auto-HPGe is controlled using G-code via the program Hype!Terminal.The serial connection settings are: 115,200 for baud rate, 8 data bits, none for parity, 1 stop bit, and none for flow control.Details about G-code commands are found in Section 6.2 of the paper describing a similar autosampler.Because there are two control boards, two simultaneous instances of Hype!Terminal must be run, one for each corresponding COM port.Therefore, it is necessary that the COM ports corresponding to each control board are identified.The automated control of auto-HPGe consists of writing an AutoIt script that sends instructions to auto-HPGe via the two Hype!Terminal windows in synchrony with the software controlling the HPGe detector.The sequence of actions is: 1) opening the analyser lids; 2) moving the gripper to a sample petri dish and gripping it; 3) moving the petri dish to the analyser; 4) leaving the petri dish inside the analyser; 5) getting out of the analyser; 6) closing the analyser lids; 7) starting the measurement; 8) waiting until the measurement is finished; 9) opening the lids again; 10) gripping the petri dish inside the analyser; 11) taking the petri dish to the dump station.This sequence of actions can be repeated as many times as necessary.All horizontal movements by the gripper are done at a predetermined safe height, that is, a position on the Z axis where the gripper moves without colliding with any object.All vertical movements are done at predetermined X and Y positions for each element involved.Most of the actions are done by auto-HPGe via Hype!Terminal, but steps 7 and 8 involve Genie.
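For readers who prefer code to the step list above, the following is an illustrative re-sketch of the control flow in Python with pyserial. The actual workflow sends the equivalent G-code through the two Hype!Terminal sessions and uses an AutoIt script to interleave them with the detector software, so this is not the supplementary code; the COM port names, coordinates, feed rates and the mapping of the plunger and lid motors onto G-code axes are all hypothetical, while G90, G1, M302 and M400 are standard commands understood by Marlin.

import serial  # pyserial

gantry = serial.Serial("COM3", 115200, timeout=2)   # board 1: gantry X/Y/Z plus suction syringe
lids = serial.Serial("COM4", 115200, timeout=2)     # board 2: lid-opening leadscrews

def send(board, line):
    # Send one G-code line and wait for Marlin's "ok" acknowledgement.
    board.write((line + "\n").encode())
    while b"ok" not in board.readline():
        pass

def move_sample_into_detector(x, y):
    send(gantry, "G90")                  # absolute positioning
    send(gantry, "M302 P1")              # allow cold moves on E (plunger assumed mapped to the E axis)
    send(gantry, "G1 Z80 F600")          # hypothetical safe travel height
    send(lids, "G1 X60 F300")            # open the lids (hypothetical stroke)
    send(gantry, f"G1 X{x} Y{y} F1200")  # move over the chosen petri dish on the tray
    send(gantry, "G1 Z5 F300")           # lower the suction cup onto the dish
    send(gantry, "G1 E-8 F100")          # retract the plunger to grip by suction
    send(gantry, "G1 Z80 F600")
    send(gantry, "G1 X0 Y0 F1200")       # move over the detector well
    send(gantry, "G1 Z5 F300")
    send(gantry, "G1 E0 F100")           # push the plunger back to release the dish
    send(gantry, "G1 Z80 F600")
    send(gantry, "M400")                 # block until all queued moves have finished
    send(lids, "G1 X0 F300")             # close the lids; the measurement can then be started

In the real setup the AutoIt script types these commands into the two terminal windows, starts the acquisition in Genie, and then polls the screen as described next.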
While step 7 is straightforward, step 8 consists in the monitoring of the measured value.Once every minute, the areas displayed at the positions shown in Fig. S2.4 are analysed.If the first area shows "6", and the other area shows any number, the waiting finishes.The analysis is done by evaluating the pixels in the screen section where the numbers are displayed.Therefore, it is necessary that the computer is "frozen" for a few seconds every 10 min and the window of Genie is brought to the foreground of the screen, so that it can be evaluated without interference of other programs being used simultaneously on the computer.The full code for all the steps is provided in Supplementary information 2.
With auto-HPGe, between 1 and 6 times more samples have been run in our laboratory over the same period compared to manual operation.This is based on the activity levels of particular samples and the availability of personnel to change the samples.We expect that, with the continued use of auto-HPGe, people will not need to visit the laboratory on weekends and public holidays, which has been the laboratory routine due to the slow turnover of most samples, and the differing times associated with when the counts are high enough that samples can be changed with high accuracy in HPGe detection.The suction cup mechanism employed in auto-HPGe has been very reliable for the usual samples being processed.It is possible that heavier samples could be a problem, and for those a different mechanism may be necessary.Also, for containers much smaller than the petri dishes used here, or with shapes without a flat surface, a different kind of gripper not based on a suction cup may be more appropriate.Using suction cups, it is very important that the surfaces are kept clean, without dust or irregularities that can interfere with the suction holding.Also, there is a limit to the weight that the suction cup mechanism can handle without problems.In our experience, samples with mass between 50 and 200 mg could be handled successfully.Finally, samples cannot be too large or wide; they must fit inside the HPGe detector sampling area.The design presented here is completely open-source, and can be freely modified to suit different purposes.For example, a larger sample base can be built, so that more samples can be handled.Also, the dump station can be improved to deal with more fragile samples.Finally, auto-HPGe currently cannot process external signals, so it is essentially blind to mistakes and failures.Although this simplifies the assembly of the system, the autosampler will not stop if accidents occur.Future improved versions of auto-HPGe can incorporate sensors strategically positioned so that checks can be performed to avoid damage from accidents.The control board employed here and its firmware are equipped to deal with endstops, which makes the task less complicated than for other control options.The authors declare that there are no known conflicts of interest related to the work presented in this manuscript | Radionuclide measurements have proven to be essential for determining processes related to pressing environmental issues as well as reconstructing historical events related to natural and anthropogenic activities. The detection of radionuclide tracers in environmental and geological samples provides unique and essential insights into specific sources and sinks. Despite their usefulness in measuring natural and anthropogenic radioisotopes, high-purity germanium (HPGe) gamma ray detectors are rarely automated as a result of the heavy shielding required to use this equipment.
Consequently, the commonly available autosamplers for this kind of analysis can be very expensive, exceeding AU$400,000. Here we present auto-HPGe, an autosampler for gamma ray detection in heavy shields that costs about AU$1100 to build. Auto-HPGe has potential to make HPGe analysis more attractive to scientists, especially when the equipment is located in remote locations or when the ability to change samples at odd hours is limited. |
389 | Micromechanical properties of canine femoral articular cartilage following multiple freeze-thaw cycles | Articular cartilage is a viscoelastic heterogeneous material divided into layered zones with varying material properties and functionalities.The extracellular matrix is heterogeneous in nature, where variations exist in composition, structure and vascularity at a micro-level.It is composed of proteoglycans, collagens and glycoproteins, which are all macromolecular components.Cartilage also contains chondrocytes that become embedded within the matrix, maturing and dividing to deposit new cartilage.Its primary function is to maintain a smooth surface allowing lubricated frictionless movement and to help transmit articular forces, therefore minimising stress concentrations across the joint.Knowledge of material properties of cartilage is crucial to understanding its mechanical function and morpho-functional alterations that occur during ageing, disease and injury.Whilst valuable data in isolation, material property information is also crucial to other mechanical analyses, including computational models that attempt to predict in vivo joint behaviour.Material properties of articular cartilage ECM have been widely reported utilising varying testing, storage and preservation techniques.Specific testing techniques have changed over time and varied according to investigator preference and overall experimental goals.In general, however, all studies seeking to quantify the mechanical behaviour of biological tissues strive to maintain biological fidelity of the testing conditions in the experiment; for example testing fresh tissue samples under hydrated conditions that are representative of the internal environment of the studied organism.However, accomplishing this may be challenging for numerous reasons including the need for transportation between dissection and testing locations, availability or failure of testing equipment and the desire to test large sample numbers from individual specimens thereby minimising tissue waste.In such circumstances it is standard practice to store and preserve samples, often requiring tissue to undergo one or more freeze-thaw cycles before mechanical tests can be carried out.Therefore in situations where logistical limitations prevent testing of fresh samples, it is beneficial to explore if preservation of tissues samples through freezing can be utilised without compromising mechanical properties.In recent years there have been a number of systematic investigations into the effects of multiple freeze-thaw cycles on the mechanical properties of ligaments and tendon.Although some variation between individual studies exists, these analyses suggest that ligament and tendon tissue can undergo a minimum of two freeze-thaw cycles before significant changes to their material properties occur, thereby providing important constraints on experimental designs involving these tissues.However, despite its fundamental importance to joint biomechanics, to the best of our knowledge, no such data exists exploring the effect of more than one freeze-thaw cycle on material properties of articular cartilage.The aim of this paper is therefore to quantify how articular cartilage mechanical properties are affected by multiple freeze-thaw cycles directly addressing this important gap in knowledge.Dynamic nanoindentation is used to determine the shear storage modulus, shear loss modulus, elastic modulus and the loss factor of canine femoral condyle articular cartilage across three 
freeze-thaw cycles.One disease-free canine cadaveric knee joint from a skeletally mature Staffordshire Bull cross mix was dissected 36 h after the dog was euthanized.Ethical permission for use of this cadaveric material was granted by the Veterinary Research Ethics Committee, University of Liverpool.Healthy articular cartilage samples measuring < 1 cm2 were harvested from the medial and lateral bilateral femoral condyles using a low speed band saw.Gross examination of the samples showed no sign of fibrillation or wear.Following dissection, each of the 11 samples was submerged in phosphate buffered saline and stored at cooled temperatures for up to 12 h until they were tested while still fresh using nanoindentation techniques, as detailed below.Following testing, all 11 samples were then frozen at −20 °C for up to 48 h.Samples were then individually thawed for three hours at 3–5 °C and re-tested using the same nanoindentation protocol after having undergone one freeze-thaw cycle.This was completed within one hour and hydration of the cartilage was maintained through constant exposure to PBS prior to and during testing.This freeze-thaw procedure was repeated for three cycles and the material properties of all 11 samples were measured after each freeze-thaw cycle.Samples were specifically thawed in cooled conditions, as room temperatures have been shown to thaw cartilage samples too quickly and cause damage to the ECM.Cartilage samples underwent dynamic nanoindentation using an instrument equipped with an ultra-low load DCM-II actuator utilising a Continuous Stiffness Measurement module to determine the micromechanical complex shear modulus.Samples were mounted into a custom-made liquid cell holder, with a 1 cm radius and 2 mm deep well, which allowed partial submersion of the samples in PBS during testing.Samples were then examined under the built-in optical microscope to randomly select ten indent locations per sample, totalling 110 measurements per cycle of freezing.Given that it was not possible to differentiate between microstructural features in the cartilage with the optical microscope, indentation sites were based on topographical homogeneity for accurate surface detection.Repeated or overlapping indentations in subsequent cycles of freezing were possible, although it has previously been reported that there is no visible deformation of cartilage following low loads such as those experienced during nanoindentation when a recovery time is incorporated.Similarly to previous research investigating viscoelastic materials, a flat-ended cylindrical 100 µm punch tip was utilised as opposed to a sharp Berkovich tip, which has been used in other studies testing cartilage.After the indenter head detected the surface of the sample, a pre-compression of 8 μm was applied until the indenter was fully in contact with the sample.The surface detection was determined by a phase shift of the displacement measurement.In order to accurately detect the surface, the phase shift was monitored over a number of data points, which has previously been shown to be effective.Once the surface detection requirement was fulfilled over the predefined number of data points, the initial contact was determined from the first data point in the sequence.Once the indenter was fully in contact with the sample surface it vibrated at a fixed frequency of 110 Hz with 500 nm oscillation amplitude.Contact stiffness and damping were obtained through electromagnetic oscillation sequences.The initial oscillation measured instrument stiffness and damping and these were subtracted from the total measurement to obtain the contact response.Material properties were then obtained during the second oscillation.
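Although the excerpt does not reproduce the equations, data from this kind of flat-punch CSM measurement are commonly converted to the reported quantities with relations of the following form, where S is the contact stiffness, C the contact damping, ω the angular frequency of the 110 Hz oscillation, a the punch radius and ν an assumed Poisson's ratio for cartilage; these expressions are stated here as an assumption for orientation, not as the exact formulation used in the study.

G' = \frac{(1-\nu)\,S}{4a}, \qquad G'' = \frac{(1-\nu)\,\omega C}{4a}, \qquad E = 2\,G'\,(1+\nu), \qquad \text{loss factor} = \frac{G''}{G'}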
these were subtracted from the total measurement to obtain the contact response.Material properties were then obtained during the second oscillation.After each indentation, the tip was cleaned to prevent any transfer of biological material to the subsequent indentation site which may affect measurements.This was achieved by indenting an adjacent sample holder which was mounted with 3 M double-sided Scotch tape.This method was found to be effective at cleaning the tip without picking up any residue from the Scotch tape.Following testing of each sample, further indents were made on fused silica with the test sites remaining free of any residue, hence confirming that the tip was clean before further cartilage testing.An a-priori power analysis was performed using G*Power software which specified a total of eight samples would be required to distinguish an effect size of 0.8 with α error probability of 0.05 and power of 0.95 across four groups of testing parameters."Statistical analysis of G’, G” and E, as well as the loss factor, were conducted using a repeated measures ANOVA in SPSS, specifically Mauchly's Test of Sphericity, after which a Bonferroni post-hoc test was performed if results were significant, producing pairwise comparisons.Individual sample means were analysed after each cycle of freezing, as well as the means of all samples combined, to give a whole specimen analysis.The overall mean G’, G”, E and loss factor for all 11 samples combined for the different cycles are presented in Fig. 3.Shear modulus decreased from 1.76 ± 0.78, 1.41 ± 0.77, 1.25 ± 0.54 to 1.21 ± 0.77 MPa) between fresh samples and samples tested after one, two and three freeze-thaw cycles respectively.Shear loss modulus increased from 0.42 ± 0.19 to 0.46 ± 0.18 MPa between fresh and one freeze-thaw cycle, but then decreased to 0.43 ± 0.15 and 0.39 ± 0.17 MPa following two and three freeze-thaw cycles respectively.Elastic Modulus were 5.13 ± 2.28, 4.11 ± 2.25, 3.64 ± 1.57 and 3.52 ± 2.24 MPa during fresh, one, two and three freeze-thaw cycles respectively.The mean and SD of the loss factor changed throughout each cycle from 0.31 ± 0.38, 0.58 ± 1.66, 0.41 ± 0.26 and 0.71 ± 1.40 when using a mean of all 11 samples during fresh, one, two and three freeze-thaw cycles respectively."Changes in the values for G’, G”, E and the loss factor, across freeze-thaw cycles were not found to be statistically significant.Numerical results for individual samples are tabulated in Tables 1–4.Repeated freeze-thaw cycles led to some significant differences in G’ and E across individual samples but no differences in G” or the loss factor.Bonferroni post-hoc pairwise comparisons showed between freeze-thaw cycle effects on the individual sample mean G’ and E were not statistically significant between fresh and one freeze-thaw cycle, one freeze-thaw and two freeze-thaw cycles, and two freeze-thaw and three freeze-thaw cycles.Further post-hoc pairwise comparison was not necessary for G” or the loss factor, as these were not statistically significant.A high degree of variability in each mechanical property was observed both within and between the 11 discrete samples analysed at each freeze-thaw cycle, as indicated by high standard deviations about the overall mean values and the substantial absolute ranges of individual sample means and coefficient of variation.For example, the E value in an individual sample in the same cycle of fresh testing varied by as much as 10.47 MPa equivalent to a change of up to 96.29% of the overall mean value on 
one occasion.Across the 11 samples tested, E varied by as much as 14.73 MPa or equivalent to a 188.89% change to the overall mean within the same cycle of freezing seen in Table 3.Inter-sample variation was such that in some instances individual samples exhibited changes in mechanical properties across freeze-thaw cycles that differed qualitatively from the overall mean trends.This study provides the first systematic investigation of the effects of multiple freeze-thaw cycles on the mechanical properties of articular cartilage.Szarko et al., compared the mechanical properties of canine femoral articular cartilage stored at −20 °C, −80 °C and snap frozen in liquid nitrogen using indentation techniques.They found that with rapid thawing and exposure to PBS, both −20 °C and −80 °C can be used as reliable preservation methods for one freeze-thaw cycle as this produced results consistent with those from fresh samples.However, snap freezing tissue can cause ice crystallisation to form on the sample and therefore compromises the integrity of the tissue.Further research also considered the effects of one freeze-thaw cycle at −80 °C on the mechanical properties of bovine femoral and tibial articular cartilage in comparison to fresh samples.Using a custom made indenter samples were exposed to PBS to maintain hydration and thawed at room temperature.No significant change in material properties was found with a tensile modulus of 4.1 ± 2.2 MPa for fresh samples and 4.5 ± 2.4 MPa for frozen samples.However, individual samples were randomly assigned to a fresh or frozen cohort and testing was not repeated on the same sample.Therefore results did not account for biological variability that may exist spatially within one specimen or cadaver.Wilusz et al. used two freeze thaw cycles at −20 °C of human femoral articular cartilage prior to atomic force microscopy-based indentation.Justification for using two freeze-thaw cycles was recommended by Athanasiou et al. 
who established this aspect of the protocol on anecdotal unpublished data.Samples were exposed to PBS to maintain hydration and results from healthy cartilage ECM presented an E of 491 kPa.However in this study, a comparison to fresh samples was not made therefore what effect two freeze-cycles had on the material properties is unknown.Our research study demonstrated that mean cartilage G’ and E for the joint overall showed a sharp decreasing trend after one cycle of freezing, although this reduction appeared to lessen following two and three freeze-thaw cycles, despite not reaching statistical significance.Interestingly G” and the loss factor showed no such trends and both increased and decreased during various cycles of freezing.The loss factor in particular showed high standard error mean in comparison to other parameters.When analysing the SD it appears that there is no consistent trend or change in G’ and E where values both increase and decrease in various cycles of freezing.With the exception of two outliers G” and the loss factor SD remains unchanged during all cycles of freezing.Systematic testing of articular cartilage across multiple freeze-thaw cycles in our study shows that samples can undergo three freezing cycles without statistically significant changes to material properties when handled and stored correctly.These results therefore provide some support for the use of freezing as a method of preservation of cartilage where material properties are required to remain unchanged for mechanical testing.However the authors note that a number of changes in individual mean material properties for the joint were observed here, and although these fell below thresholds of statistical significance in this study they may represent meaningful magnitudes in the context of other studies.For example, the overall mean E showed relatively large decreases with increasing number of freeze thaw cycles such that the values decreased by 1.02 MPa, 0.47 MPa and 0.12 MPa of the mean value compared to fresh samples.Such relative changes in magnitude may well be extremely important in the context of comparative studies such as comparison of material properties between cohorts of different age and/or disease status and computational modelling studies of joint biomechanics.Kleemann et al., researched the differences in cartilage material properties obtained from human tibial plateau samples and found that changes of as little as 0.1 MPa or 20% can be found between grade one and grade two osteoarthritic samples.Furthermore, in a human knee finite element model sensitivity analysis by Li et al. 
the material properties of cartilage were varied between 3.5 and 10 MPa, to understand the effect on joint contact stresses."Results showed that magnitude changes had substantial effects on the functional predictions of the model, specifically that E linearly increased with peak contact stresses and a Poisson's ratio increase significantly increased peak von Mises stress and hydrostatic pressure in the knee joint cartilage.Given the absolute and relative changes in overall material properties measured across freeze-thaw cycles, it may be preferable for experiments seeking to test multiple tissue types from the same cadaver to prioritise cartilage for fresh testing, particularly given that previous research has suggested that other joint tissues are relatively insensitive to freezing.For example, Jung et al., concluded that the human patella-tendon can be exposed to eight freeze-thaw cycles, without compromising mechanical properties; provided testing conditions and tissue handling are approached with great care.This protocol involved allowing samples to re-freeze for a minimum of 6 h and thaw at room temperature for 6 h with exposure to saline.Furthermore, a study has shown the human flexor digitorum superficialis and flexor pollicis longus can undergo three freeze-thaw cycles before the integrity of their material properties is compromised.In addition freeze-thawing over five times also results in decreased mechanical and structural behaviour.Other studies focusing on ligaments include Woo et al. who explored the mechanical properties of the rabbit medial collateral ligament following one prolonged freezing cycle and concluded that this has no effect when compared to fresh samples.Moon et al. also used the rabbit MCL to determine the effect when two freeze-thaw cycles and likewise concluded that no apparent changes to material properties occurred when compared to fresh samples.Therefore most published studies are in agreement that at least two freeze-cycles, under the correct handling and storage conditions, allow ligament and tendon samples to remain mechanically unchanged.The modulus values obtained within this study fall within the range of those reported in the literature for other mammalian femoral condylar articular cartilage.Shepherd and Seedhom and Wilusz et al. 
reported a range of E from 0.1 to 18.6 MPa for human femoral condyle articular cartilage, although Moore and Burris reported lower values of 0.62 ± 0.10 MPa for bovine stifle cartilage.In our study mean values for E lie between 0.56 and 7.62 MPa, falling within this range already reported; however in both the literature and the current study there is a high variability of modulus.More specifically, previous canine research has found an E of 0.12 ± 0.10 MPa, and 0.385–0.964 MPa when samples have undergone indentation testing following one freeze cycle.These values are generally lower than those reported in our study and have smaller absolute variability."Previous canine cartilage studies have reported CoV's of up to 23.61%, which although being quite considerable are much lower than the CofV's reported here up to 96.3% for G’ and 114.29% for G”.Although the current data is more variable than previous canine research, it should be noted that it is less variable than the human studies discussed above.Cartilage is a highly heterogeneous material and therefore some variability of modulus is widely expected and accepted; however differences seen in the current study as compared to other studies in the literature may be as a result of the frequency-dependent properties of cartilage.Higher frequencies have been shown to increase G’ and E; however G” remains unaffected.In our study, 110 Hz was selected for the testing because it is the resonant frequency of the indenter and thus most sensitive frequency for the surface detection.In other studies in the literature, a range of frequencies have been used including 0.5 Hz, 10 Hz and much higher frequencies up to 200 Hz and 250 Hz where dynamic nanoindentation and mechanical analysis methods were also utilised.Although high frequencies may account for increases in G’ when compared to other canine studies, the most important comparison is that seen between each freeze cycle, where frequency used remained standardised throughout testing cycles.Additional limitations to the current study which may also affect variability include indenting sites affected by preceding measurements; however it has been suggested that low load indentation has been shown to cause no visible deformation of samples.Although some variability may be expected from the nanoindentation technique used in the current study, we have found that it yields highly repeatable data on other compliant materials which have a more homogenous structure than cartilage e.g. 
on a type of ballistic gelatine the CoV for the elastic modulus was 3.3% following ten indentation tests.As the nanoindenter was unable to differentiate between cellular and non-cellular substance, the current study is subject to high variability in results depending on the exact material tested, limiting interpretation of changes to modulus.Other studies have attempted to differentiate the material properties of cartilage sub-components using AFM and found variation between E of the peri- and extra cellular matrix.However soft tissues are often dehydrated during AFM testing and maintaining hydration can be challenging.With these considerations in mind, future research could aim to accurately assess the effect of freezing on articular cartilage by first repeatedly indenting the same site of a fresh sample to fully understand the effect and variability of material properties seen in an identical position.Then secondly, indenting an identical position following multiple freeze-thaw cycles, aided by marking an area of the cartilage and noting at which exact position the sample was tested to understand the effect of freezing.In summary, the results of this study suggest that three freeze-thaw cycles do not have a statistically significant effect on the overall ‘whole-joint’ material properties of canine femoral condyle cartilage samples provided the correct handling, storage and hydration of the tissue are maintained throughout preparation and testing.However, relative changes in mean material properties are observed and the failure to reach thresholds for statistical significance is likely the product of high biological variability across the joint.Therefore the changes in material properties observed over multiple freeze-thaw cycles may be sufficient to significantly impact on certain comparative or functional studies, such as finite element modelling, where subtle changes in material properties can indeed modify the true behaviour of articular cartilage under mechanical stress.Changes in material properties reported here should be considered when planning experimental protocols, as they may be sufficient in magnitude to impact on clinical or scientific cartilage studies.There are no conflicts of interest to declare. | Tissue material properties are crucial to understanding their mechanical function, both in healthy and diseased states. However, in certain circumstances logistical limitations can prevent testing on fresh samples necessitating one or more freeze-thaw cycles. To date, the nature and extent to which the material properties of articular cartilage are altered by repetitive freezing have not been explored. Therefore, the aim of this study is to quantify how articular cartilage mechanical properties, measured by nanoindentation, are affected by multiple freeze-thaw cycles. Canine cartilage plugs (n = 11) from medial and lateral femoral condyles were submerged in phosphate buffered saline, stored at 3–5 °C and tested using nanoindentation within 12 h. Samples were then frozen at −20 °C and later thawed at 3–5 °C for 3 h before material properties were re-tested and samples re-frozen under the same conditions. This process was repeated for all 11 samples over three freeze-thaw cycles. 
Overall mean and standard deviation of shear storage modulus decreased from 1.76 ± 0.78 to 1.21 ± 0.77 MPa (p = 0.91), shear loss modulus from 0.42 ± 0.19 to 0.39 ± 0.17 MPa (p = 0.70) and elastic modulus from 5.13 ± 2.28 to 3.52 ± 2.24 MPa (p = 0.20) between fresh and three freeze-thaw cycles, respectively. The loss factor increased from 0.31 ± 0.38 to 0.71 ± 1.40 (p = 0.18) between fresh and three freeze-thaw cycles. Inter-sample variability spanned as much as 10.47 MPa across freezing cycles and this high level of biological variability across samples likely explains why overall mean “whole-joint” trends do not reach statistical significance across the storage conditions tested. As a result, multiple freeze-thaw cycles cannot be explicitly or statistically linked to mechanical changes within the cartilage. However, the changes in material properties observed herein may be sufficient in magnitude to impact on a variety of clinical and scientific studies of cartilage, and should be considered when planning experimental protocols. |
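For readers wanting to see how the moduli reported in the cartilage study above are typically derived: the CSM measurement yields a contact stiffness and a contact damping coefficient at the 110 Hz oscillation, from which the shear storage modulus (G'), shear loss modulus (G''), elastic modulus (E) and loss factor follow from the standard relations for a rigid flat-ended cylindrical punch. The Python sketch below illustrates that conversion; the punch radius (the 100 µm punch read as a diameter), the near-incompressible Poisson's ratio and the example stiffness and damping values are assumptions made for illustration and are not values taken from the study.

```python
import numpy as np

def flat_punch_moduli(contact_stiffness, contact_damping, frequency_hz,
                      punch_radius=50e-6, poissons_ratio=0.5):
    """Convert CSM contact stiffness (N/m) and contact damping (N*s/m) into
    G', G'', E and the loss factor for a rigid flat-ended cylindrical punch.
    The 50 um radius and Poisson's ratio are illustrative assumptions."""
    omega = 2.0 * np.pi * frequency_hz                     # rad/s
    geometry = (1.0 - poissons_ratio) / (4.0 * punch_radius)
    g_storage = contact_stiffness * geometry               # G'  (Pa)
    g_loss = omega * contact_damping * geometry            # G'' (Pa)
    e_modulus = 2.0 * (1.0 + poissons_ratio) * g_storage   # E   (Pa)
    return g_storage, g_loss, e_modulus, g_loss / g_storage

# Hypothetical contact values at the 110 Hz test frequency
gp, gpp, e, loss = flat_punch_moduli(contact_stiffness=700.0,
                                     contact_damping=0.24,
                                     frequency_hz=110.0)
print(f"G' = {gp / 1e6:.2f} MPa, G'' = {gpp / 1e6:.2f} MPa, "
      f"E = {e / 1e6:.2f} MPa, loss factor = {loss:.2f}")
```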
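The statistical comparison in the same study (a repeated-measures ANOVA across freeze-thaw cycles followed by Bonferroni-corrected pairwise comparisons) was run in SPSS; the sketch below shows an equivalent analysis in Python on simulated data with the same structure (11 samples × 4 conditions). It is an illustrative analogue only: it does not include Mauchly's sphericity test and the simulated modulus values are not the study's measurements.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Simulated long-format data mirroring the design: 11 samples, each with a
# mean elastic modulus (MPa) measured fresh (cycle 0) and after 1-3 cycles.
rng = np.random.default_rng(1)
df = pd.DataFrame([{"sample": s, "cycle": c,
                    "E": rng.normal(5.0 - 0.5 * c, 2.0)}
                   for s in range(11) for c in range(4)])

# Repeated-measures ANOVA with freeze-thaw cycle as the within-subject factor
print(AnovaRM(df, depvar="E", subject="sample", within=["cycle"]).fit())

# Bonferroni-corrected pairwise comparisons between consecutive cycles
pairs = [(0, 1), (1, 2), (2, 3)]
for a, b in pairs:
    t, p = ttest_rel(df.loc[df.cycle == a, "E"].to_numpy(),
                     df.loc[df.cycle == b, "E"].to_numpy())
    print(f"cycle {a} vs {b}: t = {t:.2f}, "
          f"Bonferroni p = {min(p * len(pairs), 1.0):.3f}")
```

With the pingouin package installed, its sphericity and rm_anova functions offer a closer analogue to the SPSS output, including the sphericity check.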
390 | How does mental health stigma get under the skin? Cross-sectional analysis using the Health Survey for England | Mental health disorders are now a leading cause of disability worldwide.Substantial inequalities exist in life expectancy for people with severe mental illnesses, quantified at 8.0 to 14.6 life years lost for men and 9.8 to 17.5 life years lost for women.Despite this, research that attempts to explain the premature and excess mortality and morbidity observed for those with mental disorders has received comparatively little attention, in contrast to other major risk factors such as diabetes and obesity.Stigma associated with mental illness is thought to be a key contributing factor.Stigma is a fundamental social determinant of health that leads to health inequalities, yet there is a lack of research relating to the role of stigma in patterning health.This may be partly due to the inconsistent definition of stigma, the difficulty in measuring the concept and the many circumstances in which stigma has been used.Stigma can be understood as applying to a wide range of circumstances not limited to mental illness, such as welfare receipt, HIV and AIDS, and lung cancer.Link and Phelan conceptualise stigma as the co-occurrence of several dimensions: labelling, stereotyping, separation, status loss, and discrimination.They further stress that for stigmatisation to happen, power must be exercised and that stigma can lead to the unequal distribution of a variety of life chances including employment, housing and health.Within mental health, several ways in which stigma can be expressed have been distinguished: public stigma; internalised or self-stigma; and structural stigma.Public stigma occurs when members of the general public endorse prejudice and discrimination against people with mental illness, such as believing people with mental disorders are highly dangerous.Self-stigma occurs when people with mental illness endorse and internalise these negative stereotypes, which can result in a loss of self-worth, shame, and lead individuals to give up on life goals, also known as the “why try” effect.However, self-stigma is not inevitable.In different situations people with mental illness may respond to stigma with low self-esteem and diminished self-efficacy, righteous anger, or indifference.Some people may find their identity empowers them and that they can use their anger to improve their own circumstances and help others.Indeed, research has demonstrated that public stigma and self-stigma are correlated, but not necessarily strongly associated.Structural, or institutional stigma, occurs when policies, rules or regulations within society intentionally marginalise the opportunities of those with mental disorders or produce unintended consequences that hinder their prospects, resources and wellbeing.An example being the chronic under-funding of mental health services.Given the lack of a consistent definition of stigma and its various forms, there is no consensus on how best to measure mental health stigma and numerous methods have been used.A body of research has focussed on measuring public stigma towards people with mental disorders and its evolution over time.Data spanning ten years from 1996 to 2006 from the United States has revealed no decrease in public stigma, measured by asking respondents, for example, how willing they would be to have a person with a mental illness work closely with them, or marry into the family.There was also a small increase in beliefs that people with 
schizophrenia would likely be violent towards others.This was supported by a study examining public attitudes across 8 years in Australia, where an increase in the perception that people with mental disorders are dangerous and unpredictable was observed.A systematic review on this topic supports these findings; despite improved population mental health literacy, the social rejection of people with mental disorders has remained pervasive over the last 20 years and negative stereotypes relating to the dangerousness of people with severe mental illness persists.In a study examining public attitudinal trends associated with the implementation of the Time to Change campaign to reduce mental health stigma and discrimination in England, Evans-Lacko, Henderson, and Thornicroft found little evidence for significant long-term improvements in knowledge and attitudes towards people with mental illness, or changes in reported behaviour from 2009 to 2012.However, there was some evidence to support improved intended behaviour, such as the intention to live, work and have a relationship with someone who has a mental illness.More recent evidence suggests progress has been made in reducing levels of public mental health stigma between 2009 and 2017 in England.Measures of public knowledge, attitudes, desire for social distance and reporting having contact with people with mental health problems have all shown improvements over time.Few studies have adopted a multilevel approach to mental health stigma.One study that used data from 14 European countries found that people with mental illness who resided in countries with less stigmatising attitudes had lower rates of self-stigma and perceived discrimination.Individuals who lived in countries where the public felt more comfortable interacting with people who had a mental illness also had lower levels of self-stigma and felt more empowered.In this study self-stigma was measured using the Internalised Stigma of Mental Illness Scale, which contains alienation, stereotype endorsement, perceived discrimination and social withdrawal subscales.Stigma has been associated with a range of outcomes amongst people with mental illness.A systematic review demonstrated a strong relationship between internalised stigma and poorer psychosocial outcomes, as well as psychiatric symptom severity and poorer adherence to treatment.Increased depressive symptoms and poorer quality of life are also related to internalised stigma amongst those with mental disorders.Stigma may also contribute towards suicidality and suicide rates, impede recovery from mental illness, and hamper efforts to prevent mental disorders.Holding stigmatising beliefs about people with mental disorders is also related to less active help-seeking behaviour for mental ill health.However, a key gap in the literature relates to the lack of research focusing on possible objective health outcomes associated with self-stigma.A difficulty of measuring the potential health effects of self-stigma is the lack of available indicators included in large scale health surveys.An exception is the Health Survey for England, which included the Community Attitudes Towards the Mentally Ill scale in 2014, developed to measure public attitudes towards people with mental illness.The scale has been used extensively to evaluate the Time to Change anti-stigma and discrimination campaign.One way to measure aspects of self-stigma and its potential effects on health is to assess the extent of stereotype endorsement using the CAMI scale amongst 
people with mental disorders and relate this to the range of health outcomes included in the HSE.To date, there have been no studies that have analysed the impact of mental health stigma on biological indicators of health, or biomarkers.A biomarker can be defined as “a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention”.This covers a variety of measures from pulse and blood pressure through to more complex laboratory tests of blood to assess levels of cholesterol or inflammatory markers.Examining biomarker data has several advantages.Biomarkers are not affected by reporting biases as is the case for self-reported outcomes.They can help to identify individuals at an increased risk of health problems before people are aware of problems themselves, and elucidate potential causal mechanisms at play between social exposures and disease.Given that self-stigma is associated with diminished self-esteem, poorer mental health and quality of life, it could be hypothesised to impact on biological indicators of health, particularly those related to cardiovascular and metabolic health.This has been demonstrated for other stressful experiences, such as informal caregiving, financial insecurity, threat of redundancy, and household debt.Potential stress associated with self-stigma, as well as perceived and anticipated discrimination, may operate directly on metabolic and cardiovascular health via chronic physiological stress responses or through poor health behaviours, such as a diet characterised by high sugar and fat, and a lack of physical activity.The process by which stigma may affect biological systems could be considered an example of embodiment.Embodiment is “a concept referring to how we literally incorporate, biologically, the material and social world in which we live, from conception to death”.It is a useful construct to theorise how social exposures ‘get under the skin’, become biologically embedded and ultimately influence health and health inequalities.Social epidemiologists have drawn on embodiment to examine the potential biological pathways through which socioeconomic inequalities in health may arise, such as via the inflammatory system and allostatic load.Allostatic load has been proposed as a measure of the overall cost of adapting to the environment and is usually operationalised as a composite measure including various physiological systems which represent physiological wear and tear.Markers included in allostatic load scores often comprise blood pressure, pulse rate, body mass index, and blood glucose, which are associated with metabolic and cardiovascular diseases, as well as all-cause mortality.The composite allostatic load scores have been found to predict mortality more accurately than the individual indicators themselves.In this study, mental health stigma is conceptualised as a cumulative social exposure which can become embodied to impact on cardiovascular and metabolic function amongst people with mental illness.The study aims to advance the evidence base on the relationship between mental health stigma and health in a general population sample.Previous research on the outcomes of mental health stigma has often used only self-reported psychological outcomes.To compare with previous research, measures of wellbeing and quality of life are also included.The research questions and specific hypotheses are detailed below:What is the extent of mental 
health stigma amongst those with and without mental disorders and are there differences between individuals with severe and common mental disorders?,Individuals with mental disorders are hypothesised to display less stigmatising attitudes compared to those with no experience of mental illness.Is mental health stigma associated with metabolic and cardiovascular biomarkers, wellbeing and quality of life and does any relationship differ between individuals with and without mental disorders?,Mental health stigma is not expected to relate to health and wellbeing in individuals with no diagnosed mental disorder.Individuals with mental disorders who hold stigmatising attitudes are hypothesised to have more adverse metabolic and cardiovascular biomarker profiles and poorer wellbeing and quality of life compared to those with no mental disorder and the associations may be stronger amongst those with more severe mental illness.Data were taken from the 2014 round of the Health Survey for England and are available via the UK Data Service.The HSE is a repeated cross-sectional survey, which has been conducted annually since 1991.The sampling is based on a multi-stage stratified random sample of individuals living in private households in England.In the 2014 survey, the sample included 8077 adults and 2003 children, with a household response rate of 62%.Each survey contains a range of health and sociodemographic related questions collected via face-to-face Computer Assisted Personal Interview and self-completion methods, as well as measurements taken by a nurse at a follow-up visit for consenting participants.In addition, each year contains different modules focusing on a specific topic.The 2014 survey included a self-completion module dedicated to mental health, which was asked of adults only during the nurse visit.Participants were included in this study if they completed the follow-up nurse visit and were aged 16 years and over.During the nurse visit participants were provided with a list of 17 mental health disorders and asked to select which ones they had ever been diagnosed with by a health professional, at any point in their life.Due to the low prevalence of specific disorders, they were grouped into common mental disorders, severe mental illnesses or other complex mental illness.Respondents may have reported having more than one disorder and there is significant overlap across categories.Participants were grouped into those who did not report a diagnosed mental disorder, those who reported a common mental disorder or other complex mental illness, and those who reported a severe mental illness.Individuals who reported having both a severe mental illness and a common mental disorder were classified as having a severe mental illness.Mental health stigma was measured using the Community Attitudes toward the Mentally Ill scale.The original CAMI questionnaire contained 40 statements relating to mental illnessNatCen Social Research, 2014b).The 2014 HSE contained a shortened 12-item scale containing items designed to measure mental health stigma and tolerance, which participants aged 16 + years were asked to self-complete.The 12-tem CAMI scale has been used in previous research evaluating the impact of the Time to Change social marketing campaign.The HSE team conducted factor analysis on the scale, which revealed a two-factor structure relating to prejudice and exclusion and tolerance and support for community care.Participants were asked to rate how much they agreed or disagreed with each statement on a 
5-point Likert Scale, which was scored as follows for positive statements: agree strongly, agree slightly, neither agree nor disagree, disagree slightly, disagree strongly.Participants were also given the option to answer “don’t know”, but these were excluded from the analysis.Negatively worded statements were reverse scored so that for each item, the mean scores ranged from 0 to 100, where a higher score corresponds to a more positive attitude.A composite score was then calculated for each factor which was derived from the mean of the six items relating to each factor.Participants were included in the composite score if they answered at least two of the six statements relating to each factor.Two binary variables were then derived distinguishing those who scored ≤ the 25th percentile on both CAMI scales as indicators of more stigmatising attitudes and those who scored > the 25th percentile as less stigmatising attitudes.Amongst people with mental illness, the former variables therefore represent a key aspect of self-stigma, the endorsement of negative stereotypes.Eight biomarkers were included as outcomes: glycated haemoglobin, total cholesterol, high-density lipoprotein cholesterol, systolic and diastolic blood pressure, resting pulse rate, body mass index and waist-hip ratio.Resting pulse rate, systolic and diastolic blood pressure are measures of cardiovascular function, whereas HbA1c, cholesterol, BMI and waist-hip ratio are indicators of metabolic function.Non-fasting blood samples were taken from participants at the time of the nurse visit and were sent to the labs at the Royal Victoria Hospital in Newcastle for analysis.Glycated haemoglobin, total cholesterol and HDL-cholesterol values were derived by procedures outlined elsewhere.Systolic and diastolic blood pressure measurements were also taken from participants during the nurse visit using an Omron HEM 907 blood pressure monitor after the participants had been sitting quietly for 5 minutes.Three measurements were taken and the mean value of the second and third measurements was used.As recommended, 10 mmHg and 5 mmHg were added to the systolic blood pressures and diastolic blood pressures of individuals who reported they had taken antihypertensive medications in the past seven days, respectively.Resting pulse rate was also recorded using the Omron HEM 907 three times, the first value was used due to the increase in pulse rate across the three measurements.In line with previous research, 1.18 mmol/L was added to total cholesterol if an individual reported taking statins, 4% was subtracted if they reported taking diuretics, 10% was added to HDL-cholesterol if they reported taking beta blockers and 1% was added to HbA1c if they reported taking insulin or any other anti-diabetic medications.Measurements of height, weight, waist and hip circumference were also taken from participants, enabling the calculation of BMI and waist-hip ratio.Additionally, for each of the eight biomarkers, participants were classified into sex-specific quartiles based on the distribution of scores.Individuals who fell into top quartile or bottom quartile were classed as ‘high risk’ and given a score of 1 and the remaining sample was given a score of 0.From that, a measure of allostatic load was calculated from the sum of each binary biomarker variable.This method used to derive the allostatic load score has been used in numerous previous studies.Individuals were included in the score if they had at least four complete biomarkers and excluded if they had 
missing values for more than four biomarkers.Sensitivity analyses excluding those missing more than four biomarkers did not affect the substantive results.Two measures of wellbeing and quality of life were included.Mental wellbeing was measured using the Warwick-Edinburgh Mental Well-Being Scale, in which participants are asked to tick the box that best describes their experience over the last two weeks on a scale from none of the time, rarely, some of the time, often, or all of the time.Quality of life was measured using the EuroQol-5D scale, which asseses five dimensions: mobility; ability to carry out usual activities; self-care; pain/discomfort; anxiety/depression.Participants are asked to rate whether they had ‘no problem’, ‘some problem’ or an ‘extreme problem’ with each dimension.The HSE team converted answers to a single utility value based on a British EQ-5D scoring algorithm and weighted according to the social preference of the UK population.For both WEMWBS and EQ-5D higher scores reflected more positive outcomes.Age in years, gender, ethnicity, marital/partnership status, education level and social class were included as potential confounding variables.Highest education level was categorised as degree level or equivalent; A Level or equivalent; General Certificate of Secondary Education or equivalent; no qualifications.Social class was categorised as managerial and professional occupations; intermediate occupations; or routine and manual occupations, according to the National Statistics Socio-economic Classification three-category social class classification scheme.Firstly, descriptive statistics of the key variables were examined in this cross-sectional analysis.Relevant weights were applied to account for non-response and selection into the different elements of the survey.Glycated haemoglobin values were logged due to their skewed distribution and when results from these models are presented their exponentiated coefficients are shown to help with the interpretation.First, the association between mental disorders and mental health stigma was assessed using logistic regression, adjusted for the covariates: age, gender, education level, social class, ethnicity and marital status.Next, the association between mental disorder, stigma and each biomarker and wellbeing outcome was examined using linear regression, adjusted for the covariates.Six mental disorder/stigma groups were derived: individuals with no mental disorder/less stigmatising attitudes; no mental disorder/more stigmatising attitudes; common mental disorder/less stigmatising attitudes; common mental disorder/more stigmatising attitudes; severe mental disorder/less stigmatising attitudes; severe mental disorder/more stigmatising attitudes.The standardised beta coefficients from these models were also calculated and graphed to help interpret the pattern of results, effect sizes and direction of associations.All statistical models adjusted for household clustering.Missing data for the independent variables and covariates were excluded from the analysis.Each statistical model may contain a different number of individuals as participants who had complete data for at least one outcome variable were included.Statistical analyses were conducted using Stata/MP 15.1.A total of 4967 individuals were included in the analysis sample, 51.5% were female.The mean age of participants was 46.7.73.2% of the sample reported having no diagnosed mental disorder, 22.3% a common mental disorder and 4.5% a severe mental illness.32.2% of the 
sample exhibited more stigmatising attitudes towards mental health according to the tolerance and support for community care measure, compared to 25.5% using the prejudice and exclusion measure.Descriptive statistics for the outcome variables are found in Table A1.Individuals with experience of a common mental disorder or severe mental illness were less likely to exhibit stigmatising attitudes compared to those with no mental disorder.Using the measure of tolerance and support for community care, individuals with a common mental disorder were slightly less likely to have stigmatising attitudes than those with a severe mental illness, but using the measure of prejudice and exclusion results were equivalent.Women were also less likely to hold stigmatising attitudes compared to men, as well as those with a more advantaged socioeconomic position, according to both education level and social class.Non-white ethnic groups exhibited more stigmatising attitudes particularly in relation to the measure of prejudice and exclusion.A mixed pattern of results was found for the metabolic and cardiovascular biomarkers.Inconclusive results were found for the cardiovascular biomarkers, systolic and diastolic blood pressure.Individuals with severe mental illness generally displayed higher resting pulse rates compared to those with a common mental disorder, and those with a common mental disorder exhibited higher values compared to those with no disorder.However, there were no notable differences dependent on the degree of mental health stigma possessed.Similar results were found for the metabolic biomarkers.For waist-hip ratio, glycated haemoglobin, and cholesterol, in general, more adverse biomarker levels were found with increased severity of mental illness, but little consistent differences were observed between stigma groups.Amongst those with severe mental illness, those with more stigmatising attitudes according to the measure of tolerance and support for community care exhibited higher levels of glycated haemoglobin, compared to those with less stigmatising attitudes, but this was not found for the other measure of stigma and differences were not statistically significant.Likewise, for waist-hip ratio and BMI, higher values were observed for those with severe mental illness who displayed more stigmatising attitudes, compared to those with less stigmatising attitudes, but this was only observed for the measure of stigma related to prejudice and exclusion.No clear pattern of results was found for allostatic load.Individuals with common mental disorders generally exhibited higher allostatic load scores compared to those with no history of mental disorder, and those with severe mental illness had higher scores than those with a common mental disorder.For example, amongst individuals displaying less stigmatising attitudes according to the measure of stigma relating to tolerance and support for community care, those with a common mental disorder had higher allostatic load scores compared to those with no disorder, and those with a severe mental illness had even higher scores.However, no consistent differences were apparent between those with higher and lower levels of mental health stigma.A clearer pattern of results was found for the measures of wellbeing and quality of life.Compared to individuals with less stigmatising attitudes, those with more stigmatising attitudes generally exhibited poorer scores across all mental disorder/stigma groups.Two exceptions were found amongst those with severe mental 
illness.Those who exhibited more stigmatising attitudes had slightly better quality of life when stigma was measured using the tolerance and support for community care indicator and slightly better wellbeing when using the prejudice and exclusion indicator, compared to those exhibiting less stigmatising attitudes.It was interesting to note that, when using the tolerance and support for community care indicator of stigma, worse wellbeing was apparent among those with more stigmatising attitudes compared to those with less stigmatising attitudes even amongst those with no experience of a mental disorder.This study is the first to investigate the association between mental health stigma and a range of metabolic and cardiovascular biomarkers, alongside measures of wellbeing and quality of life in a general population sample.Less stigmatising attitudes were found amongst those with experience of mental ill health.A potential negative influence of mental health stigma was suggested for the measures of wellbeing and quality of life.Even for those with no mental disorder, individuals with more stigmatising attitudes had lower wellbeing compared to those with more positive attitudes and there was some indication that wellbeing and quality of life were worse amongst those with more stigmatising attitudes in each mental disorder group.The results for the metabolic and cardiovascular biomarkers were less convincing and often differed depending on the measure of stigma being used.There was evidence that those with more severe mental illness had more adverse levels of several biomarkers compared to those with a common mental disorder, and those with a common mental disorder generally had a better biomarker profile compared to those with no history of mental disorder.However, results were inconsistent for any additional influence of mental health stigma.Similarly, findings for allostatic load were mixed with regards to mental health stigma, but individuals with experience of a mental disorder had higher scores compared to those with no history.Previous research has demonstrated that mental health stigma is related to wellbeing, life satisfaction and quality of life among people with mental illness.Our findings add to and expand on this literature, suggesting that more stigmatising attitudes relate to poorer wellbeing and quality of life amongst those with mental disorders and associations may be stronger amongst those with severe mental illness.A novel finding of this paper which has not been examined before relates to the lower levels of wellbeing amongst those with no history of mental disorder who hold more stigmatising attitudes, compared to people who hold more positive attitudes.One contributing factor that merits further research may be mental health literacy, which has been shown to relate to higher wellbeing, and may influence help-seeking behaviour and positive coping skills.A key strength of this study was the use of nationally representative data for England, obtained via the Health Survey for England.The analysis also used several different measures of metabolic and cardiovascular function and two measures of wellbeing and quality of life, as well as two indicators of mental health stigma, which are widely used and validated measures.The definition of mental health disorders also focused on lifetime diagnosed conditions, which is an improvement on some studies which often define mental ill-health based on a cut-off point using a scale measuring recent psychological distress, such as the 
General Health Questionnaire.The analysis also included a range of potential confounding factors, although the possibility of unmeasured confounding cannot be eliminated.This study has a few limitations that should be acknowledged.The definition of a mental disorder was based on self-reported diagnosis, which itself could be affected by stigma.Stigma could influence the disclosure of a diagnosed mental disorder within the survey and some individuals may have experienced mental ill health, but not sought a diagnosis.Although a validated measure of mental health stigma was used, answers to the questionnaire may be subject to social desirability bias.The CAMI also does not measure personal experience of self-stigma, such as the experience of shame and discrimination related to mental health disorders, which has been found to strongly associate with comorbid depression and anxiety.It may be possible that different results may be obtained depending on the measure of stigma used.The measure of mental health stigma used in this study is intended to measure public stigma and self-stigma was implied via low CAMI scores amongst those who experienced a mental disorder.It does not measure the internalisation of stigmatising beliefs; it is possible to hold stigmatising attitudes towards other people with mental disorders but not apply or internalise them personally.People at the most severe end of mental illness may be less likely to participate in health surveys and stigma may affect participation in surveys, the choice to complete the mental health questionnaires, and the answers provided.In addition, some of the included analyses comparing differences between severe and common mental disorders were underpowered due to the small number of people with a severe mental illness.Reverse causation also cannot be ruled out especially for the measures of wellbeing as those with poorer mental health may attribute this to stigma.The cross-sectional design of the study also precludes any inference of potential causal effects; longitudinal data are needed to investigate the research questions in more depth.At present, there are a lack of longitudinal data collected on mental health stigma and even fewer which also collect biomarker health data.The measure of allostatic load used in this study also only covered metabolic and cardiovascular function.This study highlights the need for more research into the potential relationships between stigma, health and wellbeing.It is likely that multiple stigma processes operate in a complex manner.This includes stigma related to mental health, but also associated with other minority and disadvantaged statuses related to, for example, gender, ethnicity, sexual orientation, socioeconomic position, physical illness and disabilities.Therefore, future research would benefit from taking an intersectional approach to stigma to analyse how different stigmatised statuses interact to influence health and health inequalities.There is also a need to consider stigma at multiple levels and how these might interact to influence individual and population health.Longitudinal research that adopts a life course perspective and examines the evolution of mental health stigma through time within the same individuals to investigate whether there are particular critical periods in the life course that matter more for future health and social outcomes would also be valuable.No ethical approval was required as the study is an analysis of secondary data.Ethical approval for the Health Survey for 
England was obtained by the survey team. | Despite increased awareness of mental health problems, stigma persists. Little research has examined potential health and wellbeing outcomes associated with stigma. The aim of this study was to investigate relationships between mental health stigma, metabolic and cardiovascular biomarkers, as well as wellbeing and quality of life among people with no mental disorder, common mental disorders and severe mental illness. Data were taken from adults aged 16 + years participating in the Health Survey for England in 2014 (N = 5491). Mental health stigma was measured using the 12-item Community Attitudes towards the Mentally Ill (CAMI) scale, intended to measure attitudes around prejudice and exclusion, and tolerance and support for community care. Individuals were divided into six groups based on their mental health (no mental disorder, common mental disorder, severe mental illness) and whether they exhibited more (≤25th percentile) or less (>25th percentile) stigmatising attitudes. Metabolic and cardiovascular biomarker outcomes included systolic and diastolic blood pressure; total cholesterol; high-density lipoprotein (HDL) cholesterol; glycated haemoglobin, body mass index (BMI), waist-hip ratio and resting pulse rate. Biomarkers were analysed individually and as an allostatic load score. Wellbeing was measured using Warwick-Edinburgh Mental Wellbeing Scale (WEMWBS) and quality of life via Euro-QoL-5D (EQ-5D). Linear regression models were calculated adjusted for confounders. Compared to individuals with less stigmatising attitudes, results suggested that those with more negative attitudes exhibited poorer wellbeing and quality of life across all mental disorder/stigma groups, including those with no mental disorder (WEMWBS (range 14–70): b = -1.384, 95% CI: -2.107 to -0.661). People with severe mental illness generally had unhealthier biomarker profiles and allostatic load scores, but results were inconsistent for any additional influence of mental health stigma. Reducing stigma may be beneficial for population wellbeing, but further research is needed to clarify whether stigma contributes to adverse biomarkers amongst people with mental illness. |
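The CAMI-based stigma measure described in the stigma study above is essentially a small scoring algorithm: reverse-score negatively worded items, rescale to 0–100, average the six items of each factor when at least two are answered, and flag composite scores at or below the 25th percentile as more stigmatising. A minimal Python sketch is given below; the item names, the raw 1–5 coding, the 0/25/50/75/100 mapping and the set of negatively worded items are assumptions for illustration, not the HSE 2014 scoring specification.

```python
import numpy as np
import pandas as pd

# Assumed raw coding: 1 = agree strongly ... 5 = disagree strongly, with NaN
# for "don't know". The score mapping and the negatively worded item set are
# illustrative assumptions.
LIKERT_TO_SCORE = {1: 100, 2: 75, 3: 50, 4: 25, 5: 0}   # positively worded items
NEGATIVE_ITEMS = {"pe_2", "pe_4", "ts_3"}                # hypothetical item names

def factor_score(responses, items):
    """Mean 0-100 score over the six items of one CAMI factor, requiring at
    least two answered items; higher scores = more positive attitudes."""
    scored = pd.DataFrame(index=responses.index)
    for item in items:
        s = responses[item].map(LIKERT_TO_SCORE)
        if item in NEGATIVE_ITEMS:
            s = 100 - s                                  # reverse-score
        scored[item] = s
    answered = scored.notna().sum(axis=1)
    return scored.mean(axis=1, skipna=True).where(answered >= 2)

def more_stigmatising(composite):
    """1 = at or below the 25th percentile (more stigmatising), else 0."""
    return (composite <= composite.quantile(0.25)).astype(float).where(
        composite.notna())

# Simulated responses: pe_* = prejudice/exclusion, ts_* = tolerance/support
rng = np.random.default_rng(0)
raw = pd.DataFrame(rng.integers(1, 6, size=(100, 12)),
                   columns=[f"pe_{i}" for i in range(1, 7)]
                           + [f"ts_{i}" for i in range(1, 7)])
pe = factor_score(raw, [f"pe_{i}" for i in range(1, 7)])
print(more_stigmatising(pe).value_counts(dropna=False))
```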
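The allostatic load score used in the same study is likewise algorithmic: each of the eight biomarkers is cut at sex-specific quartiles, the 'high-risk' quartile is coded 1, and the codes are summed for participants with at least four complete biomarkers. The sketch below illustrates one way to implement this; the assignment of which markers are high-risk at the top versus the bottom quartile (HDL-cholesterol at the bottom, all others at the top) follows common allostatic load practice and is an assumption here, as are the simulated biomarker values, and the medication adjustments described in the text are not shown.

```python
import numpy as np
import pandas as pd

TOP_RISK = ["hba1c", "total_chol", "sbp", "dbp", "pulse", "bmi", "whr"]
BOTTOM_RISK = ["hdl"]

def allostatic_load(frame, min_complete=4):
    """Sum of sex-specific quartile-based risk indicators over the eight
    biomarkers; NaN when fewer than min_complete biomarkers are observed."""
    risk = pd.DataFrame(index=frame.index)
    for marker in TOP_RISK + BOTTOM_RISK:
        q25 = frame.groupby("sex")[marker].transform(lambda s: s.quantile(0.25))
        q75 = frame.groupby("sex")[marker].transform(lambda s: s.quantile(0.75))
        flag = frame[marker] >= q75 if marker in TOP_RISK else frame[marker] <= q25
        risk[marker] = flag.astype(float).where(frame[marker].notna())
    complete = risk.notna().sum(axis=1)
    return risk.sum(axis=1, skipna=True).where(complete >= min_complete)

# Simulated biomarker table purely so the example runs
rng = np.random.default_rng(7)
n = 200
biomarkers = pd.DataFrame({
    "sex": rng.choice(["male", "female"], n),
    "hba1c": rng.normal(38, 6, n),    "total_chol": rng.normal(5.2, 1.0, n),
    "hdl": rng.normal(1.5, 0.4, n),   "sbp": rng.normal(125, 15, n),
    "dbp": rng.normal(75, 10, n),     "pulse": rng.normal(68, 10, n),
    "bmi": rng.normal(27, 5, n),      "whr": rng.normal(0.9, 0.08, n),
})
print(allostatic_load(biomarkers).describe())
```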
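Finally, the main models of the stigma study (linear regressions of each biomarker or wellbeing outcome on the six mental disorder/stigma groups, adjusted for covariates, weighted, and with standard errors adjusted for household clustering) could be specified along the following lines. This is a sketch of the modelling strategy only: all variable names (group, hh_id, wt_nurse, and so on) are placeholders rather than actual HSE 2014 variable names, the data are simulated, and complete cases are assumed so that the cluster identifiers align with the model frame.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400
groups = ["none_less", "none_more", "cmd_less", "cmd_more", "smi_less", "smi_more"]
hse = pd.DataFrame({
    "group": rng.choice(groups, n),
    "age": rng.integers(16, 90, n),
    "sex": rng.choice(["male", "female"], n),
    "education": rng.choice(["degree", "a_level", "gcse", "none"], n),
    "social_class": rng.choice(["managerial", "intermediate", "routine"], n),
    "ethnicity": rng.choice(["white", "non_white"], n),
    "marital_status": rng.choice(["partnered", "single"], n),
    "hh_id": rng.integers(0, 250, n),       # household identifier for clustering
    "wt_nurse": rng.uniform(0.5, 2.0, n),   # nurse-visit weight
    "wemwbs": rng.normal(50, 9, n),         # example outcome (wellbeing score)
})

# Six disorder/stigma groups entered as a categorical predictor, with the
# "no disorder / less stigmatising" group as the reference category
formula = ("wemwbs ~ C(group, Treatment(reference='none_less')) + age"
           " + C(sex) + C(education) + C(social_class)"
           " + C(ethnicity) + C(marital_status)")
fit = smf.wls(formula, data=hse, weights=hse["wt_nurse"]).fit(
    cov_type="cluster", cov_kwds={"groups": hse["hh_id"]})
print(fit.params.filter(like="group"))
```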
391 | Varieties of coal-fired power phase-out across Europe | To prevent the damages resulting from climate change, governments around the world have committed themselves to an energy transition that will require them to significantly limit the amount of greenhouse gases in the years to come.This energy transition necessitates the deployment of two related policies: the adoption of new, less carbon-based technologies that replace the old technologies as well as phasing out the use of fossil fuels for generating electricity.This paper addresses how countries can phase-out coal more rapidly through analyzing which institutions present barriers to energy transition by hampering a coal-fired power plant phase-out.Many countries by now have policies increasing the share of less carbon-based technology in electricity generation.Yet, this does not automatically imply that the amount of carbon-based electricity generation has decreased to the corresponding extent.Germany provides a good example.Between 1990 and 2015 the amount of electricity generated from renewables grew to 171 TWh annually.This however was not accompanied by a concomitant decrease in the use of coal; the use of coal only went down with an amount of 38 TWh annually between 1990 and 2017.1,The relatively poor decrease in the use of coal in Germany is a typical example of the phenomenon of “carbon lock-in", a self-perpetuating inability to change from existing carbon-intensive activities and technologies to less carbon-based activities and technologies in time to prohibit large scale damage from climate change.In this paper we seek to shed more light on these lock-in mechanisms, by focusing upon the determinants of institutional lock-in, a form of lock-in which arises from “conscious efforts by powerful social, economic and political actors to reinforce a status quo trajectory that favors their interests against impending change”.Our analysis draws upon the explanatory framework of historical institutionalism and that of the Varieties of Capitalism literature in particular.We argue that institutional carbon lock-in tends to be much higher in so-called co-ordinated market economies.In CMEs it is difficult to quickly phase out existing carbon intensive forms of electricity generation because of institutionalized employment protection, government ownership and the room that consensual processes leave for key stakeholders to delay or block political decisions.Such institutions are much less supported in liberal market economies, where ownership of energy supply is much more in the hands of the private sector and governments leave it to market parties to choose the most efficient source of electricity supply.Additionally, the prevalence of majoritarian instead of consensual policy-making constellations in LMEs reduces the ability of interested parties to block policy-changes using institutional veto-points.We investigate the effect of VoC type through a qualitative comparative process tracing of the phasing out of coal between 1990 and 2017 in four European countries: Germany, Poland, Spain and the UK.In section 2 we show how and why the VoC framework provides us with a better understanding of the institutional dimension of carbon lock-in.We argue that a proper understanding of lock-in processes for coal require a distinction between the extent to which countries are involved in coal mining and the extent to which coal is used as an energy source.After outlining the methodological set-up in section 3 we show in section 4 how in the CME 
countries the phase out of coal-fired power proceeded much slower compared to what happened in the LME setting of the UK.We also show how governments in the CME countries sought to prop up uncompetitive domestic coal mining through many decades of cross-subsidizing and how they repeatedly sought exemptions to EU regulations aiming to decarbonize the EUs energy provision.After bringing together the results from the four cases in section 5, we draw conclusions and discuss policy implications in section 6.The research paradigm that has been coined “Varieties of Capitalism” advances a relational view of actors in the political economy by analysing the way labour interests, firms and government interact.In so-called “Coordinated Market Economies” markets are regulated to a considerable extent via formal institutions.CMEs provide for a cooperative infrastructure that allows for deliberation, information-sharing, the making of joint agreements, monitoring and sanctioning between firms, employees and the government.Because in CMEs trade unions and employment protection are relatively strong, labour interests exert considerable influence on the shape of these agreements.Ownership of firms in CMEs is less often in the hands of shareholders.Where ownership of utilities is in the hands of governments, this make governments more directly responsible and accountable for operating decisions.As a result policy choices are susceptible to political influence of stakeholders, who may have veto player power to avoid policy changes that may be harmful to their interests.Such institutional ‘constellations’ may make it difficult to change the status quo and may slow down the process of adopting policy changes that are necessary in the light of new policy challenges, such as climate change.In Liberal Market Economies coordination takes place primarily via market mechanisms, making it less feasible for governments and labour interests to reach long term agreements through collective bargaining.Equilibrium outcomes are determined primarily by relative prices and marginalist considerations, coordinated mainly through competitive markets."Trade unions are relatively weak and citizen's employment protection is relatively low, making labour markets comparatively fluid.Firms are owned more often through dispersed and private shareholding via stock markets.Through the dynamics of stock market value, management is incentivised to focus on current profitability and short term returns.Empirical studies have confirmed the above distinctions.There is a broad consensus that the UK fits the LME archetype and Germany fits the CME archetype, while data confirms that countries group around these archetypes for relevant socio-economic parameters like employment protection and ownership of firms.If we apply the VoC-framework to climate and energy policy, we should expect that the varieties are visible in the organization of national energy markets, and that countries’ approach to the phase-out of coal mining and CFPP differs according to their VoC-type.Where in LMEs we expect that privatised ownership, stock markets and shareholders play a dominant role in firm activities, in CMEs we expect a stronger involvement of governments and labour interests in decision-making concerning electricity supply.Several authors have shown how the type of VoC indeed affects the way countries are able to innovate through the introduction of low-carbon technology.Mikler and Harrison show that CMEs support incremental technological innovation with a 
long term focus and analyse this as a stimulus for the development and deployment of renewable energy technology.Lachapelle and Paterson perform a broader small-N quantitative study of the impact that variety in institutional constellations has on national climate policy.They find that government intervention in markets, a democratic regime, parliamentary system and proportional representation positively affect the presence of climate policies like regulations, incentives, carbon prices, voluntary agreements and R&D.Ćetković and Buzogány apply VoC in a qualitative, comparative study of the deployment of renewable energy technology in Germany, the UK and four nations in East Central Europe."They find that Germany's CME provided the best conditions for developing innovative mechanical and electrical products and facilitated Germany's comparatively strong growth in renewable energy and related technology.While the studies above show us how VoC type affects the adoption of low-carbon technology, we should expect these institutional constellations to affect the process of the phasing-out of high carbon technology as well.Fossil fuel resources and assets that are created to exploit these resources, together with their end use, lead to “carbon lock-in”, a self-reinforcing inertia in the energy system.Seto et al distinguish three distinct types of carbon lock-in: infrastructural-technical, behavioural and institutional.Infrastructural-technical lock-in here is caused by the fact that power plants are stranded assets for which it is economically disadvantageous to write them off before their end of term."Behavioural lock-in refers to the way established modes of energy consumption and use hamper the adoption of alternative energy sources, for example through people's habits of cooking on gas-stoves.Institutional lock-in refers to the inertia that results from the way stakeholders that benefit from the status quo successfully use governance structures to maintain existing forms of electricity generation.As Seto et al note, through effectively mobilizing their interests politically these institutional lock-ins can reinforce and strengthen the other two types of lock-ins.Coal traditionally was the backbone of the energy system and therefore many European countries have a history of many decades or even more than a century of major investments in coal mines, coal shipping and transfer, and coal-fired power plants.Generally, the domestic presence of fossil fuel resources leads to considerable investments in related infrastructure and assets over time."Following Seto et al's terminology we coin the term “coal lock-in” to describe the degree to which a society is locked-in on investments, resources, assets and activities related to coal.In many European countries the activities, resources and assets related to coal mining and electricity generation fuelled by coal were historically ‘vertically’ integrated with electricity transmission and distribution, trade and supply.The integrated nature of coal mining and coal-fired power aligns with the fact that historically the use of coal as an energy source was related to the ability to produce coal domestically.For example in 1991, Poland featured a 116% coal self-sufficiency and 78% share of coal in electricity production, whereas both the UK and Germany had high coal self-sufficiency and had about a third of primary energy use from coal."A first indication of lock-in processes at work is the fact that many countries in Western Europe kept their coal mines open 
despite the fact that coal mining became uncompetitive since the late 1950's, when the price of imported non-domestic coal fell drastically.Open-cast coal mines in Colombia, Indonesia, Venezuela, Australia and the US were producing coal that was up to 20 times less expensive to operate than the deep deposit production sites or “pit mines” in Western-Europe.More recently, coal mining in Poland has become internationally uncompetitive as well.Particularly if domestic coal activities are uncompetitive, then degrees of strategic, non-market coordination become relevant to a study of national coal-fired power phase-out.We can now formulate our hypothesis on how the institutional constellation and national coal resources determine coal lock-in as well as the climate and energy policy in a nation, shaping its energy transition to a sustainable energy system.In an LME we expect that availability of domestic coal leads to the domestic use of that coal in CFPP as long as incentives from market prices and climate policies support that choice.If the use of coal or domestic coal becomes more expensive than alternatives, we expect that through market coordination in a LME the electricity supply industry opts for cheaper alternatives.2,In general, in LMEs markets are more competitive, fluid and dynamic, and there is no a priori reason to expect otherwise for the energy industry.In CMEs by contrast, market forces and climate policy are not the only concerns that drive decision-making surrounding the use of coal as an energy source.We expect that in CMEs governments will own coal mines or CFPP more often than in LMEs.This makes decisions to phase out coal-related activities essentially public decisions, which will be taken in political arenas and will involve a much wider range of considerations than competitiveness alone.A significant consideration concerns the protection of labour interests, those in coal mining in particular.Since CMEs typically display strategic interaction rather than market coordination, in CMEs we would expect labour unions and regional governments to be able to effectively slow down the phasing out of coal as an energy source in CMEs through strategic interaction.The ability to do so in CMEs is reinforced by the fact that politically these systems are usually of the consensual type, while LMEs are usually more majoritarian in nature.Consensus systems give leverage to a wide range of players through coalitions and have many “veto players”), while majoritarian systems, like the UK, have few; once a majority is in favour of a policy, it will be enforced.Taken together, the room for veto play from governments and unions, the inclination towards subsidies and the resistance to liberalisation in CMEs would mean that the phase-out of coal-fired power in CMEs would take longer than in LMEs, given a sufficiently comparable material coal lock-in.To test our hypothesis it is necessary to engage in a comparative, longitudinal analysis of the phasing out of coal in CME and LME countries that have exhibited a relatively high and comparable dependence on coal, both in terms of coal mining and in terms of using coal as an energy-source.Fig. 
1 compares several European countries regarding dependence on coal at our starting point 1990 and shows that four countries feature such a combination: Poland, the UK, Germany and Spain.In terms of the type of the VoC classification for these countries the UK ranks as a LME, while the other three countries are CMEs.3,In section 4 we provide a country by country historic process tracing of the phasing out of coal in these four countries.Our analysis describes the historical trajectory of the use of coal as an energy resource as well as developments in coal mining.In the UK, our LME case, we expect a relatively fast phase-out of coal, as dictated by market forces and climate and energy policies.In the other three CME countries we expect a much slower coal phase-out, because stakeholders use the consensual political infrastructure to influence policy-making.First, where coal-related activities are uncompetitive, stakeholders will successfully argue for subsidies in order to maintain jobs and avoid loss of local dividends.We expect in CMEs a diverse set of domestic policies being rolled out in order to keep uncompetitive coal-mining afloat as well as the use of domestic coal for electricity generation.Secondly, we expect that in CMEs climate and energy policies that are imposed through EU regulations will be met with more resistance and result in attempts to delay their implementation or receive opt-outs or temporary derogations.The relatively greater resistance of CMEs to implementing various EU schemes is a second way to demonstrate the impact of VoC type on decarbonization.Three sets of EU policies are in particular relevant here.The EU Emissions Trading System, the Large Combustion Plant Directive and the Industrial Emissions Directive.The ETS is basically a market-fixing approach to climate policy, that attributes a price to the externalised cost associated with CO2, claiming to create a “level playing field”4.A cap was set on the total amount of greenhouse gases that can be emitted by the over 11.000 installations covered by the system, from industries causing 45% of all emissions in the EU, including CFPP.The cap is reduced over time so that the total of CO2-emissions would fall.Within the cap, companies receive or buy CO2-emission allowances which they can trade with one another as needed.The LCPD, introduced in 2008, is an air-quality directive which limits air pollution and directly affects CFPP.It has been asserted that between 2008 and 2015 the LCPD was related to the closure of 35 GW of CFPP capacity.It is difficult to assess whether or not these CFPP would have been closed anyway for age or other economic reasons."In 2016, two-thirds of Europe's CFPP are over 30 years old, with about 10 years to go.Thirdly, the 2010 Industrial Emissions Directive tightens air pollution rules, focusing on nitrous oxides from a “the polluter pays” perspective, which increases the costs for running CFPP.It basically leaves Member States the choice to either modernize or to close the energy unit).All of these measures can be seen as additional challenges to the operation of CFPPs.Accordingly we expect CMEs to exhibit greater resistance towards implementing these directives compared to LMEs.Even though in 1990 Poland, Spain and certainly Germany and the UK featured similar degrees of coal lock-in, their progress from 1990 to 2017 in escaping coal lock-in has been very different.The UK has made considerable progress, more than Spain and Germany, while Poland has made hardly any progress.As a first step 
in the analysis, Fig. 3 provides a comparative overview of the absolute use of coal5 as a fuel for electricity generation.6,Coal use in all nations shows a degree of impact from the economic depression of 2008.Poland shows no decrease over the 27 year period in scope.The use of coal in Germany and Spain decreases slowly, but actually increased starting with the coal mini-boom of 2010-2012, until 2015.Considering intensified pressure for climate action, the EU ETS, the LCPD and the IED, these facts would seem unexpected.By contrast, in the UK the use of coal declined sharply after 2012.Fig. 4 shows an overall decrease of domestic coal mining in all four nations.Increasingly larger quantities of cheaper imported coal have fuelled CFPP.How can we account for these different trajectories and what has been the role of institutional lock-in in these processes?,In the next sections we investigate case by case how the institutional constellations shaped the policies and events which affected the use of coal and coal-fired power.For each country we first describe the institutional constellation surrounding coal.We then trace the process of phasing out coal mining and the use of coal as an energy source and the political and market dynamics that surrounded this.Also we discuss how respective governments dealt with implementing EU regulations that affected the coal sector.Germany is a political economy where government, firms and unions coordinate comparatively many actions through strategic interaction and non-market relations.Privatisation and liberalisation of the energy sector are implemented in moderation.In 1990, after the reunification, the German electricity supply industry featured three types of firms operating at national, regional and local levels, under a mixture of municipal and private ownership.Also, vertical integration in the electricity sector is common.7,German coal mining is located mainly in the region of North-Rhine Westphalia, one of the economically and politically most powerful Länder in the Federal Republic, and Saarland.The underlying relation between government, firms and region is supported by the German political and social system.Typically German mineworkers’ leaders are members of the Social Democrat Party.They regularly became members of the Federal and State parliaments, even as energy spokesmen for the SPD.This allowed affected employees direct political influence or “veto play”.The powerful and influential trade union, “I.G. Bergbau und Chemie”, represented the coal miners and was closely affiliated with the SPD.Headquarters, history and partial government ownership of both E.On and RWE are situated in North Rhine-Westphalia."Regional municipalities are RWE's single largest shareholder, owning 23% or more shares in RWE for decades, 14% through RWEB GmbH, while 68% is dispersed.RWE tops the ranking of CO2-emitters in the EU ETS companies database with 7% of 2005-2012 EU ETS emissions, E.On is second with 5%.Both companies were strong supporters of the EU ETS and used the EU ETS mechanism to defend their existing CFPP instead of investing in renewables, particularly RWE.In the 1990s, growing awareness of climate change in all political parties led to a consensus on climate protection goals."In fact, by 2007, it was the German government that moved the EU-leaders to agree on the 20-20-20 targets, aiming for a 20% share of renewable energy sources in the EU's primary energy supply by 2020.Fig. 
5 provides an overview of German electricity generation over time.The steep increase in renewable energy in Germany from 1990 to 2017, pushed by strong government policy, was not accompanied by a concomitant decrease in the use of coal as a fuel.Because of an increase in demand and in the use of renewable energy sources for electricity generation, only in relative terms has the share of coal in Germany really diminished over these 27 years.Why has the phasing out of coal been so limited?,The Federal government enabled support for coal mining in 1990 by having electricity firms agree to use 40 Mt of domestic hard coal per year and compensated the electricity firms for not using cheaper imported coal.For decades coal subsidies were funded through a coal levy, the “Kohlepfennig” or “coal penny” paid by electricity consumers.The coal penny was a regulatory measure from a collection of over a dozen, where regional governments and Federal government together allocated public money to support firms and citizens dependent on coal mining.Introduced in 1975 after the Oil crises, the coal penny averaged 8.5% of the price of electricity by 1995, about 3 billion euros per year in payments to electricity utilities.Overall it generated a subsidy volume of 37.2 billion euros.In 1990, government assistance per coal miner employed was almost 90.000 US$ for Germany.It was only in 1994 that the coal penny was abandoned, through a ruling of the Federal Constitutional Court that declared it unconstitutional.As a response to this the Federal government, the regional governments, mining firms and the mining trade union agreed on hard coal subsidies in March 1997.The federal subsidies decreased from 4424 million in 1998 to 3408 million in 2002, but were compensated by increased regional funding by North Rhine-Westphalia.Halfway, in 2000, Germany featured the largest subsidised hard coal production of OECD countries with 4000 million euros.This amount of state aid around 2000 is remarkable in the light of the West-European liberalisation, globalisation, cheaper imported coal and the corresponding decline of domestic coal production between 1990 and 2000.By Spring of 2003 the regional government of North Rhine-Westphalia established a 20 Mt guarantee, but was overruled by the Federal government in July 2003.Because of budget constraints the target was set at 16 Mt per year.This amounts to an annual subsidy payment of around 1.6 billion euros.In 2010, Germany asked the EU to extend acceptance of financial support for coal mining until the end of 2014.North Rhine-Westphalia and the federal government, through intense negotiations, agreed on a base production after 2010, motivated by them as necessary to secure the energy supply.This agreement coincides with the 2010 increase of the share of coal in the fuel mix for gross electricity generation in Germany, cf. Fig. 
5.As discussed in footnote 1, Germany started to export electricity when it could have shut down CFPP, even while simultaneously the Energiewende called for the phase-out of nuclear energy.With many nations to import cheaper coal from, coal subsidies in Germany basically amount to the public sector providing jobs."This aligns with the high degree of employment protection established for Germany as a CME, as does the strategic interaction between Federal government, local government, unions, employees and firms that led to subsidies, quota's and state aid.Under the IED, Member States can propose a plan to either modernize energy units or opt for lifetime derogation."However, by 2017, the EU Commission has raised objections to Germany's proposed plan, pp. 268, fn.33)."By 2018, Germany's federal government has set up a “coal exit commission” consisting of government, civil society, business and labour unions to manage the phase-out of CFPP.A large part of the discussion, with former prime ministers of industrially weak East-German lignite-mining states in the commission, is about compensation to firms and to regions.This process is in alignment with the institutions we have described; “employment protection”, “government as a shareholder” and “strategic interaction”.January 2019 a deal was closed, stating that compensation and other shutdown details should be agreed between the government and the CFPP operators on a contractual basis, which aligns with strategic interaction, not with market coordination."In accordance with the UK's LME type of political economy, it features a liberalised energy market with private ownership.After liberalisation as early as 1990, a series of mergers and the entry of large foreign multi-national utilities led to the emergence of ten generation companies, owning 85.8% of UK generation assets by 2012.In the 1990s England and Wales moved into an unprotected and privatised coal sector, selling coal to a restructured, vertically unbundled and generally privatised electricity sector.The roots of these changes can be traced back to the election of Margaret Thatcher in 1979, which firmly placed market ideology at the core of government policies.The state became strongly adverse to aid for coal mining and allowed the electricity industry to gradually increase the imports of cheaper foreign coal, considerably earlier than in Germany.Financial aid to coal mining became conditional on deep restructuring.The move under Thatcher against coal and the power of unions was motivated by political reasons, pp.155–156) as well as from a policy paradigm and was enabled by the majoritarian political system."Tension with labour unions escalated which led to the year-long Great Miners' Strike of 1984-1985, a major industrial dispute in British history, where the unions were defeated by the single-party Conservative government.It led to the closure of many British coal mines and a diminished position for trade unions."Political pressure led to downsizing and restructuring of the coal industry from the mid-1980's. 
"Government policies were aimed at privatisation and a competitive coal industry free from subsidies, resulting in the 1985 National Coal Board's “New Strategy for Coal”.The chairman of NCB appointed by Thatcher accelerated mine closures and showed little concern for social implications and mining communities, like those in Yorkshire."In the early 1990's though, UK government assistance to coal production was still equivalent to providing a domestic producer price 40% higher than the import price, 38.000 US$ per coal miner employed. "But by December 2015, the UK's last deep coal mine was closed in Yorkshire, putting an end to the UK's coal lock-in regarding domestic coal mining.In line with our expectations the decrease of coal use in the UK was comparatively fast, cf. Fig. 6.The decline of the use of coal was strongly facilitated by the increased competitiveness of natural gas as an alternative."In the early 1990s the then still-regulated Regional Electricity Companies were able to access long-term fixed price contracts from the UK's own domestic gas resource in the North Sea and largely built the combined cycle gas turbines or “CCGTs”, making use of technological advances in gas turbine generators.When the electricity market was liberalised and the prices of natural gas fell in the late 1990s, market dynamics diminished the demand for coal-fired power.Technological advances in gas turbine generators and financial advantages of building gas-fired power plants made them a more competitive choice for market players."This policy paved the way for investment in this competitive gas technology and subsequently the UK's “dash for gas”, breaking the electricity utilities' lock-in on domestic coal as they turned to CCGTs.The privatisation also doubled the import of cheaper foreign coal from roughly 10% in the decades before to roughly 20% between 1992 and 1997."Employment in coal mining fell from 49.000 in 1990 to 10.000 by 1996, as the dash for gas saw 100 TWh a year of coal-fired power generation and about 40 million tonnes of coal replaced by CCGT's.At the end of the 1990s, a newly elected Labour government implemented a temporary ban on the construction of new CCGTs, which was in fact a modest form of support for coal.The UK embraced CO2-reduction with the adoption of the 2003 White Paper “Our Energy Future: Creating a Low-Carbon Economy”.The ambitious 2008 “Climate Change Act” adopted carbon budgets that were in line with the 2020-targets and a 80% greenhouse gas reduction for 2050 compared to 1990.Nonetheless CO2-emissions from electricity generation did not decrease between 2000 and 2012.Coal-related firms promised the government and the public ‘clean coal’ and ‘innovation’ through carbon capture and storage.Prices for oil and gas were rising, and the 2005 conflict between Russia and Ukraine caused problems with Russian gas supply.The lower prices for coal made utilities more interested in coal in the periods of 1999-2006 and 2010–2012, causing two coal ‘revivals’.Market circumstances determined the use of coal.The price of coal and gas stayed roughly in line until 2011.After that, cheap imports of coal meant that the price of coal fell to about 60% of the price of gas."The UK's more market based approach resulted in a greater effect from environmental requirements as set by the EU air-quality directive “LCPD”, which meant that CFPP running hours were to be restricted.As a result, in the UK building new CFPP was considered risky because of recent climate policy.In fact, political pressure 
from campaigning NGOs to prevent new CFPPs being built successfully led to the Labour majority government banning new CFPPs without CCS in 2009.By then, unlike in Germany, there was no active political pressure from unions, firms or governments left in the UK to support coal-related investments.EU policies in the form of the LCPD and its successor, the 2013 EU Industrial Emissions Directive resulted in the forced retirement of several CFPP.The IED offers the option for lifetime derogation for energy units, and January 2016 the UK decided to use this derogation for 12 named power stations, pp. 269).But most responsible for the decrease of coal use in the UK from 2013 was the introduction of the carbon price floor.Not directly required by EU policies, we analyse it as an example of a national market-fixing policy that, like the ETS, attributed a price to the externalised costs of CFPP.It tripled the cost level for CO2 of the EU ETS.This high national carbon tax brought coal and gas prices in line with one another and made it more difficult for coal to compete.It was enabled further by available gas supply infrastructure and by coal and gas prices being sufficiently close, particularly in 2016.The CPF was implemented in March 2013.Decades of pressure on coal paved the way for the spectacular reduction of 25% of emissions from the UK electricity sector between 2015-2016, through the CPF driving rapid fuel switching).When the CPF doubled in April 2015, the share of coal in the UK fuel mix took another deep dive.April 2017 Britain experienced its first coal-free day since the 1880s.January 2018, the UK government announced that CFPP will have to close, unless their CO2 emissions are no higher than 450 kg/MWh at any time, from October 2025 onwards.This makes building new CFPP pointless.In “statist” nations like Spain, strategic interaction in corporate governance and labour relations is higher than in LMEs.This is reflected in the ownership structures for the electricity sector in the early 1990s.We see a mixture of large state- and private-owned firms, the most obvious example being Endesa: 75.6% held by the state, the rest privately.Already after the Spanish Civil War, protection of non-competitive domestic coal mining was concentrated in the state-owned utility Endesa.Endesa concentrated on coal and lignite as fuel for CFPP, mostly in the Northern regions that produced coal.In 1985 the government organised an asset swap that transferred assets from smaller and weaker firms to state-owned Endesa, which had a 40% market share around 1995.Typical for strategic interaction was the 1988 “Marco Legal Estable”, a framework that remunerates electricity utilities using a concept of standard costs.Also, in Spain vertical integration in the electricity sector and even backward into fuel supply remained common.Linkages involve explicit state ownership but also long term contracts.The entrance of Spain into the EU per 1986 started a gradual decline of the coal mining industry in Spain.Policies for substitution of coal in the fuel mix were indirect: through supporting other technologies.In Spain, between 1990 and 2010, renewables and natural gas replaced coal and nuclear in the fuel mix for electricity generation in relative terms.In absolute terms though, coal-fired power hardly decreased from 59,7 TWh to 51,4 TWh yearly.Despite the EU ETS, LCPD and IED, coal even replaced natural gas in the fuel mix for electricity generation between 2010 and 2015.Direct government intervention revived the use of coal at 
a relatively late point in the energy transition pathway."Coal is the only domestic fossil energy source in Spain and coal mining has played an important role in Spain's energy history.In 1990, Spain featured 200 mining companies and 45.000 employees producing 35.8 Mt of coal."Spain's phase-out policies focused on benefits and subsidies, not on creating new jobs.With the highest unemployment rate of any advanced economy, regional labour impacts of power plant closures would be severe.When in 1996 coal stock was piling up, the Ministry of Energy and Industry ordered fuel quotas to eliminate this oversupply, reducing market share for nuclear and hydro power.This amounts to direct government intervention in markets to protect employment.The 1997 “Ley del Sector Eléctrico” introduced competitiveness in the electricity sector and the expansion of renewable energy.Simultaneously, in 2000 Spain featured the second-largest subsidised hard coal production of OECD countries with 0.7 billion euros."The EU Council Regulation of July 2002 discouraged state aid to coal mining but made an exception for Spain, because of the importance of coal to Spain's electricity production.As both the EU Regulation and the exception for Spain were about to expire by December 31st, 2010, in the fall of 2009 Spanish prime minister Zapatero, from coal region León, proposed a Royal Decree to have Spanish CFPP use volumes of domestic coal."However, May 2010 the EU agreed on a policy stopping member states' financial support for uncompetitive coal mines unless the aid was accompanied by a plan to close said coal mines.The Spanish domestic coal mining industry, reduced to 5000 employees by 2009, faced decreasing demand for coal as Spanish utilities overwhelmingly purchased cheaper, imported coal.By July 2010 the two largest coal mining groups in Spain stopped paying their workers, citing lack of funds.This was followed by a series of miner protests known as the “Black March” and strikes in September 2010.Spain asked the EU to extend acceptance of financial support for coal mining until the end of 2014.The EU agreed that Spain could support domestic coal mining until December 31st, 2014 at the latest, provided the share of electricity concerned remained below 15%.The Royal Coal Decree became effective in February 2011.The Spanish government obligated 10 specific CFPP to burn specified volumes of domestic coal for a specified reimbursement per MWh to the firms.It was clearly strategic interaction, not market coordination which supported this increase of the use of coal.The EU extension of acceptance of support for coal mining to the end of 2014 benefitted the coal producing regions of Asturias, León and Teruel.It was influenced by electoral incentives.Electricity companies Endesa, Iberdrola and Natural Gas Fenosa as well as the region of Galicia appealed against the Royal Coal Decree unsuccessfully.In that way, political influence or “veto play” from regions and employees blocked the demise of domestic coal use in Spain, enabling a revival in the use of coal as late as 2010.The EU extended this acceptance to the end of 2018.But more recently Spain established a plan for modernization of energy units under the IED that was accepted by the EU, pp. 
268).September 2018, Spain agreed to an EU proposal that effectively bans State aid for coal.October 2018, Spain decided to close most of its coal mines, after government and unions struck a deal with the EU that will mean €250m will be invested in mining regions over the next decade, early retirement for miners, re-skilling and environmental restoration)."With domestic coal providing just 2.3% of Spain's electricity, the political impact was not a problem for the new Labour government.After communism, Poland developed a democracy built on proportional representation and multi-party coalitions which functioned between 1990-2015.Successive multi-party Polish governments were strongly in favour of coal and CFPP.The right-wing Law and Justice party elected in 2015 was the first single party majority in Polish parliament since 1989 and featured an even more hard-line position in favour of coal.Most industrial sectors in post-communist countries did see a large-scale privatisation."In the early 1990's, the Polish energy sector began with the launching of privatisation as well. "However, later in the 1990's Poland established state-owned companies to operate individual power plants.After Poland entered the EU in 2004, these state-owned companies were consolidated, and, in 2006, vertically integrated in order to improve financing prospects for investment requirements.State-owned PGE was formed between 2004 and 2007, bringing together the most polluting installation in the EU ETS, the 5400 MW coal-fired Belchatów power station, with the Turów, Opole and Dolna Odra CFPP."This centralised 70% of the Polish state's EU ETS emissions in one state-owned company and made the government of Poland directly responsible for 674 Mt CO2 or 5% of the EU ETS emissions between 2005 and 2012.Four of the other consolidated companies were privatised, paving the way for some Foreign Direct Investment from multi-national energy companies.However, firms with a decisive amount of shares owned by the Polish state serviced over 75% of the Polish electricity market.Four out of five hard coal mining companies in Poland are owned by the Polish state, linking these firms to considerable political influence or “veto play”.The Polish state has a direct financial stake in the success of the coal industry.Also, coal sector unions in Poland have a history of being remarkably powerful, with 240 trade unions for 100,000 coal jobs having significant political power."Poland's electricity supply industry is strongly dominated by coal as a fuel. 
"Though in relative terms natural gas and renewables did replace part of coal's share in the fuel mix between 1990 and 2017, in absolute terms, coal remained exactly stable.In the early 1990s, because of geophysical and economic differences, the coal lock-in in Poland was different.Polish coal mining was internationally competitive."Poland's domestic prices for coal and electricity were far below the price levels for imported coal from the US, whereas they weren't in Western Europe.Poland had coal seams which were internationally competitive to mine, so there was no need for subsidies or obligations for the use of domestic coal in electricity generation.The government in Poland though, as owner of the hard coal mines, continued to make significant losses.Essentially, Polish electricity users were subsidised then by the Polish government through having domestic coal prices of just about half the border prices.Poland has a history of being dependent on gas and oil imports from Russia: in 2014 90% of oil imports and 65% of gas."Poland's priority is to become as energy independent as possible, making “energy security” the guiding principle for Polish energy policy.Its key problem is financing the necessary investments, as many power plants are end of life while energy demand is increasing."In 2014, 40% of Poland's CFPP were over 40 years old, about half of them to be phased out by 2030 because of technical constraints and environmental constraints like the EU LCPD and IED.Polish coal mining is strongly concentrated in the south-west region of Silesia and, as in Germany and Spain, regional concerns around coal carried politicians into government."In 2015 Prime Minister Beata Szydlo swept to office in October with PiS on a promise that she would protect the coal industry's 100,000 jobs. "She is a coal miner's daughter from the region of Silesia, home of the now suffering state-owned Kompania Weglowa, the EU's biggest coal mining company.The IED offers the option to modernize CFFP, and Poland offered 47 energy units for modernization instead of lifetime derogation."For Poland's leading lignite-fired power plant “Adamów”, Poland wanted to take advantage of the higher limit of operating hours under the IED, in December 2015; however the EU turned this request down, pp. 
269).Since 2015, Polish coal mining productivity has declined and the coal mining sector is in financial trouble.Polish firms that are forced by the state to help the suffering coal companies are now losing value themselves, and citizens face having to bear losses.Essentially citizens and firms now subsidise coal mining and CFPP, something which underlines the strategic interaction type of coordination that characterizes the CME variety of capitalism of Poland.The comparative longitudinal analysis of the phasing out trajectories in the four countries clearly shows that the coal policies in the three CMEs have been significantly different from those that evolved in the UK, our LME case.First our analyses shows that there is a very good fit between the typology of political economies of the VoC-framework and the ownership structures in the different countries."All three CMEs feature a vertically integrated electricity sector from the 1990's on with government ownership, either regional and municipal or state.In LME the UK the electricity sector is privatised and unbundled.Secondly, these ownership structures in turn have set the scene for strategic interaction in CMEs.It has enabled regional stakeholders to prolong the use of coal in Germany, in Spain, and Poland.In our LME case the UK we did not see similar strategic interaction: local concerns from the region of Yorkshire were not able to block the phasing out of coal-related activities, because of the prevalence for market coordination in the UKs government approach to energy policy.Thirdly, our analyses have confirmed that in CMEs institutional support for employment protection is stronger.Spain and Poland displayed direct government intervention to protect coal mining jobs.Germany intervened more indirectly, through arrangements involving regional governments, unions and specified volumes of domestic coal.The UK as a LME typically displayed the lowest support for employment protection, as confirmed by the way the Thatcher government responded to the Great Miners’ Strike and its lack of support for coal activities.Fourthly, the patterns we expect for subsidies to coal are also confirmed by the data.Coal mining has been uncompetitive in the UK, Spain and Germany since 1958.In Spain, in 1992, coal was sold to CFPP under protected contracts at prices over 3.12 times those in the UK and 1.42 times those in Germany in 1989."The higher Spanish and German coal subsidy policies align with strategic interaction, whereas the UK's policy aligns with market coordination.Regarding coal dependence, we must note that the availability of domestic natural gas sets the UK apart from the other three nations."However, the UK's liberal policies for limiting coal aid and reducing protection date back to the earliest 1980s, preceding the 1990's dash for gas.We contend that the UKs early reduction of aid for coal mining is in line with its institutional constellation supporting market coordination.The UK had the option to stick with the 1991 status quo in coal jobs and dividends, just like Germany and even more so than Spain, cf. Table 1."The availability of domestic natural gas did facilitate the UK's policies to abandon coal. 
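The fuel-switching mechanism behind the UK's Carbon Price Floor, discussed above and returned to below, can be illustrated with a short back-of-the-envelope sketch. All fuel prices, plant efficiencies and emission factors in it are rough illustrative assumptions, not figures reported in this paper; it only shows how a sufficiently high carbon price flips the short-run merit order from coal to gas.

```python
# Minimal sketch (illustrative assumptions only, not data from the paper): how a
# carbon price such as the UK Carbon Price Floor shifts the short-run marginal
# cost ranking of coal-fired vs. gas-fired (CCGT) generation and can trigger
# fuel switching. Fuel prices, efficiencies and emission factors are placeholders.
def marginal_cost(fuel_price_per_mwh_th, efficiency, tco2_per_mwh_el, co2_price):
    """Short-run marginal cost in currency units per MWh of electricity."""
    return fuel_price_per_mwh_th / efficiency + tco2_per_mwh_el * co2_price

def cheaper_fuel(co2_price):
    coal = marginal_cost(10.0, 0.38, 0.90, co2_price)   # assumed coal plant
    ccgt = marginal_cost(22.0, 0.55, 0.35, co2_price)   # assumed CCGT plant
    return ("coal" if coal < ccgt else "gas", round(coal, 1), round(ccgt, 1))

# Carbon prices per tonne CO2, e.g. an ETS-only level vs. ETS plus a price floor
for price in (0, 5, 20, 40):
    print(price, cheaper_fuel(price))
```

With these assumed numbers, coal remains the cheaper option at low carbon prices and gas takes over somewhere between 20 and 40 per tonne of CO2, which is the qualitative effect attributed to the CPF in the UK case.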
"Wilson and Staffell hold that it was the price on carbon which was the main driver for coal's final rapid substitution starting April 2013, enabled further by available gas supply infrastructure and by coal and gas prices being sufficiently close, particularly in 2016).The late increase in coal use in Spain and Germany between 2010 and 2015, despite the EU climate policies, illustrates the resistance of CMEs to implement climate and energy policies as demanded by the EU.Where in 1990 the UK and Germany were quite similar in terms of coal lock-in, by 2015 the UK had moved to a situation of comparatively low coal lock-in.Comparatively, by then Germany had moved considerably less from its 1990 situation of coal lock-in than the UK.We contend that because of its institutional constellation Germany was comparatively more constrained to change its situation.The same goes for Spain, but from a better starting point.Also, by 2015, Poland has become even more of an outlier because of its institutional constellation and its material coal lock-in.Coal has a huge impact on climate change."With the UNFCCC Paris Agreement of December 2015, the global community has chosen to address climate change through voluntary Nationally Determined Contributions.As a consequence, nations in a situation of coal lock-in are asked to voluntarily phase-out coal and CFPP.As has become clear from this paper, this means that a number of CME-like nations will have to challenge deeply-rooted institutional constellations that have so far supported coal-related government interests, employment protection, regional concerns and preferences for strategic interaction for about six decades, even when market considerations suggested otherwise.Previous studies have found that CMEs do better in having countries adopt new carbon neutral electricity generation techniques.Our study shows that it is these same institutional constellations in CMEs that make it difficult to disband older types of electricity generation.CMEs might only be able to phase out coal through consensual agreements that require extensive compensations and side payments in order to compensate for job losses and for writing off sunk assets.However, the success of the Carbon Price Floor in phasing out British coal suggests that a majority for a significant carbon tax might lead to results considerably faster."We view the introduction of the carbon price floor as an example of a typical LME-style “arm's-length” market-fixing policy.The CPF builds externalities into the electricity price indiscriminately for all players in the national energy market.By contrast, CMEs lean on strategic interaction amongst relevant stakeholders, enabling specific agents to use veto play and slow down the process."Both our theoretical model as well as the data suggest that the UK's approach with an arm's-length carbon tax like the CPF which indiscriminately applies to the entire market is more successful.This paper has contributed to the awareness that deployment of low-carbon technologies is but a part of the climate challenge, and the phase-out of existing carbon-intensive technology is a topic that is at least equally relevant and deserves further study.As Seto et al rightly note processes of lock-in “pose significant obstacles to adoption of less-carbon-intensive technologies and development paths”.Overcoming institutional carbon lock-in is especially difficult as institutions are sticky and hard to change in the short run.Still, even in situations of considerable lock-in, changes may 
be on the horizon.First, technological advancements may make a further decarbonization more feasible.Secondly, institutional stickiness may be overcome through bursts of disruption as a result of swift social, political or technological changes, that make it unfeasible for veto-players to still block decarbonization.Thirdly, the very institutional framework that is responsible for the slow phasing out of coal in CMEs also provides for a consensual deliberative infrastructure that allows a soft-landing type of exit from coal dependency.As the recent developments in CMEs Spain and Germany show, this can be achieved through negotiated settlements with labour interests and affected regions, involving considerable side-payments to compensate for the loss of jobs and revenues associated with the use of coal as an energy source.While these agreements are costly in the short run they form an indispensable element in facilitating the transition to a carbon-neutral energy future.This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors.Herman Lelieveldt acknowledges support from the European Commission through the Erasmus+ program of the European Union.This support does not constitute an endorsement regarding the contents of the article, which solely reflects the views of the authors, and the Commission cannot be held responsible for any use that may be made of the information contained therein. | Meeting climate goals is a particular challenge for countries that combine extensive use of coal as a fuel for power generation with a significant history of coal mining. We argue that these countries are prone to institutional carbon lock-in processes that significantly affect the phase-out of the use of coal. We use the analytical framework of Varieties of Capitalism to compare degrees of carbon lock-in in Coordinated Market Economies (CMEs) with Liberal Market Economies (LMEs). In CMEs “strategic interaction”, “employment protection” and “government ownership” translate into protection of uncompetitive domestic coal activities and assets through (cross) subsidies and veto play. In LMEs the use of coal will be more dependent upon its market price in the international energy market. Through a qualitative comparison of the development of coal-mining and coal-fired electricity generation in three CMEs (Germany, Spain, Poland) and one LME (the UK) over the period between 1990 and 2017 we show that the UK's liberal market economy facilitated a relatively swift phasing out of coal mining and the use of coal, compared to a much more reluctant transition in the other three countries. |
392 | Multiscale alterations in bone matrix quality increased fragility in steroid induced osteoporosis | Anti-inflammatory glucocorticoid treatments are mostly prescribed to an elderly population suffering from diverse disorders such as asthma, rheumatoid arthritis, immune diseases and following organ transplants. However, glucocorticoid induced osteoporosis (GIOP), a form of secondary osteoporosis, is a clinically serious long-term side effect of glucocorticoid treatment, resulting in loss of cancellous bone followed by cortical bone and affecting 0.5% of the general population. Osteoporotic fractures associated with glucocorticoid use occur in up to 30–50% of patients on chronic glucocorticoid therapy. Further, GIOP is the most notable clinical skeletal disorder where the established paradigm of using only bone quantity to predict fractures is clearly insufficient to explain increased fracture risk, as bone mineral density (BMD) measurements show no significant association with fractures. GIOP patients have a greater risk of fracture at higher BMDs when compared to postmenopausal osteoporotic women. It has been shown that glucocorticoid therapy affects both the amount of bone and the micro-architecture and material-level properties, due to the down-regulation of bone-forming osteoblasts with concurrent alteration of the bone remodeling cycle. Up to 40% reduction of both the mineral to matrix ratio and the elastic modulus was observed around osteocyte lacunae, along with 18% reductions in trabecular bone volume, 12% lower trabecular connectivity and 7% lower trabecular number as measured with microcomputed tomography. However, the mechanisms by which these micro- and nanoscale changes in bone material quality lead to increased fracture risk in GIOP are currently unknown. Deformation mechanisms at multiple structural levels between the nano- and the microscale – from the largest down to the smallest – lead to load-bearing bones achieving both a high stiffness and a high work of fracture. Bone quality changes occur initially at the smaller length scales, dictated by the rates of formation of new basic multicellular units by osteoblasts, at the level of lamellae and mineralized fibrils. Alterations in the bone extracellular matrix induced by glucocorticoid therapy are downstream effects of altered cellular activity of bone cells in the pathological condition and may be involved in the reduction of the global mechanical competence of bone. However, a gap exists both in our knowledge of the structural changes in GIOP and, more significantly, in the relation between such structural changes at the bone material level and the increased macroscopic fragility in GIOP. Therefore, there is a clear need to apply high-resolution imaging techniques to close the gap between the onset of fracture-relevant changes and diagnosis. We hypothesize that enhanced fracture risk in GIOP is associated with nanomechanical alterations at the fibrillar level, which link into larger scale deformation mechanisms. Here, to test this hypothesis, we combine multi-scale imaging techniques and mechanical testing on an animal model of GIOP. For the animal model of GIOP, a recently published mouse model of endogenous hypercorticosteronaemia was used, as the fracture risk associated with endogenous and exogenous GIOP has been shown to be similar. The mouse model was generated via an N-ethyl-N-nitrosourea (ENU) induced mutation of the corticotrophin releasing hormone (Crh) promoter. The mutation, which involved a T-to-C transition at − 120 bp within the Crh
promoter, resulted in increased transcription activity, and in vivo assessment of Crh− 120/+ mice revealed them to have obesity, hypercorticosteronaemia, hyperglycaemia, and low bone mineral density when compared to wild-type (WT) mice. Crh− 120/+ mice, when compared to WT mice, showed reductions in mineralizing surface area, mineral apposition rate, bone formation rate and osteoblast number; this was also accompanied by an increase in adipocytes in the bone marrow. These phenotypic changes validate the use of the Crh− 120/+ mice as a model for Cushing's syndrome and GIOP. In this study, the alterations in fibrillar-level deformability, mineralization and cortical micro-architecture in GIOP can thus be quantified and linked to macroscale mechanical properties using in situ X-ray nanomechanical imaging, synchrotron micro-computed tomography and scanning electron microscopic investigations, respectively. Crh− 120/+ mice were identified in a dominant ENU mutagenesis screen at the MRC MGU Harwell. Female Crh− 120/+ mice on a C57BL/6 genetic background were used in all experiments; littermate WT mice were used as controls. Animals were 26 weeks of age at the time of sacrifice. Animals were anaesthetised before cervical dislocation; internal organs were removed from the body cavity and the whole body skeleton stored at − 20 °C until used. Quantitative backscattered electron (BSE) imaging was performed on transverse cross sections at the mid-diaphysis to determine the microscale degree of mineralization. Mouse femora were sectioned into halves using a low speed diamond saw before dehydrating in ethanol and embedding in poly-methyl-methacrylate (PMMA). Digital BSE imaging was performed with an Inspect-F (FEI) scanning electron microscope equipped with an annular solid state BSE detector. The electron beam was adjusted to 20 kV accelerating voltage, and a 160 μA sample current was used to perform the analytical imaging. The working distance in the SEM was adjusted to 15 mm. The pixel resolution of the digital BSE images from the midshaft transverse cross section was 0.3125 μm (1024 × 943 pixels) with a gray level resolution of 256. These gray level values were converted into calcium weight% values using carbon and aluminum as gray level references. In Crh− 120/+ mice bone, two distinct regions were observed. Three to six regions of interest from each BSE image of the anterior transverse cortex were used to produce the grey level histograms. Using the bone mineralization density distribution (BMDD) histograms, the mean calcium weight percentage (Camean) and the full width at half maximum (FWHM) were calculated.
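A minimal sketch of how the gray-level-to-calcium calibration and the BMDD metrics described above (Camean and FWHM) might be computed from a BSE image is given below. The reference gray levels, the calcium values assigned to the carbon and aluminum standards, the bin width and the synthetic image are illustrative assumptions, not the calibration constants or data used in the study.

```python
# Minimal sketch (illustrative only): convert BSE gray levels to calcium weight%
# using carbon and aluminum reference gray levels, then derive Camean and FWHM
# from the bone mineralization density distribution (BMDD) histogram.
import numpy as np

GL_CARBON, GL_ALUMINUM = 25.0, 225.0   # hypothetical reference gray levels
CA_CARBON, CA_ALUMINUM = 0.0, 25.0     # hypothetical Ca wt% assigned to the references

def gray_to_ca(gray):
    """Linear calibration of gray level to calcium weight%."""
    slope = (CA_ALUMINUM - CA_CARBON) / (GL_ALUMINUM - GL_CARBON)
    return CA_CARBON + slope * (gray - GL_CARBON)

def bmdd_metrics(ca_values, bin_width=0.17):
    """Camean (histogram-weighted mean) and FWHM of the BMDD histogram."""
    bins = np.arange(0.0, ca_values.max() + bin_width, bin_width)
    counts, edges = np.histogram(ca_values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ca_mean = np.average(centers, weights=counts)
    above_half = centers[counts >= counts.max() / 2.0]
    fwhm = above_half.max() - above_half.min() if above_half.size else 0.0
    return ca_mean, fwhm

# Example with a synthetic 'BSE image' (gray levels within a bone region of interest)
rng = np.random.default_rng(0)
roi = rng.normal(loc=190.0, scale=12.0, size=(256, 256)).clip(0, 255)
ca = gray_to_ca(roi.ravel())
print(bmdd_metrics(ca))
```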
Synchrotron radiation microCT was performed at the imaging beamline I13-2 at Diamond Light Source to visualize the vascular canal network. Three tibiae from three WT mice and three tibiae from three Crh− 120/+ mice were oriented with their longitudinal axis parallel to the rotation axis during scanning. The effective voxel size was 1.6 μm³, providing a spatial resolution of 3.2 μm and a field of view of 4.2 × 3.5 mm. Tomographic scans were obtained using a photon energy of 18 keV and an exposure time of 0.1 s. For each 3D data set, a total of 3600 projections were acquired over a range of 180°. The Diamond Light Source in-house algorithm was used to reconstruct the tomographic data, and 3D volumetric visualization of the tibial mid-diaphysis was created with segmentation tools in Avizo 3D software. Image volumes at the mid-diaphysis of 1.35 × 1.34 × 1.00 mm³ were used for further morphometric analysis. Using Avizo, intracortical lacunae and canals were segmented from the dense cortical tissue with simple thresholding, their densities being significantly different, which also provided volume measurements of the bone. STL meshes were created for the segmented volumes and imported into Blender, where they were separated by loose parts to give a total count of lacunae, canals and artefacts. The artefacts largely occurred outside the bone and so could be selected, isolated and counted in Blender. The mesh was then imported into MeshLab, where components less than 1% of the total mesh size were removed with the "remove isolated components" algorithm, leaving mainly the canals, which could then be counted manually. The artefact and canal counts were then subtracted from the total count to give the lacunae number. We derived morphometric measures for female Crh− 120/+ and wild-type littermates, including lacunae number density and canal number density. Mouse femora from female Crh− 120/+ and wild-type littermates were dissected, skinned and the muscle tissue removed. The bones were then systematically prepared for in situ tensile testing. Only bone strips from the anterior sections of the femora were used in this experiment, such that the long axis of the specimens was parallel to the femur. The average length, width and thickness of the gauge regions were 5.0 mm, 1.0 mm and 0.2 mm, respectively. Samples were loaded at a constant velocity of 1 μm/s in a custom-made micromechanical testing machine in the path of a microfocus synchrotron X-ray beam at beamline I22, Diamond Light Source. A schematic of the experimental setup is shown in Fig. S1. Samples were maintained in physiological saline in a fluid chamber and strained at 10−4 s−1, with SAXD spectra taken every 0.05% tissue strain up to failure. The X-ray wavelength λ was 0.8857 Å and the beam cross section 10 μm × 12 μm. The sample-to-detector distance was 1.034 m, measured with a calibration standard. During the experiment, the exposure time for each SAXD spectrum was kept to approximately 1 s, limiting the total X-ray radiation dosage to 29.4 kGy to minimise the influence of the X-ray radiation on the bone mechanical properties. The collagen fibril strain εf was measured from the change of the centre position q0 of the third-order meridional reflection peak, as described previously. 2D SAXD patterns were reduced to one-dimensional profiles by radial integration over a 20° sector oriented parallel to the tensile loading axis. Subsequently, the third-order meridional fibrillar reflections were fitted to Gaussians with a linear background term to obtain the peak position q0. The axial fibrillar periodicity is D = 6π/q0, and the fibril strain equals the percentage increase in D relative to the unstressed state. Tissue strain was measured by non-contact video extensometry of the displacement of horizontal optical markers on the bone mid-diaphysis. We consider only the elastic regime of bone deformation in this paper, and hence only data collected from the linear region was used for further analysis. The elastic region for each sample was identified using the baseline of a > 10% reduction in the slope of the stress–strain curve, as shown in Fig. S2 in the supplementary information.
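To make the peak-fitting step concrete, the sketch below shows one way the fibril strain described above could be extracted from a 1D SAXD profile by fitting the third-order meridional reflection with a Gaussian plus a linear background and applying D = 6π/q0. It is illustrative only; the function names, initial guesses and the synthetic example profile are assumptions, not the authors' analysis code.

```python
# Minimal sketch (not the authors' code): fibril strain from the shift of the
# third-order meridional SAXD peak, fitted as a Gaussian on a linear background.
import numpy as np
from scipy.optimize import curve_fit

def gauss_lin(q, amp, q0, sigma, slope, offset):
    """Gaussian peak on a linear background."""
    return amp * np.exp(-0.5 * ((q - q0) / sigma) ** 2) + slope * q + offset

def fit_q0(q, intensity):
    """Fit the third-order peak and return its centre position q0 (nm^-1)."""
    p0 = [intensity.max() - intensity.min(),   # amplitude guess
          q[np.argmax(intensity)],             # centre guess
          0.01,                                # width guess
          0.0, intensity.min()]                # linear background guess
    popt, _ = curve_fit(gauss_lin, q, intensity, p0=p0)
    return popt[1]

def fibril_strain(q0_loaded, q0_reference):
    """Percentage fibril strain; D = 6*pi/q0 for the third-order reflection."""
    D = 6.0 * np.pi / q0_loaded
    D0 = 6.0 * np.pi / q0_reference
    return 100.0 * (D - D0) / D0

# Synthetic example: a 67 nm D-period gives q0 = 6*pi/67 ≈ 0.281 nm^-1
q = np.linspace(0.25, 0.31, 200)
I_unstrained = gauss_lin(q, 50.0, 6 * np.pi / 67.0, 0.004, -20.0, 40.0)
I_loaded = gauss_lin(q, 50.0, 6 * np.pi / 67.3, 0.004, -20.0, 40.0)  # ~0.45% strain
print(fibril_strain(fit_q0(q, I_loaded), fit_q0(q, I_unstrained)))
```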
To determine the degree of orientation of the collagen fibrils with respect to the loading axis, the full width at half maximum of the azimuthal intensity distribution of the meridional 3rd order reflection, Ic(χ), was estimated from unstrained samples. To eliminate the mineral scattering background in SAXD, the total azimuthal intensity I(χ) at q = q0 and the azimuthal distribution of mineral scattering Im;c(χ) were calculated. I(χ) was calculated by radially averaging the SAXD intensity in a narrow band around q0. Im;c(χ) was similarly calculated by averaging the intensity at wave-vectors lower (ql) and higher (qh) than q0. Ic(χ) is the difference between I(χ) and Im;c(χ). The angular intensity of the 3rd order fibril reflection Ic(χ) is plotted in Fig. S1F. The intensity was fitted to a Gaussian function I(χ) = I0 exp(−((χ − χ0)/Δχ0)²/2), where χ0 is the centre of the intensity distribution and Δχ0 is proportional to its width. The average rates of collagen fibrillar reorientation were determined (WT: n = 4; Crh− 120/+: n = 6) by calculating the slopes of the FWHM vs. tissue strain curves. To compare nanomechanical and synchrotron X-ray micro-computed tomography results between WT and Crh− 120/+ mice, Student t-tests were performed. A one-way ANOVA with post-hoc Tukey HSD test was performed on the mean calcium weight percentage (Camean) and full width at half maximum (FWHM) data to assess statistical significance between periosteal and endosteal regions of Crh− 120/+ mice and their WT littermates. Excel 2007 was used for the Student t-tests, ANOVA and post-hoc Tukey HSD tests. Backscattered scanning electron microscopy was performed on femoral transverse cross sections to examine possible mineralization defects in Crh− 120/+ mice. The cortical microstructure of the Crh− 120/+ mice femora was markedly different to the WT cortical structure. BSE images of femoral transverse cross-sections of WT mice showed a uniform cortical thickness, whereas in the Crh− 120/+ mice the posterior cortex was substantially thinner compared to the anterior cortex. The anterior, lateral and medial cross sections of Crh− 120/+ femora had a very large fraction of cavities. In contrast, WT femoral bone was uniformly dense around the full cortex. High magnification BSE images of the WT cortex show uniformly distributed lacunae. However, in Crh− 120/+ mice the osteocytic area is low compared to WT mice. Strikingly, Crh− 120/+ mice cortices had numerous localized cement lines surrounding low mineralized tissue near cavities, which were absent in WT cortices. These structures were ~ 50 × 50 μm in area, surrounded a significant number of osteocytes and were localized to the endosteal cortex. Bone mineralization density distribution histograms were plotted for Crh− 120/+ and WT femoral transverse cross-sections. Since we observed a very distinct intra-tissue variation of mineralization in the Crh− 120/+ mice femora, as reported above, both regions were used separately for the quantitative BSE analysis. Crh− 120/+ mice showed a lowered average mineral content compared to WT. The mean calcium weight percentage was lowest at regions near the cavities in Crh− 120/+ mice, and significantly greater in the cortical periosteum at regions away from the cavities. WT mice had significantly higher Camean compared to Crh− 120/+ mice. The heterogeneity of tissue-level mineralization (FWHM) was highest at regions around the cavities in Crh− 120/+ mice, and substantially lower in regions near the periosteal surfaces away from the cavities. In contrast to Crh− 120/+ mice bones, WT mice had significantly lower FWHM.
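The group comparisons reported above (Student t-tests between genotypes, and a one-way ANOVA with post-hoc Tukey HSD across cortical regions) were run in Excel; an equivalent analysis in Python might look like the sketch below. The variable names and the Camean values are synthetic placeholders, not the study data.

```python
# Minimal sketch (not the study's analysis scripts): genotype comparison with a
# Student t-test and region comparison with one-way ANOVA plus Tukey HSD.
import numpy as np
from scipy.stats import ttest_ind, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical mean calcium weight% (Camean) per specimen
camean_wt = np.array([24.1, 23.8, 24.5, 24.0])
camean_crh = np.array([21.2, 20.7, 21.9, 21.4, 20.9, 21.6])

# Student t-test between genotypes (as used for the nanomechanical and microCT metrics)
t_stat, p_val = ttest_ind(camean_wt, camean_crh)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# One-way ANOVA with post-hoc Tukey HSD across cortical regions
region_vals = {
    "near_cavity": [18.9, 19.4, 19.1],
    "periosteal": [22.0, 21.6, 22.3],
    "wt_cortex": [24.1, 23.8, 24.5],
}
print(f_oneway(*region_vals.values()))
values = np.concatenate(list(region_vals.values()))
labels = np.repeat(list(region_vals.keys()), [len(v) for v in region_vals.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```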
cavities and the microstructure of cortical bone, synchrotron X-ray micro-tomography imaging was carried out. Results presented in Fig. 3C and D show that most of these cavities are localized in the anterior cortico-endosteal bone along the tibial shaft. These localized cavities are not present in WT mice. 3D reconstructions of the vascular canal network and lacunae presented in Fig. 3B show that both WT and Crh−120/+ bones have individual canals directly connected to the medullary cavity. However, WT bone exhibited a very condensed network of canals and osteocyte lacunae homogeneously distributed across the cortical bone. In contrast, in Crh−120/+ mice most of the canal network and lacunar space has been replaced by cavities. Morphometric evaluation of the vascular canals and osteocyte lacunae is shown in Fig. 4A and B respectively. These results indicated a significant reduction in canal density and lacunae density in Crh−120/+ mice bones. Average volume fractions for the porous sections and residual regions are 0.025 and 0.0083 respectively. In contrast, in WT mice no such structures were observed. Furthermore, Crh−120/+ mice bones exhibit some unmineralized tissue within the medullary cavity attached to the cortico-endosteal bone. This tissue has a lower gray value compared to the cortical bone of Crh−120/+ mice. The increased cavity structure in Crh−120/+ mice bone is present across the entire length of the bone shaft, as evidenced from synchrotron X-ray microCT measurements across the entire mid-diaphysis. Tissue-level and fibrillar mechanics of cortical bone from femoral mid-diaphyses of 26-week-old WT and Crh−120/+ mice were measured using in situ micromechanical tensile testing combined with microfocus SAXD. Porosity-corrected stress versus tissue strain was plotted as a function of genotype. Tissue-level elastic moduli are significantly lower in Crh−120/+ mice compared to WT mice. The average yield stress of the WT mice is significantly larger compared to Crh−120/+ mice. The tissue yield strain of Crh−120/+ mice was not significantly different from WT mice. Considering the fibrillar-level deformation, the gradient of stress versus fibril strain in the elastic regime is denoted as the effective fibril modulus, as per our previous definition. The average fibril modulus shows a significant reduction of ~ 79% in Crh−120/+ bone relative to WT. To calculate the fraction of tissue strain taken up at the fibrillar level, fibril strain was plotted against macroscopic strain. The gradients of fibril strain versus tissue strain are clearly different, with dεF/dεT much higher in Crh−120/+ mice compared to WT (0.57 ± 0.2 S.D.).
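Both gradients reported here can be obtained as simple least-squares slopes over the elastic region; the sketch below is a schematic illustration with made-up numbers and hypothetical array names, not the authors' analysis pipeline.

```python
import numpy as np

def elastic_slope(x, y):
    """Least-squares slope of y against x (simple linear fit with intercept)."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope

# Hypothetical arrays restricted to the elastic region identified from the
# stress-strain curve (see Fig. S2): applied stress (MPa), tissue strain (%),
# and fibril strain (%) from the SAXD D-period.
stress = np.array([2.0, 4.0, 6.0, 8.0])
tissue_strain = np.array([0.05, 0.10, 0.15, 0.20])
fibril_strain = np.array([0.03, 0.055, 0.08, 0.11])

effective_fibril_modulus = elastic_slope(fibril_strain / 100.0, stress)  # MPa per unit fibril strain
fibril_to_tissue_ratio = elastic_slope(tissue_strain, fibril_strain)     # dimensionless d(eps_F)/d(eps_T)

print(f"effective fibril modulus ~ {effective_fibril_modulus:.0f} MPa")
print(f"d(eps_F)/d(eps_T) ~ {fibril_to_tissue_ratio:.2f}")
```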
The maximal fibril strain for Crh−120/+ specimens was significantly higher compared to WT mice. In the unloaded state, the width of the fibrillar orientation distribution of Crh−120/+ mice was significantly higher compared to WT, indicating a lesser degree of fibrillar alignment relative to the tensile axis. The load-induced fibrillar reorientation, i.e. the percentage change in fibrillar orientation width for WT and Crh−120/+ mice, shows that the orientation width reduces for all samples, but the change is much less pronounced in Crh−120/+ mice relative to WT mice. The rate of fibrillar reorientation with fibrillar deformation in Crh−120/+ mice is significantly lower compared to WT mice. Here, we have applied a combination of nano- and microscale structural and mechanical probes to quantify the mechanisms by which bone material quality changes in a mouse model of GIOP lead to increased macroscopic fragility. Glucocorticoid-induced osteoporosis is an especially appropriate osteoporosis model to clarify the mechanistic role of bone quality, as it is well established that the steroid-induced increase in fracture risk appears uncorrelated to changes in bone quantity. Nonetheless, while cellular changes in bone metabolism have been identified in GIOP, less is known about the alterations in the bone material, and very little about the altered deformation mechanisms of the bone material in GIOP. We used X-ray nanomechanical imaging techniques combined with micro-structural probes of mineral content and 3D microarchitecture to provide a quantitative link between structure at the nano- and microscale and the mechanical quality deterioration in a mouse model for Cushing's syndrome, with relevance for GIOP. While the mouse model used in this study exhibits endogenous steroid production characteristic of Cushing's syndrome, which is in contrast to the usual etiology of GIOP where steroids are administered exogenously as part of anti-inflammatory medication, there are sufficient similarities to make it a worthwhile comparison. The Crh−120/+ mice in this study have previously been shown to exhibit osteoporosis, specifically showing reduced bone formation, number of osteoblasts, mineral apposition rate, and fraction of the endosteal surface of cortical bone covered by osteoblasts. Further, an increased adipocyte concentration in Crh−120/+ mice suggests that bone marrow stromal cells preferentially differentiate to adipocytes rather than osteoblasts, as seen in GIOP. In addition, atomic force microscopy and other imaging methods have shown that GIOP cortical murine bone exhibits "haloes" of lesser mineralized tissue around osteocyte lacunae. Similar microstructural alterations are observed in the current mouse model, as well as significantly altered 3D microstructural architecture, which gives confidence that the alterations in bone matrix mechanics and structure visible here are relevant to the case of GIOP. As discussed earlier, the current model can hence be considered a complement to existing models of exogenously induced GIOP, with special application to understanding the longer-term effects of GIOP on bone structure and quality, in line with the continuous production of steroids over the lifetime of the animal. The main findings of our study can be summarized as follows: (i) Nanoscale mechanical alterations: We observed a reduced fibril modulus, increased fibrillar extensibility, increased randomness of fibril texture and a reduced rate of fibrillar reorientation in the cortical bone of Crh−120/+
mice subjected to tensile loading, compared to WT controls. (ii) Microscale material and structural alterations: A reduced average mineral content and increased heterogeneity of mineralization were accompanied by significant increases in porosity and alterations in 3D microarchitecture, the presence of lower mineralized tissue around these pores and a disrupted endosteal structure in the cortices of Crh−120/+ mice compared to WT controls. (iii) Macromechanical changes: The tissue-level stiffness was reduced, the maximum tissue strain increased and the breaking stress was also reduced in the bones of Crh−120/+ mice compared to WT controls. In the following, we will discuss these findings, and their relation to existing knowledge about the alteration of bone structure in GIOP as well as in related disorders like Cushing's syndrome, in more detail. At the microstructural level, the microCT images indicate substantial, interconnected pores inside the cortex of the bones, which is accompanied by a reduction in mineralization of the remaining matrix. The open question is whether this difference is the product of a disrupted endosteal structure, or due to the presence of blood vessels inside these cavities, leading to Haversian-type remodelling with secondary tissue formation. From prior histochemical studies of this model, it is found that due to a combination of reduced osteoblast coverage on the cortico-endosteal surface, a lower mineralizing rate and a lower overall number of osteoblasts, the endosteal surface develops with a ruffled, porous surface characteristic of cancellous bone; these are the large voids visualized by our microCT data. Further, there is no direct evidence of blood vessels inside these cavities from microscopy or SEM images, suggesting that secondary remodelling may not be playing a role. Thus, the first possibility is more likely. However, it should be noted that osteocytes inside already formed cortical bone have been proposed to be capable of resorbing formed bone by leaching and proteolytic processes. While this view is contested, it is possible that enhanced removal of endocortical bone tissue by osteocytic osteolysis activated in GIOP may also be a causative factor behind the large voids and cavities found, and at present we cannot conclusively exclude either possibility. The microstructure of the tissue toward the endosteal surface exhibits regions of low overall mineralization, which bear some resemblance to the lowered mineralized haloes observed in GIOP bone, and also show some highly mineralized thin lines at their peripheries. The low mineralized zones also have a more heterogeneous distribution, as characterized by the larger FWHM in the BMDD results. The overall reduction in mineral content and the increased heterogeneity in Crh−120/+ may be linked to the adverse effect of glucocorticoid treatment on cellular activities. Synchrotron X-ray microtomography data demonstrated that the cavities are interconnected along the longitudinal direction of the tibiae in Crh−120/+ mice. Furthermore, this 3D representation and morphometric evaluation showed that the Crh−120/+ bones have a reduced proportion of normal vascular network and a lower density of osteocyte lacunae compared to WT cortical bone. However, a much larger fraction of intracortical cavities is found, unique to Crh−120/+ mice, which may be the late-stage result of enhanced osteocytic osteolysis inside the cortical shell. These findings are less apparent when investigating 2D-only images such as BSE imaging.
Quantitative 3D morphological analysis, such as canal density and lacuna density, was performed in this study. Despite the limitation of the small number of samples that could be measured in the limited synchrotron time, qualitatively and quantitatively significant differences in microstructure were observed between Crh−120/+ and WT mice. In terms of bone matrix mechanics, we observed a significantly reduced elastic modulus and strength in the femora of GIOP-exhibiting Crh−120/+ mice at the macroscopic scale, which remain significant after correcting for the elevated microstructural porosity in Crh−120/+. However, the origin of the material-level changes causing this mechanical deterioration may lie at either or both the micro and the ultrastructural length scale in the structural hierarchy. We find evidence that reduced stiffness at the fibrillar level plays an important role in this mechanical deterioration: the effective fibril moduli in Crh−120/+ mice are significantly less stiff than controls. We speculate that the alterations could be due to reduced stiffness in the extrafibrillar environment in Crh−120/+ tissue. In the linearly elastic region of deformation, the externally applied tensile strain can be divided into a tensile stretching of the mineralized collagen fibril together with deformation at larger length scales, which may include shear in the extrafibrillar matrix between fibrils or between lamellae. Our in situ SAXD results show that because the fibril modulus is lower, the maximum deformation in the mineralized fibrils in Crh−120/+ is significantly higher relative to WT. The fibril-strain/tissue-strain ratio in the WT mineralized collagen fibrils is ~½, consistent with previous in situ SAXD on bovine fibrolamellar bone, but in contrast, the fibril-to-tissue strain ratio for Crh−120/+ bone is ~ 1, within experimental error. While an increase in the fibril-strain/tissue-strain ratio in well-oriented bovine fibrolamellar bone was earlier explained by us as due to increased mineralization in the extrafibrillar compartment, this mechanism clearly cannot hold for the osteoporotic Crh−120/+ mice, because they exhibit a significant reduction in degree of mineralization. As the fibril orientation distribution is significantly more random in Crh−120/+ than in WT, we conclude that microscale inhomogeneity in lamellar-level fibril orientation in Crh−120/+ may be playing a significant role in the altered fibril-strain/tissue-strain ratio as well as in the altered macroscale mechanics. The alteration of macroscopic mechanics with fibril orientation is consistent with a recent study showing that microfibril orientation dominates the local elastic properties of lamellar bone. The alteration in the fibril-strain/tissue-strain ratio is quite consistent with the highly porous, heterogeneous mineralized matrix observed in Fig. 1B.
In such a system, the local tissue strain in the more randomly oriented fibrils in Crh−120/+ mice will be different when compared with WT mice. Our results show that the mineralized fibrils in both groups undergo a reduction in the degree of fibrillar orientation on loading, which corresponds physically to a stress-induced alignment of the fibrils toward the loading axis. However, the rate of fibrillar reorientation is different between Crh−120/+ and WT, with a significantly lower rate of reorientation in Crh−120/+ mice. Prior to discussing these differences, however, the magnitudes of the changes in the width of the fibril angular distribution deserve comment. For tissue strains of ~ 1–2%, we observe much larger percentage changes in the width, of the order of 10–20%. At first glance this finding of a relatively large reduction in the width is a very surprising result, as it is expected that the percentage change of the angular distribution will be comparable to the percentage change of the fibrillar elongation, and not much larger. It is important to note that the large angular change is not related to the disease phenotype — both WT and Crh−120/+ specimens have comparable order of magnitude effects, and the Crh−120/+ reorientation is actually lower. In order to exclude artefacts from our data analysis, we took special care to fit the angular intensity profile to a Gaussian without a baseline, as we found that the introduction of an artificial baseline significantly affected the width of the peak of the remaining Gaussian, and as a result the percentage reductions were even larger. Further, we kept the meridional width of the 3rd-order reflection large enough such that all the intensity in the peak was averaged, not just the intensity along the meridional peak position. Lastly, to exclude the possibility that this large change was a characteristic of our mouse cortical bone specimen preparation protocol, a comparable analysis of the percentage change in fibril width for standard tissue types like the bovine fibrolamellar bone and antler cortical bone tested by us previously shows similarly large reductions of the order of ~ 10% for strains < 1%. We can therefore say with confidence that this effect is a real one which is characteristic of cortical bone of various types in our samples. In order to explain this large reduction, we need to consider the local loading environment of the fibril. If we assume that the fibrils and surrounding interfibrillar matrix are in a strain-controlled deformation mode, then it can be readily seen that the percentage change of angular position of the fibrils is of the same order as the fibril strain itself, which is not what is observed. However, if we consider the fibrils to be relatively rigid fibers in a partially ductile interfibrillar matrix which transmits shear, it can be seen that while the fibril strain can be small, the reorientation of the fiber due to the resolved force perpendicular to the fiber can be significant. It is therefore clear that a large percentage reduction in the angular width of the fibril distribution, relative to the fibril strain, is definitely possible in this case, and that the effect will be larger as the stiffness ratio between the interfibrillar matrix and the fibers increases. With this in mind, it is possible to consider the reduced rate of reorientation in GIOP as a possible alteration of the stiffness ratio between the mineralized fibrils and the extrafibrillar matrix, specifically to a stiffer extrafibrillar matrix and less stiff fibrils.
In the absence of a detailed TEM-level study, our discussion is speculative, but the reduced reorientation may indicate that the fibrils in GIOP are less completely mineralized than in WT, or that excessive mineral deposition occurs outside of the fibrillar compartment. We can also link the altered microstructure in Crh−120/+ mice – specifically the demineralized structures around osteocytes, the reduced frequency of osteocytes, and their qualitative shape – to the changed fibrillar mechanics and the altered loading environment in the bone tissue of Crh−120/+ mice. Previous work has found that the geometrical properties and shape of osteocyte lacunae depend on age, anatomical site and collagen fibre arrangement, with more elongated osteocytes in regions of greater collagen fibre alignment and rounder osteocytes in tissues with more random fibre orientation like woven bone. Here, our SAXD results showed that the degree of fibrillar orientation in Crh−120/+ mice is lower than in WT mice, and also that the fibril modulus is lower. It is therefore likely that this alteration in collagen fibril orientation and mechanics is linked to the change in osteocyte morphology to a less elongated structure. The alterations in intracortical porosity will most likely also play a significant role in the reduction of mechanical competence, since extrinsic toughening mechanisms like crack bridging and crack deflection depend sensitively on the lamellar structure, orientation and mineralization. These microstructural alterations may act through three main mechanisms: (i) crack path deflection at the interface between the lower mineralized halos and the surrounding tissue, (ii) crack initiation at the cement lines around the halos, both of which will change the fracture toughness, and (iii) a disrupted mechanosensory osteocytic network due to the apoptosis of osteocytes.
Fig. 7 shows some key elements in the alteration of the mineralized matrix in Crh−120/+ mice, encapsulating the lower mineralization, greater extensibility and greater randomness of the fibrillar network. In this study we demonstrated that in a mouse model for glucocorticoid-induced osteoporosis, both the fibrillar deformation mechanisms at the nanoscale and the microscale mineralization distribution are significantly altered compared to healthy bone. At the nanometre length scale, we found altered fibrillar deformation in Crh−120/+ mice bone, as well as less oriented fibrils. At the microscale in Crh−120/+ mice, a lower mineral content, increased heterogeneity in mineralization near osteocytes and significant alterations in the three-dimensional mineralized matrix are observed. In contrast, WT bone is more uniformly mineralized, as shown by the qBSE results. We propose that the altered deformation mechanisms at the nanoscale – increased flexibility, lower fibril modulus, altered fibrillar reorientation – in conjunction with altered microstructural toughening mechanisms due to heterogeneous mineralization are critical factors leading to the increased macroscopic fragility in GIOP. The following are the supplementary data related to this article. 3D reconstruction of WT tibia mid-diaphysis showing the vascular network and distribution of osteocyte lacunae. 3D reconstruction of Crh−120/+ tibia mid-diaphysis showing a reduced vascular network and disturbed distribution of osteocyte lacunae. Resorption cavities can be observed along the entire length of the bone and they are segmented in red for better visualization. Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.bone.2015.11.019. Medical Research Council UK; Diamond Light Source Ltd., Diamond House, Oxfordshire, UK; School of Engineering and Material Sciences, Queen Mary University of London, London, E1 4NS, UK; Engineering and Physical Research Council UK, Swindon, UK. The authors state that they have no conflicts of interest. | A serious adverse clinical effect of glucocorticoid steroid treatment is secondary osteoporosis, enhancing fracture risk in bone. This rapid increase in bone fracture risk is largely independent of bone loss (quantity), and must therefore arise from degradation of the quality of the bone matrix at the micro- and nanoscale. However, we lack an understanding of both the specific alterations in bone quality in steroid-induced osteoporosis as well as the mechanistic effects of these changes. Here we demonstrate alterations in the nanostructural parameters of the mineralized fibrillar collagen matrix, which affect bone quality, and develop a model linking these to increased fracture risk in glucocorticoid induced osteoporosis. Using a mouse model with an N-ethyl-N-nitrosourea (ENU)-induced corticotrophin releasing hormone promoter mutation (Crh-120/+) that developed hypercorticosteronaemia and osteoporosis, we utilized in situ mechanical testing with small angle X-ray diffraction, synchrotron micro-computed tomography and quantitative backscattered electron imaging to link altered nano- and microscale deformation mechanisms in the bone matrix to abnormal macroscopic mechanics. We measure the deformation of the mineralized collagen fibrils, and the nano-mechanical parameters including effective fibril modulus and fibril to tissue strain ratio. A significant reduction (51%) of fibril modulus was found in Crh-120/+ mice.
We also find a much larger fibril strain/tissue strain ratio in Crh-120/+ mice (~1.5) compared to the wild-type mice (~0.5), indicative of a lowered mechanical competence at the nanoscale. Synchrotron microCT shows a disruption of intracortical architecture, possibly linked to osteocytic osteolysis. These findings provide a clear quantitative demonstration of how bone quality changes increase macroscopic fragility in secondary osteoporosis. |
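To make the azimuthal-width analysis of the preceding article concrete (background-corrected fibril reflection, Gaussian fit without a baseline, reorientation rate as the slope of FWHM against tissue strain), a schematic Python sketch is given below; the function names and the use of scipy are assumptions for illustration, not the authors' Excel-based workflow.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(chi, I0, chi0, dchi):
    # I(chi) = I0 * exp(-((chi - chi0) / dchi)^2 / 2), deliberately without a baseline term
    return I0 * np.exp(-((chi - chi0) / dchi) ** 2 / 2.0)

def fibril_fwhm(chi_deg, I_total, I_mineral):
    """Fit the background-corrected azimuthal intensity Ic = I - Im,c of the
    3rd-order fibril reflection and return its full width at half maximum (degrees)."""
    Ic = I_total - I_mineral
    p0 = [Ic.max(), chi_deg[np.argmax(Ic)], 10.0]        # rough initial guess
    (I0, chi0, dchi), _ = curve_fit(gaussian, chi_deg, Ic, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(dchi)  # FWHM from the Gaussian width

def reorientation_rate(tissue_strain_pct, fwhm_deg):
    """Average reorientation rate: slope of FWHM versus tissue strain (deg per % strain)."""
    slope, _ = np.polyfit(tissue_strain_pct, fwhm_deg, 1)
    return slope
```

In use, fibril_fwhm would be evaluated for each exposure during loading and the resulting FWHM series passed, together with the video-extensometry tissue strains, to reorientation_rate.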
393 | Fuel emissions optimization in vehicle routing problems with time-varying speeds | Technical developments and the growth in road traffic pose new challenges for research in vehicle routing and scheduling for freight transport.Remote vehicle tracking techniques enable the road traffic data for different times of day and different days of the week to be collected, so as to provide detailed information on transit times for different roads by time of day and day of week.This provides an opportunity to plan vehicle routes and schedules taking time-varying speeds into account.In addition, the growth in road traffic and the use of road freight transport also bring problems of environmental pollution.Concerns about the environmental impact of transport activities have led to new vehicle routing models where the objective is to minimize the harmful effects of transportation on the environment.An increasing number of papers are being published where fuel emissions are explicitly modelled.However many of these simplify the model by assuming that paths between customers are fixed or that the speeds of the vehicles are time-independent.In the model described in this paper, the speed of the traffic on the underlying road network is time dependent.In addition, the path used by a vehicle between a pair of customers and the speeds on the road segments are decision variables.This paper will describe a column generation based tabu search algorithm, which can work together with a solution method for single paths, in order to minimize fuel emissions for Vehicle Routing Problems with time-varying speeds.The algorithm is then used for modelling a distribution operation using real traffic data from a road network located in London.The aim of these experiments is to discover how much reduction in CO2e can be obtained by using the algorithm described in this paper, compared with other approaches that are faster to compute.Experiments are also carried out to determine the effect of allowing more waiting time at customers.The paper is organized as follows.Section 2 contains a review of relevant literature.The problem is described in Section 3, which is followed by a set-partitioning model for the VRP.Section 4 introduces the framework of the column generation based tabu search algorithm, which is used to find a prospective sequence for a set of customers, and goes on to discuss details of the algorithm.The computational experiments and their results are then presented in Section 5.Finally, conclusions are drawn in Section 6, and the main findings are highlighted.In recent years, there has been increasing interest in estimating the environmental effects of vehicle routing policies.A survey of recent work in this area can be found in Eglese and Bektaş.Various models have been proposed for estimating the fuel used by vehicles when travelling on roads.Examples include one published by the European Commission in the MEET report described by Hickman, Hassel, Joumard, Samaras, and Sorenson and the Comprehensive Modal Emissions Model described by Scora and Barth.The CO2 emissions are normally calculated as being proportional to the fuel used.The fuel consumption and hence the emissions may relate to factors such as the vehicle type, weight and speed.Demir, Bektaş, and Laporte provide a comparison of a number of such models.Recent research on minimizing emissions in vehicle routing models can be divided into two main categories: the first is the set of models where time-independence is assumed and the second set 
includes models where the road conditions are subject to traffic congestion and so the time needed to travel along a road segment depends on the time of day.Among the time-independent models, Palmer developed a model where vehicle speeds are inputs to the model and the approach is tested on a case study of home deliveries for grocery stores in the UK.He found an average saving of 4.8% in CO2 emissions was possible compared to using routes that minimize time, but at the expense of a 3.8% increase in the time required.His model does not take vehicle loads explicitly into account, but Suzuki uses a model where load is taken into account and finds that delivering relatively heavy items early in a tour can be worthwhile in reducing the fuel consumption.Several case studies have been reported using time-independent approaches with the objective of minimizing fuel consumption and hence emissions.For example, Ubeda, Arcelus, and Faulin consider emission factors in planning routes for a food delivery operation.They show savings of around 25.5% in CO2 emissions, but this is mainly due to reducing the number of routes needed compared to the original plan.Other time-independent models allow the speeds of vehicles to be decision variables.The approach adopted by Bektaş and Laporte in their Pollution-Routing Problem uses a CMEM-based model and considers both load and speed in estimating a cost function to be minimized.They propose a non-linear mixed integer mathematical programming formulation and show how it can be linearized.The formulation can only solve small PRP instances, but Demir, Bektaş, and Laporte provide an adaptive large neighbourhood search algorithm for much larger PRP instances.Van Woensel, Creten, and Vandaele develop a model showing how queuing theory can be used to describe traffic flows and calculate emissions using the model described in the MEET report.In the set of time-dependent models, Eglese, Maden, and Slater make use of traffic speed information collected at different times on sections of a road network to create a Road Timetable showing the quickest times between origins and destinations starting at different times of the day.In Maden, Eglese, and Black the Road Timetable is used with a tabu-search called LANTIME to minimize the total time required for a distribution operation.Vehicles are assumed to travel at the speed which minimizes their emissions per unit distance unless the congestion indicates that this is not possible, when the vehicles travel at the average speed of the traffic recorded for that road segment at that time.Results from a case study based on the distribution plans for an electrical goods wholesaler in the UK show that CO2 emissions can be reduced by around 7% with this approach.This is because routes with high congestion and hence, enforced low speeds and high emissions, are avoided.Figliozzi also takes into account congestion in minimizing emissions using a model based on the MEET report.An integer programming formulation is presented and a solution algorithm is described which is tested on modified Solomon benchmark problems.In contrast to Maden et al., the model allows vehicles to travel faster than their optimum speed that minimizes emissions if the traffic conditions and speed limits allow.Thus, there are examples where uncongested conditions can lead to increased emissions.Vehicle speeds may also be used as decision variables in time-dependent models.Jabali, van Woensel, and de Kok use a similar model to Figliozzi but with speed as an additional 
decision variable, though without the use of time windows.Their model is based on a complete network where the nodes represent the depot and customers, while the maximum speeds on the arcs linking the nodes are subject to similar profiles.They describe a tabu search heuristic for solving the problem and test it on standard benchmark instances.The results suggest that a reduction of about 11.4% in CO2 emissions can be achieved, but with a 17.1% increase in travel times.Franceschetti, Van Woensel, Honhon, Bektaş, and Laporte follow a similar approach which also takes costs into account in a similar way to Bektaş and Laporte.A mathematical formulation is produced and provides insights on when it is profitable to wait at customers.The model presented in this paper is in the last category of models which take into account time-dependent conditions and where vehicle speeds are decision variables.It is designed for use on a road network where information is available on the speed of traffic on individual road segments at different times of the day.The solution provides the path to follow between customers and the speeds to be applied on each road segment.It thus provides a more detailed model than the one used by Jabali et al.; the path used between a pair of customers may change depending on the time of travel in our model.Also, it allows time window constraints for serving customers which are not included in Jabali et al.If it is assumed that the path used between customers is fixed, then some other recent research on speed optimization is relevant.Fagerholt, Laporte, and Norstad present the Speed Optimization Problem in the context of shipping, provide models to formulate the problem and a solution algorithm.Norstad, Fagerholt, and Laporte provide a recursive smoothing algorithm for the SOP that runs fast and has been shown to be optimal by Hvattum, Norstad, Fagerholt, and Laporte.It is often the case that reductions can be made in the emissions resulting from a distribution operation, but at the expense of more time or cost.There are methods explicitly aimed at modelling this issue through a multi-objective approach.One example is provided by Jemai, Zekri, and Mellouli where an evolutionary algorithm is used to solve a bi-objective VRP where one objective minimizes total distance, while the other minimizes CO2 emissions.Demir, Bektaş, and Laporte consider the bi-objective PRP where fuel consumption and driving time are the two relevant objectives.Finally, there is an emerging strand of research considering vehicle routing problems for alternatively powered vehicles that are designed to be more environmentally friendly.Such vehicles may have a more limited range before requiring refuelling and there may be a limited availability of refuelling points.An example is given by Erdoğan and Miller-Hooks in which they define a “Green Vehicle Routing Problem” where there are additional constraints on how far the vehicles may travel without refuelling and the refuelling stations are located at specific places.They formulate a mixed integer program to minimise the total distance and develop heuristics for its solution.Tests are carried out based on the location of stations supplying biodiesel fuel in a part of the USA.The proposed solution method uses a column generation algorithm that takes advantage of the power of the branch-and-price technique to solve a set partitioning problem.It is based on a branch-and-price-based large neighbourhood search algorithm for the Vehicle Routing Problem with Time 
Windows proposed by Prescott-Gagnon, Desaulniers, and Rousseau. The current solution is destroyed in the destruction step by selecting one of four operators randomly. This leaves a set of partial routes and isolated customers. The large neighbourhood then contains all feasible solutions that are compatible with the partial routes. A heuristic column generation algorithm is used to reconstruct the solution, where tabu search is used to generate columns of negative reduced cost. Step 1: Generate an initial solution with the Clarke and Wright Savings Algorithm. Go to Step 2. Step 2: Apply a destroy operator to determine the neighbourhood. Go to Step 3. Step 3: Solve the RMP, i.e. a set-partitioning problem formulated as an LP relaxation based on the current set of columns. If the stopping rule for the column generation process is met, go to Step 6; otherwise, go to Step 4. Step 4: Apply tabu search to generate new columns. Go to Step 5. Step 5: Find the fuel emissions for the new columns using NHA. Go to Step 3. Step 6: If the solution is not integer, apply a branching strategy to get an integer solution. While the number of iterations for the algorithm is less than a specified maximum number, go to Step 2; otherwise, stop. Details of the algorithm corresponding to these steps are discussed in the following sections. Additional details can be found in the PhD thesis. There are four destroy operators used to modify the current solution in order to diversify the search. The destroy operators are analogous to the ones presented by Prescott-Gagnon et al. and more details are given in Qian. With each destroy operator, only part of a complete route will be removed. The column generation process has already been outlined at the beginning of Section 3. In Step 3 of the algorithm framework, the stopping rule is that either an improved integer solution has been found or the value of the objective function has not been improved for a certain number of iterations. Two simple operators are used to reconstruct new routes, namely removing and inserting a customer. Every time a customer node is removed/inserted, the reverse move is tabu for the next tbmax iterations. Hence, at one iteration, a move is defined as removing a customer and inserting another customer at a possible insertion place. All possible removals and all possible insertion places are tested to find the best one. Only feasible moves are allowed in the search process, so the capacity and time window constraints have to be checked for each customer insertion, and only the time window constraints for customer removal.
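As an illustration of Step 3 and Step 6, the sketch below solves a toy restricted master problem (the LP relaxation of a set-partitioning model over a small column pool) and then applies the heuristic branching rule described further below, i.e. repeatedly fixing the route variable with the largest fractional value at 1 and re-solving. The column pool, the emission costs and the use of scipy are purely illustrative assumptions, not the implementation used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def solve_rmp(costs, cover, bounds):
    """LP relaxation of the set-partitioning RMP: min c.x s.t. cover @ x = 1, 0 <= x <= 1."""
    return linprog(costs, A_eq=cover, b_eq=np.ones(cover.shape[0]),
                   bounds=bounds, method="highs")

def heuristic_branching(costs, cover):
    """Repeatedly fix the route variable with the largest fractional value to 1
    and re-solve, until the LP solution is integral or becomes infeasible (Step 6)."""
    bounds = [(0.0, 1.0)] * len(costs)
    while True:
        res = solve_rmp(costs, cover, bounds)
        if not res.success:
            return None                      # no feasible integer solution found
        x = res.x
        frac = [(xi, j) for j, xi in enumerate(x) if 1e-6 < xi < 1 - 1e-6]
        if not frac:                         # solution already integral
            return np.round(x)
        _, j_star = max(frac)                # variable with the largest fractional value
        bounds[j_star] = (1.0, 1.0)          # fix it to 1 and re-solve

# Toy column pool over 3 customers: three "pair" routes and three single-customer
# routes, with illustrative CO2e costs; rows of `covers` are customers, columns routes.
route_costs = np.array([100.0, 100.0, 100.0, 80.0, 80.0, 80.0])
covers = np.array([[1, 0, 1, 1, 0, 0],
                   [1, 1, 0, 0, 1, 0],
                   [0, 1, 1, 0, 0, 1]])
print(heuristic_branching(route_costs, covers))  # LP is fractional, branching makes it integral
```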
One of the advantages of applying a heuristic method to solve a VRP in a static network is that only the sub-routes changed by a neighbourhood move have to be re-evaluated. However, in a time-varying road network, the evaluation of a neighbourhood move is not so straightforward. The fuel emissions from any customer ci to customer cj may change according to different departure times from customer ci, so the re-evaluation process is no longer simply an addition and subtraction of a few static values relating to the links between customers that have been changed. The time dimension has to be taken into account, which leads to the need to re-evaluate significant parts of the affected routes. Harwood, Mumford, and Eglese have examined different ways to estimate the cost of a neighbourhood move within a single tour with time-varying traversal speeds, when the objective is to minimize the total time. The tour is divided into three parts according to the nodes being moved: the pre-change part is the tour from the depot until the first node to be changed; the post-change part is the tour from the last node to be changed back to the depot; and the remaining section is called the changed part. To determine whether a move leads to an improvement, the pre-change part of the tour does not need to be recalculated and, provided the First-In-First-Out (FIFO) property holds, the post-change part of the tour does not need to be recalculated either. Furthermore, the necessary and sufficient condition for an overall improvement to be achieved is that the tour from the depot to just before the post-change part should be improved. Therefore, only the changed part of the routes has to be re-evaluated to find out whether a neighbourhood move will improve the cost, if the FIFO property is maintained in a time-varying network. However, the post-change part of the route still has to be recalculated in order to obtain the overall improvement, once a neighbourhood move has been identified as leading to an improvement. When the objective is to minimize emissions or costs that depend on the time of travel, then these results do not apply. Suppose the optimal solution has been found from an origin node c0 to node cj which passes through ci. That solution may not contain the optimal solution for a path from c0 to ci. This is because it may be better to travel faster/slower than the optimal speed, with lower fuel efficiency, from c0 to ci, so as to arrive at ci earlier/later to avoid the congestion from ci to cj. Consequently, any neighbourhood change requires the whole route to be re-evaluated to find the exact change in fuel emissions, which makes the problem of minimizing fuel emissions in a time-varying network more difficult to solve. In order to evaluate neighbourhood moves quickly, approximate values of the fuel emissions between customers are used to evaluate each move in the tabu search procedure. Using these approximate values means that only the links that have changed in the neighbourhood move need to be considered in estimating the change in emissions, and so the estimate can be calculated quickly. When the solution obtained by the RMP in the last iteration is not integer, a heuristic branching strategy is applied to derive an integer solution. The branching strategy is simply to fix the decision variable with the largest fractional value at 1, and solve the linear problem again. This process is repeated until the solution is integer or no feasible solution can be found. If any new integer solution can be obtained by applying the branching strategy, no matter whether it is better or worse than the initial solution at the beginning of the current iteration of the large neighbourhood search, the solution will be used for the next iteration. This can help to diversify the search. If there is no feasible integer solution, the initial solution at the beginning of the current iteration of the large neighbourhood search is used again in the next iteration. The VRP algorithm is tested in this section with real traffic data for a road network located in London. The data set used includes the locations of 60 stores and a depot. The depot and the stores are located in the southeast of London and the locations are based on those used by a well-known supermarket company. A map showing the locations is provided in Fig.
1.Customer nodes 51–60 are located in the London congestion charge area.There are 208,488 nodes in the London network, which are linked by 219,880 bidirectional road segments and 37,651 unidirectional road segments.Each bidirectional road segment is replaced by a pair of unidirectional arcs, one in each direction.Hence, the network has 208,488 nodes and 477,411 unidirectional arcs.The distribution of arc lengths has a mean of 92.2 m with a maximum of 2848.7 m.About 70% of the arcs are shorter than 100 m, and only about 9% of arcs are longer than 200 m.Traffic information was supplied from ITIS Holdings.From information obtained by tracking fleets of vehicles in the area, observations of speeds were obtained for all road segments.The 24-h period during a weekday was divided into 15 different time slots.Within each time slot the observed speeds were relatively stable, but speeds could be very different in different time slots due to the way that traffic congestion builds up at different times of the day.For each road segment, the mean speed in each time slot was taken to be the maximum speed that could apply to any vehicle starting to travel over the road segment within that time slot.The stores are divided into five sets named A, B, C, D and E, and each set contains 25 stores or customers.Specifically, set A contains customers 1–25, set B contains customers 36–60 and set C contains customers 11–35.In set D, the customers are clustered and set E contains a randomly selected customer set.Both set B and set D include all of the customers located in the congestion charge area.The demands of customers are between 4 and 8 cages.The average service time for unloading one cage is two minutes.For some runs, no particular customer time windows were applied.In these cases, any store may have a delivery between 7 am and 5 pm which corresponds to the drivers’ shift time.For runs where customer time windows are applied, the earliest start times are randomly generated by following a uniform distribution with associated range between 7 am and 2 pm.The time window intervals are uniformly distributed between 1 and 5 h. Instances with no time windows are labelled by the set of customers with the addition of “0”, and the instances with time windows are labelled with the addition of “1”.The details of the customer demands and time windows are given in Appendix A.Due to London night time regulations for freight deliveries, the starting time from the depot is set to be after 7 am.The driver shift time is 10 hours, so the finishing time at the depot must be by 5 pm.Waiting time at each customer node or the depot is initially set to be a maximum of 5 minutes.A fleet of homogenous vehicles is used in the case study, which are rigid Diesel HGVs > 32 t subject to the EURO V emission standard.The corresponding speed-fuel curve is shown in Fig. 
2.The valid speed range is between 6 kilometre/hour and 90 kilometre/hour.Using this fuel consumption curve is an approximation as it does not take into account effects like acceleration, deceleration and the gradients of the roads.However, in the case of road gradients, vehicles depart from and return to the same depot, so there is no overall change in altitude for a vehicle for each route.This means that there will be some cancellation of the effects of uphill gradients requiring more fuel, while downhill gradients require less fuel.Also no allowance has been made for the changing weight of the load carried.The fuel used in litres is converted to the equivalent greenhouse gas effect of CO2 in kilogram by multiplying by a factor of 3.1787.This was the factor proposed by the UK Government in 2010 in its guidance for reporting emissions.The approach described in this paper was applied to the instances described and a summary of the results is provided in Appendix B.This shows the sequence of customers on each route, the total CO2 emissions, the distance travelled, the time required and the total waiting time for each instance.There are two sets of results: one set for instances without customer time windows and one set for instances with time windows.Each vehicle is only allowed to undertake one trip.The results for the instances without customer time windows show that the average trip takes between 2 and 3 hours for each instance.If more than one trip per day were allowed, then the number of vehicles could be reduced, otherwise the vehicles would be available for other duties following the completion of their routes.The results show that the set of instances with customer time windows require about 20% longer computational running time to produce the solutions compared to the set of instances with no customer time windows.Overall, the solutions for the set of instances with customer time windows produce just over 1% more emissions compared to the set of instances with no time windows, even though a longer waiting time is allowed for the set of instances with customer time windows to ensure feasible solutions.In the following subsections, various aspects of the problem and solution method are investigated.The first subsection investigates the impact of the speed adjustment and path selection process, waiting time and starting time based on the tests on the five customer sets without time windows.The speed adjustment and path selection process refers to the path chosen and speeds adopted on the road network in travelling between specific customers.Then, the original method is compared against a simpler method, which generates customer sequences based on static information.Finally, the effect of the time window constraint is examined.NHA, as described in Qian and Eglese, finds the best path between two customers and the best speeds for the vehicle when leaving the first customer at a particular time.NHA thus has a path selection process and a speed adjustment process.The contributions from these different processes are examined in this section.The solutions from three methods are presented in Table 1.The column labelled ‘With PS & SA’ shows the solutions obtained by the model proposed in the first part of this paper using NHA; the ‘PS Only’ column shows the solutions obtained by the model without the speed adjustment process; and the last column shows results from the model with neither the path selection process nor the speed adjustment process being used.Without the speed adjustment 
process, the vehicle is assumed to travel at the speed allowed by the current congestion up to a specified maximum speed, corresponding to the optimal speed of the vehicle; while without the path selection process, the fastest path is used.However, the sequences of customers in all the methods are determined by the approximated fuel emissions matrices.The path selection and speed adjustment processes reveal some contribution to the CO2e emissions reduction.On the average, about 10 kilogram CO2e emissions can be saved by applying both the path selection and speed adjustment processes, equivalent to a reduction of about 2–3%.In particular, the reduction for B_0 is up to 15 kilogram, which is 4% less than ‘Without PS & SA’.Compared to the speed adjustment process, the path selection process plays a more significant role in reducing fuel emissions.Around 4 kilogram extra CO2e is emitted on average by choosing a promising path and travelling with the fastest speeds allowed by the traffic conditions up to an optimal speed; while in the worst case B_0, another 12 kilogram emissions is produced by following the fastest paths at the fastest allowed speeds.One reason that the performance of the speed adjustment process does not have a greater effect could be the shape of the fuel curves used in the experiments.As illustrated in Fig. 1, the maximum valid speed is 90 kilometre/hour, whose fuel efficiency is close to that of the optimal speed 65 kilometre/hour; and the fuel efficiency reduces quickly with the decrease of the speed.Therefore, it would not be wise to slow the speed any more than necessary.If the fuel emissions curve is very sensitive to the speeds, or increasing the speed makes the fuel curve steep, then the speed adjustment would be worth applying; otherwise, just carefully selecting a path, then travelling as fast as traffic conditions allow up to an optimal speed still provides a good solution.All experiments so far are based on the condition that all the deliveries should start at 7 am, and waiting time at each customer node as well as the depot cannot exceed 5 minutes.Note that 7 am is the start of a peak period, when the road traffic is getting busy.Vehicles may be caught in traffic congestion, which results in extra fuel emissions.The fuel emissions may be improved by allowing longer waiting, or starting after the rush hour.Two scenarios are tested: one is starting at 7 am, but allowing waiting up to 4 hours; and the other is starting at 10 am, and with maximum waiting time being 5 minutes.They are compared to the original solutions with the starting time being 7 am and maximum waiting time being 5 minutes.Results are summarized in Tables 2 and 3.For each instance, seven vehicle routes are required to serve all customers.By allowing more waiting time, about 2–3 % CO2e can be saved.However, it is at the expense of 16–30 hours’ total waiting time.In instance C_0, for example, one complete route starts from the depot, then serves customers 2, 7, 1, 17 and returns back to the depot.By allowing a longer waiting time, the vehicle and driver wait at customer 2 for about 1.8 hours and wait at customer 1 for about 3 hours to avoid peak time congestion.As a result, the total CO2e for the route is reduced from 85 kilogram to 81 kilogram.In practice, the effect on costs of the extra waiting time will depend on how the drivers are being paid and whether any of the additional waiting time coincides with their breaks.For instance for B_0 and D_0, the improvement by waiting is smaller compared 
with others, but the total waiting time is also shorter, which is about 17 hours while others are more than 20 hours.When the starting time is changed to 10 am and the morning peak is avoided, results from instances A_0, C_0 and E_0 show that about 1–2% CO2e can be saved.However the results for B_0 and D_0 are slightly worse than those for a starting time of 7 am.This may be because both instances contain 10 customers located in the congestion charge area, and the traffic flows in the city centre keep busy during the day time.It may also be because more travel is moved towards the evening peak when traffic is more congested.In practice, the starting time is restricted by many factors, such as local authority regulations and the driver shift times.Attempts may be made to provide night time deliveries to avoid daytime congestion, but other problems like noise, safety and out-of-hours deliveries may mean that night time deliveries are not feasible.In the original method, the customer sequence is determined by approximate fuel emission matrices, the values of which are updated from one iteration to the next.This requires repeated application of NHA within the method and results in high run times as shown in Appendix B.In order to reduce the computational burden, a simpler method is proposed for comparison that only considers static information when determining the vehicle assignment and the customer sequence.The VRP is first solved by minimizing the total distance, i.e. using the distance matrices within the column generation based tabu search algorithm.Then, NHA is applied to decide on speeds and paths for the pre-determined complete routes.In other words, the time-varying speeds are not considered until the sequences of customers have been decided.A comparison between the original method and the distance based approach in terms of solution quality as well as computational time is shown in Table 4.As shown in the table, DBA is able to obtain a solution close to the original method for instances A_0, C_0, D_0 and E_0.The performance of DBA for instance B_0 is not as good as for other instances, but it is only 1.6% worse than the solution obtained by the original method.In general, minimizing distance seems to be a good criterion for obtaining a good sequence of customers, before using NHA to find the detailed paths and speeds to be used when travelling between customers.Meanwhile, the running time for DBA is much less than the original method.The running time of DBA is about 5–6 minutes, while that of the original method is more than 20 hours.This is because NHA is only used at the end of the process, after the customers have been allocated to routes and their sequences have been determined.DBA is able to get solutions that are very close to those obtained by the original method, but it needs less than 0.5% of the running time of the original method.For the instances without time window constraints, DBA provides good solutions in terms of running time as well as solution quality.However, DBA has the potential risk of missing time windows by using static travel time data in scheduling the customer sequence.The impact of the time window constraint will be discussed in the next section.All the above discussion is based on the instances without time window constraints, but only with 7 am as the earliest starting time and 5 pm as the latest finish time for the drivers’ working shift.In this section, the instances with time windows are tested.In order to guarantee a feasible solution for each 
instance, the maximum waiting time is set at 4 hours.Table 5 shows results obtained by the original method as well as the simpler method.We note that DBA has again found solutions close to those obtained by the original method for instances C_1, D_1 and E_1 within a relatively short running time.However, the performance of DBA on A_1 is about 8% worse; and it cannot even find a feasible solution for instance B_1.DBA does not consider the time-varying speeds when scheduling the vehicle assignment and the customer sequence, and the travel time may be underestimated.Consequently, there is a risk that a vehicle has to speed up and follow a higher-emission strategy in order to satisfy the time window constraints or even cannot meet the time window constraints, when the time-varying speeds are taken into account.In this paper, a column generation based tabu search algorithm is proposed to solve a vehicle routing problem where there are time-varying speeds and the objective is to minimize fuel emissions.The main findings can be summarized as follows:The proposed algorithm can produce sets of routes for an urban distribution problem with a reduction in GHG emissions of about 3% compared with an approach where the objective is to minimize total time.Within NHA, the path selection process plays a more significant role in reducing fuel emissions, when compared with the speed adjustment process.When the speeds are allowed to be as fast as the traffic allows up to a maximum speed, the best fuel emission solution is only slightly worse than the one obtained using NHA, where the speeds are decision variables.By allowing a long waiting time at customer nodes, vehicles can avoid being caught in congestion, and the fuel emissions can be reduced.However, the costs of such waiting, such as drivers’ salaries, have to be considered.An alternative way to attempt to avoid congestion is to depart later to avoid the morning peak period, but this may not significantly reduce the fuel emissions.Minimizing distance can be an effective way to obtain a good sequence of customers.DBA, which determines the customer sequences by minimizing distance, has a good performance on instances without time windows, but there is a risk that it could fail to find a feasible solution, when it is applied with time-varying speeds.In this case, some simple repair strategy could be used to obtain a good feasible solution. | The problem considered in this paper is to produce routes and schedules for a fleet of delivery vehicles that minimize the fuel emissions in a road network where speeds depend on time. In the model, the route for each vehicle must be determined, and also the speeds of the vehicles along each road in their paths are treated as decision variables. The vehicle routes are limited by the capacities of the vehicles and time constraints on the total length of each route. The objective is to minimize the total emissions in terms of the amount of Greenhouse Gas (GHG) produced, measured by the equivalent weight of CO2 (CO2e). A column generation based tabu search algorithm is adapted and presented to solve the problem. The method is tested with real traffic data from a London road network. The results are analysed to show the potential saving from the speed adjustment process. 
The analysis shows that most of the fuel emissions reduction can be attained in practice by ordering the customers to be visited on the route using a distance-based criterion, determining a suitable path between customers for each vehicle, and travelling as fast as the traffic conditions allow up to a preferred speed. |
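The "travel as fast as traffic allows, up to a preferred speed" strategy and the CO2e conversion used in the case study above (a factor of 3.1787 kg CO2e per litre of diesel, with an optimal speed of 65 km/h) can be sketched as follows; the piecewise fuel-rate function and the arc data are illustrative placeholders, not the fuel curve or Road Timetable actually used in the paper.

```python
# Hypothetical sketch: emissions along a fixed path when the vehicle travels at
# min(congestion speed, preferred speed) on each arc, as in the "PS only" variant.
CO2E_PER_LITRE = 3.1787   # kg CO2e per litre of diesel (UK 2010 reporting factor)
PREFERRED_KMH = 65.0      # fuel-optimal speed for the vehicle class, as stated in the text

def fuel_litres_per_km(speed_kmh):
    """Placeholder U-shaped speed-fuel curve with its minimum near 65 km/h;
    the real curve for a EURO V rigid HGV > 32 t would be substituted here."""
    v = max(speed_kmh, 6.0)                       # 6 km/h is the lowest valid speed
    return 0.05 + 9.0 / v + 1.64e-5 * v * v       # purely illustrative shape

def path_emissions(arcs, depart_time_h):
    """arcs: list of (length_km, congestion_speed_by_hour), where the second item
    maps an integer hour of day to the mean traffic speed on that arc (km/h)."""
    t = depart_time_h
    total_litres = 0.0
    for length_km, speed_by_hour in arcs:
        cap = speed_by_hour[int(t) % 24]          # traffic speed in the current time slot
        v = min(cap, PREFERRED_KMH)               # never exceed the preferred speed
        total_litres += length_km * fuel_litres_per_km(v)
        t += length_km / v                        # advance the clock by the travel time
    return total_litres * CO2E_PER_LITRE, t

# Two illustrative arcs with different congestion profiles (km/h by hour of day).
free_flow = {h: 80 for h in range(24)}
rush_hour = {h: (20 if 7 <= h <= 9 else 60) for h in range(24)}
kg_co2e, arrival = path_emissions([(5.0, rush_hour), (12.0, free_flow)], depart_time_h=7.5)
print(f"{kg_co2e:.1f} kg CO2e, arrival at {arrival:.2f} h")
```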
394 | Adaptive content recommendation for mobile users: Ordering recommendations using a hierarchical context model with granularity | Mobile phones have become one of the most popular devices for browsing information, and services for mobile users are increasingly adaptive and personalized. Recent advances in light-weight reasoning techniques promise to enable mobile phone users to interact with the available masses of data in a more efficient and satisfying way. Mobile personalization systems have several shortcomings when compared to desktop-based systems, especially regarding computational power and screen size. As a consequence, it is difficult to process or display larger quantities of data on a mobile device. In contrast, mobile devices have a clear advantage regarding sensory equipment and usage: they are used in everyday life, not only in office-like environments. Thus, they have access to more of the user’s personal information, which can be employed for personalization. We propose an approach for efficiently retrieving and personalizing content relevant to the user’s needs. We retrieve content related to a query, find user-preferred items among the retrieved content, and then show these results to the user. Our approach is to map mobile user context and properties of content onto a knowledge-based context model. A modified reasoning engine allows us to compute semantic similarity between content and context using the context model; internally, it uses a hierarchical data structure with size-based granularity. The context model stores three types of tags: tags that signify temporal and spatial contexts, such as ‘Morning’ or ‘Seoul’, and tags that purely indicate content, such as ‘Baseball’ or ‘Jazz’. The context model also includes hierarchical information about these tags, e.g., that ‘Seoul’ is in ‘Korea’, and that ‘Jazz’ is a type of ‘Modern Music’. During runtime, this data structure is used to represent mobile user context and properties of content, and to extract semantically related tags of the measured context and content. We compute the priority of content based on the similarity between the measured context and the content. The paper explores two variants of measuring similarity in the context model, depending on whether size information is available or not. We evaluate our approach against a range of algorithms for retrieving content with respect to the proposed context model. According to our evaluation, it is advantageous to use semantic context rather than raw context data. In general, the best algorithm utilized the proposed context model, which retrieves and uses semantically relevant terms of context. This method achieves a precision of 70% and a pooled recall of 46%. The difference in precision between our approach and the baseline algorithm is up to 22 percentage points, and up to 17 percentage points in pooled recall. Our approach performs fast enough to guarantee real-time processing on a mobile device and requires only slightly more time to configure and explore. Our evaluation indicates that the proposed approach is effective in advancing the reliability of personalized content recommendations on mobile devices. The organization of the rest of this paper is as follows: Section 2 is devoted to analyzing related works. The hierarchical context model derived from and its modified reasoning engine are presented in Section 3. In Section 4, the proposed approach for recommending mobile content based on user context is presented. The experiments and evaluations of the approach are shown in Section 5. Conclusion and future
works are presented in Section 6.In this section, we introduce several works related to the proposed recommendation approach.The features provided by the approach are based on various research areas, including context-awareness, content recommendation and personalization.Technologies for making systems context-aware can be used to personalize services or recommend user-centered content, as they enable systems to infer user interests and needs in a given situation.Xu Sun showed that spatial context can be used to improve mobile user experience by providing personally relevant mobile services .Location-based services are one type of context-aware service that takes advantage of location-sensing technologies and location information.In , a system that guides mobile users based on spatial, personal and social contextual information was presented.The system exploited sensors, such as GPS and history, obtained from a mobile phone.Context-aware applications usually rely on a data structure or information repository called the context model, which handles the processing and abstraction of contextual information.Context models were designed to describe contextual situations and to represent semantic relations between context in order to allow applications to make use of this information.Since the first context models, numerous proposals have been put forth .Over the years, context modeling approaches proceeded from the first tree-based models able to run on mobile phones and sensor nodes to approaches for querying fully-fledged ontologies stored on a remote server.A key characteristic of context models is the inherent uncertainty that results if sensor measurements are the main sources of information.The inherent uncertainty of contextual information and how to address it has received considerable attention in the area of context modeling .We adopted an interval-based approach for modeling uncertainty in our system.Moreover, we employed partially defined levels of granularity, first proposed in , to allow representation of sizes in a manner that respects uncertainty of information obtained from users.The context-aware application model underlying the approach used in this article, was first proposed by Jang et al. 
.They suggest that the context of an interaction can be characterized by asking who interacts when and where with whathow and why, the 5W1H questions .In order to allow formal reasoning in 5W1H context models, the modeling language of was later generalized and complemented with a formal semantics based on partial order reasoning, thus yielding a full-fledged decidable logical reasoning framework .From the earliest location models , most context modeling approaches include not only a collection of contexts but also basic inferential capabilities.A context model can be a graph structure as in , then reasoning is implemented by following the edges of the graph.It can be a set of statements or filters as in .Or it can be a full-fledged ontology with one or more reasoning mechanisms.Our approach is a combination of an ontology-based context model , founded upon the theory of mereotopology widely used in ontology research, with a light-weight graph-based reasoning mechanism.Using our own light-weight graph-based mechanism has two advantages: first, all reasoning required for personalization, can be computed on the phone, so that potentially privacy-sensitive context data is not transmitted; second, using our own reasoning mechanism allows us to integrate the computation of context similarity, the key notion of our recommender system, directly into the reasoning mechanism.Our context model does not directly re-use existing ontologies, but considerably overlaps with standard ontologies as it is founded upon basic concepts of mereotopology , a theory that underlies current standard ontologies and reasoning systems, in general : mereotopology underlies, for instance, RCC , the Region-Connection-Calculus, and SUMO , the Suggested Upper Merged Ontology.While we did not directly re-use existing ontologies for the prototype developed for this study, and a discussion of semantic information retrieval is clearly beyond the scope of this article, semantic information retrieval mechanisms can be a valuable source for importing information into our system.WordNet , SUMO and other semantic information sources could be extracted or abstracted, perhaps automatically, with some seed concepts, so that a part could be used in our mobile application for recommendations.However, the approach presented in this paper is neutral with respect to the information source, the only strict requirement is the specific mereotopological basis upon which we build.This axiomatic basis for reasoning about context has been studied from a formal point of view and has also been practically employed in a wide range of application areas, from monitoring industrial facilities to preservation of business processes .Any knowledge repository that is compatible with these concepts, including end-user defined repositories and folksonomies could be applicable.In our prototype, the context model was developed by hand using public information repositories such as Wikipedia.2,Research into personalized content recommendation relates to the proposed approach as many recommendation systems use machine-learning technologies to select personalized content.In , a web recommender was introduced in which behaviors of users are modeled by constructing knowledge based on temporal web access patterns.It utilizes fuzzy logic to represent real-life temporal concepts and to construct a knowledge base of users’ behaviors.A collaborative filtering method was used to improve the recommendation performance of electronic program guides in .Yu-Cheng et al. 
proposed a community-based program recommendation .This recommendation analyzed user habits and categorized users based on similar habits.These users were classified as belonging to specific communities in order to find and recommend programs.Many resources can be used as information when recommendation systems model user behaviors for personalization.In , social information, such as community and friendship, were exploited to find user interests.Because social friends often have similar interests, social networks have become major resources for personalization.User histories of behavior, habits, and input also are used for inferring the next user interest.Shin et al. proposed a method that integrates user behavior profiling and content rating , while Bjelica introduced a recommender system that efficiently learns the user’s interests based on user modeling.In order to provide preferred content to mobile consumers, we need to consider their information needs.Several studies have shed light on the types of information searched when consumers use a mobile search engine. ,investigated the characteristics of mobile search by analyzing the log of search queries.Karen Church states that the personal nature of mobile devices is important for mobile searching .Kamvar et al. ,and Yi et al. ,analyzed the patterns of search behaviors based on categorization of queries.In , both interests of users and mobile context, such as location and time, were found to be closely related to mobile users’ information needs.Following the analysis of context modeling approaches and mobile user information needs, we concluded that a context model for personalized content recommendation for mobile users should contain knowledge about time, space, and content descriptions based on user interests.The context model is provided in a logical language format and represented internally in the form of directed acyclic graphs describing time, space, and general content or interests in terms of partial order hierarchies, implementing a mereological part-of relation in different domains of context.Context tags are added as nodes in the graphs, and where available, coarse size information can be given with a tag.During runtime, we map user context, such as sensor value and profile, onto the graph, and calculate the similarity between two context tags based on specificity.The key idea of the proposed approach is to find the most specific context tag of which both, the tags describing the user context and the tags describing a given content are part of; we then compute how close the user context and content context are, based on how specific this most specific common containing context is.The rationale behind this idea is simple: two users who share a very specific context are close, while users who share only an unspecific, generic context are more distant.Take an example of spatial context: a user A is uploading a comment while standing in front of a picture in a museum in Paris in Fall 2009; in Summer 2011, another user B is standing in the same spot, looking for information.Our recommender system guesses that A’s comment should be more relevant to B in this context than information contributed by a user C from a different location in the museum, where only the museum is the common spatial context: the area in front of the picture is a more specific common context.We develop two methods for computing specificity and consequently similarity: a graph-based method, which uses the length of paths in the graph structure of the 
context model, and a size-based method, which employs size information given explicitly.Information about the context of a mobile user can be obtained from sensors of a mobile phone, e.g., GPS yields latitude, longitude, and altitude coordinates; information about the context of content can be obtained from the tags attached to it.However, it is difficult to obtain meaningful information from the absolute values of a sensor signal, and therefore difficult to understand the semantic relation between a tag describing content and sensor data.For the example of geographic location: we need a location service to know that the GPS coordinate is in Seoul.It should be noted that directed acyclic graphs in contrast to trees allow nodes to have several parent nodes, that is a context can lie in the overlap of two contexts.For example, ‘October 23rd, 2011 13:00’, obtained from a time-stamp, belongs to ‘October 23rd, 2011’ and ‘Lunch Time’.Transitive reasoning is implemented conveniently by following the edges in the graph.So, ‘Weekend’ and ‘Fall’ can be retrieved from a time-stamp ‘October 23rd, 2011 13:00’ following the upper node ‘October 23rd, 2011’.A second remark regards the semantics of spatially or temporally scattered entities: ‘Baseball Field’, for instance, is spatially a scattered region consisting of the space occupied by all baseball fields; ‘Lunch Time’ is a periodically reoccurring event temporally consisting of the intervals of all individual lunch times.We need entities like these to be represented in the context model, however, care must be taken to not confuse such scattered intervals/regions, which are the sum of separate convex intervals/regions, with sets of intervals/regions.In this hierarchical context model, more generic types of context intuitively have a tendency to be located in the upper levels, towards the root node of the graph, whereas more specific types of context are located in the lower levels further away from the root.To illustrate this intuition for the case of the three context domains:If we want to allow for personalized and for incrementally changing user-defined context models, it should be less relevant, which of the models is more detailed.We therefore want to keep our assumptions about the quality and detail of context models as low as possible.A way to do this, which is also employed in tag clouds, is to weigh context tags using a measure of cardinality or size.From this idea we formulate the size-based notion of specificity.The main idea of our context model is that levels of sizes stratify the hierarchical context model in a way that is more neutral, compared to the graph-based method.It depends less on an equally detailed view of the world and is thus closer to the way folksonomies are constructed, namely incrementally with different conceptual resolution in different parts.We use tags to distinguish and to retrieve information as necessary.Consider the following example: a person in his hometown, an expert of the area, might have content tagged with concepts from a personal ontology of high detail with distinct, detailed descriptions for locations of his town and area; in contrast, an international traveler coming to the area, a layperson with respect to knowledge of the area, might use a range of descriptions at different levels of detail, loosely assembled into a collection of tags.Size information can help in this case, and it is easy to acquire, even if only in terms of rough comparisons as shown in Fig. 
1.The proposed approach takes advantage of the similarity between mobile user context and properties of content to select user-preferred content.Both the mobile user context and properties of content are represented as context knowledge nodes in the hierarchical context model; the priorities of retrieved content are determined on the model based on the similarity of the nodes.To compute the similarity, we extract relevant terms of context and content property from the context model.Then, we compute the similarity of content based on the LCS node, which subsumes relevant nodes.Finally, we select user-preferred content according to their similarity.The overall procedure of our approach is illustrated in Fig. 3.First, the context model proposed in Section 3 is configured on a mobile device.After a user inputs a search query, content is retrieved using the query and context is obtained from the mobile device’s sensors.Then, a system using the proposed approach maps the content and context onto the context model in parallel.It retrieves relevant terms of the mapped content and context, respectively.It computes the similarity of the retrieved content and the obtained context, before selecting content based on the similarity.Finally, it displays the selected content on the mobile device.We also map properties of content onto the context model.We assume the user is searching for content and provides only minimal information, such as ‘History’ or ‘Comment’.Our system then retrieves all content that matches the query, from the database.Additionally, we retrieve information about each content, such as tags and spatial information.Then, they are mapped onto the context model as we did before with the mobile user context.The information about content mapped onto the proposed context model is also used to extract relevant terms.From the hierarchical context model, we extract relevant context knowledge by exploiting the mapped nodes of context and properties of content.The relevant terms are ancestors of the nodes on the model.Because the meaning of a model’s link is ‘part-of’, upper layer context knowledge involves lower layer context knowledge when they are connected.Upper layer context involves diverse content, but correlation among the content is low.In contrast, lower layer context knowledge involves a smaller range of content than upper context knowledge, but correlation among the content is high.For instance, photos taken in Europe are likely to include photographs of Paris as well as Rome.However, photos taken in Rome are more correlated than photos taken in Europe.Based on the extracted relevant terms located in the upper layer, we compute similarity of content and context.We re-rank the retrieved content according to the similarity between the content and user contexts.Our approach to re-ranking content is to compute similarity between nodes of content and nodes of context on a context model graph, and to re-rank the content according to the similarity scores.The similarity is estimated by Eq. for each content and user context.After computing similarities of all retrieved contents, the contents are re-ranked in order of the similarity scores.Finally, we select the top five k to show to the user as content related to the user’s preference and situation.We implemented a system that uses our approach to recommend photo content related to an input query and user context.As shown in Fig. 
5, we display selected photo content on a mobile device.The system initializes the context knowledge model first.Fig. 5 shows a part of the configured context model example.After a user inputs a query for search, the system retrieves photos from a web-based content sharing database.It applies the proposed personalization approach to the retrieved photos, and then displays selected photos.The thumbnails of recommended content are shown at the bottom of the screen, as shown in Fig. 5.A user is able to enlarge a photo from the recommended list.The rest of the recommended content is displayed if a user scrolls from side to side.The system provides additional information, such as photo tags and relevant contexts.Fig. 5 shows the tag cloud of recommended photos.It is possible to use a tag as a query for a new search.Through this system, we recommend personalized content with additional information.In order to verify the suitability of the proposed approach, we evaluated the approach’s results using the developed system.We checked the availability of the context model on a mobile device and compared the appropriateness of personalized content results with other approaches.Mobile Device.In this evaluation, we used a smartphone equipped with a GPS sensor as a mobile device.It contained a 1 GHz single-core CPU.Users could input a user profile on the mobile device before starting our system.The user profile was compiled with user context and sensor data.Data Set.We used photos as content to provide to a mobile user.The photos were stored on a web-based photo sharing service, Flickr.5,To exploit spatial information of content, we only used geo-tagged photos containing latitude and longitude.The additional information about the photos, such as tags and timestamp, also were used.The number of retrieved photos per query was 50, and a photo had 4.6 tags, on average.We took advantage of geocoding to convert latitude and longitude values to names of regions, and used the names as tags.To evaluate the results of the proposed approach, we distributed our system to 10 participants for a week.We recruited university students, staff and graduate students.We selected participants who did not have knowledge related to this work.Among the participants, 8 participants are male and 2 participants are female.Their ages range from 22 to 36, and the average of their age is 29.3.They were experienced in image search using mobile devices.The number of queries used by the participants totaled 74.In order to make certain that situations were natural, we did not set a limit on the usage of the system.Instead, we recorded logs of sensor values of a GPS and a timestamp; participants freely recorded things that they wanted to find in this diary.We generated a context knowledge model manually.As shown in Fig. 
1, we generated context nodes that had a hierarchical structure.In order to get context tags and their links, we referred to the Open Directory Project6 .We also referred to Wikipedia7 information for getting knowledge related to granularity of the context tags.We restricted temporal context nodes to Oct, 2011, and restricted spatial context nodes to South Korea, because our evaluation was performed in South Korea in autumn 2011.The generated nodes captured the real context for describing situations when participants used our system.When we evaluated the impact of the number of context knowledge nodes, we randomly selected context nodes from the manually generated nodes.We iterated 10 times when the randomly generated models were used.In order to provide the first evaluation of the feasibility of the proposed context model, we tested the model’s performance on a mobile phone.We compared the elapsed time to explore the context model to find relevant contexts in models having different numbers of context nodes.We measured the amount of time it took to retrieve relevant contexts from a context model.This task was necessary to compute the similarity between content and context.We compared our approach with the following five algorithms: As a baseline algorithm, instead of using mobile user context, the number of matched tags with queries was used to decide the ranking of content.Another algorithm used all context: spatial, temporal and personal information without the proposed model.The others used each context separately without the model.They used only absolute values of context, not the proposed model.To evaluate different algorithms on the same data set, participants evaluated all of the retrieved content off-line after using our system.They were supplied with queries, the lists of content and sensor values recorded when the participants used our system.Sensor values consist of a timestamp, a GPS coordinate, and orientation by a compass.In order to help participants to understand the values, we also provided a map marked the point of the GPS and the compass, together with the name of places by reverse geocoding.We did not give any information related to algorithms that recommend the photos.The lists of content contained photos and tags with a score box.Thus, the participants simply input a score for a photo into the box.They used the following scoring instructions to decide when a photo was relevant to their interests in a given situation:If a photo was given a score of 3 or higher by a participant, we regarded the photo as “relevant”.We regarded the photo as “very relevant” if its score was 4 or 5.In addition, we also employed the normalized discounted cumulative gain to evaluate the effectiveness of our approach .It is often used in information retrieval according to relatively ideal positions.In order to measure the appropriateness of re-ranked content, we computed nDCG values based on evaluated scores by participants.We considered the ranks given by participants as ideal results for normalization.In order to verify the reliability of our recommended results, we compared photos retrieved by the proposed approach with the results of the algorithms described in Section 5.2.Table 3 presents the precision and pooled recall for all the different photo retrieval algorithms, i.e., percentage of photos exactly on a query.All algorithms exploiting context performed better than the baseline algorithm, which did not use any context.The algorithm that had the highest precision and recall value was 
the proposed approach.When we used a set of relevant photos, i.e., the percentage of photos with score of 3 and over, the precision of our approach reached 70%, and the recall was 46% compared to 29% from the baseline algorithm.When we used a set of very relevant photos, the precision and recall of our approach was higher than the other algorithms compared in this evaluation.Therefore, the table indicates that context improves content recommendations and the proposed context model is effective in enhancing context usage.Fig. 7 compares the five algorithms and the proposed approach based on nDCG.Apart from the proposed approach, the most effective algorithm was A5, exploiting all context without the hierarchical context model.The nDCG values of our approach exceeded 0.8, whether we used the first 5 retrieved photos or 10 photos.They were higher than A5 by 0.12 and 0.09 respectively.This finding indicates that content re-ranking of our approach enhances the order of retrieved content because the nDCG value is affected by the order of retrieved items.Although the value of nDCG@10 was lower than nDCG@5 in the case of our approach, it was higher than the other algorithms.The nDCG@10 of our approach was lower than nDCG@5 because the additionally retrieved 5 photos by nDCG@10 were more generalized than the commonly retrieved 5 photos.Through Fig. 7, we can ascertain that our approach is effective for the order of content, and that fewer retrieved items are more appropriate in our approach.In addition, we evaluated nDCG@5 values with different numbers of context nodes.The bar graph of Fig. 7 shows nDCG values of our approach with 5 retrieved photos.Overall, if the number of context nodes was increased, nDCG values also were increased.When we used 700 context nodes for our approach, nDCG value was maximized.Adding more nodes did not improve results further.Rather, the nDCG remains quite stable without significantly decreasing when there were more than 700 context nodes, although context nodes added in the upper layer lead to more general content retrieved.From Fig. 7, if we use more than 500 context nodes with our approach, the retrieved content is more appropriate than the other algorithms.Overall, context helps recommender systems understand the information needs of mobile users.Compared to the baseline algorithm, the precision, recall and nDCG were improved when context was used.Among algorithms that use context, our approach is the most effective, although it requires slightly more time to configure and explore a context model, as shown in the evaluation.Because our hierarchical context model is able to provide contexts that are semantically similar with a query and context, it enhances measuring similarity between content and context considerably in comparison to the other algorithms.Here, we give examples to illustrate the effectiveness of our approach.For instance, when a user searches ‘food’ in ‘SAFECO Field’ baseball stadium at noon, the baseline algorithm provides photos including the ‘food’ tag in the order of popularity.The other algorithms exploiting context without our model filter photos taken far from the stadium or photos taken at different times of day.The photos that satisfy context conditions are more appropriate for the user; however, the problem is that there are too few of these photos.Our approach recommends not only photos taken in ‘SAFECO Field’, but also photos dealing with fast food in other American baseball stadiums at lunch time according to personal preference.Fig. 
5 shows the example of retrieved photos and the relevant context tags used by our approach.The number of retrieved photos via our approach is sufficient compared to the other approaches.Our approach is able to provide content that is more appropriate for a given situation and more familiar to users.Table 3 and Fig. 7 show the appropriateness of recommended photos using our approach.However, there are a number of limitations to our approach.Although we can measure consistent similarity between context nodes regardless of the level of detail in a context model using size-based granularity, a highly-detailed model is better at extracting relevant contexts.As shown in Fig. 7, the number of context nodes affects the relevance of retrieved content because more relevant contexts retrieved by the model make measuring similarity between context and content more accurate.It follows that users highly benefit from detailed contextual knowledge.This could be an incentive for users to create and share their expert knowledge with others.Tools for easy maintenance and sharing are needed to realize this.A positive feature of our approach is that adding more and more nodes, the system reaches a relatively stable nDCG value of around 0.87 at around 700 nodes.In contrast to graph-based approaches, the size-based method for computing similarity between nodes, keeps the behavior of the system stable once the maximum is reached even as highly generic nodes or intermediate nodes are added randomly to the graph.To verify the actual usefulness of the developed mobile system, it would be necessary to conduct a qualitative analysis, such as a usability study for the system usage.We concentrated on the relevance of retrieved photo content in our evaluation.This evaluation is helpful in understanding the accuracy of the recommended content, but we need to evaluate the ease of use of the developed system.Our current approach can be improved by considering these issues.In this paper, we introduced an approach for recommending personalized content to mobile users through a hierarchical context model with size-based granularity information.The proposed context model is organized as a hierarchical directed acyclic graph with size information.We use it to compute similarity between content and context for personalization.Our research goal was to retrieve content effectively in order to help mobile users browse information.The evaluation shows that the proposed approach is effective in personalizing and recommending relevant content through exploiting contextual information.We expect that our approach can improve information browsing by personalizing content effectively on a mobile device.The context model for our current prototype was developed by hand from publicly available information sources, such as Wikipedia.In ongoing and future work, we explore the possibilities for information integration and automatic information retrieval, e.g. 
from standard ontologies and other knowledge repositories, such as SUMO or WordNet , but also from folksonomies and other knowledge repositories created by end-users.Using size-based granularity to compute similarity instead of a structural, graph-based method, our approach can work with ontologies or folksonomies of varying conceptual density.In contrast to other recommender systems, which require user information to be sent and stored on a server, privacy protection is a key feature of our approach.Context libraries and contents can be shared as a user wishes, but personal information from sensors giving the current context of the user always stays on the user’s phone, since the reasoning engine is light-weight, designed to work locally. | Retrieving timely and relevant information on-site is an important task for mobile users. A context-aware system can understand a user's information needs and thus select contents according to relevance. We propose a context-dependent search engine that represents user context in a knowledge-based context model, implemented in a hierarchical structure with granularity information. Search results are ordered based on semantic relevance computed as similarity between the current context and tags of search results. Compared against baseline algorithms, the proposed approach enhances precision by 22% and pooled recall by 17%. The use of size-based granularity to compute similarity makes the approach more robust against changes in the context model in comparison to graph-based methods, facilitating import of existing knowledge repositories and end-user defined vocabularies (folksonomies). The reasoning engine being light-weight, privacy protection is ensured, as all user information is processed locally on the user's phone without requiring communication with an external server. © 2013 The Authors. Published by Elsevier B.V. |
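As a compact illustration of the mechanism the preceding entry describes, the sketch below builds a tiny part-of DAG with coarse size labels, looks up the common subsumers of a user-context tag and a content tag, and re-ranks content by the specificity of the most specific shared context. It is a minimal reconstruction under stated assumptions: the similarity score (the reciprocal of the smallest common subsumer's size) and the max-over-tag-pairs aggregation stand in for the paper's own equation, which is not reproduced, and the museum nodes and their sizes are invented for the example.

```python
from itertools import product

class ContextModel:
    """Minimal hierarchical context model: a DAG whose edges mean
    'part of' (e.g. a spot in a gallery is part of the museum, which is
    part of Paris) and whose nodes may carry coarse size information."""

    def __init__(self):
        self.parents = {}   # tag -> set of more generic tags
        self.size = {}      # tag -> rough size (smaller = more specific)

    def add(self, tag, parents=(), size=None):
        self.parents.setdefault(tag, set()).update(parents)
        for p in parents:
            self.parents.setdefault(p, set())
        if size is not None:
            self.size[tag] = size

    def ancestors(self, tag):
        """The tag itself plus everything it is (transitively) part of."""
        seen, stack = set(), [tag]
        while stack:
            t = stack.pop()
            if t not in seen:
                seen.add(t)
                stack.extend(self.parents.get(t, ()))
        return seen

    def similarity(self, tag_a, tag_b):
        """Size-based specificity of the most specific common subsumer:
        sharing a small context scores higher than sharing a large one."""
        common = self.ancestors(tag_a) & self.ancestors(tag_b)
        sizes = [self.size[t] for t in common if t in self.size]
        return 1.0 / min(sizes) if sizes else 0.0


def recommend(model, user_tags, contents, k=5):
    """Re-rank retrieved contents by their best tag-to-tag similarity to
    the user's current context and keep the top k."""
    def score(item):
        pairs = product(user_tags, item["tags"])
        return max((model.similarity(u, t) for u, t in pairs), default=0.0)
    return sorted(contents, key=score, reverse=True)[:k]


# Invented nodes and very rough sizes (square metres) for the museum example.
m = ContextModel()
m.add("Paris", size=105_000_000)
m.add("Museum", parents={"Paris"}, size=60_000)
m.add("WestWing", parents={"Museum"}, size=3_000)
m.add("PictureSpot", parents={"Museum"}, size=4)
comments = [{"id": "A", "tags": ["PictureSpot"]},   # written in front of the picture
            {"id": "C", "tags": ["WestWing"]}]      # written elsewhere in the museum
print(recommend(m, user_tags=["PictureSpot"], contents=comments, k=2))  # A ranks above C
```

A graph-based variant could instead score the most specific common subsumer by its depth in the DAG; the size-based version sketched here follows the entry's argument that size labels remain usable when different parts of the model are described at different levels of detail.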
395 | Isotopic diversity in interplanetary dust particles and preservation of extreme 16O-depletion | Interplanetary dust particles originate from comets and asteroids and can be collected by aircraft in the Earth’s stratosphere.However, despite some dedicated collections that coincide with particular meteor showers, it is unknown from which parent body an individual IDP originates.Dynamical modelling of dust ejected from comets and asteroids indicates that over 85% of the total mass influx of dust to the Earth originates from Jupiter-family comets.Samples of comets should retain the best preserved components of the Solar System starting materials because they formed at large heliocentric distances of 5–30 AU, where temperatures, at their most extreme, reach down to ∼30 K.Furthermore, IDPs have remained locked in ice at low temperatures until their release from the cometary surface not long before arrival at the top of the Earth’s atmosphere.Laboratory analysis of IDPs has revealed their primitive nature, such as high abundances of presolar grains, the presence of GEMS and an abundance of primitive carbonaceous material.These primitive features suggest it is likely that IDPs provide access to samples of the early Solar System from bodies that are otherwise hard to access from Earth, and which may never have been sampled by meteorites, or survived the aqueous and/or thermal alteration processes experienced by meteoritic material on parent asteroids.IDPs are composed of a complex mix of silicate and organic material that forms a wide range of particle textures.To understand the earliest-formed material in the Solar System it is the finest-grained IDPs that are likely to contain the most primitive material as their texture is similar to that expected for direct condensates from the solar nebula.However, the origin of individual components within fine-grained IDPs can also clearly be extra solar.IDPs also contain abundant fine-grained amorphous siliceous material and primitive carbonaceous matter.The origin of amorphous silicates in IDPs, such as GEMS, is very much debated, with some models suggesting that the majority form in the solar nebula as late-stage non-equilibrium condensates, while others prefer formation in the interstellar medium.In many cases, large crystalline silicate mineral fragments are contained within the ultra-fine-grained IDPs.Such minerals require a high-temperature formation environment and are likely to have originated in the hot inner Solar System, which is supported by their chondritic-like O isotope compositions.The presence in IDPs of minerals that crystallised at high temperatures can be accounted for by transport of such material from the inner Solar System out to large AU in relatively short timescales by turbulent radial mixing.Such transport has been suggested in order to account for the presence of high-temperature minerals in the Wild2 samples collected by Stardust.Therefore, analysis of these mineral fragments, or of the bulk composition of IDPs containing such material, will not provide information about the composition of the outer solar nebula.Only a small number of fine-grained IDPs have been measured for bulk O isotopes at a level of precision that is high enough for their comparison to meteorites.These analyses have shown that IDPs cover a wider range of O isotope values than that displayed by bulk meteorites, from relatively 16O-rich values of δ18O ≈ −20‰, to 16O-poor, chondritic-like values of δ18O = 0‰ to +20‰.Such large isotopic variations appear 
to reflect the wide range of parent body sources sampled by IDPs.These parent bodies presumably cover a range from primitive asteroids and comets that preserve, in part, the signature of original solar nebula dust, to parent bodies that are dominated by material processed in the inner Solar System.However, considering the extremely fine-grained texture of IDPs, and the potential for them to have incorporated fine-grained material from a wide portion of the early solar nebula, it is paramount to investigate IDPs at the micrometre-scale to understand the reservoirs they have sampled and the nature of the transportation and mixing processes that created the components.Very few detailed O isotopic studies have been made of multiple fragments from single cluster IDPs.The limited data set available shows that fragments from any one cluster IDP produce results within error of each other, suggesting some level of O isotopic homogeneity within a single IDP parent body.This could reflect a limited number of different components being available, efficient mixing at all scales prior to accretion, and/or homogenisation of components.Considerable expansion of the IDP data is required to assess the importance of these different processes.It is also important to analyse ultrafine- to fine-grained IDPs as these materials are more likely to not have been processed close to the Sun, as opposed to larger mineral fragments that may represent high-temperature components formed in the inner Solar System.H, C and N isotopes in IDPs can show substantial isotopic variations at the micrometre-scale.These isotopic variations normally reflect the presence of carbonaceous material with isotopically anomalous compositions indicating the potential survival of molecular cloud or cold outer disk material.In addition, extreme isotopic anomalies at the sub-micrometre scale can also be due to the presence of presolar grains carrying isotopic signatures of nucleosynthetic events.Although C and N isotopic anomalies within single IDPs are sometimes related, H isotope anomalies have not been observed to follow the same pattern.Studies that relate the bulk H, C and N isotope signatures with bulk O isotopes in the same IDP samples are of paramount importance for understanding how the silicate and organic reservoirs that contributed to these primitive materials may be related.This study focusses on fine-grained fragments from two IDP cluster particles obtained from the NASA Cosmic Dust Laboratory.The relatively large size of the fragments meant that it was possible to obtain multiple O isotope analyses across the single fragments to detail O isotope variations at the few-micrometre-scale in fine-grained materials, at a level of precision that was suitable for comparison to the meteorite record.In addition, it was possible to obtain corresponding H, C, and N isotopic compositions on the same areas analysed for O.In addition, TEM analysis of one of the fragments was undertaken post-NanoSIMS to assess the mineralogy of the isotopic anomalies identified.An FEI Quanta 2003D DualBeam™ focused ion beam scanning electron microscope fitted with an Oxford instruments 80 mm X-max energy dispersive X-ray detector system was used at the Open University to perform EDX analyses at various locations on IDP fragments.The same instrument was also used to obtain an electron-transparent section of a 16O-depleted region of Balmoral1 by FIB-lift-out post-NanoSIMS in order for the mineralogy to be investigated by transmission electron microscopy.Further 
details are available in the Electronic Annex.A Zeiss Supra 55V analytical field emission gun scanning electron microscope at the Open University was used to obtain high resolution, high magnification secondary electron images of the samples to assess particle texture.The NanoSIMS 50L at The Open University was used to obtain both O isotope spot and imaging and H, C, and N isotopic imaging analyses of single IDP fragments at a level of precision that allows for their comparison to the meteorite record.The H, C and N isotope analyses were made prior to O isotope analyses.The protocol used follows that set out in Starkey and Franchi and is summarised here with some further details in the Electronic Annex.In all cases negative secondary ions were collected on electron multipliers with the first analytical set-up collecting 12C, 13C, 16O, 12C14N, 12C15N and 28Si simultaneously and the second analytical set-up collecting 1H, 2H and 12C simultaneously for each particle.A Cs+ probe with a current of 1.5 pA for C and N isotope measurements, and 3 pA for H isotope measurements was rastered over the sample with a raster size relevant to the particular area of fragment being analysed.The probe size was typically ⩽150 nm and the raster were conducted with a pixel step of ⩽100 nm and a dwell time of 1000 μs per pixel.Charge compensation was applied with an electron gun with the same settings used on the sample and standard.Data were collected in planes with total analysis times of ≈40 min.Planes of image data were corrected for detector deadtime and sample drift, combined and processed using the Limage software to provide bulk δ13C, δ15N, δD and C/H ratios for each fragment, as well as regions of interest for the same areas as obtained in the O isotope analyses to provide complementary data.Further analytical details are available in Electronic Annex.H, C and N isotope results are reported as δ13CPDB, δ15NAIR and δDSMOW.All isotope ratio errors are reported as 2σ and include the external reproducibility from all standards analysed during the session together with internal uncertainty from each IDP measurement.The standard used was IOM) from the CM2 Cold Bokkeveld that was analysed immediately before and/or after each IDP.IDPs were analysed in O isotope imaging mode prior to O isotope spot analyses.A Cs+ ion beam of 2 pA was rastered across a 10 × 10 μm analysis areas, allowing for 3 image analyses on Lumley1 and one on Balmoral1.The instrument was set to a mass resolving power of >10,000 primarily to resolve the interference of 16OH on 17O.The probe size was typically ⩽150 nm and the raster were conducted with a pixel step of ⩽100 nm and a dwell time of 1000 μs per pixel.Data were collected in 50 planes with auto-centring of the peak positions every 10 frames.Charge compensation was applied with an electron gun with the same settings used on the sample and standard.Secondary ions of 16O, 17O and 18O were collected simultaneously on electron multipliers along with 28Si, 24Mg16O and 40Ca16O.Total counts of 16O for the mapped areas were ∼1 × 109.Standard analyses in imaging mode were performed on flat, polished San Carlos olivine crystals on comparably-sized areas to the IDP fragment analyses.Similar analyses on flat, polished Eagle Station olivine calibrated against San Carlos Olivine gave correct values within error of the true value as measured by laser fluorination.Results from smaller regions of interest within the images were processed using the Limage software and corrected for position drift, 
detector dead time and quasi-simultaneous arrival effect.Oxygen isotope spot analyses were obtained following the protocol described in Starkey and Franchi and summarised here with further details in the Electronic Annex.In spot mode, a Cs+ ion beam with a 25 pA current was rastered over 5 × 5 μm analysis areas.The instrument was set to the same mass resolving power conditions as for the O image analyses.In spot mode, secondary ions of 16O were collected on a faraday cup while secondary ions of 17O and 18O were collected simultaneously on electron multipliers.Total counts of 16O for the spot analyses were on the order of ∼4 × 109.Charge compensation was applied with an electron gun with the same settings on the sample and standard.Isotope ratios were normalised to Standard Mean Ocean Water using a San Carlos olivine standard that bracketed the sample analyses in order to generate δ17O and δ18O values and also to provide corrections for instrumental mass fractionation.All errors for O isotope analyses, whether spot or imaging mode, are given as 2 sigma which combines internal errors for each analysis with the standard deviation of the mean of the associated standard.Errors are, on the whole, larger for imaging analyses due to poorer counting statistics because of the smaller probe size required to measure 16O on an electron multiplier.High resolution transmission electron microscope imaging of the FIB-produced section from Balmoral1 was carried out at The Open University on a JEOL JEM 2100 equipped with a lanthanum hexaboride emitter operating at 200 kV.Images were captured using an Orius SC1000 digital camera from Gatan at column magnifications up to ×250,000.The Balmoral1 FIB-lift-out section was also examined at the University of Glasgow by low voltage scanning transmission electron microscopy using a Zeiss Sigma field-emission SEM operated at 20 kV/1 nA and following the procedures of Lee and Smith.LV-STEM enabled the acquisition of bright-field and annular dark-field images, and chemical analyses were obtained using an Oxford Instruments X-Max silicon-drift X-ray detector operated through INCA software.All of these analyses have X-rays contributed from Al from the STEM holder and/or the substrate surrounding the IDP fragment, and many also contain Cu from the grid onto which the foil was welded, and from Pt that was deposited prior to FIB milling.Following LV-STEM work, selected area electron diffraction patterns were obtained from the FIB-section using a FEI T20 TEM at the University of Glasgow operated at 200 kV.All SAED patterns were acquired using a ∼200 nm diameter aperture, and manually indexed.The smallest grains that could be identified by SAED were ∼50 nm across.Two individual fragments were obtained from IDP cluster particle 10 on collector L2009 and are named hereafter Lumley1 and Lumley2.Lumley1 is large and irregularly shaped after pressing.FEG-SEM imaging reveals that Lumley1 exhibits a variation in texture across the fragment, including large featureless, smooth/compact-looking regions up to 10 μm in size, fine-grained areas where the grains are ≪1 μm, and more coarse-grained areas where grains are observable in the ∼1–2 μm range.EDX spectra reveal an approximately chondritic major elemental composition across various regions of the fragment.C/H ratios obtained from NanoSIMS imaging reveal that the majority of Lumley1 has a C/H ⩾ 1 but the values can vary in different locations from C/H = 0.5–1.3.It should be noted that measuring C/H value by SIMS may not be the perfect 
technique because of differences in H ion emission from hydrous and organic phases.However, the results here are discussed within the context of the empirical observations by Aleon et al., the results of which have shown some consistency for IDPs in the study of Starkey and Franchi.In these studies, a C/H > 1 is generally interpreted as indicating the anhydrous nature of an IDP suggesting that Lumley1 is composed predominantly of anhydrous material, but with some smaller hydrous areas.The differences in C/H ratio and texture do not co-vary, and because of the large range in textures and C/H ratios it makes it hard to define Lumley1 as either chondritic-porous, even though it is predominantly anhydrous in C/H ratio, or chondritic-smooth.Three O isotope spots and three O isotope maps were obtained across different areas of Lumley1.Complementary H, C and N isotope ratio maps were obtained for the same areas from 25 × 25 μm NanoSIMS isotopic maps in which ROIs were subsequently defined in the Limage software to correspond with the O isotope regions.Lumley 2 is a slightly smaller fragment from the same cluster as Lumley1.The texture across Lumley2 is less variable compared to Lumley1, with it being composed predominantly of fine-grained material but with some areas that are more featureless in appearance.EDX spectra reveal an approximately chondritic elemental composition across most regions of the fragment, but in one location a small Fe-rich region is observed.C/H ratios for Lumley2 vary from 0.9 to 1.0, which is similar to Lumley1 and suggests that the particle is largely anhydrous.One O isotope spot analysis and one O isotope map of the bulk particle was obtained on Lumley2.H, C and N isotopic data for Lumley2 were obtained in imaging mode but from a different region of the particle which had split up on pressing into gold.One individual fragment was obtained from the IDP cluster particle 2 of collector L2071 and is hereafter named Balmoral1.Once pressed into gold foil, Balmoral1 is approximately 12 × 16 μm in size with a fine-grained texture, composed of grains that appear in SEM imaging to be ≪1 μm, similar to the fine-grained texture of CP-IDPs.EDX spectra obtained from Balmoral1 are consistent with a chondritic composition.The bulk fragment has a C/H = ∼1 but there is a small region contained within the fragment with C/H < 1 and which will be discussed in more detail.One O isotope map of the bulk fragment was acquired prior to two O isotope spot analyses which were obtained at different ends of the fragment.H, C and N isotope ratios were obtained from NanoSIMS imaging analysis of the bulk fragment and ROIs were drawn in the Limage software to correspond with the O isotope regions as well as areas with distinctive O-isotopic compositions or elemental ratios.Bulk H, C, and N isotope values for Lumley1 and 2, and Balmoral1, along with Raman spectroscopy data, are reported in Starkey et al.Isotopic values for the individual regions are presented and discussed here for the first time and are compared to O isotope values determined for these particles.It is not straightforward to define individual IDPs as either cometary or asteroidal in origin based only on particle texture and/or C/H ratio.Starkey et al. 
used Raman spectroscopy to show that Lumley and Balmoral contain organic material that is primitive, particularly in relation to bulk meteorites.In addition, the fine-grained texture and relatively high C/H ratio of Balmoral1 suggests it is anhydrous and CP-IDP-like in nature.CP-IDPs are generally considered to originate from comets.Lumley is more variable but the Raman spectroscopy features suggest that it also originates from either a cometary or primitive asteroidal source that has preserved unprocessed Solar System components.The O isotope ratios across the single fragment of Lumley1 vary from δ17O, δ18O = −24.3, −30.8‰ to 13.7, 19.2‰, with nearly all analyses falling within error of the Carbonaceous Chondrite Anhydrous Mineral mixing line and Young and Russell slope = 1 line.The Lumley1 analyses are compared in Fig. 3 to anhydrous and hydrated IDP data available from previous studies that were analysed at a similar level of precision.The O isotope values from different regions of Lumley1 cover the entire range of O isotope ratios displayed by IDPs measured in these previous studies.The isotopic variations in Lumley1 do not, at first, seem to vary systematically across the fragment, with 16O-enriched and depleted areas immediately adjacent to areas possessing what would be considered ‘normal’ chondritic O isotope ratios over distances of only a few micrometres.It would appear that the chondritic values occur in areas of the particle that have, on the whole, a fine- to medium- grained texture, but these areas can also contain smoother looking material so it is hard to characterise even individual areas of Lumley1 as CP- or CS- like.The 16O-enriched region, with δ18O = −30.8, is the most 16O-enriched signature yet recorded in an IDP.However, area 1f is centred on what appears to be a ‘blocky’ shaped crystal as opposed to the more typical fine-grained material comprising the rest of the fragment.Therefore, this region should not be considered alongside the other Lumley regions in discussion about the outer solar nebula reservoirs, although it is still interesting in its own right and will be discussed further separately.The heavier O isotope region in 1e coincides with a very smooth and featureless region of the fragment which is similar to the texture in some parts of 1b, that is similarly depleted in 16O.The texture of 1b is a mix of smooth-featureless material and some fine-grained silicates, as indicated by large abundances of Mg and Si in the EDX spectrum of this region.Lumley2, although smaller, also displays O isotope variability across the particle with the O isotope spot analysis, centred in the area of the particle that is composed of a mixture of extremely fine-grained and possibly smooth/featureless-looking material, giving δ17O, δ18O = −22.3, −24.2‰ but with the bulk particle O isotope map giving δ17O, δ18O = 5.3, 5.2‰.While these results are very different to each other, a ROI corresponding to the area of the spot analyses gives δ17O, δ18O = −16.0, −23.7‰, which is within error of the ratio obtained from the spot analysis, and verifies that spot and image analyses can be reliably compared on the same particle.These results show that Lumley2 contains a small region that has a δ18O around −24‰ with the surrounding region being characterised by more chondritic-like values of δ18O ≈ 0‰.The O isotope signatures in Lumley1 and Lumley2 are variable across each fragment, but covering a similar range.Both fragments contain material with a chondritic O isotope composition which is 
generally associated with regions that exhibit a fine- to medium- grained texture.Both fragments also contain an 16O-enriched region, although the texture associated with this signature differs between the fragments.In Lumley1 the 16O-enriched region appears to be dominated by a single crystal grain while in Lumley2 the region comprises very fine-grained material.It is only Lumley1 that contains a 16O-depleted region and this is associated with smooth/featureless-looking material.This region has a C/H > 1 so may be composed of anhydrous material.Two spot analyses positioned at either end of the Balmoral1 particle gave O isotope ratios of δ17O, δ18O = 1.3, 1.6‰ for spot 1 and δ17O, δ18O = 56.0, 55.3‰ for spot 2.An O isotope ratio map obtained prior to the spot analyses revealed a bulk particle value of δ17O, δ18O = 44.3, 37.7‰.Close inspection of the O isotope ratio image map reveals that one region of Balmoral1 exhibits a much more 16O-depleted ratio, in rough agreement with the spot analysis performed in the same area of the particle.This 16O-depleted region is composed of two small regions, approximately 3 μm apart, with δ17O, δ18O = 215, 198‰, placing them within error of the slope 1 line.These small 16O-depleted regions are surrounded by an area of approximately ∼7 μm in size exhibiting smaller, but still significant 17O-, 18O-enrichments, with δ17O, δ18O = 93.6, 88.8‰, also sitting within error of the slope 1 line.Subtraction of these 16O-depleted regions from the bulk image analysis gives δ17O, δ18O = 24.9, 13.5‰.An ROI obtained from the O isotope map matched as closely as possible for the area measured as spot 1 gives δ17O, δ18O = 21.0, 11.0‰.These values should be approximately comparable to spot 1, which falls outside of the 16O-depleted region, but they are instead more 16O depleted, albeit with large uncertainty because of the count-limited precision of the relatively small area.The reason for this discrepancy is not clear.Spot and image O isotope analyses performed on the same regions in other samples and standards show that the two types of analysis produce comparable results.This, in turn, suggests that the QSA corrections, necessary because 16O is measured on an EM detector for image analyses, are working adequately when the data are processed with the Limage software.QSA corrections are not necessary for the spot analysis because a FC detector is used to measure the 16O isotope in that case.There is no drift observed in the count rates or O isotope ratios during the O spot analysis of Balmoral, suggesting that the sputtered area was stable and that the sample was not sputtered away during the run, which may have otherwise affected the ratio.A remaining option is that, because NanoSIMS analysis is a destructive process, the spot and image analyses produced differing results because they measured different layers of the sample which may have differed with depth.The image analysis clearly seems to have analysed material that is more 16O-depleted, which may be due to the presence of fine-grained 16O-depleted material within the region of spot 1, that was preferentially sputtered away during the image analysis prior to performing the spot analysis.In order to help clarify the discussion, the very 16O-depleted region composed of two very small areas, will be termed ROI 1.The intermediate O isotope region surrounding the very 16O-depleted material will be termed ROI 2.The rest of the fragment will be termed ROI 3.SEM imaging reveals a subtle difference in the particle texture 
between ROI 1, 2 and ROI 3.All ROIs are ultra-fine-grained but ROI 1 and 2 appear to be more compact, and the isotopic variability within ROI 2 is much less than that displayed by ROI 3, with a relatively sharp boundary between these regions.In addition, the isotope map shows the presence of fine-grained, 16O-depleted material, possibly related to ROI 2, unevenly dispersed throughout parts of ROI 3.The data presented here for Lumley1 are from the same set of analyses as those presented for these fragments in Starkey et al. but the data have been further processed to reveal the isotopic composition of H, C and N associated with the same areas that were analysed for O isotopes.A different area of the Lumley 2 fragment was analysed for H, C and N isotopic composition to that analysed for O and so these data are not compared.The Lumley 1 data make it possible to compare different isotope systems across a single IDP fragment to assess micrometre-scale intra-fragment isotopic variability of silicate and organic components.δD, δ13C and δ15N values for the six individual areas of Lumley 1 are presented in Table 1 along with the C/H ratios.The data are also shown in Fig. 5 where the values are plotted against δ18O because Lumley1 shows large O isotope variability and δ18O indicates the relative placement of the analysis along the CCAM line.δ18O is matched to δD, δ13C and δ15N for each individual region measured in Lumley1.The intra-fragment co-variation between the various isotope systems appears to show broadly positive relationships of δD, δ13C and δ15N with δ18O, although none of the correlation coefficients are significant at more than the 90% level.Bulk H, C, and N isotope compositions for the Balmoral1 fragment are reported in Starkey et al. but the raw data have been reprocessed here in the same way as for the Lumley1 results in order to observe the finer-scale detail.Maps of δD, δ13C, δ15N and C/H for Balmoral1 are available in Electronic Annex Figure H.Despite the small volumes of material being investigated, it was possible to generate δD, δ13C and δ15N values for ROI 2 and 3 along with the element ratios C/H, Mg/Si, Mg/O and Si/O from the NanoSIMS mapping.It was not possible to obtain all the ratios for ROI 1 because the 16O-depleted region could not be accurately matched up onto the NanoSIMS images that were performed without O isotope ratio measurements.The silicate element ratios reveal that ROI 2 does not have a composition lying between that of ROI 1 and 3, as might have been expected from its intermediate O isotope composition.ROI 1 and 2 have C/H = ∼0.6 indicating that these areas of the IDP are hydrated whereas ROI 3 has C/H = ∼1.2 suggesting it is more anhydrous in nature.ROI 3 gives a bulk δD = 1000‰ whereas the more 16O-depleted ROIs 1 and 2 give δD = 493‰ and 502‰, respectively, indicating that they are characterised by a depletion in D compared to the bulk fragment.C and N isotopes are not available for ROI 1 but ROI 2 and 3 give very similar values with δ13C = −44 and δ15N = 248 for ROI 2 and δ13C = −42‰ and δ15N = 272‰ for ROI 3.STEM-EDX and HR-TEM imaging was performed on a FIB lift-out which cuts across the Balmoral1 fragment to obtain a section of ROI 2 and ROI 3.Unfortunately, because of their very small spatial extent, the very 16O-depleted regions were missed on sectioning.TEM images reveal Balmoral1 is composed of an aggregate of small grains which are, on the whole, in the 20–80 nm range, held within areas of finer-grained and/or a small amount of amorphous 
material.Those minerals large enough to be identified include clinopyroxene, orthopyroxene and olivine, which has a composition towards the forsteritic endmember.Ni-bearing Fe-sulphides and magnetite are also present in smaller quantities.Where possible, lattice fringe imaging to obtain mineral d-spacings was performed from HRTEM images and revealed the likely presence of clinoenstatite (lattice planes with d = 0.91 nm), pentlandite (planes with d = 0.5 nm) and either pentlandite or olivine (planes with d = 0.3 nm), in keeping with the results from STEM-EDX.Fig. 6 shows how the FIB lift-out section corresponds with the original ion images and, therefore, how the mineralogy relates to the isotopic signatures across Balmoral1.Clinopyroxene, orthopyroxene and olivine constitute a large part of the section, from ROI 3 into the more 16O-depleted ROI 2.A relatively large olivine grain can be seen in Fig. 6d which is in ROI 3 of Balmoral1.These mineral phases are consistent with those expected in IDPs, along with C-rich phases which may be represented by the small amount of amorphous material seen in Balmoral1.Although a lot of fine-grained material in an amorphous matrix was observed in the TEM section, the composition of the small grains was not identified and so it is not possible to confirm whether this material is GEMS.However, the exact texture does not appear to resemble GEMS that has been observed in other TEM studies of IDPs.Pyrrhotite and magnetite are observed in ROI 3 but it is only pentlandite that is observed along with the olivine and pyroxenes in ROI 2.Pentlandite is often cited as indicating that an IDP is hydrated, which would be in keeping with the lower C/H ratio determined for ROI 1 and 2, their more compact appearance, and the lack of isotopic variability in ROI 2.The FIB lift-out only covers a very small region of Balmoral1 and most of the minerals are too fine-grained to identify definitively, even by TEM.Therefore, there may be additional mineral phases present beyond those observed by TEM.The presence of magnetite in Balmoral1, albeit in small quantities, may indicate that the IDP experienced heating during atmospheric entry.However, Fraundorf stated that the presence of fine-grained magnetite within IDPs may not necessarily be a result of atmospheric heating but may instead represent a primary phase.Importantly, in Balmoral1, no observation was made of magnetite as rims on sulphides, which would have otherwise supported the idea of atmospheric heating.In addition, the isotopic variation observed does not fall on a mixing line towards terrestrial oxygen, which again indicates that there was no significant terrestrial O contribution during heating.As such, the evidence suggests that Balmoral1, particularly the isotopically anomalous region, did not experience significant atmospheric heating.Such wide O isotope variability as that seen at the micrometre-scale in Lumley has not been observed in a single IDP fragment previously.The isotopic variability measured in Lumley provides some important clues about the formation history of the Lumley parent body, whether it was a primitive asteroid or a comet, and about the silicate and organic reservoirs of the early Solar System.The Raman D and G band parameters of the organic matter in Lumley indicate that this material is more primitive than insoluble organic matter extracted from bulk meteorites.Assuming that there is a common reservoir of organic matter in the early solar nebula, as argued for by Alexander et al., this indicates that at least some portion of the Lumley parent body experienced very little, or no, processing compared to that experienced by carbonaceous chondrites.To account for the variable isotopic, elemental and textural features across Lumley1 it is possible that the Lumley parent body formed from varying mixtures of at least three discrete reservoirs.Reservoir 1, represented by Lumley 1f, is relatively pristine, retaining a more solar-like, 16O-enriched signature with relatively low δD, δ13C and δ15N.The C/H for Lumley 1f is low but the measured C/O ratio is also low compared to other areas of Lumley, indicating a low C abundance as opposed to it being hydrated.The texture of Lumley 1f is blocky/compact, which, together with the isotopic and elemental compositional information available, suggests that it is a refractory grain originating from the inner Solar System, such as a CAI.A grain presumed to be a CAI was observed as a terminal particle collected by Stardust from comet 81P/Wild2 and provides evidence for outward radial transport of inner Solar System solids to the comet-forming region.The presence of CAIs in comets was actually a prediction of the bipolar outflow X-wind model of Shang et al., but radial transport of material outwards by turbulent flow is also a plausible mechanism to move inner Solar System materials to large AU.Reservoir 2, represented by Lumley 1a, 1c and 1d, which exhibit a fine-grained CP-IDP-like texture, has chondritic-like δ18O, relatively low δD, δ13C and δ15N but variable C/H.The isotopic signatures of the material forming these regions are similar to those of many carbonaceous chondrites.It would appear that Reservoir 2 is dominated by fine-grained dust from the same chondritic reservoir as was sampled by most asteroids, most likely originating in the inner parts of the protostellar disk, that was subsequently transported out to the comet-forming region at larger AU by radial transport.The variable C/H ratios across these regions may indicate that there is a mix of anhydrous and hydrated material but this parameter does not co-vary with δ18O.However, low C/O ratios for 1c and 1d indicate that these regions have a low C abundance and therefore are still largely anhydrous in nature.Reservoir 3 is represented by Lumley 1b and 1e, which exhibit a smooth/amorphous texture and high C/H and C/O ratios.δ18O values for Lumley 1b and 1e are relatively high (up to 19.2‰), with variable δD (up to 861‰), high δ13C (up to −2‰) and high δ15N (up to 553‰).NanoSIMS ion images illustrate these features, with brighter pixels showing high δ15N in the 1b and 1e regions.Although the δD of 1b is not as high as 1e, the NanoSIMS D/H ion image shows that the majority of the material comprising 1e has a δD value closer to that of 1b, and that the high δD for 1e may be dominated by a small region with high δD within the area defined as 1e.The high δ18O values are comparable to the O isotope signatures for hydrated IDPs, which tend to fall above δ18O = 0‰.Although the higher C/H ratios suggest that the silicates may be predominantly anhydrous, these values coupled with high C/O could instead suggest a high abundance of C.
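The reservoir-mixing reasoning used here (and again below for the Balmoral ROIs) is simple isotope mass balance expressed in delta notation. The short sketch below is illustrative only and is not part of the original analysis: the reference ratio and the example delta values are assumed for demonstration, and mixing is treated as linear in delta space, which is adequate at the per-mil level when the endmembers have similar oxygen contents.

```python
# Illustrative sketch (not from the paper): delta notation and a two-endmember
# oxygen-isotope mixing estimate, linearised in delta space.

VSMOW_18O_16O = 0.0020052  # commonly quoted 18O/16O of the VSMOW standard (assumed value)

def delta_permil(r_sample, r_standard=VSMOW_18O_16O):
    """Return the delta value in permil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

def endmember_fraction(delta_mix, delta_a, delta_b):
    """Mass fraction of endmember A needed to produce delta_mix from a linear
    mixture of A and B (assumes the endmembers have similar O contents)."""
    return (delta_mix - delta_b) / (delta_a - delta_b)

# Example loosely modelled on the Balmoral discussion that follows: an
# 16O-depleted endmember near +200 permil, a chondritic endmember near 0 permil,
# and an intermediate region near +90 permil imply a roughly half-and-half mix.
print(round(endmember_fraction(delta_mix=90.0, delta_a=200.0, delta_b=0.0), 2))  # ~0.45

# delta_permil example: a measured 18O/16O of 0.00205 corresponds to ~ +22 permil.
print(round(delta_permil(0.00205), 1))
```

This is the same balance that underlies the later statement that ROI 2 would need to contain roughly 50% ROI 1-like material if it were a simple mechanical mixture.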
However, the high δD and δ15N signatures in these Lumley regions, coupled with the presence of δ15N hotspots up to several thousand permil, indicate this material is primitive.As such, it is unlikely this material has experienced processing within the inner protostellar disk or on a parent body, as such signatures are thought to be destroyed during processing, particularly those of the presolar grains.The elevated organic isotope signatures also indicate a cometary source as these are high where they have been determined for comets.The evidence, therefore, suggests an origin in the outer Solar System for these Lumley regions.The isotopic and textural variation between discrete areas in Lumley1 that are separated by only a few micrometres indicates that these areas had distinct formation histories before ultimately being assembled on the Lumley parent body.The material in each of these areas draws upon components from different reservoirs with distinct isotopic signatures that were affected by, as yet undefined, processes to produce different textures.The detail apparent in the NanoSIMS ion images reveals that the bulk values for the regions defined here in Lumley1 may not be restricted to sampling only one particular lithology and may themselves encompass a mix of material.The limited data for IDPs measured for O isotopes previously have shown a relatively homogeneous composition within single particles, indicating that individual IDPs sample a well-mixed local reservoir.Starkey and Franchi suggested from a study of a collection of fine-grained IDPs that the more chondritic IDPs originated from parent bodies that formed at smaller heliocentric distance where the mix of inner Solar System versus outer solar nebula dust was much higher.Conversely, the more 16O-rich IDPs originated from parent bodies formed at larger heliocentric distance where the influx of inner Solar System dust to the mix was lower.Size-density sorting of silicate and sulphide grains observed in CP-IDPs and Wild2 provides further evidence of transport mechanisms in the comet-forming region that may also be a function of heliocentric distance.In addition, CP-IDPs that have the smallest grain sizes are also reported to contain the highest abundances of amorphous silicates and circumstellar grains.Lumley, a single fragment from a cluster IDP, appears to sample a range of reservoirs and/or lithologies.This implies that the Lumley parent body sampled mixtures from a range of AU.These materials, represented by different lithologies and isotopic signatures, appear to have been incorporated into the Lumley parent body as individual clasts at the several micrometre-scale, themselves composed of nanometre-sized grains, rather than as individual grains of fine-grained dust.These clasts, although they do not display sharp boundaries, are apparent from the diverse textures and isotopic signatures observed.The formation of each clast, presumably originally from the aggregation of fine-grained dust, must have occurred at an earlier time, possibly in a different location, followed by disruption of these materials before final re-aggregation on the Lumley parent body.It is not clear if these micrometre-scale fragments represent aggregates that were formed in the protostellar disk, or of a series of larger bodies that were subsequently disrupted and dispersed.Alternatively, the distinct areas in Lumley may all originate from a single parent body that incorporated material from only one reservoir which was subsequently altered in situ on the parent body.Although
details of the original inter-relationship of the areas has been lost when the particles were pressed in gold, the intimate mix of very different reservoirs is difficult to reconcile with any scenario involving modification in situ at such a fine-scale; altering some regions and not others to produce the diverse range of compositions observed.Complex mixes of material are frequently observed in primitive meteorites, but in this case the aggregated clasts are at least an order of magnitude larger.Such mixes are generally believed to have formed by brecciation processes on asteroidal bodies.The 16O-rich grain in Lumley 1f appears to be a fragment of a refractory grain, providing evidence of material from the innermost regions of protostellar disk being delivered to the Lumley parent body formation zone, most probably by turbulent mixing processes.The size of this grain fragment is approaching that of the other regions in Lumley, which may indicate that it was incorporated as a discrete fragment at the time of Lumley final accretion.This may indicate that transport of inner disk material was still being delivered to the Lumley at the time of final accretion.The O isotope ratios measured in ROI 1 and 2 of Balmoral1 are considerably more enriched in 17O and 18O than any ratios measured in IDPs to date, other than those found in rare sub-micrometre pre-solar grains.These ratios are also much more 16O-depleted than O isotope ratios obtained for bulk meteorites.ROI 1 is composed of two discrete areas with essentially identical 17O, 18O enrichments.There are no large, discrete grains coinciding with the location of the ROI 1 hot spots and therefore these areas must be composed of aggregates of small grains.There is clearly a strong relationship between the material in ROI 1 and ROI 2, discussed below, such that this feature needs to be considered as a distinct entity.The size of ROI 1 and ROI 2, and the large number of grains associated with this enrichment, is unlike any previously identified pre-solar grain.The texture apparent in the SEM images appears typical of fine-grained CP-IDPs and so it would appear extremely unlikely that this material has a pre-solar circumstellar origin.Although the composition of ROI 1 and 2 overlaps with that of some possible presolar grains reported in the literature, the relatively large size of the enriched region in Balmoral1 sets it apart from these grains.In addition, ROI 2 appears to be roughly chondritic in composition and is composed of a multi-grain clast that looks similar to normal IDP material.It would seem unlikely that such a complex mix of phases could be generated and aggregated together from a single nucleosynthetic event so it is suggested that this material is not presolar in origin.Almost all early solar nebula components found within meteorites have oxygen isotopic compositions that fall along a mixing line with a slope of approximately 1.Their isotopic compositions range from around the solar value, as determined from the Genesis solar wind samples, up to values around +10‰ in δ17O, δ18O.The origin of this variation remains elusive, although isotope selective self-shielding is one of the most plausible mechanisms to account for the non-mass dependent variations in O isotope compositions observed in the Solar System.However, this mechanism requires a reservoir enriched in the heavy isotopes of O.In order to impart the signature of this reservoir into large amounts of solid silicate material it is generally inferred that abundant, and reactive, 
water was involved.Identification of a 16O-depleted primordial water reservoir in the early Solar System, and its distribution and interaction with the rock record, is currently not well established.O isotope compositions similar to those in ROI 1 in Balmoral1 have been found in one other instance, in the meteorite Acfer 094, where δ17O, δ18O values around +200‰ were reported.Although, as noted above, both Nguyen et al. and Keller and Messenger report presolar grains with compositions in error of CoS.The presence of large 16O depletions in Acfer 094 was presented as evidence for interaction of the Acfer 094 parent body, or components within it, with primordial water strongly depleted in 16O.The material containing the 16O-depleted signature was originally termed new-PCP by Sakamoto et al. but was later named cosmic symplectite after further investigation by Seto et al.CoS is distributed ubiquitously in the matrix of Acfer 094 and TEM results reveal its ‘wormy’-like symplectite texture, composed of intergrown magnetite and pentlandite.One formation mechanism proposed for CoS is that it forms from Fe-metal or Fe-metal sulphide that has been radially transported out from the inner Solar System, sulphurised to Fe sulphide as the ambient temperature drops, and oxidised to magnetite by water vapour moving in from the outer solar nebula.Alternatively, it was suggested that CoS may have formed by oxidation on the parent planetesimal, in the very earliest stages of aqueous alteration, prior to the onset of hydrous mineral formation.It is possible that a signature of interaction with primordial water could be available in Acfer 094 because of its very primitive, unaltered nature meaning that the signature did not decompose during subsequent alteration.If this is correct then cometary samples should be expected to contain CoS because of their primitive nature.Indeed, Yurimoto and Kuramoto suggest that the O isotope composition of cometary ices should lie in the range of δ18O = +50‰ to +200‰.However, no evidence of CoS-like material or such 16O-poor material has previously been reported in cometary samples prior to this study.The size of the 16O-depleted regions in Balmoral1 are similar in size to the regions containing similar O isotope signatures observed in Acfer 094, which can be as large as 160 μm but with most being less than ten micrometres.The presence of Fe- and Fe-Ni-bearing minerals in Balmoral1 do not, on their own, necessarily confirm the presence of CoS because these minerals are observed commonly in IDPs that do not exhibit extreme 16O-depleted signatures.In addition, the symplectite ‘CoS’ texture documented in Acfer 094 is not observed in the Balmoral1 FIB-section.However, of the 16O depleted regions present in Balmoral1, only ROI 2 was sampled by the FIB section.As only ROI 1 displays 16O depletions comparable to CoS, there is no direct evidence for the nature of the mineralogy of the most extreme 16O depletions in Balmoral1.However, two lines of evidence provide a strong indication that ROI 1 has a mineralogy quite distinct from that of CoS.Firstly, if the isotopic signature of ROI 2 were the result of mechanical mixing of ROI 1 material and “normal” IDP material, then ROI 2 should contain approximately 50% of ROI 1 material.That there is not an abundance of pentlandite and magnetite in the FIB-section sampling ROI 2 rules out mechanical mixing of a CoS-like assemblage.Secondly, the measured ratio of 16O- and 28Si-ion intensities for ROI 1 is essentially identical to that of ROI 2 
and 3.This is inconsistent with any significant CoS-like material being present as CoS is essentially devoid of Si and, therefore, would be expected to generate a large shift in the measured 16O-/28Si- signal.The FIB-section reveals that ROI 2 and 3 are primarily composed of ferromagnesian silicates and, therefore, it would appear that ROI 1 is also dominated by such phases.Although ROI 1, 2 and 3 are all primarily composed of ferromagnesian silicates, the 24MgO-/16O-, 28Si-/16O- and 24MgO-/28Si- ion ratios of ROI 2 are not intermediate between those of ROI 1 and 3.Therefore, it can be concluded that the intermediate O isotopic composition of ROI 2 is not the result of mechanical mixing of ROI 1 material with that of the surrounding ROI 3.Mixing of ROI 1 with other CP-IDP-like material prior to final accretion on the Balmoral parent body is also considered unlikely as the homogeneous O isotope composition of ROI 2 requires exceptionally efficient mixing of two components, potentially differing by over 200‰.As discussed earlier, it appears likely that ROI 1 and 2 share a common origin but that they then experienced variable, or selective, processing, interacting with an O isotope reservoir quite distinct from the starting composition.The C/H of ROI 2 is very low, typical of CS-IDPs, and generally taken as indicative of aqueous alteration.Certainly, the rather homogeneous O isotope composition of ROI 2 is more consistent with that expected from aqueous alteration than would be expected from CP-IDP-like material that contains a wide variety of components.The presence of pentlandite within ROI 2 is indicative of some aqueous alteration, and the possible lack of GEMs.ROI 2 is also devoid of any isotopically anomalous D hot spots.Such characteristics are all consistent with the effects of aqueous alteration of primitive material.If aqueous alteration played a role in establishing the distinct O isotope composition of ROI 1 and ROI 2, it is unclear whether it was the fluid or the initial silicate material that was heavily depleted in 16O.Small shifts in δ18O along a slope = ½ line could be affecting the isotopic composition of the altered material within Balmoral.For example, the effect in the carbonaceous chondrites is around 6‰ and so comparable effects could easily be lost in the uncertainty of any slope defined with the ROI 1 composition as an endmember.Indeed, the uncertainties on the larger ROI 2 measurement are comparable to this 6‰ effect.However, these effects are very small compared to the difference in the endmember compositions required in the mixing.It may be that ROI 1 and 2 were both originally heavily depleted in 16O and that their interaction with more typical Solar System water at a late stage resulted in some exchange which shifted ROI 2 material towards the water composition, while ROI 1 remained largely, or completely unaffected.Alternatively, ROI 1 and ROI 2 may both have been typical of protostellar disk material but then interacted with water heavily depleted in 16O.In this scenario ROI 1 would be represented by areas where exchange was complete while ROI 2 represents areas where alteration was only partial.However, there is limited information available on ROI 1, and therefore it is difficult to establish the exact nature of the relationship between ROI 1 and ROI 2.The δD of ROI 1 and ROI 2 is ≈500‰, considerably more D-depleted than pristine CP-IDP material.Any aqueous alteration would have had a pronounced impact on the δD and therefore it would appear that the fluid 
was depleted in D. Given the common δD and C/H of ROI1 and ROI2 it is more likely that both regions are aqueously altered, with ROI 1 being more completely altered and reflecting the composition of the fluid.The distinct boundary to ROI 1 and 2 indicates that this component was incorporated into the Balmoral IDP ROI 3 material as a distinct clast after the alteration event.However, the formation of ROI 3 itself is not straightforward because it also displays an elevated O isotope signature in relation to chondrites, albeit one much less than that displayed by ROI 1 and 2.Based on information provided by the NanoSIMS O isotope ratio images it is proposed, despite its higher than chondritic O isotope bulk signature, that ROI 3 is primarily composed of fine-grained typical CP-IDP-like chondritic material with δ18O ⩽ 0‰.However, it must also contain material with a similar O isotope composition to ROI 1 and/or 2.The O isotope ratio images show that there are numerous small fragments of material with isotopic compositions similar to ROI 2, particularly in the lower half of the particle more adjacent to ROI 2.This indicates that during the final accretion event that formed the Balmoral parent body, further disruption of the ROI 1, 2 clast occurred to fragment them and mix this material locally with the more typical CP-IDP material.The high δD signature of ROI 3 reflects the abundant D-rich organic material present in the more abundant CP-IDP component of ROI 3.These results indicate that there are some broad similarities between Lumley and Balmoral.Both IDP fragments are composed of an aggregate of small sized clasts where the clasts can exhibit a wide range in isotopic signatures and lithology.As discussed above, it seems possible that these clasts were originally formed from primary fine-grained dust in different settings that were characterised by different lithological, elemental and isotopic characteristics.It is proposed, based on the primitive nature of the IDPs, that these original reservoirs had to be located at relatively large AU.Subsequent disruption of these materials/reservoirs/early bodies, possibly through collisional events, then their re-incorporation as micrometre-sized clasts into new bodies could account for the observations seen.The small size of the clasts in relation to those observed in brecciated chondrites may be related to the more primitive and fragile nature of the early Solar System dust that formed these original reservoirs/bodies, a reflection of the limited processing experienced by these bodies.This could be a function of a number of parameters such as limited alteration processes or the small size of the intermediate bodies involved in the assembly, disruption, and final accretion that ultimately led to the formation of comets.It is proposed that the Balmoral1 parent body formed in close proximity to the 16O-depleted reservoir at large AU.The rarity of 16O-depleted material, and its presence in a CP-IDP, which most likely originated from a comet, also supports the idea that this component formed at large heliocentric distance.As the level of 16O depletion is consistent with that expected from isotope selective self-shielding, it is likely that the formation location was also close to that where O self-shielding effects were most pronounced.The ultra-fine-grained/featureless texture of parts of ROI 1 and 2 would be a reasonable fit with this model as silicates formed in the outer solar nebula are expected to be amorphous because this region was too cold to form 
crystalline materials, and >97% of silicates formed in the ISM are reported to be amorphous.The composition of ISM amorphous silicates has been estimated as 84.9% olivine and 15.1% pyroxene, and the grains are thought to be spherical with radii of less than 0.1 μm, which is generally in keeping with the grains observed in Balmoral1.It is possible for crystalline silicates to occur in the outer solar nebula but these are more likely to have arrived there by turbulent radial mixing from the inner Solar System.The results of this study confirm that IDPs are important samples for preserving information about early Solar System reservoirs that are not readily available from, or preserved in, samples originating from asteroids.The IDPs in this study show an extremely wide variation in compositions across single fragments, reflecting the incorporation of a range of different early Solar System reservoirs.The IDP Lumley reveals that diverse isotopic reservoirs carrying material with distinctive textures that must originate from different settings in the early solar nebula, can be transported and mixed together in the comet-forming region.These materials appear to have been incorporated as micrometre-sized clasts on the Lumley parent body suggesting that they may have been disrupted into smaller clasts from their primary reservoir or location of formation, possibly by collisional events, prior to re-aggregation in the Lumley parent body.These findings are in broad agreement with the model set out in Wozniakiewicz et al. of pre-accretional sorting of cometary dust.The IDP Balmoral fragment preserves evidence for the existence of a 16O-depleted reservoir in the early Solar System.It appears that the 16O-depleted material in Balmoral formed directly from the 16O-depleted reservoir itself.Evidence for this reservoir may be rare in the meteorite record either because it is present in parent bodies that formed at large AU and so are not sampled efficiently on Earth, or, that the signature is easily lost through interaction with reservoirs of different compositions during, or after, formation of the parent bodies.The IDP fragments studied here support models for transport of material from the inner Solar System out to larger heliocentric distances.We hold the view that parent bodies formed at larger heliocentric distance will be expected to have incorporated less inner Solar System material than parent bodies formed at smaller heliocentric distance.However, the new results reveal that the early solar nebula may have formed a number of early reservoirs, from initially primary solar nebula dust condensates, that experienced varied histories to produce a diverse range of compositions.These reservoirs, or possibly primary parent bodies, were then disrupted into micrometre-sized clasts and re-incorporated into parent bodies in the comet-forming region where they were also able to incorporate varying degrees of material mixed from the inner Solar System.The IDP Balmoral1 reveals that any models accounting for mixing processes in the early solar nebula must also account for the presence of an extremely 16O-depleted reservoir in the comet-forming region. | Two interplanetary dust particles (IDPs) investigated by NanoSIMS reveal diverse oxygen isotope compositions at the micrometre-scale. The oxygen isotope values recorded at different locations across the single IDP fragments cover a wider range than the bulk values available from all IDPs and bulk meteorites measured to date. 
Measurement of H, C, and N isotopes by NanoSIMS, and the use of scanning and transmission electron microscopy (SEM and TEM) to determine elemental compositions and textural information allows for a better understanding of the lithologies and organic signatures associated with the oxygen isotope features.IDP Balmoral, a ~15μm-sized fragment with a chondritic porous (CP)-IDP-like texture, contains a region a few micrometres in size characterised by 16O-depleted isotope signatures in the range δ17O, δ18O=+80‰ to +200‰. The remainder of the fragment has a more 16O-rich composition (δ18O=0-20‰), similar to many other IDPs and bulk meteorites. Other than in discrete pre-solar grains, such extreme 16O-depletions have only been observed previously in rare components within the matrix of the Acfer 094 meteorite. However, TEM imaging and FeO/MgO/Si ion ratios indicate that the 16O-depleted regions in Balmoral did not form by the same mechanism as that proposed for the 16O-depleted phases in Acfer 094. As the level of 16O depletion is consistent with that expected from isotope selective self-shielding, it is likely that the 16O-depleted reservoir was located close to that where oxygen self-shielding effects were most pronounced (i.e., the outer solar nebula or even interstellar medium).Individual regions within IDP Lumley cover a range in δ18O from -30‰ to +19‰, with the oxygen isotope values broadly co-varying with δD, δ13C, δ15N, light-element ratios and texture. The relationships observed in Lumley indicate that the parent body incorporated material at the micrometre-scale from discrete diverse isotopic reservoirs, some of which are represented by inner Solar System material but others which must have formed in the outer Solar System.The IDP fragments support a model whereby primary dust from the early solar nebula initially formed a variety of reservoirs in the outer solar nebula, with those at lower AU incorporating a higher proportion of inner Solar System chondritic dust than those at larger AU. These reservoirs were subsequently disrupted into micrometre-sized clasts that were re-incorporated into IDP parent bodies, presumably at large AU. These results reveal that any models accounting for mixing processes in the early solar nebula must also account for the presence of an extremely 16O-depleted reservoir in the comet-forming region. |
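As background to the slope-1 (CCAM/Young and Russell) and slope-~0.52 mass-dependent fractionation lines referred to throughout the Lumley and Balmoral discussion above, the standard oxygen three-isotope quantities can be written as follows. This is generic cosmochemical notation added for reference, not a result of the study; the 0.52 coefficient is the commonly used terrestrial mass-dependent fractionation value.

```latex
% Standard oxygen three-isotope notation (reference only; 0.52 is the commonly
% used terrestrial mass-dependent fractionation coefficient).
\[
\delta^{x}\mathrm{O} =
  \left(\frac{(^{x}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}
             {(^{x}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{standard}}} - 1\right)
  \times 1000\ \text{(in ‰)}, \qquad x = 17,\ 18
\]
\[
\Delta^{17}\mathrm{O} = \delta^{17}\mathrm{O} - 0.52\,\delta^{18}\mathrm{O}
\]
% Mass-dependent fractionation shifts compositions along a slope ~0.52 line
% (constant Delta-17O), whereas mixing with, or exchange against, a 16O-poor or
% 16O-rich reservoir shifts them along a slope ~1 line such as CCAM.
```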
396 | Assessing the conservation potential of fish and corals in aquariums globally | Healthy aquatic ecosystems are essential for biodiversity and humanity alike, but freshwater and marine biomes are experiencing increasingly severe threats to their species and at ecosystem level.Freshwater habitats cover less than 1% of the world’s surface and yet contain 7% of the estimated 1.8 million described species, including 25% of the estimated vertebrates.This vertebrate component includes ∼40% of the known global fish diversity with new species being discovered each year.Despite their important ecosystem service roles and biological richness, freshwater habitats are being degraded by human activity, which is leading to an extinction crisis.The United Nations Environment Program’s Millennium Ecosystem Assessment report states that inland water ecosystems are in worse condition overall than any other broad ecosystem type, and estimates that about half of all freshwater wetlands have been lost since 1900.The degradation and loss of inland water habitats and species is driven by water abstraction, infrastructure development, land conversion in the catchment, overharvesting and exploitation, introduction of exotic species, eutrophication and pollution, and global climate change.These stressors are increasingly threatening the viability of entire freshwater systems and their dependent biodiversity.The marine biome is also severely impacted by human activities, which are observable at species, ecosystem, and biophysical levels.Reid et al. detail the many and diverse direct and indirect anthropogenic impacts acting on the marine environment and their consequences for biodiversity and human well being.These include habitat alteration and loss, disturbances leading to mortality of marine life, pollution, disease translocation, nutrient overloading, changes in salinity, sea-level raise, ocean heat content and sea-ice coverage decrease, deoxygenation, and ocean acidification.A dramatic example of a disturbed environment is the Florida Reef coral disease outbreak, which is the result of a combination of more than one stressor.The warmer water temperatures associated to climate change combined with opportunistic pathogens have affected nearly 390 km2 of Florida’s reefs only in the last four years.Addressing the aquatic biodiversity crisis requires concerted engagement across all relevant agencies and organizations.Stand-alone aquariums and zoos holding aquatic taxa fill a diverse range of roles.With more than 700 million visitors worldwide every year, technical expertise, physical and financial resources, these organizations are uniquely placed to help protect and understand biodiversity.Like the wider zoo community, aquariums range from leading research and conservation facilities to purely commercial organizations.In addition to their potential for public awareness-raising and policy influencing, there are many specialist conservation and research possibilities including species threat assessments, conservation breeding, assisted colonization and reintroduction programmes, bio-banking, ecosystem monitoring and conservation support.Current freshwater fish conservation initiatives, such as FishNet, highlight the potential of aquariums for providing both in situ and ex situ species conservation assistance.Aquarium based research into coral propagation is another example of the valuable contribution that these facilities can provide.However, an appreciation of their conservation role needs to be better 
understood and acted upon if their full potential is to be realized.Aquariums can provide important information on basic biology and life history traits as well as genetic reservoirs for species threatened with extinction in the wild.These institutions have the potential to be important contributors to bio-banking initiatives such as the Frozen Ark cryopreservation program.Moreover, aquarium staff often possess wide-ranging species knowledge which, coupled with in situ and ex situ conservation management expertise and institutional financial commitment, allows the creation of diverse partnerships that makes the aquarium community well placed to respond to aquatic conservation challenges.For instance, aquarists’ knowledge on life histories of species can inform threat evaluation of species for which data on wild populations is not available.The conservation potential of aquarium populations is compromised by a current lack of readily available information of the total number of species held.Although one population in a single aquarium can have a critical role for the conservation of a species, the interaction among different institutions through standardized shared animal records is often essential for optimal population management and for informing the prioritization for species conservation assistance.For terrestrial species, Conde et al. showed that zoo members of the Species360 network hold one in every seven threatened species, but the same kind of information is currently unavailable for aquatic species.Addressing this knowledge gap is essential for a comprehensive assessment of the importance of aquariums for ex situ conservation.The conservation potential of animals held in aquariums can be optimized when combined with species threat assessments and prioritization schemes, such as their CITES designation, their IUCN threat status, their vulnerability to climate change, their evolutionary distinctiveness, and their prioritization in the Alliance for Zero Extinction.To further inform the conservation potential of populations held in aquariums and demonstrate the importance of global standardized shared animal record keeping, here we analyzed how many species of the Chondrichthyes and Osteichthyes fishes and the Anthozoa corals and anemones are represented among those species prioritization schemes.Based on our results, we provide recommendations to support the decision-making process for current and potential new ex situ species and collection planning for conservation programs in aquariums.Populations of high conservation value are usually managed in studbooks to ensure their genetic variability and demographics).However, aquariums have generally been slower to manage their populations due to the complexities and lack of protocols for group management in the way that most aquatic species are kept.For instance there are only 26 studbooks for two classes of fish species and none for corals or anemones, while there are 704 studbooks for the mammals, birds, reptiles and amphibians across five regions .In the case of fish, coral and anemone species, the collection of wild specimens by aquariums is still a relatively common practice.There are a number of reasons for this, including difficulties in breeding some species in captivity, added costs associated with captive breeding and the wide availability of animals via established commercial ornamental fisheries.For many aquariums the requirement to establish managed programs for aquatic species was shadowed by efforts to learn technologies 
to aid aquarium system management.An increasing number of aquariums are realizing significant ex situ breeding success across a wide range of taxa and developing managed programs for several species of threatened fish.Gradually, aquariums are also beginning to follow best practices for sustainable harvesting of wild animals that can provide in situ conservation benefits.For example, the project Piaba in Brazil aims to create a sustainable supply of wild-caught ornamental fishes, which provides a livelihood for local communities and encourages good management of fish stocks.However, zoos focusing on terrestrial species have been significantly more restricted to ensuring the genetic viability of their populations by not importing animals from the wild.This is partly the result of increased numbers of zoos focusing on conservation goals and the strict regulations imposed by the Convention on International Trade of Endangered Species of Wild Fauna and Flora.On the other hand, aquariums have not faced the same limitations partly due to a historical focus on terrestrial species by CITES.To help address overexploitation of natural populations and ensure species survival in the wild, several states and regional economic integration organizations joined in 1973 to create CITES.Today 183 Parties are bound by CITES, an international agreement to regulate international trade in plants and animals and their products.CITES lists a species when it is either endangered with extinction or when the international trade affects its population’s sustainability.However, to date, there are only 147 aquatic species listed in CITES.This is of concern since the sustainability of many aquatic species is threatened by international trade.This includes some species of sharks and tuna, which are currently unsustainably harvested and traded.Here we analyzed which species in aquariums are indexed in CITES and its overlap with other prioritization schemes, such as the IUCN Red List.The International Union for Conservation of Nature’s Red List assesses the threat status of species by their extinction risk.Although Red List assessment of aquatic taxa is still incomplete, of the 67 assessed bony fish, six were found to be ‘Extinct in the Wild’.The representation of these species in aquariums illustrates the conservation role that aquariums have in preventing species extinction and providing the opportunity for such species to be safely returned to the wild.However, other threatened criteria are important, such as populations of species described as ‘Critically Endangered’, which, if managed properly, could support conservation programs in the wild.While exploring the representation of other IUCN Red List categories is crucial, we would like to emphasize the importance of species listed as ‘Data Deficient’ in aquariums.This is because there are a great number of aquatic species that have been reviewed by the IUCN Red List, but a lack of knowledge prevents an accurate listing for these species.This is partly due to taxonomic uncertainty, which prevents cataloguing a species in a threatened status.Some of these species might already be threatened and at risk of extinction before there is enough information to list them under a threatened category.For example, Bland, Collen, Orme, and Bielby estimated that 63.5% of all DD mammals were threatened with extinction and have smaller geographical ranges than species with sufficient data for the IUCN Red List assessment.The same was shown for amphibians, in which DD species are more 
threatened with extinction than their data sufficient counterparts.Therefore, here we analyzed the number of species in aquariums within the IUCN Red List categories and highlighted not only the number of threatened species but as well those listed as DD.Despite the importance of the IUCN Red List to explore the conservation potential of populations held in aquariums, there are other criteria that should be considered.For example, a species listed as ‘Least Concern’ may not be given immediate conservation attention, despite being at risk from climate change if it has not been reassessed in recent years or is susceptible to rapid declines or thresholds.Failure to identify such species threatens their survival and undermines the role that aquariums can play in protecting them.IUCN’s trait-based assessment of species’ vulnerability to climate change estimates the relative vulnerability of all birds, amphibians, and corals globally.Relative scores of high and low vulnerability were based on the species’ exposure to climatic change, in combination with their inherent sensitivity and adaptive capacity.Sensitivity and adaptive capacity were assessed based on the species-specific ecological, distribution, morphological and life history traits that exacerbate or mitigate the impacts of climate change.Populations of species that are vulnerable to climate change and are held in aquariums can provide valuable information on biological traits, including through observations of their sensitivity to environmental stresses.These may in turn inform conservation responses and provide important ‘insurance’ populations.Here we assess how many Anthozoa species in Species360 member aquariums have formally been assessed as highly vulnerable to climate change.When looking at conserving evolutionary uniqueness, the EDGE score of a species is critical.The EDGE score represents both the amount of evolutionary history and the threat level of a species.EDGE species are those that in addition to having been formally assessed as threatened by the IUCN Red List assessment process are phylogenetically distinct from their closest related surviving species.EDGE relative score has only been developed for the classes Mammalia, Amphibia, Aves, Reptilia and Anthozoa.Here we looked at EDGE species, but also species with high Evolutionary Distinctiveness that are ‘Least Concerned’ or ‘Near Threatened’.Species at the tipping point of extinction are those assessed by the Alliance for Zero Extinction.AZE is a consortium of conservation-oriented organizations with the goal to ensure the survival of ‘Critically Endangered’ and ‘Endangered’ species that are restricted to single sites.There are 920 species in the AZE list distributed among 588 sites globally for mammals, birds, amphibians, reptiles, conifers and reef-building corals.There are only two AZE listed coral species: Porites pukoensis from Molokai Island, US and Siderastrea glynni from Uraba Island, Panama.Tropical coral reef ecosystems occupy less than 0.1% of the ocean floor but provide habitat for at least 25% of known marine species.Although corals play a key role in the maintenance of marine biodiversity, in 1998 58% of the global corals were threatened by human actions.Furthermore, 25% of corals have been destroyed or severely damaged by the effects of climate change.The loss of corals severely impacts associated biodiversity, including sharks, bony fishes, sea turtles and sponges.Climate change, intensive fisheries, pollution, and the wildlife trade present major 
threats to corals.From 2003 to 2013, corals constituted 98% of the total trade of live animal specimens from Indonesia to the Netherlands.Aquariums are already playing a key role by providing knowledge and expertise in coral reproduction and restoration techniques in natural habitats.Some examples include the SECORE International initiative in which a collaboration between aquariums and researchers are working to re-establish ‘Critically Endangered’ stony corals in the Caribbean Sea and also the work of Taronga Zoo, Australia with the cryopreservation of two species with enough genetic material for the production of over 200 million colonies.Craggs et al. provide a further example of the important role aquariums can play in advancing coral spawning capability and associated production of potentially more climate change resilient populations.However, an assessment of the overall value of aquariums for coral conservation is seriously compromised by a current lack of available information on total species numbers held worldwide.Because of the current concern on coral reefs, we gave a special focus to this group to better explore and identify the potential of aquariums to help conserve these taxa.To assess the number of aquatic species in aquariums, we used data from the Species360 organization that manages the Zoological Information Management System, a real-time international database used by 1 111 aquariums and zoos.We analyzed species holdings of the 594 member institutions that report to have species belonging to the fish classes of Actinopterygii, Elasmobranchii, Holocephali, Myxini, Sarcopterygii and to one class from the Cnidaria phylum, Anthozoa.We excluded 6% of the records because they referred to groups of individuals.Of these, 2 441 records referred to species, 315 to genus, 25 to families, 19 to subspecies, 8 to order, 7 to domestic, 3 to subclass, and 1 to class.Only for Anthozoa, we excluded a total of 13% of the records, comprising 276 species due to reporting only at a group level without precise counting of number of individuals.In order to determine the conservation potential of aquariums we compared the species from the selected aquatic taxa in ZIMS with the CITES Appendices or species index, the IUCN Red List, Vulnerability to climate change, EDGE and AZE.We further calculated the total number of individuals in the entire Species360 network.For some species, the number of individuals is not recorded because they are managed as groups or colonies, ranging from approximately two to hundreds of thousands of individuals, depending on the species and management strategies.Due to the complexities to interpret the number of individuals by groups we did not include groups in our analysis.Species listed in CITES were indexed in three different appendices.In the Appendix I are species that are threatened with extinction and in which trade is not permitted, except in special circumstances such as scientific research.In Appendix II are species which trade should be controlled in order to promote sustainable trade, and in the Appendix III are species protected in at least one country and, consequently, should be treated with special concern from all parties.Here we analyzed the number of species in each appendix for each of the target assessment classes.The IUCN Red List status provides a species-specific indication of the globally threatened status, by listing species in different categories.Species that no longer exist as their last individual has died are categorized as ‘Extinct’ 
and species that have disappeared from the wild but still have representatives in captivity are classified as ‘Extinct in the Wild’.Species in the categories of ‘Critically Endangered’, ‘Endangered’ or ‘Vulnerable’ are referred as threatened.Species that are close to meeting a threatened threshold but evaluated to have a low risk of extinction are considered as ‘Near Threatened’, and ‘Least Concern’.Species can also be catalogued as ‘Data Deficient’ when there is not enough information for their evaluation.In this study we analyzed the number of species in each threat category.Most of the data for marine species assessed for vulnerability to climate change by Foden et al. were of the Scleractinian order.Although the IUCN Red List investigates threats related to climate change, the Foden et al. assessment identified species that might not yet be threatened but can potentially become at risk in a near future due to climate change.We looked at whether focal species were categorized as of high and low vulnerability to climate change.Data for species listed in EDGE and AZE were only available for the Scleractinian corals of the class Anthozoa.We considered as EDGE all the species with an evolutionary distinctiveness score equal or bigger than the mean of all assessed species, according to EDGE, independently of the species IUCN Red List category.To standardize taxonomic names across the six different data sources, we used the accepted scientific name according to Catalogue of Life.Subspecies for which the accepted scientific name were not found or when the species was not specified in the database were not considered in this study.We automatically retrieved the IUCN threat status and scientific names using the taxize R package, which searches for accepted names based on synonyms and fuzzy matching names.We manually searched for the species names that could not be retrieved automatically.We mapped aquarium geographical locations by their associated species IUCN threat category using the R package ggmap.In the case of aquariums holding more than one species, only the species with the highest IUCN Red List threat status was plotted to give an overview of the geographical location of the institutions that hold threatened species.We generated a Venn diagram with the web tool from Bioinformatics and Evolutionary Genomics.For the rest of the maps and plots we used R.For fish, we generated a list of targeted species for species prioritization, based on their overlap between the IUCN Red List status and Species360.We put special focus on those that are already being managed as a studbook in the regional association of EAZA, AZA, PAAZA, ALPZA and ZAA.For corals and anemones, the prioritization consisted on species based on the IUCN Red List, as well as how many of the species from the different threat categories overlap with a high Evolutionary Distinctiveness, AZE, and their vulnerability to climate change.For fish, we provided a list of conservation potential based on the number of species listed in CITES and with an active studbook.The ZIMS database has records of 3 511 aquatic species for the six studied taxonomic classes.For this analysis, we were only able to retrieve the accepted scientific names of 96% of the species, due to a combination of taxonomic conflict issues and genus level only taxa being recorded.The most represented taxa by the number of individuals in aquariums are that of the cartilaginous fishes although not the most species-rich.The class with the highest number of species in 
aquariums is the ray-finned bony fish, with almost 3 000 species, representing ∼9% of all the described species in this class.The Holocephali and Myxini, on the other hand, had the lowest percentage of the described species present in aquariums.As shown in Fig. 1, most of the aquariums are geographically located in temperate zones in Europe and America, while the natural distribution of many fish and corals is located in tropical areas.We found that 62% of the institutions in this analysis have at least one threatened species under their care.From the fish species described in Catalogue of Life, 14% are represented in Species360’s aquariums.Of the fish species listed in CITES, 34% are within Species360 aquarium’s members.Divided by IUCN Red List threatened categories we found that these aquariums hold four of the six fish species listed as ‘Extinct in the Wild’, which have a mean population size of 637.75, with the biggest population of 2 200 individuals, for the butterfly splitfin.However, aquariums do hold additional species whose assessments need updating that are also ‘Extinct in the Wild’, such as Cyprinodon veronicae.From the fish in aquariums, 8% are considered threatened by the IUCN Red List.Of the ‘Critically Endangered’ listed species 15% are in aquariums, which represents only 2% of their fish collections.The largest populations of ‘Critically Endangered’ species are of blackfin tilapia and the Tilapia deckerti, with 570 and 560 individuals, respectively.Aquariums collections are constituted of 2% of ‘Endangered’ species and 5% ‘Vulnerable’ species.The most represented IUCN Red List category is ‘Least Concern’– 100% of the Holocephali and Sarcopterygii, 78% of the Actinopterygii, 67% of the Myxini, and 30% of the Elasmobranchii.Also, it is important to notice the proportion of ‘Data Deficient’ species in aquariums – 33% of the Myxini, 17% of the Elasmobranchii and 5% of the Actinopterygii, with a mean population size of 102.9364 individuals.There are 1 249 species not yet assessed by the IUCN Red List and they have the species with the highest population numbers recorded in aquariums.For example, the guppy, Poecilia reticulata, not yet assessed by the IUCN Red List, is the species with the largest population registered in aquariums with 410 328 individuals.Aquariums hold, at least, 4% of the 6 407 coral and anemone species of the class Anthozoa described in Catalogue of Life.CITES lists 27% of the described Anthozoa, of which 9% are in Species360’s aquariums.There are 234 threatened coral and anemone species of which 14% are in aquariums, accounting for 13% of their Anthozoa collection.Two of the six species of ‘Critically Endangered’ species are in aquariums.Aquariums also hold 11% and 14% of all the species assessed as ‘Endangered’ and ‘Vulnerable’, respectively.For non-threatened species, aquariums hold 23% of ‘Near Threatened’, 26% of the ‘Least Concerned’ and less than 1% of the ‘Data Deficient’ Anthozoa species.Furthermore, 24% of the 611 coral and anemone species assessed as vulnerable to climate change are at least in one aquarium.Broken down by the two categories of high and low vulnerability zoos hold 23% and 24%, respectively, with the highest percentage of those listed as ‘Least Concerned’ .Aquariums in the Species360 network hold 19 out of the 111 Anthozoa coral species listed as evolutionary distinct.From the five zoological regions, only institutions part of EAZA and AZA have active studbooks for Elasmobranchii and Actinopterygii.We found that aquariums in the 
Species360 network have 88% of the 26 species with a studbook in the two mentioned regions. Of the species with a studbook that are not part of the Species360 network, one is listed in Appendix I of CITES. Seven species with a studbook are considered 'Critically Endangered' by the IUCN Red List, with population sizes ranging from two to 406 individuals. The least represented IUCN Red List status in a studbook is 'Endangered', represented by a single species with a population of 11 individuals. The largest studbook population is that of the 'Vulnerable' lined seahorse, with 1 746 individuals. Aquariums have 21% of the 82 coral and anemone species listed both as ED and vulnerable to climate change. Moreover, aquariums hold one species assessed as vulnerable to climate change that has not yet been assessed by the IUCN Red List. Species of concern not yet represented in aquariums are the two species listed by AZE and the 65 species listed as both vulnerable to climate change and evolutionarily distinct. Furthermore, none of the species indexed in Appendices III or I by CITES overlap with another prioritization scheme. However, the two AZE species are indexed in CITES Appendix II, and 81 species listed in Appendix II are considered evolutionarily distinct and vulnerable to climate change. Of the 17 species held by aquariums that are listed as ED and assessed as vulnerable to climate change, the largest population is that of Catalaphyllia jardinei, with 10 042 individuals recorded, while the species with the fewest individuals is Cyphastrea ocellina. Of those, more than half have 20 or more individuals. Of the 107 species held by aquariums that have not yet been assessed by the IUCN Red List, the largest population is that of Corynactis californica, with 12 057 individuals, followed by Ricordea florida with 10 113. Moreover, 12 of those 107 species have only one individual under human care, and another 11 species have only two individuals in ex situ collections. Active management of these species should be considered a priority conservation action. We draw attention to the conservation potential of species listed in different prioritization schemes in Table 4. Given the current biodiversity crisis, coral and fish populations held in the world's aquariums will certainly play an increasingly critical conservation role. Still, the potential of these captive populations to respond to the extinction crisis has not been fully explored. Here we helped to fill this gap by i) assessing the number of described fish and corals recorded in Species360's aquarium network, ii) highlighting targeted species of concern based on different prioritization schemes to inform the development of management programs, and iii) showing the value of aquariums sharing real-time standardized animal records globally to better respond to the current biodiversity crisis. We found that at least 14% of the described fish and 4% of Anthozoa corals and anemones are held in aquariums. However, we strongly expect that there are significantly more species not yet recorded, and we therefore urge aquariums to increase the standardization and sharing of animal records for the species under their care, to maximize their conservation potential as a global network. In 2014, the IUCN shark specialist group revealed that 25% of more than one thousand species of sharks, rays and chimaeras were threatened with extinction due to overfishing, whether targeted or accidental. Yet, only ∼4% of all described Elasmobranchii are listed in CITES and
therefore considered threatened by international trade, of which 34% are in aquariums. Given the high volume of fisheries trade, it is highly likely that more species need to be listed. These gaps reflect a combination of historical policy inertia and inadequate formal species risk assessments, with the result that the trade of aquatic species continues to be poorly regulated in many countries, putting pressure on wild populations. With the species they hold, aquariums are ideally placed to influence public opinion and policymakers so that more species threatened by international trade are included on CITES. Furthermore, aquariums' populations can provide important information on demographic traits and ecological thresholds to inform fishing quotas and coral harvesting, as well as climate change vulnerability. Nevertheless, estimating species' vital rates, such as age at first reproduction, reproductive lifespan and recruitment, is only possible when the population size is large enough. Therefore, shared data on the species aquariums hold are essential to reach statistically reliable numbers for these estimates. This is of particular importance not only for species endangered by international trade but also for those assessed as threatened by the IUCN Red List and other formal assessment and prioritization schemes. Here we showed that more than half of the aquariums worldwide hold at least one species considered to be threatened with extinction by the IUCN Red List, which underlines the potential value of aquariums' husbandry data for saving species of concern from extinction. One of every seven fish species assessed by the IUCN is threatened with extinction, and 8% of these are currently in aquariums. At the tipping point of extinction are species listed as 'Extinct in the Wild'; aquariums hold four of the six EW fish species, with populations ranging from 47 up to 2 200 individuals. At least one of these EW listings needs updating, that of the butterfly goodeid Ameca splendens, since the species has been found in the wild in Mexico. Ensuring viable populations of these species is crucial to prevent their extinction. Unfortunately, there are further species held in aquariums of the Species360 network that are believed to be EW but whose listings have not yet been updated by the IUCN Red List, for example the pupfish Cyprinodon veronicae, which has not yet been formally reassessed by the IUCN. Species listed as 'Data Deficient' are not usually afforded conservation program assistance; however, 111 DD species are recorded as being held in aquariums, and data collected on them can provide important information on species' vital rates to support IUCN assessments. As shown by previous studies, species identified as DD have a high probability of being threatened, or of becoming extinct even before we are able to recognize that they were threatened. The potential of populations in aquariums to contribute to conservation should not only be seen in the light of the IUCN Red List but also within other assessments or prioritization schemes. However, fish have been relatively neglected by these assessments; for example, from 1976 to 2002, no marine fish was listed in CITES. Likewise, fish classes are not yet assessed under EDGE, species vulnerability to climate change, or AZE. This is mainly due to a lack of data and taxonomic uncertainties. However, filling this gap is essential, not least because fish provide 17% of the protein intake globally, and the reduction of fish populations would lead to high economic and social pressures. Conversely, corals are included in more
assessments, as the conservation outlook is bleak for almost a quarter of the species in class Anthozoa currently formally assessed by the IUCN. Our findings showed 77 coral and anemone species listed as 'Least Concern' in aquariums, which consequently may not currently receive the conservation focus they should. However, 74 of these are listed as vulnerable to climate change and therefore justify greater conservation attention. Moreover, with 17% of the evolutionarily distinct corals already held in aquariums, these institutions have high potential to support conservation efforts, as the extinction threat facing the Anthozoa group is so severe. The alignment of different conservation prioritization schemes is of special importance for decision-making in future collection planning. For fish, we would draw attention to the importance of those species threatened by international trade which are already being managed through studbooks, such as the smalltooth sawfish. In total, aquariums intensively manage 26 fish species through a studbook, and 14 of those are considered threatened by the IUCN Red List. Knowledge of the ex situ population sizes of species with active studbooks might help the establishment of new management programs and the development of existing ones. Regarding corals and anemones, aquariums hold 17 species that are severely susceptible to extinction, being indexed as both vulnerable to climate change and evolutionarily distinct, for which conservation programs have yet to be developed. These species are strong candidates for initiating a studbook, through which research into their husbandry, culture and management can be improved in the aquarium and their population viability assured, while they are attentively managed as possible insurance populations for their wild counterparts. Conservation actions depend highly on collaboration between diverse institutions and the integration of different prioritization schemes. The management of populations across institutions as a metapopulation invariably means better chances of successful conservation outcomes. Population sizes and demographics are of extreme importance for genetic variability and robustness, which can influence repopulation success. We found that 4% of the species in our targeted analysis taxa have more than 500 individuals, which is considered a minimum population size to uphold a genetically sustainable population with minimal loss of genetic diversity. The largest population with an active studbook in aquariums has more than 500 individuals and represents a 'Vulnerable' species according to the IUCN Red List. Due to the difficulties of managing animals in groups, a precise count of individuals is frequently not possible. Although challenging, identifying useful data management techniques for these colony-living animals in conservation programs is crucial for conservation-dependent species. Additionally, we need to highlight that corals are likely to be represented by many more species than the ones covered in this analysis. At the same time, for corals, the number of individuals is highly uncertain and underestimated, due to the enormous difficulty of identifying a single individual within a colony for many polyp species. The findings in this study also unveil the issue of taxonomy and the challenge of species identification. We expected a higher number of species than the ones recorded by aquariums, due to identification issues and taxonomic incompatibilities. A prime example of these dual challenges is the Anthozoan corals and anemones, as
the number of species in this class would be at least 55% higher accounting for the reported unidentified species that were not considered due to the lack of identification to species level.Recent initiatives, such as the CORALZOO are helping to address this identification issue.The employment of a standard, shared animal record databases is key to optimizing the ex situ conservation breeding program success for almost all species.For example, the now ‘Extinct’ pupfish Megupsilon aporus could potentially have been saved from extinction.This species naturally occurred in the same spring habitat in Mexico as the EW pupfish Cyprinodon alvarezi.Due to anthropogenic impacts, the spring disappeared and both species became EW in the 1990’s.Remnant populations of both species, however, were maintained in aquarium collections, when in 2013 the population of Megupsilon aporus dropped to dangerously low levels and was only recorded in a few institutions.By the time the remaining holders realized the fragmented metapopulations had only one remaining female, it was too late.It is believed the last fish died in 2014 and the species became ‘Extinct’.Therefore, it could be strongly argued that the integration of captive data in conservation projects could have raised the alert for this species before it reached critical levels and a coordinated effort could have saved the species from extinction.At the moment, conservation practitioners, demographers and scientists in general struggle to get good quality data, especially for species on the brink of extinction.The wealth of data collected by standardized databases such as ZIMS, maintained by Species360, can provide invaluable practical management assistance and also deeper insights of significance to both ex situ and in situ species conservation.By contributing information on life history traits, behavior, water quality requirements and other environmental and biological information of species member institutions are making invaluable contributions to global conservation knowledge and capability.Captive standardized data might also help to generally improve conservation assessments such as the IUCN Red List and climate change vulnerability assessments since wild species-specific data is scarce, hard to obtain and usually biased towards regions, habitats and environmental domains.Even if we rethink our approach to fill gaps in the available knowledge by targeting strategically chosen biases, there is a strong possibility that the gaps will not be entirely filled in a timely manner, delaying action for species in need and posing a dilemma for both conservationists and policymakers, who might not be able to wait years for sufficient data.Even though aquariums keep only a small proportion of all described species, they are in an ideal position to provide important information on species that occur in areas where an in situ study is difficult.Only through shared and standard data is it possible to support the decision making process to manage animal collections as metapopulations across the global aquarium community.Despite our focus on the Species360 member’s data, it is essential to stress the conservation importance of other aquariums that collect high-quality data but are not currently sharing it.Consequently, the number of species reported here is an underestimation of the real number of ex situ managed species.The main goal of species conservation is to protect wild ecosystems, but when we fail to preserve viable genetically diverse populations against 
threats such as habitat loss, disease, overharvesting, predation and pollution, complementary ex situ programs can make the critical difference for species survival.For such conservation breeding programs to be viable, it is crucial to quantify aquariums’ species holdings as these institutions have a great potential for contributing to the conservation of wild populations at risk.Here we overcame the uncertainty of the figure of species by assessing the number of described fish and corals in aquariums in the Species360 global network.Furthermore, we provided a list of targeted species based on prioritization schemes that conservation practitioners can access to further inform their collection planning and conservation program development.We showed the great value of sharing real-time standardized data among aquariums and urge that more institutions realize their data’s full potential when shared in a standard way.Concerted efforts to utilize standardized and shared animal record databases, address species identification gaps and taxonomic issues would greatly improve the conservation chances for many aquatic species and we urge that this challenge is met with the urgency it requires. | Aquatic ecosystems are indispensable for life on earth and yet despite their essential function and service roles, marine and freshwater biomes are facing unprecedented threats from both traditional and emerging anthropogenic stressors. The resultant species and ecosystem-level threat severity requires an urgent response from the conservation community. With their care facilities, veterinary and conservation breeding expertise, reintroduction and restoration, and public communication reach, stand-alone aquariums and zoos holding aquatic taxa have great collective potential to help address the current biodiversity crisis, which is now greater in freshwater than land habitats. However, uncertainty regarding the number of species kept in such facilities hinders assessment of their conservation value. Here we analyzed, standardized and shared data of zoological institution members of Species360, for fish and Anthozoa species (i.e. Actinopterygii, Elasmobranchii, Holocephali, Myxini, Sarcopterygii and Anthozoa). To assess the conservation potential of populations held in these institutions, we cross-referenced the Species360 records with the following conservation schemes: the Convention on the International Trade of Endangered Species of Fauna and Flora (CITES), the IUCN Red List of Threatened species, climate change vulnerability, Evolutionary Distinct and Globally Endangered (EDGE) and The Alliance for Zero Extinction (AZE). We found that aquariums hold four of the six fish species listed by the IUCN Red List as ‘Extinct in the Wild’ 31% of Anthozoa species listed by Foden et al. (2013) as vulnerable to climate change, 19 out of the 111 Anthozoa EDGE species, and none of the species prioritized by the AZE. However, it is very likely that significant additional species of high conservation value are held in aquariums that do not manage their records in standardized, sharable platforms such as Species360. Our study highlights both the great value of aquarium and zoo collections for addressing the aquatic biodiversity crisis, as well as the importance that they maintain comprehensive, standardised, globally-shared taxonomic data. |
397 | A framework for integrating systematic stakeholder analysis in ecosystem services research: Stakeholder mapping for forest ecosystem services in the UK | Since the publication of the Millennium Ecosystem Assessment in 2005, the ecosystem services concept has become popular amongst academics, policy-makers, and practitioners.The increasing use of ecosystem services thinking, however, requires not only the assessment of the goods and services different ecosystems provide, but also a detailed understanding of those who have a stake in such services and why.Until recently, most empirical ecosystem services research has focused either on the identification, mapping, assessment, or quantification or valuation of ecosystem services.Those who did include stakeholders in their work tended to do this in a more general, unsystematic way, and mostly on a regional or local case study level.However, in many cases, stakeholder interests in ecosystem services tend to intersect local, national and international levels.In the past, many efforts at governing and managing ecosystems and the goods and services they provide sustainably have been unsuccessful because the various stakeholders involved and their perspectives and potentially conflicting interests have not been given sufficient attention.The governance, management, and use of ecosystem services involve a wide range of stakeholders with distinctly different but frequently interrelated stakes, which need to be taken into account as they may be fundamental.Stakeholder analysis enables the systematic identification of these stakeholders, the assessment and comparison of their particular sets of interests, roles and powers, and the consideration and investigation of the relationships between them, including alliances, collaborations, and inherent conflicts.It examines “who these interested parties are, who has the power to influence what happens, how these parties interact and, based on this information, how they might be able to work more effectively together” to address environmental and/or natural resource management issues.Indeed, linking ecosystem services to stakeholders and systematically mapping their potential stakes in these will be essential for equitable and sustainable ecosystem governance and management.The findings of systematic stakeholder analysis can be used to recommend or develop future actions, such as new policies or policy instruments for ecosystem services or stakeholder engagement strategies.It can also aid land use planning linked to ecosystem services or support the design of communication tools for their management.Thus, I argue that making explicit the linkages between different stakeholders and their stakes in ecosystems and the various goods and services they provide, should be one of the main purposes of an ecosystem services framework.The increasing use of ecosystem services thinking requires a thorough understanding of the various stakeholders involved in ecosystem services, making a more systematic use of stakeholder analysis necessary.Systematic stakeholder mapping or analysis is a particularly useful approach to assess the stakes of various interested parties in a system in more detail.In recent years, this type of analysis has become increasingly popular in various fields and academic disciplines, including environmental management and governance, and is now regularly used by businesses, regulators, policy-makers and international organisations.Its roots are in management theory and in political science, where 
it has evolved into a systematic tool with clearly defined applications and methods.Stakeholder analysis can be seen “as a holistic approach or procedure for gaining an understanding of a system” and changes in it, “by means of identifying the key actors or stakeholders and assessing their respective interests in the system”.Freeman initially distinguished stakeholders in a business context as “any group or individual who can affect or is affected by the achievement of an organisation’s objectives”.In a natural resource management context, Grimble et al. defines stakeholders as “all those who affect, and/or are affected by, the policies, decisions, and actions of the system”.They can be individuals, or “any group of people, organised or unorganised, who share a common interest or stake in a particular issue or system”.Stakeholder interests often tend to cut across political administrative, social and economic units at international, national, regional and local levels and are likely to include governmental departments, commercial bodies, national and international planners, professional advisers, communities, and individuals.Stakeholder analysis enables the systematic assessment and comparison of their particular sets of interests, influences and roles, and the examination of relationships between them.In natural resource management, stakeholder analysis represented a particularly valuable tool since it typically involves a wide range of stakeholders, using the same resource for different purposes.Initially, stakeholder analysis within natural resource management has mainly been used in developing countries.There, the emphasis has largely been on participation and conflict resolution, following a more general trend towards the development of normative participatory approaches in resource management.Crucially, many past efforts at managing the environment and natural resources sensitively have failed because the various stakeholders involved and their potentially conflicting interests and perspectives have been given inadequate consideration by national policy-makers and regional or local planners.This has frequently led to local resistance of policies and/or projects which then became unsuccessful.Hence, it is essential to understand the different perspectives of the various actors involved and to specify who has an interest in the resource base and the goods and services it provides, to what level, and why.One of the earliest works on stakeholder analysis in a natural resource management context has been published by Grimble et al.; it focuses on tree resources and environmental policy in Cameroon and Thailand.The article introduces a classification system which categorises broad stakeholder groups along a continuum from the micro to macro level.In more recent years, stakeholder analysis has become firmly established as a core component of natural resource management.A number of approaches have been used in different sectors, such as forestry, marine planning, energy policy, water infrastructure, and conservation management.In many parts of the world, the important forest resource tends to involve a particularly large and diverse range of stakeholders, often with competing interests in different forest ecosystem services.Some may also exert considerable influence over forestry.In the UK, the stakeholder landscape linked to forestry appears to be complex and dynamic.Its complexity lies in the breadth of current and potential future interests involved, and in the way in which these 
interests span public and private domains from the national to the local level. A systematic mapping of these stakeholders would allow a better understanding of their multiple stakes in ecosystem services which, in turn, could aid the design of equitable and sustainable ecosystem governance and management strategies, because it provides a detailed understanding of who has a stake and why. However, although there have been several studies that have made extensive use of stakeholder analysis tools in relation to tree pests and diseases, relatively few studies appear to have looked specifically at forest stakeholders within the ecosystem services framework. Those that have, have tended to concentrate on local case studies, often involving local communities, using stakeholder analysis in a general, somewhat unsystematic way. Garrido et al.'s study, for instance, compared how stakeholders from different sectors perceived ecosystem services from the wood-pasture Dehesa landscape of northern Spain. The study compares civil, private and public sector stakeholders on the local and regional level. Agbenyega et al. applied, for the first time, an explicit ecosystem services framework to perceptions of woodlands in the UK. The authors classify the diverse range of functions and services generated by four community woodlands in Eastern England and link these with particular stakeholder interests and preferences. However, comparatively little is known about the stakeholders in forest ecosystem services on the UK macro to micro level, leaving a considerable knowledge gap. Building on this state of understanding, this paper intends to provide a better appreciation, and promote discussion, of a more systematic use of stakeholder analysis in ecosystem services research. Therefore, it aims to present an illustrative stakeholder mapping example, using a key natural resource, namely forests, in the UK. An exploratory qualitative approach was adopted to provide a better understanding of current stakeholders in forest ecosystem services and their particular stakes, characteristics, and relationships on the UK macro to micro level. Informed by this illustrative and exploratory example, the paper then offers a conceptual framework for the systematic application of stakeholder analysis in ecosystem services research, one that is useful to academics, policy-makers, land use decision-makers, and conservationists. Over the last 100 years or so, forest and woodland cover in the UK has increased from 4.6% at the beginning of the 20th Century to 13% today: 10% in England, 15% in Scotland, 15% in Wales and 8% in Northern Ireland. The UK National Forest Inventory defines woodlands as having a minimum area of 0.5 ha, a minimum width of 20 m, tree crown cover of ≥ 20% or the potential to achieve it, and a minimum height of 2 m or the potential to achieve it. All of the forested land in the UK has, to some extent, been modified by management. The majority of woodland is classified as 'Productive Plantation', with 'Modified Natural and Semi-natural' representing 32% of the woodland area, and 0.7% being classed as 'Protective Plantation'. Productive plantations have been established for the production of wood or non-wood goods; the second category covers areas under intensive management, which has led to changes in the structure and composition of the forest; and the last group has been established for soil and water protection, pest control and the conservation of habitats and biological diversity. Forest ecosystems, depending on their location, scale, and management, are one
of the largest providers of ecosystem services. They frequently provide the full range of goods and services as defined, for instance, by the Millennium Ecosystem Assessment. Thus, forests present a particularly useful case study example. In the UK, the Forestry Commission and its devolving country equivalents own or manage 28% of the total woodland area, ranging from 16% in England to 55% in Northern Ireland. The other forest owners comprise approximately 43.6% private owners, 12% businesses, 3.6% charities, and 4.9% local authorities and other public owners. A more recent sample survey of ownership was undertaken as part of the National Forest Inventory from 2009, but the data are not yet available. It should be noted that, due to the devolution of political administration which began in 1998, it has not always been possible to keep a clear UK focus in the illustrative stakeholder analysis presented in this paper. At the time of interviewing, the newly devolved governmental organisations were at various stages of devolution and their administrative competencies were still evolving. Still, it is reasonable to assume that, once these are fully established, their objectives and powers will be broadly similar to those of the previous organisations. In this study, an exploratory qualitative approach was adopted to uncover the stakeholders with an interest in forest ecosystem services and to analyse their particular stakes, roles and positions on different levels. The definition of a stakeholder was adapted from Freeman and from Grimble and Wellard as any organisation, group, or individual interested in or with an influence over woodland ecosystem services. Such stakeholders can be identified through various methods, including documentary reviews, expert interviews, and focus groups. For the purpose of this study, I chose a combined approach, using a literature review in the form of a keyword analysis of organisations' official websites, and a stakeholder-led identification based on expert interviews. The idea was to provide a more general overview of the wide range of stakeholders with an interest in the various forest ecosystem services through the literature review. I then employed a more resource-intensive research method, expert interviews, to provide a more detailed understanding of a much smaller number of key stakeholders through the empirical capture of qualitative information. The intention was to capture stakeholders in the UK from a macro to micro level, building on Grimble et al. The concept of a macro to micro continuum is useful for classifying stakeholders at different levels. To begin with, a preliminary list was drawn up of stakeholders with a general stake in UK forests. It was based on a list of stakeholders compiled by the Forestry Commission. Several other stakeholders were iteratively added from various other sources throughout the data collection. This resulted in 244 stakeholders, comprising a wide range of governmental and not-for-profit organisations, businesses and industry, and individuals. Of these 244 stakeholders on the preliminary list, 32 were either not found on the internet or were part of a larger organisation already on the list. To be able to better distinguish their specific interests in the various ecosystem services, each organisation's interest in the provisioning, regulating and cultural woodland ecosystem services was then determined through a 'rapid' keyword analysis of three of its webpages: 'home', 'about us', or 'what we do'.
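The 'rapid' keyword screen just described can be sketched as a few lines of R (used here for consistency with the analyses reported earlier in the document). The snippet assumes the text of each organisation's 'home', 'about us' and 'what we do' pages has already been saved to plain-text files; the file naming scheme, the helper function and the shortened keyword list are illustrative assumptions and are not the exact terms or tooling used in the study.

```r
# Illustrative screening step: keep an organisation only if its key pages mention
# forests/woodlands AND at least one ecosystem-service keyword.

forest_terms  <- c("forest", "woodland", "wood")
service_terms <- c("timber", "fuel wood", "recreation", "biodiversity",
                   "carbon", "flood", "water quality", "landscape")  # extend to the full Table 1 list

screen_stakeholder <- function(files) {
  txt <- tolower(paste(unlist(lapply(files, readLines, warn = FALSE)), collapse = " "))
  has_forest <- any(sapply(forest_terms, grepl, x = txt, fixed = TRUE))
  services   <- service_terms[sapply(service_terms, grepl, x = txt, fixed = TRUE)]
  list(keep = has_forest && length(services) > 0, services = services)
}

# Pages saved as "<org>_home.txt", "<org>_about.txt", "<org>_whatwedo.txt" (hypothetical)
orgs    <- c("org_a", "org_b")
results <- lapply(orgs, function(o)
  screen_stakeholder(paste0(o, c("_home.txt", "_about.txt", "_whatwedo.txt"))))
names(results) <- orgs
kept <- orgs[sapply(results, `[[`, "keep")]  # stakeholders retained on the final list
```

A screen this crude only narrows the field, which is one reason the resulting list of 83 stakeholders is treated as indicative rather than exhaustive.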
Only stakeholders who specifically mentioned woodlands or forests and one or more ecosystem services listed in Table 1, on one or all of these three webpages, were kept on the final list, leaving 83 stakeholders. For the purpose of this study, 'interest' in ecosystem services was defined simply as interest in the provisioning, regulating and cultural ecosystem services listed in Table 1. The positions of a selection of key stakeholders in forest ecosystem services were further explored through semi-structured interviews with 12 UK-based forestry and conservation experts who were familiar with UK forestry and the concept of ecosystem services. These interviewees were identified through a combined purposive snowballing technique. This approach gave structure and coherence whilst also allowing for flexibility. The semi-structured interviews also allowed considerable focus and hence data "with significant depth or richness" for such an exploratory study. This notwithstanding, the scope of the study and its illustrative, exploratory and qualitative nature mean that the findings are indicative rather than representative. The 20–40 minute interviews were conducted either by telephone or in person between April 2013 and July 2014. They were, with the written and verbal consent of the interviewees, digitally recorded and then transcribed verbatim. Respondents consisted of senior staff from a cross-section of academic institutions, governmental organisations, non-governmental conservation organisations, and private sector forestry organisations. The identification of the key stakeholders in forest ecosystem services was based on the following guiding interview questions: i) Thinking about the provisioning, regulating and cultural services woodlands provide, who would you identify as key stakeholders in forest ecosystem services in the UK and why?; and ii) In which ecosystem services are they interested? To further enrich stakeholder mapping, stakeholders are frequently differentiated between and categorised into groups. For the purpose of this analysis, I chose a literature review, again through a keyword analysis of stakeholders' websites, to distinguish between the wide range of stakeholders with an interest in the different ecosystem services, and a stakeholder-led categorisation combined with an extended interest-influence matrix for a more detailed differentiation of a number of key stakeholders. The partitioning of stakeholders into functional roles, such as according to their respective professional characteristics and interests in ecosystem services, may inform the design of a multi-user communication interface for ecosystem services management. The clustering of stakeholders based on similarities in specific stakeholder characteristics, such as their roles, degrees of power, or their management objectives, may also assist land-use decisions, as it can differentiate more clearly between those who make the decisions and those who are affected by the decisions made, and in what way and to what degree. A variety of methods have been developed for such differentiation and categorisation, including 'interest-influence matrices', 'stakeholder-led categorisation', and 'Q-methodology'. Here, first, I grouped the previously identified 83 stakeholders under provisioning, regulating and cultural ecosystem services and on a macro to micro continuum. This comprised the UK national and regional level, using the county of West Sussex in southern England as a regional example. It was based on the web keyword analysis. The differentiation on the local level was based on the author's judgment. This type
of mapping is useful for classifying stakeholders at different levels and according to the broader groups of ecosystem services they are interested in. To be able to distinguish more clearly between the different groups within such a large number of stakeholders, I then classified them into groups according to their respective professional characteristics and interests in ecosystem services. Secondly, to obtain a more detailed understanding of a selection of key stakeholders, I followed Reed et al.'s recommendation to use an extended interest-influence matrix approach. For this purpose, I asked interview respondents to assess the degree of interest in and influence over woodland ecosystem services of the stakeholders they had recorded, and the reasons for it. 'Interests', as defined under 3.2.1, included both primary and secondary interests. 'Influence' was defined as the ability to affect the provisioning of forest ecosystem services either directly, through use and/or management activity, or indirectly, through policy and/or regulation. The scores for the degrees of interest and influence were calculated as the mean of the ratings given in the interviews. The interviews were based on the following guiding questions: i) How would you assess the degree of their interest in forest ecosystem services?; ii) How would you assess the degree of their influence over these services?; and iii) What are the reasons for their interest in and influence over forest ecosystem services? A number of methods have been developed to investigate the relationships between stakeholders. Reed et al. identify three principal methods: i) Actor-linkages; ii) Social Network Analysis; and iii) Knowledge Mapping Analyses. These approaches are concerned principally with mapping flows of information, relationships and networks to provide a basis for reflection and action. Actor-linkage maps or matrices are generally seen as a useful starting point for discussing relationships and flows of information in a system. I used these in combination with a thematic narrative analysis, based on the exploratory interviews, to examine the relationships between the key stakeholders. First, I divided the selected key stakeholders into five functional groups according to professional characteristics and then examined their different roles as providers, users, and regulators of forest ecosystem services and their relationships towards each other. I defined providers/producers as those who provide or produce forest goods and services, such as timber products and landscape amenity, users or other beneficiaries as those who use or otherwise benefit from them, and regulators or enablers as those with the capacity to set formal and informal rules and regulations which impinge on the behaviour and practices of others. Interviewees were also asked to give examples of synergistic and conflicting relationships between the key stakeholders, specifically around forest ecosystem services. The empirical analysis was based on textual data obtained from the literature, in the form of web pages, and from the interviews. Web pages were searched and coded for the keywords 'forest', 'woodland', 'wood' and the full range of forest 'ecosystem services'. The web pages and the interview transcripts were analysed through hand-annotated codes. In the first round of coding, the latter were searched and coded for stakeholders and the ecosystem services of interest to them. These were then further coded in terms of the level of interest and influence, functional/professional characteristics, and relationships. The findings were presented through a qualitative narrative, supported by verbatim quotes from the interview transcripts, and by summary tables and matrices.
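As a rough illustration of the scoring and tabulation steps just described, the sketch below averages interviewee ratings into an interest-influence table and records coded relationships in a simple actor-linkage matrix. It is again written in R; the stakeholder names, rating values, the 1-5 scale, the threshold and the link labels are invented for illustration and do not reproduce the study's data.

```r
# Hypothetical interviewee ratings: one row per rating of a stakeholder (scale 1-5)
ratings <- data.frame(
  stakeholder = rep(c("Forestry Commission", "Woodland Trust", "Private owners"), each = 2),
  interest    = c(5, 4,  5, 5,  4, 5),
  influence   = c(5, 5,  3, 4,  4, 3)
)

# Mean interest and influence per stakeholder (the kind of summary reported in Table 5)
ii <- aggregate(cbind(interest, influence) ~ stakeholder, data = ratings, FUN = mean)
ii$quadrant <- ifelse(ii$interest >= 3 & ii$influence >= 3, "key player", "other")
ii

# Actor-linkage matrix: coded relationships between pairs of key stakeholders
actors <- as.character(ii$stakeholder)
links  <- matrix("", nrow = length(actors), ncol = length(actors),
                 dimnames = list(actors, actors))
links["Forestry Commission", "Private owners"] <- "regulates / incentivises"
links["Woodland Trust", "Forestry Commission"] <- "partners (catchment projects)"
links
```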
In this section, I present the findings of the illustrative and exploratory mapping of stakeholders in forest ecosystem services in the UK. I begin with a general overview and differentiation of stakeholders with an interest in woodland ecosystem services, based on the literature. This is followed by a more detailed examination of a selection of key stakeholders, based on the interviews. The identification and grouping of stakeholders with an interest in forest ecosystem services in the UK through the literature, in the form of webpages, was not as straightforward as anticipated; it was, at times, challenging to clearly link stakeholders with specific ecosystem services. For example, the review found over 50 voluntary groups and organisations concerned with nature and rural or landscape conservation, many of which were interested in woodlands, yet their specific interest in forest ecosystem services was less clear. These organisations varied considerably in their significance and objectives, and their goals were not always clearly stated on their websites. Moreover, stakeholders' interests in woodland ecosystem services were frequently rather hidden or indirect and therefore not specifically mentioned. Stakeholders with an interest in certain cultural and regulating services were particularly difficult to identify. For example, there was a large number of stakeholders with tourism- or recreation-related concerns. However, even though many of these were likely to have an interest in forests and specific services, these interests were not explicitly stated on the three webpages used for the rapid web analysis. It was even more difficult to identify stakeholders with a specific stake in regulating services. Thus, the final list of 83 stakeholders with a stake in woodland ecosystem services presented here should be seen as indicative. Table 2 provides examples of these stakeholders, grouped into those interested in the provisioning and regulating services and those whose interests are primarily cultural in nature. On the national and regional levels specific stakeholders are identified; on the local level, examples are more generic. In Table 3, based on the information provided by the key websites, stakeholders are divided into nine groups of functional roles, according to their respective characteristics and interests in ecosystem services. The differentiation of stakeholders into meaningful functional clusters can shed further light on the ever-increasing complexity in the management of woodland ecosystem services. Stakeholders are listed according to their estimated influence in descending order. A cautionary note is warranted here: the boundaries between these groupings are not always entirely clear. In this section, I explore several key stakeholders with an interest in forest ecosystem services in more detail through the expert interviews. Fifteen prominent stakeholders or groups were perceived by the interviewees as particularly important players in UK forestry and in the context of woodland ecosystem services. These include both specific organisations and more generic groups, spanning the public and the private domain from the UK national to the local
level. They comprise two government departments, the Department for Environment, Food & Rural Affairs and the former Department of Energy & Climate Change, followed by the statutory country regulators, the Forestry Commission and the Environment Agency, and the statutory country nature conservation agency Natural England. Umbrella membership organisations, such as the Country Landowners and Business Association and the Confederation of Forest Industries, both trade bodies, and the Forestry Stewardship Council UK, an international non-profit, multi-stakeholder organisation with a division in the UK, also emerged from the analysis of the interviews as having a significant interest in forest ecosystem services. So did several non-governmental membership organisations concerned with nature conservation, namely the Woodland Trust, the Royal Society for the Protection of Birds, and the National Trust on the national level, and woodland-owning Wildlife Trusts, active more on the sub-regional or local level. Private forest owners were identified by the interviewees as another key stakeholder group. The analysis of the interview transcripts suggests that these stakeholders had a wide range of frequently multiple, and at times competing, interests in forest ecosystem services, as did less clearly defined stakeholders, including the public, local people and local communities. For the purpose of this study, these less well-defined stakeholders were amalgamated into one group. An additional five generic stakeholder groups, all belonging to the private sector, were also cited by the interviewed experts with reference to the forest ecosystem services of fresh water, hazard regulation, timber and fuel wood. These comprised water companies, energy suppliers, other corporates, developers and insurance companies, all of which appear to have an increasingly important stake in UK forestry, arguably as a result of the promise of new financial opportunities linked to ecosystem services. The analysis suggests that many of the above stakeholders tended to have one or more primary interests as well as secondary interests in ecosystem services. While the boundaries between these interests were not always entirely clear, an attempt was made to summarise them in Table 4. Moreover, during the interviews it was frequently difficult to tease out the actual ecosystem services the selected range of key stakeholders might be interested in, as some of the interviewees struggled to think within the 'ecosystem services box'. For example, several interviewees spoke about general 'access' without mentioning the recreational purpose of such access. Moreover, there was a considerable range of opinions and perceptions amongst the interviewees on what constituted an ecosystem service. This was especially the case amongst those respondents who were less involved in formal policy work. On the other hand, many interviewees, including foresters, seemed unaware of the full range of ecosystem services provided by forests, especially of the less tangible services, such as erosion control, temperature regulation, air quality regulation, hazard regulation and disease regulation. These were rarely mentioned during interviews. The selected key stakeholders were found to have not only a range of different interests in woodland ecosystem services but also different roles and powers. In fact, the analysis suggests that several of these stakeholders exerted considerable influence over the management of forest ecosystems in general and over the provision of
specific ecosystem services in particular.These include governmental organisations, especially Defra and the Forestry Commission, but also several more influential umbrella organisations, namely the CLA, the RSPB, and, increasingly, the Woodland Trust.The influence of the governmental departments and the Forestry Commission over forest owners appeared considerable, involving both direct powers through regulation and indirect influences through various incentive schemes; the latter primarily impacting grant holders.Interestingly, their regulative powers frequently appeared to be linked to formal transnational commitments, such as the 2009 EU Renewable Energy or Water Framework Directives.Amongst the umbrella membership organisations, the analysis identified a clear distinction between those stakeholders chiefly motivated by commercial and production concerns and those with more explicit biodiversity conservation agendas and other public interests.Crucially, stakeholders with wider conservation interests comprised a diverse set of organisations with differing primary objectives for their woodland management.The National Trust, for example, manages its woods particularly for public access, whereas the RSPB operates its woodlands primarily for biodiversity as do the Wildlife Trusts.Nevertheless, most also tended to manage their woods for a whole range of other ecosystem services enjoyed by the public.In terms of influence, the analysis of the interview transcripts suggests that the powers of certain membership organisations were due in part to their involvement in policy development, campaigning and lobbying, but also to their control over the actual management of their wooded land.This includes umbrella organisations with production and conservation interests.One of the interview respondents, for example, noted that: “The big organisations, the big NGO’s, such as the RSPB, and the Woodland Trust, also have reasonable amount of influence … Where these sorts of organisations score when it comes down to influence is because they, as organisations, also get involved in lobbying and trying to influence things politically, …”.Private forest owners emerged “as the most important stakeholders”, as one respondent put it, in their capacity as providers of forest ecosystem services, including those enjoyed by the public.They also exert control over their land.However, the analysis suggested that they also represented a diverse group with an equally diverse range of management objectives and interests in ecosystem services.One interviewee noted that they were ranging from “everything from the estates right down to hobby owners who have just got a few acres of woodland.….And then you got the … slightly larger farms, which have got woodlands”.Local people and communities, as principal users of forest ecosystem services, appeared to be an even more complex group, especially because they include direct and indirect users or beneficiaries, comprising the entire forest supply chain.A summary of the above stakeholders and the extent of their interest in, and influence over forest ecosystem services are shown in Table 5.The analysis of the interviews revealed that stakeholders tended to have different roles, either as producers, users or regulators, or a combination of these, of a range of different forest ecosystem services.To gain a better understanding of these multiple relationships, the above key government, civic, and private stakeholder groups have been analysed in relation to who makes decisions about 
ecosystem services either as enabler or regulator or as producer or provider, and those who use ecosystem goods and services and are affected by the decisions made.From the analysis of the interview transcripts, it emerged that, with the exception of the Forestry Commission and forest owning local authorities, the governmental stakeholder organisations examined, tended to be primarily enablers or regulators of forest goods and services.The woodland owning conservation NGOs were found to be mainly service providers whereas the trade bodies tended to be ecosystem services producers through their woodland and business-owning members.Certainly, all of these were also users or beneficiaries of forest ecosystem services in one way or another.In the UK, even though forest and woodland cover has increased substantially over recent decades, pressure on the forest resource has also grown.The analysis of the interview transcripts indicated that this may have resulted in growing competition in this intensively used and highly valued natural resource.Interestingly, particular tensions seemed to have arisen amongst the selected key stakeholders around transnational obligations associated with climate and natural hazard regulation, fibre, and fresh water.For example, several respondents mentioned a particular dispute over carbon ownership.One of them explained: “the government claims ownership of all the carbon in UK woodlands as part of its Kyoto commitments.So, that is not available for the actual owners to trade because, effectively the government is trading it intergovernmental, internationally as a government”.Another area of tension seems to have occurred as a result of the 2009 EU Renewable Energy Directive.The directive required member states to increase their use of renewable energy to 20% by 2020; woody biomass was expected to play a key role in this.However, the analysis indicated that the established timber industry was increasingly concerned that the electricity generators would take timber from their feedstock and turn it into fuel wood.In fact, one of the respondents claimed that “there is quite a lot of tension at the moment”.The analysis suggests that there were competing interests among several members of the Defra family because they all had different international commitments linked to forest ecosystem services to fulfil.Upland heathland areas emerge as a particularly pertinent example.Natural England had been aiming to restore former heathland to fulfil its international biodiversity target.However, some of the targeted areas had only been afforested by the Forestry Commission 30–50 years ago in order to fulfil the governments’ then afforestation target; the Commission seems to prefer to retain the trees to achieve the governments EU Renewable Energy and Kyoto obligations.Similarly, the Environment Agency and some water companies appear to be increasingly interested in upland tree planting to help ameliorate flooding events and to fulfil their own Water Framework Directive targets on water quality.The analysis also suggests that, partly due to the ever-widening scope of forestry, some of the key stakeholders were increasingly drawn into partnerships or wider networks linked to ecosystem services.This includes policy remits linked to water regulation, and renewable energy, i.e. 
woody biomass, and biodiversity. Catchment partnerships to improve water quality and reduce flooding, in response to the 2000 EU Water Framework Directive, and networks to increase woody biomass production and usage, in response to the 2009 EU Renewable Energy Directive, were particular examples mentioned during the interviews. The former generally tended to be catchment-scale project partnerships, often involving the Woodland Trust, the Environment Agency, the Wildlife Trusts, water companies and private landowners. The latter were local networks, frequently initiated or led by the Forestry Commission to promote wood fuel through the utilisation of existing supplier relationships between retailers, local farmers, and other suppliers. The analysis of the interview data showed that new health-related partnerships are also beginning to form on the local level, involving local authorities, the Forestry Commission, and other public health providers. In this section, I first discuss some of the findings of this illustrative and exploratory study in the light of existing work, highlighting this paper's contribution. I then propose a conceptual framework for the use of systematic stakeholder analysis in work related to ecosystem services. In the exploratory study presented here, stakeholder mapping was applied explicitly in order to link multiple ecosystem goods and services with particular stakeholders, using UK forestry as an example. It focused on a range of civic, public, and private stakeholders or stakeholder groups with different spheres of interest, priorities, and concerns on different scales and levels. The case study, whilst providing a useful illustrative example to promote discussion of the idea of a more systematic use of stakeholder analysis in ecosystem services research, also fills an important gap in the literature. In particular, its attempt to assess stakeholders in forest ecosystem services on a macro to micro level addresses a gap, as most studies that include stakeholders in ecosystem services research do so on the local level only. Indeed, in both ecosystem services and forestry sciences, relatively little attention has been given to the users, providers, and regulators of the various forest ecosystem goods and services on different scales. The scope of forestry in the UK has widened considerably over recent years, continuously adding new stakeholders with a direct or indirect stake in forest ecosystem services. Aligning these interests in a way that sustainably balances the environmental, social and economic needs of current and future generations is complex and requires a sound understanding of all the stakeholders involved. Thus, the forestry sector provides a particularly useful example to illustrate the importance of systematic stakeholder analysis in ecosystem services research. The illustrative stakeholder analysis presented in this paper has highlighted a number of challenges involved in clearly linking specific ecosystem services with stakeholders. In particular, the complexity involved in ecosystem services research and the relative novelty of the ecosystem services concept make it, at times, difficult to identify stakeholders in the context of forest ecosystem services. Crucially, at the time of data collection, there was still a lively debate within the academic community on what exactly constituted an ecosystem service. A review of the ecosystem services literature by Seppelt et al., for instance, illustrates an abundant use of the term which gave rise to concerns
about its arbitrary application. This difficulty is reflected in the exploratory stakeholder analysis example by the considerable range of opinions and perceptions amongst those interviewed on what constituted an ecosystem service. Comparable observations have been made by other researchers in empirical studies on the local level. Asah et al.'s work, for instance, illustrates how people identify benefits in many of the same ways and categories as in the MA but also merge or expand existing MA categories in novel ways. Accordingly, several authors have emphasised the need for new or improved definitions and classifications. Even the latest comprehensive, collaborative global initiative to create a detailed classification and organisation of provisioning, regulating, cultural, and supporting ecosystem services, the Common International Classification of Ecosystem Services, struggles to settle on a common operational definition and classification of ecosystem services. Thus, other scholars have called for the use of different classifications for different purposes, adding to the complexity. Consequently, in any systematic stakeholder analysis linked to ecosystem services, it is important to set clear boundaries at the outset. The results also suggest that most of the interviewed forestry and conservation experts were unaware of the full range of ecosystem services provided by forests, especially of the less tangible regulating services. A similar lack of awareness is also apparent as regards cultural ecosystem services, confirming findings by other scholars. Indeed, the importance of cultural ecosystem services has been described by Oteros-Rozas et al. as highly context-specific. While the findings of these scholars were based on local case studies, the study presented in this paper considers the UK from the macro level down to the micro level. This notwithstanding, as stakeholders gain more awareness and understanding of ecosystem services, their interests may change and may come to include ecosystem services not considered here. It is, therefore, very likely that similar stakeholder analyses will reveal more or different stakeholders. To accommodate such evolution in interests, and to better reflect the versatile nature of, in this case, forestry, stakeholder analysis should be seen as a continuous process. The illustrative analysis is useful in highlighting the wide range of frequently multiple primary and secondary interests of an equally diverse range of stakeholders in forest goods and services, with some of them being users of services and others producers or regulators, or a combination of these, creating interesting dynamics. The issue of multiple objectives among multiple or even the same stakeholder groups has also been reported by other scholars in different environmental management contexts. Duggan et al., for instance, proposed that in the context of fisheries, stakeholders "were not exclusively interested in one objective but often showed dominant interests amongst fluctuating interests". This, however, can be a source of bias, particularly if the multiple objectives appear to be in conflict. Similarly, the results of the exploratory study presented in this paper suggest that the multiple interests in forest ecosystem services of several government departments and organisations appear to have caused tensions. Conversely, there is also some evidence of increasing collaboration between several of the key stakeholders. Interestingly, the findings suggest that both conflicts and synergies frequently link to
transnational obligations. Thus, it will be of interest to further map out and analyse the conflicts and synergies on various scales, in more detail. Significantly, the findings also suggest that in the UK there is a particularly wide range of woodland owners, spanning governmental organisations, conservation NGOs, and commercial and non-commercial private owners, all of which also tend to have numerous interests in forest ecosystem services. Previous reports and academic articles have highlighted the diversity of woodland ownership in the UK; however, these concentrated on private woodland owners. Therefore, there is still a need to examine and classify the entire range of woodland owners in more detail, including the management objectives of public, community, and NGO ownership, as these groups also own a considerable quantity of forest. This exploratory study makes a start in looking into the latter in more detail, through a more thorough investigation of the National Trust, the Woodland Trust, the RSPB, and the Wildlife Trust, all of which own substantial woodland. Still, further work would be useful. Similarly, there is a wide range of users of ecosystem services. However, these might be in distant locations or may belong to different functional groups on different spatial levels, necessitating a more systematic examination in future studies that transcends the local realm and encompasses different geographical and governance scales. Drawing on the illustrative example, I propose a conceptual framework for the systematic inclusion of stakeholder analysis in contemporary ecosystem services research. The framework combines and builds on Hein et al.'s typology of ecological and institutional scales for ecosystem services provision and Reed et al.'s schematic representation of key steps for stakeholder analysis in natural resource management. The latter provides a three-phase model, entailing 1) the context or planning phase, 2) the application of stakeholder analysis methods phase, and 3) subsequent actions, which is further developed here. However, these phases frequently overlap, with potential links in different directions between the different steps. There may be feedbacks between the execution of the stakeholder analysis and the context in which it is done, or even between the different applications of stakeholder analysis methods. For example, an investigation of stakeholder relationships using social network analysis could be used to further differentiate between and categorise groups from which stakeholders can be selected for future actions. Any stakeholder analysis needs to start out by understanding the context in which it is to be conducted, by setting clear boundaries, and by having a clear purpose. The illustrative empirical example showed that in ecosystem services research it is particularly important to establish a clear focus on the issues under investigation due to the high level of complexity involved. Researchers are now not only dealing with a potentially wide range of stakeholders, but they also need to consider numerous ecosystem goods and services. Moreover, ecosystem goods and services are generated at all ecological scales and their supply affects stakeholders at all institutional levels. However, institutional and ecological boundaries rarely coincide and stakeholders in ecosystem services frequently cut across a range of institutional and ecological zones and scales. Crucially, some types of ecosystems provide more ecosystem goods and services than others. Similarly, the same
ecosystem type in one location may not provide the same services in another place.Stakeholders may also greatly vary from location to location, and scale.It is thus vital to have a clearly defined focus and purpose of the stakeholder analysis from the outset with clear system boundaries for the analysis.This phase frequently involves the participation of stakeholders.In Fig. 1, Hein et al.’s typology has been incorporated into Reed et al.’s stakeholder analysis context phase, now called planning phase.Once foci and clear boundaries have been set, researchers can move on to the actual stakeholder analysis phase.Reed et al. distinguishes between three different levels of stakeholder analysis applications.These are, first, the identification stage, followed by the differentiation and categorisation stage, and finally the investigation of relationships between stakeholders.These three stages have been usefully illustrated in this paper through the empirical example of stakeholders in woodland ecosystem services in the UK.Reed et al. also propose a range of available methods for each application stage and when best to use them2.These include literature, interviews, and focus groups for the identification of stakeholders, interest-influence matrices and Q methodology for the differentiation between and categorisation of stakeholders, and actor-linkages matrices and social network analysis for investigating relationships between stakeholders.The choice of methods used depends on the exact purpose of the stakeholder analysis, the resources available, and the skills of the researcher.Methods range from those that can be used easily and rapidly with little technical expertise or resources to methods that are highly technical and rely on specialist computer software.Illustrations of the former have been given in the exploratory empirical example.Although the less technical methods often offer less precision, this may be deemed acceptable in some circumstances.In fact, the illustrative example presented in this paper, showed that even simple exploratory approaches can provide very useful insights.Moreover, stakeholder analyses may be undertaken with or without the involvement of stakeholders or with part involvement in certain aspects of it.The findings of systematic stakeholder analysis in ecosystem services research can then be used to recommend or develop future activities, such as new policies or policy instruments linked to ecosystem goods and services or decision-making strategies.For example, a systematic stakeholder analysis can help specify who should be involved in a specific policy or decision-making process and why.Ecosystem service users/beneficiaries and providers are dispersed horizontally across sectors and vertically at multiple governance levels, requiring a thorough understanding of all those involved.Moreover, ecosystem services related decisions frequently involve trade-offs between different objectives and values held by different groups of stakeholders or individuals, and at different scales, some of which may not be well represented in the process.Others may not even be recognised or acknowledged at all.However, only when all the stakeholders and their differing economic, social and environmental interests in ecosystem services are fully recognised, can stakeholders be more equally represented or involved in decision-making and land use planning.For example, specifying and mapping the demand and supply of ecosystem services amongst different stakeholders may aid locally 
beneficial, balanced, and equitable multi-functional land use decisions.Only when there is a clear understanding of which ecosystem services are provided and where, and who produces and/or uses or otherwise benefits from them, can synergies and trade-offs between ecosystem services be assessed and addressed.Moreover, the partitioning of stakeholders based on their similarities in specific stakeholder characteristics, such as their roles, degrees of power, their management objectives, or their level of operation can assist a range of ecosystem services governance and/or management processes and strategies.For example, the partitioning of stakeholders into functional groups, for instance, according to their respective professional characteristics and interests in ecosystem services may inform the development of policy instruments, such as payments for ecosystem services.It may also inform the design of a multi-user communication interface for ecosystem services management.In this paper, I endeavoured to corroborate ecosystem services research with systematic stakeholder analysis.Although the scope and exploratory nature of the systematic stakeholder mapping/analysis presented here means that the findings are illustrative rather than representative, they still provide useful information of a wide range of stakeholders in forest ecosystem services on different levels, filling a gap in the forestry literature.The results also provide a baseline for further investigations linked to forest ecosystem services in the UK and using more complex participatory or quantitative techniques.These may include a more detailed analysis of the new communities of interests in forest ecosystem services and of the conflicts, synergies, and trade-offs linked to forest ecosystem services.Moreover, the research found that there is still a general need for a clear and common definition and classification of ecosystem services inasmuch as it has been challenging to work with those currently available.The increasing use of ecosystem services thinking requires a thorough understanding of the various stakeholders involved in governing or managing ecosystem services, making a more systematic use of stakeholder analysis necessary.However, due to the high level of complexity involved, the application of systematic stakeholder analysis in ecosystem services research needs careful consideration and planning.The comprehensive framework presented here assists the systematic and detailed identification of stakeholders in ecosystem services, the assessment, and comparison of their particular sets of interests, influences and roles, and the consideration and investigation of relationships between them.It is hoped, that this paper will stimulate further discussion and work on a more systematic use of stakeholder analysis in ecosystem services research. | The concept of ecosystem services offers a useful framework for the systematic assessment of the multiple benefits ecosystems deliver. However, the anthropogenic focus of the concept also requires a detailed understanding of the stakeholders interested in the goods and services ecosystems provide. Indeed, linking ecosystem services to stakeholders and systematically mapping their potential stakes in these is essential for effective, equitable and sustainable ecosystem governance and management because it specifies who is in the system and why. 
This paper endeavours to provide a better appreciation of systematic stakeholder analysis in ecosystem services research by, first, presenting an illustrative stakeholder analysis example, using a key natural resource in relation to ecosystem services: forests in the UK. In this exploratory study, a qualitative approach was adopted, using a literature review and interviews to identify the stakeholders with a stake in the provisioning, regulating and cultural ecosystem services of forests, to distinguish their characteristics, and to examine their relationships towards each other on different levels. The illustrative example then informed the design of a conceptual framework for the systematic application of stakeholder analysis in ecosystem services research. The comprehensive framework consists of a three-phase model entailing the planning phase, the execution of the actual stakeholder analysis phase, and, finally the subsequent actions. The framework incorporates stakeholders and ecosystem services on a geographical, institutional and ecosystem level. Systematic stakeholder analysis can be used to develop future activities linked to ecosystem services, including new policy or instruments, stakeholder engagement activities, and decision-making processes. |
398 | Customized workflow development and data modularization concepts for RNA-Sequencing and metatranscriptome experiments | RNA is in the form of mRNAs, tRNAs and rRNAs the type of molecule that interconnects the mechanisms involved in the readout of genetic information from the genome to protein.However, different types of RNA participate in a wide variety of additional processes.These include RNAs involved in the regulation of multiple physiological processes, following similar principles, often through forming sequence-specific base pairings with cellular RNA or DNA targets.Among the types of RNAs fundamental to these RNA-based systems are miRNAs in eukaryotic cells, small RNAs in bacteria and archaea, but also CRISPR RNAs, which are at the heart of the prokaryotic immune mechanism.All these RNAs act by using seed sequences that are presented through a particular ribonucleoprotein complex.Different, yet highly relevant classes of regulatory RNA are antisense transcripts that often play gene expression modulating functions in all three domains of life and long non-coding RNAs in eukaryotes that often impact epigenetic status and chromosome organization.Most RNA-Seq experiments are performed to measure differential gene expression.For this, RNA-Seq targets the composition of the entire transcriptome in a sample using next-generation sequencing techniques.By quantifying and comparing the transcriptome composition between samples of a time series or from different tissues or cell types, differences in gene expression are detected.Therefore, RNA-Seq can be applied to any kind of cell from any kind of organism, even without prior knowledge about its genome sequence.However, RNA-Seq is a powerful tool not only to analyze quantitative changes in gene expression.With RNA-Seq exon/intron boundaries as well as alternatively spliced transcript variants can be detected and quantified or post-transcriptional modifications identified.In more specialized RNA-Seq protocols, information is obtained about the suite of active transcriptional start sites or particular RNA classes such as sRNAs or miRNAs.One of the RNA-Seq variants targeting a particular fraction of the transcriptome is Ribo-Seq, or Ribosomal profiling.This approach targets specifically mRNA sequences protected by the ribosome during the process of translation.Therefore, it provides information on the complement of actively translated mRNAs at a certain moment, on the presence of signals for translation and on the regulation of protein synthesis.Other RNA-Seq variants target a particular fraction of the transcriptome, e.g., after size fractionation, specifically miRNAs and other sRNAs.One widely applied specialized RNA-Seq protocol is called differential RNA-Seq, a prolific experimental approach for the identification of all active TSSs at single-nucleotide resolution.As this method not only identifies TSSs linked to an mRNA but all TSSs, it is a superior approach for the detection of bacterial sRNAs.The dRNA-Seq protocol was first applied to the human pathogen Helicobacter pylori and rapidly applied to other bacteria, such as E. 
coli, Salmonella enterica serovar Typhimurium, Streptococcus pyogenes, Xanthomonas and various cyanobacteria.Whereas dRNA-Seq initially was primarily developed for bacteria, it has been applied to archaea and to eukaryotic cells.In a variant of dRNA-Seq, called “dual RNA-Seq”, the primary transcriptomes of a bacterial pathogen together with that of its eukaryotic host cells are analyzed in parallel.More recently, this methodology has been expanded for the analysis of complex environmental assemblages of organisms belonging to diverse species from all three domains of life.Metatranscriptomic differential RNA-Seq as well as metatranscriptomic RNA-Seq are protocols to analyze the highly complex transcript pools of entire biological communities or microbiomes.The range of RNA related sequencing experiments and subsequent data analyses are steadily increasing.Researchers have to weigh and decide on specific technologies and combinations of experiments which are getting more and more complex.Likewise, newly developed tools, which are rapidly published for many sequencing data analysis tasks, are also based on more sophisticated algorithms.A current collection of commonly used tools of any kind can be obtained at omic.tools and bio.tools.Tools, which are no longer maintained or were never designed to cope with evolving RNA-Seq protocols and the rapidly increasing amount of available sequence data from first, second and third generation sequencing approaches become outdated over time.Another phenomenon is that data analysis tools being continuously maintained may change their behavior and parameters over time.Therefore, the reuse of formerly generated workflows is frequently not simple or not possible at all to adapt towards certain analyses at the present time.Another major challenge is the comparison, benchmarking, selection and integration of the most appropriate tools, which is time-consuming and needs computational domain expertise.Depending on the number of samples, the scale of time series and sequencing depth, computations may require heavy computational resources such as cluster, grid and cloud computing solutions.An adaptive management of available computing resources by load balancers and queuing systems is often inevitable in creating analysis workflows.The German Bioinformatics Network Infrastructure as well as the European Network ELIXIR are aiming at supporting and training scientists with respect to diverse bioinformatics questions.In particular, the de.STAIR project focusses on the needs of the experimental researchers for robust data analysis tools and, therefore, develops tailor-made workflows for RNA-Seq experiments and further downstream data integration approaches to facilitate the accessibility of the latest bioinformatic tools, the most suitable analysis approaches and flexible computing environments.The de.STAIR service is highlighted in Fig. 
1 and represents the data analysis elements of the workflows that are developed within the infrastructure of the RBC. We offer workflows for pro-, eukaryotic and mixed dual RNA-Seq experiments for multiple input layers, like raw fastq, quality controlled fastq, sam/bam files, etc. The data analysis procedures cover preprocessing, alignment and further advanced downstream analyses such as alternative and non-linear splicing, differential expression, epigenetic analyses and many more. The output from the workflows includes quality reports, calculations and predictions for novel transcripts, probabilities of differentially expressed transcripts and transcript characterizations like annotations, such as GO, KEGG, Panther, wiki pathways, HMDB, DisGeNet, Reactome, and methods to visualize the results. The following sections provide detailed insights into the technology used for the distribution and modularization of the workflows, the necessity for integration of transcriptomic data and specific examples of the potential of our service. Reduced costs and increased accuracy of biological sequencing have enabled the investigation of biological phenomena at high resolution. Despite the low entrance barriers and easy-to-use experimental protocols, the challenge of proper, transparent and reproducible data analysis remains a bottleneck. With respect to the number of data analysis steps, the complexity of decisions on tool selection is increasing likewise, hence calling for systematic workflow development and management frameworks. The de.NBI and ELIXIR initiatives are supporting the expansion and further development of accessible workflow frameworks: Galaxy and the Galaxy-RNA-Workbench: The Galaxy project is a framework that makes advanced computational tools accessible without the need for prior extensive training. Galaxy seeks to make data-intensive research more accessible, transparent and reproducible by providing a web-based environment in which users can perform computational analyses and have all of the details automatically tracked for later inspection, publication, or reuse. "Applicable for non-computational users on a public server, explanatory interactive Galaxy tours, Galaxy "Tool Shed" for advanced users, free to use, broad community with over 80 public servers available for various tasks, pre-built Docker/rkt images, international training network, new tools need to be xml wrapped to be integrated", KNIME: The Konstanz Information Miner is a modular environment, which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables simple integration of new algorithms and tools as well as data manipulation or visualization methods in the form of new modules or nodes. Modular, grid and user support environment, workflows are interoperable and represented as Petri nets, hierarchy of workflows possible, e.g., meta nodes can wrap a sub-workflow into an encapsulated new workflow, framework enables "hiliting", execution of workflows on high performance clusters only within the commercial version, Chipster: Chipster is a user-friendly analysis software for high-throughput data. Its intuitive graphical user interface enables biologists to access a powerful collection of data analysis and integration tools, and to visualize data interactively. Users can collaborate by sharing analysis sessions and workflows. Desktop application user interface available, strong support and easy integration of R based tools, freely available
and open source client-server system, about 25 different visualizations, Snakemake: Snakemake is a workflow engine that provides a readable Python-based workflow definition language and a powerful execution environment that scales from single-core workstations to compute clusters without modifying the workflow. Readable Python-based workflow definition language, efficient resource usage, available on Linux, computationally advanced command line based framework, interoperates with any installed tool or available web service, jobs can be visualized as a directed acyclic graph. In addition, a systematic search and evaluation of further workflow management frameworks with a focus on RNA-Seq data analysis was done by Poplawski et al. After choosing and setting up the analysis workflow within an appropriate framework, one has to decide on a reasonable computing environment. In general, computing environments can be divided into web-based, offline, and hybrid solutions. According to the National Institute of Standards and Technology, cloud computing is defined "as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction". Due to the vastly increasing amount of computational data being created, large consortia, namely public-private partnerships, are established to share objectives, resources, costs, risks, and responsibilities between academia and industrial partners. The most frequently used commercial cloud services are Google's and Amazon's Web Services private cloud-computing infrastructures. Due to concerns about data safety, security and privacy, cloud computing is rather weakly adopted within the healthcare system. An emerging solution to deploy the workflows, including all necessary tools and dependencies, is the use of software channels and containers like Bioconda, Docker or rkt. These containers are emerging as a possible solution for many of the formerly addressed concerns, as they allow the packaging of workflows in an isolated and self-contained system, which simplifies the distribution and execution of tools in a portable manner across a wide range of computational platforms such as Galaxy and KNIME. The technology combines several areas from systems research, especially operating system virtualization, cross-platform portability, modular reusable elements, versioning, and a "DevOps" philosophy.
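To make the container-based deployment concrete, the following minimal sketch starts a containerized Galaxy instance from Python. It assumes Docker is installed and uses the published Galaxy RNA workbench image name as a placeholder; both the image and the port mapping should be adapted to the actual deployment.

```python
import subprocess

# Minimal sketch: launch a containerized Galaxy instance for RNA-Seq analysis.
# The image name below is assumed to be the published Galaxy RNA workbench
# image; substitute whatever container your group actually deploys.
IMAGE = "bgruening/galaxy-rna-workbench"

def launch_galaxy(host_port: int = 8080) -> str:
    """Start the container in the background and return its ID."""
    cmd = [
        "docker", "run",
        "-d",                      # detach: run in the background
        "-p", f"{host_port}:80",   # expose the web interface on the host
        IMAGE,
    ]
    result = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print("Galaxy container started:", launch_galaxy())
```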
For example, Wolfien et al. and Schulz et al. demonstrated successful implementations of a Galaxy/Docker based workflow with discrete software applications for the analysis of NGS data. Workflow management frameworks and cloud computing services are bridging the gap between tool developers and end users, aiming towards easily applicable and scalable computational data analysis. This in turn allows for improved data reproducibility, process documentation, and monitoring of submitted jobs. Finally, workflows facilitate the use of state-of-the-art computational tools which would be hard to access for non-experts without graphical user interface frameworks. However, the use of workflows could be even more simplified for experimental researchers by strengthening the specific focus on the addressed research hypothesis and lessening the effort for the selection of the most appropriate tool. The selection and benchmarking of new tools by the bioinformatician is a crucial step for establishing and updating applicable data analysis workflows for non-computational experts with the help of modularized workflow development. Starting with a hypothesis or research question, the user will be guided to the necessary input data type and the most suitable software solution will be provided as a modularized workflow. Therefore, the comparison and the selection of existing tools as well as their implementation into the computing infrastructure will be omitted for the end-user, which saves time and guarantees an expert-driven data analysis. With respect to our documentation, the parameters have to be adjusted and optimized to obtain the final results. In order to adapt workflows over time, we recommend keeping up with the changes in the tools by a registration in bio.tools, where they are described by means of the EDAM Ontology. This ontology enables the characterisation of a tool's input formats, output formats, and parameter types. Our proposed software layer would therefore leverage the EDAM Ontology to infer what tool and parametrization can be used to carry out the desired task. This approach is tool implementation agnostic, which means that if the tool changes, its EDAM terms change, and therefore our recommendation for the most appropriate tool can change accordingly. This software layer implements a recommendation system, which empowers the user to decide on specific modules of the workflow to run against the provided input data. The result will be an expert-driven, tailor-made workflow to perform the most appropriate computational data analysis. In order to enhance the usability, the workflows can be showcased by means of a Galaxy Tour: an interactive guide that illustrates how the main components of the workflow connect in relation to real-life user tasks. In this section we showcase the importance of workflow development for differential gene expression analysis of standard RNA-Seq protocols. This workflow will be modularly implemented by de.STAIR, taking care of parameter settings for different tools. Sequencing technologies that involve DNA amplification steps, including RNA-Seq analyses, can cause asymmetric sequence amplification due to the inherent GC bias. A highly efficient approach to recalculate the initial real number of transcripts after the amplification and sequencing step is called "digital RNA sequencing". This procedure combines the specific tagging of sequences with unique barcodes with a distinct strategy for post-processing analysis. The tagging takes place before the amplification is performed, whereas the unique barcodes are counted after the sequencing to retrieve the original number of transcripts.
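A minimal sketch of the post-processing idea behind digital RNA sequencing is given below: reads assigned to the same gene that carry the same unique barcode (UMI) are collapsed into a single molecule count, so that amplification duplicates do not distort the quantification. Gene identifiers and barcodes are purely illustrative.

```python
from collections import defaultdict

# Minimal sketch of UMI-based counting for digital RNA sequencing:
# reads that map to the same gene and carry the same unique barcode are
# collapsed, so PCR duplicates no longer inflate the counts.
# The (gene, umi) tuples below are purely illustrative.
def count_molecules(read_assignments):
    """read_assignments: iterable of (gene_id, umi_sequence) per mapped read."""
    umis_per_gene = defaultdict(set)
    for gene, umi in read_assignments:
        umis_per_gene[gene].add(umi)          # duplicates collapse in the set
    return {gene: len(umis) for gene, umis in umis_per_gene.items()}

reads = [("geneA", "ACGT"), ("geneA", "ACGT"), ("geneA", "GGTA"), ("geneB", "TTAC")]
print(count_molecules(reads))                  # {'geneA': 2, 'geneB': 1}
```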
Assuming demultiplexed RNA-Seq data, the processing usually starts with raw reads provided in fastq format. In addition to the sequence of nucleotides, the fastq format also provides a quality value, i.e. a Phred score, for each of the sequenced bases. In the first step, an evaluation of these quality values as well as the calculation of the GC content, read duplication levels and contaminations is crucial for any further analyses. The quality visualization tools FastQC or NGS QC Toolkit calculate multiple quality statistics for read data which can be used to adjust parameters for downstream analysis. For example, possible remaining adapter sequences can be detected from a basic k-mer analysis or from overrepresented sequences. While the adapter sequence should always be documented and thus known to anyone who works with a specific sequencing dataset, in practice this is rarely the case. In such scenarios, adapters may also be automatically predicted using DNApi. To clip them off, various tools such as fastx_clipper from the FASTX-Toolkit, Cutadapt, Skewer and Reaper from the Kraken package can be used. Often adapter clippers are already integrated into trimming software like PrinSeq, Trimmomatic and ConDeTri. After the removal of adapters, a quality trimming step is recommended. Removing low quality parts of a read, such as homopolymers, improves the reliability of downstream analysis. Despite the fact that most of the tools for quality trimming use a very similar approach, usually a sliding window, they have been shown to perform quite differently. Thus, an adjustment of parameters and thresholds is often necessary to obtain optimal results.
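The first preprocessing steps can be chained, for example, as in the following hedged sketch, which assumes FastQC and Cutadapt are installed and on the PATH; the file names and the adapter sequence (the common Illumina prefix) are placeholders and should be replaced by the values documented for the actual library.

```python
import subprocess
from pathlib import Path

# Minimal preprocessing sketch, assuming FastQC and Cutadapt are installed.
# File names and the adapter sequence are placeholders.
READS = Path("sample_R1.fastq.gz")
ADAPTER = "AGATCGGAAGAGC"   # common Illumina adapter prefix; adapt as needed

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Quality report (GC content, duplication levels, overrepresented sequences, ...)
Path("qc").mkdir(exist_ok=True)
run(["fastqc", str(READS), "--outdir", "qc"])

# 2) Adapter clipping plus quality trimming of the 3' ends
run([
    "cutadapt",
    "-a", ADAPTER,          # 3' adapter to clip
    "-q", "20",             # trim low-quality 3' ends (Phred cutoff)
    "-m", "25",             # discard reads shorter than 25 nt after trimming
    "-o", "sample_R1.trimmed.fastq.gz",
    str(READS),
])
```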
In a study performed with reads from Illumina sequencing experiments, the tools Trimmomatic, ConDeTri or Sickle performed best with loose cutoffs, whereas for others, e.g. PrinSeq, a more strict cutoff needs to be used. The chosen quality threshold is crucial for maximizing the relative number of trimmed reads alignable to the reference and for increasing specificity for, e.g., SNP calling. In order to perform trimming and mapping, the encoding of the Phred scores needs to be known. While classic Sanger sequences as well as recent Illumina sequences are usually encoded in the so-called Phred+33 scheme, Solexa and older Illumina sequences often need to be transcoded by fastq_quality_converter or EMBOSS to work with today's software. Over the last decade several read alignment algorithms were developed to replace traditional sequence aligners like BLAST and BLAT, which are limited in dealing with huge amounts of sequencing data. Furthermore, most of the state-of-the-art mapping algorithms take care of intronic regions and allow split-read alignments. Some of the most popular tools are BWA, Bowtie2 and TopHat2, followed up by HISAT2. All of them are based on Burrows-Wheeler transform methods and seed-extend based mapping techniques. Other aligners like STAR implement suffix arrays as index of the reference for efficient mapping. Using a similar approach, Segemehl is a multi-split-read aligner based on enhanced suffix arrays, which is capable of processing InDels during the seed search, is thus also suitable for mapping short or contaminated reads, and can subsequently be used to detect circular RNAs. Therefore, reads with increased error rates towards their 3′ ends or biases in the nucleotide composition can still be mapped using Segemehl. In an exhaustive study, 11 different alignment programs were reviewed regarding accuracy/mismatch-frequency, splice site detection and performance. The underlying algorithms were described as either truncating reads or allowing for mismatches, and mapping performance or accuracy in splice site detection was found to be lost at higher mapping rates and increased sensitivity. More recent methods for the quantification of RNA-Seq reads, including Kallisto, Sailfish and Salmon, follow the tendency to use 'alignment free' quantification methods for faster and resource-sparing RNA-Seq analysis. Such quasi-mapping techniques are based on lightweight-alignment or pseudo-alignment algorithms and efficiently use the structure of a reference sequence without performing full base-to-base alignments. This allows reporting of all potential alignments without increased running time as compared to other modern mappers. To infer likely seed positions of reads, fragment mapping information can be obtained from a reference indexed only once, using suffix arrays or approximately matching paths in a De Bruijn graph and other efficient data structures like k-mer hashes. Quasi-mapping lends itself to coarser tasks like transcript quantification, clustering and isoform prediction, thereby performing with similar accuracy to traditional approaches. This variety of options underscores the need to choose appropriate aligners for specific research questions. When aiming for non-coding transcript identification, especially of miRNAs, reads should preferably not accumulate mismatches in seed regions, but can be truncated. Furthermore, most mapping tools allow for a multiple or unique mapping strategy. Regarding the first strategy, reads may be aligned to multiple regions such as domain sharing paralogous genes and pseudogenes as well as regions of low complexity in genomic references or numerous isoforms in transcriptomic references.
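As a hedged sketch of the two strategies discussed above, the commands below index a reference and run a splice-aware genome alignment with HISAT2, followed by alignment-free quantification with Salmon; all file names are placeholders and the parameters shown are minimal examples rather than recommendations.

```python
import subprocess

# Hedged sketch of base-to-base alignment vs. quasi-mapping, assuming HISAT2
# and Salmon are installed; reference and read file names are placeholders.
def run(cmd):
    subprocess.run(cmd, check=True)

# (a) Splice-aware genome alignment, e.g. when novel transcripts are of interest
run(["hisat2-build", "genome.fa", "genome_index"])
run(["hisat2", "-p", "4", "-x", "genome_index",
     "-U", "sample_R1.trimmed.fastq.gz", "-S", "aligned.sam"])

# (b) Alignment-free quasi-mapping against the transcriptome for quantification
run(["salmon", "index", "-t", "transcripts.fa", "-i", "salmon_index"])
run(["salmon", "quant", "-i", "salmon_index",
     "-l", "A",                           # infer the library type automatically
     "-r", "sample_R1.trimmed.fastq.gz",
     "-o", "salmon_quant"])
```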
While transcript quantification based on multi-mapped reads for non-coding RNA classes sharing similar sequences is advisable, it is generally not advisable for other genes because the quantification becomes more challenging and the base-wise accuracy decreases. Other important questions with consequences for the data analysis arise from the species under investigation. de.STAIR aims to suggest suitable tools and proper default parameter settings. Downstream analysis aims at solving a wide range of questions such as the detection of differentially expressed genes, splice isoforms, and identification of up- and downregulated pathways or single nucleotide variant enrichments. Differential gene expression analysis tools often start with per gene read counts from different RNA-Seq samples. Existing tools for read quantification differ in terms of counting strategy, parametrization and runtime. For example, the widely used software HTSeq-count counts reads and split read fragments in a rather static manner and offers comparably few parameters. Other tools such as RNAcounter and featureCounts have a higher level of flexibility and are usually faster. To compare two or more samples, it is essential to take into account varying library sizes and differing transcript lengths caused by multiple isoforms or SECIS element activity. To remove such biases, a number of different measures for the normalized quantification of reads, such as "reads/fragments per kilobase million" (RPKM/FPKM) or "transcripts per million" (TPM), have been introduced. Meanwhile, a number of tools for the detection of differentially expressed genes have been published. The software packages edgeR and DESeq2 use a similar statistical approach for variance-stabilizing transformation. This transformation becomes necessary as the variance of read counts essentially grows with, and thus depends on, the number of counted reads. DESeq2 may be adjusted to take higher variances of non-differentially expressed genes into account, as deduced from replicate samples collected from different species or patients. An alternative tool for the detection of differentially expressed genes is Cuffdiff.
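A minimal sketch of the TPM calculation, together with the kind of minimum-TPM filter discussed below, is given here; the counts and transcript lengths are toy values, whereas a real pipeline would take them from featureCounts, HTSeq-count or Salmon output.

```python
# Minimal sketch of TPM normalization from raw per-gene counts and transcript
# lengths, plus a simple minimum-TPM filter. All numbers are toy values.
def tpm(counts, lengths_bp):
    """counts and lengths_bp are dicts keyed by gene ID."""
    # 1) length-normalize: reads per kilobase of transcript
    rpk = {g: counts[g] / (lengths_bp[g] / 1000) for g in counts}
    # 2) scale so that the values of each sample sum to one million
    scale = sum(rpk.values()) / 1e6
    return {g: v / scale for g, v in rpk.items()}

counts = {"geneA": 900, "geneB": 100, "geneC": 0}
lengths = {"geneA": 3000, "geneB": 500, "geneC": 1200}
expr = tpm(counts, lengths)
kept = {g: v for g, v in expr.items() if v >= 1.0}   # e.g. minimum TPM of 1
print(expr)
print(kept)
```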
Rapaport et al. observed a reduced sensitivity compared to various other tools and concluded that the Cufflinks-specific normalization process, including alternative isoform expression and transcript lengths, may be a possible reason. In a recent study to investigate host-pathogen interactions with a special focus on expressed long ncRNAs, an additional filter for differentially expressed genes was proposed. The authors concluded that minimum TPM values ought to be used to obtain a good set of significantly expressed genes and to eliminate potential biases due to transcript lengths in normalized read counts. To further facilitate the integration of RNA-Seq experiments with other data, such as those derived from the analysis of epigenetic modifications or transcription factor binding sites, de.STAIR is implementing tools and workflows to quickly identify differentially methylated regions even in larger datasets and to integrate these regions with the results from RNA-Seq experiments, e.g., to obtain correlated differentially methylated regions or significant transcription factor alterations. Bacterial regulatory small RNAs are crucial for the post-transcriptional regulation of gene expression. Indeed, these are involved in almost all responses to environmental changes. Bacterial sRNAs mediate cross-regulation between bacterial mRNAs because one mRNA can be targeted by multiple sRNAs and, as known for most sRNAs, a single sRNA can have multiple targets. However, sRNA genes are not commonly annotated during genome analysis and the identification of their targets requires substantial additional effort. In this chapter, we describe state-of-the-art approaches to deal with these issues. Major problems arise from the fact that regulatory RNAs in bacteria are extremely heterogeneous. Their length varies between 40 nt for the Escherichia coli sRNA tpke70 and more than 800 nt for the Salmonella sRNA STnc510. Even the very conception of a regulatory RNA as being non-coding has been challenged with the discovery of dual function sRNAs, i.e., sRNAs that have a regulatory function and also encode a functional peptide or small protein. Examples include the RNAIII of Staphylococcus aureus, which is a regulatory RNA of 514 nt and encodes the 26 amino acid δ hemolysin, or the 227 nt SgrS sRNA of enteric bacteria that encodes the 43 amino acid functional polypeptide SgrT. Moreover, regulatory RNAs may derive by processing from larger mRNAs, UTRs as well as ncRNAs and include regulatory elements such as marooned riboswitches. Computational approaches for the prediction of bacterial sRNAs and their genes can be classified into de novo approaches and approaches that utilize comparative information. De novo approaches may combine the search for promoters, specific transcription factor binding sites and Rho-independent terminators in intergenic regions. For trans-encoded sRNAs, comparative strategies start with a conserved sequence from an intergenic region. Then, homologs from closely related species are clustered and compared in pairwise or multiple alignments, which are subsequently scored according to predicted RNA structural features.
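The de novo idea of screening intergenic regions can be illustrated with the following toy sketch, which collects gaps between annotated genes that are long enough to host an sRNA as candidate loci; the coordinates and length thresholds are illustrative assumptions, and real annotations would be parsed from a GFF/GTF file.

```python
# Toy sketch: intergenic regions between annotated genes that could host an
# sRNA are collected as candidate loci for downstream promoter/terminator and
# structure screens. Gene coordinates and thresholds are illustrative only.
def intergenic_candidates(genes, genome_length, min_len=50, max_len=500):
    """genes: list of (start, end) tuples, 1-based, sorted by start."""
    candidates, prev_end = [], 0
    for start, end in sorted(genes):
        gap = (prev_end + 1, start - 1)
        if min_len <= gap[1] - gap[0] + 1 <= max_len:
            candidates.append(gap)
        prev_end = max(prev_end, end)
    tail = (prev_end + 1, genome_length)
    if min_len <= tail[1] - tail[0] + 1 <= max_len:
        candidates.append(tail)
    return candidates

genes = [(100, 1300), (1500, 2900), (5200, 6100)]
print(intergenic_candidates(genes, genome_length=7000))   # [(1, 99), (1301, 1499)]
```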
These include thermodynamic stability values derived from the consensus folding of aligned sequences, e.g. by using the tool RNAz. Based on such strategies, sRNAs were predicted for different sets of closely related model cyanobacteria, in Rhizobiales, in the haloarchaeon Haloferax volcanii and for picocyanobacteria of the Prochlorococcus-Synechococcus lineage. Using the NcDNAlign algorithm, sRNAs were successfully predicted in Pseudomonas, relying on comparative information from 10 genomes. Comparative approaches are widely used to increase the reliability of a prediction due to the underlying statistical possibilities. Following this principle, tools such as SIPHT, Infernal and RNAlien have been developed. In this context, "comparative" does not only mean the comparison of instantly generated sequences identified by an alignment tool of choice, but can also relate to well-annotated knowledge as available, e.g., in the Rfam database. The annotation of ncRNAs enables researchers to carry out extended functional and differential expression studies. Therefore, new members of RNA families as provided by the Rfam database can be identified by secondary structure constrained homology searches. For this task, the GORAP pipeline is being developed, which is mainly based on the Infernal package and uses multiple in-house filters for taxonomic information, RNA family specific thresholds, and structure and sequence properties, which can be governed by the user. GORAP comprises modular, specialized software for the detection of specific RNA families: tRNAscan-SE for tRNAs, RNAmmer for rRNAs, Bcheck for RNase P RNAs and CRT for CRISPR RNAs. It was successfully applied to different whole genome assemblies of bacterial species and fungi. The most powerful approach for the detection of bacterial sRNAs relies on RNA-Seq and especially dRNA-Seq. Here, it is advantageous to generate dRNA-Seq data together with a parallel classical RNA-Seq approach. Then, workflows such as the "TSS annotation regime" can be applied, which utilizes both types of datasets by considering the local expression rate from RNA-Seq and associating peaks from dRNA-Seq to define TSSs. With the aid of the corresponding genome annotation file, the TSSs can then be classified as gTSS and/or aTSS and/or iTSS, or as oTSS or nTSS. The majority of published bacterial genome sequences were automatically annotated by computational services like RAST or Prokka. These annotation regimes provide rapid insight into the composition and arrangement of genes or regulatory elements for a given genome. Additional information from dRNA-Seq can help to improve the existing annotation by correcting the 5′ ends of modelled genes and adding precise information on the TSSs. Moreover, by combining the information from dRNA-Seq and RNA-Seq it is possible to define transcriptional units based on real expression data, which is very advantageous for the identification of operons, the full lengths of sRNAs and asRNAs, as well as the identification of divergent but overlapping transcripts due to alternative TSSs or maturation events. Such TUs can be efficiently identified using the software package "RNASEG". During computer-aided data analysis and the experimental setup, several pitfalls should be kept in mind. For example, correct parameter adjustment is important for the discussed tools in order to optimize the estimated numbers of TSSs and predicted TUs, respectively. Metatranscriptomic differential RNA-Seq as well as metatranscriptomic RNA-Seq are protocols to analyze the highly complex transcript compositions of biological communities.
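A toy sketch of this classification step is shown below; the 300 nt upstream window and the simple interval rules are illustrative simplifications of the published annotation regime, and the gene coordinates are invented for the example.

```python
# Toy sketch of classifying a detected TSS relative to annotated genes, in the
# spirit of the annotation regime described above. The upstream window of
# 300 nt and the reduced category set are illustrative simplifications.
def classify_tss(tss, genes, upstream=300):
    """tss: (position, strand); genes: list of (start, end, strand)."""
    pos, strand = tss
    labels = set()
    for start, end, g_strand in genes:
        inside = start <= pos <= end
        if g_strand == strand:
            five_prime = start if strand == "+" else end
            if strand == "+":
                upstream_hit = 0 < five_prime - pos <= upstream
            else:
                upstream_hit = 0 < pos - five_prime <= upstream
            if upstream_hit:
                labels.add("gTSS")      # gives rise to the annotated mRNA
            elif inside:
                labels.add("iTSS")      # internal, sense strand
        elif inside:
            labels.add("aTSS")          # antisense to an annotated gene
    return labels or {"nTSS"}           # treated as novel/orphan if nothing matches

genes = [(1000, 2500, "+"), (3000, 3800, "-")]
print(classify_tss((820, "+"), genes))   # {'gTSS'}
print(classify_tss((1500, "-"), genes))  # {'aTSS'}
```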
Hou et al. proposed a bioinformatic workflow for mdRNA-Seq/metaRNA-Seq data to perform taxonomic assignments, prediction of TSSs and the analysis of associated promoter sequences and regulatory motifs, finally complemented by further functional analyses utilizing KEGG. This workflow was successfully applied to analyze data coming from the northern Gulf of Aqaba in the Red Sea. By using this workflow, the authors located genome-wide TSSs as well as regulatory elements in promoter and intergenic regions and improved the genome annotation for several non-model organisms belonging to all three domains of life. Fig. 3 shows a slightly modified and updated workflow for this analysis. For example, "Cutadapt" was replaced by "Trimmomatic", based on its enhanced usability and computational power. As input, both types of datasets are used to perform a quality curation, followed by a global read assignment approach; a TSS prediction on the combination of all sequences is then applied to draw a transcriptome plot. In the metatranscriptomic analysis of natural microbial populations it is important to consider the community structure, i.e., the numerical relationships among taxa. In this way it is possible to differentiate "active" taxa with high transcriptional activity from "non-active" community members, which are present but show little to no gene expression. A workflow has been developed by Pfreundt et al. for the community composition analysis based on 16S amplicon quantification using the UPARSE pipeline and taxonomic classification using the SILVA SSU taxonomy database. In view of the high number of different sRNAs in any given bacterial genome, the identification of their regulatory targets is critical for their further characterization. Therefore, the reliable computational prediction of sRNA targets has become an important field of research. The main challenges for reliable predictions are the small number of interacting sequence elements between an sRNA and its frequently distant target mRNA as well as imperfect complementarity, which can reside in various sections of the sRNA. In addition, a few sRNAs have protein-binding rather than mRNA-binding functions, some targets are recognized by the joint action of two different modules in the interacting RNA molecules, and some sRNAs have single or only very few targets whereas others control multiple different mRNAs. There are different approaches to compute interactions between a given sRNA and its target mRNAs. In the following, a selection of sRNA target prediction tools is summarized by their underlying approach and main features (a minimal invocation sketch follows the list):
based on: Energy folding algorithm; fast detection of possible hybridization sites; main features: Extension of minimum energy folding algorithm to two sequences; up to 10–27 times faster than RNAhybrid,
based on: Minimization of extended hybridization energy of two interacting RNAs; main features: Accessibility of binding sites; user-specified seed; freely available web server,
main features: Target site accessibility; at least three orders of magnitude faster than RNAup or IntaRNA; freely available web server,
based on: transcriptome, degradome, sRNAome and genome data; main features: applicable to large- as well as small-scale experiments,
based on: Conservation of the sRNA, secondary structure of the sRNA and mRNA target, hybridization energy between the interacting sRNA/mRNA; main features: freely available web server; identification of trans-acting sRNA targets,
based on: Phylogenetic information, IntaRNA predictions; main features: freely available web server; p-value; mRNA region plot; sRNA region plot; functional enrichment.
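As a hedged invocation sketch for one of these interaction predictors, the call below assumes an IntaRNA-style command line with query (-q) and target (-t) FASTA options; the exact options of the installed version should be checked, and the file names are placeholders.

```python
import subprocess

# Hedged sketch: screening a set of candidate mRNAs against one sRNA with an
# interaction predictor. An IntaRNA-style command line with query (-q) and
# target (-t) FASTA files is assumed; verify the options of your installed
# version before use. File names are placeholders.
result = subprocess.run(
    ["IntaRNA", "-q", "candidate_sRNA.fa", "-t", "mRNA_5primeUTRs.fa"],
    check=True, capture_output=True, text=True,
)
# The predictor reports putative interaction sites and hybridization energies,
# which can then be ranked and compared with comparative predictions.
print(result.stdout)
```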
During the comparison of the most popular target prediction tools, CopraRNA ranked on top from several perspectives. CopraRNA performs especially well with regard to the number of false positives. This can be ascribed to the consideration of multiple sRNA homologs instead of only a single sRNA. At the same time, the need for multiple sRNA orthologs is also a major disadvantage of CopraRNA because, in some cases, the sRNA of interest may be restricted to a single species or, more frequently, the potential homologs are difficult to find. To overcome this disadvantage and to find sRNA homologs in a reliable way while avoiding descriptor-based approaches, the GLASSgo algorithm is being developed. GLASSgo is currently integrated into the web server providing the Freiburg RNA Tools and can be freely used without limitations. GLASSgo provides an approach to detect, extract and evaluate potential sRNAs from scratch. This workflow works for sequences coming from dRNA-Seq/RNA-Seq as well as mdRNA-Seq/metaRNA-Seq experiments ("Input"). This is followed by a preprocessing step, which comprises "Quality Control" with FastQC, "Adapter + Barcode removal" with Trimmomatic, "Sequence Trimming" with Trimmomatic and finally "Sequence Mapping" with Segemehl or VSEARCH. The last step depends on the sequencing protocol used. For dRNA-Seq/RNA-Seq, all transcripts are mapped against a reference genome, whereas mdRNA-Seq/metaRNA-Seq needs a preselection to assign the sequences with respect to their associated genome. At the end of the "Preprocessing" step, a SAM file is needed for the "TU-Prediction" tool RNASEG. Under the condition that the reads are Poisson distributed, Bischler et al. tried to define sharp borders between the start and end site of a transcript. This is called a "Transcriptional Unit" and the interval of a TU can be extracted from the final result table. The suggested workflow was designed to predict intergenically located sRNAs and therefore the "TU-Extraction" procedure takes only these types of TUs into account. The first part of the workflow is available and the potential set of sRNA TUs serves as input for the second part. Each predicted potential sRNA TU is used as query to perform a "Homology Search" with GLASSgo. It returns a trustworthy set of homologs and the query sequence itself in FASTA format. These sets are analyzed independently with RNAz as well as CopraRNA and finally the outcomes of both algorithms are correlated to set up a table ranked in descending order. The sRNA candidates with the highest ranked potential among the "sorted sRNAs" can be used to carry out experimental tests.
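The final ranking step of this workflow can be illustrated with the toy sketch below, in which an RNAz-like structure conservation score and a CopraRNA-like target prediction p-value are combined into a single descending ranking per candidate TU; the candidate names, scores and the combination rule are purely illustrative.

```python
import math

# Toy sketch of the final ranking step: per candidate sRNA TU, a structure
# conservation score (higher is better) and a target prediction p-value
# (lower is better) are combined into one descending ranking.
# Candidate names, scores and the combination rule are illustrative only.
candidates = {
    "TU_0042": {"structure_score": 0.92, "target_pvalue": 1e-6},
    "TU_0107": {"structure_score": 0.55, "target_pvalue": 3e-2},
    "TU_0013": {"structure_score": 0.81, "target_pvalue": 5e-4},
}

def combined_score(entry):
    # simple illustrative combination: conservation plus -log10(p-value)
    return entry["structure_score"] - math.log10(entry["target_pvalue"])

ranked = sorted(candidates.items(), key=lambda kv: combined_score(kv[1]), reverse=True)
for name, entry in ranked:
    print(f"{name}\t{combined_score(entry):.2f}")
```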
The members of the de.STAIR consortium "Structured Analysis and Integration of RNA-Seq Experiments" aim at supporting the research community with tools and workflows to enhance the overall integration of transcriptomic data towards additional regulative, predictive and annotation potential. To enable maximum suitability, interconnectivity, and accessibility for the developed approaches and services, de.STAIR provides dedicated training programs and materials for bioinformaticians and other life scientists and, ultimately, is lowering the barriers to RNA-Seq data analysis as a whole. These aims are supported by the development of tools for the analysis of gene regulatory networks as well as the prediction and identification of miRNA-RNA interactions based on high throughput data, the development of transparent and automated pipelines for RNA-Seq and mdRNA-Seq analysis, as well as the ongoing integration of specific tools with the existing RNA workbench of the RBC. Financial support for this work by the German Federal Ministry for Education and Research program de.NBI-Partner is gratefully acknowledged. | RNA-Sequencing (RNA-Seq) has become a widely used approach to study quantitative and qualitative aspects of transcriptome data. The variety of RNA-Seq protocols, experimental study designs and the characteristic properties of the organisms under investigation greatly affect downstream and comparative analyses. In this review, we aim to explain the impact of structured pre-selection, classification and integration of best-performing tools within modularized data analysis workflows and ready-to-use computing infrastructures towards experimental data analyses. We highlight examples for workflows and use cases that are presented for pro-, eukaryotic and mixed dual RNA-Seq (meta-transcriptomics) experiments. In addition, we summarize the expertise of the laboratories participating in the project consortium "Structured Analysis and Integration of RNA-Seq experiments" (de.STAIR) and its integration with the Galaxy-workbench of the RNA Bioinformatics Center (RBC). |
399 | Taxonomy and evolution of Aspergillus, Penicillium and Talaromyces in the omics era – Past, present and future | Aspergillus, Penicillium and Talaromyces are diverse genera which belong to the Order Eurotiales and contain a large number of species possessing a worldwide distribution and a huge range of ecological habitats.They are ubiquitous and can be found in the air, soil, vegetation and indoor environments .Some members are able to grow in extreme environments such as those with high/low temperatures, high salt/sugar concentrations, low acidities or low oxygen levels .Species of the three genera are mainly environmental saprobes and the primary contribution of these microorganisms to nature is the decomposition of organic materials .Many Aspergillus, Penicillium and Talaromyces species are economically, biotechnologically and medically important with huge social impacts.For example, these species are vital to the food industry and quite a number of them are exploited to produce fermented food such as cheeses, sausages and soy sauce.These fungi are also important biotechnologically for their strong degradative abilities which have been utilised for the production of enzymes .In addition, they are robust producers of a diverse spectrum of secondary metabolites some of which could be used as drugs and antibiotics or as the lead compounds of potential drug candidates with pharmaceutical or biological activities .On the other hand, many of these species, such as A. chevalieri, A. flavipes, P. citreonigrum and T. macrosporus, are food spoiling decomposers which cause pre- and post-harvest devastation of food crops; and many of these food-spoiling species are also mycotoxin-producers .Even worse, some of them are infectious agents and cause diseases in humans and animals.The most notorious pathogenic species on a global sense is A. fumigatus , which is the aetiological agent for the majority of aspergillosis cases .Other commonly encountered pathogenic Aspergillus species include A. flavus, A. nidulans, A. niger and A. terrus.Although Penicillium and Talaromyces species are less commonly associated with human or veterinary infections, the thermally dimorphic fungus T. marneffei, previously known as P. 
marneffei, is an exception.This notorious fungus is endemic in Southeast Asia and it is able to cause systemic infections particularly in immunocompromised individuals such as HIV-positive patients or patients with impaired cell-mediated immunity .Aspergillus, Penicillium and Talaromyces were traditionally classified according to their morphologies.As technologies capable of characterising biological macromolecules advanced, various approaches focusing on the profiles of different cellular constituents such as lipids, proteins and exometabolites have emerged to supplement the taxonomy of these fungi.The availability of DNA sequencing technology in the past two-to-three decades has generated an enormous amount of DNA sequence data, allowing fungal taxonomy through phylogenetics, including genealogical concordance.The currently accepted consolidated species concept , or informally known as the ‘polyphasic taxonomic approach’, has revolutionised fungal taxonomy, and the classification scheme for a vast number of fungi has been revised.In particular, significant changes have been made to reclassify Aspergillus, Penicillium and Talaromyces species in the past seven years.Such revision on the classification of these fungi results in redefined species concepts for Aspergillus, Penicillium and Talaromyces, providing new insights on the evolution of these important filamentous fungi.In this article, the development of various taxonomic approaches as well as species recognition and identification schemes for Aspergillus, Penicillium and Talaromyces is reviewed.These include the traditional morphological/phenotypic approach, the supplementary lipidomic, proteomic and metabolomic approaches, as well as the currently widely used phylogenetic/consolidated approach.The clinical implications of this evolving taxonomy are also discussed.The name Aspergillus was first introduced by Micheli in 1729 to describe asexual fungi whose conidiophores resembled an aspergillum, a device used to sprinkle holy water .Later in 1768 von Haller validated the genus and in 1832 Fries sanctioned the generic name .Similarly, the genus Penicillium was erected by Link in 1809 to accommodate asexual fungi which bore penicillum-like fruiting bodies.Although both Aspergillus and Penicillium were originally described as anamorphic, some species of the two genera were subsequently found to be ascocarp-forming.For example, the sexual genus Eurotium was first firmly connected to Aspergillus by de Bary in 1854 whereas the ascomycetous genus Eupenicillium has been used to describe Penicillium species capable of producing sclerotoid cleistothecia from as early as 1892 .Since the discovery of the various sexual states of Aspergillus and Penicillium species, it has been controversial as to whether separate sexual generic names should be used to describe species able to produce ascospores.In spite of the fact that several sexual genera had already been established to accommodate the sexual morphs of some Aspergillus and Penicillium species, Thom, Church, Raper and Fennell, in their monographic masterpieces on the taxonomy of these two genera, neglected the use of sexual names.This was because, in their opinions, this would cause unnecessary nomenclatural confusion, especially for strains which were in sexual stages at first and then lost their ascospore-forming ability under laboratory maintenance.In addition, this would also lead to the fragmentation of the large and obviously cohesive Aspergillus/Penicillium groups .Nevertheless, in order 
to abide by the then International Code of Botanical Nomenclature, where the first valid names of the ‘perfect states’ of fungi took precedence , Benjamin assigned Aspergillus species which possess sexual life cycles into the sexual genera Eurotium, Emericella and Sartorya .In addition, he transferred Penicillium species with sexual life cycles to the ascomycetous genus Carpenteles .During his assignment, Benjamin also established the novel genus Talaromyces to describe Penicillium species which, in their sexual life cycles, possessed soft ascocarps exhibiting indeterminate growth and whose walls were composed of interwoven hyphae .As the number of species of the genera Aspergillus, Penicillium and Talaromyces increased, closely related species were grouped into subgroups .Such infrageneric classification system underwent vigorous changes since different authors focused on different morphological features when establishing their subgrouping schemes.For example, Blochwitz as well as Thom and his co-workers were the first to divide Aspergillus species into seven and 18 subgeneric ‘groups’, respectively, based on their phenotypes .The subgrouping by Thom and associates formed the foundation of Aspergillus subgeneric classification which had been largely followed by other mycologists working on this genus in the last century.However, since these subgeneric ‘groups’ did not possess any nomenclatural status, Gams et al., in 1986, established six subgenera and 18 sections to accommodate these ‘groups’, formalising the subgeneric classification of Aspergillus species .As for Penicillium, Dierckx and Biourge firstly subdivided the genus into the subgenera Aspergilloides as well as Eupenicillium, which was further separated into sections Biverticillium and Bulliardium .Subsequently, Thom and his co-workers did not follow Dierckx’s and Biourge’s grouping and proposed a new subgeneric classification scheme for Penicillium composed of four main divisions/sections, where species were grouped according to features of their colonies and branching patterns of their conidiophores .The system established by Thom and associates for Penicillium was adopted by other mycologists for the next 30 years until Pitt as well as Stolk and Samson in the 1980s proposed two other subgeneric classification schemes based on features of conidiophores, morphology of phialides and growth characteristics, as well as branching patterns of conidiophores and phialide morphology, respectively .Similarly, Talaromyces species were also split into four sections based on the structures of their conidial states .As the species concept for fungi migrates from morphological, physiological, or phenotypic to genetic, phylogenetic and even consolidated, further changes have been made to the infrageneric classification of Aspergillus, Penicillium and Talaromyces.The adoption of the consolidated species concept, with reduced emphasis on morphological properties, in classifying species of these genera resulted in the fact that fungi with aspergillum-shaped conidiophores no longer necessarily are Aspergillus species, while fungi with penicillum-shaped conidiophores no longer necessarily are Penicillium species .One notable change in relation to these genera, also as a result of the recent implementation of the single-naming system , was the transfer of fungi belonging to Penicillium subgenus Biverticillium to the genus Talaromyces , whose close chemotaxonomic relationship and phylogenetic connection have been recognised since the 1990s, leaving 
both the genera Penicillium and Talaromyces as monophyletic clades. Interestingly, during this transfer the species P. aureocephalum was also accommodated in the Talaromyces clade. Inclusion of this species, which is also the type and only species of the genus Lasioderma, necessitated renaming the Talaromyces clade as Lasioderma, since this is an older sexual name with nomenclatural priority. However, such renaming would require many name changes, and several species are better known scientifically and economically under their Talaromyces names. Also, even though using identical names for botanical/mycological and zoological genera is not forbidden by the Melbourne Code, the name Lasioderma is a later homonym of Lasioderma, currently in use for one of the beetle genera, and this might cause confusion among non-taxonomists. Hence, it was proposed to conserve the generic name Talaromyces over Lasioderma. Recently, this proposal was approved by both the Nomenclature Committee for Fungi (NCF) and the General Committee for Nomenclature of the International Association for Plant Taxonomy, retaining the generic name Talaromyces.

While the taxonomy of Penicillium and Talaromyces now seems straightforward, since both clearly represent separate monophyletic groups, the scenario for Aspergillus is much more complicated, involving two opposing generic concepts, namely the wide and the narrow Aspergillus concepts. Early work by Benjamin summarised the links between Aspergillus and the sexual genera Emericella, Eurotium and Neosartorya. Following other subsequent changes in Aspergillus classification, seven additional sexual genera, including Chaetosartorya, Cristaspora, Dichotomomyces, Fennellia, Neocarpenteles, Neopetromyces and Petromyces, are further connected to Aspergillus. Remarkably, each of these sexual genera is associated with only a particular Aspergillus subgenus or section. Subsequent to the adoption of the ‘one fungus, one name’ (1F1N) principle, there have been disputes as to whether the generic name Aspergillus should be retained for the large monophyletic clade of classical Aspergillus species, although this clade is only weakly supported by maximum likelihood analyses; or whether sexual names should be adopted for those well-supported clades containing both pleomorphic species and asexual species with Aspergillus morphologies, leaving the weakly supported subgenus Circumdati as Aspergillus sensu stricto, even though this group does include several less well-known sexual genera. The latter proposal was advocated on the grounds that the sexual genera Chaetosartorya, Emericella, Eurotium and Neosartorya differ significantly in their morphologies, physiologies, enzymologies and toxicologies. Also, Pitt, Taylor and Göker, proposers of the narrow Aspergillus concept, found in their phylogenetic analyses that classical Aspergillus was paraphyletic, encompassing the monophyletic Penicillium clade. As a result, according to Pitt et al., if the wide Aspergillus concept is to be adopted then Penicillium would also need to be synonymised under Aspergillus to make the whole clade monophyletic. On the other hand, the main problem for the narrow Aspergillus concept lies in the retypification by conservation of the genus. This is because, under the narrow Aspergillus concept, the type of the genus Aspergillus, A.
glaucus of subgenus Aspergillus, would fall in the genus Eurotium instead. Since the taxonomic properties of the type and related species determine the circumscription of the genus, if the name Aspergillus is to be applied to subgenus Circumdati, the type of the genus has to be changed to one of the species within this subgenus, for example A. niger, as suggested by Pitt and Taylor because of its more frequent use in the literature and in databases. However, in the eyes of advocates of the wide Aspergillus concept, such generic retypification is debatable, since the type of choice would depend on the interests of different fields: for instance, A. flavus would be the type of choice for food mycology and mycotoxicology, A. fumigatus for medical mycology, and A. nidulans for fungal molecular genetics. Recently, in response to the narrow Aspergillus proposal, which considers Aspergillus to be non-monophyletic and recommends applying the name Aspergillus only to members of subgenus Circumdati through retypification by conservation while maintaining the sexual names for other supported clades, Kocsubé et al., supporters of the wide Aspergillus concept, demonstrated in phylogenetic analyses based on six and nine genetic markers, using both maximum likelihood and Bayesian approaches as well as extrolite profiling, that Aspergillus represents a well-supported monophyletic clade sister to the monophyletic Penicillium clade, rejecting Pitt et al.’s hypotheses and proposal. They also established the subgenus Polypaecilum to encompass species previously assigned to the genera Phialosimplex and Polypaecilum, whereas the species A. clavatoflavus and A. zonatus, which are in fact phylogenetically distantly related to Aspergillus, were transferred to the novel genera Aspergillago, as Aspergillago clavatoflava, and Penicilliopsis, as Penicilliopsis zonata, respectively. Nevertheless, Pitt and Taylor have submitted a formal proposal to the NCF to retypify Aspergillus with A. niger and to restrict the genus to members of subgenus Circumdati only, with sexual names taken up to replace the other subgeneric names of Aspergillus. In response to Pitt and Taylor, Samson et al.
urged the NCF to reject the conservation proposal, arguing that Aspergillus is monophyletic and clearly defined by phenotypic synapomorphies and secondary metabolite chemistry; that the size of the genus Aspergillus is irrelevant; and that conservation with a different generic type would lead to unpredictable name changes and would not result in a more stable nomenclature. Recently, voting was held by the NCF and the proposal by Pitt and Taylor could not obtain a 60% majority of ‘yes’ votes after two rounds of ballots. Although the ‘no’ vote also fell one vote short of reaching 60%, it was in the majority. Since there is no definite recommendation from the NCF, the proposal will be referred to the General Committee on Nomenclature for a final decision.

Since the establishment of Aspergillus, Penicillium and Talaromyces, species in these genera had been recognised by their morphological features until the dawn of molecular systematics. In particular, the morphology of conidial structures, especially their branching patterns as discussed above, has played an important role in species recognition and identification. Other important morphological properties useful for diagnosing a species include cleistothecium and ascus/ascospore characters. Macroscopically, characteristics of the colony, such as texture, growth rate, degree of sporulation, conidial and mycelial colours, as well as the production of diffusing pigments, exudates, acids and other secondary metabolites, are also used for species differentiation. The need to standardise culture media and incubation conditions for reproducible species identification was recognised as early as Biourge's and Dierckx's time. This is because variations in the immediate cultural environment, such as nutrient availability, temperature, light intensity, water activity, humidity and other environmental factors, however subtle, can change the appearance of the organism, since morphology is one of the ways in which an organism adapts to and survives in its environment. The effects of such changes in incubation conditions have been exemplified by the work of Okura et al.
As such, standardised working techniques for morphological characterisation have been recommended for Aspergillus and Penicillium species. Although no standard has been proposed for Talaromyces, these methods should also be applicable to this genus, since by tradition quite a number of Talaromyces species were considered and characterised as Penicillium species.

With the availability of newer techniques for the characterisation of biomolecules in the 20th century, such as gas–liquid chromatography and electrophoresis, chemotaxonomy has gained popularity in Aspergillus, Penicillium and Talaromyces taxonomy, especially since the 1980s. One chemotaxonomic approach is zymogram profiling, in which species are differentiated based on the polyacrylamide gel-electrophoretic patterns of certain isoenzymes. This technique has been demonstrated to be highly successful in differentiating species of Penicillium subgenus Penicillium, where the isozyme patterns showed a high correlation with morphological species. However, when species from other Penicillium subgenera were also included in the analysis, the correlation between zymogram grouping and morphological species held only in some cases, rendering the utility of this technique for the identification of Penicillium species questionable. On the other hand, zymogram profiling has also been applied to Aspergillus species, and this identification method was found to be practical especially for members of the subgenera Circumdati, Fumigati and Nidulantes, despite the fact that some closely related species, such as the wild-type A. flavus and its domesticated counterpart A. oryzae, or the wild-type A. parasiticus and the domesticated A. sojae, produced very similar isoenzyme patterns and could not be well differentiated. Nonetheless, fingerprinting of isozymes has not been widely employed as a practical identification system, since the enzyme profiles for the vast majority of Aspergillus, Penicillium and Talaromyces species remain uncharacterised. Also, there is no consensus as to which isoenzymes should be used for comparison.

Another chemotaxonomic approach is extrolite profiling. The exometabolome reflects the physiology of an organism in response to its biotic and abiotic environment, and profiling of the exometabolome is particularly useful for the chemotaxonomy of Aspergillus, Penicillium and Talaromyces species, since these genera are the best-known exometabolite producers, having the most diverse spectra of exometabolites amongst 26 different groups of ascomycetes analysed, which represented four different Classes. Amongst the various kinds of exometabolites, such as excess organic acids, extracellular enzymes and accumulated carbohydrates, the ones that generally display more pronounced chemoconsistency and higher species specificity are the secondary metabolites. The first insight into the taxonomic value of secondary metabolite profiling was gained when Ciegler et al. attempted to divide P.
viridicatum into three subgroups, in which the production of the mycotoxins citrinin, ochratoxin, viomellein and xanthomegnin was characterised as one of the classification criteria. However, Ciegler et al.’s method required complicated and tedious pre-treatment of the samples. As a result, their approach was only popularised after the development of simpler techniques that involve direct spotting of small agar plugs from fungal cultures onto thin-layer chromatography plates, without the need for any preceding extraction or purification procedures. Since then, extrolite data have contributed much to species recognition in Aspergillus, Penicillium and Talaromyces. For example, using secondary metabolite profiling, Frisvad and Filtenborg classified more than 4,000 isolates of terverticillate penicillia into 38 taxa and chemotypes, where infrataxon strains exhibited chemoconsistency in terms of mycotoxin production. They also reidentified a large number of misidentified Penicillium strains based on their profiles of secondary metabolites. Frisvad and Filtenborg, together with Samson and Stolk, also pioneered the chemotaxonomy of Talaromyces. Again, their analysis demonstrated that the production of secondary metabolites by members of this genus was taxon-specific, and they recognised T. macrosporus and T. luteus as species separate from T. flavus and T. udagawae, respectively, because of their different metabolic profiles. In fact, this chemotaxonomic work offered one of the very first indications of the connection between Talaromyces and Penicillium subgenus Biverticillium. An overview of the extrolite profiles of various Talaromyces species was given in the latest monograph on the genus by Yilmaz et al. The same also applies to Aspergillus species. Notably, different Aspergillus subgenera produce different unique extrolites, as summarised by Frisvad and Larsen. Thus, the production of a certain secondary metabolite by an Aspergillus isolate can serve as a practical hint for identification at the sectional level, whereas the identification of several secondary metabolites of the organism is an effective tool for species recognition. Currently, high-performance liquid chromatography coupled with diode array detection and mass spectrometry (HPLC–DAD–MS) is the method of choice for detailed chemotaxonomic characterisation of Aspergillus, Penicillium and Talaromyces. With about 350 accepted species each in Aspergillus and Penicillium and more than 100 accepted species in Talaromyces, qualitative databases containing a large volume of verified data on the production of secondary metabolites by the various Aspergillus, Penicillium and Talaromyces species are needed for accurate species identification. In view of this, an Aspergillus Secondary Metabolites Database was established last year. Recently, metabolic fingerprinting has also been demonstrated to be a potentially successful tool for differentiating closely related Aspergillus species, without the need to investigate the actual identities of the metabolites. For example, utilising this technique, Tam et al. showed that A. nomius and A. tamarii could be distinguished from their morphologically similar sibling A. flavus. In addition, hierarchical cluster analysis by Tsang et al. showed that, except for A. austroafricanus, the metabolic fingerprints of species in the same Aspergillus section clustered together and those of infraspecific strains formed smaller subclades.
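As a rough illustration of the kind of cluster analysis described above, the following minimal sketch, written in Python, groups hypothetical extrolite presence/absence fingerprints by UPGMA hierarchical clustering with SciPy. The strain names, metabolites and profile values are invented for illustration and are not data from the cited studies.

```python
# Minimal sketch: hierarchical clustering of hypothetical extrolite fingerprints.
# Strains and presence/absence values are illustrative only.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

strains = ["A. flavus 1", "A. flavus 2", "A. nomius 1", "A. tamarii 1"]

# Rows = strains; columns = presence (1) or absence (0) of four metabolites.
profiles = np.array([
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
])

# Jaccard distance suits binary presence/absence data; UPGMA ("average") linkage
# is a common choice in chemotaxonomic cluster analyses.
tree = linkage(pdist(profiles, metric="jaccard"), method="average")

# Inspect the leaf order without plotting; conspecific strains should pair up.
print(dendrogram(tree, labels=strains, no_plot=True)["ivl"])
```

The Jaccard/UPGMA combination is only one reasonable choice; the published analyses may have used different distance measures, linkage methods and quantified peak tables rather than binary profiles.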
Fatty acid profiling is another increasingly used method for diagnosing filamentous fungal species. Although the characterisation of fatty acid composition and relative concentration has long been utilised for bacterial and yeast chemotaxonomy, and a commercial fatty acid methyl ester-based bacterial/yeast identification system containing profiles of more than 1,500 different species has even been developed, only a few studies have made use of this technique to characterise the chemotaxonomy of filamentous fungi. This is because filamentous fungi do not produce fatty acids in the quantity and diversity that bacteria do, and therefore fatty acid profiling had traditionally been regarded as having little taxonomic value for filamentous fungi. Blomquist et al. first examined the utility of this technique for the identification of filamentous fungi. They characterised the fatty acid contents of conidia and found that fatty acid profiling, even when performed at different times, could potentially be used to identify Aspergillus and Penicillium species in a reproducible way. In 1996, Stahl and Klug performed a large-scale study to characterise the composition and relative concentration of fatty acids in the mycelia of a number of filamentous fungi from across different phyla. Seven species of Penicillium and one of Aspergillus were included in their study. It was revealed that four fatty acids, namely palmitic acid, stearic acid, oleic acid and linoleic acid, represented more than 95% of the total cellular fatty acid content. These four fatty acids were also common to all the filamentous fungi characterised. In spite of this, discriminant analysis showed that the fatty acid profiles of these species were significantly different. Notably, all seven Penicillium species characterised were found to possess unique fatty acid profiles. Later, in 1998, Da Silva et al. expanded the characterisation to 18 Penicillium species and found that different Penicillium subgenera could be readily differentiated by fatty acid profiling. Moreover, in some cases, species of the same subgenus, such as Furcatum, could be separated based on their fatty acid profiles, which differed mainly in the relative concentration rather than the composition of fatty acids, although difficulties existed for the subgenus Penicillium. That the discriminatory power relies on variation in relative fatty acid concentration was also observed by Mahmoud et al. Fatty acid profiling has likewise been used successfully to differentiate Aspergillus species.
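The discriminant analysis mentioned above can be sketched in a few lines. The example below, assuming scikit-learn is available, fits a linear discriminant model to hypothetical relative concentrations of the four dominant fatty acids and classifies an unknown isolate; the species labels and percentage values are placeholders, not measured data.

```python
# Minimal sketch: linear discriminant analysis of hypothetical fatty acid profiles.
# Species labels and percentage values are invented for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Columns: relative concentration (%) of palmitic, stearic, oleic, linoleic acid.
X_train = np.array([
    [18.0, 6.0, 20.0, 51.0],   # "P. chrysogenum", replicate 1
    [17.5, 6.5, 21.0, 50.5],   # "P. chrysogenum", replicate 2
    [18.5, 5.5, 19.5, 52.0],   # "P. chrysogenum", replicate 3
    [22.0, 4.0, 14.0, 55.0],   # "P. citrinum", replicate 1
    [21.0, 4.5, 15.0, 54.0],   # "P. citrinum", replicate 2
    [22.5, 3.5, 14.5, 55.5],   # "P. citrinum", replicate 3
])
y_train = ["P. chrysogenum"] * 3 + ["P. citrinum"] * 3

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)

# Classify an unknown isolate from its fatty acid profile.
unknown = np.array([[18.2, 5.8, 19.8, 51.2]])
print(lda.predict(unknown))  # expected here: ['P. chrysogenum']
```

In practice the training matrix would hold measurements for many reference strains, and cross-validation would be needed before such a classifier could be trusted.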
A recent chemotaxonomic approach for the rapid identification of Aspergillus, Penicillium and Talaromyces is matrix-assisted laser desorption/ionisation–time-of-flight mass spectrometry (MALDI–TOF MS). The technology compares the cellular protein profiles of different organisms to achieve identification at the species level. The advantage of this technique is that the methodology is simple, rapid and inexpensive, requiring only a specialised bench-top MALDI–TOF mass spectrometer. Also, since the majority of proteins analysed by MALDI–TOF MS are constitutively expressed ribosomal proteins, microorganisms can be successfully identified even when varying culture media and incubation conditions are used. More importantly, databases of protein mass spectra from over 2,400 microbial species are commercially available, making the identification of a wide range of microorganisms possible. Given these advantages, MALDI–TOF MS has been gaining popularity in clinical microbiology laboratories for the identification of pathogenic microorganisms, including bacteria, yeasts and even filamentous fungi. The potential of this technology for diagnosing Aspergillus, Penicillium and Talaromyces species has also been evaluated in numerous studies. In general, MALDI–TOF MS is successful in identifying the more commonly found aspergilli/penicillia, such as A. flavus, A. fumigatus, A. nidulans, A. niger, A. sydowii, A. unguis, P. chrysogenum, P. aurantiogriseum and P. purpurogenum, with correct identification rates of ≥78%. Yet, for other, rarer species misidentification is often encountered, although these uncommon species can usually still be identified to the sectional level: for example, A. tritici was misidentified as A. candidus; A. oryzae as A. flavus; A. fischeri as A. fumigatus; A. tubingensis and A. welwitschiae as A. niger; A. hortai and A. niveus as A. terreus; and A. sydowii as A. versicolor. A probable reason for this is that the mass spectra of many of these rare species are lacking in the commercial libraries. It should be noted that the Bruker MBT MSP 6903 Library, the Bruker MBT Filamentous Fungi Library and the Vitek MS V3.0 Knowledge Base include reference mass spectra for only 42, 127 and 82 filamentous fungal species, respectively. Of these, only up to 22 Aspergillus, 21 Penicillium and six Talaromyces species are included, the Talaromyces species still being listed under their previous Penicillium synonyms. However, the numbers of accepted Aspergillus, Penicillium and Talaromyces species greatly exceed those included in the MALDI–TOF MS databases, with Aspergillus and Penicillium each having approximately 350 species and Talaromyces more than 100. Despite this, MALDI–TOF MS has been demonstrated as a potential tool for differentiating members of the three genera by hierarchical cluster analysis of the mass spectra of various species. In theory, if more reference mass spectra for different species, especially the rare ones, were generated for inclusion in the databases, the species-level diagnostic power of MALDI–TOF MS would be greatly enhanced; indeed, previous studies have shown that correct identification rates can be improved by expanding the reference libraries with in-house-generated mass spectra. To overcome the limited reference data volume of the commercial databases, several organisations have established their own online supplementary databases. For example, the Spectra database of the Public Health Agency of Sweden is a platform on which MALDI–TOF MS users can deposit and exchange user-generated mass spectra, which are curated and continuously updated. Another such complementary database is the MSI Platforme, which serves as a webtool for MALDI–TOF MS-based fungal identification. This platform contains more than 11,800 reference mass spectra of more than 900 fungal species, aiming to supplement the insufficient spectral diversity of the commercial databases and so improve species identification.
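To make the spectral comparison step concrete, the minimal sketch below bins hypothetical peak lists into fixed m/z windows and scores a query spectrum against a tiny in-house reference library by cosine similarity. All peak values and species assignments are invented; commercial systems such as the Bruker and Vitek platforms use their own, more elaborate preprocessing and scoring schemes.

```python
# Minimal sketch: matching a query MALDI-TOF spectrum against a small in-house
# reference library by peak binning and cosine similarity. The (m/z, intensity)
# peak lists and species names are hypothetical.
import numpy as np

def bin_spectrum(peaks, mz_min=2000, mz_max=20000, bin_width=5):
    """Turn a list of (m/z, intensity) peaks into a unit-normalised vector."""
    bins = np.zeros(int((mz_max - mz_min) / bin_width))
    for mz, intensity in peaks:
        if mz_min <= mz < mz_max:
            bins[int((mz - mz_min) / bin_width)] += intensity
    norm = np.linalg.norm(bins)
    return bins / norm if norm > 0 else bins

reference_library = {
    "A. fumigatus": bin_spectrum([(4813, 0.9), (5780, 0.6), (6259, 1.0)]),
    "A. flavus":    bin_spectrum([(4600, 0.8), (5350, 1.0), (7420, 0.5)]),
}

query = bin_spectrum([(4812, 0.85), (5781, 0.55), (6258, 1.0)])

# Cosine similarity reduces to a dot product because the vectors are unit length.
scores = {species: float(query @ ref) for species, ref in reference_library.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))  # the best-scoring library entry
```

A real pipeline would typically also smooth and baseline-correct the raw spectra and apply a minimum score threshold before accepting a species-level call.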
With the current adoption of consolidated species recognition, in which molecular characters play a predominant role, DNA sequencing and phylogenetic analysis have become the gold standard for accurate fungal identification. As in other fungi, early molecular work on Aspergillus, Penicillium and Talaromyces involved the comparison of large and small subunit ribosomal RNA gene and internal transcribed spacer (ITS) sequences. However, subsequent analyses showed that the ribosomal genes are too conserved to separate these groups of fungi. In addition, although ITS is now accepted as the official DNA barcode for fungi, it has also been recognised as an extremely conserved region in Aspergillus, Penicillium and Talaromyces: although its sequence variability can be used to distinguish species belonging to different sections or series, it is very often not useful for differentiating species within the same section or series. In view of this, and also to better reflect the genealogy of this group of organisms, sequencing of multiple genetic markers, in particular the β-tubulin (benA) and calmodulin (cmdA) genes, has been advocated to define species boundaries. The exons of these genes are highly conserved and are therefore good locations for primer binding, whereas the introns between the exons act as the major source of sequence variation. As a result, sequences of these genes containing both exons and introns are able to provide variation at different levels for species delimitation. With the majority of Aspergillus, Penicillium and Talaromyces species now clearly defined, sequencing of benA and/or cmdA can be used to identify most of these species. In fact, benA and cmdA have been proposed as the secondary identification markers for Penicillium and Aspergillus species, respectively, because universal primers are available for these two genes and both are easy to amplify. In the case of Aspergillus, although benA is easily amplified, some species possess paralogous genes that are also amplified by the universal primers, which can be confusing and complicate species identification. Although a similar problem has been noted for cmdA, amplification of a pseudogene has occurred for only one Aspergillus strain. Moreover, cmdA is also easy to amplify and its sequence is available for nearly all accepted species. Therefore, cmdA was chosen over benA as the secondary identification marker for Aspergillus. For Penicillium, on the other hand, amplification of benA paralogues has not been reported and, since a complete cmdA sequence database is lacking, benA became the secondary identification marker of choice. Although a third option, the RNA polymerase II second-largest subunit gene (rpb2), also exists, and its lack of introns allows robust and easy alignment for phylogenetic analysis, it was not selected over benA or cmdA because rpb2 is sometimes difficult to amplify and a database of sufficient volume is lacking. Nonetheless, when resources are available it is recommended to sequence all four genetic markers to aid identification, especially when new species are being diagnosed. Although a recommendation of identification markers has not been put forward for Talaromyces, these species generally follow those for Aspergillus and Penicillium.
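As a simple illustration of how such a marker sequence might be screened against reference data, the sketch below computes an approximate percent identity between a query cmdA fragment and two ex-type reference sequences using Biopython's PairwiseAligner. The sequences shown are short placeholders rather than real cmdA data, and in practice identification would rely on verified ex-type sequences and a subsequent phylogenetic analysis.

```python
# Minimal sketch: ranking candidate species by approximate percent identity of a
# query cmdA fragment against ex-type reference sequences. The sequences are
# short placeholders, not real cmdA data.
from Bio.Align import PairwiseAligner

references = {
    "Aspergillus fumigatus (ex-type)": "ATGGCTGACCAGCTGACTGAAGAGCAGATTGCA",
    "Aspergillus fischeri (ex-type)":  "ATGGCTGATCAGCTCACTGAAGAACAGATTGCA",
}
query = "ATGGCTGACCAGCTGACTGAAGAGCAGATCGCA"

aligner = PairwiseAligner()
aligner.mode = "global"
aligner.match_score = 1.0
aligner.mismatch_score = 0.0
aligner.open_gap_score = -1.0
aligner.extend_gap_score = -0.5

for species, ref in references.items():
    score = aligner.score(query, ref)  # roughly the number of matched bases
    # Score over the longer sequence length is only a rough identity proxy,
    # especially once gaps are involved.
    identity = 100.0 * score / max(len(query), len(ref))
    print(f"{species}: ~{identity:.1f}% identity")
```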
In order to achieve accurate identification, query sequences should be compared against reliable databases. Although the International Nucleotide Sequence Database Collaboration (INSDC) contains a vast number of sequences, the reliability of the sequence annotation is questionable. Notably, ≥10% of the fungal ITS sequences in these databases were found to be misannotated. As such, the Fungal ITS RefSeq Targeted Loci Project has been initiated by the National Center for Biotechnology Information to improve the quality and accuracy of the sequences deposited in the INSDC. Similarly, the UNITE database was developed to include high-quality type or representative sequences for fungi or fungal species hypotheses with correct and up-to-date taxonomic annotations. The International Society for Human and Animal Mycology (ISHAM) ITS database, specialised in the ITS-based identification of medical fungi, has also been established recently and contains a considerable number of high-quality ITS sequences for Aspergillus, Penicillium and Talaromyces species commonly encountered in clinical settings. While curated databases for benA, cmdA and rpb2 have not been created, reliable sequences for the ex-type strains of all accepted Aspergillus, Penicillium and Talaromyces species are listed in the recent monographs on the three genera or online at http://www.aspergilluspenicillium.org/.
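For completeness, a query sequence can also be screened against GenBank programmatically; the minimal sketch below submits a placeholder ITS fragment to NCBI BLAST through Biopython and prints the top hits. Any hits returned this way would still need to be checked against curated resources such as RefSeq ITS, UNITE or the ISHAM ITS database, given the annotation problems noted above.

```python
# Minimal sketch: remote blastn search of a placeholder ITS fragment against the
# NCBI nt database via Biopython. Requires network access and may take a while,
# since the search runs on NCBI's servers.
from Bio.Blast import NCBIWWW, NCBIXML

query_its = (
    "TCCGTAGGTGAACCTGCGGAAGGATCATTACCGAGTGCGGGTCCTTTGGGCCCAACCTCCCATCCGTGTCTATT"
)  # placeholder fragment, not a real barcode

result_handle = NCBIWWW.qblast("blastn", "nt", query_its, hitlist_size=5)
record = NCBIXML.read(result_handle)

for alignment in record.alignments:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.hit_def[:60]}  ~{identity:.1f}% identity")
```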
In addition to nuclear genes, attempts have also been made to understand the evolution of Aspergillus, Penicillium and Talaromyces by sequencing their mitogenomes. Yet, only a handful of mitogenomes are currently available for these groups of fungi, and the utility of mitogenomes for species diagnosis awaits further examination.

A stable taxonomy is important to the study of Aspergillus, Penicillium and Talaromyces in every respect, including medical mycology. First of all, the nomenclature of pathogenic fungi should be stable over time, without frequent drastic name changes. The recently implemented 1F1N scheme, under which one fungus may possess only one name, has drastically simplified fungal nomenclature. The accepted use of Aspergillus and Penicillium names over their respective ‘sexual names’ is particularly important to the medical community, because most clinical fungi are isolated in their asexual forms and are traditionally named with their asexual names. Use of the ‘sexual names’ would confuse clinicians, who would not be aware of what Eupenicillium, Neosartorya and Emericella are, thus hindering treatment and patient care. This is exemplified by the recent transfer of P. marneffei to T. marneffei, where the well-known disease name ‘penicilliosis’ also had to be changed to the unfamiliar ‘talaromycosis’. A stable taxonomy also clearly defines species and their identification methods; therefore, the clinical spectrum of pathogenic species can be better studied and, in particular, rare and new aetiological agents can be revealed. Accurate identification of the causative pathogen is crucial to epidemiological studies. Correct species diagnosis can also help predict antifungal susceptibility, which varies across species and can significantly affect patient treatment, disease management and prognosis. For example, it has been shown that A. tubingensis and A. unguis possess elevated minimum inhibitory concentrations of itraconazole. The variable activity of triazole agents against different Aspergillus species has also been demonstrated in other studies. Also, although triazoles show moderate activity against Penicillium species, their effectiveness against some Talaromyces species is poor. With a consistent taxonomy, understanding of the epidemiology and clinical spectrum of diseases caused by Aspergillus, Penicillium and Talaromyces can be enhanced, which in turn facilitates the laboratory diagnosis of these important mycotic pathogens and the establishment of patient treatment strategies.

The transition from morphological/phenotypic to chemotaxonomic, genetic/phylogenetic or consolidated species recognition has resulted in the reclassification of these groups of fungi and has enabled sexual-asexual connections to be made. In the current omics era, advances in different omics technologies make it possible to characterise the complete set of a particular group of characters, allowing more thorough analyses and therefore a more stable taxonomy. For example, comparison of mitogenomes supported the transfer of ‘P. marneffei’ to Talaromyces and demonstrated that Aspergillus and Penicillium are more closely related to each other than to Talaromyces. The availability of contemporary advanced techniques, such as MALDI–TOF MS and UHPLC/HPLC–DAD–MS, significantly improves proteomic and metabolic fingerprinting of fungi, respectively, thus aiding chemotaxonomy. As the cost of second-generation sequencing falls and the emerging third-generation sequencing becomes more widely accessible, more and more complete or near-complete fungal genomes are becoming available. These genome sequences could advance our knowledge of these fungi, such as T. marneffei, and their taxonomy could thus be facilitated.
With such additional novel data, further reclassification of Aspergillus, Penicillium and Talaromyces is expected. Application of all these state-of-the-art omics technologies is likely to provide comprehensive information on the evolution of the three related genera, and a more stable taxonomy for them will hopefully be achieved. Yet it should be noted that, even though these advanced methodologies are becoming more readily available for the identification and classification of fungi, it is equally important for mycologists to apply standard or best practices when studying fungal taxonomic relationships. In particular, fungal taxonomists should always keep up to date with recent trends, tools, standards, recommendations and practices in the field, especially when describing new species. When depositing DNA sequence data in public databases, the sequences should be carefully checked for authenticity and reliability, and they should be annotated as richly as possible. Also, multiple genetic markers and proper analytical tools should be used for the inference of phylogenetic relationships. As taxonomy has nowadays entered a deep crisis in which descriptive taxonomic studies are discouraged, it is important for taxonomists to keep up the pace for regrowth, to participate actively and to foster a good ‘taxonomic culture’ so that the scientific community values taxonomic work more highly. This could also help attract more research funding for the more expensive technologies and equipment needed for detailed taxonomic characterisation. All these efforts could significantly speed up taxonomic and molecular-ecology progress on Aspergillus, Penicillium and Talaromyces. | Aspergillus, Penicillium and Talaromyces are diverse, phenotypically polythetic genera encompassing species important to the environment, economy, biotechnology and medicine, causing significant social impacts. Taxonomic studies on these fungi are essential since they could provide invaluable information on their evolutionary relationships and define criteria for species recognition. With the advancement of various biological, biochemical and computational technologies, different approaches have been adopted for the taxonomy of Aspergillus, Penicillium and Talaromyces; for example, from traditional morphotyping and phenotyping to chemotyping (e.g. lipotyping, proteotyping and metabolotyping) and then mitogenotyping and/or phylotyping. Since different taxonomic approaches focus on different sets of characters of the organisms, various classification and identification schemes result. In view of this, the consolidated species concept, which takes into account different types of characters, has recently been accepted for taxonomic purposes and, together with the recently implemented ‘One Fungus – One Name’ policy, is expected to bring a more stable taxonomy for Aspergillus, Penicillium and Talaromyces, which could facilitate their evolutionary studies. The most significant taxonomic change for the three genera was the transfer of Penicillium subgenus Biverticillium to Talaromyces (e.g. the medically important thermally dimorphic ‘P. marneffei’ endemic in Southeast Asia is now named T. marneffei), leaving both Penicillium and Talaromyces as monophyletic genera. Several distantly related Aspergillus-like fungi were also segregated from Aspergillus, making this genus, containing members of both sexual and asexual morphs, monophyletic as well.
In the current omics era, application of various state-of-the-art omics technologies is likely to provide comprehensive information on the evolution of Aspergillus, Penicillium and Talaromyces, and a stable taxonomy will hopefully be achieved.