The Age of the Universe Predicted by a Time-Varying G Model?

Based on previous work, it is shown how a time-varying gravitational constant can account for the apparent tension between Hubble's constant and a newly predicted age of the universe. The rate of expansion, about nine percent greater than previously estimated, can be accommodated by two specific models that treat the gravitational constant as an order parameter. The deviations from ΛCDM are slight except in the very early universe, and the two time-varying parametrizations for G reduce precisely to the standard cosmological model in the limit \dot{G}/G \to 0, while also offering a possible explanation for the observed tension. It is estimated that in the current epoch \dot{G}/G = -0.06\,H_0, where H_0 is Hubble's parameter, a value within current observational bounds.

Keywords: Universe, Gravitational Constant, Hubble's Constant, Standard Cosmological Model

(Only the article's detached equations and numerical values survive in this extract. The main ones are: the Planck 2015 parameters \Omega_{RAD} = 0.000083, \Omega_{MATTER} = 0.3089, \Omega_{\Lambda} = 0.6911, H_0 = 67.74 km/(s·Mpc), and the dark-energy equation of state w = -0.98; \Lambda_{OBS} = 1.11 \times 10^{-52}\,\text{m}^{-2} versus \Lambda_{VACUUM} = (\text{Planck length})^{-2} = 3.83 \times 10^{69}\,\text{m}^{-2}; the two parametrizations G^{-1} = G_{\infty}^{-1}(1 - e^{-x}) and G^{-1} = G_{\infty}^{-1} L(x) with L(x) \equiv \coth(x) - 1/x, where x \equiv b/T = ab/T_0, T_0 = 2.725 K and x_0 \equiv b/T_0; the fitted values x_0 = 4.28, b = 11.663 K and x_0 = 17.67, b = 48.15 K; the inception temperatures T_C = 6.20 \times 10^{21} K and 7.01 \times 10^{21} K; the saturation values (G_{\infty}^{-1})|_A = 1.014\,G_0^{-1} and (G_{\infty}^{-1})|_B = 1.054\,G_0^{-1}; the scaling G/G_0 = \rho_{\Lambda}/\rho_{\Lambda_0} = (\Lambda/\Lambda_0)^{1/2}; the values H_{0A} = 61.7 km/(s·Mpc) and H_{0B} = 63.9 km/(s·Mpc); the order-parameter relation \langle 0|\Phi^2|0\rangle = G^{-1} = M_G^2/(\hbar c); and the modified Planck length L_{PP} = s L_P with s_A = 0.993, s_B = 0.974, and Planck mass M_{PP} = s^{-1} M_P with s_A^{-1} = 1.007, s_B^{-1} = 1.027.)

Pilot, C. (2019) The Age of the Universe Predicted by a Time-Varying G Model? Journal of High Energy Physics, Gravitation and Cosmology, 5, 928-934. https://doi.org/10.4236/jhepgc.2019.53048
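The quoted value \dot{G}/G = -0.06\,H_0 can be cross-checked numerically from the two parametrizations and fitted x_0 values alone. This is a sketch, not taken from the paper: it only uses the fact that x \propto a, so \dot{x} = xH, which gives \dot{G}/G = -xH\,f'(x)/f(x) for G^{-1} = G_\infty^{-1} f(x).

```python
import numpy as np

def dG_over_G_in_units_of_H(x0, f, dfdx):
    """(G_dot/G)/H for G^{-1} = G_inf^{-1} f(x), with x = a*b/T0 so that x_dot = x*H."""
    return -x0 * dfdx(x0) / f(x0)

# Model A: G^{-1} = G_inf^{-1} (1 - exp(-x)), quoted best fit x0 = 4.28
fA  = lambda x: 1.0 - np.exp(-x)
dfA = lambda x: np.exp(-x)

# Model B: G^{-1} = G_inf^{-1} L(x), Langevin function L(x) = coth(x) - 1/x, x0 = 17.67
fB  = lambda x: 1.0 / np.tanh(x) - 1.0 / x
dfB = lambda x: -1.0 / np.sinh(x)**2 + 1.0 / x**2

print(dG_over_G_in_units_of_H(4.28, fA, dfA))    # ~ -0.060
print(dG_over_G_in_units_of_H(17.67, fB, dfB))   # ~ -0.060
```

Both parametrizations give roughly -0.060, consistent with the -0.06 H_0 estimate quoted in the abstract.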
Simulation on Sequential Construction Process and Structure of the Pisa Tower Jun Geng1, Zuping Meng2, Bangxun Yin3, Liufei Zhu4 1Jilin University, Jilin, China. 2National Chiao Tung University, Taiwan, China. 4The Experimental High School Attached to Beijing Normal University, Beijing, China. The leaning of structures happens all around the world and generates impacts of different extents; thus, it is important to understand the causes behind it. In this report, the sequential construction of a typical leaning structure, the Tower of Pisa, is discussed and simulated using a finite element code, PLAXIS. The simulation is performed on a two-dimensional plane, with simplifications adopted to make the modeling feasible under the code's limitations. Three distinct models are built, one serving as a control, while the other two are set up with a prescribed eccentricity. Data are obtained from the analysis and plotted to clearly show the relationship between the tilting angle and the construction phases. With a reasonable and complete simulation, the study shows the significant role compressible subsoil plays in the tilting performance of a tall building. The Leaning Tower of Pisa, Tilting Angle, Sequential Construction, Subsoil, Tilting Performance, PLAXIS Geng, J., Meng, Z., Yin, B. and Zhu, L. (2020) Simulation on Sequential Construction Process and Structure of the Pisa Tower. Journal of Building Construction and Planning Research, 8, 30-41. doi: 10.4236/jbcpr.2020.81003. The leaning tower of Pisa is one of the most remarkable architectural structures of medieval Europe. Begun in 1173, the tower is round and built throughout of white marble, inlaid on the exterior with colored marbles. The uneven settling of the campanile's foundations during its construction gave the structure a marked inclination that is now about 17 feet (5.2 m) out of the perpendicular [1]. Currently, the tower is approximately 60 meters tall; the highest side is 56.67 m, the lowest side is 55.86 m, and the outer diameter is 15.484 m. The overall form of the tower is a hollow cylinder subdivided into eight orders, with a spiral staircase leading to the top. A colonnade decorates the wall of the base floor; the middle six stories are surrounded by marble columns and loggias; a belfry, commonly referred to as the Bell Chamber, is constructed at the top. The significant events in the construction history of the leaning tower of Pisa are shown in Table 1. Construction began in 1173 and was interrupted after five years, when three and one-third floors had been completed and the tower was leaning to the north by about 0.25 degrees. The tower continued to move towards a more upright position after this phase of work finished [2]. However, when the 7th floor was finished in 1278, the tower had tilted back to the south by about 0.6 degrees, and the tilt increased to 1.6 degrees over about 90 years. In 1817, the first measurement of the tilting angle was recorded, showing an angle of about 5 degrees toward the south. After centuries of repairs and adjustments, the inclination decreased to about 4 degrees. As Pisans say, the tower is "banana-shaped", with the bell tower 1.5 degrees closer to the vertical than the base [3]. The inclination angle of the tower over time can be seen in Figure 1. The objective of this research is to simulate the sequential construction of the leaning tower of Pisa.
One important reason is that the tower of Pisa is currently the world's most ancient and famous leaning structure, which means that more historical records and published articles are available about it. These give more freedom in choosing data for the simulation, and the tilting of the Tower of Pisa is itself a very representative architectural problem. To this end, the finite element software PLAXIS 8.2 [5] is utilized to capture the process of staged construction, because PLAXIS offers the functions needed to simulate consolidation problems and staged construction, which are required for this simulation. Even though the version of the code restricts the study to two-dimensional plane-strain modeling, and the premises do not accurately reflect the geometrical features of the Pisa tower, the proposed study is useful in gaining insight into how subsoil foundations impact the performance of a tall structure. The study also provides an opportunity to utilize an advanced finite element code in modeling complex boundary-value problems, such as those arising from the sequential construction of a world heritage structure.
Table 1. Significant events in the construction history.
Figure 1. The inclining angle over time [4].
Soil-structure interaction (SSI) is the process of mutual interaction between soil and structure; that is, under external forces, the response of the soil influences the motion of the structure and vice versa. Conventional structural design methods usually neglect SSI effects, which is feasible for light structures on relatively stiff soil, but the effects become prominent when a heavy building rests on soft soil; the leaning tower of Pisa, which is composed mostly of heavy stone and was built on compressible soil, is a typical example of the latter situation. Therefore, this research is a simple exploration of the soil-structure interaction between the leaning tower of Pisa and the soil beneath it.
2. Subsoil Condition
The tower of Pisa was built on highly compressible soils and started leaning at the very beginning. The ground profile beneath the tower consists of three separate layers, as shown in Figure 2. Horizon A is about 10 m thick, consisting of a 2-meter-thick fine sand layer, the upper sand, which has medium density. Horizon B extends to a greater depth of about 40 m and consists primarily of marine clay. This layer is subdivided into four distinct layers. The uppermost layer is soft sensitive clay, locally known as the Pancone clay; the second layer consists of stiffer clay; the third layer beneath is intermediate sand; and a consolidated clay, known as the lower clay, is at the bottom. Horizon B is notably uniform laterally under the tower. Horizon C is a dense sand layer extending to a depth of more than 60 meters; it primarily comprises the lower sands.
Figure 2. The subsoil of the Tower [7].
The average ground water elevation is 3 meters above mean sea level, while aquifers in Horizon A, located at a depth between 1 m and 2 m, pose challenging problems for protecting the tower. This information is quite essential, since the depth of the soil layers strongly affects the structural response [6].
3. Assumptions on the Tilting Structure
Based on the aforementioned background of the tilting structure, the following assumptions can be made. The first is that the mass and weight of the building are uniformly distributed.
Second, the building is rectangular in shape, and its deformation is small compared to its rigid-body motion. There is no relative movement between the soil and the building at their interface. The material of the building is elastic, and the building tilts in the plane defined by its width and height; therefore, the simulation is two-dimensional. Finally, the ground elevation is 0 m. Some simplifications of the problem are made to keep the simulation feasible under these limitations. The structure was idealized as an 8-story, perfectly vertical building 60 m high, 100 m long, and 19 m wide, with 5 m embedded underground, as shown in Figure 3. When simulating, the building was assumed to tilt in a certain direction. Additional information is shown in Table 2. The subsoil profile was assumed to consist of four layers, each represented with the Mohr-Coulomb model to simulate its constitutive behavior. The profile is given in Table 3 and Table 4.
Figure 3. The building model.
Table 2. Building information.
Table 3. Soil information on the side of lower foundation stress.
Table 4. Soil information on the side of higher foundation stress.
In order to simplify the loading condition, the weight of each floor is taken as a concentrated load at the centroid, as shown in Figure 4, which has the same effect as the distributed load. The initial simulation model (model 1) is a vertical ghost building with vertical point loads at the centroid of each floor, as shown in Figure 5. The structure of the sand layers will cause uneven settlement and make the building tilt without any further setting. However, the 2D model in PLAXIS is unable to simulate these non-linear conditions directly, so an angle of tilt in the initial setting is required to generate an eccentricity, which brings the simulation closer to the realistic condition.
Figure 4. The loading condition.
Figure 5. Geometry model (model 1).
In order to locate the load with the exact eccentricity, it is assumed that the entire building starts to tilt at 0.6 degrees as construction begins. The geometry model (model 2) can then be built in 2D as shown in Figure 6. The building is defined as crossing plates of high stiffness, with their deformation ignored. The weight of each plate is included in the overall weight of the floor; therefore, the input weight of each member is set to zero. Table 5 shows the properties of each member. The finite elements are produced by meshing the entire geometry with a medium element distribution and refining the cluster around the box foundation for a more precise calculation. The meshing result is shown in Figure 7.
Table 5. Material properties of the members.
Figure 7. Meshing result (model 2).
In order to simulate the tilt during the tower's construction, the lower three floors are revised to be perpendicular to the ground, because the uneven settlement there is so small that the tilt can be ignored. The upper five floors are revised to tilt at 0.6 degrees because of the uneven settlement caused by soil consolidation. Therefore, the geometry model (model 3) can be built in 2D as shown in Figure 8. The material properties of the members and the meshing process are the same as in the previous model. The meshing result is shown in Figure 9. The equivalent point load is defined by: \bar{p} = \frac{\text{Weight}}{\text{Length}}. Therefore, the simulation results can be obtained from PLAXIS as shown in Table 6 and Figure 10.
Figure 8. Revised geometry model (model 3).
Figure 10. Simulation results of all models.
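For post-processing, the tilt reported for each construction phase can be obtained from the vertical displacements that PLAXIS returns at the two edges of the embedded foundation. A minimal sketch of that step, assuming the 19 m foundation width from Table 2; the settlement values are hypothetical placeholders, not simulation output:

```python
import math

def tilt_deg(settle_left, settle_right, width=19.0):
    """Tilt angle (degrees) from the differential settlement of the two
    foundation edges; width is the foundation breadth in metres."""
    return math.degrees(math.atan((settle_left - settle_right) / width))

# Hypothetical edge settlements (m) after one construction phase
print(tilt_deg(0.45, 0.25))   # ~0.60 deg, the initial eccentricity assumed for model 2
```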
7. Advantages and Weaknesses
There are several advantages to the current modeling process. The numerical model can be set up with ease: since the horizontal section of the Pisa tower is a symmetric circle and the weight of each floor is uniformly distributed, it is reasonable to regard the weight as a concentrated load at the centroid. A similar leaning angle can easily be simulated in a two-dimensional model. The model also has its limitations. The first is that PLAXIS 2D is more suitable for simulating buildings with uniform cross-sections. However, the tower of Pisa is a hollow cylinder with a certain degree of leaning, meaning that the sections differ; therefore, the results derived cannot be applied directly to the real situation. The soil structure of the 2D model is over-simplified, for the Pisa tower in reality first tilted to the north and then to the south during construction. In addition, the possible gap between the soil and the foundation of a tower in cohesive soil is ignored, which has a significant effect on the capacity performance [8]. The simulation also lacks precise information, including soil conditions and water levels, making it more constrained. In conclusion, model 3 shows its largest angle of tilt at a weight of approximately 3000 MN, in the middle of the loading range; however, this is not the expected result, since the angle of tilt should increase in proportion to the weight. Models 1 and 2 give better simulations, showing the predicted trend of the tilting angle. As a result, model 3 seems to have some drawbacks in predicting the inclination. The problems might stem from the distorted form of model 3: the lower floors of the model are assumed to be perfectly vertical while the upper floors are tilted at 0.6 degrees, which differs from the actual tower of Pisa, whose lower and upper levels tilt in opposite directions. Thus, model 3 is not very suitable, at least in this simulation. For models 1 and 2, the rising trend of the tilting angle corresponds to the initial assumption that the angle of tilt has a positive relationship with the weight. In the simulation above, the external force comes from the leaning tower of Pisa itself. Since the tower began tilting when the fourth order was being constructed, the gravity of the upper orders provides the simulation with the external force. Moreover, in the models above, certain areas around the building have been selected and meshed with finer elements, showing that the simulation focuses on the direct soil-structure interaction. The SSI effect is reflected by the simulated tilting angle between the Pisa Tower and the plumb line, displaying the significant role that compressible subsoil has played in the performance of a tall tilting building.
Acknowledgements
The research team would like to extend its sincere gratitude to its instructor, Professor Ronaldo Borja, for his useful suggestions on the thesis and detailed instructions on the modification of the paper; the team is deeply grateful for his help in the completion of this work. Thanks should also go to the Cathaypath Institute of Science and the Zizhu International Education Zone for providing a great opportunity to participate in the course on Solid Mechanics and, eventually, the chance to complete this work. Special thanks are also due to the research team's tutors, Chen Wei and Liu Zekun, who put considerable time and effort into their comments on the draft.
*These are co-first authors, sorted by alphabetical order of last name. [1] Britannica Online Encyclopedia (2009) Leaning Tower of Pisa (Tower, Pisa, Italy). https://www.britannica.com/topic/Leaning-Tower-of-Pisa [2] Duff, M. (2008) Europe | Pisa’s Leaning Tower ‘Stabilised’. BBC News, 5 May 2009. [3] Black, C.B. (1898) The Riviera, or the Coast from Marseilles to Leghorn: Including the Interior Towns of Carrara, Lucca, Pisa and Pistoia. A. & C. Black, London, 148. [4] Burland, J.B., Jamiolkowski M., Squeglia N. and Viggiani, C. (2013) The Leaning Tower of Pisa. In: Bilotta, E., Flora, A., Lirer, S. and Viggiani, C., Eds., Geotechnics and Heritage, CRC Press, London, 207-227. https://doi.org/10.1201/b14965-11 [5] PLAXIS 8.2 (1998) Finite Element Code for Soil and Rock Analysis, Version 8.2. Brinkgreve, R.B.J. and Vermeer, P.A., Eds. Rotterdam, The Netherlands. [6] Wolf, J.P. (1985) Dynamic Soil-Structure Interaction. Prentice-Hall, Inc., Englewood Cliffs, New Jersey. [7] Fiorentino, G., Nuti, C., Squeglia, N., Lavorato, D. and Stacul, S. (2018) One-Dimensional Nonlinear Seismic Response Analysis Using Strength-Controlled Constitutive Models: The Case of the Leaning Tower of Pisa’s Subsoil. Geosciences, 8, 228. [8] Tuladhar, R., Maki, T. and Mutsuyoshi, H. (2008) Cyclic Behavior of Laterally Loaded Concrete Piles Embedded into Cohesive Soil. Earthquake Engineering & Structural Dynamics, 37, 43-59. https://doi.org/10.1002/eqe.744
Fixed Point Approximation of Nonexpansive Mappings on a Nonlinear Domain (2014). We use a three-step iterative process to prove some strong and Δ-convergence results for nonexpansive mappings in a uniformly convex hyperbolic space, a nonlinear domain. Three-step iterative processes have numerous applications, and hyperbolic spaces contain Banach spaces (linear domains) as well as CAT(0) spaces. Thus our results can be viewed as an extension and generalization of several known results in uniformly convex Banach spaces as well as CAT(0) spaces. Safeer Hussain Khan, "Fixed Point Approximation of Nonexpansive Mappings on a Nonlinear Domain," Abstract and Applied Analysis, 2014 (SI71), 1-5, 2014. https://doi.org/10.1155/2014/401650
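The abstract does not reproduce the scheme itself. Purely as an illustration, the following is a generic three-step (Noor-type) iteration written for the Banach-space special case of the real line, where the geodesic convex combinations of a hyperbolic space reduce to ordinary convex combinations; the map T, the parameters, and the starting point are assumptions for the sketch, not taken from the paper.

```python
import math

def three_step(T, x0, a=0.5, b=0.5, c=0.5, n_iter=50):
    """Generic three-step iteration for a nonexpansive map T on the real line.
    In a hyperbolic space the convex combinations below would be replaced by
    geodesic convex combinations."""
    x = x0
    for _ in range(n_iter):
        z = (1 - c) * x + c * T(x)
        y = (1 - b) * z + b * T(z)
        x = (1 - a) * y + a * T(y)
    return x

# T(x) = cos(x) is nonexpansive on R (|T'| <= 1); its unique fixed point is ~0.739085
print(three_step(math.cos, 2.0))
```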
Characterizations of Hemirings Based on Probability Spaces (2013). Bin Yu, Jianming Zhan. The notion of falling fuzzy h-ideals of a hemiring is introduced on the basis of the theory of falling shadows and fuzzy sets. Then the relations between fuzzy h-ideals and falling fuzzy h-ideals are described. In particular, by means of falling fuzzy h-ideals, the characterizations of h-hemiregular hemirings are investigated based on independent (perfect positive correlation) probability spaces. Bin Yu and Jianming Zhan, "Characterizations of Hemirings Based on Probability Spaces," Journal of Applied Mathematics, 2013, 1-9, 2013. https://doi.org/10.1155/2013/716435
Mineral Candidates for Planet Interiors. Physics, January 21, 2022. How the young Earth acquired its H2O is not known. New simulations suggest that large volumes of rock made of a silicate mineral from Earth's core could have gradually risen toward the surface and released enough water to fill the oceans. Dong and his colleagues explored whether silicates—a common type of mineral in Earth's interior—could hold water. Unlike previous researchers, they considered pressures that would be relevant in Earth's core: 136 to 364 GPa. The core mostly consists of iron alloys now, but silicates could have been present long ago. "We realized that in the early Earth, things were quite different, and silicates would have existed all the way down to the planet's center," Dong says. To investigate these core silicates, the researchers plugged four elements—magnesium (Mg), silicon (Si), oxygen (O), and hydrogen (H)—into their crystal structure algorithm. They discovered a new crystalline structure, which only exists at pressures greater than 260 GPa; at lower pressures, it decomposes. This high-pressure silicate is 11% water by weight, which is comparable to other hydrous minerals. Given the abundance of magnesium and silicon in Earth's interior, the team estimates that this phase could have stored as much as twice Earth's current water supply in the young planet's core. The researchers postulate that—as the planet aged—dense iron alloys settled into the core, pushing the silicates upward into lower pressure regions where they released their water. This freed water eventually trickled up to the surface and filled the oceans. "We have used our newly developed crystal structure search techniques and ab initio molecular dynamics simulations to study the phase diagrams of the Si-O-H system," Sun says. He and his colleagues found that, at pressures above 450 GPa, silica can react with water and hydrogen to form three compounds. The last two of these exhibit a so-called "superionic" phase in which protons (hydrogen nuclei) are able to diffuse within the mineral's crystal framework. The freely moving protons would carry current and thus could generate Uranus' and Neptune's magnetic fields. The origin of Earth's water is one of many open questions about our planet's formation, says planetary scientist Ravit Helled from the University of Zurich. "We still don't know exactly how much water Earth has in its deep interior today," she says. If the core served as an important water reservoir, as suggested by Dong and colleagues, then similar water storage may have occurred in other rocky planets, affecting how they evolved, Helled says. Michael Schirber is a Corresponding Editor for Physics based in Lyon, France. N. A. Teanby et al., "Neptune and Uranus: ice or rock giants?" Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 378, 20190489 (2020). R. Helled et al., "Uranus and Neptune: Origin, Evolution and Internal Structure," Space Science Reviews 216, 38 (2020).
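The figures quoted above (11% water by weight, up to twice Earth's current water supply) allow a rough order-of-magnitude check. The ocean and Earth masses below are standard approximate values, not taken from the article; everything else follows from the article's numbers.

```python
ocean_mass = 1.4e21        # kg, approximate mass of Earth's present oceans
earth_mass = 5.97e24       # kg, approximate mass of Earth
water_fraction = 0.11      # the high-pressure silicate is ~11% water by weight

stored_water = 2 * ocean_mass                  # "as much as twice Earth's current water supply"
silicate_needed = stored_water / water_fraction
print(f"silicate reservoir ~ {silicate_needed:.1e} kg "
      f"({100 * silicate_needed / earth_mass:.2f}% of Earth's mass)")
# ~2.5e22 kg, i.e. roughly 0.4% of Earth's mass - a plausibly small reservoir
```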
Sketch each of the following piecewise-defined functions. Then, determine if the functions are continuous and differentiable over all reals.

(a) f(x) = \left\{ \begin{array}{ll} 2x^3 & \text{for } x < 0 \\ 3x^2 & \text{for } x \geq 0 \end{array} \right.

(b) f(x) = \left\{ \begin{array}{ll} (x+1)^2 + 1 & \text{for } x < -2 \\ |x| & \text{for } -2 \leq x < 2 \\ \sin(x-2) + 2 & \text{for } x \geq 2 \end{array} \right.

To test if the function is continuous at a boundary point x = a, use the three conditions of continuity:
1. \lim\limits_{x\to a} f(x) exists. (This means that the limit from the left equals the limit from the right.)
2. f(a) exists.
3. f(a) = \lim\limits_{x\to a} f(x).

To test if the function is differentiable at the boundary point, use the same three conditions on the derivative:
1. \lim\limits_{x\to a} f'(x) exists. (This means that the limit from the left equals the limit from the right.)
2. f'(a) exists. (Note: differentiability implies continuity.)
3. f'(a) = \lim\limits_{x\to a} f'(x).

Notice that part (b) has two boundary points, so you will have to run these tests twice.
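A quick way to run the boundary-point tests above is to compare the one-sided limits of f and of f' with SymPy. This is only a verification sketch; part (a) has its boundary at x = 0, and part (b) has boundaries at x = -2 and x = 2.

```python
import sympy as sp

x = sp.symbols('x', real=True)

def check(left_expr, right_expr, a):
    """Compare one-sided limits of the two pieces and of their derivatives at x = a."""
    fl, fr = sp.limit(left_expr, x, a, '-'), sp.limit(right_expr, x, a, '+')
    dl = sp.limit(sp.diff(left_expr, x), x, a, '-')
    dr = sp.limit(sp.diff(right_expr, x), x, a, '+')
    print(f"x = {a}:  f -> {fl} | {fr}    f' -> {dl} | {dr}")

# part (a): boundary at x = 0
check(2*x**3, 3*x**2, 0)                 # 0|0 and 0|0: continuous and differentiable

# part (b): boundaries at x = -2 and x = 2
check((x + 1)**2 + 1, sp.Abs(x), -2)     # 2|2 but -2|-1: continuous, not differentiable
check(sp.Abs(x), sp.sin(x - 2) + 2, 2)   # 2|2 and 1|1: continuous and differentiable
```

Note that |x| is also not differentiable at x = 0, an interior point of the middle piece, so the function in part (b) is continuous everywhere but not differentiable over all reals.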
Fortran 2008 is a minor revision of Fortran 2003. The final draft of the Fortran 2008 standard, ISO/IEC JTC 1/SC 22/WG 5/N1830, was released on June 7, 2010 and is available from WG5's FTP server at ftp://ftp.nag.co.uk/sc22wg5/N1801-N1850/N1830.pdf. For compiler support, see Fortran 2008 status.

New features:
- do concurrent construct
- contiguous attribute
- error stop statement
- Internal procedures can be passed as actual arguments
- Procedure pointers can point to an internal procedure
- Maximum rank increased to 15
- newunit= in open statement
- Unlimited format item

Changes to existing intrinsic procedures:
- acos, asin, atan, cosh, sinh, tan, and tanh now accept complex arguments.
- atan2 can now be referenced as atan.
- lge, lgt, lle, and llt now accept arguments of ASCII kind.
- maxloc and minloc now have an optional back= argument.
- selected_real_kind now has a radix= argument.

New intrinsic procedures:
- Inverse hyperbolic trigonometric functions: acosh, asinh, and atanh.
- Bessel functions: bessel_j0, bessel_j1, bessel_jn, bessel_y0, bessel_y1, and bessel_yn.
- Error function: erf, erfc, and erfc_scaled.
- Gamma function: gamma and log_gamma.
- Euclidean distance: hypot.
- L_2 norm: norm2.
- Bit sequence comparisons: bge, bgt, ble, and blt.
- Combined shifting: dshiftl and dshiftr.
- Counting bits: leadz, trailz, popcnt and poppar.
- Masking bits: maskl and maskr.
- Shifting bits: shifta, shiftl and shiftr.
- Merging bits: merge_bits.
- Bit transformational functions: iall, iany, and iparity.
- Coarray intrinsics: convert a cosubscript to an image index (image_index); cobounds of a coarray (lcobound and ucobound); number of images (num_images); image index or cosubscripts (this_image).
- Test for the contiguous attribute: is_contiguous.
- Size of an element in bits: storage_size.
- Test for the number of true values being odd: parity.
- Search for a value in an array: findloc.
- Shell commands: execute_command_line.
- Define and reference variables atomically: atomic_define and atomic_ref.

Additions to intrinsic modules:
- iso_fortran_env: information about the compiler (compiler_version and compiler_options); named constants for selecting kind values.
- ieee_arithmetic: ieee_selected_real_kind now has a radix= argument.
- iso_c_binding: c_sizeof returns the size of an array element in bytes.

John Reid announced on 10 September 2010 that the Final Draft International Standard had been approved by ISO by 18 votes to nil with 15 abstentions. The standard is likely to be published by ISO within two months, i.e. by the end of November 2010.

J3 Documents:
- Latest draft accepted by the ISO Secretariat: ISO/IEC JTC 1/SC 22/WG 5/N1830 (June 7, 2010)
- Previous drafts of the standard: ISO/IEC JTC 1/SC 22/WG 5/N1826 (April 20, 2010), ISO/IEC JTC 1/SC 22/WG 5/N1791 (August 28, 2009), ISO/IEC JTC 1/SC 22/WG 5/N1776 (March 25, 2009)

References:
- Reid, John (2008). The new features of Fortran 2008. ACM Fortran Forum 27(2), 8-21. (See also N1729)
- Reid, J. (2010). Coarrays in the next Fortran Standard. ISO/IEC JTC1/SC22/WG5 N1824.
Konrad Voelkel » What is … a reductive group? Short warning: I'm not going to discuss reductive Lie groups or reductive p-adic, adelic, or whatever Lie groups. These occur, for example, by taking the points in a topological ring and obtaining a new topology, and they have a rich structure theory of their own. To complete the warning, I mention that the concept of reductive groups is not the same for Lie groups as it is for algebraic groups, since the real points of a unipotent algebraic group (which is like the opposite of reductive) can be a reductive Lie group. The correct bridge to understand this is the Lie algebra. Short explanation: connected linear algebraic groups are those where you have a useful theory of Borel subgroups and parabolic subgroups; reductive groups are a special case where you have a useful theory of root systems and Bruhat decomposition. Much of the theory of reductive groups becomes easier in the special cases of split groups (this can be achieved by base change to an algebraically closed field), of (étale) simply connected groups (this can be achieved by covering with a unique simply connected cover), or of semi-simple groups (this can be achieved by quotienting out the so-called radical). Fundamental examples: Elliptic curves, or more generally Abelian varieties, are by definition projective algebraic group varieties, so they are not affine algebraic groups. \mathbb{G}_a, the additive group, with \mathbb{G}_a(R) = (R,+), is an affine algebraic group, but it is not reductive. \mathbb{G}_a^n is called a vector group (not reductive). Any commutative affine algebraic group \mathbb{G} decomposes as \mathbb{G} = \mathbb{G}_s \cdot \mathbb{G}_u, a product of the closed subgroup of semisimple elements with the closed subgroup of unipotent elements. GL_n is reductive (and split). Special case: GL_1 = \mathbb{G}_m, the multiplicative group, with \mathbb{G}_m(R) = (R^\times,\cdot). \mathbb{G}_m^n is called a torus (it's reductive). SL_n is semi-simple (hence reductive) and simply connected. PSL_n is semi-simple but not simply connected; SL_n is its simply connected cover. SO(Q), for Q a quadratic form over a field k, is a reductive group defined over k, but not necessarily split over k; if L is a splitting field of Q, the base change of SO(Q) to L is a split reductive group. Linear algebraic groups which are connected, semi-simple, split and simply-connected are called "Chevalley groups of universal type" and they can be constructed explicitly from their root system (and are classified by their root data). They are all defined over the integers. The groups SL_n and Sp_{2n} are particular examples. I'm sure that there are more important examples that one should mention here, so I welcome any suggestions in the comment box below.
Reductive Groups Over The Complex Numbers
First of all, a reductive group \mathbb{G} is a linear algebraic group (with extra properties), which by definition means it is an affine group scheme, and one can show that it admits a closed embedding \mathbb{G} \to GL_n for some n. The most important objects for studying a linear algebraic group are the spaces it acts on, especially the linear ones, namely representations, and the subobjects, i.e. subgroups, and quotients, which correspond to normal subgroups. A linear algebraic group over the complex numbers means that the affine group scheme is defined over the complex numbers, i.e. by polynomial equations with complex coefficients.
In this case, one can define: a linear algebraic group over the complex numbers is reductive if its representation category (the category of all finite dimensional complex representations) is completely reducible, i.e. every representation decomposes as a direct sum of simple objects. Warning: For representations, the term "completely reducible" is a synonym of "semisimple", whereas for linear algebraic groups, "reductive" and "semisimple" have different meanings. To get this straight, we will later on discuss semisimple linear algebraic groups, which are all reductive, but not the other way around. On one hand, this definition is short and neat, but on the other hand it is not attached to subobjects or quotients of the group, but to its representation category, which might seem a little removed from the group. While it is true that one can recover a linear algebraic group from its representation category by the Tannakian formalism, one could hope for something more direct. In fact, for fields of characteristic 0, one can express the reductivity property via the Lie algebra (no wonder, since the representation theory is also governed by the representation theory of the Lie algebra): a finite dimensional Lie algebra over a field of characteristic 0 is reductive if its adjoint representation is completely reducible (i.e. is a direct sum of simple representations). One defines a reductive Lie group by requiring its Lie algebra to be reductive. One can prove that a Lie algebra over a field of characteristic 0 is reductive iff it is the Lie algebra of a reductive algebraic group over this field. Warning: it does not follow that the representation category of the reductive Lie algebra itself is completely reducible.
Reductive Algebraic Groups, in Terms of Radicals
Now suppose we have an algebraically closed field k, and all the groups we consider in this paragraph are defined over this field k. We define the radical of a linear algebraic group G as a maximal subgroup among all subgroups that are closed, connected, normal and solvable, denoted R(G). I want to remind you of the solvability condition in the discrete case: a discrete group G is called solvable if the descending derived normal series stabilizes to the trivial group in finitely many steps, so there is some n such that G^{(n)} is trivial. Here we use the notation G^{(n)} = [G^{(n-1)},G^{(n-1)}] for the iterated commutator subgroup. Abelian groups have G^{(1)} = [G,G] trivial, so solvable generalizes abelian. More importantly, nilpotent groups are solvable. Nilpotent (discrete) groups are those which have a lower central series of finite length. We define the unipotent radical of a linear algebraic group G as a maximal subgroup among all subgroups that are closed, connected, normal and unipotent, denoted R_u(G). A group is unipotent if all its elements are unipotent. One characterization is that under any embedding G \to GL_n, a unipotent element of G is mapped to a unipotent element of GL_n; an element u \in GL_n(R) is called unipotent if 1-u is a nilpotent endomorphism of R^n. Now you might think that's a bad definition, and yes, there are other definitions, but they are not shorter. Any unipotent group is solvable, so R_u(G) \subset R(G), and for quotients we also get an epimorphism G / R_u(G) \to G / R(G). The definition of a reductive group in terms of radicals is now: an affine algebraic group G is called reductive if it is connected and R_u(G) is trivial. It is called semi-simple if furthermore R(G) is trivial. For any reductive group G there is the semi-simple quotient G \to G/R(G).
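To make the definition of a unipotent element concrete: for an upper unitriangular matrix u in GL_3, the matrix 1 - u is strictly upper triangular, hence nilpotent, so (1 - u)^3 = 0. A small numerical check (the particular entries are arbitrary, chosen only for illustration):

```python
import numpy as np

u = np.array([[1., 2., 3.],
              [0., 1., 4.],
              [0., 0., 1.]])       # upper unitriangular, hence a unipotent element of GL_3

N = np.eye(3) - u                   # 1 - u is strictly upper triangular
print(np.linalg.matrix_power(N, 3)) # the zero matrix: (1 - u)^3 = 0, so 1 - u is nilpotent
```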
A nice characterization of the radical is that it is precisely the identity component of the intersection of all Borel subgroups. As an example, look at GL_n, which has no nontrivial closed connected normal unipotent subgroup, hence trivial unipotent radical. Its radical is the subgroup of scalar matrices, a copy of \mathbb{G}_m: this is closed, connected, normal and solvable, and nothing larger works, since a larger closed connected normal solvable subgroup would have a nontrivial connected normal solvable image in the simple group PGL_n. The quotient is precisely GL_n / R(GL_n) = PGL_n, so GL_n is reductive but not semi-simple.
Reductive Groups Over Perfect Fields
Now we take a field k (any field) and let our groups be defined over that field k. You can take the definition in terms of radicals and unipotent radicals to define what a reductive or a semisimple linear algebraic group over a field k is, and just use it with the possibly non-closed field k. One calls these notions k-reductive and k-semisimple. One obvious question is whether the base change to an algebraic closure \overline{k} is then \overline{k}-reductive or \overline{k}-semisimple again. In general, this is false. One calls a group defined over some field k reductive (resp. semisimple) if the base change to a separable closure k^s is k^s-reductive (resp. k^s-semisimple). A perfect field is a field with the property that all finite field extensions are separable. Examples: finite fields and algebraically closed fields. Counterexample: \mathbb{F}_p(t). One advantage of perfect fields is that you can replace any separable closure with an algebraic closure (it's just the same). Much of Galois theory, and therefore étale topology, becomes complicated at the first encounter because of this issue. For affine algebraic groups defined over perfect fields k, the notion k-reductive coincides with the notion from the previous paragraph. There are affine algebraic groups defined over non-perfect fields k which are k-reductive but not reductive, so you have to be super-careful. Look at the WP article on pseudo-reductive groups for more info.
Reductive Groups Over Schemes
If you look at affine algebraic groups defined over a local ring, you can take the base change to the residue field and thus talk about reductive and semisimple groups over local rings. In SGA3, Exposé XIX, 2.7, a reductive group over a scheme S is defined as a smooth affine group scheme over S such that all the geometric fibers are connected reductive algebraic groups. That specializes to S being the spectrum of a ring and becomes less scheme-theoretic for a local ring (if you want to work out some examples ...).
T. Springer: Linear Algebraic Groups (Vol 9 in Progress in Mathematics, or republished by Birkhäuser; make sure to read the 2nd edition instead of the 1st)
J. Milne: Course Notes on Reductive Groups
Encyclopedia of Mathematics: Reductive Group (really worth reading! short!)
Encyclopedia of Mathematics: Reductive Lie Algebra
Wikipedia: Reductive Group
Tags » Algebraic Geometry, Geometry, Reductive Groups, Representation Theory «
2013-04-09 (9. April 2013) I really like the post. And I would like to ask one quick question if you don't mind. What are the consequences of this notion in terms of moduli spaces of representations of the group G? To be more specific, I would like to know what the motivation was for considering this notion, and for what purpose.
2013-04-15 (15. April 2013) I don't know in which category one has a (fine/coarse?) moduli space of (rational?)
representations of a linear algebraic group, sorry. I think a big motivation for the notion of reductive groups comes from the fact that their finite (hence their rational) representations are semisimple. For an affine G-variety X this implies that G(k) acts semisimply on k[X]. On the other hand, you can try to build up some theorem about linear groups by proving it for tori and for unipotent groups and then stick together somehow. Now I came across this blog post by Youcis which elaborates more on reductive groups, with some good motivational material. I can only recommend this.
Tryptophan alpha,beta-oxidase
In enzymology, a tryptophan alpha,beta-oxidase (EC 1.3.3.10) is an enzyme that catalyzes the chemical reaction
L-tryptophan + O2 ⇌ alpha,beta-didehydrotryptophan + H2O2
Thus, the two substrates of this enzyme are L-tryptophan and O2, whereas its two products are alpha,beta-didehydrotryptophan and H2O2. This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-CH group of donor with oxygen as acceptor. The systematic name of this enzyme class is L-tryptophan:oxygen alpha,beta-oxidoreductase. Other names in common use include L-tryptophan 2',3'-oxidase, and L-tryptophan alpha,beta-dehydrogenase. It employs one cofactor, heme.
Contents:
1. Polarization vs time in fill
2. Profile ratio R vs time in fill
3. Relative beam polarization by measurement number in a fill
4. Profile ratio R by measurement number in a fill

The polarization decay reported in the stat box is given in absolute polarization (%) lost (or gained) per second. To get the number in a human-readable format (i.e. absolute polarization percent per hour), one needs to multiply the number in the stat box (the "slope" parameter) by 3600. The zero of the time axis in the following plots corresponds to the first measurement in a store.
The average slope for the blue ring is 0.3 ± 0.05 %/hour.
The average slope for the yellow ring is 0.8 ± 0.06 %/hour.
slope = -0.39 ± 0.07 %/hour. Note the not-so-good \chi^2 per degree of freedom.

Profile ratio R vs time in fill (plot: Profile r vs Time in Fill).

Relative beam polarization by measurement number in a fill: one should be very careful interpreting these results, as there is no guarantee that the sequential number of the measurement is correct (plot: relative Polarization vs Measurement Id in Fill).

Profile ratio R by measurement number in a fill (plot: Profile Ratio R vs Measurement Id in Fill).
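For example, a stat-box slope of about -1.1e-4 %/s (a made-up value of roughly the right size, not a measured one) converts to the -0.39 %/hour figure quoted above:

```python
# Hypothetical stat-box value, in absolute polarization % per second
slope_statbox = -1.08e-4
print(f"{slope_statbox * 3600:+.2f} %/hour")   # -> -0.39 %/hour
```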
D-alanine—D-alanine ligase
Enzyme belonging to the ligase family. Protein domains: D-ala D-ala ligase N-terminus (Dala_Dala_lig_N) and D-ala D-ala ligase C-terminus (Dala_Dala_lig_C); example structure 2DLN (SCOPe / SUPFAM).
In enzymology, a D-alanine—D-alanine ligase (EC 6.3.2.4) is an enzyme that catalyzes the chemical reaction
ATP + 2 D-alanine ⇌ ADP + phosphate + D-alanyl-D-alanine
Thus, the two substrates of this enzyme are ATP and D-alanine, whereas its 3 products are ADP, phosphate, and D-alanyl-D-alanine. This enzyme belongs to the family of ligases, specifically those forming carbon-nitrogen bonds as acid-D-amino-acid ligases (peptide synthases). The systematic name of this enzyme class is D-alanine:D-alanine ligase (ADP-forming). Other names in common use include alanine:alanine ligase (ADP-forming), and alanylalanine synthetase. This enzyme participates in D-alanine metabolism and peptidoglycan biosynthesis. Phosphinate and D-cycloserine are known to inhibit this enzyme. The N-terminal region of the D-alanine—D-alanine ligase is thought to be involved in substrate binding, while the C-terminus is thought to be a catalytic domain.[1] As of late 2007, 8 structures have been solved for this class of enzymes, with PDB accession codes 1EHI, 1IOV, 1IOW, 2DLN, 2FB9, 2I80, 2I87, and 2I8C.
^ Roper DI, Huyton T, Vagin A, Dodson G (August 2000). "The molecular basis of vancomycin resistance in clinically relevant Enterococci: crystal structure of D-alanyl-D-lactate ligase (VanA)". Proc. Natl. Acad. Sci. U.S.A. 97 (16): 8921-5. Bibcode:2000PNAS...97.8921R. doi:10.1073/pnas.150116497. PMC 16797. PMID 10908650.
Ito, E; Strominger JL (1962). "Enzymatic synthesis of the peptide in bacterial uridine nucleotides II. Enzymatic synthesis and addition of D-alanyl-D-alanine". J. Biol. Chem. 237: 2696-2703. doi:10.1016/S0021-9258(19)73809-5.
Neuhaus FC (1962). "Kinetic studies on D-Ala-D-Ala synthetase". Fed. Proc. 21: 229.
van Heijenoort J (2001). "Recent advances in the formation of the bacterial peptidoglycan monomer unit". Nat. Prod. Rep. 18 (5): 503-19. doi:10.1039/a804532a. PMID 11699883.
A Comparative Analysis of Farmgate and Regulated Prices of Palay in Nueva Ecija, Philippines: A Policy Revisited
1Faculty, College of Management and Business Technology, Nueva Ecija University of Science and Technology, Cabanatuan, Philippines. 2Department of Public Administration, Nueva Ecija University of Science and Technology, Cabanatuan, Philippines.
The study analyzed the interplay between the palay farm gate price and the government price subsidy for palay in Nueva Ecija, Philippines. The paper argued that there is a gap in the implementation of the latter (the government subsidy). Hence, this paper determined the monthly pattern of palay prices, and measured the difference between the average farm gate price and the government support price (subsidy) for palay from 1994 to 2005 and its influence on actual government procurement of palay vis-à-vis the supply and demand of palay grains in the market. The study utilized secondary data from the National Food Authority (NFA) and the Bureau of Agricultural Statistics (BAS). These data were subjected to a t-test to assess whether the means of the two groups were statistically different from each other. In general, farm gate prices (FG) in the province were higher than the government support price (subsidy), except in the years 1999, 2000, 2001, 2002 and 2003. For the years 1994 to 1998 and 2004 to 2005, the t-value is greater than the critical values of the t-distribution, leading to the rejection of the null hypothesis that the government support price is higher than the farm gate price. The NFA's consolidated actual procurement in the province from 1994 until 2004 constituted a measly 0.95 percent of the total palay production of 9,828,224 metric tons. The correlation coefficient between the actual price subsidy and actual procurement was significant but very low, with a value of 0.16. Actual procurement and the farm gate price showed a significant but negative correlation of -0.18, meaning that as the farm gate price increases, the actual procurement of the NFA decreases. The subsidy and farm gate prices are positively correlated with a coefficient of 0.54, significant at the 1 percent level. The study concluded that the objectives of the subsidy are not met and that necessary governmental adjustments must be made to realize them.
\text{\% procured} = \frac{\text{TP}_c}{\text{TP}_d} \times 100
t_c = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}}
r = \frac{n\sum xy - \left(\sum x\right)\left(\sum y\right)}{\sqrt{\left[n\sum x^2 - \left(\sum x\right)^2\right]\left[n\sum y^2 - \left(\sum y\right)^2\right]}}
Santos, M.D., Clemente, M.O. and Gabriel, A.G. (2018) A Comparative Analysis of Farmgate and Regulated Prices of Palay in Nueva Ecija, Philippines: A Policy Revisited. Open Journal of Social Sciences, 6, 50-68. https://doi.org/10.4236/jss.2018.63005
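As a sketch of the two statistics used in the methodology above (a two-sample t-test between farm gate and support prices, and Pearson correlations), here is how they could be computed. The arrays are illustrative placeholders, not the actual NFA/BAS series.

```python
import numpy as np
from scipy import stats

# Illustrative monthly prices (PHP/kg), NOT the actual NFA/BAS data
farm_gate = np.array([8.9, 9.1, 9.4, 9.0, 9.6, 9.8, 9.5, 9.2])
support   = np.array([8.0, 8.0, 8.0, 8.5, 8.5, 8.5, 9.0, 9.0])

t, p = stats.ttest_ind(farm_gate, support, equal_var=False)   # Welch two-sample t-test
print(f"t = {t:.2f}, p = {p:.4f}")

# Pearson correlation, e.g. between the support price and the volume actually procured
procured = np.array([120, 95, 80, 150, 140, 130, 60, 70])     # illustrative only
r, p_r = stats.pearsonr(support, procured)
print(f"r = {r:.2f}, p = {p_r:.4f}")
```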
Question: A positive number x is multiplied by 2, and this product is then divided by 3. If the positive square root of the result of these two operations equals x, what is the value of x?
\sqrt{\frac{2}{3}x} = x
∴ \frac{2}{3}x = x^2 [squaring both sides]
x = \frac{2}{3} [dividing both sides by x]
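The algebra can be verified in one line with SymPy, restricting to positive x as the problem requires:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.solve(sp.Eq(sp.sqrt(2*x/3), x), x))   # [2/3]
```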
Detailed List of Trigonometry Formulas PDF Download - GrabNaukri.com
Trigonometry is a branch of mathematics that deals with the angles, lengths, and heights of triangles and the relations between different parts of circles and other geometrical figures. Trigonometric ratios and identities are very useful, and learning the formulae below helps in solving problems more easily. Trigonometry formulas are essential for solving questions on trigonometric ratios and identities in competitive exams. This article is designed to help students preparing for trigonometry in class 10, classes 11 and 12, engineering courses, and exams such as SSC CHSL, SSC CGL and SSC 10+2.
Trigonometry All Formulas List PDF Download
When we learn about trigonometric formulas, we consider them for right-angled triangles only. In a right-angled triangle, we have 3 sides, namely the hypotenuse, the opposite side (perpendicular), and the adjacent side (base). The longest side is known as the hypotenuse, the side opposite to the angle is the perpendicular, and the side on which both the hypotenuse and the opposite side rest is the adjacent side.
Download: Definition of the Trig Functions
Trigonometric Formulas - Right Angle
There are basically 6 ratios used for finding the elements in trigonometry, called trigonometric functions. The six trigonometric functions are sine, cosine, secant, cosecant, tangent and cotangent.
Download SSC CGL Mathematics Handwritten Notes Free PDF download

This post also covers inverse trigonometry formulas, NCERT trigonometry formulas, trigonometry formulas for class 11, the trigonometry table, and related topics that are frequently searched, so it should be of value to our visitors.

Trigonometry all Formulas PDF download

All of these are taken from a right-angled triangle. With the height (perpendicular) and base of the right triangle given, we can find the sine, cosine, tangent, secant, cosecant, and cotangent values using the trigonometric formulas. The reciprocal trigonometric identities are also derived using the trigonometric functions.

Download Trigonometry Cheat Sheet PDF download

Trigonometry all Formulas PDF download Part 1

Half-angle, product-to-sum, and sum-to-product identities (a quick numerical check of two of these appears at the end of this section):

\mathrm{sin}\frac{x}{2}=\pm \sqrt{\frac{1-\mathrm{cos}x}{2}}
\mathrm{cos}\frac{x}{2}=\pm \sqrt{\frac{1+\mathrm{cos}x}{2}}
\mathrm{sin}x\cdot \mathrm{cos}y=\frac{\mathrm{sin}\left(x+y\right)+\mathrm{sin}\left(x-y\right)}{2}
\mathrm{cos}x\cdot \mathrm{cos}y=\frac{\mathrm{cos}\left(x+y\right)+\mathrm{cos}\left(x-y\right)}{2}
\mathrm{sin}x\cdot \mathrm{sin}y=\frac{\mathrm{cos}\left(x-y\right)-\mathrm{cos}\left(x+y\right)}{2}
\mathrm{sin}x+\mathrm{sin}y=2\mathrm{sin}\frac{x+y}{2}\mathrm{cos}\frac{x-y}{2}
\mathrm{sin}x-\mathrm{sin}y=2\mathrm{cos}\frac{x+y}{2}\mathrm{sin}\frac{x-y}{2}
\mathrm{cos}x+\mathrm{cos}y=2\mathrm{cos}\frac{x+y}{2}\mathrm{cos}\frac{x-y}{2}

1. Always try to reduce multiple angles to single angles using the basic formulas. Make sure all your angles are the same: mixing multiple-angle and single-angle terms is difficult to work with, so expand the multiple-angle terms first so that every function has the same argument. The same goes for sums and differences of angles: expand them with the addition and subtraction formulas before simplifying.
2. Convert every item in the problem to sin and cos using the basic formulas. Sin and cos are suggested because they are easy to work with, but you can use any other pair as well.
3. Use the Pythagorean identities to simplify the equations.
5. Practice, and practice again. You will soon start recognizing the structure and symmetry of these equations and resolving them quickly.

Download Trigonometry all Handwritten Notes PDF download

Some Important Books for Competitive Exams:
Latest Lucent GK (General Knowledge) PDF download
Black Book of General Awareness PDF free download
Orient Blackswan School Atlas E-Book for UPSC
Modern Indian History Class Notes E-Book
1000 One-liner Lucent GK questions in Hindi

Detailed Trigonometry all formula PDF download
NCERT Trigonometry book PDF download

For other job notifications, please visit our Latest Jobs section.

Must Read Disclaimer: GrabNaukri.com is for educational purposes and government job links only. We are not the owner of any book, notes, magazine, PDF material, or eBook available on it; the material has been neither created nor scanned by us. We only provide links to material that is already available on the Internet. If anything violates the law or there is a problem, please just CONTACT US.
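As referenced above, here is a small numerical sanity check of two of the listed identities. It is a sketch in Python; the angle values 0.7 and 1.9 radians are arbitrary test inputs.

# Numerically verify one product-to-sum and one sum-to-product identity.
import math

x, y = 0.7, 1.9  # arbitrary angles in radians

# sin x * cos y = [sin(x + y) + sin(x - y)] / 2
lhs = math.sin(x) * math.cos(y)
rhs = (math.sin(x + y) + math.sin(x - y)) / 2
assert math.isclose(lhs, rhs), (lhs, rhs)

# sin x + sin y = 2 sin((x + y)/2) cos((x - y)/2)
lhs = math.sin(x) + math.sin(y)
rhs = 2 * math.sin((x + y) / 2) * math.cos((x - y) / 2)
assert math.isclose(lhs, rhs), (lhs, rhs)

print("identities verified for x = 0.7, y = 1.9")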
Real-World Examples of the Expanded Accounting Equation

The expanded accounting equation is derived from the common accounting equation and illustrates in greater detail the different components of stockholders' equity in a company. By decomposing equity into component parts, analysts can get a better idea of how profits are being used: as dividends, reinvested into the company, or retained as cash.

The expanded accounting equation is the same as the common accounting equation but decomposes equity into component parts. The components of equity include contributed capital, retained earnings, and revenue minus expenses and dividends. It also accounts for total assets and total liabilities. Some terminology may vary among different companies, depending on how they organize their balance sheets.

The expanded version of the accounting equation details the equity role in the basic accounting equation. The common form of the accounting equation is:

\begin{aligned} &\text{Assets} = \text{Liabilities} + \text{Owner's Equity}\\ &\textbf{where:}\\ &\text{Liabilities} = \text{All current and long-term debts and obligations}\\ &\text{Owner's Equity} = \text{Assets available to shareholders after all liabilities}\\ \end{aligned}

The expanded accounting equation decomposes equity into component parts:

\begin{aligned}&\text{Assets} = \text{Liabilities} + \text{CC} + \text{BRE} + \text{R} - \text{E} - \text{D} \\&\textbf{where:}\\&\text{CC} = \text{Contributed Capital, capital provided by the original stockholders (also known as Paid-In Capital)} \\&\text{BRE} = \text{Beginning Retained Earnings, earnings not distributed to stockholders from the previous period} \\&\text{R} = \text{Revenue, what's generated from the ongoing operation of the company} \\&\text{E} = \text{Expenses, costs incurred to run operations of the business} \\&\text{D} = \text{Dividends, earnings distributed to the stockholders of the company}\end{aligned}

Sometimes, analysts want to better understand the composition of a company's shareholders' equity. Besides assets and liabilities, which are part of the general accounting equation, stockholders' equity is expanded into the following elements:

Contributed capital: This is the capital provided by the original stockholders (also known as paid-in capital).
Beginning retained earnings: Retained earnings are the earnings not distributed to the stockholders from the previous period.
Revenue: This is what's generated from the ongoing operation of the company.
Expenses: These are costs incurred to run operations of the business.
Dividends: These are subtracted since they are the earnings distributed to the stockholders of the company.

Contributed capital and dividends show the effect of transactions with the stockholders.
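To make the roles of these terms concrete, here is a minimal sketch of the expanded equation in Python. The function name and all figures are illustrative placeholders, not taken from any real filing.

# Expanded accounting equation: Assets = Liabilities + CC + BRE + R - E - D.
def expanded_equation_assets(liabilities, contributed_capital,
                             beginning_retained_earnings, revenue,
                             expenses, dividends):
    return (liabilities + contributed_capital + beginning_retained_earnings
            + revenue - expenses - dividends)

# Toy example with placeholder figures:
assets = expanded_equation_assets(liabilities=500_000,
                                  contributed_capital=200_000,
                                  beginning_retained_earnings=120_000,
                                  revenue=300_000,
                                  expenses=240_000,
                                  dividends=30_000)
print(assets)  # 850000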
The difference between the revenue and profit generated and the expenses and losses incurred reflects the effect of net income (NI) on stockholders' equity. Overall, then, the expanded accounting equation is useful in identifying, at a basic level, how stockholders' equity in a firm changes from period to period.

Some terminology may vary depending on the type of entity structure. "Members' capital" and "owners' capital" are commonly used for partnerships and sole proprietorships, respectively, while "distributions" and "withdrawals" are substitute nomenclature for "dividends." Revenues and expenses are often reported on the balance sheet as "net income."

Let's look at an actual historical example. Below is a portion of Exxon Mobil Corporation's (XOM) balance sheet as of September 30, 2018 (figures in millions). Total liabilities were $157,797. Total equity was $196,831. The accounting equation, whereby Assets = Liabilities + Shareholders' equity, is calculated as follows: $157,797 (total liabilities) + $196,831 (equity) equals $354,628, which equals the total assets for the period. We could also use the expanded accounting equation to see the effect of reinvested earnings ($419,155), other comprehensive income ($18,370), and treasury stock ($225,674). We could also look to XOM's income statement to identify the amount of revenues and dividends the company earned and paid out.

XOM Balance Sheet (figure).

For another example, consider the balance sheet for Apple, Inc., as published in the company's quarterly report on July 28, 2021. For the quarter that ended on June 26, 2021, the company reported the following balances (in USD millions):

Total liabilities: $265,560
Total shareholders' equity: $64,280

The components of shareholders' equity are further divided on the consolidated financial statement (in millions):

Common stock and additional paid-in capital: $54,989
Beginning retained earnings: $15,261
Dividends and dividend equivalents: $3,713
Share repurchases: $22,500 (treated as a dividend in the expanded equation, since these funds are effectively used to benefit shareholders)
Common stock withheld related to net share settlement of equity awards: $1,559
Accumulated other comprehensive income: $58

Substituting for the appropriate terms of the expanded accounting equation, these figures add up to the total declared assets for Apple, Inc., which are worth $329,840 million U.S. dollars. A quick arithmetic check of the two top-level totals appears after the source citations below.

The expanded accounting equation is a form of the basic accounting equation that includes the distinct components of owner's equity, such as dividends, shareholder capital, revenue, and expenses. The expanded equation is used to compare a company's assets with greater granularity than provided by the basic equation.

The basic accounting equation is used to calculate how much a company is worth, based on the amount of money that has already been invested and the cost of any obligations. The formula for the basic accounting equation is, as above, Assets = Liabilities + Owner's Equity. The basic accounting equation is used to provide a simple calculation of a company's value, based on a comparison of equity and liabilities. For a more specific breakdown of the components of equity, use the expanded equation instead.

Exxon Mobil. "SEC Form 10-Q for the Quarterly Period Ended September 30, 2018." Accessed Jan. 5, 2022.
Apple, Inc. "SEC Form 10-Q for the Quarterly Period Ended June 26, 2021." Pages 6-7. Accessed Jan. 5, 2022.
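As noted above, the top-level arithmetic quoted for both filings can be confirmed directly; figures are in millions of USD, exactly as given in the text.

# Check that Liabilities + Shareholders' equity reproduces the stated total assets.
exxon_assets = 157_797 + 196_831   # Exxon Mobil, Sept. 30, 2018
apple_assets = 265_560 + 64_280    # Apple, quarter ended June 26, 2021
print(exxon_assets)                # 354628
print(apple_assets)                # 329840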
Ease of Use Enhancements - Maple Help

New updates and enhancements in ease of use:

Maple has always been a pioneer in math software usability, and continually strives to ensure that new and occasional users are immediately productive while experienced users have the tools and flexibility they need to work efficiently. Maple 17 features advancements for our intuitive Clickable Math tools, including major updates in one-step app creation, as well as changes that help to improve the user experience within our programmatic interface.

In Maple 17, typing commands has never been easier, with autocomplete in 2-D math. Search and replace has been enhanced to search for names inside 2-D math expressions. Maple 17 also makes entering subscripts more intuitive and unlocks various desirable variable names for use in your calculations.

When typing command and function names in 2-D Math, Maple now offers quick completions for items that are unambiguous. When such an item is available, it appears as a yellow tooltip-style popup; pressing the Tab or Return/Enter key inserts the suggested item.

Search and replace functionality now extends into 2-D Math expressions for simple names. For example, searching for 'x' will find the x in the first term of the 2-D expression x^2 - y + 10.

There are two types of subscripts in Maple: literal subscripts are part of the variable name itself and are not interpreted as an index of any kind; index subscripts are a direct index reference to an element stored in an array or vector. Maple 17 improves how subscripts are entered in both of these cases, as well as removes the need to enter a backslash in order to explicitly create underscore characters in variable names.

To create a literal subscript, type the base name followed by two underscores, followed by the subscript. For example, typing 'a', '_', '_', 'b' produces the name a__b, which is displayed as {a}_{b} (a with a literal subscript b).

To enter an underscore in a name, simply type the underscore key on your keyboard; the name a_b is then displayed with the underscore.

For more information, see: Improved Subscript Handling

With the new top-level local declaration, otherwise-reserved names such as I and D can be used as ordinary variables. For example:

local I ≔ \left[\begin{array}{rr}1& 0\\ 0& 1\end{array}\right];

2\cdot I returns \left[\begin{array}{rr}2& 0\\ 0& 2\end{array}\right]

local D := 5:
D + D returns 10

See also: Improved Subscript Handling in Maple 17, Local Names in Maple 17, Clickable Math, Customer Wishlist
Standard Error - Displayr

The standard deviation of the sampling distribution of a statistic.[1] The standard error is a measure of precision (estimates with higher standard errors have lower precision) and is used in the computation of confidence intervals and many tests of statistical significance.

There are a number of common methods of computing standard errors:

Formulas for computing the standard errors directly from data. For example, the best-known computation is the standard error of the mean, computed as s/\sqrt{n} , where s is the standard deviation and n is the sample size.

Formulas for computing the standard errors from regression outputs (sometimes referred to as analytic standard errors).

Algorithms for approximating the Hessian, which is then used as an input into formulas for computing standard errors (sometimes referred to as numeric standard errors). This is generally the method employed with the Mixed Multinomial Logit Model. It is commonly employed when analytic standard errors cannot be computed.

Resampling methods, including bootstrapping, jackknives, and permutations. These are commonly used when it is believed the assumptions for the above three methods are not met.

Using intermediate calculations from Bayesian estimation methods (i.e., the posterior distributions of the parameters).

Where the assumptions of each of the methods are met, they all compute the same standard error.[note 1] The only exception to this is Bayesian estimation methods, which often have slightly different goals and thus can lead to different results.

If the standard deviation of a variable in a Simple Random Sample is 2.763 and the sample size is 212, then the standard error is 2.763/\sqrt{212}=0.189763619 . See Confidence Interval for the use of this calculation.

[note 1] Differences can arise due to issues of numerical precision and model specification.

Retrieved from https://docs.displayr.com/index.php?title=Standard_Error&oldid=5828
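The worked example above can be reproduced in a couple of lines. This is a small sketch in Python; the function name is illustrative.

# Standard error of the mean: SE = s / sqrt(n).
import math

def standard_error_of_mean(s, n):
    return s / math.sqrt(n)

print(standard_error_of_mean(2.763, 212))   # ~0.18976, as in the example above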
Perspectives on: Ion selectivity | Journal of General Physiology | Rockefeller University Press

Correspondence to Olaf S. Andersen: [email protected]

Olaf S. Andersen; Perspectives on: Ion selectivity. J Gen Physiol 1 May 2011; 137 (5): 393–395. doi: https://doi.org/10.1085/jgp.201110651

The purpose of the Perspectives in General Physiology is to provide a forum where scientific uncertainties or controversies are discussed in an authoritative, yet open manner. The Perspectives are solicited by the editors, often based on recommendations by members of the editorial advisory board. To frame the issue, two or more experts are invited to present brief points of view on the problem; these are published consecutively in the Journal. One or more experts and the organizer review the contributions, but the comments and opinions expressed in the Perspectives are those of the authors and not necessarily those of the editors or the editorial advisory board. The Perspectives are accompanied by a few editorial paragraphs that introduce the problem and invite the submission of comments, in the form of letters to the editor, which are published in a single, predetermined issue (usually three months after publication of the Perspective). After the letters to the editor have been published, further responses are limited to full manuscripts.

In this issue of the Journal, Youxing Jiang (University of Texas Southwestern Medical Center) together with Amer Alam; Crina M. Nimigean (Weill Cornell Medical College) and Toby W. Allen (University of California at Davis); Benoît Roux (University of Chicago) together with Simon Bernèche, Bernhard Egwolf, Bogdan Lev, Sergei Y. Noskov, Christopher N. Rowley, and Haibo Yu; Dilip Asthagiri (Johns Hopkins University) together with Purushottam D. Dixit; and Susan B. Rempe (Sandia National Laboratory) together with Sameer Varma, David L. Bostick, David Rogers, Lawrence R. Pratt, and Charles L. Brooks III provide different perspectives on the ion selectivity of cation-selective channels and transporters.

Current thinking about the mechanisms underlying ion channel selectivity is rooted in concepts dating back to the BC (before crystals) era. Mullins (1959) proposed that the selectivity of excitable membranes arose from the existence of membrane-spanning channels with diameters that were able to accommodate the preferred ion (e.g., Na+ or K+); he further noted that ions preferentially would move through pores that fit them rather well, in order for the pore lining to solvate the permeating ions. Eisenman (1961) developed the equilibrium theory of ion selectivity, in which the Na+/K+ selectivity of an ion-binding site is expressed in terms of the free energy difference \Delta \Delta G_{\text{K}^{+}\to \text{Na}^{+}} for the reaction

\text{Na}^{+}\left(\text{aqueous}\right)+\text{K}^{+}\left(\text{site}\right)\to \text{K}^{+}\left(\text{aqueous}\right)+\text{Na}^{+}\left(\text{site}\right),

\Delta \Delta G_{\text{K}^{+}\to \text{Na}^{+}}=\left(G_{\text{Na}^{+}}^{\text{site}}-G_{\text{Na}^{+}}^{\text{aqueous}}\right)-\left(G_{\text{K}^{+}}^{\text{site}}-G_{\text{K}^{+}}^{\text{aqueous}}\right)=\left(G_{\text{Na}^{+}}^{\text{site}}-G_{\text{K}^{+}}^{\text{site}}\right)-\left(G_{\text{Na}^{+}}^{\text{aqueous}}-G_{\text{K}^{+}}^{\text{aqueous}}\right).

Ion selectivity thus arises when the difference in the ions' interaction energy with the site, \Delta G_{\text{K}^{+}\to \text{Na}^{+}}^{\text{site}}=G_{\text{Na}^{+}}^{\text{site}}-G_{\text{K}^{+}}^{\text{site}} , differs from the difference in their hydration energies, \Delta G_{\text{K}^{+}\to \text{Na}^{+}}^{\text{aqueous}}=G_{\text{Na}^{+}}^{\text{aqueous}}-G_{\text{K}^{+}}^{\text{aqueous}} . Approximating the ion–site interactions as simple electrostatic interactions, Eisenman further introduced the concept of "electrostatic field strength" to characterize the strength of the interactions between the ions and the ligands that constitute the site.
In the simplest case, the ion–ligand interactions are described by

G_{\text{ion}}^{\text{site}}=\frac{q_{s}\cdot q_{i}}{4\pi \cdot \varepsilon_{0}\cdot \varepsilon_{r}\cdot \left(r_{s}+r_{i}\right)},

where q_{s} and q_{i} denote the charges on the ligand and ion, respectively, r_{s} and r_{i} are the radii of the site and ion, and \varepsilon_{0} and \varepsilon_{r} are the permittivity of free space and the relative dielectric constant. In sites with high field–strength ligands (meaning that the magnitude of q_{s}/r_{s} is large), the variation in G_{\text{ion}}^{\text{site}}-G_{\text{ion}}^{\text{aqueous}} is dominated by changes in G_{\text{ion}}^{\text{site}} , and the site would be expected to select for small ions; in sites with low field–strength ligands (meaning that the magnitude of q_{s}/r_{s} is small), the variation in G_{\text{ion}}^{\text{site}}-G_{\text{ion}}^{\text{aqueous}} is dominated by changes in G_{\text{ion}}^{\text{aqueous}} , and the site would be expected to select for large ions.

Whether or not well-defined sites actually existed was, however, not clear. It was generally recognized that the binding sites that appear in most kinetic models of channel-mediated ion movement (and ion selectivity) might just be figments of our imagination, as it seemed difficult to reconcile rapid ion movement with the existence of well-defined energy wells that would constitute such sites.

The ideas of Mullins and Eisenman laid the foundations for subsequent studies of ion selectivity in channels, with major contributions by C.M. Armstrong and B. Hille. In Bezanilla and Armstrong (1972), Armstrong noted that selective ion permeation through a pore arises from selective exclusion of the less-preferred ions, and that selectivity therefore should be considered to be, at least in part, a kinetic problem. In the simplest case, selectivity would be determined by the rate constants for ion entry into the channel. Following Mullins (1959), Bezanilla and Armstrong (1972) also suggested that the permeating ions would need to fit snugly in a rigid selectivity filter, which would allow the coordinating ligands to solvate the preferred ions and provide a parsimonious explanation for the exclusion of smaller ions, such as Na+ in potassium channels.

Hille, in a series of landmark studies, explored the selectivity filters in sodium and potassium channels in myelinated nerve. Using a combination of inorganic and organic ions, he deduced that the sodium channels are aqueous pores that select among monovalent cations based on simple steric "fit" (Hille, 1971), with the inorganic ions being partly hydrated, meaning that Na+ is coordinated by the pore-lining residues through intervening H2O (Hille, 1972). In potassium channels, the permeant ions are coordinated directly by pore-lining (low field–strength) residues (Hille, 1973). Hille (1973) further noted that the observed selectivity (for K+ over Na+) was compatible both with a rigid selectivity filter, operating in a strict size-selection mode, and with a flexible selectivity filter where the ion-coordinating ligands could be pulled in by smaller ions and pushed out by larger ions. The prevailing view over the next 25+ years, however, was that ion selectivity arose from size selection in a rather rigid selectivity filter.

Yet, there was ample evidence that proteins were dynamic entities. Perutz and Mathews (1966), based on x-ray crystallographic studies, concluded that there was no path for ligands to gain access to the heme group in hemoglobin unless one or more side chains would move to "open the gates," a proposal that was validated in subsequent molecular dynamics (MD) simulations (Case and Karplus, 1979). Diebler et al. (1969) pointed out that rapid dehydration/resolvation kinetics required some flexibility to allow for stepwise solvent substitution.
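To illustrate the field-strength argument numerically, the single ion–ligand term above can be evaluated for a hypothetical carbonyl-like ligand. This is only a toy sketch in Python; the partial charge, radii, and dielectric constant are illustrative assumptions, not values taken from the text.

# Toy evaluation of G_ion^site = q_s*q_i / (4*pi*eps0*eps_r*(r_s + r_i)), in kJ/mol.
# All parameter values below are illustrative assumptions.
import math

EPS0 = 8.854e-12        # vacuum permittivity, F/m
E = 1.602e-19           # elementary charge, C
NA = 6.022e23           # Avogadro's number

def ion_ligand_energy(q_s, q_i, r_s, r_i, eps_r=1.0):
    joules = (q_s * q_i) / (4 * math.pi * EPS0 * eps_r * (r_s + r_i))
    return joules * NA / 1e3    # kJ/mol for a single ion-ligand pair

q_ligand, r_ligand = -0.5 * E, 1.4e-10          # carbonyl-like oxygen (assumed values)
for ion, r_ion in [("K+", 1.33e-10), ("Na+", 0.95e-10)]:
    print(ion, round(ion_ligand_energy(q_ligand, E, r_ligand, r_ion), 1), "kJ/mol")
# The single-pair term alone is more favorable for the smaller Na+, which is why the
# ligand-ligand and ligand-environment contributions discussed in the text are needed
# to explain K+ selectivity.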
Cooper (1976) noted that numerous spectroscopic studies provided evidence for a rather fluid, dynamic protein structure—a picture that appeared difficult to reconcile with the static structure depicted in, for example, a Protein Data Bank (PDB) coordinate file. But these two views of protein structure and dynamics are perfectly compatible once the thermodynamic fluctuations that occur in single molecules are considered. Frauenfelder et al. (1979) demonstrated that the conformational substates that had been deduced from spectroscopic measurements could also be observed by x-ray diffraction. By the early/mid-1980s, it was firmly established that proteins are dynamic, fluctuating entities (e.g., Cooper, 1984). Studies on ion selectivity changed fundamentally with the publication of the structure of the KcsA potassium channel (Doyle et al., 1998). The structure provided immediate insight into many years of structure–function studies. It also revealed important, unexpected features, including that the pore lining in the selectivity filter is formed by the peptide backbone carbonyl oxygens in the "signature sequence" Val-Gly-Tyr-Gly-Asp, and that distinct K+-binding sites are formed by eight carbonyl oxygens. The "black box" era could be replaced by the modern era of molecular studies! The proposed coordination of the K+ in the selectivity filter, with K+ fitting snugly in a cage formed by carbonyl oxygens held in place by molecular springs, was consistent with both the rigid and the flexible organization of the selectivity filter envisaged by Hille 25 years earlier. But the question remained: is the selectivity filter rigid (as believed by many electrophysiologists) or flexible (reflecting the dynamic nature of proteins)? Early MD simulations (Guidoni et al., 1999) showed that the selectivity filter was flexible/fluctuating, with the RMSD of the fluctuations being less when K+ was coordinated in the pore. Yet, how could a flexible selectivity filter, formed by relatively high field–strength carbonyl oxygens, be selective for K+? A resolution of this seeming paradox was proposed by Noskov et al. (2004), who noted that the conventional field strength point of view, focusing on just the ion–ligand interactions, was incomplete. When ligands are packed as tightly as they are in the selectivity filter, one needs to consider not only the ion–ligand interactions but also the ligand–ligand interactions when evaluating \Delta \Delta G_{\text{K}^{+}\to \text{Na}^{+}} . The (attractive) ion–ligand interactions would tend to decrease \Delta \Delta G_{\text{K}^{+}\to \text{Na}^{+}} and favor the smaller Na+, and the (repulsive) ligand–ligand interactions would tend to increase \Delta \Delta G_{\text{K}^{+}\to \text{Na}^{+}} and disfavor the smaller Na+. Although this would provide an explanation for why the selectivity filter was K+ selective, it did not account for the variation of \Delta \Delta G_{\text{K}^{+}\to \text{Na}^{+}} among the different sites in the selectivity filter, as deduced from MD simulations, or for the variations in Na+/K+ selectivity among potassium channels with the same signature sequence (and, presumably, selectivity filter). Varma and Rempe (2007), who used quantum mechanical calculations to evaluate the ion–ligand interactions, therefore proposed that it would be necessary to also consider the environment outside the selectivity filter proper, in which case the organization of the selectivity filter would be determined by ion–ligand as well as ligand–environment interactions.
A key tool in most recent studies on ion selectivity has been the so-called toy models, introduced by Noskov et al. (2004), which emphasize the fluid-like features of the selectivity filter and allow for the isolation of key features that one would like to examine. But, although proteins may be fluid-like at short length scales, they are not fluids—indeed, they show considerable rigidity (defined structure) at longer length scales, as evident in, for example, a PDB coordinate file. This rigidity is important for several reasons. First, the carbonyl ligands would only be able to form a K+-selective site if they were confined. Second, the overall organization of the selectivity filter is determined during channel synthesis and folding; thus, although there might be an energetic cost associated with organizing the selectivity filter if the ligands were in a liquid, that cost was paid during biosynthesis. Third, the short-range flexibility and long-range rigidity allow for the molecular motions necessary for rapid exchange of the coordinating ligands (or H2O), while limiting the overall extent of the molecular transitions, allowing for rapid kinetics. Thus, although the toy models allow for important new insights, they are toys. The goal is to transfer the knowledge that is gained into understanding the selectivity of the bilayer-spanning channels, which remains a challenge as it becomes necessary to consider not only the equilibrium situations but also the kinetics, and the competition among the permeant ions as they strive to make it through the channel. As evident from the contributions to this Perspectives series, these questions can be approached from different, complementary directions. Thus, it may be useful to note that "a model is neither right because it predicts correct answers, nor are the ideas behind a model wrong because some details do not come out exactly correct. The challenge is to deduce those features that should have enduring significance however future models are constructed" (Hille, 2001). In this series of Perspectives, Alam and Jiang focus on what can be deduced from crystal structures. Next, Nimigean and Allen consider what can be learned from a combined electrophysiological, crystallographic, and computational approach. The last three contributions, by Roux et al., by Dixit and Asthagiri, and by Rempe and colleagues (Varma et al.), consider different theoretical and computational approaches based on MD simulations and quasi-chemical theory, including the use of simple "toy" models, to identify the mechanisms underlying ion selectivity (the contribution by Varma et al. will appear in the June 2011 issue of the Journal). Letters to the editor related to these Perspectives will be published in the September 2011 issue of the Journal. Letters to the editor should be received no later than Friday, July 22, 2011, to allow for editorial review. The letters may be no longer than two printed pages (approximately six double-spaced pages) and will be subject to editorial review. They may contain no more than one figure, no more than 15 references, and no significant references to unpublished work. Letters should be prepared according to The Journal's instructions and can be submitted electronically at http://www.jgp.org, or as an e-mail attachment to [email protected].
Dynamics of ligand binding to heme proteins
Thermodynamic fluctuations in protein molecules
Protein fluctuations and the thermodynamic uncertainty principle
Kinetics and mechanism of reactions of main group metal ions with biological carriers
On the elementary origin of equilibrium ion specificity
Symposium on Membrane Transport and Metabolism
An x-ray study of azide methaemoglobin
Structural studies of ion selectivity in tetrameric cation channels
A structural view of cation channel selectivity
Forecast vector error-correction (VEC) model responses - MATLAB forecast - MathWorks Deutschland

Forecast Unconditional Response Series from VEC Model
Forecast VECX Model

Forecast vector error-correction (VEC) model responses

Y = forecast(Mdl,numperiods,Y0) returns a path of minimum mean squared error (MMSE) forecasts (Y) over the length numperiods forecast horizon using the fully specified VEC(p – 1) model Mdl. The forecasted responses represent the continuation of the presample data Y0.

Y = forecast(Mdl,numperiods,Y0,Name,Value) uses additional options specified by one or more name-value arguments. For example, 'X',X,'YF',YF specifies X as future exogenous predictor data for the regression component and YF as future response data for conditional forecasting.

Consider a VEC model for the following seven macroeconomic series. Then, fit the model to the data and forecast responses 12 quarters into the future.

Y0 = FRED.Variables;

Y is a 12-by-7 matrix of forecasted responses. Rows correspond to the forecast horizon, and columns correspond to the variables in EstMdl.SeriesNames.

fh = dateshift(FRED.Time(end),'end','quarter',1:12);
h1 = plot(FRED.Time((end-49):end),FRED.GDP((end-49):end));
fill([FRED.Time(end) fh([end end]) FRED.Time(end)],h.YLim([1 1 2 2]),'k',...
legend([h1 h2],'True','Forecast','Location','Best')
h1 = plot(FRED.Time((end-49):end),FRED.GDPDEF((end-49):end));
h1 = plot(FRED.Time((end-49):end),FRED.COE((end-49):end));
h1 = plot(FRED.Time((end-49):end),FRED.HOANBS((end-49):end));
h1 = plot(FRED.Time((end-49):end),FRED.FEDFUNDS((end-49):end));
h1 = plot(FRED.Time((end-49):end),FRED.PCEC((end-49):end));
h1 = plot(FRED.Time((end-49):end),FRED.GPDI((end-49):end));

Consider the model and data in Forecast Unconditional Response Series from VEC Model. The Data_Recessions data set contains the beginning and ending serial dates of recessions. Load the data set. Convert the matrix of date serial numbers to a datetime array. Estimate the model using all but the last three years of data. Specify the predictor identifying whether the observation was measured during a recession.

bfh = FRED.Time(end) - years(3);
estIdx = FRED.Time < bfh;
EstMdl = estimate(Mdl,FRED{estIdx,:},'X',isrecession(estIdx));
Y0 = FRED{estIdx,:};
Y = forecast(EstMdl,12,Y0,'X',isrecession(~estIdx));

Y is a 12-by-7 matrix of simulated responses. Rows correspond to the forecast horizon, and columns correspond to the variables in EstMdl.SeriesNames.

h2 = plot(FRED.Time(~estIdx),Y(:,1));

Analyze forecast accuracy using forecast intervals over a three-year horizon. This example follows from Forecast Unconditional Response Series from VEC Model. Estimate a VEC(1) model. Reserve the last three years of data to assess forecast accuracy. Assume that the appropriate cointegration rank is 4, and that the H1 Johansen form is appropriate for the model.

EstMdl = estimate(Mdl,FRED{estIdx,:});

Forecast responses from the estimated model over a three-year horizon. Specify all in-sample observations as a presample. Return the MSE of the forecasts. Y is a 12-by-7 matrix of forecasted responses. YMSE is a 12-by-1 cell vector of 7-by-7 matrices corresponding to the MSEs.

h3 = plot(FRED.Time(~estIdx),YFI(:,1,1),'k--');
plot(FRED.Time(~estIdx),YFI(:,1,2),'k--');

Example: Consider forecasting one path of a VEC model composed of four response series three periods into the future. Suppose that you have prior knowledge about some of the future values of the responses, and you want to forecast the unknown responses conditional on your knowledge.
Specify YF as a matrix containing the values that you know, and use NaN for values you do not know but want to forecast. For example, 'YF',[NaN 2 5 NaN; NaN NaN 0.1 NaN; NaN NaN NaN NaN] specifies that you have no knowledge of the future values of the first and fourth response series; you know the value for period 1 in the second response series, but no other value; and you know the values for periods 1 and 2 in the third response series, but not the value for period 3.

forecast represents the VEC model Mdl as a state-space model (ssm model object) without observation error. In difference-equation form, the fitted VEC model of the responses is

\Delta {\stackrel{�^}{y}}_{t}=\stackrel{�^}{A}{\stackrel{�^}{B}}^{\prime }{\stackrel{�^}{y}}_{t-1}+{\stackrel{�^}{\Phi }}_{1}\Delta {\stackrel{�^}{y}}_{t-1}+...+{\stackrel{�^}{\Phi }}_{p}\Delta {\stackrel{�^}{y}}_{t-p}+\stackrel{�^}{c}+\stackrel{�^}{d}t+{x}_{t}\stackrel{�^}{\beta }.
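For readers who want to experiment outside MATLAB, a rough analog of the unconditional forecasting workflow can be sketched with the VECM class in Python's statsmodels package. The toy data, lag order, and cointegration rank below are illustrative assumptions, and this sketch does not reproduce conditional forecasting with a partially known YF.

# Rough Python analog of unconditional VEC forecasting (toy data, illustrative settings).
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(1)
y0 = rng.standard_normal((200, 3)).cumsum(axis=0)   # toy presample: three integrated series

model = VECM(y0, k_ar_diff=1, coint_rank=1, deterministic="ci")  # VEC(1), rank 1, constant in the cointegration relation
res = model.fit()
forecasts = res.predict(steps=12)    # 12-by-3 array of point forecasts
print(forecasts.shape)               # (12, 3)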