Datasets:
id | title | text | formulas | url
---|---|---|---|---
10002357 | Khintchine inequality | Theorem in probability
In mathematics, the Khintchine inequality, named after Aleksandr Khinchin and spelled in multiple ways in the Latin alphabet, is a theorem from probability, and is also frequently used in analysis. Heuristically, it says that if we pick formula_0 complex numbers formula_1, and add them together each multiplied by a random sign formula_2, then the expected value of the sum's modulus (the modulus it will be closest to on average) will be not too far off from formula_3.
Statement.
Let formula_4 be i.i.d. random variables
with formula_5 for formula_6,
i.e., a sequence with Rademacher distribution. Let formula_7 and let formula_8. Then
formula_9
for some constants formula_10 depending only on formula_11 (see Expected value for notation). The sharp values of the constants formula_12 were found by Haagerup (Ref. 2; see Ref. 3 for a simpler proof). It is a simple matter to see that formula_13 when formula_14, and formula_15 when formula_16.
Haagerup found that
formula_17
where formula_18 and formula_19 is the Gamma function.
One may note in particular that formula_20 matches exactly the moments of a normal distribution.
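As an illustrative aside (not part of the original article), Haagerup's expressions are straightforward to evaluate numerically. The Python sketch below computes A_p and B_p and checks the inequality against a Monte Carlo estimate for one arbitrary choice of x; the function names and sample values are assumptions made for this sketch.

```python
import math, random

def haagerup_A(p, p0=1.847):
    # A_p from Haagerup's formula; p0 is the crossover point ~1.847
    if p <= p0:
        return 2 ** (0.5 - 1.0 / p)
    if p < 2:
        return math.sqrt(2) * (math.gamma((p + 1) / 2) / math.sqrt(math.pi)) ** (1 / p)
    return 1.0

def haagerup_B(p):
    # B_p = 1 for p <= 2, otherwise the Gaussian-moment expression
    if p <= 2:
        return 1.0
    return math.sqrt(2) * (math.gamma((p + 1) / 2) / math.sqrt(math.pi)) ** (1 / p)

def empirical_pth_moment(x, p, trials=200_000):
    # Monte Carlo estimate of (E|sum eps_n x_n|^p)^(1/p) with Rademacher signs
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) * xn for xn in x)
        total += abs(s) ** p
    return (total / trials) ** (1 / p)

x = [1.0, 2.0, 0.5, 1.5]                      # arbitrary sample vector
l2 = math.sqrt(sum(xn * xn for xn in x))      # its Euclidean norm
for p in (1, 3, 4):
    m = empirical_pth_moment(x, p)
    print(f"p={p}: A_p*||x||_2={haagerup_A(p) * l2:.3f} <= "
          f"moment={m:.3f} <= B_p*||x||_2={haagerup_B(p) * l2:.3f}")
```

Increasing `trials` tightens the Monte Carlo estimate; the bounds must hold for every choice of x.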
Uses in analysis.
The uses of this inequality are not limited to applications in probability theory. One example of its use in analysis is the following: if we let formula_21 be a linear operator between two L^p spaces formula_22 and formula_23, formula_24, with bounded norm formula_25, then one can use Khintchine's inequality to show that
formula_26
for some constant formula_27 depending only on formula_11 and formula_28.
Generalizations.
For the case of Rademacher random variables, Pawel Hitczenko showed that the sharpest version is:
formula_29
where formula_30, and formula_31 and formula_32 are universal constants independent of formula_11.
Here we assume that the formula_33 are non-negative and non-increasing.
| [
{
"math_id": 0,
"text": " N "
},
{
"math_id": 1,
"text": " x_1,\\dots,x_N \\in\\mathbb{C}"
},
{
"math_id": 2,
"text": "\\pm 1 "
},
{
"math_id": 3,
"text": " \\sqrt{|x_1|^{2}+\\cdots + |x_N|^{2}}"
},
{
"math_id": 4,
"text": " \\{\\varepsilon_n\\}_{n=1}^N "
},
{
"math_id": 5,
"text": "P(\\varepsilon_n=\\pm1)=\\frac12"
},
{
"math_id": 6,
"text": "n=1,\\ldots, N"
},
{
"math_id": 7,
"text": " 0<p<\\infty"
},
{
"math_id": 8,
"text": " x_1,\\ldots,x_N\\in \\mathbb{C}"
},
{
"math_id": 9,
"text": " A_p \\left( \\sum_{n=1}^N |x_n|^2 \\right)^{1/2} \\leq \\left(\\operatorname{E} \\left|\\sum_{n=1}^N \\varepsilon_n x_n\\right|^p \\right)^{1/p} \\leq B_p \\left(\\sum_{n=1}^N |x_n|^2\\right)^{1/2} "
},
{
"math_id": 10,
"text": " A_p,B_p>0 "
},
{
"math_id": 11,
"text": "p"
},
{
"math_id": 12,
"text": "A_p,B_p"
},
{
"math_id": 13,
"text": "A_p = 1"
},
{
"math_id": 14,
"text": "p \\ge 2"
},
{
"math_id": 15,
"text": "B_p = 1"
},
{
"math_id": 16,
"text": "0 < p \\le 2"
},
{
"math_id": 17,
"text": "\n\\begin{align}\nA_p &= \\begin{cases}\n2^{1/2-1/p} & 0<p\\le p_0, \\\\\n2^{1/2}(\\Gamma((p+1)/2)/\\sqrt{\\pi})^{1/p} & p_0 < p < 2\\\\\n1 & 2 \\le p < \\infty\n\\end{cases}\n\\\\\n&\\text{and}\n\\\\\nB_p &= \\begin{cases}\n1 & 0 < p \\le 2 \\\\\n2^{1/2}(\\Gamma((p+1)/2)/\\sqrt\\pi)^{1/p} & 2 < p < \\infty\n\\end{cases},\n\\end{align}\n"
},
{
"math_id": 18,
"text": "p_0\\approx 1.847"
},
{
"math_id": 19,
"text": "\\Gamma"
},
{
"math_id": 20,
"text": "B_p"
},
{
"math_id": 21,
"text": "T"
},
{
"math_id": 22,
"text": " L^p(X,\\mu)"
},
{
"math_id": 23,
"text": " L^p(Y,\\nu) "
},
{
"math_id": 24,
"text": "1 < p < \\infty"
},
{
"math_id": 25,
"text": " \\|T\\|<\\infty "
},
{
"math_id": 26,
"text": " \\left\\|\\left(\\sum_{n=1}^N |Tf_n|^2 \\right)^{1/2} \\right\\|_{L^p(Y,\\nu)}\\leq C_p \\left\\|\\left(\\sum_{n=1}^N |f_n|^2\\right)^{1/2} \\right\\|_{L^p(X,\\mu)} "
},
{
"math_id": 27,
"text": "C_p>0"
},
{
"math_id": 28,
"text": "\\|T\\|"
},
{
"math_id": 29,
"text": "\nA \\left(\\sqrt{p}\\left(\\sum_{n=b+1}^N x_n^2\\right)^{1/2} + \\sum_{n=1}^b x_n\\right)\n\\leq \\left(\\operatorname{E} \\left|\\sum_{n=1}^N \\varepsilon_n x_n\\right|^p \\right)^{1/p}\n\\leq B \\left(\\sqrt{p}\\left(\\sum_{n=b+1}^N x_n^2\\right)^{1/2} + \\sum_{n=1}^b x_n\\right)\n"
},
{
"math_id": 30,
"text": "b = \\lfloor p\\rfloor"
},
{
"math_id": 31,
"text": "A"
},
{
"math_id": 32,
"text": "B"
},
{
"math_id": 33,
"text": "x_i"
}
] | https://en.wikipedia.org/wiki?curid=10002357 |
10004115 | Brushed DC electric motor | Internally commutated electric motor
A brushed DC electric motor is an internally commutated electric motor designed to be run from a direct current power source and utilizing an electric brush for contact.
Brushed motors were the first commercially important application of electric power to the driving of machinery, and DC distribution systems were used for more than 100 years to operate motors in commercial and industrial buildings. Brushed DC motors can be varied in speed by changing the operating voltage or the strength of the magnetic field. Depending on the connections of the field to the power supply, the speed and torque characteristics of a brushed motor can be altered to provide steady speed or speed inversely proportional to the mechanical load. Brushed motors continue to be used for electrical propulsion, cranes, paper machines and steel rolling mills. Since the brushes wear down and require replacement, brushless DC motors using power electronic devices have displaced brushed motors from many applications.
Simple two-pole DC motor.
The following graphics illustrate a simple, two-pole, brushed, DC motor.
When a current passes through the coil wound around a soft iron core situated inside an external magnetic field, the side of the positive pole is acted upon by an upwards force, while the other side is acted upon by a downward force. According to Fleming's left hand rule, the forces cause a turning effect on the coil, making it rotate. To make the motor rotate in a constant direction, "direct current" commutators make the current reverse in direction every half a cycle (in a two-pole motor) thus causing the motor to continue to rotate in the same direction.
A problem with the motor shown above is that when the plane of the coil is parallel to the magnetic field—i.e. when the rotor poles are 90 degrees from the stator poles—the torque is zero. In the pictures above, this occurs when the core of the coil is horizontal—the position it is just about to reach in the second-to-last picture on the right. The motor would not be able to start in this position. However, once it was started, it would continue to rotate through this position by momentum.
There is a second problem with this simple pole design. At the zero-torque position, both commutator brushes are touching (bridging) both commutator plates, resulting in a short circuit. The power leads are shorted together through the commutator plates, and the coil is also short-circuited through both brushes (the coil is shorted twice, once through each brush independently). Note that this problem is independent of the non-starting problem above; even if there were a high current in the coil at this position, there would still be zero torque. The problem here is that this short uselessly consumes power without producing any motion (nor even any coil current.) In a low-current battery-powered demonstration this short-circuiting is generally not considered harmful. However, if a two-pole motor were designed to do actual work with several hundred watts of power output, this shorting could result in severe commutator overheating, brush damage, and potential welding of the brushes—if they were metallic—to the commutator. Carbon brushes, which are often used, would not weld. In any case, a short like this is very wasteful, drains batteries rapidly and, at a minimum, requires power supply components to be designed to much higher standards than would be needed just to run the motor without the shorting.
One simple solution is to put a gap between the commutator plates which is wider than the ends of the brushes. This increases the zero-torque range of angular positions but eliminates the shorting problem; if the motor is started spinning by an outside force it will continue spinning. With this modification, it can also be effectively turned off simply by stalling (stopping) it in a position in the zero-torque (i.e. commutator non-contacting) angle range. This design is sometimes seen in homebuilt hobby motors, e.g. for science fairs and such designs can be found in some published science project books. A clear downside of this simple solution is that the motor now coasts through a substantial arc of rotation twice per revolution and the torque is pulsed. This may work for electric fans or to keep a flywheel spinning but there are many applications, even where starting and stopping are not necessary, for which it is completely inadequate, such as driving the capstan of a tape transport, or any similar instance where to speed up and slow down often and quickly is a requirement. Another disadvantage is that, since the coils have a measure of self inductance, current flowing in them cannot suddenly stop. The current attempts to jump the opening gap between the commutator segment and the brush, causing arcing.
Even for fans and flywheels, the clear weaknesses remaining in this design—especially that it is not self-starting from all positions—make it impractical for working use, especially considering the better alternatives that exist. Unlike the demonstration motor above, DC motors are commonly designed with more than two poles, are able to start from any position, and do not have any position where current can flow without producing electromotive power by passing through some coil. Many common small brushed DC motors used in toys and small consumer appliances, the simplest mass-produced DC motors to be found, have three-pole armatures. The brushes can now bridge two adjacent commutator segments without causing a short circuit. These three-pole armatures also have the advantage that current from the brushes either flows through two coils in series or through just one coil. Starting with the current in an individual coil at half its nominal value (as a result of flowing through two coils in series), it rises to its nominal value and then falls to half this value. The sequence then continues with current in the reverse direction. This results in a closer step-wise approximation to the ideal sinusoidal coil current, producing a more even torque than the two-pole motor where the current in each coil is closer to a square wave. Since current changes are half those of a comparable two-pole motor, arcing at the brushes is consequently less.
If the shaft of a DC motor is turned by an external force, the motor will act like a generator and produce an Electromotive force (EMF). During normal operation, the spinning of the motor produces a voltage, known as the counter-EMF (CEMF) or back EMF, because it opposes the applied voltage on the motor. The back EMF is the reason that the motor when free-running does not appear to have the same low electrical resistance as the wire contained in its winding. This is the same EMF that is produced when the motor is used as a generator (for example when an electrical load, such as a light bulb, is placed across the terminals of the motor and the motor shaft is driven with an external torque). Therefore, the total voltage drop across a motor consists of the CEMF voltage drop, and the parasitic voltage drop resulting from the internal resistance of the armature's windings. The current through a motor is given by the following equation:
formula_0
The mechanical power produced by the motor is given by:
formula_1
As an unloaded DC motor spins, it generates a backwards-flowing electromotive force that resists the current being applied to the motor. The current through the motor drops as the rotational speed increases, and a free-spinning motor has very little current. It is only when a load is applied to the motor that slows the rotor that the current draw through the motor increases.
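The two formulas above can be made concrete with a short numerical sketch (a minimal illustration; the supply voltage, armature resistance, and back-EMF constant below are hypothetical values, not from the article):

```python
# Steady-state brushed DC motor behaviour from the two equations above:
#   I = (V_applied - V_cemf) / R_armature,   P_mech = I * V_cemf
V_APPLIED = 12.0   # supply voltage (V), assumed for illustration
R_ARMATURE = 0.5   # armature resistance (ohms), assumed
KE = 0.01          # back-EMF constant (V per rpm), assumed

for rpm in (0, 400, 800, 1150):
    v_cemf = KE * rpm                       # back EMF grows with speed
    current = (V_APPLIED - v_cemf) / R_ARMATURE
    p_mech = current * v_cemf               # mechanical power output
    print(f"{rpm:5d} rpm: I = {current:5.2f} A, P_mech = {p_mech:6.2f} W")
```

At standstill the current is large (stall current) and the mechanical power is zero; near free-running speed the back EMF nearly cancels the supply voltage and the current is small, exactly as the paragraph above describes.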
The commutating plane.
In a dynamo, a plane through the centers of the contact areas where a pair of brushes touch the commutator and parallel to the axis of rotation of the armature is referred to as the "commutating plane". In this diagram the commutating plane is shown for just one of the brushes, assuming the other brush made contact on the other side of the commutator with radial symmetry, 180 degrees from the brush shown.
Compensation for stator field distortion.
In a real dynamo, the field is never perfectly uniform. Instead, as the rotor spins it induces field effects which drag and distort the magnetic lines of the outer non-rotating stator.
The faster the rotor spins, the greater the degree of field distortion. Because the dynamo operates most efficiently with the rotor field at right angles to the stator field, it is necessary to either retard or advance the brush position to put the rotor's field into the correct position to be at a right angle to the distorted field.
These field effects are reversed when the direction of spin is reversed. It is therefore difficult to build an efficient reversible commutated dynamo, since for highest field strength it is necessary to move the brushes to the opposite side of the normal neutral plane.
The effect can be considered to be somewhat similar to timing advance in an internal combustion engine. Generally a dynamo that has been designed to run at a certain fixed speed will have its brushes permanently fixed to align the field for highest efficiency at that speed.
DC machines with wound stators compensate the distortion with commutating field windings and compensation windings.
Motor design variations.
DC motors.
Brushed DC motors are constructed with wound rotors and either wound or permanent-magnet stators.
Wound stators.
The field coils have traditionally existed in four basic formats: separately excited (sepex), series-wound, shunt-wound, and a combination of the latter two; compound-wound.
In a series wound motor, the field coils are connected electrically in series with the armature coils (via the brushes). In a shunt wound motor, the field coils are connected in parallel, or "shunted" to the armature coils. In a separately excited (sepex) motor the field coils are supplied from an independent source, such as a motor-generator and the field current is unaffected by changes in the armature current. The sepex system was sometimes used in DC traction motors to facilitate control of wheelslip.
Permanent-magnet motors.
Permanent-magnet types have some performance advantages over direct-current-excited synchronous types, and have become predominant in fractional horsepower applications. They are smaller, lighter, more efficient and more reliable than other singly-fed electric machines.
Originally all large industrial DC motors used wound field or rotor magnets. Permanent magnets have traditionally only been useful on small motors because it was difficult to find a material capable of retaining a high-strength field. Only recently have advances in materials technology allowed the creation of high-intensity permanent magnets, such as neodymium magnets, allowing the development of compact, high-power motors without the extra volume of field coils and excitation means. But as these high performance permanent magnets become more applied in electric motor or generator systems, other problems are realized (see Permanent magnet synchronous generator).
Axial field motors.
Traditionally, the field has been applied radially—in and away from the rotation axis of the motor. However, some designs have the field flowing along the axis of the motor, with the rotor cutting the field lines as it rotates. This allows for much stronger magnetic fields, particularly if Halbach arrays are employed. This, in turn, gives power to the motor at lower speeds. However, the focused flux density cannot rise above the limited residual flux density of the permanent magnet despite high coercivity and, like all electric machines, the flux density of magnetic core saturation is the design constraint.
Speed control.
Generally, the rotational speed of a DC motor is proportional to the EMF in its coil (= the voltage applied to it minus voltage lost on its resistance), and the torque is proportional to the current. Speed control can be achieved by variable battery tappings, variable supply voltage, resistors or electronic controls. The direction of a wound field DC motor can be changed by reversing either the field or armature connections, but not both. This is commonly done with a special set of contactors (direction contactors). The effective voltage can be varied by inserting a series resistor or by an electronically controlled switching device made of thyristors, transistors, or, formerly, mercury arc rectifiers.
Series-parallel.
Series-parallel control was the standard method of controlling railway traction motors before the advent of power electronics. An electric locomotive or train would typically have four motors which could be grouped in three different ways: all four in series, two parallel pairs of two motors in series, or all four in parallel.
This provided three running speeds with minimal resistance losses. For starting and acceleration, additional control was provided by resistances. This system has been superseded by electronic control systems.
Field weakening.
The speed of a DC motor can be increased by field weakening. Reducing the field strength is done by inserting resistance in series with a shunt field, or inserting resistances around a series-connected field winding, to reduce current in the field winding. When the field is weakened, the back-emf reduces, so a larger current flows through the armature winding and this increases the speed. Field weakening is not used on its own but in combination with other methods, such as series-parallel control.
Chopper.
In a circuit known as a chopper, the average voltage applied to the motor is varied by switching the supply voltage very rapidly. As the "on" to "off" ratio is varied to alter the average applied voltage, the speed of the motor varies. The percentage "on" time multiplied by the supply voltage gives the average voltage applied to the motor. Therefore, with a 100 V supply and a 25% "on" time, the average voltage at the motor will be 25 V. During the "off" time, the armature's inductance causes the current to continue through a diode called a "flyback diode", in parallel with the motor. At this point in the cycle, the supply current will be zero, and therefore the average motor current will always be higher than the supply current unless the percentage "on" time is 100%. At 100% "on" time, the supply and motor current are equal. The rapid switching wastes less energy than series resistors. This method is also called pulse-width modulation (PWM) and is often controlled by a microprocessor. An output filter is sometimes installed to smooth the average voltage applied to the motor and reduce motor noise.
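A brief numeric sketch of the chopper arithmetic described above, using the 100 V supply and 25% "on" time from the text (the motor current value is an assumed example):

```python
# Chopper (PWM) averages, following the 100 V / 25% duty example in the text.
V_SUPPLY = 100.0   # supply voltage (V)
DUTY = 0.25        # fraction of time the switch is "on"
I_MOTOR = 8.0      # assumed (hypothetical) average motor current (A)

v_motor_avg = DUTY * V_SUPPLY        # 25 V average applied to the motor
# Neglecting losses, input power equals output power, so the average
# supply current is only the duty-cycle fraction of the motor current:
i_supply_avg = DUTY * I_MOTOR        # 2 A drawn from the supply

print(f"average motor voltage: {v_motor_avg:.0f} V")
print(f"average supply current: {i_supply_avg:.1f} A "
      f"(motor current {I_MOTOR:.1f} A)")
```

This power-balance view is why the average motor current exceeds the supply current whenever the duty cycle is below 100%, as stated above.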
Since the series-wound DC motor develops its highest torque at low speed, it is often used in traction applications such as electric locomotives, and trams. Another application is starter motors for petrol and small diesel engines. Series motors must never be used in applications where the drive can fail (such as belt drives). As the motor accelerates, the armature (and hence field) current reduces. The reduction in field causes the motor to speed up, and in extreme cases the motor can even destroy itself, although this is much less of a problem in fan-cooled motors (with self-driven fans). This can be a problem with railway motors in the event of a loss of adhesion since, unless quickly brought under control, the motors can reach speeds far higher than they would do under normal circumstances. This can not only cause problems for the motors themselves and the gears, but due to the differential speed between the rails and the wheels it can also cause serious damage to the rails and wheel treads as they heat and cool rapidly. Field weakening is used in some electronic controls to increase the top speed of an electric vehicle. The simplest form uses a contactor and field-weakening resistor; the electronic control monitors the motor current and switches the field weakening resistor into circuit when the motor current reduces below a preset value (this will be when the motor is at its full design speed). Once the resistor is in circuit, the motor will increase speed above its normal speed at its rated voltage. When motor current increases, the control will disconnect the resistor and low speed torque is made available.
Ward Leonard.
A Ward Leonard control is usually used for controlling a shunt or compound wound DC motor, and developed as a method of providing a speed-controlled motor from an AC supply, though it is not without its advantages in DC schemes. The AC supply is used to drive an AC motor, usually an induction motor that drives a DC generator or dynamo. The DC output from the armature is directly connected to the armature of the DC motor (sometimes but not always of identical construction). The shunt field windings of both DC machines are independently excited through variable resistors. Extremely good speed control from standstill to full speed, and consistent torque, can be obtained by varying the generator and/or motor field current. This method of control was the "de facto" method from its development until it was superseded by solid state thyristor systems. It found service in almost any environment where good speed control was required, from passenger lifts through to large mine pit head winding gear and even industrial process machinery and electric cranes. Its principal disadvantage was that three machines were required to implement a scheme (five in very large installations, as the DC machines were often duplicated and controlled by a tandem variable resistor). In many applications, the motor-generator set was often left permanently running, to avoid the delays that would otherwise be caused by starting it up as required. Although electronic (thyristor) controllers have replaced most small to medium Ward-Leonard systems, some very large ones (thousands of horsepower) remain in service. The field currents are much lower than the armature currents, allowing a moderate sized thyristor unit to control a much larger motor than it could control directly. For example, in one installation, a 300 amp thyristor unit controls the field of the generator. The generator output current is in excess of 15,000 amperes, which would be prohibitively expensive (and inefficient) to control directly with thyristors.
Torque and speed of a DC motor.
A DC motor's speed and torque characteristics vary according to three different magnetization sources, separately excited field, self-excited field or permanent-field, which are used selectively to control the motor over the mechanical load's range. Self-excited field motors can be series, shunt, or a compound wound connected to the armature.
Basic properties.
Define
Eb, the induced or counter EMF (V); Ia, the armature current (A); kb, the counter EMF equation constant; kn, the speed equation constant; kT, the torque equation constant; n, the armature frequency of rotation (rpm); Rm, the motor resistance (Ω); T, the motor torque (N·m); Vm, the motor input voltage (V); and Φ, the machine's total flux (Wb).
Counter EMF equation.
The DC motor's counter emf is proportional to the product of the machine's total flux strength and armature speed:
Eb = kb Φ n
Voltage balance equation.
The DC motor's input voltage must overcome the counter emf as well as the voltage drop created by the armature current across the motor resistance, that is, the combined resistance across the brushes, armature winding and series field winding, if any:
Vm = Eb + Rm Ia
Torque equation.
The DC motor's torque is proportional to the product of the armature current and the machine's total flux strength:
formula_2
where kT = kb/(2π) is the torque equation constant.
Speed equation.
Since
n = Eb / (kb Φ)
and
Vm = Eb + Rm Ia
we have
formula_3
where kn = 1/kb is the speed equation constant.
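Putting the four equations together, a minimal sketch (all machine constants below are hypothetical) solves for the counter emf, speed, and torque at a given armature current:

```python
import math

# Hypothetical machine constants for illustration only.
KB = 2.0      # counter emf constant
PHI = 0.05    # total flux (Wb)
RM = 0.8      # motor resistance (ohms)
VM = 24.0     # input voltage (V)
IA = 5.0      # armature current (A)

KT = KB / (2 * math.pi)            # torque constant: kT = kb / (2*pi)
eb = VM - RM * IA                  # voltage balance: Vm = Eb + Rm*Ia
n = eb / (KB * PHI)                # speed equation:  n = Eb / (kb*Phi)
torque = KT * IA * PHI             # torque equation: T = kT*Ia*Phi

print(f"counter emf Eb = {eb:.1f} V, speed n = {n:.0f} rpm, torque T = {torque:.3f} N*m")
```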
Torque and speed characteristics.
Shunt wound motor.
With the shunt wound motor's high-resistance field winding connected in parallel with the armature, Vm, Rm and Φ are constant such that the no load to full load speed regulation is seldom more than 5%. Speed control is achieved three ways: varying the voltage applied to the armature, inserting resistance in series with the armature circuit, or weakening the field with a rheostat in series with the shunt field winding.
Series wound motor.
The series motor responds to increased load by slowing down; the current increases and the torque rises in proportion to the square of the current since the same current flows in both the armature and the field windings. If the motor is stalled, the current is limited only by the total resistance of the windings and the torque can be very high, but there is a danger of the windings becoming overheated. Series wound motors were widely used as traction motors in rail transport of every kind, but are being phased out in favour of power inverter-fed AC induction motors. The counter EMF aids the armature resistance to limit the current through the armature. When power is first applied to a motor, the armature does not rotate, the counter EMF is zero and the only factor limiting the armature current is the armature resistance. As the prospective current through the armature is very large, the need arises for an additional resistance in series with the armature to limit the current until the motor rotation can build up the counter EMF. As the motor rotation builds up, the resistance is gradually cut out.
The series wound DC motor's most notable characteristic is that its speed is almost entirely dependent on the torque required to drive the load. This suits large inertial loads, as the motor accelerates from maximum torque, with torque reducing gradually as speed increases.
As the series motor's speed can be dangerously high, series motors are often geared or direct-connected to the load.
Permanent magnet motor.
A permanent magnet DC motor is characterized by a linear relationship between stall torque (the maximum torque, with the shaft at standstill) and no-load speed (the maximum output speed, with no applied shaft torque). There is a quadratic power relationship between these two speed-axis points.
Protection.
To extend a DC motor's service life, protective devices and motor controllers are used to protect it from mechanical damage, excessive moisture, high dielectric stress and high temperature or thermal overloading. These protective devices sense motor fault conditions and either activate an alarm to notify the operator or automatically de-energize the motor when a faulty condition occurs.
For overloaded conditions, motors are protected with thermal overload relays. Bi-metal thermal overload protectors are embedded in the motor's windings and made from two dissimilar metals. They are designed such that the bimetallic strips will bend in opposite directions when a temperature set point is reached to open the control circuit and de-energize the motor. Heaters are external thermal overload protectors connected in series with the motor's windings and mounted in the motor contactor. Solder pot heaters melt in an overload condition, which causes the motor control circuit to de-energize the motor. Bimetallic heaters function the same way as embedded bimetallic protectors.
Fuses and circuit breakers are overcurrent or short circuit protectors. Ground fault relays also provide overcurrent protection; they monitor the electric current between the motor's windings and earth system ground. In motor-generators, reverse current relays prevent the battery from discharging and motorizing the generator. Since DC motor field loss can cause a hazardous runaway or overspeed condition, loss-of-field relays are connected in parallel with the motor's field to sense field current. When the field current decreases below a set point, the relay will de-energize the motor's armature. A locked rotor condition prevents a motor from accelerating after its starting sequence has been initiated. Distance relays protect motors from locked-rotor faults. Undervoltage motor protection is typically incorporated into motor controllers or starters. In addition, motors can be protected from overvoltages or surges with isolation transformers, power conditioning equipment, MOVs, arresters and harmonic filters.
Environmental conditions, such as dust, explosive vapors, water, and high ambient temperatures, can adversely affect the operation of a DC motor. To protect a motor from these environmental conditions, the National Electrical Manufacturers Association (NEMA) and the International Electrotechnical Commission (IEC) have standardized motor enclosure designs based upon the environmental protection they provide from contaminants. Modern software, such as Motor-CAD, can also be used in the design stage to help increase the thermal efficiency of a motor.
DC motor starters.
The counter-emf aids the armature resistance to limit the current through the armature. When power is first applied to a motor, the armature does not rotate. At that instant the counter-emf is zero and the only factor limiting the armature current is the armature resistance and inductance. Usually the armature resistance of a motor is less than 1 Ω; therefore the current through the armature would be very large when the power is applied. This current can make an excessive voltage drop affecting other equipment in the circuit and even trip overload protective devices.
Therefore, the need arises for an additional resistance in series with the armature to limit the current until the motor rotation can build up the counter-emf. As the motor rotation builds up, the resistance is gradually cut out.
Manual-starting rheostat.
When electrical and DC motor technology was first developed, much of the equipment was constantly tended by an operator trained in the management of motor systems. The very first motor management systems were almost completely manual, with an attendant starting and stopping the motors, cleaning the equipment, repairing any mechanical failures, and so forth.
The first DC motor-starters were also completely manual, as shown in this image. Normally it took the operator about ten seconds to slowly advance the rheostat across the contacts to gradually increase input power up to operating speed. There were two different classes of these rheostats, one used for starting only, and one for starting and speed regulation. The starting rheostat was less expensive, but had smaller resistance elements that would burn out if required to run a motor at a constant reduced speed.
This starter includes a no-voltage magnetic holding feature, which causes the rheostat to spring to the off position if power is lost, so that the motor does not later attempt to restart in the full-voltage position. It also has overcurrent protection that trips the lever to the off position if excessive current over a set amount is detected.
Three-point starter.
The incoming power wires are called L1 and L2. As the name implies, there are only three connections to the starter: one to incoming power, one to the armature, and one to the field. The connections to the armature are called A1 and A2. The ends of the field (excitement) coil are called F1 and F2. In order to control the speed, a field rheostat is connected in series with the shunt field. One side of the line is connected to the arm of the starter. The arm is spring-loaded so it will return to the "Off" position when not held at any other position.
Four-point starter.
The four-point starter eliminates the drawback of the three-point starter. In addition to the same three points that were in use with the three-point starter, the other side of the line, L1, is the fourth point brought to the starter when the arm is moved from the "Off" position. The coil of the holding magnet is connected across the line. The holding magnet and starting resistors function identically to those in the three-point starter.
| [
{
"math_id": 0,
"text": "I = \\frac{V_\\text{applied} - V_\\text{cemf}}{R_\\text{armature}}"
},
{
"math_id": 1,
"text": "P = I \\cdot V_\\text{cemf}"
},
{
"math_id": 2,
"text": "\\begin{align}\n T &= \\frac{1}{2\\pi} k_b I_a \\Phi \\\\\n &= k_T I_a \\Phi\n\\end{align}"
},
{
"math_id": 3,
"text": "\\begin{align}\n n &= \\frac{V_m - R_m I_a}{k_b \\Phi} \\\\\n &= k_n \\frac{V_m - R_m I_a}{\\Phi}\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=10004115 |
10004409 | Shear strength (soil) | Magnitude of the shear stress that a soil can sustain
Shear strength is a term used in soil mechanics to describe the magnitude of the shear stress that a soil can sustain. The shear resistance of soil is a result of friction and interlocking of particles, and possibly cementation or bonding of particle contacts. Due to interlocking, particulate material may expand or contract in volume as it is subject to shear strains. If soil expands its volume, the density of particles will decrease and the strength will decrease; in this case, the peak strength would be followed by a reduction of shear stress. The stress-strain relationship levels off when the material stops expanding or contracting, and when interparticle bonds are broken. The theoretical state at which the shear stress and density remain constant while the shear strain increases may be called the critical state, steady state, or residual strength.
The volume change behavior and interparticle friction depend on the density of the particles, the intergranular contact forces, and to a somewhat lesser extent, other factors such as the rate of shearing and the direction of the shear stress. The average normal intergranular contact force per unit area is called the effective stress.
If water is not allowed to flow in or out of the soil, the stress path is called an "undrained stress path". During undrained shear, if the particles are surrounded by a nearly incompressible fluid such as water, then the density of the particles cannot change without drainage, but the water pressure and effective stress will change. On the other hand, if the fluids are allowed to freely drain out of the pores, then the pore pressures will remain constant and the test path is called a "drained stress path". The soil is free to dilate or contract during shear if the soil is drained. In reality, soil is partially drained, somewhere between the perfectly undrained and drained idealized conditions.
The shear strength of soil depends on the effective stress, the drainage conditions, the density of the particles, the rate of strain, and the direction of the strain.
For undrained, constant volume shearing, the Tresca theory may be used to predict the shear strength, but for drained conditions, the Mohr–Coulomb theory may be used.
Two important theories of soil shear are the critical state theory and the steady state theory. There are key differences between the critical state condition and the steady state condition and the resulting theory corresponding to each of these conditions.
Factors controlling shear strength of soils.
The stress-strain relationship of soils, and therefore the shearing strength, is affected by: the soil composition (mineralogy, grain size and shape, pore fluid type and content); the initial state (density or void ratio, effective stress state, stress history); the soil structure (fabric, layering, cementation); and the loading conditions (drainage, rate of strain, stress path).
Undrained strength.
This term describes a type of shear strength in soil mechanics as distinct from drained strength.
Conceptually, there is no such thing as "the" undrained strength of a soil. It depends on a number of factors, the main ones being: the orientation of the stresses, the stress path, the rate of shearing, and the volume of material involved (as for fissured clays or rock masses).
Undrained strength is typically defined by Tresca theory, based on Mohr's circle as:
"σ1 - σ3 = 2 Su"
Where:
"σ1" is the major principal stress
"σ3" is the minor principal stress
formula_0 is the shear strength "(σ1 - σ3)/2"
hence, formula_0 = "Su" (or sometimes "cu"), the undrained strength.
It is commonly adopted in limit equilibrium analyses where the rate of loading is very much greater than the rate at which pore water pressure - generated due to the action of shearing the soil - dissipates. An example of this is rapid loading of sands during an earthquake, or the failure of a clay slope during heavy rain, and applies to most failures that occur during construction.
As an implication of undrained condition, no elastic volumetric strains occur, and thus Poisson's ratio is assumed to remain 0.5 throughout shearing. The Tresca soil model also assumes no plastic volumetric strains occur. This is of significance in more advanced analyses such as in finite element analysis. In these advanced analysis methods, soil models other than Tresca may be used to model the undrained condition including Mohr-Coulomb and critical state soil models such as the modified Cam-clay model, provided Poisson's ratio is maintained at 0.5.
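For example, a two-line computation of the undrained strength from a pair of principal stresses at failure, following the Tresca definition above (values invented for illustration):

```python
sigma_1 = 300.0   # major principal stress at failure (kPa, hypothetical)
sigma_3 = 180.0   # minor principal stress at failure (kPa, hypothetical)
S_u = (sigma_1 - sigma_3) / 2   # undrained shear strength from Tresca
print(S_u)  # 60.0 kPa
```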
One relationship used extensively by practising engineers is the empirical observation that the ratio of the undrained shear strength c to the original consolidation stress p' is approximately a constant for a given overconsolidation ratio (OCR). This relationship was first formalized in early experimental work, which also showed that the stress-strain characteristics of remolded clays could be normalized with respect to the original consolidation stress. The constant c/p relationship can also be derived from theory for both critical-state and steady-state soil mechanics. This fundamental, normalization property of the stress-strain curves is found in many clays, and was refined into the empirical SHANSEP (stress history and normalized soil engineering properties) method.
Drained shear strength.
The drained shear strength is the shear strength of the soil when pore fluid pressures, generated during the course of shearing the soil, are able to dissipate during shearing. It also applies where no pore water exists in the soil (the soil is dry) and hence pore fluid pressures are negligible. It is commonly approximated using the Mohr-Coulomb equation, which Karl von Terzaghi called "Coulomb's equation" in 1942 and combined with the principle of effective stress.
In terms of effective stresses, the shear strength is often approximated by:
formula_0 = "σ' tan(φ') + c"'
Where "σ' = (σ - u)", is defined as the effective stress. "σ" is the total stress applied normal to the shear plane, and "u" is the pore water pressure acting on the same plane.
"φ"' = the effective stress friction angle, or the 'angle of internal friction' after Coulomb friction. The coefficient of friction formula_1 is equal to tan(φ'). Different values of friction angle can be defined, including the peak friction angle, φ'p, the critical state friction angle, φ'cv, or residual friction angle, φ'r.
c' is called the cohesion; however, it usually arises as a consequence of forcing a straight line to fit through measured values of (τ,σ') even though the data actually falls on a curve. The intercept of the straight line on the shear stress axis is called the cohesion. It is well known that the resulting intercept depends on the range of stresses considered: it is not a fundamental soil property. The curvature (nonlinearity) of the failure envelope occurs because the dilatancy of closely packed soil particles depends on confining pressure.
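A small worked sketch of the Mohr-Coulomb approximation with effective stress (the stress values and friction angle below are hypothetical examples, not from the article):

```python
import math

def drained_shear_strength(sigma_total, pore_pressure, phi_deg, cohesion=0.0):
    """Mohr-Coulomb: tau = sigma' * tan(phi') + c', with sigma' = sigma - u."""
    sigma_eff = sigma_total - pore_pressure          # effective normal stress
    return sigma_eff * math.tan(math.radians(phi_deg)) + cohesion

# Hypothetical example: 200 kPa total normal stress, 50 kPa pore pressure,
# friction angle 30 degrees, c' taken as zero (per critical state practice).
tau = drained_shear_strength(200.0, 50.0, 30.0)
print(f"drained shear strength ~= {tau:.1f} kPa")   # 150 * tan(30 deg) ~ 86.6 kPa
```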
Critical state theory.
A more advanced understanding of the behaviour of soil undergoing shearing led to the development of the critical state theory of soil mechanics. In critical state soil mechanics, a distinct shear strength is identified where the soil undergoing shear does so at a constant volume, also called the 'critical state'. Thus there are three commonly identified shear strengths for a soil undergoing shear: the peak strength, the critical state (constant volume) strength, and the residual strength.
The peak strength may occur before or at critical state, depending on the initial state of the soil particles undergoing shear force: a dense (dilative) soil reaches a peak strength before the critical state as particle interlock is overcome, whereas a loose (contractive) soil reaches its peak strength at the critical state.
The constant volume (or critical state) shear strength is said to be extrinsic to the soil, and independent of the initial density or packing arrangement of the soil grains. In this state the grains being separated are said to be 'tumbling' over one another, with no significant granular interlock or sliding plane development affecting the resistance to shearing. At this point, no inherited fabric or bonding of the soil grains affects the soil strength.
The residual strength occurs for some soils where the shape of the particles that make up the soil become aligned during shearing (forming a slickenside), resulting in reduced resistance to continued shearing (further strain softening). This is particularly true for most clays that comprise plate-like minerals, but is also observed in some granular soils with more elongate shaped grains. Clays that do not have plate-like minerals (like allophanic clays) do not tend to exhibit residual strengths.
Use in practice: if one is to adopt critical state theory and take c' = 0, the peak strength formula_0p may be used, provided the level of anticipated strains is taken into account, and the effects of potential rupture or strain softening to critical state strengths are considered. For large strain deformation, the potential to form a slickensided surface with a φ'r should be considered (such as pile driving).
The Critical State occurs at the quasi-static strain rate. It does not allow for differences in shear strength based on different strain rates. Also at the critical state, there is no particle alignment or specific soil structure.
Almost as soon as it was first introduced, the critical state concept was subjected to much criticism—chiefly its inability to match readily available test data from testing a wide variety of soils. This is primarily due to the theory's inability to account for particle structure. A major consequence of this is its inability to model the post-peak strain softening commonly observed in contractive soils that have anisotropic grain shapes/properties. Further, an assumption commonly made to make the model mathematically tractable is that shear stress cannot cause volumetric strain nor volumetric stress cause shear strain. Since this is not the case in reality, it is an additional cause of the poor matches to readily available empirical test data. Additionally, critical state elasto-plastic models assume that elastic strains drive volumetric changes. Since this too is not the case in real soils, this assumption results in poor fits to volume and pore pressure change data.
Steady state (dynamical systems based soil shear).
A refinement of the critical state concept is the steady state concept.
The steady state strength is defined as the shear strength of the soil when it is at the steady state condition. The steady state condition is defined as "that state in which the mass is continuously deforming at constant volume, constant normal effective stress, constant shear stress, and constant velocity." Steve J. Poulos, then an Associate Professor of the Soil Mechanics Department of Harvard University, built off a hypothesis that Arthur Casagrande was formulating towards the end of his career. Steady state based soil mechanics is sometimes called "Harvard soil mechanics". The steady state condition is not the same as the "critical state" condition.
The steady state occurs only after all particle breakage if any is complete and all the particles are oriented in a statistically steady state condition and so that the shear stress needed to continue deformation at a constant velocity of deformation does not change. It applies to both the drained and the undrained case.
The steady state has a slightly different value depending on the strain rate at which it is measured. Thus the steady state shear strength at the quasi-static strain rate (the strain rate at which the critical state is defined to occur at) would seem to correspond to the critical state shear strength. However, there is an additional difference between the two states. This is that at the steady state condition the grains position themselves in the steady state structure, whereas no such structure occurs for the critical state. In the case of shearing to large strains for soils with elongated particles, this steady state structure is one where the grains are oriented (perhaps even aligned) in the direction of shear. In the case where the particles are strongly aligned in the direction of shear, the steady state corresponds to the "residual condition."
Three common misconceptions regarding the steady state are that a) it is the same as the critical state (it is not), b) that it applies only to the undrained case (it applies to all forms of drainage), and c) that it does not apply to sands (it applies to any granular material). A primer on the steady state theory can be found in a report by Poulos. Its use in earthquake engineering is described in detail in another publication by Poulos.
The difference between the steady state and the critical state is not merely one of semantics as is sometimes thought, and it is incorrect to use the two terms/concepts interchangeably. The additional requirements of the strict definition of the steady state over and above the critical state, viz. a constant deformation velocity and statistically constant structure (the steady state structure), place the steady state condition within the framework of dynamical systems theory. This strict definition of the steady state was used to describe soil shear as a dynamical system. Dynamical systems are ubiquitous in nature (the Great Red Spot on Jupiter is one example) and mathematicians have extensively studied such systems. The underlying basis of the soil shear dynamical system is simple friction.
| [
{
"math_id": 0,
"text": "\\tau"
},
{
"math_id": 1,
"text": "\\mu"
}
] | https://en.wikipedia.org/wiki?curid=10004409 |
1000441 | Artificial chemistry | An artificial chemistry is a chemical-like system that usually consists of objects, called molecules, that interact according to rules resembling chemical reaction rules. Artificial chemistries are created and studied in order to understand fundamental properties of chemical systems, including prebiotic evolution, as well as for developing chemical computing systems. Artificial chemistry is a field within computer science wherein chemical reactions—often biochemical ones—are computer-simulated, yielding insights on evolution, self-assembly, and other biochemical phenomena. The field does not use actual chemicals, and should not be confused with either synthetic chemistry or computational chemistry. Rather, bits of information are used to represent the starting molecules, and the end products are examined along with the processes that led to them. The field originated in artificial life but has shown to be a versatile method with applications in many fields such as chemistry, economics, sociology and linguistics.
Formal definition.
An artificial chemistry is defined in general as a triple (S,R,A), where S is the set of possible molecules, R is the set of reaction (collision) rules describing how molecules interact, and A is the algorithm that determines how and when the rules are applied to a population of molecules (the reactor dynamics). In some cases it is sufficient to define it as a tuple (S,I), where I is an interaction scheme among the molecules.
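As a concrete illustration of the triple (S,R,A), consider a well-known toy "number chemistry" (the sketch below is an illustration of the definition, not a model from this article): S is a set of integers, the single rule in R divides one molecule by another when possible, and the algorithm A repeatedly collides random pairs in a well-stirred soup. Repeated collisions drive the population toward numbers that can no longer react, i.e. primes.

```python
import random

# S: molecules are integers in a fixed range.
# R: one rule -- if a divides b (a not 0 or 1, a != b), the collision
#    replaces b with b // a.
# A: pick two random molecules from the well-stirred soup, apply the
#    rule, repeat many times.

def react(a, b):
    if a not in (0, 1) and a != b and b % a == 0:
        return b // a          # reaction fires: b is divided by a
    return None                # elastic collision, nothing happens

random.seed(1)
soup = [random.randint(2, 1000) for _ in range(100)]

for _ in range(50_000):
    i, j = random.randrange(len(soup)), random.randrange(len(soup))
    product = react(soup[i], soup[j])
    if product is not None:
        soup[j] = product      # replace the reactant, keeping population size constant

# After many collisions most surviving molecules are prime,
# since divisions can no longer fire.
print(sorted(set(soup))[:20])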
History of artificial chemistries.
Artificial chemistries emerged as a sub-field of artificial life, in particular from strong artificial life. The idea behind this field was that if one wanted to build something alive, it had to be done by a combination of non-living entities. For instance, a cell is itself alive, and yet is a combination of non-living molecules. Artificial chemistry enlists, among others, researchers that believe in an extreme bottom-up approach to artificial life. In artificial life, bits of information were used to represent bacteria or members of a species, each of which moved, multiplied, or died in computer simulations. In artificial chemistry bits of information are used to represent starting molecules capable of reacting with one another. The field has pertained to artificial intelligence by virtue of the fact that, over billions of years, non-living matter evolved into primordial life forms which in turn evolved into intelligent life forms.
Important contributors.
The first reference to artificial chemistries comes from a technical paper written by John McCaskill.
Walter Fontana, working with Leo Buss, then took up the work, developing the AlChemy model.
The model was presented at the second International Conference on Artificial Life.
In his first papers he presented the concept of an organization, as a set of molecules that is algebraically closed and self-maintaining.
This concept was further developed by Dittrich and Speroni di Fenizio into a theory of chemical organizations.
Two main schools of artificial chemistries have been in Japan and Germany. In Japan, the main researchers have been Takashi Ikegami, Hideaki Suzuki, and Yasuhiro Suzuki.
In Germany, it was Wolfgang Banzhaf, who, together with his students Peter Dittrich and Jens Ziegler, developed various artificial chemistry models.
Their 2001 paper 'Artificial Chemistries - A Review' became a standard in the field.
Jens Ziegler, as part of his PhD thesis, proved that an artificial chemistry could be used to control a small Khepera robot.
Among other models, Peter Dittrich developed the Seceder model which is able to explain group formation in society through some simple rules. Since then he became a professor in Jena where he investigates artificial chemistries as a way to define a general theory of constructive dynamical systems.
Applications of artificial chemistries.
Artificial Chemistries are often used in the study of protobiology, in trying to bridge the gap between chemistry and biology.
A further motivation to study artificial chemistries is the interest in constructive dynamical systems. Yasuhiro Suzuki has modeled various systems such as membrane systems, signaling pathways (P53), ecosystems, and enzyme systems by using his method, abstract rewriting system on multisets (ARMS).
Artificial chemistry in popular culture.
In the 1994 science-fiction novel "Permutation City" by Greg Egan, brain-scanned emulated humans known as Copies inhabit a simulated world which includes the Autoverse, an artificial life simulator based on a cellular automaton complex enough to represent the substratum of an artificial chemistry. Tiny environments are simulated in the Autoverse and filled with populations of a simple, designed lifeform, "Autobacterium lamberti". The purpose of the Autoverse is to allow Copies to explore the life that had evolved there after it had been run on a significantly large segment of the simulated universe (referred to as "Planet Lambert").
| [
{
"math_id": 0,
"text": "\\subset"
}
] | https://en.wikipedia.org/wiki?curid=1000441 |
1000450 | Johann Heinrich von Thünen | German economist (1783–1850)
Johann Heinrich von Thünen (24 June 1783 – 22 September 1850), sometimes spelled Thuenen, was a prominent nineteenth-century economist and a native of Mecklenburg-Strelitz, now in northern Germany.
Even though he never held a professorial position, von Thunen had substantial influence on economics. He has been described as one of the founders of agricultural economics and economic geography. He made substantial contributions to economic debates on rent, land use, and wages.
Early life.
Von Thunen was born on June 24, 1783 on his father's estate Canarienhausen. His father was from an old feudal family. Von Thunen lost his father at the age of two. His mother remarried a merchant and the family moved to Hooksiel.
Von Thunen expected to take over his father's estate, which led him to study practical farming. In 1803, von Thunen published his first economic ideas.
Von Thunen was influenced by Albrecht Thaer.
Von Thunen married in 1806.
Work.
Model of agricultural land use.
Thünen was a Mecklenburg landowner, who in the first volume of his treatise "The Isolated State" (1826), developed the first serious treatment of spatial economics and economic geography, connecting it with the theory of rent. The importance lies less in the pattern of land use predicted than in its analytical approach.
Thünen developed the basics of the theory of marginal productivity in a mathematically rigorous way, summarizing it in the formula
formula_0
where R = land rent; Y = yield per unit of land; c = production expenses per unit of commodity; p = market price per unit of commodity; F = freight rate (per agricultural unit, per mile); and m = distance to market.
Thünen's model of agricultural land, created before industrialization, made the following simplifying assumptions: the city is located centrally within an "Isolated State" surrounded by an unoccupied wilderness; the land is completely flat, with no rivers or mountains; soil quality and climate are consistent throughout; farmers transport their own goods to market via oxcart, across land, directly to the central city (there are no roads); and farmers act rationally to maximize profits.
The use which a piece of land is put to is a function of the cost of transport to market and the land rent a farmer can afford to pay (determined by yield, which is held constant here).
The model generated four concentric rings of agricultural activity. Dairying and intensive farming lies closest to the city. Since vegetables, fruit, milk and other dairy products must get to market quickly, they would be produced close to the city.
Timber and firewood would be produced for fuel and building materials in the second ring. Wood was a very important fuel for heating and cooking and is very heavy and difficult to transport so it is located close to the city.
The third zone consists of extensive fields crops such as grain. Since grains last longer than dairy products and are much lighter than fuel, reducing transport costs, they can be located further from the city.
Ranching is located in the final ring. Animals can be raised far from the city because they are self-transporting. Animals can walk to the central city for sale or for butchering.
Beyond the fourth ring lies the wilderness, which is too great a distance from the central city for any type of agricultural product.
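The ring structure can be reproduced with a small computation (a sketch only; all crop parameters below are invented for illustration): at each distance every land use bids the location rent R = Y(p − c) − YFm, and the use with the highest non-negative bid occupies that ring.

```python
# Hypothetical land uses: (name, yield Y, price p, cost c, freight rate F)
USES = [
    ("dairy/intensive", 100.0, 6.0, 4.0, 0.50),
    ("timber",           40.0, 5.0, 3.0, 0.10),
    ("grain",            30.0, 4.0, 2.0, 0.04),
    ("ranching",         10.0, 5.0, 2.0, 0.005),
]

def rent(Y, p, c, F, m):
    return Y * (p - c) - Y * F * m     # Thunen's rent formula

for m in (0, 5, 15, 40, 300, 700):
    bids = [(rent(Y, p, c, F, m), name) for name, Y, p, c, F in USES]
    best_rent, best_use = max(bids)
    winner = best_use if best_rent >= 0 else "wilderness"
    print(f"distance {m:3d}: {winner} (rent {max(best_rent, 0.0):.0f})")
```

With these made-up numbers the winning use steps through dairying, timber, grain, and ranching as distance grows, with wilderness beyond the last ring, because the uses with the highest rents at the center also have the steepest transport-cost gradients.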
Thünen's rings proved especially useful to economic history, such as Fernand Braudel's "Civilization and Capitalism," untangling the economic history of Europe and European colonialism before the Industrial Revolution blurred the patterns on the ground.
In economics, Thünen rent is an economic rent created by spatial variation or location of a resource. It is "that which can be earned "above" that which can be earned at the margin of production".
Natural wage.
In the second volume of his great work "The Isolated State", Thünen developed some of the mathematical foundations of marginal productivity theory and wrote about the Natural Wage, indicated by the formula √(AP), in which A equals the value of the product of labor and capital, and P equals the subsistence of the laborer and their family. The idea he presented is that a surplus will arise on the earlier units of an investment of either capital or labor, but as time goes on the diminishing return of newer investments will mean that, if wages vary with the level of productivity, those that are early will receive a greater reward for their labor and capital. But if wage rates were determined using his formula, labor would receive a share that varies as a geometric mean: the square root of the joint product of the two factors, A and P.
This formula was so important to him that it was a dying wish of his that it be placed on his tombstone.
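In modern notation the natural wage is simply the geometric mean of A and P; a two-line check with hypothetical values:

```python
import math

A = 900.0   # value of the product of labor and capital (hypothetical)
P = 400.0   # subsistence of the laborer and family (hypothetical)
print(math.sqrt(A * P))   # natural wage = geometric mean = 600.0
```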
In "The Isolated State", he also coined the term "Grenzkosten" (marginal cost) which would later be popularized by Alfred Marshall in his "Principles of Economics".
| [
{
"math_id": 0,
"text": "R = Y(p - c) - YFm \\,"
}
] | https://en.wikipedia.org/wiki?curid=1000450 |
10005756 | Sample mean and covariance | Statistics computed from a sample of data
The sample mean (sample average) or empirical mean (empirical average), and the sample covariance or empirical covariance are statistics computed from a sample of data on one or more random variables.
The sample mean is the average value (or mean value) of a sample of numbers taken from a larger population of numbers, where "population" indicates not number of people but the entirety of relevant data, whether collected or not. A sample of 40 companies' sales from the Fortune 500 might be used for convenience instead of looking at the population, all 500 companies' sales. The sample mean is used as an estimator for the population mean, the average value in the entire population, where the estimate is more likely to be close to the population mean if the sample is large and representative. The reliability of the sample mean is estimated using the standard error, which in turn is calculated using the variance of the sample. If the sample is random, the standard error falls with the size of the sample and the sample mean's distribution approaches the normal distribution as the sample size increases.
The term "sample mean" can also be used to refer to a vector of average values when the statistician is looking at the values of several variables in the sample, e.g. the sales, profits, and employees of a sample of Fortune 500 companies. In this case, there is not just a sample variance for each variable but a sample variance-covariance matrix (or simply "covariance matrix") showing also the relationship between each pair of variables. This would be a 3×3 matrix when 3 variables are being considered. The sample covariance is useful in judging the reliability of the sample means as estimators and is also useful as an estimate of the population covariance matrix.
Due to their ease of calculation and other desirable characteristics, the sample mean and sample covariance are widely used in statistics to represent the location and dispersion of the distribution of values in the sample, and to estimate the values for the population.
Definition of the sample mean.
The sample mean is the average of the values of a variable in a sample, which is the sum of those values divided by the number of values. Using mathematical notation, if a sample of "N" observations on variable "X" is taken from the population, the sample mean is:
formula_0
Under this definition, if the sample (1, 4, 1) is taken from the population (1,1,3,4,0,2,1,0), then the sample mean is formula_1, as compared to the population mean of formula_2. Even if a sample is random, it is rarely perfectly representative, and other samples would have other sample means even if the samples were all from the same population. The sample (2, 1, 0), for example, would have a sample mean of 1.
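A sketch of these computations in plain Python:

```python
def sample_mean(xs):
    """Arithmetic mean: the sum of the values divided by their number."""
    return sum(xs) / len(xs)

print(sample_mean([1, 4, 1]))                 # 2.0  (sample mean)
print(sample_mean([1, 1, 3, 4, 0, 2, 1, 0]))  # 1.5  (population mean)
print(sample_mean([2, 1, 0]))                 # 1.0  (a different sample)
```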
If the statistician is interested in "K" variables rather than one, each observation having a value for each of those "K" variables, the overall sample mean consists of "K" sample means for individual variables. Let formula_3 be the "i"th independently drawn observation ("i" = 1, ..., "N") on the "j"th random variable ("j" = 1, ..., "K"). These observations can be arranged into "N" column vectors, each with "K" entries, with the "K"×1 column vector giving the "i"-th observations of all variables being denoted formula_4 ("i" = 1, ..., "N").
The sample mean vector formula_5 is a column vector whose "j"-th element formula_6 is the average value of the "N" observations of the "j"th variable:
formula_7
Thus, the sample mean vector contains the average of the observations for each variable, and is written
formula_8
Definition of sample covariance.
The sample covariance matrix is a "K"-by-"K" matrix formula_9 with entries
formula_10
where formula_11 is an estimate of the covariance between the jth
variable and the kth variable of the population underlying the data.
In terms of the observation vectors, the sample covariance is
formula_12
Alternatively, arranging the observation vectors as the columns of a matrix, so that
formula_13,
which is a matrix of "K" rows and "N" columns.
Here, the sample covariance matrix can be computed as
formula_14,
where formula_15 is an "N" by 1 vector of ones.
If the observations are arranged as rows instead of columns, so formula_5 is now a 1×"K" row vector and formula_16 is an "N"×"K" matrix whose column "j" is the vector of "N" observations on variable "j", then applying transposes
in the appropriate places yields
formula_17
Like covariance matrices for random vectors, sample covariance matrices are positive semi-definite. To prove this, note that for any matrix formula_18 the matrix formula_19 is positive semi-definite. Furthermore, a covariance matrix is positive definite if and only if the rank of the formula_20 vectors is K.
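As a rough sketch (not part of the original article), the computations above can be cross-checked in NumPy; the random data, the seed, and the dimensions are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 3, 5
F = rng.normal(size=(K, N))           # K variables, N observation column vectors

xbar = F.mean(axis=1, keepdims=True)  # sample mean vector (K x 1)
D = F - xbar                          # deviations from the mean
Q = D @ D.T / (N - 1)                 # sample covariance matrix (K x K)

# Agrees with NumPy's built-in estimator, which also divides by N - 1:
assert np.allclose(Q, np.cov(F))

# Positive semi-definiteness: all eigenvalues are >= 0 (up to rounding).
assert np.all(np.linalg.eigvalsh(Q) >= -1e-12)
```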
Unbiasedness.
The sample mean and the sample covariance matrix are unbiased estimates of the mean and the covariance matrix of the random vector formula_21, a row vector whose "j"th element ("j = 1, ..., K") is one of the random variables. The sample covariance matrix has formula_22 in the denominator rather than formula_23 due to a variant of Bessel's correction: In short, the sample covariance relies on the difference between each observation and the sample mean, but the sample mean is slightly correlated with each observation since it is defined in terms of all observations. If the population mean formula_24 is known, the analogous unbiased estimate
formula_25
using the population mean, has formula_23 in the denominator. This is an example of why in probability and statistics it is essential to distinguish between random variables (upper case letters) and realizations of the random variables (lower case letters).
The maximum likelihood estimate of the covariance
formula_26
for the Gaussian distribution case has "N" in the denominator as well. The ratio of 1/"N" to 1/("N" − 1) approaches 1 for large "N", so the maximum likelihood estimate approximately equals the unbiased estimate when the sample is large.
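For illustration, this choice of denominator corresponds to NumPy's `ddof` argument; a minimal sketch using the small sample from earlier:

```python
import numpy as np

x = np.array([1.0, 4.0, 1.0])
N = len(x)

unbiased = np.var(x, ddof=1)   # divides by N - 1 (Bessel's correction)
mle      = np.var(x, ddof=0)   # divides by N (maximum likelihood estimate)

assert np.isclose(mle, unbiased * (N - 1) / N)
print(unbiased, mle)           # 3.0  2.0
```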
Distribution of the sample mean.
For each random variable, the sample mean is a good estimator of the population mean, where a "good" estimator is defined as being efficient and unbiased. Of course the estimator will likely not be the true value of the population mean since different samples drawn from the same distribution will give different sample means and hence different estimates of the true mean. Thus the sample mean is a random variable, not a constant, and consequently has its own distribution. For a random sample of "N" observations on the "j"th random variable, the sample mean's distribution itself has mean equal to the population mean formula_27 and variance equal to formula_28, where formula_29 is the population variance.
The arithmetic mean of a population, or population mean, is often denoted "μ". The sample mean formula_30 (the arithmetic mean of a sample of values drawn from the population) makes a good estimator of the population mean, as its expected value is equal to the population mean (that is, it is an unbiased estimator). The sample mean is a random variable, not a constant, since its calculated value will randomly differ depending on which members of the population are sampled, and consequently it will have its own distribution. For a random sample of "n" independent observations, the expected value of the sample mean is
formula_31
and the variance of the sample mean is
formula_32
If the samples are not independent, but correlated, then special care has to be taken in order to avoid the problem of pseudoreplication.
If the population is normally distributed, then the sample mean is normally distributed as follows:
formula_33
If the population is not normally distributed, the sample mean is nonetheless approximately normally distributed if "n" is large and "σ"2/"n" < +∞. This is a consequence of the central limit theorem.
Weighted samples.
In a weighted sample, each vector formula_34 (each set of single observations on each of the "K" random variables) is assigned a weight formula_35. Without loss of generality, assume that the weights are normalized:
formula_36
(If they are not, divide the weights by their sum).
Then the weighted mean vector formula_37 is given by
formula_38
and the elements formula_11 of the weighted covariance matrix formula_39 are
formula_40
If all weights are the same, formula_41, the weighted mean and covariance reduce to the (unbiased) sample mean and covariance mentioned above.
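A minimal sketch of the weighted estimators above; the observations and weights are invented for illustration:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0]])   # N = 3 observations (rows) on K = 2 variables
w = np.array([1.0, 1.0, 2.0])
w = w / w.sum()              # normalize the weights so they sum to 1

xbar = w @ X                 # weighted mean vector
D = X - xbar                 # deviations from the weighted mean

# Weighted covariance: (1 / (1 - sum w_i^2)) * sum_i w_i d_i d_i^T
Q = (D.T * w) @ D / (1.0 - np.sum(w**2))

print(xbar)
print(Q)
```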
Criticism.
The sample mean and sample covariance are not robust statistics, meaning that they are sensitive to outliers. As robustness is often a desired trait, particularly in real-world applications, robust alternatives may prove desirable, notably quantile-based statistics such as the sample median for location, and interquartile range (IQR) for dispersion. Other alternatives include trimming and Winsorising, as in the trimmed mean and the Winsorized mean.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\bar{X}=\\frac{1}{N}\\sum_{i=1}^{N}X_{i}."
},
{
"math_id": 1,
"text": "\\bar{x} = (1+4+1)/3 = 2"
},
{
"math_id": 2,
"text": "\\mu = (1+1+3+4+0+2+1+0) /8 = 12/8 = 1.5"
},
{
"math_id": 3,
"text": "x_{ij}"
},
{
"math_id": 4,
"text": "\\mathbf{x}_i"
},
{
"math_id": 5,
"text": "\\mathbf{\\bar{x}}"
},
{
"math_id": 6,
"text": "\\bar{x}_{j}"
},
{
"math_id": 7,
"text": " \\bar{x}_{j}=\\frac{1}{N} \\sum_{i=1}^{N} x_{ij},\\quad j=1,\\ldots,K. "
},
{
"math_id": 8,
"text": " \\mathbf{\\bar{x}}=\\frac{1}{N}\\sum_{i=1}^{N}\\mathbf{x}_i = \\begin{bmatrix}\n\\bar{x}_1 \\\\\n\\vdots \\\\\n\\bar{x}_j \\\\ \n\\vdots \\\\\n\\bar{x}_K\n\\end{bmatrix} "
},
{
"math_id": 9,
"text": "\\textstyle \\mathbf{Q}=\\left[ q_{jk}\\right] "
},
{
"math_id": 10,
"text": " q_{jk}=\\frac{1}{N-1}\\sum_{i=1}^{N}\\left( x_{ij}-\\bar{x}_j \\right) \\left( x_{ik}-\\bar{x}_k \\right), "
},
{
"math_id": 11,
"text": "q_{jk}"
},
{
"math_id": 12,
"text": "\\mathbf{Q} = {1 \\over {N-1}}\\sum_{i=1}^N (\\mathbf{x}_i.-\\mathbf{\\bar{x}}) (\\mathbf{x}_i.-\\mathbf{\\bar{x}})^\\mathrm{T},"
},
{
"math_id": 13,
"text": "\\mathbf{F} = \\begin{bmatrix}\\mathbf{x}_1 & \\mathbf{x}_2 & \\dots & \\mathbf{x}_N \\end{bmatrix}"
},
{
"math_id": 14,
"text": "\\mathbf{Q} = \\frac{1}{N-1}( \\mathbf{F} - \\mathbf{\\bar{x}} \\,\\mathbf{1}_N^\\mathrm{T} ) ( \\mathbf{F} - \\mathbf{\\bar{x}} \\,\\mathbf{1}_N^\\mathrm{T} )^\\mathrm{T}"
},
{
"math_id": 15,
"text": "\\mathbf{1}_N"
},
{
"math_id": 16,
"text": "\\mathbf{M}=\\mathbf{F}^\\mathrm{T}"
},
{
"math_id": 17,
"text": "\\mathbf{Q} = \\frac{1}{N-1}( \\mathbf{M} - \\mathbf{1}_N \\mathbf{\\bar{x}} )^\\mathrm{T} ( \\mathbf{M} - \\mathbf{1}_N \\mathbf{\\bar{x}} )."
},
{
"math_id": 18,
"text": "\\mathbf{A}"
},
{
"math_id": 19,
"text": "\\mathbf{A}^T\\mathbf{A}"
},
{
"math_id": 20,
"text": "\\mathbf{x}_i.-\\mathbf{\\bar{x}}"
},
{
"math_id": 21,
"text": "\\textstyle \\mathbf{X}"
},
{
"math_id": 22,
"text": "\\textstyle N-1"
},
{
"math_id": 23,
"text": "\\textstyle N"
},
{
"math_id": 24,
"text": "\\operatorname{E}(\\mathbf{X})"
},
{
"math_id": 25,
"text": " q_{jk}=\\frac{1}{N}\\sum_{i=1}^N \\left( x_{ij}-\\operatorname{E}(X_j)\\right) \\left( x_{ik}-\\operatorname{E}(X_k)\\right), "
},
{
"math_id": 26,
"text": " q_{jk}=\\frac{1}{N}\\sum_{i=1}^N \\left( x_{ij}-\\bar{x}_j \\right) \\left( x_{ik}-\\bar{x}_k \\right) "
},
{
"math_id": 27,
"text": "E(X_j)"
},
{
"math_id": 28,
"text": " \\sigma^2_j/N"
},
{
"math_id": 29,
"text": "\\sigma^2_j"
},
{
"math_id": 30,
"text": " \\bar{x}"
},
{
"math_id": 31,
"text": " \\operatorname E (\\bar{x}) = \\mu "
},
{
"math_id": 32,
"text": " \\operatorname{var}(\\bar{x}) = \\frac{\\sigma^2} n. "
},
{
"math_id": 33,
"text": "\\bar{x} \\thicksim N\\left\\{\\mu, \\frac{\\sigma^2}{n}\\right\\}."
},
{
"math_id": 34,
"text": "\\textstyle \\textbf{x}_{i}"
},
{
"math_id": 35,
"text": "\\textstyle w_i \\geq0"
},
{
"math_id": 36,
"text": " \\sum_{i=1}^{N}w_i = 1. "
},
{
"math_id": 37,
"text": "\\textstyle \\mathbf{\\bar{x}}"
},
{
"math_id": 38,
"text": " \\mathbf{\\bar{x}}=\\sum_{i=1}^N w_i \\mathbf{x}_i."
},
{
"math_id": 39,
"text": "\\textstyle \\mathbf{Q}"
},
{
"math_id": 40,
"text": " q_{jk}=\\frac{1}{1-\\sum_{i=1}^{N}w_i^2}\n\\sum_{i=1}^N w_i \\left( x_{ij}-\\bar{x}_j \\right) \\left( x_{ik}-\\bar{x}_k \\right) . "
},
{
"math_id": 41,
"text": "\\textstyle w_{i}=1/N"
}
] | https://en.wikipedia.org/wiki?curid=10005756 |
10006830 | Disk loading | Characteristic of rotors/propellers
In fluid dynamics, disk loading or disc loading is the average pressure change across an actuator disk, such as an airscrew. Airscrews with a relatively low disk loading are typically called rotors, including helicopter main rotors and tail rotors; propellers typically have a higher disk loading. The V-22 Osprey tiltrotor aircraft has a high disk loading relative to a helicopter in the hover mode, but a relatively low disk loading in fixed-wing mode compared to a turboprop aircraft.
Rotors.
Disk loading of a hovering helicopter is the ratio of its weight to the
total main rotor disk area. It is determined by dividing
the total helicopter weight by the rotor disk area, which is the area swept by the blades of a rotor. Disk area can be found by using the span of one rotor blade as the radius of a circle and then determining the area the blades encompass during a complete rotation. When a helicopter is being maneuvered, its disk loading changes. The higher the loading, the more power needed to maintain rotor speed. A low disk loading is a direct indicator of high lift thrust efficiency.
Increasing the weight of a helicopter increases disk loading. For a given weight, a helicopter with shorter rotors will have higher disk loading, and will require more engine power to hover. A low disk loading improves autorotation performance in rotorcraft. Typically, an autogyro (or gyroplane) has a lower rotor disk loading than a helicopter, which provides a slower rate of descent in autorotation.
Propellers.
In reciprocating and propeller engines, disk loading can be defined as the ratio between propeller-induced velocity and freestream velocity. Lower disk loading will increase efficiency, so it is generally desirable to have larger propellers from an efficiency standpoint. Maximum efficiency is reduced as disk loading is increased due to the rotating slipstream; using contra-rotating propellers can alleviate this problem allowing high maximum efficiency even at relatively high disc loading.
The Airbus A400M fixed-wing aircraft has a very high disk loading on its propellers.
Theory.
The "momentum theory" or "disk actuator theory" describes a mathematical model of an ideal actuator disk, developed by W.J.M. Rankine (1865), Alfred George Greenhill (1888) and R.E. Froude (1889). The helicopter rotor is modeled as an infinitesimally thin disk with an infinite number of blades that induce a constant pressure jump over the disk area and along the axis of rotation. For a helicopter that is hovering, the aerodynamic force is vertical and exactly balances the helicopter weight, with no lateral force.
The downward force on the air flowing through the rotor is accompanied by an upward force on the helicopter rotor disk. The downward force produces a downward acceleration of the air, increasing its kinetic energy. This energy transfer from the rotor to the air is the induced power loss of the rotary wing, which is analogous to the lift-induced drag of a fixed-wing aircraft.
Conservation of linear momentum relates the induced velocity downstream in the far wake field to the rotor thrust per unit of mass flow. Conservation of energy considers these parameters as well as the induced velocity at the rotor disk. Conservation of mass relates the mass flow to the induced velocity. The momentum theory applied to a helicopter gives the relationship between induced power loss and rotor thrust, which can be used to analyze the performance of the aircraft. Viscosity and compressibility of the air, frictional losses, and rotation of the slipstream in the wake are not considered.
Momentum theory.
For an actuator disk of area formula_0, with uniform induced velocity formula_1 at the rotor disk, and with formula_2 as the density of air, the mass flow rate formula_3 through the disk area is:
formula_4
By conservation of mass, the mass flow rate is constant across the slipstream both upstream and downstream of the disk (regardless of velocity). Since the flow far upstream of a helicopter in a level hover is at rest, the starting velocity, momentum, and energy are zero. If the homogeneous slipstream far downstream of the disk has velocity formula_5, by conservation of momentum the total thrust formula_6 developed over the disk is equal to the rate of change of momentum, which assuming zero starting velocity is:
formula_7
By conservation of energy, the work done by the rotor must equal the energy change in the slipstream:
formula_8
Substituting for formula_6 and eliminating terms, we get:
formula_9
So the velocity of the slipstream far downstream of the disk is twice the velocity at the disk, which is the same result as for an elliptically loaded wing predicted by lifting-line theory.
Bernoulli's principle.
To compute the disk loading using Bernoulli's principle, we assume the pressure in the slipstream far downstream is equal to the starting pressure formula_10, which is equal to the atmospheric pressure. From the starting point to the disk we have:
formula_11
Between the disk and the distant wake, we have:
formula_12
Combining equations, the disk loading formula_13 is:
formula_14
The total pressure in the distant wake is:
formula_15
So the pressure change across the disk is equal to the disk loading. Above the disk the pressure change is:
formula_16
Below the disk, the pressure change is:
formula_17
The pressure along the slipstream is always falling downstream, except for the positive pressure jump across the disk.
Power required.
From the momentum theory, thrust is:
formula_18
The induced velocity is:
formula_19
Where formula_20 is the disk loading as before, and the power formula_21 required in hover (in the ideal case) is:
formula_22
Therefore, the induced velocity can be expressed as:
formula_23
So, the induced velocity is inversely proportional to the power loading formula_24.
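As a sketch of these relations, the following computes disk loading, induced velocity, far-wake velocity, and ideal hover power; the thrust, rotor radius, and air density are illustrative assumptions rather than data for any actual aircraft:

```python
import math

rho = 1.225        # air density at sea level, kg/m^3
T = 50_000.0       # thrust = weight in hover, N (illustrative)
R = 8.0            # rotor radius, m (illustrative)

A = math.pi * R**2                         # actuator disk area
disk_loading = T / A                       # N/m^2, equal to the pressure jump across the disk
v = math.sqrt(disk_loading / (2 * rho))    # induced velocity at the disk
w = 2 * v                                  # slipstream velocity far downstream
P = T * v                                  # ideal induced power required to hover

print(f"disk loading       = {disk_loading:.1f} N/m^2")
print(f"induced velocity   = {v:.2f} m/s, far wake = {w:.2f} m/s")
print(f"ideal hover power  = {P / 1000:.1f} kW")
```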
References.
<templatestyles src="Reflist/styles.css" />
This article incorporates public domain material from | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "v"
},
{
"math_id": 2,
"text": "\\rho"
},
{
"math_id": 3,
"text": "\\dot{m}"
},
{
"math_id": 4,
"text": "\\dot m = \\rho \\, A \\, v."
},
{
"math_id": 5,
"text": "w"
},
{
"math_id": 6,
"text": "T"
},
{
"math_id": 7,
"text": " T= \\dot m\\, w."
},
{
"math_id": 8,
"text": " T\\, v= \\tfrac12\\, \\dot m\\, {w^2}."
},
{
"math_id": 9,
"text": " v= \\tfrac12\\, w."
},
{
"math_id": 10,
"text": "p_0"
},
{
"math_id": 11,
"text": " p_0 =\\, p_1 +\\ \\tfrac12\\, \\rho\\, v^2."
},
{
"math_id": 12,
"text": " p_2 +\\ \\tfrac12\\, \\rho\\, v^2 =\\, p_0 +\\ \\tfrac12\\, \\rho\\, w^2."
},
{
"math_id": 13,
"text": "T /\\, A"
},
{
"math_id": 14,
"text": "\\frac {T}{A} = p_2 -\\, p_1 = \\tfrac12\\, \\rho\\, w^2"
},
{
"math_id": 15,
"text": " p_0 + \\tfrac12\\, \\rho\\, w^2 =\\, p_0 + \\frac {T}{A}."
},
{
"math_id": 16,
"text": " p_0 - \\tfrac12\\, \\rho\\, v^2 =\\, p_0 -\\, \\tfrac14 \\frac {T}{A}."
},
{
"math_id": 17,
"text": " p_0 + \\tfrac32\\, \\rho\\, v^2 =\\, p_0 +\\, \\tfrac34 \\frac {T}{A}."
},
{
"math_id": 18,
"text": " T = \\dot m\\, w = \\dot m\\, (2 v) = 2 \\rho\\, A\\, v^2."
},
{
"math_id": 19,
"text": "v = \\sqrt{\\frac{T}{A} \\cdot \\frac{1}{2 \\rho}}."
},
{
"math_id": 20,
"text": "T/A"
},
{
"math_id": 21,
"text": "P"
},
{
"math_id": 22,
"text": "P = T v = T \\sqrt{\\frac{T}{A} \\cdot \\frac{1}{2 \\rho}}."
},
{
"math_id": 23,
"text": " v = \\frac{P}{T} = \\left [ \\frac{T}{P} \\right ] ^{-1}."
},
{
"math_id": 24,
"text": "T/P"
}
] | https://en.wikipedia.org/wiki?curid=10006830 |
10008 | Electrode | Electrical conductor used to make contact with nonmetallic parts of a circuit
An electrode is an electrical conductor used to make contact with a nonmetallic part of a circuit (e.g. a semiconductor, an electrolyte, a vacuum or air). Electrodes are essential parts of batteries that can consist of a variety of materials (chemicals) depending on the type of battery.
The electrophore, invented by Johan Wilcke, was an early version of an electrode used to study static electricity.
Anode and cathode in electrochemical cells.
Electrodes are an essential part of any battery. The first electrochemical battery was devised by Alessandro Volta and was aptly named the Voltaic cell. This battery consisted of a stack of copper and zinc electrodes separated by brine-soaked paper disks. Due to fluctuation in the voltage provided by the voltaic cell, it was not very practical. The first practical battery was invented in 1836 and named the Daniell cell after John Frederic Daniell. It still made use of the zinc–copper electrode combination. Since then, many more batteries have been developed using various materials. All of them still rely on two electrodes, an anode and a cathode.
Anode (-).
'Anode' was coined by William Whewell at Michael Faraday's request, derived from the Greek words ἄνω (ano), 'upwards' and ὁδός (hodós), 'a way'. The anode is the electrode through which the conventional current enters from the electrical circuit of an electrochemical cell (battery) into the non-metallic cell. The electrons then flow to the other side of the battery. Benjamin Franklin surmised that the electrical flow moved from positive to negative. The electrons flow away from the anode and the conventional current flows towards it; from both it can be concluded that the charge of the anode is negative. The electrons entering the anode come from the oxidation reaction that takes place next to it.
Cathode (+).
The cathode is in many ways the opposite of the anode. The name (also coined by Whewell) comes from the Greek words κάτω (kato), 'downwards' and ὁδός (hodós), 'a way'. It is the positive electrode, meaning the electrons flow from the electrical circuit through the cathode into the non-metallic part of the electrochemical cell. At the cathode, the reduction reaction takes place: the electrons arriving from the wire connected to the cathode are absorbed by the oxidizing agent.
Primary cell.
A primary cell is a battery designed to be used once and then discarded, because the electrochemical reactions taking place at its electrodes are not reversible. An example of a primary cell is the discardable alkaline battery commonly used in flashlights, which consists of a zinc anode and a manganese dioxide cathode; ZnO is formed as the cell discharges.
The half-reactions are:
Zn(s) + 2OH−(aq) → ZnO(s) + H2O(l) + 2e− formula_0 [E0oxidation = +1.28 V]
2MnO2(s) + H2O(l) + 2e− → Mn2O3(s) + 2OH−(aq)formula_1 [E0reduction = +0.15 V]
Overall reaction:
Zn(s) + 2MnO2(s) ⇌ ZnO(s) + Mn2O3(s)formula_0 [E0total = +1.43 V]
The ZnO is prone to clumping and will give less efficient discharge if recharged. It is possible to recharge these batteries, but doing so is advised against by manufacturers due to safety concerns. Other primary cells include zinc–carbon, zinc–chloride, and lithium iron disulfide.
Secondary cell.
Contrary to the primary cell, a secondary cell can be recharged. The first was the lead–acid battery, invented in 1859 by French physicist Gaston Planté. This type of battery is still widely used, for example in automobiles. The cathode consists of lead dioxide (PbO2) and the anode of solid lead. Other commonly used rechargeable batteries are nickel–cadmium, nickel–metal hydride, and lithium-ion; the last of these is explained more thoroughly in this article due to its importance.
Marcus' theory of electron transfer.
Marcus theory, originally developed by Nobel laureate Rudolph A. Marcus, explains the rate at which an electron can move from one chemical species to another; for this article, this can be seen as the electron 'jumping' from the electrode to a species in the solvent, or vice versa.
We can represent the problem as calculating the transfer rate for the transfer of an electron from donor to an acceptor
D + A → D+ + A−
The potential energy of the system is a function of the translational, rotational, and vibrational coordinates of the reacting species and the molecules of the surrounding medium, collectively called the reaction coordinates. The abscissa of the corresponding energy diagram represents these. From the classical electron transfer theory, the expression of the reaction rate constant (probability of reaction) can be calculated, if a non-adiabatic process and parabolic potential energy are assumed, by finding the point of intersection (Qx). One important point, noted by Marcus when he developed the theory, is that the electron transfer must abide by the law of conservation of energy and the Franck–Condon principle.
Doing this and then rearranging leads to the expression of the free energy of activation (formula_2) in terms of the overall free energy of the reaction (formula_3).
formula_4
In which formula_5 is the reorganisation energy.
Filling this result in the classically derived Arrhenius equation
formula_6
leads to
formula_7
Here A is the pre-exponential factor, which is usually determined experimentally, although a semi-classical derivation provides more information, as explained below.
This classically derived result qualitatively reproduced observations of a maximum electron transfer rate under the conditions formula_8. For a more extensive mathematical treatment one could read the paper by Newton. For an interpretation of this result and a closer look at the physical meaning of formula_9, one can read the paper by Marcus.
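A small numerical sketch of the classical rate expression above; λ, the prefactor A, and the driving-force values are illustrative assumptions, chosen only to show the rate peaking where −ΔG⁰ = λ and falling again beyond it (the Marcus inverted region):

```python
import math

kT = 0.0257   # k*T at room temperature, in eV
lam = 0.5     # reorganisation energy lambda, eV (illustrative)
A = 1.0e12    # pre-exponential factor, 1/s (illustrative)

def marcus_rate(dG0):
    """Classical Marcus rate k = A * exp(-(dG0 + lambda)^2 / (4 lambda kT))."""
    return A * math.exp(-(dG0 + lam) ** 2 / (4 * lam * kT))

for dG0 in (0.0, -0.25, -0.5, -0.75, -1.0):
    print(f"dG0 = {dG0:+.2f} eV  ->  k = {marcus_rate(dG0):.3e} 1/s")
# The rate peaks at dG0 = -lambda and falls for more negative dG0
# (the inverted region).
```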
The situation at hand can be described more accurately using the displaced harmonic oscillator model, in which quantum tunneling is allowed. This is needed to explain why electron transfers still occur even at temperatures near zero kelvin, in contradiction to the classical theory.
Without going into too much detail on how the derivation is done, it rests on using Fermi's golden rule from time-dependent perturbation theory with the full Hamiltonian of the system. It is possible to look at the overlap in the wavefunctions of both the reactants and the products (the right and the left side of the chemical reaction) and therefore to identify when their energies are the same and electron transfer is allowed. As touched on before, this must happen because only then is conservation of energy abided by. Skipping over a few mathematical steps, the probability of electron transfer can be calculated (albeit with some difficulty) using the following formula
formula_10
Here formula_11 is the electronic coupling constant describing the interaction between the two states (reactants and products), and formula_12 is the line shape function. Taking the classical limit of this expression, meaning formula_13, and making some substitutions, an expression is obtained that is very similar to the classically derived formula, as expected.
formula_14
The main difference is that the pre-exponential factor is now described by more physical parameters instead of the experimental factor formula_15. One is once again referred to the sources listed below for a more in-depth and rigorous mathematical derivation and interpretation.
Efficiency.
The physical properties of electrodes are mainly determined by the material of the electrode and the topology of the electrode. The properties required depend on the application and therefore there are many kinds of electrodes in circulation. The defining property for a material to be used as an electrode is that it be conductive. Any conducting material such as metals, semiconductors, graphite or conductive polymers can therefore be used as an electrode. Often electrodes consist of a combination of materials, each with a specific task. Typical constituents are the active materials which serve as the particles which oxidate or reduct, conductive agents which improve the conductivity of the electrode and binders which are used to contain the active particles within the electrode. The efficiency of electrochemical cells is judged by a number of properties, important quantities are the self-discharge time, the discharge voltage and the cycle performance. The physical properties of the electrodes play an important role in determining these quantities. Important properties of the electrodes are: the electrical resistivity, the specific heat capacity (c_p), the electrode potential and the hardness. Of course, for technological applications, the cost of the material is also an important factor. The values of these properties at room temperature (T = 293 K) for some commonly used materials are listed in the table below.
Surface effects.
The surface topology of the electrode plays an important role in determining the efficiency of an electrode. The efficiency of the electrode can be reduced due to contact resistance. To create an efficient electrode it is therefore important to design it such that it minimizes the contact resistance.
Manufacturing.
The production of electrodes for Li-ion batteries is done in various steps as follows:
Structure of the electrode.
For a given selection of constituents of the electrode, the final efficiency is determined by the internal structure of the electrode. The important factors in the internal structure in determining the performance of the electrode are:
These properties can be influenced in the production of the electrodes in a number of manners. The most important step in the manufacturing of the electrodes is creating the electrode slurry. As can be seen above, the important properties of the electrode all have to do with the even distribution of the components of the electrode. Therefore, it is very important that the electrode slurry be as homogeneous as possible. Multiple procedures have been developed to improve this mixing stage and current research is still being done.
Electrodes in lithium ion batteries.
A modern application of electrodes is in lithium-ion batteries (Li-ion batteries).
A Li-ion battery is an example of a secondary cell since it is rechargeable. It can act as either a galvanic or an electrolytic cell. Li-ion batteries use lithium ions as the solute in the electrolyte, dissolved in an organic solvent. Lithium electrodes were first studied by Gilbert N. Lewis and Frederick G. Keyes in 1913. In the following century these electrodes were used to create and study the first Li-ion batteries. Li-ion batteries are very popular due to their great performance. Applications include mobile phones and electric cars. Due to their popularity, much research is being done to reduce the cost and increase the safety of Li-ion batteries. An integral part of Li-ion batteries are their anodes and cathodes; therefore, much research is being done into increasing the efficiency and safety, and reducing the cost, of these electrodes specifically.
Cathodes.
In Li-ion batteries, the cathode consists of an intercalated lithium compound (a layered material consisting of layers of molecules composed of lithium and other elements). A common element which makes up part of the molecules in the compound is cobalt. Another frequently used element is manganese. The best choice of compound usually depends on the application of the battery. Advantages of cobalt-based compounds over manganese-based compounds are their high specific heat capacity, high volumetric heat capacity, low self-discharge rate, high discharge voltage, and high cycle durability. There are, however, also drawbacks to using cobalt-based compounds, such as their high cost and low thermostability. Manganese has similar advantages and a lower cost, but there are some problems associated with its use. The main problem is that manganese tends to dissolve into the electrolyte over time. For this reason, cobalt is still the most common element used in the lithium compounds. Much research is being done into finding new materials which can be used to create cheaper and longer-lasting Li-ion batteries.
Anodes.
The anodes used in mass-produced Li-ion batteries are either carbon based (usually graphite) or made out of spinel lithium titanate (Li4Ti5O12). Graphite anodes have been successfully implemented in many modern commercially available batteries due to their cheap price, longevity, and high energy density. However, they present issues of dendrite growth, with risks of shorting the battery and posing a safety issue. Li4Ti5O12 has the second largest market share of anodes, due to its stability and good rate capability, but with challenges such as low capacity. During the early 2000s, silicon anode research began picking up pace, becoming one of the decade's most promising candidates for future lithium-ion battery anodes. Silicon has one of the highest gravimetric capacities when compared to graphite and Li4Ti5O12, as well as a high volumetric one. Furthermore, silicon has the advantage of operating under a reasonable open circuit voltage without parasitic lithium reactions. However, silicon anodes have a major issue of volumetric expansion during lithiation of around 360%. This expansion may pulverize the anode, resulting in poor performance. To fix this problem, scientists looked into varying the dimensionality of the Si. Many studies have investigated Si nanowires, Si tubes, and Si sheets. As a result, composite hierarchical Si anodes have become the major technology for future applications in lithium-ion batteries. In the early 2020s, the technology has been reaching commercial levels, with factories being built for mass production of anodes in the United States. Furthermore, metallic lithium is another possible candidate for the anode. It boasts a higher specific capacity than silicon but comes with the drawback of working with the highly unstable metallic lithium. Similarly to graphite anodes, dendrite formation is another major limitation of metallic lithium, with the solid electrolyte interphase being a major design challenge. In the end, if stabilized, metallic lithium would be able to produce batteries that hold the most charge while being the lightest.
Mechanical properties.
A common failure mechanism of batteries is mechanical shock, which breaks either the electrode or the system's container, leading to poor conductivity and electrolyte leakage. However, the relevance of the mechanical properties of electrodes goes beyond resistance to collisions. During standard operation, the incorporation of ions into electrodes leads to a change in volume. This is well exemplified by Si electrodes in lithium-ion batteries expanding around 300% during lithiation. Such a change may lead to deformations in the lattice and, therefore, stresses in the material. The origin of stresses may be due to geometric constraints in the electrode or inhomogeneous plating of the ion. This phenomenon is very concerning as it may lead to electrode fracture and performance loss. Thus, mechanical properties are crucial to enable the development of new electrodes for long-lasting batteries. A possible strategy for measuring the mechanical behavior of electrodes during operation is nanoindentation. The method is able to analyze how stresses evolve during the electrochemical reactions, making it a valuable tool in evaluating possible pathways for coupling mechanical behavior and electrochemistry.
More than just affecting the electrode's morphology, stresses are also able to impact electrochemical reactions. While the chemical driving forces are usually higher in magnitude than the mechanical energies, this is not true for Li-ion batteries. A study by Dr. Larché established a direct relation between the applied stress and the chemical potential of the electrode. Though it neglects multiple variables such as the variation of elastic constraints, it subtracts from the total chemical potential the elastic energy induced by the stress.
formula_16
In this equation, μ represents the chemical potential, with μ° being its reference value. T stands for the temperature and k the Boltzmann constant. The term γ inside the logarithm is the activity and x is the ratio of the ion to the total composition of the electrode. The novel term Ω is the partial molar volume of the ion in the host and σ corresponds to the mean stress felt by the system. The result of this equation is that diffusion, which is dependent on chemical potential, is impacted by the added stress, which therefore changes the battery's performance. Furthermore, mechanical stresses may also impact the electrode's solid-electrolyte-interphase layer; this interface regulates ion and charge transfer and can be degraded by stress. Thus, more ions in the solution will be consumed to reform it, diminishing the overall efficiency of the system.
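A sketch evaluating this relation for placeholder numbers; every value below is an illustrative assumption, chosen only to show that at stresses around 10⁸–10⁹ Pa the mechanical term Ωσ becomes comparable to the thermal term kT·log(γx):

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0          # temperature, K
mu0 = 0.0          # reference chemical potential, J (placeholder)
gamma = 1.0        # activity coefficient (ideal solution, placeholder)
x = 0.5            # ion-to-host composition ratio (placeholder)
Omega = 9.0e-30    # partial molar volume of the ion, m^3 (placeholder)

def chemical_potential(sigma):
    """mu = mu0 + k*T*log(gamma*x) + Omega*sigma, evaluated per particle."""
    return mu0 + k * T * math.log(gamma * x) + Omega * sigma

for sigma in (0.0, 1e8, 1e9):  # mean stress, Pa
    print(f"sigma = {sigma:.0e} Pa -> mu = {chemical_potential(sigma):+.3e} J")
# At ~1 GPa the stress term Omega*sigma (~9e-21 J) rivals kT*log(gamma*x).
```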
Other anodes and cathodes.
In a vacuum tube or a semiconductor having polarity (diodes, electrolytic capacitors) the anode is the positive (+) electrode and the cathode the negative (−). The electrons enter the device through the cathode and exit the device through the anode. Many devices have other electrodes to control operation, e.g., base, gate, control grid.
In a three-electrode cell, a counter electrode, also called an auxiliary electrode, is used only to make a connection to the electrolyte so that a current can be applied to the working electrode. The counter electrode is usually made of an inert material, such as a noble metal or graphite, to keep it from dissolving.
Welding electrodes.
In arc welding, an electrode is used to conduct current through a workpiece to fuse two pieces together. Depending upon the process, the electrode is either consumable, in the case of gas metal arc welding or shielded metal arc welding, or non-consumable, such as in gas tungsten arc welding. For a direct current system, the weld rod or stick may be a cathode for a filling type weld or an anode for other welding processes. For an alternating current arc welder, the welding electrode would not be considered an anode or cathode.
Alternating current electrodes.
For electrical systems which use alternating current, the electrodes are the connections from the circuitry to the object to be acted upon by the electric current but are not designated anode or cathode because the direction of flow of the electrons changes periodically, usually many times per second.
Chemically modified electrodes.
Chemically modified electrodes are electrodes that have their surfaces chemically modified to change the electrode's physical, chemical, electrochemical, optical, electrical, and transportive properties. These electrodes are used for advanced purposes in research and investigation.
Uses.
Electrodes are used to provide current through nonmetal objects to alter them in numerous ways and to measure conductivity for numerous purposes. Examples include:
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\qquad \\qquad"
},
{
"math_id": 1,
"text": "\\qquad"
},
{
"math_id": 2,
"text": "\\Delta G^{\\dagger}"
},
{
"math_id": 3,
"text": "\\Delta G^{0}"
},
{
"math_id": 4,
"text": "\\Delta G^{\\dagger} = \\frac{1}{4 \\lambda} (\\Delta G^{0} + \\lambda)^{2} "
},
{
"math_id": 5,
"text": " \\lambda "
},
{
"math_id": 6,
"text": "k = A\\, \\exp\\left(\\frac{- \\Delta G^{\\dagger}}{kT}\\right),"
},
{
"math_id": 7,
"text": "k = A\\, \\exp\\left[{\\frac {-(\\Delta G^{0} + \\lambda)^{2}}{4 \\lambda k T}}\\right]"
},
{
"math_id": 8,
"text": "\\Delta G^{\\dagger} = \\lambda"
},
{
"math_id": 9,
"text": "\\lambda"
},
{
"math_id": 10,
"text": "w_{ET}= \\frac{|J|^{2}}{\\hbar^{2}}\\int_{-\\infty}^{+\\infty}dt\\, e^{-i \\Delta Et / \\hbar - g (t)}"
},
{
"math_id": 11,
"text": " J "
},
{
"math_id": 12,
"text": " g(t) "
},
{
"math_id": 13,
"text": " \\hbar \\omega \\ll k T "
},
{
"math_id": 14,
"text": "w_{ET} = \\frac{|J|^{2}}{\\hbar} \\sqrt{\\frac{\\pi}{\\lambda k T}}\\exp\\left[\\frac {- ( \\Delta E + \\lambda )^{2}} {4 \\lambda k T}\\right]"
},
{
"math_id": 15,
"text": " A "
},
{
"math_id": 16,
"text": "\\mu = \\mu^o + k\\cdot T\\cdot\\log (\\gamma\\cdot x) + \\Omega \\cdot \\sigma"
}
] | https://en.wikipedia.org/wiki?curid=10008 |
1001293 | Irreducibility (mathematics) | In mathematics, the concept of irreducibility is used in several ways.
<templatestyles src="Dmbox/styles.css" />
Index of articles associated with the same name
This includes a list of related items that share the same name (or similar names). <br> If an internal link incorrectly led you here, you may wish to change the link to point directly to the intended article.
{
"math_id": 0,
"text": "\\mathbb RP^2"
}
] | https://en.wikipedia.org/wiki?curid=1001293 |
1001329 | Class function | In mathematics, especially in the fields of group theory and representation theory of groups, a class function is a function on a group "G" that is constant on the conjugacy classes of "G". In other words, it is invariant under the conjugation map on "G". Such functions play a basic role in representation theory.
Characters.
The character of a linear representation of "G" over a field "K" is always a class function with values in "K". The class functions form the center of the group ring "K"["G"]. Here a class function "f" is identified with the element formula_0.
Inner products.
The set of class functions of a group "G" with values in a field "K" form a "K"-vector space. If "G" is finite and the characteristic of the field does not divide the order of "G", then there is an inner product on this space, defined by formula_1 where |"G"| denotes the order of "G" and the bar denotes conjugation in the field "K". The set of irreducible characters of "G" forms an orthogonal basis, and if "K" is a splitting field for "G", for instance if "K" is algebraically closed, then the irreducible characters form an orthonormal basis.
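As a sketch (a standard worked example, not from this article), the inner product can be checked on the symmetric group S3, whose three irreducible characters are constant on the classes {identity}, {transpositions}, {3-cycles} of sizes 1, 3 and 2:

```python
# Irreducible characters of S3, listed by conjugacy class.
class_sizes = [1, 3, 2]   # identity, transpositions, 3-cycles
chi = [
    [1,  1,  1],          # trivial character
    [1, -1,  1],          # sign character
    [2,  0, -1],          # standard 2-dimensional character
]
G = sum(class_sizes)      # |G| = 6

def inner(phi, psi):
    """<phi, psi> = (1/|G|) * sum over g of phi(g) * conj(psi(g))."""
    return sum(n * p * q for n, p, q in zip(class_sizes, phi, psi)) / G

# The irreducible characters are orthonormal over K = C:
for i in range(3):
    for j in range(3):
        assert inner(chi[i], chi[j]) == (1 if i == j else 0)
```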
In the case of a compact group and "K" = C the field of complex numbers, the notion of Haar measure allows one to replace the finite sum above with an integral: formula_2
When "K" is the real numbers or the complex numbers, the inner product is a non-degenerate Hermitian bilinear form. | [
{
"math_id": 0,
"text": " \\sum_{g \\in G} f(g) g"
},
{
"math_id": 1,
"text": " \\langle \\phi , \\psi \\rangle = \\frac{1}{|G|} \\sum_{g \\in G} \\phi(g) \\overline{\\psi(g)} "
},
{
"math_id": 2,
"text": " \\langle \\phi, \\psi \\rangle = \\int_G \\phi(t) \\overline{\\psi(t)}\\, dt. "
}
] | https://en.wikipedia.org/wiki?curid=1001329 |
1001361 | Semisimple module | Direct sum of irreducible modules
In mathematics, especially in the area of abstract algebra known as module theory, a semisimple module or completely reducible module is a type of module that can be understood easily from its parts. A ring that is a semisimple module over itself is known as an Artinian semisimple ring. Some important rings, such as group rings of finite groups over fields of characteristic zero, are semisimple rings. An Artinian ring is initially understood via its largest semisimple quotient. The structure of Artinian semisimple rings is well understood by the Artin–Wedderburn theorem, which exhibits these rings as finite direct products of matrix rings.
For a group-theory analog of the same notion, see "Semisimple representation".
Definition.
A module over a (not necessarily commutative) ring is said to be semisimple (or completely reducible) if it is the direct sum of simple (irreducible) submodules.
For a module "M", the following are equivalent:
For the proof of the equivalences, see "".
The most basic example of a semisimple module is a module over a field, i.e., a vector space. On the other hand, the ring Z of integers is not a semisimple module over itself, since the submodule 2Z is not a direct summand.
Semisimple is stronger than completely decomposable,
which is a direct sum of indecomposable submodules.
Let "A" be an algebra over a field "K". Then a left module "M" over "A" is said to be absolutely semisimple if, for any field extension "F" of "K", "F" ⊗"K" "M" is a semisimple module over "F" ⊗"K" "A".
Semisimple rings.
A ring is said to be (left-)semisimple if it is semisimple as a left module over itself. Surprisingly, a left-semisimple ring is also right-semisimple and vice versa. The left/right distinction is therefore unnecessary, and one can speak of semisimple rings without ambiguity.
A semisimple ring may be characterized in terms of homological algebra: namely, a ring "R" is semisimple if and only if any short exact sequence of left (or right) "R"-modules splits. That is, for a short exact sequence
formula_0
there exists "s" : "C" → "B" such that the composition "g" ∘ "s" : "C" → "C" is the identity. The map "s" is known as a section. From this it follows that
formula_1
or in more exact terms
formula_2
In particular, any module over a semisimple ring is injective and projective. Since "projective" implies "flat", a semisimple ring is a von Neumann regular ring.
Semisimple rings are of particular interest to algebraists. For example, if the base ring "R" is semisimple, then all "R"-modules would automatically be semisimple. Furthermore, every simple (left) "R"-module is isomorphic to a minimal left ideal of "R", that is, "R" is a left Kasch ring.
Semisimple rings are both Artinian and Noetherian. From the above properties, a ring is semisimple if and only if it is Artinian and its Jacobson radical is zero.
If an Artinian semisimple ring contains a field as a central subring, it is called a semisimple algebra.
Simple rings.
One should beware that despite the terminology, "not all simple rings are semisimple". The problem is that the ring may be "too big", that is, not (left/right) Artinian. In fact, if "R" is a simple ring with a minimal left/right ideal, then "R" is semisimple.
Classic examples of simple, but not semisimple, rings are the Weyl algebras, such as the Q-algebra
formula_3
which is a simple noncommutative domain. These and many other nice examples are discussed in more detail in several noncommutative ring theory texts, including chapter 3 of Lam's text, in which they are described as nonartinian simple rings. The module theory for the Weyl algebras is well studied and differs significantly from that of semisimple rings.
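As a sketch, the defining relation of this algebra can be verified in SymPy by realizing x as differentiation d/dt and y as multiplication by t, acting on a function f(t) (one standard convention; swapping the roles flips the sign):

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')(t)

xy_f = sp.diff(t * f, t)   # (xy)f = x(y f) = d/dt (t f) = f + t f'
yx_f = t * sp.diff(f, t)   # (yx)f = y(x f) = t f'

# (xy - yx)f = f for every f, so xy - yx = 1 as operators.
assert sp.simplify(xy_f - yx_f - f) == 0
```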
Jacobson semisimple.
A ring is called "Jacobson semisimple" (or "J-semisimple" or "semiprimitive") if the intersection of the maximal left ideals is zero, that is, if the Jacobson radical is zero. Every ring that is semisimple as a module over itself has zero Jacobson radical, but not every ring with zero Jacobson radical is semisimple as a module over itself. A J-semisimple ring is semisimple if and only if it is an artinian ring, so semisimple rings are often called "artinian semisimple rings" to avoid confusion.
For example, the ring of integers, Z, is J-semisimple, but not artinian semisimple.
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "0 \\to A \\xrightarrow{f} B \\xrightarrow{g} C \\to 0 "
},
{
"math_id": 1,
"text": "B \\cong A \\oplus C"
},
{
"math_id": 2,
"text": "B \\cong f(A) \\oplus s(C)."
},
{
"math_id": 3,
"text": " A=\\mathbf{Q}{\\left[x,y\\right]}/\\langle xy-yx-1\\rangle\\ ,"
}
] | https://en.wikipedia.org/wiki?curid=1001361 |
10013925 | Multiple inert gas elimination technique | Medical technique
The multiple inert gas elimination technique (MIGET) is a medical technique used mainly in pulmonology that involves measuring the concentrations of various infused, inert gases in mixed venous blood, arterial blood, and expired gas of a subject. The technique quantifies true shunt, physiological dead space ventilation, ventilation versus blood flow (VA/Q) ratios, and diffusion limitation.
Background.
Hypoxemia is generally attributed to one of four processes: hypoventilation, shunt (right to left), diffusion limitation, and ventilation/perfusion (VA/Q) inequality. Moreover, there are also "extrapulmonary" factors that can contribute to fluctuations in arterial PO2.
There are several measures of hypoxemia that can be assessed, but there are various limitations associated with each. It was for this reason that the MIGET was developed, to overcome the shortcomings of previous methods.
Theoretical basis.
Steady-state gas exchange in the lungs obeys the principles of conservation of mass. This leads to the ventilation/perfusion equation for oxygen:
formula_0
and for carbon dioxide:
formula_1
where:
VA is alveolar ventilation and Q is blood flow (perfusion), so that VA/Q is the ventilation–perfusion ratio;
Cc′O2 and CvO2 are the end-capillary and mixed venous O2 contents;
CvCO2 and Cc′CO2 are the mixed venous and end-capillary CO2 contents;
PIO2, PAO2 and PACO2 are the inspired O2, alveolar O2 and alveolar CO2 partial pressures; and
8.63 is a constant that reconciles the conventional units of the contents, pressures and ventilation.
For the purposes of utilizing the MIGET, the equations have been generalized for an inert gas (IG):
formula_2
where:
PVIG is the mixed venous partial pressure of the inert gas;
PC′IG is its end-capillary partial pressure;
PAIG is its alveolar partial pressure; and
solubility is the solubility of the inert gas in blood.
Assuming diffusion equilibration is complete for the inert gas, dropping the subscript IG, and substituting the blood-gas partition coefficient (λ) renders:
formula_3
Rearranging:
formula_4
where:
λ is the blood–gas partition coefficient of the inert gas;
Pv is its mixed venous partial pressure, PA its alveolar partial pressure, and Pc′ its end-capillary partial pressure.
This equation is the foundation for the MIGET, and it demonstrates that the fraction of inert gas not eliminated from the blood via the lung is a function of the partition coefficient and the VA/Q ratio. This equation operates under the presumption that the lung is perfectly homogeneous. In this model, retention (R) is measured as the ratio of arterial to mixed venous partial pressure, Pa/Pv. Stated mathematically:
formula_5
From this equation, we can measure the levels of each inert gas retained in the blood. The relationship between retention (R) and VA/Q can be summarized as follows: as VA/Q for a given λ increases, R decreases; however, the relationship between VA/Q and R is most apparent at values of VA/Q between ten times higher and ten times lower than a gas's λ. Beyond this, it is possible to measure the concentrations of the inert gases in the expired gas from the subject. The ratio of the mixed expired concentration to the mixed venous concentration has been termed excretion (E) and describes the ventilation to regions of varying VA/Q. When taken together:
formula_6
where:
VIG is the volume of the inert gas eliminated via the lung per unit time;
VE is the mixed expired (minute) ventilation and E is excretion;
QT is total blood flow (cardiac output), λ is the blood–gas partition coefficient, and R is retention.
When observing a collection of alveoli in which PO2 and PCO2 are uniform, local alveolar ventilation and local blood flow define VA/Q:
formula_7
From these equations it can be deduced that knowledge of either retention or excretion implies knowledge of the other. Moreover, a similar correspondence exists between the distribution of blood flow and the distribution of ventilation.
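A sketch of the retention relation R = λ/(λ + VA/Q) above; the partition coefficients are illustrative orders of magnitude only, not the calibrated values of the gases used clinically:

```python
lambdas = {
    "very low solubility": 0.005,
    "low":                 0.1,
    "moderate":            1.0,
    "high":                10.0,
    "very high":           100.0,
}   # illustrative blood-gas partition coefficients

va_q_values = [0.01, 0.1, 1.0, 10.0, 100.0]

def retention(lam, va_q):
    """Fraction of the inert gas retained in blood for a homogeneous lung."""
    return lam / (lam + va_q)

for name, lam in lambdas.items():
    row = "  ".join(f"{retention(lam, r):.3f}" for r in va_q_values)
    print(f"{name:>20} (lambda={lam:>7}):  {row}")
# For each lambda, R falls most steeply where VA/Q is within a factor of
# about 10 of lambda, as described in the text.
```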
Limitations.
The data produced by the MIGET is an approximation of the distribution of VA/Q ratios across the entire lung. It has been estimated that nearly 100,000 gas exchange units exist in the human lung, so in principle there could be as many as 100,000 distinct VA/Q compartments.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_A/Q=8.63 \\times \\frac{C_{c'}\\ce{O2} - C_v\\ce{O2}}{P_I\\ce{O2} - P_A\\ce{O2}}"
},
{
"math_id": 1,
"text": "V_A/Q=8.63 \\times \\frac{C_v\\ce{CO2} - C_{c'}\\ce{CO2}}{P_A\\ce{CO2}}"
},
{
"math_id": 2,
"text": "V_A/Q = 8.63 \\times \\ce{solubility} \\times \\frac {P_V\\ce{IG} - P_{C'}\\ce{IG}}{P_A\\ce{IG}}"
},
{
"math_id": 3,
"text": " V_A/Q = {\\lambda} \\times \\frac{P_v - P_A}{P_A} "
},
{
"math_id": 4,
"text": "P_A/P_v = \\frac{{\\lambda}}{{\\lambda} + V_A/Q} = P_{c'}/P_v "
},
{
"math_id": 5,
"text": "R = \\frac{\\lambda}{\\lambda+V_A/Q}"
},
{
"math_id": 6,
"text": "V_{IG} = V_E \\times E = \\lambda \\times Q_T \\times [1-R]"
},
{
"math_id": 7,
"text": "V_A = Q \\times V_A/Q"
}
] | https://en.wikipedia.org/wiki?curid=10013925 |
10014466 | Copper cable certification | Cable testing regimen
In copper twisted pair wire networks, copper cable certification is achieved through a thorough series of tests in accordance with Telecommunications Industry Association (TIA) or International Organization for Standardization (ISO) standards. These tests are done using a certification-testing tool, which provides "pass" or "fail" information. While certification can be performed by the owner of the network, certification is primarily done by datacom contractors. It is this certification that allows the contractors to warranty their work.
Need for certification.
Installers who need to prove to the network owner that the installation has been done correctly and meets TIA or ISO standards need to certify their work. Network owners who want to guarantee that the infrastructure is capable of handling a certain application (e.g. Voice over Internet Protocol) will use a tester to certify the network infrastructure. In some cases, these testers are used to pinpoint specific problems. Certification tests are vital if there is a discrepancy between the installer and network owner after an installation has been performed.
Standards.
The performance tests and their procedures have been defined in the ANSI/TIA-568.2 standard and the ISO/IEC 11801 standard. The TIA standard defines performance in categories (Cat 3, Cat 5e, Cat 6, Cat 6A, and Cat 8) and the ISO defines classes (Class C, D, E, EA, F and FA). These standards define the procedure to certify that an installation meets performance criteria in a given category or class.
The significance of each category or class is the limit values against which Pass/Fail is judged and the frequency ranges over which they are measured: Cat 3 and Class C (no longer used) test and define communication with 16 MHz bandwidth, Cat 5e and Class D with 100 MHz bandwidth, Cat 6 and Class E up to 250 MHz, Cat 6A and Class EA up to 500 MHz, Cat 7 and Class F up to 600 MHz, and Cat 7A and Class FA up to 1000 MHz. Cat 8, Class I, and Class II have a frequency range through 2000 MHz.
The standards also define that data from each test result must be collected and stored in either print or electronic format for future inspection.
Tests.
Wiremap.
The wiremap test is used to identify physical installation errors: improper pin termination, shorts between any two or more wires, lack of continuity to the remote end, split pairs, crossed pairs, reversed pairs, and any other mis-wiring.
Propagation delay.
The propagation delay test measures the time it takes for a signal to be sent from one end of the cable and received at the other end.
Delay skew.
The delay skew test is used to find the difference in propagation delay between the fastest and slowest set of wire pairs. An ideal skew is between 25 and 50 nanoseconds over a 100-meter cable. The lower this skew the better; less than 25 ns is excellent, but 45 to 50 ns is marginal. (Traveling between 50% and 80% of the speed of light, an electronic wave requires between 417 and 667 ns to traverse a 100-meter cable.)
Cable length.
The cable length test verifies that the copper cable from the transmitter to receiver does not exceed the maximum recommended distance of 100 meters in a 10BASE-T/100BASE-TX/1000BASE-T network.
Insertion loss.
Insertion loss, also referred to as attenuation, is the loss of signal strength at the far end of a line compared to the signal that was introduced into the line. This loss is due to the electrical resistance of the copper cable, the loss of energy through the cable insulation, and impedance mismatches introduced at the connectors. Insertion loss is usually expressed in decibels (dB). Insertion loss increases with distance and frequency. For every roughly 3 dB of loss, signal power is reduced by a factor of formula_0 and signal amplitude is reduced by a factor of formula_1.
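A short sketch of this dB arithmetic:

```python
import math

def power_ratio(loss_db):
    """Remaining fraction of signal power after loss_db of insertion loss."""
    return 10 ** (-loss_db / 10)

def amplitude_ratio(loss_db):
    """Remaining fraction of signal amplitude after loss_db of insertion loss."""
    return 10 ** (-loss_db / 20)

print(power_ratio(3.0))       # ~0.501: power roughly halved
print(amplitude_ratio(3.0))   # ~0.708: amplitude reduced by roughly sqrt(2)
```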
Return loss.
Return loss is the measurement (in dB) of the amount of signal that is reflected back toward the transmitter. The reflection of the signal is caused by variations of impedance in the connectors and cable and is usually attributed to a poorly terminated wire. The greater the variation in impedance, the greater the return loss reading. If three pairs of wire pass by a substantial amount, but the fourth pair barely passes, it is usually an indication of a bad crimp or bad connection at the RJ45 plug. Return loss usually manifests less as a loss of signal strength than as signal jitter.
Near-end crosstalk (NEXT).
In twisted-pair cabling, near-end crosstalk (NEXT) is a measure that describes the effect caused by a signal from one wire pair coupling into another wire pair and interfering with the signal therein. It is the difference, expressed in dB, between the amplitude of a transmitted signal and the amplitude of the signal coupled into another cable pair, at the signal-source end of a cable. A higher value is desirable as it indicates that less of the transmitted signal is coupled into the victim wire pair. NEXT is measured 30 meters (about 98 feet) from the injector/generator. Higher near-end crosstalk values correspond to higher overall circuit performance. Low NEXT values on a UTP LAN used with older signaling standards (IEEE 802.3 and earlier) are particularly detrimental. Excessive near-end crosstalk can be an indication of improper termination.
Power sum NEXT (PSNEXT).
Power sum NEXT (PSNEXT) is the sum of NEXT values from 3 wire pairs as they affect the other wire pair. The combined effect of NEXT can be very detrimental to the signal.
The equal-level far-end crosstalk (ELFEXT).
The equal-level far-end crosstalk (ELFEXT) test measures far-end crosstalk (FEXT). FEXT is very similar to NEXT, but happens at the receiver side of the connection. Due to attenuation on the line, the signal causing the crosstalk diminishes as it gets further away from the transmitter. Because of this, FEXT is usually less detrimental to a signal than NEXT, but still important nonetheless. Recently the designation was changed from ELFEXT to ACR-F (far-end ACR).
Power sum ELFEXT (PSELFEXT).
Power sum ELFEXT (PSELFEXT) is the sum of FEXT values from 3 wire pairs as they affect the other wire pair, minus the insertion loss of the channel. Recently the designation was changed from PSELFEXT to PSACR-F (far end ACR).
Attenuation-to-crosstalk ratio (ACR).
Attenuation-to-crosstalk ratio (ACR) is the difference between the signal attenuation and NEXT, and is measured in decibels (dB). The ACR indicates how much stronger the attenuated signal is than the crosstalk at the destination (receiving) end of a communications circuit. The ACR figure must be at least several decibels for proper performance. If the ACR is not large enough, errors will be frequent. In many cases, even a small improvement in ACR can cause a dramatic reduction in the bit error rate. Sometimes it may be necessary to switch from unshielded twisted pair (UTP) cable to shielded twisted pair (STP) in order to increase the ACR.
Power sum ACR (PSACR).
Power sum ACR (PSACR) is calculated in the same way as ACR, but using the PSNEXT value in the calculation rather than NEXT.
DC loop resistance.
DC loop resistance measures the total resistance through one wire pair looped at one end of the connection. This will increase with the length of the cable. DC resistance usually has less effect on a signal than insertion loss, but plays a major role if power over Ethernet is required. Also measured in ohms is the characteristic impedance of the cable, which is independent of the cable length. | [
{
"math_id": 0,
"text": "2"
},
{
"math_id": 1,
"text": "\\sqrt 2"
}
] | https://en.wikipedia.org/wiki?curid=10014466 |
1001490 | Convex conjugate | Generalization of the Legendre transformation
In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate (after Adrien-Marie Legendre and Werner Fenchel). The convex conjugate is widely used for constructing the dual problem in optimization theory, thus generalizing Lagrangian duality.
Definition.
Let formula_0 be a real topological vector space and let formula_1 be the dual space to formula_0. Denote by
formula_2
the canonical dual pairing, which is defined by formula_3
For a function formula_4 taking values on the extended real number line, its convex conjugate is the function
formula_5
whose value at formula_6 is defined to be the supremum:
formula_7
or, equivalently, in terms of the infimum:
formula_8
This definition can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes.
Examples.
For more examples, see the table of selected convex conjugates below.
The convex conjugate of an affine function formula_9 is formula_10
The convex conjugate of a power function formula_11 is formula_12
The convex conjugate of the absolute value function formula_13 is formula_14
The convex conjugate of the exponential function formula_15 is formula_16
The convex conjugate and Legendre transform of the exponential function agree, except that the domain of the convex conjugate is strictly larger, as the Legendre transform is only defined for positive real numbers.
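The supremum in the definition can be approximated by brute force on a grid, which makes such closed forms easy to verify. The following sketch (an ad hoc illustration; grid and sample points are my own choices) checks the conjugate of the exponential function against formula_16:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 200_001)  # grid over which the supremum is taken
f = np.exp(x)

def conjugate(p: float) -> float:
    """Grid approximation of f*(p) = sup_x (p*x - f(x))."""
    return float(np.max(p * x - f))

for p in (0.5, 1.0, 2.0):
    print(p, conjugate(p), p * np.log(p) - p)  # numeric vs. closed form
```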
Connection with expected shortfall (average value at risk).
See the article on expected shortfall for an example of this connection.
Let "F" denote a cumulative distribution function of a random variable "X". Then (integrating by parts),
formula_17
has the convex conjugate
formula_18
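As a concrete check (my own example, not from the text): for X exponentially distributed with rate 1, the formula above gives f(x) = x - 1 + e^(-x) for x >= 0 and f(x) = 0 otherwise, and the quantile integral evaluates to f*(p) = (1 - p)ln(1 - p) + p on (0, 1). A grid supremum reproduces this:

```python
import numpy as np

x = np.linspace(-5.0, 20.0, 400_001)
f = np.where(x >= 0, x - 1 + np.exp(-x), 0.0)  # E[max(0, x - X)], X ~ Exp(1)

for p in (0.25, 0.5, 0.9):
    numeric = float(np.max(p * x - f))       # sup_x (p*x - f(x))
    closed = (1 - p) * np.log(1 - p) + p     # integral of the quantile F^{-1}
    print(p, numeric, closed)
```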
Ordering.
A particular interpretation has the transform
formula_19
as this is a nondecreasing rearrangement of the initial function "f"; in particular, formula_20 for "f" nondecreasing.
Properties.
The convex conjugate of a closed convex function is again a closed convex function. The convex conjugate of a polyhedral convex function (a convex function with polyhedral epigraph) is again a polyhedral convex function.
Order reversing.
Declare that formula_21 if and only if formula_22 for all formula_23 Then convex-conjugation is order-reversing, which by definition means that if formula_21 then formula_24
For a family of functions formula_25 it follows from the fact that supremums may be interchanged that
formula_26
and from the max–min inequality that
formula_27
Biconjugate.
The convex conjugate of a function is always lower semi-continuous. The biconjugate formula_28 (the convex conjugate of the convex conjugate) is also the closed convex hull, i.e. the largest lower semi-continuous convex function with formula_29
For proper functions formula_30
formula_31 if and only if formula_32 is convex and lower semi-continuous, by the Fenchel–Moreau theorem.
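The hull property can be seen numerically by applying the grid-based transform twice. In the sketch below (an added illustration; the grids are ad hoc), the biconjugate of a non-convex double well comes out flat on the interval between the wells and agrees with the original function outside it, exactly as the closed convex hull should:

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 1201)
p = np.linspace(-5.0, 5.0, 2001)
f = np.minimum((x - 1.0) ** 2, (x + 1.0) ** 2)  # non-convex double well

f_star = np.max(p[:, None] * x[None, :] - f[None, :], axis=1)         # f*(p)
f_bistar = np.max(x[:, None] * p[None, :] - f_star[None, :], axis=1)  # f**(x)

inside, outside = np.abs(x) <= 1, np.abs(x) >= 1
print(f_bistar[inside].max())               # ~0 (up to grid error): flat on [-1, 1]
print(np.abs(f_bistar - f)[outside].max())  # ~0 (up to grid error): matches f outside
```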
Fenchel's inequality.
For any function f and its convex conjugate "f" *, Fenchel's inequality (also known as the Fenchel–Young inequality) holds for every formula_33 and formula_34:
formula_35
Furthermore, the equality holds if and only if formula_36.
The proof follows from the definition of convex conjugate: formula_37
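A standard special case makes the inequality transparent: taking $f(x) = \tfrac{1}{2}x^2$ on $\mathbb{R}$, which is its own conjugate, Fenchel's inequality reads

$$p x \le \tfrac{1}{2}x^2 + \tfrac{1}{2}p^2 \quad\Longleftrightarrow\quad 0 \le \tfrac{1}{2}(x - p)^2,$$

with equality exactly when $p = x = f'(x)$, in agreement with the subdifferential condition above.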
Convexity.
For two functions formula_38 and formula_39 and a number formula_40 the convexity relation
formula_41
holds. The formula_42 operation is a convex mapping itself.
Infimal convolution.
The infimal convolution (or epi-sum) of two functions formula_32 and formula_43 is defined as
formula_44
Let formula_45 be proper, convex and lower semicontinuous functions on formula_46 Then the infimal convolution is convex and lower semicontinuous (but not necessarily proper), and satisfies
formula_47
The infimal convolution of two functions has a geometric interpretation: The (strict) epigraph of the infimal convolution of two functions is the Minkowski sum of the (strict) epigraphs of those functions.
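The quadratic case gives a clean closed form to test against: for f(x) = x^2/(2a) and g(x) = x^2/(2b), the epi-sum is (f ◻ g)(x) = x^2/(2(a+b)), whose conjugate is indeed f* + g* = (a+b)p^2/2. A brute-force sketch (my own illustration):

```python
import numpy as np

a, b = 1.0, 3.0
y = np.linspace(-20.0, 20.0, 40_001)   # grid for the infimum over y

def episum(x: float) -> float:
    """(f epi-sum g)(x) = inf_y f(x - y) + g(y) for the two quadratics."""
    return float(np.min((x - y) ** 2 / (2 * a) + y ** 2 / (2 * b)))

for x in (-2.0, 0.5, 4.0):
    print(x, episum(x), x ** 2 / (2 * (a + b)))  # numeric vs. closed form
```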
Maximizing argument.
If the function formula_32 is differentiable, then its derivative is the maximizing argument in the computation of the convex conjugate:
formula_48 and
formula_49
hence
formula_50
formula_51
and moreover
formula_52
formula_53
Scaling properties.
If for some formula_54 formula_55, then
formula_56
Behavior under linear transformations.
Let formula_57 be a bounded linear operator. For any convex function formula_32 on formula_58
formula_59
where
formula_60
is the preimage of formula_32 with respect to formula_61 and formula_62 is the adjoint operator of formula_63
A closed convex function formula_32 is symmetric with respect to a given set formula_64 of orthogonal linear transformations,
formula_65 for all formula_66 and all formula_67
if and only if its convex conjugate formula_68 is symmetric with respect to formula_69
Table of selected convex conjugates.
The following table provides Legendre transforms for many common functions as well as a few useful properties. | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "X^{*}"
},
{
"math_id": 2,
"text": "\\langle \\cdot , \\cdot \\rangle : X^{*} \\times X \\to \\mathbb{R}"
},
{
"math_id": 3,
"text": "\\left( x^*, x \\right) \\mapsto x^* (x)."
},
{
"math_id": 4,
"text": "f : X \\to \\mathbb{R} \\cup \\{ - \\infty, + \\infty \\}"
},
{
"math_id": 5,
"text": "f^{*} : X^{*} \\to \\mathbb{R} \\cup \\{ - \\infty, + \\infty \\}"
},
{
"math_id": 6,
"text": "x^* \\in X^{*}"
},
{
"math_id": 7,
"text": "f^{*} \\left( x^{*} \\right) := \\sup \\left\\{ \\left\\langle x^{*}, x \\right\\rangle - f (x) ~\\colon~ x \\in X \\right\\},"
},
{
"math_id": 8,
"text": "f^{*} \\left( x^{*} \\right) := - \\inf \\left\\{ f (x) - \\left\\langle x^{*}, x \\right\\rangle ~\\colon~ x \\in X \\right\\}."
},
{
"math_id": 9,
"text": " f(x) = \\left\\langle a, x \\right\\rangle - b"
},
{
"math_id": 10,
"text": " f^{*}\\left(x^{*} \\right)\n= \\begin{cases} b, & x^{*} = a\n \\\\ +\\infty, & x^{*} \\ne a.\n \\end{cases}\n"
},
{
"math_id": 11,
"text": " f(x) = \\frac{1}{p}|x|^p, 1 < p < \\infty "
},
{
"math_id": 12,
"text": "\nf^{*}\\left(x^{*} \\right) = \\frac{1}{q}|x^{*}|^q, 1<q<\\infty, \\text{where} \\tfrac{1}{p} + \\tfrac{1}{q} = 1."
},
{
"math_id": 13,
"text": "f(x) = \\left| x \\right|"
},
{
"math_id": 14,
"text": "\nf^{*}\\left(x^{*} \\right)\n= \\begin{cases} 0, & \\left|x^{*} \\right| \\le 1\n \\\\ \\infty, & \\left|x^{*} \\right| > 1.\n \\end{cases}\n"
},
{
"math_id": 15,
"text": "f(x)= e^x"
},
{
"math_id": 16,
"text": "\nf^{*}\\left(x^{*} \\right)\n= \\begin{cases} x^{*} \\ln x^{*} - x^{*} , & x^{*} > 0\n \\\\ 0 , & x^{*} = 0\n \\\\ \\infty , & x^{*} < 0.\n \\end{cases}\n"
},
{
"math_id": 17,
"text": "f(x):= \\int_{-\\infty}^x F(u) \\, du = \\operatorname{E}\\left[\\max(0,x-X)\\right] = x-\\operatorname{E} \\left[\\min(x,X)\\right]"
},
{
"math_id": 18,
"text": "f^{*}(p)= \\int_0^p F^{-1}(q) \\, dq = (p-1)F^{-1}(p)+\\operatorname{E}\\left[\\min(F^{-1}(p),X)\\right] \n = p F^{-1}(p)-\\operatorname{E}\\left[\\max(0,F^{-1}(p)-X)\\right]."
},
{
"math_id": 19,
"text": "f^\\text{inc}(x):= \\arg \\sup_t t\\cdot x-\\int_0^1 \\max\\{t-f(u),0\\} \\, du,"
},
{
"math_id": 20,
"text": "f^\\text{inc}= f"
},
{
"math_id": 21,
"text": "f \\le g"
},
{
"math_id": 22,
"text": "f(x) \\le g(x)"
},
{
"math_id": 23,
"text": "x."
},
{
"math_id": 24,
"text": "f^* \\ge g^*."
},
{
"math_id": 25,
"text": "\\left(f_\\alpha\\right)_\\alpha"
},
{
"math_id": 26,
"text": "\\left(\\inf_\\alpha f_\\alpha\\right)^*(x^*) = \\sup_\\alpha f_\\alpha^*(x^*),"
},
{
"math_id": 27,
"text": "\\left(\\sup_\\alpha f_\\alpha\\right)^*(x^*) \\le \\inf_\\alpha f_\\alpha^*(x^*)."
},
{
"math_id": 28,
"text": "f^{**}"
},
{
"math_id": 29,
"text": "f^{**} \\le f."
},
{
"math_id": 30,
"text": "f,"
},
{
"math_id": 31,
"text": "f = f^{**}"
},
{
"math_id": 32,
"text": "f"
},
{
"math_id": 33,
"text": "x \\in X"
},
{
"math_id": 34,
"text": "p \\in X^{*}"
},
{
"math_id": 35,
"text": "\\left\\langle p,x \\right\\rangle \\le f(x) + f^*(p)."
},
{
"math_id": 36,
"text": "p \\in \\partial f(x)"
},
{
"math_id": 37,
"text": "f^*(p) = \\sup_{\\tilde x} \\left\\{ \\langle p,\\tilde x \\rangle - f(\\tilde x) \\right\\} \\ge \\langle p,x \\rangle - f(x)."
},
{
"math_id": 38,
"text": "f_0"
},
{
"math_id": 39,
"text": "f_1"
},
{
"math_id": 40,
"text": "0 \\le \\lambda \\le 1"
},
{
"math_id": 41,
"text": "\\left((1-\\lambda) f_0 + \\lambda f_1\\right)^{*} \\le (1-\\lambda) f_0^{*} + \\lambda f_1^{*}"
},
{
"math_id": 42,
"text": "{*}"
},
{
"math_id": 43,
"text": "g"
},
{
"math_id": 44,
"text": "\\left( f \\operatorname{\\Box} g \\right)(x) = \\inf \\left\\{ f(x-y) + g(y) \\mid y \\in \\mathbb{R}^n \\right\\}."
},
{
"math_id": 45,
"text": "f_1, \\ldots, f_{m}"
},
{
"math_id": 46,
"text": "\\mathbb{R}^{n}."
},
{
"math_id": 47,
"text": "\\left( f_1 \\operatorname{\\Box} \\cdots \\operatorname{\\Box} f_m \\right)^{*} = f_1^{*} + \\cdots + f_m^{*}."
},
{
"math_id": 48,
"text": "f^\\prime(x) = x^*(x):= \\arg\\sup_{x^{*}} {\\langle x, x^{*}\\rangle} -f^{*}\\left( x^{*} \\right)"
},
{
"math_id": 49,
"text": "f^{{*}\\prime}\\left( x^{*} \\right) = x\\left( x^{*} \\right):= \\arg\\sup_x {\\langle x, x^{*}\\rangle} - f(x);"
},
{
"math_id": 50,
"text": "x = \\nabla f^{{*}}\\left( \\nabla f(x) \\right),"
},
{
"math_id": 51,
"text": "x^{*} = \\nabla f\\left( \\nabla f^{{*}}\\left( x^{*} \\right)\\right),"
},
{
"math_id": 52,
"text": "f^{\\prime\\prime}(x) \\cdot f^{{*}\\prime\\prime}\\left( x^{*}(x) \\right) = 1,"
},
{
"math_id": 53,
"text": "f^{{*}\\prime\\prime}\\left( x^{*} \\right) \\cdot f^{\\prime\\prime}\\left( x(x^{*}) \\right) = 1."
},
{
"math_id": 54,
"text": "\\gamma>0,"
},
{
"math_id": 55,
"text": "g(x) = \\alpha + \\beta x + \\gamma \\cdot f\\left( \\lambda x + \\delta \\right)"
},
{
"math_id": 56,
"text": "g^{*}\\left( x^{*} \\right)= - \\alpha - \\delta\\frac{x^{*}-\\beta} \\lambda + \\gamma \\cdot f^{*}\\left(\\frac {x^{*}-\\beta}{\\lambda \\gamma}\\right)."
},
{
"math_id": 57,
"text": "A : X \\to Y"
},
{
"math_id": 58,
"text": "X,"
},
{
"math_id": 59,
"text": "\\left(A f\\right)^{*} = f^{*} A^{*}"
},
{
"math_id": 60,
"text": "(A f)(y) = \\inf\\{ f(x) : x \\in X , A x = y \\}"
},
{
"math_id": 61,
"text": "A"
},
{
"math_id": 62,
"text": "A^{*}"
},
{
"math_id": 63,
"text": "A."
},
{
"math_id": 64,
"text": "G"
},
{
"math_id": 65,
"text": "f(A x) = f(x)"
},
{
"math_id": 66,
"text": "x"
},
{
"math_id": 67,
"text": "A \\in G"
},
{
"math_id": 68,
"text": "f^{*}"
},
{
"math_id": 69,
"text": "G."
}
] | https://en.wikipedia.org/wiki?curid=1001490 |
10016360 | Excellent ring | In commutative algebra, a quasi-excellent ring is a Noetherian commutative ring that behaves well with respect to the operation of completion, and is called an excellent ring if it is also universally catenary. Excellent rings are one answer to the problem of finding a natural class of "well-behaved" rings containing most of the rings that occur in number theory and algebraic geometry. At one time it seemed that the class of Noetherian rings might be an answer to this problem, but Masayoshi Nagata and others found several strange counterexamples showing that in general Noetherian rings need not be well-behaved: for example, a normal Noetherian local ring need not be analytically normal.
The class of excellent rings was defined by Alexander Grothendieck (1965) as a candidate for such a class of well-behaved rings. Quasi-excellent rings are conjectured to be the base rings for which the problem of resolution of singularities can be solved; showed this in characteristic 0, but the positive characteristic case is (as of 2024) still a major open problem. Essentially all Noetherian rings that occur naturally in algebraic geometry or number theory are excellent; in fact it is quite hard to construct examples of Noetherian rings that are not excellent.
Definitions.
The definition of excellent rings is quite involved, so we recall the definitions of the technical conditions it satisfies. Although it seems like a long list of conditions, most rings in practice are excellent, such as fields, polynomial rings, complete Noetherian rings, Dedekind domains of characteristic 0 (such as formula_0), and quotients and localizations of these rings.
Recalled definitions.
A formula_2-algebra formula_1 is called geometrically regular if for every finite field extension formula_3 of formula_2 the ring formula_4 is regular. A ring homomorphism formula_5 is called regular if it is flat and for every formula_6 the fiber formula_7 is geometrically regular over the residue field formula_8. A Noetherian ring formula_1 is called a G-ring if its formal fibers are geometrically regular, that is, if for every prime ideal formula_9 the completion map formula_10 is regular in the above sense.
Finally, a ring is J-2 if any finite type formula_1-algebra formula_11 is J-1, meaning the regular subscheme formula_12 is open.
Definition of (quasi-)excellence.
A ring formula_1 is called quasi-excellent if it is a G-ring and J-2 ring. It is called excellent if it is quasi-excellent and universally catenary. In practice almost all Noetherian rings are universally catenary, so there is little difference between excellent and quasi-excellent rings.
A scheme is called excellent or quasi-excellent if it has a cover by open affine subschemes with the same property, which implies that every open affine subscheme has this property.
Properties.
Because an excellent ring formula_1 is a G-ring, it is Noetherian by definition. Because it is universally catenary, every maximal chain of prime ideals has the same length. This is useful for studying the dimension theory of such rings, because their dimension can be bounded by a fixed maximal chain. In particular, this rules out pathological Noetherian rings built by inductively defining ever longer maximal chains of prime ideals to produce an infinite-dimensional ring.
Schemes.
Given an excellent scheme formula_13 and a locally finite type morphism formula_14, the scheme formula_15 is excellent.
Quasi-excellence.
Any quasi-excellent ring is a Nagata ring.
Any quasi-excellent reduced local ring is analytically reduced.
Any quasi-excellent normal local ring is analytically normal.
Examples.
Excellent rings.
Most naturally occurring commutative rings in number theory or algebraic geometry are excellent. In particular, all complete Noetherian local rings, all fields, the ring of integers formula_0, all Dedekind domains of characteristic 0, and every localization and every finite type algebra (such as formula_16) over an excellent ring are excellent.
A J-2 ring that is not a G-ring.
Here is an example of a discrete valuation ring "A" of dimension 1 and characteristic "p" > 0 which is J-2 but not a G-ring and so is not quasi-excellent. If "k" is any field of characteristic "p" with ["k" : "k""p"] = ∞ and "A" is the ring of power series Σ"a""i""x""i" such that ["k""p"("a"0, "a"1, ...) : "k""p"] is finite then the formal fibers of "A" are not all geometrically regular so "A" is not a G-ring. It is a J-2 ring as all Noetherian local rings of dimension at most 1 are J-2 rings. It is also universally catenary as it is a Dedekind domain. Here "k""p" denotes the image of "k" under the Frobenius morphism "a" → "a""p".
A G-ring that is not a J-2 ring.
Here is an example of a ring that is a G-ring but not a J-2 ring and so not quasi-excellent. If "R" is the subring of the polynomial ring "k"["x"1,"x"2...] in infinitely many generators generated by the squares and cubes of all generators, and "S" is obtained from "R" by adjoining inverses to all elements not in any of the ideals generated by some "x""n", then "S" is a 1-dimensional Noetherian domain that is not a J-1 ring as "S" has a cusp singularity at every closed point, so the set of singular points is not closed, though it is a G-ring.
This ring is also universally catenary, as its localization at every prime ideal is a quotient of a regular ring.
A quasi-excellent ring that is not excellent.
Nagata's example of a 2-dimensional Noetherian local ring that is catenary but not universally catenary is a G-ring, and is also a J-2 ring as any local G-ring is a J-2 ring. So it is a quasi-excellent catenary local ring that is not excellent.
Resolution of singularities.
Quasi-excellent rings are closely related to the problem of resolution of singularities, and this seems to have been Grothendieck's motivation for defining them. Grothendieck (1965) observed that if it is possible to resolve singularities of all complete integral local Noetherian rings, then it is possible to resolve the singularities of all reduced quasi-excellent rings. Hironaka (1964) proved this for all complete integral Noetherian local rings over a field of characteristic 0, which implies his theorem that all singularities of excellent schemes over a field of characteristic 0 can be resolved. Conversely if it is possible to resolve all singularities of the spectra of all integral finite algebras over a Noetherian ring "R" then the ring "R" is quasi-excellent.
{
"math_id": 0,
"text": "\\mathbb{Z}"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "K"
},
{
"math_id": 4,
"text": "R\\otimes_kK"
},
{
"math_id": 5,
"text": "R \\to S"
},
{
"math_id": 6,
"text": "\\mathfrak{p} \\in \\text{Spec}(R)"
},
{
"math_id": 7,
"text": "S\\otimes_R\\kappa(\\mathfrak{p})"
},
{
"math_id": 8,
"text": "\\kappa(\\mathfrak{p})"
},
{
"math_id": 9,
"text": "\\mathfrak{p}"
},
{
"math_id": 10,
"text": "R_\\mathfrak{p} \\to \\hat{R_\\mathfrak{p}}"
},
{
"math_id": 11,
"text": "S"
},
{
"math_id": 12,
"text": "\\text{Reg}(\\text{Spec}(S)) \\subset \\text{Spec}(S)"
},
{
"math_id": 13,
"text": "X"
},
{
"math_id": 14,
"text": "f:X'\\to X"
},
{
"math_id": 15,
"text": "X'"
},
{
"math_id": 16,
"text": "R[x_1,\\ldots, x_n]/(f_1,\\ldots,f_k)"
}
] | https://en.wikipedia.org/wiki?curid=10016360 |
1002045 | Émilie du Châtelet | French mathematician, physicist, and author (1706–1749)
Gabrielle Émilie Le Tonnelier de Breteuil, Marquise du Châtelet (; 17 December 1706 – 10 September 1749) was a French natural philosopher and mathematician from the early 1730s until her death due to complications during childbirth in 1749.
Her most recognized achievement is her translation of and commentary on Isaac Newton's 1687 book "Philosophiæ Naturalis Principia Mathematica" containing basic laws of physics. The translation, published posthumously in 1756, is still considered the standard French translation.
Her commentary includes a contribution to Newtonian mechanics—the postulate of an additional conservation law for total energy, of which kinetic energy of motion is one element. This led her to conceptualize energy, and to derive its quantitative relationships to the mass and velocity of an object.
Her philosophical magnum opus, "Institutions de Physique" (Paris, 1740, first edition; "Foundations of Physics"), circulated widely, generated heated debates, and was republished and translated into several other languages within two years of its original publication.
She participated in the famous "vis viva" debate, concerning the best way to measure the force of a body and the best means of thinking about conservation principles. Posthumously, her ideas were heavily represented in the most famous text of the French Enlightenment, the "Encyclopédie" of Denis Diderot and Jean le Rond d'Alembert, first published shortly after du Châtelet's death.
She is also known as the intellectual collaborator with and romantic partner of Voltaire. Numerous biographies, books and plays have been written about her life and work in the two centuries since her death. In the early 21st century, her life and ideas have generated renewed interest.
Contribution to philosophy.
In addition to producing famous translations of works by authors such as Bernard Mandeville and Isaac Newton, du Châtelet wrote a number of significant philosophical essays, letters and books that were well known in her time.
Because of her well-known collaboration and romantic involvement with Voltaire, which spanned much of her adult life, du Châtelet has been known as the romantic partner of and collaborator with her famous intellectual companion. Despite her notable achievements and intelligence, her accomplishments have often been subsumed under his and, as a result, even today she is often mentioned only within the context of Voltaire's life and work during the period of the early French Enlightenment. In her own right, she was a strong and influential philosopher, with ideas ranging from individual empowerment to issues of the social contract.
Recently, however, professional philosophers and historians have transformed the reputation of du Châtelet. Historical evidence indicates that her work had a very significant influence on the philosophical and scientific conversations of the 1730s and 1740s – in fact, she was famous and respected by the greatest thinkers of her time. Francesco Algarotti styled the dialogue of "Il Newtonianismo per le dame" based on conversations he observed between Du Châtelet and Voltaire in Cirey.
Du Châtelet corresponded with renowned mathematicians such as Johann II Bernoulli and Leonhard Euler, early developers of calculus. She was also tutored by Bernoulli's prodigy students, Pierre Louis Moreau de Maupertuis and Alexis Claude Clairaut. Frederick the Great of Prussia, who re-founded the Academy of Sciences in Berlin, was her great admirer, and corresponded with both Voltaire and du Châtelet regularly. He introduced du Châtelet to Leibniz's philosophy by sending her the works of Christian Wolff, and du Châtelet sent him a copy of her "Institutions".
Her works were published and republished in Paris, London, and Amsterdam; they were translated into German and Italian; and they were discussed in the most important scholarly journals of the era, including the "Memoires des Trévoux", the "Journal des Sçavans", and others. Perhaps most intriguingly, many of her ideas were represented in various sections of the "Encyclopédie" of Diderot and D'Alembert, and some of the articles in the "Encyclopédie" are a direct copy of her work (this is an active area of current academic research - the latest research can be found at Project Vox, a Duke University research initiative).
Biography.
Early life.
Émilie du Châtelet was born on 17 December 1706 in Paris, the only girl amongst six children. Three brothers lived to adulthood: René-Alexandre (b. 1698), Charles-Auguste (b. 1701), and Elisabeth-Théodore (b. 1710). Her eldest brother, René-Alexandre, died in 1720, and the next brother, Charles-Auguste, died in 1731. However, her younger brother, Elisabeth-Théodore, lived to a successful old age, becoming an abbot and eventually a bishop. Two other brothers died very young. Du Châtelet also had a half-sister, Michelle, born in 1686 to her father and Anne Bellinzani, an intelligent woman interested in astronomy who was married to an important Parisian official.
Her father was Louis Nicolas le Tonnelier de Breteuil (1648–1728), a member of the lesser nobility. At the time of du Châtelet's birth, her father held the position of the Principal Secretary and Introducer of Ambassadors to King Louis XIV. He held a weekly "salon" on Thursdays, to which well-respected writers and scientists were invited. Her mother was Gabrielle Anne de Froullay (1670–1740), Baronne de Breteuil. Her paternal uncle was the cleric Claude Le Tonnelier de Breteuil (1644–1698). Among her cousins was the nobleman François Victor Le Tonnelier de Breteuil (1686–1743), son of her uncle François Le Tonnelier de Breteuil (1638–1705).
Early education.
Du Châtelet's education has been the subject of much speculation, but nothing is known with certainty.
Among their acquaintances was Fontenelle, the perpetual secretary of the French Académie des Sciences. Du Châtelet's father Louis-Nicolas, recognizing her early brilliance, arranged for Fontenelle to visit and talk about astronomy with her when she was 10 years old. Her mother, Gabrielle-Anne de Froulay, had been brought up in a convent, which was at that time the predominant educational institution available to French girls and women. While some sources believe her mother did not approve of her intelligent daughter, or of her husband's encouragement of Émilie's intellectual curiosity, there are also other indications that her mother not only approved of du Châtelet's early education, but actually encouraged her to vigorously question stated fact.
In either case, such encouragement would have been seen as unusual for parents of their time and status. When she was small, her father arranged training for her in physical activities such as fencing and riding, and as she grew older, he brought tutors to the house for her. As a result, by the age of twelve she was fluent in Latin, Italian, Greek and German; she was later to publish translations into French of Greek and Latin plays and philosophy. She received education in mathematics, literature, and science.
Du Châtelet also liked to dance, was a passable performer on the harpsichord, sang opera, and was an amateur actress. As a teenager, short of money for books, she used her mathematical skills to devise highly successful strategies for gambling.
Marriage.
On 12 June 1725, she married the Marquis Florent-Claude du Chastellet-Lomont (1695–1765). Her marriage conferred the title of Marquise du Chastellet. Like many marriages among the nobility, theirs was arranged. As a wedding gift, her husband was made governor of Semur-en-Auxois in Burgundy by his father; the recently married couple moved there at the end of September 1725. Du Châtelet was eighteen at the time, her husband thirty-four.
Children.
Émilie du Châtelet and the Marquis Florent-Claude du Chastellet-Lomont had three children: Françoise-Gabrielle-Pauline (30 June 1726 – 1754), married in 1743 to Alfonso Carafa, Duca di Montenero (1713–1760), Louis Marie Florent (born 20 November 1727), and Victor-Esprit (born 11 April 1733). Victor-Esprit died as an infant in late summer 1734, likely the last Sunday in August. On 4 September 1749 Émilie du Châtelet gave birth to Stanislas-Adélaïde du Châtelet, daughter of Jean François de Saint-Lambert. She died as a toddler in Lunéville on 6 May 1751.
Resumption of studies.
After bearing three children, Émilie, Marquise du Châtelet, considered her marital responsibilities fulfilled and reached an agreement with her husband to live separate lives while still maintaining one household. In 1733, aged 26, du Châtelet resumed her mathematical studies. Initially, she was tutored in algebra and calculus by Moreau de Maupertuis, a member of the Academy of Sciences; although mathematics was not his forte, he had received a solid education from Johann Bernoulli, who also taught Leonhard Euler. However by 1735 du Châtelet had turned for her mathematical training to Alexis Clairaut, a mathematical prodigy known best for Clairaut's equation and Clairaut's theorem. Du Châtelet resourcefully sought some of France's best tutors and scholars to mentor her in mathematics. On one occasion at the Café Gradot, a place where men frequently gathered for intellectual discussion, she was politely ejected when she attempted to join one of her teachers. Undeterred, she returned and entered after having men's clothing made for her.
Relationship with Voltaire.
Du Châtelet may have met Voltaire in her childhood at one of her father's "salons"; Voltaire himself dates their meeting to 1729, when he returned from his exile in London. However, their friendship developed from May 1733 when she re-entered society after the birth of her third child.
Du Châtelet invited Voltaire to live at her country house at Cirey in Haute-Marne, northeastern France, and he became her long-time companion. There she studied physics and mathematics, and published scientific articles and translations. To judge from Voltaire's letters to friends and their commentaries on each other's work, they lived together with great mutual liking and respect. As a literary rather than scientific person, Voltaire implicitly acknowledged her contributions to his 1738 "Elements of the Philosophy of Newton". This was through a poem dedicated to her at the beginning of the text and in the preface, where Voltaire praised her study and contributions. The book's chapters on optics show strong similarities with her own "Essai sur l'optique". She was able to contribute further to the campaign by a laudatory review in the "Journal des savants".
Sharing a passion for science, Voltaire and du Châtelet collaborated scientifically. They set up a laboratory in du Châtelet's home in Lorraine. In a healthy competition, they both entered the 1738 Paris Academy prize contest on the nature of fire, since du Châtelet disagreed with Voltaire's essay. Although neither of them won, both essays received honourable mention and were published. She thus became the first woman to have a scientific paper published by the Academy.
Social life after living with Voltaire.
Du Châtelet's relationship with Voltaire caused her to give up most of her social life to become more involved in her study of mathematics under her teacher Pierre-Louis Moreau de Maupertuis. He introduced the ideas of Isaac Newton to her. Letters written by du Châtelet explain how she felt during the transition from Parisian socialite to rural scholar, from "one life to the next."
Final pregnancy and death.
In May 1748, du Châtelet began an affair with the poet Jean François de Saint-Lambert and became pregnant. In a letter to a friend, she confided her fears that she would not survive her pregnancy. On the night of 4 September 1749 she gave birth to a daughter, Stanislas-Adélaïde. Du Châtelet died on 10 September 1749 at Château de Lunéville, from a pulmonary embolism. She was 42. Her infant daughter died 20 months later.
Scientific research and publications.
Criticizing Locke and the debate on "thinking matter".
In her writings, du Châtelet criticized John Locke's philosophy. She emphasizes the necessity of the verification of knowledge through experience: "Locke's idea of the possibility of "thinking matter" is […] abstruse." Her critique on Locke originated in her commentary on Bernard de Mandeville's "The Fable of the Bees". She resolutely favored universal principles which precondition human knowledge and action, and maintained that this kind of law is innate. Du Châtelet claimed the necessity of a universal presupposition, because if there is no such beginning, all our knowledge is relative. In that way, Du Châtelet rejected Locke's aversion to innate ideas and prior principles. She also reversed Locke's negation of the principle of contradiction, which would constitute the basis of her methodic reflections in the "Institutions". On the contrary, she affirmed her arguments in favor of the necessity of prior and universal principles. "Two and two could then make as well 4 as 6 if prior principles did not exist."
Pierre Louis Moreau de Maupertuis' and Julien Offray de La Mettrie's references to du Châtelet's deliberations on motion, free will, "thinking matter", numbers, and the way to do metaphysics are a sign of the importance of her reflections. She rebuts the claim to finding truth by using mathematical laws, and argues against Maupertuis.
Warmth and brightness.
In 1737 du Châtelet published a paper "Dissertation sur la nature et la propagation du feu", based upon her research into the science of fire. In it she speculated that there may be colors in other suns that are not found in the spectrum of sunlight on Earth.
"Institutions de Physique".
Her book "Institutions de Physique" ("Lessons in Physics") was published in 1740; it was presented as a review of new ideas in science and philosophy to be studied by her 13-year-old son, but it incorporated and sought to reconcile complex ideas from the leading thinkers of the time. The book and subsequent debate contributed to her becoming a member of the Academy of Sciences of the Institute of Bologna in 1746. Du Châtelet originally preferred anonymity in her role as the author, because she wished to conceal her sex. Ultimately, however, "Institutions" was convincing to salon-dwelling intellectuals in spite of the commonplace sexism.
"Institutions" discussed, refuted, and synthesized many ideas of prominent mathematicians and physicists of the time. In particular, the text is famous for discussing ideas that originated with G.W. Leibniz and Christian Wolff, and for using the principle of sufficient reason often associated with their philosophical work. This main work is equally famous for providing a detailed discussion and evaluation of ideas that originated with Isaac Newton and his followers. That combination is more remarkable than it might seem now, since the ideas of Leibniz and Newton were regarded as fundamentally opposed to one another by most of the major philosophical figures of the 18th century.
In chapter I, du Châtelet included a description of her rules of reasoning, based largely on Descartes’s principle of contradiction and Leibniz’s principle of sufficient reason. In chapter II, she applied these rules of reasoning to metaphysics, discussing God, space, time, and matter. In chapters III through VI, du Châtelet continued to discuss the role of God and his relationship to his creation. In chapter VII, she broke down the concept of matter into three parts: the macroscopic substance available to sensory perception, the atoms composing that macroscopic material, and an even smaller constituent unit similarly imperceptible to human senses. However, she carefully added that there was no way to know how many levels truly existed.
The remainder of "Institutions" considered more metaphysics and classical mechanics. Du Châtelet discussed the concepts of space and time in a manner more consistent with modern relativity than her contemporaries. She described both space and time in the abstract, as representations of the relationships between coexistent bodies rather than physical substances. This included an acknowledgement that "absolute" place is an idealization and that "relative" place is the only real, measurable quantity. Du Châtelet also presented a thorough explanation of Newton’s laws of motion and their function on earth.
Forces Vives.
In 1741 du Châtelet published a book titled "Réponse de Madame la Marquise du Chastelet, a la lettre que M. de Mairan". D'Ortous de Mairan, secretary of the Academy of Sciences, had published a set of arguments addressed to her regarding the appropriate mathematical expression for "forces vives" ("living forces"). Du Châtelet presented a point-by-point rebuttal of de Mairan's arguments, causing him to withdraw from the controversy.
Immanuel Kant's first publication in 1747, 'Thoughts on the True Estimation of Living Forces' ("Gedanken zur wahren Schätzung der lebendigen Kräfte"), focused on du Châtelet's pamphlet against the secretary of the French Academy of Sciences, Mairan. Kant's opponent, Johann Augustus Eberhard, accused Kant of taking ideas from du Châtelet. Interestingly, Kant, in his "Observations on the Feeling of the Beautiful and Sublime", wrote sexist critiques of learned women of the time, including Mme Du Châtelet, stating: "A woman who has a head full of Greek, like Mme. Dacier, or who conducts disputations about mechanics, like the Marquise du Châtelet might as well also wear a beard; for that might perhaps better express the mien of depth for which they strive."
Advocacy of kinetic energy.
Although in the early 18th century the concepts of force and momentum had been long understood, the idea of energy as being transferable between different systems was still in its infancy, and would not be fully resolved until the 19th century. It is now accepted that the total mechanical momentum of a system is conserved and that none is lost to friction: there is no 'momentum friction', momentum cannot transfer between different forms, and in particular there is no 'potential momentum'. Energy, by contrast, can change form: mechanical energy, either kinetic or potential, may be lost to another form such as heat, but the total is conserved in time. In the 20th century, Emmy Noether showed that each such conservation law follows from a continuous symmetry of the system expressed in generalized coordinates.
Du Châtelet's contribution was the hypothesis of the conservation of total energy, as distinct from momentum. In doing so, she became the first to elucidate the concept of energy as such, and to quantify its relationship to mass and velocity based on her own empirical studies. Inspired by the theories of Gottfried Leibniz, she repeated and publicized an experiment originally devised by Willem 's Gravesande in which heavy balls were dropped from different heights into a sheet of soft clay. Each ball's kinetic energy - as indicated by the quantity of material displaced - was shown to be proportional to the square of the velocity: She showed that if two balls were identical except for their mass, they would make the same size indentation in the clay if the quantity formula_0 (then called "vis viva") were the same for each ball.
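The scaling in 's Gravesande's experiment can be stated in a few lines of code. The sketch below (an illustration with made-up numbers, not historical data) shows that halving a ball's mass while doubling its drop height leaves the quantity formula_0, and hence the indentation, unchanged, even though the impact momentum differs:

```python
import math

g = 9.81  # m/s^2

def vis_viva(mass_kg: float, drop_height_m: float) -> float:
    """m*v^2 for a ball dropped from rest: v^2 = 2*g*h at impact."""
    return mass_kg * 2 * g * drop_height_m   # proportional to the indentation

def momentum(mass_kg: float, drop_height_m: float) -> float:
    return mass_kg * math.sqrt(2 * g * drop_height_m)

print(vis_viva(1.0, 1.0), vis_viva(0.5, 2.0))   # equal: same indentation
print(momentum(1.0, 1.0), momentum(0.5, 2.0))   # unequal impact momenta
```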
Newton's work assumed the exact conservation of only mechanical momentum. A broad range of mechanical problems in physics are soluble only if energy conservation is included. The collision and scattering of two point masses is one example. Leonhard Euler and Joseph-Louis Lagrange established a more formal framework for mechanics using the results of du Châtelet.
Translation and commentary on Newton's "Principia".
In 1749, the year of du Châtelet's death, she completed the work regarded as her outstanding achievement: her translation into French, with her commentary, of Newton's "Philosophiae Naturalis Principia Mathematica" (often referred to as simply the "Principia"), including her derivation of the notion of conservation of energy from its principles of mechanics. Despite modern misconceptions, Newton's work on his "Principia" was not perfect. Du Châtelet took on the task of not only translating his work from Latin to French, but adding important information to it as well. Her commentary was as essential to her contemporaries as her spreading of Newton's ideas. Du Châtelet's commentary was very extensive, comprising almost two-thirds of volume II of her edition.
To undertake a formidable project such as this, du Châtelet prepared to translate the "Principia" by continuing her studies in analytic geometry, mastering calculus, and reading important works in experimental physics. It was her rigorous preparation that allowed her to add a lot more accurate information to her commentary, both from herself and other scientists she studied or worked with. She was one of only 20 or so people in the 1700s who could understand such advanced math and apply the knowledge to other works. This helped du Châtelet greatly, not only with her work on the "Principia" but also in her other important works like the "Institutions de Physique".
Du Châtelet made very important corrections in her translation that helped support Newton's theories about the universe. Newton, based on the theory of fluids, suggested that gravitational attraction would cause the poles of the earth to flatten, thus causing the earth to bulge outwards at the equator. In Clairaut's "Memoire", which confirmed Newton's hypothesis about the shape of the earth and gave more accurate approximations, Clairaut discovered a way to determine the shape of the other planets in the solar system. Du Châtelet used Clairaut's proposal that the planets had different densities in her commentary to correct Newton's belief that the earth and the other planets were made of homogeneous substances.
Du Châtelet used the work of Daniel Bernoulli, a Swiss mathematician and physicist, to further explain Newton's theory of the tides. This proof depended upon the three-body problem which still confounded even the best mathematicians in 18th century Europe. Using Clairaut's hypothesis about the differing of the planets' densities, Bernoulli theorized that the moon was 70 times denser than Newton had believed. Du Châtelet used this discovery in her commentary of the "Principia", further supporting Newton's theory about the law of gravitation.
Published ten years after her death, today du Châtelet's translation of the "Principia" is still the standard translation of the work into French, and remains the only complete rendition in that language. Her translation was so important that it was the only one in any language used by Newtonian expert I. Bernard Cohen to write his own English version of Newton's "Principia". Du Châtelet not only used the works of other great scientists to revise Newton's work, but she added her own thoughts and ideas as a scientist in her own right. Her contributions in the French translation made Newton and his ideas look even better in the scientific community and around the world, and recognition for this is owed to du Châtelet. This enormous project, along with her "Foundations of Physics", proved du Châtelet's abilities as a great mathematician. Her translation and commentary of the "Principia" contributed to the completion of the scientific revolution in France and to its acceptance in Europe.
Illusions and happiness.
In "", Émilie Du Châtelet argues that illusions are an instrument for happiness. To be happy, “one must have freed oneself of prejudice, one must be virtuous, healthy, have tastes and passions, and be susceptible to illusions...”. She mentions many things one needs for happiness, but emphasizes the necessity of illusions and that one should not dismiss all illusions. One should not abandon all illusions because they can bestow positivity and hope, which can ameliorate one's well-being. But Du Châtelet also warns against trusting all illusions, because many illusions are harmful to oneself. They may cause negativity through a false reality, which can cause disappointment or even limit one’s abilities. This lack of self-awareness from so many illusions may cause one to be self-deceived. She suggests a balance of trusting and rejecting illusions for happiness, so as not to become self-deceived.
In "Foundation of Physics", Émilie Du Châtelet discusses avoiding error by applying two principles – the principle of contradiction and the principle of sufficient reason. Du Châtelet presumed that all knowledge is developed from more fundamental knowledge that relies on infallible knowledge. She states that this infallible fundamental knowledge is most reliable because it is self-explanatory and exists with a small number of conclusions. Her logic and principles are used for an arguably less flawed understanding of physics, metaphysics, and morals.
The principle of contradiction essentially claims that the thing implying a contradiction is impossible. So, if one does not use the principle of contradiction, one will have errors including the failure to reject a contradiction-causing element. To get from the possible or impossible to the actual or real, the principle of sufficient reason was revised by Du Châtelet from Leibniz's concept and integrated into science. The principle of sufficient reason suggests that every true thing has a reason for being so, and things without a reason do not exist. In essence, every effect has a cause, so the element in question must have a reasonable cause to be so.
In application, Émilie Du Châtelet proposed that being happy and immoral are mutually exclusive. According to Du Châtelet, this principle is embedded within the hearts of all individuals, and even wicked individuals have an undeniable consciousness of this contradiction that is grueling. It suggests one cannot be living a happy life while living immorally. So, her suggested happiness requires illusions with a virtuous life. These illusions are naturally given, like passions and tastes, and cannot be created. Du Châtelet recommended we maintain the illusions we receive and work to not dismantle the trustworthy illusions, because we cannot get them back. In other words, true happiness is a blending of illusions and morality. If one merely attempts to be moral, one will not obtain the happiness one deeply seeks. If one just strives for the illusions, one will not get the happiness that is genuinely desired. One needs to cultivate both illusions and morality to attain the sincerest happiness.
Other contributions.
Development of financial derivatives.
Du Châtelet lost the considerable sum for the time of 84,000 francs—some of it borrowed—in one evening at the table at the Court of Fontainebleau, to card cheats. To raise the money to pay back her debts, she devised an ingenious financing arrangement similar to modern derivatives, whereby she paid tax collectors a fairly low sum for the right to their future earnings (they were allowed to keep a portion of the taxes they collected for the King), and promised to pay the court gamblers part of these future earnings.
Biblical scholarship.
Du Châtelet wrote a critical analysis of the entire Bible. A synthesis of her remarks on the Book of Genesis was published in English in 1967 by Ira O. Wade of Princeton in his book "Voltaire and Madame du Châtelet: An Essay on Intellectual Activity at Cirey" and a book of her complete notes was published in 2011, in the original French, edited and annotated by Bertram Eugene Schwarzbach.
Translation of the "Fable of the Bees", and other works.
Du Châtelet translated "The Fable of the Bees" in a free adaptation. She also wrote works on optics, rational linguistics, and the nature of free will.
Support of women's education.
In her first independent work, the preface to her translation of the "Fable of the Bees", du Châtelet argued strongly for women's education, particularly a strong secondary education as was available for young men in the French "collèges". By denying women a good education, she argued, society prevents women from becoming eminent in the arts and sciences.
Legacy.
Du Châtelet made a crucial scientific contribution in making Newton's historic work more accessible in a timely, accurate and insightful French translation, augmented by her own original concept of energy conservation.
A main-belt minor planet and a crater on Venus have been named in her honor, and she is the subject of three plays: "Legacy of Light" by Karen Zacarías; "Émilie: La Marquise Du Châtelet Defends Her Life Tonight" by Lauren Gunderson and "Urania: the Life of Émilie du Châtelet" by Jyl Bonaguro. The opera "Émilie" by Kaija Saariaho is about the last moments of her life.
Du Châtelet is often represented in portraits with mathematical iconography, such as holding a pair of dividers or a page of geometrical calculations. In the early nineteenth century, a French pamphlet of celebrated women ("Femmes célèbres") introduced a possibly apocryphal story of her childhood. According to this story, a servant fashioned a doll for her by dressing up wooden dividers as a doll; however, du Châtelet undressed the dividers, and intuiting their original purpose, drew a circle with them.
The Institut Émilie du Châtelet, which was founded in France in 2006, supports "the development and diffusion of research on women, sex, and gender".
Since 2016, the French Society of Physics (la Société Française de Physique) has awarded the Émilie Du Châtelet Prize to a physicist or team of researchers for excellence in Physics.
Duke University also presents an annual Du Châtelet Prize in Philosophy of Physics "for previously unpublished work in philosophy of physics by a graduate student or junior scholar".
On December 17, 2021, Google Doodle honored du Châtelet.
Émilie du Châtelet was portrayed by the actress Hélène de Fougerolles in the docudrama "Einstein's Big Idea".
{
"math_id": 0,
"text": "mv^2"
}
] | https://en.wikipedia.org/wiki?curid=1002045 |
1002128 | Giant magnetoresistance | Phenomenom involving the change of conductivity in metallic layers
Giant magnetoresistance (GMR) is a quantum mechanical magnetoresistance effect observed in multilayers composed of alternating ferromagnetic and non-magnetic conductive layers. The 2007 Nobel Prize in Physics was awarded to Albert Fert and Peter Grünberg for the discovery of GMR, which also sets the foundation for the study of spintronics.
The effect is observed as a significant change in the electrical resistance depending on whether the magnetization of adjacent ferromagnetic layers are in a parallel or an antiparallel alignment. The overall resistance is relatively low for parallel alignment and relatively high for antiparallel alignment. The magnetization direction can be controlled, for example, by applying an external magnetic field. The effect is based on the dependence of electron scattering on spin orientation.
The main application of GMR is in magnetic field sensors, which are used to read data in hard disk drives, biosensors, microelectromechanical systems (MEMS) and other devices. GMR multilayer structures are also used in magnetoresistive random-access memory (MRAM) as cells that store one bit of information.
In literature, the term giant magnetoresistance is sometimes confused with colossal magnetoresistance of ferromagnetic and antiferromagnetic semiconductors, which is not related to a multilayer structure.
Formulation.
Magnetoresistance is the dependence of the electrical resistance of a sample on the strength of an external magnetic field. Numerically, it is characterized by the value
formula_0
where R(H) is the resistance of the sample in a magnetic field H, and R(0) corresponds to H = 0. Alternative forms of this expression may use electrical resistivity instead of resistance, a different sign for δH, and are sometimes normalized by R(H) rather than R(0).
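Stated as code (a trivial sketch, assuming the common convention of normalizing by R(0)): for a GMR multilayer the resistance falls in an applied field, so the value comes out negative and its magnitude can reach tens of percent:

```python
def magnetoresistance(r_h: float, r_0: float) -> float:
    """delta_H = (R(H) - R(0)) / R(0)."""
    return (r_h - r_0) / r_0

# Illustrative values: 1.5 ohm at zero field dropping to 1.0 ohm in the field.
print(magnetoresistance(1.0, 1.5))   # -0.333..., i.e. a ~33% change
```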
The term "giant magnetoresistance" indicates that the value δH for multilayer structures significantly exceeds the anisotropic magnetoresistance, which has a typical value within a few percent.
History.
GMR was discovered in 1988 independently by the groups of Albert Fert of the University of Paris-Sud, France, and Peter Grünberg of Forschungszentrum Jülich, Germany. The practical significance of this experimental discovery was recognized by the Nobel Prize in Physics awarded to Fert and Grünberg in 2007.
Early steps.
The first mathematical model describing the effect of magnetization on the mobility of charge carriers in solids, related to the spin of those carriers, was reported in 1936. Experimental evidence of the potential enhancement of δH has been known since the 1960s. By the late 1980s, the anisotropic magnetoresistance had been well explored, but the corresponding value of δH did not exceed a few percent. The enhancement of δH became possible with the advent of sample preparation techniques such as molecular beam epitaxy, which allows manufacturing multilayer thin films with a thickness of several nanometers.
Experiment and its interpretation.
Fert and Grünberg studied electrical resistance of structures incorporating ferromagnetic and non-ferromagnetic materials. In particular, Fert worked on multilayer films, and Grünberg in 1986 discovered the antiferromagnetic exchange interaction in Fe/Cr films.
The GMR discovery work was carried out by the two groups on slightly different samples. The Fert group used (001)Fe/(001) Cr superlattices wherein the Fe and Cr layers were deposited in a high vacuum on a (001) GaAs substrate kept at 20 °C and the magnetoresistance measurements were taken at low temperature (typically 4.2 K). The Grünberg work was performed on multilayers of Fe and Cr on (110) GaAs at room temperature.
In Fe/Cr multilayers with 3-nm-thick iron layers, increasing the thickness of the non-magnetic Cr layers from 0.9 to 3 nm weakened the antiferromagnetic coupling between the Fe layers and reduced the demagnetization field, which also decreased when the sample was heated from 4.2 K to room temperature. Changing the thickness of the non-magnetic layers led to a significant reduction of the residual magnetization in the hysteresis loop. Electrical resistance changed by up to 50% with the external magnetic field at 4.2 K. Fert named the new effect giant magnetoresistance, to highlight its difference from the anisotropic magnetoresistance. The Grünberg experiment made the same discovery, but the effect was less pronounced (3% compared to 50%) because the samples were at room temperature rather than low temperature.
The discoverers suggested that the effect is based on spin-dependent scattering of electrons in the superlattice, particularly on the dependence of resistance of the layers on the relative orientations of magnetization and electron spins. The theory of GMR for different directions of the current was developed in the next few years. In 1989, Camley and Barnaś calculated the "current in plane" (CIP) geometry, where the current flows along the layers, in the classical approximation, whereas Levy "et al." used the quantum formalism. The theory of the GMR for the current perpendicular to the layers (current perpendicular to the plane or CPP geometry), known as the Valet-Fert theory, was reported in 1993. Applications favor the CPP geometry because it provides a greater magnetoresistance ratio (δH), thus resulting in a greater device sensitivity.
Theory.
Fundamentals.
Spin-dependent scattering.
In magnetically ordered materials, the electrical resistance is crucially affected by scattering of electrons on the magnetic sublattice of the crystal, which is formed by crystallographically equivalent atoms with nonzero magnetic moments. Scattering depends on the relative orientations of the electron spins and those magnetic moments: it is weakest when they are parallel and strongest when they are antiparallel; it is relatively strong in the paramagnetic state, in which the magnetic moments of the atoms have random orientations.
For good conductors such as gold or copper, the Fermi level lies within the "sp" band, and the "d" band is completely filled. In ferromagnets, the dependence of electron-atom scattering on the orientation of their magnetic moments is related to the filling of the band responsible for the magnetic properties of the metal, e.g., 3"d" band for iron, nickel or cobalt. The "d" band of ferromagnets is split, as it contains a different number of electrons with spins directed up and down. Therefore, the density of electronic states at the Fermi level is also different for spins pointing in opposite directions. The Fermi level for majority-spin electrons is located within the "sp" band, and their transport is similar in ferromagnets and non-magnetic metals. For minority-spin electrons the "sp" and "d" bands are hybridized, and the Fermi level lies within the "d" band. The hybridized "spd" band has a high density of states, which results in stronger scattering and thus shorter mean free path λ for minority-spin than majority-spin electrons. In cobalt-doped nickel, the ratio λ↑/λ↓ can reach 20.
According to the Drude theory, the conductivity is proportional to λ, which ranges from several to several tens of nanometers in thin metal films. Electrons "remember" the direction of spin within the so-called spin relaxation length (or spin diffusion length), which can significantly exceed the mean free path. Spin-dependent transport refers to the dependence of electrical conductivity on the spin direction of the charge carriers. In ferromagnets, it occurs due to electron transitions between the unsplit 4"s" and split 3"d" bands.
In some materials, the interaction between electrons and atoms is the weakest when their magnetic moments are antiparallel rather than parallel. A combination of both types of materials can result in a so-called inverse GMR effect.
CIP and CPP geometries.
Electric current can be passed through magnetic superlattices in two ways. In the current in plane (CIP) geometry, the current flows along the layers, and the electrodes are located on one side of the structure. In the current perpendicular to plane (CPP) configuration, the current is passed perpendicular to the layers, and the electrodes are located on different sides of the superlattice. The CPP geometry results in a GMR more than twice as high, but is more difficult to realize in practice than the CIP configuration.
Carrier transport through a magnetic superlattice.
Magnetic ordering differs in superlattices with ferromagnetic and antiferromagnetic interaction between the layers. In the former case, the magnetization directions are the same in different ferromagnetic layers in the absence of applied magnetic field, whereas in the latter case, opposite directions alternate in the multilayer. Electrons traveling through the ferromagnetic superlattice interact with it much more weakly when their spin directions are opposite to the magnetization of the lattice than when they are parallel to it. Such anisotropy is not observed for the antiferromagnetic superlattice; as a result, it scatters electrons more strongly than the ferromagnetic superlattice and exhibits a higher electrical resistance.
Applications of the GMR effect require dynamic switching between parallel and antiparallel magnetization of the layers in a superlattice. To a first approximation, the energy density of the interaction between two ferromagnetic layers separated by a non-magnetic layer is proportional to the scalar product of their magnetizations:
formula_1
The coefficient "J" is an oscillatory function of the thickness of the non-magnetic layer ds; therefore "J" can change its magnitude and sign. If the ds value corresponds to the antiparallel state then an external field can switch the superlattice from the antiparallel state (high resistance) to the parallel state (low resistance). The total resistance of the structure can be written as
formula_2
where R0 is the resistance of the ferromagnetic superlattice, ΔR is the GMR increment, and θ is the angle between the magnetizations of adjacent layers.
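As a quick numerical illustration of this angular dependence, the Python sketch below evaluates the resistance for a few angles; the values of R0 and ΔR are arbitrary placeholders rather than measured data.

import math

def gmr_resistance(theta_rad, r0=100.0, delta_r=10.0):
    # Resistance of a GMR stack versus the angle between the layer
    # magnetizations: R = R0 + dR * sin^2(theta / 2).
    # r0 and delta_r (in ohms) are illustrative placeholder values.
    return r0 + delta_r * math.sin(theta_rad / 2) ** 2

# theta = 0 (parallel) gives the low-resistance state,
# theta = pi (antiparallel) the high-resistance state R0 + dR.
for theta_deg in (0, 90, 180):
    r = gmr_resistance(math.radians(theta_deg))
    print(f"theta = {theta_deg:3d} deg -> R = {r:.2f} ohm")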
Mathematical description.
The GMR phenomenon can be described using two spin-related conductivity channels corresponding to the conduction of electrons, for which the resistance is minimum or maximum. The relation between them is often defined in terms of the coefficient of the spin anisotropy β. This coefficient can be defined using the minimum and maximum of the specific electrical resistivity ρF± for the spin-polarized current in the form
formula_3
where "ρF" is the average resistivity of the ferromagnet.
Resistor model for CIP and CPP structures.
If scattering of charge carriers at the interface between the ferromagnetic and non-magnetic metal is small, and the direction of the electron spins persists long enough, it is convenient to consider a model in which the total resistance of the sample is a combination of the resistances of the magnetic and non-magnetic layers.
In this model, there are two conduction channels for electrons with various spin directions relative to the magnetization of the layers. Therefore, the equivalent circuit of the GMR structure consists of two parallel connections corresponding to each of the channels. In this case, the GMR can be expressed as
formula_4
Here the subscripts of R denote parallel (↑↑) and antiparallel (↑↓) orientation of the magnetizations in the layers, "χ = b/a" is the thickness ratio of the magnetic and non-magnetic layers, and ρN is the resistivity of the non-magnetic metal. This expression is applicable to both CIP and CPP structures. Under the condition formula_5 this relationship can be simplified using the coefficient of the spin asymmetry
formula_6
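A minimal sketch checking this limit numerically; the parameter values (β = 0.5 and unit resistivities) are placeholders chosen only for illustration.

def rho_spin(rho_f, beta):
    # Spin-resolved resistivities: rho_F+- = 2 * rho_F / (1 +- beta).
    return 2 * rho_f / (1 + beta), 2 * rho_f / (1 - beta)

def delta_h_full(rho_f, beta, chi, rho_n):
    # Full resistor-model expression for the GMR ratio.
    rp, rm = rho_spin(rho_f, beta)
    return (rp - rm) ** 2 / ((2 * rp + chi * rho_n) * (2 * rm + chi * rho_n))

def delta_h_limit(beta):
    # Limit chi * rho_N << rho_F+-: delta_H = beta^2 / (1 - beta^2).
    return beta ** 2 / (1 - beta ** 2)

beta = 0.5
# With a vanishing non-magnetic contribution the two expressions agree.
print(delta_h_full(rho_f=1.0, beta=beta, chi=1e-6, rho_n=1.0))  # ~0.3333
print(delta_h_limit(beta))                                      # 0.3333...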
Such a device, with resistance depending on the orientation of electron spin, is called a spin valve. It is "open" if the magnetizations of its layers are parallel, and "closed" otherwise.
Valet-Fert model.
In 1993, Thierry Valet and Albert Fert presented a model for the giant magnetoresistance in the CPP geometry, based on the Boltzmann equations. In this model the chemical potential inside the magnetic layer is split into two functions, corresponding to electrons with spins parallel and antiparallel to the magnetization of the layer. If the non-magnetic layer is sufficiently thin, then in the external field E0 the corrections to the electrochemical potential and to the field inside the sample take the form
formula_7
formula_8
where "ℓ"s is the average length of spin relaxation, and the z coordinate is measured from the boundary between the magnetic and non-magnetic layers (z < 0 corresponds to the ferromagnetic). Thus electrons with a larger chemical potential will accumulate at the boundary of the ferromagnet. This can be represented by the potential of spin accumulation "V"AS or by the so-called interface resistance (inherent to the boundary between a ferromagnet and non-magnetic material)
formula_9
where "j" is current density in the sample, "ℓ"sN and "ℓ"sF are the length of the spin relaxation in a non-magnetic and magnetic materials, respectively.
Device preparation.
Materials and experimental data.
Many combinations of materials exhibit GMR; typical examples are superlattices of iron or cobalt with chromium or copper spacers, such as Fe/Cr and Co/Cu.
The magnetoresistance depends on many parameters such as the geometry of the device (CIP or CPP), its temperature, and the thicknesses of ferromagnetic and non-magnetic layers. At a temperature of 4.2 K and a thickness of cobalt layers of 1.5 nm, increasing the thickness of copper layers dCu from 1 to 10 nm decreased δH from 80 to 10% in the CIP geometry. Meanwhile, in the CPP geometry the maximum of δH (125%) was observed for dCu = 2.5 nm, and increasing dCu to 10 nm reduced δH to 60% in an oscillating manner.
When a Co(1.2 nm)/Cu(1.1 nm) superlattice was heated from near zero to 300 K, its δH decreased from 40 to 20% in the CIP geometry, and from 100 to 55% in the CPP geometry.
The non-magnetic layers can be non-metallic. For example, δH up to 40% was demonstrated for organic layers at 11 K. Graphene spin valves of various designs exhibited δH of about 12% at 7 K and 10% at 300 K, far below the theoretical limit of 109%.
The GMR effect can be enhanced by spin filters that select electrons with a certain spin orientation; they are made of metals such as cobalt. For a filter of thickness "t" the change in conductivity ΔG can be expressed as
formula_10
where ΔGSV is the change in the conductivity of the spin valve without the filter, ΔGf is the maximum increase in conductivity with the filter, and β is a parameter of the filter material.
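A small sketch of this dependence; the sign convention assumed here, exp(−βt/λ), makes ΔG saturate at ΔGSV + ΔGf for thick filters, and the numerical values are placeholders.

import math

def filter_conductance(t, dg_sv, dg_f, beta, lam):
    # Conductivity change with a spin filter of thickness t; saturates
    # at dg_sv + dg_f for t >> lam / beta (assumed sign convention).
    return dg_sv + dg_f * (1 - math.exp(-beta * t / lam))

for t_nm in (0.5, 2.0, 10.0):
    print(t_nm, filter_conductance(t_nm, dg_sv=1.0, dg_f=0.5, beta=1.0, lam=2.0))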
Types of GMR.
GMR is often classified by the type of device in which the effect is observed.
Films.
Antiferromagnetic superlattices.
GMR in films was first observed by Fert and Grünberg in a study of superlattices composed of ferromagnetic and non-magnetic layers. The thickness of the non-magnetic layers was chosen such that the interaction between the layers was antiferromagnetic and the magnetization in adjacent magnetic layers was antiparallel. Then an external magnetic field could make the magnetization vectors parallel, thereby affecting the electrical resistance of the structure.
Magnetic layers in such structures interact through antiferromagnetic coupling, which results in the oscillating dependence of the GMR on the thickness of the non-magnetic layer. In the first magnetic field sensors using antiferromagnetic superlattices, the saturation field was very large, up to tens of thousands of oersteds, due to the strong antiferromagnetic interaction between their layers (made of chromium, iron or cobalt) and the strong anisotropy fields in them. Therefore, the sensitivity of the devices was very low. The use of permalloy for the magnetic and silver for the non-magnetic layers lowered the saturation field to tens of oersteds.
Spin valves using exchange bias.
In the most successful spin valves the GMR effect originates from exchange bias. They comprise a sensitive layer, a "fixed" layer and an antiferromagnetic layer. The last layer freezes the magnetization direction in the "fixed" layer. The sensitive and antiferromagnetic layers are made thin to reduce the resistance of the structure. The valve reacts to an external magnetic field by changing the magnetization direction in the sensitive layer relative to the "fixed" layer.
The main difference between these spin valves and other multilayer GMR devices is the monotonic dependence of the amplitude of the effect on the thickness "dN" of the non-magnetic layer:
formula_11
where δH0 is a normalization constant, λN is the mean free path of electrons in the non-magnetic material, and "d"0 is an effective thickness that includes the interaction between layers. The dependence on the thickness of the ferromagnetic layer can be given as:
formula_12
The parameters have the same meaning as in the previous equation, but they now refer to the ferromagnetic layer.
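Both thickness dependences are easy to evaluate; in the sketch below, all constants (δH0, δH1, λN, λF, d0) are hypothetical placeholders rather than fitted values.

import math

def delta_h_spacer(d_n, delta_h0=1.0, lam_n=10.0, d0=5.0):
    # Monotonic decay of the GMR amplitude with spacer thickness d_N (nm).
    return delta_h0 * math.exp(-d_n / lam_n) / (1 + d_n / d0)

def delta_h_ferro(d_f, delta_h1=1.0, lam_f=5.0, d0=5.0):
    # GMR amplitude versus ferromagnetic-layer thickness d_F (nm):
    # rises while the layer fills up, then falls off roughly as 1/d_F.
    return delta_h1 * (1 - math.exp(-d_f / lam_f)) / (1 + d_f / d0)

for d in (1, 2, 5, 10, 20):
    print(d, round(delta_h_spacer(d), 3), round(delta_h_ferro(d), 3))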
Non-interacting multilayers (pseudospin valves).
GMR can also be observed in the absence of antiferromagnetic coupling between the layers. In this case, the magnetoresistance results from differences in the coercive forces (which are, for example, smaller for permalloy than for cobalt). In multilayers such as permalloy/Cu/Co/Cu, the external magnetic field switches the direction of saturation magnetization to parallel in strong fields and to antiparallel in weak fields. Such systems exhibit a lower saturation field and a larger δH than superlattices with antiferromagnetic coupling. A similar effect is observed in Co/Cu structures. The existence of these structures means that GMR does not require interlayer coupling, and can originate from a distribution of the magnetic moments that can be controlled by an external field.
Inverse GMR effect.
In the inverse GMR, the resistance is minimum for the antiparallel orientation of the magnetizations in the layers. Inverse GMR is observed when the magnetic layers are composed of different materials, such as NiCr/Cu/Co/Cu. The resistivity for electrons with opposite spins can be written as formula_13; the coefficient β differs between the two ferromagnetic layers and can have opposite signs. If the NiCr layer is not too thin, its contribution may exceed that of the Co layer, resulting in inverse GMR. Note that the GMR inversion depends on the sign of the "product" of the coefficients β in adjacent ferromagnetic layers, but not on the signs of the individual coefficients.
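The role of the sign of the product of the β coefficients can be demonstrated with a toy two-current series model (spacer resistance neglected); the β values and unit resistivities below are hypothetical, loosely mimicking a layer with negative β paired with a Co-like layer with positive β.

def stack_resistance(betas, directions, rho=1.0):
    # Two-current model: each spin channel (s = +1 / -1) sees the layers
    # in series with resistivity 2 * rho / (1 + s * m * beta), and the
    # two channels conduct in parallel.
    def channel(s):
        return sum(2 * rho / (1 + s * m * b) for b, m in zip(betas, directions))
    return 1 / (1 / channel(+1) + 1 / channel(-1))

betas = (-0.3, +0.5)                       # opposite signs -> inverse GMR
r_par = stack_resistance(betas, (+1, +1))  # parallel magnetizations
r_ap = stack_resistance(betas, (+1, -1))   # antiparallel magnetizations
print(r_par > r_ap)  # True: the antiparallel state has the LOWER resistance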
Inverse GMR is also observed if NiCr alloy is replaced by vanadium-doped nickel, but not for doping of nickel with iron, cobalt, manganese, gold or copper.
GMR in granular structures.
GMR in granular alloys of ferromagnetic and non-magnetic metals was discovered in 1992 and subsequently explained by the spin-dependent scattering of charge carriers at the surface and in the bulk of the grains. The grains form ferromagnetic clusters about 10 nm in diameter embedded in a non-magnetic metal, forming a kind of superlattice. A necessary condition for the GMR effect in such structures is poor mutual solubility of the components (e.g., cobalt and copper). Their properties strongly depend on the measurement and annealing temperatures. They can also exhibit inverse GMR.
Applications.
Spin-valve sensors.
General principle.
One of the main applications of GMR materials is in magnetic field sensors, e.g., in hard disk drives and biosensors, as well as detectors of oscillations in MEMS. A typical GMR-based sensor consists of seven layers: a substrate, a binder layer, a sensing (non-fixed) layer, a non-magnetic layer, a fixed layer, an antiferromagnetic (pinning) layer, and a protective layer.
The binder and protective layers are often made of tantalum, and a typical non-magnetic material is copper. In the sensing layer, magnetization can be reoriented by the external magnetic field; it is typically made of NiFe or cobalt alloys. FeMn or NiMn can be used for the antiferromagnetic layer. The fixed layer is made of a magnetic material such as cobalt. Such a sensor has an asymmetric hysteresis loop owing to the presence of the magnetically hard, fixed layer.
Spin valves may exhibit anisotropic magnetoresistance, which leads to an asymmetry in the sensitivity curve.
Hard disk drives.
In hard disk drives (HDDs), information is encoded using magnetic domains, and a change in the direction of their magnetization is associated with the logical level 1 while no change represents a logical 0. There are two recording methods: longitudinal and perpendicular.
In the longitudinal method, the magnetization lies in the plane of the disk. A transition region (domain wall) is formed between domains, in which the magnetic field exits the material. If the domain wall is located at the interface of two north-pole domains then the field is directed outward, and for two south-pole domains it is directed inward. To read the direction of the magnetic field above the domain wall, the magnetization direction is fixed normal to the surface in the antiferromagnetic layer and parallel to the surface in the sensing layer. Changing the direction of the external magnetic field deflects the magnetization in the sensing layer. When the field tends to align the magnetizations in the sensing and fixed layers, the electrical resistance of the sensor decreases, and vice versa.
Magnetic RAM.
A cell of magnetoresistive random-access memory (MRAM) has a structure similar to the spin-valve sensor. The value of the stored bit can be encoded via the magnetization direction in the sensing layer; it is read by measuring the resistance of the structure. The advantages of this technology are independence from the power supply (the information is preserved when the power is switched off, owing to the potential barrier for reorienting the magnetization), low power consumption and high speed.
In a typical GMR-based storage unit, a CIP structure is located between two wires oriented perpendicular to each other. These conductors are called the row and column lines. Pulses of electric current passing through the lines generate a vortex magnetic field, which acts on the GMR structure. The field lines are elliptical, and the field direction (clockwise or counterclockwise) is determined by the direction of the current in the line. In the GMR structure, the magnetization is oriented along the line.
The direction of the field produced by the column line is almost parallel to the magnetic moments, and it cannot reorient them. The row line is perpendicular, and regardless of the magnitude of the field it can rotate the magnetization by only 90°. With the simultaneous passage of pulses along the row and column lines, the total magnetic field at the location of the GMR structure is directed at an acute angle to some magnetic moments and at an obtuse angle to others. If the field exceeds some critical value, the latter change their direction.
There are several storage and reading methods for the described cell. In one method, the information is stored in the sensing layer; it is read via resistance measurement and is erased upon reading. In another scheme, the information is kept in the fixed layer, which requires higher recording currents compared to reading currents.
Tunnel magnetoresistance (TMR) is an extension of spin-valve GMR, in which the electrons travel with their spins oriented perpendicularly to the layers across a thin insulating tunnel barrier (replacing the non-ferromagnetic spacer). This makes it possible to achieve a larger impedance, a larger magnetoresistance value (~10× at room temperature) and a negligible temperature dependence. TMR has now replaced GMR in MRAMs and disk drives, in particular for high area densities and perpendicular recording.
Other applications.
Magnetoresistive insulators for contactless signal transmission between two electrically isolated parts of electrical circuits were first demonstrated in 1997 as an alternative to opto-isolators. A Wheatstone bridge of four identical GMR devices is insensitive to a uniform magnetic field and reacts only when the field directions are antiparallel in the neighboring arms of the bridge. Such devices were reported in 2003 and may be used as rectifiers with a linear frequency response.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\delta_H = \\frac{R(H)-R(0)}{R(0)}"
},
{
"math_id": 1,
"text": "w = - J (\\mathbf M_1 \\cdot \\mathbf M_2). "
},
{
"math_id": 2,
"text": "R = R_0 + \\Delta R \\sin^2 \\frac{\\theta}{2},"
},
{
"math_id": 3,
"text": "\\rho_{F\\pm}=\\frac{2\\rho_F}{1\\pm\\beta},"
},
{
"math_id": 4,
"text": "\\delta_H = \\frac{\\Delta R}{R}=\\frac{R_{\\uparrow\\downarrow}-R_{\\uparrow\\uparrow}}{R_{\\uparrow\\uparrow}}=\\frac{(\\rho_{F+}-\\rho_{F-})^2}{(2\\rho_{F+}+\\chi\\rho_N)(2\\rho_{F-}+\\chi\\rho_N)}."
},
{
"math_id": 5,
"text": "\\chi\\rho_N \\ll \\rho_{F\\pm}"
},
{
"math_id": 6,
"text": "\\delta_H = \\frac{\\beta^2}{1-\\beta^2}."
},
{
"math_id": 7,
"text": "\\Delta\\mu = \\frac{\\beta}{1-\\beta^2}eE_0\\ell_se^{z/\\ell_s},"
},
{
"math_id": 8,
"text": "\\Delta E = \\frac{\\beta^2}{1-\\beta^2}eE_0\\ell_se^{z/\\ell_s},"
},
{
"math_id": 9,
"text": "R_i= \\frac{\\beta(\\mu_{\\uparrow\\downarrow}-\\mu_{\\uparrow\\uparrow})}{2ej} = \\frac{\\beta^2\\ell_{sN}\\rho_N}{1+(1-\\beta^2)\\ell_{sN}\\rho_N/(\\ell_{sF}\\rho_F)},"
},
{
"math_id": 10,
"text": "\\Delta G = \\Delta G_{SV} + \\Delta G_f (1 - e^{\\beta t/\\lambda}),"
},
{
"math_id": 11,
"text": "\\delta_H(d_N) = \\delta_{H0} \\frac{\\exp\\left(-d_N/\\lambda_N\\right)}{1 + d_N/d_0},"
},
{
"math_id": 12,
"text": "\\delta_H(d_F) = \\delta_{H1} \\frac{1 - \\exp\\left(-d_F/\\lambda_F\\right)}{1 + d_F/d_0}."
},
{
"math_id": 13,
"text": "\\rho_{\\uparrow,\\downarrow}=\\frac{2\\rho_F}{1\\pm\\beta}"
}
] | https://en.wikipedia.org/wiki?curid=1002128 |
Dataset Card
This dataset is created from the English Wikipedia dump file (enwiki-20240901-pages-articles-multistream.xml.bz2), available for download from Wikimedia Dumps. It includes pages containing mathematical content, extracted using the wikimathextractor tool, which is an adaptation of the wikiextractor specifically designed to extract mathematical contents.
Dataset Sources
- Repository: s-kat0/wikimathextractor
- Data Source: Wikipedia Dumps
Direct Use
This dataset is intended for use in tasks that require the processing and understanding of mathematical content in natural language, such as mathematical question answering and symbolic math problem solving.
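A minimal loading sketch with the Hugging Face datasets library; the repository path below is a hypothetical placeholder and should be replaced with this dataset's actual id.

from datasets import load_dataset

# "user/wiki-math-dataset" is a placeholder id, not the real repository.
ds = load_dataset("user/wiki-math-dataset", split="train")

sample = ds[0]
print(sample["title"], sample["url"])
print(len(sample["formulas"]), "formulas extracted from this article")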
Dataset Structure
Data Instances
A typical data entry in the dataset corresponds to one Wikipedia article that includes mathematical content. Here is an example:
{
'id': '26513034',
'title': 'Pythagorean theorem',
'text': 'Relation between sides of a right triangle\nIn mathematics, the Pythagorean theorem or Pythagoras\' theorem is a fundamental relation in Euclidean geometry between the three sides of a right triangle. ...',
'formulas': [
{'math_id': 0, 'text': 'a^2 + b^2 = c^2 .'},
{'math_id': 1, 'text': 'a + b'}, ...
],
'url': 'https://en.wikipedia.org/wiki?curid=26513034'
}
Data Fields
- id (str)
- a string representing the unique identifier of the article.
- title (str)
- the title of the Wikipedia article.
- text (str)
- the text content of the article, with some tags removed. Details are available in the GitHub repository.
- formulas (list[dict[str, int | str]])
- a list of mathematical expressions extracted from the article, where each formula is represented as follows (see the sketch after this list).
- math_id: an index representing the position of the formula in the text.
- text: the textual representation of the formula, without surrounding markup.
- url (str)
- the URL of the Wikipedia article.
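As a usage sketch, the helper below splices each formula's source back into the article text at its formula_N placeholder; it assumes every placeholder has a matching math_id entry, which is the convention described above.

import re

def inline_formulas(entry):
    # Map math_id -> formula source for this article's formulas.
    lookup = {f["math_id"]: f["text"] for f in entry["formulas"]}

    def replace(match):
        idx = int(match.group(1))
        # Leave unknown placeholders untouched rather than guessing.
        return f"${lookup[idx]}$" if idx in lookup else match.group(0)

    return re.sub(r"formula_(\d+)", replace, entry["text"])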
Data Splits
The dataset is provided in a single training split:
- train: 38,661 rows
The DatasetDict structure is as follows:
DatasetDict({
'train': Dataset({
features: ['id', 'title', 'text', 'formulas', 'url'],
num_rows: 38661
})
})
Dataset Creation
Please refer to the GitHub repository for details about the dataset construction.
Source Data
- Original Source: English Wikipedia available from Wikipedia Dumps
Data Collection and Processing
Please refer to the GitHub repository for details about the dataset construction.
Who are the source data producers?
Shota Kato
Bias, Risks, and Limitations
This dataset is in English and contains English Wikipedia page related to mathematical topics.
Acknowledgements
This dataset was constructed with reference to several projects and tools, notably wikiextractor (on which wikimathextractor is based) and the Wikimedia Dumps.