
From Cathode Rays to the Television

The Nature of Cathode Rays

The inconsistent behaviour of cathode rays led to the emergence of two competing hypotheses. Some physicists, such as Goldstein, Hertz and Lenard, thought the rays were a form of electromagnetic radiation of very short wavelength. Others, such as Crookes and Thomson, believed the rays were streams of negatively charged matter moving at great velocity.

Properties that fitted the wave model:
  • They travelled in straight lines
  • If an opaque object was placed in their path, a shadow of the object appeared
  • They could pass through thin metal foils without damaging them

Properties that fitted the particle model:
  • The rays left the cathode at right angles to the surface (instead of propagating outwards like a wave)
  • They were deflected by magnetic fields
  • Small paddlewheels turned when placed in the path of the rays (possession of momentum)
  • They travelled considerably more slowly than light.

Cathode Ray Tubes

A cathode ray tube, or discharge tube, is a sealed glass tube from which most of the air has been removed by a vacuum pump. A beam of charged particles (electrons) travels from the cathode to the anode and can be manipulated through deflection by electric and/or magnetic fields.

Experiments involving cathode ray tubes

Various experiments have been conducted using cathode ray tubes to ascertain the nature of cathode rays.

When an electric field was applied to cathode rays, it was evident that the rays were negatively charged, as they bent towards the positive plate. This property indicated that cathode rays were likely to be particles.
When a magnetic field was applied, it was again evident that cathode rays were negatively charged, as they were deflected in the direction expected for negatively charged particles. This property also indicated that cathode rays were likely to be particles.
When an object (shown here, a Maltese cross) was placed in front of the cathode rays, a shadow of the object was cast onto a phosphor screen behind it. This indicated that cathode rays travelled in straight lines, and so could be either particles or light.
When a small paddlewheel was placed in the path of the rays, it was observed to rotate, indicating that the rays were imparting momentum to it. This property indicated that cathode rays were almost certainly particles.

Forces on Charged Particles

Charges that move through magnetic fields interact with those magnetic fields because their movement gives rise to a second magnetic field. It is the two magnetic fields interacting which produces forces that change the motion of the charge.

If a particle with charge q is moving with velocity v, perpendicular to a magnetic field of strength B, the particle will experience a magnetic force F, given by:

F = qvB

The direction of the force is given by the right hand palm rule. (If the particle has a negative charge, the direction of the conventional current is opposite to that of the velocity. Alternatively, use your left hand instead of your right hand for negative charges). If the velocity is at an angle θ to the magnetic field, the force is given by:

F = qvB sin θ

To find the direction of the force in this case, use the component of the velocity perpendicular to the magnetic field and the right-hand palm rule.
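As a numerical check on the force law, the following Python sketch evaluates F = qvB sin θ. The charge, speed and field values are illustrative only, not taken from the text:

```python
import math

def magnetic_force(q, v, B, theta_deg=90.0):
    """Force on a charge q (C) moving at speed v (m/s) through a magnetic
    field of strength B (T), at angle theta to the field: F = qvB sin(theta).
    theta defaults to 90 degrees (velocity perpendicular to the field)."""
    return q * v * B * math.sin(math.radians(theta_deg))

# Illustrative example: an electron (q = 1.602e-19 C) moving at 1.0e7 m/s
# perpendicular to a 2.0e-3 T field.
print(magnetic_force(1.602e-19, 1.0e7, 2.0e-3))  # ~3.2e-15 N
```

At 90° the sine factor is 1, recovering the simpler F = qvB; at smaller angles only the velocity component perpendicular to the field contributes.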

Charged Plates

A uniform electric field can be made by placing charges on two parallel plates which are separated by a small distance compared with their length. The electric field is directed from the positive to the negative plate.

When a potential difference is applied to the plates, the magnitude or intensity of the uniform electric field can be determined by:

E = V/d

Where

  • E is the electric field intensity, in volts per metre (V/m)
  • V is the potential difference, in volts
  • d is the distance between the two plates, in metres
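A quick numerical sketch of the relation E = V/d (the voltage and plate separation below are illustrative):

```python
def field_between_plates(V, d):
    """Uniform electric field strength E (V/m) between two parallel plates
    with potential difference V (volts), separated by d (metres): E = V/d."""
    return V / d

# Illustrative example: 200 V applied across plates 5 cm apart.
print(field_between_plates(200.0, 0.05))  # 4000.0 V/m
```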

Electric Field Strength due to a Point Charge

Recall that the direction of the electric field is defined as the direction in which a positive charge will experience a force when placed in an electric field.

For a positive charge, lines of force leave the centre of the charge and radiate in all directions from it. For a negative charge, the lines are directed radially into the centre of the charge. Remember, field lines never cross (and this applies to electric fields, magnetic fields, gravitational fields, and all other fields).

Electric field lines that are close together represent strong fields. Field lines that are well separated represent weak fields. The strength of the electric field due to a point charge diminishes with distance from the charge proportional to the inverse square of distance (i.e. electric fields follow the inverse square law).

The magnitude (or intensity or strength) of an electric field at a particular point is determined by finding the force acting on a unit charge placed at that point:

E = F/q

This definition can also be rearranged to give the force experienced by a point charge q in an electric field, that is: F = qE.

Where

  • E is the electric field intensity, in newtons per coulomb (N/C), equivalent to volts per metre (V/m)
  • F is the force on the charged particle due to the electric field, in N
  • q is the electric charge, in C
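The definition and its rearrangement can be sketched numerically (the force and charge values are illustrative):

```python
def field_strength(F, q):
    """E = F/q: field intensity from the force F (N) on a charge q (C)."""
    return F / q

def force_on_charge(q, E):
    """The rearranged form: F = qE."""
    return q * E

# Illustrative example: a 3.2e-17 N force acting on one elementary charge.
E = field_strength(3.2e-17, 1.6e-19)
print(E)                            # ~200 N/C
print(force_on_charge(1.6e-19, E))  # recovers ~3.2e-17 N
```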

Charge to mass ratio of an electron

J.J. Thomson built a cathode ray tube with charged parallel plates to provide a uniform electric field and coils to provide a uniform magnetic field. The fields were oriented at right angles to each other and this had the effect of producing forces on the cathode rays that directly opposed each other.

Thomson’s experiment involved two stages:

Stage one:

Varying the magnetic field and electric field until their opposing forces cancelled, leaving the cathode rays undeflected. By equating the magnetic and electric force equations, Thomson was able to determine the velocity of the cathode ray particles in terms of E and B.

So, equating the two forces:

qE = qvB

v = E/B

Stage two:

Applying the same strength magnetic field alone (with the electric plates turned off) and determining the radius of the circular path travelled by the charged particle in the magnetic field. The magnetic force provided the centripetal force, causing the rays to bend in a circular arc with a fixed and measurable radius.

qvB = mv²/r

q/m = v/(Br) = E/(B²r)   (substituting v = E/B from stage one)
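Putting the two stages together, Thomson's calculation can be sketched in Python. The field strengths and radius below are hypothetical values chosen only to illustrate the arithmetic:

```python
def thomson_charge_to_mass(E, B, r):
    """Thomson's two-stage result. Stage one (balanced fields) gives
    v = E/B; stage two (magnetic force acting centripetally,
    qvB = m*v**2/r) gives q/m = v/(B*r) = E/(B**2 * r)."""
    v = E / B
    return v / (B * r)

# Hypothetical values: E = 2.0e4 V/m, B = 1.0e-3 T, measured radius 0.114 m.
print(thomson_charge_to_mass(2.0e4, 1.0e-3, 0.114))  # ~1.75e11 C/kg
```

With these assumed inputs the result comes out close to the accepted charge-to-mass ratio of the electron, about 1.76 × 10¹¹ C/kg.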

Cathode Ray Tubes in Televisions and CROs

The cathode ray tube is used in cathode ray oscilloscopes (CROs) and televisions (TVs).

A cathode ray tube in these devices consists of three main components:

  1. Electron gun: this produces a narrow beam of electrons. It consists of the heating filament, which heats the cathode, releasing electrons by thermionic emission. A number of electrodes are used to control the ‘brightness’ of the beam, to focus the beam and accelerate the electrons along the tube. The positive anodes accelerate the electrons, making them move at high speed.
  2. Fluorescent screen: the inside of the end of the tube is coated with a fluorescent material (e.g. zinc sulfide or other phosphor coatings). When an electron beam (i.e. a steady stream of electrons produced by an electron gun) hits the screen, the coating fluoresces (glows) due to the energy carried by the fast electrons, and a spot of light is seen on the screen. The screen can provide a trace of the movement of the electron beam, which produces the visible output.
  3. Deflection system: this may consist of two sets of parallel plates which are charged to produce electric fields perpendicular to each other to deflect the beam of electrons vertically and/or horizontally. Deflection coils may also establish a magnetic field that controls the deflection of the electron beam from side to side and up and down.

Televisions

Before modern televisions that use LCDs and Plasma screens, conventional televisions used a cathode ray tube as their output device. A colour TV camera records images through three coloured filters: red, green and blue. The information is transmitted to the receiver which then directs the appropriate signal to one of the three electron guns, each corresponding to one of the primary colours. The picture is then reconstituted on the screen by an additive process involving three coloured phosphors. Each electron gun stimulates its appropriate phosphor.

Each TV image is made up of hundreds of horizontal lines of dots. The deflection coils are varied to scan the screen twice for each image. Each picture is formed from two passes of the electron beam. The odd-numbered lines are drawn first, then the even-numbered lines. Dots of phosphorescent paint on the screen convert the energy of the electron beam into coloured light. When seen together, the many dots on the screen form the colour TV image we see.

Cathode ray oscilloscopes

This is an electronic device used to view electrical signals as waveforms. Because of its ability to make variations in any electric current visible, it is used as a test instrument in acoustics, communications, electronics, heart monitoring, etc.

A CRO uses a CRT to display a variety of electrical signals. The horizontal deflection is usually provided by a time base, which allows the voltage to be plotted as a function of time. This enables complex waveforms or very short pulses to be displayed and measured.

The Quantum model of light

Hertz’s experiment and the photoelectric effect

Hertz’s experiment

Heinrich Hertz wanted to produce EM waves with frequencies other than those of visible light. Before Hertz's experiments, the scientist and mathematician James C. Maxwell had predicted the existence of a spectrum of electromagnetic waves, including EM waves at frequencies other than visible light. Hertz's experimental aim was to confirm this prediction. In his experiments, Hertz used an induction coil to produce sparks, which, he correctly believed, should produce strong radio waves.

Hertz observed that when a small length of wire was bent into a loop so that there was a small gap and held near the sparking induction coil, a secondary spark would jump across the gap in the loop. He observed that this occurred when a spark jumped across the terminals of the induction coil. This sparking occurred even though the loop was not connected to a source of electrical current.

Since there was no physical contact between the transmitter and the receiver, Hertz concluded that the sparks across the gap of the induction coil emitted invisible electromagnetic waves, which induced the sparks across the gap in the receiver loop.

The Photoelectric Effect

The photoelectric effect refers to the release of electrons from a metal surface caused by exposure to electromagnetic radiation. The effect was first observed by Hertz in 1887 in the experiment described.

Hertz enclosed the receiver loop in a dark case, so as to make his observations easier. When he did this, he noticed the maximum spark length in the receiver loop became much smaller. Hertz also found that the induced sparks were stronger in the receiver when it was illuminated by UV light. However, when he placed a quartz screen (which blocks UV radiation) over the receiver loop, the induced spark became small again.

What was actually happening was that the UV radiation was giving energy to the electrons in the receiver loop via the photoelectric effect. The energised electrons caused the maximum length of the secondary sparks to increase significantly. When the quartz screen was used, the UV light was blocked, preventing it from boosting the energy of the electrons in the receiver loop. Therefore, disabling the photoelectric effect reduced the maximum spark length.

In his experiment, Hertz noted this observation, but failed to investigate it further. It was not until Einstein that the photoelectric effect was fully understood.

Hertz’s experiment and the speed of radio waves

In order to prove his invisible waves were electromagnetic waves just like visible light, Hertz carried out another experiment. He was able to reflect radio waves off sheets of metals. He was also able to refract the radio waves using a prism of pitch. He showed that the waves could be polarised by reorientating the spark gap of the receiver. Hertz therefore demonstrated that his waves had the properties of reflection, refraction and polarisation, in common with light and all electromagnetic radiation.

Most significantly, Hertz was able to measure the speed of radio waves and show that it was equal to the speed of light, thus proving his waves to be part of the electromagnetic spectrum. To do this, Hertz calculated the wavelength of his radiation by studying the interference pattern produced when one ray, reflected off a metal plate to a detector, was superimposed on a ray that travelled directly to the detector, as shown below. Knowing the frequency of the oscillator producing the sparks, he used v = fλ to find the speed of his waves.
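The final step is just v = fλ. A minimal sketch with illustrative numbers (an assumed oscillator frequency and wavelength, not Hertz's actual data):

```python
# Assumed oscillator frequency and measured wavelength (illustrative only).
f = 5.0e7          # frequency of the spark oscillator, in Hz
wavelength = 6.0   # wavelength from the interference pattern, in metres

v = f * wavelength  # v = f * lambda
print(v)  # 300000000.0 m/s, i.e. 3.0e8 m/s: the speed of light
```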

Black body radiation and Planck’s hypothesis

The radiation emitted by a hot body depends not only on the temperature but also on the material of which the body is made, its shape, and the nature of its surface. Such details make it hard to understand thermal radiation in simpler theoretical terms. Therefore we can define the concept of an ‘ideal radiator’ for which the spectrum of the emitted thermal radiation depends only on the temperature of the radiator and not on any other factors.

Such an ideal radiator is called a black body. A black body is defined as a body that has the following properties:

  • Black bodies absorb all electromagnetic radiation that falls on them
  • No electromagnetic radiation passes through them
  • No electromagnetic radiation is reflected
  • Electromagnetic radiation is emitted at a frequency characteristic of the body’s temperature (called blackbody radiation)

All objects approximate black bodies to some extent. For example, a sheet of cloth gets hot if left in the sun, as it absorbs sunlight and warms up. A piece of paper held close to a lamp will get warm after absorbing its light for a while.

All hot objects emit approximate blackbody radiation as well. For example, a piece of metal will glow red hot if heated sufficiently. The Sun is yellow because its surface temperature is about 6000 K. Some stars glow blue-white, because their surface temperatures can be as high as 20,000 K.

The Ultraviolet Catastrophe

At all temperatures, black bodies emit radiation at all wavelengths, but the intensity of emission of each wavelength varies.

As black bodies become hotter, the peak wavelength shifts to smaller wavelengths (higher frequencies) and this is responsible for the fact that hotter objects begin to glow red hot, then orange, white and eventually blue-white.
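The notes do not quantify this shift, but its standard quantitative form is Wien's displacement law, λ_max = b/T with b ≈ 2.898 × 10⁻³ m K (the law itself is an addition here, not stated in the text above). A small sketch:

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, in metre-kelvins

def peak_wavelength(T):
    """Wien's displacement law: lambda_max = b / T, for temperature T in kelvin."""
    return WIEN_B / T

print(peak_wavelength(6000))   # ~4.8e-7 m: visible light, like the Sun's surface
print(peak_wavelength(20000))  # ~1.4e-7 m: ultraviolet, like blue-white stars
```

The hotter body peaks at the shorter wavelength, matching the red → orange → white → blue-white progression described above.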

Before Planck and Einstein, the classical theory of light predicted that for all black body radiation curves, as the wavelength of the radiation emitted becomes shorter, the radiation intensity would increase without limit.

The diagram on the right shows the black body radiation curve predicted by the classical theory of light (the black curve) as compared to what is measured in reality.

The prediction of infinite intensity emissions of short wavelengths by classical theory is obviously incorrect. If this were true, all objects around us would continually emit infinite energy in the form of harmful X-rays and gamma rays, which is obviously not what actually happens. This incorrect prediction of the classical theory was called the ‘ultraviolet catastrophe’.

Planck’s hypothesis of quantised light

The problem with the classical theory of light was that the mathematics it was based on predicted radiation curves that go to infinity as wavelength approaches zero. (For the HSC, you do not need to understand the maths behind classical theory or why it leads to the ultraviolet catastrophe.)

Max Planck, a German scientist working in the early 1900s, proposed that the radiation emitted from black bodies could only be emitted in certain discrete amounts. Think of light being emitted in discrete packets of energy, each containing a fixed amount of energy, instead of a continuous beam. Black bodies could also only absorb radiation in discrete amounts. Essentially, Planck proposed that black body radiation (and indeed all light) was quantised, with each ‘packet’ of light holding an amount of energy equal to:

E = hf

Where

  • E is the energy of the photon, in joules
  • h is Planck’s constant, equal to 6.626 × 10⁻³⁴ J s
  • f is the frequency of radiation, in hertz

Using this formula, the energy of each packet of light (called a photon) can be calculated if we know the frequency of the light, or vice versa.
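A minimal sketch of that calculation (the frequency chosen is illustrative):

```python
PLANCK_H = 6.626e-34  # Planck's constant, in J s

def photon_energy(f):
    """E = hf: energy of a single photon of frequency f (Hz), in joules."""
    return PLANCK_H * f

# Illustrative example: green light, frequency about 5.5e14 Hz.
print(photon_energy(5.5e14))  # ~3.6e-19 J per photon
```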

Planck initially introduced quantisation as a mathematical device for fitting the black body curves, but Einstein noticed that by treating black body radiation as genuinely quantised, the mathematics worked out to match experimental data, solving the ultraviolet catastrophe.

Planck’s theory was the precursor to modern quantum physics, and is hugely important to our modern understanding of physics.

The Photoelectric Effect

The photoelectric effect is the emission of electrons (called ‘photoelectrons’) from a surface (usually a metal) when light is shone on it. It is the principle behind modern technologies such as solar cells and light-sensitive diodes.

Problems with the classical theory of light

The classical theory of light viewed light as purely of a wave nature. According to this theory, light shining on a piece of metal will be absorbed by the electrons in the metal, and the electrons will accumulate energy until they have enough kinetic energy to be ejected from the metal surface.

However, it was observed experimentally that for all metals, below a certain threshold frequency of light, no electrons were ejected from the metal’s surface, regardless of how long the light was applied or how bright it was.

 

This suggested that whether electrons were ejected off a metal’s surface depended solely on the frequency of the incident light, not on its brightness or time of exposure. This critical frequency was found to be different for different metals.

Another prediction of the classical theory of light was that the intensity of the incident light determined the energy of the photoelectrons emitted. However, Lenard’s experiments showed that doubling the intensity would double the number of electrons ejected, while there was no change in the kinetic energy of individual electrons.

Therefore, clearly the classical theory of light was unable to explain the observations in reality.

Einstein’s explanation of the photoelectric effect

Einstein used Planck’s theory in which the particles of light, or photons, carried energy in discrete amounts, and proposed the following assumptions:

  1. Light exists as photons, each with an energy E = hf
  2. Light intensity depends on the number of photons (the more photons, the greater the intensity)
  3. All photons of a particular frequency have precisely the same amount of energy. Photons with the highest energy correspond to light of the highest frequency.
  4. To produce the photoelectric effect, the energy contained in the light photons must be equal to, or greater than, the energy required to overcome the forces holding the electrons to the surface. The energy required to release the electron from the surface is called the ‘work function’, denoted as W.
  5. If the energy of the photon is greater than the work function, the additional energy of the photon, above the work function level, provides the kinetic energy, K, of the photoelectrons.
  6. All photons, regardless of their frequency, have zero rest mass and travel at the speed of light in a vacuum.

Einstein’s photoelectric equation is:

E = hf = W + K

Where

  • E is the energy of the photon that ejects the photoelectron. E is also the total energy the photoelectron gains from the photon.
  • W is the work function, the amount of energy needed to eject the photoelectron
  • K is the leftover energy after the photoelectron has been ejected, which becomes the kinetic energy of the electron (measured experimentally via a stopping voltage)

Measuring K in terms of volts

One coulomb of charge moving across a potential difference of one volt gains (or releases) one joule of energy. Therefore, volts can be thought of as a measure of the kinetic energy of electrons per unit of charge: the higher the voltage, the more energy per coulomb.

Therefore, experimentally we can measure the K of photoelectrons by making the electrons move against a potential difference. If we increase the potential difference until exactly zero current is observed, that stopping potential equals the K of the electrons, in electron-volts (eV, not joules).
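Einstein's equation and the stopping-voltage measurement can be sketched together in Python. The work function and light frequency below are hypothetical, chosen only to illustrate the arithmetic:

```python
H = 6.626e-34         # Planck's constant, J s
E_CHARGE = 1.602e-19  # magnitude of the electron charge, C

def max_kinetic_energy(f, W):
    """Einstein's photoelectric equation rearranged: K = hf - W (joules).
    Below the threshold frequency (hf < W) no photoelectron is ejected,
    so the kinetic energy is zero."""
    return max(H * f - W, 0.0)

def stopping_voltage(K):
    """The potential difference that just reduces the photocurrent to zero:
    V_stop = K / e, so K expressed in eV equals V_stop in volts."""
    return K / E_CHARGE

# Hypothetical metal with work function 3.2e-19 J, lit at f = 7.0e14 Hz.
K = max_kinetic_energy(7.0e14, 3.2e-19)
print(K)                                    # ~1.4e-19 J
print(stopping_voltage(K))                  # ~0.9 V
print(max_kinetic_energy(1.0e14, 3.2e-19))  # 0.0: below threshold, no emission
```

Note how the third call reproduces the threshold-frequency observation: below the threshold, no amount of intensity or exposure time produces photoelectrons.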

Einstein’s contribution to quantum theory

Einstein contributed greatly both to quantum theory itself and to its acceptance. His greatest contribution is that he was the first to take quantum theory seriously and actively advocate it. Einstein’s two early contributions were using quantum theory to successfully explain:

  • The ultraviolet catastrophe and understanding black body radiation
  • The photoelectric effect

The success of using quantum theory to explain these phenomena led to the acceptance of the theory.

Einstein’s work led to further discoveries, including the heat capacities of solids and the Compton effect (not in the HSC). The experimental results from the photoelectric effect and the Compton effect gave strong evidence that light behaves both as a wave and as a particle, supporting the quantum model.

Line spectra of atoms and molecules were available in Planck’s time, but they were not interpreted in terms of energy quantisation until Planck and Einstein developed the concept of the photon and quantised light.

Assessment

By using quantum theory to explain the ultraviolet catastrophe and the photoelectric effect, Einstein validated quantum theory. Our entire knowledge of physics on the atomic scale today is based on the quantum theory that began with Planck, Einstein, De Broglie, Heisenberg and a few others – and Einstein played a pivotal role in moving the theory forward in the early 20th century. Therefore, Einstein made a very significant contribution to quantum theory.

Historical perspective – social and political forces on scientific research

Einstein and Planck initially held differing views as to the relationship between science and politics, but in the end they both came to realise the two were intrinsically linked.

Einstein:
  • Einstein at first refused to support the use of science to help governments fight the war, believing that science should be removed from social and political forces. His views were those of a pacifist, and he believed that science should not be a tool of governments in waging war.
  • However, with the extreme persecution of the Jews in Nazi Germany, Einstein emigrated to the US, where he supported the Manhattan Project by co-signing the letter urging President Roosevelt to develop atomic weapons before Germany did. This support may be seen as a change in his views, as scientific contributions led to the development of a weapon that helped end the war.

Planck:
  • Planck clearly believed that science is not separate from political and social forces. Planck believed in the German cause during the war, but at the same time he protected many of his Jewish colleagues from Nazi persecution. Planck was one of the first German intellectuals to sign a document supporting the role of Germany in the war.
  • During the war, he devoted his work and research to whatever the war effort required of him. Although he eventually came to resent and oppose the Nazi regime, he still believed that science is not separate from political and social forces.

 

Both Planck and Einstein are representative of both views of the wider debate in science that continues even today as to whether political forces should hold sway as to the direction of scientific research.

   

Thermionic and Solid state devices

Both thermionic and solid state devices are capable of switching and modifying electrical signals. They are required for all computer circuitry and electronic devices.

Thermionic devices

A typical thermionic device is a vacuum tube, as shown on the right. Similar to cathode ray tubes, vacuum tubes consist of a cathode that emits electrons when heated (this is called thermionic emission).

The electrons then move to an anode. This simple device can be used to rectify currents (convert AC into DC) acting as a diode. Similar switching devices can also be made using vacuum tubes, and the earliest radios and televisions in the early 20th century were all made using vacuum tubes.

Solid state devices

A solid state device uses semiconductors to direct the flow of electrons and does not require a heating circuit. The junction between a p-type semiconductor and an n-type semiconductor (called a p-n junction) acts as a diode, allowing current to flow in one direction only. When a p-n junction is combined further with a p or n semiconductor, transistors can be made, which can switch and amplify electrical signals. Transistors are the basic components of all computers – billions of them combined form a modern CPU.

Reasons why solid state replaced thermionic devices

Thermionic and solid state devices can both be used to manipulate electrical signals. However, in the modern age, thermionic devices were almost completely replaced by solid state devices.

Advantages of solid state devices:
  • Much smaller and lighter; thermionic devices required heavy glass and complex parts
  • Can operate with much lower voltages, whereas thermionic tubes require high voltages for the cathode to emit electrons
  • Much more robust and durable due to their small size and lightness; could be incorporated into tiny devices (e.g. laptops, iPods), whereas thermionic devices were fragile, being made of glass encasing a vacuum
  • Longer lifespan; thermionic devices had a relatively short lifespan, due to the gradual poisoning of the cathode by impurities in the tubes
  • Much cheaper once mass produced; silicon is the second most abundant element in the Earth’s crust, so semiconductors are made from cheap materials

Advantages of thermionic devices:
  • Did not require an understanding of solid state physics
  • Immune to electromagnetic disturbances such as EMP shockwaves

 

 

The advantages of solid state devices based on silicon greatly outweighed the few advantages of thermionic devices, leading to the eventual dominance of solid state devices in electronics, computing, integrated circuits etc.

The Invention of transistors

Limitations driving discovery

The biggest problem with communication technology in the early days of the radio was achieving amplification. The received signal was extremely weak and could not produce a loud sound without being amplified. This problem meant researchers were always trying to improve amplification technology to address the shortcomings with thermionic vacuum tubes (such as their high failure rate, high power consumption, their weight and their warm-up time).

When researchers first looked into the properties of semiconductors, they realised their potential for making amplification devices. With the development of solid state devices and their advantages over thermionic devices (discussed above), transistors replaced vacuum tube triodes in signal amplification applications.

The impact of transistors on society

By 1960 transistors began to rapidly replace vacuum tubes in electronics. The development of integrated circuits (a single silicon chip with many transistors within the same chip) paved the way for the rapid development of solid state based electronics (all of modern electronics today!).

The invention of transistors and consequently microprocessors (large integrated circuits with high computing power) enabled the building of small, efficient computers that now have widespread applications throughout society as well as in scientific research. For example, today almost every household has at least one computer.

This has allowed the automation of repetitive tasks which has led to higher quality of life. The spread of computers led to the widespread adoption of the internet, which has increased society’s connectedness and increased information accessibility.

The rapid changes stemming from these new related technologies (computers and modern electronics in general) have led to temporary unemployment and redundancy of workers at each stage of development; however these economic side-effects are temporary and necessary for progress.

Therefore overall transistors have had an extremely positive impact on society.

Superconductivity

The Braggs and Crystal Lattice structure

Definitions

  • Interference: the interaction of two or more waves producing regions of maximum amplitude (constructive interference) and zero amplitude (destructive interference).
  • Diffraction: the spreading out of light waves around the edge of an object or when light passes through a small aperture.

The Braggs’ experiment

When x-rays enter a crystal such as sodium chloride, they are scattered in all directions. In some directions, the scattered waves undergo destructive interference, resulting in dimmer areas; in other directions, the interference is constructive, resulting in brighter areas. The scattering is caused by diffraction, and by analysing the interference pattern and knowing the wavelength of light used, we can mathematically deduce the space between atoms in the crystal lattice.

The Braggs used x-ray diffraction to determine crystal structure, proposing that the small wavelengths of x-rays meant that they could penetrate the surface of matter and diffract from the atomic lattice planes within the crystals. This technique of analysis, called x-ray crystallography, involves placing a crystal of a substance on a stand and shining a beam of x-rays through the crystal. This produces a pattern of dark and light areas on photographic paper placed in the path of the x-rays, which are spots of constructive and destructive interference.

The Braggs observed that the regions of maximum intensity occurred in specific directions. The pattern of maximum and minimum intensity occurs as if X-rays were diffracted by a series of parallel planes of atoms, forming the crystal lattice. By analysing the interference pattern, the Braggs were able to discover the structure of crystal lattices that made up certain materials tested (e.g. ionic substances like NaCl, or metals).
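The spacing deduction uses the Braggs' diffraction condition, nλ = 2d sin θ, where θ is the glancing angle of the x-rays to the lattice planes (the condition itself is standard but not written out in the text above). A sketch of the rearrangement for d, with illustrative numbers roughly matching copper K-alpha x-rays on an NaCl crystal:

```python
import math

def bragg_spacing(n, wavelength, theta_deg):
    """Bragg's condition n*lambda = 2*d*sin(theta), rearranged for the
    lattice-plane spacing d. theta is the glancing angle, in degrees."""
    return n * wavelength / (2.0 * math.sin(math.radians(theta_deg)))

# Illustrative: first-order maximum (n = 1), 1.54e-10 m x-rays, theta = 15.9 deg.
print(bragg_spacing(1, 1.54e-10, 15.9))  # ~2.8e-10 m between lattice planes
```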

Structure of Metals

Metals can be pictured as consisting of fixed positive ions making up the crystal lattice, surrounded by a ‘sea’ of delocalised electrons that are free to move about the crystal lattice.

Because of the random direction of movement of these electrons, with equal numbers moving in each direction, a steady state is established (no net transport of electric charge).

 

When an electric field is applied, it produces a small component of velocity in the direction opposite to the field. This is what allows metals to conduct electricity.

Conduction in metals

As discussed, electrons in metals exist in a delocalised ‘sea of electrons’. Because these electrons are free to move, under an electric field, a current may flow. This gives metals a low electrical resistance.

Resistance in metals

Despite being excellent conductors, metals still experience electrical resistance under normal conditions. Free electron movement is impeded by vibrations in the lattice, which are caused by temperature – the higher the temperature of the metal, the more its crystal lattice vibrates.

The vibrating lattice collides with free moving electrons, deflecting or scattering them and taking some of their energy away. We measure this as a voltage drop whenever current flows through a resistor. As the temperature is increased, the lattice ions vibrate with greater amplitude and electrons moving through the metal collide with increased frequency. This implies that electrical conductivity increases with falling temperature of metals.

Superconductors and the ‘BCS theory’

Superconductivity is a state of matter that occurs when a conductor is cooled below a certain threshold temperature, at which point its electrical resistance drops to zero. For example, when mercury is cooled below about 4 K, it becomes superconducting and has zero electrical resistance. The temperature at which this state occurs is called the critical temperature, Tc.

BCS theory

In 1957 John Bardeen, Leon Cooper, and John Robert Schrieffer proposed a theory to explain why materials lose all resistance and become superconductors at their critical temperature. This was the BCS theory. The central idea of the BCS model is the formation of pairs of electrons during conduction. These pairs of electrons are called ‘Cooper pairs’.

Pairs of electrons that would normally repel each other are bound to one another by the action of phonons. Phonons are packets of vibrational energy in the crystal lattice (analogous to sound waves, but travelling within the crystal itself).

According to the BCS theory, as one electron passes by positively charged ions in the lattice of the superconductor, the lattice distorts.

This distortion is due to an attraction of the positive ions of the lattice to the first electron. In distorting, an area of increased positive charge concentration forms behind the first electron, and attracts a closely following second electron that chases this region of increased positive charge.

The result is that the two electrons are bound together by the phonon, which creates a region of higher positive charge between them. The two electrons form what is called a Cooper pair, a stable state as the electrons move through the lattice. They remain coherent (together) and pass through the lattice unimpeded.

As long as the superconductor is maintained below its critical temperature, the Cooper pairs are able to stay together (or at least constantly re-form). When the temperature exceeds the critical temperature, superconductivity is lost, as vibrations within the crystal lattice become too strong for Cooper pairs to survive.
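BCS theory also makes a quantitative prediction not stated in the text: the energy needed to break a Cooper pair at absolute zero is approximately 2Δ(0) ≈ 3.53·k_B·Tc (the standard weak-coupling BCS result). A sketch of that relation:

```python
K_B = 1.381e-23  # Boltzmann constant, joules per kelvin

def pair_binding_energy_eV(T_c_kelvin):
    """Standard BCS weak-coupling estimate of the energy needed to break
    a Cooper pair at 0 K: 2*Delta(0) ~ 3.53 * k_B * T_c, in electron-volts."""
    return 3.53 * K_B * T_c_kelvin / 1.602e-19

# Mercury, T_c ~ 4 K: the binding energy is only about a milli-electron-volt
print(f"{pair_binding_energy_eV(4):.2e} eV")
```

The tiny binding energy explains why superconductivity is so fragile: even modest thermal vibrations above Tc carry enough energy to break the pairs apart.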

Limitations of BCS theory

The BCS theory of superconductivity is simply the idea that lattice distortions at low temperatures lead to the formation of Cooper pairs.

This theory was extremely successful at explaining superconductivity in Type 1 superconductors (pure metals with critical temperatures below 30 K). However, it is still unable to explain superconductivity in Type 2 superconductors. Type 2 superconductors are ceramic-based materials that retain superconductivity at far higher temperatures, up to 138 K. This is because the BCS model predicts 30 K as the maximum temperature at which Cooper pairs are able to form.

Advantages and Limitations of Superconductors

This relatively recently discovered phenomenon has potential applications in engineering, such as power generation, motors, magnetic levitation, and anywhere powerful magnets are required, such as in MRI scanners. However, there are limitations to the potential applications of superconductivity.

Advantages:
- Since there is no loss of electrical energy when superconductors carry current, relatively narrow wires made of superconducting materials can carry huge currents with no power loss.
- Environmental benefits accrue from the higher efficiency of power generation, transmission, distribution and use of electric power using superconductors, reducing the pollution produced by fossil fuel power plants.
- Research continues into high-temperature superconductors. The higher the Tc of the superconductor material, the less expensive it is to run. It is hoped that materials which are superconductive at room temperature will be discovered, making running costs negligible.

Limitations:
- There is a maximum current that superconducting materials can carry, called the critical current density. Currents above that threshold turn the superconductor into a normal conductor even though it may be below its critical temperature. Similarly, there is a critical flux density: a sufficiently powerful external magnetic field can cause superconductivity to break down.
- The cost is prohibitive for immediate replacement of existing technologies, and such costs may outweigh any benefits to the environment.
- Current high-temperature superconductors are all ceramic-based materials (mixtures of copper, oxygen and rare-earth metals). These materials are brittle and subject to cracking and breaking, like glass. They cannot be drawn into thin wires, so it is impractical to make coils out of them.
- It is expensive to maintain superconductors at temperatures below their Tc. Even the highest-temperature superconductors today require liquid nitrogen to keep within range, making running costs prohibitively expensive for most applications.

Superconductors and their critical temperatures

Common elements that become superconductors below their Tc are known as Type 1 superconductors. Superconductors consisting of multiple elements are classified as Type 2 superconductors; they are often ceramic in nature. Type 2 superconductors have a higher Tc than Type 1 superconductors.

Type 1 (Tc in kelvin):
- Aluminium: 1.2
- Lead: 7
- Mercury: 4
- Zinc: 0.85

Type 2 (Tc in kelvin):
- Hg12Tl3Ba30Ca30Cu45O127: 138
- Bi2Sr2CuO6 (BSCCO): 110
- YBa2Cu3O7 (YBCO): 92
- SmFeAs: 43
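One practical reading of this table: liquid nitrogen boils at 77 K, so only materials with Tc above 77 K can be held in the superconducting state by liquid nitrogen, the coolant mentioned in the limitations above. A sketch, using the Tc values from the table:

```python
LIQUID_NITROGEN_K = 77  # boiling point of liquid nitrogen in kelvin

critical_temps_K = {  # values taken from the table above
    "Aluminium": 1.2, "Lead": 7, "Mercury": 4, "Zinc": 0.85,
    "Hg12Tl3Ba30Ca30Cu45O127": 138, "Bi2Sr2CuO6 (BSCCO)": 110,
    "YBa2Cu3O7 (YBCO)": 92, "SmFeAs": 43,
}

# Materials that remain superconducting at liquid-nitrogen temperature
nitrogen_coolable = [name for name, tc in critical_temps_K.items()
                     if tc > LIQUID_NITROGEN_K]
print(nitrogen_coolable)  # only the high-Tc Type 2 ceramics qualify
```

All of the Type 1 elements (and SmFeAs) fall below 77 K and so need far more expensive helium-based cooling, which is why the high-Tc ceramics attract so much practical interest despite their brittleness.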

 


 © 1998 Cetin BAL - GSM: +90  05366063183 - Turkiye / Denizli