
What Are Proximity Sensors?

Those of you who use a mobile phone with a touch-screen may have wondered why items on the screen do not trigger when you hold the phone to your ear during a call. Designers of touch-screen phones have built in a feature that prevents the "my ear took that stupid picture, not me" situation. The savior here is a tiny sensor placed close to the speaker of the phone: this proximity sensor suppresses touch-screen activity whenever anything comes very close to the speaker. That is why your ear can touch the screen while you are on a call without generating any touch events.

So, what sort of proximity sensor do phones use? In most cases, it is an optical sensor, a light-sensing device. The sensor measures the ambient light intensity and provides a "near" or "far" output. When nothing covers the sensor, the ambient light falling on it produces a "far" reading, which keeps the touch-screen active.

When you are on a call, your ear covers the sensor, blocking ambient light from reaching it. Its output changes to "near", and the phone ignores all touch-screen activity until the sensor changes state again. Of course, the phone has to handle further complications, such as what happens when the ambient light itself is very low, but let us move on to the different types of proximity sensors instead.
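In pseudocode terms, the gating logic amounts to something like the sketch below. Note that read_ambient_light() and event.dispatch() are hypothetical stand-ins; real phones expose a dedicated proximity API rather than raw lux values.

```python
# Minimal sketch of the near/far decision described above.
# read_ambient_light() is a hypothetical driver call returning lux.

NEAR_THRESHOLD_LUX = 5.0  # below this, assume the sensor is covered

def proximity_state(read_ambient_light) -> str:
    """Return 'near' when the sensor is covered, 'far' otherwise."""
    lux = read_ambient_light()
    return "near" if lux < NEAR_THRESHOLD_LUX else "far"

def handle_touch_event(event, read_ambient_light):
    # Ignore touches while the ear (or anything else) covers the sensor.
    if proximity_state(read_ambient_light) == "near":
        return  # drop the event: the screen is disabled during a call
    event.dispatch()  # hypothetical: pass the touch on to the UI
```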

Proximity sensors of various types detect nearby objects. Usually, a proximity sensor activates an electrical circuit when an object either makes contact with it or comes within a certain distance of it. The sensing mechanism is what differentiates the types: inductive, capacitive, acoustic, piezoelectric and infrared.

You may have seen doors that open automatically when you step up to them. When you come close, the weight of your body changes the output of a piezoelectric sensor placed under the floor near the door, triggering a mechanism that opens it.

Cars avoid bumping into walls while backing up with the help of an acoustic proximity sensor: a transmitter and sensor pair fitted on the rear of the car. The transmitter generates a high-frequency sound signal, and the sensor measures how long the signal takes to bounce back from the wall. The round-trip time shrinks as the car approaches the wall, telling the driver when to stop.
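The distance calculation itself is simple: with the speed of sound known, the distance is half the round-trip time multiplied by that speed. A minimal sketch:

```python
# Acoustic distance measurement: the sensor times an ultrasonic echo,
# and distance = speed * round-trip time / 2.

SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 °C

def distance_to_obstacle(round_trip_seconds: float) -> float:
    """Distance in metres from the echo's round-trip time."""
    return SPEED_OF_SOUND_M_S * round_trip_seconds / 2.0

# Example: an echo returning after 5.8 ms puts the wall about 1 m away.
print(distance_to_obstacle(0.0058))  # ~0.99 m
```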

Computer screens inside ATM kiosks and the screen on your mobile are examples of capacitive proximity sensing. When you put a finger on the screen, the device detects the change in the screen's capacitance. It measures the capacitance change in two directions, horizontal and vertical (x and y), to pinpoint the exact location of your finger and operate the function directly underneath.
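Conceptually, the controller scans the grid and picks the cell with the largest capacitance change; real controllers also interpolate between electrodes for finer resolution, which this toy sketch omits:

```python
# Toy illustration of locating a touch on a capacitive grid: scan rows (y)
# and columns (x) and report where the capacitance change is largest.

def locate_touch(delta_c):
    """delta_c: 2D list of capacitance changes, indexed [row][col]."""
    best, best_val = (None, None), 0.0
    for y, row in enumerate(delta_c):
        for x, value in enumerate(row):
            if value > best_val:
                best_val, best = value, (x, y)
    return best  # (x, y) of the touched cell, or (None, None) if untouched

grid = [[0.0, 0.1, 0.0],
        [0.1, 2.4, 0.3],   # strong change where the finger rests
        [0.0, 0.2, 0.1]]
print(locate_touch(grid))  # (1, 1)
```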

When a security guard checks you with a wand, or you walk through a metal-detector door, the guard may ask you to remove your watch, the coins from your pocket and, in many cases, even your belt. The reason is that the wand or the door frame contains an inductive proximity sensor that triggers in the presence of metals, especially iron and steel.

Finally, the fire detector in your home or office is a classic example of a proximity sensor working on infrared principles. A level of infrared radiation beyond a threshold triggers the alarm, and brings the fire brigade rushing.

How Does the Touch Screen on a Mobile Phone Work?

The mobile phone is an amazing piece of work. Earlier you had to press buttons; now you just touch an app on your screen and it comes to life. You can even pinch your pictures to zoom in on a detail, or zoom out to see more of the scene. Moving your finger on the screen makes it scroll up, down, left or right.

The technology behind this wizardry is called the touch-screen. It is an extra transparent layer sitting on the actual liquid crystal display, the LCD screen of your mobile. This layer is sensitive to touch and can convert the touch into an electrical signal, which the computer inside the phone can understand.

Touch screens come in three main types – resistive, capacitive and infrared – depending on how they detect a touch.

In a resistive touch-screen, there are multiple layers separated by thin gaps. When you press the surface of the screen with a finger or a stylus, the outer layer is pushed into the inner layers and the resistance between them changes. A circuit measuring this resistance tells the device where you are touching the screen. Since the finger or stylus has to deform the screen to change its resistance, resistive touch-screens need considerably more pressure than capacitive ones.
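To make the idea concrete, here is a rough sketch of how a 4-wire resistive controller might read out a touch position. The adc_read() function and the screen dimensions are hypothetical stand-ins, not a real driver API:

```python
# Rough sketch of a 4-wire resistive read-out: drive a voltage across one
# layer and read the divided voltage off the other layer with an ADC.
# adc_read() is a hypothetical function returning a 10-bit code (0..1023).

def touch_position(adc_read, width_px=480, height_px=800):
    # Drive the X layer, read the X position from the Y layer, then swap.
    x_code = adc_read(axis="x")   # 0..1023 across the screen width
    y_code = adc_read(axis="y")   # 0..1023 across the screen height
    x = x_code * width_px // 1024   # scale ADC code to pixel coordinates
    y = y_code * height_px // 1024
    return x, y
```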

Capacitive touch-screens work on a different principle from resistive ones. Here the change measured is not in resistance but in capacitance. A glass surface on the LCD senses the conductive properties of the skin on your fingertip when you touch it. Since the surface does not rely on pressure, capacitive touch-screens are more responsive and can handle gestures such as swiping or pinching (multi-touch). Unlike resistive screens, however, a capacitive screen responds only to a bare finger – not to an ordinary stylus or a gloved finger, and certainly not to long fingernails. Capacitive touch-screens are more expensive and are found on high-end smartphones such as those from Apple, HTC and Samsung.

As screens grow larger – TVs, interactive displays in banking machines, military applications – resistive and capacitive touch sensing quickly become inadequate, and infrared touch screens are more customary.

Instead of an overlay on the screen, infrared touch screens have a frame surrounding the display. The frame has light sources on one side and light detectors on the other. The light sources emit infrared rays across the screen in the form of an invisible optical grid. When any object touches the screen, the invisible beam is broken, and the corresponding light sensor shows a drop in the signal output.
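The detection logic follows directly from that description: a touch is reported where a horizontal and a vertical beam are both interrupted. A minimal sketch, with detector outputs normalised so that 1.0 means an unobstructed beam:

```python
# Infrared grid logic: each axis has a row of emitters and detectors; a
# touch is where both a horizontal and a vertical beam are broken.

THRESHOLD = 0.5  # fraction of the unobstructed detector signal

def find_touch(x_signals, y_signals):
    """x_signals/y_signals: detector outputs, 1.0 when the beam is clear."""
    broken_x = [i for i, s in enumerate(x_signals) if s < THRESHOLD]
    broken_y = [j for j, s in enumerate(y_signals) if s < THRESHOLD]
    if broken_x and broken_y:
        return broken_x[0], broken_y[0]  # beam indices of the touch point
    return None  # no touch detected

print(find_touch([1.0, 1.0, 0.2, 1.0], [1.0, 0.1, 1.0]))  # (2, 1)
```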

Although infrared touch-screens are the most accurate and responsive of the three types, they are expensive and have other disadvantages: the failure rate is high, because the diodes that generate the infrared rays fail often.

What Is Electromagnetic Interference (EMI) And How Does It Affect Us?

(Image: snap-on ferrite for EMI suppression)

What Is Electromagnetic Interference (EMI) And How Does It Affect Us?

Electromagnetic interference, abbreviated EMI, is the interference caused by an electromagnetic disturbance affecting the performance of a device, transmission channel, or system. It is also called radio frequency interference, or RFI, when the interference is in the radio frequency spectrum.

All of us encounter EMI in our everyday life. Common examples are:

• Disturbance in the audio/video signals on radio or TV due to an aircraft flying at low altitude

• Noise on microphones when a cell phone handshakes with a communication tower to process a call

• A welding machine or a kitchen mixer/grinder generating undesired noise on the radio

• The requirement to switch off cell phones in flight, particularly during take-off and landing, since EMI from an active cell phone interferes with navigation signals

EMI is of two types: conducted, in which there is physical contact between the source and the affected circuits, and radiated, in which the disturbance couples without direct contact, through induction or radiation.

The EMI source carries rapidly changing electrical currents, and may be natural (lightning, solar flares) or man-made (switching heavy electrical loads such as motors and lifts on or off). EMI may interrupt, obstruct, or otherwise cause an appliance to under-perform or even sustain damage.

In radio astronomy parlance, EMI is termed radio-frequency interference (RFI): any signal within the observed frequency band emanating from a source other than the celestial objects themselves. Because the RFI level is often much larger than the intended signal, it is a major impediment in radio astronomy.

Susceptibility to EMI and Mitigation

Analog amplitude modulation and other older technologies cannot differentiate between desired and undesired signals, and hence are more susceptible to in-band EMI. Recent technologies like Wi-Fi are more robust, using error-correcting codes to minimize the impact of EMI.

All integrated circuits are potential sources of EMI, but they become significant only in conjunction with physically larger components, such as printed circuit boards, heat sinks and connecting cables, which act as antennas. Mitigation techniques include the use of surge arresters or transzorbs (transient absorbers), decoupling capacitors, and so on.

Spread-spectrum and frequency-hopping techniques help both analog and digital communication systems to combat EMI. Other solutions like diversity, directional antennae, etc., enable selective reception of the desired signal. Shielding with RF gaskets or conductive copper tapes is often a last option on account of added cost.
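To illustrate the frequency-hopping idea in the abstract (a toy sketch, not any real radio standard), both ends can derive the same pseudo-random channel sequence from a shared seed, so a narrowband interferer corrupts only the occasional hop rather than the whole link:

```python
import random

# Toy frequency-hopping illustration: both ends seed an identical PRNG,
# so they hop through the same channel sequence in lockstep.

CHANNELS = [2402 + 2 * k for k in range(40)]  # assumed 2.4 GHz grid, in MHz

def hop_sequence(shared_seed: int, hops: int):
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS) for _ in range(hops)]

tx = hop_sequence(shared_seed=1234, hops=5)
rx = hop_sequence(shared_seed=1234, hops=5)
assert tx == rx  # both ends agree on where to transmit/listen next
print(tx)
```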

RFI detection in software is a modern method of handling in-band RFI. It can detect interfering signals in the time, frequency or time-frequency domains, and ensures that these signals are excluded from further analysis of the observed data. This technique is useful in radio astronomy, but less effective against EMI from most man-made sources.

EMI is sometimes put to deliberate use as well, as in modern warfare, where it is generated intentionally to jam enemy radio networks for strategic advantage.

Regulations to contain EMI

The International Special Committee on Radio Interference (CISPR) created global standards covering recommended emission and immunity limits, and these led to regional and national standards such as the European Norms (EN). Conforming to these regulations adds cost in some cases, but giving electronic systems an agreed level of immunity enhances their perceived quality for most present-day applications.

Can capacitors act as a replacement for batteries?

It is common knowledge that capacitors store electrical energy. One could infer that this energy could be extracted and used in much the same way as a battery's. Why, then, can capacitors not replace batteries?

Conventional capacitors discharge rapidly, whereas batteries discharge slowly, as most electrical loads require. However, a newer class of capacitors with capacitances of the order of 1 farad or higher, called supercapacitors:

• Can store electrical energy, much like batteries
• Can be discharged gradually, like batteries
• Can be recharged rapidly – in seconds rather than the hours batteries need
• Can be recharged again and again without degradation (batteries have a limited life and hold progressively less charge as they age, until they can no longer be recharged)

The supercapacitor would thus appear to be one up on the battery in performance and longevity, and further research could actually lead to a viable alternative to conventional fuel for automobiles. It is this concept that gave rise to hybrid, fuel-efficient cars.

However, let us not jump to conclusions without considering all the aspects. For one, the research required to refine this technology would be both time- and cost-intensive, and the outcome must justify that effort. The advantages enumerated above must be weighed carefully against the negatives, some of which are:

• Supercapacitors’ energy density (watt-hours per kg) is much lower than that of batteries, leading to enormous capacitor sizes
• Quick charging requires very high voltages and/or currents. As an illustration, charging a 100 kWh battery in 10 seconds from a 500 V supply would need a current of 72,000 A (a quick check of these numbers follows this list). This would be a safety challenge, besides needing huge, heavily insulated cables and a stout supporting structure
• The sheer size of the charging infrastructure would call for robotic systems – a cumbersome and expensive set-up whose cost and complexity of operation and maintenance at multiple locations could defeat its purpose
• Primary power to enable the stations to function may not be available at remote locations
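A quick back-of-the-envelope check of those charging figures:

```python
# Verify the 72,000 A figure quoted above: energy / time gives power,
# and power / voltage gives the required charging current.
energy_kwh = 100          # battery capacity
charge_time_s = 10        # target charge time
supply_voltage_v = 500

energy_j = energy_kwh * 1000 * 3600        # 360,000,000 J
power_w = energy_j / charge_time_s         # 36 MW
current_a = power_w / supply_voltage_v
print(current_a)  # 72000.0
```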
Many therefore prefer the traditional "battery bank" instead. The major problems with lead-acid battery banks are the phenomenal rise in the cost of lead and the use of corrosive acid; warm climates accelerate the chemical degradation, shortening battery life.

A better solution, as often advocated, is the century-old nickel-iron (NiFe) battery. These batteries need minimal maintenance: the electrolyte, a non-corrosive and safe lithium compound, has to be changed only once every 12-15 years. For a full charge, it is preferable to charge NiFe batteries from a capacitor bank connected in parallel, rather than with a lead-acid battery charger.

Though NiFe batteries are typically up to one and a half times more expensive, their lower maintenance cost more than offsets the difference over their lifetime.

To summarize, supercapacitor technology still has to evolve substantially before it can actually replace batteries, although it offers a promising alternative.

(Image courtesy of eet.com)

The Future of Cloud Computing

What is Cloud Computing?

Cloud Computing is designed to deliver IT services that are consumable on demand, scalable to user needs and charged on a pay-per-use model – an efficient way to balance handling voluminous data against keeping costs competitive. Businesses are progressively veering towards retaining their core competencies and shedding the non-core ones in favour of on-demand technology, business innovation and savings.

Delivery Options
• Infrastructure-as-a-Service (IaaS): Delivers computing hardware such as servers, network and storage. Typical features are:
a) Users use resources but have no control of the underlying cloud infrastructure
b) Users pay for what they use
c) Flexible, scalable infrastructure without extensive pre-planning
• Storage-as-a-Service (STaaS): Provides storage resources as a pay-per-use utility to end users. This can be considered a type of IaaS and has similar features.
• Platform-as-a-Service (PaaS): Provides a comprehensive stack for developers to create cloud-ready business applications. Its features are:
a) Supports web-service standards
b) Dynamically scalable as per demand
c) Supports multi-tenant environments
• Software-as-a-Service (SaaS): Delivers hosted business applications as a service. Common features include:
a) User applications run on cloud infrastructure
b) Accessible by users through a web browser
c) Suitable for CRM (Customer Relationship Management) applications
d) Supports multi-tenant environments

There are broadly three categories of cloud, namely Private, Hybrid and Public.

Private Cloud
• All components reside within the user organization's firewalls
• Automated, virtualized infrastructure (servers, network and storage) that delivers services
• Existing infrastructure can be reused
• Can be managed by the user or by a vendor
• Controlled network bandwidth
• The user defines and controls data access and security to meet the agreed SLA (Service Level Agreement)

Advantages:
a) Direct, easy and fast end-user access to data
b) Chargeback to the user groups concerned, while maintaining control over data access and security

Public Cloud
• Easy, quick, affordable data sharing
• Most components reside outside the user organization's firewalls, in a multi-tenant infrastructure
• Users access applications and storage either at no cost or on a pay-per-use basis
• Enables small and medium users for whom owning a private cloud is not viable
• Weaker SLAs
• Does not offer a high level of data security or protection against corruption

Hybrid Cloud
• Leverages the advantages of both private and public clouds
• Users benefit from standardized or proprietary technologies and lower costs
• The user defines which services and data are kept outside the organization's own firewalls
• Smaller user outlay, pay-per-use model
• Assured returns for the cloud provider from a multi-tenant environment, bringing economies of scale
• Better security from high-quality SLAs and a stringent security policy

Future Projections and Driving User Segments

1. Media & entertainment – enabling direct access to streaming music, video, interactive games, etc., on users' devices without building huge infrastructure.
2. Social/collaboration – cloud computing enables more and more utilities on Facebook, LinkedIn, etc. With a user base of nearly one-fifth of the world's population, this is a major driving application.
3. Mobile/location – clouds offering location and mobility through smartphones enable everything from email to business deals and more.
4. Payments – the payments cloud, a rather complex environment involving sellers, buyers, regulatory authorities, etc., is a relatively slow-growth area.

Overall, Cloud Computing is a potent tool for fulfilling users' business ambitions and, with little competition to date, is poised for a bright future.

Is your anti-virus software really effective?

A popular notion floats around that anti-virus software simply does not work. Some sections of the press propagate the view that the products sold by anti-virus companies are largely ineffective in combating computer viruses. Studies also feed these views, such as one conducted by a digital security agency in the USA, which infers that the high rate of virus growth on the internet outsmarts the bulk of commercially available anti-virus software: the products fail to keep track of new viruses and to protect computers adequately against them. Consequently, their effectiveness is said not to be commensurate with their cost.

Some leading anti-virus providers have openly dismissed these findings, on the grounds that the sample sizes were ridiculously small to be statistically valid, and have declared the methodology inappropriate and unsound. They further consider the validation method – simply examining digital signatures – poor and unscientific, since the study never ran the samples on the live PCs that such anti-virus software is actually supposed to protect.

Scanning signatures to detect malware is just one of several recognized methods of identifying viruses. Real anti-virus protection involves a lot more than the aforementioned study presumed. To be really useful, a complete suite of such methods must work in tandem; that is the real safeguard against viruses.

Consider vehicle security, which could be a combination of an ignition lock, a door lock, a gear lock, a steering lock, an immobilizer and, more recently, a GPS tracker, to name a few. Each provides a part of the protection, using commercially available tools. The owner must decide which of these he wants to obtain and what he is willing to pay for them. A lopsided decision may defeat the very purpose of the protection: install only a GPS tracker and an immobilizer, and a burglar may still break a window and happily walk away with the expensive stereo, laptops and other valuables in the car, which neither device is equipped to sense.

It is rather unjust, then, to make the sweeping statement that anti-virus tools afford no protection, without first deciding the level of security desired and implementing solutions commensurate with it. One needs to understand, with expert advice where necessary, the implications of methods like firewalls, anti-phishing and anti-spam, including what each can and cannot protect.

Another analogy to elucidate this concept is the performance of an orchestra, which does not depend solely on the violinist or the pianist, or even the entire range of musicians. Other important factors affect the performance, such as the conductor, the acoustics, the seats, the audience, and so on.

Irrespective of what popular opinion makes it out to be, if one is clear about what one wants to protect and uses the proper tools, one is very unlikely to conclude that anti-virus software serves no useful purpose.

Energy Harvesting – How & Why

What Is Energy Harvesting – Why Is It Needed?

Energy Harvesting is the process of extracting small quantities of energy from one or more natural, inexhaustible sources, then accumulating and storing it for later use at an affordable cost. The specially developed electronic devices that enable this are termed Energy Harvesting Devices.

The world is facing an acute energy crisis and global warming, stemming from the rapid depletion of traditional energy sources such as oil, coal and other fossil fuels, which are on the verge of exhaustion. Not only is the global economy suffering, but the damage to the environment also threatens our very existence: natural calamities like earthquakes, tsunamis, droughts, floods and storms have become the order of the day. Meanwhile, economic growth generates a spiralling demand for energy, goading us to tap alternative sources on a war footing and find immediate solutions to meet our energy needs.

Alternative Energy Sources Available

Nature offers many practically inexhaustible sources of energy, and these forms of energy are available almost free if harvested close to where they are needed. Sources include solar energy, wind energy, tidal energy, energy from ocean waves, bio energy, electromagnetic energy, chemical energy, and so on.

Recent Advances in Technology

The sources listed above individually provide minuscule quantities of energy. The challenge before us is to gather these minuscule amounts and generate meaningful quantities of energy at an affordable cost. Until very recently, this remained an unfulfilled challenge.

Today, research and innovation have resulted in more efficient devices to capture minute amounts of energy from these sources and convert them into electrical energy. Better technology has also lowered power consumption, and hence raised power efficiency. These have been the major propelling factors for better, more efficient energy harvesting techniques, making them a viable solution – one considered more reliable and relatively maintenance-free compared to wall sockets, expensive batteries, and the like.

Basic Building Blocks of an Energy Harvesting System

An Energy Harvesting System essentially consists of:

a) One or more sources of renewable energy (solar, wind, ocean or other type of energy)
b) An appropriate transducer to capture the energy and to convert it into electrical energy (such as solar cells for use in conjunction with solar power, a windmill for wind power, a turbine for hydro power, etc.)
c) An energy harvesting module to accumulate, store and control electrical power
d) A means of conveying the power to the user application (such as a transmission line)
e) The user application that consumes the power
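To see why the accumulation and storage role of the harvesting module (item c) matters, here is a back-of-the-envelope energy-budget sketch for a duty-cycled load. All the numbers are illustrative assumptions, not measurements:

```python
# Energy budget check for a harvesting system like the one outlined above:
# the harvester must, on average, collect at least as much energy per cycle
# as the load consumes, with storage bridging the gaps.

harvested_mw = 5.0    # assumed average power from the transducer
active_mw = 60.0      # assumed load power while awake
active_s = 0.5        # awake time per cycle
sleep_mw = 0.02       # assumed load power while asleep
sleep_s = 59.5        # sleep time per cycle

consumed_mj = active_mw * active_s + sleep_mw * sleep_s   # millijoules/cycle
harvested_mj = harvested_mw * (active_s + sleep_s)

print(f"consumed {consumed_mj:.1f} mJ, harvested {harvested_mj:.1f} mJ per cycle")
# The duty cycle is sustainable only if harvested >= consumed.
```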

With advancing technology, various interface modules are commercially available at affordable prices. Combined with growing awareness of the efficacy of Energy Harvesting, more and more applications and utilities are progressively using alternative sources of energy – a definite sign of progress towards dealing effectively with the global energy crisis.

Optional power-conditioning systems, such as voltage boosters, can enhance an application, but remember that such devices also consume power, which again lowers the overall efficiency and adds to cost.

Demystifying the A/D and D/A Converters

Analog and Digital Signals

Analog signals represent a physical parameter as a continuous signal. In contrast, digital signals are discrete-time signals that take only a finite set of values. Most natural signals, like the human voice and other sounds, are analog in nature, and communication systems were traditionally analog.

As demand for systems carrying more information over longer distances kept soaring, the drawbacks of analog communication systems became increasingly evident. Efforts to improve performance and throughput drove the evolution of digital systems, which far surpass analog systems and offer features earlier considered impossible. Some major advantages of digital systems over analog are:

• Optical fibers can carry digital signals with enormous information capacity
• Multiple input signals can be combined over the same channel by multiplexing
• Digital signals can be encrypted and hence are more secure
• Better noise immunity, since signals can be regenerated en route without accumulating noise
• Much higher flexibility and ease of configuration

On the other hand, disadvantages include:

• Higher bandwidth is required to transmit the same information
• Accurate synchronization is required between transmitter and receiver for error-free communication

Primary signals like the human voice, natural sounds and pictures are all inherently analog, yet most signal processing and transmission systems are progressively becoming digital. There is thus an obvious need to convert analog signals to digital for processing and transmission, and to convert back from digital to analog at the far end, since digital signals are not intelligible to human listeners or to gadgets like a pen recorder. This need led to the Analog-to-Digital (A/D) Converter for encoding at the transmitting end and the Digital-to-Analog (D/A) Converter for decoding at the receiving end.

Principle of Working of A/D and D/A Converters

An A/D converter samples the analog input signal at regular intervals and generates a corresponding binary bit stream, a combination of 0s and 1s. This data stream is then processed by the digital system until it is ready to be regenerated at the receiver's location. The sampling rate has to be at least twice the highest frequency in the input signal (the Nyquist rate) for the received signal to be a near-perfect replica of the input.
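As a small illustration of sampling and quantization, the sketch below samples a 1 kHz sine at 8 kHz (comfortably above the 2 kHz Nyquist rate) and quantizes each sample with an assumed 8-bit, 0-5 V converter, not any particular chip:

```python
import math

# Sample a 1 kHz test sine at 8 kHz and quantize to 8-bit codes.
SIGNAL_HZ = 1_000
SAMPLE_HZ = 8_000
BITS = 8
LEVELS = 2 ** BITS
V_REF = 5.0  # assumed full-scale input range: 0..5 V

def adc_sample(n: int) -> int:
    t = n / SAMPLE_HZ
    volts = 2.5 + 2.0 * math.sin(2 * math.pi * SIGNAL_HZ * t)  # test input
    code = int(volts / V_REF * (LEVELS - 1))                   # quantize
    return max(0, min(LEVELS - 1, code))                       # clamp

codes = [adc_sample(n) for n in range(8)]  # one full period of the sine
print(codes)  # [127, 199, 229, 199, 127, 55, 25, 55]
```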

In contrast, a D/A converter receives the bit stream and regenerates the signal by converting the sampled values back into voltages at the receiving end. The simplest way to achieve this is a binary-weighted resistor network, which converts each digital level into an equivalent binary-weighted voltage (or current). However, if the recipient is a computer or another device that can handle a digital signal directly, D/A conversion is unnecessary.
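The complementary decoding step, sketched with the same assumed 8-bit, 0-5 V scale: each code maps back to a voltage in proportion to its binary weight, which is exactly what the resistor network does in hardware.

```python
# Convert 8-bit codes back to voltages, mimicking a binary-weighted DAC.
BITS = 8
V_REF = 5.0

def dac_output(code: int) -> float:
    """Map an 8-bit code (0..255) back to a voltage in 0..V_REF."""
    return V_REF * code / (2 ** BITS - 1)

# Feeding back the ADC codes from the previous sketch reconstructs a
# staircase approximation of the original sine wave.
for code in [127, 199, 229, 199, 127, 55, 25, 55]:
    print(f"{dac_output(code):.2f} V")
```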

Two of the most important parameters of A/D and D/A converters are accuracy and resolution. Accuracy reflects how closely the actual output signal matches the theoretical output voltage. Resolution is the smallest increment in the input signal the system can sense and respond to; for example, a 10-bit converter spanning 5 V resolves steps of about 4.9 mV. Higher resolution requires more bits, making the converter more complicated, more expensive and slower.

Measuring Temperature Remotely

How to Measure Temperature Remotely

In hostile environments like toxic zones, very high temperature areas or remote locations, objects are not amenable to direct temperature measurement. In such applications, remote temperature measuring techniques are used, with devices such as the infrared or laser thermometers described below.

Infrared Thermometers or Laser Thermometers

These devices sense the thermal radiation (also called blackbody radiation) emitted by all bodies, which depends on the physical temperature of the object being measured. Laser thermometers, non-contact thermometers and temperature guns are variants; the laser serves only to aim the thermometer at the object.

In these devices, a lens converges the thermal energy onto a detector, which in turn generates an electrical signal that drives a display after temperature compensation. The devices produce fairly accurate results with a fast response, in situations where direct temperature sensing would be difficult, slow or not accurate enough. Induction heating, firefighting, cloud detection and the monitoring of ovens or heaters are typical applications of remote temperature measurement; others from industry include hot chambers for equipment calibration and control, and the monitoring of manufacturing processes.

These devices are commercially available in a wide range of configurations, such as those designed for use in fixed locations, portable or handheld applications. The specifications, among others, mention the range of temperatures that the specific design is intended for, together with the level of accuracy (say, measurement uncertainty of ± 2°C).

For such devices, the most important specification is the distance-to-spot ratio (D:S), where D is the object's distance from the device and S is the diameter of the area whose temperature is measured. In other words, with the object a distance D away, the device reports the average temperature over a spot of diameter S.
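A quick worked example with an assumed 12:1 optic:

```python
# Distance-to-spot ratio: with a 12:1 D:S optic, the measured spot grows
# by 1 cm for every 12 cm of distance to the target.

def spot_diameter(distance_cm: float, d_to_s: float = 12.0) -> float:
    return distance_cm / d_to_s

# At 36 cm, a 12:1 thermometer averages the temperature over a 3 cm spot;
# the target must be at least that large for a meaningful reading.
print(spot_diameter(36.0))  # 3.0
```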

Some thermometers are available with a settable emissivity to adapt to the type of surface whose temperature is being measured. These sensors can thus be used for measuring the temperature of shiny as well as dull surfaces. Even thermometers without settable emissivity can be used for shiny objects by fixing a dull tape on the surface, but the error would be larger.

Commercially Available Types of Thermometers:

• Spot Infrared Thermometer or Infrared Pyrometer, for measurement of temperature at a spot on the object’s body

• Infrared Scanning Systems, for scanning large areas such as piles of material along a conveyor belt, or moving cloth or paper sheets. This functionality is often realized by aiming a spot thermometer at a rotating mirror; strictly speaking, such a system is a scanner rather than a thermometer.

• Infrared Thermal Imaging Cameras (infrared cameras) generate a thermogram, a two-dimensional image, by plotting the temperature at many points over a larger surface. The temperatures sensed at the various points are converted to pixels to create the image. Unlike the types described above, these depend primarily on a processor and software to function. They find use in perimeter monitoring by military or security personnel, and in monitoring for safety and efficiency.

The ins and outs of Peltier Cells

What Are Peltier Cells and How Do They Work?

If you join two dissimilar metals by two separate junctions, and maintain the two junctions at different temperatures, a small voltage develops between the two metals. Conversely, if a voltage is applied to the two metals, allowing a current to pass through them in a certain direction, their junctions develop a temperature difference. The former is called the Seebeck effect and the latter is the Peltier effect.

Many such dissimilar-metal junctions are grouped together to form a Peltier cell. Initially, copper and bismuth were the two metals used to form the junctions; modern Peltier cells use more efficient semiconductor materials instead. These are sandwiched between two ceramic plates, and the junctions are encased in silicone.

Just as you could pass electric current through a Peltier cell to make one of its surfaces hot and the other cool, so could you place a Peltier cell in between two surfaces with a temperature difference to generate electricity. In fact, BMW places them around the exhaust of their cars to reclaim some electricity from the temperature difference between the hot gases emanating from the car and the atmosphere.

Another place Peltier cells are put to use is the picnic basket that connects to the car battery and has two compartments – one to keep food hot and the other to keep food and drinks cool. Unfortunately, Peltier cells are notoriously inefficient, since all they do is move heat from their cold side to their hot side. Part of their efficiency also depends on how fast heat is removed from the hot side. Usually, Peltier cells can maintain a maximum temperature difference of about 40°C between their hot and cold sides.

Active heat sinks use Peltier cells to keep CPUs cool inside heavy-duty computers. These CPUs pack a lot of electronics into their tiny bodies and generate large amounts of heat when working at frequencies of a few gigahertz. Peltier cells help remove the heat from the CPU and hold its temperature constant. One advantage of using Peltier cells here is that the CPU can regulate the amount of heat removed: it has internal temperature sensors, and when it senses its temperature rising, it drives more current into the Peltier cell to increase the heat removal.
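A minimal sketch of such a control loop, assuming hypothetical read_cpu_temp() and set_peltier_current() hardware calls and an arbitrary proportional gain; real thermal controllers are more sophisticated than this:

```python
# Proportional control of Peltier drive current from CPU temperature:
# the hotter the CPU runs above its setpoint, the more current (and thus
# heat pumping) the loop requests.

SETPOINT_C = 60.0    # target CPU temperature (assumed)
GAIN_A_PER_C = 0.5   # amps of drive per degree of error (assumed)
MAX_CURRENT_A = 6.0  # drive limit of the Peltier module (assumed)

def control_step(read_cpu_temp, set_peltier_current):
    error = read_cpu_temp() - SETPOINT_C
    current = max(0.0, min(MAX_CURRENT_A, GAIN_A_PER_C * error))
    set_peltier_current(current)  # more current -> more heat pumped away
```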

What does the Peltier cell do with the heat it has drawn from the hot source? To keep functioning, it must transfer this heat to the material surrounding its hot surface – usually an aluminum or copper heat sink, which then passes the heat to the atmosphere.

Active heat sinks that are more exotic use heat-conducting fluids to carry the heat away from the hot side of the Peltier cell. These are specially formulated fluids with high thermal conductivity, running in pipes over the Peltier's hot surface. As the Peltier gets hot, the fluid absorbs the heat and becomes less dense; convection currents then carry the hot fluid away, to be replaced by cooler fluid, aiding heat transfer. Heat from the hot fluid is removed in a heat exchanger in a different part of the computer.