How does temperature affect component life?

Changes in temperature affect the speed, power and reliability of electronic components and systems. Speed performance varies with temperature because material characteristics depend on it, and these dependencies may be normal or reversed depending on the semiconductor material. The dependencies also change with technology scaling, which manufacturers counteract by introducing new processing materials such as metal gates and high-K dielectrics.

For example, temperature influences various performance parameters in a MOSFET, including the carrier density, energy band gap, carrier diffusion, mobility, current density, velocity saturation, leakage current, threshold voltage, electromigration and interconnect resistance.

The temperature dependence of carrier density for a doped material occurs in three distinct regions. In the ionization region, the material has just enough thermal energy to push a few of the dopant carriers into the conduction band. In the extrinsic region, which is the desired region of operation, the carrier concentration remains flat over a wide range of temperatures.

In this region, all the dopant carriers have been energized into the conduction band, and the generation of additional thermal carriers is minimal. However, as the temperature increases, the extrinsic region transitions into the intrinsic region, where the number of thermally generated carriers exceeds the number of donor carriers. At room temperature, the intrinsic carrier concentration in a material is generally much smaller than the concentration of dopant carriers. However, the intrinsic carrier concentration is highly temperature dependent, and once the thermally generated carriers outnumber the dopant-generated carriers, the potential for thermal variation problems increases substantially.

At low temperatures, lattice vibrations in the material are small and electrons move more slowly, so impurity ion forces dominate the limit to mobility. As temperature decreases, an electron takes longer to pass an impurity ion, so the ion deflects it more strongly and the mobility decreases. The reverse is true when temperature rises: the carrier's thermal velocity increases, reducing the effect of the impurity and interface charges.

With an increase in temperature, the kinetic energy of particles within the material also increases, effectively increasing the diffusion component of the total current. Two parameters, mobility and carrier density, determine the total current through the material. While the carrier density remains nearly fixed with temperature over the extrinsic or intended range of operation, the mobility term, which governs the drift component of the total current, decreases with an increase in temperature.

Since the temperature dependencies of the diffusion and drift currents oppose each other, the net current change depends on the applied electric field and affects the threshold voltage and leakage current of the MOSFET. Manufacturers typically design the MOSFET such that its threshold voltage decreases linearly with increasing temperature. However, the leakage current doubles for every 10°C rise in temperature.
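
As a quick illustration of that rule of thumb, the short Python sketch below scales a known leakage figure to other temperatures. The 1 µA baseline at 25°C is an assumed example value, not data from any particular device.

```python
def leakage_current_ua(i_ref_ua, t_ref_c, t_c):
    """Rule-of-thumb estimate: leakage doubles for every 10 deg C rise."""
    return i_ref_ua * 2 ** ((t_c - t_ref_c) / 10.0)

# Assumed example device: 1 uA of leakage at 25 deg C
for t in (25, 45, 85, 125):
    print(f"{t:3d} C: {leakage_current_ua(1.0, 25, t):8.1f} uA")
```

At 125°C this hypothetical device leaks over a thousand times more than at room temperature, which is why leakage tends to dominate power budgets at high temperatures.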

The resulting change in device current with temperature can have devastating effects, leading to timing failures, systems exceeding power or energy budgets, and errors in communication between cores. The increase of electrical conduction with increasing temperature is more commonly known as reverse temperature dependence, first reported by C. Park of Motorola in 1995.

Is solid state memory better than magnetic memory?

Moore’s Law, or more specifically, Gordon Moore, predicted in 1965 that transistor density in chips would double about every two years. For the past 50 years, this observation has held true and is the fundamental driving force behind most advances in technology, leading to computers becoming smaller, faster, cheaper and more reliable. Typically, Moore’s Law has resulted in solid-state memory becoming smaller and cheaper. However, the ever-increasing need for greater storage capacity has had manufacturers sacrificing reliability and performance in some cases. The solid-state memory area thus presents a major dichotomy.

The solid-state memory arena has seen exponential growth in the past 15 years. Many people have never even seen a floppy disk, as it was replaced almost overnight by thumb drives, which also made ZIP drives disappear shortly thereafter. With the increase in digital storage in smartphones, CD players and CD drives became obsolete.

There is no doubt that solid-state media is superior to mechanical, rotating magnetic media, commonly known as the Hard Disk Drive, in most applications. However, surprising as it may seem, solid-state media has some characteristics that do not allow it to follow Moore’s Law, making it less than optimal for some types of applications.

For example, flash memory is known to have a limited life, as it wears out over time. With the technological improvements we have been witnessing year-over-year, one would assume that flash endurance has gradually improved; however, reality says otherwise. In the ancient days (read 8 years ago), SLC NAND memory was rated for 100,000 write/erase cycles. Currently manufactured SLC NAND memory has a life of 50,000 write/erase cycles. In the case of MLC NAND memory, the reduction is even more dramatic: older MLC NAND memories had a life of about 10,000 write/erase cycles, while current MLC NAND memories are limited to 3,000 write/erase cycles.

That means newer flash memories wear out more quickly and, in addition, do not perform as well. This can be attributed in part to the stronger Error Detection and Correction (EDC) requirements for newer flash, evident from the decreasing write/erase cycles and the increasing Error Correcting Code (ECC) requirements that follow each reduction in NAND flash lithography.

This reduced lifespan may not be much of a problem for many consumer devices. For example, MLC NAND memory within a cell phone is likely to outlast the phone itself, assuming the original owner keeps it for about two years.

It is a different story for industrial devices using solid-state media. Typically, devices manufactured for industrial use have an expected lifetime of 10, 15 or more years. That makes the endurance of solid-state media in such devices very critical.

Although flash lifespan is measured in write/erase cycles, interestingly, only the erase operation counts against the life of flash memory. That means you could use up the flash simply by erasing it repeatedly, without writing anything to it at all. It does not matter whether you write a small or a large amount of data; what matters is how many times you erase it.
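
To see what these cycle ratings mean over a device's service life, here is a rough endurance estimate in Python. The capacity, daily write volume and write-amplification figures are illustrative assumptions, and the model assumes ideal wear leveling.

```python
def flash_lifetime_years(capacity_gb, pe_cycles, writes_gb_per_day,
                         write_amplification=2.0):
    """Crude flash endurance estimate assuming perfect wear leveling.

    write_amplification approximates the extra internal erases the
    controller performs per gigabyte of host data written.
    """
    total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_host_writes_gb / writes_gb_per_day / 365.0

# Assumed example: a 32 GB industrial module logging 10 GB per day
print(flash_lifetime_years(32, 3_000, 10))   # current MLC: ~13 years
print(flash_lifetime_years(32, 10_000, 10))  # older MLC: ~44 years
```

Under these assumptions, the older 10,000-cycle MLC comfortably outlives a 15-year industrial design lifetime, while a 3,000-cycle part leaves far less margin.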

What are Zener, Schottky and Avalanche Diodes?

Diodes are very commonly used semiconductor devices. They are mostly used as rectifiers for converting alternating current to direct current; their special characteristic of allowing current flow in only one direction makes them indispensable for this. Apart from rectification, various types of diodes are available for different purposes, such as generating light, microwaves or infrared rays, and switching at high speeds.

For example, the power supply industry has been moving towards high-speed switching, because higher speed reduces the volume of magnetics used, which ultimately reduces the bulk and price of the units. For switching at high frequencies, diodes are also required to react at high speeds. Schottky diodes are ideal for this purpose, as their switching times are vanishingly short. Additionally, they have a very low forward voltage drop, which increases their operating efficiency.

As their switching speed is very high, Schottky diodes recover very fast when the current reverses, resulting in only a very small reverse current overshoot. Although the maximum average rectified currents for Schottky diodes are commonly in the range of 1, 2, 3 and 10 Amperes, Schottky diodes that can handle up to 400A are also available. The corresponding maximum reverse voltage for Schottky diodes can range from 8 to 1200V, with the most popular values being 30, 40, 60 and 100 Volts.

Another very versatile type of diode used in the power supply industry is the Zener diode. All diodes conduct current only when they are forward biased; when reverse biased, only a very small leakage current flows. If the reverse voltage increases beyond the rated peak inverse voltage of the diode, the diode can break down irreversibly, with permanent damage.

A special type of diode, called the Zener diode, blocks the current through it up to a certain voltage when reverse biased. Beyond this reverse breakdown voltage, it allows current to flow even in the reverse direction. That makes this type of diode very useful for generating reference voltages, clamping signals to specific voltage levels or ranges, and more generally acting as a voltage regulator.

Zener diodes are manufactured to have their reverse breakdown occur at specific, well-defined voltage levels, and they can operate continuously in the breakdown mode without damage. Commonly, Zener diodes are available with breakdown voltages from 1.8 to 200 Volts.
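
As a minimal sketch of the voltage-reference use, the Python below sizes the series resistor for a simple Zener shunt regulator. The 12V input, 5.1V Zener, 20 mA load and 5 mA minimum Zener current are assumed example values.

```python
def zener_series_resistor(v_in, v_z, i_load, i_z_min=0.005):
    """Size the series resistor for a Zener shunt regulator (V, A).

    The resistor must pass the load current plus the minimum current
    that keeps the Zener in breakdown.
    """
    r = (v_in - v_z) / (i_load + i_z_min)
    p_resistor = (v_in - v_z) ** 2 / r
    # Worst case for the Zener: load removed, all current flows in it
    p_zener_max = v_z * (i_load + i_z_min)
    return r, p_resistor, p_zener_max

# Assumed example: 12 V input, 5.1 V Zener, 20 mA load
r, p_r, p_z = zener_series_resistor(12.0, 5.1, 0.020)
print(f"R = {r:.0f} ohm, P(R) = {p_r:.2f} W, P(Zener max) = {p_z:.2f} W")
```

Note the worst-case Zener dissipation occurs with the load disconnected, when the entire resistor current flows through the diode.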

Another special type of diode called the Avalanche diode is used for circuit protection. When the reverse bias voltage starts to increase, the diode intentionally starts an avalanche effect at a predetermined voltage. This causes the diode to start conducting current without damaging itself, and diverts the excessive power away from the circuit to its ground.

Designers use the Avalanche diode more as a protection to circuits against unwanted or unexpected voltages that might otherwise have caused extensive damage. Usually, the cathode of the diode connects to the circuit while its anode is connected to the ground. Therefore, the Avalanche diode bypasses any threatening voltage directly to the ground, thus saving the circuit. In this configuration, Avalanche diodes act as clamping diodes fixing the maximum voltage that the circuit will experience.

Of what use is spark erosion?

For many decades, people have been using Electro Discharge Machining (EDM), or Spark Erosion, to successfully remove material from difficult-to-machine locations or shapes. Toolmakers come across such difficult-to-machine surfaces and shapes occasionally; therefore, most good toolmaking workshops are equipped with a Spark Eroder. Spark Erosion techniques and machines are not new – this knowledge has been around for decades. EDM machines are equipped with current generators, with typical currents of around 75A.

Modern EDM machines are computerized and numerically controlled. With such machines, it is a very simple affair to make a single setup and cut an array of cavities. For example, with a sparker, you can drill square holes very easily. These machines can be programmed to make an undercut or cut profiles with a precision measured in microns.

Workpieces are normally metallic and must be electrically conductive. In the toolmaking business, aluminum or steel blocks are usual. However, a workpiece could also be a part with a broken drill bit, stud or tap jammed tight in a hole. Machinists also use sparkers to work on car parts.

The other important part required in the spark erosion process is the electrode. Mold makers and toolmakers use any shape, such as a simple cylinder or a polygon; more complex operations need a CNC-milled brush head, convolute or diaphragm. To remove a drill bit jammed in a car part, the machinist would normally use a cylinder such as a copper tube of small diameter. After precisely mounting the electrode in the machine head, the machinist aligns its movement with the direction of travel of the head. The alignment of the electrode and the workpiece is a precision task, often requiring the help of a Digital Readout or DRO.

To start the spark erosion process, the workpiece must be immersed in a dielectric liquid. Earlier EDM machinists used paraffin as the dielectric, but liquids with a much higher flashpoint have now superseded it. These liquids are not only safer, but also kinder to the machinist’s hands, which have to be dipped in the liquid often.

The machine is switched on and the electrode is brought closer to the workpiece. For this motion control, the mechanism may be hydraulic or electronic. As soon as a critical distance is reached, a tiny spark jumps between the electrode and the workpiece. This is an electrical discharge creating extremely hot plasma and it melts a little part of the workpiece into a tiny pool. At the same time, a small part of the dielectric also vaporizes creating a bubble around the spark.

Seen on a microscopic scale, the pool and the bubble both grow until the control electronics stops the spark. The bubble then collapses, and the surrounding dielectric rushes in and flushes away the molten metal. The process creates a pit on the workpiece and a smaller one on the electrode. Repeated tens of thousands of times each second, this slowly erodes the workpiece away to the required depth.

Why is Power Factor so Important?

The specifications of any electrical appliance working on AC supply, such as a refrigerator, a toaster or a fan, list a minimum of three important parameters – Voltage, Wattage and PF. The voltage rating indicates the nominal operating voltage of the appliance, while the wattage rating indicates the power the appliance will use when switched on. The third parameter, PF, stands for the Power Factor – usually a value between 0.6 and 1.0.

All electrical appliances consume power for operating or working such as for lighting, heating, motion, etc. The appliance transforms a major part of the consumed power into its intended activity and the rest is wasted as heat. The ratio of the power converted to useful work to the total power consumed is the efficiency of the appliance.

Of the total power an appliance draws, only a part is true or real power; the balance is reactive power. Engineers express real power in W (Watts) and reactive power in VAR (Volt-Amperes-Reactive). The appliance converts the real power into actual work, while it needs the reactive power to sustain a magnetic field; the latter does not directly contribute to the actual work done. Therefore, the real power is also called the working power, while the reactive power is called non-working power. The combination of the working and non-working power (their vector sum, not a simple arithmetic sum) is the appliance’s apparent power, expressed in VA (Volt-Amperes), and is the product of the nominal operating voltage and the current the appliance consumes when operating.

This phenomenon of reactive power is true mostly for inductive appliances such as motors, compressors or ballasts. Power Factor is the ratio of the real or working power to the apparent power – an indication of how effectively the appliance will be using electricity. The problem is, although you will be paying the electricity utility for the entire apparent power consumed, the appliance will be converting only the real power into useful work for you. Therefore, a higher PF rating for your appliance works to your advantage – choose one with PF as close to 1.0 as possible.

In reality, a low PF is also a headache for the utility supplying the power. This is best explained with an example. Assume you have an operation that requires 100KW to run properly. If you install a machine that has a PF of 0.8, it will chalk up 125KVA on the apparent power meter, but will convert only 80% of its incoming power into useful work. Since the electricity utility has to supply both active and reactive power to its consumers, the wasted power ends up heating the conductors of the distribution system, resulting in a voltage drop at the consumer’s end.

The simplest way of improving the power factor is to add capacitor banks to the electrical system. PF correction capacitors offset the reactive power used by inductive loads, thereby improving the power factor. That maximizes the current carrying capacity, improves the supply voltage, reduces transmission power losses and lowers electricity bills.
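
A worked version of the 100KW example above, as a Python sketch; the 0.95 target power factor is an assumed design goal.

```python
import math

def correction_kvar(p_kw, pf_now, pf_target):
    """Reactive power (KVAR) a capacitor bank must supply to raise
    the power factor from pf_now to pf_target at real power p_kw."""
    q_now = p_kw * math.tan(math.acos(pf_now))
    q_target = p_kw * math.tan(math.acos(pf_target))
    return q_now - q_target

p_kw = 100.0
print(f"Apparent power at PF 0.8: {p_kw / 0.8:.0f} KVA")       # 125 KVA
print(f"Capacitors needed: {correction_kvar(p_kw, 0.8, 0.95):.1f} KVAR")
```

Raising the PF from 0.8 to 0.95 with about 42 KVAR of capacitors trims the apparent power from 125KVA to roughly 105KVA, freeing capacity in the same conductors.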

How do you measure cable length?

Where miles of cable are involved, how do people determine where a fault lies and decide where to dig to begin repairs? The method is very similar to how people determine the depth of a well, the distance of a cliff or the location of a thundercloud – by echolocation. If you know the speed of sound in air, the distance to a sound source is simply the product of the time the sound takes to travel and its speed in air. For example, light travels much faster than sound, so the flash of lightning precedes the thunder. By timing the gap between seeing the flash and hearing the thunder, it is easy to tell how far away the lightning struck.

When an electrical pulse is directed into one end of the cable, it travels down the length until it meets a change in the cable’s impedance. This may be a fault in the cable or it may simply be its other open end. Whatever the situation, the change in impedance causes the pulse to turn back to its point of origin. The time gap between the original pulse and its return represents the length it has traveled. Therefore, if it has returned in, say 30ns, instead of the 100ns expected, the fault is at about 1/3rd the cable’s length from the end where the pulse was injected.

Engineers usually rig up an oscilloscope and a pulse generator for the purpose. Knowing the cable’s characteristics is necessary to set the pulse generator’s output impedance. The pulse generator needs to output a narrow pulse of about one to 100ns, with as small a duty cycle as possible – the two parameters depending on the length of the cable under test. The pulse voltage is not critical – 1V peak is enough.

The oscilloscope’s trigger level should be just under the peak voltage of the 1V pulse. The time base should be set just long enough to display one pair of the 1V pulses generated. That completes the setup.

As you launch the pulse into the cable, it triggers the oscilloscope sweep. The pulse now continues to the other end of the cable, until it encounters an open end. Since energy cannot be destroyed, the pulse is reflected back to the generator. When it passes the oscilloscope, it is displayed again. You can differentiate the reflected pulse from the original by the reduction of its amplitude and a difference in the rise/fall slopes. This happens because of attenuation when traveling within the cable and a loss of high-frequency harmonics. Although there may also be additional reflections caused by input capacitance of the oscilloscope, the echo of interest is only the first one after the original pulse was launched.

The round trip time is dependent on the cable length, which is usually known. For most cables, the pulse will travel at about 66% of the speed of light in vacuum (300m/µs). That makes its speed within the cable about 200m/µs. You may have to play with the time base and the pulse period until you can see both the launched and the reflected pulse.
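
The arithmetic reduces to one line, shown here as a small Python helper assuming the typical 66% velocity factor mentioned above.

```python
C_M_PER_US = 300.0  # speed of light in vacuum, meters per microsecond

def distance_to_reflection_m(round_trip_us, velocity_factor=0.66):
    """Distance to the impedance change: the pulse travels out and
    back, so halve the product of speed and round-trip time."""
    return C_M_PER_US * velocity_factor * round_trip_us / 2.0

# A reflection arriving 1 us after launch lies about 99 m down the cable
print(distance_to_reflection_m(1.0))
```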

How to Measure Large DC Currents Accurately

The market has several instruments for accurately measuring small DC currents, say up to 3A. You can also find some devices that can measure DC currents beyond 50A with good accuracy. Large currents are common in photovoltaic renewable energy installations, grid energy storage and electric vehicles, to name a few. Such systems commonly need to predict accurately the state of charge, or SOC, of the associated energy storage batteries.

Usually, systems for current or charge measurement are designed to include built-in data acquisition modules such as ADCs (analog-to-digital converters), filters and suitable amplifiers. The arrangement is typically a current sensor followed by a filter/amplifier and finally an ADC. The current sensor senses the current, and a circuit that converts its output into a usable form, such as a voltage, typically follows it. The signal requires filtering to reduce radio-frequency and electromagnetic interference, and the cleaned signal may have to be amplified before being digitized. Current data samples, multiplied by the appropriate time interval, are accumulated to obtain charge values.
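
The final accumulation step amounts to coulomb counting. A minimal Python sketch, assuming the ADC already delivers calibrated current samples at a fixed interval:

```python
def accumulate_charge_ah(current_samples_a, interval_s):
    """Coulomb counting: sum I*dt over the samples, returned in Ah."""
    coulombs = sum(current_samples_a) * interval_s
    return coulombs / 3600.0

# Assumed example: ten 1-second samples around 50 A of charging current
samples = [50.1, 49.9, 50.0, 50.2, 49.8, 50.0, 50.1, 49.9, 50.0, 50.0]
print(accumulate_charge_ah(samples, 1.0))  # about 0.139 Ah
```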

Two sensor technologies are commonly used for measuring large currents. The first measures the voltage drop across a resistor (also called a current shunt) that carries the current to be measured. The voltage drop follows Ohm’s law and equals the product of the current and the resistance.

Large DC currents may cause power bus bars and cables to dissipate significant amounts of heat. As a rule of thumb, designers of power installations strive to keep the power loss from the wiring, including bus bars and heavy cables, under 1%. For example, an offline battery storage system with an output of 1KV and 1KA supplies power at 1MW. While the dissipation of a 50W shunt is insignificant at 0.005%, the power cables and bus bars may dissipate heat upwards of several KW.

To put things in perspective, designers go by 1W per µOhm at 1KA. Therefore, a continuous current of 1KA passing through a shunt with 10 µOhm resistance will dissipate 10W in it. Likewise, copper wire with a diameter of one inch dissipates 12-14W of heat per foot at 1KA, since the resistance of the wire is about 10 µOhm per foot after correcting for the resistance increase due to heating.
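
Both figures follow directly from P = I²R; a quick check in Python, using the resistances quoted above:

```python
def dissipation_w(current_a, resistance_uohm):
    """Power dissipated in a shunt or conductor: P = I^2 * R."""
    return current_a ** 2 * resistance_uohm * 1e-6

print(dissipation_w(1_000, 1))   # 1 uOhm at 1KA  -> 1 W
print(dissipation_w(1_000, 10))  # 10 uOhm shunt  -> 10 W
# Assuming the earlier 50 W shunt is a 50 uOhm part carrying 1KA:
print(dissipation_w(1_000, 50))  # -> 50 W
```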

The second technology senses the magnetic field encircling the current-carrying conductor. The device for sensing the current is generally known as the Hall-Effect current sensor. Usually, the magnetic field around the current-carrying conductor is concentrated in a magnetic core with a thin slot, in which the Hall element resides. The magnetic field is thus perpendicular to the plane of the Hall element, and the magnetic core makes it nearly uniform. Energizing the Hall element with an exciting current makes it produce a voltage proportional to the product of the magnetic field in the core and the exciting current. This voltage, suitably amplified and filtered, is presented to the ADC.

One advantage of the second technique using Hall elements is the isolation between the current carrying conductor and the measuring electronics. Since the coupling is only magnetic, the current carrying conductor may have very high voltage potentials, which do not affect the current measuring elements.

Pulse Ranging Technology Sensors Can Now Measure Distance

Radar measures the distance of an object by bouncing bursts of high frequency waves from the surface of the object and sensing the time it takes the echo to return. Pulse Ranging Technology or PRT sensors use a similar technique, but instead of using radio waves, they use bursts of light. The sensor emits bursts of light that travel to the object, bounce off its surface and return to the sensor. A processor in the sensor measures the time of flight of the light pulse and calculates the distance to the object.
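
The distance calculation itself is a single multiplication, sketched below in Python; the 2 µs round-trip time is just an example figure.

```python
C_M_PER_S = 3.0e8  # approximate speed of light

def prt_distance_m(round_trip_s):
    """Distance from a light pulse's round-trip time: the pulse
    travels to the object and back, so divide by two."""
    return C_M_PER_S * round_trip_s / 2.0

# An echo returning after 2 microseconds puts the object about 300 m away
print(prt_distance_m(2.0e-6))
```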

PRT sensors emit high-intensity light pulses at rates of 250,000 pulses per second. The delay between the emission of light and its recapture increases with distance. Other types of photoelectric time-of-flight sensors emit continuous light beams and measure distance by sensing the phase shift of the reflected light: the returning beam undergoes a change of phase, and the difference in phase is a measure of the distance travelled. However, a PRT sensor is superior in performance to these other types of sensors.

Since a PRT sensor uses a pulsed laser diode, higher currents can be pumped into the laser source, resulting in light of higher intensity as compared to sources emitting continuous light. Light from a PRT sensor can be up to a thousand times more intense than that from other sources, which means they can easily detect objects further than 300 m.

High-intensity light pulses from PRT sensors are not harmful to eyes. Although the light is intense, PRT sensors are off for longer periods than they are on. Therefore, in reality, PRT sensors emit very low power at any time compared to sensors sending out continuous light beams. There are several PRT sensors in the market certified as Class 1 laser products or “eye-safe.”

As pulsed light is easy to differentiate, PRT sensors are immune to interference from other nearby photoelectric sensors, ambient lighting and even sunlight. While sensing pulsed light, PRT sensors can eliminate interference and crosstalk. On the other hand, sensors that use continuous light beams often find light from stray sources interfering with their readings.

PRT sensors are very useful in measuring continuously changing positions of the target. For example, they can monitor the stack height of metals; check if a container has been filled up to a specified height; and position a load or a product properly. They are good in preventing collisions of cranes, gantry and conveyors. Some PRT sensors can convert the distance measurement to streams of binary digits via Profibus, Ethernet or IO-Link, while some can output analog signals as well.

PRT sensors are useful not only for distance measurements, but also for detecting the presence/absence of objects. For example, they can verify rack occupancy in warehouses, detect stacks or panels within a defined window, tell when spools or rolls are either empty or full and check the height of a forklift truck. Moreover, designers can set the range at which the sensor will start detecting objects.

What is an H-Bridge?

Those who are into robotics know that robots, just as humans do, also need to suddenly change course when they run into an obstacle in their path. Changing course while walking may not be a big deal for humans, but for robots, and especially for those who design them, it is sometimes a serious challenge.

For example, consider a robot that is moving towards an obstacle, which it has to avoid and proceed on a parallel path. A robot with two wheels will need to stop moving as it reaches the obstacle, then pivot on one wheel by a certain angle and move forward until the obstruction no longer bars its way. Then it has to stop again, pivot back on the other wheel by the same angle it had turned earlier and move forward. If the robot is required to go back to its original track, it has to pivot once again. The entire exercise gets more complicated if the robot has more than two wheels; clearly, robotics is not for the faint-hearted.

Most movements in robotics involve DC motors and moving a robot backwards requires the DC motor to run in reverse. This is accomplished by switching the connections of the motor to its power source so that it now connects in a way opposite to its normal manner. Doing this causes the current flow in the motor to reverse, making it rotate in the opposite direction. However, it is impractical to manually disconnect the wires of several motors and reconnect them in a moving robot. That job is best left to H-bridges.

An H-bridge is a circuit whose schematic looks very similar to a capital H. It has four switching elements at its corners, with the motor forming the crossbar. The only additions are the top and bottom bars, which are not part of the letter H. Traversing clockwise, the four switching elements are called high-side left, high-side right, low-side right and low-side left. The top bar connects to the positive terminal of the power supply/battery, and the bottom bar connects to the ground or negative terminal.

You run the motor by turning on a pair of switches. For example, if you turn on the switches high-side left and low-side right, the motor will turn, say, in a clockwise direction. If these switches are turned off and the other pair is switched on, they connect the motor to the supply in reverse and the motor rotates counterclockwise.

While the switches are turned on in pairs, those on the same side must never be turned on simultaneously. For example, if the two switches on the left or the two on the right were to be switched on together, they would create a direct short between the terminals of the power supply/battery, bypassing the motor altogether.

This phenomenon is called shoot-through, and if your power supply or battery has no short-circuit protection, it may cause premature failure of the source and irreparable damage to the switches. Typically, the rating of the switches must match the rating of the motor – powerful motors draw high currents, and the switches must be capable of handling those currents. In practice, the switches are power MOSFETs or IGBTs.
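
A minimal sketch of the switching logic in Python, with a hypothetical set_switch() stub standing in for real gate-driver or GPIO calls; the 1 ms dead time is likewise an assumed value, and a real design would add hardware interlocks as well.

```python
import time

# Hypothetical low-level driver; replace with real GPIO/PWM calls.
def set_switch(name, on):
    print(f"{name} -> {'ON' if on else 'OFF'}")

SWITCHES = ("high_left", "high_right", "low_left", "low_right")

def drive_motor(direction):
    """Energize one diagonal pair; never both switches on one side."""
    # Turn everything off first, then wait out a short dead time so
    # the opposite pair cannot overlap and cause shoot-through.
    for s in SWITCHES:
        set_switch(s, False)
    time.sleep(0.001)  # assumed 1 ms dead time

    if direction == "forward":    # say, clockwise rotation
        set_switch("high_left", True)
        set_switch("low_right", True)
    elif direction == "reverse":  # counterclockwise rotation
        set_switch("high_right", True)
        set_switch("low_left", True)

drive_motor("forward")
drive_motor("reverse")
```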

What is a 4-20 mA Current Loop?

The pre-electronic industry used pneumatic controls. Compressed air powered all ratio controllers, temperature sensors, PID controllers and actuators. The modulation standard was 3-15 pounds per square inch, with 3 psi standing for an active zero and 100% represented by 15 psi. If the pressure went below 3 psi, an alarm would sound.

Electronic controls made their debut in the 1950s. A new signaling method using a 4-20 mA current emulated and replaced the 3-15 psi pneumatic signal. As wires were easier to handle, install and maintain, current signaling quickly gained popularity. In contrast, pneumatic systems needed pressure lines, and their energy requirements were much higher – a 20-50 HP compressor, for instance. Moreover, with electronics you can implement more complicated control algorithms.

The 4-20 mA current loop is a sensor signaling standard, and a very robust one. Current loops are a favored method of data transmission because they are inherently insensitive to electrical noise. In a 4-20 mA current loop, the signaling current flows through all the components, so the same current flows even if the wire terminations are not perfect. All components in the loop drop some voltage as the signaling current flows through them. However, the signaling current is unaltered by these voltage drops as long as the power supply voltage remains greater than the sum of the individual voltage drops around the loop at the maximum signaling current of 20 mA.

The simplest form of the 4-20 mA current loop has only four components –

− A DC power supply
− A 2-wire transmitter
− A receiving resistor to convert the current signal to a voltage
− A wire to interconnect all the above

Most 4-20 mA loops use 2-wire transmitters, with standard power supplies of 12, 15, 24 and 36 VDC. There are also 3-wire transmitters with AC or DC power supplies.

The transmitter forms the heart of the 4-20 mA signaling system. It converts a physical property such as pressure, humidity or temperature into an electrical signal – a current proportional to the physical quantity being measured. In the 4-20 mA current loop system, 4 mA represents the lowest limit of the measurement range, while 20 mA represents the highest limit.

Since it is much easier and simpler to measure voltage than current, typical current loop circuits incorporate a receiver resistor. This resistor converts the current into a voltage, following Ohm’s Law (Voltage = Current x Resistance). Most commonly, the resistor used in a 4-20 mA current loop is 250Ω, although some engineers use resistances of 100Ω to 750Ω, depending upon the application. With 250Ω, 4 mA of current produces a voltage of 1 VDC across the resistor, and 20 mA produces 5 VDC. Therefore, the analog input of a controller can very easily interpret the 4-20 mA current as a 1-5 VDC voltage range.
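
Mapping the receiver voltage back to the measured quantity is a straight-line scaling. A short Python sketch, assuming a hypothetical 0-100 psi pressure transmitter and the common 250Ω resistor:

```python
def loop_to_engineering(v_measured, span_lo, span_hi, r_ohms=250.0):
    """Map the receiver-resistor voltage back to the process value.

    4 mA (1 V across 250 ohm) -> span_lo; 20 mA (5 V) -> span_hi.
    """
    i_ma = v_measured / r_ohms * 1000.0
    fraction = (i_ma - 4.0) / 16.0
    return span_lo + fraction * (span_hi - span_lo)

# Hypothetical 0-100 psi transmitter: 3 V across the resistor = 12 mA
print(loop_to_engineering(3.0, 0.0, 100.0))  # 50.0 psi
```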

The wire connecting all the components of a 4-20 mA current loop has its own resistance expressed in Ohms per 1,000 feet. Some voltage is dropped across this resistance of the wires according to Ohm’s Law and has to be compensated by the power supply voltage.

The major advantages in using 4-20 mA current loops are their extreme immunity to noise and power supply voltage fluctuations.