
What is HD Audio?

With the advent of wireless headphones, there has been a steadily increasing demand for HD or High Definition audio. People of all ages enjoy the HD sound experience, especially those with age-related hearing degradation. These trends are driving the development of HD audio support at every stage of the delivery chain.

High definition audio, or high-resolution audio, has no strict technical definition. Industry experts use the term to describe audio systems that support higher data rates than older equipment could handle. Initially, the term described digital systems that could handle higher data rates than the Compact Disc format. Now, it also applies to wireless headphones that can deliver better audio quality.

Although the industry has improved the recording and distribution of audio with increased data rates, and these improvements are available to mobile listeners, wireless headphones were left behind, mainly due to Bluetooth limitations. With newer Bluetooth codecs, wireless headphones too can now deliver HD-quality audio. The trend, therefore, is to improve the audio hardware as well, especially the drivers.

Digital audio formats are mainly defined by two parameters: sample rate and bit depth. When converting analog sound, the system samples the signal amplitude and saves each sample as a binary number. The sample rate is the number of times the system samples the analog signal every second. The size of the binary number that represents the amplitude is the bit depth.

To accurately capture the information in a sine wave, it is necessary to sample it at least twice per cycle. For music, therefore, the sample rate must be at least twice the highest frequency present. If the maximum audio frequency to be reproduced is 20 kHz, the sampling frequency must be at least 40 kHz. Additionally, the ADC requires a sharp low-pass filter to remove all frequencies above 20 kHz. In practice, no filter is perfect, so the Compact Disc sample rate was set slightly higher, at 44.1 kHz, leaving a transition band for a practical anti-aliasing filter.
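
Since specific numbers are quoted here, a short back-of-the-envelope sketch in Python (an illustration, not from the source) shows how the 40 kHz minimum and the 44.1 kHz CD rate relate:

```python
# Relating the highest audio frequency to the minimum (Nyquist) sample rate,
# using the figures quoted above.

def min_sample_rate(max_audio_hz: float) -> float:
    """Nyquist criterion: sample at least twice the highest frequency."""
    return 2.0 * max_audio_hz

print(min_sample_rate(20_000))   # 40000.0 Hz, the theoretical minimum
# CD audio uses 44100 Hz, leaving a 2.05 kHz guard band (22.05 kHz - 20 kHz)
# so the anti-aliasing low-pass filter does not need an impossibly sharp cutoff.
print(44_100 / 2 - 20_000)       # 2050.0 Hz of transition band
```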

The size of the binary word that describes each audio sample, its bit depth, determines the accuracy of each digitized sample. Each additional bit in the word describes the amplitude with twice as many values, which, equivalently, halves the quantization error. In a digital music system, this reduces distortion and quantization noise, with each added bit of depth lowering the noise floor by about 6 dB.

Typically, digital systems work in multiples of 8 bits, and digital audio also uses multiples of 8 bits for its word size. With an 8-bit word, the noise floor is only 48 dB below the loudest music, which is not very practical. Compact discs use a bit depth of 16, providing a signal-to-noise ratio of about 96 dB, which is far more reasonable.
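
A quick sketch of the "about 6 dB per bit" rule mentioned above (using the more exact 6.02 dB per bit figure; the bit depths chosen are illustrative):

```python
# Approximate dynamic range from bit depth.

def dynamic_range_db(bit_depth: int) -> float:
    return 6.02 * bit_depth

for bits in (8, 16, 24):
    print(f"{bits}-bit audio: ~{dynamic_range_db(bits):.0f} dB between full scale and the noise floor")
# 8-bit  -> ~48 dB (too little headroom for serious music)
# 16-bit -> ~96 dB (Compact Disc)
# 24-bit -> ~144 dB (common in HD audio formats)
```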

At present, HD audio can reach the listener because every stage in the delivery chain has been upgraded: Bluetooth codecs are better, mobile phones are more capable, and music streaming services have improved.

What is a DIP Switch?

DIP, or Dual-In-line Package, switches have been popular since the 1970s. OEMs and end users use them widely to change the functionality of electronic devices at the point of use. For instance, DIP switches allow users to set region codes so equipment works in different areas, to select a specific radio channel, to choose which garage door an opener will engage, or to configure the type of memory a PC motherboard has.

A DIP switch comprises a set of switches within a single unit, typically mounted on a PCB. Each switch is very basic in construction and functionality. The user sets each switch manually, and can therefore determine its status simply by looking at the switch bank during system startup. This is in direct contrast to a membrane keypad connected to a microcontroller, which must be powered up and polled to know its status. DIP switches thus offer simplicity, provide input to basic system firmware, and need not be powered up for their current status to be known.

Users can select the number of switch positions their DIP switch provides, depending on the configuration of the electronic application. This is possible because DIP switches are available in a variety of sizes, configurations, power ratings, and styles.

Just like any other switch, users can select the number of poles and throws the DIP switch must have. For instance, they can use an SPST, or single pole single throw, switch: a two-terminal device in which the pole either engages with the throw to provide continuity or disengages from it to provide electrical isolation.

Likewise, there are SPDT switches or single pole double throw switches, where the user may push the single pole to engage with any one of the throws, and push it the other way to engage with the other throw. It is possible to direct any signal on the pole to either one of the throws at any time.

Other switches are available as a combination of the above SPST and SPDT arrangements. For instance, there may be mechanically linked double poles engaging with double throws, making the switch DPDT or double pole double throw type.

Typically, the number of switches in a package depends on the application, with 1 to 16 positions being common. For instance, a common DIP switch package may have eight positions, allowing it to be set in 256 different ways. This is equivalent to the 256 binary values that an eight-bit byte can express.
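
As an illustration of treating a bank of switches as one binary word, here is a minimal, hypothetical Python sketch (the pin-reading details of any real board are omitted):

```python
# Treating an 8-position DIP switch as one 8-bit configuration byte.

def dip_to_byte(switches):
    """switches: sequence of 8 booleans, switch 1 first (taken here as the LSB)."""
    value = 0
    for bit, closed in enumerate(switches):
        if closed:
            value |= 1 << bit
    return value

# Switches 1, 3 and 4 closed, the rest open:
settings = [True, False, True, True, False, False, False, False]
print(dip_to_byte(settings))       # 13  (binary 00001101)
print(2 ** len(settings))          # 256 possible configurations, as noted above
```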

Mechanically, DIP switches are available in various types, depending on how they operate: slide actuators, rotary actuators, piano actuators, and so on.

DIP switches with slide actuators usually have two positions, either closed or open, acting as an SPST switch. However, there can be DIP switches with slide actuators and three positions. Frequently, in such switches, the middle position acts as the neutral. As the actuator moves to either side, it makes contact with the position on that side.

DIP switches are low-cost and flexible, and they provide a simplicity rarely found elsewhere.

What are Capacitive Accelerometers?

In the electronics industry, various applications require accelerometers. For instance, the automotive industry uses accelerometers to activate airbag systems. Cameras use accelerometers to actively stabilize pictures. Computer hard disk drives rely on accelerometers to detect imminent external shocks and protect the drive before damage occurs. But these are only a few of the applications for accelerometers.

In reality, the possibilities for accelerometer use are endless. Microfabrication technologies have advanced steadily, enabling the low-cost, tiny micro-machined accelerometers the industry uses today. In fact, small size and low cost are the two main factors allowing these devices to cover such a broad spectrum of applications.

The most common method of measuring acceleration is using a mass-spring-damper structure, converting the acceleration to a displacement quantity. Applying the capacitive sensing technique makes it easy to convert this displacement to an electrical signal proportional to the applied acceleration.

For the mass-spring-damper structure, a known quantity of mass, also known as the test mass or proof mass, connects to the sensor frame through a spring. When the sensor frame accelerates because of an external force, the proof mass tends to lag behind due to its inertia. This causes the relative position of the proof mass to change with respect to the sensor frame.

An external observer sees the proof mass displaced to one side of its resting position. At the same time, the displacement compresses or elongates the spring, which exerts a restoring force on the proof mass proportional to the displacement. This force pushes or pulls the proof mass back and makes it accelerate in the direction of the frame's acceleration.

If the designer has chosen appropriate values for the various parameters of the system, the displacement of the proof mass will be proportional to the frame acceleration once the transient response of the system subsides.
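
A minimal sketch of that steady-state relationship, using made-up MEMS-scale values for the proof mass and spring constant:

```python
# Once the transient settles, a mass-spring-damper sensor displaces its proof
# mass by x = m * a / k, i.e. displacement is proportional to frame acceleration.

m = 1e-9      # proof mass in kg (illustrative value)
k = 1.0       # spring constant in N/m (illustrative value)

def displacement(accel_m_s2: float) -> float:
    return m * accel_m_s2 / k

for g in (1, 2, 5):
    a = 9.81 * g
    print(f"{g} g -> {displacement(a) * 1e9:.2f} nm of proof-mass displacement")
```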

In summary, a mass-spring-damper structure converts the sensor frame acceleration to a displacement of the proof mass. Now the question is, how to measure this displacement? Although there are several methods of measuring this displacement, one of the most common arrangements is the capacitive sensing technique.

Fixing two electrodes to the sensor frame and a movable electrode to the proof mass creates two capacitors. As the proof mass moves, the capacitance between the moving electrode and one fixed electrode decreases, while the capacitance between the moving electrode and the other fixed electrode increases. Measuring the change in these sense capacitors therefore reveals the displacement of the proof mass, which in turn is proportional to the input acceleration.
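
The following sketch illustrates the idea with an ideal parallel-plate model and made-up dimensions; for this geometry, the normalized difference of the two capacitances recovers the displacement directly:

```python
# Two sense capacitors formed by one moving electrode between two fixed ones.
# Moving the proof mass by x shrinks one gap and widens the other, and
# (C1 - C2) / (C1 + C2) = x / g for an ideal parallel-plate model.

EPS0 = 8.854e-12   # permittivity of free space, F/m
AREA = 1e-6        # electrode area in m^2 (illustrative)
GAP  = 2e-6        # nominal gap in m (illustrative)

def sense_caps(x):
    c1 = EPS0 * AREA / (GAP - x)   # gap that shrinks as the mass moves
    c2 = EPS0 * AREA / (GAP + x)   # gap that widens
    return c1, c2

x_true = 0.1e-6                    # 100 nm of proof-mass displacement
c1, c2 = sense_caps(x_true)
x_est = GAP * (c1 - c2) / (c1 + c2)
print(f"estimated displacement: {x_est * 1e9:.1f} nm")   # ~100.0 nm
```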

To measure changes in the sense capacitors accurately, it is necessary to apply the technique of synchronous demodulation. The signal conditioning built into the ADXL family of accelerometers from Analog Devices makes this easy. These devices use a 1 MHz square wave as the AC excitation for the sense capacitors.

As the movable electrode approaches one of the fixed electrodes, the amplifier input bridge receives a larger proportion of the excitation voltage from the moving electrode. If the movable electrode is in its center rest position, the voltage at the amplifier input is zero.
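
The sketch below simulates the synchronous demodulation principle in Python with assumed signal values; it illustrates the idea only and is not the ADXL's internal circuit:

```python
# The slow displacement signal amplitude-modulates a 1 MHz square-wave carrier;
# multiplying by the same carrier and low-pass filtering recovers the signal.

import numpy as np

fs = 20e6                                   # simulation sample rate, Hz (assumed)
f_exc = 1e6                                 # 1 MHz square-wave excitation
t = np.arange(0, 2e-3, 1 / fs)

displacement = 0.5 * np.sin(2 * np.pi * 200 * t)                # slow 200 Hz motion
carrier = np.where(np.sin(2 * np.pi * f_exc * t) >= 0, 1.0, -1.0)
bridge_output = displacement * carrier                          # amplitude-modulated signal

demodulated = bridge_output * carrier                           # multiply by the reference
window = int(fs / f_exc) * 50                                   # crude low-pass: 50-cycle average
recovered = np.convolve(demodulated, np.ones(window) / window, mode="same")

error = np.max(np.abs(recovered[window:-window] - displacement[window:-window]))
print(f"worst-case recovery error away from the edges: {error:.4f}")
```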

What is Ambient Sensing?

Although smart homes have been around for several years now, the industry is still rather nascent. We are familiar with Amazon Alexa and Google Home devices, but as the basis of a smart home they have their limitations.

Smart devices do use technologies promising levels of interoperability and convenience that were unheard of a few years ago. However, they have not been able to fulfill current expectations. For instance, they struggle if there is no home network, cannot use unprocessed data, and are typically standalone devices.

Movies provide a better concept of a smart home. They present futuristic buildings with levels of autonomy and comfort far beyond what current technology can provide. In the real world, our ability to interact with smart homes is rather limited.

For instance, the smart technology available at present allows interaction through voice commands only, which limits its autonomy. Although current technology boasts voice recognition, it is still frustrating and cumbersome to use. Most people seek the seamless experience that comes with more intuitive, human-like interaction.

For instance, it is still not possible to unlock the full potential of a smart home simply by improving voice commands. Although audio sensors form a crucial element of intuitive interaction with a smart home, making them part of a sensor array that provides better contextual information would be a better idea. For a genuinely smart home, devices must provide more meaningful interaction, including superior personalization for contextualized decision-making.

While it may be possible for manufacturers to pack unique sensor arrays into devices, some sensor types could prove more useful than others. For instance, cameras provide huge amounts of information, and smart systems could use this to perceive the home better. Adding acoustic sensors and gas sensors, along with 3-D mapping, could be one way of taking smart environments to the next level.

By collating these inputs, smart devices can understand and implement individual preferences better. For instance, depending on who has entered or left a room, a smart device can change the sounds, lights, safety features, and temperature to match that person's profile. Smart devices must not limit themselves to comprehending the ambient environment alone, but should be capable of changing it, even without direct inputs.

These features could go beyond providing comfort alone. For instance, with motion sensors, a device could extend security; combining motion sensing with individual recognition and 3-D mapping could make homes much safer. To save energy, presence, daylight, and temperature sensors could dim the lights or adjust the air conditioning for better comfort on hot days.

One of the issues holding back such implementations is consumer privacy. While homeowners have grown accustomed to smart speakers, there are endless examples of data-mining organizations observing consumers' daily interactions with these devices. For instance, Amazon's Astro robot has been accused of data harvesting, and Facebook's smart glasses have drawn criticism from the Data Protection Commission in Ireland. As devices get smarter and use more ambient technology, consumers will have to share even greater amounts of data than they do at present.

What are Axial Flux Motors?

AC induction motors are no doubt the most popular and widely used electric motors today. For DC applications, there are permanent magnet motors. However, newer applications are demanding different types of motors with higher efficiency and better speed-torque characteristics. One such application is the electric vehicle sector, where axial flux motors are gaining traction.

Axial flux motors are not new. For the past few decades, manufacturers have been using these motors for stationary applications like agricultural machinery and elevators. With modifications and innovations over the past decades, axial flux motors are now capable of running airport pods, electric motorcycles, delivery trucks, aircraft, and electric cars.

Induction motors and permanent magnet motors are most often known as radial flux motors, as the flux they generate radiates outward, perpendicular to their axle. Despite extensive development aimed at optimizing the weight and cost of radial flux motors, the gains have become asymptotic. Therefore, moving to a completely different type of machine, such as an axial flux motor, makes better sense.

With the axial flux design, a permanent magnet motor can provide higher torque for a given volume than a comparable radial flux motor. This is because the axial flux design generates torque over a much larger active magnetic surface area, rather than relying on the motor's outside diameter.

Therefore, an axial flux motor can be much more compact, with an axial length far shorter than that of its radial counterpart. Because of this shorter axial length, axial flux motors are more suitable for applications that place the motor inside the wheel. Although these motors are slim and lightweight, they can provide the machine in which they are mounted with higher power and torque density than a comparable radial motor, without resorting to high-speed rotation.

The shorter, single-dimensional flux path also provides the axial flux motors with high efficiency, typically over 96%. This is a tall order for the best 2D radial flux motors available on the market.

Compared to radial flux motors, axial flux motors can be five to eight times shorter and considerably lighter. Both factors improve the options for designers of EV platforms.

Axial flux motors are available in two principal topologies: dual-rotor/single-stator and single-rotor/dual-stator.

In a permanent magnet motor using radial flux technology, the magnetic flux loop starts from a permanent magnet on the rotor. It then passes through the first tooth of the stator, continues to flow radially along the stator, and passes through a second tooth, arriving at the second magnet in the rotor.

In an axial flux motor, using the dual rotor technology, the flux loop begins at the first magnet. It then passes axially through the stator tooth arriving immediately at the second magnet. Therefore, the flux has to travel a much shorter distance compared to that in the radial flux motor. This allows the axial flux motor to be much smaller for the same power, increasing its power density and efficiency. In contrast, the flux has to follow a 2-dimensional path inside a radial flux motor.

How to Effectively Mount Accelerometers

An appropriate coupling between the accelerometer and the system it is monitoring is essential for accurate measurements. Engineers use different methods for mounting MEMS accelerometers, and this affects their frequency response.

The resonance of the mounting fixture plays an important role, as it can introduce an error in the measurement. Accelerometers using MEMS sensors typically use a printed circuit board or PCB for mounting the sensor, and there may also be other mechanical interfaces between the PCB and the surface of the object it is monitoring. This creates a mechanical system that can have multiple resonances within the frequency range of interest.

For instance, the resonant frequency of the mounting structure may be close to the frequency of the acceleration signal. This will cause the sensor to receive an amplified signal in place of the original acceleration.

Again, if the mechanical coupling causes damping, the sensor will likely receive an attenuated signal.
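
The sketch below models the mount as a simple second-order system with assumed resonance frequencies and damping ratios, to show why a stiff stud mount preserves the signal while a softer coupling amplifies near its own resonance and attenuates above it:

```python
# Transmissibility of a second-order mount with natural frequency f_n and
# damping ratio zeta. All numbers below are assumptions for illustration.

import math

def transmissibility(f_signal, f_n, zeta):
    r = f_signal / f_n
    return math.sqrt(1 + (2 * zeta * r) ** 2) / math.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

# Assumed mount resonances: a stiff stud mount vs. a softer adhesive mount.
for f in (100, 1_000, 2_500, 5_000):
    stud = transmissibility(f, f_n=20_000, zeta=0.05)
    glue = transmissibility(f, f_n=3_000, zeta=0.10)
    print(f"{f:>5} Hz signal: stud gain {stud:.2f}, adhesive gain {glue:.2f}")
```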

That means that, unless proper mounting techniques are applied, it is not possible to take full advantage of the accelerometer's bandwidth. This is especially true when measuring acceleration signals above 1 kHz. Engineers apply three main accelerometer-mounting techniques: stud, adhesive, and magnetic mounting.

Stud mounting requires drilling a hole in the object and fixing the sensor to the device under test with a nut and bolt or a screw. This method provides a rigid mechanical connection and is capable of effectively transferring high-frequency vibrations from the object to the sensor.

Proper stud mounting requires the coupling surfaces to be as clean and flat as possible. Using a thin film of some type of coupling fluid like oil or grease between the coupling surfaces aids in improving the coupling. The fluid fills small voids between the surfaces, thereby improving transmissivity. It also helps to use a torque wrench to tighten the stud to the manufacturer’s specifications.

Where it is not possible to drill a hole in the device, engineers use an adhesive to couple the sensor to the object it has to monitor. Depending on the nature of the object, engineers use glue, epoxy, or even wax for the coupling. They select the adhesive depending on whether the mounting is temporary or permanent. In case the surface of the object is not smooth, engineers sometimes use an adhesive mounting pad or mounting base. While adhesives fix the mounting pad to the test surface, a stud mounting fixes the sensor to the mounting base.

Engineers have an alternative method of fixing accelerometers: magnetic mounting. However, this method is only suitable for ferromagnetic surfaces. If the surface is a non-magnetic metal or very rough, engineers often weld a ferromagnetic pad to it to act as a magnetic base.

As stud mounting offers a firmer connection than the adhesive and magnetic methods, it is suitable for measuring higher-frequency acceleration signals. The adhesive and magnetic mounting methods are suitable for applications where the acceleration signals stay below a few kilohertz.

Why ADC Grounding is Important

We commonly use analog-to-digital converters in electronic devices. For instance, we connect the output of a sensor to an ADC input and use the digital readings for our purpose. Digital signals offer good noise rejection, with firm switching between levels and a healthy built-in noise margin. However, the analog side can be susceptible to noise.

If the analog input is noisy, it affects the digital output. Most noise on the analog side comes from a single source: a lack of attention to the ground. To get better results from ADCs, it is important to understand the basic principles of grounding.

Grounding is simple for low- and mid-speed digital design. When testing the design on a breadboard, power and ground lines are well-defined. These are the two rails running along the two longer edges of the breadboard. The designer designates one of the lines as power, while the other is the ground. They connect the power and ground points in the circuit to the respective rails using short wires.

The importance of grounding increases as digital circuits start to operate faster, and the resolution of the analog side increases. In reality, the ground is not simply a zero-voltage level, it is also a return path for the current flowing in the circuit.

In ideal conditions, no circuit behavior would affect the ground. However, in the real world, the ground is imperfect. The return path through a narrow trace may have a small resistance, as may a bad solder joint or the few ground pins on a chip, and the return current through this resistance appears as ground bounce in the form of voltage spikes. Add some stray inductance, such as from the leads of a chip package, and the power supply noise increases further as operating frequencies go up.
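
A back-of-the-envelope sketch with assumed resistance, inductance, current, and edge-rate values shows how quickly these spikes add up:

```python
# Even small ground resistance and lead inductance produce visible spikes on
# the "ground" when a fast digital edge returns current through them:
# v = i * R + L * di/dt. All values are assumptions for illustration.

R_ground = 0.05        # ohms of trace / solder-joint resistance (assumed)
L_ground = 5e-9        # 5 nH of package-lead inductance (assumed)

i_step = 0.02          # 20 mA of switching return current (assumed)
t_rise = 2e-9          # 2 ns edge (assumed)

v_resistive = i_step * R_ground
v_inductive = L_ground * i_step / t_rise
print(f"resistive drop: {v_resistive * 1e3:.1f} mV")   # 1.0 mV
print(f"inductive spike: {v_inductive * 1e3:.1f} mV")  # 50.0 mV, dominant at fast edges
```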

When the ADC resolution is high, the step width of the digital output is in the millivolt range, which makes spikes and noise on the analog input a major problem. The input noise causes code errors that add to the error sources within the ADC. Designers can take reference from good off-the-shelf board designs to improve their ground quality.
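
A minimal sketch of the step (LSB) size of an ideal ADC, with an assumed 3.3 V full-scale range, shows why millivolt-level noise matters at higher resolutions:

```python
# The voltage step (LSB size) of an ideal ADC is the full-scale range / 2^N.

def lsb_volts(v_full_scale: float, bits: int) -> float:
    return v_full_scale / (2 ** bits)

V_REF = 3.3   # assumed full-scale range, volts
for bits in (8, 12, 16):
    print(f"{bits}-bit ADC: 1 LSB = {lsb_volts(V_REF, bits) * 1e3:.3f} mV")
# 8-bit  -> 12.891 mV
# 12-bit -> 0.806 mV
# 16-bit -> 0.050 mV  (a 5 mV ground spike is roughly 100 codes of error)
```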

Boards with SoCs or microcontrollers often have a ground plane. Here, the ground is a thick copper layer that may occupy one or more layers of a multi-layered board. IC pins that need to reach the ground can do so through very short paths, so the connection resistance drops drastically. Capacitors bypassing the power lines reduce the effect of stray inductance to a large extent, which helps smooth out noise from the power supply lines.

Nowadays, there are smart sensors on the market that contain built-in microcontrollers and ADCs. The analog input signals have only a very short distance to travel, so they have much less tendency to pick up noise. In addition, some sensors present data serially, such as over SPI or I2C interfaces, where the output is already in digital form.

Designers must pay special attention to boards with islands of unconnected ground, as these can cause the maximum level of noise at the input.

Using RTDs for Measuring Temperature

Many applications in industrial automation, medical equipment, instrumentation, and elsewhere require temperature measurement for monitoring environmental conditions, correcting system drift, or achieving high precision and accuracy. Many temperature sensors are available, such as electronic bandgap sensors, thermistors, thermocouples, and resistance temperature detectors or RTDs.

The selection of the temperature sensor depends on the temperature range to be measured and the accuracy desired. The design of the thermometer also depends on these factors. For instance, RTDs provide an excellent means of measuring the temperature when the range is within -200 °C to +850 °C. RTDs also have very good stability and high accuracy of measurement.

The electronics associated with using RTDs as temperature sensors with high accuracy and good stability must meet certain criteria. As an RTD is a passive device, it does not produce any electrical signal output on its own. The electronics must provide the RTD with an excitation current for measuring its resistance. This requires a small but steady electrical current passing through the sensor for generating a voltage across it.

The design of the electronics also depends on whether the design is using a 2-, 3-, or 4-wire sensor. This decision affects the sensitivity and accuracy of the measurement. Furthermore, as the variation of resistance of the RTD with temperature is not linear, the electronics must condition the RTD signal and linearize it.

RTDs in common use are mostly made of platinum, and their commercial names are PT100 and PT1000. These are available in 2-wire, 3-wire, and 4-wire configurations. Platinum RTDs are available in two shapes—wire wound and thin-film. Other RTD types available are made from copper and nickel.

When using an RTD as a temperature sensor, its resistance varies as a function of temperature, and not in a linear manner; the variation is, however, very precise and repeatable. To linearize the output of the RTD, the electronics must apply a standardizing curve, and the most common standardizing curve for RTDs is the DIN curve. This curve defines the resistance-versus-temperature characteristics of the RTD sensor and its tolerance within the operating temperature range.
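
For reference, the commonly quoted DIN/IEC 60751 coefficients give the resistance-versus-temperature relationship for platinum RTDs; the short sketch below evaluates it for a PT100 at and above 0 °C:

```python
# Callendar-Van Dusen equation for T >= 0 degC: R(T) = R0 * (1 + A*T + B*T^2),
# with the standard DIN/IEC 60751 A and B coefficients.

A = 3.9083e-3      # 1/degC
B = -5.775e-7      # 1/degC^2

def rtd_resistance(temp_c: float, r0: float = 100.0) -> float:
    """Resistance of a PT100 (r0=100) or PT1000 (r0=1000) at temp_c >= 0 degC."""
    return r0 * (1 + A * temp_c + B * temp_c ** 2)

for t in (0, 25, 100):
    print(f"PT100 at {t:>3} degC: {rtd_resistance(t):.2f} ohms")
# 0 degC -> 100.00 ohms, 25 degC -> 109.73 ohms, 100 degC -> 138.51 ohms
```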

Using the standardizing curve helps define the accuracy of the sensor, starting with a base resistance at a specific temperature. Usually, this resistance is 100 ohms at 0 °C. DIN RTD standards have many tolerance classes, which are applicable to all types of platinum RTDs in low power applications.

The user must select the RTD and its accuracy for the specific application. The temperature range the RTD can cover depends on the element type. The manufacturer denotes its accuracy at calibration temperature, usually at 0 °C. Therefore, any temperature measured below or above the specified temperature range of the RTD will have lower accuracy and a wider tolerance.

The categorization of RTDs depends on their nominal resistance at 0 °C. A PT100 sensor therefore has a resistance of 100 ohms at 0 °C, while a PT1000 sensor has a resistance of 1000 ohms at the same temperature. Likewise, the temperature coefficient at 0 °C for a PT100 sensor is 0.385 ohms/°C, while that for a PT1000 is ten times higher, at 3.85 ohms/°C.
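
A minimal first-order conversion from measured resistance back to temperature, using only the nominal resistance and the coefficients quoted above (a real design would invert the full DIN curve instead):

```python
# Linear approximation: good near 0 degC, illustrative only.

def approx_temperature(r_measured: float, r0: float = 100.0) -> float:
    ohms_per_degc = 0.385 * (r0 / 100.0)   # 0.385 for PT100, 3.85 for PT1000
    return (r_measured - r0) / ohms_per_degc

print(approx_temperature(109.73))           # PT100 reading  -> ~25.3 degC
print(approx_temperature(1097.3, r0=1000))  # PT1000 reading -> ~25.3 degC
```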

Advanced Materials for Magnetic Silence

High-performing advanced magnetic materials are now available that help handle challenges related to conducted and radiated electromagnetic interference in hybrid and electric vehicles. Automotive engineers are encountering new challenges as fully electric vehicles, or EVs, and hybrid electric vehicles, or HEVs, become more popular.

These challenges are significant enough that engineers now have a dedicated discipline for them: noise, vibration, and harshness, or NVH, engineering. Its aim is to minimize NVH to ensure not only the stability of the vehicle but also the comfort of the passengers.

With electric vehicles becoming quieter, several NVH sources that the noise of the internal combustion engine used to mask are now easily discernible. Engineers divide the root causes of NVH problems in electric vehicles into vibration, aerodynamic noise, mechanical noise, and electromagnetic noise.

For instance, cabin comfort is adversely affected by electromagnetic noise from auxiliary systems such as the power-steering motor and the air-conditioning system. This can also interfere with the functioning of other subsystems.

Likewise, there is electromagnetic interference from the high-power traction drive system. This interference produces harmonics of the inverter switching and power supply frequencies, and it also induces electromagnetic noise within the motor itself.

As the battery frequently charges and discharges while the EV is in operation, various kinds of electromagnetic noise, including radiated noise, common-mode noise, and differential-mode noise, move through the transmission lines.

All the above reduce the cabin comfort in the vehicle while interfering with systems that help manage the combustion engine in an HEV.

As with many engineering projects, NVH issues are specific to particular platforms and depend on the design of several structural components, the location of subsystems relative to one another, and the design of isolating bushes and mountings. Engineers deal with most EMI-related NVH issues by applying electrical engineering best practices for attenuating high-frequency conducted and radiated interference as it couples onto cables and reaches various subsystems. They use cable ferrites to prevent long wires from acting as pickups or radiating aerials, and inline common-mode chokes to attenuate EMI entering signal and power lines by conduction.

For automotive applications, such cable chokes and ferrites must meet exacting criteria.  Major constraints for these components are their weight and size. Common-mode chokes must provide noise suppression through excellent attenuation properties while using a small physical volume. Additionally, they need to suppress broadband noise up to high operating temperatures, while maintaining high electrical and mechanical stress resistance.

To help with manufacturing, such as maintaining high levels of productivity, there are further requirements for robustness and easy handling on assembly lines. This ensures each unit reaches the customer in perfect condition. New materials meet the above requirements while offering enhanced characteristics.

The new class of materials is nanocrystalline cores, which engineers classify as metals and which help eliminate low-frequency electromagnetic noise. Cable ferrites and choke cores made of these materials are much smaller than those made from conventional materials like ceramic ferrites. They also deliver superior magnetic performance, presenting a viable solution for challenging automotive e-NVH issues.

New Battery Technology for UPS

Most people know of Lithium-ion battery technology mainly because of its overwhelming presence in mobile phones. Those who use uninterruptible power supplies to back up their systems are familiar with Lead-acid cells and the newer Lithium-ion cells. Another alternative technology is now emerging, mainly for mission-critical facilities such as data centers. This is Nickel-Zinc technology, and it has better trade-offs to offer.

But Nickel-Zinc battery technology is not new. In fact, Thomas Edison patented it about 120 years ago. In its current avatar, the Nickel-Zinc battery offers superior performance when used in UPS backup systems: better power density, greater reliability and safety, and high sustainability.

For instance, higher power density translates into smaller weight and size. This is the major difference between a battery providing energy and a battery providing power. In a data center, the UPS must discharge fast for a short period to maintain operational continuity, as happens during brief outages or until the backup generators spin up to take over the load. This is the most basic power-battery duty: the battery must deliver a high rate of discharge, and it does so with a small footprint.

On the other hand, Lead-acid and Lithium-ion technologies offer energy batteries. Their design allows them to discharge energy at a lower rate over longer periods. Electric vehicles utilize this feature, and the automotive industry is spending top dollar to increase the energy density of EV batteries so that users get more mileage or range from their vehicles. This is not very useful for data center backup, as such a battery needs a larger energy-storage footprint to support short-duration, high-power output requirements.

This is where Nickel-Zinc battery technology comes in. With an energy density nearly twice that of a Lead-acid battery, Nickel-Zinc batteries take up only half the space. Not only is the footprint halved, the weight also drops by half for the same power output. Compared to Lithium-ion batteries, Nickel-Zinc batteries not only excel in footprint reduction but also charge at a faster rate while retaining thermal stability. This makes them particularly useful for maintaining mission-critical facility uptime.

Nickel-Zinc batteries have proven their reliability as well, having clocked tens of millions of operating hours providing uninterrupted backup power in mission-critical applications. Another feature very useful for data center operations is how Nickel-Zinc batteries behave in battery strings.

When a Lithium-ion or Lead-acid battery fails, it acts as an open circuit, preventing the other batteries in the string from transferring power. A weak or failed Nickel-Zinc cell, on the other hand, remains conductive, allowing the rest of the string to continue operating at a lower voltage. In emergencies, this feature of the Nickel-Zinc battery is extremely helpful, as the faulty battery can be replaced with no operational impact and at low cost.

In parallel operation, too, Nickel-Zinc batteries are more tolerant of string imbalances, maintaining constant power output at significantly lower states of health and charge than batteries of other technologies.