Category Archives: Guides

Why Smart Home Tech Adoptions Need Switches

Most modern homes now use connected devices for entertainment, access control, and several other daily tasks. The pace of adoption shows in the growth of the US smart-home market, which has reached some 29 million homes and is still rising.

The features and efficiencies that smart-home products offer naturally captivate consumers. However, this also requires engineers to keep the physical interfaces in mind. While customer satisfaction builds over the long term, the immediate look and feel of a device shapes what buyers will pay for it. Details therefore matter, and the choice of every component counts, including switches and buttons.

Most people tend to overlook switches and buttons, forgetting that these small parts help drive the technological movement known as the smart home. A few important reasons explain why engineers designing home products must give them serious thought.

The connected devices in a smart home depend critically on their hardware designs. These include switches, sensors, screens and other components used on smart televisions, smart thermostat controls, connected door locks, and more. Most importantly, a user’s overall satisfaction comes from the way a product feels or the tactile sensation it generates.

Most of the time, a customer’s first interaction with a product’s controls is its on/off switch, which the user physically touches. Unless the switch creates a delightful experience, the customer is likely to look for another product that feels better.

Cameras working on the Internet Protocol are now common in smart homes. The reason is easy to see: according to statistics from iControl Networks, a burglary happens every 14.1 seconds in the US. With an IP camera installed, a person can monitor activity at home from a remote location on a smartphone, laptop, or any other smart device. The very presence of IP cameras acts as a deterrent to crime, helps the police apprehend criminals, and simply provides peace of mind to the homeowner.

However, smart cameras need the right switch to power and protect them. Usually, this is a miniature tactile switch, suited to the shrinking form factor of the device. Often smaller than the camera’s tiny lens, the switch must also be robust enough to prevent intruders from breaking it and rendering the camera useless.

While IP cameras capture images of intruders whom nobody expected to enter the home, access controls offer an additional level of security for consumers concerned about privacy and safety in their smart homes. Access control usually takes the form of internet-connected doorbells with built-in cameras and smart door locks.

While the camera shows an image of the person at the door, the smart lock allows the door to be unlocked remotely. This arrangement is handy when the door has to be opened for the babysitter or for a teenager who has misplaced their keys. Usually, the smart lock has a miniature switch to set or reset it. This switch has to be small yet long-lasting, and able to withstand harsh conditions such as humidity and rain.

Bluetooth 5.0

Custodians of the Bluetooth standard are a flexible lot, judging by the enhancements the popular short-range 2.4 GHz wireless technology keeps receiving. The Bluetooth SIG, or Special Interest Group, has allowed it to evolve in ways its inventors never envisioned. That foresight is expected to carry the technology past three billion shipments beginning next year.

The latest incarnation of the technology is Bluetooth 5.0, and it shows how seriously the SIG wants to entrench Bluetooth as a vital component of the Internet of Things (IoT). By 2025, more than 80 billion connected things will be busy exchanging data wirelessly across networks. According to IDC, the International Data Corporation, Bluetooth will be the governing standard for these networks.

That is understandable, as Bluetooth has its roots in short-range handset communication. It all started in the mid-1990s at Ericsson, when engineers Sven Mattisson and Jaap Haartsen wanted to get rid of the jumble of wires linking their electronic devices. They devised low-throughput, short-range radio links for exchanging information between handsets without having to plug in a cable. The Ericsson endeavor turned into an open standard operating in the unlicensed 2.4 GHz band, and several other companies joined in, including Toshiba, IBM, and Nokia.

Around 1998, the standard was named Bluetooth, after an ancient Scandinavian king. However, the performance of Bluetooth 1.0 fell below expectations, achieving only about 700 kbps under realistic conditions, and manufacturers struggled to get their equipment to interoperate. Subsequent iterations added bandwidth and refined the scheme of hopping randomly across 79 1-MHz channels to dodge RF interference from other devices in the license-free 2.4 GHz band.

Incorporation into cellphones brought Bluetooth its major success, as the handset became the center of the personal area network, linking almost everything electronic to the smartphone. Additions to the Bluetooth firmware stack optimized its performance for specific applications such as cars, printers, speakers, and PCs. By now, Bluetooth had reached version 3.0+, with a bandwidth of 3 Mbps, and by piggybacking on an 802.11 channel it was soon competing with Wi-Fi at 24 Mbps.

Bluetooth achieved its biggest breakthrough with version 4.0, also called Bluetooth Low Energy. This version introduced a second radio with a lightweight stack that remained interoperable with its full-featured sibling. Now even compact wireless devices could send a tiny amount of data in a rapid burst and then return to an ultra-low-power sleep state, allowing them to operate for long periods from small-capacity batteries.

With Bluetooth 5.0, the low-energy radio gets a speed boost to 2 Mbps, which makes things run far more smoothly; IoT sensors can now receive over-the-air updates to keep them protected from hackers. The range has also quadrupled, making Bluetooth 5.0 viable for whole-house applications such as smart lighting, although throughput drops to 125 kbps when the range is extended.

To make it competitive with other industrial and smart-home networking technologies such as Z-Wave, Zigbee, and Thread, Bluetooth 5.0 now incorporates the Bluetooth Mesh networking standard.

Choosing a Regulator – Switching or LDO

Unlike AC circuits, where a simple transformer can change the incoming voltage to a different level, DC circuits need an active device to change the voltage to the desired level. In general, two types of circuits do this: switching and linear. Switching regulators are highly efficient and use buck, boost, or buck-boost topologies to change the voltage level. Linear regulators such as LDOs, on the other hand, are ideal for powering very-low-power devices or applications where the difference between the input and output voltages is small. Compared to switching regulators, linear regulators generate less noise and are simple and cheap, but inefficient.

Linear Regulators (Low-Dropout Regulators)

A linear regulator derives a regulated output from the input supply by operating a series pass element in its linear region. The effective resistance of this element varies with the load, which holds the output voltage constant.

Irrespective of their make and design, all linear regulators need their input voltage to be at least some minimum amount higher than the desired output voltage. Engineers call this minimum amount the dropout voltage. An LDO, or low-dropout regulator, is a DC linear regulator that can regulate its output even when the difference between the input and output voltages is very small.

Therefore, applications that need an output voltage very close to the input supply and consume little power are ideal for linear regulators. The power dissipated by a linear regulator is the product of the load current and the difference between the input and output voltages, so a smaller difference means the regulator can handle more power or a higher load current.
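To make that trade-off concrete, here is a small Python sketch; the 5 V input, 3.3 V output, and 500 mA load are placeholder values, and quiescent current is ignored. It shows how the input-output difference sets both the dissipation and the best-case efficiency of a series linear regulator.

def linear_regulator_dissipation(v_in, v_out, i_load):
    # Power burned as heat in the pass element: (Vin - Vout) * Iload
    return (v_in - v_out) * i_load

def linear_regulator_efficiency(v_in, v_out):
    # Best-case efficiency, ignoring quiescent current: Vout / Vin
    return v_out / v_in

# Example: dropping 5 V to 3.3 V at 500 mA wastes 0.85 W and caps efficiency at 66 percent.
print(linear_regulator_dissipation(5.0, 3.3, 0.5))  # 0.85
print(linear_regulator_efficiency(5.0, 3.3))        # 0.66

Halving the input-output difference halves the heat for the same load current, which is exactly why LDOs shine when the two voltages are close.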

Although linear and low-dropout regulators offer a simple and cheap solution, they are notoriously inefficient, because they dissipate as heat the power corresponding to the difference between the input voltage and the regulated output voltage. Most low-dropout regulators are low-current devices that offer well-regulated outputs and require very few external components. They usually come in small packages, have a fast transient response, and are highly accurate.

Switching Regulators

Most power-management solutions today must draw little power under varying load conditions, fit into small spaces, offer high reliability, and withstand a wide range of input voltages. Therefore, a broad range of applications is moving toward highly efficient, wide-input, low-quiescent-current switching regulators.

Switching regulators work by switching a series element on and off very rapidly. The series element may be a synchronous or non-synchronous FET switch. Usually, an associated inductor stores the input energy temporarily and then releases it to the output circuit at a different voltage level. The duty cycle of the switch determines the amount of charge transferred to the load.
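As a rough sketch of how the duty cycle sets the output, consider an ideal buck (step-down) regulator, where the output voltage is simply the input scaled by the duty cycle. The Python snippet below assumes lossless components and continuous conduction, and the 12 V and 3.3 V figures are only examples.

def buck_output_voltage(v_in, duty_cycle):
    # Ideal buck converter: Vout = D * Vin (lossless, continuous conduction)
    return v_in * duty_cycle

def required_duty_cycle(v_in, v_out):
    # Duty cycle needed to regulate v_out from v_in in the ideal case
    return v_out / v_in

# Example: stepping 12 V down to 3.3 V calls for roughly a 27.5 percent duty cycle.
print(required_duty_cycle(12.0, 3.3))  # about 0.275

A real controller closes the loop around this relationship, adjusting the duty cycle continuously to hold the output steady as the input and load change.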

Switching regulators operate efficiently, as their switching element dissipates almost no power, because the element is either switched off or fully conducting. Unlike linear regulators, switching regulators can generate output voltages higher than the input voltage or of the opposite polarity.

Therefore, switching regulators offer wide input and output voltage ranges, integrated series elements, pin-to-pin compatible parts, internal compensation, and light load efficiency modes, while being simple and easy to use.

Charlieplexing on the Raspberry Pi

If you suddenly find you need to control many LEDs and do not have the requisite driver electronics, you can turn to your single-board computer, the Raspberry Pi (RBPi), and use it to charlieplex the LEDs.

Charlieplexing is named after Charlie Allen, the inventor of the technique. It takes advantage of a feature of the RBPi’s GPIO pins: they can be switched between outputs and inputs even while the RBPi is running a program. A pin set as an input presents a high impedance, so it neither passes enough current to light an LED nor influences the other pins that are set as outputs and connected to the LEDs.

Using charlieplexing, you can control up to six LEDs with three GPIO pins. For this, you will need three current-limiting 470 Ω resistors, one on each GPIO pin. The program charlieplexing.py defines a 3×6 array that sets the state and direction of the three GPIO pins: the state defines whether a pin is driven high or low, and the direction defines whether the pin is an output or an input.

Since LEDs are diodes, they light up only when their anodes are at a higher potential than their cathodes. Therefore, to light a single LED, the program sets the pin connected to its anode as an output and drives it high, sets the pin connected to its cathode as an output and drives it low, and sets the third pin as an input. Various combinations of pin state and direction drive each of the LEDs on and off in turn.

The array in the program holds the settings for each GPIO pin. A value of 0 means the pin is an output in a low state, 1 means the pin is an output in a high state, and -1 means the pin is set as an input.
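The exact listing of charlieplexing.py is not reproduced here, but a minimal Python sketch of the idea might look like the following. It uses the RPi.GPIO library, three illustrative BCM pin numbers, and one row of the array per LED; adapt the pin numbers and wiring to your own setup.

import RPi.GPIO as GPIO

# Three GPIO pins used for charlieplexing (BCM numbering; the choice is illustrative).
PINS = [17, 27, 22]

# One row per LED: 1 = output driven high, 0 = output driven low, -1 = input (high impedance).
LED_STATES = [
    [1, 0, -1],   # LED 1
    [0, 1, -1],   # LED 2
    [-1, 1, 0],   # LED 3
    [-1, 0, 1],   # LED 4
    [1, -1, 0],   # LED 5
    [0, -1, 1],   # LED 6
]

def light_led(led_index):
    # Configure the three pins so that only the chosen LED is forward biased.
    for pin, state in zip(PINS, LED_STATES[led_index]):
        if state == -1:
            GPIO.setup(pin, GPIO.IN)        # high impedance: this pin takes no part
        else:
            GPIO.setup(pin, GPIO.OUT)
            GPIO.output(pin, GPIO.HIGH if state == 1 else GPIO.LOW)

GPIO.setmode(GPIO.BCM)
light_led(0)   # light the first LED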

In charlieplexing, it is easy to calculate how many LEDs a given number of GPIO pins can control. The formula is LEDs = n² − n, where n is the number of pins used. Accordingly, three GPIO pins can charlieplex 6 LEDs, four pins can control 12 LEDs, and 10 pins would allow control over a massive 90 LEDs.
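Expressed in code, the charlieplexing formula is a one-liner:

def max_charlieplexed_leds(n_pins):
    # LEDs = n^2 - n, where n is the number of GPIO pins used
    return n_pins * n_pins - n_pins

print(max_charlieplexed_leds(3))   # 6
print(max_charlieplexed_leds(4))   # 12
print(max_charlieplexed_leds(10))  # 90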

Charlieplexing is not limited to lighting one LED at a time; it can make several appear lit at once. For this, the program must run a refresh loop that keeps the desired state of the LEDs in the array. On each pass, the program briefly turns on every LED that should be lit before moving on to the next, as sketched below. Persistence of vision plays a large part here: the program must run fast enough to make it appear that more than one LED is on at a time.
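A rough sketch of such a refresh loop, building on the light_led() helper from the sketch above, appears below. The dwell time is only a placeholder; it simply has to be short enough for persistence of vision to merge the flashes.

import time

# Indices of the LEDs that should currently appear to be lit.
leds_on = [0, 2, 5]

def refresh(cycles, dwell=0.001):
    # Light each requested LED in turn, fast enough that all of them appear lit together.
    for _ in range(cycles):
        for led in leds_on:
            light_led(led)       # only one LED is really on at any instant
            time.sleep(dwell)

refresh(cycles=1000)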

However, there is a downside to lighting several LEDs this way. Because each LED is actually lit for only a fraction of the refresh cycle, it glows less brightly than it would at full duty.

Dimming LEDs with PWM Generator

Unlike incandescent bulbs, Light Emitting Diodes (LEDs) are not easy to dim. Incandescent bulbs operate from an alternating voltage supply, so a Triac can control the effective RMS voltage applied to the bulb. Moreover, since incandescent bulbs are resistive elements, a simple reduction in voltage is enough to reduce the current through them, and with it their light and heat output.

LED operation is different, as LEDs work on direct voltage. Each LED requires an optimum load current to produce its rated light output while dropping a roughly fixed voltage across its terminals. Therefore, simply decreasing the voltage across an LED or crudely limiting its current is not a practical way to dim it.

However, an LED responds much faster, switching on and off at a much higher speed than an incandescent bulb can. This makes it possible to change an LED’s light output by switching it on and off rapidly. For instance, if the LED is repeatedly switched on for the same amount of time that it is switched off, its average light output is halved. By continuously varying the ratio of on time to off time, the LED can be swept from zero output to its maximum output. Engineers call this technique Pulse Width Modulation (PWM), and it has become the de facto mechanism for dimming LEDs.
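A quick sketch makes the arithmetic plain. The function below is a simple illustration, not tied to any particular driver; it treats the average light output as the full output scaled by the duty cycle of the PWM waveform.

def average_led_output(t_on, t_off, full_output=1.0):
    # Average light output of an LED driven by a PWM waveform of period t_on + t_off
    duty_cycle = t_on / (t_on + t_off)
    return duty_cycle * full_output

# Example: equal on and off times halve the apparent brightness.
print(average_led_output(t_on=1.0, t_off=1.0))  # 0.5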

Linear Technology makes different types of PWM controllers for LEDs, and they have designed the LT3932 for dimming a string of LEDs efficiently. A monolithic, synchronous, step-down DC/DC converter, the LT3932 utilizes peak current control and fixed-frequency PWM dimming for a number of LEDs connected serially.

The user can program the LED current of the LT3932 with an analog voltage or with the duty cycle of pulses applied to the CTRL pin. A resistor divider on the FB pin of the LT3932 sets its output voltage limit.

One can use an external clock at the SYNC/SPRD pin of the LT3932 to control the switching frequency, which is programmable from 200 kHz to 2 MHz; alternatively, an external resistor connected to the RT pin serves the same purpose. To reduce EMI generated by the switching, the LT3932 offers an optional spread-spectrum frequency modulation that varies the switching frequency between 100% and 125% of its programmed value.

Rated for inputs of 3.6–36 V and LED currents up to 2 A, the LT3932 is a synchronous step-down PWM LED driver for dimming an LED string. It uses an internal signal generator for PWM dimming in the absence of an external PWM signal. The LT3932 regulates the LED current to ±1.5% and the output voltage to ±1.2%. The IC achieves a 5000:1 PWM dimming ratio at 100 Hz, while the internal PWM generator achieves a 128:1 dimming ratio with a maximum duty cycle of 99.9%.

The LT3932 protects the LED string against opens and shorts while offering fault indication, and it has an accurate LED current sensor with a monitor output. Along with thermal shutdown, the IC features an accurate undervoltage lockout threshold and open-drain fault reporting for open-circuit and short-circuit load conditions. With its Silent Switcher topology, the LT3932 is well suited to applications including automotive, industrial, and architectural lighting.

Replacement for Flash Memory

Today, flash memories or thumb drives are commonly used as nonvolatile memory, storing information even without power. However, physicists and researchers believe flash memory is nearing the end of its size and performance limits, so the computer industry is searching for a replacement. Research conducted at the National Institute of Standards and Technology (NIST), for instance, suggests resistive random access memory (RRAM) as a worthy successor for the next generation of nonvolatile computer memory.

RRAM has several advantages over flash. Potentially faster and less energy-hungry, it can also pack far more information into a given space, because its switches are tiny enough to store a terabyte in an area the size of a postage stamp. So far, however, technical hurdles have kept RRAM from being broadly commercialized.

One such hurdle is RRAM’s variability. To be a practical memory, a switch needs two distinct states, representing a digital one or zero, and a predictable way of flipping from one state to the other. Conventional memory switches behave reliably, changing state predictably when they receive an electrical pulse. RRAM switches, however, are still not so reliable, and their behavior is hard to predict.

Inside an RRAM switch, an electrical pulse flips the state by moving oxygen atoms around, creating or breaking a conductive path through an insulating oxide. Short, energetic pulses are more effective at moving ions by just the right amount to create distinct on and off states. This potentially minimizes the longstanding problem of overlapping states that has largely kept RRAM in the R&D stage.

According to David Nminibapiel, a guest researcher at NIST, RRAM is as yet highly unpredictable. The amount of energy required to flip a switch once may not be enough to flip it the next time around, while applying too much energy can cause the switch to overshoot and worsen the variability problem. In addition, even after a successful flip, the two states can overlap, making it unclear whether the switch is actually storing a zero or a one.

Although this randomness takes away from the advantages of the technology, the research team at NIST has discovered a potential solution: the energy delivered to the switch can be controlled with several short pulses rather than one long pulse.

Typically, conventional memory chips work with relatively strong pulses lasting about a nanosecond. The NIST team, however, found that less energetic pulses of about 100 picoseconds, only a tenth as long as the conventional ones, worked better with RRAM. Sending a few of these gentler signals, the team noticed, was useful not only for flipping the RRAM switches predictably but also for exploring the behavior of the switches.

That led the team to conclude that these shorter signals reduce the variability. Although the issue does not go away entirely, tapping the switch several times with lighter pulses flips it gradually, while allowing the state to be checked to verify that the flip succeeded.

Difference between AC, DC, and EC Motors

People have been using different types of motors for ages. Motors can be broadly classified into AC and DC types, depending on the power source they require. However, the basics of operation remain the same for all of them: current running through a wire generates a magnetic field around it, and if another magnetic field is present, such as that of an external magnet, the two interact to produce a mechanical force capable of moving the wire. This is the basic principle on which all motors operate.

AC and DC Motors

AC induction motors have a number of coils controlled and powered by the AC input voltage. This input voltage creates the stator field, which in turn induces the rotor field. Another type of AC motor, the synchronous motor, runs at a speed locked precisely to the supply frequency.

An AC motor operates best at a specific point on its performance curve, which coincides with its peak efficiency. Forced to operate away from this point, the motor runs with a significant reduction in efficiency. Moreover, because the magnetic field in an AC induction motor is created by inducing a current in the rotor, the motor draws extra energy from the input, which makes AC motors less efficient than DC motors.

DC motors generate their secondary magnetic field using permanent magnets rather than windings. They rely on commutation rings and carbon brushes to switch the direction of the current and the polarity of the magnetic field in the rotating armature. The interaction between the magnetic field from the fixed permanent magnets and the magnetic field from the internal rotor induces rotation in the rotor.

Although DC motors run at high efficiency, they suffer from specific losses. Resistance in the rotor windings, brush friction, and eddy-current losses all reduce their efficiency.

EC Motors

To achieve higher energy efficiency and better control of the energy output, engineers have designed EC, or electronically commutated, motors. They combine the best of AC and DC motors by removing the brush and slip-ring system of commutation and replacing it with solid-state devices. This electronic control allows them to operate with higher efficiency.

EC motors are also called brushless DC motors, and they are controlled by external electronics, which may be an electronic circuit board or a variable frequency drive. Permanent magnets are on the rotor, while the fixed windings are on the stator.

The circuit board keeps the motor running by switching the phases in the fixed windings as necessary. This supplies the armature with the right amount of current at the right time, resulting in the motor achieving higher accuracy and efficiency.
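A minimal sketch of what that phase switching looks like, assuming a three-phase brushless motor driven with classic six-step (trapezoidal) commutation, is shown below. The table is purely illustrative; the real sequence depends on the motor's windings and position sensors, and a production controller would also handle PWM, timing, and current limiting.

# Illustrative six-step commutation table for a three-phase EC (brushless DC) motor.
# Each rotor-position sector maps to the phase driven high, the phase driven low,
# and the phase left floating (undriven).
COMMUTATION_TABLE = {
    1: ("A", "B", "C"),
    2: ("A", "C", "B"),
    3: ("B", "C", "A"),
    4: ("B", "A", "C"),
    5: ("C", "A", "B"),
    6: ("C", "B", "A"),
}

def drive_state(sector):
    # Return (high_phase, low_phase, floating_phase) for the given rotor sector.
    return COMMUTATION_TABLE[sector]

# Example: in sector 3 the electronics connect phase B to the supply,
# phase C to ground, and leave phase A floating.
print(drive_state(3))  # ('B', 'C', 'A')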

EC motors offer several benefits. The absence of brushes eliminates sparking and increases the life of the motor. Because electronics control the power delivered to the motor, less energy is wasted, while performance and controllability improve; this allows even small EC motors to match the performance of larger AC or DC motors. EC motors also generate less heat than AC or DC motors do.

How Do Piezoelectrics Work?

Piezoelectrics, found almost everywhere in modern life, are materials that can convert mechanical stress into electricity and back again. One can find them in sonar, medical ultrasound, loudspeakers, computer hard drives, and many other places. However popular piezoelectrics may be as a technology, very few people truly understand how they work. At Simon Fraser University in Canada and the National Institute of Standards and Technology, researchers are working to understand one of the main classes of these materials: the relaxors, which behave distinctly differently from regular piezoelectric materials and exhibit the largest effect among piezoelectrics. Most surprisingly, their discovery comes in the shape of a butterfly.

The team was examining two of the most popular piezoelectric compounds, the relaxor PMN and the ferroelectric PZT. The two look very similar under a microscope: both have a crystalline structure built from cube-shaped unit cells, the basic building blocks of all crystals, each containing one lead atom and three oxygen atoms. The essential difference appears only at the center of the cells. While PZT has a similarly charged zirconium or titanium atom occupying the center at random, PMN has differently charged magnesium or niobium atoms there. Because of these differently charged atoms, PMN develops strong electric fields that vary from one unit cell to the next. The researchers observed this behavior in PMN and other relaxors, but not in PZT.

According to Peter Gehring of the NIST Center for Neutron Research, although ferroelectric PZT and PMN-based relaxors have been around for decades, the origin of their behavioral difference was hard to pin down because sufficiently large single crystals of PZT could not be grown. For a long time, researchers had no fundamental explanation for why relaxors exhibit the greater piezoelectric effect, an explanation that could help guide efforts to optimize this technologically valuable property.

Then scientists from Simon Fraser University discovered a way to grow crystals of PZT large enough to enable a comparison of PZT and PMN. Using neutron beams, the scientists revealed new details about the locations of atoms within the unit cells. They found that the atoms in the PMN cells were displaced from their expected positions, whereas the atoms in the PZT cells sat more or less where expected. According to Gehring, this accounts for the essentials of relaxor behavior.

The scientists also observed that neutron beams scattering off PMN crystals formed the shape of a butterfly. This characteristically blurred image revealed the nanoscale structure within PMN and other relaxor materials. When they studied PZT with the same method, however, the butterfly shape did not appear. This led them to conclude that the butterfly-shaped scattering is a characteristic signature of relaxors.

The team conducted additional tests on both PMN and PZT crystals. These tests revealed for the first time that compared to PZT, PMN-based relaxors were over 100 percent more sensitive to mechanical stimulation. The team hopes these findings will help in better optimization of piezoelectric behavior in general.

Monitoring Sound & Vibration for Process Control

In a production environment, two common themes run through successful applications of acoustic or vibration monitoring. Usually, a noise or vibration event marks the start or end of a particular process step, and an automated control system triggered by such an event can easily minimize any loss of production.

On the production floor, continuous monitoring of sound and vibration has been used to control manufacturing processes for many years. For instance, in the early 1980s, Brüel & Kjær’s 2505 Multipurpose Monitor automatically monitored vibration signals. One could connect an accelerometer, microphone, or other piezoelectric transducer to the monitor and set limits that alerted the user whenever levels exceeded them. Filters limited the signal bands, and detectors averaged signals that fluctuated strongly. On the output side, relays interfaced with process control systems or other instrumentation. A process control technician using this device to monitor acoustic or vibration levels automatically needed no other expensive analysis system. These monitors were also used in the machine condition monitoring field as basic overall vibration detectors, switching off a machine if vibration levels exceeded the set limits.

These early monitors were made up of discrete analog circuit boards housed in weatherproof enclosures, and the user had to select the circuit cards needed for a specific application. Each card typically performed a single function, such as RMS detection, amplification or attenuation, high- and/or low-pass filtering, or signal conditioning, and the cards worked together with the relays, alarm indicators, and the meter module. With very little dynamic range available, users had to choose cards carefully for each application, which meant knowing both the transducer employed and the particular measurement being made. If conditions changed, additional circuit cards had to be ordered.

These disadvantages of the analog system led Brüel & Kjær to replace the monitors with modern electronics built around digital signal processors, with software controlling the RMS detection, gain and attenuation, and filtering. End users found the new monitors much simpler to apply, as a unit could be field-programmed to meet the demands of the task at hand. The supplied software, used to set up and control the unit, saved the time users had previously spent analyzing the required settings before purchasing a monitor.

The new monitors use a PC interface for setup and for displaying measurement results. Programmed settings are stored within the unit, so the monitor can operate without a PC present and retains its measurements if the power fails. Digital signal processing within the unit lets the user set up multiple low- and high-pass filters, true RMS, and peak-to-peak measurements. Built-in voltage references and test functions help in setting up new tests, including relays and indicators for system failure. In addition, electrical outputs for both unconditioned and conditioned AC signals make these new monitors ideal for real-time detection and control of acoustic and vibration events.
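As a rough sketch of the kind of processing such a monitor performs in software, the Python snippet below computes a true RMS level over a block of samples and compares it against a user-set limit. The samples and the limit are placeholder values, and a real unit would add filtering, averaging, and relay outputs.

import math

def true_rms(samples):
    # True RMS of a block of samples from an accelerometer or microphone
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def limit_exceeded(samples, limit):
    # True if the windowed RMS level exceeds the configured alarm limit
    return true_rms(samples) > limit

# Example: a vibration window whose RMS exceeds the limit would trip the alarm relay.
window = [0.2, -0.8, 1.1, -0.9, 0.7, -1.2]
print(limit_exceeded(window, limit=0.5))  # True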

What is PID Control and How Does it Work?

We use control loops all the time. For instance, it is much easier to place an object on a tabletop with your eyes open than with them closed: the eyes provide visual feedback that guides the hand to place the object in the required position without error. In the same way, modern industrial controllers regulate processes as part of a control loop. The user sends a set-point request to the controller, which compares it to a measured feedback signal. The difference between the two is the error, which the controller tries to eliminate.

PID controllers work in the same way but add a bit of mathematics. PID is an acronym for Proportional, Integral, and Derivative, and the three terms let the controller adjust the rate at which it drives the error to zero.

The proportional term multiplies the error by a constant gain KP, so the controller’s correction is proportional to how far it is from the desired set point. When the present position is far from the set point, the error is large and the approach is fast; as the error decreases, the approach slows. This is like a car travelling at high speed when it is far from its destination and slowing as it nears the end of the trip. When the error falls to a certain level, the integral term takes over.

The integral term bases its correction on the summation of the error over time, so the rate of change is no longer linear. The speed of approach falls off non-linearly as the error approaches zero, and just before the controller settles, the derivative term takes over.

The derivative term responds to the rate of change of the error over a given interval; in effect, it corrects the output based on how the error has changed since it was last checked. In reality, the three terms do not act one after another as described above but work concurrently, and the magnitude of the error determines which of them influences the controller the most.

All three components of the PID controller produce outputs based on the measured error of the process under regulation. In a properly operating control loop, any change in error caused by a process disturbance or a set-point change is quickly eliminated by the combination of the P, I, and D terms.
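A minimal, discrete-time PID loop in Python, offered as a sketch of how the three terms combine rather than as tuning advice; the gains, time step, and the crude stand-in for the process are all placeholder choices.

class PIDController:
    # Textbook discrete PID: output = Kp*e + Ki*sum(e*dt) + Kd*de/dt

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.previous_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement                          # the error the loop works to remove
        self.integral += error * self.dt                        # integral: accumulated error over time
        derivative = (error - self.previous_error) / self.dt    # derivative: rate of change of the error
        self.previous_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Example: drive a simple integrating process toward a set point of 10.0.
pid = PIDController(kp=0.8, ki=0.3, kd=0.05, dt=0.1)
value = 0.0
for _ in range(100):
    output = pid.update(setpoint=10.0, measurement=value)
    value += output * pid.dt        # crude stand-in for the real process
print(round(value, 2))              # settles close to 10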

Sometimes PID controllers use only the proportional term. However, a proportional-only loop works well only while the error is sizable; once the error becomes small, the controller’s output is too low to make further corrections. As a result, some error remains even when the loop reaches steady state, and this residual steady-state error is called offset. Raising the proportional factor reduces the offset, but setting it too high, relative to the gain of the process, causes the loop to repeatedly overshoot the set point, producing oscillations and making the loop unstable.