Importance of Vibration Analysis in Maintenance

For those engaged in maintenance, the decision to replace or repair a key component must come well before a complete system failure. Vibration analysis is one of the most effective ways to mitigate this risk.

With vibration analysis, it is possible to detect early signs of machine deterioration or failure. This allows timely replacement or repair of machinery before a catastrophic or functional failure can occur.

By the laws of physics, all rotating machinery vibrates. As components deteriorate or approach the end of their serviceable life, their vibration changes character, and some begin to vibrate more strongly.

This is what makes vibration analysis so important in equipment monitoring. Using vibration analysis, it is possible to identify many known failure modes that indicate wear and tear, and to assess the extent of damage before it becomes irreversible and impacts the business or its finances.

Vibration monitoring and analysis can therefore detect machine problems such as process flow issues, electrical faults, loose fasteners, mounts, and bolts, component or machine imbalance, bent shafts, gear defects, impeller problems, deteriorating bearing health, misalignment, and many more.

In industry, vibration analysis helps avoid serious equipment failure. Modern vibration analysis offers a comprehensive snapshot of the health of a specific machine, and modern vibration analyzers can display the complete frequency spectrum of the vibration with respect to time for all three axes simultaneously.
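
To make this concrete, the short Python sketch below shows one common way such a frequency spectrum can be computed from raw tri-axial acceleration samples. The sampling rate, the random stand-in data, and the NumPy-based approach are all assumptions for illustration, not a description of any particular analyzer.

```python
import numpy as np

def vibration_spectrum(samples, sample_rate_hz):
    """Return (frequencies, amplitudes) for one axis of vibration data."""
    n = len(samples)
    window = np.hanning(n)                   # reduce spectral leakage
    spectrum = np.fft.rfft(samples * window)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate_hz)
    amplitudes = 2.0 * np.abs(spectrum) / np.sum(window)
    return freqs, amplitudes

# Hypothetical tri-axial data sampled at 10 kHz: one column per axis.
sample_rate_hz = 10_000
data = np.random.randn(16_384, 3)            # stand-in for measured axial/vertical/horizontal samples
for axis, label in enumerate(("axial", "vertical", "horizontal")):
    freqs, amps = vibration_spectrum(data[:, axis], sample_rate_hz)
    peak_hz = freqs[np.argmax(amps[1:]) + 1]  # skip the DC bin
    print(f"{label}: dominant component near {peak_hz:.1f} Hz")
```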

However, to interpret this information properly, the analyst must understand the basics of vibration analysis, the failure modes of the machine, and the application.

This requires gathering complete information: a full vibration signature along all three axes (axial, vertical, and horizontal), not only for the driven equipment but also for both ends of the driver motor. The dataset must also have enough resolution to reveal every indication of failure.

Busy personnel may be tempted to take a reading on only one axis. This is risky, because the fault may show up on any of the three axes; unless all three are tested, there is a good chance of missing the issue. Comprehensive and careful analysis of the time waveform can also reveal several concerns.

This makes it easier to predict issues and carry out predictive maintenance successfully. Reactive maintenance also remains widespread in industry; it is often called the run-to-failure approach, because in most cases the concern is fixed only after it happens.

To make reactive maintenance as effective as possible in the long run, monitoring and vibration analysis are essential. They help ensure problems are detected at the onset of failure, which makes fixing the issue cheaper, easier, and faster.

The completely opposite approach is predictive maintenance. This involves monitoring the machinery while it is operating, with the purpose of predicting which parts are likely to fail. Here, too, vibration analysis is a clear winner.

What is a Reed Relay?

A reed relay is basically a combination of a reed switch and a coil that creates a magnetic field. Users often add a diode to handle the back EMF from the coil, but this is optional. The entire arrangement is low in cost and simple to manufacture.

The most complex part of the reed relay is the reed switch itself. As the name suggests, the switch has two reed-shaped metal blades made of a ferromagnetic material. A glass envelope encloses the two blades, holding them in place facing each other and providing a hermetic seal that prevents the entry of contaminants. Typically, reed switches have normally open contacts, meaning the two metal blades do not touch when the switch is not energized.

A magnetic field applied along the axis of the reed switch magnetizes the reeds, which then attract each other. The reeds bend to close the gap, and if the applied field is strong enough, the blades touch, forming an electrical contact.

The only movement within the reed switch is the bending of the blades. The switch has no parts that slide past one another and no pivot points, so it is safe to say it has no moving parts that can wear out mechanically. Moreover, an inert gas surrounds the contact area within the hermetically sealed glass tube; for high-voltage switches, a vacuum replaces the inert gas. With the switch area sealed against external contaminants, the reed switch has an exceptionally long working life.

The size of a reed switch is a design variable. Compared with shorter switches, the reeds in longer switches do not need to deflect as far to close a given gap between the blades. To make the reeds in smaller switches bend easily enough, they must be made of thinner material, which affects the switch's current rating. However, small switches allow more miniature reed relays, which are useful in tight spaces. Larger switches, on the other hand, are mechanically more robust, can carry higher currents, and have a greater contact area (and hence lower contact resistance).

A magnetic field of adequate strength is necessary to operate a reed relay. It is possible to operate one by bringing a permanent magnet close to it, but in practice a coil surrounding the reed switch typically generates the magnetic field. A control signal forces a current through the coil, which creates the axial magnetic field needed to close the reed contacts.

Different models of reed switches need different magnetic field strengths to operate and close their contacts. Manufacturers specify this in ampere-turns, or AT, which is the product of the coil current and the number of turns in the coil. As a result, there is a wide variation in the characteristics of available reed relays. Stiffer reed switches and those with larger contact gaps need higher AT levels to operate, so their coils require more power and a higher drive voltage.
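
As a rough, illustrative calculation of what an AT rating implies for the drive side, the sketch below estimates the coil current and power needed for a hypothetical switch; the AT figure, turn count, and coil resistance are assumed values, not taken from any datasheet.

```python
# Illustrative only: a reed switch assumed to need 40 ampere-turns (AT) to close.
operate_at = 40            # ampere-turns required to operate the switch
coil_turns = 5_000         # turns of wire in the surrounding coil
coil_resistance_ohm = 500  # assumed coil resistance

operate_current_a = operate_at / coil_turns                 # I = AT / N
drive_voltage_v = operate_current_a * coil_resistance_ohm   # V = I * R
drive_power_w = drive_voltage_v * operate_current_a         # P = V * I

print(f"Operate current: {operate_current_a * 1e3:.1f} mA")  # 8.0 mA
print(f"Drive voltage:   {drive_voltage_v:.1f} V")           # 4.0 V
print(f"Coil power:      {drive_power_w * 1e3:.0f} mW")      # 32 mW
```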

Astronomical Growth of Machine Vision

Industries are witnessing rapid growth in machine vision. With this technology now a vital component of modern automation solutions, the market for 3-D machine vision is expected to nearly double over the next six years. In the manufacturing context, two major factors contribute to this increase in adoption: the acute labor shortages the industry faces, and the dramatic decrease in hardware costs.

Additionally, as technological performance increases, the industry needs machine vision systems to process ever-expanding amounts of information every second. Moreover, with the advent of machine learning and advanced artificial intelligence algorithms, the data collected from machine vision systems is becoming more valuable. The industry is rightly realizing the power of machine vision.

So, what exactly is machine vision? What makes a robot see? A vision system is typically a combination of many parts, including the camera, lighting sources, lenses, robotic components, a computer for processing, and application-specific software.

The camera forms the eye of the system. There are many types of cameras that the industry uses for machine vision. Each type of camera is specific for a particular application need. Also, an automation solution may have many cameras with different configurations.

For instance, a static camera typically remains in a fixed position where speed is imperative; it might have a bird's-eye view of the scene below it. On the other hand, a robotic arm may carry a dynamic camera at its end to take a closer look at a process, picking out finer details.

One of the important aspects of a vision system is its computing power. This is, in effect, the brain that helps the eye understand what it is seeing. Traditional machine vision systems were rather limited in their computing power. Modern machine vision systems that take advantage of machine learning algorithms require far greater computing resources, and they also depend on software libraries to augment their capabilities.

Machine vision manufacturers design the accompanying software specifically for application users, providing advanced capabilities that let users control the vision system's tasks and gain valuable insights from the visual feedback.

With the industry increasingly using vision for assembly lines, the concept of a vision-guided system replacing basic human capabilities is on the upswing in a wide range of processes and applications.

One of the major applications of machine vision is inspection. As components enter the assembly line, machine vision cameras give them a thorough inspection, looking for cracks, bends, shifts, misalignment, and similar defects which, even if minor, may lead to a quality issue later. The system measures each defect and, if it is larger than a specified size, rejects the component automatically.
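
A minimal sketch of this kind of size-based rejection is shown below, using OpenCV in Python. The threshold values, the file name, and the assumption that defects appear as dark regions against a lighter surface are purely illustrative.

```python
import cv2

MAX_DEFECT_AREA_PX = 50  # reject if any defect exceeds this area (assumed limit)

def inspect(image_path):
    """Return True if the part passes, False if any defect is too large."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Assume defects (cracks, pits) show up darker than the part surface.
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max((cv2.contourArea(c) for c in contours), default=0)
    return largest <= MAX_DEFECT_AREA_PX

if inspect("component.png"):   # hypothetical image captured on the line
    print("PASS: route to next station")
else:
    print("REJECT: defect larger than allowed size")
```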

In addition to mechanical defects, machine vision is capable of detecting color variations. For instance, a color camera can detect discoloration and thereby reject faulty units.

The camera can also read product labels, serial numbers, or barcodes. This allows the identification of specific units that need tracking.

Condition Based Monitoring and MEMS Sensors

Lately, there has been a tremendous improvement in MEMS accelerometer performance, so much so that these sensors can now compete with the piezoelectric vibration sensors that have long been pervasive. MEMS sensors offer several advantages, including smaller size, lower power consumption, lower noise, wider bandwidth, and a higher level of integration. Consequently, the industry is increasingly using MEMS sensors in CbM, or condition-based monitoring, for facilities and maintenance. Engineers find CbM very useful, as it helps in detecting, diagnosing, predicting, and ultimately avoiding faults in their machines.

The smaller size and ultra-low power consumption of MEMS accelerometers allow typically bulky wired piezo sensors to be replaced with wireless solutions. Moreover, it is easy to replace bulky single-axis piezo sensors with small, light, triaxial MEMS accelerometers. The industry finds such replacements cost-effective for continuously monitoring various machines.

The world over, millions of electric motors are in continuous operation, accounting for about 45% of global electricity usage. In one cross-industry survey, more than 80% of companies reported experiencing unplanned maintenance, and more than 70% remained unaware that their assets were due for upgrade or maintenance. With Industry 4.0 and the IoT, the industry is moving towards digitization to improve its productivity and efficiency.

The trend is toward wireless sensor systems. One estimate puts about 5 billion wireless modules in smart manufacturing by 2030. Although the most critical assets require a wired CbM system, far more assets will benefit from wireless CbM solutions.

For the best performance, speed, reliability, and security, it is difficult to surpass a wired CbM system, which is why greenfield sites still deploy them. However, installing wired CbM systems requires routing cables across factory floors, which may be difficult where certain machinery cannot be disturbed. Industrial wired sensor networks typically use around 60 m (200 ft) of cable, which can be substantially expensive depending on the materials and labor involved. Some deployments also require wire harnesses and routing through existing infrastructure, increasing the cost, complexity, and installation time.

Brownfield sites, on the other hand, may not be amenable to wired installations. For them, although wireless systems may initially appear more expensive, other factors can lead to significant cost savings. Initial savings come from less cabling, fewer maintenance routes, and lower hardware requirements, while over the lifetime of the wireless CbM installation, substantial savings can accrue from easier scalability and simpler maintenance routines.

Wireless installations depend on batteries for power. Depending on the level of reporting, batteries may last several years, and deploying wireless systems based on energy-harvesting techniques can make their maintenance even easier and less expensive. However, once a company decides to go wireless, it must choose the technology best suited to its CbM application, and there are quite a few to choose from, such as Bluetooth Low Energy, 6LoWPAN, and Zigbee.
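
As a back-of-the-envelope illustration of why the reporting interval dominates battery life in such nodes, the following sketch uses assumed (not vendor) figures for cell capacity and current draw.

```python
# Rough battery-life estimate for a wireless CbM node (illustrative numbers only).
battery_capacity_mah = 2_600      # a single primary cell, assumed
sleep_current_ma = 0.01           # deep-sleep draw
active_current_ma = 20.0          # sensing plus radio transmission
active_seconds_per_report = 5.0
reports_per_day = 24              # one vibration snapshot per hour

active_s = reports_per_day * active_seconds_per_report
sleep_s = 86_400 - active_s
avg_current_ma = (active_current_ma * active_s + sleep_current_ma * sleep_s) / 86_400
life_days = battery_capacity_mah / (avg_current_ma * 24)

print(f"Average current: {avg_current_ma:.3f} mA")
print(f"Estimated life: about {life_days / 365:.1f} years")
```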

Next-Generation Battery Management

Although there has been significant progress in increasing the range of electric vehicles, charging speed is still a matter of concern. DC fast chargers can charge a battery to 80 percent in about 30 to 45 minutes, whereas filling a gas tank takes only a few minutes. Fast charging has its limitations, as the process generates a significant amount of heat: the high current, together with the internal resistance of the cable and the battery, typically causes a substantial rise in temperature.

EV batteries are typically rated at 400 V, and several factors limit their charging rate, including the cross-sectional area of the charging cable and the temperature of the battery cells. At some fast-charging stations, the temperature rise is high enough to necessitate liquid cooling of the cables. It therefore seems reasonable to expect that increasing the battery voltage will boost the power it delivers.

Porsche, in their Taycan EV, has done just that. Their first production vehicle has a system voltage of 800 V rather than the usual 400 V. This would allow a 350 kW Level 3 ultra-fast DC charging station to potentially charge the vehicle to 80% in as little as 15 minutes. But an EV design with an 800 V system requires new considerations for all its electrical systems, especially those related to managing the battery.
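
The benefit of the higher pack voltage is easy to see with a rough calculation; the charger power is from the example above, while the cable resistance below is an assumed figure for illustration only.

```python
# Same charging power delivered at two pack voltages (illustrative figures).
charge_power_w = 350_000           # 350 kW ultra-fast DC charger
cable_resistance_ohm = 0.01        # assumed round-trip cable and connector resistance

for pack_voltage_v in (400, 800):
    current_a = charge_power_w / pack_voltage_v            # I = P / V
    cable_loss_w = current_a ** 2 * cable_resistance_ohm   # P_loss = I^2 * R
    print(f"{pack_voltage_v} V pack: {current_a:.0f} A, "
          f"cable loss about {cable_loss_w / 1000:.1f} kW")

# 400 V pack: 875 A, cable loss about 7.7 kW
# 800 V pack: 438 A, cable loss about 1.9 kW (half the current, a quarter of the loss)
```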

Switching the vehicle on and off requires the main contactors to electrically connect and disconnect the battery from the traction inverter. On the other side, independent contactors connect and disconnect the battery from the charger buses and the DC link. For DC fast charging, additional DC charge contactors are necessary to establish a connection from the battery to the DC charging station. Auxiliary contactors also connect and disconnect the battery from electrical heaters that optimize the passenger compartment temperature in cold weather.

Moving to a higher battery voltage increases the potential for electrical arcs to form, and these can be damaging. Vehicle architectures operating at 800 V therefore require stricter isolation parameters than those needed for a 400 V architecture, which can increase the cost of the vehicle.

For instance, higher voltage levels require the connector pins to have greater creepage and clearance between them to reduce the risk of arcing. Although connector manufacturers have managed to overcome these issues, the connectors are more expensive than those offered for 400 V systems, raising the total cost.

The maximum battery voltage determines the ratings of the components used in the traction inverter module. For 400 V batteries, there is a wide selection of suitably rated components, but this range shrinks drastically at 800 V, and most higher-voltage components come with a premium price tag. This raises the price of the traction inverter module.

One solution to this problem is to use two 400 V batteries: connected in series while charging, to reduce the charging time, and in parallel while driving.
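
A simple way to see the effect of this reconfiguration is sketched below, with nominal pack figures assumed purely for illustration.

```python
# Two identical 400 V packs, reconfigured between charging and driving (illustrative).
pack_voltage_v = 400       # nominal volts per pack
pack_capacity_ah = 100     # assumed ampere-hours per pack

# Series while charging: voltages add, so charger current halves for a given power.
series_v = 2 * pack_voltage_v        # 800 V
series_ah = pack_capacity_ah         # 100 Ah

# Parallel while driving: capacities add, and 400 V drivetrain components still apply.
parallel_v = pack_voltage_v          # 400 V
parallel_ah = 2 * pack_capacity_ah   # 200 Ah

print(f"Charging (series):  {series_v} V, {series_ah} Ah")
print(f"Driving (parallel): {parallel_v} V, {parallel_ah} Ah")
```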

SiC MOSFETs Enhance Performance and Efficiency

Power applications across industries demand smaller sizes, greater efficiency, and enhanced performance from the electronic equipment they use. These applications include energy storage systems, battery chargers, DC-DC and AC-DC inverters/converters, industrial motor drivers, and many more. In fact, the performance requirements have become so aggressive that they surpass the capabilities of silicon MOSFETs. Enter new transistor architectures based on silicon carbide or SiC.

Although silicon carbide devices do offer significantly enhanced benefits across most critical performance metrics, the first-generation SiC devices had various application uncertainties and limitations. The second-generation devices came with improved specifications. With pressures for time-to-market increasing, manufacturers improved the performance of SiC MOSFETs, and by the third generation of devices, there were vast improvements across key parameters.

While silicon-based MOSFETs significantly enhanced the design of power electronic equipment, the insulated-gate bipolar transistor, or IGBT, also helped. The IGBT is functionally similar, although its construction is vastly different and its switching attributes are optimized differently. These devices led power electronic equipment to adopt switched topologies, becoming far more efficient and compact.

Switched-mode topologies are based on some form of PWM, or pulse-width modulation, and use a closed-loop feedback arrangement to maintain the desired current, voltage, or power. With the increasing use of silicon MOSFETs, the demand for better performance also increased, and regulatory mandates set new efficiency goals.

With considerable R&D effort, an alternative emerged: the SiC power-switching device, which uses silicon carbide as the substrate rather than silicon. Changes deep in the device physics give SiC devices three major electrical advantages over silicon-only products, bringing operational benefits as well as subtle differences in behavior.

The first of these three main characteristics is a higher critical breakdown field. While silicon-based products offer about 0.3 MV/cm, SiC-based products offer about 2.8 MV/cm. As a result, a device with the same voltage rating can be built with a much thinner layer, effectively reducing the drain-to-source on-state resistance.
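
To put rough numbers on this, the sketch below uses a first-order estimate that treats the drift region as supporting a uniform field at the critical value; this is a simplification, and the 1200 V rating chosen is just an assumed example.

```python
# First-order estimate: drift-layer thickness ~ blocking voltage / critical field.
blocking_voltage_v = 1200     # an assumed device rating
e_crit_si = 0.3e6             # V/cm, silicon
e_crit_sic = 2.8e6            # V/cm, silicon carbide

thickness_si_um = blocking_voltage_v / e_crit_si * 1e4    # cm converted to micrometres
thickness_sic_um = blocking_voltage_v / e_crit_sic * 1e4

print(f"Si drift layer:  ~{thickness_si_um:.0f} um")
print(f"SiC drift layer: ~{thickness_sic_um:.1f} um "
      f"(about {thickness_si_um / thickness_sic_um:.0f}x thinner)")
```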

The second main characteristic is higher thermal conductivity. This allows SiC-based devices to handle much higher current densities in the same cross-sectional area than silicon-based devices can.

The final characteristic is a wider bandgap, the energy difference, measured in electron volts, between the bottom of the conduction band and the top of the valence band in insulators and semiconductors. The wider bandgap results in lower leakage current at higher temperatures. For these reasons, the industry also refers to SiC devices as wide-bandgap devices.

In general terms, SiC-based devices can handle voltages roughly ten times higher than Si-only devices can. They can also switch about ten times faster, while offering an on-state drain-to-source resistance of half or less at 25 °C for the same die area as a Si-only device. Moreover, the switching loss at turn-off is significantly lower for SiC devices than for Si-based devices. Additionally, thermal design and management are easier with SiC-based devices, as they can operate at much higher temperatures, up to 200 °C, compared with 125 °C for Si-based devices.

LDOs for Portables and Wearables

As electronic devices get smaller in form factor, they are also becoming more portable and more reliant on battery power. Such devices include security systems, fitness trackers, and Internet of Things (IoT) devices. Designing such tiny devices demands high-efficiency power regulators that can use every milliwatt of each charge to extend the device's working life. The efficiency of traditional linear regulators and switch-mode power regulators falls woefully short of these requirements, and the transient voltages and noise of switch-mode regulators are further detriments to performance.

The most recent addition to the family of switching and linear regulators is the LDO, or low-dropout voltage regulator. It lowers thermal dissipation and improves efficiency by operating with a very low voltage drop across the regulator. Low-to-medium power applications are well served by the various types of LDOs available, which come in packages as small as 3 x 3 x 0.6 mm. There are LDOs with fixed or adjustable output voltages, including versions with on-off control of the output.

A voltage regulator must maintain a constant output voltage even when the source voltage or the load changes. Traditional voltage regulators operate in one of two ways: linear or switched mode. LDO regulators are linear regulators that operate with a very low voltage difference between their input and output terminals. As with other linear voltage regulators, LDOs function with feedback control.

This feedback control of the LDO functions via a resistive voltage divider that scales the output voltage. The scaled voltage enters an error amplifier, which compares it to a reference voltage. The error amplifier's output drives the series pass element so as to maintain the desired voltage at the output terminal. The dropout voltage of the LDO is the difference between the input and output voltages, and this appears across the series pass element.

The series pass element of an LDO functions like a resistor whose value varies with the voltage applied by the error amplifier. LDO manufacturers use various devices for the series pass element: a PMOS device, an NMOS device, or a PNP bipolar transistor. While it is possible to drive the PMOS and PNP devices into saturation, the dropout voltage of PMOS-type FET devices depends on their drain-to-source on-resistance. Although each of these devices has its own advantages and disadvantages, a PMOS series pass element has the lowest implementation cost. For instance, Diodes Incorporated offers positive LDO regulators with PMOS pass devices featuring dropout voltages of about 300 mV at an output voltage of 3.3 V and a load current of 1 A.
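
Combining the feedback relationship described above with example figures, the sketch below shows how the divider sets the output voltage and how the input-output difference translates into heat in the pass element. The reference voltage, divider resistors, and input voltage are assumed values for illustration.

```python
# Output set by the feedback divider, plus dissipation in the series pass element.
v_ref = 0.8                                # volts, assumed internal reference
r_top_ohm, r_bottom_ohm = 31_250, 10_000   # assumed divider resistors

v_out = v_ref * (1 + r_top_ohm / r_bottom_ohm)   # adjustable-LDO relationship -> 3.3 V
v_in = 3.7                                       # volts, assumed battery voltage
i_load_a = 1.0                                   # amperes

headroom_v = v_in - v_out              # must stay above the ~300 mV dropout spec
p_pass_w = (v_in - v_out) * i_load_a   # heat dissipated in the pass element
efficiency = v_out / v_in

print(f"V_out = {v_out:.2f} V, headroom = {headroom_v * 1e3:.0f} mV")
print(f"Pass-element dissipation = {p_pass_w:.2f} W, efficiency about {efficiency:.0%}")
```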

The LDO requires an output capacitor, and the capacitor's inherent ESR, or equivalent series resistance, affects the stability of the circuit. The capacitor used must therefore have an ESR of 10 ohms or lower to guarantee stability over the entire operating temperature range. Typically, these are multilayer ceramic, solid-state E-CAP, or tantalum capacitors, with values of 2.2 µF or more.

What are MOSFET Relays?

For certain applications, especially high-power switching, traditional electromagnetic relays are still a popular choice. However, with the advent of solid-state relays, particularly MOSFET relays, this is shifting for a growing range of applications. In addition, with the IoT growing exponentially and 5G networks pushing the trend towards shrinking form factors, engineers are forced to fit more powerful devices with higher functionality into smaller spaces. That means they must also find better ways of improving power efficiency through improved switching speeds.

Modern IT infrastructure, such as switching power supplies and DC-DC converters, presents engineers with specific design challenges. MOSFET relays help to address these challenges as their characteristics are superbly suited to several key applications.

Although the name includes the word relay, MOSFET relays are actually electronic circuits rather than electromechanical relays, and they feature an input side and an output side. The input side comprises an LED, or light-emitting diode, facing a PDA, or photodiode array. The output side comprises a FET, or field-effect transistor, block, with a control circuit bridging the two.

To activate a MOSFET relay, a current must flow through its input LED and turn it on. The PDA converts the light from the LED into a voltage, which the control circuit uses to drive the output block. This turns on the pair of MOSFETs in the output block, allowing them to pass either AC or DC loads bidirectionally.

Unlike electromagnetic relays, MOSFET relays have no moving parts, so they can withstand vibration and physical shock without suffering damage or malfunction. Ideally, a MOSFET relay will perform indefinitely, operate silently, and cause very little electrical interference, provided it is used properly.

While MOSFET relays can handle a wide range of input voltages, they consume very little power and do not arc during operation, which makes these solid-state relays eminently suitable for hazardous environments. While enabling the switching of both AC and DC signals, solid-state relays minimize surge currents. A physical comparison with electromagnetic relays shows MOSFET relays to be considerably smaller, occupying less space on printed circuit boards and consuming very little power.

Certain characteristics of MOSFET relays have advantageous implications in electronic applications. They offer low output capacitance, which improves switching times and gives better isolation of load signals at high frequencies. The LED at the input provides optical isolation between the input and output circuits, offering good physical, or galvanic, isolation. The on-resistance of the MOSFETs is low, allowing fast switching and low power dissipation when switching high currents. Being solid state, MOSFET relays have no hysteresis when switching between the on-state and the off-state, and their high linearity ensures there is no signal distortion when switching. MOSFET relays are therefore equally suitable for analog and low-level signal switching.

These characteristics make MOSFET relays ideally suited to a wide range of applications, including energy-related equipment, telecommunications, factory automation, amusement equipment, security equipment, medical equipment, automated test equipment, and much more.

Thermal Interface Materials for Electronics

As the name suggests, TIMs, or thermal interface materials, are materials that the electronics industry typically applies between two mating surfaces to help conduct heat from one surface to the other. TIMs are a great help in thermal management, especially when removing heat from a semiconductor device to a heat sink. By acting as a filler between the two mating surfaces, TIMs improve the efficiency of the thermal management system.

Various types of materials can act as TIMs, and there are important factors that designers must consider when selecting a specific material for a particular application.

Every conductor has its own resistance, which impedes the flow of electrical current through it. Impressing a voltage across a conductor sets the free electrons inside it in motion, and as they move they collide with other atomic particles within the conductor, generating thermal energy, or heat.
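
As a simple, illustrative instance of this Joule heating (the current and resistance below are assumed figures):

```python
# Joule heating in a conductor: P = I^2 * R (assumed figures).
current_a = 2.0          # amperes flowing through the conductor
resistance_ohm = 0.5     # conductor resistance in ohms

power_w = current_a ** 2 * resistance_ohm
print(f"Dissipated as heat: {power_w:.1f} W")   # 2.0 W that the design must remove
```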

In electronic circuits, active devices and processing units such as CPUs, TPUs, GPUs, and light-emitting diodes (LEDs) generate copious amounts of heat when operating. Passive devices such as resistors and transformers also release substantial amounts of thermal energy. Rising temperatures in components can lead to thermal runaway, ultimately causing their failure or destruction.

Therefore, it is desirable to keep electronic components cool when operating, thereby ensuring better performance and reliability. This calls for thermal management to maintain the temperature of the device within its specified limits.

It is possible to use both passive and active cooling techniques for electronic components. It is typical for passive cooling methods to use natural conduction, convection, or radiation techniques for cooling down electronic devices. Active cooling methods, on the other hand, typically require the use of external energy for cooling down components or electronic devices.

Although active cooling can be more effective in comparison to passive cooling, it is more expensive to deploy. Using TIMs is an intermediate method to enhance the efficiency of passive cooling techniques, but without excessive expense.

Although the mating surfaces of a component and its heat sink may appear flat, in reality they are not. They typically have tool marks and other imperfections such as pits and scratches, which prevent the two surfaces from forming close physical contact and leave air filling the gaps between them. Air, being a poor conductor of heat, introduces a high thermal resistance between the interfacing surfaces.

Being soft materials, TIMs fill most of these gaps and expel the air from between the mating surfaces. In addition, TIMs conduct heat far better than air, typically by a factor of about 100, so their use considerably improves the thermal management system. Many industrial and consumer electronic systems therefore use TIMs widely to ensure efficient heat dissipation and prevent electronic components from getting too hot.
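
A rough, illustrative comparison of the interface's thermal resistance with and without a TIM is sketched below; the gap size, contact area, conductivities, and heat load are assumed figures, and real interfaces are only partially air-filled, so actual numbers will differ.

```python
# Thermal resistance of the interface gap: R = thickness / (k * area).
gap_thickness_m = 50e-6      # 50 micrometre effective gap, assumed
contact_area_m2 = 0.0016     # 40 mm x 40 mm package footprint, assumed
k_air = 0.026                # W/(m*K), thermal conductivity of air
k_tim = 3.0                  # W/(m*K), a typical thermal grease, assumed

r_air = gap_thickness_m / (k_air * contact_area_m2)
r_tim = gap_thickness_m / (k_tim * contact_area_m2)

power_w = 30                 # heat flowing from device to heat sink, assumed
print(f"Air-filled gap: {r_air:.2f} K/W -> about {power_w * r_air:.0f} K extra rise")
print(f"TIM-filled gap: {r_tim:.3f} K/W -> about {power_w * r_tim:.1f} K extra rise")
```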

The electronics industry uses TIMs in different forms, including thermal tapes, greases, gels, thermal adhesives, dielectric pads, and phase-change materials (PCMs). The industry also uses more advanced materials such as pyrolytic graphite, which is thermally anisotropic.

Generating Power from Space

At the beginning of this year, the SSPP, or Space Solar Power Project, of the California Institute of Technology launched a prototype SSPD, or Space Solar Power Demonstrator, into orbit. The project has an ambitious plan of gathering solar power in space. The SSPD prototype will not only test several vital components but also beam the energy it collects back to Earth.

Outer space has a practically unlimited supply of solar energy. This energy is constantly available, never subject to cloud cover, and is unaffected by seasons and cycles of day and night. Therefore, space solar power is a tremendous step towards harnessing limitless amounts of clean and free energy.

The launch is a major milestone for the project. When fully realized, the project will have several spacecraft flying as a constellation to collect sunlight, transform it into electricity, and transmit it wirelessly to Earth. It will provide electricity wherever necessary, including places that do not have access to reliable power.

A SpaceX rocket launched the 50-kg SSPD into space on a Transporter-6 mission. The demonstrator carries three main experiments, each handling a vital technology of the project.

The first experiment is DOLCE, the deployable on-orbit ultralight composite experiment. Measuring 6 x 6 feet, this structure demonstrates the packaging scheme, architecture, and deployment mechanism of the modular spacecraft that the scientists eventually plan to build up into the kilometer-scale constellation of the power station.

The next experiment is ALBA, a collection of 32 photovoltaic cells of various types. It allows the scientists to assess the performance of each type of photovoltaic cell in the extremely hostile environment of space.

The final experiment is MAPLE, a microwave array for power transfer in low orbit. It consists of an array of lightweight, flexible microwave power transmitters. Using precise timing control, it can selectively focus its power onto either of two different receivers. This experiment will demonstrate wireless power transmission over a distance in space.

The SSPD carries an additional, fourth element: a box of electronics that interfaces the prototype with the Vigoride computer and provides control for the three experiments.

The ALBA or photovoltaic cell experiment will require up to six months of testing before it can generate new insights into the most suitable photovoltaic technology for space power applications. MAPLE constitutes a series of experiments, starting from verification of the initial functionality to an evaluation of the system performance under extreme environments over time.

DOLCE has two cameras on deployable booms and more cameras on the electronics system. They will monitor the progress of the experiments and provide a feedback stream to Earth. The SSPD team expects to have a complete assessment of the experiments' performance within a few months.

In the meantime, the team still has to deal with numerous challenges, because nothing about conducting an experiment in space can be guaranteed.