
Industrial Safety Devices

The market is overwhelmed with various safety devices, making the design of machine safety a daunting task. Selecting an optimum safety device often depends upon understanding its proper use for a specific design.

Industrial automated equipment takes many forms. It can be as simple as a single pneumatic cylinder or as complex as many automation components working together. Irrespective of the complexity of the system, it is imperative to consider the safety of the maintenance staff, integrator, and operator. Safety systems range from very simple to highly complex, and their complexity generally grows with that of the automated system they protect. The availability of so many types of safety systems on the market makes the choice rather difficult.

As automated equipment has been around for so long, documentation on standard design principles, best practices, and guidelines for safety systems is plentiful. These documents are a great resource when designing safety systems, and consulting them is necessary to ensure that the equipment is safe.

SIL, or Safety Integrity Level, is a measure of the reliability of equipment expressed in terms of its probability of failure. Safety-rated equipment will typically have a published SIL number. This number is not a performance rating in itself, but rather a guideline to the type of system with which the device can be used. For instance, if a system requires SIL 3, then all devices within the safety system must also carry a SIL 3 rating. There are four SIL levels, with level 4 being the highest, meaning it has the lowest probability of failure.
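
As a rough illustration of how SIL levels relate to failure probability, the sketch below maps an average probability of failure on demand (PFDavg) to a SIL band using the low-demand bands commonly quoted from IEC 61508. The function name and the handling of out-of-band values are illustrative assumptions, not part of any standard or library.

```python
# Hypothetical helper: map PFDavg to a SIL band (IEC 61508 low-demand bands).
def sil_from_pfd(pfd_avg: float) -> int:
    """Return the SIL level (1-4) for a given PFDavg, or 0 if outside the bands."""
    if 1e-5 <= pfd_avg < 1e-4:
        return 4  # lowest probability of failure
    if 1e-4 <= pfd_avg < 1e-3:
        return 3
    if 1e-3 <= pfd_avg < 1e-2:
        return 2
    if 1e-2 <= pfd_avg < 1e-1:
        return 1
    return 0  # better than SIL 4 or worse than SIL 1

print(sil_from_pfd(5e-4))  # -> 3
```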

The E-Stop, or Emergency Stop button, is the most common safety device and is typically the first one added to a system. The push button usually provides one normally open contact for monitoring and two normally closed contacts for de-energizing. The button is colored bright red and carries a yellow label. Its basic purpose is to stop all sources of motion or other hazards by de-energizing power within the system. For pneumatic equipment, engaging the E-Stop vents the stored pressure to the atmosphere; for motion devices, it turns off the STO (Safe Torque Off) signals so the drives can no longer produce torque.

While there are many ways to use an E-Stop button, the most common is to pair it with a safety relay. Typically, the safety relay's two monitoring circuits pass through the dual contacts on the E-Stop button. Pressing the button opens the contacts, breaking the redundant safety circuit and triggering the safety relay, which then opens its output contacts.

A safety controller or safety PLC with dedicated safety inputs can also monitor the state of the E-Stop button. The PLC program should open its safety output contacts whenever the emergency stop button is pressed. For simple systems, passing the STO signals or the control voltage for contactors directly through the E-Stop contacts may suffice.
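
A minimal sketch of the dual-channel monitoring idea described above, assuming a generic controller: both E-Stop contacts must be closed for the outputs to stay energized, and a disagreement between the channels indicates a wiring or contact fault. The function and signal names are illustrative, not taken from any PLC vendor library.

```python
# Both channels healthy -> allow motion; any open channel -> de-energize outputs.
def safety_output_enabled(channel_a_closed: bool, channel_b_closed: bool) -> bool:
    return channel_a_closed and channel_b_closed

# A discrepancy between the two channels usually points to a wiring or contact
# fault and should latch a fault that requires a manual reset.
def channel_discrepancy(channel_a_closed: bool, channel_b_closed: bool) -> bool:
    return channel_a_closed != channel_b_closed

print(safety_output_enabled(True, True))    # True: run permitted
print(safety_output_enabled(False, False))  # False: E-Stop pressed
print(channel_discrepancy(True, False))     # True: fault, do not auto-restart
```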

The inexpensive E-Stop button is a simple way to stop and de-energize hazards within a system, and it integrates easily into both simple and complex safety systems.

Position and Distance Sensors for Accurate Tracking

Many engineering applications, such as security systems, feedback control, and robotics, rely on position and distance sensors so that machines can operate safely and accurately. These sensors provide vital real-time information about the position and displacement of an object. An object's position is its set of coordinates relative to a known reference; its displacement is its movement from one location to another through a determined angle and distance.

The history of position sensors begins with the potentiometer, which Johann Christian Poggendorff introduced in 1841. It measures the position of a movable contact on a resistive track through the change in resistance. Later, magnetoresistive sensors, which measure the change in a material's resistance caused by a magnetic field, took over much of the field of position sensing.

Soon, new position sensors appeared. These were based on solid-state electronics and included LVDTs or Linear Variable Differential Transformers. Later came digital position sensors, which offered high-resolution measurements suitable for integrating into computer systems. The latest trend is to miniaturize position and distance sensors.

Position and distance sensors provide real-time information by detecting changes in physical properties like magnetic field, inductance, capacitance, and displacement. There are various position and distance sensors, including level, thickness, radar, capacitive, gravitational, and potentiometric sensors.

Thickness and level sensors measure the thickness and level of powders, solids, and liquids. The primary technologies they employ are optical, laser, and ultrasonic. A thickness sensor, for instance, calculates the thickness of a material from the distance measured between the object and the sensor within a referenced space. In the same way, level sensors measure the height of liquid in a container to produce a level reading. Industries using thickness and level sensors include manufacturing, pharmaceuticals, chemicals, and food processing.
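
To make the distance-to-thickness idea concrete, here is a minimal sketch assuming a sensor that has already been referenced against an empty surface; the variable names and numbers are made-up example values, not from any particular product.

```python
# Thickness inferred from two distance readings against a known reference.
reference_distance_mm = 250.0   # sensor to bare conveyor/reference surface
measured_distance_mm = 238.4    # sensor to the top of the material

thickness_mm = reference_distance_mm - measured_distance_mm
print(f"Material thickness: {thickness_mm:.1f} mm")   # -> 11.6 mm
```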

RADAR, or Radio Detection and Ranging, locates objects using radio waves. A radar system detects objects by transmitting radio waves at specific frequencies and listening for echoes of the signals as they bounce back from objects. By analyzing the timing and frequency of the returned signal, radar can determine an object's size, speed, and location.
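
The two basic radar calculations mentioned above, range from round-trip time and radial speed from Doppler shift, are sketched below with made-up numbers; the frequencies and delays are assumptions for the example only.

```python
# Range from echo delay, and radial speed from the Doppler shift of the echo.
C = 3.0e8                      # speed of light, m/s

round_trip_time_s = 2.0e-6     # echo delay
range_m = C * round_trip_time_s / 2           # divide by 2: out and back
print(f"Range: {range_m:.0f} m")              # -> 300 m

tx_freq_hz = 24.0e9            # transmitted frequency
doppler_shift_hz = 1600.0      # measured shift of the returned signal
speed_m_s = doppler_shift_hz * C / (2 * tx_freq_hz)
print(f"Radial speed: {speed_m_s:.1f} m/s")   # -> 10.0 m/s
```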

Capacitive sensors measure distance through the change in capacitance caused by the proximity of an object to the sensor. The basic sensor has two conductive plates separated by a dielectric material. As an object moves closer to one of the plates, the capacitance of the sensor changes, and the measuring circuit converts this change into a signal that indicates the distance of the object from the plate.
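
In the simplest parallel-plate view, capacitance and gap are related by C = ε₀εᵣA/d, so a measured capacitance can be inverted to a distance. The plate area, permittivity, and capacitance below are assumptions chosen only to illustrate the arithmetic.

```python
# Parallel-plate approximation: invert a capacitance reading to a gap distance.
EPS0 = 8.854e-12        # vacuum permittivity, F/m
eps_r = 1.0             # air gap
plate_area_m2 = 1.0e-4  # 1 cm^2 sensing plate

measured_capacitance_f = 8.854e-13          # 0.8854 pF
gap_m = EPS0 * eps_r * plate_area_m2 / measured_capacitance_f
print(f"Gap: {gap_m*1e3:.2f} mm")           # -> 1.00 mm
```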

The great advantage of capacitive sensors is their accuracy and resolution; they can, for instance, resolve displacements in the nanometer range. They also tolerate many harsh environments. However, interference from humidity and stray electromagnetic fields can affect their performance.

Potentiometric sensors typically measure linear or angular displacement using a resistive element. The basic construction consists of a thin resistive wire or film wound around an insulating element such as a ceramic rod. A metallic wiper moves along the resistive element, and as it does, the resistance between the wiper and one end of the resistive track changes. An electronic circuit converts this resistance into a voltage output, indicating the displacement of the object attached to the wiper.
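
Viewed as a voltage divider, the wiper voltage is simply proportional to its position along the track. The supply voltage, track length, and reading below are assumed example values, not from any datasheet.

```python
# Voltage-divider view of a potentiometric displacement sensor.
supply_v = 10.0          # excitation across the full resistive track
track_length_mm = 100.0  # electrical travel of the track

wiper_v = 3.7                                     # measured wiper voltage
displacement_mm = (wiper_v / supply_v) * track_length_mm
print(f"Displacement: {displacement_mm:.1f} mm")  # -> 37.0 mm
```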

Skin Effect in Conductors

The distribution of alternating current is non-uniform in real conductors of finite dimensions with rectangular or circular cross-sections. This is because the alternating current induces eddy currents within the conductor, in accordance with Faraday's law of induction, leading to current crowding.

Being time-varying, alternating currents produce a non-uniform distribution across the cross-sectional area of a conductor, so the conductor presents a higher resistance at high frequencies. As an approximation, we can assume the current flows uniformly in a layer one skin depth thick, just below the surface. This phenomenon is known as the skin effect. However, this is only a simplified picture; the actual distribution of current is much more nuanced, even within an isolated conductor.

For instance, what is the current distribution within a cylindrical conductor whose diameter is 2.5 times the skin depth at the frequency of interest? Answering that requires a closer look at the physics of the skin effect and the way skin depth is typically derived.

The skin effect arises from a basic electromagnetic situation: the propagation of electromagnetic waves inside a good conductor. Textbooks typically examine the propagation of a plane wave within a conducting half-space.

Euclidean space is three-dimensional, consisting of length, breadth, and height. A plane divides this space into two parts, each of which is a half-space, so a line connecting a point in one half-space to a point in the other must intersect the dividing plane. In the textbook treatment, the plane wave propagates from the dividing plane into the conducting half-space.

Now, plane waves consist of magnetic and electric fields that are perpendicular to the direction of propagation and to each other. That is why these waves are also known as transverse electromagnetic, or TEM, waves. Moreover, within a plane wave, all points on planes perpendicular to the direction of propagation experience the same electric and magnetic fields.

For instance, if the electric field (E) points in the z-direction and the wave propagates in the y-direction, the magnetic field (H) will point in the x-direction. Under the plane-wave assumption, the electric and magnetic fields remain constant along the x and z directions and change only as a function of y, the depth into the conductor.

Moreover, for a good conductor, the electric field and the current density are related through the conductivity of the conductor (J = σE). Using these relations, we can calculate the current density and the skin depth by solving Maxwell's equations.
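
As a compact reference, here is a minimal sketch of those textbook relations, assuming a good conductor with conductivity σ, permeability μ, and angular frequency ω = 2πf:

```latex
\[
\mathbf{J} = \sigma \mathbf{E}, \qquad
\delta = \sqrt{\frac{2}{\omega\mu\sigma}} = \frac{1}{\sqrt{\pi f \mu \sigma}}, \qquad
J(y) = J_s\, e^{-y/\delta}\, e^{-j y/\delta}
\]
```

Here J_s is the current density at the surface and y is the depth into the conductor; the second exponential describes the phase lag of the current at greater depths.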

Solving Maxwell's equations shows that the amplitude of the current density decays exponentially from its value at the surface of the conductor, falling to 1/e of that value at one skin depth. It also gives an initial idea of how the current density changes, at any instant in time, as we go deeper into the conductor.

The solution also relates the skin depth to the wavelength within the conductor. The attenuation constant and phase constant of a good conductor are both equal to the reciprocal of the skin depth, so a single wavelength within the conductor is about 2π, roughly six, times the skin depth. This also means the current density attenuates to a negligible level over a distance of one wavelength.
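
A quick numeric illustration of these relations, assuming textbook values for copper at 1 MHz; treat the numbers as approximate, since conductivity varies with purity and temperature.

```python
# Skin depth, in-conductor wavelength, and attenuation over one wavelength.
import math

sigma = 5.8e7            # conductivity of copper, S/m
mu = 4 * math.pi * 1e-7  # permeability (copper is essentially non-magnetic), H/m
f = 1.0e6                # frequency, Hz

delta = 1 / math.sqrt(math.pi * f * mu * sigma)   # skin depth
wavelength = 2 * math.pi * delta                  # wavelength inside the conductor
attenuation = math.exp(-2 * math.pi)              # amplitude left after one wavelength

print(f"Skin depth: {delta*1e6:.1f} um")                     # ~66 um
print(f"Wavelength in copper: {wavelength*1e6:.0f} um")      # ~415 um
print(f"Amplitude after one wavelength: {attenuation:.4f}")  # ~0.0019
```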

Coin-Sized MEMS Rocket Thruster

Conventional thrusters, with their engines, control systems, and hydrazine rocket fuel stored in tanks, make for a physically rather large setup. Newer designs under development use ion and electric propulsion. Although these newer thrusters are physically smaller, they are still too large for Nano and Pico satellites, which weigh between 1 and 10 kg and between 0.1 and 1 kg, respectively.

These small satellites require miniature thrusters: rocket engines with combustion chambers about 1 mm in size that use only electricity and ice to create thrust. Manufacturing such tiny, coin-sized thrusters requires MEMS fabrication techniques.

Miniaturization of electronics is making orbital launch capacity more accessible, and small satellites are a fast-growing category. But along with the electronics, many other things need to shrink too.

For small satellites, thrusters and other equipment for stabilization must also proportionally shrink in size. Although satellites for special purposes are getting smaller, some key components, especially thrusters, have not kept pace with the downsizing.

Enter Imperial College London, where a team has designed a new micro thruster meant especially for Nano and Pico satellite applications. The ESA, or European Space Agency, which tested the new thrusters, has dubbed them ICE-Cubes, or Iridium Catalyzed Electrolysis CubeSat Thrusters. ICE-Cube thrusters use electrolysis to separate water into oxygen and hydrogen.

The thruster then recombines the two gases in a combustion chamber less than 1 mm long, whose miniature size requires a MEMS fabrication process. In laboratory tests, the thruster delivered 1.25 millinewtons of thrust with a specific impulse of 185 seconds.

Although a fast-growing category of space vehicles, Nano satellites are a relatively new breed. While 2012 saw only 25 launches, a decade later there were 334 launches in 2022, and by 2023 that number had nearly doubled.

Being tiny, Nano satellites have little room to spare, which means conventional tankage carrying corrosive and toxic propellants such as hydrazine is no longer practical. Propulsion is available at a smaller scale, typically using compressed air, ions, or steam, but these options are neither energy-efficient nor long-lived. The highest energy efficiency comes from burning oxygen and hydrogen in a combustion system.

Nano satellites typically store their propellant as water ice, because it is safer and less expensive than holding it in liquid or gaseous form. The electrolysis process requires only 20 watts, which storage batteries or solar cells can easily supply, so the satellites effectively convert solar energy into thrust using ice.

The college's Imperial Plasma Propulsion Laboratory fabricates these devices in-house using its own MEMS process. It shapes the device in a refractory metal with a reactive ion etching technique, then sputter-deposits an iridium layer that acts as an ignition catalyst while simultaneously creating a protective oxidation layer for the walls of the device.

The college laboratory has developed two types of micro thrusters—the ICE-200 producing a design thrust of 1-2 N, and the ICE-Cube, generating a thrust of 5 mN.

Difference Between Protection and Control Relays

Today's fast-paced industrial environment requires protection systems to keep people and processes safe. There are many types of electronic protection relays, and they differ from standard control relays. Relays are an essential element of industrial control engineering; it is impossible to conceive of machine control without them. Traditionally, almost all relays offer simple ON/OFF operation. However, relay technology has advanced, and protection relays can now offer much more than the customary features.

For instance, protective relays can measure specific process variables such as voltage and current and switch their output based on the measured values. Control relays, on the other hand, do not monitor anything; they simply change their output contact state when they detect an electrical signal at their input terminal.

Protective relays must be specially wired to integrate into a machine. Control relays, being simple, do not require complex wiring.

In an industrial scenario, the main objective of protective relays is to prevent electrical and electronic systems from being exposed to hazards. Control relays can only switch, changing their output from one state to another; they do not inherently protect against hazards.

Some protective relays can indicate the value of the variables they measure, such as current or voltage. Control relays cannot; they only signal a change of state.

Many types of protective relays are available on the market. Their selection depends on the nature of the hazard from which the protection is desired.

For instance, an earth fault relay offers protection in a system where current is flowing through an earth terminal. Current passing through an earth terminal represents a significant fault in the wiring and can damage not only the wiring but also the connected electronic and electrical components. The earth fault relay recognizes the current flowing through the earth terminal and protects the system from this hazard.

When the earth fault relay detects current flow through the earth terminal, it trips the connected circuit and sounds an alarm to protect the system from the hazard. To detect the current, it uses a current transformer connected to the earth terminal.

For instance, moisture can short the phase terminal to the earth terminal, causing current to flow from phase to earth and creating an earth fault. On detecting such a current, the earth fault relay typically trips the circuit breaker, cutting off the supply to the connected circuit, and simultaneously indicates the fault with an alarm flag. Medium- and high-voltage applications such as substations, transformers, and distribution panels typically use earth fault relays.
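
A simplified sketch of the trip decision described above, assuming the relay compares the current measured by the CT on the earth terminal against a pickup threshold and waits a short delay to ride through transients. The thresholds, delay, and names are illustrative assumptions only.

```python
# Toy earth-fault decision: pickup threshold plus a short definite-time delay.
PICKUP_A = 0.5        # earth current above this is considered a fault
TRIP_DELAY_S = 0.2    # time the current must stay above pickup before tripping

def should_trip(earth_current_a: float, time_above_pickup_s: float) -> bool:
    return earth_current_a > PICKUP_A and time_above_pickup_s >= TRIP_DELAY_S

print(should_trip(2.3, 0.25))   # True  -> open the breaker, raise the alarm flag
print(should_trip(0.1, 1.0))    # False -> healthy circuit
```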

Voltage relays protect the system against voltage fluctuations. The two major types are undervoltage and overvoltage relays. Overvoltage relays protect the connected system from a supply voltage greater than a set level, while undervoltage relays protect it from voltage drops below a set level.

This protection is necessary because supply voltages below or above the specified level can damage electronic and electrical components. Voltage relays are available for both AC and DC systems, with voltage ratings ranging from 5 to 500 V.
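
The under/overvoltage idea amounts to checking that the measured voltage stays inside an allowed window. The limits below are arbitrary example values for a nominal 230 V AC supply, not settings from any particular relay.

```python
# Toy undervoltage/overvoltage window check.
UNDER_V = 207.0   # trip below this (about -10 % of 230 V)
OVER_V = 253.0    # trip above this (about +10 % of 230 V)

def voltage_fault(measured_v):
    if measured_v < UNDER_V:
        return "undervoltage"
    if measured_v > OVER_V:
        return "overvoltage"
    return None               # within the allowed window

print(voltage_fault(195.0))   # undervoltage
print(voltage_fault(230.0))   # None
```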

Sensing Current Using Optical Fibers

There does not seem to be any relation between an optical fiber carrying light and a wire through which an electric current flows. Yet as far back as 1845, Michael Faraday demonstrated that the magnetic field generated by current flowing through a wire influences the plane of polarization of light waves.

Optical fibers are best known for their usefulness in data links. These links span everything from intra-board and short-reach connections to inter-chassis links and routes covering thousands of kilometers. Moreover, being immune to RFI/EMI and other external electrical interference, optical fibers are a good fit for data links in high-interference environments. At first glance, that claim seems to contradict Faraday's earlier observation.

In practice, a special arrangement and the right circumstances are what make optical fibers immune to external electromagnetic influence. Meanwhile, engineers and scientists are taking advantage of the Faraday effect: passing light through a magnetic field rotates its optical polarization state. An electric current can induce this magnetic field, and a large current generates a strong field that changes the polarization significantly.

The Verdet constant is the proportionality constant relating the strength of the magnetic field to the angle of rotation. Although mixing optics and electromagnetics is not easy, scientists use the Faraday effect to measure the current in a current-carrying wire by wrapping it with an optical fiber. One of the advantages of this implementation is the high-voltage galvanic isolation obtained, which is very important in power-related applications.
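
A back-of-the-envelope sketch of that relation: for a fiber wound N turns around a conductor, Ampère's law gives a rotation of roughly θ = V·μ₀·N·I when the Verdet constant V is expressed in rad/(T·m). The Verdet value below is only a rough order-of-magnitude figure for silica fiber in the near infrared, and the turn count and current are assumptions for the example.

```python
# Estimate the Faraday polarization rotation for a fiber coil around a conductor.
import math

MU0 = 4 * math.pi * 1e-7     # vacuum permeability, H/m
verdet_rad_per_t_m = 0.5     # assumed Verdet constant of the fiber
turns = 10                   # number of fiber loops around the conductor
current_a = 1000.0           # current in the conductor

theta_rad = verdet_rad_per_t_m * MU0 * turns * current_a
print(f"Polarization rotation: {math.degrees(theta_rad):.3f} deg")  # ~0.36 deg
```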

However, there are other details to take care of when sensing current using the Faraday effect. For instance, thermal fluctuations or minor vibrations can affect the polarization state in the fiber. It is therefore necessary to isolate the fiber from these effects while allowing it to remain sensitive to the magnetic field that induces the polarization rotation.

Scientists have developed a solution to this problem: a type of fiber different from the conventional ones used for data links. This special fiber is a spun highly birefringent, or SHB, fiber. On a microscale the SHB fiber maintains its polarization properties, while on a macroscale it offers a net-zero birefringence.

To make such a fiber, the manufacturer spins the glass during drawing to create a constant rotation of the polarization axis, twisting the fiber once every few millimeters. This allows the fiber to maintain circular polarization despite mechanical stresses while still remaining sensitive to the Faraday effect.

A careful balance of the fiber's spin pitch overcomes the stress induced by bending during coiling while still allowing the fiber to maintain its sensitivity to the Faraday effect. As a result, scientists can use the spun fiber in longer lengths and with smaller coil diameters, resulting in higher sensitivity.

Of course, this one subtle but complex step involving the optical fiber is not enough to build a current sensor. The input laser beam must have a stable circular polarization before it enters the fiber, which requires polarization-control methods.

Current-Sense Resistors Tradeoffs

Using a resistor for sensing current should be a simple affair. After all, one has only to apply Ohm's law, I = V/R: measure the voltage drop across a resistor to find the current flowing through it. However, things are not quite that simple. The thorn in the flesh is the choice of resistor value.

Using a large resistor value has the advantage of offering a large reading magnitude, greater resolution, higher precision, and an improved SNR, or signal-to-noise ratio. However, the larger value also wastes power, since W = I²R. It may also affect loop stability, as the larger value adds more series resistance between the load and the power source. Additionally, the resistor's self-heating increases.

Would a lower resistor value be better? Not necessarily: it offers a lower SNR, poorer precision and resolution, and a smaller reading magnitude. The solution lies in a tradeoff.

Experimenting with various resistor values to sense different ranges of currents, engineers have concluded that a resistor producing a voltage drop of about 100 mV at the highest current is a good compromise. This should, however, be treated only as a starting point; the best value for the current-sense resistor depends on the priorities for sensing current in the specific application.
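
The starting-point rule translates into a one-line calculation: size the resistor for roughly 100 mV at full-scale current, then check the worst-case dissipation. The full-scale current below is an assumed example value.

```python
# Size a sense resistor for ~100 mV full-scale drop and check its dissipation.
i_max_a = 20.0          # assumed full-scale current
target_drop_v = 0.100   # ~100 mV at full scale

r_sense_ohm = target_drop_v / i_max_a
p_max_w = i_max_a ** 2 * r_sense_ohm    # same as target_drop_v * i_max_a

print(f"Sense resistor: {r_sense_ohm*1e3:.1f} mOhm")  # -> 5.0 mOhm
print(f"Dissipation at full scale: {p_max_w:.1f} W")  # -> 2.0 W
```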

The voltage, or IR, drop is only one of two related problems, the second being a consequence of the chosen resistor value: resistive self-heating. This is a potential concern, especially when a large current flows through the resistor. Considering the equation W = I²R, even for a milliohm-range resistor the dissipation may reach several watts when the current is several amperes.

Why should self-heating be a concern? Because self-heating shifts the nominal value of the sense resistor, and this corrupts the current reading.

Therefore, unless the designer is measuring microamperes or milliamperes, where self-heating can be neglected, they need to analyze how the resistance changes with temperature. To do this, they consult the TCR, or temperature coefficient of resistance, data typically available from the resistor's vendor.

The above analysis is usually an iterative process, because the resistance change affects the current flow, which in turn affects the self-heating, which affects the resistance, and so on.
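
A minimal sketch of that iteration, assuming made-up values for the TCR, the part's thermal resistance, and the sensed current; real numbers would come from the vendor's datasheet and the board's thermal design.

```python
# Iterate self-heating and TCR until the resistance value settles.
R0 = 0.005       # nominal resistance at 25 C, ohm
TCR = 100e-6     # 100 ppm/C, assumed datasheet value
THETA = 20.0     # assumed thermal resistance of the part, C/W
T_AMB = 25.0     # ambient temperature, C
I = 20.0         # sensed current, A

r = R0
for _ in range(20):                        # a handful of iterations converges
    power = I ** 2 * r                     # self-heating
    temp = T_AMB + power * THETA           # resistor temperature
    r_new = R0 * (1 + TCR * (temp - 25.0)) # resistance at that temperature
    if abs(r_new - r) < 1e-9:
        break
    r = r_new

print(f"Settled resistance: {r*1e3:.4f} mOhm at {temp:.1f} C")
```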

Therefore, current-sensing accuracy depends on three considerations: the initial resistor value and tolerance, the TCR error due to ambient-temperature change, and the TCR error due to self-heating. To reduce the burden of these iterative calculations, vendors offer resistors with very low TCR.

These are precision, specialized metal-foil resistors. Manufacturers make them from alloys of copper, manganese, and other elements, using special production techniques to manage and minimize TCR. To reduce self-heating and improve thermal dissipation, some manufacturers add more copper to the mix.

Instrumentation applications demand the utmost measurement precision. For these, manufacturers offer very-low-TCR resistors along with fully characterized curves of resistance versus temperature. The shape of the curve depends on the alloy mix and is typically parabolic.

What are Industrial Network Switches?

The industry requires network switches to interconnect automation equipment, controllers, and other such devices for transmitting and receiving data on computer networks. Many models of network switches are available on the market, and the industry uses them for both wired and wireless connections. The switches allow multiple devices to access the network with minimal delays or data collisions.

While the industry uses switches to interconnect automatic equipment, controllers, and other such devices, the office environment uses network switches to interconnect computers, scanners, servers, printers, cameras, and more. There are several types of network switches, the most common of them being unmanaged switches, managed smart switches, and PoE switches.

Unmanaged switches are the simplest type and are primarily used in offices. Although they have the fewest user-configurable options and are the least secure, they are also the cheapest. The greatest advantage of most of these switches is their plug-and-play nature, which allows them to quickly interconnect most devices in an office without assistance from a specialist.

To properly implement a network switch, all devices that connect to it must have unique IP addresses on the same subnet so that they can communicate with each other. A network switch therefore differs from a gateway, which allows a device on one network to reach a device on a separate network.
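
A quick way to sanity-check the same-subnet requirement is with Python's standard ipaddress module; the addresses and mask below are example values only.

```python
# Check whether device addresses fall inside the switch's subnet.
import ipaddress

subnet = ipaddress.ip_network("192.168.1.0/24")
plc = ipaddress.ip_address("192.168.1.10")
hmi = ipaddress.ip_address("192.168.1.20")
camera = ipaddress.ip_address("192.168.2.5")

print(plc in subnet, hmi in subnet)   # True True  -> can talk through the switch
print(camera in subnet)               # False      -> needs a gateway/router
```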

Although more expensive, managed switches tend to be more secure and have more advanced features. The number of user-configurable options depends on the switch's level of sophistication. For instance, many of them allow the creation of a VLAN, or virtual local area network, that can link several LANs (local area networks). A VLAN is more effective than one large LAN formed by combining numerous existing LANs, so a switch capable of managing VLANs is of significantly greater advantage in larger facilities.

One disadvantage of managed smart switches is that they are configured through a CLI, or command-line interface. Most office users do not possess the requisite skills, so they need help from IT specialists. For larger facilities with IT specialists, managed smart switches offer greater flexibility, speed, and security than unmanaged switches. Industrial facilities employ OT, or operational technology, specialists to build and diagnose network connections between control and automation devices.

Some network switches offer PoE, or Power over Ethernet, which carries low-voltage power through the Ethernet cable to power connected devices. Although the power delivered through these PoE connections is limited to 90 W for the highest (IEEE 802.3bt Type 4) equipment class, it offers a significant advantage over running extra cables in some cases. In robotics and automation, for instance, more cables mean more troubleshooting and more equipment to secure against the robot's range of motion.

Some network switches are stackable, allowing multiple switches to combine into a single large switch capable of handling more devices. For a few extra devices, a stackable switch may be the better option for now; where significant future expansion is planned, investing in a smart switch and multiple LANs makes more sense.

What is an In-Memory Processor?

According to a university press release, the world's first large-scale in-memory processor is now available, promising a far higher level of energy efficiency when processing data. Researchers at LANES, the Laboratory of Nanoscale Electronics and Structures at EPFL (Ecole Polytechnique Fédérale de Lausanne) in Switzerland, have developed the new processor.

The latest information technology systems produce copious amounts of heat. Engineers and scientists are looking for more energy-efficient ways of computing to lower heat production and thereby help reduce carbon emissions as the world aims to go greener. In trying to reduce the unwanted heat, they are going to the root of the problem: the von Neumann architecture of the processor.

In a contemporary computing architecture, the information processing center is kept separate from the storage area. Therefore, the system spends much of its energy shuttling information between the processor and the memory. This made sense in 1945, when John von Neumann first described the architecture; at the time, processing devices and memory storage were intentionally kept separate.

Because of this physical separation, the processor must first retrieve data from memory before it can perform computations. That movement of electric charge, with capacitors repeatedly charging and discharging and currents in transit, dissipates energy in the form of heat.

At EPFL, researchers have developed an in-memory processor that performs a dual role: processing and data storage. Rather than silicon, the researchers used another semiconductor, MoS2, or molybdenum disulphide.

According to the researchers, MoS2 can form a stable monolayer only three atoms thick that interacts only weakly with its surroundings. They made their first transistor from a monolayer simply peeled off with Scotch tape, and this thin structure allowed them to design an extremely compact, two-dimensional device.

However, a processor requires many transistors to function. The research team at LANES successfully designed a large-scale array of 1,024 elements on a chip measuring 1 x 1 cm. Within the chip, each element serves as both a transistor and a floating gate that stores a charge, which controls the conductivity of the transistor.

The crucial achievement was the set of processes the team used to create the processor. Over more than a decade, the team has perfected the fabrication of entire wafers covered with uniform layers of MoS2. This allowed them to design integrated circuits using industry-standard computer tools and then translate those designs into physical circuits, opening the way to mass production of the in-memory processor.

With electronics fabrication in Europe needing a boost, the researchers want to leverage their innovative architecture as a base. Instead of competing in silicon-wafer fabrication, they envisage their work as a ground-breaking effort toward non-von Neumann architectures in future applications. They look forward to using their highly efficient in-memory processor for data-intensive applications, such as those related to Artificial Intelligence.

What are Artificial Muscles?

In the animal world, muscles are the basis of all movement. On commands from the brain, electrical pulses contract or relax muscles, and this is how we move our body parts. Now, researchers have created new types of actuators based on structures of multiple soft materials. Like regular actuators, these convert electrical energy into force or motion, with the advantage of being lightweight, quiet in operation, and biodegradable. In the early stages of development, continuous electrical stimulation could achieve only short-term contraction of the actuators. New research, however, has led to a system that not only allows longer-term contraction but also enables accurate force measurements. These new actuators are the basis for artificial muscles.

With their ability to transform electrical energy into force or motion, the new actuators can serve an important role in everyday life. Because these soft-material-based actuators offer multiple functions, they have been attracting plenty of attention in the scientific community.

According to the researchers, making a soft actuator is rather simple. They use multi-material structures in the form of pockets made from flexible plastic films, fill the pockets with oils, and cover them with conductive plastics. Electrically activating the film causes the pocket to contract, similar to what happens in a biological muscle.

Using this technique, the researchers were able to create robotic muscles, tactile surfaces, and changeable optics. So far, using continual electrical stimulation has resulted in only short-term contractions, and this was a considerable practical barrier.

The researchers have published their findings in Nature Electronics. Researcher Ion-Dan Sirbu, at the Johannes Kepler University in Linz, along with an Austrian research group, developed a system enabling accurate measurement of force in the new actuators.

While researching combinations of common materials, the team also experimented with the plastic films they were using for work on artificial muscles. They realized that a specific combination of materials could sustain a constant force for arbitrarily long periods.

The team then constructed a theoretical model of the material for studying its characteristics in depth. They realized their simple model could accurately describe their experimental results. They claim their results with the simple but powerful tool will help in designing and investigating newer systems.

Their study has not only made this technology more functional, it also makes it possible to identify material combinations that reduce energy consumption by a factor of thousands. With these material combinations, the researchers and other scientists have successfully investigated and developed various types of artificial muscles, tactile displays, and variable-gradient optics.

The study has deepened our grasp of the basic workings of soft actuators. These advancements hold promise for significant strides in assistive devices, mobile robots, and automated machines, offering valuable contributions to marine, terrestrial, and space explorations. This is particularly crucial, given the ongoing quest in these sectors for cost-effective, high-performance solutions that prioritize low power consumption and sustainable environmental impact.