Category Archives: Guides

Industrial Safety Devices

The market is flooded with safety devices, which can make the design of machine safety a daunting task. Selecting the right safety device often comes down to understanding its proper use in a specific design.

Industrial automated equipment ranges from something as simple as a single pneumatic cylinder to many automation components working together. Irrespective of the complexity of the system, it is imperative to consider the safety of the maintenance staff, integrator, and operator. Safety systems can likewise be simple or complex, with their complexity generally increasing with that of the automated system they protect. The sheer variety of safety systems on the market makes the choice rather difficult.

As automated equipment has been around for so long, documentation on standard design principles, best practices, and guidelines for safety systems is available in plenty. When designing safety systems, these documents are a great resource, and consulting them is necessary to ensure that the equipment is safe.

SIL, or Safety Integrity Level, is a measure of the reliability of equipment, expressed in terms of its probability of failure. Safety-rated equipment will usually have a published SIL number. This number is less a standalone rating than a guide to the type of system with which the device can be used. For instance, if the overall safety function is required to meet SIL 3, then every device within the safety system must also be rated for SIL 3. There are four SIL levels, and level 4 is the highest, meaning it has the lowest probability of failure.
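
For reference, IEC 61508 ties each SIL to a band of average probability of dangerous failure on demand (PFDavg) for low-demand safety functions. The sketch below is a minimal illustration of that mapping, assuming the commonly cited band limits; verify against the standard for any real design.

```python
# Illustrative only: commonly cited IEC 61508 PFDavg bands for low-demand
# safety functions. Verify against the standard for any real design.
SIL_PFD_BANDS = {
    1: (1e-2, 1e-1),   # SIL 1: 10^-2 <= PFDavg < 10^-1
    2: (1e-3, 1e-2),   # SIL 2: 10^-3 <= PFDavg < 10^-2
    3: (1e-4, 1e-3),   # SIL 3: 10^-4 <= PFDavg < 10^-3
    4: (1e-5, 1e-4),   # SIL 4: 10^-5 <= PFDavg < 10^-4 (lowest failure probability)
}

def sil_for_pfd(pfd_avg: float) -> int | None:
    """Return the SIL level whose band contains the given PFDavg, if any."""
    for sil, (low, high) in SIL_PFD_BANDS.items():
        if low <= pfd_avg < high:
            return sil
    return None

print(sil_for_pfd(5e-4))  # -> 3
```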

The E-Stop, or Emergency-Stop button, is the most common safety device, and it is usually the first one added to a system. Typically, the push button comes with one normally open contact for monitoring and two normally closed contacts for de-energizing. The button is bright red against a yellow background. Its basic purpose is to stop all sources of motion or hazards by de-energizing power within the system. For pneumatic equipment, engaging the E-Stop vents the stored pressure to the atmosphere and removes the STO (safe torque off) enable signals from any motion devices.

While there are many ways of using an E-Stop button, the most common is to couple it with a safety relay. Typically, the two monitoring circuits of the safety relay pass through the dual contacts on the E-Stop button. Pressing the button opens the contacts, breaking the redundant safety circuit and triggering the safety relay, which then opens its output contacts.

A safety controller or safety PLC with dedicated safety inputs can also monitor the state of the E-Stop button. The PLC program should open its safety outputs whenever the emergency stop button is pressed. For simple systems, passing the STO signals or the control voltage for the contactors through the E-Stop contacts should suffice.
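
As a rough illustration of the dual-channel monitoring described above, the sketch below models two redundant E-Stop contacts: outputs stay enabled only while both channels are closed, and a disagreement between the channels is treated as a wiring or contact fault. This is a simplified, hypothetical model, not safety-rated code.

```python
# Simplified, hypothetical model of dual-channel E-Stop monitoring.
# Real systems use certified safety relays or safety PLCs, not ordinary code.

def evaluate_estop(channel_a_closed: bool, channel_b_closed: bool) -> dict:
    """Return the demanded state of the safety outputs for one scan."""
    if channel_a_closed and channel_b_closed:
        # Both NC contacts intact: allow motion, keep STO de-asserted.
        return {"outputs_enabled": True, "fault": False}
    if not channel_a_closed and not channel_b_closed:
        # Button pressed: both channels open together, drop all outputs.
        return {"outputs_enabled": False, "fault": False}
    # Channels disagree: wiring fault or welded contact, latch a fault.
    return {"outputs_enabled": False, "fault": True}

print(evaluate_estop(True, True))    # normal running
print(evaluate_estop(False, False))  # E-Stop pressed
print(evaluate_estop(True, False))   # channel discrepancy -> fault
```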

The inexpensive E-Stop button is a simple way to stop and de-energize hazards within a system, and it is easy to integrate into either simple or complex safety systems.

Skin Effect in Conductors

Alternating current does not distribute itself uniformly in real conductors of finite dimensions with rectangular or circular cross-sections. This is because, following Faraday's law of induction, the alternating current creates eddy currents in the conductor, leading to current crowding.

AC currents, being time-varying, produce non-uniform distributions across the cross-sectional area of a conductor. The conductor therefore presents a higher resistance at high frequencies, and as an approximation we can assume the current flows uniformly in a layer one skin depth thick, just below the surface. This phenomenon is known as the skin effect. However, this is only a simplified picture; the actual distribution of current is much more nuanced, even within an isolated conductor.

For instance, what is the current distribution within a cylindrical conductor with a diameter 2.5 times greater than the skin depth at the frequency of interest? For the answer, it may be necessary to look closely at the physics of skin effect, and the way skin depth is typically derived.

The skin effect arises from a basic electromagnetic situation: the propagation of electromagnetic waves inside a good conductor. Textbooks typically examine the propagation of a plane wave within a conducting half-space.

Euclidean space is three-dimensional, consisting of length, breadth, and height. A plane divides this space into two parts, each part being a half-space; any line connecting a point in one half-space to a point in the other will intersect the dividing plane. In the textbook setup, a plane wave propagates from the dividing plane into the conducting half-space.

Now, plane waves consist of magnetic and electric fields that are perpendicular to the direction of propagation and to each other. That is why these waves are also known as transverse electromagnetic, or TEM, waves. Moreover, within a plane wave, all points on a plane perpendicular to the direction of propagation experience the same electric and magnetic fields.

For instance, taking the electric field (E) in the z-direction and the wave propagating in the y-direction, the magnetic field (H) will be in the x-direction. Assuming plane-wave propagation, the electric and magnetic fields are then constant along the x and z directions and change only as a function of y.

Moreover, for a good conductor, the current density is related to the electric field through the conductivity of the conductor. Using these relations, we can calculate the current density, and the skin depth, by solving Maxwell's equations.
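
The standard result of that derivation is the skin depth formula δ = 1/√(πfμσ). The short sketch below evaluates it for copper at a few frequencies, assuming the usual textbook values σ ≈ 5.8 × 10⁷ S/m and μ ≈ μ₀.

```python
import math

MU_0 = 4 * math.pi * 1e-7        # permeability of free space, H/m
SIGMA_COPPER = 5.8e7             # conductivity of copper, S/m (approximate)

def skin_depth(freq_hz: float, sigma: float = SIGMA_COPPER, mu: float = MU_0) -> float:
    """Skin depth in metres for a good conductor: delta = 1/sqrt(pi*f*mu*sigma)."""
    return 1.0 / math.sqrt(math.pi * freq_hz * mu * sigma)

for f in (50, 1e6, 1e9):
    print(f"{f:>12.0f} Hz : {skin_depth(f)*1e6:10.2f} um")
# Roughly 9.3 mm at 50 Hz, 66 um at 1 MHz, and 2.1 um at 1 GHz for copper.
```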

Maxwell's equations tell us that the amplitude of the current density falls to 1/e, about 37%, of its surface value at a depth of one skin depth below the surface of the conductor. They also give an idea of how the current density changes, at any instant in time, as we go deeper into the conductor.

These relations also connect the skin depth to the wavelength within the conductor. The attenuation constant and phase constant of a good conductor are each equal to the reciprocal of the skin depth. It is easy to see that a single wavelength within the conductor is therefore about 2π, or roughly 6.3, times the skin depth. This also means the current density attenuates to a fraction of a percent of its surface value at a distance of one wavelength.
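
In symbols, using the standard good-conductor approximation:

```latex
\alpha = \beta = \frac{1}{\delta}, \qquad
\lambda = \frac{2\pi}{\beta} = 2\pi\delta \approx 6.28\,\delta, \qquad
\left|\frac{J(\lambda)}{J(0)}\right| = e^{-\alpha\lambda} = e^{-2\pi} \approx 0.0019 ,
```

so at a depth of one wavelength the current density is down to about 0.2% of its value at the surface.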

Coin-Sized MEMS Rocket Thruster

Conventional thrusters, along with their engine and control systems, typically require hydrazine rocket fuel stored in tanks, making the entire setup physically rather large. Newer designs under development use ion and other electric propulsion systems. Although these newer thrusters are physically smaller, they are still too large for Nano and Pico satellites, which weigh between 1 and 10 kg, and between 0.1 and 1 kg, respectively.

These small satellites require miniature thrusters: rocket engines with combustion chambers about 1 mm in size that use only electricity and ice to create thrust. Manufacturing such tiny, coin-sized thrusters requires MEMS fabrication techniques.

Miniaturization of electronics is leading to increased accessibility of orbital launch capacity. In addition, small satellites are experiencing fast growth. But, along with electronics, many other things need to shrink too.

For small satellites, thrusters and other equipment for stabilization must also proportionally shrink in size. Although satellites for special purposes are getting smaller, some key components, especially thrusters, have not kept pace with the downsizing.

Enter Imperial College London, where a team has designed a new micro thruster meant especially for Nano and Pico satellite applications. The ESA, or European Space Agency, which tested the new thrusters, has dubbed them ICE-Cubes, or Iridium Catalyzed Electrolysis CubeSat Thrusters. ICE-Cube thrusters use electrolysis to separate water into oxygen and hydrogen.

The thruster then recombines the two gases in a combustion chamber less than 1 mm long, whose miniature size requires a MEMS fabrication process. In laboratory tests, the thruster delivered 1.25 millinewtons of thrust at a specific impulse of 185 seconds.
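
Taking 185 s as the specific impulse, a quick back-of-the-envelope calculation gives the effective exhaust velocity and the propellant mass flow implied by those figures; treat the numbers as illustrative only.

```python
# Back-of-the-envelope propellant figures for the reported performance.
# Assumes the 185 s figure is the specific impulse; illustrative only.
G0 = 9.80665            # standard gravity, m/s^2
thrust_n = 1.25e-3      # 1.25 mN of thrust
isp_s = 185.0           # specific impulse, s

v_exhaust = isp_s * G0              # effective exhaust velocity, ~1814 m/s
mass_flow = thrust_n / v_exhaust    # propellant mass flow, kg/s

print(f"exhaust velocity ~ {v_exhaust:.0f} m/s")
print(f"mass flow ~ {mass_flow*1e6:.2f} mg/s")   # ~0.69 mg/s of water
```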

Although a fast-growing category of space vehicles, Nano satellites are a relatively new breed. While 2012 saw only 25 launches, a decade later there were 334 launches in 2022, and by 2023 that number had nearly doubled.

Being tiny, Nano satellites have little room to spare. That means conventional tankage carrying corrosive and toxic propellants, such as hydrazine, is no longer practical. Smaller-scale forms of propulsion do exist, typically using compressed air, ions, or steam, but these are neither energy-efficient nor long-lived. The highest energy efficiency comes from combusting oxygen and hydrogen.

Nano satellites typically store their propellant as water ice, because it is safer and less expensive than holding it in liquid or gaseous form. The electrolysis process requires only 20 watts, which storage batteries or solar cells can easily supply. In effect, the satellites convert solar energy into thrust using ice.

The Imperial Plasma Propulsion Laboratory at the college fabricates these devices in-house, using its own MEMS process. The team creates the shape of the device with a reactive ion etching technique using a refractory metal. They then sputter-deposit an iridium layer, which acts as the ignition catalyst while simultaneously creating a protective oxidation layer for the walls of the device.

The college laboratory has developed two types of micro thrusters—the ICE-200 producing a design thrust of 1-2 N, and the ICE-Cube, generating a thrust of 5 mN.

Sensing Current Using Optical Fibers

At first glance, there seems to be no relation between an optical fiber carrying light and a wire carrying an electric current. Yet as far back as 1845, Michael Faraday demonstrated that a magnetic field, such as the one generated by current flowing through a wire, influences the plane of polarization of light waves.

Optical fibers are best known for their usefulness in data links. These links span everything from intra-board and other short distances to inter-chassis connections and routes covering thousands of kilometers. Moreover, being immune to RFI/EMI and other external electrical interference, optical fibers are a good fit for carrying data in high-interference environments. At first glance, though, this claim seems to go against Faraday's earlier observation.

As it is, a special arrangement and the right circumstances are necessary to make optical fibers immune to external electromagnetic influence. Meanwhile, engineers and scientists are taking advantage of the Faraday effect—passing light through a magnetic field rotates its optical polarization state. An electric current can induce this magnetic field. A large electric current can generate a strong magnetic field, and this can change the polarization significantly.

The Verdet constant is the proportionality constant relating the strength of the magnetic field to the angle of rotation. Although mixing optics and electromagnetics is not easy, engineers use the Faraday effect to measure the current in a wire by winding an optical fiber around it. One of the advantages of this implementation is the very high galvanic isolation obtained, which is important in high-voltage power applications.
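
The relation itself is simple: the rotation angle equals the Verdet constant times the line integral of the magnetic field along the light path, and for a fiber wound N turns around a conductor, Ampère's law gives that integral as μ₀NI. The sketch below evaluates it, assuming an illustrative Verdet constant of roughly 0.5 rad/(T·m) for silica in the near-infrared.

```python
import math

MU_0 = 4 * math.pi * 1e-7      # vacuum permeability, H/m
VERDET_SILICA = 0.5            # rad/(T*m), rough illustrative value for silica (near-IR)

def faraday_rotation(current_a: float, turns: int, verdet: float = VERDET_SILICA) -> float:
    """Polarization rotation (radians) for a fiber wound `turns` times around a conductor.

    theta = V * integral(B . dl) = V * mu0 * N * I   (Ampere's law around the conductor)
    """
    return verdet * MU_0 * turns * current_a

theta = faraday_rotation(current_a=1000.0, turns=10)
print(f"{theta:.2e} rad = {math.degrees(theta):.3f} deg")   # ~6.3e-3 rad, ~0.36 deg
```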

However, there are other details to take care of, when sensing current using the Faraday effect. For instance, thermal fluctuations or minor vibrations can affect the polarization state in the fiber. Therefore, it is necessary to isolate the fiber from these effects, at the same time allowing it to remain sensitive to the magnetic field inducing the polarization.

Scientists have developed a solution to this problem. They use a type of fiber different from the conventional ones used in data links: an advanced optical fiber called SHB, or spun high-birefringence fiber. Although on a microscale the SHB fiber has the structure of a polarization-maintaining fiber, on a macroscale it presents a net-zero birefringence.

To make such a fiber, the manufacturer spins the glass as the fiber is drawn, creating a constant rotation of the polarization axis, with the fiber twisting once every few millimeters. This allows the fiber to maintain circular polarization despite mechanical stresses on it, while remaining sensitive to the Faraday effect.

A careful balance of the fiber's spin pitch overcomes the stress induced by bending during the coiling process, while still allowing the fiber to maintain its sensitivity to the Faraday effect. As a result, scientists can use the spun fiber in longer lengths and with smaller coil diameters, resulting in higher sensitivity.

Of course, this one subtle fiber-optic step is not enough to build a current sensor. The input laser beam must have a stable circular polarization before it enters the fiber, which requires polarization-control methods.

What are Industrial Network Switches?

The industry requires network switches to interconnect automation equipment, controllers, and other such devices for transmitting and receiving data on computer networks. Many models of network switches are available on the market, and the industry uses them for both wired and wireless connections. The switches allow multiple devices to access the network with minimal delays or data collisions.

While the industry uses switches to interconnect automatic equipment, controllers, and other such devices, the office environment uses network switches to interconnect computers, scanners, servers, printers, cameras, and more. There are several types of network switches, the most common of them being unmanaged switches, managed smart switches, and PoE switches.

Unmanaged switches are the simplest type and are primarily used in offices. Although they have the fewest user-configurable options and are the least secure, they are also the cheapest option. The biggest attraction of most of these switches is their plug-and-play operation, which allows them to quickly interconnect most devices in the office without requiring assistance from a specialist.

For a network switch to work properly, all devices connecting to it must have unique IP addresses on the same subnet, and they must be able to communicate with one another. A network switch therefore differs from a gateway, which allows a device on one network to communicate with a device on a separate network.
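
The same-subnet requirement is easy to check programmatically. The sketch below uses Python's standard ipaddress module with a few hypothetical device addresses on an assumed /24 subnet.

```python
import ipaddress

# Hypothetical addresses on an assumed /24 subnet.
subnet = ipaddress.ip_network("192.168.10.0/24")
devices = {
    "plc": ipaddress.ip_address("192.168.10.21"),
    "hmi": ipaddress.ip_address("192.168.10.42"),
    "offsite": ipaddress.ip_address("192.168.20.5"),
}

# A switch alone only serves devices on the same subnet; anything else needs routing.
for name, addr in devices.items():
    print(name, addr in subnet)   # plc True, hmi True, offsite False
```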

Although more expensive, managed switches tend to be more secure and have more advanced features. The number of user-configurable options depends on the switch's level of sophistication. For instance, many of them allow the creation of a VLAN, or virtual local area network, which can link several LANs (local area networks). A VLAN is more effective than one large LAN built by combining numerous existing LANs, so a switch capable of managing VLANs is of significantly greater advantage in larger facilities.

One of the disadvantages of managed smart switches is that they are typically managed through a CLI, or command-line interface. Most office users do not possess the requisite skills for managing these devices, necessitating help from IT specialists. For larger facilities with IT specialists, managed smart switches offer greater flexibility, speed, and security than unmanaged switches. Industrial facilities similarly employ OT, or operational technology, specialists to build and diagnose network connections between control and automation devices.

Some network switches offer PoE, or Power over Ethernet, carrying low voltages through the Ethernet cable to power connected devices. Although the power delivered through these PoE connections is limited to 90 W for the highest-power class of equipment, it offers a significant advantage over running extra cables in some cases. For instance, in robotics and automation, more cables mean more troubleshooting and more hardware to secure clear of the robot's range of motion.

Some network switches are stackable, allowing multiple switches to be combined into a single larger switch that can handle more devices. For a few extra devices, a stackable switch may be the better option for now, but if future expansion is planned, it may be worth investing in a smart switch and multiple LANs.

What is an In-Memory Processor?

According to a university press release, the world's first in-memory processor is now available. This large-scale processor promises a much higher level of energy efficiency when processing data. Researchers at LANES, the Laboratory of Nanoscale Electronics and Structures, at EPFL (École Polytechnique Fédérale de Lausanne) in Switzerland have developed the new processor.

The latest information technology systems produce copious amounts of heat. Engineers and scientists are looking for more efficient ways of using energy to lower the production of heat, thereby helping to reduce carbon emissions as the world aims to go greener. In trying to reduce this unwanted heat, they are going to the root of the problem: the von Neumann architecture of the processor.

In a contemporary computing architecture, the information processing center is kept separated from the storage area. Therefore, the system spends much of its energy in shuttling information between the processor and the memory. This made sense in 1945, when John von Neumann first described the architecture. At the time, processing devices and memory storage were intentionally kept separate.

Because of the physical separation, the processor must first retrieve data from the memory before it can perform computations. This involves moving electric charges, repeatedly charging and discharging capacitors, and driving transient currents, all of which dissipates energy in the form of heat.

At EPFL, researchers have developed an in-memory processor, which performs a dual role—that of processing and data storage. Rather than using silicon, the researchers have used another semiconductor—MoS2 or molybdenum disulphide.

According to the researchers, MoS2 can form a stable monolayer, only three atoms thick, that interacts only weakly with its surroundings. They created a single transistor from such a monolayer, peeled off simply using Scotch tape, and could design an extremely compact 2D device using this thin structure.

However, a processor requires many transistors to function properly. The research team at LANES successfully designed a large-scale circuit consisting of 1024 elements, all within a chip measuring 1 × 1 cm. Within the chip, each element serves as both a transistor and a floating gate that stores a charge, which in turn controls the conductivity of the transistor.
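
The general appeal of such an array is that a grid of elements with programmable conductance can perform a vector-matrix multiply "in place": applying input voltages to the rows and summing the currents on the columns carries out the multiply-accumulate without moving the stored data. The sketch below illustrates that principle numerically; it is a generic model of analog in-memory computing, not a description of the LANES chip's internal operation.

```python
import numpy as np

# Generic illustration of analog in-memory multiply-accumulate.
# G holds the stored "weights" as conductances (values chosen arbitrarily);
# v is the vector of input voltages applied to the rows.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 4))   # conductance matrix (arbitrary scale)
v = np.array([0.2, 0.0, 0.5, 0.1])       # input voltages (volts)

# Ohm's law per element and Kirchhoff's current law per column give the
# column currents as a matrix-vector product, computed where the data lives.
i_columns = G.T @ v
print(i_columns)
```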

The crucial achievement was the process the team used for creating the processor. For over a decade, the team has perfected its ability to fabricate entire wafers covered with uniform layers of MoS2. This allowed them to design integrated circuits on computers using industry-standard tools and then translate those designs into physical circuits, leading to mass production of the in-memory processor.

With electronics fabrication in Europe needing a boost for revival, the researchers want to leverage their innovative architecture as a base. Instead of competing in fabrication of silicon wafers, the researchers envisage their research as a ground-breaking effort for using non-von Neumann architecture in future applications. They look forward to using their highly efficient in-memory processor for data-intensive applications, such as those related to Artificial Intelligence.

What are Artificial Muscles?

In the animal world, muscles are the basis of all movement. With commands from the brain, electrical pulses contract or release muscles, and this is how we can move our body parts. Now, researchers have created new types of actuators based on structures of multiple soft materials. Like regular actuators, these also convert electrical energy into force or motion. The advantage of these new actuators is they are lightweight, quiet in operation, and biodegradable. During the early stages of development, continuous electrical stimulation could only achieve short-term contraction of the actuators. However, new research has led to the development of a system that not only allows for longer-term contraction of the actuators but also enables accurate force measurements. These new actuators are the basis for artificial muscles.

With their ability to transform electrical energy into force or motion, the new actuators are coming to serve an important role in everyday life. These are soft-material-based actuators, and because of their versatility, they have been attracting plenty of attention in the scientific community.

According to the researchers, making a soft actuator is rather simple. They use multi-material structures, in the form of pockets made of flexible films of plastic. They fill the pockets with oils and cover them with conductive plastics. Electrically activating the film results in the pocket contracting, similar to what happens in a biological muscle.

Using this technique, the researchers were able to create robotic muscles, tactile surfaces, and changeable optics. So far, using continual electrical stimulation has resulted in only short-term contractions, and this was a considerable practical barrier.

The researchers have published their findings in Nature Electronics. Researcher Ion-Dan Sirbu, at the Johannes Kepler University in Linz, along with an Austrian research group, developed a system enabling accurate measurement of force in the new actuators.

During their research on combining common materials, the researchers also experimented with the plastic films they were using for work on artificial muscles. They realized that a specific combination of materials was able to sustain a constant force for arbitrarily long periods.

The team then constructed a theoretical model of the material for studying its characteristics in depth. They realized their simple model could accurately describe their experimental results. They claim their results with the simple but powerful tool will help in designing and investigating newer systems.

Their study has not only made this technology more functional, it also enables identifying material combinations that reduce energy consumption by factors of thousands. With these material combinations, the researchers and other scientists have successfully investigated and developed various types of artificial muscles, tactile displays, and variable-gradient optics.

The study has deepened our grasp of the basic workings of soft actuators. These advancements hold promise for significant strides in assistive devices, mobile robots, and automated machines, offering valuable contributions to marine, terrestrial, and space explorations. This is particularly crucial, given the ongoing quest in these sectors for cost-effective, high-performance solutions that prioritize low power consumption and sustainable environmental impact.

What is Magnetic Levitation?

Many systems, such as flywheels, Maglev trains, and other high-speed machinery, already use magnetic levitation. The Brookhaven National Laboratory pioneered this technology in the late 1960s. Maglev trains use magnetic levitation, with superconducting magnets keeping a train car suspended above a U-shaped concrete guideway. Like regular magnets, superconducting magnets repel one another when like poles face each other. Systematically energizing propulsion loops in the system creates moving magnetic fields that pull the train car forward from the front and push it forward from the rear. As the moving train car floats in a sea of interacting magnetic fields, the trip is very smooth and very fast, reaching up to 375 miles per hour (ca. 604 km/h).

Now, recent research at the Technical University of Denmark has given this old technology a new twist. The team has shown it is possible to levitate a magnet simply by rotating another, similarly sized magnet near it. Hamdi Ucar, an electronics and software engineer, first demonstrated this unusual effect in 2021. The team at TU Denmark is exploring the effect for contactless object handling and for trapping and manipulating microplastics made of ferromagnetic materials.

Magnetic levitation can be of three types. The first of these is active magnetic stabilization. Here, a control system supplies the magnetic force that keeps the levitating object under balanced conditions. The second type is used by Maglev trains and is known as electrodynamic suspension. In this case, a moving magnet induces a current in a stationary conductor, which then produces a repulsive magnetic force. This force increases with the speed of the moving magnet. The third type is the spin-stabilized levitation. Here, a levitating magnet spins at about 500 RPM or revolutions per minute. Gyroscopic effect keeps the magnet stable.

The TU Denmark type of levitation is a variation of the third type. It involves two magnets: a rotor and a floater. The rotor magnet is mounted on a motor, with its magnetic poles oriented perpendicular to its rotational axis, and the motor spins it at about 10,000 RPM. The TU Denmark team used a spherical neodymium-iron-boron magnet, 19 mm in diameter.

The floater magnet, placed under the rotor, automatically begins to spin with the rotor and moves upwards to hover in space a few centimeters below it. The floater precesses at the same frequency as the rotor, with its magnetization oriented close to the rotation axis and towards the like pole of the rotor. When disturbed, the interacting magnetic fields force it back to its equilibrium position.

The team ran computer simulations taking into account the magnetostatic interactions between the two magnets. They found that the new type of levitation is caused by a combination of magnetic dipole-dipole coupling and the gyroscopic effect. They explained it as the magnetostatic force of one magnet exerting both attractive and repulsive forces on the other.

Furthermore, they explained that the process goes on to create a midair energy minimum in the potential of interaction between the dipoles. The team’s computer modelling revealed this minimum, where the floater could stably levitate.

What are Thermal Transistors?

Modern electronic devices depend on electronic transistors. Although transistors control the flow of electricity precisely, they also generate heat in the process. So far, there has been little control over the amount of heat transistors generate during operation; it depends on the efficiency of the device, with more efficient devices generating less heat. Now, using a solid-state thermal transistor, it is possible to use an electric field to control the flow of heat through electronic devices.

The new device, the thermal transistor, was developed by researchers at the University of California, Los Angeles, who published their study in Science, demonstrating the capabilities of the new technology. The lead author of the study described the work as very challenging; for a long time, scientists and engineers have wanted to control heat transfer as easily as they can control current flow.

So far, engineers have cooled electronics with heat sinks, using them passively to draw excess heat away from the electronic device. Although many have tried active approaches to thermal management, these mostly rely on moving parts or fluids and typically take minutes to hours to ramp up or down, depending on the thermal conductivity of the material. With thermal transistors, on the other hand, the researchers were able to actively modulate heat flow with much higher precision and speed. The higher rate of cooling or heating makes thermal transistors a promising option for thermal management in electronic devices.

Similar to an electronic transistor, the thermal transistor uses electric fields to modulate its channel conductance; in this case, however, the conductance is thermal rather than electrical. The researchers engineered a thin film of cage-like molecules to act as the transistor's channel. Applying an electric field strengthens the molecular bonds within the film, which in turn increases its thermal conductance.

As the film was only a single molecule thick, the researchers could attain maximum change in conductivity. The most astonishing feature of this technology was the speed at which the change in conductivity occurred. The researchers were able to go up to a frequency of 1 MHz and above—this was several times faster than that achieved by other heat management systems.

Other types of thermal switches typically control heat flow through molecular motion. However, compared to the motion of electrons, molecular motion is far slower. Using electric fields allowed the researchers to raise the switching speed from millihertz to megahertz frequencies.

Another difference is that molecular motion cannot create a large difference in thermal conduction between the on and off states of the transistor, whereas with the electron-based approach the difference achieved can be as high as 13 times, an enormous figure in both speed and magnitude.
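
A rough way to read that 13x figure: treating the device as a thermal conductance obeying Q = G·ΔT, switching G between its off and on states changes the heat flow by the same factor for a fixed temperature difference. A minimal illustration, with arbitrary conductance values; only the ~13x ratio comes from the article.

```python
# Minimal illustration of on/off thermal conductance switching.
# Conductance values are arbitrary; only the ~13x on/off ratio is from the article.
G_OFF = 1.0e-6          # thermal conductance in the off state, W/K (arbitrary)
G_ON = 13 * G_OFF       # on-state conductance, ~13x higher
DELTA_T = 10.0          # temperature difference across the device, K

q_off = G_OFF * DELTA_T
q_on = G_ON * DELTA_T
print(f"heat flow off: {q_off*1e6:.1f} uW, on: {q_on*1e6:.1f} uW, ratio: {q_on/q_off:.0f}x")
```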

This improvement makes the device an important candidate for cooling processors. Being small, the transistors use only a tiny amount of power to control heat flow, and it is possible to integrate many thermal transistors on the same chip.

What is a CPU?

We use computers every day, and most users are aware of the one indispensable hardware component in it—the CPU or the Central Processing Unit. However, contrary to popular belief, the entire desktop computer or the router is not the CPU, as the actual CPU is small enough to fit in the palm of your hand. Small as it is, the CPU is the most important component inside any computer.

That is because the central processing unit is the main driving force, or brain, of the computer, and the only component that does the actual thinking and decision-making. To do that, CPUs typically contain one or more cores that break up the workload and handle individual tasks. Since each task requires data, a CPU must have fast access to the memory where that data resides. This is generally RAM, or Random Access Memory, which, together with a generous amount of cache memory built into the CPU, helps the central processing unit complete tasks at high speed. However, RAM and cache can hold only a limited amount of data, so the CPU must periodically transfer the data it needs from external disk drives, which can hold much more.

Being processors, CPUs come in a large variety of ISAs, or Instruction-Set Architectures. ISAs can be so distinct that software built for one ISA may not run on another. Even among CPUs using the same ISA, there may be differences in microarchitecture, that is, the actual internal design of the CPU. Manufacturers use different microarchitectures to offer CPUs with various levels of performance, features, and efficiency.

A CPU with a single core is highly efficient in accomplishing tasks that require a serial, sequential order of execution. To improve the performance even further, CPUs with multiple cores are available. Where consumer chips typically offer up to eight cores, bigger server CPUs may offer anywhere from 32 to 128 cores. CPU designers target improving per-core performance by increasing the clock speed, thereby increasing the number of instructions per second that the core handles. This is again dependent on the microarchitecture.
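
A rough rule of thumb ties these quantities together: instructions per second is approximately cores × clock frequency × instructions per cycle (IPC), where IPC depends heavily on the microarchitecture and the workload. The figures below are a hypothetical example, not a benchmark of any real CPU.

```python
# Hypothetical back-of-the-envelope throughput estimate.
# IPC varies widely with microarchitecture and workload; 4 is just an assumption.
cores = 8
clock_hz = 3.5e9        # 3.5 GHz
ipc = 4                 # assumed average instructions retired per cycle per core

instructions_per_second = cores * clock_hz * ipc
print(f"~{instructions_per_second/1e9:.0f} billion instructions per second")   # ~112
```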

Crafting CPUs is an incredibly intricate endeavor, navigated by only a select few experts worldwide. Noteworthy contributors to this field include industry giants like Intel, AMD, ARM, and RISC-V International. Intel and AMD, the pioneers in this arena, consistently engage in fierce competition, each striving to outdo the other in various CPU categories.

ARM, on the other hand, distinguishes itself by offering its proprietary ARM ISA, a technology it licenses to prominent entities such as Apple, Qualcomm, and Samsung. These licensees then leverage the ARM ISA to fashion bespoke CPUs, often surpassing the performance of the standard ARM cores developed by the parent company.

In a departure from the proprietary norm, RISC-V International promotes an open-standard approach with its RISC-V ISA. This innovative model allows anyone to freely adopt and modify the ISA, fostering a collaborative environment that encourages diverse contributions to CPU design.

To truly grasp how well a CPU performs, your best bet is to dive into reviews penned by fellow users and stack their experiences against your specific needs. This usually involves delving into numerous graphs and navigating through tables brimming with numbers. Simply relying on the CPU specification sheet frequently falls short of providing a comprehensive understanding.