Monthly Archives: June 2024

Sensing Current Using Optical Fibers

There does not seem to be any relation between an optical fiber carrying light and a wire through which an electric current is flowing. But as far back as 1845, Michael Faraday demonstrated that the magnetic field generated by a current flowing through a wire rotates the plane of polarization of light waves.

Optical fibers are best known for their usefulness in data links. These links span everything from intra-board and other short distances to inter-chassis connections and routes covering thousands of kilometers. Moreover, being immune to RFI/EMI and other external electromagnetic interference, optical fibers are a good fit for carrying data in high-interference environments. At first glance, though, this immunity seems to contradict Faraday's earlier observations.

In reality, it takes a special arrangement and the right circumstances to make an optical fiber immune to external electromagnetic influence. Meanwhile, engineers and scientists are taking advantage of the Faraday effect: passing light through a magnetic field rotates its polarization state. An electric current can induce this magnetic field, and a large current generates a strong field that changes the polarization significantly.

The Verdet constant is the proportionality constant relating the angle of rotation to the strength of the magnetic field and the path length of the light. Although mixing optics and electromagnetics is not easy, engineers use the Faraday effect to measure the current in a wire by wrapping an optical fiber around it. One advantage of this implementation is the high degree of galvanic isolation it provides, which is very important in power-related applications.
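
As a rough illustration of that proportionality, for a fiber wound N closed turns around a conductor carrying current I, Ampere's law gives a rotation angle of roughly theta = V * mu0 * N * I. The short Python sketch below estimates this angle; the Verdet constant used is only an assumed order-of-magnitude figure for silica fiber, not a datasheet value.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A


def faraday_rotation(current_a, turns, verdet_rad_per_tm=0.5):
    """Estimate the polarization rotation (radians) for a fiber coiled
    `turns` times around a conductor carrying `current_a` amperes.

    From Ampere's law, the line integral of B along one closed loop of
    fiber is mu_0 * I, so theta = V * mu_0 * N * I for N turns.
    The Verdet constant here is an assumed, order-of-magnitude value
    for silica fiber; real sensors use calibrated figures.
    """
    return verdet_rad_per_tm * MU_0 * turns * current_a


# Example: 1000 A through a busbar, fiber coiled 20 times around it
theta = faraday_rotation(1000, 20)
print(f"rotation ~ {math.degrees(theta):.3f} degrees")
```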

However, there are other details to take care of when sensing current using the Faraday effect. For instance, thermal fluctuations or minor vibrations can affect the polarization state in the fiber. Therefore, it is necessary to isolate the fiber from these effects while allowing it to remain sensitive to the magnetic field that induces the rotation.

Scientists have developed a solution to this problem. They use a type of fiber different from the conventional fibers used in data links: an SHB, or spun highly birefringent, fiber. On a microscale the SHB fiber maintains its polarization, while on a macroscale it offers a net-zero birefringence.

To make such a fiber, the manufacturer spins the glass during drawing to create a constant rotation of the polarization axis, twisting the fiber once every few millimeters. This allows the fiber to maintain circular polarization despite mechanical stresses on it, while still remaining sensitive to the Faraday effect.

A careful balance of the fiber's spin pitch overcomes the stress induced by bending during the coiling process while allowing the fiber to maintain its sensitivity to the Faraday effect. As a result, scientists can use the spun fiber in longer lengths and with smaller coil diameters, resulting in higher sensitivity.

Of course, this subtle fiber-level step alone is not enough to build a current sensor. The input laser beam must have a stable circular polarization before it enters the fiber, which requires polarization-control methods.

Current-Sense Resistors Tradeoffs

Using a resistor for sensing current should be a simple affair. After all, one has only to apply Ohm's law, I = V/R: measure the voltage drop across a resistor to find the current flowing through it. However, things are not as simple as that. The thorny issue is the choice of resistor value.

Using a large resistor value has the advantage of offering a large reading magnitude, greater resolution, higher precision, and an improved SNR or signal-to-noise ratio. However, a larger value also wastes power, since P = I²R. It may also affect loop stability, as the larger value adds more series resistance between the load and the power source. Additionally, the resistor's self-heating increases.

Would a lower resistor value be better? Not necessarily: it offers a lower SNR, reduced precision and resolution, and a smaller reading magnitude. The solution lies in a tradeoff.

Experimenting with various resistor values to sense different ranges of currents, engineers have concluded that a resistor producing a voltage drop of about 100 mV at the highest expected current is a good compromise. However, this should only be a starting point; the best value for the current-sense resistor depends on the priorities for sensing current in the specific application.
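
As a back-of-the-envelope illustration of that starting point, the sketch below picks a sense resistor from the roughly 100 mV full-scale guideline and checks its worst-case dissipation with P = I²R. The current and voltage figures are arbitrary example values, not recommendations.

```python
def pick_sense_resistor(i_max_a, v_fullscale=0.1):
    """Choose a sense resistor from the ~100 mV full-scale rule of thumb
    and report its worst-case dissipation (P = I^2 * R).

    All numbers are illustrative; the right value ultimately depends on
    the application's priorities (resolution, power budget, stability).
    """
    r_ohms = v_fullscale / i_max_a        # Ohm's law: R = V / I
    p_watts = i_max_a ** 2 * r_ohms       # dissipation at full-scale current
    return r_ohms, p_watts


# Example: a 20 A full-scale measurement
r, p = pick_sense_resistor(20)
print(f"R ~ {r * 1000:.1f} mOhm, worst-case dissipation ~ {p:.2f} W")
```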

The voltage or IR drop is only one of two related problems; the second is a consequence of the chosen resistor value. This second issue, resistive self-heating, is a potential concern, especially when a high current flows through the resistor. Considering the equation P = I²R, even for a milliohm-range resistor the dissipation may reach several watts when the current runs to tens of amperes.

Why should self-heating be a concern? Because self-heating shifts the nominal value of the sense resistor, and this corrupts the current reading.

Therefore, unless the designer is measuring microamperes or milliamperes, where self-heating can be neglected, they need to analyze how the resistance changes with temperature. To do this, they consult the TCR, or temperature coefficient of resistance, data typically available from the resistor's vendor.

The above analysis is usually an iterative process. That is because the resistance change affects the current flow, which, in turn, affects self-heating, which again affects resistance, and so on.
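
A minimal sketch of that iteration is shown below, assuming a simple linear TCR model, R(T) = R0 * (1 + TCR * (T - 25 °C)), and a fixed thermal resistance from the resistor to ambient. All component figures are hypothetical placeholders, not vendor data.

```python
def converge_sense_resistance(r0_ohms, current_a, tcr_ppm_per_c,
                              theta_c_per_w, t_ambient_c=25.0,
                              iterations=10):
    """Iterate self-heating vs. resistance until the values settle.

    Model (all figures hypothetical):
      P    = I^2 * R                          # dissipation
      T    = T_ambient + theta * P            # simple thermal model
      R(T) = R0 * (1 + TCR * (T - 25 C))      # linear TCR
    """
    r = r0_ohms
    for _ in range(iterations):
        power = current_a ** 2 * r
        temp = t_ambient_c + theta_c_per_w * power
        r = r0_ohms * (1 + tcr_ppm_per_c * 1e-6 * (temp - 25.0))
    return r, temp, power


# Example: 5 mOhm resistor, 20 A, 200 ppm/C TCR, 20 C/W to ambient
r, t, p = converge_sense_resistance(0.005, 20.0, 200, 20)
print(f"settled at R = {r * 1000:.3f} mOhm, T = {t:.1f} C, P = {p:.2f} W")
```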

Therefore, the current-sensing accuracy depends on three considerations—the initial resistor value and tolerance, the TCR error due to ambient temperature change, and the TCR error due to self-heating. To overcome the iterative calculations, vendors offer resistors with very low TCR.

These resistors are specialized, precision metal-foil types. Manufacturers make them from alloys of copper, manganese, and other elements, and use special production techniques to manage and minimize TCR. To reduce self-heating and improve thermal dissipation, some manufacturers add copper to the mix.

Instrumentation applications demand the most precise measurements. For these, manufacturers offer very-low-TCR resistors along with fully characterized curves of resistance versus temperature. The shape of the curve depends on the alloy mix and is typically parabolic.
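
Such a vendor curve can often be approximated by a quadratic fit around 25 °C. The sketch below evaluates a fit of this form; the coefficients are made-up placeholders to show the shape of the calculation, not real characterization data.

```python
def foil_resistance(r25_ohms, temp_c, a_ppm=0.2, b_ppm=-0.025):
    """Evaluate a parabolic resistance-vs-temperature fit:
        R(T) = R25 * (1 + a*dT + b*dT^2), with dT = T - 25 C.

    The a and b coefficients (ppm/C and ppm/C^2) are hypothetical;
    real curves come from the manufacturer's characterization data.
    """
    dt = temp_c - 25.0
    return r25_ohms * (1 + a_ppm * 1e-6 * dt + b_ppm * 1e-6 * dt ** 2)


# Print the (tiny) resistance shift of a nominal 5 mOhm foil resistor
for t in (0, 25, 50, 75, 100):
    print(f"{t:4d} C -> {foil_resistance(0.005, t) * 1000:.6f} mOhm")
```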

What are Industrial Network Switches?

The industry requires network switches to interconnect automation equipment, controllers, and other such devices for transmitting and receiving data on computer networks. Many models of network switches are available on the market, and the industry uses them for both wired and wireless connections. The switches allow multiple devices to access the network with minimal delays or data collisions.

While the industry uses switches to interconnect automatic equipment, controllers, and other such devices, the office environment uses network switches to interconnect computers, scanners, servers, printers, cameras, and more. There are several types of network switches, the most common of them being unmanaged switches, managed smart switches, and PoE switches.

Unmanaged switches are the simplest type and are primarily used in offices. Although they have the fewest user-configurable options and are the least secure, they are also the cheapest option. The greatest advantage of most of these switches is their plug-and-play operation, which allows them to quickly interconnect most devices in an office without requiring assistance from a specialist.

For properly implementing a network switch, all devices that connect to it must have unique IP addresses on the same subnet, and they must be able to network with each other. This is what distinguishes a network switch from a gateway, which allows a device on one network to communicate with a device on a separate network.
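
To see that same-subnet requirement in practice, the short Python sketch below uses the standard ipaddress module to check whether two hosts fall inside the same subnet; the addresses and netmask are arbitrary illustrative values.

```python
import ipaddress


def same_subnet(host_a, host_b, netmask="255.255.255.0"):
    """Return True if both host addresses belong to the same subnet.

    The addresses and netmask are arbitrary examples; strict=False masks
    off the host bits so we compare the network portions only.
    """
    net_a = ipaddress.ip_network(f"{host_a}/{netmask}", strict=False)
    net_b = ipaddress.ip_network(f"{host_b}/{netmask}", strict=False)
    return net_a == net_b


print(same_subnet("192.168.1.10", "192.168.1.200"))  # True: same /24 subnet
print(same_subnet("192.168.1.10", "192.168.2.20"))   # False: different subnets
```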

Although more expensive, managed switches tend to be more secure and have more advanced features. The number of user-configurable options depends on the switch's level of sophistication. For instance, many of them allow the creation of a VLAN, or virtual local area network, that can link several LANs, or local area networks. The advantage is that a VLAN is more effective than one large LAN built by combining numerous existing LANs. Therefore, a switch capable of managing VLANs is of significantly greater advantage in larger facilities.

One disadvantage of managed smart switches is that they are managed through a CLI, or command-line interface. Most average office owners do not possess the requisite skill to manage these devices and need help from IT specialists. For larger facilities with IT staff, managed smart switches offer greater flexibility, speed, and security than unmanaged switches. Industrial facilities employ OT, or operational technology, specialists to diagnose and build network connections between control and automation devices.

Some network switches offer the option of PoE, or Power over Ethernet, which carries low voltages through the Ethernet cable to power devices. Although the power delivered through these PoE connections is limited to 90 W for Type 4 (IEEE 802.3bt) equipment, it offers a significant advantage over running extra cables in some cases. For instance, more cables in robotics and automation means more troubleshooting and more equipment to secure against the robot's range of motion.

Some network switches are stackable, allowing multiple switches to combine into a single large switch that can handle more devices. For a few extra devices, a stackable switch may be the better option for now, but if future expansion is planned, it may be worth investing in a smart switch and multiple LANs.

What is an In-Memory Processor?

According to a university press release, the world's first in-memory processor is now available. This large-scale processor promises a substantially higher level of energy efficiency when processing data. Researchers at LANES, the Laboratory of Nanoscale Electronics and Structures at EPFL, the Ecole Polytechnique Fédérale de Lausanne in Switzerland, have developed the new processor.

The latest information technology systems produce copious amounts of heat. Engineers and scientists are looking for more energy-efficient ways of computing that lower heat production, thereby helping to reduce carbon emissions as the world aims to go greener. In trying to reduce this unwanted heat, they are going to the root of the problem: the von Neumann architecture of the processor.

In a contemporary computing architecture, the information processing center is kept separated from the storage area. Therefore, the system spends much of its energy in shuttling information between the processor and the memory. This made sense in 1945, when John von Neumann first described the architecture. At the time, processing devices and memory storage were intentionally kept separate.

Because of the physical separation, the processor must first retrieve data from the memory before it can perform computations. This movement of electric charge, with capacitors repeatedly charging and discharging and currents in transit, all dissipates energy in the form of heat.

At EPFL, researchers have developed an in-memory processor, which performs a dual role—that of processing and data storage. Rather than using silicon, the researchers have used another semiconductor—MoS2 or molybdenum disulphide.

According to the researchers, MoS2 can form a stable monolayer, only three atoms thick, that interacts only weakly with its surroundings. They initially created a single transistor from such a monolayer, simply by peeling it off with Scotch tape. This thin structure allowed them to design an extremely compact 2D device.

However, a processor requires many transistors to function properly. The research team at LANES successfully designed a large-scale circuit consisting of 1,024 elements on a chip measuring 1 × 1 cm. Within the chip, each element serves as both a transistor and a floating gate to store a charge, which controls the conductivity of the transistor.

The crucial achievement was the process the team used to create the processor. For over a decade, the team has perfected its ability to fabricate entire wafers covered with uniform layers of MoS2. This allowed them to design integrated circuits using industry-standard computer tools and then translate those designs into physical circuits, opening a path to mass production of the in-memory processor.

With electronics fabrication in Europe needing a boost for revival, the researchers want to leverage their innovative architecture as a base. Instead of competing in fabrication of silicon wafers, the researchers envisage their research as a ground-breaking effort for using non-von Neumann architecture in future applications. They look forward to using their highly efficient in-memory processor for data-intensive applications, such as those related to Artificial Intelligence.