
Sensing Current Using Optical Fibers

There does not seem to be any relation between an optical fiber carrying light and a wire through which an electric current is flowing. But as far back as 1845, Michael Faraday demonstrated that the magnetic field generated by a current flowing through a wire influences the plane of polarization of light waves.

Optical fibers are best known for their usefulness in data links. These links span everything from intra-board and inter-chassis distances to runs covering thousands of kilometers. Moreover, being immune to RFI/EMI and other external electrical interference, optical fibers are a good fit for carrying data in high-interference environments. At first glance, however, this immunity seems to contradict Faraday's observation.

As it is, a special arrangement and the right circumstances are necessary to make optical fibers immune to external electromagnetic influence. Meanwhile, engineers and scientists are taking advantage of the Faraday effect: passing light through a magnetic field rotates its polarization state. An electric current can induce this magnetic field, and a large current generates a field strong enough to change the polarization significantly.

The Verdet constant is the proportionality constant relating the strength of the magnetic field to the angle of rotation. Although mixing optics and electromagnetics is not easy, scientists use the Faraday effect to measure the current in a wire by wrapping it with optical fiber. One advantage of this implementation is the high degree of galvanic isolation obtained, which is very important in high-voltage power applications.
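
As a rough illustration of this relationship, the sketch below estimates the Faraday rotation for a fiber wound N turns around a conductor. The Verdet constant, turn count, and current are illustrative assumptions, not values from any particular sensor.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T·m/A

def faraday_rotation(current_a, turns, verdet_rad_per_tm):
    """Polarization rotation (radians) for a fiber coiled `turns` times
    around a conductor carrying `current_a` amperes. From Ampere's law,
    the line integral of B along the closed fiber path is mu0 * N * I,
    so theta = V * mu0 * N * I."""
    return verdet_rad_per_tm * MU0 * turns * current_a

# Illustrative numbers (assumed): a Verdet constant of ~0.54 rad/(T·m)
# for silica fiber near 1550 nm, 20 fiber turns, and a 1 kA bus current.
theta = faraday_rotation(current_a=1000, turns=20, verdet_rad_per_tm=0.54)
print(f"rotation: {math.degrees(theta):.3f} degrees")
```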

However, there are other details to take care of when sensing current using the Faraday effect. For instance, thermal fluctuations or minor vibrations can affect the polarization state in the fiber. It is therefore necessary to isolate the fiber from these effects while allowing it to remain sensitive to the magnetic field inducing the rotation.

Scientists have developed a solution to this problem. They use a type of fiber different from the conventional ones found in data links: an SHB, or spun highly birefringent, fiber. Although the SHB fiber maintains its polarization on a microscale, it offers a net-zero birefringence on a macroscale.

To make such a fiber, the manufacturer spins the glass to create a constant rotation of the polarization axis, twisting the fiber once every few millimeters. This allows the fiber to maintain circular polarization despite mechanical stresses on it, while remaining sensitive to the Faraday effect.

A careful balance of the fiber's spin pitch overcomes the stress from bending during the coiling process while maintaining sensitivity to the Faraday effect. As a result, scientists can use the spun fiber in longer lengths and with smaller coil diameters, resulting in higher sensitivity.

Of course, this one subtle, complex optical-fiber technique is not enough to build a current sensor. The input laser beam must have a stable circular polarization before it enters the fiber, requiring the use of polarization-control methods.

What are Industrial Network Switches?

The industry requires network switches to interconnect automation equipment, controllers, and other such devices for transmitting and receiving data on computer networks. Many models of network switches are available on the market, and the industry uses them for both wired and wireless connections. The switches allow multiple devices to access the network with minimal delays or data collisions.

While industry uses switches to interconnect automation equipment, controllers, and similar devices, the office environment uses network switches to interconnect computers, scanners, servers, printers, cameras, and more. There are several types of network switches, the most common being unmanaged switches, managed smart switches, and PoE switches.

Unmanaged switches are the simplest type, used primarily in offices. Although they have the fewest user-configurable options and are the least secure, they are also the cheapest option. The greatest attraction of most of these switches is their plug-and-play operation, which allows them to quickly interconnect most devices in the office without requiring assistance from a specialist.

To implement a network switch properly, all devices connecting to it must have unique IP addresses on the same subnet, and they must be able to communicate with each other. A network switch is therefore different from a gateway, which allows a device on one network to reach a device on a separate network.
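
As a minimal sketch of this addressing rule, with made-up addresses, Python's standard ipaddress module can check whether each device falls inside the switch's subnet:

```python
import ipaddress

# Two devices can talk through a plain switch only if their addresses
# fall within the same subnet; otherwise a gateway/router is needed.
subnet = ipaddress.ip_network("192.168.10.0/24")
devices = {
    "plc": ipaddress.ip_address("192.168.10.20"),
    "hmi": ipaddress.ip_address("192.168.10.21"),
    "camera": ipaddress.ip_address("192.168.11.5"),  # wrong subnet
}

for name, addr in devices.items():
    ok = addr in subnet
    print(f"{name}: {addr} {'OK' if ok else 'needs a gateway'}")
```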

Although more expensive, managed switches tend to be more secure and have more advanced features. The number of user-configurable options depends on the switch's level of management. For instance, many allow the creation of a VLAN, or virtual local area network, which can link several LANs, or local area networks. A VLAN is more effective than one large LAN built by combining numerous existing LANs, so a switch capable of managing VLANs offers a significant advantage in larger facilities.

One disadvantage of managed smart switches is that they are administered through a CLI, or command-line interface. Most office users do not possess the requisite skills for managing these devices, necessitating help from IT specialists. For larger facilities with IT staff, managed smart switches offer higher flexibility, speed, and greater security than unmanaged switches. Industrial facilities employ OT, or operational technology, specialists to diagnose and build network connections between control and automation devices.

Some network switches offer PoE, or Power over Ethernet, carrying low voltages through the Ethernet cable to power connected devices. Although power transmitted through a PoE connection is limited to 90 W for Type 4 equipment, it offers a significant advantage over running extra cables in some cases. For instance, more cables in robotics and automation mean more troubleshooting and more equipment to secure against the robot's range of motion.
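
For reference, the commonly quoted PoE power limits can be captured in a short helper. Treat these figures as approximate and verify them against the IEEE 802.3 standard for a real design:

```python
# Approximate PoE power limits (IEEE 802.3 Types 1-4). The PSE figure
# is what the switch port sources; the PD figure is what is guaranteed
# at the powered device after cable losses.
POE_TYPES = {
    "Type 1 (802.3af)": {"pse_w": 15.4, "pd_w": 12.95},
    "Type 2 (802.3at)": {"pse_w": 30.0, "pd_w": 25.5},
    "Type 3 (802.3bt)": {"pse_w": 60.0, "pd_w": 51.0},
    "Type 4 (802.3bt)": {"pse_w": 90.0, "pd_w": 71.3},
}

def smallest_type_for(load_w):
    """Return the least-capable PoE type whose PD budget covers load_w."""
    for name, lim in POE_TYPES.items():
        if load_w <= lim["pd_w"]:
            return name
    return None  # load exceeds what PoE can deliver

print(smallest_type_for(20))  # -> Type 2 (802.3at)
print(smallest_type_for(65))  # -> Type 4 (802.3bt)
```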

Some network switches are stackable, allowing multiple switches to combine into a single large switch that can handle more devices. For a few extra devices, a stackable switch may be the better option for now; if future expansion is planned, however, it may be worth investing in a smart switch and multiple LANs.

What is an In-Memory Processor?

According to a university press release, the world's first in-memory processor is now available. This large-scale processor promises far more energy-efficient data processing. Researchers at LANES, the Laboratory of Nanoscale Electronics and Structures at EPFL, the Ecole Polytechnique Fédérale de Lausanne in Switzerland, have developed the new processor.

The latest information technology systems produce copious amounts of heat. Engineers and scientists are looking for more efficient ways of using energy to lower the production of heat, thereby helping to reduce carbon emissions as the world aims to go greener in the future. In trying to reduce the unwanted heat, they are going to the root of the problem. They want to investigate the von Neumann architecture of a processor.

In contemporary computing architectures, the processing unit is kept separate from the storage area. The system therefore spends much of its energy shuttling information between the processor and the memory. This made sense in 1945, when John von Neumann first described the architecture; at the time, processing devices and memory storage were intentionally kept separate.

Because of this physical separation, the processor must first retrieve data from the memory before it can perform computations. The operation involves moving electric charges, repeatedly charging and discharging capacitors, and transporting transient currents. All this leads to energy dissipation in the form of heat.

At EPFL, researchers have developed an in-memory processor that performs a dual role: processing and data storage. Rather than silicon, the researchers have used another semiconductor, MoS2, or molybdenum disulphide.

According to the researchers, MoS2 can form a stable monolayer only three atoms thick that interacts only weakly with its surroundings. They created their first monolayer transistor simply by peeling off a layer of MoS2 with Scotch tape. This thin structure let them design extremely compact 2D devices.

However, a processor requires many transistors to function properly. The research team at LANES successfully designed a large-scale circuit consisting of 1,024 elements on a chip measuring 1 × 1 cm. Within the chip, each element serves as both a transistor and a floating gate that stores a charge, which in turn controls the conductivity of the transistor.
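
A conceptual sketch, not the LANES design itself, shows why computing in memory saves data movement: in a crossbar of programmable conductances, applying input voltages yields column currents that directly form a vector-matrix product, so the multiply happens where the "weights" are stored.

```python
import numpy as np

# In-memory (crossbar) multiply-accumulate, idealized: each cell stores
# a conductance G; applying a voltage vector V produces currents
# I = G @ V (Ohm's law per cell, Kirchhoff's current law per row).
# No separate fetch of the stored values is needed.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 8))  # stored conductances (the memory)
V = rng.uniform(0.0, 0.2, size=8)       # applied input voltages

I = G @ V  # computed "in place" by the physics
print(np.round(I, 4))
```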

The crucial achievement was the process the team used to create the processor. For over a decade, the team has perfected its ability to fabricate entire wafers covered with uniform layers of MoS2. This allowed them to design integrated circuits using industry-standard computer tools and then translate those designs into physical circuits, opening the way to mass production of the in-memory processor.

With electronics fabrication in Europe needing a boost, the researchers want to leverage their innovative architecture as a base. Rather than competing in silicon-wafer fabrication, they see their research as a ground-breaking effort toward using non-von Neumann architectures in future applications. They look forward to using their highly efficient in-memory processor for data-intensive applications, such as those related to artificial intelligence.

What are Artificial Muscles?

In the animal world, muscles are the basis of all movement. On commands from the brain, electrical pulses contract or relax muscles, and this is how we move our body parts. Now, researchers have created new types of actuators based on structures of multiple soft materials. Like regular actuators, these convert electrical energy into force or motion; their advantage is that they are lightweight, quiet in operation, and biodegradable. In the early stages of development, continuous electrical stimulation could achieve only short-term contraction of the actuators. However, new research has led to a system that allows longer-term contraction and enables accurate force measurements. These new actuators are the basis for artificial muscles.

With their ability to transform electrical energy into force or motion, the new actuators can serve an important role in everyday life. These soft-material-based actuators have been attracting plenty of attention in the scientific community because of their multifunctionality.

According to the researchers, making a soft actuator is rather simple. They use multi-material structures in the form of pockets made of flexible plastic films. They fill the pockets with oils and cover them with conductive plastics. Electrically activating the films makes the pocket contract, similar to what happens in a biological muscle.
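
A back-of-the-envelope estimate, not the authors' model, of the electrostatic (Maxwell) pressure squeezing such a pocket when the films are energized; the film thickness, permittivity, and drive voltage here are assumed values:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrostatic_pressure(voltage_v, dielectric_thickness_m, eps_r):
    """Maxwell-stress estimate P = eps0 * eps_r * E^2 / 2 of the
    pressure squeezing the liquid-filled pocket, with E = V/d."""
    e_field = voltage_v / dielectric_thickness_m
    return 0.5 * EPS0 * eps_r * e_field**2

# Illustrative numbers (assumed): 6 kV across 40 um of plastic film
# with relative permittivity ~3.
p = electrostatic_pressure(6000, 40e-6, 3.0)
print(f"~{p/1000:.0f} kPa of actuation pressure")
```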

Using this technique, the researchers have created robotic muscles, tactile surfaces, and changeable optics. Until now, however, continuous electrical stimulation produced only short-term contractions, a considerable practical barrier.

The researchers have published their findings in Nature Electronics. Researcher Ion-Dan Sirbu, at the Johannes Kepler University in Linz, along with an Austrian research group, developed a system enabling accurate measurement of force in the new actuators.

While experimenting with combinations of common materials, including the plastic films they were using for their work on artificial muscles, the researchers realized that a specific combination of materials could sustain a constant force for arbitrarily long periods.

The team then constructed a theoretical model of the material to study its characteristics in depth, and found that the simple model accurately described their experimental results. They claim this simple but powerful tool will help in designing and investigating new systems.

Their study has not only made the technology more functional, it also enables identifying material combinations that reduce energy consumption by a factor of thousands. With these material combinations, the researchers and other scientists have successfully investigated and developed various types of artificial muscles, tactile displays, and variable-gradient optics.

The study has deepened our grasp of the basic workings of soft actuators. These advancements hold promise for significant strides in assistive devices, mobile robots, and automated machines, offering valuable contributions to marine, terrestrial, and space explorations. This is particularly crucial, given the ongoing quest in these sectors for cost-effective, high-performance solutions that prioritize low power consumption and sustainable environmental impact.

What is Magnetic Levitation?

Many systems, such as flywheels, Maglev trains, and other high-speed machinery, already use magnetic levitation, a technology pioneered at Brookhaven National Laboratory in the late 1960s. In a Maglev train, superconducting magnets keep the car suspended above a U-shaped concrete guideway. Like regular magnets, superconducting magnets repel one another when like poles face each other. Systematically energizing propulsion loops in the guideway creates moving magnetic fields that pull the car forward from the front and push it from the rear. Because the car floats in a sea of interacting magnetic fields, the ride is very smooth and very fast, reaching up to 375 miles per hour (ca. 604 km/h).

Now, recent research at the Technical University of Denmark has given this old technology a new twist. The researchers have shown it is possible to levitate a magnet simply by rotating another, similar-sized magnet near it. Hamdi Ucar, an electronics and software engineer, first demonstrated this unusual effect in 2021. The team at TU Denmark hopes to exploit the effect for contactless object handling, or for trapping and manipulating microplastics made of ferromagnetic materials.

Magnetic levitation comes in three types. The first is active magnetic stabilization, where a control system supplies the magnetic force that keeps the levitating object balanced. The second, used by Maglev trains, is electrodynamic suspension: a moving magnet induces a current in a stationary conductor, which produces a repulsive magnetic force that increases with the speed of the moving magnet. The third is spin-stabilized levitation, in which a levitating magnet spins at about 500 RPM, or revolutions per minute, and the gyroscopic effect keeps it stable.

The TU Denmark type of levitation is a variation of the third type. It involves two magnets: a rotor and a floater. The rotor magnet is mounted on a motor, with its magnetic poles oriented perpendicular to its rotational axis, and spins at about 10,000 RPM. The team used a spherical magnet made from neodymium-iron-boron, 19 mm in diameter.

The floater magnet, placed under the rotor, automatically begins to spin with it, moving upwards to hover in space a few centimeters below the rotor. The floater precesses at the same frequency as the rotor, with its magnetization oriented close to the rotation axis and its like pole facing the rotor. When the floater is disturbed, the interacting magnetic fields force it back to its equilibrium position.

The team ran computer simulations accounting for the magnetostatic interactions between the two magnets. They found the new type of levitation is caused by a combination of magnetic dipole-dipole coupling and the gyroscopic effect: the magnetostatic force of one magnet exerts both attractive and repulsive components on the other.

Furthermore, they explained that this process creates a midair energy minimum in the interaction potential between the dipoles. The team's computer modelling revealed this minimum, where the floater can levitate stably.
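
For readers who want to experiment, here is a small helper for the static dipole-dipole interaction energy that such simulations build on. The magnetic moments are assumed values, and the actual midair minimum only emerges once the rotor's spin dynamics are included; this static term is just one ingredient.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # vacuum permeability, T·m/A

def dipole_energy(m1, m2, r):
    """Static interaction energy of two point dipoles m1, m2 (A·m^2)
    separated by vector r (m):
    U = mu0/(4*pi*|r|^3) * [m1·m2 - 3*(m1·r_hat)*(m2·r_hat)]."""
    r = np.asarray(r, float)
    d = np.linalg.norm(r)
    r_hat = r / d
    return MU0 / (4 * np.pi * d**3) * (
        np.dot(m1, m2) - 3 * np.dot(m1, r_hat) * np.dot(m2, r_hat)
    )

# Illustrative scan (assumed moments ~0.5 A·m^2 for small Nd magnets):
# energy vs. vertical separation for axially aligned dipoles.
m_rotor = np.array([0.0, 0.0, 0.5])
m_floater = np.array([0.0, 0.0, 0.5])
for z_cm in (2, 3, 4, 5):
    u = dipole_energy(m_rotor, m_floater, [0, 0, z_cm / 100])
    print(f"z = {z_cm} cm: U = {u*1e3:.3f} mJ")
```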

What are Thermal Transistors?

Modern electronic devices depend on electronic transistors. Although transistors control the flow of electricity precisely, they also generate heat in the process. Until now, there was little control over the amount of heat transistors generated during operation; it depended on the efficiency of the device, with more efficient devices generating less heat. Now, using a solid-state thermal transistor, it is possible to use an electric field to control the flow of heat through electronic devices.

The new device, the thermal transistor, was developed by researchers at the University of California, Los Angeles, who published their study in Science, demonstrating the capabilities of the new technology. The lead author described the work as very challenging: for a long time, scientists and engineers have wanted to control heat transfer as easily as they control current flow.

Until now, engineers cooled electronics with heat sinks, using passive heat sinks to draw excess heat away from the electronic device. Although many have tried active approaches to thermal management, these mostly rely on moving parts or fluids and can typically take minutes to hours to ramp up or down, depending on the thermal conductivity of the material. With thermal transistors, by contrast, the researchers were able to actively modulate heat flow with higher precision and speed. The faster rate of cooling or heating makes thermal transistors a promising option for thermal management in electronic devices.

Similar to an electronic transistor, the thermal transistor uses electric fields to modulate its channel conductance; in this case, however, the conductance is thermal rather than electrical. The researchers engineered a thin film of cage-like molecules to act as the transistor's channel. Applying an electric field strengthens the molecular bonds within the film, which in turn increases its thermal conductance.
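
A toy model, not the UCLA device physics, of what "gating" heat flow means: the heat current is conductance times temperature difference, with the conductance switched by the applied field. The 13x on/off ratio is the figure cited later in this article; the absolute conductance value is made up.

```python
# Idealized thermal transistor: Q = G * dT, where the channel's thermal
# conductance G switches between off- and on-state values under a field.
G_OFF = 1e-9       # W/K, assumed off-state thermal conductance
ON_OFF_RATIO = 13  # on/off conductance ratio cited in this article

def heat_flow(delta_t_k, gate_on):
    """Heat current (W) through the film for a temperature drop dT."""
    g = G_OFF * (ON_OFF_RATIO if gate_on else 1)
    return g * delta_t_k

dT = 20  # K across the molecular film (assumed)
print(f"off: {heat_flow(dT, False)*1e9:.1f} nW, "
      f"on: {heat_flow(dT, True)*1e9:.1f} nW")
```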

Because the film was only a single molecule thick, the researchers could attain the maximum change in conductivity. The most astonishing feature of this technology was the speed at which the change occurred: the researchers reached switching frequencies of 1 MHz and above, far faster than other heat-management systems.

Other types of thermal switches typically control heat flow through molecular motion. Compared to the motion of electrons, however, molecular motion is far slower. Using electric fields allowed the researchers to raise switching speeds from millihertz to megahertz frequencies.

Another difference is that molecular motion cannot create a large difference in thermal conduction between the on and off states of the transistor. With electron motion, the achieved difference can be as high as 13 times, an enormous figure in both speed and magnitude.

This improvement makes the device important for cooling processors. Being small, the transistors use only a tiny amount of power to control heat flow, and it is possible to integrate many thermal transistors on the same chip.

What is a CPU?

We use computers every day, and most users know of the one indispensable hardware component inside: the CPU, or Central Processing Unit. Contrary to popular belief, however, the CPU is not the entire desktop computer or router; the actual CPU is small enough to fit in the palm of your hand. Small as it is, the CPU is the most important component inside any computer.

That is because the central processing unit is the main driving force, the brain of the computer, and the only component that does the actual thinking and decision-making. To do that, CPUs typically contain one or more cores that break up the workload and handle individual tasks. As each task requires data handling, a CPU must have access to the memory where that data actually resides, and for fast computing, the memory must be fast. This is generally RAM, or Random Access Memory, which, together with a generous amount of cache memory built into the CPU, helps the central processing unit complete tasks at high speed. However, RAM and cache can only store a small amount of data, so the CPU must periodically transfer the required data from external disk drives, which can hold much more of it.

Being processors, CPUs come in a large variety of ISAs, or Instruction-Set Architectures. ISAs can differ so much that software built for one ISA may not run on another. Even among CPUs using the same ISA, there may be differences in microarchitecture, that is, in the actual design of the CPU. Manufacturers use different microarchitectures to offer CPUs with various levels of performance, features, and efficiency.

A CPU with a single core is highly efficient at tasks that require serial, sequential execution. To improve performance further, CPUs with multiple cores are available: where consumer chips typically offer up to eight cores, bigger server CPUs may offer anywhere from 32 to 128. CPU designers also target per-core performance by increasing the clock speed, raising the number of instructions per second each core handles; this again depends on the microarchitecture.
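
You can inspect some of these facts about your own machine with Python's standard library; the exact strings returned vary by platform:

```python
import os
import platform

# Report the ISA/architecture and core count of the host CPU.
print("architecture: ", platform.machine())    # e.g. x86_64 or arm64
print("processor:    ", platform.processor())  # vendor-specific string
print("logical cores:", os.cpu_count())
```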

Crafting CPUs is an incredibly intricate endeavor, navigated by only a select few experts worldwide. Noteworthy contributors to this field include industry giants like Intel, AMD, ARM, and RISC-V International. Intel and AMD, the pioneers in this arena, consistently engage in fierce competition, each striving to outdo the other in various CPU categories.

ARM, on the other hand, distinguishes itself by offering its proprietary ARM ISA, a technology it licenses to prominent entities such as Apple, Qualcomm, and Samsung. These licensees then leverage the ARM ISA to fashion bespoke CPUs, often surpassing the performance of the standard ARM cores developed by the parent company.

In a departure from the proprietary norm, RISC-V International promotes an open-standard approach with its RISC-V ISA. This innovative model allows anyone to freely adopt and modify the ISA, fostering a collaborative environment that encourages diverse contributions to CPU design.

To truly grasp how well a CPU performs, your best bet is to dive into reviews penned by fellow users and stack their experiences against your specific needs. This usually involves delving into numerous graphs and navigating through tables brimming with numbers. Simply relying on the CPU specification sheet frequently falls short of providing a comprehensive understanding.

What are Cold-Cathode Devices?

Some devices, like thermionic valves, contain a cathode that must heat up before the device can work. Other devices, however, do not require a hot cathode to function. These have two electrodes within a sealed glass envelope containing a low-pressure gas such as neon. With a sufficiently high voltage applied to the electrodes, the gas ionizes, producing a glow around the negative electrode, the cathode. Depending on the gas in the tube, the cathode glow can be orange (for neon) or another color. Since these devices do not require a hot cathode, they are known as cold-cathode devices, and scientists have developed a multitude of devices based on this effect.

The simplest cold-cathode device is the neon lamp. Before the advent of LEDs, neon lamps were the go-to indicator lights. A neon lamp ionizes at around 90 V, the strike (or breakdown) voltage of the neon gas within it. Once ionized, the gas continues to glow down to around 65 V, its maintain (or sustain) voltage. The difference between the strike and sustain voltages implies the lamp has a negative-resistance region in its operating curve. Hence, users often build a relaxation oscillator from just a neon lamp, a capacitor, and a resistor.
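
A minimal sketch of the oscillator's timing, using the strike and maintain voltages above and made-up component values: the capacitor charges through the resistor until it reaches the strike voltage, the lamp fires and discharges it back to the maintain voltage, and the cycle repeats.

```python
import math

def neon_osc_period(v_supply, r_ohms, c_farads,
                    v_strike=90.0, v_maintain=65.0):
    """Approximate period of a neon-lamp relaxation oscillator: the RC
    charge time from the maintain voltage up to the strike voltage,
    t = R*C*ln((Vs - Vm)/(Vs - Vk)). The brief discharge through the
    fired lamp is neglected."""
    return r_ohms * c_farads * math.log(
        (v_supply - v_maintain) / (v_supply - v_strike)
    )

# Illustrative values (assumed): 150 V supply, 1 Mohm, 100 nF.
t = neon_osc_period(150.0, 1e6, 100e-9)
print(f"period ~ {t*1000:.1f} ms, ~{1/t:.1f} Hz")
```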

Another everyday use for the neon lamp is as a power indicator for the AC mains. As an AC power indicator, the neon lamp requires a series resistance of around 220 kΩ to 1 MΩ to limit the current through it, which also extends its life significantly. Since the electrodes in a neon lamp are symmetrical, using it in an AC circuit causes both electrodes to glow equally.

Neon signs, such as those in Times Square and Piccadilly Circus, use the same effect. Instead of the short tube of a neon lamp, neon signs use a long tube shaped into the specific design of the application. Depending on the display color, the tube may contain neon or another gas, together with a small amount of mercury, and applying a fluorescent phosphor coating to the inside of the glass produces still more colors. Because of the significant separation between the two electrodes, neon signs require a high strike voltage, of around 30 kV.

Another application of cold-cathode devices is the popular Nixie tube. Although seven-segment LED displays have now largely replaced them, Nixie tubes, in essence glorified neon lamps, remain popular. Typically, they have ten cathodes, each shaped like a numeral, and the circuit switches on the electrode required to display a particular number. The Nixie tube produces a very natural-looking display, which many people find more beautiful than the stick-like seven-segment LED displays.

Photographers still use flash tubes to illuminate the scenes they capture, typically as camera flashes and strobes. Flash tubes are filled with xenon gas. Apart from the two main electrodes, flash tubes have a smaller trigger electrode near one or both main electrodes. In use, the main electrodes sit at a few hundred volts from each other. To fire the tube, the circuit applies a high-voltage pulse to the trigger electrode, which rapidly ionizes the gas between the main electrodes, giving off a bright white flash.

What is Industrial Ethernet?

Earlier, the manufacturing industry went through a paradigm shift: Industry 3.0, which, based on information technology, boosted automation, enhanced productivity, improved precision, and allowed higher flexibility. Today, we are at the foothills of Industry 4.0, with ML, or machine learning, M2M, or machine-to-machine communication, and smart technologies like AI, or artificial intelligence. There is a major difference between the two: while Industry 3.0 offered information to humans, allowing them to make better decisions, Industry 4.0 uses digital information to optimize processes, mostly without human intervention.

With Industry 4.0, it is possible to link the design office directly to the manufacturing floor. For instance, using M2M communication, a CAD, or computer-aided design, system can communicate directly with machine tools, programming them to make the necessary parts. Machine tools, in turn, can provide feedback to the CAD system, reporting challenges in the production process so that designs can be modified for easier fabrication.

Manufacturers use the Industrial Internet, or IIoT, the Industrial Internet of Things, to build their Industry 4.0 solutions. One important role of the network is forming feedback loops: sensors monitor processes in real time, and the collected data effectively controls and enhances the operation of the machine.

However, implementing IIoT is not simple. One of the biggest challenges is the cost of investment, although this can be justified by better design and manufacturing processes, leading to cost savings through increased productivity and fewer product failures. Reducing capital outlay is one way to accelerate the adoption of Industry 4.0; another is to use a relatively inexpensive but proven and accessible communication technology, like Ethernet.

Ethernet is a wired networking option in wide use all over the world, with good IP interoperability and huge vendor support. Moreover, PoE, or Power over Ethernet, uses the same cable for carrying data as well as power to connected cameras, actuators, and sensors.

Industrial Ethernet, using rugged cables and connectors, builds on the consumer version of Ethernet, bringing a mature and proven technology to industrial automation. With Industrial Ethernet, it is possible not only to transport vital data but also to remotely supervise machines, controllers, and PLCs on the shop floor.

Standard Ethernet has high and unpredictable latency, mainly because of its tendency to lose packets and retransmit them. This makes it unsuitable for rapidly moving assembly lines that must run in synchronization. Industrial Ethernet hardware instead uses deterministic, low-latency industrial protocols, like PROFINET, Modbus TCP, and EtherNet/IP.
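
As a minimal sketch of what one such protocol looks like on the wire, here is a raw Modbus TCP read-holding-registers request built by hand. Real deployments would use a tested protocol stack or PLC library rather than hand-rolled frames.

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a raw Modbus TCP request for function code 3 (read holding
    registers). MBAP header: transaction id, protocol id (always 0),
    length of the bytes that follow, unit id; then the PDU: function
    code, start address, register count (all big-endian)."""
    pdu = struct.pack(">BHH", 3, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_holding_registers(1, 1, 0x0000, 2)
print(frame.hex(" "))  # 00 01 00 00 00 06 01 03 00 00 00 02
```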

For Industrial Ethernet deployments, the industry uses hardened versions of CAT 5e cable, with Gigabit Ethernet installations using CAT 6. A CAT 5e cable has eight wires formed into four twisted pairs; the twisting limits crosstalk and signal interference, and each pair supports a duplex connection. Gigabit Ethernet, being a high-speed system, uses all four pairs for carrying data. Lower-throughput systems can use two twisted pairs for data and the other two for carrying power or a conventional phone service.

E-Fuse Future Power Protection

High-voltage eMobility applications are on the rise. Traditionally, protection has relied on non-resettable fuses, sometimes paired with mechanical relays or contactors. That is now changing: semiconductor-based resettable fuses, or eFuses, are replacing traditional fuses.

These innovative eFuses represent a significant trend in safeguarding hardware and users in high-voltage, high-power scenarios. Vishay has announced a reference design for an eFuse that can handle high power loads. They have equipped the new eFuse with SiC MOSFETs and a VOA300 optocoupler, a combination that can handle up to 40 kW of continuous power load. The design can operate at full power with losses of less than 30 W, without active cooling. The eFuse incorporates essential features like continuous current monitoring, a preload function, and rapid overcurrent protection.

Vishay has designed the eFuse to manage the safe connection and disconnection of a high-voltage power source. For instance, the eFuse can safely connect or disconnect various vehicle loads to and from a high-energy battery pack. The eFuse uses SiC MOSFETs as its primary switches, capable of continuous operation up to 100 A. The user can predefine a current limit; when the current exceeds this limit, the eFuse rapidly disconnects the load from the power source, safeguarding the user and the battery pack. In addition, a short circuit or an excessive load capacitance during power-up causes the eFuse to initiate an immediate shutdown.
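
A much-simplified sketch of the trip behavior described above, not Vishay's actual firmware: poll the shunt current and open the switches the moment a user-defined limit is exceeded. The hardware-access functions here are stand-ins.

```python
import time

CURRENT_LIMIT_A = 100.0  # user-defined trip threshold

def run_efuse(read_shunt_current, open_switches, sample_period_s=0.0001):
    """Poll the shunt current; disconnect the load on overcurrent.
    `read_shunt_current` and `open_switches` stand in for real
    hardware access (ADC across the shunt, MOSFET gate drivers)."""
    while True:
        i = read_shunt_current()
        if abs(i) > CURRENT_LIMIT_A:
            open_switches()
            return f"tripped at {i:.1f} A"
        time.sleep(sample_period_s)

# Example run with simulated current samples:
samples = iter([42.0, 85.5, 131.2])
print(run_efuse(lambda: next(samples), lambda: None))
```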

The basic design of the eFuse is a four-layer PCB, or printed circuit board, of 150 mm x 90 mm, populated on both sides. Each layer carries 70 µm thick copper, as against 35 µm for regular PCBs. Some connectors extend beyond the board's edges. The top side of the PCB carries all the high-voltage circuitry, control buttons, status LEDs, multiple test points, and connectors, while the bottom side carries the low-voltage control circuitry. It is also possible to control the eFuse remotely via a web browser.

To ensure safety, the user must first enable the low-voltage power supply, and only then the high-voltage supply on the input. For input voltages exceeding 50 V, an LED indicator lights up on the board. Vishay has used two sets of six SiC MOSFETs, with three connected in parallel, in a back-to-back configuration so the eFuse can handle current flow in both directions. A current-sensing shunt resistor, the Vishay WSLP3921, monitors the current flowing to the load; Vishay has positioned it strategically between the two parallel sets of MOSFETs.

Vishay has incorporated convenient control options in the eFuse. Users can operate it via the push buttons on the PCB or through the external Vishay MessWEB controller; either way unlocks an expanded array of features. Alternatively, the user can integrate the eFuse seamlessly into a CAN bus-based system by using an additional chipset in conjunction with the MessWEB controller. Vishay claims to have successfully tested its reference eFuse design.