What is Magnetic Levitation?

Many systems, such as flywheels, Maglev trains, and other high-speed machinery, already use magnetic levitation. Brookhaven National Laboratory pioneered the technology in the late 1960s. Maglev trains use it to keep a train car suspended above a U-shaped concrete guideway on superconducting magnets. Like regular magnets, superconducting magnets repel one another when like poles face each other. Systematically electrifying propulsion loops in the guideway creates moving magnetic fields that pull the train car forward from the front and push it forward from the rear. Because the moving train car floats in a sea of interacting magnetic fields, the trip is very smooth and very fast, reaching up to 375 miles per hour (ca. 604 km/h).

Now, the latest research at the Technical University of Denmark has given this old technology a new twist. Researchers there have shown it is possible to levitate a magnet simply by rotating another, similarly sized magnet near it. Hamdi Ucar, an electronics and software engineer, first demonstrated this unusual effect in 2021. The team at TU Denmark is exploiting the effect for contactless object handling and for trapping and manipulating microplastics made of ferromagnetic materials.

Magnetic levitation comes in three types. The first is active magnetic stabilization, where a control system supplies the magnetic force that keeps the levitating object balanced. The second type, used by Maglev trains, is known as electrodynamic suspension. Here, a moving magnet induces a current in a stationary conductor, which then produces a repulsive magnetic force; this force increases with the speed of the moving magnet. The third type is spin-stabilized levitation, where a levitating magnet spins at about 500 RPM, or revolutions per minute, and the gyroscopic effect keeps it stable.

The TU Denmark type of levitation is a variation of the third type. It involves two magnets: a rotor and a floater. The rotor magnet is mounted on a motor, with its magnetic poles oriented perpendicular to its rotational axis. The motor spins it at about 10,000 RPM. The TU Denmark team used a spherical neodymium-iron-boron magnet, 19 mm in diameter.

The floater magnet, placed under the rotor, automatically begins to spin with the rotor, moving upwards towards the rotor to hover in space a few centimeters below it. The floater precesses at the same frequency as the rotor, with its magnetization oriented close to the rotation axis, matching the like pole of the rotor. When disturbed, the interacting magnetic fields force it back to its equilibrium position.

The team ran computer simulations that took into account the magnetostatic interactions between the two magnets. They found that the new type of levitation arises from a combination of magnetic dipole-to-dipole coupling and the gyroscopic effect, with the magnetostatic force of one magnet exerting both attractive and repulsive forces on the other.

Furthermore, they explained that this interplay creates a midair energy minimum in the interaction potential between the dipoles. The team's computer modeling revealed this minimum, where the floater could levitate stably.
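To make the mechanism concrete, here is a minimal sketch of the standard magnetostatic dipole-dipole interaction energy that such simulations are built on. The magnetic-moment value is a rough assumption for a 19 mm NdFeB sphere, and the tilt and separation are illustrative, not figures from the study:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def dipole_energy(m1, m2, r_vec):
    """Magnetostatic interaction energy (J) of two point dipoles
    m1, m2 (A*m^2) separated by the displacement vector r_vec (m)."""
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    return (MU0 / (4 * np.pi)) * (
        np.dot(m1, m2) - 3 * np.dot(m1, r_hat) * np.dot(m2, r_hat)
    ) / r**3

# Rough assumption: a 19 mm NdFeB sphere carries a moment of ~4 A*m^2.
tilt = np.radians(10)                    # floater magnetization slightly off-axis
rotor = 4.0 * np.array([1.0, 0.0, 0.0])  # poles perpendicular to the rotation (z) axis
floater = 4.0 * np.array([np.sin(tilt), 0.0, np.cos(tilt)])
offset = np.array([0.0, 0.0, -0.05])     # floater hovering 5 cm below the rotor

print(f"interaction energy: {dipole_energy(rotor, floater, offset):.2e} J")
```

A full simulation evaluates this pair energy, plus gravity, at every instant of the rotor's spin; the time-averaged result is what produces the midair minimum the team reported.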

What are Thermal Transistors?

Modern electronic devices depend on electronic transistors. Although transistors control the flow of electricity precisely, they also generate heat in the process. Until now, there was little control over the amount of heat a transistor generated during operation; it depended on the efficiency of the device, with more efficient devices generating less heat. Now, using a solid-state thermal transistor, it is possible to use an electric field to control the flow of heat through electronic devices.

The new device, the thermal transistor, was developed by researchers at the University of California, Los Angeles. They published their study in Science, demonstrating the capabilities of the new technology. The lead author of the study described the problem as very challenging: for a long time, scientists and engineers have wanted to control heat transfer as easily as they control current flow.

So far, engineers have cooled electronics with heat sinks, using passive heat sinks to draw excess heat away from the electronic device and keep it cool. Although many have tried active approaches to thermal management, these mostly rely on moving parts or fluids and can typically take minutes to hours to ramp up or down, depending on the thermal conductivity of the material. With thermal transistors, on the other hand, the researchers were able to actively modulate heat flow with higher precision and speed. This higher rate of cooling or heating makes thermal transistors a promising option for thermal management in electronic devices.

Similar to an electronic transistor, the thermal transistor uses electric fields to modulate its channel conductance. In this case, however, the conductance is thermal rather than electrical. The researchers engineered a thin film of cage-shaped molecules to act as the transistor's channel. They then applied an electric field, which strengthened the molecular bonds within the film and, in turn, increased its thermal conductance.

As the film was only a single molecule thick, the researchers could attain the maximum change in conductivity. The most astonishing feature of this technology was the speed at which the change occurred: the researchers reached switching frequencies of 1 MHz and above, several times faster than other heat management systems.

Other types of thermal switches typically control heat flow through molecular motion. However, compared to the motion of electrons, molecular motion is far slower. The use of electric fields allowed the researchers to raise the switching speed from millihertz (mHz) to megahertz (MHz) frequencies.

Another difference is that molecular motion cannot create a large enough difference in thermal conduction between the on and off states of the transistor. With electron motion, however, the difference can be as high as 13 times, an enormous improvement in both speed and magnitude.
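As a minimal sketch of what such field-effect thermal switching amounts to, the snippet below assumes simple Fourier conduction (heat flow equals conductance times temperature difference) and the reported 13x on/off ratio; the absolute conductance value is a placeholder, not a figure from the study:

```python
# Field-gated thermal channel: heat flow Q = G * dT, with the electric
# field switching the conductance G between on and off states.
G_OFF = 2.0e-9        # off-state thermal conductance, W/K (placeholder value)
ON_OFF_RATIO = 13.0   # reported on/off switching ratio
G_ON = G_OFF * ON_OFF_RATIO

def heat_flow(gate_on: bool, delta_t: float) -> float:
    """Heat current (W) through the molecular film for a temperature
    difference delta_t (K), gated by the applied electric field."""
    return (G_ON if gate_on else G_OFF) * delta_t

print(heat_flow(True, 10.0), heat_flow(False, 10.0))  # on vs. off, 13x apart
```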

This improvement makes the device an important candidate for cooling processors. Being small, the transistors use only a tiny amount of power to control heat flow. Another advantage is that many thermal transistors can be integrated on the same chip.

What is a CPU?

We use computers every day, and most users are aware of the one indispensable hardware component in them: the CPU, or Central Processing Unit. However, contrary to popular belief, the CPU is not the entire desktop computer or router; the actual CPU is small enough to fit in the palm of your hand. Small as it is, the CPU is the most important component inside any computer.

That is because the central processing unit is the main driving force, or brain, of the computer, and the only component that does the actual thinking and decision-making. To do that, CPUs typically contain one or more cores that break up the workload and handle individual tasks. As each task requires data handling, a CPU must have access to the memory where that data actually resides, and for fast computing, the memory must be fast. This is generally RAM, or Random Access Memory, which, together with a generous amount of cache memory built into the CPU, helps the central processing unit complete tasks at high speed. However, RAM and cache can only store a small amount of data, so the CPU must periodically transfer the required data from external disk drives, which can hold much more of it.

Being processors, CPUs come in a large variety of ISAs, or Instruction-Set Architectures. ISAs can differ so much that software built for one ISA may not run on others. Even among CPUs using the same ISA, there may be differences in microarchitecture, that is, in the actual design of the CPU. Manufacturers use different microarchitectures to offer CPUs with various levels of performance, features, and efficiency.

A CPU with a single core is highly efficient at tasks that require a serial, sequential order of execution. To improve performance further, CPUs with multiple cores are available. While consumer chips typically offer up to eight cores, bigger server CPUs may offer anywhere from 32 to 128 cores. CPU designers also improve per-core performance by increasing the clock speed, thereby increasing the number of instructions per second that each core handles. This, again, depends on the microarchitecture.
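As a back-of-the-envelope illustration, peak throughput scales as cores times clock times instructions per cycle (IPC). The IPC figure below is a hypothetical value; real numbers vary with the microarchitecture and the workload:

```python
def instructions_per_second(cores: int, clock_hz: float, ipc: float) -> float:
    """Peak instruction throughput: cores x clock x IPC."""
    return cores * clock_hz * ipc

# Hypothetical consumer chip vs. server chip (IPC of 4 is an assumption).
consumer = instructions_per_second(cores=8, clock_hz=4.5e9, ipc=4.0)
server = instructions_per_second(cores=64, clock_hz=3.0e9, ipc=4.0)
print(f"consumer: {consumer:.2e} instr/s, server: {server:.2e} instr/s")
```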

Crafting CPUs is an incredibly intricate endeavor, navigated by only a select few experts worldwide. Noteworthy contributors to this field include industry giants like Intel, AMD, ARM, and RISC-V International. Intel and AMD, the pioneers in this arena, consistently engage in fierce competition, each striving to outdo the other in various CPU categories.

ARM, on the other hand, distinguishes itself by offering its proprietary ARM ISA, a technology it licenses to prominent entities such as Apple, Qualcomm, and Samsung. These licensees then leverage the ARM ISA to fashion bespoke CPUs, often surpassing the performance of the standard ARM cores developed by the parent company.

In a departure from the proprietary norm, RISC-V International promotes an open-standard approach with its RISC-V ISA. This innovative model allows anyone to freely adopt and modify the ISA, fostering a collaborative environment that encourages diverse contributions to CPU design.

To truly grasp how well a CPU performs, your best bet is to dive into reviews penned by fellow users and stack their experiences against your specific needs. This usually involves delving into numerous graphs and navigating through tables brimming with numbers. Simply relying on the CPU specification sheet frequently falls short of providing a comprehensive understanding.

Is Metal Better than Ferrite for Inductors?

Many power and signal conditioning applications use power inductors as a basic component to store, block, filter, or attenuate energy. Today's power circuits use increasingly higher switching frequencies and power levels, which impose packaging- and material-level challenges on component manufacturers. Consequently, power inductors must deliver higher rated currents even as their form factors shrink.

This presents a dual challenge to component manufacturers and designers alike. For instance, component designers must use materials other than traditional ferrite cores to miniaturize these devices while keeping parameters such as DCR and inductance unchanged. Taiyo Yuden is meeting the dynamic challenges of these applications by using metal for power inductors.

Engineers typically select power inductors primarily by inductance value, then by current rating and DCR, or DC resistance, followed by operating temperature range. They may also consider whether the inductor requires shielding. These parameters must be optimized for the application circuit that will use the inductor.

Applications of power inductors range from filtering EMI at the AC inputs of a power supply to filtering ripple at the output of a DC power supply. Inductors are indispensable for reducing voltage and current ripple at switching power supply outputs. DC-DC converters rely on the self-inductance of inductors to store energy: as the switching circuit turns off, the inductor discharges its stored energy into the load. Almost all types of voltage regulation circuits, such as power supplies, DC-DC converters, and switching circuits, take advantage of the characteristics of power inductors.
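As a sketch of the sizing math involved, the standard buck-converter relation estimates the peak-to-peak inductor ripple current as ΔI = V_out(1 − D)/(L·f_sw), with duty cycle D = V_out/V_in. The component values below are illustrative assumptions, not from the article:

```python
# Ripple-current estimate for a buck converter (illustrative values).
V_IN = 12.0    # input voltage, V
V_OUT = 1.2    # output voltage, V
F_SW = 1.0e6   # switching frequency, Hz
L = 470e-9     # inductance, H

duty = V_OUT / V_IN
ripple = V_OUT * (1 - duty) / (L * F_SW)
print(f"peak-to-peak inductor ripple current: {ripple:.2f} A")
```

The same relation shows the tradeoff the article describes: halving L to shrink the part doubles the ripple current, pushing the inductor toward saturation.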

Semiconductor power supplies are transitioning from 3.3 V rails at lower currents to 1 to 1.2 V rails at higher currents, catering to advances in chip design technology. This calls for power inductors that can handle high currents. Furthermore, the smaller enclosures that follow the development of smaller electronic components are increasing the demand for miniaturization of all associated parts, including the power inductor.

However, the size of a power inductor and its current capability present a tradeoff. Withstanding higher currents typically requires a bigger case size, resulting in a change in land patterns on PCBs. Conversely, a small case size limits the achievable inductance and lowers the saturation current. Taiyo Yuden uses a patented wire-wound multilayer construction with a unique metal alloy. This construction allows the designer to achieve both the required inductance in a small case size and a high saturation current.

Taiyo Yuden creates its multilayer inductors by printing a pattern on ceramic sheets containing ferrite. The sheets are laminated, pressure-bonded, and fired, and external electrodes are formed at both ends at the last stage. The use of a material with high magnetic permeability results in an inductor with a high inductance value.

The construction of wire-wound inductors follows the traditional method. The coil is either on the inside or on the outside surface of a magnetic material, such as ferrite. A high number of turns results in a higher inductance and a higher DC resistance.

What are Cold-Cathode Devices?

Some devices, like thermionic valves, contain a cathode that requires heating up before the device can work. However, other devices do not require a hot cathode to function. These devices have two electrodes within a sealed glass envelope that contains a low-pressure gas like neon. With a sufficiently high voltage applied to the electrodes, the gas ionizes, producing a glow around the negative electrode, also known as the cathode. Depending on the gas in the tube, the cathode glow can be orange (for neon), or another color. Since these devices do not require a hot cathode, they are known as cold-cathode devices. Based on this effect, scientists have developed a multitude of devices.

The simplest cold-cathode device is the neon lamp. Before the advent of LEDs, neon lamps were the go-to indicator lights. The neon gas ionizes at around 90 V, the lamp's strike or breakdown voltage. Once ionized, the gas continues to glow down to around 65 V, its maintain or sustain voltage. The difference between the strike and sustain voltages gives the device a negative-resistance region in its operating curve. Hence, users often build a relaxation oscillator from just a neon lamp, a capacitor, and a resistor.
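Here is a sketch of the timing for such a relaxation oscillator, using the usual RC charging relation: the capacitor charges through R toward the supply, the lamp fires at the strike voltage and discharges it back to the sustain voltage. Component values are illustrative:

```python
import math

# Approximate period of a neon-lamp relaxation oscillator.
V_SUPPLY = 120.0   # supply voltage, V
V_STRIKE = 90.0    # neon strike (breakdown) voltage, V
V_MAINTAIN = 65.0  # sustain voltage, V
R = 1.0e6          # charging resistance, ohms
C = 100e-9         # capacitance, F

# Time for the RC-charged capacitor to rise from V_MAINTAIN to V_STRIKE.
period = R * C * math.log((V_SUPPLY - V_MAINTAIN) / (V_SUPPLY - V_STRIKE))
print(f"period ~ {period*1e3:.1f} ms, frequency ~ {1/period:.1f} Hz")
```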

Another everyday use for the neon lamp is as a power indicator for the AC mains. In practice, as an AC power indicator, the neon lamp requires a series resistance of around 220 kΩ to 1 MΩ to limit the current through it, which also extends its life significantly. Since the electrodes in a neon lamp are symmetrical, using it in an AC circuit causes both electrodes to glow equally.

Neon signs, such as those in Times Square and Piccadilly Circus, use the same effect. Instead of the short tube of a neon lamp, neon signs use a long tube shaped into the specific design of the application. Depending on the display color, the tube may contain neon or another gas, together with a small amount of mercury. Applying a fluorescent phosphor coating to the inside of the glass tube produces still more colors. Due to the significant separation between their two electrodes, neon signs require a high strike voltage of around 30 kV.

Another application of cold-cathode devices is the popular Nixie tube. Although seven-segment LED displays have now largely replaced them, Nixie tubes remain popular; each is, in effect, a glorified neon lamp. Typically, a Nixie tube has ten electrodes, each shaped like a numeral. In use, the circuit switches on the electrode required to display a particular number. The Nixie tube produces a very natural-looking display, which is why many people find it more beautiful than the stick-like seven-segment LED display.

Photographers still use flash tubes to illuminate the scenes they capture, typically as camera flashes and strobes. Flash tubes are filled with xenon gas. Apart from the two main electrodes, flash tubes have a smaller trigger electrode near one or both main electrodes. In use, the main electrodes have a few hundred volts between them. To trigger, the circuit applies a high-voltage pulse to the trigger electrode. This causes the gas between the two main electrodes to ionize rapidly, giving off a bright white flash.

Sensors at the Heart of IoT

IoT, or the Internet of Things, depends on sensors. So much so that there would be no IoT, IIoT, or, for that matter, any type of Industry 4.0 at all without sensors. As the same factors apply to all three, we will use IoT as a simplification. First, however, some basic definitions.

As a simple, general definition, IoT involves devices intercommunicating with useful information. As their names suggest, for IIoT and Industry 4.0, these devices are mainly located in factories. While IIoT is a network of interconnected devices and machines on a plant floor, Industry 4.0 goes a step further. Apart from incorporating IIoT, Industry 4.0 expands on the network, including higher level systems as well. This allows Industry 4.0 to process and analyze data from IIoT, while using it for a wider array of functions, including looping it back into the network for control.

However, the entire network rests on sensors, which supply it with the necessary raw data. Typically, the output from sensors is in the form of analog electrical signals, and this is where the fundamental distinction between data and information arises in IoT.

This distinction is easier to explain with an example. A temperature sensor, say a thermistor, shows an electrical resistance that varies with temperature. However, that resistance is raw data, in ohms. It has no meaning for us until we can correlate it to degrees.

Typically, we measure the resistance with a bridge circuit, effectively converting the resistance to a voltage. Next, we apply the derived voltage to measuring equipment that we have calibrated to display the voltage as degrees. This way, we have converted data into information useful to us humans. Alternatively, we can use the derived voltage to control an electric heater or inform a predictive maintenance system of the temperature of a motor.
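A minimal sketch of this data-to-information chain, assuming a simple voltage divider in place of a full bridge and a thermistor that obeys the Beta equation; the component values are typical assumptions, not from the text:

```python
import math

V_SUPPLY = 3.3   # divider supply, V
R_FIXED = 10e3   # fixed divider resistor, ohms
R0 = 10e3        # thermistor resistance at 25 C, ohms
T0 = 298.15      # reference temperature, K
BETA = 3950.0    # thermistor Beta constant, K

def divider_voltage(r_thermistor: float) -> float:
    """Raw data: the divider converts resistance to a voltage."""
    return V_SUPPLY * r_thermistor / (R_FIXED + r_thermistor)

def temperature_c(r_thermistor: float) -> float:
    """Information: the Beta equation maps resistance to degrees Celsius."""
    inv_t = 1.0 / T0 + math.log(r_thermistor / R0) / BETA
    return 1.0 / inv_t - 273.15

r = 8000.0  # example thermistor reading, ohms
print(f"{divider_voltage(r):.2f} V -> {temperature_c(r):.1f} C")
```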

But information, once we have derived it from raw data, has almost endless uses. This is the realm of IoT, intercommunicating useful information among devices.

To be useful for IoT, the analog data from a sensor must be converted to digital form. Typically, the electronics required for this is an ADC, or Analog-to-Digital Converter. With IoT applications growing rapidly, users are also speeding up their networks to handle ever larger amounts of data while making them more power efficient.

Scientists have developed a method for handling large amounts of data that does not require IoT devices to have large amounts of memory. The devices send their data over the internet to external data centers, the cloud, where other computers handle the storage and analysis. However, this requires higher bandwidth and introduces latency.

This is where the smart sensor makes its entry. Smart sensors share the workload. A sensor is deemed smart when it is embedded within a package that has electronics for preprocessing, such as for signal conditioning, analog to digital conversion, and wireless transmission of the data. Lately, smart sensors are also incorporating AI or Artificial Intelligence capabilities.

What is Industrial Ethernet?

Earlier, the manufacturing industry went through a paradigm shift. This was Industry 3.0: based on information technology, it boosted automation, enhanced productivity, improved precision, and allowed higher flexibility. Today, we are at the foothills of Industry 4.0, with ML, or machine learning, M2M, or machine-to-machine communication, and smart technology like AI, or artificial intelligence. There is a major difference between the two. While Industry 3.0 offered information to humans, allowing them to make better decisions, Industry 4.0 uses digital information to optimize processes, mostly without human intervention.

With Industry 4.0, it is possible to link the design office directly to the manufacturing floor. For instance, using M2M communications, CAD, or computer-aided design, systems can communicate directly with machine tools, programming them to make the necessary parts. Similarly, machine tools can provide feedback to CAD, reporting challenges in the production process so that designs can be modified for easier fabrication.

Manufacturers use the Industrial Internet, or IIoT, the Industrial Internet of Things, to build their Industry 4.0 solutions. The network plays an important role in forming feedback loops: sensors monitor processes in real time, and the data they collect can effectively control and enhance the operation of the machines.

However, implementing IIoT is not simple. One of the biggest challenges is the cost of investment, although it can be justified by better design and manufacturing processes that lead to cost savings through increased productivity and fewer product failures. In fact, reducing capital outlays is one way to accelerate the adoption of Industry 4.0. Another is to use a relatively inexpensive but proven and accessible communication technology, like Ethernet.

Ethernet is a wired networking option in wide use all over the world. It has good IP interoperability and huge vendor support. Moreover, PoE, or Power over Ethernet, uses the same set of cables to carry both data and power to connected cameras, actuators, and sensors.

Industrial Ethernet, using rugged cables and connectors, builds on the consumer version of the Ethernet, thereby bringing a mature and proven technology to industrial automation. With the implementation of Industrial Ethernet, it is possible to not only transport vital information or data, but also remotely supervise machines, controllers, and PLCs on the shop floor.

The standard Ethernet protocol has high and unpredictable latency, mainly because lost packets must be retransmitted. This makes it unsuitable for rapidly moving assembly lines that must run in synchronization. Industrial Ethernet hardware, on the other hand, uses deterministic, low-latency industrial protocols such as PROFINET, Modbus TCP, and EtherNet/IP.

For Industrial Ethernet deployments, the industry uses hardened versions of CAT 5e cable; Gigabit Ethernet uses CAT 6 cable. A CAT 5e cable has eight wires formed into four twisted pairs. The twisting limits crosstalk and signal interference, and each pair supports a duplex connection. Gigabit Ethernet, being a high-speed system, uses all four pairs for carrying data. Lower-throughput systems can use two twisted pairs for data and the other two for carrying power or conventional phone service.

What are Olfactory Sensors?

We depend on our five senses to help us understand the world around us. Each of the five senses—touch, sight, smell, hearing, and taste—contributes individual information to our brains, which then combines them to create a better understanding of our environment.

Today, with the help of technology like ML, or machine learning, and AI, or Artificial Intelligence, we can make complex decisions with ease. ML and AI also empower machines to better understand their surroundings. Equipping them with sensors only augments their information-gathering capabilities.

So far, most sensory devices, like proximity and light-based ones, remain limited because they need physical contact or a clear line of sight to function correctly. However, with today's technology trending towards higher complexity, it is difficult to rely solely on such simple sensing technology.

Olfaction, or the sense of smell, functions by chemically analyzing low concentrations of molecules suspended in the air. The biological nose has receptors for this activity, which, on encountering these molecules, transmit signals to the parts of the brain that are responsible for the detection of smell. A higher concentration of receptors means higher olfaction sensitivity, and this varies between species. For instance, compared to the human nose, a dog’s nose is far more sensitive, allowing a dog to identify chemical compounds that humans cannot notice.

Humans have recognized this superior olfactory ability in dogs and put it to various tasks. One advantage of olfaction over sight is that it does not rely on a line of sight for detection: it is possible to detect odors from objects that are obscured or hidden. This means olfactory sensor technology can work without requiring invasive procedures, making olfactory sensors ideally suited to a range of applications.

With advanced technology, scientists have developed artificial smell sensors to mimic this extraordinary natural ability. The sensors can analyze chemical signatures in the air, and thereby unlock newer levels of safety, efficiency, and early detection in places like the doctor’s office, factory floors, and airports.

The healthcare industry holds the most exciting applications for olfactory sensors, because medical technology depends on early diagnosis for the most effective clinical outcomes. Conditions like diabetes and cancer cause detectable olfactory changes in the body's chemistry. Olfactory sensors can pick up these changes in body odor non-invasively, providing an early diagnosis that can significantly improve the chances of effective treatment and recovery.

The industry is also adopting olfactory sensors. Industrial processes often produce hazardous byproducts. With olfactory sensors around, it is easy to monitor chemical conditions in the air and highlight the buildup of harmful gases that can be dangerous beyond a certain level.

As the sense of smell does not require physical contact, it is ideal for detection in large spaces. For instance, olfactory sensors are ideal for airport security, where they can collect information about passengers and their belongings as they pass by. All they need is a database of chemical signatures along with processing power to analyze many samples in real-time.
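As a hedged sketch of what such signature matching could look like, the snippet below compares a sampled chemical signature vector against a small database using cosine similarity. The signature vectors and compound names are hypothetical placeholders, not a real sensor's output:

```python
import numpy as np

# Hypothetical database of chemical signature vectors (one per compound).
SIGNATURES = {
    "acetone": np.array([0.9, 0.1, 0.3, 0.0]),
    "ammonia": np.array([0.1, 0.8, 0.2, 0.4]),
}

def best_match(sample: np.ndarray) -> str:
    """Return the database entry most similar to the sampled signature."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(SIGNATURES, key=lambda name: cosine(sample, SIGNATURES[name]))

print(best_match(np.array([0.85, 0.15, 0.25, 0.05])))  # -> "acetone"
```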

High-Voltage TVS Diodes as IGBT Active Clamp

Most high-voltage applications, like power inverters, modern electric vehicles, and industrial control systems, use IGBTs, or Insulated Gate Bipolar Transistors, as they offer high-efficiency switching. However, as power densities constantly rise in today's electronics, systems are subjected to greater demands, necessitating newer methods of control. Littelfuse has developed new TVS diodes as an excellent choice for protecting circuits against the overvoltages that occur when IGBTs turn off.

Most electronic modules and converter circuits contain parasitic inductances that are practically impossible to eliminate, and their influence on the system's behavior cannot be ignored. During commutation, the current changes rapidly as the IGBT turns off. This produces a high voltage overshoot at its collector terminal.
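The overshoot follows the familiar relation V = L·di/dt across the stray inductance. A quick worked example with illustrative values, not figures from Littelfuse material:

```python
# Voltage overshoot across a parasitic inductance at IGBT turn-off.
L_STRAY = 60e-9   # stray inductance, H (illustrative)
DI = 400.0        # current being switched off, A (illustrative)
DT = 100e-9       # turn-off time, s (illustrative)

overshoot = L_STRAY * DI / DT
print(f"collector voltage overshoot ~ {overshoot:.0f} V")  # ~240 V
```

Even a few tens of nanohenries can add hundreds of volts on top of the link voltage, which is why the clamp described below is needed.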

The turn-off gate resistance of the IGBT, in principle, affects the speed of commutation and the turn-off voltage. Engineers typically use this technique at lower power levels. However, they must match the turn-off gate resistance for overload conditions, short circuits, and temporary increases in the link circuit voltage. In regular operation, suppressing the overshoot voltage this way typically increases the switching losses and turn-off delays of the IGBTs, reducing the usability and efficiency of the module. Therefore, high-power modules cannot use this simple technique.

This problem has led to the development of two-stage turn-off, with slow-turn-off and soft-switch-off driver circuits that operate with a switchable gate resistance. In regular operation, the IGBT turns off through a gate resistor of low ohmic value, which minimizes switching losses. For handling surge currents or short circuits, the driver switches to a high-ohmic gate resistor. However, this also means that normal and fault conditions must be detected reliably.

Traditionally, the practice is to use an active clamp diode to protect the semiconductor during a transient overload. The high voltage causes a current to flow through the diode until the voltage transient dissipates. This also means the clamping diode is not subjected to recurrent pulses during normal operation. Repetitive operation is limited by the IGBT and its driver power, as both absorb the excess energy. With an active clamp, the collector potential is fed back directly to the gate of the IGBT via an element with an avalanche characteristic.

The clamping element forms the feedback branch. Typically, it is made up of a series string of TVS, or Transient Voltage Suppression, diodes. When the collector-emitter voltage of the IGBT exceeds the breakdown voltage of the clamping diodes, a current flows through the feedback branch into the gate of the IGBT. This raises the gate potential, reducing the rate of change of current at the collector and stabilizing the condition. The design of the clamping diode then determines the voltage across the IGBT.

As the IGBT operates in the active range of its output characteristics, the energy stored in the stray inductance is converted to heat. The clamping process continues until the stray inductance is demagnetized. Either several low-voltage TVS diodes in series or a single TVS diode rated for high voltage can provide the active clamping solution.

E-Fuse Future Power Protection

High-voltage eMobility applications are on the rise. Traditionally, these have used non-resettable fuses, and sometimes mechanical relays or contactors. That is now changing: semiconductor-based resettable fuses, or eFuses, are replacing traditional fuses.

These innovative eFuses represent a significant trend in safeguarding hardware and users in high-voltage, high-power scenarios. Vishay has announced a reference design for an eFuse that can handle high power loads. The new eFuse is equipped with SiC MOSFETs and a VOA300 optocoupler, a combination that can handle up to 40 kW of continuous power load. The design operates at full power with losses of less than 30 W, without active cooling. The eFuse incorporates essential features like continuous current monitoring, a preload function, and rapid overcurrent protection.

Vishay has designed the eFuse to manage the safe connection and disconnection of a high-voltage power source. For instance, the eFuse can safely connect or disconnect various vehicle loads to and from a high-energy battery pack. The eFuse uses SiC MOSFETs as its primary switches, capable of continuous operation at up to 100 A. The user can predefine a current limit; when the current exceeds this limit, the eFuse rapidly disconnects the load from the power source, safeguarding the user and the power source or battery pack. In addition, a short circuit or an excessive load capacitance during power-up causes the eFuse to initiate an immediate shutdown.
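Conceptually, the overcurrent path reduces to comparing the shunt-resistor current against the predefined limit and opening the back-to-back MOSFETs on a violation. The sketch below is illustrative logic only, not Vishay's actual firmware:

```python
# Conceptual eFuse overcurrent check: trip when the measured shunt
# current exceeds the user-defined limit (100 A used as an example).
CURRENT_LIMIT_A = 100.0

def check_overcurrent(shunt_current_a: float) -> bool:
    """Return True when the eFuse should trip, i.e. open the
    back-to-back SiC MOSFETs and disconnect the load."""
    return shunt_current_a > CURRENT_LIMIT_A

for sample in (60.0, 95.0, 130.0):  # simulated shunt readings, A
    print(f"{sample:>6.1f} A ->", "TRIP" if check_overcurrent(sample) else "ok")
```

In a real design the comparison happens in fast analog or hardware logic rather than a polled software loop, so the trip time is not limited by firmware latency.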

The basic design of the eFuse is a four-layer, double-sided PCB, or printed circuit board, of 150 mm x 90 mm. Each layer has 70 µm thick copper, as against 35 µm for regular PCBs. The board has some connectors extending beyond its edges. The top side of the PCB carries all the high-voltage circuitry, control buttons, status LEDs, multiple test points, and connectors; the bottom side carries the low-voltage control circuitry. It is also possible to control the eFuse remotely via a web browser.

To ensure safety, the user must first enable the low-voltage power supply, and only then the high-voltage power supply on the input. For input voltages exceeding 50 V, an LED indicator lights up on the board. Vishay has used two sets of six SiC MOSFETs, with three connected in parallel in a back-to-back configuration, ensuring the eFuse can handle current flow in both directions. A current-sensing shunt resistor, Vishay WSLP3921, monitors the current flowing to the load; Vishay has positioned it strategically between the two parallel sets of MOSFETs.

Vishay has incorporated convenient control options in the eFuse. Users can operate it via the push buttons on the PCB or through the external Vishay MessWEB controller; either way unlocks an expanded array of features. Alternatively, the user can integrate the eFuse seamlessly into a CAN bus-based system by using an additional chipset in conjunction with the MessWEB controller. Vishay claims to have successfully tested its reference eFuse design.