Author Archives: Andi

IoT Sensor Design

Designers are progressively integrating electronic components into nearly every system possible, thereby imbuing these systems with a degree of intelligence. Nevertheless, to meet the intelligence requirements of diverse business applications, especially in healthcare, consumer, industrial, and building environments, there is a growing need to incorporate a multitude of sensors.

These sensors now have a common name—IoT or Internet of Things sensors. Typically, these must be of a diverse variety, especially if they are to minimize errors and enhance insights. As sensors gather data through sensor fusion, users build ML or Machine Learning algorithms and AI or Artificial Intelligence around sensor fusion concepts. They do this for many modern applications, which include advanced driver safety and autonomous driving, industrial and worker safety, security, and audience insights.

Other capabilities are also emerging. These include TSN or time-sensitive networking, with high-reliability, low-latency, and network-determinism features. These are evident in the latest wireless communication devices conforming to modern standards for Wi-Fi and 5G. To implement these capabilities, sensor modules must offer ultra-low latency at high throughput. Without reliable sensor data, it is practically impossible to implement these features.

Turning any sensor into an IoT sensor requires effectively digitizing its output while deploying the sensor alongside communication hardware and placing the combination in a location suitable for gathering useful data. This is the typical use case for sensors in an industrial location, suitable for radar, proximity sensors, and load sensors. In fact, sensors are now tracking assets like autonomous mobile robots working in facilities.

IoT system developers and sensor integrators are under increasing pressure to reduce integration errors through additional processing circuits. Another growing concern is sensor latency. Users are demanding high-resolution data with timing accurate to hundreds of nanoseconds, especially in proximity sensing, following the rapid growth of autonomous vehicles and automated robotics.

Such new factors are leading to additional considerations in IoT sensor design. Two key trends in the design of sensors are footprint reduction and enhancing their fusion capabilities. As a result, designers are integrating multiple sensors within a single chip. This is a shift towards a new technology known as SoC or system-on-chip.

Manufacturers are also using MEMS technology to fabricate sensors for position and inertial measurements, such as gyroscopes and accelerometers. Although MEMS has the advantage of fabrication in a semiconductor process alongside digital circuits, there are sensors for which this technology is not viable.

Magnetic sensors, high-frequency sensors, and others need ferromagnetic materials, metastructures, or other exotic semiconductors. Manufacturers are investing substantially in developing these sensor technologies as SiP or system-in-package modules with 2D or 2.5D structures, optimizing them for constrained spaces and integrating them tightly to reduce delays.

Considerations for modern sensor design also include efforts to reduce intrinsic errors that affect many sensor types like piezoelectric sensors. Such sensors are often prone to RF interference, magnetic interference, electrical interference, oscillations, vibration, and shock. Designers mitigate the effect of intrinsic errors through additional processing like averaging and windowing.
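Averaging is straightforward to sketch in software. Below is a minimal illustration of how a moving-average filter suppresses random interference on a sensor trace; the signal, noise level, and window size are all invented for the example:

```python
import numpy as np

def moving_average(samples, window=8):
    """Suppress random interference by averaging consecutive samples."""
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode="valid")

# Simulated sensor trace: a slow 2 Hz signal buried in random noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * 2 * t)
raw = clean + rng.normal(0.0, 0.3, t.size)

smoothed = moving_average(raw)
print(f"std before: {np.std(raw):.3f}, after: {np.std(smoothed):.3f}")
```

Averaging over 8 samples reduces the noise variance roughly eightfold while barely attenuating the slow signal, which is why it is a common first line of defense against intrinsic sensor errors.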

The above trends are only the tip of the iceberg. There are many other factors influencing the growing sensor design complexity and the need to accommodate better features.

What is DFMEA?

If you are just entering the world of design, you will have to face a session of DFMEA at some time or other. DFMEA is an acronym for Design Failure Mode and Effects Analysis. In recent years, companies have adopted DFMEA, a subset of FMEA or failure mode and effects analysis, as a valuable tool. It helps engineers spot potential risks in product design before they make any significant investments.

Engineers use DFMEA as a systematic tool for mapping the early-warning system of a product. They use it to make sure the product not only functions as they intend it to but also keeps users happy. It is like taking a peek into the future, catching any design flaws before they cause major damage. Simply put, DFMEA helps check the overall design of products and components, figuring out anything that might go wrong and the way to fix it. This tool is especially useful in manufacturing industries, where preventing failure is critical.

To use DFMEA effectively, the designer must look for potential design failures, observing them from all angles. Here is how they do it.

They first look for a failure mode, which essentially means how the design could possibly fail. For instance, your computer might freeze up when you open too many programs, which is one mode or type of failure.

Then they look for why the failure mode should happen. This could be due to a design defect, or a defect in the quality, system, or application of the part.

Next, the designers look for an effect of the failure. That is, what happens when there is a failure. In our example, a frozen computer can lead to a frustrated user.

In the last stage, designers look for the severity of the failure. They estimate how bad the failure could be for safety, quality, and productivity. Designers typically look for the worst-case scenarios.

To put it in a nutshell, DFMEA helps engineers figure out not only potential issues, but also the consequences of the failures. This way, they can prevent failures from happening in the first place.
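In practice, FMEA teams often quantify these steps with a Risk Priority Number, multiplying ratings for severity, occurrence, and detectability. The sketch below assumes the conventional 1-to-10 rating scales and uses invented failure modes and ratings built on the frozen-computer example:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: each factor rated 1 (best) to 10 (worst)."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be between 1 and 10")
    return severity * occurrence * detection

# Hypothetical failure modes with made-up (severity, occurrence, detection) ratings.
failure_modes = [
    ("UI freezes with too many programs open", 7, 5, 4),
    ("Data loss on forced restart", 9, 2, 6),
    ("Slow startup", 3, 6, 2),
]

# Rank the modes so the team addresses the riskiest design flaw first.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}: {name}")
```

The ranking is what feeds the last stage described above: the team works down the list, finding mitigations for the highest-scoring failure modes first.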

However, DFMEA is never a one-man show. Rather, it is a team effort. Typically, the team has about 4 to 6 members who are fully knowledgeable about the product, and it is led by a product design engineer. The team members could include engineers with a materials background, and others from product quality, testing, and analysis. There may also be people from other departments, like logistics, service, and production.

DFMEA is an essential tool in any design process. However, it is a crucial tool in industries handling new products and technology. This includes industries such as software, healthcare, manufacturing, industrial, defense, aerospace, and automotive. DFMEA helps them locate potential failure modes, reducing risks involved with introducing new technologies and products.

The entire DFMEA exercise is a step-by-step process, and the team must think through each step thoroughly before they move on to the next. It is essential they look for and identify the failure, and find out its consequences, before finding ways to prevent it from happening.

What is Voice UI?

Although we usually talk to other humans, our interactions with inanimate objects are almost always silent. That is, until the advent of the Voice User Interface, or Voice UI or VUI, technology. Now, Voice UI has broken this silent interaction between humans and machines. Today, we have virtual assistants and voice-controlled devices like Siri, Google Assistant, Hound, Alexa, and many more. Most people who own a voice-controlled device say it is like talking to another person.

So, what is Voice UI? The Voice UI technology has made it possible for humans to interact with a device or an application through voice commands. With our ever-increasing use of digital devices, screen fatigue is something we have all experienced. This has led to the development of the voice user interface. The advantages are numerous, primarily hands-free operation and control over the device or application without having to stare at a screen. Five of the world's leading companies, Amazon, Google, Microsoft, Apple, and Facebook, have each developed their respective voice-activated AI assistants and voice-controlled devices.

Whether it is a voice-enabled mobile app, an AI assistant, or a voice-controlled device like a smart speaker, voice interactions and interfaces have become incredibly common. For instance, according to a report, 25% of adults in the US own a smart speaker, and 33% of the US population use their voice for searching online.

How does this technology work? Well, under the hood, there are several Artificial Intelligence technologies at work, such as Automatic Speech Recognition, Named Entity Recognition, and Speech Synthesis. The VUI speech components and the backend infrastructure are backed by AI technologies and typically reside in a public or private cloud. It is here that the VUI processes the speech and voice of the user. After deciphering and translating the user’s intent, the AI technology returns a response to the device.
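The flow can be pictured as a three-stage pipeline. The functions below are stand-ins (a real system would call cloud speech-recognition, language-understanding, and speech-synthesis services); only the structure, recognition followed by intent extraction followed by a response, reflects the description above:

```python
def recognize_speech(audio: bytes) -> str:
    """Stand-in for Automatic Speech Recognition: audio in, transcript out."""
    return "turn on the kitchen lights"  # a canned transcript for illustration

def extract_intent(transcript: str) -> dict:
    """Stand-in for intent and named-entity extraction."""
    words = transcript.split()
    return {"action": " ".join(words[:2]), "entity": " ".join(words[2:])}

def synthesize_response(intent: dict) -> str:
    """Stand-in for speech synthesis: returns the reply to speak."""
    return f"OK, {intent['action']} {intent['entity']}."

def voice_ui_pipeline(audio: bytes) -> str:
    """Recognize the speech, decipher the intent, and return a response."""
    transcript = recognize_speech(audio)
    intent = extract_intent(transcript)
    return synthesize_response(intent)

print(voice_ui_pipeline(b"<microphone samples>"))
```

In a deployed VUI, each stage would run in the public or private cloud mentioned above, with only the captured audio and the final response crossing the network.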

The above is the basics of Voice UI technology, albeit in a nutshell. For a better user experience, most companies also include additional sound effects and a graphical user interface. The sound effects and visuals help the user know whether the device is listening, processing the request, or responding.

Today Voice UI technology is widespread, and it is available in many day-to-day devices like smartphones, desktop computers, laptops, wearables, smartwatches, smart TVs, sound systems, smart speakers, and Internet of Things devices. However, everything has advantages and disadvantages.

First, the advantages. VUI is faster than having to type the commands in text, and more convenient. Not many are comfortable typing commands, but almost all can use their voice to request a task from the VUI device. Voice commands, being hands-free, are useful while cooking or driving. Moreover, you do not need to face or look at the device to send voice commands.

Next, the disadvantages. There are privacy concerns, as a person nearby can overhear your commands. AI technology is still in its infancy and is prone to misinterpretation or inaccuracy, especially when differentiating homophones like ‘their’ and ‘there’. Moreover, voice assistants may find it difficult to decipher commands in noisy public places.

What is UWB Technology?

UWB is the acronym for Ultra-Wideband, a 132-year-old communications technology. Engineers are revitalizing this old technology for connecting wireless devices over short distances. Although more modern technologies like Bluetooth are available for the purpose, industry observers are of the opinion that UWB can prove more versatile and successful than Bluetooth. According to them, UWB has superior speed, uses less power, is more secure, provides superior device ranging and location discovery, and is cheaper than Bluetooth.

Therefore, companies are researching and investing in UWB technology. This includes names like Xtreme Spectrum, Bosch, Sony, NXP, Xiaomi, Samsung, Huawei, Apple, Time Domain, and Intel. Apple is already using UWB chips in its iPhone 11, allowing it to obtain superior positioning accuracy and ranging through time-of-flight measurements.

Marconi’s first man-made radio using spark-gap transmitters used UWB for wireless communication. The government banned UWB signals for commercial use in 1920. However, since 1992, the scientific community started paying greater attention to the UWB technology.

UWB or Ultra-Wideband technology offers a protocol for short-range wireless communications, similar to what Wi-Fi or Bluetooth offer. It uses short-pulse radio waves over a spectrum of frequencies ranging from 3.1 to 10.6 GHz and does not require licensing for its applications.

In UWB, the bandwidth of the signal is equal to or larger than 500 MHz, or the fractional bandwidth around the center frequency is greater than 20%. Compared to conventional narrowband systems, the very wide bandwidth of UWB signals leads to superior performance indoors. This is because the wide bandwidth offers significantly greater immunity from channel effects in dense environments. It also allows very fine time-space resolution, resulting in highly accurate indoor positioning of UWB devices.
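The two thresholds in this definition are easy to check numerically. The helper below is a sketch; the 3.1 to 10.6 GHz figures are the commonly cited FCC allocation, and the narrowband example channel is invented for contrast:

```python
def fractional_bandwidth(f_low_hz, f_high_hz):
    """Bandwidth relative to the center frequency: 2(fH - fL) / (fH + fL)."""
    return 2 * (f_high_hz - f_low_hz) / (f_high_hz + f_low_hz)

def is_uwb(f_low_hz, f_high_hz):
    """UWB if absolute bandwidth >= 500 MHz, or fractional bandwidth > 20%."""
    return (f_high_hz - f_low_hz) >= 500e6 or \
           fractional_bandwidth(f_low_hz, f_high_hz) > 0.20

# The full 3.1-10.6 GHz band easily qualifies on both counts ...
print(fractional_bandwidth(3.1e9, 10.6e9), is_uwb(3.1e9, 10.6e9))
# ... while a 20 MHz channel at 2.4 GHz meets neither threshold.
print(is_uwb(2.402e9, 2.422e9))
```

A signal occupying the whole band has a fractional bandwidth above 100%, which is what gives UWB its fine time resolution for ranging.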

As its spectral density is low, often below environmental noise, UWB ensures the security of communications with a low probability of signal detection. UWB allows transmission at high data rates over short distances. Moreover, UWB systems can comfortably co-exist with other narrowband systems already under deployment. UWB systems allow two different approaches for data transmission.

The first approach uses ultra-short pulses, often called impulse radio transmission, in the picosecond range, covering all frequencies simultaneously. The second approach uses OFDM or orthogonal frequency division multiplexing to subdivide the entire UWB bandwidth into a set of broadband channels.

While the first approach is cost-effective, there is a degradation of the signal-to-noise ratio. Impulse radio transmission does not involve a carrier; therefore, it uses a simpler transceiver architecture than traditional narrowband transceivers. For instance, the UWB antenna radiates the signal directly. An easy-to-generate UWB pulse is the Gaussian monocycle or one of its derivatives.
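As a sketch of that idea, the snippet below generates a peak-normalized Gaussian monocycle (the first derivative of a Gaussian) on a picosecond-scale time grid; the 50 ps pulse width is an arbitrary choice for illustration:

```python
import numpy as np

def gaussian_monocycle(t, tau):
    """First derivative of a Gaussian pulse, normalized to a peak of 1."""
    pulse = -(t / tau) * np.exp(-(t ** 2) / (2 * tau ** 2))
    return pulse / np.max(np.abs(pulse))

tau = 50e-12                              # ~50 ps pulse width (illustrative)
t = np.linspace(-300e-12, 300e-12, 121)   # symmetric picosecond time grid
pulse = gaussian_monocycle(t, tau)

# The monocycle is zero-mean (no DC component), so it suits direct
# carrier-free radiation from the UWB antenna.
print(f"peak: {pulse.max():.3f}, mean: {pulse.mean():.2e}")
```

Such a pulse occupies a very wide band in one shot, which is exactly the impulse-radio approach described above.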

The second approach offers better performance, as it uses the spectrum significantly more effectively. Although the complexity is higher because the system requires more signal processing, it substantially improves the data throughput. However, the higher performance comes at the expense of higher power consumption. The application defines the choice between the two approaches.

Touch-sensing HMI

The key to the consumer appeal of wearable devices lies in their touch-sensing HMI or human-machine interface, which provides an intuitive and responsive way of interacting via sliders and touch buttons. Wearable devices include earbuds, smart glasses, and smartwatches with a small touchscreen.

Intense competition exists in the market for such wearable devices, continually driving innovation. The two major features over which manufacturers typically battle for supremacy, and which matter particularly to consumers, are run time between battery charges and form factor. Consumers typically demand a long run time between charges, and they want a balance between convenience, comfort, and a plethora of features, along with a sleek and attractive design. This is a considerable challenge for designers and manufacturers.

For instance, while the user can turn off almost all functions in a wearable device like a smartwatch for long periods between user activity, the touch-sensing HMI must always remain on. This is because the touch intentions of the user are randomly timed. They can touch-activate their device any time they want to—there is no pattern that allows the device to know in advance when the user is about to touch-activate it.

Therefore, the device must continuously scan to detect a touch for the entire time it is powered up, leading to power consumption by the HMI subsystem, even during the low-power mode. The HMI subsystem is, therefore, a substantial contributor to the total power consumed by the device. Reducing the power consumption of the touch system can result in a substantial increase in the run-time between charges of the device.

Most wearable devices use the touch-sensing HMI as the typical method for waking up from a sleep state. These devices generally conserve power with a low-power touch-detect function that operates in a deep-sleep mode. In this mode, scanning takes place at a low refresh rate, just sufficient to detect a touch event. In some devices, the user may be required to press and hold a button or tap the screen momentarily to wake the device.

In such cases, the power-consumption optimization and the amount of power saved depend significantly on how slowly it is possible to refresh the sensor. Therefore, there is always a tradeoff between a quick response to user touch and the power consumed by the device. Moreover, touch HMI systems are notorious for the substantial amount of power they consume.

Commercial touch-sensing devices typically use microcontrollers. Their architecture mostly has a CPU with volatile and non-volatile memory support, an AFE or analog front-end to interface the touch-sensing element, digital logic functions, and I/Os.

The scanning operation typically involves CPU operation for initializing the touch-sensing system, configuring the sensing element, scanning the sensor, and processing the results to determine if a touch event has occurred.

In low-power mode, the device consumes less power as the refresh rate of the system reduces. This leads to fewer scans occurring each second, only just enough to detect if a touch event has occurred.
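The effect of the refresh rate on average current can be estimated with a simple duty-cycle model. All the numbers below (sleep current, scan current, scan time) are hypothetical, chosen only to show the latency-versus-power tradeoff described above:

```python
def average_current_ua(sleep_ua, scan_ua, scan_ms, refresh_hz):
    """Average supply current for a duty-cycled touch scan, in microamps."""
    duty = min(1.0, scan_ms * 1e-3 * refresh_hz)  # fraction of time spent scanning
    return scan_ua * duty + sleep_ua * (1.0 - duty)

# Hypothetical controller: 2 uA asleep, 2000 uA while scanning, 1 ms per scan.
for refresh_hz in (100, 20, 5):
    i_avg = average_current_ua(2.0, 2000.0, 1.0, refresh_hz)
    print(f"{refresh_hz:3d} Hz refresh -> {i_avg:6.1f} uA average, "
          f"worst-case touch response {1000 // refresh_hz} ms")
```

Dropping the refresh rate from 100 Hz to 5 Hz cuts the average current by more than an order of magnitude in this model, at the cost of a touch response that can lag by up to 200 ms.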

Ultrasonic Sensors in IoT

Employing ultrasonic sensors for sensing tasks has long been standard practice, mainly due to their exceptional capabilities, low cost, and flexibility. With IoT or the Internet of Things now virtually entering most industries and markets, one can find ultrasonic sensors in newer applications in healthcare, industrial settings, and smart offices and homes.

As their name suggests, ultrasonic sensors function using sound waves, specifically those beyond the hearing range of humans. These sensors typically send out chirps, or small bursts of sound, in the range of 23 kHz to 40 kHz. As these chirps bounce back from nearby objects, the sensor detects them. It tracks the time taken by the chirp for a round trip and thereby calculates the distance to the object based on the speed of sound.
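That round-trip calculation is simple to sketch. The helper below assumes the standard approximation for the speed of sound in air and halves the echo time to get the one-way distance:

```python
def echo_distance_m(round_trip_s, temperature_c=20.0):
    """Distance to an object from an ultrasonic echo's round-trip time.

    The speed of sound in air is roughly 331.3 + 0.606 * T(degC) m/s.
    """
    speed_of_sound = 331.3 + 0.606 * temperature_c
    return speed_of_sound * round_trip_s / 2.0  # halve: sound travels out and back

# A chirp that echoes back after 5.8 ms at 20 degC came from about 1 m away.
print(f"{echo_distance_m(5.8e-3):.3f} m")
```

The temperature term matters in practice: the same echo time reads several centimetres differently between a cold warehouse and a warm office, which is why many sensors compensate for ambient temperature.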

There are several benefits from using ultrasonic sensors, the major one being very accurate detection of the object. The effect of material is also minimal: because the sensor uses sound waves rather than electromagnetic waves, the transparency or color of the object has little effect on the readings. This also means that apart from detecting solid objects, ultrasonic sensors are equally good at detecting gases and liquid levels.

As ultrasonic sensors do not depend on or produce light during their operation, they are well-suited for applications with variable light conditions. With their relatively small footprints, low cost, and high refresh rates, ultrasonic sensors are well-established alternatives to other technologies, like inductive, laser, and photoelectric sensors.

According to a recent study, the smart-office market will likely reach US$90 billion by 2030. This is mainly due to a surging demand for sensor-based networks, brought about by the need for safety and advancements in technology. Ultrasonic sensors will be playing an expanded role due to industry and local regulations supporting increased energy efficiency for automating different processes around the office.

A prime example of this is lighting and HVAC control in offices. Ultrasonic sensors are adept at detecting occupied rooms in offices throughout the day. This data is useful in programming HVAC systems to keep rooms warm or cool when occupied, turn the system off at the end of the day, and turn it back on at first arrival.

Similarly, as people enter or leave rooms or areas of the office, ultrasonic sensors can control the lights automatically. Although the process looks simple, the energy savings from cutting back on lighting and HVAC can be huge. This is especially so for large office buildings, which can have many unoccupied office spaces. For sensing objects across large areas, ultrasonic sensors offer ideal solutions, with detection ranges of 15+ meters and beam angles of more than 80°.

Additionally, smart offices can also have other smart applications like hygiene and touchless building entry devices. Touchless devices include automatic door entries, while touchless hygiene products include faucets, soap dispensers, paper towel dispensers, and automatically lifting waste-bin lids. During the COVID-19 pandemic, people’s awareness of these common applications increased as public health and safety became critical for local offices and businesses.

Electric Motor Sans Magnets

Although there are electric motor designs that do not use permanent magnets, they typically work with an AC or alternating current supply. As such, these induction motors, as they are popularly known, are not suitable for EVs or electric vehicles running on batteries, which are DC or direct current systems. Magnets in EV motors are permanent types, typically made of ferrite or of rare-earth alloys like samarium-cobalt or neodymium-iron-boron, and are heavy and expensive.

The extra weight of PM or permanent-magnet EV motors tends to reduce the efficiency of the drive system, so it would be advantageous if the weight of the EV motor could somehow be reduced. One way to do this is to use motors that do not need heavy magnets.

A Stuttgart-based automotive parts manufacturer has done just that. MAHLE has developed a highly efficient magnet-free induction motor that works on DC systems. They claim the new motor is environmentally friendly and cheaper to manufacture as compared to others. Moreover, they claim it is maintenance-free as well.

According to a press statement from MAHLE, the new type of electric motor developed by them does not require any rare earth elements. They claim to have combined the strength of various concepts of electric motors into their new product and to have achieved an above 95% efficiency level.

The new motor generates torque via a system of contactless power transmission. Its fine-tuned design not only makes it highly efficient at high speeds but also wear-free.

When working, a wireless transmitter injects an alternating current into the receiving electrodes of the rotor. This current, in turn, energizes wound copper coils, which produce a rotating electromagnetic field much like that inside a regular three-phase induction motor. The rotating electromagnetic field spins the rotor, thereby generating torque.
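The rotating-field principle behind such three-phase machines can be checked numerically. In this sketch, three idealized windings spaced 120 degrees apart carry currents 120 degrees out of phase; their vector sum has constant magnitude (1.5 times a single coil's peak) and a direction that rotates with the supply, which is what drags the rotor around:

```python
import numpy as np

# Three idealized stator windings, spaced 120 degrees apart.
coil_angles = np.deg2rad([0.0, 120.0, 240.0])

def net_field(omega_t):
    """Vector sum of the three coil fields at electrical angle omega_t."""
    currents = np.cos(omega_t - coil_angles)      # currents 120 degrees out of phase
    bx = np.sum(currents * np.cos(coil_angles))   # each coil's field points
    by = np.sum(currents * np.sin(coil_angles))   # along its own fixed axis
    return bx, by

# Over one electrical cycle, |B| stays at 1.5 while its direction tracks omega_t.
for omega_t in np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False):
    bx, by = net_field(omega_t)
    print(f"wt = {np.rad2deg(omega_t):5.1f} deg: |B| = {np.hypot(bx, by):.3f}, "
          f"direction = {np.rad2deg(np.arctan2(by, bx)):6.1f} deg")
```

This is only the textbook three-phase picture, not MAHLE's proprietary design, but it shows why no permanent magnet is needed to create a field that spins.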

The magnetic coils take the place of permanent magnets in regular motors. MAHLE typically leaves an air gap between the rotating parts of the motor to prevent wear and tear. According to the manufacturer, it is possible to use the new concept in many applications, including subcompact and commercial vehicles.

MAHLE claims to have used the latest simulation processes to adjust and combine various parameters from different motor designs to reach an optimal solution for their new product. Not using rare earth element magnets allowed them to make lighter motors and has gained them a tremendous advantage from a geopolitical perspective as well.

Electric vehicles, and therefore PM electric motors, have seen a recent boom. But PM electric motors require rare-earth metals, and mining these metals is not environmentally friendly. Moreover, with the major supply of these PM electric motors coming from China, automakers outside of China are, understandably, uncomfortable.

Although MAHLE used the latest simulation processes to design its new motor, the original concept is that of the induction motor, invented by Nikola Tesla in the 19th century. While other automakers have also developed EV motors sans permanent magnets, the MAHLE design takes a rather utilitarian approach, making it more sustainable than the others.

Battery-Free Metal Sensor IoT Device

Many industrial, supply chain, and logistics applications require advanced monitoring of temperature, strain, and other parameters during goods transfer. One impediment to meeting such requirements is the battery-powered device, with its cost and maintenance overheads. Identiv, Inc., a global leader in digital security and identification in the IoT or Internet of Things, has developed a sensory TOM or Tag on Metal label in collaboration with Asygn, a sensor and IC specialist from France. The advantage of this sensory label is that it operates without batteries.

The new sensor label is based on Asygn’s next-generation IC platform, the AS321X. The labels can capture strain and temperature data near metallic objects. The AS321X series of UHF or ultra-high frequency RFID or radio-frequency identification chips is suitable for sensing applications and can operate without batteries. Identiv has partnered with Asygn to expand its portfolio of products, which now includes the new sensor-based UHF inlays compliant with RAIN RFID standards, enabling long-range identification of products and monitoring of their condition.

According to Identiv, their advanced RFID engineering solutions, combined with Asygn’s sensing IC platform, have created a unique product in the industry. Taking advantage of Identiv’s production expertise and the latest sensor capabilities of Asygn, these new on-metal labels offer the first exclusive, on-metal, batteryless sensing solution in the market.

Using their connected IoT ecosystems, Identiv can create digital identities for every physical object by embedding RFID-enabled IoT devices, labels, and inlays into them. Such everyday objects include medical devices, products from industries like pharmaceuticals, specialty retail, luxury brands, athletic apparel, smart packaging, toys, library media, wine and spirits, cold chain items, mobile devices, and perishables.

RFID and IoT are playing an increasing role in the complex and dynamic supply chain industry. The integration of RFID with IoT is enabling automated sensing and promoting seamless, interoperable, and highly secure systems by connecting many devices through the internet. The evolution of RFID-IoT has had a significant impact on revolutionizing SCM or Supply Chain Management.

The adoption of these technologies is improving the operational processes and reducing SCM costs with their information transparency, product traceability, flexibility, scalability, and compatibility. RFID-IoT is now making it possible to interconnect each stage in the SCM to ensure the delivery of the right process and product at the right quantity and to the right place. Such information sharing is essential for improving coordination between organizations in the supply chain and improving their efficiency.

Combining RFID and IoT makes it easier to identify physical objects on a network. The system transmits raw data about an item’s location, status, movement, temperature, and process. IoT provides the item with a unique ID for tracking its physical status in real time.

Such smart passive sensors typically power themselves through energy harvesting, specifically from RF power. Each sensor is battery-free and has an antenna for wireless communication. As an RF reader interrogates a sensor, the sensor uses the energy from the signal to transmit an accurate and fast reading. Many sensors can feed a hub that collects their data while communicating with other connected devices.

Shape-Changing Robot Travels Large Distances

The world of robotics is developing at a tremendous pace. We have bipedal robots that walk like humans, fish robots that can swim underwater, and now a gliding robot that can travel large distances.

This unique and innovative robot, developed by engineers at the University of Washington, is in fact a technical solution for collecting environmental data. It is also helpful for conducting atmospheric surveys. The astonishing part is that this lightweight robotic device is capable of gliding in midair without batteries.

The gliding robots cannot fly up by themselves. They ride on drones that carry them high up in the air. The drones then release them about 130 ft above the ground, and they glide downwards. The design of these gliding robots is inspired by origami, the Japanese art of folding paper into various designs.

The highly efficient design of these gliding robots, or micro-fliers as their designers call them, allows them to change shape while floating above the ground. Weighing only 400 milligrams, the micro-fliers are only about half the weight of a small nail.

According to their designers, the micro-fliers are very useful for environmental monitoring, as it is possible to deploy them in large numbers as wireless sensor networks monitoring the surrounding area.

To these micro-fliers, engineers have added an actuator that can operate without batteries and a controller that initiates the changes in shape. They have also added a system for harvesting solar power.

When dropped from drones, the solar-powered micro-fliers change their shape dynamically as they glide down, spreading out like a leaf as they descend. The electromagnetic actuators built into these robots control their shape, changing them from a flat surface to a creased one.

According to their designers, using an origami shape allows the micro-fliers to change their shape, thereby opening up a new space for the design. Inspired by the geometric pattern in leaves, they have combined the Miura-ori fold of origami, with power-harvesting and miniature actuators. This has allowed the designers to make the micro-fliers mimic the flight of a leaf in midair.

As it starts to glide down, the micro-flier is in its unfolded, flat state. It tumbles about like an elm leaf, moving chaotically in the wind. As it catches the sun’s rays, its actuators fold the robot, changing its airflow and allowing it to follow a more stable descent path, just as a maple leaf does. The design is highly energy efficient: there is no need for a battery, as the energy from the sun is enough.

Being lightweight, the micro-flier can travel large distances under light breeze conditions, covering distances about the size of a football field. The team showcased the functioning of the newly developed micro-flier prototypes by releasing them from drones at an altitude of about 40 meters above the ground.

During the testing, the released micro-fliers traveled nearly 98 meters after they changed their shapes dynamically. Moreover, they could successfully transmit data to Bluetooth devices that were about 60 meters away.

Are Lithium Iron Phosphate Batteries Better?

According to the latest developments in batteries, LFP or Lithium Iron Phosphate battery technology is going to pose a serious challenge to the omnipresent Lithium-ion type.

As far as e-mobility is concerned, Lithium-ion batteries have some serious disadvantages. These include higher cost and lower safety as compared to other chemistries. On the other hand, recent advancements in battery pack technology have led to an enhancement in the energy density of LFP batteries so that they are now viable for all kinds of applications related to e-mobility—not only in vehicles but also in shipping, such as in battery tankers.

In their early years of development, LFP cells had a lower energy density than Lithium-ion cells. Improved packaging technology bumped up the energy density to about 160 Wh/kg, but this was still not enough for use in e-mobility applications.

With further improvements in technology, LFP batteries now operate better at low temperatures, charge faster, and have a longer cycle life. These features are making them more appealing for many applications, including their use in electric cars and in battery tankers.

However, LFP batteries still continue to face several challenges, especially in applications involving high power. This is mainly due to the unique crystal structure of LFP, which reduces its electronic conductivity. Scientists have been experimenting with different approaches, such as reducing the directional crystal growth or particle size, using different conductive layer coatings, and element doping. These have not only helped to improve the electronic conductivity but have increased the thermal stability of the batteries as well.

Comparing LFP batteries with the Lithium-ion types shows them to have individual advantages in different key characteristics. For instance, Lithium-ion batteries offer higher cell voltages, higher power density, and better specific capacity. These characteristics lead to Lithium-ion batteries offering higher volumetric energy density suitable for achieving longer driving ranges.

In contrast, LFP batteries offer a longer cycle life, better safety, and better rate capability. As the risk of thermal runaway, in case of mechanical damage to a cell, is also much lower, these batteries are now popularly used for commercial vehicles with frequent access to charging, such as scooters, forklifts, and buses.

It is also possible to fully charge LFP batteries in each cycle, in contrast to having to stop at 80% to avoid overcharging some types of Lithium-ion batteries. Although this does allow simplification of the battery management algorithm, it adds other complexities for Battery Management Systems managing LFP cells.

Another key advantage of LFP batteries is that they do not require the use of cobalt and nickel in their cathodes. The industry fears that in the coming years, sourcing these metals will become more difficult. Even with mining projections of both elements doubling by 2030, supply may not meet the increase in demand.

All of the above makes LFP batteries look increasingly interesting for e-mobility applications, with more car manufacturers planning to adopt them in their future cars.