Raspberry Pi and Traffic Lights

Although we come across traffic lights almost every time we step out of our homes, we rarely stop to think about how they work. Gunnar Pelpman has done just that, putting the hugely popular single board computer, the Raspberry Pi, to good use. While most tutorials begin with turning LEDs on and off, he has prepared a somewhat more complex one that teaches how to program traffic lights. Moreover, he has done this with the Raspberry Pi (RBPi) running the Windows 10 IoT Core.

Traffic lights may look like complicated installations, but they are rather simple in operation. They mostly comprise a controller, the signal heads, and a detection mechanism. The controller acts as the brains of the installation, sequencing the lights through their various phases. Depending on location and time of day, traffic signals run under a variety of modes, two of which are the fixed time mode and the vehicle actuation mode.

Under the fixed time mode, the traffic signal will repeatedly display the three colors in fixed cycles, regardless of the traffic conditions. Although adequate in areas with heavy traffic congestion, this mode is very wasteful for a side road with light traffic—if for some cycles there are no waiting vehicles, the time could be more efficiently allocated to a busier approach.

The second most common mode of operation is vehicle actuation. As its name suggests, the traffic signal adjusts the cycle time according to the demands of vehicles on all approaches.

Sensors, installed in the carriageway or above the signal heads, register the demands of the traffic. After processing these demands, the controller allocates the cycle time accordingly. However, the controller has a preset minimum and maximum cycle time, and it cannot violate them.

The hardware for the project could not be simpler. Gunnar has used three LEDs, red, orange, and green, to represent the three lamps of a traffic light. Each LED has an appropriate series resistor for current limiting, and three GPIO ports of the RBPi drive them on and off. The rest of the project is software, for which Gunnar wrote a UWP application.

According to Gunnar, there are two options for writing UWP applications, depending on your requirement: a blank UWP application or a background application for IoT. The blank UWP application is good for trying things out at the start, since you can build a user interface for it at a later point.

After creating the project from the blank UWP template, Gunnar added a reference to Windows IoT Extensions for the UWP. Next, he opened the file MainPage.xaml and added his own code, which begins with a test of the wiring. He uses an init() function to initialize the GPIO pins and stop() to turn all LEDs off. The code then turns on all LEDs for 10 seconds to signal that everything is working fine.

According to Gunnar, even this primitive code mimics real traffic lights. He uses separate routines for cycling the traffic lights and for blinking them on and off, and a play() function runs ten cycles of the sequence.
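Gunnar's actual code is a C# UWP application; the sketch below re-creates the same sequencing logic in Python as pure data. The phase timings are illustrative assumptions, not his values; mapping each lamp name to a GPIO pin would turn this into working hardware code.

```python
# A language-neutral sketch of the traffic-light logic the article describes.
# Timings and phase order are illustrative assumptions, not Gunnar's code.

RED, ORANGE, GREEN = "red", "orange", "green"

# Each phase: (set of lamps that are lit, duration in seconds)
SEQUENCE = [
    ({RED}, 5),           # stop
    ({RED, ORANGE}, 2),   # prepare to go
    ({GREEN}, 5),         # go
    ({ORANGE}, 2),        # prepare to stop
]

def play(cycles=10):
    """Yield (lit_lamps, duration) tuples for the given number of cycles,
    mirroring the article's play() function that runs ten cycles."""
    for _ in range(cycles):
        for lamps, duration in SEQUENCE:
            yield lamps, duration

phases = list(play(10))  # ten full cycles of four phases each
```

On the RBPi itself, each phase would translate into switching three GPIO outputs and sleeping for the phase duration.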

USB Type C and USB 3.1 Gen 2 – What is the Difference?

With the need for increasing capabilities, USB technology has evolved and improved over the years. Recently, the USB Implementers Forum released the specifications for the SuperSpeed+ standard, or USB 3.1 Gen 2 signal standard, and the USB Type C connector. Data transfer rates have increased from USB 1.0, released in January 1996 with a full speed of 1.5 MB/s, to USB 2.0, released in April 2000 with a full speed of 60 MB/s, and to USB 3.0, released in November 2008 with a full speed of 625 MB/s. The latest standard, USB 3.1 Gen 2, was released in July 2013 and has a full speed of 1.25 GB/s.
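The MB/s figures quoted above follow directly from the raw signaling rates by dividing by eight bits per byte (encoding overhead is ignored here for simplicity); a quick sanity check:

```python
# Raw signaling rates in Mbit/s for each USB generation named above.
rates_mbps = {
    "USB 1.0 (full speed)": 12,             # -> 1.5 MB/s
    "USB 2.0 (high speed)": 480,            # -> 60 MB/s
    "USB 3.0 (SuperSpeed)": 5_000,          # -> 625 MB/s
    "USB 3.1 Gen 2 (SuperSpeed+)": 10_000,  # -> 1250 MB/s (1.25 GB/s)
}

# Convert bits per second to bytes per second (8 bits per byte).
rates_mb_per_s = {name: mbps / 8 for name, mbps in rates_mbps.items()}
```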

Confusion between USB Type C and USB 3.1 Gen 2

When discussing the relationship, people are often confused between the USB Type C and the USB 3.1 Gen 2 standards. The major point to note is that the USB Type C standard defines only the physical connector, whereas the USB 3.1 Gen 2 standard defines the electrical signaling for communication.

Therefore, system designers are free to pass signals conforming to USB 3.1 Gen 2 through USB Type C connectors and cables, or through a connector that does not conform to the USB Type C specification. Designers can implement their own proprietary connector and still use the USB 3.1 Gen 2 signal standard if they want to use their own hardware or to ensure their system remains isolated from other systems.

The reverse is equally true. One can use the USB Type C connector to transmit and receive signals that do not conform to the USB signal standards. Although such an implementation benefits from the inexpensive and easily available USB Type C connectors and cables, the OEM must label it correctly, since the user risks connecting the proprietary non-conforming system to a USB 3.1 Gen 2 standard system and damaging one or both systems.

OEMs can also transmit legacy USB signaling configurations over USB Type C connectors and cables, because the USB standard allows pre-USB 3.1 Gen 2 signaling on USB Type C connectors, and the standard is designed to cause no damage to either system. However, optimum power and data transfer occur only when both systems negotiate a common power configuration and communication standard.

Why USB Type C

Compared to the older configurations, the use of the USB Type C connector offers several advantages. Apart from being a smaller package with more conductors, the USB Type C supports higher voltage and current ratings, while offering greater signal bandwidths.

Physically smaller, USB Type C plugs and receptacles fit a wide range of applications where space is restricted. Moreover, one can insert the plug either way up, right-side up or upside down, which allows easier and faster insertion of plugs into their receptacles.

While USB Type A and B connectors have a maximum of four or five conductors, the USB Type C connector has 24 contacts and can carry 3 A at 5 V, or 15 W of power.
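The 15 W figure is simply the product of the base-rating voltage and current:

```python
# Power available through a USB Type C connector at its base rating.
voltage_v = 5.0   # volts
current_a = 3.0   # amperes
power_w = voltage_v * current_a  # 15 W, matching the figure above
```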

Sensors, IoT, and Medical Health

Increasingly, people are looking for preventive care outside of a hospital setting. Medical providers, startups, and Fortune 500 technology companies are all trying out new products and devices to revolutionize medical care and streamline costs. Besides reducing hospital readmission rates, these efforts are bringing patients in remote areas the care they need.

The evolving trend is towards remote patient monitoring, which is fundamentally improving the quality of care and patient outcomes right across the medical arena. Moreover, this is happening not only in clinics, hospitals, and at-home care, but also in remote and less populated areas and in developing countries.

New technologies, new devices, and better results are driving healthcare nowadays. There are several examples of this. For instance, cardiovascular patients can have their heart rates and blood pressure monitored regularly from their homes, with the data feeding back to the cardiologists to allow them to track their patients better. Similarly, doctors are able to track respiration rates, oxygen and carbon dioxide levels, cardiac output, and body temperature of their patients.

Sensors can track the weight of patients suffering from obstructive heart disease, allowing doctors to detect fluid retention and decide whether the patient requires hospitalization. Similarly, sensors can monitor a child's asthma medication to ensure family members are administering the right dosage, which can easily cut down the number of visits to the ER.

IoT can wirelessly link a range of sensors to measure vitals in intensive care and emergency units. The first step consists of the sensors that generate the data. When tools such as artificial intelligence combine with these sensors, it becomes easy to analyze large amounts of data, helping to improve clinical decisions.
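As a toy illustration of the kind of automated sensor-data screening described above, the sketch below flags readings that deviate sharply from a rolling baseline. The data, window size, and threshold are invented for the example and do not come from any clinical system.

```python
# Flag samples that sit far outside the spread of the trailing window.
# All numbers here are illustrative, not clinical thresholds.
from statistics import mean, stdev

def flag_anomalies(samples, window=5, z_threshold=3.0):
    """Return indices of samples far outside the trailing window's spread."""
    flagged = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(samples[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# A synthetic heart-rate stream with one obvious outlier at index 7.
heart_rate = [72, 74, 71, 73, 72, 75, 73, 140, 74, 72]
```

A real system would feed such flags to a clinician rather than act on them automatically.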

Technological advances such as telemedicine offer advantages in rural hospitals that constantly need more physicians. This often includes remote specialist consultations, remote consultations, outsourced diagnostic analysis, and in-home monitoring. With telemedicine, remote physicians can offer consultations more quickly, making the process cheaper and more efficient compared to that offered by traditional healthcare appointments.

Sensor networks within practices and hospitals are helping to monitor patient adherence, thereby optimizing healthcare delivery. The healthcare industry is increasingly focusing on value-based, patient-centric care and its outcomes.

This is where the new technology and devices are making a big impact. For instance, data sensors are helping health care providers detect potential issues in the prosthetic knee joint of a patient. The sensors let them summarize the pressure patterns and bilateral force distribution across the prosthetic. This is of immense help to the patient, warning them at the first indication of strain. The provider can monitor the situation 24/7 and adjust the treatment accordingly, while the payer saves the additional expense of prolonged treatment or recovery.

Integration of IoT features into medical devices has improved the quality and effectiveness of healthcare tremendously. It has made high-value care possible for those requiring constant supervision, those with chronic conditions, and elderly patients. For instance, wearable medical devices now feature sensors, actuators, and communication methods with IoT features that allow continuous monitoring and transmission of patient data to cloud-based platforms.

Researching Hearing Aids with the Raspberry Pi

All around the world, millions of people benefit from wearing hearing aids. Apart from helping them hear better, hearing aids lower people’s risk of developing dementia, the likelihood of loneliness, and the possibility of withdrawing from society.

Testing hearing aids outside the laboratory can be a tough task, but researchers have found the highly popular single board computer, the Raspberry Pi (RBPi), a sound investment for testing hearing aid algorithms, and they are now using RBPi boards for hearing aid research.

Although researchers have spent a lot of time and energy on developing hearing aids over the years, there is still room for improvement. According to Tobias Herzke, a signal processing engineer at HorTech in Oldenburg, Germany, this is especially true for acoustically difficult situations. The RBPi, however, is proving to be a next-generation research tool for these scientists.

To compensate for an individual’s hearing loss, it is necessary to tailor the amplification and compression in the hearing aid. Researchers plug a monitor into the RBPi and fire up the fitting GUI for this tailoring.

For this, a spin-off company of the University of Oldenburg has developed openMHA, designed as a common, portable software platform for teaching and for hearing aid research. According to Hendrik Kayser, who develops signal processing algorithms for digital hearing devices, the openMHA platform lets researchers process signals in real time with low delays.

The openMHA software platform offers a set of standard algorithms that together form a complete hearing aid. It can process the signal from a live microphone to perform activities such as directional filtering, amplification, feedback suppression, and noise reduction. The RBPi helps in testing new algorithms, which can be difficult with hearing aids alone. Together, the RBPi and openMHA let hearing aid researchers process audio signals instantly and adapt them to the individual’s hearing loss. The main advantage is that the delay between incoming and outgoing audio is below 10 ms. The hearing aid itself has no GUI, except when fitting the amplifier parameters.
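The sub-10 ms figure constrains how the audio is buffered. The sketch below assumes a 48 kHz sample rate and approximates total delay as two block periods (input plus output buffering); neither assumption comes from the openMHA documentation, but the arithmetic shows why small processing blocks matter.

```python
# Rough latency budget for block-based audio processing.
# Assumptions (not from the article): 48 kHz sample rate, and total
# delay approximated as two block periods (input + output buffering).
sample_rate_hz = 48_000

def round_trip_delay_ms(block_size_samples):
    block_ms = block_size_samples / sample_rate_hz * 1000
    return 2 * block_ms

delay_64 = round_trip_delay_ms(64)    # small blocks: well under 10 ms
delay_512 = round_trip_delay_ms(512)  # large blocks: too slow for a hearing aid
```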

In the laboratory, researchers can run the openMHA software on Linux computers. According to Tobias, however, the sound environment in a laboratory differs from the environments a hearing aid user encounters in real life. This has often led to wrong results in the past and did not offer a true reflection of hearing aid use. In such situations, the ARM-based single board computer, the RBPi, offers a wonderful solution.

By taking advantage of the portable nature of the RBPi and running openMHA on it, researchers were able to evaluate newer algorithms in realistic outdoor conditions in real time. In fact, they implemented a new algorithm running on a mobile device to find out how the user hears in real time while moving around wearing a hearing aid.

Using an RBPi means one does not have to carry around a Linux laptop, and it is far less expensive. The RBPi offers decent computing capability in a small space while consuming little power.

Condition Monitoring with MEMS Accelerometers

Several condition-monitoring products on the market today are easy to deploy and highly integrated, and a vast majority of them contain a microelectromechanical system (MEMS) accelerometer as their core sensor. Not only are these economical, they also help reduce the cost of deployment and ownership. In turn, this expands the range of facilities and equipment that can benefit from a condition monitoring program.

Compared to legacy mechanical sensors, solid state MEMS accelerometers offer several attractive attributes. So far, however, their low bandwidth has restricted their use in condition monitoring. For instance, the noise performance of MEMS accelerometers was not sufficiently low for diagnostic applications requiring low noise levels over bandwidths beyond 10 kHz and over high frequency ranges.

This situation is changing. Although still restricted to a few kHz of bandwidth, low-noise MEMS accelerometers are now available, allowing designers of condition monitoring products to use them in new product concepts, because MEMS brings several valuable and compelling advantages to the designer.
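One reason noise density matters for wide-bandwidth diagnostics: the RMS noise of an accelerometer grows with the square root of its bandwidth. The 100 µg/√Hz density used below is an illustrative value, not a figure from any particular datasheet.

```python
# RMS noise from a flat noise density: noise_rms = density * sqrt(bandwidth).
# The 100 µg/√Hz density is an illustrative assumption.
import math

def rms_noise_ug(noise_density_ug_per_rthz, bandwidth_hz):
    return noise_density_ug_per_rthz * math.sqrt(bandwidth_hz)

noise_1khz = rms_noise_ug(100, 1_000)    # a few mg of RMS noise
noise_10khz = rms_noise_ug(100, 10_000)  # ~3x worse over ten times the bandwidth
```

Widening the bandwidth tenfold raises the noise floor by √10, which is why diagnostics beyond 10 kHz demand very low noise densities.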

For instance, the size and weight of MEMS accelerometers are of the utmost importance in airborne health and usage monitoring systems, especially as these employ multiple sensors on a platform. MEMS devices in surface mount packages in a triaxial formation provide very high performance with footprints of only 6 x 6 mm and weights of less than one gram. This shrinks the final package, while the single-supply interface of a typical MEMS device makes it easier to use in digital applications, saving on the cost and weight of cables.

The triaxial arrangement is simpler with solid-state electronics and the small size of the transducers. They offer a small form factor enabling mounting on a printed circuit board, with the assembly hermetically sealed in housing suitable for fitting on a machine. MEMS devices require very low levels of power from single voltage supply and simple signal conditioning electronics, suitable for battery-powered wireless products.

Designers can use MEMS accelerometers in industrial settings for an easy transition to the digital interfaces now common there. The signal conditioning circuit topology for MEMS devices is common to both analog and digital output variants, allowing designers to adapt the sensors to a wider variety of situations.

For instance, designers can load open protocols such as Modbus RTU into a microcontroller and use it with easily available RS-485 transceiver chips. Using surface mount chips, designers can lay out the complete solution for a transmitter with a small footprint and fit it within a relatively small board area. They can insert these assemblies into packages, hermetically sealing them to support intrinsically safe characteristics or to conform to environmental robustness certifications.

Although the current generation of MEMS devices can safely withstand 10,000 g of shock according to their specifications, in reality they can tolerate much higher levels without their sensitivity being affected. For instance, automatic test equipment can trim the sensitivity of a high-resolution sensor to remain stable over time and temperature to 0.01°C.

A Google Assistant with the Raspberry Pi

This is the age of smart home assistants, though not of the human kind. Over the last couple of years, excitement over these smart home assistants has built to a fever pitch, and every manufacturer now offers its own version. Apple offers Siri, Amazon presents Echo and Alexa, Microsoft wants us to use Cortana, Google tempts us with the Google Home Assistant, and there are several more in the race. In this melee, however, Raspberry Pi (RBPi) enthusiasts can make their own smart speaker using the SBC.

Although you can buy Google Home, it is not available worldwide. However, it is a simple matter to have the Google Assistant in your living room, provided you have an RBPi3 or an RBPiZ. Just as with any other smart home assistant, your RBPi3 home assistant will let you control any device connected to it simply with your voice.

The first things you need to communicate with your assistant are a microphone and a speaker. The May issue of The MagPi, the official RBPi magazine, carried a nice speaker set sponsored by Google. If you have missed the issue, however, you can use any available speaker and USB microphone combination. The MagPi offer is an AIY Voice Kit for making your own home assistant, AIY being an acronym coined from AI, for Artificial Intelligence, and DIY, for Do It Yourself.

The MagPi kit is a very simple arrangement, and the magazine offers a detailed instruction set anyone can follow. If you do not have the magazine, the instructions are available on the AIY Projects website. The kit includes a Voice HAT PCB for controlling the microphone and switch, a long PCB with two microphones, a switch, a speaker, an LED light, a switch mechanism, a cardboard box for assembling the kit, and cables for connecting everything.

Apart from the kit, you will also require additional hardware such as an RBPi3, a micro SD card for installing the operating system, a screwdriver, and some scotch tape.

After collecting all the parts, start the assembly by connecting the Voice HAT PCB, which controls the microphones and the switch. Attach it to the RBPi3 or RBPiZ using the two small standoffs, taking care to align the GPIO connector on the HAT with that on the RBPi, and push the two together to connect.

The combination of the HAT board and RBPi goes into the first box. Fold the box, taking care to keep the printed words on the outside. Place the speaker inside the box first, aligning it with the side with the holes. Now connect the cables to the Voice HAT, and place the combination inside the box.

Next, assemble the switch and LED, inserting the combination into the box. Take care to connect the cables in proper order according to the instructions. As the last step, use the PCB with the two microphones, and use scotch tape to attach it to the box.

Now flash the SD card with the Voice Kit SD image from the website, and insert it into the RBPi. Initially, you may need to monitor the RBPi over HDMI, with a keyboard and mouse attached.

How does LoRa Benefit IoT?

Cycleo, a part of Semtech since 2012, developed and patented a physical layer with a modulation scheme named LoRa, for Long Range, in which transmission uses the license-free ISM bands. LoRa consumes very little power and is therefore ideal for IoT data transmission. Sensor technology is one field of application for LoRa, where low bit rates are sufficient and sensor batteries must last for months or even years. Other applications are in industry, environmental technology, logistics, smart cities, agriculture, consumption metering, smart homes, and many others.

LoRa is a wireless transmission technology that consumes very little power while transmitting small amounts of data over distances of nearly 15 km. It uses CSS, or Chirp Spread Spectrum modulation, originally developed in the 1940s for radar applications, with chirp standing for Compressed High Intensity Radar Pulse. The name suggests the manner of data transmission this method uses.

Many current wireless data transmission applications use the LoRa method owing to its relatively low power consumption and its robustness against fading, in-band spurious emissions, and the Doppler effect. The IEEE has adopted the CSS PHY in the 802.15.4a standard for use in low-rate wireless personal area networks.

A correlation mechanism based on band spreading methods makes it possible for LoRa to achieve its long ranges. The transmitter spreads extremely small signals across a wide band, and de-spreading in the receiver recovers them even after they have disappeared into the noise. LoRa receivers are sensitive enough to decode signals more than 19 dB below the noise level. Unlike DSSS, the direct sequence spread spectrum that UMTS and WLAN use, CSS uses chirp pulses for frequency spreading rather than pseudo-random code sequences.

A chirp pulse, modulated by GFSK or FM, usually has a sine-wave signal characteristic with a constant envelope. Over time, its frequency rises or falls continuously, which makes the frequency swing of the pulse equivalent to the spectral bandwidth of the signal. CSS uses this signal characteristic as the transmit pulse.
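A minimal linear up-chirp of the kind CSS builds on can be sketched as follows. The frequencies, duration, and sample rate here are illustrative and do not correspond to LoRa's actual bandwidths or spreading factors.

```python
# Generate a linear "up-chirp": frequency rises continuously from f0 to f1
# across the pulse while the envelope stays constant (a pure sinusoid).
# All parameters are illustrative, not LoRa's real values.
import math

def chirp_samples(f0, f1, duration, sample_rate):
    n = int(duration * sample_rate)
    k = (f1 - f0) / duration          # sweep rate in Hz per second
    samples = []
    for i in range(n):
        t = i / sample_rate
        # Phase is the integral of instantaneous frequency f0 + k*t.
        phase = 2 * math.pi * (f0 * t + 0.5 * k * t * t)
        samples.append(math.sin(phase))
    return samples

pulse = chirp_samples(f0=1_000, f1=5_000, duration=0.01, sample_rate=100_000)
```

A receiver correlating against the same chirp compresses the spread pulse back into a narrow peak, which is what lets LoRa pull signals out from below the noise floor.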

Engineers use LoRaWAN to define the MAC, or media access protocol, and the system architecture for a wide area network (WAN). The design of LoRaWAN specifically targets IoT devices requiring energy efficiency and high transmission range. Additionally, the protocol eases communication with server-based internet applications.

The architecture of the LoRaWAN MAC suits LoRa devices because of its influence on their battery life, the network capacity, the quality of service, and the level of security it offers, and it serves a wide range of applications as well.

The LoRa Alliance, a standardization body, defines, develops, and manages the LoRaWAN stack, including the regional parameters and the interaction of the LoRaWAN MAC with the LoRa waveform. The body consists of software companies, semiconductor companies, manufacturers of wireless modules and sensors, mobile network operators, testing institutions, and IT companies, all working towards a harmonized LoRaWAN standard. Using LoRa wireless technology, users can create wireless networks covering several square kilometers with only a single radio cell.

Why Use a Multi-Layer PCB?

Although a multi-layer PCB is more expensive than a single or double-layer board of the same size, it offers several benefits. For a given circuit complexity, a multi-layer PCB is much smaller than what a designer can achieve with a single or even a double-layer board, which helps offset the higher cost. The main advantage is the higher assembly density the multiple layers offer.

There are other benefits of a multi-layer PCB as well, such as increased flexibility through a reduced need for interconnection wiring harnesses, and improved EMI shielding through careful placement of ground and power layers. It is also easier to control impedance in multi-layer PCBs meant for high-frequency circuits, where cross talk and the skin effect are more prominent and critical.

As a result, one can find equipment with multi-layer PCBs in nearly all major industries, including home appliances, communication, commercial, industrial, aerospace, underwater, and military applications. Although rigid multi-layer PCBs are popular, flexible types are also available, and they offer additional benefits over their rigid counterparts—lower weight, higher flexibility, ability to withstand harsh environments, and more. Additionally, rigid flex multi-layer PCBs are also available, offering the benefits of both types in the same PCB.

Advantages of a Multi-Layer PCB

Compared to single or double-layer boards, multi-layer PCBs offer pronounced advantages, such as:

  • Higher Routing Density
  • Compact Size
  • Lower Overall Weight
  • Improved Design Functionality

Use of multiple layers in PCBs is advantageous as they increase the surface area available to the designer, without the associated increase in the physical size of the board. Consequently, the designer has additional freedom to include more components within a given area of the PCB and route the interconnecting traces with better control over their impedance. This not only produces higher routing density, but also reduces the overall size of the board, resulting in lower overall weight of the device, and improving its design functionality.

The method of construction of multi-layer PCBs makes them more durable than single and double-layer boards. Burying the copper traces deep within multiple layers allows them to withstand adverse environments much better. This makes boards with multiple layers a better choice for industrial applications that regularly undergo rough handling.

With the availability of increasingly smaller electronic components, there is a tendency towards device miniaturization, and the use of multi-layer PCBs augments this trend by providing a more comprehensive solution than single or double-layer PCBs can. As these trends are irreversible, more OEMs are increasingly using multi-layer boards in their equipment.

With the several advantages of multiple layer PCBs, it is imperative to know their disadvantages as well. Repairing PCBs with several layers is extremely difficult as several copper traces are inaccessible. Therefore, the failure of a multi-layer circuit board may turn out to be an expensive burden, sometimes necessitating a total replacement.

PCB manufacturers are improving their processes to absorb the increased input costs and to reduce design and production times, decreasing the overall cost of producing multi-layer PCBs. With improved production techniques and better machinery, they have substantially improved the quality of multi-layer PCBs, offering a better balance between size and functionality.

What are Multi-Layer PCBs?

Most electronic equipment has one or more printed circuit boards (PCBs) with components mounted on them. The wiring to and from these PCBs determines the basic functionality of the equipment. It is usual to expect a complex PCB within equipment meant to deliver highly involved performance. While a single layer PCB is adequate for simple equipment such as a voltage stabilizer, an audio amplifier may require a PCB with two layers. Equipment with more complicated specifications, such as a modem or a computer, requires a PCB with multiple layers, that is, more than two layers.

Construction of a Multi-Layer PCB

Multiple layer PCBs have three or more layers of conductive copper foil separated by layers of insulation, also called laminate or prepreg. However, a simple visual inspection of a PCB may not reveal its multi-layer structure, as only the two outermost copper layers are available for external connection, with the inner copper layers remaining hidden inside. Fabricators usually transform the copper layers into thin traces according to the predefined electrical circuit. However, some layers may instead serve as ground or power planes with large, continuous copper areas. The fabricator makes electrical interconnections between the various copper layers using plated through holes: tiny holes drilled through the copper and insulation layers and electroplated to make them electrically conducting.

A via connecting the outermost copper layers and some or all of the inner layers is a through via; one connecting one of the outermost layers to one or more inner layers is a blind via; and one connecting two or more inner layers while remaining invisible on the outermost layers is a buried via. Fabricators drill exceptionally small holes with lasers to make vias, as this maximizes the area available for routing traces.
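The three via types follow a simple rule based on where the via's ends lie; a small helper makes the standard definitions concrete (layers numbered 1 to layer_count from the top, a convention chosen for this example):

```python
# Classify a via by the span of layers it connects, using the standard
# through/blind/buried terminology. Layer numbering is this example's own.
def classify_via(start_layer, end_layer, layer_count):
    top, bottom = 1, layer_count
    if start_layer == top and end_layer == bottom:
        return "through"    # spans the whole board, visible on both sides
    if start_layer == top or end_layer == bottom:
        return "blind"      # one end on an outer layer, one end inside
    return "buried"         # both ends on inner layers, invisible outside

# Examples on a six-layer board.
kinds = [classify_via(1, 6, 6), classify_via(1, 3, 6), classify_via(2, 4, 6)]
```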

As an odd number of layers can cause warping in PCBs, manufacturers prefer to make multi-layer boards with an even number of layers. The core of a PCB is an insulating laminate layer with copper foil bonded to both sides, forming the basic construction of a double-layer board. Fabricators build up further layers by adding a combination of prepreg insulation and copper on each side of the core, repeating the process for as many extra layers as the design defines, to make a multi-layer PCB.

Depending on the electrical circuit, the designer has to define the layout of traces on each copper layer of the board, and the placement of individual vias, preferably using CAD software packages. The designer transfers the layered design output onto photographic films, which the fabricator utilizes to remove the excess metal from individual copper layers by the process of chemical etching, followed by drilling necessary holes and electroplating them to form vias. As they complete etching and drilling for each layer, the fabricator adds it on to the proper side of the multi-layer board.

Once the fabricator has placed all layers properly atop each other, application of heat and external pressure to the combination makes the insulation layers melt and bond to form a single multi-layer PCB.

Wireless Charging and Electric Vehicles

In our daily lives, we are increasingly using wireless products. At the same time, researchers are working on newer trends in charging electric vehicles wirelessly. With more countries now implementing fuel economy regulations and pushing initiatives to replace fossil-fuel vehicles with electrically driven ones, automotive manufacturers have focused on developing electric vehicles. On one hand there are technological advancements in lithium-ion batteries and ultra-capacitors; on the other, researchers are working on the infrastructure and availability of suitably fast charging systems that will lead to a smoother overall transition to electric vehicles.

Charging the batteries of a vehicle requires charging systems using high power conversion equipment, which converts the AC or DC power available from the supply into DC power suitable for charging. As of now, the peak power demand from chargers is of the order of 10 to 20 kW, but this is likely to climb depending on the time available for charging and on advances in battery charging capabilities. Therefore, both governments and OEMs are gearing up to develop high-power charging systems to cater to the power needs of future electric vehicles.

Wireless charging systems transfer power from the source to the load without a physical connection between the two. Commonly available schemes use an air-cored transformer, with power transfer taking place without any contact between source and load. Wireless power transfer technology spans a range of power levels, from mobile phone chargers rated for tens of watts to fast chargers for electric vehicles rated for tens of kilowatts.

Earlier, the major issues with wireless charging systems were their low efficiency and safety. The technology has now progressed to the stage where efficiencies of over 80% are commonplace, on par with wired chargers. However, the efficiency drops sharply as the spacing between the primary and secondary coils increases, so the closer the coils, the better the efficiency. Researchers are looking at various other methods of constructing the coils to address this issue.

Likewise, smart power controls take care of safety by detecting spurious power transfer and suspending transmission immediately. Manufacturers ensure safety at all stages by implementing regulatory guidelines such as SAE J2954.

Although several methods exist for wireless power transfer, the most popular are the resonant and inductive transfer methods. The inductive method uses the principle of the transformer: an AC voltage applied to the primary side induces a voltage on the secondary side through magnetic coupling, thereby transferring power.

The inductive method of power transfer is highly sensitive to the coupling between the primary and secondary windings. As the distance increases, the power loss increases and the efficiency falls, restricting this method to low power applications.

Based on impedance matching between the primary and secondary sides, the resonant method forms a tunnel effect for transferring magnetic flux. While minimizing the loss of power, this method operates at higher efficiency even with the coils placed far apart, making it suitable for transferring higher power levels.
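The sensitivity of both methods to coil coupling can be quantified with the standard expression for the maximum link efficiency of a magnetically coupled coil pair, η = x / (1 + √(1 + x))² with x = k²·Q1·Q2, where k is the coupling coefficient and Q1, Q2 are the coil quality factors. The Q values below are illustrative assumptions, not figures from any particular charger.

```python
# Maximum achievable link efficiency of a coupled coil pair, using the
# standard expression eta = x / (1 + sqrt(1 + x))^2 with x = k^2 * Q1 * Q2.
# Coil Q values are illustrative assumptions.
import math

def max_link_efficiency(k, q1, q2):
    x = (k ** 2) * q1 * q2
    return x / (1 + math.sqrt(1 + x)) ** 2

tight = max_link_efficiency(k=0.5, q1=100, q2=100)   # closely spaced coils
loose = max_link_efficiency(k=0.05, q1=100, q2=100)  # coils far apart
```

Since k falls rapidly with coil separation, the formula shows why high-Q resonant coils keep the efficiency usable at distances where a plain inductive link would not.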