Category Archives: Guides

What is an i-Robot?

The level of CO2 in our atmosphere is rising at an alarming rate, affecting all life on Earth either directly or indirectly. For instance, it drives global warming, which reduces the quantity of ice in the polar regions; as the ice melts, sea levels change all around the world. This has significant consequences for several human activities, such as fishing, and adversely affects the undersea environment and its associated biological sphere. Scientists have long been monitoring the marine environment and studying the state of the seas.

However, the harshness of the marine environment and the remoteness of many locations preclude undersea exploration by vehicles driven from a mother ship. Scientists believe robots could contribute effectively to such challenging explorations, a view that has led to the development of Autonomous Underwater Vehicles, or AUVs.

One such AUV is the Semi-Autonomous Underwater Vehicle for Intervention Mission, or SAUVIM, which is expected to address challenging tasks like these. The specialty of SAUVIM is its capability for autonomous manipulation underwater. As it has no human occupants and no physical link to its controller, SAUVIM can venture into dangerous regions, such as classified areas, or retrieve hazardous objects from deep within the oceans.

This milestone is a technological challenge, as it gives the robotic system the capability to perform intervention tasks involving physical contact with an unstructured environment, without a human supervisor constantly guiding it.

SAUVIM, being a semi-autonomous vehicle, integrates electronic circuitry capable of withstanding the enormous pressure that deep ocean waters generate. In general, it can operate reliably and safely in harsh environmental conditions, including the low temperatures of the deep oceans. Ensuring the effectiveness of such robots requires a high level of design and an accurate choice of components.

As SAUVIM operates semi-autonomously, it needs a great deal of energy autonomy. For this, Steatite of Worcestershire, UK, has introduced a new solution in the form of long-life batteries capable of operating in the submarine environment. These Lithium-Sulfur (Li-S) battery packs, the result of the first phase of a 24-month project, improve the endurance and speed of autonomous underwater vehicles when deep diving.

The primary advantage Li-S batteries offer is an enhanced energy storage capability that improves operational duration, despite being constructed from low-cost materials.

The National Oceanography Centre in Southampton, UK, completed the first phase of the Li-S battery project after repeatedly testing the cells at the pressures and temperatures prevailing at undersea depths of 6 km. According to the tests, Li-S cells deliver performance similar to that at ambient conditions, while their effective Neutral Buoyancy Energy Density, or NBED, is almost double that of the Li-ion cells used as a reference. Life tests performed on a number of Li-S cells demonstrate they can exceed 60 cycles with slow discharge, and 80 cycles with fast discharge.
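
NBED matters because an underwater battery must be neutrally buoyant: a cell denser than seawater has to carry flotation foam, and the foam's mass counts against the usable energy density. Below is a minimal Python sketch of that accounting; the densities, cell dimensions, and energy figures are rough assumptions chosen for illustration, not Steatite or NOC test data.

    # Simplified Neutral Buoyancy Energy Density (NBED) estimate: a cell
    # denser than seawater needs syntactic foam to float, and the foam's
    # mass dilutes the pack's energy density. All values are assumptions.
    RHO_WATER = 1025.0   # kg/m^3, seawater
    RHO_FOAM = 500.0     # kg/m^3, assumed flotation foam

    def nbed_wh_per_kg(energy_wh, cell_kg, cell_litres):
        cell_m3 = cell_litres / 1000.0
        # Foam volume needed so total mass equals the mass of displaced water:
        foam_m3 = (cell_kg - RHO_WATER * cell_m3) / (RHO_WATER - RHO_FOAM)
        total_kg = cell_kg + RHO_FOAM * max(foam_m3, 0.0)
        return energy_wh / total_kg

    print("Li-ion:", round(nbed_wh_per_kg(250, 1.0, 0.40)), "Wh/kg")
    print("Li-S:  ", round(nbed_wh_per_kg(350, 1.0, 0.60)), "Wh/kg")

With these assumed numbers, the lighter, less dense Li-S chemistry needs less compensating foam, which is what stretches its underwater advantage beyond a plain gravimetric comparison.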

The energy within an AUV is limited, which limits its endurance. To conserve the available energy, AUV speeds are usually kept low, at 2-4 knots. Expanding this operational envelope means increasing the energy available within the vehicle, and the Li-S batteries do just that, increasing the vehicle's range and speed, as the sketch below illustrates.
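
To see why, consider a minimal range model: propulsion power grows roughly with the cube of speed (drag rises with the square of speed, and power is drag times speed), on top of a constant hotel load for sensors and computers. All the numbers here are assumptions for illustration, not SAUVIM or Steatite figures.

    # Illustrative AUV range model: propulsion power grows roughly with the
    # cube of speed (drag ~ v^2, power = drag x speed), plus a constant
    # "hotel" load for sensors and computers. All figures are assumptions.

    def range_km(speed_knots, energy_wh, hotel_w=50.0, k=2.0):
        """Estimated range in km at a given cruise speed."""
        v_ms = speed_knots * 0.5144          # knots to m/s
        propulsion_w = k * v_ms ** 3         # assumed cubic drag-power law
        hours = energy_wh / (hotel_w + propulsion_w)
        return v_ms * 3.6 * hours            # m/s to km/h, times hours

    for energy in (10_000, 20_000):          # Wh; doubling mimics Li-S packs
        best = max((range_km(v / 10, energy), v / 10) for v in range(5, 80))
        print(f"{energy} Wh: best range {best[0]:.0f} km at {best[1]:.1f} kn")

Under this model there is an optimum cruise speed of a few knots, where hotel and propulsion losses balance, and doubling the pack energy doubles the achievable range at any given speed.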

Cloud Storage and Alternatives

Ordinarily, every computer has some local memory storage capacity. Apart from the Random Access Memory or RAM, computers have either a magnetic hard disk drive (HDD) or a solid-state disk (SSD) to store programs and data even when power is shut off—RAM cannot hold information without power. The disk drive primarily stores the Operating System that runs the computer, other application programs, and the data these programs generate. Typically, such memory is limited and tied to a specific computer, meaning other computers cannot share it.

A user has two choices for adding more memory to a computer: either buy a bigger drive or add to the existing one, or use cloud storage. Various service providers offer remote memory storage, and the user pays a nominal rental amount for a specific amount of cloud memory.

There are several advantages to using such remote memory. Most cloud storage services offer desktop folders where users can drag and drop files from their local storage to the cloud and vice versa. As accessing cloud services requires only an Internet connection, the user can reach them from anywhere, while sharing the storage between several computers and users.

The user can treat the cloud service as a backup, storing a second copy of their important information. If an emergency strikes and the user loses all or part of the data on their computer, the copy held in cloud storage can be retrieved over the Internet to restore it. Cloud storage can therefore act as a disaster recovery mechanism.
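
As a concrete illustration, here is a minimal backup-and-restore sketch in Python using Amazon S3 through the boto3 library, with S3 standing in for any cloud storage provider; the bucket name and file paths are hypothetical.

    # Minimal backup sketch using Amazon S3 via boto3, one example of a
    # cloud storage provider. Bucket name and paths are hypothetical, and
    # credentials are assumed to come from the local AWS configuration.
    import boto3

    s3 = boto3.client("s3")

    def backup(local_path: str, bucket: str, key: str) -> None:
        """Push a second copy of an important file to cloud storage."""
        s3.upload_file(local_path, bucket, key)

    def restore(bucket: str, key: str, local_path: str) -> None:
        """Pull the copy back down after a local data loss."""
        s3.download_file(bucket, key, local_path)

    backup("reports/q3.xlsx", "example-backup-bucket", "reports/q3.xlsx")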

Compared to local memory storage, cloud services are much cheaper, so users can reduce their annual operating costs by using them. Additionally, the user saves on power expenses, as cloud storage does not require the user to supply the power that local storage would need.

However, cloud storage has its disadvantages. Dragging and dropping files to and from cloud storage takes finite time over the Internet, because cloud storage services usually limit the bandwidth available for a specific rental charge. Power interruptions or a bad Internet connection during a transfer can corrupt data. Moreover, the user cannot access data held in cloud storage unless an Internet connection is available.

Storing data remotely also raises concerns about safety and privacy. As the remote memory is likely to be shared with other organizations, there is a possibility of data commingling.

Therefore, many people prefer private cloud services, which are more expensive, over cheaper public cloud services. Private cloud services may also offer alternative payment plans that are more convenient for users; they usually run better software and give users greater confidence.

Another option private cloud services often offer is encryption of the stored data. That means only the actual user can make use of their data; others, even if they gain access, will see only garbage.
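
Client-side encryption achieves the same effect on any service: the user encrypts files before uploading them, so the provider only ever stores ciphertext. A minimal sketch using the Fernet recipe from Python's cryptography package, with hypothetical file names:

    # Client-side encryption with the "cryptography" package's Fernet
    # recipe: the file is encrypted before upload, so the provider stores
    # only ciphertext. File names here are hypothetical.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # losing this key means losing the data
    f = Fernet(key)

    with open("diary.txt", "rb") as fh:
        ciphertext = f.encrypt(fh.read())

    with open("diary.txt.enc", "wb") as fh:
        fh.write(ciphertext)      # upload this file, not the original

    plaintext = f.decrypt(ciphertext)   # only the key holder can do this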

What is a Wireless Router?

Most of the electronic gadgets we use today are wireless. When they have to connect to the Internet, they do so through a device called a router, which may be wired or wireless. Although wired routers were very common a few years ago, wireless routers have overtaken them.

Routers, as their name suggests, direct a stream of data from one point to another, or to multiple points. Usually, the source of the data is a transmitting tower belonging to the broadband provider. The connection from the tower to the router may be through a cable, a wire, or wireless. To redirect the traffic, the router may have multiple Ethernet ports to which users connect their PCs, or, as in the latest versions, it may transmit the data wirelessly. The only wire a truly wireless router will probably have is a cable to charge its internal battery.

Technically speaking, a wireless router is actually a two-way radio, receiving signals from the tower and retransmitting them for other devices to receive. A SIM card inside the router identifies the device to the broadband company, helping it keep track of the router's statistics. Modern wireless routers follow international wireless communication standards, 802.11n being the latest, although many are of the type 802.11b/g/n, meaning they conform to the earlier standards as well. Routers also differ in their operating speed and the band on which they operate.

The international wireless communication standards define the speeds at which routers operate. For instance, wireless routers of the 802.11b type are the slowest, with speeds reaching up to 11 Mbps. Those with the g suffix deliver a maximum of 54 Mbps, while those based on the 802.11n standard are the fastest, reaching up to 300 Mbps. However, a router can deliver data only as fast as the Internet connection allows; even a router rated for 300 Mbps will perform at no more than 100 Mbps on a 100 Mbps connection, as the snippet below illustrates. Nonetheless, a fast wireless router still speeds up the local network, allowing PCs to interact faster and making them more productive.
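
As a quick illustration, the snippet below compares the nominal ratings quoted above with an assumed 100 Mbps broadband plan; the effective speed is simply the lower of the two figures.

    # Effective speed is the lower of the router's wireless rating and the
    # Internet connection feeding it. Ratings are the nominal maxima given
    # above; the 100 Mbps broadband plan is an assumption.
    WIFI_MAX_MBPS = {"802.11b": 11, "802.11g": 54, "802.11n": 300}
    isp_mbps = 100

    for std, mbps in WIFI_MAX_MBPS.items():
        print(f"{std}: rated {mbps} Mbps, effective {min(mbps, isp_mbps)} Mbps")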

International standards allow wireless communication on two bands: 2.4 GHz and 5.0 GHz. Most wireless routers based on the 802.11b, g, and n standards use the 2.4 GHz band; these are the single-band routers. However, the 802.11n standard also allows wireless devices to operate on the 5.0 GHz band. Such dual-band routers can transmit on either of the two bands via a selection switch or, in some devices, operate on both frequencies at the same time.

The 802.11a standard allows wireless networking on the 5.0 GHz band. Routers that pair an 802.11a radio with an 802.11b/g/n radio on the 2.4 GHz band are likewise dual-band wireless routers, with two different radios supporting connections on both the 2.4 GHz and 5.0 GHz bands. The 5.0 GHz band offers better performance and lower interference, although its range is typically shorter than that of the 2.4 GHz band.

What Are PhotoRelays?

The classification of relays includes two main groups: the contact type, or electromechanical relays, and the contactless type, or semiconductor relays. While sub-groups of the mechanical type include signal relays and power relays, the contactless type includes solid-state relays and photorelays.

Solid-state relays generally use semiconductor photo-triacs, phototransistors, or photo-thyristors as the output device, and such relays are limited to AC loads alone. Photorelays, on the other hand, use MOSFETs as the output device, capable of handling both AC and DC loads. Photorelays are mainly used as replacements for signal relays.

Photorelays are available mainly in two packages: the frame type in an SO6 package, and the substrate type in an S-VSON package. Both packages use a photodiode-array (PDA) chip and a MOSFET chip encased in epoxy resin for a hermetic seal.

As its name suggests, a photorelay contains an LED that emits light when current passes through the diode. The emitted light crosses the isolation boundary and falls on the light sensor of the PDA chip, which in turn powers and drives the gate of the MOSFET. This turns the MOSFET on and allows AC or DC current to flow through the power terminals of the MOSFET.

Compared to the electromechanical signal relays that photorelays replace, the smaller mounting area offers a huge advantage in recovered board real estate. For instance, Toshiba is replacing large packages such as SOP, SSOP, and USOP with miniature packages such as the VSON and S-VSON types. Replacing signal relays with photorelays thus contributes greatly to the miniaturization of the device.

As photorelays have no moving parts to fail, they are more reliable than the mechanical relays they replace. The basic operation of the photorelay involves LED light triggering the photodiode array, which then drives the MOSFET. Mechanical relays, on the other hand, suffer from wear-induced degradation. Photorelays are maintenance-free, as they have no contacts.

Since an LED drives the photorelay, the drive circuit can be relatively simple compared with what a mechanical relay requires, namely a buffer transistor to boost the microcomputer output. The output pin of a microcomputer can drive a photorelay directly, as this is equivalent to driving an LED, requiring very low currents of 3 to 5 mA at most. Designers only need to consider the LED lifetime, plus a simple resistor calculation like the one below.
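
For completeness, here is the one calculation a designer typically performs, sizing the series resistor for the input LED by Ohm's law. The supply voltage, forward voltage, and target current are typical assumed values, not figures for any specific photorelay.

    # Sizing the series resistor for the photorelay's input LED, driven
    # directly from a microcontroller pin. Supply voltage, LED forward
    # voltage, and target current are typical assumed values.
    def led_series_resistor(v_drive=3.3, v_forward=1.3, i_led_ma=5.0):
        """Ohm's law across the resistor: R = (Vdrive - Vf) / If."""
        return (v_drive - v_forward) / (i_led_ma / 1000.0)

    print(f"Series resistor: {led_series_resistor():.0f} ohms "
          f"(nearest standard value: 390)")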

Mechanical relays suffer from chattering, or bouncing: the contacts connect and disconnect rapidly before finally settling down. In high-speed electronic devices, this chattering can cause the relay status to be misread. Moreover, every mechanical relay requires an additional diode to take care of the high voltage generated by back electromotive force. Photorelays suffer from neither chattering nor back EMF.

Unless connected to the cold side of a circuit, mechanical relays have a shorter lifespan, as their contacts arc when they open at high voltage. For a photorelay, on the other hand, it does not matter whether it connects to the hot or the cold side.

However, unlike mechanical relays, photorelays cannot offer normally closed contacts without power being applied to the LED.

Nano-Diamonds Help Prevent Lithium Battery Fires

Last year, airlines banned the Samsung Galaxy Note 7 from flights because of its battery-related fires and explosions. Scientists researching the source of the runaway heat buildup found the culprit to be small dendrites forming between the anode and cathode of the battery. A materials specialist from Drexel University in Philadelphia has proposed a low-cost, easy solution to preclude dendrite formation.

According to Drexel professor Yury Gogotsi, this simply requires mixing nanodiamonds into the regular lithium-ion electrolyte at one percent concentration. Gogotsi discovered the method along with a doctoral candidate from Tsinghua University in Beijing. However, Gogotsi found it rather easier to confirm that the nanodiamond additive works than to get Samsung and other OEMs and Li-ion battery producers to adopt the concept.

Gogotsi had to use internal financial support from Drexel to prove the concept. His team is now trying to interest industrial partners in funding to characterize the process in more detail. Specifically, they have yet to determine how much nanodiamond must be added to the electrolyte for particular applications.

As Li-ion battery technology is already expensive, cost-conscious manufacturers may well be wary of increasing the cost of batteries by adding nanodiamonds. However, according to Gogotsi, the concern is unfounded: contrary to popular belief, nanodiamonds are not expensive but cheap to manufacture. Moreover, they can easily be created from waste materials.

Gogotsi suggests a very simple method of manufacturing nanodiamonds: taking expired explosives, which are otherwise expensive to dispose of, and detonating them in a sealed chamber. The coating on the walls of the chamber will be more than 50% nanodiamonds, with a typical size of 5 nanometers. This is reminiscent of Superman making diamonds from coal in the popular comic books. The presence of nanodiamonds in the electrolyte of a lithium-ion battery prevents the formation of the dendrites that create shorts, which result in runaway heat build-up and subsequent fires.

Although Gogotsi uses nanodiamonds in his lab, the process for creating them came from Russia, where three separate laboratories independently perfected the technique and kept it very secret.

A description of the process finally emerged in a publication from the Los Alamos National Laboratory, and people worldwide now use the technique to turn hard-to-dispose-of waste, including expired C4, into marketable products. Several manufacturers use nanodiamonds widely in their products, including medical coatings, industrial abrasives, and electronic sensors for measuring magnetic fields.

According to Gogotsi and his team, the nanodiamonds work as an electrolyte additive that co-deposits with lithium ions, producing dendrite-free deposits of lithium. This is because lithium prefers to adsorb onto the nanodiamond surfaces, leading to a uniform deposit of lithium arrays. This uniform deposition enhances the cycling performance of the electrolyte, leading to stable cycling of lithium.

As the nanodiamond co-deposition significantly alters the plating behavior of lithium, the process offers a promising method of suppressing the growth of lithium dendrites in batteries using lithium metal.

What is Better for Quality — Visual or Machine Inspection?

Although the human eye is a wonderfully complex instrument, it has its limitations. For instance, when inspecting objects on a production line, the human eye cannot compete with machine vision, which is not only faster but also considerably more accurate.

The human eye works in tandem with the brain, allowing us to perceive our surroundings. We can recognize things in a split second, even when their exact shape varies. We can analyze our environment keenly, and although we have a wide field of view under normal conditions, our vision is flexible enough to let us focus very sharply on particular areas of interest. As humans learned to survive in different environments and under different stimuli, the abilities of the eye gradually evolved over the millennia.

However, in adapting to our environments and circumstances, our visual capabilities remained limited to the natural world. For instance, we have only two eyes, because stereoscopic vision is adequate. We do not need to see moving objects in detail, as the perception of movement is enough. We are sensitive to only a limited portion of the light spectrum, and we are unable to adjust to glare and reflections, which impede our ability to focus for long on particular properties of an object, mainly its size and color. Not only are we quite subjective in perceiving and remembering images, but our eyes are incapable of making accurate measurements. Therefore, our eyes are not the ideal instruments for verifying product quality.

Automatic inspection and analysis, based on imaging or machine vision, surpasses the performance of the human eye. Machine vision can be more accurate for reliable product inspection, and it can be combined with different technologies to ensure the highest quality in production environments.

For instance, consider a fast-paced production environment where long-term reliability is essential: the human eye cannot inspect 20 products moving past every second, where error detection requires an accuracy of 0.02 square millimeters. Manual inspection with a whole team of people may be attempted, but this would undermine the objectivity of the inspection. Engineers solve the problem with machine vision. Six cameras observe the fast-moving products, and because the cameras use polarized light with strobed exposure and very short shutter speeds, they create extremely sharp images on which the defects stand out perfectly. Computers use special software to search for the defects within 50 milliseconds, and the system can keep this up 24 hours a day, every day. A toy version of the detection step appears below.
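
A toy version of that detection step, written with OpenCV in Python, shows the idea: threshold each frame, find blobs, and flag any blob larger than 0.02 square millimeters. The pixel-to-millimeter calibration and the assumption that defects appear dark are illustrative, not details of any real inspection system.

    # Toy inspection step with OpenCV: threshold the frame, find blobs,
    # and flag any blob larger than 0.02 mm^2. The calibration constant
    # and the dark-defect assumption are illustrative only.
    import cv2

    MM2_PER_PIXEL = 0.0001   # assumed: one pixel covers 0.01 mm x 0.01 mm

    def find_defects(image_path, min_area_mm2=0.02):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        # Assume defects show up dark against a bright product surface.
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [c for c in contours
                if cv2.contourArea(c) * MM2_PER_PIXEL >= min_area_mm2]

    print(f"{len(find_defects('frame_0001.png'))} defect(s) found")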

Our eye is capable of learning. We can spot anomalies or defects in products and recognize them as defects, even though we have never seen that particular defect before. For instance, we recognize a scratch on a surface as a defect and know it should not be there. We therefore have exceptional interpretative abilities, and for a long time, the human eye was almost unbeatable in this area.

However, machine vision technology is advancing, and in many cases, it is able to rival our interpretative abilities. Although this requires fast computers and self-learning vision algorithms, these are easily available. Therefore, machine vision is catching up fast with the capabilities of the human eye.

Why Panic Buttons are Going Wireless

Panic buttons or emergency stop switches are extremely important for protecting workers, machinery, and products from catastrophic failures. Traditionally, manufacturers include them with their machinery, and most are hard-wired. However, things are changing, and now these red emergency switches are finally going wireless.

When a machine malfunctions, or a critical incident occurs, operators often have to press these last resort switches to bring the system or the entire machine to a halt quickly and safely. Hence, these switches are aptly called E-stop, emergency, or panic switches. Operating these switches brings the machine or the system to a halt and prevents serious damage to products or the machine itself, as well as preventing injury to workers.

The importance of the emergency stop button is evident from a report by OSHA, the Occupational Safety and Health Administration. According to this report, more than 5,000 US workers were fatally injured in industrial accidents in 2015.

Ever since the second industrial revolution, manufacturers have hard-wired E-stops into their machines as the standard way to shut them down in an emergency. Usually, manufacturers place these emergency switches well apart from the on/off and other switches the machines normally carry on their control panels, making it easier for the operator to identify and hit them to stop the machine. With the E-stop being functionally so important, it is understandable that manufacturers were reluctant to make it wireless. However, a wireless E-stop device allows the worker to shut the machine down without even having to go near it, improving the safety factor.

The tech company Laird PLC, of London, has recognized this additional benefit of a wireless E-stop button and has developed the Safe-E-Stop. The Safe-E-Stop can be incorporated into the existing hard-wired emergency stop system already installed on production systems such as assembly lines. This improves on-the-job safety, as an individual operator or a group can immediately shut down a machine in the production line without having to physically hit the hard-wired E-stop button.

In an emergency, the closest machine-mounted E-stop button may sit inside the same danger zone, so an operator rushing in to press it could face the hazard directly, increasing the response time for arresting the emergency.

Laird PLC has developed a wireless personal safety solution rated at SIL 3 as an answer to this problem. Rockwell Automation distributors market Laird's Safe-E-Stop, making it available through Rockwell Automation's Encompass partner program.

Users get continuous status indication on LED indicators and readouts on the Safe-E-Stop system. They can use the EtherNet/IP port on the MSD, or Machine Safety Device, to report the status of actuated wireless E-stops to the personnel in charge of operations. As many as five PSDs, or Personal Safety Devices, can link to the MSD simultaneously, allowing multiple operators to collaborate or work independently while supervising the operation. Activation of an E-stop on any linked PSD causes the MSD to issue a stop command and immediately notify all other PSDs of the stop condition. The sketch below models this arrangement.
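
A highly simplified software model of that MSD/PSD relationship, written in Python, illustrates the topology: up to five personal safety devices link to one machine safety device, and an E-stop press on any of them stops the machine and notifies the rest. This sketches the behavior described above, not Laird's actual wireless protocol.

    # Simplified model of the MSD/PSD topology: up to five PSDs link to
    # one MSD; any E-stop press stops the machine and notifies the other
    # PSDs. An illustration of the behavior, not Laird's protocol.
    class MachineSafetyDevice:
        MAX_PSDS = 5

        def __init__(self):
            self.psds, self.running = [], True

        def link(self, psd):
            if len(self.psds) >= self.MAX_PSDS:
                raise RuntimeError("at most five PSDs may link to an MSD")
            self.psds.append(psd)
            psd.msd = self

        def e_stop(self, source):
            self.running = False               # issue the stop command
            for psd in self.psds:
                if psd is not source:
                    psd.notify_stop()          # alert every other operator

    class PersonalSafetyDevice:
        def __init__(self, name):
            self.name, self.msd = name, None

        def press(self):
            self.msd.e_stop(self)

        def notify_stop(self):
            print(f"{self.name}: machine stopped by another operator")

    msd = MachineSafetyDevice()
    for name in ("PSD-1", "PSD-2"):
        msd.link(PersonalSafetyDevice(name))
    msd.psds[0].press()   # operator 1 hits the wireless E-stop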

Raspberry Pi Drives the Oton Glass

Imagine standing in front of several road signs but being unable to locate the one you want, because they are all written in a foreign language. This is a job for the Oton Glass, a device that captures the image, translates it into a language of your choice, and reads it to you in your ear. Not only will this help travelers abroad, it will also help people with poor vision and those suffering from dyslexia.

The Oton Glass is the effort of Keisuke Shimakage, who says he was inspired to develop the device by his father's dyslexia. Keisuke assembled a team of engineers and designers from the Media Creation Research Department at the Institute of Advanced Media Arts and Sciences, Japan, and started on the project.

At the heart of the Oton Glass is a Raspberry Pi 3 (RBPi3), along with two tiny cameras and an earphone. One camera sits on the inside of the spectacle frame, tracking the user's eyes. As soon as it detects the user blinking with no eyeball movement, another camera on the outside of the frame captures an image of whatever the user is looking at. The RBPi3 then processes the image, running it through an optical character recognition program. If there are any written words in the image, the RBPi3 converts them to speech and plays them through the earphone into the user's ear. A rough sketch of this pipeline follows.
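
A rough sketch of the capture, OCR, and speech flow on a Raspberry Pi might look like the following, using pytesseract for character recognition and the espeak command for audio output. The real Oton Glass software is not public, so this only approximates the pipeline described above, with the blink-detection trigger reduced to a comment.

    # Sketch of the capture-OCR-speech flow, using pytesseract for
    # character recognition and the espeak command for audio output.
    # Approximates the described pipeline; not the Oton Glass code.
    import subprocess

    import pytesseract
    from PIL import Image

    def read_aloud(image_path, lang="eng"):
        text = pytesseract.image_to_string(Image.open(image_path), lang=lang)
        if text.strip():
            subprocess.run(["espeak", text])   # play through the earphone

    # Called when the inner camera sees a blink with no eye movement:
    read_aloud("outer_camera_capture.jpg")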

Although the initial prototype of the Oton Glass was slow in capturing and replaying the text into audio, the team was able to cut down the time from 15 seconds to a mere 3 seconds in their second prototype.

The team designed the case in CAD software and 3D-printed it so they could test it in real-life situations. With feedback from dyslexic users, they were able to upgrade the device further.

At present, the Oton Glass is doing the rounds of several trade and tech shows throughout Japan and is ready for public distribution. Trials are underway with models of the device at the Nippon Keihan Library, the Kobe Eye Centre, and the Japan Blind Party Association. The Oton Glass won the runner-up prize for the James Dyson Award in 2016 and has generated huge attention at several other award shows and in the media.

In front of the inner camera of the Oton Glass is a lens with a half mirror that reflects the user's eye, which the camera tracks for movement. The outer camera waits for its trigger, the blink of a still eye that results from the wearer reading something, then captures the image and passes it to the RBPi3.

The RBPi3 uses optical character recognition software to pick out any characters in the image. It then uses artificial voice technology to turn the words into sounds whose meaning the user can understand. If the Oton Glass is unable to recognize some characters, it sends them to a remote server to decipher. This allows the Oton Glass to translate anything the user sees. The device integrates the cameras into the glasses and looks very much like a normal pair.

What Are Super-Junction MOSFETs?

Switching power-conversion systems such as switching power supplies and power factor controllers increasingly demand higher energy efficiency. For energy-conscious designers, super-junction MOSFETs are a favored solution, as the technology allows smaller die sizes for key parameters such as on-resistance. This increases current density while enabling designers to reduce circuit size. As market adoption of the new technology goes up, other challenges are coming to the fore, mainly the requirement for improved noise performance.

High-end power supplies for equipment such as LED lighting, LCD TVs, notebook power adapters, medical systems, and tablets require reduced electromagnetic noise emission. Designers prefer resonant switching topologies such as the LLC converter with zero-voltage switching, as these have inherently low electromagnetic emissions. Super-junction transistors on the primary switching side of an LLC circuit help designers achieve a compact and energy-efficient power supply.

Compared to a conventional planar silicon MOSFET, the super-junction MOSFET has significantly lower conduction loss for a given die size. Additionally, the architecture of the super-junction device allows lower gate charge and capacitances, leading to lower switching losses than conventional silicon transistors.

Fabricators used a multi-epitaxial process to structure the early super-junction devices. They doped the N-region richly, allowing a much lower on-resistance than conventional planar transistors, and adapted the P-type region bounding the N-channel to achieve the desired breakdown voltage.

The multi-epitaxial processes resulted in N- and P-type structures dimensionally larger than ideal, with an associated impact on overall device size. The nature of multi-epitaxial fabrication also restricted how far the N-region could be engineered to minimize on-resistance. Therefore, fabricators now use single-epitaxial fabrication processes, such as deep-trench filling, to optimize the aspect ratios of the N- and P-regions, minimizing on-resistance while also reducing the size of the MOSFET.

For instance, the single-epitaxial fabrication process allows the DTMOS IV family of Toshiba's fourth-generation super-junction MOSFETs to achieve a 27% reduction in device pitch, while also reducing the on-resistance per die area by 30%. Similarly, Toshiba's DTMOS V, based on the deep-trench process, brings further improvements at the cell-structure level.

Thanks to the single-epitaxial process, super-junction MOSFETs deliver more stable performance in the face of temperature changes. Power converters with conventional MOSFETs are noted for reduced efficiency at higher operating temperatures, which super-junction MOSFETs are able to counter: for instance, they show a 12% lower on-resistance at 150°C. The snippet below puts rough numbers on what these figures mean for conduction loss.
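
To put rough numbers on those percentages: conduction loss scales as the square of load current times on-resistance. The baseline resistance and load current below are assumptions for illustration; only the 30% and 12% figures come from the text above.

    # Conduction loss scales as P = I^2 x R_DS(on). Only the 30% and 12%
    # figures come from the text above; the baseline on-resistance and
    # load current are assumptions for illustration.
    I_LOAD = 10.0      # A, assumed load current
    R_PLANAR = 0.20    # ohms, assumed conventional planar baseline

    r_sj = R_PLANAR * (1 - 0.30)   # 30% lower on-resistance per die area
    print(f"planar MOSFET:  {I_LOAD ** 2 * R_PLANAR:.1f} W conduction loss")
    print(f"super-junction: {I_LOAD ** 2 * r_sj:.1f} W conduction loss")
    # The 12% lower on-resistance quoted at 150 C translates directly
    # into 12% lower conduction loss at that temperature.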

Power converters using the fifth-generation super-junction DTMOS V devices can now deliver low-noise performance along with superior switching performance. A modified gate structure and patterning achieves this, resulting in an increase in the reverse transfer capacitance between the gate and drain of the device.

What is USB Type-C Interface?

All new electronic devices now come with the USB-C interface, and it is revolutionizing the way people charge their devices. Until recently, most electronic devices had micro-USB Type-B connectors. With the USB Type-C connector, the orientation of the charging cable is immaterial: the reversible connector goes in either right side up or upside down. The connection system is smart enough to figure out the polarity as part of the negotiation process, and it supports bidirectional power flow at a much higher level.

Earlier USB connectors handled only the 5 VDC fed into them. The USB-C port starts at the default 5 V and, depending on the plugged-in device, can raise the port voltage up to 20 V, or any mutually agreed voltage, at a preconfigured current level. The maximum power delivery you can expect from a USB-C port is 20 V at 5 A, or 100 W, which is more than adequate for charging a laptop. No wonder electronic device manufacturers are opting to incorporate USB-C into their next-generation products. The snippet below shows the arithmetic.
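
The arithmetic is simply volts times amps at each negotiated level. The (voltage, current) pairs below run from the 5 V default up to the 20 V, 5 A maximum mentioned above; the intermediate levels are typical examples rather than a complete USB PD profile list.

    # USB-C power is volts times amps at the negotiated level. The pairs
    # below run from the 5 V default to the 20 V / 5 A maximum; the
    # intermediate levels are typical examples, not a full PD profile list.
    profiles = [(5.0, 3.0), (9.0, 3.0), (15.0, 3.0), (20.0, 5.0)]

    for volts, amps in profiles:
        print(f"{volts:4.1f} V x {amps:.1f} A = {volts * amps:5.1f} W")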

With the increased power delivery through USB Type-C ports, the computer industry has had to raise the performance requirements of the voltage regulator. Unlike the fixed-voltage USB Type-A and Type-B ports, the USB Type-C port is bidirectional, with a variable input and an output range of 5-20 VDC. This adjustable output voltage allows manufacturers of notebooks and other mobile devices to use USB Type-C ports to replace conventional AC/DC power adapters and USB Type-A and Type-B terminals. Manufacturers are taking advantage of these features by incorporating dual or multiple USB Type-C ports into their devices.

However, implementing dual or multiple USB Type-C ports with the current system architecture leads to a complicated design that cannot meet many customer requirements. As a solution, Intersil has proposed a new system architecture using the ISL95338 buck-boost regulator and the ISL95521A combo battery charger. Using these devices simplifies the design of the USB-C functions while fully supporting all features. Applied on the adapter side, the architecture lets manufacturers implement a programmable power supply, offering an adjustable output voltage that matches the USB-C variable input voltage.

In the proposed design, Intersil offers an architecture with two or more ISL95338 devices in parallel, each interfacing a USB Type-C port to the ISL95521A battery charger. As this architecture eliminates several components from the conventional charging circuit, including individual PD controllers, ASGATEs, and OTG GATEs, it saves manufacturers significant cost. For charging a battery, power is drawn directly from the USB-C input to the ISL95521A, and the multiple ISL95338s offer additional options.

For instance, the user can apply two or more USB-C inputs with different power ratings to charge the battery quickly, so the battery charge power can be higher than a single USB-C input could supply. It also means there is no need for external circuitry to coordinate the different power ratings of the paralleled ISL95338 voltage regulators.