
Butterfly iQ – Smartphone-Connected Ultrasound Scanner

Traditional ultrasound scanners are rather expensive, and people rarely own one for use at home. That may be about to change: Butterfly Network has obtained FDA clearance for the Butterfly iQ, a portable ultrasound scanner that anyone can use by connecting it to a smartphone.

Connecticut-based Butterfly Network has made an innovative ultrasound scanner that uses a semiconductor chip to generate the ultrasonic signals, rather than the piezoelectric crystal transducers that traditional ultrasound machines use. The semiconductor-based transducer is much easier to manufacture than piezoelectric ones.

Using a semiconductor chip makes the device far less expensive than existing ultrasound scanners. The cost of ownership comes down further because the scanner operates with a smartphone, as do other smartphone-connected scanners such as the Philips Lumify.

According to Dr. Jonathan Rothberg, founder and chairman of Butterfly Network, this ultrasound-on-a-chip technology opens a low-cost window for peering into the human body, allowing anyone to access high-quality diagnostic imaging. With more than two-thirds of the world's population lacking access to proper medical imaging, this effort by Butterfly is a promising beginning.

The FDA has cleared the device for 13 clinical use cases, including pediatric, urological, gynecological, cardiac, abdominal, and fetal imaging. The scanner transfers the captured imagery directly to the user's smartphone via a cord, and the smartphone uploads the images to a HIPAA-compliant cloud.

As reported by the MIT Technology Review, the chief medical officer of Butterfly Network, Dr. John Martin, detected a cancerous growth in his own body while testing the scanner, an example of the potential of this low-cost device.

According to Martin, healthcare providers will be able to afford this easy-to-use, powerful, whole-body medical imaging system for less than $2,000, and it will fit in their pockets. As the price barrier comes down, Martin expects the Butterfly device ultimately to replace the stethoscope in the daily practice of medicine. The impact of such a low-cost diagnostic system can be gauged from the help it could offer the hundreds of thousands of women who die in childbirth and the millions of children who die of pneumonia each year.

After perfecting the scanner, Butterfly plans to augment the hardware with artificial-intelligence software that helps clinicians interpret the images the device picks up. The company expects products with many new features to be ready for the market by 2018. At present, the device works only with iPhones.

According to Gioel Molinari, President of Butterfly Network, ultrasound imaging is a perfect match for deep learning. As more physicians use the devices in the field, the neural-network models keep improving, and both image acquisition and interpretation get better. This, in turn, will help less-skilled users extract life-saving insights from the images the Butterfly iQ ultrasound scanner captures in the field.

Cloud Storage and Alternatives

Ordinarily, every computer has some local storage capacity. Apart from its Random Access Memory (RAM), a computer has either a magnetic hard disk drive (HDD) or a solid-state drive (SSD) to hold programs and data even when the power is shut off; RAM cannot hold information without power. The drive primarily stores the operating system that runs the computer, other application programs, and the data these programs generate. Typically, such storage is limited and tied to a specific computer, so other computers cannot share it.

A user has two choices for adding more storage to a computer: buying a bigger drive (or adding to the existing one), or using cloud storage. Various service providers offer remote storage, with the user paying a nominal rental charge for a specific amount of cloud space.

Using such remote storage has several advantages. Most cloud storage services offer desktop folders where users can drag and drop files from local storage to the cloud and vice versa. Because accessing a cloud service requires only an Internet connection, the user can reach it from anywhere and share it between several computers and users.

The user can treat the cloud service as a backup holding a second copy of important information. If an emergency strikes and the user loses all or part of the data on a computer, the copy in cloud storage can be restored over the Internet. Cloud storage can therefore act as a disaster-recovery mechanism.

Compared to local storage, cloud services are much cheaper, so users can reduce their annual operating costs by using them. The user also saves on power, as cloud storage does not need the locally supplied electricity that on-site storage hardware would.

However, cloud storage has its disadvantages. Moving files to and from the cloud takes time, because cloud storage services usually limit the bandwidth available at a given rental charge. Power interruptions or a bad Internet connection during a transfer can corrupt data. Moreover, the user cannot access data held in cloud storage at all unless an Internet connection is available.
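To get a feel for the time involved, a back-of-the-envelope estimate simply divides the file size by the available bandwidth. A minimal sketch, with purely illustrative figures:

```python
def transfer_time_hours(file_size_gb: float, bandwidth_mbps: float) -> float:
    """Estimate transfer time: file size (gigabytes) over link speed (megabits/s)."""
    size_megabits = file_size_gb * 8 * 1000   # 1 GB is roughly 8,000 megabits
    seconds = size_megabits / bandwidth_mbps
    return seconds / 3600

# Backing up 500 GB over a 20 Mbps uplink takes roughly 56 hours (over two days).
print(f"{transfer_time_hours(500, 20):.1f} hours")
```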

Storing data remotely also raises concerns of safety and privacy. As the remote storage is likely to be shared with other organizations, there is a possibility of data commingling.

This is why many people prefer private cloud services, which are more expensive, over cheaper public cloud services. Private cloud services may also offer alternative payment plans that are more convenient for users, and they usually run better software and give users greater confidence.

Another option private cloud services often offer is encryption of the stored data. Only the actual owner can then make use of the data; others, even if they gain access, will see only gibberish.
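As a sketch of how client-side encryption achieves this, the snippet below uses Python's cryptography package (an assumption on our part; any equivalent library works) to encrypt data before upload, so the provider only ever stores ciphertext:

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # kept by the user, never sent to the cloud
cipher = Fernet(key)

plaintext = b"sensitive business records"
token = cipher.encrypt(plaintext)  # this opaque token is what gets uploaded

# Anyone who reads the stored object without the key sees only gibberish;
# the key holder can recover the original bytes at any time.
assert cipher.decrypt(token) == plaintext
```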

What is a wireless router?

Most of the electronic gadgets we use today are wireless. When they have to connect to the Internet, they do so through a device called a router, which may be wired or wireless. Although wired routers were very common a few years back, wireless routers have since overtaken them.

Routers, as their name suggests, direct a stream of data from one point to another, or to multiple points. Usually, the source of the data is a transmitting tower belonging to the broadband provider. The connection from the tower to the router may be through a cable, a wire, or a wireless link. To redirect the traffic, the router may offer multiple Ethernet ports to which users connect their PCs, or, as in the latest versions, it may transmit the data wirelessly. The only wire a truly wireless router will probably have is a cable to charge its internal battery.

Technically speaking, a wireless router is a two-way radio, receiving signals from the tower and retransmitting them for other devices to receive. In mobile broadband routers, a SIM card inside the device identifies it to the broadband company, helping the company keep track of the router's statistics. Modern wireless routers follow international wireless communication standards, with 802.11n the most recent in wide use; many are marked 802.11b/g/n, meaning they conform to the earlier standards as well. Routers also differ in their operating speed and the band on which they operate.

The international wireless communication standards define the speeds at which routers operate. Wireless routers of the 802.11b type are the slowest, reaching up to 11 Mbps. Those with the g suffix deliver a maximum of 54 Mbps, while those based on the 802.11n standard are the fastest, reaching up to 300 Mbps. However, a router can deliver data only as fast as the Internet connection allows: a router rated n, or 300 Mbps, on a 100 Mbps connection will still perform at 100 Mbps at most. Nonetheless, a fast wireless router speeds up the local network itself, allowing PCs to interact faster and making them more productive.
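In other words, the effective throughput is simply the minimum of the wireless link rate and the Internet connection speed, a relationship small enough to express directly (link rates taken from the standards above):

```python
# Theoretical maximum link rates, in Mbps, for the standards discussed above.
LINK_RATE_MBPS = {"802.11b": 11, "802.11g": 54, "802.11n": 300}

def effective_throughput(standard: str, internet_mbps: float) -> float:
    """Data can flow no faster than the slower of the wireless link
    and the upstream Internet connection."""
    return min(LINK_RATE_MBPS[standard], internet_mbps)

print(effective_throughput("802.11n", 100))  # -> 100: the WAN is the bottleneck
```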

International standards allow wireless communication on two bands: 2.4 GHz and 5.0 GHz. Most wireless routers based on the 802.11b, g, and n standards use the 2.4 GHz band; these are the single-band routers. The 802.11n standard, however, also allows devices to operate on the 5.0 GHz band. Dual-band routers can transmit on either band via a selection switch or, in some devices, operate on both frequencies at the same time.

The older 802.11a standard allows wireless networking on the 5.0 GHz band. Dual-band wireless routers of this kind carry two different radios, one for the 5.0 GHz band and one for the 2.4 GHz band used by the 802.11b, g, and n standards, supporting connections on both. The 5.0 GHz band offers better performance and lower interference, although its range is somewhat shorter than that of 2.4 GHz.

What is Optane Memory?

Optane is a revolutionary class of memory from Intel that bridges the gap between dynamic RAM and storage to deliver an intelligent, amazingly responsive computing experience. Intel claims a 28% increase in overall system performance, hard-drive access up to 14 times faster, and a doubling of responsiveness in everyday tasks.

However, this revolution is not for everyone. Optane works only on systems based on 7th-generation Intel Core processors paired with affordable, large-capacity storage. For such systems, Intel promises shorter boot times, faster application launches, an extraordinarily fast gaming experience, and responsive browsing. There is a further catch: you need to be running the latest Windows 10 operating system to take full advantage of Optane.

According to Intel, Windows 10 users on 7th-generation Intel Core systems can expect their computers to boot twice as fast as before, web browsers to launch five times faster, and games to launch up to 67% faster. Intel describes Optane memory as an adaptable system accelerator that adjusts to the tasks of the computer it is installed in, running them more easily, smoothly, and quickly. Intelligent software from Intel automatically learns the user's computing behavior, accelerating frequent tasks and customizing the computing experience.

Intel's new system-acceleration solution places the memory-media module between the processor and the slower SATA-based storage device, whether SSHD, HDD, or SATA SSD. Based on 3D XPoint memory media, the module stores commonly used programs and data close to the processor. This lets the processor access that information more quickly, improving the responsiveness of the overall system.
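The benefit of such a cache can be estimated from the average access time, weighted by how often requests hit the fast 3D XPoint layer. A minimal sketch, with purely illustrative latency figures:

```python
def avg_access_time_us(hit_ratio: float, cache_us: float, disk_us: float) -> float:
    """Average latency of a cache placed in front of slower storage."""
    return hit_ratio * cache_us + (1.0 - hit_ratio) * disk_us

# Assumed figures: ~10 us for a cache hit, ~10,000 us (10 ms) for an HDD seek.
for hit_ratio in (0.0, 0.8, 0.95):
    latency = avg_access_time_us(hit_ratio, 10, 10_000)
    print(f"hit ratio {hit_ratio:.0%}: {latency:,.0f} us on average")
```

The steep improvement as the hit ratio climbs is why keeping frequently used programs and data close to the processor pays off.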

However, the Intel Optane memory module is not a replacement for the system Dynamic RAM. For instance, if a game requires X GB of DRAM, it cannot be divided between DRAM and Optane memory to meet the game requirements. Regular PC functioning will continue to require the necessary amount of DRAM.

Those who already have a solid-state drive (SSD) installed in their computers can also add Intel Optane memory for additional speed benefits, as Optane can extend acceleration to any type of SATA SSD. However, the performance gains are greater when Optane fronts a slower magnetic HDD than when it is installed in a system with a faster SATA SSD.

Although other caching solutions exist, such as those using NAND technology, Intel's Optane memory is entirely different. It is a high-performance, high-endurance solution with low latency and high quality of service (QoS). Optane uses the revolutionary new 3D XPoint memory media, which not only performs well at low capacities but also has the endurance to withstand repeated read and write cycles.

In addition, Intel's new Rapid Storage Technology driver, with its leading-edge algorithm, makes for a user-friendly, intuitive installation, with an easy setup process that automatically configures the system to match the user's needs.

Why do you need good grounding?

Grounding is a safety measure for electrical and electronic systems whereby the user is protected from accidentally coming in contact with electrical hazards. For instance, refrigerators at home usually stand on rubber feet, even when operating from the AC outlet. Although electricity enters the refrigerator and runs through most of the electrical components within it, it has no connection to the outer metal body. Rather, the outer metal body of the refrigerator connects independently to a green grounding wire, which leads to the third pin (the thickest one) on the power plug.

If the outer metal body of the refrigerator were not grounded in this way and, for some reason, electricity came in contact with the metal chassis, say through leakage, it could injure anyone who touched the refrigerator. Connecting the outer metal body to the grounding wire protects a person from electrocution, as any electricity present on the metal body passes directly to the earth instead of through the person.

This presumes the third pin on the power plug is connected to a good grounding arrangement outside the building, typically a ground rod or grounding electrode driven into the soil. The arrangement works because the earth is a good conductor of electricity, and the overhead transformer supplying power to the area also has a grounding arrangement near it, which completes the circuit for the refrigerator's leakage current. A good grounding arrangement is therefore essential for safety.

Apart from safety, most electronic equipment, such as computers, microwave ovens, LED lights, and televisions, needs a secure ground to operate effectively. Electronic equipment can generate significant electrical noise that affects other equipment nearby, damaging it or making it work less effectively. Proper grounding helps remove the unwanted noise, allowing all the equipment to interoperate more effectively.

Another advantage of a good grounding system is that it helps protect against lightning. A lightning strike delivers high voltage with fast rise times and drives currents of large magnitude. A grounding system must present a low-resistance path for these currents to enter the earth without damaging the building or the equipment within it.

Low resistance, or more generally low impedance, of the grounding path is therefore the key to protection from electrical leakage, noise, and lightning strikes. Good practice keeps all grounding connections as short and direct as possible, using heavy-gauge wire, preferably copper. This minimizes inductance and reduces the peak voltages induced.
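A rough calculation shows why conductor length matters so much. Taking an assumed wire inductance of about 1 µH per metre and a lightning current rising at 10 kA/µs (both illustrative figures, not from the article), the induced voltage V = L × di/dt grows quickly with every extra metre:

```python
# Assumed, typical figures for illustration only.
L_PER_METRE_H = 1e-6           # ~1 microhenry per metre of straight wire
DI_DT_A_PER_S = 10e3 / 1e-6    # lightning current rising 10 kA per microsecond

def induced_voltage_kv(length_m: float) -> float:
    """Peak voltage induced along a ground lead: V = L * di/dt."""
    return length_m * L_PER_METRE_H * DI_DT_A_PER_S / 1e3

print(f"{induced_voltage_kv(3.0):.0f} kV across a 3 m ground lead")  # ~30 kV
```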

The effectiveness of a grounding system in coupling unwanted electricity to ground depends on a number of factors, primarily the geometry of the ground electrode, the size of the conductors, the effective coupling into the soil, and the resistivity of the soil around the electrode.

The basic requirement of any ground installation is therefore to maximize the contact area between the electrode and the surrounding soil, which lowers the earth resistance and impedance.

Variables in Lead-Free Reflow for PCBs

Reflow ovens often show a degree of variability from profile to profile, depending on the distribution of components on the board, especially components that are slow-heating, heat-sensitive, or of high mass. In general, no reflow system can generate a single reflow profile that produces capable thermal results for all products.

For instance, a large BGA package on the PCB may not allow more than five degrees of variation near the peak of the reflow profile. So even while the BGA joints solder well, there is a risk of frying smaller components near the BGA package.

Variables during reflow can also result from several external factors, some of which can be controlled only partially and others not at all. The PCB may be non-uniform, its components may have varying thermal characteristics, or the tolerances of the process controller could be the major contributor. Even the exhaust can act as an external factor.

Oven loading is another major factor when creating a custom reflow profile for a high-layer-count PCB. The oven's behavior depends on the number of PCBs passing through it, as the total mass of the boards and their speed through the oven influence the rate of temperature rise. The load capacity of a reflow oven is usually measured in boards per minute, and this value differs between a single PCB passing through and a batch of several at a time.

Customers often demand demonstrable settings for the custom reflow profile of their boards. It may be necessary to show that a given setting fulfills the board's thermal-profile requirements without damaging any other component on it. Sometimes documentation is required as evidence that a particular assembly is indeed within specification. One advantage of creating a custom profile for a board is the total visibility it brings to the lead-free reflow process when handling that board.

Automated Methods for Lead-Free Reflow

A reflow profiler, such as the one made by KIC, is the most popular instrument assemblers use for profiling the groups of boards they assemble. It works equally well when profiling individual boards automatically and continuously, and assemblers also use it for clusters of boards spanning two or three categories.

The KIC profiler is accompanied by Navigation prediction software, which helps drive the generated profile deep within the board's specifications. Typically, actual profiles must be run on a few boards matching the representative profile for the group, and the process repeated periodically to ensure the settings remain valid. Used along with the Navigation prediction software, the KIC profiler saves much time and effort when establishing lead-free reflow settings for high-layer-count PCBs.

Conclusion

Lead-free reflow of high-layer-count PCBs need not be a tiresome exercise, provided a custom reflow profile can be set up for each group of PCBs with similar thermal characteristics. Modern thermal profilers make the job fast and economical.

How Counterfeit Electronic Parts and Components Affect Businesses

Although counterfeiting is an age-old industry, the impact of counterfeit electronic parts and components has only recently come to be highlighted. The public is slowly gaining awareness of the implications and risks such counterfeit electronics bring to trusting users.

Counterfeit parts are difficult for manufacturers to trace to their origin, unlike authentic components, which carry full traceability. Some counterfeits are older but legitimate versions of a part that someone has reprocessed; others are outright fakes passed off as genuine. In both cases, their quality is highly suspect. Receiving counterfeit electronic parts or components in your business can result in mechanical and electrical defects, leading to financial risk and, finally, to loss of reputation and goodwill.

Mechanical Failures

Unscrupulous operators recover huge numbers of electronic components and parts from e-waste and reprocess them to sell as new. However, the stress of reprocessing these parts, especially integrated circuits, makes them susceptible to damage. As such operators do not usually follow proper manufacturing processes, they compromise the integrity of the components, which then occasionally fail to meet the stringent environmental requirements in the field.

Electrical Failures

During reprocessing, there is usually little or no effort to protect the component from ESD damage. A counterfeit component may function in the circuit, but it is difficult to predict when it will fail. Genuine electronic components are designed to function for a certain amount of time under specified conditions of use. Reprocessed parts generally fail because their useful life has been exceeded, or because they endured dubious production controls and improper processing before being resold as new.

Financial Risks

Counterfeit electronic components that malfunction in a product or fail within the warranty period can have huge financial ramifications for the business. The risk may not be restricted to simple replacements of the product; it may extend to insurance compensation where human lives are endangered, as could happen with the premature failure of a sensitive medical device. The short-term savings from using counterfeit components may not be worth it, considering the financial backlash may prove too large for the business to handle.

Loss of Reputation and Goodwill

It takes great effort to build credibility, reputation, and goodwill in business, and these are essential for the sustenance and growth of the business. However, they hold only as long as customers perceive the products to be of the quality and reliability the business claims. Counterfeit electronic components and parts that lead to mechanical or electrical malfunctions and failures can easily undermine customer confidence, bringing not only financial loss and legal hurdles but also loss of reputation and goodwill.

Conclusion

To safeguard itself, its customers, and its reputation and goodwill, a business must take proper steps to keep counterfeit parts and components from entering its supply chain.

What are Flexible Heaters?

Everyone is familiar with heaters. The simplest sources of heat are sunlight and fire, but neither is easy to handle or control. People therefore prefer using electricity to generate heat, as it is easy to control, requiring only a switch to turn on or off.

An electric heater generates heat by driving current through a resistive element. The power this arrangement consumes is the product of the resistance of the element and the square of the current flowing through it (P = I²R), and the element dissipates this power as heat. The heat reaches nearby surfaces through conduction, convection, or radiation, transferring energy to them and raising their temperature.
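Expressed as a quick calculation (a trivial sketch of the P = I²R relationship, with made-up values):

```python
def heater_power_w(current_a: float, resistance_ohm: float) -> float:
    """Joule heating: power dissipated is P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

# A 10-ohm element carrying 2 A dissipates 2^2 * 10 = 40 W as heat.
print(heater_power_w(2.0, 10.0))
```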

Controlling an electric heater is a convenient way to keep the heated surface at a specific temperature, or at least below a temperature that would cause damage. Initially, the resistive element of these heaters was simple nickel-chrome wire, which withstands high temperatures without melting. Such heaters wrapped the wire around a mass and connected its ends to a power source. Although effective, this arrangement was not a practical solution for all applications.

Heaters have now evolved into flexible types, built on flexible material suitable for attachment to both flat and non-flat surfaces. Temperature-sensing devices usually accompany these heaters, allowing constant monitoring and adjustment as the ambient surroundings change. Flexible heater materials are commonly of two types, polyimide and silicone rubber, with flexible polyimide heaters being the more popular.
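The control loop around such a sensor can be very simple. The sketch below shows one common approach, on/off (bang-bang) control with a hysteresis band, assuming a heater holding a sample near human body temperature; the readings are invented for illustration:

```python
def thermostat_step(temp_c: float, setpoint_c: float, band_c: float,
                    heater_on: bool) -> bool:
    """On/off control with a hysteresis band to avoid rapid switching."""
    if temp_c < setpoint_c - band_c:
        return True           # too cold: switch the heater on
    if temp_c > setpoint_c + band_c:
        return False          # too hot: switch it off
    return heater_on          # inside the band: keep the current state

state = False
for reading_c in (34.0, 35.5, 36.9, 37.4, 37.8, 37.1):
    state = thermostat_step(reading_c, setpoint_c=37.0, band_c=0.5, heater_on=state)
    print(f"{reading_c} C -> heater {'ON' if state else 'OFF'}")
```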

Designers need to choose the conductor, that is, the resistive element, very carefully when designing a flexible heater. Flexible circuits commonly use copper as the standard cost-effective material, which comes pre-laminated. However, copper is a good conductor with low resistivity, so a heater designed with regular copper requires a lot of surface area and suits only very low-resistance designs.

Flexible heaters meant to produce high heat within small areas need a material of higher resistance than regular copper. Designers instead use alloys such as Constantan (a copper-nickel alloy) or Inconel (a nickel-chromium alloy), which allow much higher-resistance circuits within smaller areas. However, as such alloys pre-laminated on a polyimide substrate are not commonly available, these flexible heaters are more expensive.

The required resistance of a heater follows from the target temperature of the material it is heating. After calculating the resistance, the designer creates a pattern of interlocking serpentine traces of the correct width and length that emit consistent heat across the surface.
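The trace geometry follows from the familiar R = ρL/(w·t). As a sketch with assumed, illustrative dimensions and the approximate resistivity of Constantan:

```python
RHO_CONSTANTAN = 4.9e-7   # ohm-metres, approximate resistivity of Constantan

def trace_resistance_ohm(length_m: float, width_m: float,
                         thickness_m: float, rho: float = RHO_CONSTANTAN) -> float:
    """Resistance of a foil trace: R = rho * L / (w * t)."""
    return rho * length_m / (width_m * thickness_m)

# 1.5 m of serpentine trace, 0.5 mm wide and 35 um thick: about 42 ohms.
print(f"{trace_resistance_ohm(1.5, 0.5e-3, 35e-6):.1f} ohms")
```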

Equipment exposed to varying temperatures uses flexible heaters to keep its components at a consistent temperature. In cold countries, automobile manufacturers use flexible heaters to warm the steering wheel or the seats of their cars. Flexible heaters often hold biological samples at the typical body temperature of a human or animal for better analysis, keep the batteries and electronics of aircraft warm at high altitudes, and keep critical components within ATMs and handheld electronic devices operating accurately in cold climates. This makes flexible heaters an important element of the electronics industry.

How is Solid Insulation Tested?

Engineers test solid insulation using two important test methods, both based on direct voltage. One test checks the resistance of the insulation, while the other measures the leakage current through the insulation at high voltage.

Insulation Resistance Testing

The instrument used for this test is a megohmmeter, which may be hand-cranked, motor-driven, or electronic. Regardless of its principle of operation, a megohmmeter applies a direct voltage in the range of 100 to 15,000 V to the insulation and indicates the material's resistance in megohms.

As the resistance of any insulating material is temperature-dependent, all readings need correction to the standard temperature for the class of equipment under test. Engineers usually refer to a table of temperature-correction factors for the various electrical apparatus.

The resistance of an insulating material is inversely proportional to the volume of insulation under test. For instance, a cable a thousand feet long would have one-tenth the insulation resistance of a cable a hundred feet long, all other conditions being identical, because the longer cable offers ten times the leakage path to ground.
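Scaling a reference reading by cable length captures this rule of thumb (the readings below are invented for illustration):

```python
def scaled_insulation_resistance(r_ref_megohm: float, len_ref_ft: float,
                                 len_new_ft: float) -> float:
    """Insulation resistance scales inversely with length: more cable
    means more parallel leakage paths to ground."""
    return r_ref_megohm * len_ref_ft / len_new_ft

# If 100 ft of a cable measures 500 megohms, 1,000 ft of the same cable
# should measure about 50 megohms under identical conditions.
print(scaled_insulation_resistance(500, 100, 1000))
```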

Engineers typically use this test to watch for deterioration trends in the insulation under test. The value of the insulation resistance by itself, however, is no indication of the weakness of the insulation or of the total dielectric strength of the material.

If the insulation resistance shows a continuous downward trend, it usually points to contamination of the insulation and deterioration ahead. Engineers therefore measure insulation resistance by four common methods, checking for deterioration in the insulation system: short-time readings, time-resistance readings, the polarization-index test, and the step-voltage test.

Short-Time Readings

This reading is only a rough check of the condition of the insulation: a spot reading of the insulation resistance taken over a short period. The reading usually lies on the rising portion of the insulation-resistance curve. If comparing it with previous values indicates a downward trend, the insulation is deteriorating.

Time-Resistance Readings

All good insulating materials show a continued increase in resistance over the period the voltage is applied. However, contamination with moisture or dirt keeps the resistance value low over time, which indicates a contaminated insulation system.

The time-resistance method gives conclusive results on the state of the insulation and is independent of equipment size and temperature. The condition of the insulation system is derived from a ratio of time-resistance readings.

Polarization-Index Test

This is a specialized application of the dielectric absorption test. The Polarization Index (PI) is the ratio of the insulation resistance measured at 10 minutes to that measured at 1 minute. A ratio of less than one indicates the insulation is deteriorating and needs immediate maintenance. This test is usually reserved for dry insulation systems such as cables, dry-type transformers, and rotating machines.
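The arithmetic of the test is a single ratio; the snippet below, with made-up readings, applies the rule of thumb described above:

```python
def polarization_index(r_10min_megohm: float, r_1min_megohm: float) -> float:
    """PI = R(10 min) / R(1 min) from the dielectric absorption test."""
    return r_10min_megohm / r_1min_megohm

pi = polarization_index(r_10min_megohm=4200, r_1min_megohm=1500)
# A PI below 1 (resistance falling over time) calls for immediate maintenance;
# steadily rising resistance, as here, is the healthy pattern.
print(f"PI = {pi:.1f}")  # -> PI = 2.8
```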

Step-Voltage Test

A controlled voltage is applied to the insulation under test in steps. If the insulation is weak, it will show a reduction in resistance at the higher steps that would not be apparent at lower voltage levels.

What are Memristors?

Professor Leon Chua developed the memristor theory in 1971. Looking at the assortment of fundamental circuit elements, the resistor, capacitor, and inductor, he noticed a vital component missing and called it the memory resistor, or memristor. Decades later, scientists working in a lab at HP on crossbar switches (matrix switches that connect multiple inputs to several outputs) observed memristive behavior in practice, and in 2006 Stanley Williams developed the practical model of a memristor.

One can consider the memristor the fourth fundamental class of electrical component, after the familiar resistor, capacitor, and inductor. Unlike the others, however, memristors exhibit their unique properties only at the nanoscale. Theoretically, a memristor is a passive two-terminal circuit element that maintains a relationship between the time integrals of the voltage across it and the current through it.

The resistance of a memristor, called memristance, varies with the device's history, giving access to the history of the applied voltage via tiny read charges. The defining signature of a material implementation of the memristive effect is hysteresis, which looks like a non-linear anomaly. Memristance is thus a simple charge-dependent resistance, measured in ohms, yet it brings its own advantages.

Memristor technology utilizes less energy and therefore generates less heat. Used in data centers, it offers greater resilience and reliability under power interruptions. Memristors consume no power when idle, and they are compatible with CMOS interfaces. Because they allow higher densities, they make it possible to store more information in the same space.

Physically, a memristor has two platinum electrodes across a resistive material, such as titanium dioxide or silicon dioxide, and its resistance depends on the polarity, magnitude, and duration of the voltage applied. The device retains its resistance even when the voltage is turned off, which makes it a non-volatile memory device. As voltage is applied across the terminals, oxygen vacancies within the material drift towards one of the electrodes. This stretches or contracts the doped region, depending on the polarity of the applied voltage, thereby changing the resistance of the memristor.
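A minimal numerical sketch of this behavior is the linear ion-drift model often associated with the HP titanium dioxide device. All parameter values below are assumptions chosen only to make the state drift visible:

```python
import math

R_ON, R_OFF = 100.0, 16_000.0   # ohms: fully doped vs. fully undoped state
D = 10e-9                        # device thickness in metres
MU_V = 1e-14                     # dopant mobility, m^2 / (V s)

dt = 1e-4                        # integration step, seconds
x = 0.1                          # doped-region fraction w/D, mostly undoped

for k in range(200):             # 20 ms of a 10 Hz, 1 V sinusoidal drive
    v = math.sin(2 * math.pi * 10 * k * dt)
    m = R_ON * x + R_OFF * (1 - x)      # memristance for the current state
    i = v / m
    x += MU_V * R_ON / D ** 2 * i * dt  # dx/dt = (mu_v * R_on / D^2) * i
    x = min(max(x, 0.0), 1.0)           # the doped region stays within [0, 1]

print(f"memristance after 20 ms: {R_ON * x + R_OFF * (1 - x):.0f} ohms")
```

Because the state x persists when the drive is removed, the final resistance encodes the history of the applied voltage, which is exactly the non-volatile memory property described above.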

Depending on their construction, memristors are of two types: ionic thin-film and molecular memristors, and magnetic and spin-based memristors.

Ionic thin-film and molecular memristors rely on the material properties of thin-film atomic lattices, in which the application of charge produces hysteresis. Using such materials, scientists make memristors from titanium dioxide, ionic conductors and polymers, resonant-tunneling diodes, and manganite.

In contrast, magnetic and spin-based memristors rely on the degree of electron spin, so these systems respond to the polarization of electron spin. There are two major types: spintronic memristors and spin-torque memristors.

With memristor manufacturing practically demonstrated, research into their use in analog and digital circuits, such as programmable logic, computers, and sensors, has grown rapidly. This has also led to the development of theoretical memristor models in Verilog-A, MATLAB, and SPICE.