
Are We Ready for 6G?

More than simply an evolution of 5G, 6G represents a transformation of cellular technology. Just as 4G introduced us to the mobile Internet, and 5G expanded cellular communications beyond the customary cell phone, 6G will take mobile communications to new heights, beyond the traditional devices and applications of cellular communication.

6G devices operate at sub-terahertz (sub-THz) frequencies with wide bandwidths. That means 6G opens up the possibility of transferring far more information than 4G or even 5G can handle. These frequencies and bandwidths will enable applications such as immersive holograms, VR (Virtual Reality), and AR (Augmented Reality).

However, working at sub-THz frequencies demands new research into material properties, antennas, and semiconductors, along with new DSP (Digital Signal Processing) technologies. Researchers are working with materials such as SiGe (Silicon Germanium) and InP (Indium Phosphide) to develop highly integrated, high-power devices. Commercial entities, universities, and the defense industry have been researching these compound semiconductor technologies for years, aiming to push the upper limits of frequency and to improve performance in areas like linearity and noise. The industry must understand system performance before it can commercialize these materials for use in 6G systems.

As demand for higher data rates increases, the industry moves toward higher frequencies, where larger tranches of bandwidth are available. This has been a continuous trend across all generations of cellular technology. For instance, commercial 5G systems have already expanded into FR2 (Frequency Range 2) bands between 24 and 71 GHz, and 6G research is likely to take the same path. The demand for high data rates is at the root of this trend.

6G devices working at sub-THz frequencies must generate adequate power to overcome higher propagation losses while staying within the limits of the semiconductor technology. Their antenna design must integrate with both the receiver and the transmitter. The receiver design must offer the lowest possible noise figure. Modulation must remain high-fidelity across the entire available band, and digital signal processing must be fast enough to accommodate high data rates across wide swathes of bandwidth.
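
To see why the power budget matters, consider free-space path loss, which grows with the square of frequency. The short Python sketch below compares the loss at a mid-band 5G frequency with that at a sub-THz frequency over the same distance; the 140 GHz figure is only an assumed sub-THz candidate band for illustration, not a standardized 6G allocation.

```python
import math

def free_space_path_loss_db(distance_m: float, frequency_hz: float) -> float:
    """Free-space path loss (Friis formula), in dB."""
    c = 3.0e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(frequency_hz)
            + 20 * math.log10(4 * math.pi / c))

# Compare a 3.5 GHz mid-band 5G link with an assumed 140 GHz sub-THz link over 100 m
for f in (3.5e9, 140e9):
    print(f"{f / 1e9:6.1f} GHz: {free_space_path_loss_db(100, f):.1f} dB")
```

Over the same 100 m, the sub-THz link loses roughly 30 dB more, which is the gap that higher transmit power, antenna gain, and low-noise receivers must make up.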

While focussing on the above aspects, it is also necessary to overcome the physical barriers of material properties while reducing noise in the system. This requires the development of newer technologies that not only work at high frequencies, but also provide digitization, test, and measurements at those frequencies. For instance, handling research at sub-THz systems requires wide bandwidth test instruments.

A working 6G system may require characterization of the channel through which its signals propagate, because the sub-THz region introduces novel frequency bands for communications. Such channel-sounding characterization is necessary to create a mathematical model of the radio channel that encompasses reflectors in the environment, such as buildings, cars, and people. This model helps in designing the rest of the transceiver technology, including the modulation and encoding schemes used for forward error correction and for overcoming channel variations.

Why are VFDs Popular?

The industrial space witnesses many innovations today, made possible by readily available and affordable semiconductors of various types. One of the most popular of these innovations is the VFD, or variable frequency drive.

Earlier, a prime mover ran at a fixed speed, and varying that speed required expensive, inefficient equipment. With the advent of VFDs, controlling the speed of the prime mover became easy, efficient, cost-effective, and low-maintenance. This control not only increases the efficiency of the equipment's operation but also improves automation.

OEMs typically use VFDs for small and mobile equipment that, in the absence of a three-phase supply, only needs to be plugged into a commercial single-phase outlet. Such equipment includes hose crimpers, mobile pumping units, lifts, fans and blowers, actuator-driven devices, or any other application that uses a motor as the prime mover. Using a VFD to vary the motor's speed can improve the operation of the equipment. Apart from the benefits of variable speed, OEMs also use VFDs for their ability to take a single-phase power source and output a three-phase supply to run the motor.

Although the above may not seem like much, the value addition is tremendous, especially for the production of small-batch items. Since VFDs output three-phase power, they can drive standard three-phase induction motors, which are both widely available and cost-effective. VFDs also offer current control, which not only improves motor control but also helps avoid the inrush currents typical when starting induction motors.

For instance, a standard duplex 120 V, 15 A power source can safely operate a 0.75 HP motor without tripping. A VFD operating from the same power source, however, can comfortably run a 1.5 HP motor. In such situations, using a VFD to double the prime-mover power has obvious benefits for the capacity or functionality of the application.
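
A rough calculation shows why this is plausible. The sketch below is only an illustration; the efficiency and power-factor values are assumptions, not figures from the article, and the drive additionally limits inrush current at start-up.

```python
# Rough numbers only: efficiency and power-factor values below are assumptions for illustration.
HP_TO_W = 746

motor_eff = 0.85   # assumed motor efficiency
direct_pf = 0.65   # assumed power factor of a small motor started direct-on-line
vfd_pf = 0.95      # assumed input power factor of a VFD front end

def line_current_a(hp: float, pf: float) -> float:
    """Steady-state current drawn from a 120 V outlet for a given shaft power."""
    electrical_watts = hp * HP_TO_W / motor_eff
    return electrical_watts / (120 * pf)

print(f"0.75 HP direct-on-line: {line_current_a(0.75, direct_pf):.1f} A")
print(f"1.5  HP through a VFD:  {line_current_a(1.5, vfd_pf):.1f} A (inrush also limited by the drive)")
```

Both figures stay under the 15 A outlet rating, which is the point the article makes about doubling the usable motor power.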

The above benefits make VFDs an ideal method of controlling motors for small OEM applications. VFD manufacturers also recognize these benefits, and they are adding features to augment them. For instance, they are now adding configurable/additional inputs and outputs, basic logic controls, and integrated motion control programming platforms to VFDs. This is making VFDs an ideal platform for operating equipment and controlling the motor speed, thereby eliminating any requirement for onboard microcontrollers.

However, despite these benefits, VFDs also have some limitations. OEMs typically face problems when using GFCI (ground fault circuit interrupter) breakers with VFDs. A GFCI monitors current leaking to the ground conductor, because such leakage currents can electrocute users.

A VFD contains an inverter stage that switches at high frequency. Harmonics from this stage can create ground currents, also known as common-mode noise. Unlike the waveforms of a regular three-phase power source, the three phase voltages generated by the inverter do not always sum to zero at every instant, and the resulting potential difference drives capacitively induced currents. When these currents seek a path to ground, they can trip a GFCI device. This effect can be minimized by lowering the inverter's switching frequency.
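
The claim that the inverter's phase voltages need not sum to zero can be checked numerically. The sketch below is a simplified illustration rather than a model of any particular drive: it compares ideal sinusoidal phases with crude two-level PWM phases and prints the worst-case common-mode (sum) voltage for each. The 4 kHz carrier is an assumed switching frequency.

```python
import numpy as np

t = np.linspace(0, 0.02, 20000)              # one 50 Hz fundamental cycle
carrier_hz = 4000                            # assumed PWM switching frequency
phases = [0, -2 * np.pi / 3, 2 * np.pi / 3]

# Ideal three-phase sinusoids: their instantaneous sum is (numerically) zero.
ideal = sum(np.sin(2 * np.pi * 50 * t + p) for p in phases)

# Two-level PWM: each phase output is +1 or -1 depending on a comparison
# with a shared triangular carrier, so the three outputs rarely cancel.
carrier = 4 * np.abs((t * carrier_hz) % 1 - 0.5) - 1     # triangle wave in [-1, 1]
pwm = sum(np.where(np.sin(2 * np.pi * 50 * t + p) > carrier, 1.0, -1.0) for p in phases)

print(f"max |sum| of ideal phases: {np.max(np.abs(ideal)):.2e}")
print(f"max |sum| of PWM phases:   {np.max(np.abs(pwm)):.1f}  (per-unit common-mode voltage)")
```

The nonzero sum of the switched phases is the common-mode voltage that, through stray capacitance, drives the ground currents described above.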

What is Pressure and How to Measure it?

The concept of pressure is simple: it is force applied over an area. Typically measured in psi, or pounds per square inch, pressure is the force applied to a specific area. However, there are other ways of expressing pressure and different units for measuring it. It is important to understand the differences so that the user can apply specific measurements and units properly.

Depending on the application, there are several types of pressure measurement. The first is absolute pressure. Engineers define the zero point of absolute pressure as the pressure in a perfect vacuum. Absolute pressure readings therefore include the pressure of the media added to the pressure of the atmosphere. An absolute pressure sensor provides a fixed reference and eliminates the effect of varying atmospheric pressure. Thermodynamic equations and relations typically use absolute pressure.

Then there is gauge pressure, which indicates the difference between the pressure of the media and a reference. The media pressure may be that of the gas or fluid in a container, while the reference is usually the local atmospheric pressure. For instance, a tire-pressure gauge reads zero when disconnected from the tire; that is, it does not register atmospheric pressure. When connected to the tire, it reveals the air pressure inside, relative to the atmosphere.

Another type is differential pressure. It is somewhat more complex than gauge or absolute pressure, as it is the difference between the pressures of two media. A gauge pressure sensor can be considered a special case of a differential pressure sensor, since it measures the difference between atmospheric pressure and the media's pressure. A true differential pressure sensor, however, can measure the difference between any two separate physical points. For instance, measuring differential pressure can indicate the pressure drop, or loss, from one side of a baffle to the other.

Compared to the above three, sealed pressure is less common, but it is a useful means of measurement. It compares the pressure of the media against a sample of atmospheric pressure hermetically sealed within the transducer. Exposing the pressure port of the sensor to the atmosphere causes the transducer to read close to zero, because the ambient atmospheric pressure on one side of the diaphragm nearly equals the fixed, sealed-in atmospheric pressure on the other. When the two differ, the sensor produces a net output other than zero.

The sealed reference pressure can change with temperature, which may create errors exceeding the stated accuracy of the sensor. This is the main reason engineers use sealed sensors mostly for measuring high pressures, where changes in the reference cause only small relative errors that do not affect the readings much.

Engineers typically use several units when expressing pressure measurements. These are easy to convert using the conventions of the International System of Units, even when a given unit is not part of that system.
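
As a quick illustration, a few lines of Python can convert between common pressure units. The conversion factors below are standard reference values rather than figures from the article.

```python
# Conversion factors to pascals (SI base unit of pressure); standard reference values.
TO_PA = {
    "psi": 6894.757,
    "bar": 100000.0,
    "atm": 101325.0,
    "inH2O": 249.0889,
    "mmHg": 133.3224,
    "kPa": 1000.0,
}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Convert a pressure reading between any two supported units via pascals."""
    return value * TO_PA[from_unit] / TO_PA[to_unit]

print(f"32 psi = {convert(32, 'psi', 'kPa'):.1f} kPa")   # a typical tire gauge pressure
print(f"1 atm  = {convert(1, 'atm', 'psi'):.2f} psi")    # one standard atmosphere
```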

Difference Between FPGA and Microcontroller

Field Programmable Gate Arrays, or FPGAs, share some similarities with microcontrollers, but the two are different. While both are integrated circuits found in many products and devices, there are distinct differences between them.

It is possible to program both FPGAs and microcontrollers to perform specific tasks, but they suit different applications. Users can program an FPGA straight away, whereas a microcontroller can typically be programmed only once it is in a circuit. Another difference is that FPGAs can handle multiple inputs in parallel, while a microcontroller executes its code sequentially, one instruction at a time.

As FPGAs enable a higher level of customization, they are more expensive and more difficult to program. Microcontrollers, on the other hand, are small, cost-effective, and easy to set up. Knowing the differences and similarities between the two helps in making an informed decision about which to use for a project.

A microcontroller is an integrated circuit that functions like a small computer, comprising a CPU (central processing unit), some amount of RAM (random access memory), and a set of input/output peripherals. However, unlike a desktop computer, a microcontroller cannot run numerous programs. Being a special-purpose device, it executes only one program at a time.

A microcontroller can perform a single function repeatedly, or on user request. Typically embedded alongside other devices, microcontrollers can be part of almost any type of appliance. Moreover, these small computers operate at very low energy levels; most draw currents of a few milliamperes at 5 VDC or lower. When produced in large quantities, microcontrollers are very affordable, although the appliances in which they are embedded vary in cost.

An FPGA, on the other hand, is a much more complicated device than a microcontroller. The physical chip itself does not change; instead, users load a configuration that defines the hardware behavior inside it. By changing this configuration, users effectively reconfigure the hardware while using the FPGA. Embedded within a device, an FPGA therefore allows the device's hardware to be altered without physically adding or removing anything.

An FPGA is essentially an array of programmable logic blocks with configurable interconnect. A new FPGA is not configured for any particular function; users define the configuration according to their application and can reconfigure the FPGA as many times as necessary. The configuration process requires a Hardware Description Language, or HDL, such as Verilog or VHDL.

A modern FPGA features many logic gates and RAM blocks to enable it to execute complex computations. Components in an FPGA may include complete memory blocks in addition to simple flip-flops.

Both FPGAs and microcontrollers serve similar basic functions. Manufacturers develop these items such that users can decide their functionality when designing the application. Both integrated circuits have a similar appearance and are versatile, and users can apply them for various applications.

Difference Between IoT and Embedded Systems

Today, we use many IoT (Internet of Things) and embedded systems every day, yet just a decade ago very few people had smartphones. Innovations and technological advancements have changed that, ushering in a smart revolution almost globally. With the advent of the fourth Industrial Revolution and the widespread use of IoT equipment, many millions of devices now link to the internet and cloud services. We can easily connect to the world around us, mainly due to IoT connectivity and the evolution of everyday gadgets. Many new devices now come with built-in IoT technologies, including not only personal fitness devices but also kitchen appliances, home heating systems, and medical equipment.

Embedded systems typically comprise a small computer integrated into a mechanical or electrical system. Examples include electric bikes, washing machines, home internet routers, and heart monitors. Each of these devices contains an inbuilt computer that serves a specific purpose. Forming the brain of the device, this computer may have one or more microprocessors. A smartphone, for instance, consists of many embedded systems interconnected and functioning simultaneously. Traditionally, embedded systems hardly ever connected to larger networks such as the Internet; many still use antiquated standards such as RS-232 to interconnect with other embedded systems, and these protocols are plagued by bandwidth and speed constraints. In comparison, modern communication standards for embedded systems are much faster, support higher bandwidth, and often include wireless connectivity. All in all, modern embedded systems are more sophisticated than before.

IoT devices, on the other hand, are essentially pieces of hardware, such as machines, appliances, gadgets, actuators, or sensors, whose main function is to transfer data over networks such as the Internet. Most IoT devices are designed for specific purposes, and it is possible to integrate them into various products, including industrial machinery, medical equipment, environmental sensors, and mobile systems. There are also IoT embedded systems, which are embedded systems that connect to the internet or to other networks such as a home network. Most can carry out tasks beyond the capabilities of a stand-alone system, as connectivity allows them to perform functions that were not possible earlier.

Sensors effectively behave as IoT devices when they can transmit data over networks, including the Internet. An embedded system can be enhanced with IoT capabilities by incorporating an IoT module. At its roots, the IoT ecosystem still relies heavily on embedded systems; their importance within the IoT realm is clear from the fact that they support much of the functionality of IoT devices.

Although a network such as the Internet is the medium for transmitting data between IoT devices and their cloud services, embedded systems handle the actual collection, rationalization, interpretation, and transmission of the data from the sensors. They also help interface the data with online services, smartphone applications, and nearby computers. In this chain, the numerous sensors that actually collect real-world data remain the most important link.
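
As a minimal sketch of that chain, the Python example below reads a simulated sensor and publishes the value over MQTT, a protocol widely used for IoT telemetry. The broker address, topic name, and simulated reading are assumptions for illustration, not details from the article.

```python
import json
import random
import time

import paho.mqtt.publish as publish   # pip install paho-mqtt

BROKER = "test.mosquitto.org"          # assumed public test broker, for illustration only
TOPIC = "demo/home/temperature"        # hypothetical topic name

def read_sensor() -> float:
    """Stand-in for an embedded temperature-sensor driver."""
    return round(21.0 + random.uniform(-0.5, 0.5), 2)

# The embedded side of the chain: collect, package, and transmit the reading.
for _ in range(3):
    payload = json.dumps({"temperature_c": read_sensor(), "ts": time.time()})
    publish.single(TOPIC, payload, hostname=BROKER)
    time.sleep(5)
```

A cloud service or smartphone application subscribed to the same topic would then interpret and present the data, completing the chain described above.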

MEMS Replacing Quartz?

The automotive market is transforming very fast. Next-generation technologies are already here: semi-autonomous cars, ADAS (advanced driver assistance systems), and an array of electric vehicle options with smart mirrors, backup cameras, voice recognition, smartphone integration, telematics, and keyless entry and start. Some of the latest models feature lane-keep assist, automated parallel parking, and many other self-driving capabilities as vehicles move steadily toward fully autonomous driving.

All of the above has required a redefinition of automotive design, including the infotainment, convenience, and safety features that users of smart, connected cars expect. With automotive being the fastest-growing market segment in the semiconductor field, the key drivers of this growth are electronic components for ADAS and other EV applications. Consider that an average car contains about 1,500 semiconductor devices controlling everything from the drivetrain to the safety systems.

However, apart from sensing, processing, and communication chips, there is another critical technology contributing to the reliable, safe operation of autonomous systems, and that is precision timing.

Most car owners understand automotive timing as the timing that belts, camshafts, or ignition systems keep so the engine runs efficiently and smoothly. For automotive systems developers, however, timing refers to clocking devices such as buffers, oscillators, and resonators. In the vehicle, each timing device has a different but essential clocking function that ensures stable, accurate, and reliable frequency control for digital components. This precision timing is especially important for complex modern automotive systems like ADAS that generate, process, and transmit huge volumes of data.

As a result, modern cars may use up to 70 timing devices to keep the automotive system operating smoothly, and as vehicles get smarter with each new model, that number keeps growing. A modern automotive design has a wide array of digital systems that require precise, reliable timing references from clock generators and oscillators. These provide the essential timing functions for networks, infotainment, and other subsystems within the vehicle, as well as for electronic control units such as those in ADAS.

Despite the accelerating pace of automotive innovation, one critical component has remained essentially unchanged for the past 70 years: the quartz-based timing device, or quartz crystal oscillator. In the automotive environment, however, quartz crystals face fundamental limitations, such as fragility, due to their susceptibility to environmental and mechanical stresses. Because of these inherent drawbacks, quartz timing devices are becoming a bottleneck for safety and reliability.

MEMS timing components, on the other hand, can easily meet the rigors of AEC-Q100 automotive qualification requirements. MEMS is a well-established technology, widely used in many fields, including automotive systems, where MEMS devices already serve as gyroscopes, accelerometers, and a wide variety of other sensors.

AEC-Q100 qualification of MEMS devices offers assurance that these timing components will provide the robustness, reliability, and performance that automotive electronic systems demand.

Stringent testing has demonstrated that silicon-based MEMS technology is more reliable than quartz crystals in clocking applications. Being much smaller than quartz crystals, MEMS resonators are ideal for space-sensitive automotive applications like radar/LIDAR, smart mirrors, and camera module sensors. Their low mass and smaller size also make MEMS timing devices far more resilient to mechanical shock and vibration.

What is PCB Prototyping?

Each piece of electronic equipment has at least one printed circuit board, or PCB. The PCB holds electronic components in place, interconnects them appropriately, and allows them to function in the manner the designer intended, so that the equipment performs according to its specifications.

A designer lays out the printed circuit board carefully, following the schematic diagram and design rules, before sending it out for manufacturing, assembly, and use in the final product. However, it is possible to overlook small mistakes and incorrect connections during design. Often, it is only when the PCB is in the final product that it is observed not to be working properly.

Sometimes, things can go wrong during the routing and layout phase. Two of the most common issues are shorts and opens. A short is an unintentional electrical connection between two metallic entities, while an open is an unintentional disconnection between two points. A short or an open can prevent the printed circuit board from performing as intended.

To catch such issues, designers prefer to generate a netlist, preferably in IPC-356 format, which they send to the PCB manufacturer along with the Gerber files. The netlist is a database of the intended electrical connections, against which the layout in the Gerber files can be confirmed. The manufacturer loads the netlist along with the Gerber files into a CAM program to verify the correctness of the design.

The manufacturer compares the netlist to the routed data to find shorts or opens in the layout. On discovering an open or a short, the designer must redesign or scrap the board. If the error is found at a late stage, the designer has no alternative but to scrap the board; however, if the manufacturer discovers the error before assembly, the board can be redesigned rather than scrapped.
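
Conceptually, this comparison reduces to checking which pads the routed copper actually connects against which pads the netlist says should be connected. The sketch below is a simplified illustration of that idea, using made-up net names and pad lists rather than real IPC-356 data.

```python
# Hypothetical netlist: net name -> set of pads (component.pin) that should be connected.
netlist = {
    "VCC": {"U1.1", "C1.1", "R1.1"},
    "GND": {"U1.2", "C1.2"},
    "SIG": {"U1.3", "R1.2"},
}

# Connectivity extracted from the routed copper (also made up for illustration).
routed = {
    "VCC": {"U1.1", "C1.1", "R1.1"},
    "GND": {"U1.2", "C1.2", "R1.2"},   # R1.2 wrongly tied to GND: a short
    "SIG": {"U1.3"},                   # R1.2 missing from SIG: an open
}

for net, expected in netlist.items():
    actual = routed.get(net, set())
    for pad in expected - actual:
        print(f"OPEN:  {pad} should be on net {net} but is not connected to it")
    for pad in actual - expected:
        print(f"SHORT: {pad} is connected to net {net} but belongs to another net")
```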

Prototyping a board is the process of initially manufacturing only a small number of boards. These boards undergo full assembly and then rigorous testing to weed out errors. The testing stage produces a complete list of errors, and the designer can go back to the design process to rectify the mistakes. Once the designer has addressed all the corrections, the board can proceed to production.

If the errors are minor, it may not be necessary for the designer to redo the design and layout. The manufacturer can suggest simple tweaks, which the PCB engineers can accept through an approval process. Changes such as cutting a trace, adding a thermal connection, or adjusting a clearance are ones a manufacturer can complete easily and cleanly.

Allowing the manufacturer to handle such changes, rather than the designer undertaking a complete revision, is faster and much more cost-effective. During prototyping, it is sufficient to document the changes; later, an ECN (engineering change notice) can fix the data set, create a completely new version, or bump the revision as necessary. This process is inexpensive and accurate.

Efficiency and Performance of Edge Artificial Intelligence

Artificial Intelligence, or AI, is a very common phrase nowadays. We encounter AI in smart home systems, in intelligent machines we operate, in the cars we drive, and even on the factory floor, where machines learn from their environments and can eventually operate with as little human intervention as possible. For these cases to succeed, computing technology had to develop to the point where it could be decentralized to the place in the network where the data is generated, typically known as the edge.

Edge artificial intelligence or edge AI makes it possible to process data with low latency and at low power. This is essential, as a huge array of sensors and smart components forming the building blocks of modern intelligent systems can typically generate copious amounts of data.

This makes it imperative to measure the performance of an edge AI deployment in order to optimize its advantages. Gauging the performance of an edge AI model requires specific benchmarks that indicate its performance through standardized tests. However, there are nuances in edge AI applications, as the application itself often influences the configuration and design of the processor, and such distinctions often prevent the use of generalized performance parameters.

In contrast with data centers, a multitude of factors constrain the deployment of edge AI, the primary ones being physical size and power consumption. For instance, the automotive sector is witnessing a huge increase in electric vehicles carrying a host of sensors and processors for autonomous driving, all of which must operate within the limited capacity of the vehicle's battery. In such cases, power-efficiency parameters take precedence.

In other applications, such as home automation, the dominant constraint is the physical size of the components. The design of AI chips must therefore treat these restrictions as guidelines, with the corresponding benchmarks reflecting adherence to them.

Apart from power consumption and size constraints, the way the machine learning model is deployed also determines how the processor is used, which imposes specific requirements when analyzing its performance. For instance, benchmarks for a chip detecting objects in an IoT-equipped factory will differ from those for a chip performing speech recognition. Estimating edge AI performance therefore requires developing benchmarking parameters that reflect real-world use cases.
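
As a minimal sketch of what such a benchmark measures, the Python snippet below times repeated runs of a placeholder inference function and reports latency percentiles and throughput. The run_inference function is a stand-in assumed purely for illustration, not a real edge AI workload.

```python
import statistics
import time

def run_inference(frame):
    """Placeholder for a real edge-AI model call; here it just burns a little CPU."""
    return sum(i * i for i in range(10_000))

latencies_ms = []
for _ in range(200):                        # warm-up runs omitted for brevity
    start = time.perf_counter()
    run_inference(frame=None)
    latencies_ms.append((time.perf_counter() - start) * 1000)

latencies_ms.sort()
p50 = statistics.median(latencies_ms)
p99 = latencies_ms[int(0.99 * len(latencies_ms)) - 1]
throughput = 1000 / statistics.mean(latencies_ms)

print(f"p50: {p50:.2f} ms, p99: {p99:.2f} ms, throughput: {throughput:.1f} inferences/s")
```

A real edge AI benchmark would pair such latency and throughput numbers with energy per inference, since power efficiency is often the dominant constraint at the edge.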

For instance, in a typical modern automotive application, sensors such as cameras and LiDAR generate the data that the AI model must process. In a single consumer vehicle fitted with an autonomous driving system, this can easily amount to two to three terabytes of data per week. The AI model must process this data in real time and provide outputs such as street-sign detection, pedestrian detection, and vehicle detection. The volume of data the sensors produce depends on the complexity of the autonomous driving system and, in turn, determines the size and processing power of the AI core. The power consumption of the onboard AI system depends on the quality of the model and the manner in which it pre-processes the data.
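
To put that figure in perspective, a rough back-of-the-envelope calculation converts terabytes per week into a sustained data rate. The assumption of roughly 10 hours of driving per week is for illustration only and is not stated in the article.

```python
TB = 1e12  # bytes

data_per_week_tb = 2.5          # midpoint of the 2-3 TB per week figure
driving_hours_per_week = 10     # assumed driving time; not from the article

bytes_per_second = data_per_week_tb * TB / (driving_hours_per_week * 3600)
print(f"Sustained sensor data rate while driving: {bytes_per_second / 1e6:.0f} MB/s "
      f"({bytes_per_second * 8 / 1e9:.2f} Gbit/s)")
```

Under these assumptions the AI core must absorb on the order of 70 MB/s continuously, which illustrates why sensor complexity directly drives the required processing power.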

Cooling Machine Vision with Peltier Solutions

Industry is using machine vision to replace manual examination, assessment, and human decision-making, employing video hardware supplemented with software systems. The technology is highly effective for inspection, quality control, wire bonding, robotics, and down-the-hole applications. Machine vision systems obtain their information by analyzing images of specific processes or activities.

Apart from inspection, the industry also uses machine vision for sophisticated object detection and recognition. Machine vision is irreplaceable in the collision-avoidance systems used by the next generation of autonomous vehicles, robots, and drones. More recently, it has found use in many machine learning and artificial intelligence systems, such as facial recognition.

However, for all of the above to succeed, the first requirement is that the machine vision system be capable of capturing high-quality images. Machine vision systems employ image sensors and cameras that are temperature sensitive, and these require active cooling to deliver optimal image quality independent of the operating environment.

Typically, machine vision applications use two types of sensors: CCDs (charge-coupled devices) and CMOS (complementary metal-oxide semiconductor) sensors. The basic function of both is to convert photons into the electrons needed for digital processing. Both types are sensitive to temperature, as thermal noise degrades image quality, and thermal noise increases as the temperature of the sensor assembly rises. That temperature depends on environmental conditions and on the heat generated by the surrounding electronics, which can push the sensor beyond its maximum operating specification.

By rough estimation, the dark current of a sensor doubles for every 6 °C rise in temperature. By dropping the temperature by 20 °C, it is possible to reduce the noise floor by 10 dB, effectively improving the dynamic range by the same figure. When operating outdoors, the effect is more pronounced, as the temperature can easily exceed 40 °C. Solid-state Peltier coolers can prevent image-quality deterioration by reducing and maintaining the sensor temperature below its maximum operating temperature, thereby helping to obtain high image resolution.
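
Those two rules of thumb are consistent with each other, as a quick calculation shows; the 6 °C doubling interval is the article's rough estimate, not a measured sensor specification.

```python
import math

doubling_interval_c = 6.0        # dark current doubles roughly every 6 °C (estimate from the text)
delta_t_c = 20.0                 # proposed temperature drop

dark_current_ratio = 2 ** (delta_t_c / doubling_interval_c)
noise_floor_change_db = 10 * math.log10(dark_current_ratio)

print(f"Dark current falls by a factor of about {dark_current_ratio:.1f}")
print(f"Noise floor drops by about {noise_floor_change_db:.1f} dB")   # roughly 10 dB, matching the claim
```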

However, spot-cooling CCD and CMOS sensors in machine vision applications is a challenge. Adding a Peltier cooling device increases the size, cost, and weight of the imaging system and adds to its complexity. Cooling an imaging sensor can also lead to condensation on surfaces that fall below the dew point, which is why such vision systems are mainly contained within a vacuum environment with insulated exterior surfaces, preventing the build-up of condensation over time.

Temperatures in the 50-60 °C range primarily affect the image quality of CCD and CMOS sensors, although this also depends on the quality of the sensor. For indoor applications running just above ambient, a free-convection heat sink with good airflow may be adequate to cool a CMOS sensor. This passive thermal solution, however, may not suffice for outdoor applications, where active cooling with a Peltier solution is the only option.

Differences between USB-PD and USB-C

With all the electronic devices we handle every day, it is a pain to manage an equally large number of cables for charging them and transferring data. So far, a single standard connector to rule all the gadgets has proven elusive. A format war opens up, with one faction emerging victorious for a few years until it is overtaken by a newer technology. For instance, VHS overtook Betamax, DVD then ousted VHS, Blu-ray overtook DVD, and Blu-ray is now hardly visible amid the onslaught of online streaming services.

As suggested by its name, the Universal Serial Bus, USB-C has proven to be different and possibly even truly universal. USB-C ports now appear on almost all manner of devices, from simple Bluetooth speakers and external hard drives to high-end laptops and ubiquitous smartphones. However, although all USB-C ports look alike, they do not all offer the same capabilities.

The USB-C, being an industry-standard connector, is capable of transmitting both power and data on a single cable. It is broadly accepted by the big players in the industry, and PC manufacturers have readily taken to it.

USB-PD, or USB Power Delivery, is a specification that allows the load to program the output voltage of a power supply. Combined with the USB-C connector, USB-PD is a revolutionary concept: devices can transfer both data and power, with the adapter adjusting to the power requirements of the device to which it connects.

With USB-PD, it is possible to charge and power multiple devices, such as smartphones and tablets, with each device drawing only the power it requires.

However, USB-C and USB-PD are two different standards. The USB-C standard is essentially a description of the physical connector, and using a USB-C connector does not imply that the adapter has USB-PD capability. Anyone can choose to use a USB-C connector in their design without conforming to USB-PD. With a USB-C connector, though, the user can transfer data and substantial power (up to 240 W with USB-PD) over the same cable. In addition, the USB-C connector is symmetrical and self-aligning, which makes it easy to insert and use.

Earlier USB power standards were limited, as they could not provide multiple levels of power for different devices. Using the USB-PD specifications, the device and the power supply can negotiate for optimum power delivery. How does that work?

First, each device starts at an initial power level of up to 10 W at 5 VDC. From this point, power negotiation begins, and depending on the needs of the load, the supply can deliver up to 240 W.

In USB-PD negotiation, there are fixed voltage steps at 5 VDC, 9 VDC, 15 VDC, and 20 VDC, with later revisions adding higher levels, up to 48 VDC, for the Extended Power Range. By varying the current as well, the supply can deliver anywhere from 0.5 W up to 240 W.
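
The sketch below illustrates the arithmetic of such a negotiation: given a requested power level, it picks the lowest standard fixed voltage that can supply it without exceeding a 5 A cable limit. It is a simplified illustration of the idea, not an implementation of the actual USB-PD protocol messages.

```python
# Standard USB-PD fixed voltages (V); 28/36/48 V belong to the Extended Power Range.
PD_VOLTAGES = [5, 9, 15, 20, 28, 36, 48]
MAX_CURRENT_A = 5.0   # upper current limit assumed for a 5 A rated cable

def negotiate(requested_power_w: float):
    """Pick the lowest fixed voltage that can deliver the requested power at <= 5 A."""
    for volts in PD_VOLTAGES:
        amps = requested_power_w / volts
        if amps <= MAX_CURRENT_A:
            return volts, amps
    raise ValueError("Requested power exceeds 240 W, the USB-PD maximum")

for watts in (10, 45, 100, 240):
    volts, amps = negotiate(watts)
    print(f"{watts:3d} W -> {volts} V at {amps:.2f} A")
```

This also shows why the highest power levels need the higher voltage steps: at 20 V and 5 A the supply tops out at 100 W, while 240 W requires 48 V.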

With USB-PD, it is possible to handle higher power levels at the output, as it allows a device to negotiate the power levels it requires. Therefore, USB power adapters can power more than one device at optimum levels, allowing them to achieve faster charge times.