Category Archives: Guides

What are Stacked 3D ICs?

Just like a big city, electronics is evolving rapidly, and both are running out of open space. The net result is growth in the vertical direction. For a city, vertical growth promises more apartments, office space, and people per square mile. For electronics, the driver is the slowing of Moore's law: chip developers can no longer count on shrinking processes and smaller transistors to increase density and speed. Although they can enlarge the die, larger dies suffer from longer signal delays and lower yields. With expansion in the X-Y directions thus limited, the only option remaining is to build upwards.

Among the many established forms of vertical integration, there are 2.5D ICs, flip-chip technology, inter-die connectivity with wire bonding, and stacked packages. However, all these suffer from constraints that limit their value. Three-dimensional integrated circuits or 3-D ICs offer the highest density and speed.

Monolithic 3-D ICs are SoCs built on multiple active silicon layers connected by vertical interconnects. So far, this is an emerging technology that has not been widely deployed. Stacked 3-D ICs, in contrast, consist of multiple dies that manufacturers stack, align, and bond into a single package, using TSVs (through-silicon vias) and hybrid bonding for inter-die communication. Stacked 3-D ICs are commercially available today, offering an alternative to very large dies or to very expensive migration to leading-edge nodes.

Stacked 3-D ICs offer an ideal option for applications requiring more transistors in a given footprint. For instance, a mobile SoC requires high transistor densities but has limits on its footprint and height. Another example is cache memory chips: manufacturers usually stack them above or below the processor to increase their bandwidth. This makes stacked 3-D ICs a natural choice for applications that push the limits of a single die.

Vertical stacking offers a smaller footprint with faster interconnections compared to multiple packaged chips. Splitting a single large die into several smaller dies also improves yield. For the manufacturer, there is flexibility in stacking heterogeneous dies, as they can intermix various manufacturing processes and nodes. Moreover, it is possible to reuse existing chips without redesigning them or incorporating them into a single die. This offers a substantial reduction in risk and cost.
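The yield argument can be made concrete with the classic Poisson defect-density model. The sketch below is illustrative: the defect density and die areas are assumed numbers, not figures from any real process.

```python
import math

def poisson_yield(area_cm2: float, defect_density: float) -> float:
    """Fraction of good dies under the Poisson defect model: Y = exp(-A * D)."""
    return math.exp(-area_cm2 * defect_density)

D = 0.5  # assumed defect density, defects per cm^2
large = poisson_yield(4.0, D)   # one monolithic 4 cm^2 die
small = poisson_yield(1.0, D)   # one of four 1 cm^2 dies covering the same area

print(f"4 cm^2 monolithic die yield: {large:.1%}")   # ~13.5%
print(f"1 cm^2 split die yield:      {small:.1%}")   # ~60.7%
```

Because each small die can be tested before stacking (known-good-die testing), the stack is assembled only from passing dies, so splitting the design recovers much of the yield lost to a single large die.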

Although there are numerous benefits and opportunities from the use of stacked 3-D ICs, they also introduce new challenges. The architecture of 3-D silicon systems needs a more holistic approach, taking into account the third dimension. It is not sufficient to think of 3-D ICs only in terms of 2-D chips stacked on top of each other. Although it is still necessary to optimize power, performance, and area in the familiar three-way approach, the optimization must now be per cubic millimeter rather than per square millimeter. All tradeoff decisions must take the vertical dimension into account as well. This requires making the tradeoffs across all design stages, including IP, architecture, chip packaging, implementation, and system analysis.

Remote Sensing with nRF24L01+ Modules

The nRF24L01+ RF modules from Nordic Semiconductor are low-cost solutions for two-way wireless communication. Users can configure the modules via their SPI (Serial Peripheral Interface) bus, which also allows a microcontroller to control them. The Internet has many examples of projects using these RF modules with Arduino boards.

The nRF24L01+ module has a built-in PCB antenna. Moreover, the module can use its two-way communication capability to detect any loss of communication between the transmitter and the receiver. The modules offer two-way communication because each can act as a transmitter and a receiver at the same time. In this project, however, one module acts as the main transmitter, sending the state of a PIR (passive infrared) sensor to the other module, which receives the data for further processing.

Remote sensors need this ability to detect the loss of communications because, in the absence of communication, it is easy to lose data without notice. It is also an important feature when installing the sensor, to verify that both RF modules are actually talking to each other and are not out of range.

Although the nRF24L01+ modules need powering with 3.3 VDC, their IO pins are 5 VDC tolerant. That makes it easy to connect the SPI bus of the nRF24L01+ modules to an Arduino Pro Mini working on 5 VDC.

It is important to place the power supply bypass capacitors as close as possible to the microcontroller and the nRF24L01+ modules, as this effectively suppresses most of the switching noise from these chips. Overlooking this in such projects often leads to all kinds of unexpected problems. It is also necessary to use multiple bypass capacitors. Users can effectively parallel capacitors of different values, such as a 100 µF electrolytic capacitor and a 100 nF polypropylene capacitor. The electrolytic capacitor filters out lower-frequency noise but is ineffective against high-frequency noise; the polypropylene capacitor filters the higher-frequency noise.
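The division of labor between the two capacitors comes from their impedance versus frequency once lead inductance is included. A minimal sketch, modeling each part as an ideal capacitance in series with an assumed lead inductance (the ESL values below are rough illustrative guesses, and ESR is neglected):

```python
import math

def cap_impedance(c: float, esl: float, f: float) -> float:
    """|Z| of a capacitor with series inductance (ESR neglected):
    |Z| = |2*pi*f*L - 1/(2*pi*f*C)|."""
    w = 2.0 * math.pi * f
    return abs(w * esl - 1.0 / (w * c))

# Assumed example parts -- illustrative values, not from any datasheet:
BULK = (100e-6, 20e-9)   # 100 uF electrolytic, ~20 nH lead inductance
FILM = (100e-9, 5e-9)    # 100 nF polypropylene, ~5 nH

for f in (1e3, 10e6):
    zb = cap_impedance(*BULK, f)
    zf = cap_impedance(*FILM, f)
    winner = "100 uF" if zb < zf else "100 nF"
    print(f"{f:>10.0f} Hz: 100 uF |Z|={zb:9.4f} ohm, "
          f"100 nF |Z|={zf:9.4f} ohm -> {winner} dominates")
```

At 1 kHz the bulk electrolytic presents the lower impedance, but at 10 MHz its lead inductance dominates and the small film capacitor takes over, which is exactly why the two are paralleled.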

The PIR sensor connects to the microcontroller. A voltage level translator supplies the sensor with the optimum voltage level it needs to function. Depending on the type of PIR sensor, the voltage level translator can supply 5 VDC, 3.3 VDC, or other lower-level outputs. The polarity of the voltage level translator's transistor decides whether the trigger output is active-high or active-low.

A red LED begins to flash when the transmitter and the receiver have lost their connection. On restoring the connection, the red LED stops flashing.

When the PIR sensor senses motion, a blue LED lights up to indicate this. The transmitter sends this trigger event to the receiver as a trigger code byte. When there is no motion to report, the transmitter sends only a heartbeat code to the receiver. This is how the receiver knows whether the sensor has detected motion.

The receiver sends the same code it receives back to the transmitter as an acknowledgment. There is thus continuous communication between the receiver and the transmitter, and both can easily detect as soon as they have lost the connection.
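The scheme described above can be sketched in a few lines of host-side logic. Everything here is a hypothetical illustration: the code byte values, the timeout, and the LinkMonitor class are not from the project, and real firmware would sit on top of an nRF24L01+ driver.

```python
# Hypothetical code bytes -- the actual values are a design choice.
HEARTBEAT = 0x55
TRIGGER = 0xAA
LINK_TIMEOUT_S = 1.0  # declare the link lost after 1 s without an ack

class LinkMonitor:
    """Tracks acknowledgments echoed back by the receiver."""
    def __init__(self, now: float):
        self.last_ack = now

    def on_ack(self, sent: int, echoed: int, now: float) -> None:
        # The receiver echoes the exact code it received as the ack.
        if echoed == sent:
            self.last_ack = now

    def link_lost(self, now: float) -> bool:
        return (now - self.last_ack) > LINK_TIMEOUT_S

def next_code(motion_detected: bool) -> int:
    """Send a trigger code on motion, otherwise just a heartbeat."""
    return TRIGGER if motion_detected else HEARTBEAT

# Example: an ack arrives, then the link goes quiet for too long.
mon = LinkMonitor(now=0.0)
mon.on_ack(sent=next_code(False), echoed=HEARTBEAT, now=0.5)
print(mon.link_lost(now=1.0))   # False: ack seen 0.5 s ago
print(mon.link_lost(now=2.0))   # True: no ack for 1.5 s -> flash the red LED
```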

What is a PolyFuse?

Electronic circuits often have fuses on board the PCB. Fuses protect the circuitry from catching fire due to overload. Because of some fault, like a short-circuit, a part of the circuit may start drawing more power than is admissible. The additional power flow may lead to overheating, and finally, a fire can break out. A fuse acts as a circuit breaker, protecting against overload by interrupting the power flow. Typically, the fuse element is a thin wire with a low melting point. Higher power through the fuse means increased current flow, which heats the wire and causes it to melt, or blow. This interrupts the power flow.

Although the fuse wire acts as a protection, one of its drawbacks is that it needs physical replacement once it has blown. This is a problem for electronics at a remote location, because the device remains inoperative until someone fixes the problem and replaces the damaged fuse with a new one. This drawback led to the development of the PolyFuse.

There are electromechanical devices that act as self-resetting circuit breakers. However, most such devices have ratings of 1 A and above, and their physical size is not suitable for printed circuit boards. A PolyFuse is a self-resetting circuit breaker suitable for low-voltage, low-current electronics. Moreover, it is small enough for use on a small printed circuit board.

PolyFuses behave like PTC (positive temperature coefficient) resistors: initially, their resistance is low enough to let the load current flow unhindered. In case of an overload, however, the PolyFuse starts to heat up, and its resistance increases, cutting down the load current through it. Unlike a conventional fuse, though, a PolyFuse has a self-healing property: once the current through it falls, its resistance drops back to a low value. This is its self-resetting property.
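A toy calculation shows how strongly the resistance swing limits a fault current. The cold and tripped resistances and the supply voltage below are illustrative assumptions, not datasheet values.

```python
# Toy before/after numbers for a PolyFuse protecting a 5 V rail.
V_SUPPLY = 5.0
R_COLD = 0.5       # ohms: polymer crystalline, carbon particles touching
R_TRIPPED = 500.0  # ohms: polymer amorphous, conductive paths broken

def load_current(r_load: float, r_fuse: float) -> float:
    """Current through the series PolyFuse and load."""
    return V_SUPPLY / (r_load + r_fuse)

normal = load_current(50.0, R_COLD)       # healthy 50-ohm load
fault_cold = load_current(1.0, R_COLD)    # near-short, before the fuse heats
fault_hot = load_current(1.0, R_TRIPPED)  # after the PolyFuse trips

print(f"normal:       {normal * 1000:7.1f} mA")
print(f"fault (cold): {fault_cold * 1000:7.1f} mA")
print(f"fault (hot):  {fault_hot * 1000:7.1f} mA")
```

The tripped resistance reduces the fault current by more than two orders of magnitude, while the residual few milliamps keep the device warm, and thus tripped, until the fault is cleared.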

A PolyFuse typically contains an organic polymer impregnated with carbon particles. The carbon particles are usually in close contact while the polymer is in its crystalline state. This keeps the resistance of the device low initially.

As the current flow increases, the carbon in the PolyFuse heats up, and the polymer begins to expand into an amorphous state. This separates the carbon particles, increasing the resistance of the device and hence the voltage drop across the PolyFuse, which in turn decreases the current flow through it. The residual current flow under the fault condition keeps the PolyFuse warm enough to keep limiting the current. Once the cause of the overload is removed, the current falls, allowing the PolyFuse to cool down, regain its low resistance, and resume correct operation.

PolyFuses cannot act fast, because they need to heat up before limiting the current flow. That means they have a short but appreciable time delay before they operate. Hence, they are not very effective against fast surges and spikes. However, they are very useful because of their self-resetting property, making them effective against short-term short-circuits and overloads.

What is Pulsed Electrochemical Machining?

With pulsed electrochemical machining, it is possible to achieve high-repeatability production parts. This advanced process is a completely non-thermal and non-contact material removal process. It is capable of forming small features and high-quality surfaces.

Although its fundamentals remain the same as those of electrochemical machining (ECM), the newer variant, pulsed electrochemical machining (PECM), is more precise, using a pulsed power supply. As in other non-contact machining processes, such as EDM, there is no contact between the tool and the workpiece. Material very close to the tool dissolves through an electrochemical process, and the flowing electrolyte washes away the by-products. The remaining part takes on a shape that is the inverse of the tool.

The PECM process routinely uses some key terms. The first is the cathode, representing the tool in the process; other names for it are tool and electrode. Typically, it is manufactured specifically for each application, and its design is the inverse of the shape the process is meant to achieve.

The second is the anode—it refers to the workpiece or the material that the process works on. Therefore, the anode can assume many forms. This can include a cast piece of near net shape, wrought stock, an additively manufactured or 3D printed part, a part conventionally machined, and so on.

The third key item is the electrolyte—referring to the working fluid in the PECM process that flows between the cathode and the anode. Commonly a salt-based solution, the electrolyte serves two purposes. It allows electrical current to flow between the cathode and anode. It also flushes away the by-products of the electrochemical process such as hydroxides of the metals dissolved by the process.

The final key item is the gap—this is also the IEG or inter-electrode gap and is the space between the anode and the cathode. This space is an important part of the process, and it is necessary to maintain this gap during the machining process as the gap is a major contributor to the performance of the entire process. The PECM process allows gap sizes as small as 0.0004” to 0.004” (10 µm to 100 µm). This is the primary reason for PECM’s capability to resolve minuscule features in the final workpiece.
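The electrochemical dissolution at the heart of ECM and PECM obeys Faraday's law of electrolysis, which links the charge passed through the gap to the mass of metal removed. A sketch for iron, assuming divalent dissolution and 100% current efficiency (real processes fall somewhat short of this, and the current and time are illustrative):

```python
# Faraday's law of electrolysis: m = (M * I * t) / (z * F)
FARADAY = 96485.0   # C/mol, Faraday constant
M_IRON = 55.85      # g/mol, molar mass of iron
Z_IRON = 2          # electrons per dissolved atom (Fe -> Fe2+)

def mass_removed_g(current_a: float, seconds: float,
                   molar_mass: float = M_IRON, valence: int = Z_IRON) -> float:
    """Grams of metal dissolved by a given charge, at 100% current efficiency."""
    return (molar_mass * current_a * seconds) / (valence * FARADAY)

# 50 A of average machining current applied for one minute:
print(f"{mass_removed_g(50.0, 60.0):.3f} g of iron removed")  # -> 0.868 g
```

Because the removal rate depends only on charge and material chemistry, not mechanical properties, this is also why hardness does not affect the process, as noted below.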

Compared to other manufacturing processes, pulsed electrochemical machining has some important advantages:

The pulsed electrochemical machining process of metal removal is unaffected by the hardness of the material it is removing. Moreover, the hardness does not affect the speed of the process either.

Being a non-thermal and non-contact process, PECM does not change the properties of the material on which it is working.

As it is a metal removal process using electrochemical means, it does not leave any burrs behind. In fact, many deburring processes use this method as a zero-risk method of machining to avoid burrs.

It is possible to achieve highly polished surfaces with the PECM process. For instance, surfaces of 0.2-8 µin Ra (0.005-0.2 µm Ra) are very common in a variety of materials.

Because the process is non-contact, there is no wear and tear on the cathode, giving it a practically infinite tool life.

PECM can form an entire surface of a part at a time. The tool room can easily run it in parallel to manufacture multiple parts in a single operation.

Advantages of Additive Manufacturing

Additive manufacturing, such as 3-D printing, allows businesses to develop functional prototypes quickly and cost-effectively. They may require these products for testing or for running a limited production line, allowing quick modifications when necessary. This is possible because these printers work from computer models and designs that are effortless to transport electronically. There are many benefits of additive manufacturing.

Designs most often require modification and redesign. With additive manufacturing, designers have the freedom to design and innovate, and they can test their designs quickly. This is one of the most important enablers of innovative design: designers can exercise creative freedom in the production process without worrying about time or cost penalties. This offers substantial benefits over traditional methods of manufacturing and machining. For instance, over 60% of designs undergoing tooling and machining also undergo modifications while in production, which quickly adds cost and delay. With additive manufacturing, the move away from static design gives engineers the ability to try multiple versions or iterations simultaneously while accruing minimal additional cost.

The freedom to design and innovate on the fly without incurring penalties offers designers significant rewards like better quality products, compressed production schedules, more product designs, and more products, all leading to greater revenue generation. Regular traditional methods of manufacturing and production are subtractive processes that remove unwanted material to achieve the final design. On the other hand, additive manufacturing can build the same part by adding only the required material.

One of the greatest benefits of additive manufacturing is streamlining the traditional methods of manufacturing and production. Compressing the traditional methods also means a significant reduction in environmental footprints. Taking into account the mining process for steel and its retooling process during traditional manufacturing, it is obvious that additive manufacturing is a sustainable alternative.

Traditional manufacturing requires tremendous amounts of energy, while additive manufacturing requires only a relatively small amount. Additionally, waste products from traditional manufacturing require subsequent disposal. Additive manufacturing produces very little waste, as the process uses only the needed materials. An additional advantage of additive manufacturing is it can produce lightweight components for vehicles and aircraft, which further mitigates harmful fuel emissions.

For instance, with additive manufacturing, it is possible to build solid parts with semi-hollow honeycomb interiors. Such structures offer an excellent strength-to-weight ratio, which is equivalent to or better than the original solid part. These components can be as much as 60% lighter than the original parts that traditional subtractive manufacturing methods can produce. This can have a tremendous impact on fuel consumption and the costs of the final design.

Using additive manufacturing also reduces risk and increases predictability, improving a company's bottom line. As the manufacturer can try new designs and test prototypes quickly, digital additive manufacturing turns previously unpredictable production methods into predictable ones.

Most manufacturers use additive manufacturing as a bridge between technologies. They use additive technology to quickly reach a stable design that traditional manufacturing can then take over for meeting higher volumes of production.

What are Power Factor Controllers?

Connecting an increasing number of electrically powered devices to the grid introduces substantial distortion into the electrical network, which in turn causes problems in power distribution. Engineers therefore build advanced power factor correction (PFC) circuitry into power supply designs, so that the supplies strictly meet power factor standards and mitigate these issues.
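The quantity these standards regulate can be illustrated numerically. With distorted input current, the true power factor is the displacement factor (cos φ) multiplied by the distortion factor, which depends on the current's total harmonic distortion (THD). The THD figures below are illustrative assumptions:

```python
import math

def true_power_factor(cos_phi: float, thd: float) -> float:
    """True PF = displacement factor * distortion factor.
    thd is the total harmonic distortion of the current, as a fraction."""
    return cos_phi / math.sqrt(1.0 + thd ** 2)

# Uncorrected rectifier front end: highly distorted current (assumed 80% THD).
print(f"uncorrected: {true_power_factor(0.99, 0.80):.2f}")
# After PFC: near-sinusoidal current (assumed 5% THD).
print(f"with PFC:    {true_power_factor(0.99, 0.05):.2f}")
```

Even with almost no phase shift, heavy current distortion alone drags the power factor down, which is why shaping the input current into a sinusoid is the central job of a PFC stage.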

Most power factor correction designs use the boost PFC topology. However, with the advent of wide band-gap semiconductors, such as silicon carbide and gallium nitride, it is becoming easier to implement bridge-less topologies as well, including the column PFC (better known as the totem-pole PFC). With advanced column controllers, it is now possible to simplify the control of complex interleaved column PFC designs.

At present, the interleaved boost PFC is the most common topology engineers use for power factor correction. A rectifying diode bridge first converts the AC voltage to DC. A boost converter then steps the DC voltage up to a higher value while shaping the input current into a sinusoidal waveform. This reduces the ripple on the output voltage while drawing a sinusoidal current from the grid.

Although it is possible to achieve power factor correction with only a single boost converter, engineers often use two or more converters in parallel. Each of these converters is given a phase shift to improve its efficiency and reduce the ripple on the input current. This topology is known as interleaving.
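The phase shifts in an interleaved converter are normally spread evenly across the switching period, i.e. 360°/N apart, so the ripple currents of the branches partially cancel. A minimal sketch of that spacing rule:

```python
def interleave_phases(n_phases: int) -> list:
    """Switching phase of each branch in an interleaved converter,
    spaced evenly at 360/N degrees apart."""
    return [i * 360.0 / n_phases for i in range(n_phases)]

print(interleave_phases(2))  # [0.0, 180.0]
print(interleave_phases(3))  # [0.0, 120.0, 240.0]
```

With two branches 180° apart, each branch also carries only half the input current, which eases the thermal design of the inductors and switches.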

New families of semiconductors, especially the silicon carbide type, make it possible to create power switches with substantially better thermal and electrical characteristics. Using them, it becomes possible to integrate the rectification and boost stages into two switching branches operating at different frequencies. This is the bridge-less column PFC topology.

One of the two branches is the slow branch, and it commutates at the grid frequency, typically 50 or 60 Hz. This branch operates with traditional silicon switches, while it is primarily responsible for input voltage rectification. The second branch is the fast branch and is responsible for stepping up the voltage. Switching at very high frequencies like 100 kHz, this branch places great thermal and electrical strain on the semiconductor switches. For safe and efficient performance, engineers prefer to use wide band-gap semiconductor switches, such as GaN and SiC MOSFETs, in the second branch.

The bridge-less column PFC topology improves on the performance of the interleaved boost converter. But its control circuitry is more complex due to the additional active switches. Therefore, engineers often use an integrated column controller to mitigate the issue.

It is possible to add more high-frequency branches to improve the efficiency of the bridge-less column PFC. Such additions help reduce the ripple on the converter's output voltage while distributing the power requirements equally among the branches. This arrangement minimizes overall cost while reducing the layout area.

Although it is possible to reach general conclusions by comparing the performance of each topology, the outcome largely depends on device selection and operating parameters. Designers must therefore weigh these carefully when choosing a design for implementation.

How Piezoelectric Accelerometers Work

Vibration and shock testing typically require piezoelectric accelerometers. This is because these devices are ideal for measuring high-frequency acceleration signals generated by pyrotechnic shocks, equipment and machinery vibrations, impulse or impact forces, pneumatic or hydraulic perturbations, and so on.

Piezoelectric accelerometers rely on the piezoelectric effect. Generally speaking, when subject to mechanical stress, most piezoelectric materials produce electricity. A similar effect also happens conversely, as applying an electric field to a piezoelectric material can deform it mechanically to a small extent. Details of this phenomenon are quite interesting.

When no mechanical stress is present, the locations of the negative and positive charges are such that they balance each other, making the molecules electrically neutral.

The application of a mechanical force deforms the structure and displaces the balance of the positive and negative charges. This causes the molecules to form many small dipoles in the material, resulting in the appearance of fixed charges on the surface of the piezoelectric material. The amount of electric charge present is proportional to the applied force.
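That proportionality is captured by the piezoelectric charge coefficient: q = d · F for a force applied along the poling axis. A sketch using an illustrative d33 value typical of PZT ceramics (the figure is an assumption, not from any specific material datasheet):

```python
# Direct piezoelectric effect along the poling axis: q = d33 * F.
# d33 for PZT ceramics is on the order of a few hundred pC/N (assumed here).
D33 = 400e-12  # C/N, illustrative value

def charge_coulombs(force_n: float, d33: float = D33) -> float:
    """Surface charge generated by a force applied along the poling axis."""
    return d33 * force_n

force = 10.0  # newtons
print(f"{charge_coulombs(force) * 1e9:.1f} nC for {force} N")  # -> 4.0 nC
```

A few nanocoulombs is a tiny signal, which is why piezoelectric sensors are followed by dedicated charge amplifiers, as discussed further below.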

Piezoelectric substances belong to a class of dielectric materials. Being insulating in nature, they are very poor conductors of electricity. However, depositing two metal electrodes on opposite surfaces of a piezoelectric material makes it possible to collect the charge that the piezoelectric effect produces.

However, the electric current that the piezoelectric effect produces from a static force can last only a short period. Such a current flow continues only until free electrons cancel the electric field from the piezoelectric effect.

Removing the external force causes the material to return to its original shape. However, this process now causes a piezoelectric effect in the reverse direction, causing a current flow in the opposite direction.

Most piezoelectric accelerometers consist of a piezoelectric element that mechanically couples a known quantity of mass (the proof mass) to the accelerometer body. As the body accelerates due to external forces, the proof mass tends to lag behind due to its inertia. This deforms the piezoelectric element, producing a charge output proportional to the input acceleration.

Piezoelectric accelerometers vary in their mechanical designs. Fundamentally, there are three designs, working in the compression mode, shear mode, and flexural mode. The sensor performance depends on the mechanical configuration. It impacts the sensitivity, bandwidth, temperature response of the sensor, and the susceptibility of the sensor to the base strain.

Just as in a MEMS accelerometer, Newton’s second law of motion is also the basis of the piezoelectric accelerometer. This allows modeling the piezoelectric element and the proof mass as a mass-damper-spring arrangement. A second-order differential equation of motion best describes the mass displacement. The mechanical system has a resonance behavior that specifies the upper-frequency limit of the accelerometer.

The amplifier following the sensor defines the lower frequency limit of the piezoelectric accelerometer. Such accelerometers are not capable of true DC response, and hence incapable of performing true static measurements. With a proper design, a piezoelectric accelerometer can respond to frequencies lower than 1 Hz, but cannot produce an output at 0 Hz or true DC.
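Both frequency limits described above reduce to familiar formulas: the resonance of the proof-mass-and-spring system, f_n = (1/2π)√(k/m), caps the top of the usable band, while the RC high-pass formed at the amplifier input, f_c = 1/(2πRC), sets the bottom. A sketch with illustrative values, not taken from any specific sensor:

```python
import math

def natural_freq_hz(k_n_per_m: float, mass_kg: float) -> float:
    """Resonance of the proof-mass/spring system: f_n = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k_n_per_m / mass_kg) / (2.0 * math.pi)

def highpass_corner_hz(r_ohms: float, c_farads: float) -> float:
    """RC high-pass corner at the amplifier input: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Illustrative assumptions: 1 g proof mass on a stiff element,
# 1 gigohm input resistance, 1 nF sensor capacitance.
f_upper = natural_freq_hz(k_n_per_m=4.0e6, mass_kg=1.0e-3)   # ~10 kHz
f_lower = highpass_corner_hz(r_ohms=1.0e9, c_farads=1.0e-9)  # ~0.16 Hz
print(f"usable band: roughly {f_lower:.2f} Hz up to well below "
      f"{f_upper / 1000:.1f} kHz resonance")
```

The non-zero high-pass corner is the mathematical face of the statement above: the response can extend below 1 Hz, but never to 0 Hz.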

What are Tactile Switches?

Tactile switches are electromechanical switches that make or break an electrical circuit with the help of manual actuation. In the 1980s, tactile switches were screen-printed or membrane switches that keypads and keyboards used extensively. Later versions offered switches with metal domes for improved feedback, enhanced longevity, and robust actuation. Today, a wide range of commercial and consumer applications use tactile switches extensively.

The presence of the metal dome in tactile switches provides a perceptible click sound, also known as a haptic bump, with the application of pressure. This is an indication that the switch has operated successfully. As tactile switches are momentary action devices, removal of the applied pressure releases the switch immediately, causing the current flow to be cut off.

Although most tactile switches are available as normally open devices, there are normally closed versions also in the market. In the latter model, the application of pressure causes the current flow to turn off and the release of pressure allows the current flow to resume.

Mixing up the names and functions of tactile and pushbutton switches is quite common, as their operation is somewhat similar. However, pushbutton switches have the traditional switch contact mechanism inside, whereas tactile switches use the membrane switch type contacts.

Their construction makes most pushbutton switches operate in momentary action. Tactile switches, on the other hand, are all momentary, much smaller than pushbutton switches, and generally offer lower voltage and current ratings. The haptic or audible feedback of tactile switches is another key differentiator. While pushbutton switches come in PCB or panel mounting styles, the design of tactile switches allows only direct PCB mounting.

Comparing the construction of tactile switches with those of other mechanical switches shows a key area of difference, leading to the tactile switches being simple and robust. This difference is in the limited number of internal components that allows a tactile switch to achieve its intended function. In fact, a typical tactile switch has only four parts.

A molded resin base holds the terminals and contacts for connecting the switch to the printed circuit board.

A metallic contact dome with an arched shape fits into the base. It reverses its shape with the application of pressure and returns to its arched shape when the pressure is removed. This flexing process causes the audible sound or haptic click. At the same time, the dome connects two fixed contacts in the base to complete the circuit. On removal of the force, the contact dome springs back to its original shape, disconnecting the contacts. As both the contacts and the dome are metal, their materials determine the haptic feel and the sound the switch makes.

A plunger directly above the metallic contact dome is the component the user presses to flex the dome and activate the switch. The plunger is either flat or a raised part.

The top cover, above the plunger, protects the switch’s internal mechanism from dust and water ingress. Depending on the intended function, the top cover can be metallic or other material. It also protects the switch from static discharge.

Proximity Sensor Technology

Proximity sensor technologies vary in their operating principles, their strengths, and whether they determine simple detection, proximity, or distance. There are four major options for compact proximity sensors in fixed embedded systems. Understanding the basic principles of operation of these four types is necessary for choosing among them.

Most proximity sensors offer an accurate means of detecting the presence of an object and its distance, without requiring physical contact. Typically, the sensor sends out an electromagnetic field, a beam of light, or ultrasonic sound waves that pass through or reflect off an object, before returning to the sensor. Compared with conventional limit switches, proximity sensors have the significant benefit of being more durable and, hence, last longer than their mechanical counterparts.

Reviewing the performance of a proximity sensor technology for a specific application requires considering the cost, size, range, latency, refresh rate, and material effect.

Ultrasonic

Ultrasonic proximity sensors emit a chirp or pulse of sound with a frequency beyond the usual hearing range of the human ear. The length of time the chirp takes to bounce off an object and return determines not only the presence of the object but also its distance from the sensor. The proximity sensor holds a transmitter and a receiver in a single package, with the device using the principles of echolocation to function.
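The distance computation behind echolocation is a one-liner: the measured echo time covers the round trip, so the range is half the time-of-flight multiplied by the speed of sound. A sketch, assuming the speed of sound in room-temperature air:

```python
SPEED_OF_SOUND_M_S = 343.0  # in dry air at about 20 deg C (assumed)

def distance_m(echo_time_s: float, speed: float = SPEED_OF_SOUND_M_S) -> float:
    """Range to the target: the chirp travels out and back,
    so divide the round trip by two."""
    return speed * echo_time_s / 2.0

# A 5.83 ms round trip corresponds to an object about 1 m away:
print(f"{distance_m(5.83e-3):.2f} m")  # -> 1.00 m
```

In practice the speed of sound drifts with air temperature, so accurate sensors compensate for it.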

Photoelectric

Photoelectric sensors are a practical option for detecting the presence or absence of an object. Typically, infrared-based, their applications include garage door sensing, counting occupancy in stores, and a wide range of industrial requirements.

Implementing photoelectric sensors can be through-beam or retro-reflective methods. The through-beam method places the emitter on one side of the object, with the detector on the opposite side. As long as the beam remains unbroken, there is no object present. An interruption of the beam indicates the presence of the object.

The retro-reflective method requires the emitter and the detector to be on the same side of the object. It also requires the presence of a reflector on the other side of the object. As long as the beam of light returns unimpeded, there is no object detected. The breaking of the beam indicates the presence of an object. Unfortunately, it is not possible to measure distances.

Laser Rangefinders

Although expensive, these are highly accurate, and work on the same principle as that of ultrasonic sensors, but using a laser beam rather than a sound wave.

Lasers require a lot of power to operate, making laser rangefinders unsuitable for portable or battery-powered applications. Being high-power devices, they can also be unsafe for ocular health. Their field of view can be fairly narrow, and lasers do not work well with glass or water.

Inductive

Inductive proximity sensors work only with metallic objects, as they use a magnetic field to detect them. They perform best with ferrous materials, typically steel and iron. Although they are a cost-effective solution, their restriction to detecting metallic objects reduces their usefulness. Moreover, inductive proximity sensors can be susceptible to a wide range of external interference sources.

What is Tactile Sensing Technology?

Scientists exploring the field of soft robotics for use in healthcare systems aim to emulate the human sense of touch. So far, however, tactile-sensing technology has not had much success in matching human dexterity.

In an experimental study published in the Journal of the Royal Society Interface, scientists compared the performance of an artificial fingertip with neural recordings of the human sense of touch. The study also describes an artificial biomimetic tactile sensor, the TacTip, which the scientists created. According to the study, TacTip offers artificial analogs of the dynamics of human skin and of the nerves that pass information from skin receptors to the central nervous system. In simple words, TacTip is an artificial fingertip that mimics the nerve signals of human fingertips.

To create the artificial sense of touch, the researchers 3-D printed a mesh of papillae and placed it on the underside of the compliant skin. This construction is similar to the dermal-epidermal interface of real skin: it is backed by a mesh of dermal papillae and biomimetic intermediate ridges, along with inner pins tipped with markers.

They constructed the papillae on advanced 3-D printers. The printers mixed soft and hard materials, thereby emulating textures and effects found in real human fingertips. They actually reconstructed the complex internal structure of the human skin and the way it provides for the sense of touch in human hands.

The scientists described the effort as an exciting development in soft robotics. They claim that 3-D printing tactile skin would lead to more dexterous robots. They also claim that their efforts could significantly improve the performance of prosthetic hands by imbuing them with a built-in sense of touch.

The scientists produced artificial nerve signals from the 3-D printed tactile fingertips. These signals look very similar to the recordings from actual, tactile neurons. According to scientists, human fingers have several nerve endings known as mechanoreceptors that transmit signals through human tactile nerves. The mechanoreceptors can signal the shape and pressure of contact. Earlier, others had mapped electrical signals from these nerves. By comparing the output from their 3-D printed artificial fingertip, the scientists found a startlingly close match to the earlier neural data.

A cut-through section of the 3-D printed tactile skin shows a white plastic that forms the rigid mount for the flexible black rubber skin. Scientists made both parts on advanced 3-D printers. The inside of the skin has dermal papillae, just as the real human skin also has.

In comparing the artificial nerve recordings from the 3-D printed fingertip with the 40-year-old real recordings, the scientists were pleasantly surprised. The complex recordings had many dips and hills over ridges and edges, and the artificial tactile data also showed the same pattern.

However, the researchers feel that the artificial skin still needs refinement, especially in its sensitivity to fine detail, as it is currently much thicker than real skin. Scientists are now exploring different means of 3-D printing skins that mimic the scale of human skin.