Category Archives: Guides

What if Your Life was Speech Activated?

Although we mostly use speech when interacting with other human beings, interacting with machines by speech has long seemed a distant dream, the stuff of science fiction movies. Many, however, are laying the groundwork for turning that vision into reality. Speech recognition software, such as Apple’s Siri for the iPhone 4s, is now quite popular. Yet several challenges remain, and many kinks related to voice authentication and voice-activated commands still need to be smoothed out.

VocalZoom, a startup based in Israel, adapts military technology and develops proprietary optical sensors that map the vibrations people produce when they speak. Its HMC, or human-to-machine communication sensor, couples this optical signal with that of an acoustic microphone and translates the output into a machine-readable sound signal. The company claims the system delivers speech recognition of an accuracy unparalleled in the market today.

VocalZoom approached the problem of speech recognition in an entirely different way. The company came across a military technology commonly used for eavesdropping – a laser microphone that senses vibrations on windows. Designers at VocalZoom surmised that if windows vibrate when people speak, other surfaces surely do too. Their research led them to the vibrations of the facial skin caused by the voice. They created a special low-cost sensor small enough to measure facial vibrations much the way a microphone measures sound. Their speech recognition system uses microphones, audio processors and this special sensor.

The special sensor is actually an interferometer that measures distance and velocity. It can therefore act as a microphone, picking up vibrations at audio frequencies, and it can also be used for 3D imaging, proximity sensing, biometric authentication, tapping detection and accurate heart-rate detection. The multifunction sensor has a very wide dynamic range, making it useful in many applications, for instance, measuring vibrations in engines, industrial printers or turbines.
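To get a rough sense of the signal such a sensor works with, generic laser-Doppler vibrometry physics relates surface velocity to the frequency shift of the reflected beam. This is a back-of-the-envelope sketch only – the wavelength and vibration figures are assumed for illustration, and this is not necessarily VocalZoom's actual design:

```python
import math

# Generic laser-Doppler relation, for illustration only: a surface moving
# at velocity v shifts the reflected laser frequency by f_d = 2*v/wavelength.
WAVELENGTH = 850e-9  # assumed near-infrared laser wavelength, in metres

def doppler_shift(velocity):
    """Frequency shift (Hz) of light reflected from a moving surface."""
    return 2 * velocity / WAVELENGTH

# Skin vibrating with 100 nm amplitude at 1 kHz has a peak velocity of
# 2*pi*f*A, which already produces a shift on the order of a kilohertz.
peak_velocity = 2 * math.pi * 1000 * 100e-9
print(round(doppler_shift(peak_velocity)), "Hz")
```

Even nanometer-scale skin movement thus produces a readily measurable frequency shift, which is why an interferometric sensor can recover speech from vibrations ordinary microphones cannot resolve.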

A typical sensor for measuring distance and velocity, such as a time-of-flight sensor, uses a separate emitter and detector. Designers at VocalZoom, however, use a single laser for both purposes. That makes their interferometer an extremely low-cost design with practically no additional optical components. It also meant coping with noise issues, and they had to develop noise-reduction methods for using the sensor in speech recognition systems.

The noise-reduction methods VocalZoom developed use the optical sensor to improve speech recognition. They have reached a stage where, even in an environment with a lot of background noise, speech recognition and voice authentication work at a very low error rate.

In actual practice, the laser is directed at the face of the person talking. It measures vibrations on the order of tens to hundreds of nanometers, too small for normal sensors to pick up. Because the laser measurements are so precise, surrounding noise does not interfere with these micro-measurements of the skin, which are then converted into clear audio.

Very soon, you will be able to use the optical laser technology of VocalZoom together with Siri or Google Voice and other voice-recognition applications for a wholly different experience.

Smart Amplifiers to Give More Bass

As our smartphones get smaller and thinner, one consequence is the loss of the bass, or low-frequency sound, we are accustomed to hearing naturally. The miniaturization of all components, including the loudspeaker, makes voice and audio reproduction from the gadget seem unnatural. This is mainly because handset manufacturers have been slow to improve audio performance, except in high-end handsets, leading to a lack of low-frequency audio.

However, the situation is changing. A technology called the smart amplifier is now available to extract the maximum performance from the micro-speaker of a cell phone. Where the coupling between a traditional amplifier and its speaker is unidirectional, a smart amplifier senses the loudspeaker’s operation while it is playing, and applies advanced algorithms to drive the loudspeaker to its maximum without damaging it.

To discuss the operation of a smart amplifier, it is important to understand that the loudspeaker is a vital component in the audio reproduction chain. If the design of the loudspeaker is not up to the mark, no amount of amplification or audio processing will overcome its shortcomings. However, even if you start with only a reasonable loudspeaker, a smart amplifier can turbocharge it and push it to its limits.

Speakers contain a frame, voice coil, magnet, and diaphragm. Electrical current from an amplifier coursing through the voice coil magnetizes it, making it react with and move against the speaker’s fixed magnet. The movement causes the membrane, or diaphragm, attached to the coil to move back and forth and emit audible sound waves. The movement of the diaphragm is called excursion, and it has its limits – audible distortion occurs when an amplifier pushes the speaker beyond this excursion limit, leading to failure in extreme cases.

Traditionally, amplifiers have used simple equalization networks at their outputs to limit this excursion. Because there can be large varieties of speakers, and different operating conditions including extreme audio signals, the filters are generally conservative. They actually limit the capability of the amplifier to push the speaker to its true limit. Additionally, current through the voice coil generates heat to some extent, and this factor limits the extent to which an amplifier can drive the speaker.

With the micro-speakers commonly used in smartphones, smart amplifiers use feedback while driving them. A common method with Class-D amplifiers is to add IV sense, that is, current and voltage sensing, to what would otherwise be a purely feed-forward path through the DAC, or digital-to-analog converter. With IV sense, the system receives feedback about the speaker’s voice-coil temperature, its loading, and unit-to-unit variations. The algorithm in the system uses this information to extract the maximum SPL, or sound pressure level, from the speaker without damaging it.
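The thermal side of such a protection loop can be sketched in a few lines. This is a minimal illustration of the idea, not any vendor's algorithm: the speaker parameters, temperature coefficient, and limits below are invented for the example.

```python
def protect_gain(sensed_v, sensed_i, gain,
                 r_cold=8.0, tcr=0.004, t_ambient=25.0,
                 t_limit=120.0, step=0.9):
    """Estimate voice-coil temperature from IV-sense readings and back
    off the drive gain when the coil approaches its thermal limit."""
    r_now = sensed_v / sensed_i                    # coil resistance right now
    # Copper resistance rises ~0.4% per degree C; invert to get temperature.
    t_coil = t_ambient + (r_now / r_cold - 1.0) / tcr
    if t_coil >= t_limit:
        gain *= step                               # reduce the drive level
    return gain, t_coil

# Coil at ambient (8 ohms): gain untouched; hot coil (12 ohms): gain backed off.
print(protect_gain(4.0, 0.5, 1.0))
print(protect_gain(6.0, 0.5, 1.0))
```

A real smart amplifier runs an equivalent estimate continuously, alongside an excursion model, which is why the characterization steps described below are needed before the protection limits can be set.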

However, before a smart amplifier can drive a loudspeaker safely, a few steps are necessary. These include thermal characterization, excursion characterization, and SPL measurements for the speaker. Usually, data plots of excursion versus frequency and of the safe operating area limits are needed.

Smart amplifiers such as the TAS2555 from Texas Instruments have a DSP or digital signal processor integrated. That reduces the time required for software development tremendously.

Trombe Wall to Heat and Cool Buildings Using Renewable Energy

Researchers at Lund University, Sweden have devised a technique for using an adaptation of the nineteenth century Trombe wall for heating and cooling modern buildings. The modified structure is capable of reducing carbon emissions associated with the heating and cooling processes, as well. Residents of Saint Catherine in Egypt are trying out the invention.

Trombe wall basics

A Trombe wall was a popular method used in the nineteenth century to keep buildings cool during the day and warm at night. The construction was simple, consisting of a very thick wall painted black on the outside surface, with a glass pane in front of it. The black surface, being a good absorber, allowed the wall to soak up heat from the sun’s rays falling on it, while the glass pane trapped that heat for some time. As the temperature dropped during the night, the heat was released slowly, keeping the building warm for several hours. Homes and buildings in the northern hemisphere had the wall facing south, while those in the southern hemisphere had it facing north.

An additional advantage of this structure is that the glass sheet causes the heat to be released as infrared rays. The warmth produced by these rays is more agreeable than the heat generated by traditional convection methods.

Marwa Dabaieh, an architectural scientist at the university, has tried out the modern version of the Trombe wall in Egypt, where 94% of the energy used is derived from fossil fuels. She explains that the innovation could help reduce dependence on electricity and cut down carbon emissions.

Cost effective production

The researchers have taken care to retain the basic construction methods. The old but popular passive technique has been employed, meaning there are no mechanical parts involved. This makes for an economical operation. The materials that are used are easily available. Wood and locally quarried stone are used for the basic construction, while wool is used for insulation. The glass used is produced locally, too.

Ventilation system

The modified version relies solely on naturally available solar energy and prevailing wind currents in the region. This makes for a very cost effective design structure.

Dabaieh reveals that the new design uses ventilation, exploiting the prevailing air streams for cooling. This is a major improvement on the older version of the Trombe wall, which often caused overheating inside the building. The researchers are continually adjusting the structures and positions of the vents to keep the indoor temperature comfortable, eliminating the need for air conditioning in the hot summer months.

Roping in the locals

Dabaieh reveals that the project has engaged local residents in the construction and installation process. This will help cut costs further and provide employment opportunities for young people. Since many homeowners in St Catherine who have put up the Trombe wall have expressed their satisfaction with the structure, several other residents are keen on installing it.

The adapted Trombe wall is a cheap and efficient system that could serve to meet the challenges posed by rising energy requirements worldwide.

What is an Integrated Development Environment?

Those who develop and streamline software use IDEs, or Integrated Development Environments. IDEs are software applications providing a programming environment for developing and debugging software. The older way of developing software was to use separate, unrelated tools for tasks such as editing, compiling and linking to produce a binary or executable file. An IDE combines these separate tasks into one seamless development environment for the developer.

Developers have a multitude of choices when selecting an IDE. They can choose IDEs made available from software companies, vendors and Open Source communities. There are free versions and those whose pricing depends on the number of licenses necessary. In general, IDEs do not follow any standard and developers select an IDE based on its own capabilities, strengths and weaknesses.

Typically, all IDEs provide an easy and useful interface, with automatic development steps. Developers using IDEs run and debug their programs all from one screen. Most IDEs offer links from a development operating system to a target application platform such as a microprocessor, smartphone or a desktop environment.

Developing executable software for any environment entails creating source files, compiling them to produce the machine code and linking these with each other along with any library files and resources to produce the executable file.

Programmers write code statements for the specific tasks they expect their program to handle. This forms the source file, and developers write it in statements specific to a high-level language such as C, Java, Python, etc. The language of a source file is evident from the extension developers give the file. For example, a file written in the C language usually has a name such as “myfile.c.”

Compilers within the IDE translate source files into the appropriate machine-level code, or object files, suitable for the target environment, and the IDE offers a choice of compilers for it. At the next stage, a linker collects all the object files the program requires and links them together. Linking also takes in any specified library files while assigning memory and register values to variables in the object files. Library files supply routines for the tasks the operating system must support. The output of the linker is an executable file, in low-level code understood by the hardware in the system.
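The compile-then-link sequence can be sketched by driving a C compiler directly from a short Python script. This assumes gcc is available on the PATH, and the file names are illustrative:

```python
import os
import shutil
import subprocess
import tempfile

# A one-file C program for the demonstration; its exit code proves it ran.
SOURCE = "int main(void) { return 42; }\n"

def build(src_text, workdir):
    """Mimic the steps an IDE automates: compile, then link."""
    src = os.path.join(workdir, "myfile.c")
    obj = os.path.join(workdir, "myfile.o")
    exe = os.path.join(workdir, "myprog")
    with open(src, "w") as f:
        f.write(src_text)
    # Step 1: compile the source file into machine-level object code.
    subprocess.run(["gcc", "-c", src, "-o", obj], check=True)
    # Step 2: link the object file (plus any libraries) into an executable.
    subprocess.run(["gcc", obj, "-o", exe], check=True)
    return exe

exit_code = None
if shutil.which("gcc"):  # skip gracefully when no compiler is installed
    with tempfile.TemporaryDirectory() as d:
        exe = build(SOURCE, d)
        exit_code = subprocess.run([exe]).returncode
print("program exit code:", exit_code)
```

An IDE performs exactly these invocations behind its build button, tracking which source files changed so only those are recompiled before the final link.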

Without an IDE, the task of the developer is highly complicated. He or she must compile each source file separately. If the program has more than one source file, they must have separate names so that the compiler can identify them. While invoking the compiler, the developer must specify the proper directory containing the source files along with specifying another directory for holding the output files.

Any error in the source files leads to a failure in compiling, and the compiler usually outputs error messages. Compilation succeeds only when the developer has addressed all errors by editing the individual source files. For linking, the developer has to specify each object file necessary. Errors may crop up at the linking stage as well, since some errors are detectable only after the entire program is linked.

Guiding Basics in Efficient Lighting Design

The discovery of fire, and subsequently of artificial lighting, has contributed hugely to modern advancements in human life all over the world. However, only a few are aware that effective lighting needs proper planning and design. Most people incorrectly take lighting design to mean simply selecting lighting equipment for a system. Of course, selecting the most energy-efficient and cost-effective products is important, but these are simply the tools for achieving the design.

In reality, lighting design requires assessing and meeting the needs of the people who will use the space. It also requires skillful balancing between the functional aspects and the aesthetic impact of the lighting system.

That makes lighting both an art and a science. It also implies there cannot be any hard and fast rules for designing lighting systems. Additionally, there will also not be any single ideal solution optimum for all lighting problems. Typically, lighting designers face conflicting requirements and must set priorities before reaching a satisfactory compromise. Assets necessary for successful lighting designers include a proper understanding of basic lighting concepts, extensive experience, careful planning, assessment and analysis.

Lighting mostly involves the use of energy. One of the chief concerns is achieving optimum energy efficiency, which means getting maximum lighting quality with minimum consumption of energy. This requires thoughtful design combined with selecting the appropriate lamp, luminaire and control system. Additionally, decisions must include informed choices about the required level of illumination and an awareness of the space or environment being lit.

Lighting designers must have an intimate knowledge of the human eye and the way it perceives light and color. For example, light falling on an object is partly absorbed and partly reflected by the object. We see the object because of the reflected light entering our eye. The color of the reflected light also determines the perceived color of the object.

A flexible lens within the eye helps to focus the image on the retina and allows clear vision. The retina of the eye has many rods and cones. These convert light into electrical impulses that reach the brain via the optic nerve. The brain interprets the impulses into a proper image. However, illumination levels also change the way the eye perceives an object.

During the day and in normal daylight conditions, the cones in the retina enable us to see details in color. This is the photopic, or daytime, adaptation of the eye. As light levels dip, the cones become less effective and the more sensitive rods take over. In a well-lit street, for example, the eye uses a mixture of cones and rods to see.

However, rods do not differentiate colors and respond only to different shades of black and white. The overall impression in average lighting is an image with less color – the mesopic adaptation. As light levels fall even further, such as in dim moonlight, the cones cease to function altogether and the eye loses all capability to see in color. This gives completely black and white vision – the scotopic, or nighttime, adaptation.

Latest Touch Display for the Raspberry Pi

Those who were on the lookout for a proper touch display for their single board computer, the Raspberry Pi or RBPi, can now rest easy. The official RBPi touch display is on sale at several stores, and others will be receiving stock very soon. Users of RBPi models such as Rev 2.1, B+, A+ and Pi 2 can now use this simple embeddable display instead of having to hook the board up to a TV or a monitor. Watch the YouTube video demonstration for a better understanding.

The new official touch display for the RBPi is a 7” touchscreen LCD. An adapter board drives the LCD and plugs into the RBPi through the display connector. Although the ribbon cable is the same type as that used by the camera, the two connectors are not interchangeable. Therefore, identify the display connector first, before plugging in the ribbon cable from the display.

You can power up the display in one of three ways: using a separate power supply, using a USB link or by using GPIO jumpers. When using a separate power supply, you need a separate USB power supply with a micro-USB connector cable. The power supply must have a rating of at least 500mA and requires plugging in to the display board at PWR IN.

It is also possible to power the RBPi through the display board. For this, use an official RBPi power supply of rating 2A and plug it into the display board at PWR IN. Use another standard micro-USB connector cable from the PWR OUT connector and plug it into the RBPi power in point.

Powering the display from the RBPi GPIO requires using two jumpers – one from the 5V and the other from the GND pins of the GPIO.

After plugging in the ribbon cable and making one of the above power connections between the RBPi and the display, using the display requires updating and upgrading the OS on the RBPi. On rebooting, the OS automatically identifies the new display and starts to use it as its default display rather than the HDMI. To allow the HDMI display to stay on as default, the config.txt file must contain the line:

display_default_lcd=0


The RBPi display comes with an integrated 10-point touchscreen. The driver for the touchscreen is capable of outputting both full multi-touch events and standard mouse events. Therefore, it is capable of working with ‘X’ – the display system of Linux, although X was never designed to work with a touchscreen.

For finger touch operations in cross-platform applications, the Python GUI development system Kivy is a great help. Although designed to work with touchscreen devices on tablets and phones, Kivy works fine with RBPi.

The 7” touchscreen display for the RBPi is of industrial quality from Inelco Hunter and boasts of an RGB display with a resolution of 800×480 at 60fps. It displays images with 24-bit color and a 70-degree viewing angle. The metal backed display has mounting holes for the RBPi and comes with an FT5406 10-point capacitive touchscreen.

What are Flying Probe Test Systems?

When testing a component or an electronic gadget, it is usual to hold two probes to the test points. Probes are insulated metal prods, bare only at the tips; the tips touch the test points at one end while flying leads at the other connect to an instrument. The instrument could be a voltmeter, an ammeter, an ohmmeter or a combination called a multimeter. Such an arrangement is good for testing individual components or a single printed circuit board. However, in a manufacturing scenario, where boards are produced in hundreds or thousands, humans cannot match the required speed, and special testing machines are used.

Such testing machines use a set of flying probes for testing, trimming or aligning components on a printed circuit board or a gadget. Most of these machines are computerized test beds offering high speed, unprecedented positioning accuracy and extensive test coverage. They remove the requirement for a bed-of-nails test fixture and provide a wide variety of test facilities, making them well suited to validating low-volume production runs and R&D prototypes.

The typical configuration of a flying probe test system consists of four standard moving probes installed diagonally to the board under test. Advanced machines may also have two optional Z-mechanisms for holding another pair of moving probes that can move up and down vertically.

The vertical Z-axis probes enable access to test points that the standard moving probes find difficult to reach. In addition, the Z-axis probes can make proper contact with locations at different heights. With such flexibility, flying probe test systems can directly contact through-holes and heads of connector pins. To prevent slippages and false contacts, the probe tip may be of the dagger or inverse cone head type, all resulting in increased test coverage.

The testing machines are highly accurate measurement systems that include several 4-quadrant sources and measurement systems. Almost invariably, these are embedded with AC programmable generators that can also be used as function generators. Therefore, the testers are capable of applying measuring signals that are best suited to specific electronic components.

The measurement system associated with the flying probes usually features high-resolution ADC/DACs, which help to make precise tests and measure the dynamic characteristics of the circuit.

To enable accurate and repeatable measurements, these testers possess an XY table or stage made of highly polished native granite. Modern flying probe test systems boast of superfast movement of probes with positioning accuracies better than conventional models by at least 25%.

The super-fast movement is the result of using state-of-the-art high power and fast-moving drive motor systems controlled by new control software speeding up the test by 30-50% over conventional models. With the addition of three bottom probe units, combination tests can be performed more efficiently, further cutting down test times.

Modern flying probe test systems come with vision test systems offering simple AOI functions. Detecting missing, wrongly oriented or mispositioned components is made simple with megapixel color digital cameras, backed by ring illumination using high-intensity white LEDs. This combination helps not only in reading barcodes and 2D codes, but also in color identification tests, OCR functions, and in modifying contact points while debugging test programs.

What is Vapor Phase Reflow Soldering?

Vapor Phase Reflow Soldering is an advanced soldering technology. It is fast replacing the other soldering processes manufacturers presently use for assembling printed circuit boards in high volumes for all sorts of electronic products. Soldering electronic components to printed circuit boards is a complex physical and chemical process requiring high temperatures. With the introduction of lead-free soldering, the process has become more stringent, requiring still higher temperatures and shorter times. All the while, components are becoming smaller, making the process more complicated.

Manufacturers face soldering problems for many reasons. Chief among them is the introduction of lead-free components and the lead-free soldering process. Another is that boards often contain components of different masses. The heat these components store during soldering varies with their mass, resulting in uneven heat distribution and warping of the printed boards.

With Vapor Phase reflow soldering, the board and components face the lowest possible maximum temperatures necessary for proper soldering. Therefore, there is no overheating of components. The process offers the best wetting of components with solder and the soldering process happens in an inert atmosphere devoid of oxygen – resulting in the highest quality of soldering. The entire process is environment friendly and cost effective.

In the Vapor Phase Reflow Soldering process, the soldering chamber initially contains Galden, an inert liquid with a boiling point of 230°C. This is the same as the process temperature for lead-free Sn-Ag solders. During start-up, the Galden is heated to its boiling point, forming a layer of vapor above the liquid surface and displacing the ambient air upwards. As the vapor has a higher molecular weight than air, it stays just above the liquid surface, ensuring an inert vapor zone.

When a printed circuit board with its components is introduced into this inert vapor zone, the Galden vapor condenses back to its liquid form on the cooler surfaces. The change of phase from vapor to liquid releases a large amount of thermal energy. As the vapor encompasses the entire PCB and its components, there is no difference in temperature even for high-mass parts. Everything inside the vapor is thoroughly heated up to the vapor temperature. This is the biggest advantage of the vapor phase soldering process.

The heat transfer coefficient during condensation of the vapor ranges from 100 to 400 W/m²K. This is nearly 10 times higher than the heat transfer coefficients involved in convection or radiation, and about 10 times lower than that achieved by direct contact in liquid soldering processes. The excellent heat transfer rate prevents any excessive or uneven heat transfer, and the soldering temperature of the vapor phase reflow process stays constant at the vapor temperature of around 230°C.
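Newton's law of cooling puts rough numbers on this difference. The following sketch uses a mid-range condensation coefficient and an assumed initial board temperature, purely for illustration:

```python
def heat_flux(h, t_vapor, t_board):
    """Heat flux into the board, q = h * (T_vapor - T_board), in W/m^2."""
    return h * (t_vapor - t_board)

# Condensing Galden vapor versus ordinary convection at the same
# temperature difference (230 C vapor, board initially at 25 C).
q_condensation = heat_flux(300.0, 230.0, 25.0)  # mid-range condensation h
q_convection = heat_flux(30.0, 230.0, 25.0)     # roughly 10x smaller h
print(q_condensation, q_convection)
```

At the same temperature difference, the condensing vapor delivers about ten times the heat flux of convection, which is why even high-mass parts reach the soldering temperature quickly and uniformly.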

The Vapor Phase Reflow Soldering process offers several advantages. Soldering inside the vapor zone ensures there can be no overheating. As the vapor completely encompasses the components, there are no cold solder joints due to uneven heat transfer or shadowing. The inherently inert vapor obviates the need for nitrogen. Controlled heating of the vapor consumes only one-fifth of the usual direct energy, and saves on air-conditioning costs.

As the entire process is a closed one, no hazardous gases, such as those from burnt flux, are created. Additionally, Galden is a neutral process fluid and environmentally friendly.

How Do You Read Resistor Values?

Resistors range from huge multi-watt giants to sub-miniature surface mount devices (SMDs) and parts with different types of leads in between. The larger varieties do not pose much of a problem as they usually have a big-enough surface for printing the value of the resistance, its tolerance, and other necessary specifications. For smaller sizes, codes are generally used for letting the user know the details of the resistor.

Two common methods are in use for identifying resistors – color coding for resistors with leads, and number coding for SMD resistors. Color coding is an easy way to convey a lot of information concisely and effectively. One of its advantages is that the specifications of the resistor are visible irrespective of its orientation on the PCB – very useful for overcrowded boards. As SMD resistors have only limited surface area, number coding suits them better.

Color coding for resistors

Resistors with color coding come with one of two standard codes – the 4-band code or the 5-band code. The 4-band coding is used more with resistors of low precision with 5, 10, and 20% tolerances. Higher precision resistors with tolerances of 1% and lower are marked with 5-band color codes.

The colors used have their own values. For example, Black represents zero, Brown represents one, Red represents two, Orange represents three, Yellow represents four, Green represents five, Blue represents six, Violet represents seven, Gray represents eight, White represents nine, Gold represents 0.1, and Silver represents 0.01.

For tolerances, Gray represents ±0.05%, Violet represents ±0.1%, Blue represents ±0.25%, Green represents ±0.5%, Brown represents ±1%, Red represents ±2%, Gold represents ±5%, Silver represents ±10%, while an absence of color represents ±20%.

The 4-Band color coding scheme

The 4-band color coding has three color bands crowded on one side, with the fourth band separated from the others. One reads the code from left to right, beginning with the crowded colors on the left and ending with the separated band on the extreme right. Starting from the left, the first two color bands represent the most significant digits of the resistance value, while the third band represents the multiplier. The isolated fourth band is the tolerance band. As an example, a resistor of 4.7KΩ, 5% value will have the color bands Yellow, Violet, and Red, representing 4700Ω, with a Gold fourth band. Where there are only three color bands, the resistor has a ±20% tolerance.
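The decoding rule can be written as a short lookup using the color values listed earlier; this is an illustrative sketch, with the function and dictionary names invented for the example:

```python
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "gray": 8, "white": 9}
# Gold and silver multiply by 0.1 and 0.01, i.e. powers of ten of -1 and -2.
MULTIPLIERS = dict(DIGITS, gold=-1, silver=-2)
TOLERANCES = {"brown": 1, "red": 2, "green": 0.5, "blue": 0.25,
              "violet": 0.1, "gray": 0.05, "gold": 5, "silver": 10,
              None: 20}  # no fourth band means +/-20%

def four_band(band1, band2, band3, band4=None):
    """Return (resistance in ohms, tolerance in percent) for a 4-band code."""
    value = (DIGITS[band1] * 10 + DIGITS[band2]) * 10 ** MULTIPLIERS[band3]
    return value, TOLERANCES[band4]

# Yellow-Violet-Red-Gold: the 4.7 kilo-ohm, 5% example from the text.
print(four_band("yellow", "violet", "red", "gold"))
```

A Brown-Black-Red resistor with no fourth band decodes the same way to 1000Ω at ±20%, matching the three-band rule above.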

The 5-band color coding scheme

High quality, high precision resistors with tolerances of 2%, 1% or lower are represented by five color bands, with the first three denoting the three most significant digits of the resistance value. The fourth band represents the multiplier value, while the fifth stripe gives the tolerance. Some resistors have an additional sixth band denoting the reliability or the temperature coefficient.

Number coding for SMD resistors

SMD resistors usually have three or four numbers printed on them, depending on whether they are of 5% or 1% tolerance. The last number is the multiplier, with the others representing the most significant digits of the resistance value. In some cases, a letter represents the resistor’s tolerance. However, if the letter is an R, it marks the position of the decimal point.
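These rules can be sketched as a small decoder; the function name is invented for the example:

```python
def smd_value(code):
    """Decode a 3- or 4-digit SMD resistor marking into ohms.
    An 'R' in the code marks the position of the decimal point."""
    if "R" in code:
        return float(code.replace("R", "."))
    significant, multiplier = code[:-1], code[-1]
    return int(significant) * 10 ** int(multiplier)

print(smd_value("472"))   # 47 followed by 2 zeros: 4700 ohms
print(smd_value("4702"))  # 470 followed by 2 zeros: 47000 ohms (1% series)
print(smd_value("4R7"))   # 4.7 ohms
```

Note that three-digit codes go with the 5% series and four-digit codes with the 1% series, exactly as described above.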

How Gray Are The Gray Codes?

Many data acquisition systems and rotary encoders use the Gray Codes for their operation. As only one bit changes state while the numbers progress, read errors from timing and mechanical issues are minimized. Initially, use of Gray Codes was limited to specific applications, but now this versatile coding scheme is extensively used in Karnaugh maps, error detection systems, and in rotary and optical encoders.

In general, a Gray Code represents numbers using the binary encoding scheme and it groups a sequence of bits such that only one bit in the group changes from the number before and after it. Frank Gray, a researcher from Bell Labs, described the code in his 1947 patent, where he called it the Binary Reflected Code. After the patent was granted in 1953, the encoding system was referred to as the Gray Code.

Being an unweighted code, the columns of bits in a Gray Code do not carry any base weight, in contrast to the Binary number system. In the Binary number system, the rightmost column holds the least significant bit and carries a weight of 2⁰=1; the second column has the weight 2¹=2; the third 2²=4, and so on. Each column thus represents the base (2 in Binary) raised to a power, with the final value calculated by multiplying each bit by the weight of its column and adding up the results.

Although the columns in a Gray Code are also positional, they are not weighted, as the Gray Code is a numeric representation of a cyclic encoding scheme. The code rolls over and repeats; therefore, it is unsuitable for mathematical operations. To be used in displays or in mathematical computations, Gray Code sequences need to be converted to Binary or Binary Coded Decimal (BCD).

Gray Codes belong to the family of unit-distance, minimal-change codes. That means only a single bit of the sequence changes as the count progresses. Gray Codes therefore tolerate synchronization errors and misalignment better, as they limit the maximum read error to one unit. This property makes them useful in error detection schemes as well. Communication systems use Gray Codes in preference to parity checks, as they detect unexpected changes in data better. If you sum up the bits of a number, the sum for the next number changes by exactly one, so the bit sum alternates between even and odd.
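The standard conversion between Binary and the binary-reflected Gray Code is a one-liner each way, and the unit-distance property described above is easy to verify:

```python
def binary_to_gray(n):
    """Reflected binary: XOR the number with itself shifted right by one."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the encoding by XOR-folding the shifted value back in."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Consecutive codes differ in exactly one bit, so the bit-count parity
# alternates even/odd as the count progresses.
codes = [binary_to_gray(i) for i in range(8)]
for a, b in zip(codes, codes[1:]):
    assert bin(a ^ b).count("1") == 1
print(codes)  # [0, 1, 3, 2, 6, 7, 5, 4]
```

The round trip `gray_to_binary(binary_to_gray(n)) == n` holds for every `n`, which is what a rotary-encoder driver relies on when turning the sensed code back into a position count.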

Rapidly changing values can lead to errors due to interfacing or hardware constraints. This is where the Gray Code is most useful, as only a single bit quantifies each change. That is also the reason most mechanical rotary and optical encoders offer Gray Code outputs. However, Gray Codes have progressed far beyond the encoding mask that Frank Gray documented in his patent.

For example, aircraft use mechanical altimeters in which the encoding disk is synchronized to the dials, producing a form of Gray Code output known as the Gillham Code. This specialized code changes a single bit for each increment of 100 feet, allowing easy tracking of altitude.