
Cayenne on a Raspberry Pi

If you are building Internet of Things (IoT) projects, a single board computer such as the Raspberry Pi (RBPi) can be a great asset. With Cayenne installed on the RBPi, you have a drag-n-drop IoT project builder that its developer, myDevices, claims is the first in the world.

It is now easy to connect your RBPi to a mobile or online dashboard, with a breadboard on the other side ready to connect relays, lights, and motion sensors. Of course, you have always had the freedom to read pages of documentation, learn new programming languages, write an application, and then debug everything to make it all work together. Alternatively, Cayenne gets your project up and running in a fraction of that time, letting you build automation projects in minutes.

myDevices created Cayenne for makers and developers eager to prototype IoT projects on their RBPi as quickly as possible. Users get a free Cayenne account, which allows them to create an unlimited number of projects, along with full-fledged maker support including remote control of sensors, actuators, motors, and GPIO boards.

The free account also stores an unlimited amount of data collected by the hardware, including triggers and alerts, providing all the tools necessary for automation. You can set up custom dashboards and threshold alerts, and highlight your projects with fully customizable drag-n-drop widgets.

According to myDevices, Cayenne is the first builder software of its kind, empowering developers to create IoT projects quickly with drag-n-drop features and to host their connected-device projects. Cayenne allows remote control of hardware, displays sensor data, stores and analyzes data, and does several other useful things.

In the Cayenne platform, users can find several major components, such as:

The main Application – useful for setting up and controlling IoT projects with drag-n-drop widgets.
The Online Dashboard – set this up through a browser to control your IoT projects.
The Cloud – useful for storing device, user, and sensor data, as well as actions, triggers, and alerts; it is also responsible for data processing and analysis.
The Agent – useful for communicating between the server and the hardware, implementing outgoing and incoming alerts, triggers, actions, and commands.
Whenever you press a button on the online dashboard or in the Cayenne app on your mobile, the command travels to the Cayenne Cloud for processing and from there to your hardware. The same process takes place in the reverse direction as well. Cayenne offers users plenty of features.

You can connect to your IoT devices through Ethernet, Wi-Fi, or mobile apps, and discover and set up your RBPi on a network via Ethernet or Wi-Fi. Dashboards are customizable, and widgets are drag-n-drop. You can remotely access your RBPi, shut it down, or reboot it, and add sensors, actuators, and control extensions connected to it, among many other features.
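
To give a flavor of what talking to Cayenne looks like in code, here is a minimal Python sketch that publishes a temperature reading to a dashboard widget over MQTT. It deliberately uses the generic paho-mqtt package rather than Cayenne's own client library, and the username, password, client ID, and channel number are placeholders you would copy from your own Cayenne account; the topic and payload formats follow Cayenne's documented "Bring Your Own Thing" MQTT conventions.

    import time
    import paho.mqtt.client as mqtt

    MQTT_USERNAME = "your-cayenne-username"   # placeholders from your own
    MQTT_PASSWORD = "your-cayenne-password"   # Cayenne account settings
    CLIENT_ID = "your-cayenne-client-id"
    CHANNEL = 1                               # dashboard widget channel

    # paho-mqtt 1.x style constructor
    client = mqtt.Client(client_id=CLIENT_ID)
    client.username_pw_set(MQTT_USERNAME, MQTT_PASSWORD)
    client.connect("mqtt.mydevices.com", 1883, keepalive=60)
    client.loop_start()

    # Cayenne's documented topic and payload conventions: one topic per
    # data channel, payload formatted as "type,unit=value".
    topic = "v1/{}/things/{}/data/{}".format(MQTT_USERNAME, CLIENT_ID, CHANNEL)
    while True:
        reading = 22.5                        # replace with a real sensor read
        client.publish(topic, "temp,c={}".format(reading))
        time.sleep(10)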

What if Your Life was Speech Activated?

Although we mostly use speech when interacting with other human beings, interacting with machines through speech is still a distant dream. So far, human-to-machine speech technology has largely been the preserve of science fiction movies. However, many are working on the groundwork to transform that vision into reality. For instance, speech recognition software such as Apple’s Siri, introduced with the iPhone 4s, is now quite popular. Yet there are several challenges to address and many kinks to iron out in voice authentication and voice-activated commands.

VocalZoom, a startup based in Israel, has adapted military technology to develop proprietary optical sensors that map the vibrations people produce when they speak. Their human-to-machine communication (HMC) sensor couples this optical measurement with the signal from an acoustic microphone, translating the output into a machine-readable sound signal. The result, the company claims, is speech recognition more accurate than anything else on the market today.

VocalZoom approached the problem of speech recognition in an entirely different way. The company came across a military technology commonly used for eavesdropping – a laser microphone that senses vibrations on windows. Designers at VocalZoom surmised that if windows vibrate when people speak, other things must vibrate too. Their research led them to the facial skin vibrations that accompany speech, and they created a special low-cost sensor small enough to measure these vibrations much as a microphone does. Their speech recognition system combines microphones, audio processors, and the special sensor.

The special sensor is actually an interferometer that measures distance and velocity. It can therefore serve as a microphone measuring audio vibrations, and it can also be used for 3D imaging, proximity sensing, biometric authentication, tapping detection, and accurate heart-rate detection. The multifunction sensor has a very wide dynamic range, useful in many applications – for instance, measuring vibrations in engines, industrial printers, or turbines.

A typical sensor for measuring distance and velocity, such as a time-of-flight sensor, uses a separate emitter and detector. The designers at VocalZoom instead use a single laser for both purposes, which makes their interferometer a super low-cost design with practically no additional optical components. However, they had to cope with noise, and developing noise reduction methods was necessary before the sensor could be used in speech recognition systems.

VocalZoom’s noise reduction methods use the optical sensor to improve speech recognition. They have reached a stage where, even in an environment with a lot of background noise, speech recognition or voice authentication runs at a very low error rate.

In actual practice, the laser is directed at the face of the person talking. It measures vibrations on the order of tens to hundreds of nanometers, which normal sensors do not usually pick up. Because the laser measurements are so precise, surrounding noise does not interfere with the micro-measurements of the skin, which are then converted into clear audio.

Very soon, you will be able to use the optical laser technology of VocalZoom together with Siri or Google Voice and other voice-recognition applications for a wholly different experience.

Smart Amplifiers to Give More Bass

As our smartphones get smaller and thinner, one consequence is the loss of the bass, or low-frequency sound, we are accustomed to hearing naturally. The miniaturization of all components, including the loudspeaker, makes voice and audio reproduction from the gadget seem unnatural. Handset manufacturers have also been slow to improve audio performance except in high-end handsets, compounding the lack of low-frequency audio.

However, the situation is now changing. A technology called the smart amplifier extracts the maximum performance from a cell phone’s micro-speaker. Where the coupling between a traditional amplifier and its speaker is unidirectional, a smart amplifier senses the loudspeaker’s operation while playing, and applies advanced algorithms to drive the loudspeaker to its maximum without hurting your ears.

To understand the operation of a smart amplifier, it is important to remember that the loudspeaker is a vital link in the audio reproduction chain. If the design of the loudspeaker is not up to the mark, no amount of amplification or audio processing will overcome its shortcomings. However, if you start with even a reasonable loudspeaker, a smart amplifier can turbocharge it and push it to its limits.

Speakers contain a frame, voice coil, magnet, and diaphragm. Electrical current from an amplifier coursing through the voice coil magnetizes it, making it react with and move against the speaker’s fixed magnet. That motion moves the diaphragm, or membrane, attached to the coil back and forth, emanating audible sound waves. The movement of the diaphragm is called excursion, and it has limits – audible distortion occurs when an amplifier drives the speaker beyond them, and in extreme cases the speaker fails.

Traditionally, amplifiers have used simple equalization networks at their outputs to limit this excursion. Because there is a large variety of speakers and operating conditions, including extreme audio signals, these filters are generally conservative; they actually keep the amplifier from pushing the speaker to its true limit. Additionally, current through the voice coil generates heat, which further limits how hard an amplifier can drive the speaker.

Smart amplifiers drive the micro-speakers commonly used in smartphones using feedback. A common method with Class-D amplifiers is to add current and voltage (IV) sense alongside the digital-to-analog converter (DAC). With IV-sense, the system receives feedback about the speaker’s voice-coil temperature, its loading, and unit-to-unit variations. The system’s algorithm uses this information to extract the maximum sound pressure level (SPL) from the speaker without damaging it.
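
To illustrate the principle only – commercial smart-amplifier algorithms are far more elaborate – here is a Python sketch of one thing IV-sense makes possible: estimating the voice-coil temperature from the sensed voltage and current, then backing the gain off as the coil approaches a thermal limit. All constants are assumed example values, not figures for any real speaker.

    ALPHA_CU = 0.0039  # temperature coefficient of copper, per deg C
    R0 = 8.0           # assumed coil resistance in ohms at the reference temp
    T0 = 25.0          # reference temperature, deg C
    T_MAX = 120.0      # assumed safe coil temperature limit, deg C

    def coil_temperature(v_sense, i_sense):
        """Infer coil temperature from its sensed resistance (R = V / I)."""
        r = v_sense / i_sense
        return T0 + (r / R0 - 1.0) / ALPHA_CU

    def protection_gain(v_sense, i_sense):
        """Scale the gain down linearly over the last 20 deg C before T_MAX."""
        t = coil_temperature(v_sense, i_sense)
        headroom = (T_MAX - t) / 20.0
        return max(0.0, min(1.0, headroom))

    # Example: a coil sensed at 9.1 ohms is near 60 deg C, so full gain is fine.
    print(protection_gain(9.1, 1.0))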

However, before a smart amplifier can drive a loudspeaker safely, a few steps are necessary: thermal characterization, excursion characterization, and SPL measurements for the speaker. Plots of excursion versus frequency and of the safe operating area limits are usually needed.

Smart amplifiers such as the TAS2555 from Texas Instruments integrate a digital signal processor (DSP), which reduces the time required for software development tremendously.

Trombe Wall to Heat and Cool Buildings Using Renewable Energy

Researchers at Lund University, Sweden, have devised an adaptation of the nineteenth-century Trombe wall for heating and cooling modern buildings. The modified structure also reduces the carbon emissions associated with heating and cooling. Residents of Saint Catherine in Egypt are trying out the invention.

Trombe wall basics

A Trombe wall was a popular nineteenth-century method of keeping buildings cool during the day and warm at night. The construction was simple: a very thick wall painted black on its outside surface, with a glass pane in front of it. The black surface, being a good absorber, let the wall soak up heat from the sun’s rays falling on it, while the glass pane, a poor radiator, trapped that heat for some time. As the temperature dropped during the night, the heat was released slowly, keeping the building warm for several hours. Homes and buildings in the northern hemisphere had the wall facing south, while those in the southern hemisphere had it facing north.

An additional advantage of this structure is that the glass sheet releases the heat as infrared radiation, and the warmth of this radiant heat is more agreeable than heat delivered by traditional convection.

Marwa Dabaieh, an architectural scientist at the university, has tried out the modern version of the Trombe wall in Egypt, where 94% of the energy used is derived from fossil fuels. She explains that the innovation could help reduce dependence on electricity and cut carbon emissions.

Cost effective production

The researchers have taken care to retain the basic construction methods. The old but popular passive technique has been kept, meaning there are no mechanical parts involved, which makes for economical operation. The materials used are easily available: wood and locally quarried stone for the basic construction, wool for insulation, and locally produced glass.

Ventilation system

The modified version relies solely on naturally available solar energy and the prevailing wind currents of the region, making for a very cost-effective design.

Dabaieh explains that the new design uses ventilation, exploiting these air streams for cooling. This is a major improvement on the older Trombe wall, which often overheated the building’s interior. The researchers are continually adjusting the structure and position of the vents to keep the indoor temperature comfortable, eliminating the need for air conditioning in the hot summer months.

Roping in the locals

Dabaieh reveals that the project has engaged local residents in the construction and installation process, which helps cut costs further and provides employment opportunities for young people. Since many homeowners in St Catherine who have put up the Trombe wall have expressed their satisfaction with the structure, several other residents are keen on installing it.

The adapted Trombe wall is a cheap and efficient system that could serve to meet the challenges posed by rising energy requirements worldwide.

Quantum Dot Solids to Bring In a New Age Electronics

Quantum dot solids – crystals built out of smaller crystals – may be the next thing after silicon wafers to bring about major changes in the field of electronics. Just as wafers cut from single silicon crystals transformed communication technology about half a century ago, a team of scientists at Cornell University working on quantum dot solids expects to transform the field further.

Larger structures from nanocrystals

The scientists grew larger crystals from nanocrystals of lead selenide. They then formed square, two-dimensional superlattice structures by fusing the nanocrystals, taking care to maintain atomic coherence. Atomic coherence ensures that the atoms of adjacent nanocrystals connect directly to each other, with no intervening substance. As a result, these superstructures have electrical properties superior to those of existing semiconductor nanocrystals, which the researchers anticipate will aid the absorption of energy and the emission of light.

Tobias Hanrath, an associate professor in the Robert Frederick Smith School of Chemical and Biomolecular Engineering, led the study along with graduate student Kevin Whitham. The findings have been published in Nature Materials.

Hanrath stresses that the building blocks making up the superstructure have been placed with near atomic-scale precision, going on to say it would be reasonable to call the structures as perfect as possible.

The current work builds on the group’s earlier research, detailed in a 2013 paper in Nano Letters. That study dealt with a new technique for bonding quantum dots, involving the controlled displacement of ligands – the connector molecules.

Tweaking the structure

Electronically coupling each quantum dot – “connecting the dots,” as the paper terms the process – was considered a significant challenge, and the new research appears to have resolved it. Compared to the previous structure of nanocrystal solids linked by ligands, the new superstructure is vastly superior because it allows ample scope for modification. The nanocrystals couple so strongly that energy bands form, and scientists can manipulate these bands through the structure of the crystals. The researchers say this maneuvering could lead to new artificial materials with tunable electronic structure and properties.

From lab to industry

Whitham concedes that much work remains before these crystals can be produced on an industrial scale. The superlattice the group conceived has several sources of flaws, principally because the nanocrystals making up the lattice are not exactly identical, and these defects limit how finely the electronic structure can be controlled. He further points out that the understanding of the structures formed by connecting quantum dots is not yet complete, and that this knowledge is essential for improving the results.

Whitham expects other scientists to build on his team’s work and improve the superlattice structure by removing its existing flaws. He is confident that further research on the subject could lead to game-changing techniques in communication technology.

What is an Integrated Development Environment?

Those who develop and streamline software use IDEs, or Integrated Development Environments – software applications providing a programming environment for developing and debugging software. The older way of developing software involved unrelated individual tools for coding, editing, compiling, and linking to produce a binary or executable file. An IDE combines these separate tasks into one seamless development environment for the developer.

Developers have a multitude of choices when selecting an IDE, from software companies, vendors, and open source communities. There are free versions and versions priced by the number of licenses needed. In general, IDEs do not follow any standard, and developers select an IDE based on its capabilities, strengths, and weaknesses.

Typically, all IDEs provide an easy and useful interface, with automatic development steps. Developers using IDEs run and debug their programs all from one screen. Most IDEs offer links from a development operating system to a target application platform such as a microprocessor, smartphone or a desktop environment.

Developing executable software for any environment entails creating source files, compiling them to produce machine code, and linking the results together, along with any library files and resources, to produce the executable file.

Programmers write code statements for the specific tasks they expect their program to handle. These form the source files, written in a high-level language such as C, Java, or Python. The language of a source file is evident from its extension; for example, a file written in the C language usually has a name like “myfile.c.”

Compilers within the IDE translate source files into the appropriate machine-level code – object files suitable for the target environment – and the IDE offers a choice of compilers for the target. At the next level, a linker collects all the object files the program requires and links them together, taking in any specified library files while assigning memory and register values to the variables in the object files. Library files support the tasks the program needs from the operating system. The output of the linker is an executable file in low-level code understood by the system’s hardware.

Without an IDE, the developer’s task is far more complicated. He or she must compile each source file separately, and if the program has more than one source file, each must have a distinct name so the compiler can identify it. While invoking the compiler, the developer must specify the directory containing the source files, along with another directory to hold the output files.

Any error in the source files causes compilation to fail, and the compiler usually outputs error messages. Compilation succeeds only when the developer has addressed all errors by editing the individual source files. For linking, the developer has to specify each object file required. Errors may crop up at the linking stage as well, since some errors are detectable only after the entire program is linked.
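
The Python sketch below mimics those manual steps for a hypothetical two-file C program, assuming a Unix-like system with a C compiler (cc) on the path; main.c and utils.c are stand-in file names. An IDE performs essentially this sequence behind a single build command.

    import subprocess

    sources = ["main.c", "utils.c"]            # hypothetical source files
    objects = [src.replace(".c", ".o") for src in sources]

    # Step 1: compile each source file separately into an object file.
    for src, obj in zip(sources, objects):
        subprocess.run(["cc", "-c", src, "-o", obj], check=True)

    # Step 2: link the object files, plus any libraries, into one executable.
    subprocess.run(["cc", *objects, "-o", "myprogram", "-lm"], check=True)

    # Step 3: run the result.
    subprocess.run(["./myprogram"], check=True)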

Anoto – The Digital Pen & Paper Concept

John J. Loud received the first ballpoint pen patent in 1888, describing a writing instrument capable of writing on rough surfaces such as wood and coarse wrapping paper, which common fountain or quill pens could not handle. Unfortunately, Loud’s ballpoint was unsuitable for smooth writing, and his patent lapsed. In 1938, László Bíró, a Hungarian newspaper editor, invented the ballpoint pen we are so familiar with today.

Writing on ordinary paper offers no interface to the computer, and transferring handwritten notes to electronic media has always posed difficulties. However, a development by Anoto of Sweden is set to overcome this handicap of the humble ballpoint pen and paper, turning them into a digital writing interface anyone can use.

The Anoto pen is hardly distinguishable from an ordinary ballpoint pen. Removing and replacing the cap constitutes a simple on-off function. In the Anoto concept, the pen has a digital camera and an advanced image processor inside it. Data from the pen travels wirelessly to the PC via a radio transceiver built into the pen.

The digital pen works with any ordinary paper printed with a special proprietary grid pattern, which merely makes the paper look somewhat off-white to the user. The pen contains real ink that leaves its mark on the paper. A camera built into the digital pen takes snapshots of the grid nearly fifty times each second in infrared light and records the position of the pen with respect to the grid. As the ink is invisible to the infrared camera, the pen keeps no record of the marks on the paper. The built-in memory stores several pages of handwritten text.

The Anoto patterned paper the user writes on is actually a tiny part of one large sheet divided into domains, set aside for specific activities such as a digital notepad, or licensed to companies for particular applications. Anoto can configure each domain for a different functionality, which the pen recognizes from its position on the grid and reacts to accordingly. The entire grid pattern covers nearly 60 million square kilometers, so you can stop worrying about running out of paper.

The Sony Ericsson Chatpen is the world’s first digital pen built on the Anoto concept. It looks like a somewhat chubbier version of a normal ballpoint pen, offering little hint of the cutting-edge technology concealed within. Other Anoto partners include Vodafone, supplying the GPRS network, and Esselte and 3M, supplying the paper products. Anoto is sparing no effort to make this the standard infrastructure for digital paper, entering into alliances with Microsoft, MeadWestvaco, and Logitech. Microsoft is incorporating the functionality of the digital pen into its .NET platform.

The Anoto digital pen and paper concept has an incredible scope of potential applications. As a simple example, you can scribble a quick note on your pad and then send it as a fax or an email simply by ticking the send box printed in a corner of your page. Astonishingly, you will be doing this without access to a computer.

EEG Controlling Music through Raspberry Pi

Imagine controlling Pandora with your brainwaves: whenever a song comes up that you do not enjoy, your mind switches it to the next one. All you need is an EEG headset, the pianobar client for Pandora, and a single board computer such as the Raspberry Pi (RBPi). Once you train the RBPi to tell bad music from good, you are good to go.

You need to train a Bayesian classifier to recognize good music from bad. Basic machine learning techniques alone do not always turn out very reliable here; however, since this is time-series data, you can require several estimates in sequence to reduce false positives.

Using an EEG headset to skip songs you dislike is great, especially when you are moving around or doing something away from your computer. You simply slip on the MindWave Mobile headset from the Brainwave Starter Kit and use the included app to watch your brainwaves change in real time on your mobile. You can monitor your levels of relaxation and attention while watching how your brain responds to your favorite music. The store also offers multiple brain-training games and educational apps, classified by age and personal interest.

Data from the MindWave Mobile headset travels via Bluetooth to communicate wirelessly with the RBPi. Using the free developer tools available online from NeuroSky, you can write your own programs to interact with the headset. The MindWave Mobile exposes the EEG power spectra of alpha, beta, and other waves from your brain, and with NeuroSky’s eSense you can even sense eye blinks and differentiate between attention and meditation states.

When using the EEG headset with the RBPi and a Bluetooth module, you can record data while playing some labeled songs you like and some that do not appeal to you. From the MindWave headset, the RBPi gets data on brainwave bands such as delta, theta, low alpha, high alpha, low beta, high beta, mid gamma, and high gamma. It also gets an approximation of your meditation and attention levels, computed using a Fast Fourier Transform (FFT). Additionally, the headset provides a skin-contact signal level.

It is difficult to make much of raw brainwaves unless you have been trained to read them. Machine learning helps here, as software can learn to differentiate good music from bad. The basic principle is to use Bayesian estimation to construct two multivariate Gaussian models, one representing good music and the other bad.
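
As a rough sketch of this idea – assuming the band powers arrive as eight-element feature vectors, with random numbers standing in for real recordings – the two Gaussian models can be fitted and compared using NumPy and SciPy:

    import numpy as np
    from scipy.stats import multivariate_normal

    def fit_gaussian(X):
        """Fit a multivariate Gaussian to labeled EEG feature vectors."""
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        return multivariate_normal(mean=mean, cov=cov, allow_singular=True)

    # Placeholder training data; real data would come from the headset while
    # labeled good and bad songs were playing.
    rng = np.random.default_rng(0)
    X_good = rng.normal(0.0, 1.0, size=(200, 8))
    X_bad = rng.normal(0.5, 1.0, size=(200, 8))

    good_model = fit_gaussian(X_good)
    bad_model = fit_gaussian(X_bad)

    def dislikes_song(x):
        """Classify one feature vector by comparing the two log-likelihoods."""
        return bad_model.logpdf(x) > good_model.logpdf(x)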

Initially, the algorithm may be accurate only about 70 percent of the time. Although this is rather unreliable on its own, you can use the temporal structure of the data and wait for, say, four consecutive estimates before deciding to skip the song. The result is a way to control the songs played using only your brainwaves.

Pianobar on the RBPi controls the music stream from Pandora. You start pianobar, then start the EEG program with Python. The program warns you of a low signal if the headset is not placed properly on your head. Once a song is playing, it skips it after detecting four bad signals in a row.
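
Here is a small sketch of that skip logic in Python, assuming pianobar’s control FIFO has been enabled in its configuration (conventionally at ~/.config/pianobar/ctl); writing the letter “n” to that FIFO commands the next song.

    import os
    from collections import deque

    CTL_FIFO = os.path.expanduser("~/.config/pianobar/ctl")
    recent = deque(maxlen=4)  # the last four classifier verdicts

    def skip_song():
        # pianobar reads single-letter commands from its control FIFO.
        with open(CTL_FIFO, "w") as ctl:
            ctl.write("n")

    def on_new_estimate(is_bad):
        """Skip only after four consecutive 'bad' estimates, as described."""
        recent.append(is_bad)
        if len(recent) == 4 and all(recent):
            skip_song()
            recent.clear()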

Monitor Your Solar System with a Raspberry Pi

Most photovoltaic systems contain the following parts: solar modules (panels) to provide the electrical power; a battery charger to convert the panel output to the battery voltage; a battery pack to store energy during the day and provide it at night; an inverter to transform the battery voltage to the proper line voltage for operating home appliances; and a line-source selector to switch between solar and grid power.

When the sun shines during the daytime, the solar photovoltaic cells convert the sunlight falling on them into electricity. Although the conversion efficiency may be only about 17%, solar irradiance can easily reach 1 kW/m², and suitably sized panels can produce 5,000 W in these conditions.

Solar panels typically produce a high voltage, 120 V DC being a common figure. The battery charger has to convert this to match the battery voltage, generally 48 V DC. Sunlight charges the batteries continuously during the daytime, so the charger has to keep tracking the maximum power point to optimize the yield of the system. Since it must manage battery charging as well, this device is the most elaborate part of the system.

With this arrangement, the solar panels charge the battery during the daytime and the battery discharges during the night. The battery is sized for one day of consumption plus some extra to tide over an overcast day, and that in turn decides the size of the solar panel. Batteries are heavy, and the lead-acid types generally have a lifespan of about seven years.
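
A quick back-of-the-envelope check of these figures in Python; the daily consumption is an assumed example, since the article does not give one.

    IRRADIANCE = 1000.0  # W per square meter, peak sun (1 kW/m^2)
    EFFICIENCY = 0.17    # panel conversion efficiency from the text
    PEAK_POWER = 5000.0  # W, the target array output from the text

    panel_area = PEAK_POWER / (IRRADIANCE * EFFICIENCY)
    print(f"Panel area: {panel_area:.1f} m^2")  # about 29.4 m^2

    DAILY_USE_WH = 10_000.0  # assumed daily consumption, Wh
    BATTERY_V = 48.0         # battery bus voltage from the text
    RESERVE_DAYS = 2.0       # one day of use plus one overcast day

    capacity_ah = DAILY_USE_WH * RESERVE_DAYS / BATTERY_V
    print(f"Battery: {capacity_ah:.0f} Ah at {BATTERY_V:.0f} V")  # ~417 Ah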

The batteries feed the inverter, which converts the 48 V DC into line voltage – usually 230 V AC or 110 V AC. With a 5 kW continuous rating, an inverter can run almost all household appliances, such as the clothes dryer, washing machine, dishwasher, and electric kitchen oven. When the inverter supplies a large load, the battery current can climb to 200 A.

Multiple sensors measure the power from, and the temperature of, the solar modules, which are divided into arrays. The information comes to a PV panel board via a CAN bus, which unites all the sensors. The PV panel board also acts as a gateway between the CAN bus and a single board computer.

The tiny, versatile Raspberry Pi (RBPi) single board computer is well suited to gathering data from the PV panel board and storing it in a database. The RBPi runs a web server connected to the home Ethernet network.

Another set of sensors monitors the battery voltage, current, and temperature. These are also on the CAN bus, and their information collects on a PV battery monitor board. A Wi-Fi module on the board acts as a gateway between the CAN bus and the Ethernet network.
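
As a sketch of the RBPi’s data-gathering role, the snippet below reads frames from the CAN bus with the python-can package and stores them in a SQLite database. The SocketCAN channel name and the arbitration IDs are assumptions for illustration; the real IDs depend on the sensor boards described above.

    import sqlite3
    import can

    BATTERY_VOLTAGE_ID = 0x101  # hypothetical CAN arbitration IDs
    BATTERY_CURRENT_ID = 0x102

    db = sqlite3.connect("/home/pi/solar.db")
    db.execute("CREATE TABLE IF NOT EXISTS readings"
               " (ts REAL, can_id INTEGER, raw BLOB)")

    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    while True:
        msg = bus.recv(timeout=5.0)
        if msg is None:
            continue  # nothing arrived on the bus this interval
        if msg.arbitration_id in (BATTERY_VOLTAGE_ID, BATTERY_CURRENT_ID):
            db.execute("INSERT INTO readings VALUES (?, ?, ?)",
                       (msg.timestamp, msg.arbitration_id, bytes(msg.data)))
            db.commit()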

Apart from a few activity indicators, the boards and modules of the monitoring subsystem provide no local user interface. The system is meant to be supervised and controlled remotely, through a web user interface or an Android application.

Quadcopters Now Fly Below Water

It may sound bizarre, but it was bound to happen: quadcopters, having conquered flight in air, are now equally capable of flying underwater. Of course, submarine vehicles need a different build to keep water out of the system, so submarine quadcopters will always be sturdier and more expensive than their airborne counterparts. You can witness one such craft, the beautiful DeepFlight Dragon, on Lake Tahoe, California.

Graham Hawkes, a submarine designer, was initially interested in aviation but was disappointed to have been born too late to build airplanes in his backyard. He therefore turned his design expertise toward building the DeepFlight Dragon. At present, the design is still in preliminary testing, and the stabilization software is unfinished. Kip Laws, chief scientist at DeepFlight, is delighted with the progress after the vehicle’s first test.

With its four vertical thrusters, the DeepFlight Dragon looks more like a two-seater Formula One car without wheels; sitting on a helipad, you could easily pass it off as a flying car. Hawkes has applied aircraft technology to build this drone of the deep – a simple, stable vehicle that moves around freely, hovers when the driver wants it to, and is a piece of cake to drive.

Hawkes first stumbled on the idea for the Dragon when he found people trying to build a full-sized quadcopter capable of carrying a person and flying like a drone. His calculations told him a drone would never have enough energy and endurance to carry the weight of a person plus batteries in flight. Taken underwater, however, the buoyant force of water helps carry the weight – water is roughly 850 times denser than air.

That made the DeepFlight Dragon a two-person underwater drone. One of the biggest advantages of flying underwater is the buoyancy provided by water. The DeepFlight Dragon is positively buoyant, meaning it naturally floats; to submerge, it only has to pull itself downward with the equivalent of five percent of its weight. This also gives the DeepFlight Dragon all-day endurance from only a 15 kWh battery pack.
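
The arithmetic behind these claims is easy to check. In the Python sketch below, the vehicle mass and dive duration are assumed examples, since the article states neither.

    MASS_KG = 1800.0  # assumed vehicle mass, kg
    G = 9.81          # gravitational acceleration, m/s^2

    # Five percent of the vehicle's weight, the downward pull needed to dive.
    thrust_n = 0.05 * MASS_KG * G
    print(f"Thrust to submerge: {thrust_n:.0f} N")  # about 880 N

    # Spreading the 15 kWh pack over an assumed 8-hour day of diving.
    BATTERY_KWH = 15.0
    HOURS = 8.0
    print(f"Average power budget: {BATTERY_KWH / HOURS * 1000:.0f} W")  # ~1900 W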

The rear cockpit of the drone has only two controls: a lever on the left and a joystick on the right. The lever engages upward or downward vertical thrust, while the joystick moves the sub forward or backward and turns it left or right.

Although the controls look simple, they are somewhat different from those of an airborne copter, which simply tilts forward to go forward. Because the Dragon has to pull downward to get itself underwater, tilting the joystick forward would actually make it move backward. Additionally, when the drone moves forward, its rear end rises, hindering vision.

All this makes a stabilization system necessary to keep the sub on a level plane, with an extra set of thrusters mounted under the rear wing. Oriented in an X/Y arrangement, these extra thrusters move the sub and allow it to turn, leaving the main four thrusters to control the sub’s depth and keep it level.