Category Archives: Customer Projects

Pairing your Raspberry Pi with an SPI OLED

OLEDs are interesting because they produce crisper, brighter displays than regular Liquid Crystal Displays can. Being made of thin films of organic molecules, OLEDs use less power than conventional Light Emitting Diodes and LCDs. Driving an OLED from your Raspberry Pi (RBPi) can be a great project for learning about these nifty solid-state displays and the processes that drive them.

Adafruit offers a lovely little monochrome SPI OLED module with a resolution of 128×32, driven by an SSD1306 driver chip. The website has excellent tutorials, libraries and guides if you are driving this display from an Arduino. Although other OLED modules support an I2C interface, this module supports only SPI; the accompanying library provides pixel and text drawing functions, but no geometric drawing functions.

Although very small, only about 1 inch diagonal, the display is easy to read because of its very high contrast. It has 128×32 individual white pixels, and the controller chip can turn each one of them on or off individually. No backlight is needed, as the OLED produces its own light, which enhances the contrast and reduces power consumption as well. The SSD1306 driver chip communicates over the SPI bus and requires four to five pins from the RBPi.

The OLED and its driver require a 3.3V power supply and 3.3V logic levels for communication. If you are using it with the 5V supply of the RBPi, a 3.3V regulator should be used to power the display. On average, the display draws about 20mA from the 3.3V supply, although this depends on how much of the display is lit. OLEDs usually require a high-voltage drive for good contrast, but since the necessary switched-capacitor charge pump is already built into the module, this is one of the easiest OLED displays to interface.

Apart from the standard SPI signals, the module has a D/C pin, which selects between Data and Command. When you pull the D/C pin HIGH (connect it to VDD), the bytes arriving over SPI are treated as data. When you pull it LOW (connect it to VSS), the incoming bytes are treated as commands and are transferred to the command register.

What this essentially means is that an opcode and the argument bytes that follow it are treated as a single command, even when the command spans multiple bytes. For example, the command for setting the contrast consists of a one-byte opcode followed by a one-byte contrast value, so the D/C pin has to stay LOW for the entire sequence when sending this command. Similarly, when sending image data, the D/C pin has to be pulled HIGH, which moves the bytes into the image memory buffer.
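As a rough illustration of how the D/C line is toggled around each SPI transfer, the minimal Python sketch below uses the spidev and RPi.GPIO libraries; the GPIO pin chosen for D/C and the SPI speed are assumptions, not the module's required wiring, and only the contrast opcode (0x81) comes from the SSD1306 datasheet.

# Minimal sketch: toggling D/C to send a command versus display data to the SSD1306.
# The D/C pin assignment (GPIO 23) and SPI speed are assumptions for illustration only.
import spidev
import RPi.GPIO as GPIO

DC_PIN = 23                      # assumed GPIO used for the Data/Command line

GPIO.setmode(GPIO.BCM)
GPIO.setup(DC_PIN, GPIO.OUT)

spi = spidev.SpiDev()
spi.open(0, 0)                   # SPI bus 0, chip-select 0
spi.max_speed_hz = 8000000

def send_command(*bytes_):
    """Pull D/C LOW so the whole opcode + argument sequence is read as a command."""
    GPIO.output(DC_PIN, GPIO.LOW)
    spi.xfer2(list(bytes_))

def send_data(buffer_):
    """Pull D/C HIGH so the bytes land in the display's image memory."""
    GPIO.output(DC_PIN, GPIO.HIGH)
    spi.xfer2(list(buffer_))

send_command(0x81, 0x7F)                 # set contrast: one-byte opcode, one-byte value
send_data([0xFF] * (128 * 32 // 8))      # light every pixel of the 128x32 buffer

The same pattern scales to the full initialization sequence: every multi-byte command goes out with D/C held low, and the framebuffer is streamed with D/C held high.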

The project is simplified when you use the following software: Occidentalis 0.2 for the boot image, WiringPi-Python for accessing the GPIO pins and py-spidev for the Python bindings to the spidev Linux kernel driver. For further details, refer here.

Make a time clock with the Raspberry Pi

People working on projects are usually required to keep their time records up to date. However, those who are more engrossed in the technicalities of their work often find that they slip on their timekeeping chores. If you fall into that category, let your Raspberry Pi (RBPi) help you out. Along with the tiny single-board computer, you will need an RGB LED, an OLED character display and a rotary encoder. Your time will be logged directly into a Google Docs spreadsheet.

The purpose-built time clock uses several off-the-shelf components. For the RBPi, use the Occidentalis 0.2 operating system. The 128×32 OLED display uses an SPI interface, and this OS has SPI support bundled in.

The time clock is very simple to operate. When you start it up, it pulls in a list of jobs from a specified Google Docs spreadsheet. To scroll through the list of jobs, simply rotate the knob on the rotary encoder. When you want to record time for a specific project, locate it by rotating the knob and click the knob. That starts the clock ticking on the spreadsheet. To log off, simply click once again.

Use any case suitable for the RBPi or make your own. Since the HDMI, audio and video ports will not be used, it is acceptable if the case does not provide access to them. For those who like to do things professionally, designing the carriage for the RBPi in OpenSCAD could be great fun, and printing it on a Makerbot Replicator will give the required professional touch. Since the RBPi board does not have any mounting holes, you may have to put in edge clips to hold it; posts at the corners of the board will make for a snug fit.

Although the RBPi documentation labels some ground pins as DNC, or Do Not Connect, using them as extra grounds can be very convenient; just make sure you are using the proper ground pins.

Using GPIO pin 5 of the RBPi for the push button has a dual advantage. It keeps pin 5 available for a Safe Mode boot, which is handy, for example, after a recent firmware release: simply hold the knob down while booting and your RBPi comes up in safe mode. Additionally, there is no chance of the pin being accidentally held low at boot, as could happen if it were used for one of the inputs from the quadrature encoder.

Use the software from here. The PWM library drives the RGB LED, and the RgbLed class animates its color transition loops. The rotary encoder is handled by the RotaryEncoder.Worker class, which polls the encoder GPIOs and keeps the application code as simple as possible.
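The RotaryEncoder.Worker class itself is not reproduced here, but the basic idea of polling a quadrature encoder can be sketched as below. The GPIO pin numbers, the poll interval and the debounce delay are assumptions chosen only for illustration; the button sits on GPIO 5 to match the Safe Mode trick mentioned above.

# Simplified quadrature-encoder polling loop, illustrating the idea behind the
# project's RotaryEncoder.Worker class. Pin numbers and timings are assumptions.
import time
import RPi.GPIO as GPIO

A_PIN, B_PIN, BUTTON_PIN = 17, 27, 5        # assumed encoder A/B pins; button on GPIO 5

GPIO.setmode(GPIO.BCM)
GPIO.setup([A_PIN, B_PIN, BUTTON_PIN], GPIO.IN, pull_up_down=GPIO.PUD_UP)

# Valid Gray-code transitions: (previous AB, current AB) -> step of -1, 0 or +1
TRANSITIONS = {(0b00, 0b01): 1, (0b01, 0b11): 1, (0b11, 0b10): 1, (0b10, 0b00): 1,
               (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1}

position = 0
prev = (GPIO.input(A_PIN) << 1) | GPIO.input(B_PIN)

while True:
    state = (GPIO.input(A_PIN) << 1) | GPIO.input(B_PIN)
    position += TRANSITIONS.get((prev, state), 0)
    prev = state
    if GPIO.input(BUTTON_PIN) == GPIO.LOW:   # knob pressed: start or stop logging
        print("click at position", position)
        time.sleep(0.3)                       # crude debounce
    time.sleep(0.001)                         # poll roughly every millisecond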

Since the application code uses only around 20% of the CPU, the temperature rise stays within a safe range. The color coding gives an idea of the state of the machine: when no time is being logged, the LED shows purple; while time is being logged against a task, it pulses slowly in blue; and when it flashes rapidly, the spreadsheet on Google Docs is being updated.

A live translation project using Raspberry Pi

Up until now, wearable computing has been confined to some odd, bulky wristwatches. Most people are probably aware of the Augmented Reality Glasses, commonly referred to as Google Glass, that Google has been working on for quite a while. Google Glass is still in limited release and not available to everyone, so in the meantime you can use your Raspberry Pi (RBPi) to fill the gap. The project has everything you could want: it is small, light and frugal with power, and a cheap lithium-ion battery keeps it running for hours.

The project uses two RBPi single-board computers to get as close as possible to the universal translator of Star Trek fame. The displays are off-the-shelf digital glasses, Vuzix 1200 Star wearable displays, and the other standard equipment is a Jawbone Bluetooth microphone. When fully functional, the system uses Microsoft's publicly accessible API, or Application Programming Interface, to perform voice recognition and translation on the fly.

For example, Will Powell, the originator of the project, uses the glasses to have a conversation with Elizabeth, who speaks Spanish. Although Will has never learned Spanish, he is able to converse meaningfully, reading her side of the conversation as subtitles and returning his answers in English. Powell's blog shows a video of the system in action along with the details of the build.

This Project Glass-inspired translating unit works in real time and displays the conversation as subtitles on your glasses. Both RBPi boards run the Debian Squeeze operating system. Each user wears a pair of Vuzix 1200 Star glasses connected to the S-video connector of their RBPi. For a clean, noise-cancelled audio feed, Will uses a Jawbone Bluetooth microphone connected to either a smartphone or a tablet.

The Bluetooth microphone picks up the speaker's voice and streams it across the network to Microsoft's translation API service. For regularly used statements, a caching layer improves performance. The subtitles face their longest delay while passing through this API service. The RBPi picks up the translated text the server passes back, and this is then shown on the glasses' display.
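The caching idea is straightforward: a phrase that has already been translated never needs another round trip to the API. A minimal sketch of such a layer is shown below; translate_remote() is a hypothetical placeholder for the network call to the translation service, not code from Powell's project.

# Minimal sketch of a caching layer for repeated phrases. translate_remote() is a
# hypothetical placeholder for the network call to the translation API.
from functools import lru_cache

def translate_remote(text, source, target):
    # Placeholder: wire this to whichever translation service you use.
    raise NotImplementedError("connect to the translation API here")

@lru_cache(maxsize=1024)
def translate(text, source="es", target="en"):
    # Repeated statements are served from memory instead of a slow API round trip.
    return translate_remote(text.strip().lower(), source, target)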

Once a person has spoken, it takes a few seconds before the translation pops up on the other person's display. Moreover, the translations are not always fluid or coherent. However, that has nothing to do with the technology used here; rather, it comes down to the inaccuracies of the translation API. It is really amazing that such a relatively simple setup can offer speech recognition and translation at very nearly real time.

At this rate, Augmented Reality Glasses will become popular very soon, and Google has suggested it will make its Glass project commercial before long. Mobile communication is standing on the brink of the revolution that Google's Glass is expected to bring about. However, Powell's work shows there is still a lot of room to experiment and explore different functions and applications in this field.

The project also shows that very soon it may not matter what language you speak; anyone will be able to understand you, provided everyone is wearing the right glasses.

REX – a brain for robots

Not to be confused with Tyrannosaurus rex, the king of the dinosaurs, REX is a complete development platform for sophisticated robotic applications. While most robot designers use the Arduino platform as a base for their robots, Mike Lewis and Kartik Tiwari were not impressed with the available hardware. Their design, REX, is targeted specifically at robots: it poses no wiring hassles, has built-in battery inputs and boots directly into a robot programming environment.

The duo felt that people who design robots needed a newer, more advanced platform. Problems start arising when a single microcontroller has to handle multiple sensors, motors and other electronics, and the situation worsens as you add increasingly sophisticated tasks such as speech recognition and computer vision. The Arduino is not, by default, a multitasking platform; it is intended to run a single task at a time, whereas robotics essentially requires multiple tasks to be running at any given time.

Therefore, REX was built around a 32-bit ARM Cortex-A8 processor core running at 1 GHz, an 800 MHz DSP core and 512 MB of RAM. The board runs the Alphalem Operating System and boasts a host of features: built-in drivers for sensors and similar devices, a task manager for launching multiple processes, and support for several programming languages including C, C++ and Python. The Arduino-style programming environment makes it easy to develop your own robot applications.

REX is a low-cost robot development platform that targets advanced robotics. Although a simple robot can be handled fairly easily by an Arduino, REX is geared towards the extra functionality required when you need voice recognition and computer vision. Being simple and low-cost, the REX platform makes more advanced robotics projects accessible to the average hobby roboticist.

At the core of REX is the ADE, or Alphalem Development Environment, a set of programs written in C++ that form an Application Programming Interface for communicating with devices connected to REX through the I2C expansion ports. Apart from the built-in drivers that the Alphalem team selected for driving sensors and actuators, the ADE also has a process management system for running multiple programs in parallel for efficient robot control. This, the team claims, is one of the most useful features that REX offers to robot designers.
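The ADE itself is not reproduced here, but since the board supports Python and exposes I2C expansion ports, talking to an attached sensor could look something like the sketch below, which uses the standard Linux smbus interface. The bus number, device address and register are assumptions, not ADE calls.

# Illustrative I2C read over an expansion port using the standard Linux smbus
# interface; the bus number, address and register are placeholder assumptions.
import smbus

BUS = 1             # assumed I2C bus exposed on the expansion header
ADDRESS = 0x48      # assumed 7-bit address of an attached sensor
TEMP_REGISTER = 0x00

bus = smbus.SMBus(BUS)
raw = bus.read_word_data(ADDRESS, TEMP_REGISTER)   # read two bytes from the sensor
print("raw sensor reading:", raw)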

Physically, REX is small and compact, about the size of a standard pack of playing cards. This palm-sized single-board computer is priced at $99 for its basic model, which includes the DSP, camera and microphone inputs and the preloaded OS. You can use REX to control small, simple robots easily.

However, this is not to say that REX cannot handle complicated work. In fact, REX is extremely powerful, able to handle a huge range of sensors as well as demanding tasks such as speech recognition and machine vision, which allows it to be used for some very complicated robotic activities.

Incidentally, the project was earlier named AlphaOne, to commemorate Apple's first PC. However, Mike, as the product engineer, proposed changing the name to REX since he had a Jurassic Park mug on his desk.

FishPi: How Raspberry Pi controls an autonomous ocean explorer

FishPi is a project to develop the prototype of a sun-powered, autonomous, ocean-going surface vessel controlled by a Raspberry Pi (RBPi). The project is building a small boat, propelled by solar energy, that will traverse the Atlantic Ocean, taking pictures and gathering data along the way.

The goal of the FishPi project is simple: to develop FishPis ranging from vessels that run on batteries for a few hours to fully featured solar-powered vessels capable of sustaining months at sea. The vessels will be MUSVs, or Marine Unmanned Surface Vessels, ultimately capable of crossing the Atlantic unaided.

An RBPi unit will provide all the command and control functions of the FishPi vessels. The vessel will have an onboard solar panel, and the RBPi will handle data logging, navigation, power management and control of the other on-board devices. The solar panel will charge a lithium-ion battery pack, which will drive a ducted propeller system. The Amateur Radio Satellite Network will be used to transmit images from the FishPi to the shore, with satellites providing the ship-to-shore communication link.

During its journey, FishPi will use its environmental monitoring and data-gathering capabilities to measure the temperature of the air and sea, salinity and pH, barometric pressure, light levels and more. It will transmit some of the data along with images relayed in real-time.

For this, the RBPi is attached to a 16-channel PWM driver, a temperature sensor, a compass, a GPS, a USB webcam, a USB Wi-Fi dongle and a RockBLOCK satellite communicator. All of these, except the compass and the webcam, currently sit inside a box on the FishPi Proof-of-Concept Vehicle, or POCV.

Initially, the base station was to be connected to the POCV with a 32-core cable. However, this proved too complicated and caused a lot of interference, so it was abandoned. Presently, the base station contains another RBPi connected to a USB hub and a 4-port Wi-Fi router. The Wi-Fi link allows real-time remote control using xrdp and the FreeRDP client, and it also allows live video to be streamed to the world over the internet.

The electronic speed controller, webcam, rudder, temperature sensor, GPS and compass are integrated into the C&CS, or Command & Control System, of the POCV. Currently, the code provides manual control only, and the POCV can move forward, backward, left and right. With the webcam as a visual guide, the POCV can be driven remotely, but so far this has been tested only indoors.
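The FishPi C&CS code itself is linked from the project site; as a rough idea of what those manual drive commands involve, the sketch below maps throttle and steering values onto servo-style pulses for the ESC and rudder channels of a 16-channel PWM driver. The set_pulse() helper is a hypothetical stand-in for whatever call your PWM driver library provides, and the channel numbers and pulse widths are assumptions, not FishPi's settings.

# Rough illustration of manual drive commands mapped to PWM outputs. set_pulse() is a
# hypothetical stand-in for the PWM driver's output call; all values are assumptions.
ESC_CHANNEL, RUDDER_CHANNEL = 0, 1

def set_pulse(channel, microseconds):
    # Placeholder: replace with the output call of your 16-channel PWM driver library.
    print("channel %d -> %d us" % (channel, microseconds))

def drive(throttle, steering):
    """throttle and steering each range from -1.0 to +1.0."""
    # Typical RC convention: 1500 us is neutral, +/-400 us is full deflection.
    set_pulse(ESC_CHANNEL, 1500 + int(400 * throttle))
    set_pulse(RUDDER_CHANNEL, 1500 + int(400 * steering))

drive(0.5, 0.0)     # half throttle forward, rudder centred
drive(-0.3, -1.0)   # gentle reverse, full left rudder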

In the future, this control will be automated to the point of giving the POCV a command and leaving it to navigate by itself. GPX files are used for tracking, routing and waypoints, while GeoTIFF files are used for the maps. Telemetry is an important function for any ocean-going vehicle, and the POCV will communicate both ways via the RockBLOCK satellite link.

There is always a chance that the vehicle can capsize in rough seas, and therefore, next in line is a self-righting mechanism.

Wear Your Raspberry Pi And Listen To Software Defined Radio

Carrying a radio along is nothing new, since small, portable radio sets are readily available. However, there is a different charm in carrying a single-board computer that is playing software-defined radio, and this is exactly what Miller Jacobson did with his Raspberry Pi (RBPi). There are two aspects to this project: the hardware and the software. While the software is simple enough, the hardware is somewhat involved.

Carrying your RBPi around means freeing it from the wall-mounted power supply, so power has to come from batteries. Jacobson used lithium-ion cells placed in an imported battery box. The enclosure holds four cells, can recharge USB devices and cell phones, and incorporates charging, voltage regulation and protection. He salvaged the cells from a dead laptop battery pack, since these generally contain quite a few good cells alongside the one or two bad ones that render the whole pack useless.

To save space, Jacobson did not use connectors between the RBPi and the battery box. Instead, he soldered wires directly to the DC power jack on the battery box. The other ends of the wires he soldered to the +5V and GND pins on the RBPi's GPIO header. He took the video output from the RCA port and used right-angled connectors wherever he could to keep the space used to a minimum.

Jacobson used a second enclosure, nearly the same size as the battery box, and fitted the two boxes one on top of the other, screwing them together. Inside the second enclosure he drilled holes to mount the RBPi board, and he cut holes in the sidewalls to route the projecting wires and make the USB and HDMI ports accessible. Ventilation holes in the enclosure provided cooling.

For the display, Jacobson used a Nyxio Venture head-mounted display, a cheap MMV or Mobile Media Viewer with a composite input. It features a slim profile, has 2 GB of flash memory for storage, and simulates a huge virtual 62-inch screen in 16:9 wide format on its LCD panels.

For an input device, Jacobson used a generic wireless mini keyboard and trackpad. These are generally used for giving presentations and connect via a wireless interface. The tiny wireless receiver connects to the USB port of the RBPi.

Jacobson uses a Realtek RTL2832U-based TV tuner as the front end for the software-defined radio, or SDR. The tuner covers a huge chunk of spectrum in the VHF and UHF ranges, and the processor on the RBPi board is quite capable of handling the signals from the tuner.

GNU Radio, GNU Radio Companion and multimon are among the software used to receive and decode APRS packets from the tuner. Jacobson uses Python for some of the other components, such as filtering the signal from the noise, extracting the raw audio and removing the DC offset.
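Jacobson's own Python is not shown in the write-up; as a stripped-down example of one of the steps he mentions, removing a DC offset from a block of raw audio samples can be done by subtracting the block's mean, as in the sketch below (NumPy assumed).

# Minimal example of removing a DC offset from a block of raw audio samples.
# This only illustrates the step mentioned above, not Jacobson's actual code.
import numpy as np

def remove_dc_offset(samples):
    samples = np.asarray(samples, dtype=np.float64)
    return samples - samples.mean()           # subtract the average level of the block

block = np.array([130, 131, 127, 129, 132], dtype=np.float64)
print(remove_dc_offset(block))                # values are now centred around zero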

Home Protection with Raspberry Pi

Planning to go on vacation, but worried about who will look after your home? Worry not, for the mighty Raspberry Pi (RBPi) is here. Not only will the RBPi look after your entire house, it will email you about what is happening at home and let you see it on your mobile or on a PC. How cool is that?

Most alarm systems incorporate three primary sensors: a temperature sensor to detect the rise in temperature in case of a fire, an intrusion sensor to detect whether someone has gained access to the inside of the house, and a motion sensor. Apart from these primary sensors, you may add smoke detectors and cameras as needed.

The software consists mainly of a database that stores all events with a time stamp, and a dashboard that displays the status of the sensors, configures them and programs the alarm system. The Raspberry Pi also acts as a web server to send email alerts and to serve the dashboard to a remote computer or smartphone.

Depending on the size of the home, its vulnerability and the number of sensors being used, you could divide the area into a number of zones. This makes it easier to arm the sensors belonging to a specific zone. For example, a door and a few windows of your home may face a busy street, and you may decide not to arm the sensors in that zone during the daytime. As night falls and the street empties, you may want the sensors in that zone armed for the night.

Dividing the home into zones also has the advantage of knowing in which area or areas the alarm has been triggered. The camera for that zone can then be switched on to assess the situation visually.

Since the RBPi runs Linux, which multitasks very well, the software runs in the background. It is programmed to wake up about once every minute and check each of the armed sensors in every zone. If there is no activity, it simply updates the database logs and the dashboard and goes back to sleep.

If a sensor trips or registers activity, the Raspberry Pi records it in its logs and sends you an email with the details. The dashboard then indicates the alarm condition in the zone where it originated, and you can turn the alarm off after checking it out.
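A minimal sketch of that once-a-minute polling loop is shown below. The zone layout and the read_sensor(), log_event() and send_email() helpers are hypothetical placeholders standing in for the real GPIO reads, database, dashboard and mail code.

# Minimal sketch of the once-a-minute polling loop described above. The zone layout
# and the helper functions are hypothetical placeholders, not the project's code.
import time

ZONES = {
    "street_side": ["front_door", "front_window"],
    "back_yard":   ["back_door", "motion_pir"],
}
armed = {"street_side": False, "back_yard": True}

def read_sensor(name):
    # Replace with a real GPIO or camera check; False means "no activity".
    return False

def log_event(message):
    print(message)                       # stand-in for the database/dashboard update

def send_email(message):
    pass                                 # stand-in for the alert email

while True:
    for zone, sensors in ZONES.items():
        if not armed[zone]:
            continue
        for sensor in sensors:
            if read_sensor(sensor):
                log_event("ALARM in %s: %s tripped" % (zone, sensor))
                send_email("Alarm triggered by %s in zone %s" % (sensor, zone))
            else:
                log_event("%s in %s: OK" % (sensor, zone))
    time.sleep(60)                       # sleep until the next minute's check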

You can log in to the server from a remote PC using a username and password. The web browser displays the dashboard, and a green button tells you that the RBPi is running your home alarm software and transmitting information from the sensors. If the alarm system goes down for some reason, or there is a connectivity problem between the Raspberry Pi and your computer, this green button turns red within a minute. From the dashboard you can test, arm or disarm the sensors in each zone. For details of the software and setup, refer here.

Gardening with the Raspberry Pi

Many of you may be garden enthusiasts and would welcome the idea of automating some of the maintenance requirements of your plants. For example, keeping tabs on the quantity of water the plants need, based on the moisture in the soil, the available sunlight and the ambient temperature, might be easy for an experienced gardener. However, gardeners who have only just started find it a difficult equation to balance. Even an experienced gardener may have to depend on a novice when taking leave from the garden for a few days.

With a Raspberry Pi (RBPi), most of the above gardening issues can be fixed. The Raspberry Pi can take care of the garden's watering requirements based on a few environmental measurements. This can bring relief to an experienced gardener forced to leave his beloved plants for a few days, and the novice gardener can stop worrying about whether he is starving his plants or drowning them. This is how Devon approached the problem with his Raspberry Pi.

Avid gardening enthusiasts know that too much water can be as bad for a plant as too little. For the Raspberry Pi to determine how much water should be delivered, it first needs to know how much moisture is in the soil. That, combined with the temperature and the amount of available light, lets the Raspberry Pi control the pump that delivers water to the garden.

Since the Raspberry Pi cannot measure the analog signals that most sensors put out, an analog-to-digital converter is necessary. The MCP3008 ADC is a good choice, since it allows eight sensors to be read at a time. For sensing the amount of sunlight available, a Light Dependent Resistor, or LDR, is useful. To measure the ambient temperature with reasonable precision, a sensor such as the TMP35 or TMP37 will do. For sensing moisture in the soil, a homemade probe made from a few long metal nails will be fine.

All the sensors need a DC supply voltage and a return ground connection, with the signal from each sensor going to one of the channels of the ADC. The 3.3V DC from the Raspberry Pi board is good enough for the sensors. While one temperature sensor and one LDR are enough, you may need more than one soil moisture probe, depending on how big your garden is.

The moisture sensors check the resistance of the soil between a pair of probes inserted into the ground; as the soil dries out, the resistance between the probes increases. With several such probes placed around the garden, the Raspberry Pi gets a fairly good idea of how dry the soil is.
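A rough sketch of how one such probe might be read through the MCP3008 over SPI is shown below, using the spidev library and the MCP3008's standard single-ended read sequence. The channel number, the dryness threshold and the assumption that a drier soil gives a higher reading (which depends on how the probe's voltage divider is wired) are all illustrative choices, not Devon's settings.

# Rough sketch of reading a soil probe through the MCP3008 over SPI. The channel
# number, threshold and divider orientation are assumptions chosen for illustration.
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)                     # SPI bus 0, chip-select 0
spi.max_speed_hz = 1350000

def read_adc(channel):
    """Return the 10-bit reading (0-1023) from one MCP3008 channel."""
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((reply[1] & 3) << 8) | reply[2]

SOIL_CHANNEL = 0
DRY_THRESHOLD = 700                # assumes drier soil -> higher reading in this wiring

reading = read_adc(SOIL_CHANNEL)
if reading > DRY_THRESHOLD:
    print("soil is dry, switch the pump on")
else:
    print("soil is moist enough")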

The final and most important part of the entire system is the pump that delivers water to the garden. Using a tank and a submersible pump eliminates a whole bunch of issues that many gardeners face. You can experiment with drip-irrigation also if you like the idea. Devon has kindly shared the software and the code used, and you can download them here.

How to Paint with Light and Raspberry Pi

You can paint with light by using a camera set to a long exposure while a Raspberry Pi (RBPi) generates moving images. Light painting is not new; traditionally, images were hand-painted with a penlight. With the availability of cheap microcontrollers and addressable RGB LEDs, light painting has taken on a different meaning.

Since the images are large, producing them requires plenty of memory, something the RBPi has in ample quantity. Adafruit sells digital addressable RGB LED strips, and connecting them to the Raspberry Pi is quite simple, except that the Raspberry Pi cannot supply the high currents the LED strips demand; an external power supply is therefore required for the strips.

Since the project will move about a lot, strong, reliable connectors are needed between the Raspberry Pi and the LED strips. Connections from the RBPi's GPIO are best taken via a 26-pin IDC cable and header. The LED strips are connected using two JST 4-pin plug and receptacle cables, with the wires of the cables soldered together in the proper sequence.

The RGB LED strip needs to be updated at very high speed, which is handled over the Serial Peripheral Interface, or SPI, bus. However, the GPIO libraries that the RBPi uses with the “Wheezy” OS distribution are not fast enough, so the Raspberry Pi needs a change of OS to “Occidentalis”, the Adafruit Raspberry Pi education Linux distro, which includes SPI support.

Occidentalis also includes sshd, which makes it easier to transfer images from a PC to the Raspberry Pi. sshd is the “secure shell” daemon, a secure counterpart to telnet: a user running the ssh client on a local computer can connect to a remote computer running the sshd server and log on to it. Unlike telnet, the communications are encrypted against network sniffers.

A Python imaging module converts the image transferred to the Raspberry Pi into an RGB format suitable for the LED strip, which is driven through the SPI device. Instead of repeatedly processing each row or column of the image on the fly, the entire image is preprocessed into the hardware-specific format of the LED strip and stored in the RBPi's memory as arrays. Refreshing the display is then only a matter of writing these arrays to the SPI port. More details of the hardware and software are available here.
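A simplified version of that preprocessing idea is sketched below: every column of the image is converted once into a ready-to-send byte array, and refreshing then amounts to streaming those arrays to the SPI device. The byte ordering shown (7-bit GRB with the high bit set, followed by a latch byte) matches LPD8806-style strips, and the strip length and file paths are assumptions rather than the project's exact code.

# Simplified illustration of preprocessing an image into per-column byte arrays and
# streaming them to the SPI device. Strip length, byte ordering and paths are assumptions.
from PIL import Image

STRIP_LENGTH = 32                                   # assumed number of LEDs on the hoop

img = Image.open("painting.png").convert("RGB")
img = img.resize((img.size[0], STRIP_LENGTH))       # one image row per LED

columns = []
for x in range(img.size[0]):
    packet = bytearray()
    for y in range(STRIP_LENGTH):
        r, g, b = img.getpixel((x, y))
        # LPD8806-style strips expect 7-bit GRB values with the top bit always set.
        packet += bytes([(g >> 1) | 0x80, (r >> 1) | 0x80, (b >> 1) | 0x80])
    packet += bytes([0x00])                         # latch byte to close the frame
    columns.append(bytes(packet))

with open("/dev/spidev0.0", "wb") as spi:
    for packet in columns:                          # refreshing is now just a write
        spi.write(packet)
        spi.flush()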

The motion rig consists of a large PVC pipe bent into a ring, like a hula hoop. The LED strip is mounted on this and retained with zip ties. The ring assembly, batteries and electronics are attached to the rear of a bicycle, which provides the motion. The entire arrangement, including the bicycle, must be painted matt black to be invisible in the photos.

For the power supply, 12V batteries have to be used, and a DC-to-DC converter is required for powering the LED strip and the electronics, all of which operate at 5V. The result of all this labor is limited only by your imagination.

What is IFTTT? How can you use it?

Kevin Andersson has a lot to look forward to when he wakes up every morning. As soon as he puts his feet on the ground, all the lights in his home turn on. When he steps onto the weighing scale, the coffee maker activates itself to prepare a mug of steaming coffee.

Kevin has made all these events possible by installing a motion sensor in his bedroom and connecting it to the lighting arrangement with the help of an internet service known as IFTTT, which is the acronym for “If This, Then That”.

Since Kevin is a programmer by profession, you might naturally assume that he put his programming skills to use to bring about this high level of automation in his home. Strangely enough, he did it all without writing any code. He simply invested in some hardware, linked it up and used the IFTTT web service so that the gadgets could communicate with each other.

A Sneak Peek into Internet Services

Most of the services IFTTT makes possible are for use on the Internet only. For instance, you can automatically save photos you are tagged in on Facebook to your Dropbox folder, which is very handy indeed. Used with Gmail, IFTTT becomes a seriously powerful tool.

You can do other cool things, like uploading only certain photos to Flickr. And although Siri works only with Apple's default apps, IFTTT lets you connect it to the other apps you use.

Connecting Real World Devices

You may not find these online applications amazing enough, since, like most people, you probably take the Internet for granted. However, the fact that IFTTT can hook up your everyday home devices and make them perform remarkable tasks, like preparing your coffee without your needing to step into the kitchen, is amazing indeed!

The service can link many of your home gadgets, such as Belkin WeMo motion sensors, the Philips home lighting system and a variety of other equipment to suit your specific needs.

What exactly is IFTTT?

If This, Then That implies a cause-and-effect relationship: if a trigger condition occurs, a certain action results. Say, for example, that when a certain stock rises above a specific price, you receive an alert. Here, the stock rising above that value is the trigger, or cause, and the alert sent to you is the action, or result.
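In programming terms, every IFTTT recipe boils down to a trigger paired with an action. The toy sketch below illustrates that pattern only; the stock trigger, the fake price feed and the alert action are made up for the example and have nothing to do with IFTTT's real channels.

# Toy illustration of the "if this, then that" recipe pattern. The trigger, the fake
# price feed and the action are invented for this example, not real IFTTT channels.
import time

def stock_above(symbol, limit, price_lookup):
    return lambda: price_lookup(symbol) > limit        # "this": the trigger

def send_alert(message):
    return lambda: print("ALERT:", message)             # "that": the action

def run_recipe(trigger, action, poll_seconds=60):
    while True:                                          # keeps checking forever
        if trigger():
            action()
        time.sleep(poll_seconds)

# Usage: alert whenever the (fake) price feed reports the stock above 100.
fake_prices = {"ACME": 103.2}
run_recipe(stock_above("ACME", 100, fake_prices.get),
           send_alert("ACME crossed 100"), poll_seconds=5)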

Linden Tibbets and his brother Alexander, the brains behind IFTTT conceived of the project in terms of how people react to ordinary objects in the home and the office like doorknobs and cell phones. Often, people use these objects in ways the designer did not intend. For instance, you may use your phone as a paperweight because you can judge from its looks that it is heavier than a sheet of paper. Tibbets and his brother have extended this idea into the digital world so that IFTTT allows individuals to use Internet applications in modes the developers of the packages did not expect.