
CORATAM with the Raspberry Pi

The ubiquitous single board computer, the Raspberry Pi (RBPi), is a perfectly suitable candidate for CORATAM, or Control of Aquatic Drones for Maritime Tasks. Sitting within each drone, an RBPi becomes part of a swarm of robotic systems. Portugal is using this novel approach to explore and exploit its maritime opportunities, as the sea is one of the country’s main resources. Although land-based and air-based swarms of robots have been used extensively to study the aquatic environment for the proposed expansion of Portugal’s continental shelf, swarms in aquatic environments are a different breed altogether.

Tasks in aquatic environments are usually expensive to conduct, because of the special operational requirements of manned vehicles and support crews. Therefore, Portugal has taken an alternative approach, using collectives of relatively simple and inexpensive aquatic robots. As each robot is easily replaceable, these swarms have a high potential of applicability for essential tasks such as sea border patrolling, bridge inspection, sea life localization, environmental monitoring, prospecting sites for aquaculture, and so on.

The collectives of robots rely on decentralized control based on the principles of self-organization. This gives them the capability to perform efficiently on tasks that require robustness to faults, scalability, and distributed sensing.

With the development of CORATAM, Portugal is hoping to achieve three main objectives. The first is to demonstrate the novel approach to control synthesis on a set of maritime tasks in the real world. The second is to develop a swarm of aquatic robots with a fault-tolerant ad-hoc network architecture, heterogeneous in nature and scalable. The third is to release all the hardware and software components developed under an open-source license, to enable others to build their own aquatic robots.

Each robot is about 60 cm in length and inexpensive, as the designers have used widely available, off-the-shelf hardware. Each robot is a differential-drive mono-hull boat that can travel at a maximum speed of 1.7 m/s in a straight line. The maximum angular speed the robots can achieve is 90°/s.

An RBPi 2 SBC handles the on-board control of each robot. The robots communicate via a wireless protocol (802.11g Wi-Fi), with each one broadcasting UDP datagrams. The neighboring robots and the monitoring station receive the broadcasts, forming a distributed network without any central coordination or single point of failure. All robots are equipped with compass sensors and GPS, and each broadcasts its position to the neighboring robots every second.
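
The project's code is not reproduced here, but a minimal sketch of such a once-per-second position broadcast might look like the following Python snippet. The port number, message format, and sensor helpers are assumptions for illustration.

    import json
    import socket
    import time

    ROBOT_ID = 1                                 # unique per robot (assumed)
    BROADCAST_ADDR = ("255.255.255.255", 5005)   # port number is an assumption

    def read_gps():
        # Placeholder for the real GPS driver
        return 38.70, -9.15

    def read_compass():
        # Placeholder for the real compass driver
        return 90.0

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

    while True:
        lat, lon = read_gps()
        datagram = json.dumps({"id": ROBOT_ID, "lat": lat, "lon": lon,
                               "heading": read_compass()}).encode()
        # Neighbors and the monitoring station listen on the same port;
        # there is no central coordinator and no single point of failure.
        sock.sendto(datagram, BROADCAST_ADDR)
        time.sleep(1.0)  # one broadcast per second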

All robots use prototype hardware, making them inexpensive compared to the majority of commercially available unmanned surface vehicles. The robots therefore serve as a platform suitable for research and development, and one that is easy to maintain. Additionally, the open-source nature of the platform allows different manufacturing processes, sensory payloads, design choices, and actuators to be used.

An artificial neural network-based controller drives each robot. The normalized readings of the sensors form the inputs of the neural network, while the outputs of the network control the actuators on the robot. Each sensor reading and actuation value is updated every 100 ms.
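
As a rough illustration of such a control loop, the sketch below runs a tiny feed-forward network on a 100 ms cycle. The network sizes, weights, and sensor/actuator helpers are placeholders, not the project's actual controller.

    import time
    import numpy as np

    N_INPUTS, N_HIDDEN = 8, 10     # layer sizes are assumptions
    rng = np.random.default_rng(0)
    w_in = rng.normal(size=(N_HIDDEN, N_INPUTS))   # stands in for trained weights
    w_out = rng.normal(size=(2, N_HIDDEN))         # two outputs: left/right thrust

    def read_sensors():
        # Placeholder: compass/GPS-derived readings normalized to [0, 1]
        return rng.random(N_INPUTS)

    def set_actuators(left, right):
        # Placeholder for the differential-drive motor interface
        pass

    while True:
        x = read_sensors()
        hidden = np.tanh(w_in @ x)
        left, right = np.tanh(w_out @ hidden)   # network outputs drive the actuators
        set_actuators(left, right)
        time.sleep(0.1)  # sensor readings and actuation values update every 100 ms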

User-Centric Surround Sound on Headphones

All of us have two ears and together with the brain, these form a formidable aural processing system that we take for granted. In real life, these external appendages not only help in locating our position with respect to our surroundings, but also in pinpointing the sources of various sounds that surround us. For example, we instinctively move to the right to avoid a honking vehicle approaching us from behind on the left.

When enjoying the stage performance of a group of musicians playing their instruments, we can point out various instruments with reasonable accuracy, even with our eyes closed. We can do this because the sound reaching the left ear is somewhat different from that reaching the right ear and by unconsciously moving our heads ever so slightly we emphasize that difference. Our brain processes various aspects of the sounds reaching it from the left and right ears, analyzes them and forms a mental image of the position of the instrument relative to the two ears.

A similar situation is artificially created when we listen to the reproduction of stereophonically recorded sound played back through two loudspeakers placed some distance apart. Our brain is able to take in the variation in sounds and form a two-dimensional aural image between the two loudspeakers. A quadraphonic or surround sound system, such as in a home theater, helps to generate a more realistic image in three dimensions.

However, when listening through headphones, the brain loses a major part of the information. Minute movements of the head produce no variation in the aural information from the two ears, as the headphone is now attached to the head and moves along with it. Therefore, surround sound through headphones does not provide the same level of satisfaction as that coming from a set of loudspeakers of a home theater – but this may be changing now.

The Neoh headphones have a 9-axis motion sensing mechanism to track the smallest micro-movements of the wearer’s head. The sensors comprise a magnetometer, accelerometers, and gyroscopes. This is similar to the way your smartphone senses its tilt when you play Temple Run on it.

The headphones send the movement data via Bluetooth to a binaural algorithm running on whatever sound source the user is employing – a game console, a smart TV, a tablet, or a smartphone. The sound source processes the surround sound format so that the perceived sound field remains static for the wearer.
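
The vendor has not published its algorithm, but the core idea of head-tracked rendering can be sketched in a few lines: the renderer rotates each source opposite to the head yaw reported by the sensors, so the source stays put in the room.

    def relative_azimuth(source_az_deg, head_yaw_deg):
        # Angle at which a world-anchored source should be rendered,
        # given the current head yaw; wrapped to (-180, 180].
        return (source_az_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

    # A source straight ahead; the wearer turns 30 degrees to the right:
    print(relative_azimuth(0.0, 30.0))  # -30.0, i.e. now rendered 30 deg to the left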

Therefore, if users turn their heads to the right or to the left while wearing the headphones, they will hear and localize the appropriate sounds respective to the original sources. They will feel as if they were listening to a home theater sound system or a conventional cinema performance.

According to the company, these special headphones go far beyond stereo. The audio processing app runs with the headphones to virtualize different audio sources, thereby creating an immersive audio sphere. They claim the headphones offer a far more realistic sound experience than what users can experience with the best home theaters today.

Oracle, Raspberry Pi and a Weather Station for Kids

Kids now have a wonderful opportunity to learn about their world while enhancing their programming skills. The Raspberry Pi Foundation is teaming up with Oracle to create an initiative – the Oracle Academy Raspberry Pi Weather Station. The initiative invites schools to apply for a weather station hardware kit that children can build and develop, teaching them programming skills along the way.

With the firm’s philanthropic arm, Oracle Giving, funding the first thousand kits, schools can get the kits without incurring any expenditure – while stocks last. Students have the freedom to decide how to build their application. They will be using elements of SQL developed in collaboration with Oracle, while the data collected will be hosted on Oracle’s cloud.

The scheme is targeted at children between the ages of 11 and 16. Apart from honing their crafting skills for building the weather station, schoolchildren will also learn to write code for tracking wind speed, direction, humidity, pressure and temperature. In addition, students are also encouraged to build a website for displaying their local weather conditions. Children participating in the scheme can connect with other participants via a specially built website that doubles up to provide technical support.

According to Jane Richardson, director at the Oracle Academy EMEA, the scheme can lead to gratifying and effective careers for children as they learn computer science skills, database management and application programming. The goal of the project is twofold. Primarily, it shows children that computer science can help them in measuring, interrogating and understanding the world in a better way. Secondly, the project provides them with a hands-on opportunity to develop these skills.

The weather station is built with the Raspberry Pi (RBPi) SBC as its control station. The complete set of sensor measurements the weather station handles includes air quality, relative humidity, barometric pressure, soil temperature, ambient temperature, wind direction, wind gust speed, wind speed, and rainfall. All of this is measured and logged in real time with a real-time clock. Although this combination helps to keep the cost of the kit under control, users are free to augment the features further on their own.

Kids go through the scheme via three main phases of learning – collection, display, and interpretation of weather parameters. In the collection phase, children learn about interfacing different sensors, understanding how they work, and writing code in Python to interact with them. At the end of this phase, kids record their measurements in a MySQL database hosted on the RBPi. For this, students can deploy their weather station in an outdoor location on the grounds of their school.
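
A minimal sketch of that logging step in Python might look like this; the table and column names are assumptions for illustration.

    import mysql.connector  # pip install mysql-connector-python

    conn = mysql.connector.connect(user="pi", password="raspberry",
                                   host="localhost", database="weather")
    cur = conn.cursor()
    # Table and column names here are hypothetical
    cur.execute("INSERT INTO readings (reading_time, temperature, pressure, humidity) "
                "VALUES (NOW(), %s, %s, %s)",
                (21.4, 1013.2, 55.0))  # example measurements
    conn.commit()
    conn.close()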

In the display phase, kids learn to create a website with Apache, PHP 5, and JavaScript for displaying the measurements they have collected from their weather station. They can upload their measurements to the Oracle cloud database, so that the data can be used by other schools as well.

In the interpretation phase, children learn to discern patterns in weather data, analyze them, and use them to predict future weather. For this, they can use both the local data they have collected and national weather data from the online Oracle cloud database.

Are There Any Living Computers?

Those tinkering with the origins of life at the forefront of technology call it synthetic biology, to use the politically correct words. Some splice genes from other organisms to produce better food products. Others work on genes for producing tomatoes that can survive bruises. Many graft jellyfish genes onto potatoes to make them glow when they need to be watered. Even making completely new organisms from scratch is within reach today.

In 2013, the Semiconductor Research Corp. of North Carolina started a Semiconductor Synthetic Biology (SSB) program to cross human genes and semiconductors. The aim is to create hybrid computers, something like cyborgs. Although they have progressed far, they have yet to overcome many intermediate hurdles along the way.

Ultimately, they want to make living computers. They intend to make low-power biological systems that can process signals much the same way as the human brain does. At present, they are trying to build a CMOS hybrid life form, for which they are combining CMOS and biological components to allow signal processing and sensing mechanisms.

According to the Director of Cross-Disciplinary Research and Special Projects at SRC, there are several dimensions to the opportunity of using semiconductors in synthetic biology, and the research could branch in various physical directions. He feels that research in SSB will generate a new data explosion, akin to big data. It will be important to see how synthetic biology along with semiconductors will handle big data, especially in health science and medical care.

One of the opportunities that can offer proof of concept is personalized medicine. It is now possible to sequence the genome of a person – a process that generates a vast database of genetic dispositions. Additionally, this helps in testing the response of an individual to a particular drug in the lab, before it is actually administered.

The SSB program is connecting cells to semiconductor interfaces to read out signals indicating the activities inside a specific cell. In the next step, they intend to design new cells that have characteristics that are more desirable, such as sensitivity to specific substances – making them suitable for use as sensors. Apart from extracting signals from cells, researchers in the program plan to inject signals into cells. Their intention is to generate a two-way communication system, thus creating a hybrid system, half biological and half electronic, which will be capable of processing massive amounts of information; in short, a living computer.

In traditional drug discovery, passive arrays of cells are used. Each of the cells is exposed to a slightly varying drug. A scanning beam, usually a laser, checks each cell and measures its response. That narrows down the drugs that show the most promise for further testing. However, the electrical or optical response of a cell to a drug is not a reliable way to capture all the activity within the cell. The SSB program can do that, and is about one thousand times faster.

Arrays of sensing pixels can solve the problem, where each pixel measures a different parameter. With the CMOS chip performing a sensor fusion on the results, researchers expect to uncover the complete metabolic response of the cell to a drug.

CHIP Competes With the Raspberry Pi

The extremely popular tiny, credit card-sized, inexpensive single board computer, the Raspberry Pi (RBPi), may soon have a rival. So far, the contender, known as CHIP, is waiting for its crowdfunding campaign to conclude. In the future, expect more such devices jostling in the marketplace.

Unlike the RBPi, CHIP is completely open source – in both its software and its hardware. Once it is in the market, the design and documentation will be available for people to download. Therefore, with the schematic available, people will be free to make their own versions and add improvements or tweaks to the design.

CHIP’s operating system is based on Debian GNU/Linux, which means it will support several thousand apps right out of the box. On the hardware side, there are some improvements over the specifications of the RBPi. As against the 700 MHz CPU of the RBPi, CHIP runs on a single-core CPU at 1 GHz. Users can do without an SD card, as CHIP has 4 GB of storage built into the board. The 512 MB of RAM is the same as in the later models of the RBPi. While users have to add separate dongles for Wi-Fi and Bluetooth when using the RBPi, CHIP has both built in.

CHIP can connect to almost any type of screen. Its base unit offers composite video output, but there are adapters for both VGA and HDMI. An optional case for the CHIP enables it to work with a touchscreen and a keyboard. The entire package is the size of an original Game Boy.

All this may not be surprising, since there have been prior competitors with better specifications and more features than the original RBPi. However, all the competitors so far were unable to beat the price factor – they were all more expensive than the RBPi. This is the first challenger to bring the price lower than that of an RBPi – the basic unit of the CHIP costs only $9. The Next Thing Co., the manufacturer, calls this the “world’s first nine dollar computer,” and in their opinion, CHIP is “built for work, play and everything in between.”

Along with a lower price tag, CHIP has a smaller profile than the RBPi. As it has a more powerful processor and more memory, CHIP could easily replace the RBPi as the primary choice for projects. The entire board is packed with sockets and pins. Its hardware features include a UART, USB, SPI, TWI (I2C), MIPI-CSI, eight digital GPIOs, parallel LCD output, one PWM pin, composite video out, mono audio in, stereo audio out, and a touch panel input.

Users of CHIP will learn coding basics and play games on the tiny computer that may soon usurp the title of king of the budget microcomputers, so far held by the RBPi. CHIP measures only 1.5×2.3 inches and is compatible with peripherals such as televisions and keyboards. It runs on Linux, works with any type of screen, and comes with a host of pre-installed applications. Therefore, users can simply make it work out of the box, without having to download anything.

Converting Scanned Images into Editable Files

The printed world and the electronic one are primarily connected through computers running the OCR or Optical Character Recognition software programs. Traditional document imaging methods use a two-dimensional environment of templates and algorithms for recognizing objects and patterns.

Current OCR methods can not only recognize a spectrum of colors, but can also distinguish a document’s foreground from its background. They work with the low-resolution images that media such as cell phone cameras, the Internet, and faxes provide. For this, OCR methods often have to de-skew and de-speckle the images, and apply 3D image correction.

Primarily, OCR software programs use two different methods for optical character recognition. The first is feature extraction and the second is matrix matching. With feature extraction, the OCR software program recognizes shapes using mathematical and statistical techniques for detecting edges, ridges and corners in a text font so that it can identify the letters, sentences and paragraphs.

OCR software programs using feature extraction achieve the best results when the image is clean and straight, has easily distinguishable fonts such as Helvetica or Arial, uses dark letters on a white background, and has a resolution of at least 300 dpi. In reality, these conditions are not always met. To read words accurately in less ideal circumstances, OCR techniques have switched to matrix matching.

Matrix matching falls in the category of artificial intelligence. For example, organizations such as law enforcement agencies include matrix matching in the software they use for recognizing images within video feeds. The process combines feature extraction together with similarity measurements.

Similarity measurement utilizes complex algorithms and statistical formulas to compare images relative to others within the same image or within the document. This helps to recognize images within a spectrum of colors even in 3D environments. This technology allows OCR software to recognize crooked images, images with too much background interference and images that need alteration for correct reading and interpretation. Matrix matching techniques are also better at recognizing images at a lower resolution.

Today, several OCR software packages include features that can de-speckle and de-skew the image. They can also change the orientation of the page. A special technique called 3D correction can straighten images that the camera captured at an angle.
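
As a taste of what de-skewing involves, the sketch below uses the open-source OpenCV library (not named in the article) to estimate the text angle and rotate the page upright. Note that OpenCV versions differ in the angle convention returned by minAreaRect, so treat this as illustrative.

    import cv2
    import numpy as np

    def deskew(image):
        # Find the dark (text) pixels and the angle of their bounding rectangle
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        binary = cv2.threshold(gray, 0, 255,
                               cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
        coords = np.column_stack(np.where(binary > 0)).astype(np.float32)
        angle = cv2.minAreaRect(coords)[-1]
        angle = -(90 + angle) if angle < -45 else -angle
        # Rotate the whole page about its center by the correcting angle
        h, w = image.shape[:2]
        matrix = cv2.getRotationMatrix2D((w // 2, h // 2), angle, 1.0)
        return cv2.warpAffine(image, matrix, (w, h), flags=cv2.INTER_CUBIC,
                              borderMode=cv2.BORDER_REPLICATE)

    page = cv2.imread("scan.jpg")  # hypothetical input file
    cv2.imwrite("scan_deskewed.jpg", deskew(page))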

OCR has traditionally been linked with scanning software. The scanning process offers clues that make the OCR results more accurate. However, not all images are available in hard copy, and a scanner may not be readily available. Sometimes, the text to be extracted is available only in a PDF file or some other graphic file downloaded from the Internet. While older PDF files did not allow you to copy text, most modern PDF files let you select text with the mouse pointer and copy it from the document onto your clipboard.

However, advanced PDF creation software includes features to protect the text in the converted document with a password. If you want to extract text from such protected PDF documents, your OCR software program will ask you for the password.
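
When a PDF carries a text layer, no OCR is needed at all; a library such as PyPDF2 can pull the text out directly, supplying the password only if the file is protected. A minimal sketch, with the file name and password as placeholders:

    from PyPDF2 import PdfReader  # pip install PyPDF2

    reader = PdfReader("document.pdf")  # hypothetical file
    if reader.is_encrypted:
        reader.decrypt("password")      # protected files need the password first
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    print(text)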

Incandescent Bulbs May Not Be Dead Yet

If you thought that incandescent bulbs were dead and buried, you need to think again. Although incandescent bulbs had many things going in their favor, such as a warm glow, dimming capability, and low cost, efficiency was not one of them. Most of the energy that went into an incandescent bulb was wasted as heat, and only a little was converted into visible light. Now, scientists at MIT and Purdue University are developing an ultra-efficient new incandescent light bulb. It recycles the heat it gives off, converting the heat back into light.

Traditional incandescent bulbs heat a tungsten filament, causing it to glow. This creates both visible and infrared light. While the visible light is useful, the infrared wavelengths dissipate as heat and are hardly of use. In the new type of incandescent bulb, scientists have coated the filament with a structure called a photonic crystal.

Photonic crystals are made from abundant elements and applied to the filament using conventional material deposition technology. Although the crystals allow visible light to pass through unimpeded, they reflect the infrared wavelengths back into the filament. This heats the filament further, keeping it glowing and emitting more visible light, while the bulb itself uses much less electricity than it otherwise would.

According to the scientists, the bulb can have a high luminous efficiency, a measure of how well a light source produces visible light – the ratio of luminous flux to consumed power. For instance, regular incandescent bulbs show a luminous efficiency of 2-3 percent, CFLs come in at 7-15 percent (excluding ballast loss), and LEDs at 5-20 percent. The new, two-stage incandescent, once developed further, should be able to manage greater than 40 percent luminous efficiency.

For those who prefer to express this as luminous efficacy in lumens per watt, 100 percent luminous efficiency corresponds to 683 lm/W. That means incandescent bulbs have a luminous efficacy of 13-20 lm/W, CFLs of 47-103 lm/W, and LEDs of 34-136 lm/W. Comparatively, the new incandescent bulb is expected to show a luminous efficacy of 273 lm/W.
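
The conversion between the two figures is simple arithmetic, as this little calculation shows:

    MAX_LM_PER_W = 683.0  # 100 percent luminous efficiency

    def to_lm_per_watt(efficiency_percent):
        return efficiency_percent / 100.0 * MAX_LM_PER_W

    print(round(to_lm_per_watt(2)))   # 14 lm/W  - regular incandescent
    print(round(to_lm_per_watt(15)))  # 102 lm/W - a good CFL
    print(round(to_lm_per_watt(40)))  # 273 lm/W - projected two-stage incandescent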

To make the concept successful, scientists had to design the photonic crystal such that it worked for a very wide range of wavelengths and angles. They had to make the photonic crystals in the form of a stack of thin layers, which they deposited on a substrate. The efficient tuning of how the material interacts with light depends on the right thickness and sequence of the layers, according to the scientists.

The photonic crystals cover the filament, allowing only visible light to pass through. The crystals reflect infrared light just as a mirror would, adding more heat to the filament. As only the visible light goes out, the heat waves keep bouncing back into the filament until they can come out in the form of visible light.

Although at present the luminous efficiency reached is only about 6.6 percent, it already rivals that of commercial LEDs and CFLs. However, it is too early to say whether the two-stage incandescent will be able to beat LEDs, because research on LEDs is also progressing very fast.

Sneaker Technology: Headlights on Your Foot

Sneaker technology is going places. Not that it is traveling, but manufacturers are imbuing the humble sneaker with special powers that help the wearer. One such gadget is the Smart Concept Sole from Vibram. Sneakers made by the company have a remote-controlled LED lighting system. Wearers can choose to illuminate the ground ahead at night as they walk. In addition, Vibram is planning to embed more sensors within the soles of their sneakers to warn wearers of environmental hazards they cannot see.

The LED lights on the Vibram sneaker soles work like mini flashlights. This concept is useful for tactical boots, running shoes, work shoes, and more. In addition to the front LED lights, the sole also has a red tail light, making the wearer visible from behind. Vibram demonstrated their Smart Concept Sole at the Outdoor Retailer trade show in Salt Lake City, Utah.

According to Vibram, inspiration for the Smart Concept Sole came from tactical needs in fields such as firefighting, law enforcement, and military operations. The sole has an integrated electronic board controlling the built-in hardware, along with a fob-sized remote control unit. The user can replace the standalone remote with a smartphone application.

The lighting system forms the most universally useful application of the Smart Concept Sole. Switched on by the user, the integrated LEDs throw a diffused array of light on the path ahead. This allows users to see where they are going in the dark. It is better than using a hand-held flashlight, as the lights on the sole allow the user to maintain a low profile. The front LEDs come with three brightness settings, and the red LED tail light has a flash setting. That increases the wearer’s visibility during activities such as running at night.

Although kids have long had flashing lights on their shoes, the Smart Concept Sole has unique capabilities going farther than path illumination alone. According to Vibram, they are planning to stock the sneaker soles with a variety of sensors that can provide a warning system for users. For example, a gas sensor could monitor for hazardous gases. This is particularly useful in law enforcement and firefighting.

Similarly, a proximity sensor on the foot could monitor if there were any obstacles in scenarios such as in smoke-filled buildings, unfamiliar territory, and dark places. In the same way, a temperature sensor could warn of high temperatures underfoot.

Apart from LEDs on the soles, Vibram also makes special soles that give the user maximum grip on ice, slippery terrain, and wet surfaces. That improves the safety of the user in difficult conditions. They make the sole from three layers: the outer layer of rubber, the middle layer of a special fabric, and the inner layer of a polyurethane compound. These layers help improve the grip of the sole on slippery terrain.

The special sole from Vibram adapts well to low-temperature conditions because of its soft construction. The rubber and fabric provide perfect adhesion without the risk of delamination or abrasion.

Raspberry Pi and a Simple Robot

A pair of DC motors connected to two wheels can form the basis of a simple robot. Once you add a single board computer to this basic structure, you can do almost whatever you like with your robot. However, making a robot do more than simply run around requires many mechanical appendages that may prove difficult to get unless you have access to a workshop or are proficient with 3D printing.

To simplify things for beginners, the robot chassis from Adafruit is a versatile kit. With this simple robot kit and a single board computer such as the Raspberry Pi or RBPi, you can start your first lessons in robotics.

As the kit is for beginners just starting with their first robot, there are no sensors. A Motor HAT (Hardware Attached on Top) controls two motors connected to two wheels on a chassis. The front of the chassis has a swivel castor, which makes it stable. The RBPi mounts on the chassis and a battery supplies the necessary power for the SBC and the motors.

Once you are familiar with generating a set of instructions in Python to make the robot move the way you want it to, you can start adding sensors to the kit. For example, simply adding a camera will allow the robot to see where it is going. Adding an ultrasonic range finder will allow the robot to avoid bumping into obstacles in its path.

The Mini Rover Robot Chassis Kit from Adafruit includes almost everything one needs to build a functional robot. It has an anodized aluminum chassis, two mini DC motors, two motor wheels, a front castor wheel, and a top plate with standoffs for mounting the electronics.

It is convenient to use the latest RBPi models, such as the Model 2, B+, or A+, as these have suitable mounting holes that allow easy attachment to the robot chassis. Although it is also possible to use the RBPi Zero, its small size makes it difficult to mount the Motor HAT securely.

The Motor HAT can drive DC and stepper motors from the RBPi and is suitable for small robot projects. The brass standoffs help to hold the Motor HAT securely to the RBPi. Power comes from two sources. One 4x AA battery pack supplies the motors. Another small USB battery pack powers the RBPi. The RBPi also requires a Wi-Fi dongle to keep it connected to the computer and to control the RBPi robot.

Your RBPi must be running the latest version of the operating system, Raspbian Jessie. If you do not have it, allow the RBPi to access the Internet and download the necessary software.

The examples included with the Motor HAT library provide adequate software to get this project started. For example, you can use the example scripts to make the robot move forward or backward, or turn in different directions. Preferably, place the robot on level ground where there are no obstacles. As the robot has no sensors, it can hit something or easily fall off the edge of a table.
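
In the spirit of those bundled examples, a short sketch using the Adafruit_MotorHAT Python library could drive forward and then turn; motor numbering, speeds, and directions depend on how your motors are wired.

    import atexit
    import time
    from Adafruit_MotorHAT import Adafruit_MotorHAT

    mh = Adafruit_MotorHAT(addr=0x60)  # default I2C address of the Motor HAT
    left, right = mh.getMotor(1), mh.getMotor(2)

    def stop_all():
        left.run(Adafruit_MotorHAT.RELEASE)
        right.run(Adafruit_MotorHAT.RELEASE)
    atexit.register(stop_all)          # never leave the motors running on exit

    for motor in (left, right):        # drive forward for one second
        motor.setSpeed(150)            # speed range is 0-255
        motor.run(Adafruit_MotorHAT.FORWARD)
    time.sleep(1.0)
    left.run(Adafruit_MotorHAT.BACKWARD)  # wheels in opposite directions = turn
    time.sleep(0.5)
    stop_all()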

Remote Controlled Car with a Raspberry Pi

A single board computer such as the Raspberry Pi (RBPi) can work wonders on a remote controlled car. Running Python on the RBPi allows it to handle the three tasks a remote controlled car needs most – self-driving on a track, detection of stop signs and traffic lights, and avoidance of front collisions. The RC car has three subsystems – input units consisting of a camera and an ultrasonic sensor, a processing unit, and a control unit.

The processing unit, a computer, communicates with the RBPi on the car to handle several tasks. These include receiving data from the RBPi, training the neural network and running its predictions, detecting objects, measuring distances, and sending instructions to the Arduino through a USB connection.

The computer also runs a multithreaded TCP server program for receiving the streamed image frames and ultrasonic data from the RBPi. The computer decodes the image frames into numpy arrays and converts them to grayscale.
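
A single-client sketch of such a receiving loop is shown below; the port and the length-prefixed JPEG framing are assumptions, and the real program is multithreaded.

    import socket
    import cv2
    import numpy as np

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("0.0.0.0", 8000))  # port is an assumption
    server.listen(1)
    conn, _ = server.accept()
    stream = conn.makefile("rb")

    while True:
        header = stream.read(4)  # assumed framing: 4-byte length, then one JPEG
        if len(header) < 4:
            break
        length = int.from_bytes(header, "big")
        jpeg = np.frombuffer(stream.read(length), dtype=np.uint8)
        frame = cv2.imdecode(jpeg, cv2.IMREAD_GRAYSCALE)  # grayscale numpy array
        # ... hand the frame to the neural network ...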

To make object recognition and steering simple and fast, the RC car uses a neural network. The advantage is that once the network is trained, it works with only the trained parameters, making predictions very fast. The output layer of the network has four nodes corresponding to the steering control instructions – forward, reverse, left, and right. The input layer has over 38,000 nodes, as only the lower half of each input image is used for training and prediction.
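
The arithmetic behind that input size: at QVGA resolution a frame is 320×240 pixels, so the lower half is 320×120 = 38,400 pixels. A toy version of the prediction step follows; the hidden-layer size and weights are placeholders, not the trained network.

    import numpy as np

    LABELS = ["forward", "reverse", "left", "right"]
    N_IN = 320 * 120   # lower half of a 320x240 grayscale frame = 38,400 inputs
    N_HIDDEN = 32      # hidden-layer size is an assumption

    rng = np.random.default_rng(0)
    w1 = rng.normal(scale=0.01, size=(N_HIDDEN, N_IN))  # stands in for trained weights
    w2 = rng.normal(scale=0.01, size=(4, N_HIDDEN))

    def predict(gray_frame):
        x = gray_frame[120:, :].reshape(-1) / 255.0     # lower half, normalized
        hidden = np.tanh(w1 @ x)
        return LABELS[int(np.argmax(w2 @ hidden))]

    dummy = rng.integers(0, 256, size=(240, 320)).astype(np.uint8)
    print(predict(dummy))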

Although the project uses a shape-based approach for object detection, it focuses only on detecting the stop sign and traffic lights. Both detection and training were done with OpenCV, using positive and negative samples. Positive samples are images that contain the desired object, while negative samples are random images without the desired object.

The controller of the RC car needs four active-low signals corresponding to the forward, reverse, left, and right actions. Four pins on the Arduino provide these signals, simulating the button presses that drive the RC car.
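
On the computer side, sending those instructions over the USB serial link can be as simple as the following sketch; the single-character command protocol is an assumption for illustration.

    import serial  # pip install pyserial

    arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)
    COMMANDS = {"forward": b"F", "reverse": b"B", "left": b"L", "right": b"R"}

    def drive(direction):
        # The Arduino pulls the matching pin low, simulating a button press
        arduino.write(COMMANDS[direction])

    drive("forward")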

The ultrasonic sensor measures the distance to an obstacle in front of the RC car. Its accuracy depends on a proper sensing angle and on the surface conditions of the obstacle. Measurements from the Pi camera allow the RC car to stop at the correct distance from an object.

The monocular vision approach of the RC car makes it difficult to get accurate distance measurements. Several factors influence the distance measurement: errors in the actual measurement, variations in detecting the bounding box of the object, and the nonlinear relationship between distance and camera coordinates. The error increases when distances are large and the camera coordinates are changing rapidly.

The traffic light recognition process uses image processing to detect red and green lights. The first step involves detecting the traffic light and its bounding box. Next, a Gaussian blur reduces the image noise so that the brightest point within the bounding box can be found. Finally, checking for red or green at the brightest spot determines the actual state of the traffic light.
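
A sketch of that last step with OpenCV follows; the blur kernel size and the assumption that the red lamp sits in the top third of an upright traffic light (green in the bottom third) are illustrative choices, not the project's exact logic.

    import cv2

    def traffic_light_state(frame, box):
        x, y, w, h = box  # bounding box from the detector
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(roi, (41, 41), 0)  # suppress noise first
        _, _, _, max_loc = cv2.minMaxLoc(blurred)     # brightest point = lit lamp
        by = max_loc[1]  # vertical position of the brightest point
        if by < h / 3:
            return "red"      # top of an upright traffic light
        if by > 2 * h / 3:
            return "green"    # bottom of an upright traffic light
        return "unknown"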

The project uses an RBPi Model B+, a Pi camera, and an HC-SR04 ultrasonic sensor. The RBPi streams the ultrasonic readings and color video over its local Wi-Fi connection. It scales the video down to QVGA (320×240) resolution to achieve low latency.