
Create Steam without Boiling Water

According to introductory physics textbooks, pure water boils to produce steam at 100°C when the pressure equals 1 atmosphere (760 mm of mercury), and converting a kilogram of water into steam requires roughly 640 kcal of heat: about 100 kcal to raise it from 0°C to the boiling point, plus some 540 kcal of latent heat of vaporization. In other words, to produce steam you normally boil water so it changes phase from liquid to gas. However, scientists have shown that it is possible to produce steam from water without boiling it, simply by supplying the latent heat necessary to change the phase.
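The textbook figure is easy to check from standard values: raising a kilogram of water from 0°C to the boiling point takes about 100 kcal, and vaporizing it takes roughly 540 kcal more. A quick sketch of the arithmetic:

```python
# Energy to turn 1 kg of water at 0 degrees C into steam at 100 degrees C.
# Standard values: specific heat ~1 kcal/(kg*K), latent heat ~540 kcal/kg.
SPECIFIC_HEAT = 1.0   # kcal per kg per degree C
LATENT_HEAT = 540.0   # kcal per kg (heat of vaporization at 100 C)

def heat_to_steam(mass_kg, start_temp_c=0.0):
    """Total heat (kcal) to raise water to 100 C and then vaporize it."""
    sensible = mass_kg * SPECIFIC_HEAT * (100.0 - start_temp_c)
    latent = mass_kg * LATENT_HEAT
    return sensible + latent

print(heat_to_steam(1.0))  # 640.0 kcal, matching the textbook figure
```

Note that most of the energy, the 540 kcal of latent heat, is spent on the phase change itself rather than on raising the temperature.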

Boiling is not necessary for producing steam if the vessel containing water is lined with a black material capable of absorbing a range of visible and infrared wavelengths of light. This material can create heat from sunlight and pass it on to the water, creating steam without the water going through the boiling stage.

According to a report in Science Advances, scientists have created such a material: an extremely black substance that reflects very little visible light. The base material is pocked with tiny channels, or nanopores, over which lies a layer of gold nanoparticles, each only a few billionths of a meter wide. This arrangement absorbs light across the visible spectrum and parts of the infrared, reaching 99% efficiency.

Because the structure of the material is highly porous, it floats on the water's surface and soaks up the sunlight falling on it. When light falls on a gold nanoparticle within one of the nanopores, photons in the right range of wavelengths stir up electrons on the gold surface. These electrons oscillate back and forth; the collective oscillations are known as plasmons. The plasmons produce intense localized heating, vaporizing the nearby water.

To excite a plasmon, the wavelength of light has to match the size of the nanoparticle it hits. Therefore, to use as much of the sun's spectrum as possible, the scientists created gold nanoparticles of a variety of sizes in the pores. That allows the material to absorb a wide range of wavelengths.

Jia Zhu, a materials scientist at Nanjing University in China, leads the research group. According to Zhu, scientists have produced steam with plasmonic materials before. The new material is different in that it improves the efficiency of the entire process, converting more than 90% of the light energy falling on it into steam.
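For a rough sense of scale (the irradiance and latent-heat figures below are standard assumptions, not numbers from the paper), a 90%-efficient absorber under full sun would yield steam at roughly the following rate:

```python
# Rough steam yield per square metre of absorber under full sun.
# Assumptions (not from the paper): irradiance 1000 W/m^2, 90% conversion,
# latent heat of vaporization ~2256 kJ/kg.
IRRADIANCE = 1000.0       # W per m^2, typical full-sun value
EFFICIENCY = 0.90         # fraction of light energy converted to steam
LATENT_HEAT_J = 2.256e6   # J per kg

def steam_rate_kg_per_hour(area_m2=1.0):
    power_to_steam = IRRADIANCE * EFFICIENCY * area_m2   # watts
    return power_to_steam * 3600.0 / LATENT_HEAT_J

print(round(steam_rate_kg_per_hour(), 2))  # ~1.44 kg of steam per hour
```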

According to mechanical engineer Nicholas Fang of MIT, who was not part of the research, the team has produced an intriguing solution. Although scientists have achieved higher efficiencies with other materials, such as carbon nanotubes, the new material, though not quite as efficient, should be cheaper to manufacture.

Steam is a very useful form of energy, and generating it efficiently can help many industries. Applications include producing fresh water from saline water (desalination), running steam engines, and sterilization. Industry also uses steam for humidification, cleaning, atomization, propulsion, and heating, and a wide range of equipment runs on it.

Piq: This Ski-sensor Measures Details of your Skiing

Most skiers want feedback about their skiing to improve their technique. The ski sensor from Rossignol offers exactly that, in unprecedented detail, while being light and tiny enough to be unobtrusive. For instance, you get details about edge-to-edge transition time, in-air rotation, g-force, airtime, and more. The sensor is slick and low-profile enough that you may not even notice you have it on.

This multi-sport ski sensor, Piq, measures just 44 x 38 x 5.4 mm. In the three-piece setup, the largest piece is the AA-battery-sized charging unit. When not in use, you can simply plug this into a USB port on your computer and leave it to charge. It has a steel clamp under which the Piq sensor snaps when you are resting, giving the sensor a quick recharge during, say, lunchtime. In actual use, the Piq sensor sits in a small pocket on a strap that you wrap around your ankle. Wrap and fasten the strap carefully to keep the sensor from flying out during your most aggressive sessions.

Once it is on securely, you can forget about the Piq. Those who tried it for multiple days say it never budged, even when the skier straight-lined at over 100 km/h, jumped, and skied corn snow, groomers, hard pack, and deep powder. In general, whether you slash, thrash, or even smash a few gates, this tiny, light, and secure sensor stays with you.

The Piq sensor has its own battery, good for about three hours of continuous tracking according to the manufacturer. In practice, the battery lasts longer than claimed before needing a recharge, a welcome surprise, as it is rare for a device's battery performance to exceed the manufacturer's claims.

While you are on the snow, the Piq sensor tracks and records statistics such as your speed, rotation time in the air, total airtime, the g-force when you land, and the g-force through a turn. It records your edge-to-edge transition time and the angulation of your ski in a turn, generally known as the carving degree. It can separate time spent skiing from time standing or riding the chair, log your total run, and capture all your motions, including turns and jumps, during the session. Piq will even count your turns per minute while you ski.
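As an illustration of how such statistics can be derived (this is a simple sketch, not Piq's actual algorithm), airtime can be estimated by looking for stretches where the accelerometer reads near zero g, the signature of free fall:

```python
# Sketch of airtime estimation from accelerometer magnitude (in g).
# During free fall the measured magnitude drops toward 0 g; on the ground
# it hovers near 1 g. This is an illustration, not Piq's actual algorithm.

def airtime_seconds(samples_g, sample_rate_hz, threshold_g=0.3):
    """Count low-g samples and convert the count to seconds."""
    in_air = [abs(a) < threshold_g for a in samples_g]
    return sum(in_air) / sample_rate_hz

# 100 Hz trace: 0.5 s on the ground, 0.4 s in the air, 0.5 s on the ground.
trace = [1.0] * 50 + [0.05] * 40 + [1.0] * 50
print(airtime_seconds(trace, 100))  # 0.4
```

A real sensor would additionally filter noise and merge nearby low-g stretches, but the underlying signal is the same.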

A free companion app for Android or iOS gives the user access to the data the Piq sensor has acquired. No cable is necessary, as the smartphone connects to the sensor via Bluetooth 4.0. However, the app does not deliver the data in real time; rather, it synchronizes your session when you trigger that function within the app.

An interactive, infographic-style interface displays the data you pulled in, letting you view topline data for the session and then drill down to specifics about your turns and jumps.

Amputee Patients Feel Again Using Bionic Fingers

Although prosthetics help amputees regain some use of a missing limb, the sense of feeling is not among the restored functions. That may soon change. Bionic prosthetics research from EPFL is promising enough to allow an amputee to perceive and distinguish between smooth and rough textures. An artificial finger connected surgically to nerves in the upper part of the patient's arm does the trick. This advance is expected to expedite the development of the sense of touch in prosthetic limbs.

The EPFL research has also shown that the same prosthetic touch sensors meant for amputees can be tested easily on able-bodied people. Non-amputees can feel roughness through stimulation of their nerves, without surgery.

Silvestro Micera and his team at EPFL in Switzerland developed this technology in collaboration with Calogero Oddo and his team at SSSA in Italy; they have published the results in eLife. Their research opens new windows on the development of bionic prostheses, with sensory perception helping to drive the progress.

Dennis Aabo Sørensen, a hand amputee, has been helping EPFL with its prosthetic research for some time. The team implanted electrodes above the stump of his left forearm. A bionic finger connected to his stump lets him feel sensations of texture at the tip of the index finger of his phantom hand, even though he still feels his missing hand as a closed fist.

When EPFL earlier connected a bionic hand to the electrodes in his left forearm, Sørensen could recognize both shape and softness. This time, the team wired the bionic finger to the electrodes corresponding to his fingertip. Rubbing the bionic finger against pieces of plastic engraved with different patterns produced a sensation of texture at the tip of the index finger of his phantom hand. Sørensen was able to differentiate correctly between smooth and rough plastics 96 percent of the time using his bionic finger.

The group at SSSA in Italy tested the bionic finger on non-amputees wearing EEG caps. They recorded the subjects' brain activity while they touched the plastic surfaces with an actual finger, then compared it with the activity detected while they touched the same surfaces through the bionic fingertip. This showed the scientists that the bionic finger activated the same parts of the brain as a real digit.

The team is therefore confident that this work will lead not only to prosthetics that can feel, but also to artificial touch for industrial, surgical, and rescue robots.

The artificial fingertip was equipped with sensors wired to nerves in Sørensen's arm. As a machine moved the fingertip over pieces of plastic engraved with smooth or rough patterns, the sensors generated corresponding electrical signals. These signals were then translated into a series of electrical spikes imitating the language of the nervous system. Once the spikes were delivered to the nerves, Sørensen was able to distinguish between rough and smooth surfaces with repeatable accuracy.
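The translation into spikes can be illustrated with a simple integrate-and-fire encoder: accumulate the sensor amplitude and emit a spike at each threshold crossing, so stronger signals produce denser spike trains. This is a sketch of the general principle, not the encoder the EPFL/SSSA team published:

```python
# Integrate-and-fire sketch of turning a sensor amplitude trace into spike
# times, mimicking how an analog signal might be recoded as neural-style
# spikes. An illustration of the idea only, not the EPFL/SSSA encoder.

def encode_spikes(signal, threshold=1.0):
    spikes, accumulator = [], 0.0
    for t, amplitude in enumerate(signal):
        accumulator += amplitude
        while accumulator >= threshold:   # fire a spike and reset
            spikes.append(t)
            accumulator -= threshold
    return spikes

smooth = encode_spikes([0.25] * 20)   # low amplitude -> sparse spikes
rough = encode_spikes([0.75] * 20)    # high amplitude -> dense spikes
print(len(smooth), len(rough))        # 5 15
```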

What Are Cobots?

When inventors Joseph Engelberger and George Devol were discussing science fiction novels in 1954, they hit upon the idea of industrial robots. It took them six years to give shape to the idea, and Unimate earned a secure place in the robotic hall of fame as the world's first industrial robot. In 1961, Unimate began working on the assembly lines of General Motors.

At first, people viewed Unimate with suspicion over safety. At the time, the only reference people had for robots was the laser-firing robot from "The Day the Earth Stood Still," a thriller from the 1950s. Now, more than 50 years later, industrial robots are far less scary.

Traditionally, robots were built to work inside robotic work cells with physical barriers for the safety of human workers. Many modern robots, however, work entirely outside any cage. On today's factory floors, working safely alongside their human counterparts, you will find unfettered machines termed collaborative robots, or cobots. Nevertheless, no robot is entirely devoid of health and safety features.

Unlike their predecessors, today's cobots are designed specifically to work safely around humans. Robots now work hand-in-hand with humans on the same assembly tasks, and this has been independently certified as safe. The two-armed collaborative robot from ABB Robotics, YuMi, contributed largely to this certification.

To prevent accidents, cobots rely on onboard sensors. The sensors monitor the location of humans around them on the factory floor and react to human contact: if a person comes too close to the machinery, it simply and automatically shuts down. Moreover, cobots operate with their strength, speed, and force limited to avoid causing serious injury if contact does occur.
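The speed-and-force limiting described above can be sketched as a simple rule that scales the robot's allowed speed with the distance to the nearest person; the radii and speeds here are illustrative, not any vendor's certified limits:

```python
# Speed-and-separation sketch: the closer a person gets, the slower the
# cobot moves, halting entirely inside the stop radius. Illustrative only;
# real controllers follow ISO/TS 15066-style limits.

STOP_RADIUS_M = 0.5       # inside this distance the robot halts
FULL_SPEED_RADIUS_M = 2.0
MAX_SPEED_MPS = 1.0

def allowed_speed(nearest_person_m):
    if nearest_person_m <= STOP_RADIUS_M:
        return 0.0
    if nearest_person_m >= FULL_SPEED_RADIUS_M:
        return MAX_SPEED_MPS
    # Linear ramp between the stop radius and the full-speed radius.
    span = FULL_SPEED_RADIUS_M - STOP_RADIUS_M
    return MAX_SPEED_MPS * (nearest_person_m - STOP_RADIUS_M) / span

print(allowed_speed(0.3), allowed_speed(1.25), allowed_speed(3.0))  # 0.0 0.5 1.0
```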

Most cobots require practically no programming skill. Anyone who can operate a smartphone can program one. In contrast, the complex robots of a decade ago needed a host of highly skilled technicians to program and monitor them in operation.

Among the industries being transformed by such collaborative machinery, the automotive industry stands to benefit most. This sector has always been at the forefront of industrial robotics; automotive manufacturers have used robots and robotic equipment since the 1960s, but much has changed since then. The competitive nature of the industry forces manufacturing lines to be more efficient, flexible, and productive than ever before.

None of this means that advances in robotics threaten human jobs on the production line. Builders use a concrete mixer to help the bricklayer, not to replace him. In the same way, collaborative robots assist workers on the assembly line rather than replace them. According to some experts, production line workers will ultimately use collaborative robots as helpers in the same way engineers use computers: to further their own work and make their jobs easier.

Tractor Beams are a Possibility Now

For some time now, science fiction has promised the tractor beam, a strange column of energy that can transport everything from living beings to inanimate objects through space. Now this long-envisioned piece of technology is en route to becoming science fact. At the Public University of Navarre in Spain, scientists are successfully manipulating tiny objects in midair using what they describe as acoustic holograms.

Ordinarily, holograms are three-dimensional optical structures: they are made from light, specifically by photons diffracting through interference patterns on a holographic plate. Less well known is that sound can do this as well. Ultrasonic waves, interfering constructively and destructively, can generate 3-D structures. That allows the sound field to exert force on objects and behave much as a tractor beam does.

For instance, scientists have previously demonstrated acoustic levitation techniques. These suspend particles at the nodes of a standing ultrasonic wave created by a single array aimed at a reflector, or between a pair of ultrasound emitter arrays. By varying the phase of the ultrasound, the nodes can be moved, transporting the particles along a single axis. However, acoustic levitation is fundamentally limited because the design relies on a fixed enclosure. Acoustic holograms are a step forward: they accomplish the same with a single acoustic emitter and require no special enclosure.

These 3-D structures made of sound, or acoustic holograms, bridge the gap between optical and acoustical trapping; they can be shaped as bottles, twisters, or tweezers. Scientists typically produce them from 400 ultrasonic transducers, each 10 mm across, arranged in a 20 x 20 array. Each transducer emits 40 kHz ultrasound with a programmable relative phase.

The drive signal of each transducer combines two components: one determines the shape of the ensuing structure, and the other acts as a holographic lens, formed when the emitted sound waves coincide in phase at the structure's nodal point. When the rapidly oscillating sound waves of the transducer array combine through this holographic lens, they can suspend small particles of about 3 mm in diameter in midair and grant control over the particles' position and orientation.
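The holographic-lens idea reduces to a phase computation: each transducer is delayed just enough that its wave arrives in phase with all the others at a chosen point. A minimal sketch, assuming a 20 x 20 array with 10 mm pitch (the solver in the paper is far more elaborate):

```python
# Phase delays for focusing a 20 x 20 array of 40 kHz transducers on a
# point in space so all waves arrive in phase there, forming a simple
# acoustic "lens". A sketch of the principle, not the published solver.
import math

SPEED_OF_SOUND = 343.0   # m/s in air
FREQ = 40_000.0          # Hz
WAVELENGTH = SPEED_OF_SOUND / FREQ   # ~8.6 mm

def focus_phases(pitch_m=0.010, n=20, focus=(0.095, 0.095, 0.15)):
    """Phase (radians) each transducer must lag to focus at `focus`."""
    phases = []
    for i in range(n):
        for j in range(n):
            x, y = i * pitch_m, j * pitch_m
            d = math.dist((x, y, 0.0), focus)       # path to focal point
            phases.append((2 * math.pi * d / WAVELENGTH) % (2 * math.pi))
    return phases

phases = focus_phases()
print(len(phases))  # 400 phase values, one per transducer
```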

Brüel & Kjær have done further research on STSF, Spatial Transformation of Sound Fields, also known as acoustic holography. They have combined acoustic holography with transient calculations, which allows any sound-field descriptor, such as particle velocity, sound intensity, or sound pressure, to be defined as a function of position and time. Through animated maps, they have demonstrated how a specific property changes as a function of time.

The scientists at the Public University of Navarre published their findings in Nature Communications. They levitated, rotated, and otherwise manipulated a tiny ball in midair using a grid of ultrasonic transducers that emit high-intensity sound to create a kind of force field around the object, moving the ball around the grid as though invisible fingers were doing the work.

PINE64: A 64-bit Contender for the Raspberry Pi

Earlier, a DIY computing project could cost an enthusiast hundreds of dollars. Now, with single-board computers such as the Raspberry Pi (RBPi) and its latest kin, the Raspberry Pi Zero, anyone can start a new project for the cost of a cup of coffee. Seen from the other side of the fence, a competitor has to offer a better choice of components, a better price, or both. PINE64 Inc. has taken the third route.
PINE64 Inc. is attempting to improve on the legacy built up by the RBPi. According to the team, two mathematical constants make up the name of their board: Pi and Euler's number e. As the board has a 64-bit processor, the name also includes the number 64, along with an A to differentiate it from future versions. The PINE A64 runs on an ARMv8 processor, the Cortex-A53, and is available for just $15.

PINE A64 measures 12.7 x 7.94 cm and uses a 64-bit quad-core ARM Cortex-A53 running at 1.2 GHz. A dual-core Mali-400 MP2 handles the graphics. Memory includes 2 GB of DDR3 SDRAM onboard and a microSD slot for cards up to 256 GB. Ports on the PINE A64 include one gigabit Ethernet port, two USB 2.0 ports, an HDMI 1.4 connector for 4K output, a stereo mini-jack, and a charging circuit for a 3.7 V lithium battery.

PINE64 Inc. will also offer separate modules to augment the functionality of the PINE A64. The modules will add a touch-panel port, a 5 MP camera port, Bluetooth 4.0 and Wi-Fi connectivity, and a 4-lane MIPI video port. The board runs on 5 V power via its micro-USB connector, but can fall back on its battery with onboard power management.

According to Johnson Jeng, co-founder of PINE64 Inc., the company has designed a simple, smart, and affordable computer that people can use to bring their next big ideas to life. The 64-bit quad-core single-board computer is available at an exceptional price and is compatible with several open-source platforms, enabling people to build a community of innovation and creativity.

Just like other ARM-based single-board computers, you can set up the PINE A64 as a gaming console or a mini-computer, use it to control your connected home, or let it run your media center. The PINE A64 can run Android 5.1, openHAB, Ubuntu Linux, OpenWRT, and Kodi. Additionally, it supports Miracast and offers the H.265 video standard for 4K x 2K output.

The Raspberry Pi Foundation concentrates on delivering performance without increasing costs, and hence prefers to retain the ARMv7 architecture for the RBPi family even though ARMv8 64-bit chips are readily available. According to Eben Upton, founder of the RBPi series, a more powerful processor would certainly come with a boost in price.

With companies now launching 64-bit, super-cheap System-on-a-Chip (SoC) platforms, PINE64 Inc. has decidedly stolen a march on the RBPi series. Allwinner started the trend with its 64-bit Cortex-A53 processor for tablets, and PINE64 Inc. has used it to power the PINE A64, A64+, and A64+ 2GB boards.

A Camera to See around Corners

That light travels in straight lines is a well-known fact of physics. This property prevents us from seeing around corners unless helped by a mirror. However, scientists have not let this limitation stop them from developing a far-from-average camera that can see around corners without using x-rays or mirrors.

Genevieve Gariepy and her group have developed a device whose detector treats walls and floors as virtual mirrors. Combined with some clever data processing, the device can track moving objects that are out of its direct line of sight.

A mirror works by reflecting scattered light from an object. Because the surface of a mirror is smooth, the reflected light leaves at a well-defined angle: light scattered from different points on the object keeps its relative geometry after reflection, so the eye sees a clear image. Light hitting a non-reflective surface, on the other hand, is scattered randomly, preventing us from seeing a clear image.

Researchers at the University of Edinburgh and Heriot-Watt University have found a way to tease out information about an object even when light from it is scattered randomly by a non-reflecting surface. Their method, published in Nature Photonics, relies on laser range finding, which measures the distance to an object by the time it takes a light pulse to travel to the object, scatter, and return to a detector.

In principle, the measurement is similar to ultrasonic range finding, except that the researchers use pulses of light instead of sound. In practice, they bounce a laser pulse off the floor, making it scatter. A tiny fraction of the light striking the hidden object backscatters onto the floor, and the detector records this backscattered light at a virtual-mirror spot close to the point where the laser first strikes.

Since the speed of light is a known constant, the device can triangulate the position of the object by measuring the time from the start of the laser pulse until the scattered light reaches the patch on the floor. There are, however, a few difficulties with this method.
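The underlying arithmetic is straightforward, since light covers roughly 0.3 m per nanosecond of delay:

```python
# Converting photon delay into path length: the core arithmetic behind
# laser range finding. Light covers ~0.3 m per nanosecond.

SPEED_OF_LIGHT = 299_792_458.0   # m/s

def path_length_m(delay_ns):
    """Total distance travelled by a pulse detected delay_ns after emission."""
    return SPEED_OF_LIGHT * delay_ns * 1e-9

def range_m(delay_ns):
    """One-way range for a simple there-and-back reflection."""
    return path_length_m(delay_ns) / 2.0

print(round(range_m(10.0), 3))  # 1.499 m for a 10 ns round trip
```

The around-the-corner case adds a third leg (laser spot to object to virtual-mirror spot), so several such timing measurements must be combined to triangulate the hidden object's position.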

The light levels the detector must pick up at the virtual spot are extremely low. Moreover, the timing measurement has to be accurate to within billionths of a second or better. To overcome both obstacles, the researchers turned to serious laser and detector technology, using a laser pulse only ten femtoseconds long for the timing measurements.

The ultra-sensitive camera uses an array of single-photon avalanche diodes (SPADs) to detect the light on the patch of floor. The combination acts as an ultrafast stopwatch, recording when the scattered light pulse arrives; all of this happens within a few billionths of a second.

It helps if the out-of-sight object the device is trying to locate is moving while the nearby objects are not. The moving object then generates an image that changes with time, which can be filtered from the unvarying background.

How Sensors help Seniors Live Independently

Thanks to medical science and increased awareness, people now live longer than their ancestors did. They also wish to live as independently as possible in their senior years. However, independent lifestyles carry certain risks, including inadequate care, deteriorating health, and debilitating falls. Researchers are addressing these issues by developing smart homes, using sensors and other technologies to enhance residents' safety while monitoring their health.

In-home sensors permit unobtrusive monitoring of individuals. That offers enormous potential for timely interventions and improved health trajectories, because health problems can be detected early, before they become serious. Individuals are thus assured of continued high functional ability and independence, with better health outcomes.

The University of Missouri has an ongoing project on Health Alert Systems (HAS) using sensor technology, now being tested in senior housing in Cedar Falls, Iowa, and Columbia, Missouri. The system presently uses motion sensors to monitor activity, acoustic and vision sensors for fall detection, Kinect depth images for gait analysis, and webcams for silhouette images. A new hydraulic bed sensor captures quantitative restlessness, respiration, and pulse. HAS also applies pattern-recognition algorithms to detect changes in the data the sensors collect. Based on these changes, HAS generates health alerts and forwards them to clinicians, who examine them further to determine appropriate interventions.
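The pattern-change idea can be sketched as a simple baseline comparison that flags a day whose activity count is a statistical outlier; the real system uses far richer models, so this is only an illustration:

```python
# Minimal sketch of a sensor-based health alert: flag a day whose activity
# count deviates strongly from the resident's recent baseline.
# Illustrative only; the Missouri HAS uses richer pattern-recognition models.
import statistics

def health_alert(history, today, z_threshold=2.5):
    """Return True if today's activity count is an outlier vs. the history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold

baseline = [120, 130, 125, 118, 132, 127, 121]   # daily motion-sensor counts
print(health_alert(baseline, 124))   # False: a normal day
print(health_alert(baseline, 40))    # True: sharp drop, alert the clinicians
```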

Researchers at the university are evaluating the usability and effectiveness of HAS for managing chronic health conditions. They are presently testing HAS at remote sites, away from healthcare providers, and expect this approach to show how the system can scale into other settings. The next big step, they say, will be to move the system into independent housing, where most seniors prefer to be. This would also offer significant healthcare cost savings while enabling seniors to live independently.

This research should improve health care and quality of life for older adults. The researchers are focusing on new approaches for helping healthcare providers identify potential health problems early, offering a model for eldercare technology that keeps seniors independent while reducing healthcare expenses. The project will also train the next generation of researchers in handling real cyber-physical systems, mentoring students through an interdisciplinary team while research outcomes are integrated into classroom teaching.

Similar efforts are under way elsewhere. For example, researchers at Intel Labs and Carnegie Mellon University in Pittsburgh are working on ways to take the drudgery out of housework. They are designing HERB, the Home Exploring Robotic Butler, a smart and resourceful robot. According to the researchers, HERB will be able to enter a room, assess its layout, and move about by itself.

Researchers at Intel Labs believe disabled and senior citizens will adopt robot butlers early on, as they most need help around the house.

How Smart Sensor Technology helps Beehives

Plants are necessary for life on Earth, as they transform the carbon dioxide that animals exhale into life-sustaining oxygen. Plants, in turn, depend largely on bees to pollinate their flowers and propagate. That makes honey bees a keystone species, which humans have recognized throughout history. Bees help pollinate nearly 70% of all plants on earth, assuring about 30% of the global food supply, which makes them a predictor of our planet's future health.

Global warming has brought with it an alarming rise in the growth rates of damaging pathogens such as fungi, viruses, and mites. At the same time, there has been a serious disruption of the natural rhythms to which the bee population had adapted over centuries of consistent seasonal weather patterns. Crops are treated with pesticides, which bees ingest and carry back to their hives during pollination, often leading to total colony collapse. Electromagnetic radiation levels in the atmosphere are rising with the exponential growth of cell phones and wireless communication towers, which may interfere with the bees' ability to navigate in flight.

All this has made it imperative for scientists to monitor the activity of honey bees within their hives, day and night, including during inclement weather. At University College Cork in Ireland, a group of food business, embedded systems engineering, and biology students recently took up the challenge, developing a unique platform for unobtrusively monitoring, collecting, and analyzing the activity of bees within their colonies.

The project, Smart Beehive, earned top honors in the IEEE/IBM Smarter Planet Challenge 2014. Using mobile technology, the project deploys big data, wireless sensor networks, and cloud computing to record and upload encrypted data.

Waspmote is a modular hardware sensor platform developed by Libelium to let any sensor network and wireless technology connect to any cloud platform. The UCC student team used Waspmote as their starting point, along with integrated hive-condition and gas sensors. They used ZigBee radios and GSM and 3G communications to study the impact of oxygen, carbon dioxide, humidity, temperature, airborne dust levels, and chemical pollutants on the honey bees. The students captured their initial observations in two scientific papers and three invention disclosures.
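The kind of rule such a platform might apply can be sketched as a telemetry check against healthy ranges; the ranges below are illustrative placeholders, not the UCC team's calibration:

```python
# Sketch of a hive-health check over sensor telemetry. The healthy ranges
# below are illustrative placeholders, not the UCC project's calibration.

HEALTHY_RANGES = {
    "temperature_c": (32.0, 36.0),   # brood-nest temperature
    "humidity_pct": (50.0, 70.0),
    "co2_ppm": (400.0, 6000.0),
}

def hive_alerts(reading):
    """Return the list of metrics outside their healthy range."""
    alerts = []
    for metric, (low, high) in HEALTHY_RANGES.items():
        value = reading.get(metric)
        if value is not None and not (low <= value <= high):
            alerts.append(metric)
    return alerts

reading = {"temperature_c": 29.5, "humidity_pct": 65.0, "co2_ppm": 900.0}
print(hive_alerts(reading))  # ['temperature_c'] -> the hive may be in trouble
```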

According to a remark often attributed to Albert Einstein, mankind could survive only about four years on earth if there were no bees left. Smart technology can integrate beehive sensors and analyze the data they collect. Such platforms therefore play a critical role not only in ensuring continued pollination, but ultimately in monitoring, understanding, and managing precious global resources.

The Plug & Sense! technology from the Libelium Waspmote wireless sensor platform supports a wide range of sensors, integrating more than 70 at a time. It can adapt to almost any wireless-sensor monitoring scenario, such as water quality, vineyard monitoring, livestock tracking, irrigation control, and air and noise pollution.

Outdoor deployment is possible because of the waterproof enclosures used by Plug & Sense! Moreover, using solar panels, the honeybee project can harvest its own energy.

Computer Translated Sign Language

Many people in the world cannot hear because their hearing is impaired, which also precludes them from holding spoken conversations. For a long time, telephone calls dominated long-distance communication. Over the past couple of decades, other means, such as email and text messaging, have evolved; although these largely supplement voice calls, the problem of face-to-face communication with the deaf remains.

Sign language is one means of face-to-face communication for the hearing-impaired, and it is as efficient as their methods of communicating through smartphones, tablets, and computers. As with any other language, two people can communicate face-to-face only when both know sign language. Lately, computer-based translators have made communication easier, allowing a user to follow a conversation even without knowing the spoken language.

Now MotionSavvy is applying the same idea to translating sign language into a language the other party understands. They use a dedicated tablet, Uni, created to enable efficient two-way communication between those who can hear and those who cannot.

Uni combines two distinct technologies. First, it watches sign language through integrated cameras, interprets the signs using special recognition software, and translates them into spoken words. The second part converts speech into text: when the other person responds by speaking, Uni transcribes the speech and displays it on the screen for the deaf person to read.
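The recognition step can be sketched as nearest-neighbour matching of an observed hand-pose feature vector against a dictionary of known signs; the feature values below are hypothetical, and MotionSavvy's recognizer is far more sophisticated:

```python
# Toy sketch of the sign-recognition step: match a hand-pose feature
# vector against a dictionary of known signs by nearest neighbour.
# The features and signs are hypothetical; MotionSavvy's recognizer is
# far more sophisticated. This only shows the idea.
import math

SIGN_DICTIONARY = {
    # Hypothetical 3-value features: (finger spread, palm angle, motion speed)
    "hello": (0.9, 0.1, 0.6),
    "thanks": (0.8, 0.7, 0.3),
    "yes": (0.1, 0.2, 0.5),
}

def recognize(features):
    """Return the dictionary sign closest to the observed features."""
    return min(SIGN_DICTIONARY,
               key=lambda sign: math.dist(SIGN_DICTIONARY[sign], features))

def add_sign(name, features):
    """SignBuilder-style extension: teach the dictionary a new sign."""
    SIGN_DICTIONARY[name] = features

print(recognize((0.85, 0.15, 0.55)))  # hello
```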

The World Federation of the Deaf estimates there are more than 70 million deaf people in the world. The technology offered by MotionSavvy has dramatic potential to influence their lives.

MotionSavvy is launching Uni with the ability to read at least 2,000 signs initially and will issue updates adding more. They also offer SignBuilder software, with which users can define new signs themselves.

Uni is available in two versions: hardware and software. You can buy the hardware device, which includes the software, or the software alone for use on a computer with a Leap Motion controller. In both cases, users pay a monthly subscription that gives them access to SignBuilder and CrowdSign.

The basic Uni dictionary contains about 2,000 signs. Although this is enough for meaningful conversations, individuals can add new vocabulary with SignBuilder and share the new signs with others on the Uni network using CrowdSign. MotionSavvy expects the number of signs to grow rapidly as people use the two programs.

At present, Uni recognizes hand signing performed in front of its camera. Eventually, MotionSavvy expects to implement recognition of facial expressions as well. Uni works for people using systems such as CASE or SEE, and MotionSavvy is improving recognition and adding features to accommodate culturally strong ASL users too.