Monthly Archives: July 2017

How Black can VantaBlack be?

Any student of physics knows that black surfaces are good absorbers of visible and infrared radiation; they readily soak up visible light and heat rays. That is why they look black: they reflect very little light. In fact, the amount of light a black surface reflects is a measure of its blackness. So when VantaBlack, from the UK-based Surrey NanoSystems, is called blacker than black, there is a specific reason: it reflects only 0.04% of the light falling on it, across visible, UV, and IR wavelengths.

This property gives VantaBlack excellent characteristics. It offers high front-to-back thermal conduction along with high thermal shock resistance. VantaBlack is also super-hydrophobic, rejecting water accumulation, which makes the material suitable for applications ranging from thermal camouflage to space exploration.

Vanta is an acronym for Vertically Aligned Carbon NanoTube Arrays. Billions of such tubes are grown on a substrate using a modified process of chemical vapor deposition. Each square centimeter of the substrate can hold more than a billion such tubes, each about 20 nm in diameter and from 5 to 14 µm long.
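As a quick back-of-the-envelope check on these figures (the tube count and 20 nm diameter come from the paragraph above; the calculation itself is only a sketch), the fraction of the substrate actually covered by tube cross-sections works out to well under one percent:

```python
import math

tubes_per_cm2 = 1e9          # about a billion tubes per square centimeter
diameter_nm = 20.0           # each tube is roughly 20 nm across

# Cross-sectional area of one tube, in cm^2 (1 nm = 1e-7 cm)
radius_cm = (diameter_nm / 2) * 1e-7
tube_area_cm2 = math.pi * radius_cm ** 2

# Fraction of each square centimeter occupied by tube cross-sections
fill_fraction = tubes_per_cm2 * tube_area_cm2
print(f"fill fraction: {fill_fraction:.4%}")
```

At roughly 0.3% areal coverage, the array is more than 99% empty space, which is why incident photons slip between the tubes and get trapped instead of reflecting off a solid surface.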

This packed forest of carbon nanotubes effectively traps incoming light. Individual photons bounce between the microscopic spaces separating the tubes and eventually dissipate as heat. Because the material is a good thermal conductor, there is very little particle fallout or outgassing; the heat simply passes on to the substrate, so even substrates with low thermal tolerance can carry the coating.

Surrey NanoSystems has developed three versions of VantaBlack; they vary in their light absorption, heat resistance, and application processes. The first version has already been described above. The second, known as S-VIS, is unique in that it can be sprayed onto a material (though not with a spray can) rather than grown by vapor deposition. That means S-VIS can be applied directly to almost any surface material.

As S-VIS is sprayed on, the nanotubes cannot remain aligned. Instead, they are scattered, with the result that the light-absorbing properties of S-VIS are diminished: it reflects about 0.23% of the visible spectrum. Additionally, the user has to bake the material after spraying it, which limits the type of substrate that can be used. However, S-VIS is perfect for complex-shaped or 3-D objects, or for applications where there is no flat surface.

VantaBlack’s third version is known simply as 2.0, and the company claims it to be even darker than the first version. It is so black that Surrey has not been able to measure the light reflected from 2.0 with its MID-IR or UV-VIS spectrometers. The company has released very little information on 2.0, but it has demonstrated in a video that 2.0 can absorb laser light.

There are innumerable uses for VantaBlack. Most are in the optical field, which benefits from the material's light absorption and low outgassing. For instance, pairing VantaBlack with a precision IR imaging platform such as FLIR can yield a high-resolution system able to differentiate between heat sources. Used in Earth-based telescopes, VantaBlack can reduce atmospheric distortion and prevent practically all stray-light reflection from the polished lenses.

Accurate Methods of Gas Analysis

The earliest methods of detecting poisonous gases involved using birds such as canaries. This was mainly inside mines, where the presence of carbon monoxide, methane, and carbon dioxide had a harmful effect on the miners. The canaries, being sensitive to life-threatening gases, would stop singing in their presence, a signal for the miners to evacuate.

Modern methods use several other means of detection, and are more accurate. These involve detecting gases accurately and analyzing them in different areas. For instance, households have carbon monoxide detectors to alert families of the presence of dangerous gases. Explosive detection in airports uses gas chromatography, while human breath analysis forms one of the diagnostic tools for patients in hospitals.

Most gases are undetectable unless they possess a distinct odor. That makes the ability to analyze the composition of gases crucial to human health and safety. Knowing the composition of gases gives us insight into how different processes operate and how to improve them. Several optical, laser-based, and spectroscopic gas-analysis technologies now exist. The major techniques involve laser absorption spectroscopy, photoionization, and paramagnetism, each relying on different types of electronic components.

Laser Absorption Spectroscopy

Laser absorption spectroscopy is the operating principle behind several gas-analysis technologies. The basic principle is that different molecules absorb light at specific wavelengths, and the amount of energy a gas absorbs gives an indication of its composition. The characteristic absorption spectra offer very accurate gas detection and analysis. One laser-based instrument is the Tunable Diode Laser Spectrometer (TDLS).

Tunable Diode Laser Spectrometer

With TDLS, it is possible to measure low concentrations of gases such as carbon dioxide, ammonia, water vapor, or methane. Within the instrument, a photodiode measures the reduction in signal intensity when the emission wavelength of the laser is adjusted to match the absorption lines of the target molecule. The measurement readings give an estimation of the concentration of the target gas.
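The underlying relation is the Beer-Lambert law, I = I0 * exp(-sigma * N * L). The sketch below uses illustrative values for the absorption cross-section sigma and path length L (neither is specified for any particular instrument) to show how a concentration estimate falls out of the measured intensity drop:

```python
import math

# Beer-Lambert: I = I0 * exp(-sigma * N * L)
sigma = 1e-19   # absorption cross-section at the chosen line (cm^2, illustrative)
L = 100.0       # optical path length (cm, illustrative)

def transmitted(I0, N):
    """Intensity reaching the photodiode for number density N (molecules/cm^3)."""
    return I0 * math.exp(-sigma * N * L)

def concentration_from_ratio(I, I0):
    """Invert the Beer-Lambert law to recover N from the measured I/I0."""
    return -math.log(I / I0) / (sigma * L)

N_true = 2.5e14                       # assumed true number density
I = transmitted(1.0, N_true)          # simulated photodiode reading
N_est = concentration_from_ratio(I, 1.0)
print(f"recovered N = {N_est:.3e} molecules/cm^3")
```

In a real TDLS, the laser is swept across the absorption line and the fit uses the whole line shape, but the inversion step is essentially this one.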

For proper operation of the TDLS, it is important to select a suitable absorption line for the compound under study. This makes TDLS highly specific and sensitive. The ability of TDLS to measure several points simultaneously and its non-intrusive nature has been of great help in combustion diagnostics.

IR Spectroscopy

A similar technology is infrared (IR) spectroscopy. By measuring the absorption of a light source passed through a gas sample, IR spectroscopy helps analyze the gas composition. The technique focuses on the IR wavelengths that excite the gas molecules. Detection uses Fourier Transform Infrared (FTIR) technology. In actual use, a combination of light frequencies is directed at a sample, and detectors within the instrument measure the light the gas absorbs. After this process is repeated several times with different combinations of light frequencies, a computer processes the raw absorption data and converts the result using a Fourier transform algorithm. IR spectroscopy can measure more than 20 different gases simultaneously and is well suited to measuring carbon dioxide and organic compounds.
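The Fourier-transform step can be illustrated with a toy interferogram: two cosine components stand in for the light reaching the detector, and an FFT recovers their frequencies. This is a NumPy sketch of the principle, not the instrument's actual processing chain:

```python
import numpy as np

n = 1024
x = np.arange(n)   # optical path difference steps (arbitrary units)

# Toy interferogram: two cosines at "wavenumbers" 50 and 120 (cycles per n samples)
interferogram = np.cos(2 * np.pi * 50 * x / n) + 0.5 * np.cos(2 * np.pi * 120 * x / n)

# The Fourier transform turns the interferogram into a spectrum
spectrum = np.abs(np.fft.rfft(interferogram))
peaks = np.argsort(spectrum)[-2:]          # the two strongest spectral lines
print(sorted(int(p) for p in peaks))       # -> [50, 120]
```

Absorption by a gas would remove energy at its characteristic wavenumbers, showing up as dips at known positions in this spectrum.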

Ammonia Sensors

These very sensitive devices use IR spectroscopy to detect ambient levels of atmospheric ammonia.

Audio HAT for the Raspberry Pi Zero

The Raspberry Pi Zero (RBPiZ) and its successor, the Raspberry Pi Zero Wireless (RBPiZW), are very small single-board computers. The Pi Foundation wanted to keep their cost and size low, so it did not include a 3.5 mm audio jack or any other audio port. Although this may seem like a setback, some users went ahead and figured out how to get audio out of the board with a little hacking.

Another reason for not including an audio port is that the Broadcom chipset used in the RBPiZ and RBPiZW does not have a true analog output. Instead, there are two pulse width modulated (PWM) pins that spew out digital output at very high speeds. To get audio out of these two PWM pins, one has to filter the signal down to the audio frequency range. This makes it possible to fake an analog audio signal by adjusting the duty cycle of the PWM pins.

As a rule of thumb, to simulate an analog frequency from a PWM signal, the PWM frequency should be at least ten times higher than the highest frequency to be replicated. As the audio signals humans can hear range from 20 Hz to 20 kHz, the minimum PWM frequency should ideally be about 200 kHz. The PWM output from the two RBPis runs at 50 MHz, so we can comfortably filter out the audio part while suppressing the higher frequencies.

The schematic of the audio HAT for the RBPis shows that the two stereo audio channels, left and right, are designated as PWM0_OUT and PWM1_OUT. On the PWM0_OUT, R21 and R20 are two resistors acting as a voltage divider to bring down the 3.3 V signal to about 1.1 V peaks. The corresponding voltage divider on the PWM1_OUT is formed of R27 and R26. Therefore, the stereo audio line level can give an output of 1.1 V peak-to-peak.

The RC low-pass filter that prevents the high frequencies from passing through is made up of capacitors C20 and C26, working in conjunction with R21 and R27 respectively. With the component values used on the board, the cut-off frequency of this RC low-pass filter is 17,865 Hz, very close to the 20 kHz upper limit of audio frequencies.
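Both figures can be checked from the standard first-order formulas. The resistor and capacitor values below (270 ohms and 33 nF) are assumptions chosen to approximately reproduce the quoted numbers; the article does not list the actual component values:

```python
import math

# Assumed component values (the article quotes only the resulting figures):
R = 270.0    # ohms, series resistor in the RC filter
C = 33e-9    # farads, filter capacitor

# First-order RC low-pass cut-off: fc = 1 / (2*pi*R*C)
fc = 1.0 / (2 * math.pi * R * C)
print(f"cut-off frequency: {fc:.0f} Hz")   # ~17.9 kHz

# Voltage divider: with the top resistor twice the bottom one,
# the 3.3 V PWM swing divides down to 3.3 * 1/(2+1) = 1.1 V
v_out = 3.3 * 1.0 / (2.0 + 1.0)
print(f"line level: {v_out:.1f} V")
```

Changing either R or C shifts the cut-off inversely, which is why the designers could land it just below the audible limit.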

That still leaves the DC component of the signal on the lines, and one must remove it to prevent damage to any speakers or headphones subsequently connected to them. This is done by capacitors C48 and C34, which allow only the AC part of the signal to pass through and block all DC voltages.

As the PWM pins are being taken to the outside of the board, one must also protect the RBPi from ElectroStatic Discharge (ESD), which can travel back and destroy the RBPi. This is taken care of by ESD protection diodes.

All the above sounds very good and simple, but on the RBPi, the actual PWM0 signal on pin #40, and the PWM1 signal on pin #45, are not available, as they have not been terminated into exposed pads. To circumvent this problem, the PWM0 signal has to be rerouted through software to GPIO pin #18, and the PWM1 signal to GPIO pin #19.
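A common way to do this rerouting on Raspbian is a device-tree overlay line in /boot/config.txt. The pwm-2chan line below is the commonly documented form for mapping PWM0 to GPIO18 and PWM1 to GPIO19; treat the func values as something to verify against your kernel's overlay README:

```
# /boot/config.txt -- route PWM0 to GPIO18 and PWM1 to GPIO19 (ALT5)
dtoverlay=pwm-2chan,pin=18,func=2,pin2=19,func2=2
```

After a reboot, the filtered PWM audio appears on those two GPIO pins.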

Talk to your Raspberry Pi

The Raspberry Pi Foundation has tied up with Google for a project called Artificial Intelligence Yourself, or AIY. This is a Hardware Attached on Top (HAT) project for the Raspberry Pi 3 (RBPi3) that transforms the single-board computer into a virtual assistant. This is the first time Google is offering something exclusively for hobbyists, and the kit comes free with printed issue 57 of The MagPi, the official magazine of the Raspberry Pi.

The kit with the MagPi magazine consists of a Voice HAT board, a speaker, a stereo microphone board, a large arcade push button, and a set of wires. This is all one needs to add voice integration to the RBPi3, turning it into a personal Alexa alternative. Alexa is an intelligent personal assistant developed by Amazon. Intelligent personal assistants can offer real-time information such as news, traffic, and weather, apart from playing audiobooks, streaming podcasts, setting alarms, making to-do lists, and playing back music, and, most importantly, they support voice interaction.

The MagPi magazine contains all the build instructions for putting together the free hardware voice kit; you only need to add the RBPi3 to get it working. There is also a custom cardboard case to house the entire kit along with the RBPi3. Apart from the RBPi3, the AIY voice project will work with an RBPi2 and an RBPiZW as well. Once the hardware is assembled, you will need some software setup, with access to the Google Assistant SDK and the Google Cloud Speech API.

The MagPi 57 issue offers several voice-integration ideas for the AIY voice kit, and you can enhance them or build your own projects. For instance, you can have a voice-integration project answer all your questions just as Alexa does. Alternatively, you can create a voice-controlled robot. In fact, some RBPi owners are building secret AIY projects at Hackster.

According to Billy Rutledge, Google’s director on the project, the AIY project demonstrates a practical method of starting and running a natural language recognizer in conjunction with the Google Assistant. Not only will you have all the functions of the Google Assistant, you can as well add your own pairs of questions and answers.

The Voice Kit and RBPi3 combination acts as a voice recognizer and uses the Google Assistant SDK to recognize speech. For evaluating local commands, it uses a local Python application. You can talk to the Google Assistant, which makes use of the Google Cloud Speech API to answer back. If you wish to use voice capabilities in your future projects, check out the Maker’s guide for more creative extensions.

The arcade-style button has functions beyond initiating the speech interaction. A bright LED mounted within the button signals that the device is running properly through different blink patterns. For instance, the LED pulses to indicate the device is starting up and the voice recognizer has not begun functioning yet. Once the device is ready for use, the LED blinks every few seconds. The LED glows steadily when the device is listening, and pulses while the device is thinking or responding.

Two Raspberry Pi HAT Controller Modules

Atomo Systems, from Hong Kong, will be producing the Atomo Modular Electronic System for building electronic projects from four parts: Control, IO, Power, and Connector. The system also includes two low-cost HAT modules with onboard ARM MCUs compatible with the Raspberry Pi (RBPi). The combined controller-connector board uses a small, inexpensive MCU, similar to what an Arduino Uno uses. However, the ARM MCU is faster, has more IO, and is better suited to working with the RBPi.

The idea behind building such a modular system is to allow the user to focus more on the project rather than worrying about running extra wires for power or adding more IO. The system is highly flexible and has ample system resources. For instance, if you need to solve larger problems, you can simply add more resources such as by swapping controllers rather than starting all over again.

Any electronic project needs inputs and outputs to connect to the rest of the world. The modular electronic system comes with IO modules providing a useful amount of IO. In addition to offering adequate power for most applications, you can double up the modules using the 8-module connector board.

The onboard connectors on the extended controllers offer features such as multi-channel clock generation and bus multiplexing. For example, you can keep track of the system temperature using the built-in thermistor and drive a fan if the temperature exceeds a certain limit.

The modular electronic system needs power to work. Apart from deriving power from the USB socket, other options are also available, from 13 W to 2 kW. These include a 5.5 mm DC Barrel Plug, ATX, and POE. Voltages on tap include 12 VDC, 5 VDC, and 3.3 VDC. For driving higher power devices such as heaters and motors, the input voltage may be used directly.

All the controllers are compatible with the 40-pin HAT connector on the RBPi. They contain EEPROMs for the RBPi HAT to allow for system configuration and automatic device-driver setup. Separate SPI and I2C interfaces allow addressing two PWMs, two ADCs, and four GPIOs. The MKE02Z16VLD4 MCU by NXP powers both. This is a 44-pin LQFP, 5 V tolerant, ESD-robust ARM Cortex-M0+ CPU running at 40 MHz. One of the controllers is a low-power module, while the other is a high-power module capable of handling up to 600 W via a 34-pin power-module connector.

Compatibility with the HAT connector on the RBPi allows programming on the RBPi for updating the controllers. Additionally, you can simply use the Atomo as a modular HAT. This way, you can handle ROS robots or any other system where the RBPi is solely used for interfacing and processing, while the Atomo HAT provides the additional power, IO, or real time control the project requires.

The low-power RBPi HAT combined controller and connector boards make up a two-IO-module system. With it, you can build PoE-powered RBPi applications, such as a simple RBPi-powered robot. This board features 2×28-pin IO modules powered by the RBPi itself. The higher-power version has a standard 34-pin power module.

What happens when you turn a computer on?

Working on a computer is so easy nowadays that we find even children handling them expertly. However, several things start to happen when we turn on the power to a computer, before it can present the nice user-friendly graphical user interface (GUI) screen that we call the desktop. In a UNIX-like operating system, the computer goes through a process of booting, BIOS, Master Boot Record, Bootstrap Loading, grub, init, before reaching the operating level.

Booting

As soon as you switch on the computer, the motherboard initializes its own firmware to get the CPU running. Some registers, such as the Instruction Pointer of the CPU, have permanent values that point to a fixed memory location in a read only memory (ROM) containing the basic input output system (BIOS) program. The CPU begins executing the BIOS from the ROM.

BIOS

The BIOS program has several important functions, beginning with the power-on self-test (POST) to ensure all the components present in the system are functioning properly. POST indicates any malfunction in the form of audible beeps, and you have to refer to the motherboard's beep codes to decipher them. If the computer passes the test for the video card, it displays the manufacturer's logo on the screen.

After these checks, the BIOS initializes the various hardware devices, allowing them to operate without conflicts. Most BIOSs follow the ACPI specification and create tables for initializing the devices in the computer.

In the next stage, the BIOS looks for an Operating System to load. The search sequence follows an order predefined by the manufacturer in the BIOS settings. However, the user can change this Boot Order to alter the actual search. In general, the search order starts with the hard disk, CD-ROMs, and thumb drives. If the BIOS does not find a suitable operating system, it displays an error. Otherwise, it reads the master boot record (MBR) to know where the operating system is located.

Master Boot Record

In most cases, the operating system resides on the hard disk. The first sector of the hard disk is the master boot record (MBR), and its structure is independent of the operating system. It consists of a special program, the bootstrap loader, and a partition table. The partition table is a list of all the partitions on the hard disk and their file-system types. The bootstrap loader contains the code to start loading the operating system. Complex operating systems such as Linux use the grand unified boot loader (GRUB), which allows selecting one of several operating systems present on the hard disk. Booting an operating system using GRUB is a two-stage process.
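The MBR layout is fixed at 512 bytes: 446 bytes of bootstrap code, a 64-byte partition table holding four 16-byte entries, and the two-byte signature 0x55AA. The sketch below builds a minimal synthetic MBR and parses it with Python's struct module, roughly the way a boot loader reads it:

```python
import struct

# Build a synthetic 512-byte MBR: 446 bytes of (empty) bootstrap code,
# one Linux partition entry, three empty entries, and the 0x55AA signature.
entry = struct.pack("<B3sB3sII",
                    0x80,              # bootable flag
                    b"\x00\x00\x00",   # CHS start address (unused here)
                    0x83,              # partition type: Linux
                    b"\x00\x00\x00",   # CHS end address
                    2048,              # LBA of the first sector
                    204800)            # number of sectors (100 MiB)
mbr = b"\x00" * 446 + entry + b"\x00" * 48 + b"\x55\xAA"
assert len(mbr) == 512

# Parse it back: check the signature, then read the first partition entry
signature = mbr[510:512]
assert signature == b"\x55\xAA"
boot_flag, _, ptype, _, lba_start, sectors = struct.unpack("<B3sB3sII", mbr[446:462])
print(f"type=0x{ptype:02x} start={lba_start} sectors={sectors}")
# -> type=0x83 start=2048 sectors=204800
```

A real boot loader does the same signature check before trusting the partition table.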

GRUB

Stage one of GRUB is a tiny program whose only task is to call stage two, which contains the main code for loading the Linux kernel and the file system into RAM. The kernel is the core component of the operating system; it remains in RAM throughout the session and controls all aspects of the system through its drivers and modules. The last step of the kernel boot sequence is init, which determines the initial run-level of the system. Unless otherwise instructed, it brings the computer to the graphical user interface (GUI) for the user to interact with.

Graphene Cells Generate Energy for Prosthetic Hands

At the University of Glasgow, Scotland, a team of scientists has discovered a new use for graphene, the honeycomb form of carbon. They are using it to develop prosthetic limbs, or more specifically, robotic hands with a built-in sense of touch.

The world over, several researchers and their teams are trying to make synthetic skin, which is flexible, and at the same time, has a sense of touch similar to the various types of sensory receptors the human skin possesses. At the Glasgow University, the scientists are powering an experimental form of electronic skin. They are using the power produced by solar cells made of graphene.

Although many types of prosthetic hands are available that can reproduce several mechanical functions of human limbs, the sense of touch is one function not yet included. Amputees would benefit greatly from a prosthetic hand that could sense what it touched, as this would be much closer to a real hand.

Such prosthetic systems do need clean electric energy, but providing that is hardly an easy task. However, the team of researchers from the School of Engineering at the University of Glasgow has discovered that by using ultra-thin honeycomb of carbon, also called graphene, they can generate the necessary clean power derived from the Sun.

Incorporating these clean energy generators in electronic skin with the sense of touch means robots can enhance their ability and performance when interacting with humans and detect potential dangers in a better way.

The team, led by Ravinder Dahiya, describes the process of integrating such graphene-based photovoltaic cells into the electronic skin in detail in the journal Advanced Functional Materials.

Now the team is planning to use the same technology to power the motors driving the prosthetic hand. According to the team, this is how they will create a prosthetic limb that is completely autonomous in its energy generation, something close to a normal limb.

Graphene Cells / Graphene Solar Cells

Graphene is actually a single layer of carbon atoms bonded together in a repeating pattern of hexagons. This structure makes it a two-dimensional material with amazing characteristics—a wonder material with extreme strength, flexibility, transparency, and astonishing conductivity. As it is made from carbon, a material abundantly available on the earth, graphene has the endless potential to improve existing products, while inspiring new ones.

Graphene’s superb transparency and conductivity make it an excellent choice for solar cells. However, although a great conductor in itself, graphene is not good at collecting the electrical current produced within the solar cell. While looking at alternative ways of modifying graphene for the purpose, scientists found graphene oxide (GO) to be more suitable for solar cells. Graphene oxide, although less conductive than graphene, is more transparent and a better charge collector.

Most organic cells generally use conductive indium tin oxide (ITO) and a non-conductive glass layer as their transparent electrodes. However, ITO is a brittle and rare substance that makes solar panels expensive. On the other hand, graphene as a replacement for ITO makes cheaper electrodes for photovoltaic cells.

Five New Advancements in Solar Cells

The earth receives a huge amount of sunlight every hour. Converted to electricity, this would amount to 52 PWh, more than ten times the electricity China generated in the whole of 2013. In that year, the top countries of the world together produced only 16 PWh of electricity. As this is far below the potential of the solar energy falling on the planet, several countries are actively engaged in research and development on photovoltaic cells.

There have been several breakthroughs in photovoltaic cell technology. Early cells were very expensive and inefficient: almost $1,800 per watt at about 4% efficiency. Costs have now come down to $0.75 per watt, while efficiency has risen to 40%. Since then, there have been several other breakthroughs in the solar cell domain.

Printable Solar Cells

At the New Jersey Institute of Technology (NJIT), researchers have developed a printable solar cell, and they can print or paint this on a surface. According to the lead researcher Dr. Mitra, they are aiming for printable sheets of solar cells that any home-based inkjet printer will be able to print and place on the wall, roof, or billboard to generate power. The printable cells are made of carbon nanotubes 50,000 times smaller than a human hair.

All-Carbon Flexible Solar Cells

Scientists at Stanford University have made these flexible solar cells from a special form of carbon called graphene. According to Zhenan Bao, a member of the team and a professor of chemical engineering at Stanford, the flexible carbon solar cells can be coated onto the surface of cars, windows, or buildings to generate electricity.
By replacing expensive materials when manufacturing conventional solar cells, the all-carbon solar cell is expected to make the cells much cheaper.

Transparent Solar Cells

At the Michigan State University, a team of researchers has made solar cells that appear transparent to the visible spectrum of sunlight. Rather, these non-intrusive solar cells convert light beyond the visible spectrum to electricity. Therefore, these can be used on smartphones, on windowpanes of buildings, or in windshields of vehicles without impeding their performance.

According to MSU assistant professor Richard Lunt, their aim is to produce solar harvesting surfaces that are invisible. However, the present efficiency of these cells is a mere 1%, as they are in their initial stages.

Wearable Ultra-Thin Solar Cells

In South Korea, at the Gwangju Institute of Science and Technology, scientists have used gallium arsenide to develop solar cells with a thickness of just one micrometer, more than 100 times thinner than human hair. According to Jongho Lee, an engineer at the institute, such thin cells can be integrated into fabric or glass frames to power the next wave of wearable electronics.

To create such thin cells, the scientists removed extra adhesives from the traditional cells, and cold-welded them on flexible substrates at 170°C.

Solar Cells with 100% Efficiency

By extracting all the energy from excitons, researchers at the University of Cambridge have found methods of making solar cells more efficient. Such a hybrid cell combines organic and inorganic materials to achieve high conversion efficiency.

Bio-Inspired Robot Walks with a Rhythm

Walking robots are not new, as robotic engineers have been fascinated by human movements while walking and have tried to incorporate them into their robots. As a result, we have had several walking robots, starting with WABOT I, the first anthropomorphic robot demonstrated in 1973 by I. Kato and his team at the Waseda University, Japan. Almost everyone remembers ASIMO, a humanoid robot introduced in 2000, as an Advanced Step in Innovative Mobility, designed to be a multi-functional mobile assistant.

Where ASIMO moved as if it were scared of falling, the robotic legs developed by researchers at the University of Arizona are the first model to walk in a biologically accurate manner. The robotic legs are based on a bio-inspired combination of a musculoskeletal architecture with a neural architecture and sensory feedback.

The human-like gait of the robotic legs comes from three things. First, the musculoskeletal system of the robot is very similar to ours, with artificial tendons and muscles, made from Kevlar straps and servomotors, driving the movements. Second, a variety of sensors on the robot provides continuous feedback on hip position, limb loading, muscle stretch, foot pressure, and ground contact, all necessary for dynamically adjusting its gait. Third, a Central Pattern Generator (CPG) controls the movement of the robot at a relatively high level, mimicking the cluster of nerves that serves the same purpose in the human spinal cord.

When we humans walk, we do so almost without thinking about walking. That is because the nerves within our spinal cord allow us to do so. They collect sensory feedback and use it to adjust the rhythm of our walking style. The CPG works the same way for the robot. Just as a baby learns to walk, the CPG too, creates the simplest walking pattern relying on just two neurons, firing alternately.

Babies exhibit this simple walking pattern when placed on a treadmill, even before they have learnt to walk on their own. Once the robot masters this initial simplistic gait, feedback from other sensors provides additional inputs, forming a more complex network that allows the robot to produce a variety of gaits.
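This two-neuron scheme can be sketched as a Matsuoka-style half-center oscillator: two neurons inhibit each other, and a fatigue (adaptation) term forces whichever neuron is active to yield, producing alternating bursts. The parameters below are illustrative and not taken from the Arizona robot:

```python
# Matsuoka-style half-center oscillator: two mutually inhibiting neurons
# with fatigue. tau: membrane time constant, tau_a: adaptation time constant,
# beta: fatigue strength, w: mutual inhibition weight, u: tonic drive.
tau, tau_a, beta, w, u = 0.1, 0.2, 2.5, 2.5, 1.0
dt, steps = 0.005, 4000

x = [0.1, 0.0]        # membrane states, slightly asymmetric to break symmetry
v = [0.0, 0.0]        # fatigue states
y1_hist, y2_hist = [], []

for _ in range(steps):
    y = [max(x[0], 0.0), max(x[1], 0.0)]   # rectified firing rates
    for i, j in ((0, 1), (1, 0)):
        x[i] += ((-x[i] - beta * v[i] - w * y[j] + u) / tau) * dt
        v[i] += ((-v[i] + y[i]) / tau_a) * dt
    y1_hist.append(max(x[0], 0.0))
    y2_hist.append(max(x[1], 0.0))

# Alternating bursts show up as repeated sign changes of y1 - y2
diff = [a - b for a, b in zip(y1_hist, y2_hist)]
signs = [1 if d > 0 else -1 for d in diff if d != 0.0]
switches = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
print(f"burst alternations: {switches}")
```

The mutual inhibition makes one neuron dominate, while its fatigue term slowly builds until the other neuron takes over, which is the alternation the article describes.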

As such, the intention of the research on robotic legs is not to help robots walk better, but rather to understand the neurophysiological process that humans and animals use for walking.

These biped robots have yet to demonstrate truly autonomous, robust walking on uneven and varied terrain, as humans manage in daily life. However, this class of machines is inspiring the design of simple, efficient biped robotic systems that exhibit natural passive gaits, optimal in some energetic sense and analogous to the comfortable walking gait of humans, the aim being to minimize the metabolic energy consumed per unit distance.

Although researchers have tried to achieve this by compensating for energy loss, adding a minimal set of actuators to an otherwise passive system for when the robot is not descending, they have not yet successfully exploited the idea in operational legged robots.

VNC: Controlling a Raspberry Pi from Anywhere

Sometimes you wish you could remotely control your single-board computer (SBC), the Raspberry Pi (RBPi). Perhaps you have set up your RBPi as a home security system with a camera that you want to monitor remotely, or the RBPi controls some appliance that you would like to switch on and off from a remote location. Ordinarily, to access an RBPi from outside your home network, you would need to give it an IP address and set up your home router accordingly. However, there is a method that bypasses all that.

Before you begin, make sure your RBPi has the latest OS installed and is set up to access your home network. Also, as you will be exposing the RBPi to the Internet, change its default password during setup. Once you have done this, you can use VNC Connect to access your SBC from anywhere.

Using VNC, you can easily connect remotely to any computer on the same network. Additionally, VNC Connect allows you to connect to any computer from anywhere using a cloud connection, and this includes the RBPi as well. Once you have set it up, the VNC Viewer app will let you access the graphic interface of your RBPi from any other computer or smartphone.

The most recent version of the RBPi operating system, PIXEL, comes with VNC Connect already present. Others can install it via the apt-get command; you will need both realvnc-vnc-server and realvnc-vnc-viewer. Once installed, run raspi-config and enable VNC. This will allow you to set up VNC Connect.
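For images without VNC Connect preinstalled, the steps above amount to the following commands (the package names are the RealVNC packages in the Raspbian repositories):

```
# Install the RealVNC server and viewer packages
sudo apt-get update
sudo apt-get install -y realvnc-vnc-server realvnc-vnc-viewer

# Enable VNC (Interfacing Options -> VNC in the menu)
sudo raspi-config
```

After enabling VNC and rebooting, the VNC server starts automatically with the desktop.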

Use a browser to go to the sign-up page of RealVNC Raspberry Pi. Enter your email address in the sign up box. The on-screen instructions will now guide you to complete setting up your account with a password.

On the screen of your RBPi, you should see a VNC icon, which you can click to open. Now, click on the Status Menu and select Licensing. Here, enter the email address and password you created on the sign-up page. At the next prompt, select Direct and Cloud Connectivity to make your RBPi accessible online.

Now go to the computer or smartphone from which you would like to control your RBPi, and download the VNC Viewer application there. Open the application and enter the email address and password you created on the sign-up page.

This should make your RBPi pop up automatically as an option, which you can use to open the connection. It will prompt you for the username and password of your RBPi. By default, these are pi and raspberry, unless you changed the password as instructed earlier. It takes only a few seconds to connect to your RBPi.

Now, as long as your RBPi is connected to the Internet, you can log in and access its graphic desktop from anywhere. That means you have complete control of any software on the RBPi: you can check the status of any project it is running, or even play the games stored on your private server.