Category Archives: Computing

What is an In-Memory Processor?

According to a university press release, the world’s first in-memory processor is now available. This large-scale processor promises to redefine energy efficiency in data processing. Researchers at LANES, the Laboratory of Nanoscale Electronics and Structures at EPFL (Ecole Polytechnique Fédérale de Lausanne) in Switzerland, have developed the new processor.

The latest information technology systems produce copious amounts of heat. Engineers and scientists are looking for more efficient ways of using energy to lower the production of heat, thereby helping to reduce carbon emissions as the world aims to go greener in the future. In trying to reduce the unwanted heat, they are going to the root of the problem. They want to investigate the von Neumann architecture of a processor.

In a contemporary computing architecture, the processing unit is kept separate from the memory that stores data. As a result, the system spends much of its energy shuttling information between the processor and the memory. This made sense in 1945, when John von Neumann first described the architecture and processing devices and memory storage were deliberately kept separate.

Because of this physical separation, the processor must first retrieve data from the memory before it can perform computations. Each transfer involves moving electric charges, repeatedly charging and discharging capacitors, and driving transient currents. All of this dissipates energy as heat.

At EPFL, researchers have developed an in-memory processor, which performs a dual role—that of processing and data storage. Rather than using silicon, the researchers have used another semiconductor—MoS2 or molybdenum disulphide.

According to the researchers, MoS2 can form a stable monolayer, only three atoms thick, that interacts only weakly with its surroundings. They created their first single transistor from such a monolayer, peeled off simply with Scotch tape. This thin structure allowed them to design an extremely compact 2D device.

However, a processor requires many transistors to function properly. The research team at LANES successfully designed a large-scale structure consisting of 1,024 elements, all within a chip measuring 1×1 cm. Each element serves as both a transistor and a floating gate that stores a charge, which in turn controls the conductivity of the transistor.

The crucial achievement lies in the processes the team used to create the processor. For over a decade, the team has perfected its ability to fabricate entire wafers covered with uniform layers of MoS2. This allowed them to design integrated circuits using industry-standard computer tools and then translate those designs into physical circuits, opening the way to mass production of the in-memory processor.

With electronics fabrication in Europe needing a boost for revival, the researchers want to leverage their innovative architecture as a base. Rather than competing in silicon-wafer fabrication, they see their work as a ground-breaking effort toward using non-von Neumann architectures in future applications. They look forward to using their highly efficient in-memory processor for data-intensive applications, such as those related to Artificial Intelligence.

What is a CPU?

We use computers every day, and most users are aware of the one indispensable hardware component inside them: the CPU, or Central Processing Unit. However, contrary to popular belief, the CPU is not the entire desktop computer or its case; the actual CPU is small enough to fit in the palm of your hand. Small as it is, the CPU is the most important component inside any computer.

That is because the central processing unit is the main driving force, the brain of the computer, and the only component that does the actual thinking and decision-making. To do that, CPUs typically contain one or more cores that break up the workload and handle individual tasks. Because each task requires data, a CPU must have fast access to the memory where that data resides. This memory is generally RAM, or Random Access Memory, which, together with a generous amount of cache memory built into the CPU itself, helps the processor complete tasks at high speed. However, RAM and cache can hold only a small amount of data, so the CPU must periodically transfer the data it needs from external disk drives, which can hold much more.
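As a rough software analogy (not taken from the article), the sketch below uses Python’s functools.lru_cache to show why keeping frequently used data in a small, fast cache in front of a slow backing store pays off, much as CPU caches sit in front of RAM and disk; read_from_disk and read_block are made-up names for illustration.

    import time
    from functools import lru_cache

    def read_from_disk(block: int) -> bytes:
        time.sleep(0.01)                 # pretend this is a slow access to a backing store
        return block.to_bytes(4, "big")

    @lru_cache(maxsize=128)              # a small, fast cache in front of the slow store
    def read_block(block: int) -> bytes:
        return read_from_disk(block)

    start = time.perf_counter()
    for _ in range(100):
        read_block(7)                    # only the first call is slow; the rest hit the cache
    print(f"elapsed: {time.perf_counter() - start:.3f} s")   # about 0.01 s instead of about 1 s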

Being processors, CPUs come in a wide variety of ISAs, or Instruction-Set Architectures. ISAs can differ so much that software compiled for one ISA will not run on another. Even among CPUs using the same ISA, there may be differences in microarchitecture, that is, in the actual internal design of the CPU. Manufacturers use different microarchitectures to offer CPUs with various levels of performance, features, and efficiency.
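As a small illustration, Python’s standard platform module reports which ISA the interpreter is running on; the exact strings vary by operating system and hardware (for example 'x86_64', 'AMD64', or 'arm64').

    import platform

    print("machine / ISA :", platform.machine())       # e.g. 'x86_64' or 'arm64'
    print("processor     :", platform.processor())     # free-form CPU description, may be empty
    print("architecture  :", platform.architecture())  # e.g. ('64bit', 'ELF')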

A CPU with a single core is efficient at tasks that require serial, sequential execution. To improve performance further, CPUs with multiple cores are available. Whereas consumer chips typically offer up to eight cores, bigger server CPUs may offer anywhere from 32 to 128 cores. CPU designers also improve per-core performance by increasing the clock speed, thereby increasing the number of instructions per second each core handles; how far this can go depends on the microarchitecture.
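The sketch below, a minimal example rather than a benchmark, shows how a program can spread independent tasks across multiple cores using Python’s standard multiprocessing module; heavy_task is just a stand-in for real per-core work.

    from multiprocessing import Pool, cpu_count

    def heavy_task(n: int) -> int:
        return sum(i * i for i in range(n))       # stand-in for CPU-bound work

    if __name__ == "__main__":
        jobs = [2_000_000] * 8                    # eight independent chunks of work
        with Pool(processes=cpu_count()) as pool: # one worker per reported logical core
            results = pool.map(heavy_task, jobs)  # chunks run in parallel on separate cores
        print(f"{len(results)} tasks finished on {cpu_count()} logical cores")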

Crafting CPUs is an incredibly intricate endeavor, navigated by only a select few experts worldwide. Noteworthy contributors to this field include industry giants like Intel, AMD, ARM, and RISC-V International. Intel and AMD, the pioneers in this arena, consistently engage in fierce competition, each striving to outdo the other in various CPU categories.

ARM, on the other hand, distinguishes itself by offering its proprietary ARM ISA, a technology it licenses to prominent entities such as Apple, Qualcomm, and Samsung. These licensees then leverage the ARM ISA to fashion bespoke CPUs, often surpassing the performance of the standard ARM cores developed by the parent company.

In a departure from the proprietary norm, RISC-V International promotes an open-standard approach with its RISC-V ISA. This innovative model allows anyone to freely adopt and modify the ISA, fostering a collaborative environment that encourages diverse contributions to CPU design.

To truly grasp how well a CPU performs, your best bet is to dive into reviews penned by fellow users and stack their experiences against your specific needs. This usually involves delving into numerous graphs and navigating through tables brimming with numbers. Simply relying on the CPU specification sheet frequently falls short of providing a comprehensive understanding.

What is Edge Computing?

Many IT professionals spend most of their careers within the safe and controlled environments of enterprise data centers. However, managing equipment in the field is a different ball-game altogether.

Pandemics such as COVID-19 are transforming the world, and the emerging ecosystem is being forced to adapt. Amid this upheaval, edge computing is entering a key transition phase, driven by the massive shift towards home-based work. Along with generating new opportunities for distributed computing, key players are deploying growing numbers of edge data centers as they navigate the sharp economic downturn.

The major benefit of edge computing is that it acts on data at the source. This distributed computing framework brings enterprise applications closer to the sensors that generate data within an IoT system, connecting them with local edge servers and cloud storage systems. Edge computing can deliver strong business benefits through better bandwidth availability, improved response times, and faster insights.
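A hypothetical sketch of that idea in Python: readings are processed locally at the edge and only a compact summary travels upstream; read_sensor and send_to_cloud are placeholders, not real APIs.

    import random
    import statistics

    def read_sensor() -> float:
        return 20.0 + random.random() * 5.0        # stand-in for a real temperature sensor

    def send_to_cloud(summary: dict) -> None:
        print("uploading summary:", summary)       # placeholder for an HTTP or MQTT call

    readings = [read_sensor() for _ in range(1000)]   # a thousand raw readings stay local
    summary = {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "max": round(max(readings), 2),
    }
    send_to_cloud(summary)   # one small message upstream instead of 1000 raw readings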

Edge computing has the potential to enable new services and technologies through low-latency wireless connectivity, and this could transform global business and society. According to some technologists, edge computing could usher in a new era of mobile devices with virtually unlimited access to computing power and data.

Consultants and futurists project that the edge economy will grow to as much as US$4.1 trillion by 2030. The Linux Foundation, in its Edge 2020 report, claims that edge investment will take wing after 2024, with the power footprint of deployed edge IT and data-center facilities reaching 102,000 MW by 2028. It expects annual capital expenditures to reach US$146 billion by then.

In the technology world, however, opinions are divided on the short-term prospects of edge computing. Although there is no doubt about its usefulness, people are skeptical about how soon edge computing will become profitable. Therefore, starting in 2020, investors and end users have been looking intently at the economics of edge computing, focusing more on its near-term cost-benefits than on its long-term potential.

There is a huge opportunity in edge data centers as edge computing plays out over several years, with long deployment horizons and gradual adoption of technologies boosting the market. However, executives do not expect the build-out to come cheaply; it will put pressure on the economics of digital infrastructure. Over time, repeatable form factors may lead to more affordable deployments. Experts are confident that most edge data facilities will be highly automated, remotely managed, and able to run without human intervention.

At present, it is difficult to say which edge projects will succeed. With product segmentation and a fluid ecosystem, even promising ventures can struggle to locate profitable niches. While investors are wary of speculative projects, it is reasonable to expect that well-funded platform builders and stronger incumbents will acquire promising edge players, especially those running short of funding.

Tower operators are also influencing the competitive landscape. Their massive real estate holdings and financial strengths are positioning the tower operators as potentially important players in the edge computing ecosystem.

What are RTUs – Remote Terminal Units?

Nowadays, remote terminal units (RTUs) and SCADA units are built around small computers. Users program controller algorithms into these units, allowing them to control sensors and actuators. Likewise, they can program algorithms for logic solvers, power-factor calculators, flow totalizers, and more, according to the actual requirements in the field.

Present-day RTUs are powerful computers able to solve complex algorithms or mathematical formulas describing external functions. Sensors gather data from the field and send signals back to the RTU. The RTU feeds these input signals into its algorithms and then sends control instructions out to valves or other actuators. As RTU scan periods are very short, the entire cycle takes only a few milliseconds, and the RTU repeats it continuously.
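A much-simplified, hypothetical sketch of such a scan cycle, with made-up names and a toy proportional controller standing in for the real algorithm:

    import random
    import time

    SETPOINT = 50.0         # desired process value
    SCAN_PERIOD = 0.05      # 50 ms scan period, for illustration only

    def read_sensor() -> float:
        return 48.0 + random.random() * 4.0        # placeholder for a field input signal

    def drive_actuator(output: float) -> None:
        print(f"valve command: {output:+.2f}")     # placeholder for a field output signal

    def control_algorithm(measured: float) -> float:
        return 0.8 * (SETPOINT - measured)         # toy proportional controller

    for _ in range(10):                     # a real RTU repeats this loop indefinitely
        value = read_sensor()               # 1. gather data from the field
        output = control_algorithm(value)   # 2. solve the algorithm
        drive_actuator(output)              # 3. send the control instruction
        time.sleep(SCAN_PERIOD)             # 4. wait for the next scan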

Regulatory agencies certifying RTUs prefer dedicated hardware for certain safety-related functions, such as toxic-gas concentration monitoring and smoke detection, to ensure the reliability of detection for those functions.

The RTU operates in a closed loop. Sensors measure the process variables, actuators adjust the process parameters, and controllers solve algorithms that drive the actuators in response to the measured variables. The whole system is tied together by wiring or some form of communication protocol. In this way, the RTU keeps the field processes near it operating according to design.

Before the controller in the RTU can solve the algorithm, it has to receive an input from the field sensor. This requires a defined form of communication between the RTU and the various sensors in the field. Likewise, after solving the algorithm, the RTU has to communicate with the different actuators in the field.

In practice, sensors usually feed into a master terminal unit or MTU that conditions their input, converting it from analog to binary form if necessary, because sensors may be either analog or digital types. For instance, a switch acting as a sensor can report its state digitally, a logic one or +5 V when it is open and a logic zero or 0 V when it is closed, whereas a temperature sensor has to send an analog signal, a continuously varying voltage representing the current temperature.

The MTU uses analog to digital converters to convert analog signals from the sensors to a digital form. All communication between the MTU and the RTU is digital in nature, and a clock signal synchronizes the communication.
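As an illustrative sketch (the numbers are assumptions, not from the article), converting a 12-bit ADC count that represents a 0 to 10 V sensor signal back into engineering units looks like this:

    ADC_BITS = 12                           # assumed 12-bit converter: counts 0..4095
    V_FULL_SCALE = 10.0                     # assumed 0-10 V analog input range
    TEMP_AT_0V, TEMP_AT_FULL = 0.0, 100.0   # assumed sensor calibration, in degrees C

    def adc_to_temperature(count: int) -> float:
        voltage = count / (2 ** ADC_BITS - 1) * V_FULL_SCALE   # raw count -> volts
        return TEMP_AT_0V + (voltage / V_FULL_SCALE) * (TEMP_AT_FULL - TEMP_AT_0V)

    print(adc_to_temperature(2048))         # mid-scale count, roughly 50 degrees C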

The industry uses RTUs as multipurpose devices for remote monitoring and control of various devices and systems, mostly for automation. Although industrial RTUs perform functions similar to those of programmable logic controllers (PLCs), RTUs operate at a higher level because they are essentially self-contained computers, with their own processor and memory. The industry therefore often uses RTUs as intelligent controllers or master controller units for the devices that automate a process, which may, for example, be part of an assembly line.

In industries dealing with the distribution of power, water, oil, and the like, RTUs monitor analog and digital parameters from the field through sensors and connected devices, control them, and send feedback to the central monitoring station.

Raspberry Pi to Linux Desktop

You may have bought a new Single Board Computer (SBC), and chances are it is the ubiquitous Raspberry Pi (RBPi). You probably had scores of projects lined up to try on the new RBPi, and you have enjoyed countless hours of fun and excitement on your SBC. Having exhausted all the listed projects, you are searching for new ones. Instead of letting the RBPi sit idle in a corner, why not turn it into a Linux desktop, at least until another overwhelming project turns up?

An innovative set of accessories converts the RBPi into a fully featured Linux-based desktop computer. Everything is housed within an elegant enclosure. The new Pi Desktop, as the kit is called, comes from the largest manufacturer of the RBPi, Premier Farnell. The kit contains an add-on board with an mSATA interface along with an intelligent power controller with a real-time clock and battery. A USB adapter and a heat sink are also included within a box, along with spacers and screws.

Combining the RBPi with the Pi Desktop offers the user almost all functionalities one expects from a standard personal computer. You only have to purchase the solid-state drive and the RBPi Camera separately to complete the desktop computer, which has Bluetooth, Wi-Fi, and a power switch.

According to Premier Farnell, the system is highly robust when you use an SSD. Additionally, booting the RBPi directly from an SSD ensures a faster startup.

Although several projects are available that transform the RBPi into a desktop, you should not be expecting the same level of performance from the RBPi as you would get from a high-end laptop. However, if you are willing to make a few compromises, it is possible to get quite some work done on a desktop powered with the RBPi.

Actually, the kit offers an elegant and simple way to turn the RBPi into a stylish desktop computer within minutes. Unlike most other kits, the Pi Desktop eliminates a complex bundle of wires and does not compromise on the choice of peripherals. You connect the display directly to the HDMI interface.

The added SSD enhances the capabilities of the RBPi. Apart from extending the storage capacity up to 1 TB, it lets the RBPi boot directly from the SSD instead of the SD card, which comes as a pleasant surprise to the user because startup is much faster. Another feature is the built-in power switch, which lets the user cut power to the RBPi safely through the intelligent power controller, without having to pull the cable. You can simply turn the power off or on as you would on a laptop or desktop.

The stylish enclosure holds the add-on board containing the mSATA interface and has ample space for the SSD. As the RBPi lacks an RTC, the real-time clock included in the kit takes care of the date and time on the display. Battery backup keeps the RTC running even when power to the kit is turned off. A heat sink is also included to remove heat buildup within the enclosure.

Cloud Storage and Alternatives

Ordinarily, every computer has some local memory storage capacity. Apart from the Random Access Memory or RAM, computers have either a magnetic hard disk drive (HDD) or a solid-state disk (SSD) to store programs and data even when power is shut off—RAM cannot hold information without power. The disk drive primarily stores the Operating System that runs the computer, other application programs, and the data these programs generate. Typically, such memory is limited and tied to a specific computer, meaning other computers cannot share it.
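A quick way to see this local, machine-specific storage from Python is the standard shutil module (the path below assumes a Unix-style root; on Windows you would pass a drive such as "C:\\"):

    import shutil

    usage = shutil.disk_usage("/")          # total, used, and free bytes of the local drive
    gib = 1024 ** 3
    print(f"total: {usage.total / gib:.1f} GiB, "
          f"used: {usage.used / gib:.1f} GiB, "
          f"free: {usage.free / gib:.1f} GiB")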

A user has two choices for adding more storage to a computer: they can either buy a bigger drive or add to the existing one, or they can use cloud storage. Various service providers offer remote storage, and the user pays a nominal rental amount for a specific amount of cloud memory.

Using such remote storage has several advantages. Most cloud storage services offer desktop folders where users can drag and drop files from their local storage to the cloud and vice versa. Because the cloud is reached over an Internet connection, the user can access it from anywhere and share it between several computers and users.

The user can also use the cloud service as a backup, storing a second copy of important information. If an emergency strikes and the user loses all or part of the data on their computer, the copy held in cloud storage can be retrieved over the Internet. Cloud storage can therefore act as a disaster-recovery mechanism.

Compared to local storage, cloud services are much cheaper, so users can reduce their annual operating costs by using them. Additionally, the user saves on power expenses, as cloud storage does not need the power that locally attached storage would draw.

However, cloud storage has its disadvantages. Dragging and dropping files to and from the cloud takes a finite amount of time over the Internet, because cloud storage services usually limit the bandwidth available for a given rental charge. Power interruptions or a bad Internet connection during a transfer can corrupt data. Moreover, the user cannot access data held in cloud storage unless an Internet connection is available.

Storing data remotely also raises concerns of safety and privacy. As the remote storage is likely to be shared with other organizations, there is a possibility of data commingling.

Therefore, people prefer using private cloud services, which are more expensive, rather than using cheaper public cloud services. Private cloud services may also offer alternative payment plans, and these may be more convenient for users. Usually, the private cloud services have better software for running their services, and offer users greater confidence.

Another option private cloud services often offer is encryption of the stored data. That means only the actual user can make use of their data; others, even if they gain access to it, will see only garbage.
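A minimal sketch of such client-side encryption, assuming the third-party cryptography package (pip install cryptography) rather than any particular provider’s tooling:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()            # keep this key local; never upload it
    cipher = Fernet(key)

    plaintext = b"contents of an important file"
    ciphertext = cipher.encrypt(plaintext)             # only the ciphertext goes to the cloud
    print(cipher.decrypt(ciphertext) == plaintext)     # True: only the key holder can read it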

What is a wireless router?

Most of the electronic gadgets we use today are wireless. When they have to connect to the Internet, they do so through a device called a router, which may be a wired or a wireless one. Although wired routers were very common a few years back, wireless routers have overtaken them.

Routers, as their name suggests, direct a stream of data from one point to another, or to multiple points. Usually, the source of data is the transmitting tower belonging to the broadband provider. The connection from the tower to the router may be through a cable, a wire, or a wireless link. To redirect the traffic, the router may have multiple Ethernet ports to which users connect their PCs, or, as in the latest versions, it may transmit the data wirelessly. The only wire a truly wireless router will probably have is a cable to charge its internal battery.

Technically speaking, a wireless router is a two-way radio, receiving signals from the tower and retransmitting them for other devices to receive. A SIM card inside the router identifies the device to the broadband company, helping it keep track of the router’s statistics. Modern wireless routers follow international wireless communication standards, 802.11n being the latest, although many are of the type 802.11b/g/n, meaning they conform to the earlier standards as well. Routers also differ in their operating speed and in the band on which they operate.

The international wireless communication standards define the speeds at which routers operate. Wireless routers of the type 802.11b are the slowest, reaching up to 11 Mbps; those with the g suffix deliver a maximum of 54 Mbps; and those based on the 802.11n standard are the fastest, reaching up to 300 Mbps. However, a router can deliver data only as fast as the Internet connection allows: even a router rated for 300 Mbps will top out at 100 Mbps on a 100 Mbps line. Nonetheless, a fast wireless router speeds up the local network itself, allowing PCs to interact faster and making them more productive.
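A back-of-the-envelope check of what those ratings mean in practice, using the theoretical maxima quoted above (real throughput is always lower):

    FILE_MB = 700                                   # e.g. a CD-sized download
    for name, mbps in [("802.11b", 11), ("802.11g", 54), ("802.11n", 300)]:
        seconds = FILE_MB * 8 / mbps                # megabits divided by megabits per second
        print(f"{name}: about {seconds / 60:.1f} minutes")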

International standards allow wireless communication on two bands: 2.4 GHz and 5.0 GHz. Most wireless routers based on the 802.11b, g, and n standards use the 2.4 GHz band; these are single-band routers. However, the 802.11n standard also allows devices to operate on the 5.0 GHz band. Dual-band routers can transmit on either of the two bands via a selection switch, or, in some devices, operate on both frequencies at the same time.

Another standard, 802.11a, provides wireless networking on the 5.0 GHz band. Routers that pair an 802.11a radio with an 802.11b/g/n radio for the 2.4 GHz band are also dual-band wireless routers, with two different radios supporting connections on both the 2.4 GHz and 5.0 GHz bands. The 5.0 GHz band offers better performance and lower interference, although typically shorter range than 2.4 GHz.

What happens when you turn a computer on?

Working on a computer is so easy nowadays that even children handle them expertly. However, several things happen when we turn on the power to a computer, before it can present the friendly graphical user interface (GUI) screen that we call the desktop. In a UNIX-like operating system, the computer goes through booting, the BIOS, the Master Boot Record, the bootstrap loader, GRUB, and init before reaching its operating level.

Booting

As soon as you switch on the computer, the motherboard initializes its own firmware to get the CPU running. Registers such as the CPU’s Instruction Pointer reset to fixed values that point to a location in read-only memory (ROM) containing the basic input/output system (BIOS) program. The CPU begins executing the BIOS from the ROM.

BIOS

The BIOS program has several important functions, beginning with the power-on self-test (POST), which ensures that all the components present in the system are functioning properly. POST indicates any malfunction through audible beeps; you have to refer to the motherboard’s beep codes to decipher them. If the computer passes the video-card test, it displays the manufacturer’s logo on the screen.

After these checks, the BIOS initializes the various hardware devices so that they can operate without conflicts. Most BIOSes follow the ACPI specification and create tables used to initialize the devices in the computer.

In the next stage, the BIOS looks for an Operating System to load. The search sequence follows an order predefined by the manufacturer in the BIOS settings. However, the user can change this Boot Order to alter the actual search. In general, the search order starts with the hard disk, CD-ROMs, and thumb drives. If the BIOS does not find a suitable operating system, it displays an error. Otherwise, it reads the master boot record (MBR) to know where the operating system is located.

Master Boot Record

In most cases, the operating system resides on the hard disk. The first sector of the hard disk is the master boot record (MBR), whose structure is independent of the operating system. It consists of a special program, the bootstrap loader, and a partition table. The partition table is a list of all the partitions on the hard disk and their file-system types. The bootstrap loader contains the code that starts loading the operating system. Complex operating systems such as Linux use the grand unified boot loader (GRUB), which allows selecting one of several operating systems present on the hard disk. Booting an operating system with GRUB is a two-stage process.
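A hypothetical sketch of those two structures: the snippet below reads the first 512-byte sector of a raw disk image (the file name is assumed), checks the boot signature, and lists the four primary partition-table entries.

    import struct

    SECTOR = 512

    with open("disk.img", "rb") as f:            # assumed raw image of a disk
        mbr = f.read(SECTOR)

    assert mbr[510:512] == b"\x55\xaa", "missing MBR boot signature"

    for i in range(4):                           # four 16-byte entries start at offset 446
        entry = mbr[446 + i * 16 : 446 + (i + 1) * 16]
        boot_flag, part_type = entry[0], entry[4]
        lba_start, num_sectors = struct.unpack("<II", entry[8:16])
        if part_type:                            # type 0x00 marks an unused slot
            print(f"partition {i}: type=0x{part_type:02x}, start LBA={lba_start}, "
                  f"sectors={num_sectors}, bootable={boot_flag == 0x80}")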

GRUB

Stage one of GRUB is a tiny program whose only task is to call stage two, which contains the main code for loading the Linux kernel and the file system into RAM. The kernel is the core component of the operating system; it remains in RAM throughout the session and controls all aspects of the system through its drivers and modules. The last step of the kernel boot sequence is init, which determines the initial run level of the system. Unless otherwise instructed, it brings the computer up to the graphical user interface (GUI) for the user to interact with.

Connect with a New Type of Li-Fi

Many of us are stuck with slow Wi-Fi, and eagerly waiting for light-based communications to be commercialized, as Li-Fi promises to be more than 100 times faster than the Wi-Fi connections we use today.

As advertised so far, most Li-Fi systems depend on the LED bulb to transmit data using visible light. However, this implies limitations on the technology being applied to systems working outside the lab. Therefore, researchers are now using a different type of Li-Fi using infrared light instead. In early testing, this new technology has already crossed speeds of 40 gigabits per second.

Li-Fi, a communication technology first introduced in 2011, transmits data via high-speed flickering of LED light, flickering fast enough to be imperceptible to the human eye. Although lab-based Li-Fi speeds have reached 224 Gbps, real-world testing has reached only about 1 Gbps. As this is still higher than the Wi-Fi speeds achievable today, people were excited about getting Li-Fi in their homes and offices; after all, you need only an LED bulb. However, the scheme has certain limitations.

LED-based Li-Fi requires the bulb to be turned on for the technology to work; it will not work in the dark, so you cannot browse while in bed with the lights off. Moreover, as with regular Wi-Fi, a single LED bulb distributes the signal to different devices, so the system slows down as more devices connect to it.

Joanne Oh, a PhD student from the Eindhoven University of Technology in the Netherlands, wants to fix these issues with the Li-Fi concept. The researcher proposes to use infrared light instead of the visible light from an LED bulb.

Using infrared light for communication is not new, but has not been very popular or commercialized because of the need for energy-intensive movable mirrors required to beam the infrared light. On the other hand, Oh proposes a simple passive antenna that uses no moving parts to send and receive data.
Rob Lefebvre of Engadget explains that the new concept requires very little power, since there are no moving parts. According to him, the system could prove far more than marginally faster than current Wi-Fi setups while providing interference-free connections, as envisaged.

For instance, experiments at Eindhoven University have already reached download speeds of over 42 Gbps over distances of 2.5 meters. Compare this with the average connection speed most people see from their Wi-Fi, approximately 17.5 Mbps, and the maximum the best Wi-Fi systems deliver, around 300 Mbps: those figures are roughly 2,400 times and 140 times slower, respectively.
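The arithmetic behind those ratios, with every rate expressed in megabits per second:

    lifi_mbps = 42_000          # roughly 42 Gbps from the Eindhoven experiments
    avg_wifi_mbps = 17.5        # typical Wi-Fi connection speed quoted above
    best_wifi_mbps = 300        # the best Wi-Fi systems quoted above

    print(lifi_mbps / avg_wifi_mbps)    # 2400.0 -> roughly 2,400 times
    print(lifi_mbps / best_wifi_mbps)   # 140.0  -> roughly 140 times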

The new Li-Fi system feeds rays of infrared light through an optical fiber to several light antennae mounted on the ceiling, which beam the wireless data downwards through gratings. The gratings radiate the light rays in different directions depending on their wavelengths and angles, so the antennae need no power or maintenance.

As each device connecting to the system gets its own ray of light to transfer data at a slightly different wavelength, the connection does not slow down, no matter how many computers or smartphones are connected to it simultaneously.

Python Libraries for Machine Learning

Machine learning, augmented by deep learning and the wider field of artificial intelligence, helps with many practical applications. Many people, armed with analytics and statistics, are busy navigating the vast universe of machine and deep learning, artificial intelligence, and big data. However, they do not really have to qualify as data scientists, because popular machine learning libraries are available in Python.

Machine learning is driving deep learning and AI into all kinds of machine assists, including driverless cars, better preventive healthcare, and even better movie recommendations.

Theano

A machine-learning group at the Universite de Montreal developed and released Theano a decade ago. In the machine learning community, Theano is one of the most widely used mathematical compilers for CPUs and GPUs. A 2016 paper describes Theano as a “Python framework for fast computation of mathematical expressions” and offers a thorough overview of the library.
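A minimal sketch of Theano’s style, assuming Theano (or its community fork) is installed: a symbolic expression is defined, its gradient is derived automatically, and the whole graph is compiled into a fast callable.

    import theano
    import theano.tensor as T

    x = T.dvector("x")                     # symbolic vector of doubles
    y = (x ** 2).sum()                     # symbolic expression y = sum(x_i^2)
    grad = T.grad(y, x)                    # Theano derives dy/dx symbolically

    f = theano.function([x], [y, grad])    # compile the expression graph
    print(f([1.0, 2.0, 3.0]))              # [array(14.0), array([2., 4., 6.])]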

According to the paper, several software packages have been built on the strengths of Theano, offering higher-level user interfaces better suited to specific goals. For instance, the development of Keras and Lasagne made it easier to express training algorithms mathematically and to evaluate the architecture of deep learning models using Theano.

Likewise, PyMC3, a probabilistic programming framework built on Theano, derives expressions for gradients automatically and generates C code for fast execution. The fact that people have forked Theano over two thousand times, that it has almost 300 contributors on GitHub, and that it has gathered more than 25,000 commits is testimony to its popularity.

TensorFlow

Although a newcomer to the world of open source, TensorFlow is a library for numerical computing that uses data-flow graphs. In its very first year, TensorFlow helped students, artists, engineers, researchers, and many others. According to the Google Developers Blog, TensorFlow has helped with preventing blindness caused by diabetes, early detection of skin cancer, language translation, and more.
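A minimal sketch of that numerical, graph-based style, assuming a recent TensorFlow 2.x installation with eager execution (the early releases required building an explicit graph and running it in a session):

    import tensorflow as tf

    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])

    c = tf.matmul(a, b)       # each operation becomes a node in the computation graph
    print(c.numpy())          # [[1. 3.] [3. 7.]]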

TensorFlow has appeared several times in the most recent Open Source Yearbook, including in the list of top ten open source projects to watch in 2017. In a tour of Google’s 2016 open source releases, an article by Josh Simmons refers to Magenta, a TensorFlow-based project.

According to Simmons, Magenta advances machine intelligence for music and art generation and helps build a collaborative community of coders, artists, and researchers working on machine learning. Another researcher, Rachel Roumeliotis, lists TensorFlow among the technologies powering AI in her roundup of hot programming trends of 2016.

Anyone can learn more about TensorFlow by watching the recorded live stream of the TensorFlow Dev Summit 2017, or by reading the DZone series, TensorFlow on the Edge.

Scikit-Learn

Engineers at Spotify use Scikit-Learn for recommending music, OkCupid uses it to help evaluate and improve its matchmaking system, and Birchbox uses it while exploring phases of new product development. Scikit-Learn is built on NumPy, SciPy, and Matplotlib. It has 800 contributors on GitHub and has gathered almost 22,000 commits.

The Scikit-Learn project website offers free tutorials on using Scikit-Learn for machine learning. Alternatively, one can watch the PyData Chicago 2016 talk given by Sebastian Raschka.
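A minimal sketch of the usual Scikit-Learn workflow on its bundled iris dataset: split the data, fit an estimator, and score it on held-out samples (the choice of model here is arbitrary).

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)                          # train on 75% of the data
    print("accuracy:", model.score(X_test, y_test))      # evaluate on the held-out 25%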