Category Archives: Computing

Solid State Drives – Why Are They So Fast?

For most people, the HDD or hard disk drive inside their computer is the flat, broad box that stores their operating system, files, documents, and other essentials. Until recently, few users were aware of the inner workings of their HDD. Lately, with computer speeds increasing many fold, people have started looking at an alternative to the HDD – the SSD, or Solid State Drive.

Whatever else you change in your computer system, the general experience remains much the same. For example, you may get a new display, add more RAM, or install a new graphics card. Barring a few moments of exhilaration, none of these gives you the lasting sense of improvement you get when you replace your regular HDD with an SSD.

An SSD suddenly transforms your computer into a speed demon, and you get this feeling every time you use the computer. Even if you do not register the increase in speed at first, you will appreciate it as soon as you have to revert to a computer with a regular HDD. It is truly amazing the way this technology is transforming our computing experience.

To understand how SSDs function, it helps to know a computer’s memory architecture. This is made up of three sections: the cache, the temporary memory, and the permanent storage itself.

The CPU or the Central Processing Unit of a computer is intimately connected to the cache memory and accesses it almost instantaneously. As the computer operates, the CPU uses the cache memory as a sort of scratch pad for all its interim calculations and procedures.

The temporary memory, also known as the RAM or Random Access Memory of a computer is the place where the CPU stores information related to all the active programs and running processes. Although the CPU can access the RAM at high speeds, the access is slower than that for cache memory.

For permanent storage, your computer uses the memory within the HDD or the SSD. These may be programs, documents, configuration files, movie files, songs, and many more. Unlike cache and RAM, an HDD or an SSD retains its contents even when the computer has been shut down.

When people replace their HDD with an SSD, their computer operates at a higher speed even though the cache and RAM are unchanged. This is fundamentally because of the difference in the way an HDD and an SSD work.

An HDD is essentially an electromagnetic device. Inside, a motor spins several magnetic platters stacked one on top of the other. Before the CPU can read data, the read heads have to move to the correct track and wait for the platters to spin the right sector underneath them. All this mechanical movement takes time.

On the other hand, the SSD, being an all-electronic device, involves no mechanical movement. It uses a grid of electrical cells to store and retrieve data. These cells are organized into sections called pages, and pages in turn are grouped into blocks. All this contributes to the fantastic speed of an SSD.
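The page-and-block organization can be sketched in a few lines of Python. This is a toy model, not any real controller’s logic; the page size and pages-per-block figures are merely typical assumptions. It illustrates one key consequence of the layout: pages are written individually, but erasure happens a whole block at a time.

```python
# Toy model of SSD organization: cells -> pages -> blocks.
# Illustrative only; real page/block sizes and controller logic vary by drive.

PAGE_SIZE = 4096          # bytes per page (an assumed, typical size)
PAGES_PER_BLOCK = 128     # pages grouped into one erase block

class Block:
    def __init__(self):
        # None marks an empty (erased) page
        self.pages = [None] * PAGES_PER_BLOCK

    def write_page(self, index, data):
        # Flash can only program an erased page; it cannot overwrite in place
        if self.pages[index] is not None:
            raise ValueError("page must be erased before rewriting")
        self.pages[index] = data

    def erase(self):
        # Erasure happens a whole block at a time
        self.pages = [None] * PAGES_PER_BLOCK

block = Block()
block.write_page(0, b"hello")
print(block.pages[0])            # b'hello'
block.erase()
print(block.pages[0])            # None
```

Because no part of this involves waiting for a platter to rotate, every page is reachable in essentially constant time – the root of the SSD’s speed advantage.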

What is 3D Flash Memory?

Slowly but steadily, the memory market is veering away from magnetic disk storage toward solid-state drives, or SSDs. Not only are prices falling fast, but manufacturers are also producing SSDs with improved technologies, leading to denser memories, higher reliability, and lower costs. For example, Samsung has recently announced SSD and system designs that will drive its new 3-D NAND into mass markets.

Samsung’s latest SSDs are the 850 EVO series. According to Jim Eliot, a marketing executive for Samsung, these use 48-layer, 256-Gbit 3-D NAND cells with 3 bits per cell. The new chips show more than 50% better power efficiency and twice the performance compared to the 32-layer chips Samsung is now producing. In the future, Samsung is targeting Tbit-class chips made with more than 100 layers.

On a similar note, an engineer with SK Hynix says that by the third quarter, the company will start production of 3-D NAND chips with more than 30 layers. By 2019, SK Hynix will be making chips containing more than 190 layers.

At present, 3-D NAND production is still low in yield and the cost of production is higher than for producing traditional planar flash chips. However, these dense chips bring promises of several generations of continuing decreases in costs and improvements in the performance of flash. According to analysts and vendors, it might take another year or so before the new technology is ready for use in the mainstream.

Samsung was the first to announce 3-D NAND production, with rivals catching up fast. Toshiba has already announced its intention to produce 256-Gbit 3-D NAND chips in September, also with 48 layers and 3 bits per cell.

According to Jim Handy, an analyst at Objective Analysis in Los Gatos, California, sales of 3-D NAND will not pick up before 2017. With Samsung shipping its V-NAND SSDs at a loss, the company is gearing up to put the 48-layer devices into volume production, which will enable it to beat the cost of traditional flash.

The reason is not hard to find. Wafers of 3-D chips with 32 layers cost 70% more than wafers for traditional flash. Wafers for the 48-layer versions cost only 5-10% more, yet have 50% more layers. Therefore, although the 48-layer chips tend to start with a 50% yield, they should approach planar flash yield levels within a year or so.

According to expert analysts, it takes a couple of years for any new technology to mature. Hence the prediction that 3-D NAND will account for the majority of flash bit sales only after 2018.

The number of 3-D layers providing an optimal product is still under experimentation. Also under development is a new class of controllers and firmware for managing the larger block sizes. Vendors are still exploring other unique characteristics of these 3-D chips.

For example, Samsung has designed controllers and firmware that address the unique requirements of 3-D NAND and is selling its chips only in SSD form. According to the head of Samsung’s Memory Solutions Lab, Bob Brennan, SSDs provide higher profit margins than merchant chips, and are the fastest way to market.

Emulating Brain Functions with a Memristor Computer

Chip designs at the atomic level may require emulating the functioning of the human brain while upholding Moore’s Law. In fact, this might be the only way forward. All forward-looking semiconductor design organizations, such as Intel, HP, and Samsung, know this, and it has sparked intense interest in the study of memristors.

Among all known electronic components, memristors alone are capable of emulating the brain. It is common knowledge now that a human brain performs far better than the fastest supercomputer while consuming only about 20 watts of power. Supercomputers cannot emulate the brain, only simulate it, yet they consume thousands of watts and cost millions of dollars.

Knowm Inc. of Santa Fe, N.M., has expanded its portfolio by adding three different types of memristors. It offers a new high-density, die-only option, along with all the raw data manufacturers will need to perform their own characterization. Although HP/Hynix is also trying to build commercial memristors, Knowm has beaten them to the market by diversifying its offering. Knowm now offers three models with slow, medium, or fast hysteresis, based on the material they are made of – tungsten, tin, or chromium.

Regardless of the metal ion used in its manufacture, every memristor works in the same way. The device consists of two electrodes, with a layer of metal located close to one of them. As a voltage is applied across the electrodes, metal ions move through the device towards the electrode with the lower potential. The device has a layer of amorphous chalcogenide material, which is also its active layer. Metal ions moving through this active layer form a conductive pathway between the electrodes. As the pathways span the active layer, the device resistance drops. When the direction of the applied potential is reversed, the conductive channels dissolve and the device resistance increases.
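The forming and dissolving of conductive pathways can be caricatured in a few lines of Python. This is a qualitative sketch with invented parameters (the resistance limits and drift rate are arbitrary), not a physical device model: a positive bias drives the state variable up and the resistance down, and a reversed bias undoes it.

```python
# Toy memristor model: resistance drifts down under forward bias as
# conductive pathways form, and back up when the bias is reversed.
# Qualitative illustration only; parameter values are invented.

R_ON, R_OFF = 100.0, 16000.0       # assumed limiting resistances, in ohms

class Memristor:
    def __init__(self):
        self.x = 0.0               # state: fraction of pathway formed (0..1)

    def apply(self, voltage, dt=0.01, rate=1.0):
        # Positive voltage grows the pathway; negative voltage dissolves it
        self.x = min(1.0, max(0.0, self.x + rate * voltage * dt))

    @property
    def resistance(self):
        # Interpolate between the fully-off and fully-on states
        return R_OFF + (R_ON - R_OFF) * self.x

m = Memristor()
r_start = m.resistance
for _ in range(50):
    m.apply(+1.0)                  # forward bias: resistance falls
r_forward = m.resistance
for _ in range(50):
    m.apply(-1.0)                  # reverse bias: resistance recovers
r_reversed = m.resistance
print(r_start, r_forward, r_reversed)
```

The state variable here plays the role of a synaptic weight: repeated pulses of one polarity strengthen the connection, and pulses of the other polarity weaken it.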

This characteristic makes the memristor a bipolar device. The memristor also emulates the way neurons in the brain learn. Neurons send pulses through their synapses to strengthen them, equivalent to lowering the resistance, or they do not send pulses, thus causing the synapses to atrophy, equivalent to increasing their resistance.

IBM, together with DARPA, the Defense Advanced Research Projects Agency, is developing programs to simulate the process with digital computers. Their software will span the spectrum of accuracy in modeling the way synapses work. However, memristors are so far the only true emulators of the brain and its functions. Some researchers are even erecting scaffolding for connecting memristor-based emulators to large-scale models. This is creating the need for components such as those Knowm is now offering.

Knowm’s offering is a treasure trove for researchers, being raw data from over 1,000 experiments. It helps tremendously, as there is no well-defined specification for characterizing memristors properly. The data allows researchers to generate their own characterization, based on the properties of their choice from among the slow, medium, or fast hysteresis types. Knowm offers 16 memristors packaged in a single ceramic dual-in-line package; the die-only option holds 180 memristors.

Can Hardware Thwart Attackers?

At Gamescom 2015 in Cologne, Germany, Intel announced its 6th Generation of Core processors. Although Intel did not elaborate, the new chips have vPro on-chip hardware, which is important to the security of business users. Several of the new 6th Generation vPro Cores have new hardware capabilities, including Authenticate and Unite. Intel claims this on-chip hardware is unhackable; it can verify the identity of users, allowing them to project their screens onto any WiDi, or Wireless Display, in the world.

According to Tom Garrison, General Manager of Intel Business Client, the 6th generation vPro Cores will offer workplaces higher productivity, better security, and a better collaborative experience for business users. As more enterprises now allow users to choose their own devices, ranging from Windows to Apple products, Intel is aiming to lower the cost of transforming the workplace to accommodate them. For instance, Intel claims the transformation has dropped its workplace cost per user from $250 to $150.

Intel 6th Gen vPro users now have three-way docking – with their WiDi, WiGig or their business network. They can also use the wired Thunderbolt dock as it has a dual function, charging the laptop and offering 40-Gbit speeds. Intel claims 300 design wins for their 6th generation design WiDi capabilities and about 600 for the WiGig.

According to Intel, with the 6th Gen vPro Cores, users will see several improvements, even over the five-year-old laptops that many businesses still use. These include a 2.5-fold performance improvement, a threefold extension of battery life, and 30-times faster graphics performance. Intel is also offering its mobile users Xeon-caliber performance. However, the most important announcement remains the authentication hardware.

According to Garrison, Authenticate is a capability never seen before. Using it, the information technology department of a business can verify the identity of any user with two or more factors, making break-ins using stolen credentials virtually obsolete.

With Authenticate, users can select up to four additional factors supported by the on-chip hardware Intel claims is unhackable. These include a PIN, the proximity of a phone over Bluetooth, a defined location such as the office or home, and biometrics such as a retina scan or fingerprint.

With Intel Unite hardware, the 6th Gen Core processors allow business users to link their laptop screens wirelessly to any connected display in the world. They can also control their environment, dimming the lighting and other niceties. Users enter a six-digit PIN to connect to any WiDi-equipped display, whether in a conference room or anywhere else in the world. For business users, this comes complete with Skype. The 6th Generation Core users can now mirror their laptop display on any big-screen presentation without having to turn to dongles, cords, or wires.

Even in a down market, Intel claims 200 business wins for its 6th Gen vPro Cores, along with 100 vPro design-ins and over 30 wins for its Ultrabook designs. It has also completed 300 deployment trials for its WiDi wireless display technology, and more than 600 trials with its WiGig wireless docking design wins.

What is the MHL Specification?

At present, we are inundated with a plethora of digital devices. For example, we have set-top boxes or STBs, Blu-ray players, AVRs, automobile information systems, monitors, TVs, tablets, smartphones, and others making up this large and diverse ecosystem. Getting all these to plug and play together is no mean feat, and the latest standard connector that manufacturers are adopting for compatibility is USB Type-C.

The protocol that USB Type-C will be using for the delivery of audio, video, data, and power is MHL Alt Mode, covering the superMHL and MHL 1, 2, and 3 specifications. MHL Alt Mode over USB Type-C will allow interconnection of more than 750 million MHL devices. With USB Type-C, you can never plug in a device the wrong way – it is a reversible connector.

Backward compatible with USB 2.0 and USB 3.1, both Gen 1 and Gen 2, MHL Alt Mode for USB Type-C features power charging and immersive audio such as DTS:X, Dolby Digital, Dolby Atmos, and more. It allows transmission of 4K video at 60 fps over a single lane, or 8K video at 60 fps over four lanes. You can use your existing remote to control existing MHL phones, as there is backward compatibility with the existing MHL specifications.
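A quick back-of-the-envelope calculation shows why 8K needs four lanes where 4K needs only one. This sketch counts raw pixels only, ignoring blanking intervals, chroma subsampling, and line-coding overhead:

```python
# Back-of-envelope pixel rates behind the MHL Alt Mode figures quoted above.
# Raw pixel counts only; real link budgets include blanking and coding overhead.

def pixel_rate(width, height, fps):
    return width * height * fps          # pixels per second

rate_4k = pixel_rate(3840, 2160, 60)     # 4K UHD at 60 fps
rate_8k = pixel_rate(7680, 4320, 60)     # 8K UHD at 60 fps

print(rate_4k / 1e6)      # ~497.7 Mpixels/s over a single lane
print(rate_8k / rate_4k)  # 8K doubles both dimensions, so 4x the pixels
```

Since 8K doubles both the width and the height of 4K, the pixel rate quadruples, which is exactly what spreading the stream across four lanes accommodates.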

The MHL Consortium has developed and published the MHL Alternate Mode for the superMHL and MHL 1, 2, and 3 specifications. It has established a liaison with the USB-IF and the USB 3.0 Promoter Group for obtaining the official SVID and for ensuring that the specification’s development conforms to the USB Type-C, USB Billboard, and USB Power Delivery specifications.

MHL Alt Mode over USB Type-C presents a single, small form factor connector, ideal for many devices delivering audio, video, data, and power. You can simultaneously charge your smartphone, tablet, or notebook when you connect it to larger displays such as car information systems, projectors, monitors, and TVs.
You will know your USB Type-C port on your host and device supports MHL if you see the USB-IF logo near the port. Smartphones and tablets with the MHL Alt Mode can easily connect to existing car infotainment systems, projectors, monitors and TVs. With MHL cables and MHL-to-HDMI adapters supporting USB Type-C, you can connect to HDMI Type A devices as well. For this, you will need a simple, thin MHL cable that has an HDMI Type A connector on one end and a USB Type-C connector on the other.

Users can employ their existing TV remote to control the device, as MHL Alt Mode supports the Remote Control Protocol as well. In contrast, the alternative DisplayPort technology from VESA does not offer the same capability. With DisplayPort Alt Mode, the user has to go to the connected device and control it manually.

Protocol adapters for DisplayPort Alt Mode will not work for MHL Alt Mode, and vice versa. Manufacturers will have to label adapters properly as DP Alt Mode or MHL Alt Mode to avoid confusing consumers. Proper protocol adapters are necessary for MHL Alt Mode to support VGA, DVI, and HDMI displays.

Oracle, Raspberry Pi and a Weather Station for Kids

Kids now have a wonderful opportunity to learn about their world while enhancing their programming skills. The Raspberry Pi Foundation is teaming up with Oracle on a new initiative, the Oracle Academy Raspberry Pi Weather Station, which invites schools to teach their kids programming skills by applying for a weather station hardware kit that children can build and develop.

With the firm’s philanthropic arm, Oracle Giving, funding the first thousand kits, schools can get them without incurring any expenditure – while stocks last. Students have the freedom to decide how to build their application. They will be using SQL elements developed in collaboration with Oracle, while the data collected will be hosted on Oracle’s cloud.

The scheme is targeted at children between the ages of 11 and 16. Apart from honing their crafting skills for building the weather station, schoolchildren will also learn to write code for tracking wind speed, direction, humidity, pressure and temperature. In addition, students are also encouraged to build a website for displaying their local weather conditions. Children participating in the scheme can connect with other participants via a specially built website that doubles up to provide technical support.

According to Jane Richardson, director at Oracle Academy EMEA, the scheme can lead to rewarding careers for children as they learn computer science skills, database management, and application programming. The goal of the project is twofold. Primarily, it shows children that computer science can help them measure, interrogate, and understand the world better. Secondly, it provides them with a hands-on opportunity to develop these skills.

The weather station is built with the Raspberry Pi, or RBPi, SBC as its control station. The complete set of measurements the weather station handles includes air quality, relative humidity, barometric pressure, soil temperature, ambient temperature, wind direction, wind gust speed, wind speed, and rainfall. All of this is measured and logged in real time using a real-time clock. Although this combination helps keep the cost of the kit under control, users are free to augment the features further on their own.

Kids go through the scheme in three main phases of learning – collection, display, and interpretation of weather parameters. In the collection phase, children learn about interfacing different sensors, understand how they work, and then write code in Python to interact with them. At the end of this phase, kids record their measurements in a MySQL database hosted on the RBPi. For this, students can deploy their weather station at an outdoor location on the grounds of their school.
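The collection phase boils down to reading the sensors and logging each measurement with a timestamp. As a minimal sketch, with a stubbed-out read_sensors() standing in for the real sensor-interfacing code and Python’s built-in sqlite3 standing in for the MySQL database the scheme actually uses:

```python
# Sketch of the collection phase: take a reading and log it with a timestamp.
# sqlite3 substitutes here for the MySQL database hosted on the RBPi, and
# read_sensors() is a placeholder for real sensor-interfacing code.
import sqlite3
from datetime import datetime, timezone

def read_sensors():
    # Placeholder values; a real station would query the attached sensors
    return {"temperature_c": 18.4, "pressure_hpa": 1012.6, "humidity_pct": 67.0}

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE readings (
    taken_at TEXT, temperature_c REAL, pressure_hpa REAL, humidity_pct REAL)""")

reading = read_sensors()
db.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
           (datetime.now(timezone.utc).isoformat(),
            reading["temperature_c"], reading["pressure_hpa"],
            reading["humidity_pct"]))
db.commit()

row = db.execute("SELECT temperature_c FROM readings").fetchone()
print(row[0])   # 18.4
```

Run on a schedule, this loop builds the time-series database that the display and interpretation phases then draw on.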

In the display phase, kids learn to create an Apache, PHP 5, and JavaScript website for displaying the measurements they have collected from their weather station. They can upload their measurements to the Oracle cloud database, so that the data can be used by other schools as well.

In the interpretation phase, children learn to discern patterns in weather data, analyze them, and use them to predict future weather. For this, they can use both the local data they have collected and national weather data from the online Oracle cloud database.

Are There Any Living Computers?

Those tinkering with the origins of life at the forefront of technology call it synthetic biology, to use the politically correct term. Some splice genes from other organisms to produce better food products. Others work with genes to produce tomatoes that can survive bruising. Many graft jellyfish genes into potatoes to make them glow when they need to be watered. Making completely new organisms from scratch is a simple technique today.

In 2013, the Semiconductor Research Corp. of North Carolina started a Semiconductor Synthetic Biology, or SSB, program to cross human genes and semiconductors. The aim is to create hybrid computers, something like cyborgs. Although the researchers have progressed far, they have yet to overcome many hurdles along the way.

Ultimately, they want to make living computers. They intend to make low power biological systems that can process signals much the same way as the human brain can. At present, they are trying to build a CMOS hybrid life form, for which, they are combining CMOS and biological components to allow signal processing and sensing mechanisms.

According to the Director of Cross-Disciplinary Research and Special Projects at SRC, there are several dimensions to the opportunity of using semiconductors in synthetic biology, and the work could proceed in various directions. He feels that research in SSB will generate a new data explosion – big data, in effect. It will be important to see how synthetic biology, along with semiconductors, will handle big data, especially in health science and medical care.

One opportunity that can offer proof of concept is personalized medicine. It is now possible to sequence the genome of a person, a process that generates a vast database of genetic dispositions. This also helps in testing the response of an individual to a particular drug in the lab, before it is actually administered.

The SSB program is connecting cells to semiconductor interfaces to read out signals indicating the activities inside a specific cell. In the next step, they intend to design new cells that have characteristics that are more desirable, such as sensitivity to specific substances – making them suitable for use as sensors. Apart from extracting signals from cells, researchers in the program plan to inject signals into cells. Their intention is to generate a two-way communication system, thus creating a hybrid system, half biological and half electronic, which will be capable of processing massive amounts of information; in short, a living computer.

In traditional drug discovery, passive arrays of cells are used, with each cell exposed to a slightly different drug. A scanner beam, usually a laser, checks each cell and measures its response. That narrows down the drugs that show the most promise for further testing. However, the electrical or optical response of a cell to a drug is not a reliable way to capture all the activity within the cell. The SSB program can do that, and is about one thousand times faster.

Arrays of sensing pixels can solve the problem, where each pixel measures a different parameter. With the CMOS chip performing a sensor fusion on the results, researchers expect to uncover the complete metabolic response of the cell to a drug.

CHIP Competes With the Raspberry Pi

The extremely popular tiny, credit-card-sized, inexpensive single board computer, the Raspberry Pi or RBPi, may soon have a rival. So far, the contender, known as CHIP, is waiting for its crowdfunding campaign to complete. Expect more such devices jostling for space in the marketplace in the future.

Unlike the RBPi, CHIP is completely open source – for both its software and its hardware. Once in the market, the design and documentation will be available to people to download. Therefore, with the schematic available, people will be free to make their own version and add improvements or tweaks to the design.

CHIP’s operating system is based on Debian GNU/Linux, which means it will support several thousand apps right out of the box. On the hardware side, there are some improvements over the specifications of the RBPi. As against the 700MHz CPU of the RBPi, CHIP runs a single-core CPU at 1GHz. Users can do without an SD card, as CHIP has 4GB of storage built onto the board. The 512MB of RAM is the same as in the later models of the RBPi. While users have to add separate dongles for Wi-Fi and Bluetooth when using the RBPi, CHIP has both built on board.

CHIP can connect to almost any type of screen. Its base unit offers composite video output, but there are adapters for both VGA and HDMI. An optional case for CHIP enables it to work with a touchscreen and a keyboard. The entire package is the size of an original Game Boy.

All this may not be surprising, since there have been prior competitors with better specifications and more features than the original RBPi. However, all the competitors so far were unable to beat the price factor – they were all more expensive than the RBPi. This is the first challenger bringing the price lower than that of an RBPi – the basic unit of CHIP costs only $9. The Next Thing Co., the manufacturer, calls this the “world’s first nine dollar computer,” and in its opinion, CHIP is “built for work, play and everything in between.”

Along with a lower price tag, CHIP has a smaller profile than the RBPi. As it has a more powerful processor and more memory, CHIP could easily replace the RBPi as the primary choice for projects. The entire board is packed with sockets and pins. Its hardware features include UART, USB, SPI, TWI (I2C), MIPI-CSI, eight digital GPIOs, parallel LCD output, one PWM pin, composite video out, mono audio in, stereo audio out, and a touch panel input.

Users of CHIP will learn coding basics and play games on the tiny computer that may soon usurp the title of king of the budget microcomputers, so far held by the RBPi. CHIP measures only 1.5×2.3 inches and is compatible with peripherals such as televisions and keyboards. It runs Linux, works with any type of screen, and comes with a host of pre-installed applications. Therefore, users can simply make it work out of the box, without having to download anything.

Converting Scanned Images into Editable Files

The printed world and the electronic one are primarily connected through computers running the OCR or Optical Character Recognition software programs. Traditional document imaging methods use a two-dimensional environment of templates and algorithms for recognizing objects and patterns.

Current OCR methods can not only recognize a spectrum of colors, but also distinguish a document’s foreground from its background. They work with the low-resolution images that media such as cell phone cameras, the internet, and faxes provide. For this, OCR methods often have to de-skew and de-speckle the images, and apply 3-D image correction.

Primarily, OCR software programs use two different methods for optical character recognition. The first is feature extraction and the second is matrix matching. With feature extraction, the OCR software program recognizes shapes using mathematical and statistical techniques for detecting edges, ridges and corners in a text font so that it can identify the letters, sentences and paragraphs.

OCR software programs using feature extraction achieve the best results when the image is clean and straight, has very distinguishable fonts such as Helvetica or Arial, uses dark letters on a white background, and has a resolution of at least 300 dpi. In reality, these conditions are not always met. To read words accurately in less ideal circumstances, OCR techniques have turned to matrix matching.

Matrix matching falls in the category of artificial intelligence. For example, organizations such as law enforcement agencies include matrix matching in the software they use for recognizing images within video feeds. The process combines feature extraction together with similarity measurements.

Similarity measurement utilizes complex algorithms and statistical formulas to compare images relative to others within the same image or within the document. This helps to recognize images within a spectrum of colors even in 3D environments. This technology allows OCR software to recognize crooked images, images with too much background interference and images that need alteration for correct reading and interpretation. Matrix matching techniques are also better at recognizing images at a lower resolution.
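A toy example makes the matrix-matching idea concrete: score an unknown glyph bitmap against stored templates by pixel-wise similarity, and pick the best match. The 5×5 glyphs and the similarity measure here are invented for illustration; real OCR engines use far richer statistical models.

```python
# Toy illustration of matrix matching: compare a glyph bitmap against
# templates by pixel-wise similarity. The 5x5 glyphs are invented examples.

TEMPLATES = {
    "I": ["..#..",
          "..#..",
          "..#..",
          "..#..",
          "..#.."],
    "L": ["#....",
          "#....",
          "#....",
          "#....",
          "#####"],
}

def similarity(a, b):
    # Fraction of pixels on which the two bitmaps agree
    cells = [(ra[i], rb[i]) for ra, rb in zip(a, b) for i in range(len(ra))]
    return sum(x == y for x, y in cells) / len(cells)

def classify(glyph):
    # Pick the template with the highest similarity score
    return max(TEMPLATES, key=lambda ch: similarity(TEMPLATES[ch], glyph))

noisy_i = ["..#..",
           "..#..",
           ".##..",   # one noisy pixel
           "..#..",
           "..#.."]
print(classify(noisy_i))   # I
```

Because the score degrades gracefully with each mismatched pixel, the noisy glyph still matches “I” comfortably – the tolerance to interference that makes matrix matching suited to low-resolution or degraded images.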

Today, several OCR software packages include features that can de-speckle and de-skew the image. They can also change the orientation of the page. A special technique called the 3D correction can straighten images that the camera captured at an angle.

OCR has traditionally been linked with scanning software, as the scanning process offers clues that make OCR results more accurate. However, not all images are available in hard copy, and a scanner may not be readily available. Sometimes, the text to be extracted is available only in a PDF file or some other graphic file downloaded from the Internet. While older PDF files did not allow you to copy text, most modern PDF files let you select text with the mouse pointer and copy it to your clipboard.

However, advanced PDF creating software includes features to protect the text in the converted document using a password. If you want to extract text from such protected PDF documents, your OCR software program will ask you for the password.

The Raspberry Pi Goes to Zero

If you thought the legendary Raspberry Pi, or RBPi, was the smallest a single board computer could get, you need to think again. Not only has the famous SBC shrunk in size, it has become a lot cheaper as well. The charitable Raspberry Pi Foundation, which launched the best-selling computer in the UK, is now offering its next model, the RBPi Zero, and in the US it costs just $5.

The RBPi Zero comes with 512MB of RAM and a core that is 40 percent faster than the one the original RBPi came with. The miniaturized SBC sports a mini-HDMI port and two micro-USB ports, one of them for power. Comparing the RBPi Zero with the first RBPi, the Raspberry Pi Foundation says the Zero is equally revolutionary. They explained it would be manufactured in Wales and run the full Raspbian, including applications such as Minecraft and Scratch.

Similar to the requirements for the RBPi, the RBPi Zero requires the user to attach their own power supply, keyboard, mouse or other input device, and display screen. The cost of the new board is low because several components from the RBPi board are no longer present or have been simplified. According to Eben Upton, the founder of Raspberry Pi, every component on the new board justifies its existence.

However, cutting features was not the only way of getting the RBPi Zero down to its bare-bones price of $5. The major contribution comes from the grand success of its predecessor, the RBPi, the most successful computer in the UK for decades. The massive sales have enabled the Foundation to cut costs to unimaginable levels; the sheer sales numbers have given it economies of scale.

One way of reducing the cost of the RBPi Zero was keeping all components on one side of the board instead of two – this simplified manufacturing by removing half the assembly costs. According to Upton, moving the physical layout around and trimming the cost of metal connectors also made an impact.

In the redesigned RBPi Zero, the engineering response to the constraints of space and cost has produced an extraordinarily aesthetic board. The precision and beauty of the Zero come out in its compactness and symmetry. Just like its predecessor, nothing is hidden; all its inner workings are exposed to anyone with an interest. As Upton says, it is nice when things look attractive because they are functional.

The small form factor of the RBPi Zero makes it simple to use the board in many more projects, whether robotics or Internet-connected devices. The easy-to-use board massively increases creative possibilities. You can use the RBPi Zero in places where the RBPi would be difficult to fit. The Zero, a full-featured computer, provides raw power somewhere between the first and second generations of the RBPi.

The launch plans for the Zero are massive, with tens of thousands of units ready to ship. Raspberry Pi magazines such as The MagPi will feature a free RBPi Zero with 10,000 of their issues. Upton is expecting five such launch partners.