Future of Computer Technology
Jan 1, 2013

It was only a decade or so ago that a phenomenon called the “Internet” came along and changed the way we communicated, did business, and conducted our lives. Now, computer technology has become an essential and significant part of our daily lives. But how did it all start, and where is it heading?

I entered the world of computers at an early age. I had an Atari 800XL, introduced in 1983, and I was doing some basic programming. The computer had a full 64K of memory and a 1.8 MHz processor (CPU), and it looked like a bulky keyboard. I enjoyed spending a lot of time on games and programming; however, I felt very limited by its capabilities. More than 20 years later, I still feel the same way from time to time about my 2.8 GHz eight-core desktop computer with 8 GB of memory. While computers keep getting faster and more powerful, the computing power we need just to complete our daily tasks at home and work keeps increasing as well. I have always wondered how it all started and where we are heading with computer technology.

Over forty years ago, Intel co-founder Gordon Moore predicted that the number of transistors incorporated in a chip would double every 24 months. This prediction, popularly known as Moore’s law, has been invoked countless times by futurists [1] and science fiction writers. For the last half century, computers’ functionality and performance have increased in line with Moore’s law while costs have decreased. However, fundamental barriers in semiconductor technology are emerging, including power requirements and the limits of manufacturing at atomic dimensions. The computer industry is already working on technologies to keep Moore’s law alive.
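As a rough sketch of what doubling every 24 months means, the short projection below starts from the roughly 2,300 transistors of Intel’s first microprocessor in 1971; the baseline figures are commonly cited approximations used here only for illustration:

```python
# Moore's law as a simple doubling rule: transistor count doubles every
# 24 months. Baseline: the 1971 Intel 4004, with roughly 2,300 transistors
# (an approximate, commonly cited figure).

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project the transistor count for a given year under Moore's law."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(year, f"{transistors(year):,.0f}")
```

Running the projection shows how twenty doublings turn thousands of transistors into billions by 2011, which is the right order of magnitude for the chips of that era.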

Intel is already experimenting with 3D transistors that are smaller, faster, and more energy efficient, using a 22 nm (nanometer, 10⁻⁹ m) manufacturing process compared to today’s 32 nm systems. This will be a significant step forward in building more transistors onto silicon chips. Another approach is to replace the silicon in transistors. In 2010, IBM showcased a graphene transistor running at 100 GHz, with a potential of up to 1000 GHz. A graphene layer is only one atom thick, with a honeycomb-like structure of carbon atoms. Graphene has unique electrical, optical, mechanical, and thermal properties with promising applications in many industries.

In 1971, a theoretical prediction was made for the missing link in electronics: the memristor, a fourth fundamental element to supplement the resistor, capacitor, and inductor, which form the basis of today’s electronic devices. HP’s demonstration in 2010 showed a resistor with memory, one that remembers the electric charge that has flowed through it even when the power is turned off. Because it requires very little energy to store information, memristor-based memory promises to be ten times more power-efficient and ten times faster than its current counterparts.
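The memory effect can be sketched numerically: a memristor’s resistance depends on the charge that has flowed through it, so with no voltage applied the state simply stays put. The linear model and all constants below are illustrative assumptions, not parameters of HP’s actual device:

```python
# A toy memristor: resistance slides between r_off and r_on as charge flows
# through the device. Zero voltage means zero current, so the internal state
# (the "memory") is preserved when power is removed. Model and constants are
# illustrative, not HP's device parameters.

class Memristor:
    def __init__(self, r_on=100.0, r_off=16000.0):
        self.r_on, self.r_off = r_on, r_off
        self.w = 0.0                      # internal state, between 0 and 1

    def resistance(self):
        # Resistance interpolates between r_off (w = 0) and r_on (w = 1).
        return self.r_on * self.w + self.r_off * (1.0 - self.w)

    def apply(self, voltage, k=500.0):
        # Current through the device drifts the internal state; zero voltage
        # gives zero current, so the state does not change.
        current = voltage / self.resistance()
        self.w = min(1.0, max(0.0, self.w + k * current))

m = Memristor()
for _ in range(100):
    m.apply(1.0)                          # drive the device with a voltage
remembered = m.resistance()
m.apply(0.0)                              # power off: nothing changes
print(m.resistance() == remembered)       # → True, the state survived
```

The design point is that resistance is a function of history rather than of the instantaneous voltage, which is what makes the device usable as non-volatile memory.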

In 1994, Leonard Adleman [2] proposed DNA computing to solve the famous Hamiltonian path problem, finding a route through a network that visits every node exactly once. Since then, many approaches have been developed to utilize the properties of DNA for computing. In 2010, researchers at the California Institute of Technology demonstrated the most advanced DNA computer to date, one that can calculate square roots. This approach could be put to use inside living organisms to perform vital tasks such as disease detection. The promise of DNA computers rests on the parallel processing capabilities of DNA molecules, which can try many possibilities at once with low power requirements [3]. Further advancements in DNA-based computing in the coming decades could bring faster, lower-powered computers into our daily lives.
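Adleman’s 1994 experiment encoded the directed Hamiltonian path problem in DNA, letting molecules explore all candidate paths at once. A toy classical analogue of that search, enumerating candidates one by one instead of in parallel, looks like this (the small graph is an illustrative assumption):

```python
# Brute-force search for a Hamiltonian path (a path visiting every vertex
# exactly once) in a small directed graph. DNA computing explores all such
# candidate orderings simultaneously as molecules; here we check them one
# at a time. The graph below is an illustrative example.

from itertools import permutations

edges = {(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)}

def hamiltonian_paths(vertices, edges):
    """Yield every ordering of the vertices whose consecutive pairs are edges."""
    for order in permutations(vertices):
        if all((a, b) in edges for a, b in zip(order, order[1:])):
            yield order

print(list(hamiltonian_paths(range(4), edges)))   # → [(0, 1, 2, 3)]
```

With n vertices there are n! orderings to test, which is why a medium of molecules that tries them all in parallel is so attractive for this kind of problem.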

In 1981, the famous physicist Richard Feynman speculated that computers obeying quantum mechanical laws might best simulate real-world quantum systems, a task that is a big challenge even for today’s fastest supercomputers. Since then, researchers have been racing to build quantum computers, which rely on quantum mechanics to perform their operations. Quantum computers use qubits, which unlike classical bits can hold a superposition of 0 and 1, and exploit properties like entanglement [4], in which measurements on particles remain correlated regardless of the distance between them. Shor’s algorithm, formulated by Peter Shor in 1994 for prime factorization, is a powerful example: a quantum computer running it could break encryption schemes like RSA far more quickly than today’s supercomputers. Quantum computers can perform certain computations at much higher speeds than traditional computers and can solve more complex problems. Recent developments in quantum computing, such as quantum photonic chips [5] and the first commercial quantum computer by D-Wave, show that we might be closer than we think to having our very own quantum computers in the coming decades.
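The number-theoretic core of Shor’s algorithm can be sketched classically. The quantum speedup comes entirely from finding the period r of aˣ mod N; below that step is done by brute force, so the sketch only works for tiny numbers like N = 15 (the particular choice of N and a is illustrative):

```python
# Factoring via period finding, the arithmetic at the heart of Shor's
# algorithm. A quantum computer finds the period exponentially faster;
# everything else is classical post-processing. Brute-forcing the period,
# as done here, only illustrates the math on small numbers.

from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a**r % n == 1 (the quantum step in Shor's algorithm)."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_classical(n, a):
    if gcd(a, n) != 1:
        return gcd(a, n)           # lucky guess already shares a factor
    r = find_period(a, n)
    if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
        return gcd(pow(a, r // 2) - 1, n)
    return None                    # this choice of a fails; try another

print(shor_classical(15, 7))       # → 3, a nontrivial factor of 15
```

RSA’s security rests on the assumption that no classical computer can find such periods (and hence factors) for numbers hundreds of digits long, which is exactly the step a quantum computer would accelerate.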

While the greater shift in computing might stem from changes in the underlying technology for processing information, computer form factors (desktop, laptop, tablet, phone, etc.) have a more direct effect on our daily use of computers. Human-computer interaction changed significantly with the introduction of smartphones (e.g. iPhone, Android phones) and tablets (e.g. iPad), moving from the keyboard and mouse as the primary input methods towards touch screens.

Today’s touch screens, although they provide virtually unlimited input arrangements on screen, lack tactile feedback compared to keyboards. New keyboard designs with a small screen on each key open up endless customization of the input, but they are still limited by the physical shape of the key (usually a cube). Recent developments in touchscreen design let users feel clicks, vibrations, and other tactile cues through “haptic technology,” which takes advantage of the user’s sense of touch and provides feedback by applying forces, vibrations, or motions to the user. As seen in early prototypes, flexible screens and electronics will provide a more realistic feel for human-computer interaction in the coming decades by shifting their forms into specific shapes (e.g. a game pad, key, or wheel).

Another level of human-computer interaction eliminates touch altogether. Apple introduced Siri, a smart virtual assistant, in 2011 as part of its iOS operating system for iPhones and iPads. Siri can analyze complex spoken requests to carry out many tasks, including scheduling a meeting, creating a reminder, typing and sending SMS messages, and many other functions available on smartphones. Microsoft introduced Kinect in 2010, a motion-sensing device that lets users control and interact with the game console using gestures and spoken commands. The Kinect interprets specific gestures by using an infrared projector and camera to track the movement of objects and individuals in three dimensions.

Samsung introduced a 46’’ transparent display using LCD technology in 2010, and demonstrated flexible displays at CES 2011. Transparent and flexible displays will easily find uses in wearable electronics such as contact lenses and glasses. An obvious application of transparent displays is augmented reality (AR), where information is displayed on top of real-world images. While today’s smartphones and tablets achieve augmented reality by combining information with the real-time video feed from the device’s camera, a transparent display eliminates the need for a camera. Current AR technology includes head-mounted displays and virtual retinal displays for visualizing the information, and it is widely applied in areas including entertainment, advertising, gaming, navigation, education, military applications, and information sharing.

While one direction in display technology is larger, transparent, and flexible displays in different forms, another is to reduce the size of the display or even eliminate it through projection and holograms. Current trends in projectors include 3D projection, synchronization of multiple projectors, and pico projectors. The world’s smallest glass lens (1 mm x 1 mm), introduced in 2011, will help minimize some of the problems of projectors, such as size, power, and heat, and improve their integration into smartphones and tablets.

Since the first holographic video display was demonstrated at the MIT Media Lab in 1989 [6], prototypes of holographic displays have progressed significantly, demonstrating full-color animated 3D images. While developments in 3D displays are promising, holography provides the best 3D experience because it is closest to how we see our environment. A hologram uses an optical effect called “diffraction” to reproduce the light that would have come from an object, making the image of the object appear in front of the viewer; it is even possible to view objects from different angles by walking around them. Unlike other 3D display systems, holographic displays do not require special glasses and allow multiple viewers to experience the scene from different angles at the same time. Future applications of holography span the health, entertainment, and communication sectors, from 3D movies to telepresence.

All of the above examples and trends treat computing as a product. Cloud computing, by contrast, can be defined as the delivery of computing as a service, where shared resources, software, and information are provided over a network. It describes a new delivery and consumption model built on dynamic scalability and the virtualization of resources: users can upgrade storage and computing power on demand, without hardware changes to the base system, and organizations can avoid investing in expensive hardware and additional staff.

Many technologies of the coming decades are featured in science fiction books and movies such as Minority Report, Star Trek, and Star Wars. While some of them are already available to consumers, others might be decades away. Motivation for advances in computer technology usually stems from our needs and desires; at the same time, new technologies significantly shape consumer behavior and increase our dependence on them. Our economy is structured so that we all consume more and more, even if that means disposing of perfectly good devices. Do we really need a new computer or phone every year? Most of us don’t.

Computer technologies are a significant part of our daily lives, and the line between products and services is becoming thinner as our dependence on computers grows in every aspect of life. While they improve the quality of our lives by making daily tasks easier, computer and Internet technologies can also affect us in other ways; they have already started to change how we read, write, and even communicate with others. Many concerns have been raised about the negative effects of technology use, including Internet addiction, privacy, attention span, concentration, time consumption, anxiety, isolation, depression, digital security, communication disorders, and various health issues. The challenge for us is to understand the benefits of technology, strike a balance in our use of and dependence on it, and protect ourselves from its adverse effects.

Acknowledgment: This article was produced at Mergeous [7], an online article and project development service for authors and publishers dedicated to the advancement of technologies in the merging realms of science and religion.


1. Kaku, Michio. 2011. Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100, Knopf Doubleday Publishing Group.

2. Demir, Halil I. 2011. Super Computers in a Cell, The Fountain, Issue 80, March - April.

3. Adleman, Leonard M. 1994. "Molecular Computation of Solutions to Combinatorial Problems," Science, 266 (11), 1021–1024.

4. Demir, Halil I. 2011. Quantum Worlds from Entanglement to Telepathy, The Fountain, Issue 84, November – December.

5. Shadbolt, P. J. et al., Generating, manipulating and measuring entanglement and mixture with a reconfigurable photonic circuit, arXiv:1108.3309v1 [quant-ph].

6. Hilaire, P. St., S. A. Benton, M. Lucente, M. L. Jepsen, J. Kollin, H. Yoshikawa and J. Underkoffler. 1990. "Electronic display system for computational holography." In Practical Holography IV, Proceedings of the SPIE, volume 1212-20, pp. 174-182, Bellingham, WA.

7. Mergeous, online article and project development platform,