Computers and Evolution
At first glance, the average person is familiar with only the last 30 years of computer history. In fact, the origins of the computer, in the form of simple counting aids, date back at least 2,000 years. The abacus was invented around the 4th century BC in Babylonia (now Iraq). Another device, the Antikythera mechanism, was used around the 1st century BC to track and predict the motion of the stars and planets. Wilhelm Schickard built the first mechanical calculator in 1623, but the device never made it past the prototype stage. His calculator could work with six digits and carry digits across columns.
First-generation computers (1939-1954) used vacuum tubes to compute. The vacuum tube had been developed by John Ambrose Fleming in 1904 and was used in radios and other electronic devices through the 1940s and into the 1950s. Most computer development during this time served military purposes. During World War II, “Colossus” (December 1943) was built in secret for the codebreakers at Bletchley Park to help decode German messages. The ENIAC (Electronic Numerical Integrator and Computer) was developed for the Ballistics Research Laboratory, in Maryland, in 1945 and was used to assist in the preparation of firing tables for artillery. The UNIVAC (Universal Automatic Computer), developed in 1951 by Remington Rand, was the first commercially sold computer; the Census Bureau purchased one on June 14, 1951. It contained a magnetic storage system and tape drives and was so large it filled a garage-sized room. The UNIVAC contained 5,200 vacuum tubes and weighed about 29,000 pounds. The UNIVAC I, an upgrade of the original UNIVAC, was used to calculate and predict the winner of the 1952 presidential election. Interestingly, the TV networks at first refused to trust UNIVAC I’s prediction.
Second-generation computers (1954-1959) used transistors rather than vacuum tubes. John Bardeen, Walter Brattain, and William Shockley developed the first transistor in December 1947, in an attempt to find a better amplifier and a replacement for mechanical relays. The vacuum tube, although it had been in use for nearly 50 years, consumed a great deal of power, ran hot, and burned out rapidly. Transistors provided a new, more efficient way to compute. International Business Machines (IBM) dominated the early second-generation market. IBM, with Tom Watson Jr. as CEO, introduced the transistorized model 604 computer in 1953; it evolved into the 608 in 1957, the first solid-state computer sold on the commercial market. IBM had several other significant developments during the same time frame: the 650 Magnetic Drum Calculator, which used a magnetic drum memory rather than punched cards, and the 701 scientific “Defense Calculator,” a series that dominated mainframe computing for the next decade. Although IBM dominated the second generation, several other companies developed computer systems. In 1956, Bendix sold a small business computer, the G-15A, designed by Harry Huskey, for $45,000.
Third-generation computers (1959-1971) were built with integrated circuits (ICs). An IC is a single chip containing many transistors. Three companies played major roles in the development of third-generation computers. The first IC was patented by Jack Kilby of Texas Instruments (TI) in 1959. Although IC development started in 1959, it wasn’t until 1963 that a commercial product, an IC hearing aid, was sold. IBM again played a major role during the third generation. It produced SABRE, the first airline reservation tracking system, for American Airlines, and announced the System/360, an all-purpose mainframe computer built around the 8-bit byte. Digital Equipment Corporation (DEC) introduced the first commercially successful minicomputer, the PDP-8, in 1965. This was a smaller version of the normal computer systems of the day; the “minicomputer” was named after the “miniskirt” of the 1960s. Early computer applications were also developed during this time. In 1962, Ivan Sutherland demonstrated “Sketchpad,” which ran on a mainframe computer and let engineers make drawings on the computer using a light pen. In 1968, Doug Engelbart demonstrated an early word processor. Toward the end of the third generation, the Department of Defense began developing Arpanet (the precursor of the Internet), and Intel Corp. started producing large-scale integrated (LSI) circuits.
The microprocessor was developed in the early 1970s, and the period from 1971 to the present is generally known as the fourth generation of computer development. There have been many advances in computer technology during this time. Gilbert Hyatt, at Micro Computer Co., filed a patent application for a microprocessor in 1970, though it was not granted for another two decades. Ted Hoff, at Intel Corp., introduced the first commercial 4-bit processor, the 4004, in February 1971. Intel followed with the 8-bit 8008 in 1972 and the 8080 in 1974; a descendant of the 8080 design, the 8088, was the processor IBM chose for its original IBM PC, sold commercially in the early 1980s. The Control Program for Microcomputers (CP/M) was the earliest widely used microcomputer operating system and ran on these early 8-bit microprocessors. Many of the components seen on modern computers were developed in the 1970s. IBM built the first sealed hard drive in 1973. It was called the “Winchester,” after the rifle company, and had a total capacity of 60 megabytes. Xerox developed Ethernet in 1973, one of the first technologies that allowed computers to talk to each other over a network. The graphical user interface (GUI) was developed at Xerox in the early 1970s; common GUIs seen today are Apple’s macOS and Microsoft’s Windows. In 1976, one of the companies that revolutionized microcomputer development was founded: Apple began as a startup in 1975-76, and Jobs and Wozniak released the Apple personal computer in 1976. In 1977, the home video game industry took off. Nintendo began making games that stored data on chips inside game cartridges; early popular titles included “Donkey Kong” (1981) and “Super Mario Brothers” (1985). Probably the most significant software event was the 1980 contract between IBM and Microsoft’s Bill Gates: IBM hired Microsoft to supply an operating system for its new desktop PC, and Microsoft bought QDOS from Seattle Computer Products and developed it into MS-DOS.
This contract launched Microsoft’s rise into what is now the largest software company in the world. Another important event took place in 1987, when Bill Atkinson of Apple Computer developed a program called “HyperCard.” HyperCard used hypertext and was a predecessor of the graphical, linked environment used on the World Wide Web today.
Fifth-generation computing (the present and beyond) encompasses common use of the Internet, World Wide Web, virtual reality, Artificial Intelligence, and daily use of sophisticated technological innovations.
Several important events set the stage for fifth-generation computing: the development of the World Wide Web in 1991 by Tim Berners-Lee; the first widely used graphical Web browser, Mosaic, in 1993; the release of Netscape Navigator in 1994; and the release of Internet Explorer by Microsoft in 1995. Today, technology and computing are moving forward at an ever-increasing rate. The Web browser is now the common program used to navigate the Internet. As computers increase in power, virtual reality is becoming common as well. Doctors can use virtual reality to rehearse an operation before the real surgery. Pilots log hundreds of hours in flight simulators before ever setting foot in the cockpit of an airplane, and astronauts train for complex maneuvers before takeoff. Computers are becoming smarter as well: artificial intelligence and expert systems are being developed daily. The increase in technology has spun off numerous computer-like devices, such as smart cell phones, MP3 players, and many other personal portable computers.
It’s interesting to note that as the computer has evolved to support ever more sophisticated software, it is now used to simulate and model everything from human evolution to the weather. Information gathered from anthropological finds can be entered into computers, enabling the simulation of prehuman-to-human evolution. By modeling human evolution, scientists can learn, among other benefits, more about natural selection and the processes all life goes through. Computers also build climate change models by analyzing environmental data gathered from sensors around the world. These models can forecast what the environment might be like in 50 or 100 years and help humankind prepare for future environmental shifts.
The fast pace of advancing technology has led to serious human physical and psychological conditions. Since the computer has become a necessary component of everyday business, the work environment has seen an increase in repetitive stress injuries (RSIs). RSIs include carpal tunnel syndrome (CTS), tendonitis, tennis elbow, and a variety of similar conditions. The field of computer ergonomics attempts to improve worker productivity and reduce injuries by designing computer equipment that adjusts to the individual’s natural body positions. Technostress, a term originally popularized by Sethi, Caro, and Schuler, refers to stress associated with the continually changing and uncertain technology environment individuals face at work or at home. As a result of this rapid, uncertain change, humans, probably more than at any point in history, must be able to adapt quickly to new situations and environments.
As the computer continues to change the world, we will undoubtedly see more technological innovations in the near future. The computer is indeed quickly evolving into a new form that, today, we cannot imagine.
Computers and Research
In the past, research required significantly greater time to complete than today. Data had to be gathered, then analyzed by hand. This was a very slow, tedious, and unreliable process. Today, computers take much of the manual labor away from research. Primarily, computers assist researchers by allowing them to gather, then analyze, massive amounts of data in a relatively short period of time.
Even though scientists began identifying and understanding DNA in depth in the 1950s, detailed analysis could not be performed until technologies were able to record and analyze the volumes of data associated with DNA research. The Human Genome Project, coordinated by the Department of Energy (DOE) and the National Institutes of Health (NIH), began in 1990 and resulted in the sequencing of the human genome. The goals of the project were to identify all of the approximately 30,000 genes in human DNA; to determine the sequences of the 3 billion chemical base pairs that make up human DNA; to store this information in databases; to improve tools for data analysis; to transfer related technologies to the private sector; and to address the ethical, legal, and social issues (ELSI) that might arise from the project. The Human Genome Project was originally intended to last 15 years but was completed in just 13, largely due to advances in computer technology.
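As a toy illustration of the kind of basic sequence analysis that genome data-analysis tools automate at vastly larger scale, the short sketch below counts bases and computes GC content for a DNA fragment. The fragment itself is invented for illustration.

```python
# A toy sketch of elementary DNA sequence analysis of the kind the Human
# Genome Project's data-analysis tools perform at far larger scale.
# The fragment below is invented for illustration.

from collections import Counter

def base_counts(sequence):
    """Count occurrences of each base (A, C, G, T) in the sequence."""
    return Counter(sequence.upper())

def gc_content(sequence):
    """Fraction of bases that are G or C, a common summary statistic."""
    counts = base_counts(sequence)
    return (counts["G"] + counts["C"]) / len(sequence)

fragment = "ATGCGGCTA"
counts = base_counts(fragment)   # 2 A, 2 T, 3 G, 2 C
gc = gc_content(fragment)        # 5 of 9 bases are G or C
```

Real tools apply the same idea to billions of base pairs, which is exactly why the project depended on computers.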
Technologies such as distributed computing (thousands or even millions of computers working on the same project at the same time) and the Internet have enabled new research methodologies. For example, when a home computer is turned on, its microprocessor sits idle most of the time, regardless of the task the user is performing. Distributed processing takes advantage of this idle time by running programs in the background; the user is usually unaware another program is running. The SETI@Home project is one example of how distributed processing can be used in research. This project uses a screen-saver program, designed for home computers, that analyzes radio signals from outer space for patterns or other signs of alien life. Individuals volunteer their home computers to the program. Each computer receives data from a radio telescope in Puerto Rico, analyzes it, and returns the results; the screen-saver program is the distributed client, interfacing with the project through the Internet. Mainframe computers are typically used for this kind of analysis but are very expensive to run, so distributed computing significantly reduces research costs.
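The fetch-analyze-return cycle described above can be sketched in a few lines. This is only an illustration of the idea, not SETI@Home's actual code: the work units, the threshold, and the in-memory queue standing in for the project server are all invented here.

```python
# A minimal sketch of the distributed-computing work cycle described above.
# This is an illustration, not SETI@Home's real protocol; the server is
# simulated by an in-memory queue, and the "analysis" is a toy threshold test.

from statistics import mean

# Hypothetical work units: each is a chunk of signal samples to analyze.
work_queue = [
    {"id": 1, "samples": [0.1, 0.2, 0.1, 0.3]},
    {"id": 2, "samples": [5.0, 5.1, 4.9, 5.2]},
]

def fetch_work_unit(queue):
    """The client asks the server for the next chunk of data."""
    return queue.pop(0) if queue else None

def analyze(unit, threshold=1.0):
    """Run the analysis locally (in reality, during idle CPU time)."""
    signal_strength = mean(unit["samples"])
    return {"id": unit["id"], "interesting": signal_strength > threshold}

def run_client(queue):
    """The background loop: fetch, analyze, report, repeat."""
    results = []
    while (unit := fetch_work_unit(queue)) is not None:
        results.append(analyze(unit))   # result is "returned" to the server
    return results

results = run_client(work_queue)
```

Scale this loop across millions of volunteer machines and the combined throughput rivals a mainframe at a fraction of the cost, which is the point the paragraph above makes.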
Computers can be used for modeling. Modeling is similar to building a virtual prototype. For instance, rather than an auto manufacturer physically building a new car, then testing it for safety, a computer model is virtually created. That model can then be tested as though it were a real car. The modeling process is quicker and less expensive than traditional methods of testing car safety and performance and allows for a greater variety of tests in a short time frame.
Computers are also used to assist communication between researchers in geographically distant locations. Researchers in Puerto Rico can easily and instantly communicate with researchers in Hawaii, and researchers from eastern European countries can collaborate with their peers in the West. The ability to share resources and knowledge creates an environment where people from many different geographical areas, backgrounds, and experiences can effectively merge into a more productive research team.
Computers are being used in education research to better understand how individuals learn. By knowing how individuals learn, educational programs can be tailored so each person can learn more efficiently.
Ironically, computers are being used in research to learn how humans interact with computers. By understanding the interaction process, software can be designed to be more intuitive and easier to use, which increases user satisfaction and boosts productivity.
Computers impact every facet of research, from education to space research. The ability of the computer to quickly analyze and store massive quantities of data has been a key to the success of the computer in research.
Computers and Genetics
The field of genetics, or the study of genes, is incredibly complicated and generates massive amounts of data. The computer-aided management and analysis of this data, often called bioinformatics, has become a cornerstone of biotechnology, and computers are an absolute necessity in the field.
Computers help scientists get a three-dimensional visualization of long strings of DNA. Before the advent of computer use in genetics, scientists were able to make only rough guesses as to the makeup of DNA structure.
Computer technology is necessary for managing and interpreting the large quantities of data generated in a multitude of genetic projects, including the Human Genome Project and companion efforts such as sequencing the genomes of model organisms. Information in all forms of biotech databases, such as nucleotide sequences, genetic and physical genome maps, and protein structure information, has grown exponentially over the last decade. As the quantity of data increases, computers become even more important in managing access to information for scientists worldwide. Around the world, there are hundreds of large databases used in genetic research, and for researchers to obtain accurate information, it is often necessary to access several different databases.
Computers are able to interface between different types of databases using programs such as Entrez for text term searching. Entrez is a tool used for data mining (searching many databases for specific information such as trends or patterns). Entrez has access to nucleotide and protein sequence data from over 100,000 organisms. It can also access three-dimensional protein structures and genomic-mapping information. Access to this data is important for scientists to understand the DNA structure of organisms. There is similar software used for sequence similarity searching, taxonomy, and sequence submission.
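The value of pulling related records from several databases at once can be shown with a toy sketch. This is not Entrez itself: the two "databases" below are invented in-memory dictionaries, and a real tool would query remote servers holding millions of records.

```python
# A toy sketch of cross-database lookup of the kind tools like Entrez
# automate at far larger scale. The two "databases" and their records
# are invented for illustration.

# Hypothetical nucleotide database: gene id -> sequence record.
nucleotide_db = {
    "gene42": {"organism": "E. coli", "sequence": "ATGAAA"},
}

# Hypothetical protein database: gene id -> protein record.
protein_db = {
    "gene42": {"protein": "ExampleKinase", "length_aa": 2},
}

def cross_search(gene_id):
    """Combine the matching records from both databases into one view."""
    result = {}
    if gene_id in nucleotide_db:
        result.update(nucleotide_db[gene_id])
    if gene_id in protein_db:
        result.update(protein_db[gene_id])
    return result

record = cross_search("gene42")
# record now holds sequence data and protein data for the same gene
```

The design point is the shared identifier: because both databases key their records the same way, one query can assemble a unified view, which is what lets scientists move between nucleotide, protein, and structural data.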
Among the benefits computer technology has brought to the field of biotechnology is the ability to increase the rate at which pharmaceutical drugs can be developed. Screening is a process by which researchers learn how a chemical or natural product affects the disease process. Using computer technology, researchers are now able to screen hundreds of thousands of chemical and natural product samples in the same time a few hundred samples were screened a decade ago. Modern computer technology has enabled the discovery of thousands of new medicines in an ever-shortening time frame.
In the future, computers will be able to simulate cells at two different levels. At the atomic level, simulation will let scientists learn how proteins fold and interact; this basic interaction is important to understand, since proteins are the building blocks of all life. On a larger scale, computers will simulate networks of biochemical compounds, letting scientists learn more about cell metabolism and regulation. By understanding how the cell works and being able to simulate it, scientists could then build larger biological models. Rather than testing the effects of drugs on animals or humans, scientists would be able to run the same tests on virtual organisms. Scientists could even create a simulated model of an individual patient and test the effect medications would have on that person’s system, enabling doctors to treat patients more effectively.
Organizations have been established to create and maintain biomedical databases. The National Center for Biotechnology Information (NCBI) was created in 1988 for this purpose. Among NCBI’s responsibilities are conducting research on fundamental biomedical problems at the molecular level using mathematical and computational methods; maintaining collaborations with NIH institutes, academia, industry, and other governmental agencies; and fostering scientific communication by sponsoring meetings, workshops, and lecture series.
Computers and Education
Computers have changed the face of education. Basic computer skills are becoming more necessary in everyday life. Every facet of education has been affected by computer technology. English, philosophy, psychology, and history teachers now have a wide range of informational and educational resources and teaching tools accessible through the Internet. Mathematicians use computers to better understand equations. Science teachers use computers to gather and analyze large quantities of experimental data. Health and human performance (physical education) teachers are able to use computers to model human anatomy, which provides insight into the cause and prevention of sports injuries. Computer science instructors teach a variety of skills, such as programming, networking, and computer applications. Education in each discipline is important to the success of children worldwide.
The advent of the computer in education has changed many teaching methods. Teachers have traditionally used textbooks and lectured about a particular topic. Today, computer technology has brought interactive learning methodologies into the classroom. Computer simulations are common. Using the Internet for research is common as well.
Computers in education provide students with a wide array of diverse learning techniques. Some students excel at individually paced courses, while others learn better working in groups. Computers provide a means of improving the learning environment for each type of learner. For example, individual students can use a program on compact disc (CD) that provides course content, complete with quizzes and exams. This enables students to work at their own pace, mastering each unit and then continuing with subsequent lessons. Computers and the Internet can be used by groups of students as a tool for collaboration, enabling them to work together even though they are geographically separated.
In today’s busy world, many individuals take classes online to advance their education and careers. Distance courses provide supplemental classes to high school students and lifelong learners, and distance education is becoming more prevalent as computer technology improves. Home computers with Internet connections are now faster and easier to use. Students enrolled in distance courses today can expect to take part in discussions and chats, view images and video, and be given a long list of course-specific Internet resources. Courseware (software teachers use to organize and deliver online course content) is becoming friendlier and more efficient to use, and it not only makes distance learning easier but is used to supplement onsite courses as well.
Children with special needs benefit from computer technology in the classroom. A general class of computer technologies that helps children with special needs learn and function is called “assistive technologies.” There are many ways assistive technologies help children with disabilities to learn. For instance, applications provide cause-and-effect training activities, a beneficial learning style for special needs children. In more severe cases, assistive technologies offer students with cerebral palsy and other debilitating conditions a way to learn through the use of speech-generating devices (augmentative and alternative communication, or AAC). Assistive technologies also help students with hearing and visual impairments by using computers as an interface to learning environments.
Computers have changed the face of education in business as well. Today, keeping up with current technology is necessary for companies to remain competitive in a global market, and employees must continually upgrade their skills in order to remain valuable to the company. Computers allow individuals to update their skills through both online professional development and computer-based training applications. Some companies have developed extensive industry-specific curricula, creating a unique learning environment that is partly online and partly onsite. In these programs, industry employees are able to learn computer-networking concepts in a combined-media format containing elements such as text, images, audio, video, and simulations. High school and college students may also participate in this online learning environment.
Computers are used to provide a variety of assessments, ranging from computerized versions of the traditional quiz or exam to interactive, skills-based exams. Individuals learn best in different ways: some are visual learners, while others are better at memorizing information. The computer has provided the means to create a wider variety of assessments, enabling teachers to better determine students’ knowledge and skill in a particular discipline or content area. Once individuals are assessed, computers can analyze the data, allowing administrators and teachers to monitor and analyze learning trends.
With the world becoming more technical, it is necessary to learn about computers in every educational grade. Whether it is learning about computers or using computers to teach other disciplines, computers are key in the success of today’s children as well as adult learners. Computers are the way we work today. With the world and technology changing ever more quickly, it is more important than ever that computers be included in every facet of education.
Many developing countries are now building internal networking infrastructure, and the world continues to get smaller. The Internet has enabled children around the world to collaborate and communicate with each other. It has brought similar teaching methodologies to the forefront worldwide, creating a learning environment unlike any the world has seen.
Computers and the Global Village
The world is continually shrinking thanks to the advent of electronic media such as radio and television and, more recently, the computer and the Internet. These technologies have electronically interconnected the world. Marshall McLuhan coined the phrase “global village” in the early 1960s. A professor at the University of Toronto’s St. Michael’s College, McLuhan studied the effects of mass media on behavior and thought and wrote several books about the effect of media on humankind, predicting worldwide electronic connectivity as early as the mid-1960s.
What is the global village? We’ll start by defining the word village. What is a village? A village is local. You pass villagers each day on the street. You live next door to villagers. You discuss neighborhood events with the villager who lives next door. Villagers with common interests gather for meetings at the local public school. They gather to socialize at restaurants and other locations. Everyone in the village is connected in some way. This village can be your neighborhood or the city where you live. News, gossip, and community events are known commonly throughout the village. Fires, deaths, and other important community news spread rapidly throughout the community. The village is geographically limited in size.
The global village has been created through the use of the electronic medium. From the 1920s through the 1960s, it was represented by radio, television, movies, and the telephone. One could experience events around the world through these mediums. Regardless of physical location, individuals were able to experience the stock market crash of 1929, the Japanese attack on Pearl Harbor in 1941, the Cuban missile crisis of 1962, and the social movements of the late 1960s, in much the same way villagers experience local events within their community. The 1970s saw the development of Arpanet, a U.S. Department of Defense project that connected computers from several geographical areas across the United States into one network; the modern Internet was built upon Arpanet’s technologies and concepts. The introduction of the personal computer in the early 1980s, combined with the growth of computer networks and business use of the Internet, initiated the socialization of the “net” (Internet) in the 1990s. The World Wide Web, which became widely accessible with graphical browsers in 1993, enabled this socialization, creating a common, easy-to-use interface that became the standard way to navigate and use the Internet.
Throughout the latter part of the 20th century and the first part of the 21st, the Internet has developed into the “global village” McLuhan spoke of in the 1960s. The Internet has created a social and information culture similar to the traditional village, yet in a virtual environment. You communicate or chat daily with individuals who are online. You purchase goods through online auctions. You write letters that contain pictures and movies and send them to family and friends through electronic mail. You check the headlines in the daily paper, perhaps the New York Times or the Scotsman, while living in rural Montana. Through telecommuting, you can work in large urban areas and live in less crowded rural settings. The global village concept extends to education as well: you can take a course to further your education or career from any university in the world that offers distance learning, all from the comfort of your home. This new global village, through the Internet, enables you to be a participant in worldwide events, regardless of location.
The global village has changed how we interact with information. Traditional books are being supplemented by e-books, Web sites, and other electronic sources. McLuhan said reading a book is an individual personal experience. The e-book or Web site (or other electronic medium) becomes a group experience due to the nature of the medium. The information you read is being read by perhaps 100 or 1,000 other individuals at the same time who are physically dispersed around the globe, just as you are.
The global village has in part grown out of a need for socialization. Although it is more personal to interact with individuals face-to-face, career and family needs take a significant amount of time out of our daily lives. Social interaction is an important component of healthy individuals’ lives. In today’s world, it is normal that both parents have to work to support the family. In one-parent homes, it is difficult to make ends meet with just one job. A parent will often have two or more jobs. Family obligations then take priority once the workday is done. The Internet acts to meet socialization needs. When parents are busy at work, children are home alone until the parents get off work. Children can browse the Internet and take part in chats with friends. After the children are in bed, parents can go online e-mailing or chatting with family and friends. Individuals who live outside a family setting take advantage of the Internet as a social tool as well. Many of these individuals work long hours and haven’t the energy or desire to socialize away from home. The Internet meets this socialization need as well by creating a virtual meeting place right in your home.
Since the world is virtually growing smaller, individuals are becoming more aware of issues such as politics and culture. It has become easy and inexpensive to post information and propaganda on a Web site. This has given a voice to politically oriented groups, regardless of cause. People with similar interests gather on the net, creating communities. Communities can be created based on hobby, gender, nationality, or any other preferences. Most often, chats and discussion groups are the preferred means of community interaction within the global village. Culture (cyber, ethnic, and racial) plays an important role on the Internet. Due to its global nature, the Internet has users from many ethnic and racial groups, who form communities based upon their similar interests. Like villages or neighborhoods, cultures form within the Internet. Cyberculture is a general term for the subcultures that have developed on the Internet.
The “global village” has changed the world we live in. Although most concepts remain constant, the methods of communication change with advances in technology. In every example given, the Internet has enabled the creation of our modern global village, with its own technological, moral, ethical, and social aspects. Every aspect of the physical village is contained in the global village. Communities form regardless of physical location or medium, and individuals with similar interests associate with each other. Books will still be printed, but the global village’s medium will change the way we use the traditional printed book. Political messages remain the same, but the global village carries them farther. Culture develops and changes the way we interact with each other, both online and off.
The global village has extended our reach. It enables individuals to reach out and participate in world events instantaneously. Our friends are now global. Our education is now global. The (online) communities we are involved in are global. Social interaction has shifted from the more personal face-to-face environment to the new cybercommunity. The global village is changing the way we work, learn, communicate, and interact with others. For all the benefits the new village brings, however, there are negative aspects as well. Some say that within the cyberworld, the traditional personal environment is being supplanted by an almost isolationist mentality.
Through the use of real-time multimedia, the Internet will evolve into a more personalized experience. Internet and electronic media tools will become more intuitive. The Internet will become the facilitator of the global village, the new village within which nearly every individual on Earth will interact. In the future, the global village created by electronic media will merge with the traditional village setting, creating a new experience somewhere between the real and the virtual.
Computers and Intelligence
Artificial Intelligence (AI) is the science and engineering of making intelligent machines, especially intelligent computer programs. So, how does human intelligence differ from AI? AI is being developed to enable machines to solve problems. The goal in the development of AI isn’t to simulate human intelligence; it is to give machines the ability to make their own decisions based on specific criteria. Researchers in AI have suggested that differences in intelligence in humans relate to biochemical and physiological conditions such as speed, short-term memory, and the ability to form accurate and retrievable long-term memories. Modern computers have speed and short-term memory but lack the ability to relate experience to problems. They are unable to compare current problems to past events (“memories” based on experience).
Alan Turing, a mathematician, started researching AI in 1947. By the late 1950s, many scientists were attempting to develop AI systems through a software design approach. Turing developed a test to evaluate intelligence in machines, to see whether a machine could “pass as human” to a knowledgeable observer. He theorized the test could be conducted with the observer communicating with a computer and a person by teletype (the teletype was prevalent in the 1950s). Essentially, the observer was attempting to discern which was human and which wasn’t. Although the “Turing test” was never conducted in full, some test components have been used.
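The structure of Turing's imitation game is simple enough to sketch in code. The following is a toy illustration only, with the judge and both respondents reduced to hypothetical Python functions; it is not a reconstruction of any actual test:

```python
import random

def imitation_game(human_reply, machine_reply, judge, rounds=5):
    """Toy sketch of Turing's imitation game: a judge exchanges
    messages with two hidden respondents, then guesses which one
    is the machine. All participants are plain functions here."""
    channels = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:                 # hide who is on which channel
        channels = {"A": machine_reply, "B": human_reply}
    transcript = []
    for i in range(rounds):
        question = f"question {i}"
        transcript.append((question, channels["A"](question), channels["B"](question)))
    guess = judge(transcript)                 # "A" or "B": the suspected machine
    truly_machine = "A" if channels["A"] is machine_reply else "B"
    return guess == truly_machine             # True if the machine was unmasked

# A machine that only parrots canned text is easy to unmask:
machine = lambda q: "I am definitely a human."
human = lambda q: f"My answer to '{q}' varies."
judge = lambda t: "A" if all(a == t[0][1] for _, a, _ in t) else "B"
```

Here the judge unmasks the machine simply by noticing that its answers never vary; a real observer, of course, would rely on far subtler cues.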
Although some AI researchers’ goals are to simulate human intelligence, others feel that machines do not have to be “intelligent” in the same way humans are to be able to make decisions. Using traditional software programming, researchers at IBM developed “Deep Blue,” a computer system designed with the intelligence to play chess without human assistance. Many researchers claim the breadth of Deep Blue’s knowledge is so narrow that it doesn’t really show intelligence, since the computer only examines and then responds to chess moves; they claim that Deep Blue doesn’t actually understand a chess position. Other AI researchers claim there is an intelligence involved in Deep Blue. How does a human brain work to enable the individual to make a decision? The brain works because each of its billions of neurons carries out hundreds of tiny operations per second, none of which in isolation demonstrates any intelligence at all. As a result of these background computations, the individual is able to form conscious thoughts, which lead to intelligent decisions. Essentially, although very narrow in scope, Deep Blue computes millions of chess moves as background computation, then determines the best strategic move. Is this process intelligence? The human mind computes, then determines chess moves. The computer computes, then determines chess moves. It would seem that there is at least a level of intelligence within Deep Blue.
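Deep Blue's actual program was vastly more sophisticated, but the core idea of examining many moves and backing up the best one is the minimax principle, shown here as a minimal sketch on a toy game (the game, the callbacks, and the search depth are all invented for illustration):

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Minimal minimax search: explore moves to a fixed depth and back up
    the score, assuming the opponent also plays optimally. `moves` and
    `evaluate` are game-specific callbacks (hypothetical interfaces)."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in options:
            score, _ = minimax(m, depth - 1, False, moves, evaluate)
            if score > best:
                best, best_move = score, m
    else:
        best = float("inf")
        for m in options:
            score, _ = minimax(m, depth - 1, True, moves, evaluate)
            if score < best:
                best, best_move = score, m
    return best, best_move

# Toy game: a state is a running total; a move adds 1 or 2; the
# maximizing player wants the total high, the opponent wants it low.
moves = lambda n: [n + 1, n + 2] if n < 10 else []
evaluate = lambda n: n
score, move = minimax(0, 4, True, moves, evaluate)
```

Searching four plies ahead in this toy game, the maximizing player correctly opens with +2; a chess program substitutes board positions for numbers and a positional evaluation for the identity function.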
Epistemology is a branch of philosophy that studies the nature of knowledge, its presuppositions and foundations, and its extent and validity. Cybernetics uses epistemology, theoretically enabling computers to intelligently understand problems and determine decisions. Cybernetics and AI are similar but use different means to theoretically achieve intelligence in computers. AI involves the application in the real world of knowledge stored in a machine, implying that it is essentially a soft-coded, rule-based expert system (programmers give the computer intelligence). Cybernetics, by contrast, has evolved from a “constructivist” perspective. Under this theory, a computer learns from past experience. The computer builds a database of experiences, then correlates these to solve problems. Cybernetics calls for computers to learn, then change their behavior based upon past experience. Although AI has been at the forefront of computer intelligence for the last 50 years, there is currently renewed interest in cybernetics due to limitations in the ability to further develop AI programs.
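The constructivist idea of a machine that builds a database of experiences and correlates them against new problems can be illustrated with a deliberately simple nearest-neighbor sketch (the situations, outcomes, and similarity measure here are all hypothetical):

```python
def remember(memory, situation, outcome):
    """Store one (situation, outcome) pair in the 'database of experiences'."""
    memory.append((situation, outcome))

def decide(memory, situation):
    """Reuse the outcome of the most similar past situation.
    Situations are numeric feature tuples; similarity is negative
    squared distance (a deliberately crude stand-in)."""
    def similarity(past):
        return -sum((a - b) ** 2 for a, b in zip(past, situation))
    _, outcome = max(memory, key=lambda pair: similarity(pair[0]))
    return outcome

# Hypothetical driving experiences: features are (road hazard, open road).
memory = []
remember(memory, (0.9, 0.1), "brake")
remember(memory, (0.1, 0.8), "accelerate")
```

Faced with a new situation close to a stored one, the system behaves as it did before; in the cybernetic view, learning is simply the growth and reorganization of this memory.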
AI researchers have attempted to bridge the computer intelligence gap by developing new technologies such as “neural nets.” NASA is working on developing “fuzzy logic” and “neural net” technology for use with the Mars Technology Program, attempting to create robots that can make humanlike decisions. Fuzzy logic is closer to the way human brains work: instead of treating every proposition as strictly true or false, it reasons with degrees of truth, much as a person weighs partial evidence when making a decision. A neural network is a processing system modeled loosely on the brain’s interconnected neurons; it learns to solve problems from examples rather than from explicit step-by-step rules. These methods will allow a robot such as a Mars rover to choose a course on its own, and remember it, without the aid of a remote driver, acting according to logic, not just mechanics.
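Fuzzy logic's degrees of truth can be made concrete with membership functions. The rule below, including the slope thresholds and candidate speeds, is entirely hypothetical; it is only meant to show how partial truths blend into a single decision:

```python
def trapezoid(x, a, b, c, d):
    """Degree of membership (0..1) in a fuzzy set shaped a-b-c-d:
    zero outside [a, d], fully true on [b, c], graded in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def rover_speed(slope_deg):
    """Blend two hypothetical rules: 'if the slope is steep, creep'
    and 'if the slope is gentle, drive at full speed'. The output is
    a speed fraction weighted by how true each rule currently is."""
    steep = trapezoid(slope_deg, 10, 20, 90, 91)
    gentle = trapezoid(slope_deg, -1, 0, 5, 15)
    total = steep + gentle
    return (steep * 0.1 + gentle * 1.0) / total if total else 0.5
```

On a 12-degree slope both rules are partially true, so the rover settles on an intermediate speed rather than flipping abruptly between "fast" and "slow" at some hard threshold.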
Many philosophers believe true AI is impossible. Some believe it is immoral. Despite the negative aspects of AI, researchers continue to move forward, attempting to develop a humanlike artificial intelligence. There are many uses for AI, ranging from game playing (such as chess) and speech recognition (as in automated telephone systems) to expert systems and the intelligent guidance and steering of vehicles on other planets in our solar system. Researchers are continually working to improve the intelligence of computers and robots.
Computers and the Space Age
Computers have been an integral part of the space program since the National Aeronautics and Space Administration’s (NASA) founding in the late 1950s. Today, computers are used in every facet of space exploration. They are used for guidance and navigation functions, such as rendezvous, reentry, and mid-course corrections, as well as for system management functions, data formatting, and attitude control.
Throughout the years, NASA’s computing focus for manned space flight has been to take proven technologies and adapt them to space flight. The reliability of proven technologies is of primary importance when working in the manned space flight program. In unmanned programs, NASA has been able to be more innovative and has encouraged innovative new technologies.
There are three types of computer systems NASA uses in the space program: (1) ground-based, (2) unmanned onboard, and (3) manned onboard computer systems. Ground-based systems do the majority of the computing, being responsible for takeoffs, orbital attitudes, landings, and so on. Unmanned onboard computers are usually small computers that require little energy and can operate on their own without failure for long periods of time. NASA’s Cassini-Huygens mission to Saturn was launched in October of 1997 and arrived in July of 2004; the specialized computer systems on the Cassini-Huygens project have worked flawlessly in deep space for over 7 years. Manned onboard systems control all aspects of the manned spacecraft. As in space shuttle missions, once the ground computers at NASA release control of the spacecraft, the onboard computers take control. The shuttle is a very complicated spacecraft, with literally thousands of sensors and controls spread throughout it. Information from these sensors is fed into the shuttle’s computer systems, enabling real-time navigation, communications, course control, maintenance of the living environment, reentry, and many additional functions. There are typically many smaller computer systems on manned spacecraft that are networked together, allowing real-time processing of massive amounts of data. System reliability is one of the most important features of the onboard computer systems: if a system crashes, the astronauts will lose control of the spacecraft.
The Mercury project was America’s first man-in-space effort and took place in the early 1960s. NASA subcontracted the development of the Mercury spacecraft to McDonnell Aircraft. The Mercury capsule itself was designed in a bell shape. The capsule wasn’t able to maneuver on its own and was barely large enough for one astronaut to fit into. A ground system computer computed reentry, then transmitted retrofire and firing attitude information to the capsule while in flight. The ground system computer controlled every part of the Mercury mission; therefore, an onboard computer was not necessary.
The first onboard computer systems were developed by IBM for the Gemini project of the mid-1960s. The onboard computer was added to provide better reentry accuracy and to automate some of the preflight checkout functions. The computer IBM developed was called the “Gemini Digital Computer.” This computer system functioned in six mission phases: prelaunch, ascent backup, insertion, catch-up, rendezvous, and reentry. Due to the limited amount of space on the Gemini capsule, the size of the computer was important. The Gemini Digital Computer was contained in a box measuring 18.9 inches high by 14.5 inches wide by 12.75 inches deep and weighed 58.98 pounds. The components, speed, and type of memory were all constrained by the computer’s size limitation. Gemini VIII was the first mission that used an auxiliary tape memory, which allowed programs to be stored and then loaded while the spacecraft was in flight.
One of NASA’s primary challenges in the early days of space exploration was developing computers that could survive the stress of a rocket launch, operate in the space environment, and provide the ability to perform increasingly ambitious missions.
On May 25, 1961, President John F. Kennedy unveiled the commitment to execute Project Apollo in a speech on “Urgent National Needs.”
The Apollo program’s goal of sending a man to the moon and returning him safely, before the decade was out, was a lofty and dangerous one. One of the most important systems of the Apollo spacecraft was the onboard guidance and navigation system (G&N). This system played the leading role in landing the lunar module on the moon at precise locations. The G&N performed the basic functions of inertial guidance, attitude reference, and optical navigation and was interrelated mechanically or electrically with the stabilization and control, electrical power, environmental control, telecommunications, and instrumentation systems. The inertial guidance subsystem sensed acceleration and attitude changes instantaneously and provided attitude control and thrust control signals to the stabilization and control system. The optical navigation subsystem “sighted” celestial bodies and landmarks on the moon and Earth, which were used by the computer subsystem to determine the spacecraft’s position and velocity and to establish proper alignment of the stable platform.
The computer and astronaut communicated in a number language. Communication was through a device called the “display and keyboard unit” (pronounced “disky” and abbreviated DSKY). This unit was different from modern keyboards and monitors in that it had a 21-digit display and a 19-button keyboard. Two-digit numbers were programs. Five-digit numbers represented data such as position or velocity. The command module had one computer and two DSKYs: the computer and one DSKY were located in the lower equipment bay, with the other DSKY on the main console. The Apollo command module and the lunar module had nearly identical computer systems.
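The flavor of that number language can be suggested with a toy readout. The program codes and register formats below are illustrative stand-ins, not the actual Apollo command set:

```python
# Hypothetical subset of program codes, for illustration only.
PROGRAMS = {"63": "braking phase", "64": "approach phase"}

def dsky_display(program, registers):
    """Toy DSKY-style readout: a two-digit program number plus up to
    three signed five-digit data registers (e.g., altitude, velocity)."""
    if program not in PROGRAMS:
        raise ValueError("unknown program code")
    lines = [f"PROG {program} ({PROGRAMS[program]})"]
    for value in registers[:3]:
        if abs(value) > 99999:
            raise ValueError("registers hold five digits")
        lines.append(f"{value:+06d}")   # a sign followed by five digits
    return "\n".join(lines)
```

The point of the sketch is the discipline the astronauts worked under: everything, program and data alike, had to be read and entered as short strings of digits.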
The space shuttle has flown 46 missions since its first launch in the early 1980s. The shuttle’s computer system has been upgraded through the years and has become very complex, maintaining navigation, environmental controls, reentry controls, and other important functions.
Adapted computer hardware and software systems have been developed to support NASA’s exploration of our solar system. Autonomous systems that to some extent “think” on their own are important to the success of the Mars rovers Spirit and Opportunity. NASA has also made use of power sources such as nuclear and solar to power spacecraft as they explore the outer edges of our solar system. The computer systems onboard these spacecraft are built to use the least amount of power possible while still remaining functional. Redundant systems, or multiple systems that can perform the same function in case of a system failure, are important in deep-space exploration. Again, reliability and system hardware and software survivability are important, not just to manned spaceflight but also to missions that may last as long as 10 or 15 years.
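One classic form of redundancy is majority voting: several computers perform the same computation independently, and the answer that most of them agree on wins. A minimal sketch:

```python
from collections import Counter

def vote(readings):
    """Majority vote across redundant computers: return the value that
    most units agree on, so one failed unit cannot corrupt the output."""
    value, count = Counter(readings).most_common(1)[0]
    if count <= len(readings) // 2:
        raise RuntimeError("no majority; redundant units disagree")
    return value
```

With three units, any single failure is simply outvoted; total disagreement is surfaced as an explicit error rather than a silently wrong answer, which matters when a mission may run unattended for a decade.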
Without computer technology, NASA would not have been able to achieve its long list of accomplishments. Spacecraft rely on distributed computer systems that provide guidance and navigation functions, system management functions, data formatting, and attitude control. Without this functionality, it would be impossible to place satellites and astronauts in orbit, explore the Martian landscape, or take photos of Saturn’s moons.
Computers and the Future
In the past 60 years, the development of the computer has had a more profound impact on civilization than any other technological advance in history. In the 1940s, development of the computer was spurred on by the technological advantages it gave the military. After World War II, the business world adopted this technology, gaining strategic and competitive advantages. ARPANET, the forerunner of the Internet, initiated the Internet’s development by connecting remote computers into one global network. In the late 1970s, Jobs and Wozniak’s Apple computer brought computers to each person’s desktop, starting the microcomputer revolution. Today, we see computers in all forms. Some computers resemble the first Apple and still sit on the user’s desktop, but others are now more portable and can be built into nearly every device, such as personal digital assistants, automobiles, phones, and even kitchen appliances. Each of these computers either has or will eventually have the ability to be connected through a network, enabling the user or appliance to communicate with others worldwide.
In the future, we will see computers playing an even greater role in our world. Computers will be even more essential in education and business and will become a necessity in the home, enabling individuals to control functions and the environment within the home. Utilities and “smart appliances” will be connected to a controlling computer. Lights, heating, cooling, security, and many other utilities will be controlled through a central computer by kiosks (displays) located strategically around the home. Entrance may be controlled by the main computer through voice or fingerprint authentication. When a person enters the home, that individual’s preferences in lighting, temperature, television programming, music, and more will be adjusted accordingly. Appliances will be networked through the Internet. For instance, if you are returning home at night and would like the lights turned on, you will connect to your home’s controlling computer using your wireless personal digital assistant, whose software enables you to turn on the lights and increase the temperature. You will also be able to check your appliances and settings remotely. Perhaps you are concerned you left the oven on when you left for a business trip; turn your oven off by connecting to it through your personal digital assistant’s wireless connection to the Internet. Refrigerators will be able to scan the items within them, then automatically order replacements over the Internet. You will never run out of milk or butter. Computers, in one variety or another, will inundate the home of the future.
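At heart, such a controlling computer is a message dispatcher: remote devices send it small commands, and it reads or updates the state of the home. A minimal sketch of one hypothetical protocol (the device names and actions are invented):

```python
class HomeController:
    """Toy central home computer: holds appliance state and handles
    simple (device, action, value) commands, as might arrive from a
    wireless personal digital assistant over the Internet."""

    def __init__(self):
        self.state = {}

    def handle(self, device, action, value=None):
        if action == "set":          # e.g., turn the oven off remotely
            self.state[device] = value
        elif action == "get":        # e.g., check a setting while away
            return self.state.get(device)
        else:
            raise ValueError(f"unknown action: {action}")

home = HomeController()
home.handle("oven", "set", "off")
home.handle("thermostat", "set", 21)
```

Real systems would add authentication, device discovery, and networking, but the shape of the interaction is the same: small commands against a central record of the home.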
Computers will continue to play an active role in education. Teaching our children useful technology skills will be even more important in the future. Education’s role will not only be to teach through the use of computers but also to teach the theory and skills necessary for students to use technology in their personal and professional lives. Just as in the home, computers will be everywhere in the school. The Internet will still be used for research, but to a much greater degree than today. Computers will be able to teach complete courses, transforming the teacher into a facilitator rather than a provider of knowledge. In this classroom, the computer may give an initial test to determine a student’s basic skills, then teach the entire course, using feedback as a learning indicator and customizing each lesson accordingly. Although common today, distance education will continue to grow. The quality of distance education instruction will continue to improve as teachers are given more and better tools with which to teach. These tools will include higher Internet bandwidth, the ability to bring higher-quality multimedia (video, audio, and others) to the student’s desktop, and software that creates an easy-to-use interactive learning environment. Classrooms will be “wired” in more ways than ever. Computers are providing an ever-increasing knowledge base for students. They will be able to bring simulations to life in a three-dimensional holographic way, enabling students to be active participants in their own learning.
Computers will also continue to play a major role in business. Many of the current business trends will continue and improve, with integrated electronic commerce systems, online bill paying and customer services, improved communications, and telecommuting enabling a mobile workforce and instant access to information such as sales and marketing forecasts. Business will continue to develop and use advanced technologies to build cars more efficiently and to fuel them with inexpensive, nonpolluting fuels. Computers will enable the business professional to stay more connected than ever.
Computers will play an even larger role in the military. The military computer system is essential in the management of warfare. Without it, missiles will not fire or will miss their targets, navigation systems will not work, intelligence gathering will be inhibited, and battlefield supply and maintenance will become impossible. Computers will “attack” enemy computer systems using viruses, denial of service, and other types of cyberwarfare. Computers from warring nations will attempt to attack civilian sectors as well, such as power plants, financial institutions, and other strategic targets.
Robotics is a developing technology that is currently in its infancy, and its potential for the future is incredible. Robots will assist surgeons because they are steadier and more precise. Although robots are being used to explore our solar system today, their role will become more significant and complicated in the future. Robots will be used to clean your home or take you to work, using AI to make humanlike decisions on their own. Robots are currently being used to manufacture items such as automobiles, but given AI, they will also be able to design cars for maximum efficiency in both production and use. Intelligent robots will also be used for dangerous jobs or on military missions. Currently, technology is not advanced enough, and distances in space are too great, for humans to explore deep space. In the more distant future, robots with AI will be used to colonize planets at the edge of or outside our solar system.
The science fiction of Star Trek may become reality in the future. The U.S. Air Force is investigating teleportation (moving material objects from one location to another using wormholes). Since the 1980s, developments in quantum theory and general relativity have pushed the envelope in exploring the reality of teleportation. Computers are mandatory in exploring the practicality, possibilities, and application of such new technologies.
The trend toward smaller, faster computers will continue. “Quantum computers” are currently in the early stages of development by IBM and several research partners. Quantum computers work at the atomic level rather than at the “chip” or “circuit” level of today’s machines. They will be able to work on millions of computations at one time, rather than just one, as current technology allows, dramatically increasing computing power and decreasing size requirements.
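The claim about working on many computations at once comes from superposition: a register of n qubits is described by 2^n complex amplitudes at the same time. A classical simulation, which takes memory exponential in n, makes the count concrete:

```python
import math

def uniform_superposition(n):
    """Amplitudes of n qubits after a Hadamard gate on each: all 2**n
    basis states appear with equal amplitude 1/sqrt(2**n). Simulating
    this classically requires a vector of length 2**n."""
    dim = 2 ** n
    return [1 / math.sqrt(dim)] * dim

# Just 20 qubits already span 2**20 = 1,048,576 basis states at once.
state = uniform_superposition(20)
```

The squared amplitudes sum to 1, as probabilities must; the exponential length of this vector is exactly why classical machines struggle to simulate quantum ones, and why twenty qubits suffice for the "million computations at one time" in the text.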
Computers in the future will enable Marshall McLuhan’s vision of the “Irresistible Dream,” already beginning to be realized through the Internet, of connecting all the computers in the world into one large network, like a global nervous system. This network will be distributed, so there will be no single central point of control. It will be able to reconfigure itself, solve unanticipated problems, tolerate faults, and remain accessible at all times. Users will be able to access any kind of information almost instantaneously, anywhere, anytime.
The use of computers in our world will continue to grow. For the foreseeable future, we will need faster and smaller computers to satisfy the ever-growing need for computing power, whether for research, business, education, or personal use. Computer-aided artificial intelligence will give robots the ability to think and perform tasks for the convenience of humans. One comprehensive global network will allow individuals to connect to the resources of the world. Technologies are currently being developed that will make these visions a reality.
- Caro, D. H. J., Schuler, R. S., & Sethi, A. S. (1987). Strategic management of technostress in an information society. Lewiston, NY: Hogrefe.
- McLuhan, M. (1964). Understanding media. New York: Mentor.
- McLuhan, M., & Fiore, Q. (1968). War and peace in the global village. New York: Bantam.
- Minsky, M. (Ed.). (1968). Semantic information processing. Cambridge: MIT Press.