Text 4. Four Generations of Computers
The first vacuum-tube computers are referred to as the first-generation computers. Transistors, smaller and more reliable devices invented in 1948, improved computers and made them faster. Computers of the second generation used a large number of transistors and were able to reduce computational time. In the 1970s, the replacement of vacuum tubes by transistors became the norm, and entire assemblies became available on tiny "chips." It became possible to create cheaper computer systems. Computers based on integrated circuit technology were called the third-generation computers.
In 1975, the first personal computer, the Altair, was marketed in kit form. The Altair had no keyboard, only a panel of switches for entering information. Bill Gates and his partners wrote a BASIC compiler for the machine. The next year the Apple company began to market its PC, also in kit form; it included a monitor and a keyboard. Soon companies such as Microsoft and Apple, along with many smaller PC-related firms, came onto the market. In the 1980s, very large scale integration (VLSI), in which hundreds of thousands of transistors were placed on a single chip, became common. By the late 1980s, some personal computers had microprocessors that could process about 4,000,000 instructions per second. The era of the fourth-generation computers began.
Cray Research and Control Data Inc. dominated the field of supercomputers, or the most powerful computer systems, through the 1970s and 1980s.
In the early 1980s, however, the Japanese government announced a gigantic plan to design and build a new generation of supercomputers. This new generation, the so-called "fifth" generation, uses new technologies in very large scale integration, along with new programming languages, and will be capable of working in areas of artificial intelligence such as voice recognition.
However, the progress in the area of software has not matched the great advances in hardware. Software has become the major cost of many systems because programming productivity has not increased very quickly.
The computer field continues to experience huge growth. Computer networking, computer mail, and electronic publishing are just a few of the applications that have grown in recent years. Advances in technologies continue to produce cheaper and more powerful machines offering the promise that in the near future, computers will be in every house.
Answer the questions:
1 What was the trend in computer system improvement during the 1970s?
2 Why did it become possible to create cheaper computer systems?
3 When was the first personal computer marketed?
4 Who wrote a BASIC compiler for the first PC?
5 What new computer companies came onto the market in the late 1970s?
6 What are supercomputers?
7 Who announced a gigantic plan to design and build a new generation of supercomputers?
8 What are the basic features of the "fifth generation" computers?
9 Why has software become the major cost of many systems?
10 What are the applications of computers at present?
Text 5. Different Types of Computers
A computer is an electronic machine that can accept, store, manipulate, and transmit data in accordance with a set of specific instructions.
Digital computers are divided into five main types, depending on their size and power.
They are mainframes, minicomputers, desktop PCs, laptops, and handheld computers.
Mainframes are the largest and most powerful computers. The basic configuration of a mainframe consists of a central system which processes immense amounts of data very quickly. This central system provides data, information and computing facilities for hundreds of terminals connected together in a network.
Minicomputers are smaller and less powerful than mainframes. However, they can perform more than one task at a time. Minicomputers are mainly used as file servers for terminals. Typical applications include academic computing, software engineering and other sophisticated applications in which many users share resources.
PCs carry out their processing on a single microchip. They are used as personal computers in the home or as workstations for a group. Broadly speaking, there are two classes of personal computers - desktop PCs, which are designed to be placed on your desk, and portable PCs, or laptops, which can be used like a tiny notebook. They are ideal for business executives who travel a lot.
The smallest computers can be held in one hand. They are called handheld computers or palmtops. They are used as PC companions or as electronic organizers for storing notes, reminders and addresses.
A computer system consists of two parts: the software and the hardware. The software is the information in the form of data and program instructions. The hardware components are the electronic and mechanical parts of the system. The basic structure of a computer system is made up of three main hardware sections: the central processing unit or CPU, the main memory, and the peripherals - the keyboard, the mouse, the monitor and the printer.
Answer the questions:
1 What is a computer?
2 What types of digital computers do you know?
3 What is the basic configuration of the mainframe?
4 What types of computers are used to process immense amounts of data?
5 Which are the most suitable computers for home use: desktop PCs or minicomputers?
6 What is a handheld computer?
7 What is the purpose of laptops?
8 What parts does a computer system consist of?
9 What is the difference between the software and the hardware?
10 What are the main peripherals?
Theme III. Programming and Computer Languages
Text 1. Computer Programming
Computers can deal with different kinds of problems if they are given the right instructions for what to do. Programming is the process of preparing a set of coded instructions, which enables the computer to solve specific problems or to perform specific functions. The essence of computer programming is the encoding of the problem by means of algorithms. The thing is that any problem is expressed in mathematical terms, but the computer cannot manipulate them. Any problem must be specially processed for the computer to understand it.
The phase in which the programs are written is called the development stage. The programs are lists of instructions that will be followed by the control unit of the CPU. The instructions of the program must be complete and in the appropriate sequence, or else the wrong answer will result. To guard against these errors, logic plans should be developed.
There are two common techniques for planning the logic of a program. The first technique is flowcharting. A flowchart is a plan in the form of a graphic or pictorial representation that uses predefined symbols to illustrate the program logic. It is a picture of the logical steps to be performed by the computer. Each of the predefined symbol shapes stands for a general operation. The symbol shape communicates the nature of the general operation, and specifics are written within the symbol. A plastic or metal guide called a template is used to make drawing the symbols easier.
The second technique for planning program logic is called a pseudo code. A pseudo code is an imitation of actual program instructions. It allows a program-like structure without the burden of programming rules to follow. A pseudo code is less time-consuming for professional programmers than flowcharting. It also emphasizes a top-down approach to program structure. A pseudo code has three basic structures - sequence, decision and looping logic. With these structures, any required logic can be expressed.
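The three basic structures can be shown in a short sketch. The example below is written in Python for concreteness rather than in pseudo code proper; the grading task, the data and all the names in it are invented purely for illustration.

```python
# A minimal, hypothetical illustration of the three basic structures of
# program logic: sequence, decision and looping.

scores = [72, 95, 58, 81]      # sample input data (invented)

total = 0                      # sequence: statements executed one after another
count = 0

for score in scores:           # looping: repeat the same steps for each item
    total += score
    count += 1

average = total / count        # sequence continues after the loop

if average >= 60:              # decision: choose one of two paths
    print("Pass, average =", average)
else:
    print("Fail, average =", average)
```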
Once you have written your program, you have to test it with sample data to see if there are any bugs or errors. Usually there are, so the program has to be cleared of them, or "debugged".
Answer the questions:
1 What is programming?
2 What is the essence of programming?
3 What should be done with the problem before processing by the computer?
4 What is a program?
5 What are instructions?
6 What are the main techniques for planning the program logic?
7 What is a flowchart?
8 What is a template and what is it used for?
9 What do you understand by a pseudo code?
10 What are the basic structures of a pseudo code?
Text 2. Programming Languages
Modern computers have dramatically changed our life. Nevertheless, they do not understand natural languages, as the central processor operates only on binary code numbers. That is why people use symbolic languages, which can be easily converted into a machine code.
Basic languages, in which a program is similar to the machine code version, are known as low-level languages. A low-level language uses a symbolic code of the particular computer and requires a special program, an assembler, to convert it into the actual machine language.
To make programs easier to write, programmers worked out a number of high-level languages such as BASIC, COBOL, FORTRAN, Pascal, the C languages, Java and others.
In 1958, a group of computer scientists met in Zurich, and from this meeting came ALGOL. It is used for mathematical and scientific purposes. A derivative of ALGOL is known as the C language, originally designed for the UNIX operating system. Today it is used to write commercial application programs. This portable language is small and very efficient. New versions of C are C++ and Objective C. They represent a new style of object-oriented programming. With object-oriented programming, we can concentrate on particular things, giving each object specific functions.
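The idea of giving each object its own data and its own specific functions can be sketched briefly. The example below uses Python for compactness rather than C++ or Objective C, and the class, the account owner and the amounts are invented for illustration only.

```python
# A small, hypothetical example of object-oriented style: each object keeps
# its own data and exposes specific functions (methods) that work on it.

class BankAccount:
    def __init__(self, owner, balance=0):
        self.owner = owner          # data belonging to this particular object
        self.balance = balance

    def deposit(self, amount):      # a function specific to this object
        self.balance += amount

    def report(self):
        return f"{self.owner}: {self.balance}"

account = BankAccount("Alice", 100)
account.deposit(50)
print(account.report())             # prints: Alice: 150
```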
Visual BASIC is an object-oriented programming language developed by Microsoft in 1990 to create all sorts of applications from small system utilities to database programs and Internet server applications. The original BASIC appeared in 1965, while the adjective "Visual" refers to the technique used to create a graphical user interface.
In 1990, a team of software engineers at Sun Microsystems worked out Java - an object-oriented programming language similar to C++. It is specially designed to run on the Web. Small Java programs are called applets; they can be downloaded automatically and let you watch moving text and interact with information on the screen. Today Java is a hot technology that runs on any computer because there are Java interpreters, or Java Virtual Machines, for most operating systems.
Answer the questions:
1 Why do you think computers have dramatically changed our life?
2 Why can't computers understand natural languages?
3 What is the difference between a low-level and a high-level language?
4 What are the main advantages of the C language?
5 When was Visual Basic developed and how does it differ from its original?
6 Why can we say that Java is a hot technology today?
7 When did a group of computer scientists meet?
8 What came out of their meeting?
9 Why do you think programmers all over the world use English words for high-level languages?
10 What other programming languages do you know?
Text 3. Computer Programs
A program written in a high-level language is often called a source program, and it cannot be directly processed by the computer until it has been compiled, which means translated into machine code. The program produced after the source program has been converted into machine code is referred to as an object program or object module. This is done by a computer program called the compiler, which is unique for each computer. Consequently, a computer needs its own compiler for the various high-level languages if it is expected to accept programs written in those languages.
The compiler is a systems program which may be written in any language, while the computer's operating system is a true systems program which controls the central processing unit, the input, the output and the secondary memory devices. Another systems program is the linkage editor, which fetches required systems routines and links them to the object program in machine code. The resulting program is called the load module, which is the program directly executable by the computer.
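As a rough illustration of the stages described above (source program, object program, load module), the sketch below drives a typical C toolchain from Python. It assumes that the gcc compiler is installed and that a C source file named hello.c exists; both are assumptions made for the example, not details given in the text.

```python
# A hypothetical sketch of the compile-and-link pipeline, assuming gcc and
# a source file "hello.c" are present on the machine.
import subprocess

# Compiling: the compiler converts the source program into an object program.
subprocess.run(["gcc", "-c", "hello.c", "-o", "hello.o"], check=True)

# Linking: the linkage editor fetches the required system routines and links
# them to the object program, producing the load module.
subprocess.run(["gcc", "hello.o", "-o", "hello"], check=True)

# The load module is the program directly executable by the computer.
subprocess.run(["./hello"], check=True)
```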
Although systems programs are part of the software, they are usually provided by the manufacturer of the machine.
Unlike systems programs, software packages or application programs are sold by various vendors and not necessarily by computer manufacturers. They are sets of programs designed to perform certain applications which conform to the particular specifications of the user. A company's payroll is an example of such a package, which allows the user to input data - hours worked, pay rates, special deductions, names of employees - and get salary calculations as output. These packages are coded in machine language on magnetic tapes or disks which can be purchased, leased or rented by users, who choose the package that best corresponds to their needs. Institutions and R&D centers often commission their own programmers to write application programs to meet the specifications of the users.
Answer the questions:
1 What is a source program?
2 Can a source program be directly processed by the computer?
3 Why do programmers need a compiler?
4 Can the compiler be written in any language?
5 What other systems programs do you know?
6 What do programmers call the resulting program and what does it usually do?
7 Who provides systems programs to the users?
8 What is the difference between systems programs and software packages?
9 Who sells application programs?
10 How are packages presented to the public?
Theme IV. Faces of the Internet
Text 1. The Wild World of the Internet
The Internet is a wild-wired world. In just a few short years, it went from an obscure tool for universities and physicists to a fact of life in the homes and businesses of people.
The Internet began in the 1960s as a communication network between educational institutions and private organizations. The U.S. Department of Defense contributed to the technology, and by the 1970s, a standard protocol connected a collection of networks (ARPANET). Private companies fund Internet operations today. Most users connect to the Net through Internet Service Providers.
As the terminology of the Internet is constantly expanding, it is important to have a common definition of it. The Internet is a network of networks. It is a massive collection of computer networks, which connect millions of computers, people, software programs, databases, and files.
There are thousands of computer networks around the world. Some are Internet-connected, while others are not. Some networks are private, and others are publicly accessible. The communication between disparate computer environments is possible because of communication protocols. All computers get access to the Internet by means of the Transmission Control Protocol. TCP/IP is the suite of protocols that computers use to send packets of information to each other.
Many people use the term World Wide Web as a synonym for the Internet. Actually, the Web is just one of the many services available as part of the Internet. The Web is a great technology for communicating, while the Internet provides powerful and universal connectivity that makes the Web possible. Users can get access to the information they need through Web browsers.
The Internet has already become a part of our culture. Its educating power is enormous. However, some critics think that it can be addictive. Trying to escape from the real world into the virtual world of the Internet, some people can behave like drug addicts.
Answer the questions:
1 Is the Internet an obscure tool for universities or is it a part of our modern life?
2 How did the Internet begin?
3 Who funds the Internet operations today?
4 How do people communicate on the Internet?
5 What is the Internet Service Provider responsible for?
6 Do you know the difference between the Web and the Internet?
7 What protocols do users need to get access to the Internet?
8 Why do you think the Internet will continue to dominate?
9 Is the educating power of the Internet enormous?
10 When can the Internet become addictive?
Text 2. Electronic mail
Electronic mail or E-mail is a very simple Internet activity. It is based on ASCII files and carries out the exchange of text messages and computer files over communications networks. Nowadays all modern Internet service providers handle it.
Sending e-mail across the Internet is a lot like writing a postcard in pencil and sending it across the country. Anyone who handles it has the opportunity to read or even change its content. Growing concern about e-mail privacy has prompted the two leading Web-browser makers, Netscape and Microsoft, to include e-mail encryption capabilities in the most recent versions of their software. However, before you can use encryption software, you need a digital ID, and so does everyone to whom you will be sending encrypted e-mail. When someone gets e-mail with your digital signature, that person will know that your message has not been tampered with and that you are truly the sender.
In comparison with ordinary mail (snail mail), electronic text messages have several advantages, though they cannot always get through. Even the slightest error in the address will stop a delivery. On the other hand, the delivery is fast and cheap regardless of the distance. Incoming mail is easily annotated and returned to its sender or forwarded to other people. Besides, you can send multiple copies as easily as you can send one. Before writing an e-mail, you must know the exact address. An e-mail address is a string that identifies a user so that the user can receive Internet e-mail. After an e-mail is sent, it goes into a mailbox at the recipient's service provider, and the recipient has to log in to get it.
There are several e-mail filters, built into the e-mail software, that automatically sort the incoming mail into different folders or mailboxes based on information contained in the message. Filters may also be used either to block or to accept e-mail from designated sources. E-mail management systems are also in wide use today. Such a system is an automated e-mail response system used by an Internet-based business to sort incoming e-mail messages into predetermined categories and either to reply to the sender with an appropriate response or to direct the e-mail to a customer service representative.
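A rough sketch of how such a filter might route incoming mail is given below. It is written in Python, and the folder names, addresses and rules are invented for illustration; real e-mail software applies the same idea with its own rule language.

```python
# A hypothetical e-mail filter: route each incoming message to a folder
# based on information contained in the message (sender and subject).

def choose_folder(sender, subject):
    if sender.endswith("@newsletter.example.com"):   # invented mailing-list domain
        return "Newsletters"
    if "invoice" in subject.lower():                 # keyword found in the subject
        return "Accounting"
    if sender.endswith("@blocked.example.com"):      # blocking a designated source
        return "Blocked"
    return "Inbox"                                   # default mailbox

print(choose_folder("editor@newsletter.example.com", "Weekly digest"))  # Newsletters
print(choose_folder("shop@stores.example.com", "Your invoice #123"))    # Accounting
```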
Answer the questions:
1 Do all Internet service providers handle e-mail now?
2 How can you protect your mail from outsiders?
3 What advantages does E-mail have in comparison with ordinary post?
4 What files is e-mail based on?
5 Why do Web-browser makers include e-mail encryption capabilities into their software?
6 What do you know about the e-mail Management system?
7 How often do you use e-mail?
8 How will a person know that you are truly the sender?
9 Can you send multiple copies of your letter?
Text 3. Phishing
A new study by R&D firm Gartner found that the number of online scams known as "phishing attacks" has increased in the last year and that online consumers are frequently tricked into divulging sensitive information by criminals' phishing attacks. The study, which ended in April 2004, surveyed 5000 adult Internet users and found that around 3 percent of those surveyed reported giving personal or financial information after being drawn into a phishing scam. Phishing scams use e-mail messages and Web pages designed to look like correspondence from legitimate online businesses. The results suggest that as many as 30 million adults have experienced a phishing attack and that 1.78 million adults could have fallen victim to the scams, Gartner says.
Phishing attacks typically begin with e-mail messages purporting to come from established companies such as EBay, Best Buy, Citigroup, and others. Within the e-mail messages, Web page links direct recipients to Web sites disguised as official company Web pages, where the recipient is asked to enter personal information such as his or her social security number, account number, password, or credit card information.
The U.S. federal authorities and leading Internet service providers such as America Online, EarthLink, and Microsoft have taken a more aggressive stance on the scams. In March, the U.S. Federal Trade Commission and the U.S. Department of Justice moved to stop a phishing scam that had tricked hundreds of Internet users into giving credit card and bank account numbers to Web sites that looked like those of AOL and PayPal, part of eBay. The FTC charged Zachary Keith Hill of Houston with deceptive and unfair practices in that case, and the DOJ named Hill as a defendant in a criminal case filed in Virginia. A success rate of 3 percent is plenty to encourage further attacks, the Gartner R&D centre says.
Answer the questions:
1 What is a phishing attack?
2 What are online consumers tricked into?
3 What did the Gartner survey find?
4 What are phishing scams designed for?
5 How many adults have experienced a phishing attack?
6 What do phishing attacks typically begin with?
7 What is the recipient asked to enter during a phishing attack?
8 What information were hundreds of Internet users tricked into giving?
9 What success rate is enough to encourage further attacks?
10 Who was charged by the FTC with deceptive and unfair practices?
Theme V. New Technologies and Search Systems
Text 1. The Web Wide Service and HTTP
People have dreamt of a universal information database since the late nineteen-forties. Only recently have new technologies made such systems possible. The most popular system currently in use is the World Wide Web. The Internet is having a dramatic effect on the way the Web works. Not long ago, a great Web site was one that had nicely formatted texts and information on some subjects. However, the situation has changed with the appearance of the Hypertext Transfer Protocol (HTTP) - the most frequently used protocol on the Internet today. It grew out of a need for a universal protocol to simplify the way users get access to information on the Internet.
HTTP is a generic, stateless, object-oriented protocol. It allows systems to be built independently of the data transferred. HTTP is a client/server protocol. This means that the client and server interact to perform a special task. For example, a user may click a link on a Hypertext Markup Language (HTML) page. This causes the page to be replaced with a new one. The client browser uses HTTP commands to communicate with the HTTP server. A connection is established from the client to the server through the default TCP port 80. Once the connection has been made to the server, the request message is sent to the server. The requests are typically for a resource file consisting of an image, audio, animation, video or another hypertext document. After that, the server sends a response message to the client with the requested data. The server ordinarily closes the connection, unless the client's browser has configured a "keep alive" option.
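The request/response exchange described above can be sketched in a few lines of Python using the standard http.client module; the host name below is only an example, not a site mentioned in the text.

```python
# A minimal sketch of the HTTP exchange: the client opens a connection on the
# default TCP port 80, sends a request message, and the server replies with a
# response message carrying the requested data.
import http.client

conn = http.client.HTTPConnection("example.com", 80)   # connection to the server
conn.request("GET", "/index.html")                     # request message for a resource
response = conn.getresponse()                          # response message from the server
print(response.status, response.reason)                # e.g. 200 OK
data = response.read()                                 # the requested document
conn.close()                                           # connection closed (no keep-alive)
```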
The nature of the World Wide Web provides a way to interconnect computers running different operating systems and display information created in a variety of existing media formats.
In short, the possibilities for hypertext in the worldwide environment are endless. With the computer industry growing at today's pace, no one knows what awaits us in the 21st century.
Answer the questions:
1 How long have people dreamt of a universal information database?
2 What was a great Web site several years ago?
3 What do you know about Hypertext Transfer Protocol?
4 Why can we say that HTTP is a client/server protocol?
5 How can the client browser communicate with the HTTP server?
6 How is a connection from the client to the server established?
7 When does the server close the connection?
8 Can computers with different operating systems interconnect on the Web?
9 Are the possibilities for hypertext endless?
10 The computer industry is growing rapidly, isn't it?
Text 2. The Opera browser
In 1994, two Norwegians, Jon S. von Tetzchner and Geir Ivarsoy, developed a Web browser while working for the Norwegian telecom company Telenor. When Telenor decided not to use the program, they left to start Opera Software in 1995, and introduced the Opera browser as shareware for the Windows platform in 1996. In 1998, in an effort to expand their market, Opera Software began a project to port the browser to many different platforms. In 2000, the project succeeded, as the Opera browser was selected for use as the embedded browser for the Ericsson YS210 Cordless Screen Phone, and Psion and Screen Media information appliances. At the end of 2000, Opera made their pay-for-play browser available for free download, but in a version that included integrated banner ads.
The Opera browser has lagged a step behind Netscape and IE in both features, such as searching from the address bar, and support for advanced standards such as XML, CSS, or Java. As most Web developers work with either IE or Netscape (often both), there were often compatibility problems between Opera and complex Web pages, a problem exacerbated by the often non-standard ways in which IE and Netscape treat their features. We found that Opera 5.11 did not fully implement CSS or JavaScript on several Web pages, while the new 6.0 beta version did. Opera and its supporters hope that this new version will close the feature and compatibility gap with IE and Netscape.
Internet browser company Opera Software has added features for tighter security and the ability to surf the Web with voice commands in the latest version of its browser, Opera 8 for Windows and Linux. Opera sees the security issue as one it can use to carve into Microsoft's dominance of the browser market with its Internet Explorer. The desktop browser gives extra information about the identity of Web sites, automatically activating an information field that gives a level of security from 1 to 3 and listing the certificate owner of the site when the user visits a secure Web site. The browser can also identify the origins of pop-up Web sites.
Answer the questions:
1 When was the Opera browser developed?
2 Who developed the browser?
3 When did they start Opera Software and introduce the Opera browser as the shareware for the Windows platform?
4 What project did Opera Software start in an effort to expand their market?
5 What use was the Opera browser selected for in 2000?
6 What version of the browser was available for free downloading at the end of 2000?
7 What problems were there between Opera 5 and complex Web pages?
8 What features has Internet browser company Opera Software added to the browser?
9 How does Opera see the security issue?
10 What extra information does the desktop browser give?
Text 3. Planet-scale grid
In 2007, scientists will begin smashing protons and ions together in a multinational experiment to understand what the universe looked like a second after the Big Bang. The particle accelerator used in this test will release a vast flood of data on a scale unlike anything seen before, and for that scientists will need a computing grid of equally great capability. As part of this effort, which costs about 5 billion euros ($6.3 billion U.S.), scientists are building a grid using 100,000 CPUs, mostly PCs and workstations, available at university and research labs in the U.S., Europe, Japan, Taiwan and other locations. Scientists need to harness raw computing power to meet computational demands and to give researchers a single view of this disparate data.
Researchers believe that improving the ability of a grid to handle petabyte-scale data split up among multiple sites will benefit not only the scientific community but also mainstream commercial enterprises. They expect that corporations will one day need a similar ability to harness computing resources globally as their data requirements grow. It was important to prove that they can maintain the processes for an extended period almost without human attendance. This means ensuring that network interconnects are tuned and synchronized and that there is sufficient security and monitoring, as well as staffing and automation, at the respective data gathering sites.
The more difficult aspect is providing simultaneous access to the data by as many as 1,000 physicists working around the world. One limiting factor, which is getting a lot of attention from the approximately 100 developers working on the grid worldwide, has been the capabilities of resource brokers - the middleware that submits the jobs and distributes the work. If the processing isn't effectively routed, databases can crash under heavy loads. There is also a need to ensure that the system has no single point of failure. This involves keeping track of the data. The data could be in one place, while the CPU resource capable of processing it is in another. Metadata, which describes what the data is about, will play a critical role.
Answer the questions:
1 What will scientists begin doing in a multinational experiment to understand what the universe looked like a second after the Big Bang?
2 What will scientists need a computing grid of great capability for?
3 What are scientists building in many locations?
4 How much does this project cost?
5 What do researchers believe in?
6 What do scientists need to harness?
7 What will the project benefit from?
8 What does it mean to maintain the processes for an extended period almost without human attendance?
9 What is the limiting factor?
10 What can happen if the processing is not effectively routed?
Text 4. Keyloggers
Security experts are praising Sumitomo Mitsui Banking Corporation for admitting that it was the target of a failed $424 million hacking attempt. According to media reports, the UK's National High Tech Crime Unit (NHTCU) has issued a warning to large banks to guard against key logging, the method adopted by the would-be thieves in an attack on the Japanese bank's London systems. The intruders tried to transfer money out of the bank via 10 accounts around the world.
Keyloggers record every keystroke made on a computer, and they are commonly used to steal passwords. U.S. games developer Valve had the source code to its latest version of Half-Life stolen after a virus delivered a keystroke recorder program onto the computer of Valve's founder. Keyloggers have become more sophisticated, moving away from software forms to sniffer-type hardware devices. There are now little hardware loggers, like a dongle, that you place between the keyboard connection and the base unit. A cleaner can come in and pop one of these things in. No one ever looks around the back of their PCs.
That type of operation would also mean that an organization's level of encryption or firewall strength could become irrelevant. Hacker sites offer keylogging software for free. Keystroke recorders are also sold on seemingly legitimate Web sites, purportedly for employers to keep an eye on what staff are doing at their computers. Attacks on individuals' machines are an everyday occurrence, and users must remain vigilant. We see from 15 to 20 new pieces of malware a day, and they are worms and Trojans that do keylogging. Individuals probably don't even know about it.
The malware doesn't display a skull and crossbones or play "The Blue Danube" over your speakers to announce its presence. Users are urged to update antivirus software, probably several times a day, and not to forget to install Microsoft patches and a firewall.
Answer the questions:
1 Who is praising Sumitomo and what for?
2 What warning has the UK's National High Tech Crime Unit (NHTCU) issued?
3 How did the intruders try to transfer money out of the bank?
4 What do keyloggers do and what are they commonly used for?
5 How was the latest version of Half-Life stolen?
6 How are keyloggers moving?
7 What are hardware loggers and where can one place them?
8 What would this type of operation mean?
9 What kind of software do hacker sites offer?
10 What kind of malware does keylogging?
Theme VI. Local Area Networks
Text 1. A Brief History of Local Area Nets (LANs)
A local area network is a system which allows microcomputers to share information and resources within a limited local area, generally less than one mile from the server to a workstation. In other words, a LAN is a communication network used by a single organization. Although it is only with the arrival of the microcomputer that companies have been able to implement LANs, the concept itself is not new. The first computers in the 1950s were mainframes. Large, expensive, and reserved for very few select users, these monsters occupied entire buildings. Costing hundreds of thousands of dollars, these large computers were not able to run the newer, more sophisticated business programs that were coming out for IBM PCs and their compatibles. By the mid-1980s, thousands of employees began bringing their own personal computers to work in order to use the new business software written for PCs. As employees began exchanging floppy disks and keeping their own databases, companies met serious problems with maintaining the integrity of their data. LANs offered a solution to such problems.
LANs represent a logical development and evolution of computer technology. A network consists of two main elements - the physical structure that links the equipment, and the software that allows communication. The physical distribution of nodes is a network topology, while the rules which determine the formats by which the information may be exchanged are known as protocols.
The first LANs were relatively primitive. Faced with a serious shortage of software designed for more than one user, the first LANs used file locking, which allowed only one user at a time to use a program. Gradually, however, the software industry has become more sophisticated, and today's LANs offer powerful, complex accounting and productivity programs to several users at the same time. Each microcomputer attached to the network retains its ability to work as an independent personal computer running its own software.
Answer the questions:
1 What does a local area network allow microcomputers to do?
2 How large is the area from the server to a workstation?
3 What is a LAN?
4 Could old large computers run new sophisticated programs?
5 When did office workers begin bringing personal computers to their workplaces?
6 When did companies begin to have problems with maintaining the integrity of their data?
7 What does a network consist of?
8 What is a network topology?
9 What do you know about file locking?
10 What can modern LANs offer to a user?
Text 2. Types of physical configuration for LANs
There are different ways a local area network can operate. Keep in mind that the form of the LAN does not limit the media of transmission. One of the oldest types of network is the star, which uses the same approach to sending and receiving messages as a telephone system. It means that all messages in a LAN star topology must go through a central computer that controls the flow of data. It is easy to add new workstations to such a LAN, and the topology allows the network administrator to give certain nodes higher status than others. The major weakness of the star architecture is that the entire LAN fails if anything happens to the central computer.
Another major network topology is the bus. In many such networks, the workstations check whether a message is coming down the highway before sending their message. Because all workstations share the same bus, all messages pass other workstations on the way to their destination. Many low-cost LANs use bus architecture. An advantage of the bus topology is that the failure of a single workstation does not cripple the rest of the network. However, too many messages can slow down the network speed.
A ring topology consists of several nodes joined together to form a circle where all workstations must have equal access to the network. In a token ring LAN, a data packet, known as a token is sent from the transmitting workstation through the network. The token contains the address of the sender and the address of the node to receive the message. If the monitoring node fails, the network remains operative. The network may withstand the failure of various workstations. Additional ring networks can be linked together through bridges that switch data from one ring to another.
To provide some level of uniformity among network vendors, the International Standards Organization has developed the Open Systems Interconnection standards. Different computers networked together need to know in what form they will receive information. The Open Systems Interconnection standards consist of a seven-layer model that ensures efficient communication within a LAN and among different networks.
Answer the questions:
1 Can a local area network (LAN) operate differently?
2 Does the form of the LAN limit the media of transmission?
3 What is the oldest type of network connection?
4 What is the major weakness of the star architecture?
5 What is the difference between a bus network and a star network configuration?
6 What does a ring topology consist of?
7 What is the main function of the International Standards Organization?
8 What are the benefits of connecting computers and peripherals in a network?
9 How can additional ring networks be linked together?
10 Which network is easier to administer, and why?
Theme VII. Information Technologies in Russia
Text 1. System Integration Service in Russia
When companies merge, rapidly expand, or implement a major new software application, this usually has a major implication for IT strategy. Companies typically find a service supplier to manage all elements of the resulting IT project for them. One of the largest segments of the Russian IT services market is systems integration: the planning, design, implementation and project management of a solution to address a customer's technical or business needs. While it is difficult to place a minimum dollar limit, local system integration projects typically exceed $50,000. Consolidation of many large Russian companies into large groups (holding companies) has buoyed demand for systems integration services, as IT infrastructures also have to be integrated.
The largest part of the market comprises computers based on industry-standard Intel microprocessors that typically run a Microsoft operating system but which are able to run systems like Novell NetWare or Linux. For small networks it is possible to buy a server for under $1,000, backed by a vendor warranty guaranteeing onsite support. Skills to install, maintain and support these servers are relatively plentiful (thus cheap), so Russian customers tend to favor this kind of solution more than do the buyers in comparable Central European markets.
Servers based on an alternative processor architecture (usually Reduced Instruction Set Computer, or RISC) mostly run a variant of the UNIX operating system. Such servers are becoming the choice of larger and richer customer organizations that are prepared to invest the necessary resources to gain the benefits of the greater reliability and scalability these more expensive servers can offer.
With huge increases in local Internet use in recent years, demand for smaller entry-level servers has been growing very quickly. Often on smaller networks, companies will start using a commodity desktop computer as a server, and when it can no longer cope (or after the first major tragedy) they opt for purpose-built hardware.
Answer the questions:
1 What major implication do companies usually have when they merge?
2 What do companies typically do to manage all elements of IT project?
3 What is one of the largest segments of the local IT services market?
4 What has buoyed demand in systems integration services?
5 What type of computers does the largest part of the market consist of?
6 What is a newly bought server backed by?
7 What kind of solution do Russian customers tend to favor more than do the buyers in comparable Central European markets?
8 What does the RISC acronym mean?
9 What is the current tendency of larger and richer customer organizations?
10 What will companies with smaller networks start using as a server after the first tragedy?
Theme VIII. Ecology and Computer
Text I. Recycling - Reverse Engineering
We are good at recycling old soda cans, but when it comes to old PCs - our work is cut out for us. Over the next three years, 250 million computers are expected to become obsolete, according to the Environmental Protection Agency. That is good news for PC manufacturers but bad news for the environment. The problem is that old PCs can quickly become harmful PCs. A typical computer monitor, for example, contains between 2 and 4 pounds of lead, which can leach into the groundwater in the landfill.
The technology to recycle PCs exists. Facilities in Ohio and Pennsylvania can reprocess the lead-laden glass in old computer monitors into glass for new monitors. Metal can be extracted from old chips, and plastics can be reused. Often, however, there is little incentive to do any of this. Consumers balk at the cost of shipping junked systems to recycling facilities. Manufacturers balk at taking on the responsibility of disposing of systems they sold years ago. It is not surprising, then, that 85 percent of the 63 million computers taken out of service last year wound up in landfills.
The challenge is not so much how to recycle PCs but how to make PC recycling economically viable. A team of researchers has developed mathematical models that can evaluate recycling facilities, including collection centers, glass-reprocessing plants and smelting facilities. Such models can help engineers figure out the most efficient combination of fees, tax breaks and additional reprocessing facilities. Mathematical models have long been used to simulate different systems, but the difficulty in simulating PC recycling is that the data is extremely uncertain. Nevertheless, scientists hope to show some recycling options and to encourage authorities to open local glass-reprocessing facilities. The ultimate goal is to make the system available for any country interested in setting up a recycling program. We hope that such systems will start working in Russia in the near future.
Answer the questions:
1 Can people recycle old PCs or is it still a dream?
2 How many computers are expected to become obsolete?
3 Why can old PCs become harmful for the environment?
4 Why do consumers and manufacturers oppose recycling?
5 Have scientists developed mathematical models that can evaluate recycling facilities?
6 Why is it difficult to simulate PC recycling?
7 Can mathematical models determine the most efficient way of PC recycling?
8 What is your attitude to this problem?
9 What is the ultimate goal of the researchers?
10 Do you think such systems will start working in Russia?
ENGLISH-RUSSIAN VOCABULARY OF COMPUTER TECHNOLOGY