Monday, September 22, 2008

Data Communication Services

Data Communication Services provides a number of networking solutions for campus, including connections within and between campus locations and connections between campus and off-campus sites in Urbana-Champaign.

Local-area networks and connections to the campus network

The Network Design Office (NDO) offers free consulting services to campus units that are interested in installing a new local-area network or expanding their present one. The NDO also coordinates changes in networks related to new construction and remodeling as well as connections to the campus fiber backbone network. For more information, contact NDO at 244-1600.

Point-to-point communication circuits

CITES can provide point-to-point circuits for a variety of applications, including data, alarms, bells, and more. Circuits required within a building are installed and maintained on a time-and-materials basis and carry no recurring monthly charges. Circuits required between buildings are installed on a time-and-materials basis and incur a $12.00 monthly fee per circuit. For more information call 333-1161.

Off-campus circuits

Off-campus communication circuits extend either voice or data services from a campus building served by the campus wiring distribution system to an off-campus location. These circuits are often provided by a communication vendor such as McLeod, AT&T, Verizon, or Sprint. CITES can order these circuits for departments and arrange for installation in the campus building. Installation and monthly charges apply to circuits extending from the wiring center to a campus building; the charges vary by the type of circuit ordered. Please contact the customer service office at 333-1161 for more information.

Quantize your voice

Say what? First conceived in 1937 by Alec Reeves, a voice digitization technique known as Pulse Code Modulation (PCM) started to be deployed in the United States Public Switched Telephone Network in 1962.

Basically, you start with a 4 kHz analog voice channel. Then you take a "snapshot" of the voice signal's amplitude every 1/8000th of a second (you have to sample at twice the maximum frequency to avoid a problem known as "aliasing"). Then you convert each measured amplitude to a number (the "quantization" process) represented by 8 bits. Thus, PCM requires 64 kbps of digital bandwidth (8,000 samples per second * 8 bits). This basic channel represents the first level of a digital hierarchy, known as a DS0.
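To make the sampling and quantization steps concrete, here is a minimal sketch in Python. The 8 kHz sample rate and 8-bit resolution come straight from the description above; the 1 kHz test tone and the linear mapping to 8-bit codes are illustrative assumptions (real telephone PCM uses logarithmic mu-law or A-law companding rather than the linear quantizer shown here).

    import math

    SAMPLE_RATE = 8000      # samples per second: twice the 4 kHz voice bandwidth (Nyquist)
    BITS_PER_SAMPLE = 8     # quantization resolution -> 256 levels

    def pcm_encode(signal, duration_s=0.001):
        """Sample signal(t) (range -1.0..1.0) and quantize to unsigned 8-bit codes."""
        codes = []
        for n in range(int(SAMPLE_RATE * duration_s)):
            t = n / SAMPLE_RATE
            amplitude = max(-1.0, min(1.0, signal(t)))
            codes.append(round((amplitude + 1.0) / 2.0 * 255))  # linear map to 0..255
        return codes

    tone = lambda t: math.sin(2 * math.pi * 1000 * t)  # hypothetical 1 kHz test tone
    print(pcm_encode(tone)[:8])                        # first eight 8-bit samples
    print(SAMPLE_RATE * BITS_PER_SAMPLE, "bps")        # 64000: one DS0 channel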

A special type of Time-Division Multiplexer (TDM) called a "channel bank" takes 24 of these 64 kbps DS0 channels and combines (multiplexes) them into a single aggregate rate of 1.544 Mbps. This rate is the channel data payload of 1.536 Mbps (64 kbps * 24 channels) plus 8 kbps of framing and synchronization bits. The 1.544 Mbps rate is known as the DS1 level in the digital hierarchy. Facilities that support this rate are usually referred to as "T-spans" or "T1" circuits.

International standards were developed later. Although the basic hierarchical DS0 rate of 64 kbps was preserved, the algorithm for converting the voice signal to a digital signal is different. Also, the international standard calls for 30 voice channels plus a 64 kbps synchronization channel plus a 64 kbps signaling channel. Therefore, these systems operate at a rate of 2.048 Mbps (1.920 Mbps + 64 kbps + 64 kbps). Facilities that support this rate are usually referred to as "E1" circuits. The rate arithmetic for both hierarchies is reproduced in the short sketch below.
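The following few lines simply reproduce the DS1 and E1 rate arithmetic from the two paragraphs above; the variable names are my own shorthand.

    DS0 = 64_000                      # bits per second for one voice channel

    t1_rate = 24 * DS0 + 8_000        # 1,536,000 bps payload + 8 kbps framing
    e1_rate = 30 * DS0 + DS0 + DS0    # 30 channels + sync channel + signaling channel

    print(t1_rate)                    # 1544000 -> DS1 / T1
    print(e1_rate)                    # 2048000 -> E1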

The 1960s

In addition to the development of 8-bit communication codes and Pulse Code Modulation systems, the sixties brought forth a number of other significant contributions.

The deployment of digital transmission facilities resulted in the development of standard digital hierarchies, as noted in the Pulse Code Modulation section above.

Integrated circuit (IC) development progressed to Large Scale Integration (LSI) technology.

CRT terminals, developed in the 1950s, saw increased use as the preferred I/O device for computer systems, and computer architectures changed to accommodate interactive I/O.

The first communications satellites were launched.

The Carterfone decision in 1968 allowed devices that were beneficial and not harmful to the network to be connected to the PSTN. This spawned the development of many modem and data communications companies!

The 1970s

Dataphone Digital Service (DDS) deployment began in 1974, bringing digital transmission facilities to the customer's premises. DDS circuit deployment also accelerated the conversion to digital networking within the Bell System.

X.25 began wide-scale deployment at the end of the '70s, introducing packet-switched networking. Large public X.25 networks evolved, such as Telenet (now "Sprintnet") and Tymnet.

The continued development of integrated circuits resulted in the widespread availability of LSI and VLSI (Very Large Scale Integration) devices, increased reliability, and decreased costs.

The 1980s

During the 1980s, the development of Dial Modem technology accelerated at a frantic rate.

On January 1, 1984, AT&T divested itself of its 22 Bell System operating companies as the result of an antitrust suit filed against AT&T in 1974 by the U.S. Department of Justice and the settlement agreed upon in 1982. Ultimately, the Bell Operating Companies ("BOCs") were grouped into seven Regional Bell Operating Companies ("RBOCs"):

- Ameritech Corporation
- Bell Atlantic Corporation
- BellSouth Corporation
- NYNEX Corporation
- Pacific Telesis Group
- Southwestern Bell Corporation
- US West, Inc.

AT&T itself, now divested, consists of two basic organizations:

- AT&T Communications, which provides long distance, inter-LATA, and network services.

- AT&T Technologies, which comprises AT&T Bell Labs, AT&T International, AT&T Information Systems, and AT&T Network Systems.

Divestiture caused the carriers to compete in the only unregulated area: business communications services. This resulted in an explosion in business communications, starting with the availability of T1 (1.544 Mbps) services in 1984.

Multiplexing vendors launched new, network-savvy Time-Division Multiplexers. Company networks consolidated voice and data circuits into single high-speed aggregate bit streams, saving money and manpower while improving network survivability. These new "microprocessor muxes" offered features such as redundancy and automatic circuit rerouting while supporting a wide variety of data and voice I/O types.

Local Area Network deployment accelerated, offering users a new view of data communications networking: the ability to access anything from anywhere, bandwidth-on-demand for data transfers, standardized connectivity, etc. Computer network architectures shifted from centralized-host to client-server designs.

Signaling System #7 (SS7), a digital signaling protocol used between switches in the PSTN, began wide-scale deployment in the US PSTN. Sweden was among the first countries to implement SS7 networking, while Bell Atlantic was among the first Local Exchange Carriers (LECs) to complete SS7 network implementation. This enabled additional CLASS (Custom Local Area Signaling Services) features: Automatic Callback, Automatic Recall, Computer Access Restriction, Distinctive Alert, Caller ID, Selective Call Acceptance/Blocking, etc.

The 1990s

After the completion of SS7 deployment within the PSTN backbone, additional telephone networking services were offered to business customers. In particular, enhanced PBX network services such as Virtual Private Networks (VPNs) evolved. These services allowed flexible dialing for business users and allowed the carrier to integrate public and business communications throughout its SS7 network.

The attractive virtual network options for voice services, combined with continued cost reductions in T1 services, resulted in the segregation of voice and data in the Wide Area Network (WAN). At the same time, a "new" standard known as Frame Relay began deployment. Frame Relay is particularly adept at transporting LAN and X.25 traffic, and public Frame Relay transport services are available from many carriers.

Wireless communications use has exploded, with dramatic growth in cellular voice and data technologies. Of particular interest are the merger of AT&T and McCaw Cellular (Cellular One) and the development of the Cellular Digital Packet Data (CDPD) standards. Additional frequency allocations have recently occurred for the development of wireless Personal Communications Systems (PCS), in an unprecedented spectrum auction by the Federal Communications Commission (FCC).

Slow but steady increases are seen in the use of Integrated Services Digital Networks (ISDN), providing higher-speed digital access to residences and businesses.

New methods of integrating voice and data, as well as Local Area and Wide Area networks, are under development. These new "cell-based" transmission technologies are known as Switched Multimegabit Data Service (SMDS), Asynchronous Transfer Mode (ATM), and Broadband ISDN.

What Are Survivable Computer Systems?

Definition Of A Survivable Computer System

A computer system, which may be made up of multiple individual systems and components, designed to provide mission-critical services must be able to perform in a consistent and timely manner under various operating conditions. It must be able to meet its goals and objectives whether it is in a state of normal operation, under some sort of stress, or in a hostile environment. A discussion of survivable computer systems can be a very complex and far-reaching one. However, in this article we will touch on just a few of the basics.

Computer Security And Survivable Computer Systems

Survivable computer systems and computer security are in many ways related, but at a low level very much different. For instance, the hardening of a particular system to be resistant against intelligent attacks may be a component of a survivable computer system, but it does not by itself address the ability of the system to fulfill its purpose when it is impacted by an event such as a deliberate attack, natural disaster, accident, or general failure. A survivable computer system must be able to adapt and perform its primary critical functions even in a hostile environment, even if various components of the computer system are incapacitated, and in some cases even if the entire "primary" system has been destroyed.

As an example: a system designed to provide real-time critical information regarding the analysis of specialized medications ceases to function for a few hours because of a widespread loss of communication, but it maintains the validity of its data when communication is restored and systems come back online. This computer system could be considered to have survived conditions outside of its control.

On the other hand, if the same system fails to provide continuous access to information under normal operating conditions because of a localized failure, it may not be judged to have fulfilled its purpose or met its objective.

Fault Tolerant And Highly Available Computer Systems

Many computer systems are designed with fault-tolerant components so they continue to operate when key portions of the system fail: for instance, multiple power supplies, redundant disk drives or arrays, and even multiple processors and system boards that can continue to function if a peer component is destroyed or fails. The probability of all components designed to be redundant failing at one time may be quite low. However, a malicious entity that knows how the redundant components are configured may be able to engineer critical failures across the board, rendering the fault-tolerant components ineffective.

High availability also plays a role in a survivable computer system. However, this design component may not maintain survivability during certain events, such as various forms of malicious attack. An example might be a critical web service that has been duplicated across multiple machines to allow continuous functionality if one or more of the individual web servers were to fail. The problem is that many implementations of high availability use the same components and methodology on all of the individual systems. If an intelligent attack or malicious event takes place and is directed at a specific set of vulnerabilities on one of the individual systems, it is reasonable to assume that the remaining computer systems participating in the highly available implementation are susceptible to the same or similar vulnerabilities. A certain degree of variance must be achieved in how each system participates in the highly available implementation; a simple failover sketch follows.
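As a rough illustration of the failover idea, here is a minimal Python sketch. The replica URLs, and the notion that each replica runs a deliberately different software stack, are hypothetical assumptions for this example, not a description of any real deployment.

    import urllib.request

    # Hypothetical replicas; the comments mark the deliberate variance in stacks,
    # so that a single exploit is less likely to take down every copy of the service.
    REPLICAS = [
        "http://replica-a.example.com/status",  # e.g., web server A on operating system A
        "http://replica-b.example.com/status",  # e.g., web server B on operating system B
    ]

    def fetch_with_failover(urls, timeout=2.0):
        """Return the first successful response, trying each replica in turn."""
        last_error = None
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except OSError as err:  # timeout, refused connection, DNS failure...
                last_error = err
        raise RuntimeError("all replicas failed: %s" % last_error)

Note that the failover loop only protects against a replica going down; the variance argued for above lives in the differing stacks behind each URL.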

What's The Difference Between An Attack, Failure, And Accident?

How Do These Differences Impact A Survivable Computer System?

In many cases when I am discussing the security of systems with customers, the questions of business continuity and disaster recovery come up. Most companies that provide a service they deem critical just know the system needs to be operational in a consistent manner. However, there is typically little discussion about the various events or scenarios surrounding this, and that can lead to great disappointment in the future when what the customer thought was a "survivable computer system" does not meet their expectations. Some of the items I like to bring up during these conversations are what the computer system's goals and objectives are, what specifically continuous operation means to them, and what specifically constitutes an attack, failure, or accident that can cause loss of operation or failure to meet objectives.

A failure may be defined as a localized event that impacts the operation of a system and its ability to deliver services or meet its objectives. An example might be the failure of one or more critical or non-critical functions that affect the performance or overall operation of the system: say, the failure of a module of code that causes a cascading event preventing redundant modules from performing properly, or a localized hardware failure that incapacitates the computer system.

An accident is typically an event that is outside the control of the system and the administrators of a local / private system. An example of this would be a natural disaster such as a hurricane, if you live in south Florida like I do, or a flood, or a widespread loss of power because the utility provider cut the wrong power lines during an upgrade to the grid. About two years ago, a client of mine who provides web-based document management services could not deliver revenue-generating services to their customers because a telecommunications engineer cut through a major phone trunk six blocks away from their office. They lost phone and data services for nearly a week.

And now we come to "attack". We all know accidents will happen, we know that everything fails at one time or another, and typically we can speculate on how these things will happen. An attack, executed by an intelligent, experienced individual or group, can be very hard to predict. There are many well-known and documented forms of attack. The problem is that intelligence and human imagination continuously advance the forms of malicious attack and can seriously threaten even the most advanced survivable computer system designs. An accident or failure does not have the ability to think outside the box or to realize that a highly available design is flawed because all participants use the same design. The probability that an attack might occur and succeed may be quite low, but the impact may be devastating.

Conclusion

One of the reasons I wrote this article was to illustrate that it's not all about prevention. Although prevention is a big part of survivable computer system design, a critical computer system must be able to meet its objectives even when operating under hostile or stressful circumstances, or if the steps taken for prevention ultimately prove inadequate. It may be impossible to think of all the various events that can impact a critical computer system, but it is possible to reasonably define the possibilities.

The subject of survivable computer systems is actually one of complexity and ever-evolving technology. This article has only touched on a few of the basic aspects of computer system survivability. I intend to continue this article to delve deeper into the subject.

What Is Data Communications?

The distance over which data moves within a computer may vary from a few thousandths of an inch, as is the case within a single IC chip, to as much as several feet along the backplane of the main circuit board. Over such small distances, digital data may be transmitted as direct, two-level electrical signals over simple copper conductors. Except for the fastest computers, circuit designers are not very concerned about the shape of the conductor or the analog characteristics of signal transmission.

Frequently, however, data must be sent beyond the local circuitry that constitutes a computer. In many cases, the distances involved may be enormous. Unfortunately, as the distance between the source of a message and its destination increases, accurate transmission becomes increasingly difficult. This results from the electrical distortion of signals traveling through long conductors, and from noise added to the signal as it propagates through a transmission medium. Although some precautions must be taken for data exchange within a computer, the biggest problems occur when data is transferred to devices outside the computer's circuitry. In this case, distortion and noise can become so severe that information is lost.

Data communications concerns the transmission of digital messages to devices external to the message source. "External" devices are generally thought of as independently powered circuitry that exists beyond the chassis of a computer or other digital message source. As a rule, the maximum permissible transmission rate of a message increases with signal power and decreases with channel noise. It is the aim of any communications system to provide the highest possible transmission rate at the lowest possible power and with the least possible noise.
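The power/noise trade-off described above is made precise by the Shannon-Hartley theorem, C = B log2(1 + S/N): capacity grows with bandwidth and with the signal-to-noise ratio, though logarithmically rather than in strict direct proportion. A small illustrative calculation in Python, with the voice-grade-line figures chosen only as an example:

    import math

    def channel_capacity_bps(bandwidth_hz, signal_power, noise_power):
        """Shannon-Hartley upper bound on error-free bit rate for an analog channel."""
        return bandwidth_hz * math.log2(1 + signal_power / noise_power)

    # A 3.1 kHz voice-grade line with a 30 dB signal-to-noise ratio (S/N = 1000):
    print(round(channel_capacity_bps(3100, 1000, 1)))  # about 30898 bps

Note the logarithm: doubling signal power does not double the attainable rate, which is why engineering effort goes into reducing noise as well as raising power.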

Top 10 Paying Careers

It is no secret that surgeons earn a hefty $189,590 average annual salary in the United States today. More unexpected are the salaries of physicians' assistants, whose average annual salary is an astonishing $63,490; the Bureau of Labor Statistics reports that their minimum qualification is a college degree plus a mandatory accreditation course. It is interesting to know which jobs are the top 10 paying ones in America. There are many surveys producing different results, and although there are some minor differences, most of them agree on at least 7 of the top 10.

The Best Paying Jobs in The United States

Interestingly, surgeons scored 7 points over CEOs, whose average annual salary was $134,960. The skill and the complex nature of the work contribute to surgeons' high salaries; that they often carry student loans of up to $100,000 is another contributing factor.

The top profession on the list is followed by anesthesiologists at $181,420, obstetricians and gynecologists at $179,640, general internists at $158,350, and so on. In 9th place are dentists, whose reported average annual earnings are $133,350. With the exception of the CEO, who stands in 8th position, the top of the list is dominated by medical and healthcare professionals.

Personal financial advisors might find a place in the list of top ten earners were it not for the huge variation in their earnings. An extremely brilliant personal financial advisor may earn up to $145,000, but the lower end is a paltry $28,330. The wide salary range reflects the job's high growth potential, its sensitivity to economic growth, and the education it requires.

Medical scientists earn an average of $100,000, which may seem a measly sum considering their educational backgrounds (PhD and doctoral degrees). But they precede podiatrists ($94,500), lawyers ($91,920), optometrists ($88,100), and computer and information systems managers, whose salaries are around $83,890.

Surprisingly, many other jobs and careers pay significantly higher salaries than positions in federal and state governments. Take, for example, the salaries of judges, positions of high significance in society, which stand at $79,540. This can be understood by looking at the enterprising nature of the corporations that hire these professionals.

Let's now take a look at the next top 10 paying careers in brief (average annual salaries):

1. Pilots, copilots, and flight engineers: $99,400

2. Marketing managers: $78,410

3. Computer software and applications engineers: $76,310

4. Biomedical engineers: $70,520
They are trained in biology as well as engineering and work to develop solutions to health problems.

5. Environmental engineers: $67,620
They work to combat damage to the environment.

6. Computer systems analysts: $67,520
Systems analysts ensure that organizations make the best of their technological resources.

7. Database administrators: $61,950
Database administrators create and manage large quantities of financial, inventory, and customer data.

8. Physical therapists: $61,560

9. Network systems and data communication analysts: $61,250

10. Chemists: $60,880

Saturday, September 20, 2008

Scientific Management in 21st Century

It is not difficult to find examples of Scientific Management in the 21st century: the car and computer manufacturing plants, the work environments we go to every day, the hospitals we are treated in, and even some of the restaurants we might eat in; almost all of them function more efficiently due to the application of Scientific Management. In fact, these methods of working seem so commonplace and so logical to a citizen of the modern world that it is almost impossible to accept that they were revolutionary only 100 years ago.

Although Scientific Management does play an important role in the 21st century, it is necessary to note that this method of management contains weaknesses that limit its influence in current work environments, and consequently not all of its tenets are applicable to modern organizations. Scientific Management is perhaps best seen as an evolutionary stage in management's ever-developing history. This essay will attempt to highlight both the strengths and weaknesses of Scientific Management in the context of the 21st century through an examination of its application in several modern organizations.

Scientific Management was developed in the first quarter of the 20th century; its father is commonly accepted to be F.W. Taylor, although some variations of the theory were developed by Gantt and Gilbreth. Taylor recognized that labor productivity was largely inefficient due to a workforce that functioned by "rules of thumb" and a mentality that equated increased productivity with a cutting down of the labor force. Against the backdrop of the Bethlehem Steel plant, Taylor carried out studies to ensure that factual scientific knowledge would replace the traditional "rules of thumb". The backbone of this activity was his "Time and Motion Study". As Dale explains, "Taylor employed a young man to analyze all the operations and the motions performed in each and to time the motions with a stopwatch. From knowing how long it took actually to perform each of the elements in each job, it would be possible…to determine a really "fair days work"" (Dale 1963, p. 155).

Through this study, Taylor could see that work was more efficient when broken down into its constituent parts, with the management, planning, and decision-making functions developed elsewhere. Taylor viewed the majority of workers as ill educated and unfit to make important decisions; this is illustrated in the following quotation: "One of the very first requirements for a man who is fit to handle pig iron as a regular occupation is that he shall be so stupid and so phlegmatic that he more nearly resembles […] the ox… Therefore the workman…is unable to understand the real science of doing this class of work" (Taylor 1998, p. 28).

Taylor's implementation of scientific fact did not stop there; he also studied the equipment workmen used, prescribing the scientifically correct design for the task at hand, which ensured workers neither over-worked nor under-worked themselves. Furthermore, workers were scientifically selected, resulting in workers performing tasks they were biologically able to cope with and tasks that matched their skill. Taylor (and later Gantt) drove this system by incentivizing workers with money.

Taylor's system ensured the most efficient way would be used by all workers, thereby making the work process standard. Invariably, managers found that maximal efficiency was achieved by a subdivision of labor. This subdivision entailed breaking the workers' tasks into smaller and smaller parts: in short, "specifying not only what is to be done but how it is to be done and the exact time allowed for doing it" (Taylor 1998, p. 17). George Ritzer, in his book "The McDonaldization of Society", notes a similar philosophy in a McDonalds staff manual: "It told operators… precise cooking times for all products and temperature settings for all equipment…It specified that French fries be cut at nine-thirty-seconds thick…Grill men…were instructed to put hamburgers down on the grill moving left to right, creating six rows of six patties each" (Ritzer 2000, p. 38).

In many ways McDonalds is the archetypal example of an organization employing Scientific Management in production. Within this restaurant chain, uniformity is complete; no matter what country you are in, every branch of McDonalds is the same, as are the methods used to prepare food, clean floors, promote staff, and lock up on closing. It is this ability to efficiently supply standard food and service throughout the world that has allowed McDonalds to become the biggest restaurant chain on the planet (Peters and Waterman 1982, pp. 173-174).

A theory whose roots lie in the scientific management model is Fordism. This theory refers to the application of Henry Ford's faith in mass production (Marcouse, 1996). It combined the idea of the moving assembly line with Taylor's systems of division of labor and piece-rate payment. With Fordism, jobs are automated or broken down into unskilled or semi-skilled tasks, and the pace of the continuous-flow assembly line dictates the work. Although Ford pioneered mass production in the assembly of consumer goods, such as cars, his theory retained the faults of Taylor's. Autocratic management ensures a high division of labor in order to effectively run mass production; this leads to little workplace democracy and to alienation. Equally, with the emphasis on the continuous flow of the assembly line, machinery is given more importance than workers. Nonetheless, a retained benefit of Taylor's work is the piece-rate payment system: workers are driven by financial motivation, being given the consolation of high wages while employers maintain control over the workforce.

The antithesis of scientific management is the human relations movement established by Elton Mayo. The model is based on the research undertaken by Mayo at the Hawthorne electrical components factory between 1927 and 1932. Mayo followed Taylor’s methods and was attempting to measure the impact on productivity of improving the lighting conditions within the factory. He followed Taylor’s scientific principles by testing the changes against a control, a section of the factory with unchanged lighting (Kelly 1982).

The benefits of scientific management lie within its ability to coordinate a mutual relationship between employers and workers. The theory provides a company with the focus to organize its structure in order to meet the objectives of both the employer and employee. At the time of its inception, Taylor found that the firms that introduced scientific management as he prescribed it became the world's most meticulously organized corporations (Nelson, 1980). Scientific management also provides a company with the means to achieve economies of scale. This occurs because the theory stresses efficiency and the need to eliminate waste. Managers are given the duty to identify ways in which costs can be accounted for precisely, which leads to a division of labor and a specialization amongst staff, thus allowing each employee to become highly effective at carrying out their limited task. Consequently, firms will have in place efficient production methods and techniques. Another benefit of scientific management for a company adopting it is that it will obtain full control of its workforce. Management can dictate the desired minimum output to be produced and, with a piece-rate payment system in place, can be guaranteed workers will produce the required amount.

Scientific Management, however, is an incomplete system. What was seen both in the Bethlehem Steel plant under Taylor's management in 1911 and in every McDonalds restaurant in the world now is a "deskilling" of labor. As jobs are broken down into their constituent elements and workers' tasks are made easier, humans become little more than "machines" in the chain. Their cognitive input is not required, and their motions do little to develop themselves; it is here that we touch upon the first problem Scientific Management faces in the 21st century.

In today's society the average intelligence of employees has sharply risen; people have been made aware of their value as human beings, and any process by which this status is challenged is considered self-deprecating. People are no longer content to receive only fiscal reward for their tasks. Under Taylor's Scientific Management system, workers were viewed as working solely for economic reward. In current organizations, on the other hand, it has been recognized that productivity and success are obtained not just by controlling all factors in the workplace, but by contributing to the social well-being and development of the individual employee.

The negative aspects of scientific management are apparent when evaluating the treatment of employees and the problems that arise from the piece-rate payment system. At the beginning of the twentieth century, Taylor's methods for managing the workers were not completely adhered to; thousands of plants introduced elements of scientific management, but few firms created formal planning departments or issued instruction cards to machine workers, for fear of alienating the workforce (Nelson, 1980). The principles of scientific management are unquestionably authoritarian in that they assume decision-making is best kept at the top of the organization because there exists a lack of trust in the competence of the employees. Taylor believed productivity and efficiency would both rise if there were a division between workers and experts, and contended that almost every act of the workman should be preceded by one or more preparatory acts of the management. He also reasoned that each person must be taught daily by those who are over them (1998). This style of management can be a catalyst for demotivation and dissatisfaction amongst employees. If workers feel as though they are being treated without due respect, many may become disenchanted with the company and refuse to work to their maximum potential. Similarly, the piece-rate payment system may encourage staff to concentrate on quantity at the expense of quality.

Higher levels of access to technology and information, as well as increased competition, present another difficulty to the theory of Scientific Management as applied to organizations in the 21st century. Modern organizations process huge amounts of input, and employees no longer work in isolated units cut off from the organization at large but are quite literally connected to it. Satellite link-ups and the Internet provide organizations with thousands of bytes of information every day, enabling companies to work on a global scale and within ever-shortening time frames. Delivery times, information gathering, data processing, and manufacturing techniques are constantly becoming more technologically advanced and efficient.

Alongside this rapid technological growth, organizations are finding it increasingly important to react quickly to developments that may affect their welfare. Managers recognize they are unable to control all aspects of employees' functions, as the sheer volume of information factored into everyday decisions is so high that it is imperative employees use their own initiative. High competition between organizations also means that companies must react fast to maintain market positions. All of this forces modern companies to maintain high levels of flexibility.

In the era during which Scientific Management was developed, each worker had a specific task that he or she had to perform with little or no real explanation of why, or of what part it played in the organization as a whole. In this day and age it is virtually impossible to find an employee in the developed world who is not aware of what his or her organization stands for, what its business strategy is, how it is faring, and what his or her job means to the company as a whole. Organizations actively encourage employees to know about their company and to work across departments, ensuring that communication at all levels is mixed and (what is becoming even more popular today) informal. This means that, for example, in companies such as EXXON, scientists, marketers, and manufacturers are all constantly aware of one another's activities (Peters & Waterman 1982, p. 218).

Another weakness in Scientific Management theory is that it can lead to workers becoming too highly specialized, thereby hindering their adaptability to new situations. In the 21st century, employers want workers not only to be efficient but also to exhibit flexibility.

However, it can be reasoned that scientific management is still a relevant concept for understanding contemporary work organizations. Scientific management has proved it has a place in a post-industrial economy and within work organizations, albeit in a hybrid form with the human relations model. This is because scientific management allows a company to control its workforce through a series of measures that guarantees them the desired levels of productivity and efficiency. In spite of this guarantee, the model, as Taylor prescribed it, also manages to alienate the workforce and cause dissatisfaction due to the authoritarian structure of the role of management. The human relations model adds a new dimension to scientific management as it allows management to work on the same principles as Taylor approved, such as time and motion studies, while also serving to fulfill employees’ social needs at the same time.

In conclusion, it can be seen that Scientific Management is still very much a part of organizations in the 21st century. Its strengths in creating a divide between management functions and work functions have been employed widely at all levels and in all industries. In addition, its strength in making organizations efficient through the replacement of "rules of thumb" with scientific fact has both ensured its widespread application and, ironically, bred the conditions that make it less applicable to modern organizations. Now that all modern organizations work on a factual basis and have managerial and employee structures, competition is controlled by other factors outside the realm of Scientific Management. Modern organizations rank humanistic factors such as employee initiative, loyalty, and adaptability alongside efficiency. For this reason, Taylor's claim that workers are solely concerned with monetary reward and that every facet of work needs to be controlled from above seems outmoded, untrue, and impractical.

It is perhaps better, then, to accept that as a complete theory Scientific Management is not visible in modern organizations; however, elements of it are so relevant that they have become deeply ingrained in all modern organizations and are the very reasons why management has taken on a new dimension in the 21st century.

New Learning Opportunities

It goes without saying that constantly developing technologies are simplifying our lives as well as the studying process. However, there are also some negative aspects of such rapid know-how development, for it limits students from achieving their full potential.
While students and faculty work to achieve new skills, new communication interactions, new relationships, new teaching styles, and new learning opportunities, many are wondering how they, as individuals, fit into the grand scheme of education.
Quite obviously, the use of information technology and the skills that accompany it are in high demand within all levels of our world
that is now centered on interconnectedness and the fast-paced changes
now taking place in the post-industrialization era. But this in no way
indicates that today's use of information technology can only be seen
as beneficial. As the disadvantages become lost in the incredible list
of advantages, it has become increasingly important to focus on what
technology is giving students and faculty, at all levels of education
in Canada and the United States, but more specifically at the
post-secondary level, and more importantly it has become essential to
examine what is being taken away, and potentially lost, from the
original or ideal view of education.

Perhaps it is necessary here to clarify the meaning of
"education" in order to further a logical argument. Education is the knowledge or
skill obtained or developed by a learning process or also an
instructive or enlightening experience. This idea of education
through enlightenment and instruction seems somewhat ideal by today's
standards but this ideal did once exist long before our arrival, in
the time of the Athenian School of Thought. It was here that ancient
philosophers like Plato, Aristotle, Socrates and Pythagoras gathered
under ideal classical architecture to discuss and debate. These men
were, and still are, considered great thinkers, and although time has
elapsed and so many things have changed, students continue to study
their ideas and theories. This alone speaks volumes on the importance
of setting and their style of expanding the mind: somehow it was
accomplished without the use of technology. Learning and developing
was simply done for the sake of knowing and the sake of broadening a
knowledge base, but today the reasons behind developing knowledge are
quite different and this "ideal" definition of education doesn't seem
to exist in our educational system.

In today's educational system many university students are finding
themselves feeling empty and confused with their current
post-secondary experience, and also previous schooling experiences. In
a recent survey, it was found that thirty-four per cent of
first-year university students drop out. Perhaps the process of
memorization, regurgitation and remaining yet another nameless student
seems somewhat unappealing to those trying to discover what it is that
they want to do with their lives. A saddening majority of students
will walk away with degrees that hold no real meaning or value.
Students experience pressure to attend university, in hopes that
graduation will present them with a job that will make their parents
proud. In a survey done within elementary and secondary levels of
education by MetLife "only 15 percent of students surveyed said they
believe their school is preparing students extremely well to go to
college" and "less than half (42%) of students report that teachers
very much encourage them to do their best". It all seems to come
down to a scramble to keep a grade point average at a comparable high
with other students or to pass a test or paper that will certainly be
forgotten once the year is over. Emphasis is being placed in all the
wrong places: students are trying to put forth results when what we
really need is guidance and someone to help develop our own personal
knowledge base. We are seen more or less as numbers rather than as
people, and are rarely asked what we think or who we are. The
process of true discovery and development, what schools (and more
specifically universities) want from their students can only come
forth from people who know themselves, who know their strengths and
know the meaning of putting in all you have. But, if students aren't
even given the opportunity to discover all that they are, how could
they possibly give it in a post-secondary setting?

With IT taking such a major role within our societies, importance is
being placed upon skills, expertise and basic knowledge of computer
technology, so in order to remain desirable in a competitive work
force students are looking to develop these needed skills. Where
technology has essentially become a necessity in education and the
workforce, it has become a priority for schools at all levels,
especially at the post-secondary level, to integrate technology into
the curriculum. But, the problems seem to truly arise at the
post-secondary setting where universities rely on funding through the
government and students' tuition payments, which accounted for nineteen
per cent of universities' total annual revenue in 1999/2000.
Basically, the rest of the necessary money for Canadian universities
comes from sponsored research funding from governments, the private
sector and other non-government organizations which added up to $2.8
billion in 1999/2000. Universities and colleges all over Canada and
the United States are looking to remain desirable to students by being
comparable or advantageous over other higher education institutes.
This need results in a campaign for profits and results, over the
ideal view of education where development and the students' needs are
the priority.

With this said, it seems that computer and information technology
within the university setting can be quite damaging to students and
their opportunities to receive the education and instruction they
want. Placed upon an already unstable system of education which relies
heavily on student payments and corporate sponsors and donations, it
seems unlikely that positive results would prevail. But the truth is
that information technology can be used positively within the
educational system, especially in higher education. With this in mind,
IT is quite comparable to globalization. Globalization is
quite tricky to define, but one basic definition would sound something
like this: increased mobility of goods, services, labor, technology
and capital throughout the world. Used properly, globalization can
have incredible benefits for many. For example, an unemployed Inuit
woman living in Nunavut can make a living for herself by selling her
artwork online without having to suffer the price of a middle man, or
retailer, taking her hard earned money. This is an example of
globalization working for the people of the world, but this same
concept can be misused and that is how we are finding children working
in sweatshops in India. Applying this same theory upon information
technology and its effect on education one would see that both
negative and positive effects can occur depending on the strength of
the educational system at hand.

Focusing first on the advantages of information technology within the
educational system, many find that this new concept of a global
classroom, where technology is integrated into all levels of the
class, is the means of advancing students to a level of educational
learning that has never existed before. In a survey done by the Campus
Computing Project of nearly 600 U.S. colleges and universities, it is
estimated that half their students used the Internet daily for their
studies and with a statistic this high, it's obvious that information
technology will integrate itself into the education system, changing
the traditional classroom setting into a global one. This era of
educational change is considered an extremely exciting time where the
system and structure of learning will be pushed as far as our
imaginations will take us, which essentially has no boundaries. Just
imagine, we are only limited by our own creativity and if we think up
something that doesn't exist yet, it can almost be guaranteed that
technological advances will bring it to us in only a short matter of
time. Essentially, our opportunities as students, as educators and as
life-long learners are breaking past the walls that once held back our
ideas.

Technology is also providing opportunities to develop knowledge in
general with the use of university courses and programs online. If you
have access to the resources you can better your education and
therefore your status in the workforce by partaking in distance
learning, or online courses. And, for those who simply want to broaden
their knowledge without the degrees and programs, the Internet is an
educator all on its own, with endless information available at the
click of a button. Students can interact online with other students,
professors, friends, political figures, government and organizations
around the globe; become involved and aware of politics on a national
and international scale; develop interests that otherwise may not have
been available; be aware of news and events occurring within their
world and the greater world around them and also, information on
nations, governments, companies and people is much easier to access by
the average web surfer, so things become more transparent and truths
can no longer be hidden.

Ideally, these advantages are what the educational system wants within
their classrooms. Technology is basically becoming a necessity at all
levels of education; it is a skill that is being brought into the
elementary, secondary and even more so, the university classrooms. One
day, technology will most likely be necessary within the realms of our
careers so it is necessary to master the skills now. But as mentioned
above, the advantages are somewhat ideal and don't look quite how we
all want them to in our current system of education. It seems that
they look the worst at the university level because it is here that
universities are no longer public, like most elementary and secondary
schools are.

As public support decreased and societal demand increased, the
government pulled back university funding in the 1980s, so these
institutions in Canada and the United States had to raise tuition to
meet the demands of higher education, especially in light of the
desperately needed advancements that technology has brought about.
Many of these institutions have had to turn to corporations for
funding or receive "gifts" from alumni families, much like Acadia
University did with the undisclosed sum of money that alumnus J.D.
Irving gave to Acadia to build a botanical garden and campus meeting
place. Elaine Benoit, spokesperson for Acadia's office of public
affairs, insists this will have no bearing on the research conducted.
"We will continue to conduct the same kind of research we have in the
past. It's not a buy-out; we're not selling ourselves to the family."
Accepting an undisclosed sum of money does at least attach an
institution to a particular family no matter what the spokespeople
say. This is another way that technology can lead education from its
ideal version to a version based on gain and profits.

With technology emerging as such a key player, institutions have used
it to their profitable advantage. "Many educational institutions seem
driven to use newly found access to global data communication that
will increase enrollments and will award a vast range of degrees
through massive investments in distance education programs." But,
unfortunately these steps to be adaptive and remain competitive with
"fast track diplomas" have created programs, that "…when compared
in-depth to the curricula of bona fide academic institutions… …these
ventures appeared to be little more than money-making plots managed by
capitalistic-minded individuals who held verily the slightest regard
for academic values." This simple act of taking advantage of students
need for technology and fast paced education seems to have made
education into a commodity, or means to an end rather than an end in
itself.

Students are now finding themselves referred to as "clients" in most
universities and are feeling even less appreciated and less motivated
to truly put themselves into their studies. Now, how is it that
students become "clients"? The universities are realizing their cost
cutting potential through the use of technology. Wired campuses,
distance learning and online classes and discussions won't require
lecture halls, full faculty, libraries and laboratories. The idea of
students becoming clients simply goes hand in hand with the idea of
commodifying education. Universities are taking roles of businesses
where transactions are conducted. Clients pay for their education, or
their degree, and it is given to them by the institution. As Michael
Margolis stated in his article entitled Brave New Universities, "…Institutions
of higher education in United States are considered superior because
they have delivered a lucrative educational product for a competitive
price…"

Also, in a university setting where information technology plays a
major role, both professors and students may sense a lack of belonging
and a lack of relations that might otherwise exist without the
technology. For example, within a wired campus students use email to
contact or ask a professor a question, rather than taking the time to
visit them in their offices. Potentially, a student could go through
an entire year of classes without ever having to talk to their
professor, and in all certainty this has happened. It seems that this
approach undermines all that education is about. By definition,
education is intertwined with enlightening experiences and
instruction. Certainly in this technology based class and campus
setting the student is receiving instruction, but how could a student
ever be enlightened when enlightenment comes from a sense of
self-discovery? Many Canadian and American universities and
colleges support extremely large classes to cover the institutions'
annual operating costs, and an example of these classes can be seen at
Dalhousie University in Halifax, Nova Scotia. The university's
introduction to psychology enrolls approximately 1,000 students, and it
becomes unrealistic to say that students are engaged, challenged or
asked to develop their thoughts or mind. These sorts of advances in
personal knowledge can only properly expand under certain conditions
and many of these conditions are neglected in just about all North
American classrooms. By the time university comes for many students,
or "clients" as they will soon be referred to, they have mastered the
skills of remaining unknown, cramming and writing last minute papers
and assignments. The technology only makes the latter even easier to
get away with.

Another disadvantage comes forth in the idea of men and women, and
their different ways of learning and accessibility. Women are
underrepresented on the World Wide Web, just as they are in the
high-tech occupations and therefore some underlying discrimination may
prevail in a university setting. In a survey done by
Nielsen/NetRatings, men log on more than women (an average of 54
sessions compared to 50 sessions), spend more time on average (31
hours versus 27 hours), and view more pages (1900 versus 1700).
Women, compared to men, are much less likely to use or even attempt to
access the Internet for a variety of reasons. Many women are
intimidated by pornography, prevalent sexist attitudes and the basic
idea that technology is more directed towards men. Perhaps in
university classrooms where laptops are used, women are finding they
are even more isolated than an average student might feel. Not only
are they neglected by their professors, but many do not feel
comfortable with the replacement offered: the Internet.

Fortunately, when looking at the list of disadvantages it seems that
they can all be reversed and used to the advantage of students,
teachers, professors, women and anyone else who might feel that they
are losing out because of technology. For example, women are
underrepresented in all aspects of information technology, but it is that
very technology that is bringing women together and bringing
technology into their lives. Women, for example, are emerging as the
dominant users of the Internet. According to the Nielsen/NetRatings survey,
"...women at work logged onto the Internet 23 percent more this August
than they did in August 2001… … while men still outpace women in
Internet usage at work, Internet usage by men at work grew only 12
percent year-to-date."

Also, with online courses, information, training, and advertisements for
conferences, the Internet is basically a meeting place for people to
come together and strengthen their role within the world of IT. When
it comes to students, technology can play a major role in bringing
students and professors together through online discussions and also
online communication can make it easier for students to ask questions
or set up a time to meet in person with other students or professors.
This is where information can be misused, and where it tends to be in
today's classrooms as students are finding they are merely a number in
the grand scheme of things but if students are encouraged early on in
the education system to interact, discuss, debate and share with their
peers and teachers then it seems that the technology will be better
used, rather than misused.

Traditional Aboriginal life seems fitting here, under the topic of
technology and ideal teaching styles. In Aboriginal life, the elders
of the community are highly respected and listened to by other members
of the community. Wisdom is carried from one elder to a listener, not
through notes or typing information into our laptops, but is learned
only through listening. You must listen to understand, and perhaps
that is where technology in the post-secondary system, and basically
all educational systems, is lacking. Technology doesn't hear and it
definitely doesn't listen. For the general public, there is
nothing more real and more engaging than the company of another human
being. Technology simply cannot deliver in all areas of human growth
and development, but if teachers and professors fill in the needs of
students and add technology on top of what they have already
developed, the results would be more incredible than anything the
education system has seen yet.

It seems to come down to the fact that technology can only add to
education; it cannot make it, which seems to be the mistake being made
by so many educational institutions today. Therefore, it is becoming
more and more apparent that a mix of both worlds needs to be offered
to the students from the very beginning of the education system, so
that once students reach the post-secondary level they will have both
social and technological skills. If students are raised simply relying
on the technology of the time, they will lack social skills that are
mandatory in most occupations and, more importantly, in life. Besides,
as Aristotle clearly stated, human beings are social creatures, and
why would we want to alter who we naturally are for something as
impersonal and unnatural as technology?

If the post-secondary education system (students, faculty and
administration) continues to abuse information technology in the manner
it is being misused now, then when you add education to the equation
you only add to the severity of the abuse. Education will continue to
move farther and farther from what is an ideal education and students
will move farther from personal growth and development, to simply
being the results of a bigger corporate campus agenda. Isolation,
through the use of technology, will continue to hold students back
from their full potential because they are never engaged, they are
never challenged, and from where they stand no one really cares about
who they are and what they're capable of. It's often said that
children are our future, but ironically they are being treated much
less than that.

Information Technology Advances And Organizations

Organisational life has been transformed by the revolution of technology over the years. The shift in organisational routines has transcended even what Bill Gates envisioned a few years ago: "If the 1980s were about quality and the 1990s were about reengineering, the 2000s will be about velocity". Indeed it is all about velocity, yet the velocity of digital automation has gone far beyond his initial notion; it is more than the "velocity" he mentioned in his book. Information systems now play a very important role in all business models.
Organisations uses variety of systems that aids them organise information, to do processing, retrieval of data, communication and manage business transactions. Information is valuable to any business. Proper manipulation and storage of this is a real deal to the success of the organisation. With the advance of information technology, the paper-based office is now turned into a paperless office. Business transactions are rendered at a swift and fast trade. The digital automation is now being adapted by most organisations, seeing business goals be attained with no hassle.
Modern technology provides essential tools to help organisation succeed in meeting their objectives. This advancement in technology allows companies manage information effectively and efficiently. And seeing organisations functions in the global environment, technology mends the gaps between distant partners and overseas business dealings with Internet technology.
The presents of computer in organisations, in business environments, has changed the relationship between managers and employees. Managers are now computer users and constantly connected to manage corporate information needs. Centralisation of information system had made it more manageable and promotes structured transactions and departmentalisation of routines. With this, business faults can easily be controlled and corrected.
Training and education now occurs in automation with the aid of multimedia and digital publishers. Newly hired employees are now preferred with computer knowledge and still training is needed to introduce to organisation's information system.

Vodafone

Vodafone, the largest telecommunications company in the world, made history in the UK by placing the country's first mobile call on 1st January 1985. Having earned its reputation in mobile telecommunication services, including voice mail and data communication, Vodafone is aiming to become the world's leader in mobile telephony. Within twenty-one years of service, Vodafone has become the largest company in Europe and the largest of its kind anywhere in the world. Vodafone operates its own network across 27 countries, has partner networks in a further 27 countries, and has gained 186 million customers.

Like other network providers in the UK, Vodafone has followed the policy of serving its subscribers through various cheap tariff plans and mobile phone deals. These cheap tariff plans and deals are cost-effective, allowing users to cut their phone bills substantially. Vodafone offers its Contract Mobile Phone deal in collaboration with online retailing sites in the UK, and it is available on leading brands of mobile phones including Nokia, Samsung, Motorola and Sony Ericsson. These handsets are well equipped with the latest technologies, providing both voice and non-voice communication smoothly.

Phone bills, the main concern for users, prompted Vodafone to release the Contract Mobile Phone deal to trim extra service charges from the bill. A Vodafone Contract Mobile Phone allows an individual to enter into a contract with the service provider to avail of the deal's packages, which reduce the phone bill. These packages carry various free and subsidised services, such as a SIM-free mobile phone, free mobile phone accessories, free mobile phone insurance, a free handset, a free upgrade of the mobile phone after a certain period of time, free text and multimedia messages, free roaming, reduced downloading and data transmission charges, reduced peak-hour call charges, and many more, all of which can reduce the phone bill to a great extent.

Vodafone, the largest network provider in the world, has offered the Contract Mobile Phone as its most secure deal, satisfying tech-savvy users in terms of services, network, connectivity and pricing of mobile media.

History of Data Communications

A Brief History of Data Communications

Presently, the United States is the most technologically advanced country in the area of telecommunications, with about: 126 million phone lines, 7.5 million cellular phone users, 5 thousand AM radio broadcast stations, 5 thousand FM radio stations, 1 thousand television broadcast stations, 9 thousand cable television systems, 530 million radios, 193 million television sets, 24 ocean cables, and scores of satellite facilities!

This is truly an "Information Age" and sometimes, you need to look at where we've been in order to see the future more clearly!

Data Communications Milestones

What hath God Wrought?

These famous words were telegraphed by Samuel F. B. Morse in 1844, although a patent for an "electric telegraph" had been obtained back in 1837 by Charles Wheatstone and William Cooke. By the time of the Civil War, telegraph communications spanned the United States, and in 1866 the first permanently successful trans-Atlantic cable was laid between Newfoundland and Ireland (Werner von Siemens from Germany was one of the pioneers in the development of reliable submarine cables).

The basic elements of Morse code were the dot ("dit") and the dash ("dah"). In International Morse code (the most prevalent of the Morse code variants), the dot was the minimal duration element, with the dash equal to three times the duration of the dot. Electrically, current flows during both dots and dashes. The time between elements of the same character was one dot, the time between characters was three dots, and the time between words was equal to seven dots! During these "idle" time intervals, the telegraph line was open (no current flow).
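
Just for fun, here's a back-of-the-envelope Python sketch of those timing rules (the five-letter table is only an illustrative subset of the full code):

    # dot = 1 unit, dash = 3 units, gap within a character = 1 unit,
    # gap between characters = 3 units, gap between words = 7 units.
    MORSE = {"M": "--", "O": "---", "R": ".-.", "S": "...", "E": "."}

    def timing_units(message):
        """Total duration of a message, measured in dot units."""
        total = 0
        for w, word in enumerate(message.upper().split()):
            if w:
                total += 7                          # idle gap between words
            for c, char in enumerate(word):
                if c:
                    total += 3                      # idle gap between characters
                elements = MORSE[char]
                total += sum(1 if e == "." else 3 for e in elements)
                total += len(elements) - 1          # gaps between dots and dashes
        return total

    print(timing_units("MORSE"))                    # -> 43 dot units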

Unfortunately, Morse code suffered from a couple of drawbacks. Skilled telegraph operators were required, and the work of these operators was grueling (Can you imagine banging on a single key all day?). Also, because the code had varying numbers of elements between characters, it was very difficult to automate. But, it still beat the Pony Express!

Blame it on the French!

A Frenchman said that to me! In 1875, a Frenchman named Emile Baudot developed a code suitable for machine encoding and decoding. It consisted of 5 equal-length units (bits) and could represent 32 different code combinations (characters). Since it was necessary to transmit the 26 Latin letters (A-Z) and 10 digits, plus some control characters, two of the 32 combinations were reserved for the special characters "FIGS" and "LTRS", used to select between character sets (similar to the Caps Lock key on many computer keyboards).
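
Here's a rough Python sketch of how that FIGS/LTRS shifting works. The two shift codes are the standard ITA#2 values, but the tiny letter and figure tables here are invented for illustration; consult a full ITA#2 chart for the real assignments:

    LTRS, FIGS = 0b11111, 0b11011           # the two shift characters
    LETTERS = {"A": 0b00011, "B": 0b11001}  # illustrative subset only
    FIGURES = {"-": 0b00011, "?": 0b11001}  # same 5-bit codes, reinterpreted

    def encode(text):
        """Emit 5-bit codes, inserting a LTRS/FIGS shift when the set changes."""
        out, mode = [], None
        for ch in text.upper():
            table, shift = (LETTERS, LTRS) if ch in LETTERS else (FIGURES, FIGS)
            if mode != shift:
                out.append(shift)
                mode = shift
            out.append(table[ch])
        return out

    print(encode("AB-?"))   # shifts appear only when the character set changes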

This code is commonly referred to as "Baudot Code" (naturally), or ITA#2 (International Telegraph Alphabet, #2) today. In Great Britain, this code is sometimes referred to as the "Murray Code".

Unfortunately, Baudot Code was developed before practical equipment existed to take advantage of it. As such, it did not enjoy widescale deployment until the later invention of the teletypewriter.

Mr. Watson, come here, I want you!

These famous words were spoken in Boston by Dr. Alexander Graham Bell on March 10, 1876 while working with his invention, the telephone! Alexander Graham Bell filed for a patent for this device on February 14, 1876; TWO HOURS before a similar patent was filed by Elisha Gray of Chicago. After a long legal battle, the United States Supreme Court upheld Dr. Bell's patent.

Work on a public telephone network was well underway in 1878 when the first commercial telephone exchange was brought into service in New Haven, CT. Ultimately, one of the largest companies on this Earth (American Telephone and Telegraph) was spawned. It's been noted that at one point, AT&T employed over a million people!

While the initial invention of the telephone can hardly be considered as a data communications milestone, it is included in this document because almost the entire network backbone of the US Public Switched Telephone Network (PSTN) is now digital!

That Grand Old Teletype

The invention of the teletypewriter (a.k.a. teleprinter) occurred in the early 1900s. The largest manufacturer of these devices in the United States was the Teletype Corporation. In fact, although the term "teletype" is often used to refer to such teleprinter devices, it is actually a trademark of AT&T's Teletype Corporation.

These teleprinters utilized the 5-bit, 32-character Baudot Code. But because the transmission was machine generated and decoded, it was necessary to delineate the bits in a character. A bit was added to the beginning of the character, called the "Start Bit". Another bit was added to the end of each character. This bit is known as the "Stop Bit". This type of Start/Stop transmission is called "Asynchronous" communication.
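
A minimal Python sketch of that start/stop framing, assuming the data bits go out on the line least-significant-bit first:

    # One start bit (space, 0), five data bits, one stop bit (mark, 1).
    def frame(code5):
        data = [(code5 >> i) & 1 for i in range(5)]   # LSB transmitted first
        return [0] + data + [1]                       # start + data + stop

    print(frame(0b00011))   # -> [0, 1, 1, 0, 0, 0, 1]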

Teleprinter mechanisms used the presence of DC current flow to indicate a "Mark" (logic "1") and the lack of DC current (open) to represent a "Space" (logic "0"). In an idle state, constant DC current flows. When an open state is present, the receiver detects this as a "Space" and prepares to receive a Baudot character. After the character is received, the Stop Bit ensures that the line is returned to an idle state. This method of DC communication represents what is known as a "Current Loop" interface. Multiple parties can easily be "bridged" onto a single line, and line open conditions result in a noticeable constant spacing condition (a chattering teleprinter).

You don't see many teleprinters nowadays; they were rendered obsolete by today's computer printers and visual displays. But widescale use of the teleprinter lasted for over 50 years! Alas, some companies lived and died with teleprinter technology (e.g. Western Union).

Tape Delay!

Teleprinter transmission technology created the "Tape Punch" and "Tape Reader" devices. Why is this significant? Because it enabled the first "Store and Forward" data messaging systems.

Teleprinter messages could be received on tape, then resent or broadcast to other teleprinters by using the tape reader. If there were errors in the transmission, the tape could be resent.

Data messaging networks evolved, to allow individuals to communicate with each other in a digital format. Telex and TWX were examples of these early messaging systems. Can anyone remember seeing those TWX numbers on business cards? I can recall hearing the ol' teleprinter chattering away occasionally at our Timeplex office in Largo, FL back in 1984.

Eight Bits in a Byte

In the 1960s, significant advances in data communications character coding resulted in the development of 8-bit characters. In 1962, IBM created and promoted a coding standard known as Extended Binary-Coded-Decimal Interchange Code, or EBCDIC for short. This coding scheme defined 8-bit characters, allowing up to 256 characters to be used. While the world probably would have been better off with a pure 8-bit code, another standard called the American Standard Code for Information Interchange (ASCII) was adopted in 1963 and ultimately won the standards battle.

ASCII was defined by the American National Standards Institute (ANSI) in ANSI Standard X3.4, first published in 1963 and revised in 1968. The ASCII code is also described in ISO 646 (1973) and in CCITT V.3, which calls the standard IA5 (International Alphabet #5). ASCII is a 7-bit code, resulting in a maximum of 128 characters. However, ANSI Standard X3.16 (1976) and CCITT Standard V.4 describe the use of an additional eighth bit as a "Parity Check" bit. This bit is set so that the count of one bits in the character is either ODD or EVEN. As such, single transmission bit errors within a character can be detected! As described in the aforementioned specifications, EVEN parity is suggested for use on asynchronous communications systems and ODD parity in synchronous systems!
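
In Python, generating that parity bit looks something like this (a sketch of the EVEN parity suggested above for asynchronous systems):

    # The eighth bit is set so the total count of one bits is even.
    def add_even_parity(ch):
        code = ord(ch) & 0x7F            # 7-bit ASCII code
        parity = bin(code).count("1") & 1
        return code | (parity << 7)

    framed = add_even_parity("A")        # 'A' = 0x41 has two one bits -> parity 0
    print(f"{framed:08b}")               # -> 01000001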

In reality, applications may implement EVEN parity, ODD parity, NO parity, always MARK parity, or always SPACE parity. Parity setup problems have been a source of aggravation for anyone who dials into different BBSs. Even in today's modern Internet culture, one must be cognizant of the proper setup for 7-bit or 8-bit FTP (File Transfer Protocol) transmissions!

Additional expansion of the code using the ESCAPE character was defined in ANSI X3.64-1979 in an effort to "standardize" graphic character representations and cursor control. In fact, the DOS operating system is based upon a 256-character set, with graphic characters represented in the extra 128 positions that DOS's 8-bit characters allow.

The serial transmission of ASCII is defined in ANSI X3.15 - X3.16 and CCITT V.4 and X.4. A start bit and a stop bit are added to delineate each character for asynchronous transmission, as in Baudot Code. However, synchronous transmission of ASCII is also defined.

Quantize your voice

Say what? First conceived in 1937 by Alec Reeves, a voice digitization technique known as Pulse Code Modulation (PCM) began to be deployed in the United States Public Switched Telephone Network in 1962.

Basically, you start with a 4 KHz analog voice channel. Then you take a "snapshot" of the voice signal's amplitude every 1/8000th of a second (you have to sample at twice the maximum frequency to avoid a problem known as "aliasing"). Then you convert each measured amplitude to a number (the "quantization" process) represented by 8 bits. Thus, PCM requires 64 KBPS of digital bandwidth (8 KHz * 8 bits). This basic channel represents the first level of a digital hierarchy, known as a DS0.
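
As a toy Python illustration of the sampling and quantization steps (this uses simple linear quantization; real telephone PCM uses companded encoding, which is also why the North American and international algorithms mentioned below differ):

    import math

    SAMPLE_RATE = 8000                   # samples per second (2 x 4 KHz)
    BITS = 8

    def quantize(amplitude):
        """Map an amplitude in [-1.0, 1.0] to an 8-bit code (0..255)."""
        level = round((amplitude + 1.0) / 2.0 * (2**BITS - 1))
        return max(0, min(2**BITS - 1, level))

    # One second of a 1 KHz test tone -> 8000 samples * 8 bits = 64,000 bits.
    samples = [quantize(math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE))
               for n in range(SAMPLE_RATE)]
    print(len(samples) * BITS)           # -> 64000, the 64 KBPS DS0 rate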

A special type of Time-Division Multiplexer (TDM) called a "Channel Bank" takes 24 of these 64K DS0 channels and combines (multiplexes) them into a single aggregate rate of 1.544 MBPS. This rate is the combination of the channel data payload of 1.536 MBPS (64 KBPS * 24 Channels) + 8 KBPS of framing and synchronization bits. The 1.544 MBPS rate is known as the DS1 level in the digital hierarchy. Facilities that support this rate are usually referred to as "T-Spans" or "T1" circuits.
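
If you like seeing the arithmetic spelled out, that 8 KBPS of overhead works out to exactly one framing bit per 24-channel frame:

    # DS1 arithmetic: each frame carries one 8-bit sample from each of the
    # 24 channels plus a single framing bit, at 8000 frames per second.
    CHANNELS, BITS_PER_SAMPLE, FRAMING_BITS, FRAMES_PER_SEC = 24, 8, 1, 8000
    frame_bits = CHANNELS * BITS_PER_SAMPLE + FRAMING_BITS   # 193 bits per frame
    print(frame_bits * FRAMES_PER_SEC)                       # -> 1544000 (1.544 MBPS)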

International standards were developed later. Although the basic hierarchical DS0 rate of 64 KBPS was preserved, the algorithm for converting the voice signal to a digital signal is different. Also, the International standard calls for 30 voice channels + a 64 KBPS synchronization channel + a 64 KBPS signaling channel. Therefore, these systems operate at a rate of 2.048 MBPS (1.920 MBPS + 64 KBPS + 64 KBPS). Facilities that support this rate are usually referred to as "E1" circuits.
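
And the corresponding E1 arithmetic:

    # E1: 30 voice channels + 1 sync channel + 1 signaling channel, all 64 KBPS.
    print((30 + 1 + 1) * 64000)   # -> 2048000 bits/sec, i.e. 2.048 MBPS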

Using a transmission line code known as Bipolar-Alternate Mark Inversion (AMI), a 1.544 MBPS T1 circuit requires 772 KHz of analog bandwidth. So, why go digital? I could use Frequency Division Multiplexing (FDM) and combine those same 24 channels into a 96 KHz (4 KHz * 24) analog pipe, right? While FDM saves bandwidth, noise is added as the signal travels through every amplifier and modulator. In a digital system, "ones" and "zeroes" go in, and "ones" and "zeroes" go out. Since major sources of analog noise are removed in digital systems, circuit lengths can be extended, and network topologies simplified through the reduction of the number of circuits required between any two telephone exchanges. Quality improves, operating costs decrease!

The 1960s

In addition to the development of 8-bit communication codes and Pulse Code Modulation systems, the sixties brought forth a number of other significant contributions.

The deployment of digital transmission facilities resulted in the development of standard digital hierarchies, as noted in the previous Pulse Code Modulation section.

Integrated circuit (IC) development created Large Scale Integration (LSI) IC technology.

CRT terminals, developed in the 1950s, saw increased use as the preferred I/O device for computer systems. Computer architectures changed to accommodate interactive I/O.

The first communications satellites were launched.

The Carterfone decision in 1968 allowed devices that were beneficial and not harmful to the network to be connected to the PSTN. This spawned the development of many modem and data communications companies!

The 1970s

Dataphone Digital Service (DDS) started deployment in 1974, bringing digital transmission facilities to the customer's premises. DDS circuit deployment also accelerated the conversion to digital networking within the Bell System.

X.25 began widescale deployment at the end of the 70s, introducing packet-switched networking. Large X.25 public networks evolved, such as Telenet (now "Sprintnet") and Tymnet.

The continued development of integrated circuits resulted in the widespread availability of LSI and VLSI (Very Large Scale Integration) devices, increased reliability, and decreased costs.

The 1980s

During the 1980s, the development of Dial Modem technology accelerated at a frantic rate.

On January 1, 1984, AT&T divested itself of its 22 Bell System operating companies as the outcome of a seven-year antitrust suit filed against AT&T by the U.S. Department of Justice and the settlement agreed upon by the parties. Ultimately, the Bell Operating Companies ("BOCs") were grouped into seven Regional Bell Operating Companies ("RBOCs"):

- Ameritech Corporation
- Bell Atlantic Corporation
- Bell South Corporation
- Nynex Corporation
- Pacific Telesis Group
- Southwestern Bell Corporation
- US West Incorporated

AT&T itself, now divested, consists of two basic organizations:

- AT&T Communications:

  - Provides long distance services.
  - Provides inter-LATA and network services.

- AT&T Technologies:

  - AT&T Bell Labs
  - AT&T International
  - AT&T Information Systems
  - AT&T Network Systems

Divestiture caused the carriers to compete in the only unregulated area: business communications services. This resulted in an explosion in business communications, starting with the availability of T1 (1.544 MBPS) services in 1984.

Multiplexing vendors launched new, network-savvy Time Division Multiplexers. Company networks consolidated voice and data circuits into single high-speed aggregate bit streams, saving money and manpower while improving network survivability. These new "microprocessor muxes" offered features such as redundancy and automatic circuit rerouting while supporting a wide variety of data and voice I/O types.

Local Area Network deployment accelerated, offering users a new view of data communications networking: the ability to access anything from anywhere, bandwidth-on-demand for data transfers, standardized connectivity, etc. Computer network architectures shifted from "Centralized Host" to "Client-Server" designs.

Signaling System #7 (SS7), a digital switching protocol used in the PSTN, began widescale deployment in the US PSTN. Sweden was among the first countries to implement SS7 networking, while Bell Atlantic was among the first Local Exchange Carriers (LECs) to complete its SS7 network implementation. This enabled additional CLASS (Custom Local Area Signaling Services) features: Automatic Callback, Automatic Recall, Computer Access Restriction, Distinctive Alert, Caller ID, Selective Call Acceptance/Blocking, etc.

The 1990s

After the completion of SS7 within the PSTN backbone, additional telephone networking services were offered to business customers. Particularly, enhanced PBX network services such as Virtual Private Networks (VPNs) evolved. These services allowed flexible dialing for business users, and allowed the carrier to integrate Public and Business communications throughout the carrier's SS7 network.

The attractive Virtual Network options for voice services, combined with continued cost reductions in T1 services, resulted in the segregation of voice and data in the Wide Area Network (WAN). As a result, a "new" standard known as Frame Relay began deployment. Frame Relay is particularly adept at transporting LAN and X.25 traffic, and public Frame Relay transport services are available from many carriers.

Wireless communications use has exploded, with dramatic growth in cellular voice and data technologies. Of particular interest are the merger of AT&T and McCaw Cellular (Cellular One) and the development of the Cellular Digital Packet Data (CDPD) standards. Additional frequency allocations have recently occurred for the development of wireless Personal Communications Systems (PCS), in an unprecedented spectrum auction by the Federal Communications Commission (FCC).

Slow but steady increases are seen in the use of Integrated Services Digital Networks (ISDN), providing higher-speed digital access capabilities to residences and businesses.

New methods of integrating voice and data, as well as Local Area and Wide Area networks, are under development. These new "cell-based" transmission technologies are known as Switched Multimegabit Data Service (SMDS), Asynchronous Transfer Mode (ATM), and Broadband ISDN.