Chapter 1 • Challenges of the Virtual Environment
Introduction
Businesses are teeing up to meet new challenges brought on by an increasingly virtual environment. Telecommuting has increased the number of remote access users who need to access applications with specific business configurations. The pervasive use of the Internet provides an easy, nearly universal avenue of connectivity, although connections are sometimes slow. The use of hand-held computing has exploded, but questions remain as to what kinds of applications such devices can run.
For a business facing these types of challenges, the hole in one can be
found in thin-client technology. The leader in this technology is Citrix,
whose main product is MetaFrame. MetaFrame runs over Microsoft’s
Windows 2000 with Terminal Services and provides fast, consistent
access to business applications. With Citrix MetaFrame, the reach of
business applications can be extended over an enterprise network and
the public Internet.
What Defines a Mainframe?
Mainframe computers are considered to be a notch below supercom-
puters and a step above minicomputers in the hierarchy of processing.
In many ways, mainframes are considerably more powerful than super-
computers because they can support more simultaneous programs.
Supercomputers are considered faster, however, because they can exe-
cute a single process faster than a typical mainframe. Depending on how
a company wants to market a system, the same machine that could serve
as a mainframe for one company could be a minicomputer at another.
Today, the largest mainframe manufacturers are Unisys and (surprise,
surprise) IBM.
Mainframes work on the model of centralized computing. Although a mainframe may be no faster than a desktop computer in raw speed, mainframes use peripheral channels (small, dedicated I/O processors in their own right) to handle Input/Output (I/O) processing, which frees up considerable processing power. Mainframes can have multiple ports into high-speed memory caches and separate machines to coordinate I/O operations between the channels. The bus speed on a mainframe is typically much higher than that of a desktop, and mainframes generally employ hardware with considerable error-checking and correction capabilities. The mean time between failures for a mainframe computer is often cited at 20 years, much greater than that of a PC.
NOTE
Mean Time Between Failures (MTBF) is a phrase often used in the computing world. MTBF is the average amount of time a system will run before suffering a critical failure of some kind that requires maintenance. Because each component in a PC can have a separate MTBF, the system's overall MTBF is limited by its weakest component. Obviously, when buying a PC you want to look for the best MTBF numbers; cheap parts often mean a lower MTBF.
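As a rough illustration of that rule of thumb, here is a minimal sketch in Python. The component names and hour figures are invented for illustration only; the sketch contrasts the simple weakest-component view described above with a fuller series model that adds up the individual failure rates.

```python
# Rough sketch of the "weakest component" rule of thumb.
# The part names and hour figures below are invented for illustration only.

component_mtbf_hours = {
    "power_supply": 100_000,
    "hard_drive": 50_000,     # cheapest part, shortest expected life
    "motherboard": 300_000,
    "memory": 1_000_000,
}

# Rule of thumb from the note: the system is only as good as its weakest part.
weakest_part = min(component_mtbf_hours, key=component_mtbf_hours.get)
print(f"Weakest component: {weakest_part} "
      f"({component_mtbf_hours[weakest_part]:,} hours)")

# A fuller series-system model adds the failure rates (1/MTBF) of every part
# the system depends on, which always yields a lower combined MTBF.
combined_mtbf = 1 / sum(1 / hours for hours in component_mtbf_hours.values())
print(f"Combined series MTBF: {combined_mtbf:,.0f} hours")
```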
All of these factors free up the CPU to do what it should be doing—
pure calculation. With Symmetric Multiprocessing (SMP), today’s main-
frames are capable of handling thousands of remote terminals. Figure 1.1
shows a typical mainframe arrangement.
Benefits of the Mainframe Model
As you can see in Figure 1.1, the mainframe model supports not only
desktop PCs, but also remote terminals. Traditionally called dumb
terminals because they contained no independent processing capabili-
ties, mainframe terminals today are actually considered “smart” because
of their built-in screen display instruction sets. Terminals rely on the
central mainframe for all processing requirements and are used only for
input/output. The advantages to using terminals are considerable. First,
terminals are relatively cheap when compared to a PC. Second, with
only minimal components, terminals are very easy to maintain. In addi-
tion, terminals present the user with the same screen no matter when
or where they log on, which cuts down on user confusion and application
training costs.
The centralized architecture of a mainframe is another key benefit of this model. Once upon a time, mainframes were considered to be vast, complicated machines that required dedicated programmers to run. Today's client/server networking models can be far more complex than any mainframe system. Deciding between different operating systems, protocols, network topologies, and wiring schemes can give a network manager a serious headache. By comparison, mainframe computing is fairly straightforward in its design and in many cases is far easier to implement. Five years ago, word was that mainframes were going the way of the dinosaur. Today, with over two trillion dollars of mainframe applications in place, that prediction seems to have been a bit hasty.
Figure 1.1 The mainframe computing environment. [Diagram: desktop PCs on a hub and dumb terminals connect through a front-end processor to the mainframe and its storage drives.]
Centralized computing with mainframes is considered not only the
past, but also possibly the future of network architecture. As organizations
undergo more downsizing and shift towards a central, scalable solution for
their employees, a mainframe environment looks more and more appealing.
The initial price tag may put many companies off, but for those that can
afford it, the total cost of ownership (TCO) could be considerably less than
a distributed computing environment. The future of mainframes is still
uncertain, but it looks like they will be around for quite some time.
History and Benefits of Distributed Computing
Distributed computing is a buzzword often heard when discussing today’s
client/server architecture. It is the most common network environment
today, and continues to expand with the Internet. We'll look at distributed computing's origins in this section and consider where it might be headed.
The Workstation
As we mentioned before, distributed computing was made possible when
DEC developed the minicomputer. Capable of performing timesharing oper-
ations, the minicomputer allowed many users to use the same machine via
remote terminals, but each had a separate virtual environment. Minicom-
puters were popular, but considerably slower than their mainframe coun-
terparts. As a result, to scale a minicomputer, system administrators were
forced to buy more and more of them. This trend in buying led to cheaper
and cheaper computers, which in turn eventually made the personal com-
puter a possibility people were willing to accept. Thus, the reality of the
workstation was born.
Although the workstation was originally conceived by Xerox Corporation's Palo Alto Research Center (PARC) in 1970, it would be some time before workstations became inexpensive and reliable enough to see mainstream use.
PARC went on to design such common tools as the mouse, window-based
computing, the first Ethernet system, and the first distributed-file-and-
print servers. All of these inventions made workstations a reasonable alter-
native to time-sharing minicomputers. Since the main cost of a computer
is the design and manufacturing process, the more units you build, the
cheaper they are to sell. The idea of the local area network (Ethernet) cou-
pled with PARC’s Xerox Distributed File server (XDFS) meant that worksta-
tions were now capable of duplicating the tasks of terminals for a much
lower price tag than the mainframe system. Unfortunately for Xerox, they
ignored almost every invention developed by the PARC group and ended up
letting Steve Jobs and Apple borrow the technology.
The most dominant player in distributed computing, however, is
Microsoft. Using technology they borrowed (some may argue “stole”) from
Apple, Microsoft launched the Windows line of graphical user interface
(GUI) products that turned the workstation into a much more valuable
tool. Using most of the ideas PARC had developed (the mouse, Ethernet,
distributed file sharing), Microsoft gave everyone from the home user to the
network manager a platform that was easy to understand and could be
rapidly and efficiently used by almost everyone. Apple may have been the
first to give the world a point-and-click interface, but Microsoft was the
company that led it into the 1990s. All of these features enabled Microsoft
to develop a real distributed computing environment.
Enter Distributed Computing
Distributed computing has come a long way since that first local area net-
work (LAN). Today, almost every organization employs some type of dis-
tributed computing. The most commonly used system is client/server
architecture, where the client (workstation) requests information and ser-
vices from a remote server. Servers can be high-speed desktops, microcom-
puters, minicomputers, or even mainframe machines. Typically connected
by a LAN, the client/server model has become increasingly complex over
the last few years. To support the client/server model, a wide array of operating systems has been developed, which may or may not interact well with one another. UNIX, Windows, Novell NetWare, and Banyan VINES are several of the operating systems that are able to communicate with each other, although not always efficiently.
However, the advantages to the client/server model can be consider-
able. Since each machine is capable of performing its own processing,
applications for the client/server model tend to vary based on the original
design. Some applications will use the server as little more than a file-
sharing device. Others will actually run processes at both the client and
server levels, dividing the work as is most time-effective. A true client/
server application is designed to provide the same quality of service as a
mainframe or minicomputer would provide. Client/server operations can
be either two- or three-tiered, as described in the following sections.
Two-Tiered Computing
In two-tiered computing, an applications server (such as a database) per-
forms the server-side portion of the processing, such as record searching
or generation. A client software piece will be used to perform the access,
editing, and manipulation processes. Figure 1.2 shows a typical two-tiered
client/server solution. Most distributed networks today are two-tiered
client/server models.
Figure 1.2 Two-tiered computing solution. [Diagram: the client PC requests data from the database server; the database server returns the requested information.]
Three-Tiered Computing
Three-tiered computing is used in situations where the processing power required to execute an application will be insufficient on some or all existing workstations. In three-tiered computing, server-side processing duties are still performed by the database server. Many of the processing duties that would normally be performed by the workstation are instead handled by an applications processing server, and the client is typically responsible only for screen updates, keystrokes, and other visual changes. This greatly reduces the load on client machines and can allow older machines to still utilize newer applications. Figure 1.3 shows a typical three-tiered client/server solution.
Figure 1.3 Three-tiered computing solution. [Diagram: the client PC asks the applications server to run a database query; the applications server requests the database file, the database server returns the file, and the applications server processes the query and returns the output to the client.]
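To make the contrast concrete, here is a minimal sketch in Python using entirely invented class names; it is not how Terminal Services, MetaFrame, or any real database product is implemented. The two-tiered client fetches raw data and does its own processing, while the three-tiered client simply displays whatever the applications server hands back.

```python
# Illustrative sketch only: contrasts two-tiered and three-tiered designs.
# None of these classes correspond to a real product API.

class DatabaseServer:
    """Holds the data and answers raw queries (the back-end tier)."""
    def __init__(self):
        self.records = {"smith": {"balance": 120}, "jones": {"balance": 340}}

    def query(self, name):
        return self.records.get(name)


class TwoTierClient:
    """Two-tiered: the client itself performs the application processing."""
    def __init__(self, db):
        self.db = db

    def report(self, name):
        record = self.db.query(name)                    # fetch raw data
        return f"{name}: balance={record['balance']}"   # format it locally


class ApplicationServer:
    """Three-tiered middle tier: processes data on the client's behalf."""
    def __init__(self, db):
        self.db = db

    def report(self, name):
        record = self.db.query(name)
        return f"{name}: balance={record['balance']}"


class ThreeTierClient:
    """Three-tiered: the client only displays what the app server returns."""
    def __init__(self, app_server):
        self.app = app_server

    def report(self, name):
        return self.app.report(name)    # no local processing, display only


db = DatabaseServer()
print(TwoTierClient(db).report("smith"))
print(ThreeTierClient(ApplicationServer(db)).report("jones"))
```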
NOTE
Windows 2000 with Terminal Services and Citrix MetaFrame can be con-
sidered either two-tiered or three-tiered computing, depending on the
network design. Although there are some differences between the
methods used, both Terminal Services and MetaFrame use a client PC and
an applications server.
Distributed Computing and the Internet
Recently, a new distributed-computing model has emerged: the Internet,
which is one giant distributed-computing environment. Client PCs connect
to servers that pass requests to the appropriate remote servers, which exe-
cute the commands given and return the output back to the client. The
Internet was originally devised by the military to link its research and engi-
neering sites across the United States with a centralized computer system.
Called the Advanced Research Projects Agency Network (ARPAnet), the system was put into place in 1971 and had 19 operational nodes. By 1977, a new network had connected packet radio networks, the Satellite Network (SATNET), and ARPAnet together to demonstrate the possibility of mobile computing. Called the Internet, the network was christened when a user sent a message from a van on the San Francisco Bayshore Freeway over 94,000 miles via satellite, landline, and radio waves back to the University of Southern California campus.
In 1990, MCI created a gateway between separate networks to allow
their MCIMail program to send e-mail messages to users on either system.
Hailed as the first commercial use of the Internet, MCIMail was a precursor
for the rapid expansion of Internet services that would explode across the
United States. Now, a large portion of the world is able to surf the Internet,
send e-mail to their friends, and participate in live chats with other users.
Another growing demand on the Internet is the need to use distributed
computing to run applications remotely. Thin-client programs, which are
capable of connecting to remote application servers across an Internet con-
nection, are becoming more and more common for organizations that need
to make resources available to users outside their local network. We’ll talk
about thin clients later in the chapter; for now it’s enough to know that
Citrix is the major supplier of thin-client technology and Web connectivity
today.
Benefits of Distributed Computing
Distributed computing can be an excellent fit for many organizations. With
the client/server model, the hardware requirements for the servers are far
less than would be required for a mainframe. This translates into reduced
initial cost. Since each workstation has its own processing power, it can
work offline should the server portion be unavailable. And through the use
of multiple servers, LANs, wide area networks (WANs), and other services
such as the Internet, distributed computing systems can reach around the
world. It is not uncommon these days for companies to have employees
who access the corporate system from their laptops regardless of where
they are located, even on airplanes.
Distributed computing also helps to ensure that there is no one central
point of failure. If information is replicated across many servers, then one
server out of the group going offline will not prevent access to that infor-
mation. Careful management of data replication can guarantee that all but
the most catastrophic of failures will not render the system inoperable.
Redundant links provide fault-tolerant solutions for critical information
systems. This is one of the key reasons that the military initially adopted
the distributed computing platform.
Finally, distributed computing allows the use of older machines to perform more complex processes than they might otherwise be capable of. With some distributed computing programs, clients as old as a 386
computer could access and use resources on your Windows 2000 servers
as though they were local PCs with up-to-date hardware. That type of
access can appear seamless to the end user. If developers only had to write
software for one operating system platform, they could ignore having to
test the program on all the other platforms available. All this adds up to
cost savings for the consumer and potential time savings for a developer.
Windows 2000 with Terminal Services and Citrix MetaFrame combine the qualities of both the distributed computing and mainframe models.
Meeting the Business
Requirements of Both Models
Organizations need to take a hard look at what their requirements will be
before implementing either the mainframe or distributed computing model.
A wrong decision early in the process can create a nightmare of manage-
ment details. Mainframe computing is more expensive in the initial cost
outlay. Distributed computing requires more maintenance over the long
run. Mainframe computing centralizes all of the applications processing.
Distributed computing does exactly what it says—it distributes it! The
reason to choose one model over the other is a decision each organization
has to make individually. With the addition of thin-client computing to the
mix, a network administrator can be expected to pull all of his or her hair
out before a system is implemented. Table 1.1 gives some general consider-
ations to use when deciding between the different computing models.
Table 1.1 Considerations for Choosing a Computing Model

If you need: An environment with a variety of platforms available to the end user
Then consider using: Distributed computing. Each end user will have a workstation with its own processing capabilities and operating system. This gives users more control over their working environment.

If you need: A homogeneous environment where users are presented with a standard view
Then consider using: Mainframe computing. Dumb terminals allow administrators to present a controlled, standard environment for each user regardless of machine location.

If you need: Lower cost outlays in the early stages
Then consider using: Distributed computing. Individual PCs and computers will cost far less than a mainframe system. Keep in mind that future maintenance costs may outweigh that savings.

If you need: Easy and cost-efficient expansion
Then consider using: Mainframe computing. Once the mainframe system has been implemented, adding new terminals is a simple process compared with installing and configuring a new PC for each user.

If you need: Excellent availability of software packages for a variety of business applications
Then consider using: Distributed computing. The vast majority of applications being released are for desktop computing, and those software packages are often less expensive, even at an enterprise level, than similar mainframe packages.

If you need: An excellent Mean Time Between Failures (MTBF)
Then consider using: Mainframe computing. The typical mainframe incorporates more error-checking hardware than most PCs or servers do. This gives mainframes a very good service record, which means lower maintenance costs over the life of the equipment. In addition, the ability to predict hardware failures before they occur helps keep mainframe systems from developing the same problems that smaller servers frequently have.
The Main Differences Between Remote
Control and Remote Node
There are two types of remote computing in today’s network environments
and choosing which to deploy is a matter of determining what your needs
really are. Remote node software is what is typically known as remote
access. It is generally implemented with a client PC dialing in to connect to
some type of remote access server. On the other side, remote control soft-
ware gives a remote client PC control over a local PC’s desktop. Users at
either machine will see the same desktop. In this section we’ll take a look
at the two different methods of remote computing, and consider the bene-
fits and drawbacks of each method.
Remote Control
Remote control software has been in use for several years. From smaller packages like pcAnywhere to larger, enterprise-wide packages like Microsoft's Systems Management Server (SMS), remote control software gives a user or administrator the ability to control a remote machine and thus to perform a variety of functions. With remote control, keystrokes are transmitted from the remote machine to the local machine over whatever network connection has been established. The local machine in turn sends screen updates back to the remote PC. Processing and file transfers typically take place at the local level, which helps reduce the bandwidth requirements for the remote PC.
Figure 1.4 shows an example of a remote control session.
Figure 1.4 Remote control session. [Diagram: the remote PC sends keystrokes over the remote LAN connection to the local client and receives screen updates in return; the local client exchanges data with the local server.]
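As a rough illustration of that exchange, the sketch below uses invented names and bears no relation to the actual RDP or ICA protocols; it simply shows keystrokes flowing one way and screen updates flowing back, while all real processing stays on the controlled machine.

```python
# Minimal sketch of the remote control idea: only keystrokes travel one way
# and screen updates travel the other. Not a real RDP/ICA implementation.

class ControlledDesktop:
    """The local machine that actually runs the application."""
    def __init__(self):
        self.document = ""

    def handle_keystroke(self, key):
        # All processing happens here, on the controlled (local) machine.
        self.document += key
        return f"[screen shows: {self.document!r}]"   # the "screen update"


class RemoteViewer:
    """The remote PC: sends keystrokes, displays whatever comes back."""
    def __init__(self, desktop):
        self.desktop = desktop

    def type_text(self, text):
        for key in text:
            # Only small payloads (a keystroke, a screen update) cross the wire.
            screen_update = self.desktop.handle_keystroke(key)
            print(screen_update)


RemoteViewer(ControlledDesktop()).type_text("hi")
```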
Benefits of Remote Control
Remote control software has become increasingly popular for enterprise
management. With a centralized management tools package, support per-
sonnel are able to diagnose and troubleshoot problems on a remote
machine. This can improve support response time and user satisfaction.
In addition, centralized management tools give an administrator the ability
to collect and manage information from a wide number of machines and to
keep accurate logs of current configurations and installed software. This
can be invaluable for keeping track of license usage and monitoring for vio-
lations of an organization’s computing policies.
Remote control software can be used as a teaching tool. If an adminis-
trator were on the remote PC and connected to a user's local desktop, he or
she could then use that connection to train the user in hands-on tasks
through demonstration. Both the user and the administrator are seeing the
same screens, which helps eliminate any confusion about what is being
discussed. Since either person can take control of the session, the admin-
istrator can demonstrate a concept and then have the user perform the
specific tasks with supervision.
Remote control software also can offer a more secure computing envi-
ronment. In organizations that handle sensitive information, rules exist
governing the proper use and storage of such information. Often, employees' personal computers are not allowed to contain regulated information, which could prevent remote workers from accessing their files unless they were using an asset owned by the organization. With remote control computing,
employees can dial in and control a company asset remotely. The adminis-
trator can prevent that user from downloading any restricted information
to their home PC. This is invaluable both as a time saving system and as a
way to stay within the legal boundaries required of the organization. Many
organizations employ remote control solutions specifically for this purpose.
With the growing emphasis on information security, good security policies
can prevent possible future litigation. Both Windows 2000 with Terminal
Services and Citrix MetaFrame offer solutions to this problem. We’ll intro-
duce them to you later in this chapter.
Downsides to Remote Control
Remote control software does have some limitations. Currently, most packages are limited in the color depth they can display; the maximum for Terminal Services clients is 256 colors. Also, programs that
heavily utilize graphics will bog down the session and greatly reduce any
performance benefits that remote control otherwise provides. Citrix has recently released Feature Release 1, an add-on package for MetaFrame 1.8 that gives clients the ability to use 24-bit color.
The Citrix client has the ability to scale the session graphics back if too
much bandwidth is being used. The higher the graphical resolution
required, the more bandwidth the application will attempt to consume and
the more frequently the graphics will be updated. Because of this, high-end
graphical packages such as CAD applications are not appropriate for a Terminal Services or MetaFrame environment. Office applications such as Word and Excel, on the other hand, are ideal for remote control sessions.
TIP
Feature Release 1 is available for holders of Citrix’s Subscription
Advantage. It provides both Service Pack 2 for MetaFrame 1.8 (available
regardless of whether you have Subscription Advantage or not) and a
whole set of new features, including multi-monitor support, higher color
depth for client sessions, and SecureICA as a built-in feature. All of these
features will also be available when Citrix releases MetaFrame 2.0.
Traditional remote control packages typically require more network ports than remote node solutions do for the same number of users. This is because a user must not only dial in and connect to a local machine but must then use that local machine to the exclusion of other users. In many cases, this means using two separate machines merely to accomplish tasks that one local machine could normally handle. Sounds a bit wasteful,
right? Thankfully, Microsoft and Citrix have developed ways around those
requirements.
Another potential danger of remote-control computing is that it is a
possible point of failure for network security. If someone from outside the
network could gain access to the local PC with the remote control software,
they could perform any task as if they were local to that network. For this
reason, many administrators prefer not to leave remote-controlled PCs constantly powered on, and they carefully control both the list of people who know those PCs exist and the security mechanisms used to authenticate remote users.
A final drawback to remote control is that file transfers between the
local and remote PC will obviously be limited by the speed of the network connection between the two machines. For most users, this will be a POTS (Plain Old Telephone Service) connection with a maximum
speed of around 56 Kbps. Although MetaFrame typically runs well on a
28.8 Kbps modem connection, high-speed connections such as ADSL or
cable modems are excellent to use with both remote-controlled and remote-
access sessions. These types of services are still only offered in select
areas. As their coverage grows, expect to see more organizations using
remote control computing packages such as Terminal Services and
MetaFrame.
Remote Node
Remote node computing, also known as remote access computing, can be
considered the traditional dial-in method. A remote PC, equipped with a
modem or another type of network connector, makes a connection across a
WAN to a local server. That remote PC is now considered a local node on
the network, capable of accessing network resources like any local PC
would (within the security limitations imposed by the remote access
system). The local server is responsible for providing all network informa-
tion, file transfers, and even some applications down to the remote node.
The remote node is responsible for processing, executing, and updating the
information with which it is working. It all has to be done over whatever
connection speed the client is capable of achieving.
Due to these limitations, remote node computing can use a lot of bandwidth, and careful planning is needed when designing a remote-node environment. As shown in Figure 1.5, there is little difference
between a client on a local PC and a remote-node client. The server will
handle requests from either machine in the same fashion. If the local client
were to request 2MB worth of data, the server would send it over the LAN
connection. If the remote PC requested the same data, it would have to be
sent over the WAN connection. For a 2MB file on a 56 Kbps connection, it could take around six minutes just to pull that data down. After modifica-
tions, the remote PC would then have to push that file back up to the
server. A remote node using a dial-up connection is treated like any other
local user on the network.
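A quick back-of-the-envelope check of that timing is sketched below. It assumes an idealized link with no compression or protocol overhead, so the nominal 56 Kbps figure comes out at roughly five minutes; real-world overhead and line conditions push that closer to the six minutes mentioned above.

```python
# Back-of-the-envelope transfer time for the 2MB example above.
# Assumes an idealized link: no compression, no protocol overhead.

file_size_bits = 2 * 1024 * 1024 * 8   # a 2MB file expressed in bits

for label, bits_per_second in [("56 Kbps dial-up", 56_000),
                               ("10 Mbps LAN", 10_000_000)]:
    seconds = file_size_bits / bits_per_second
    print(f"{label}: about {seconds / 60:.1f} minutes ({seconds:.0f} seconds)")
```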
Figure 1.5 Remote access computing. [Diagram: the remote PC connects over the WAN and the local client over the LAN to the local server; data transfers cross all machines.]
Why Use Remote Access?
With all of the problems inherent in the connection speed, why would com-
panies consider remote access instead of remote control? For starters,
remote access is relatively simple to configure. All that is required is a way
for the remote computer to connect to a local server. Common solutions
are direct dial (with NT RAS or an equivalent solution) and connecting
through the Internet. The remote machine can join the network as long as
some sort of connection can be established.
Another key benefit is that a variety of operating systems can utilize
remote access to connect to central servers. This means organizations with
differing platforms among their users can provide remote access services to
all of them. The services available may differ from client to client, but all
users will be able to access network resources at least at a very basic level.
Remote access computing is in some ways more secure than remote
control computing. Since many systems can be direct dialed, there is little
chance of anyone intercepting the signal between the remote PC and the
local remote access server. For clients that connect through some other
WAN connection such as the Internet (dial-up ISP, high-bandwidth connec-
tions, and so on) there are many packages that can provide secure com-
munications between the remote client and the local servers. Securing
these communications is essential for a good network security plan since
at least some of the packets will contain user logon information.
Recently, a slew of new virtual private network (VPN) products have hit
the shelves. These packages attempt to allow remote nodes to have secure
communications with the centralized server, typically through a protocol
such as Point-to-Point Tunneling Protocol (PPTP). With encryption
strengths of up to 128 bits, these software packages can encrypt packets so strongly that it is virtually impossible for an outsider to decrypt them. Unfortu-
nately, much of this technology is not available outside of North America
due to U.S. export laws.
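PPTP itself is more than a short example can show, but the underlying idea, wrapping the remote node's traffic in an encrypted channel before it crosses a public network, can be sketched with Python's standard ssl module. This sketch uses TLS rather than PPTP, and the server name and port are placeholders rather than a real remote access product.

```python
# Sketch of encrypting a remote node's traffic before it crosses the WAN.
# Uses TLS from Python's standard library rather than PPTP; the host and
# port below are placeholders, not a real remote access server.
import socket
import ssl

REMOTE_ACCESS_SERVER = "ras.example.com"   # placeholder host name
PORT = 443                                 # placeholder port

context = ssl.create_default_context()     # verifies the server's certificate

with socket.create_connection((REMOTE_ACCESS_SERVER, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock,
                             server_hostname=REMOTE_ACCESS_SERVER) as tls:
        # Anything sent here, including logon credentials, is encrypted in transit.
        tls.sendall(b"LOGON user=jdoe\n")
        reply = tls.recv(1024)
```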
Remote access sessions also have no self-imposed graphics restrictions.
If the client PC is set to display 24-bit True Color, then that is what it will
attempt to show. This can be beneficial when trying to view detailed
images. Unfortunately, this also means that large images coming from the
remote access server can take a long time to display correctly. If executing
a program that pulls a large number of graphics files from the remote net-
work, performance will certainly be slowed, perhaps to the point of
affecting system usability.
However, the biggest advantage of remote access computing over
remote control computing is the hardware requirement. In remote access
computing, a minimal number of local machines can typically handle a
large number of user connections. This eliminates the need for each user
to have a local machine that they can remote control. Users can work
offline on their remote PC, and then connect to the local network to upload
changes. This also centralizes the possible failure points, making it easier
to diagnose and troubleshoot problems.
Drawbacks of Remote Node Computing
As mentioned earlier, speed is the key issue with remote node computing.
Since users are moving a lot more data than with remote control com-
puting, speed limitations can be crippling. High-speed Internet connections
using cable modems and ADSL can alleviate some of the problems, but
even then maximum speeds will typically be about 1/5 that of a LAN con-
nection unless the user is willing to pay a large monthly fee (upwards of
$1,000 a month for a personal T1 connection). With those types of connections, there is also the added necessity of keeping communications secure, or you risk leaving your network vulnerable to outside intrusion. For this reason, many organizations are unwilling to permit any
type of remote access beyond direct-dial solutions.
Since remote access computing requires that the remote PC be capable
of performing the application processing, the hardware requirements for
the remote PCs could become more of a factor. This could mean more fre-
quent replacement of PCs, or holding off on new software upgrades
because the clients will not be able to run them. The remote PC is also
much more vulnerable to virus attacks than it would be in a remote con-
trol situation. Another drawback with remote access computing is the
issue of client licensing. If clients are allowed to individually install and
maintain copies of the software on home PCs, tracking license compliance
becomes difficult for IT management.
A final consideration for remote access computing is hardware platform
compatibility. With no control over the individual’s PC configuration, it is
often necessary to strictly define the types of configurations that will be
supported. This often limits which clients can be used, since many will not be compliant
with the standards defined. Installing a remote control server can alleviate
many of these problems.
So How Do You Choose?
There are pros and cons to both access models. Both have certain key fea-
tures that make them very desirable. Thankfully, Microsoft and Citrix have
realized the benefits of both models and developed Terminal Services and
MetaFrame, respectively. As a combination of remote access and remote con-
trol services, these two packages are capable of fulfilling almost any organization's remote computing needs. Later in this chapter we'll
explore the details of each program. Table 1.2 lists some of the reasons to
consider either a remote control or remote access solution.
Table 1.2 Remote Control Versus Remote Access

Remote control:
- Only screen updates and keystrokes pass back and forth between the remote PC and the local PC, so considerably less bandwidth is required.
- Remote clients with older technology can access new applications by using the local client as an intermediary between themselves and the local server.
- Administrators can prevent sensitive data from being copied off an organization's assets.

Remote access:
- Many users can connect to a single piece of hardware, because processing and application execution take place on the remote PC.
- Full availability of screen resolutions to support graphical applications. Since the remote PC is limited only by its own capabilities, higher-quality graphics can be displayed than would be viewable in a remote control session.
- Familiarity with the desktop, since it is always the user's own.
The Thin-Client Revolution
Microsoft and Citrix have been quick to see the limitations imposed by
mainframe computing, distributed computing, remote control, and remote
access—yet all of the models presented to this point have had features that
could make them desirable to an organization. A mainframe has a central
server that handles applications processing, distributed computing gives
each user a customizable desktop and applications set, remote control com-
puting lets older clients access newer software, and remote access com-
puting lets multiple users connect to a single access point. So why not take
the best of all worlds?
That’s what Windows 2000 Terminal Services and MetaFrame do. By
offering a combination of all of those benefits, the two packages allow
remote users to connect to a server, open a virtual desktop, and perform
remote control computing without the necessity of a local PC. The server
handles all applications processing and sends only screen updates to the
client. There is some variation in how the two services work, which we will
discuss later in this chapter. One key point is that MetaFrame uses
Windows 2000 Terminal Services as the underlying structure of its com-
puting environment.
Key Concepts
Two important terms to learn for this section are fat clients and thin
clients. The terms “thin” and “fat” refer to the bandwidth requirements that
a client places on the network. A fat client is a machine or application that
requires a large amount of bandwidth to function. Fat clients are typically
run on self-contained machines that have everything from memory to a
processor. Fat-client machines can run their own applications locally or
pull them off a server in a client/server environment. Fat clients are easily
customized and can be used independent of a network connection.
Because fat-client machines execute processes locally, they free up the
server solely to serve information. Most operating systems and the majority
of computers today are fat-client machines.
The term thin client was originally used to indicate a specific type of
software that provided applications and information to remote PCs at a
reduced bandwidth level. Using the best parts of remote control com-
puting, thin-client programs send only screen updates and keystrokes back and forth between the client PC and the server providing the
thin-client environment. Thin-client software is popular because it can
alleviate bandwidth problems by compressing data into packets small
enough to fit over even a moderately slow dial-up connection. Today, the
term thin client can be used to reference either a software package or a
machine that is designed specifically for use in a thin-client environment.
Thin-client machines possess only a few of the hardware components of
their fat-client counterparts. They are akin to the old terminals of main-
frame computing. Thin-client machines are considered “intelligent” termi-
nals. This means that they often contain their own memory and display
instructions, but get all their information from a server. There is no local
operating system, no hard drive, and very little processing capability. The
true differentiation between a thin-client machine and fat-client machine is
that a fat client has a hard drive and a thin client doesn’t.
So how does all this apply to Windows 2000 Terminal Services and
Citrix MetaFrame? For starters, both are thin-client software packages.
They each provide the user with a virtual desktop, a concept familiar to
users of Windows and other similar graphical environments. Application
processing is handled at the server level, allowing older PCs with operating
systems such as DOS or even UNIX to execute applications seamlessly
within a Windows 2000 desktop. Seamless execution means that the fact
that the application’s processing is taking place at the server level should
be transparent to the end user. Terminal Services and MetaFrame both
provide a multiuser environment to the Windows 2000 operating system
and both utilize the same underlying infrastructure.
NOTE
Windows 2000 Terminal Services is commonly referred to as simply
Terminal Services.
The Beginning of Terminal
Services and MetaFrame
It is impossible to discuss the history of Windows NT Terminal Services
without also discussing the history of Citrix. Ed Iacobucci was the head of
the IBM/Microsoft joint effort to develop OS/2. As part of that development
effort, Iacobucci conceived an idea whereby different types of computers on
the network would be able to run OS/2 even though they were not
designed to do so.
His idea marked the beginnings of MultiWin technology. MultiWin per-
mits multiple users to simultaneously share the CPU, network cards, I/O
ports, and other resources that the server has available. This technology is
the basis for multiuser support.
Iacobucci left IBM in 1989 to form Citrix Systems when neither Micro-
soft nor IBM was interested in his MultiWin technology. Citrix developed
the technology, known as MultiView, for the OS/2 platform. Unfortunately
for them, the days of OS/2 were numbered. In 1991, sensing that his com-
pany was in trouble, Iacobucci turned to Microsoft to try to develop the
same technology for the Windows NT platform.
Microsoft granted Citrix a license to its NT source code and bought a
six-percent stake in the company. The success of Citrix would only help
Microsoft’s market share grow at a time when they had a relatively small
percentage of the market. The investment paid off. In 1995, Citrix shipped
WinFrame and brought multiuser computing to Windows NT for the first
time. However, the success not only of WinFrame but also of the NT plat-
form in general would be a problem for Citrix. With sales of Windows NT at
an enormously strong level, Microsoft decided they no longer needed the
help of Citrix for thin-client computing. As a result, they notified Citrix of
their intent to develop their own multiuser technology in February of 1997.
Citrix’s stock took an immediate nose-dive when the announcement
was made public. Shares fell 60 percent in a single day, and the future of
the company was uncertain. After several months of intense negotiations
between the two companies, a deal was struck. Microsoft’s desire was to
immediately become a player in the thin-client world, but developing their
own architecture to do so would be time consuming. So Citrix agreed to
license their MultiWin technology to Microsoft to incorporate into future
versions of Windows. In return, Citrix had the right to continue the devel-
opment of the WinFrame 1.x platform independent of Microsoft, and also to
develop the MetaFrame expansions of Microsoft’s new Terminal Services
platform. These two products are based on Citrix’s Independent Computing
Architecture (ICA) protocol, which we will discuss later in this chapter.
Introduction of Terminal Services
By the middle of 1998, Microsoft had developed and released Windows NT
Server 4.0, Terminal Services Edition. This was Microsoft’s first attempt at
a thin-client operating system, and it borrowed heavily from Citrix’s earlier
efforts. While NT 4.0 Terminal Services looks the same as a regular NT 4.0
server, they are substantially different. Service packs for one will not work
for the other. Hot fixes have to be written separately as well. Even printer
drivers sometimes need to be “Terminal Services aware,” or certified to
work with Terminal Services. Windows NT 4.0, Terminal Services Edition
shipped as a completely independent platform with a rather hefty price tag.
Citrix soon followed with MetaFrame 1.0 for Windows NT 4.0, Terminal
Services Edition, and later with MetaFrame 1.8. Both versions of MetaFrame
had several advantages over Windows’ Terminal Services. Microsoft bor-
rowed some of those advantages when they developed Windows 2000 with
Terminal Services. With this release, Terminal Services is incorporated
directly into the Windows 2000 platform as a service rather than an
entirely separate architecture. This simplifies maintenance by allowing
Windows 2000 servers with Terminal Services to receive the same up-
grades and hot fixes as any other Windows 2000 server, rather than wait-
ing for a specific Terminal Services version. Any Windows 2000 server can
install Terminal Services, though a separate license may be required
depending on the role the server will play. We’ll look at those roles under
the Windows 2000 Terminal Services section.
Continuing with their agreement, Citrix has released MetaFrame 1.8 for
Windows 2000 Servers. There is no upgrade path from MetaFrame for NT 4.0 Terminal Services Edition, but the new release still provides functionality that
Terminal Services alone cannot. In addition, Citrix’s ICA protocol is consid-
ered to be faster than Microsoft’s Remote Desktop Protocol (RDP). Citrix
also provides some additional tools that can be added to MetaFrame to
extend its functionality and administration abilities. We’ll look at each
product individually and explore their advantages and disadvantages.
Windows 2000 Terminal Services
Terminal Services provides Windows 2000 administrators with the ability
to distribute a multiuser environment to fat- and thin-client machines.
We’ve already discussed the advantages of managing a centralized com-
puting system. Microsoft makes full use of those advantages in presenting
Windows 2000 Terminal Services as a viable thin-client solution. Microsoft
bases Terminal Services on the RDP protocol. RDP 5.0 is the version cur-
rently shipping with Windows 2000, and it is considerably improved over RDP 4.0, which shipped with NT 4.0 Terminal Services. We'll go into RDP in much more detail a little later in this chapter. For now, it's enough to know that RDP is the underlying technology for Terminal Services, just as ICA is the underlying technology used by Citrix.
What Exactly Is Terminal Services?
Terminal Services is a complete multiuser technology used in conjunction
with Windows 2000 Server or Advanced Server to give users who connect to the Terminal Services-enabled server a traditional Windows 2000 desktop view. Typically, users run a client program on their local PC that makes the connection to the remote Terminal Services server. Figure 1.6 shows
how this client presents the remote desktop on the local user’s machine.
Figure 1.6 Terminal Services client view.