Linux in a Windows World/Linux's Place in a Windows Network/Linux Deployment Strategies
(Initial conversion from Docbook)
Creating a plan for deploying Linux can make the difference between success and failure in that endeavor. Although it's possible to simply drop one or two isolated Linux boxes onto a network and have them work correctly, integration with other systems—particularly Windows computers—requires careful planning. You need to select particular server programs to use on the Linux computer that interact with the clients in the way you intend, so as not to disrupt existing servers. In the case of a desktop migration, careful planning and testing are in order. The problem in this case isn't so much the technical challenges of configuring a single system, but the difficulties involved in ensuring that all your existing files are accessible and that all your users are comfortable with the new systems. Finally, thin client deployment poses its own challenges. Knowing when to use thin clients, and how Linux can fit into a thin client strategy, will help you design and implement such a deployment.
One of the most fundamental aspects of deploying Linux is installing the OS. This book doesn't provide a chapter on Linux installation, both because the task varies substantially from one distribution to another and because I presume you don't need that level of detail. If you're completely new to Linux, you should probably buy a more introductory book, ideally one targeted at the distribution you've chosen. At a minimum, you should consult the documentation that came with your distribution for help on how to install it.
Linux Server Options
Chapter 1 described Linux's features as a server OS in broad strokes, including information on common server distributions and pointers to a few specific server programs. This chapter continues this examination with a closer look at the types of servers covered in this book. This information isn't enough to get the server programs up and running, though; for that, you should consult the relevant chapters of this book. Rather, these descriptions are intended to help you decide precisely what servers you should run—whether to use NetBIOS domains or Kerberos for authentication, for instance.
Linux File and Print Servers
One very popular role for Linux servers on Windows-dominated networks is as file and print servers. These computers can store users' files and Windows programs, and make printers available to all users in an area. Some server programs handle both file and print services, but others perform just one role. Common file server protocols on Linux include:
- NFS
- The Network File System is a popular protocol for Unix-to-Unix file sharing. It provides Unix-style file metadata, such as ownership and permissions, so it's very well suited to file sharing between Linux systems or between Linux and other Unix-like OSs. NFS is not, however, ideal for file sharing with Windows clients; NFS client software for Windows isn't common, and NFS lacks support for some Windows filesystem features, such as system and hidden bits. For this reason, this book doesn't describe configuring Linux as an NFS server or running NFS clients on Windows.
- This protocol is a common one on Macintosh networks, particularly those with systems that run the older Mac OS Classic (that is, Mac OS prior to Mac OS X). Sometimes referred to as AppleTalk, which is the lower-level protocol upon which AppleShare relies, this protocol provides features required by Mac OS but not used by other OSs. This protocol isn't common on Windows-only networks. You might want to run it to support Mac OS clients, but it's not described in this book. Two AppleShare servers are common on Linux: Netatalk (http://netatalk.sourceforge.net) and the Columbia AppleTalk Package (CAP; http://www.cs.mu.oz.au/appletalk/cap.html).
- The NetWare Core Protocol is a file- and printer-sharing protocol traditionally used by Novell's NetWare product, a server OS that delivers files to DOS, Windows, and other clients. As such, it is, in principle, a good candidate for a protocol to run on a Windows-dominated network; however, Linux's NCP server software, MARS_NWE (http://www.compu-art.de/mars_nwe/), has never been enthusiastically embraced. For this reason, I don't describe it in this book and instead focus on SMB/CIFS.
- The Server Message Block/Common Internet File System is the most popular file- and printer-sharing protocol in the Windows world. In Linux, it's implemented by the Samba server (http://www.samba.org). SMB/CIFS provides the filesystem features used by Windows, so Linux servers must find a way to implement them, and Samba provides numerous options to do so. Because of its popularity on Windows networks, this book devotes all of Part II to Samba.
AppleTalk, NCP, and SMB/CIFS all provide printer sharing as well as file sharing; however, NFS is a file sharing system only. To provide printer sharing among themselves, Unix systems typically use other protocols. These protocols are also used for local printing: programs submit print jobs locally to the same server that accepts remote print jobs. The most common tools for the job are as follows:
- The Line Printer Daemon is both the name of a server and the protocol it implements. This has been the most common network printer sharing protocol in the Unix and Linux worlds for a long time. Until recently, Linux systems have used LPD as the default local printing queue, as well. Two LPD server implementations are common in Linux: the original Berkeley Standard Distribution (BSD) LPD and the next-generation LPRng (http://www.lprng.com).
- The Internet Printing Protocol is implemented most often by the Common Unix Printing System (CUPS). This protocol was designed to simplify network printer sharing configuration by supporting auto-detection of local printers. It also features mechanisms to deliver information about printers to applications so that they can set margins appropriately, give users the option of activating duplexers and other advanced features, and so on. Most major Linux distributions now use CUPS as their default printing system. Although IPP is seldom used directly by Windows, Chapter 4 describes some basics of CUPS configuration in support of sharing printers with Windows systems via Samba.
- A non-Unix printing system
- You can use a non-Unix printing system, such as AppleShare, NCP, or SMB/CIFS, to share printers between Linux systems. This approach can sometimes be convenient if you've shared a printer using one of these systems and want to make the printer available to other Linux systems. If you use CUPS, sharing between the Linux systems should be simpler.
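If you do share printers between Linux systems with CUPS, only a few cupsd.conf directives are typically involved. The following sketch uses CUPS 1.x-era directive names, and the address range is an example, not a recommendation:

```
# /etc/cups/cupsd.conf excerpt (CUPS 1.x syntax; addresses are examples)
# Advertise local printers to the LAN...
BrowseAddress 192.168.1.255

# ...and allow LAN clients to print to them
<Location /printers>
  Order Deny,Allow
  Deny From All
  Allow From 192.168.1.0/255.255.255.0
</Location>
```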
Because of the dominant role of SMB/CIFS in Windows file and printer sharing, this book strongly emphasizes the use of Samba as a file and printer sharing tool for Windows networks. Configuring a basic Samba server requires adjusting just a few configuration options, but the server provides numerous options that enable you to fine-tune the configuration and define file and printer shares for all occasions.
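As an illustration of how little a basic Samba configuration requires, the following smb.conf sketch defines per-user home shares and printer shares. The parameter values shown are examples only; Part II covers the real options in detail.

```
# A minimal smb.conf sketch; adjust names and paths to your site
[global]
   # Match your Windows workgroup or domain name
   workgroup = WORKGROUP
   # Require username/password authentication
   security = user

# Per-user home directory shares
[homes]
   read only = no
   browseable = no

# Export all printers known to the local printing system
[printers]
   printable = yes
   path = /var/spool/samba
```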
Linux Authentication Servers
Maintaining local account databases can quickly become a major hassle when more than a handful of computers are involved, particularly when users frequently move between computers (as in a university's computing center). Part V of this book is devoted to authentication servers—servers that tell other computers whether a user has entered a valid username and password (or otherwise provided valid authentication credentials). By localizing the authentication process to just one computer (or conceivably a master computer and a small number of backups), account maintenance can be greatly simplified. Several authentication systems are in common use:
- NIS and NIS+
- The Network Information Service and its variant, NIS+, have been the traditional Unix methods of providing centralized login services. In fact, NIS and NIS+ go beyond this duty, but providing authentication services has been one of their main purposes. Like LPD, though, NIS and NIS+ are showing their age. They're also not commonly used on Windows networks, so this book doesn't cover them.
- Windows NT domains
- The authentication system used by SMB/CIFS can provide network authentication. This system is built around Windows NT domains, which use a computer known as the domain controller to authenticate users on behalf of all servers. Configuring Samba to function as a domain controller is described in Chapter 5, and configuring a Linux system to authenticate accounts against a domain controller is described in Chapter 7. Note that, when Linux is configured to use a domain controller for its own accounts, that domain controller can be either a Linux (or other Unix-like) system running Samba or a Windows NT/200x domain controller.
- LDAP
- The Lightweight Directory Access Protocol is essentially a type of database. It's often used to store account information, and when so configured, you can set up clients to access the LDAP server. Although configuring Windows systems to directly access an LDAP server for authentication is unusual, it is possible, and LDAP is becoming increasingly common. Furthermore, LDAP is used as a component in Microsoft's Active Directory authentication system. For these reasons, Chapter 8 describes LDAP authentication.
- Kerberos
- This tool, named after the underworld's three-headed guard dog from Greek mythology, is a high-security cross-platform authentication and encryption system. You can configure clients to use Kerberos for a few protocols or for everything, including local logins. One of the main advantages of Kerberos is that it supports single-login operation; that is, you enter your username and password once, and thereafter you don't need to enter them again, even when you access new servers. For instance, after Kerberos-based local login, you don't need to enter your password when retrieving your mail from a POP server or logging into a remote system via Telnet. Kerberos is also a component of Microsoft's AD. Chapter 9 describes this system in more detail.
- Active Directory
- If your network already uses AD, chances are it already uses both LDAP and Kerberos (Kerberos might not be enabled in AD, but it usually is); however, Microsoft's Kerberos implementation is a bit odd, and AD configuration in Linux is complex. Windows AD servers, however, can also use the same NT domain protocols Linux systems use. Thus, if you want a Linux server to authenticate users against an existing AD domain controller, your best bet is to treat it like an NT domain controller. If you want Linux to take over AD domain controller duties, you're out of luck, at least as of early 2005. You can migrate the network to another authentication system, though.
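To make the NT domain option concrete, a Samba domain controller hinges on just a few global smb.conf parameters. This is only a sketch: the domain name is an example, and a working controller also needs additional pieces (such as a [netlogon] share and machine trust accounts) of the sort Chapter 5 describes.

```
# Key smb.conf settings for Samba as an NT-style domain controller
[global]
   # The NT domain name (example value)
   workgroup = EXAMPLE
   security = user
   # Accept NT domain logons from clients
   domain logons = yes
   # Claim the domain master browser role
   domain master = yes
   preferred master = yes
```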
Which authentication system should you use? In most cases, you should stick with whatever you're using now, unless that system is causing you problems. If you don't currently use a centralized authentication system but want to implement one, any of these tools should work well. NT domains are particularly useful if you've got many older Windows 9x/Me systems. LDAP's strength is in handling large numbers of users and in creating synchronized sets of login servers for redundancy in case of network problems. Kerberos was designed with security, cross-platform operation, and single-login operation in mind, but to get the most out of it, you need to use special Kerberized clients and servers—that is, programs that have been modified to use Kerberos.
Remote Login Servers
Remote login servers, as the name implies, enable you to log into a computer remotely. Broadly speaking, these servers come in two types: text-mode and GUI. Examples of these servers include:
- rlogin
- This protocol and its server were once a common way to access one Unix system from another in text mode; however, rlogin's security is based on a trusted-hosts model, which means that the server trusts the security on the client. In today's network environment, this is an unsound assumption on any but the most private of LANs, and then only when all users can be trusted. For this reason, rlogin is a poor choice for remote login duties and isn't further described in this book.
- This protocol and server normally requires authentication by entering a username and password during the text-mode login process. This is a step up from rlogin, but Telnet (like rlogin) sends all data, including the password, over the network in an unencrypted form. This makes Telnet a very risky protocol on any but very well-protected LANs, and it should never be used over the Internet at large. Nonetheless, Telnet is still fairly common.
- The Secure Shell protocol provides encryption for all data it passes between systems, including the username, the password, and all other data. This characteristic makes it the preferred protocol for remote text-mode logins. SSH also supports tunneling data—passing data through SSH to create an encrypted connection for a protocol that doesn't normally support encryption. This ability is most easily accessed for X servers; it enables SSH to function as an encrypted link for remote GUI logins, thus straddling the line between the text-mode and GUI tools. Chapter 10 describes SSH in more detail.
- The X Window System
- Linux's default GUI environment, the X Window System (or X for short) is network-enabled; you can have a program (an X client) running on one computer and use the X server on another computer to display a window and accept keyboard and mouse input. One unusual feature of this arrangement is that it places the server on the computer at which the user is sitting. This fact can be confusing because most people think of servers as being remote and powerful computers. This arrangement also creates a chicken-and-egg problem: how do you tell the remote client to launch a program that uses your local X server as a display? One answer is to use a text-mode login tool, such as Telnet or SSH, to create an initial connection, as described in Chapter 11. Another answer is to use a dedicated X login server protocol, described next.
- The X Display Manager Control Protocol is a login protocol for X. An XDMCP server runs on the X client system and accepts login requests from X servers. Linux uses XDMCP locally to provide GUI login screens for users, but you can reconfigure the XDMCP server to accept remote logins, as well. Three XDMCP servers are common in Linux: the original X Display Manager (XDM), the KDE Display Manager (KDM), and the GNOME Display Manager (GDM). All these tools are described in Chapter 11.
- The Remote Frame Buffer protocol can transfer an entire desktop bitmap over the network wire and accept back keyboard and mouse inputs. RFB is most commonly implemented in a server known as Virtual Network Computing (VNC). Under Linux, VNC is implemented as a special X server that uses a network connection to a VNC client rather than a local display, keyboard, and mouse for input and output. One consequence of this arrangement is that the VNC client/server terminology is more intuitive to most people: the VNC client runs on the user's computer, and the server is the remote system the user wants to access. A conventional Linux VNC configuration has the user launch a VNC server after logging in through some other means, such as a text-mode connection, but you can configure VNC in other ways. VNC servers for Windows are also available, enabling you to log into Windows systems from Linux. Chapter 11 describes VNC.
As a general rule, SSH is the best choice for text-mode logins because of its security features. (Kerberos ships with a version of Telnet that encrypts data, though, so the Kerberos Telnet can be a good choice, too.) You can also use SSH to tunnel an X connection, thus providing encryption for your X session. When it comes to remote GUI access, both "plain" X and VNC have their advocates. The two systems send data over the network in different ways, so their performance differs in ways that depend on the characteristics of the network. As a general rule, VNC performs well when the network has ample bandwidth, whether latency is high or low. X, by contrast, sends less data and so needs less bandwidth, but X sends lots of back-and-forth transactions and so works best when network latencies are low. You should treat these rules of thumb with some skepticism, though; variant protocols, tunneling X through SSH, and so on can alter both protocols' performance characteristics radically.
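As a sketch of these options in practice, the commands below show a tunneled X session and a basic VNC session; the hostnames and display numbers are examples:

```
# Text-mode login with X forwarding; programs started in this
# session display on the local X server through the encrypted tunnel
ssh -X user@remotehost
xterm &

# Alternatively, start a VNC server on the remote system (display :1)
# and attach to it from the local machine
vncserver :1
vncviewer remotehost:1
```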
Linux Mail Servers
Mail is an important part of many small networks, as well as of the Internet at large. Broadly speaking, mail protocols can be classified as push mail protocols, in which the sender initiates the transfer, or pull mail protocols, in which the recipient initiates the transfer. Several mail protocols exist, and several servers can handle each:
- SMTP
- The Simple Mail Transfer Protocol is the most common push mail protocol on the Internet. sendmail (http://www.sendmail.org), Postfix (http://www.postfix.org), and Exim (http://www.exim.org) are the SMTP servers that most commonly ship with Linux distributions; qmail (http://www.qmail.org) is also popular. Each is a major server, so to conserve space, this book describes just two in Chapter 13: sendmail and Postfix. The most popular SMTP server on the Internet is sendmail, and it's the default with many Linux distributions; however, sendmail is also tricky to configure for anything but a basic default setup, at least for those who aren't already sendmail adepts. Postfix was designed as an alternative to sendmail using a modular design and streamlined configuration process, and distributions have slowly been switching to it as the default mail server. The default Postfix configuration file is very well-commented, and Postfix is usually easier for novice mail administrators to configure. Both sendmail and Postfix can interface with other mail server tools, which can perform virus scanning, spam checking, and other mail-related services.
- The Internet Message Access Protocol is a popular way to deliver mail to end users. In a simple configuration, a mail server computer runs an SMTP server to receive off-site mail and also runs a POP or IMAP server to deliver mail to end users who run mail clients such as Microsoft's Outlook or KDE's KMail. IMAP enables users to store mail in folders on the server, which makes it handy if users want to access their mail from different programs or computers. This feature can increase the disk requirements of the mail server computer, though. Numerous IMAP servers for Linux exist, including the University of Washington IMAP (UW-IMAP; http://www.washington.edu/imap/), Cyrus IMAP (http://asg.web.cmu.edu/cyrus/imapd/), Courier IMAP (http://www.inter7.com/courierimap/), and Dovecot (http://dovecot.org). Which one you use depends in part on your SMTP server because the IMAP server must be able to read the mail stored by the SMTP server. UW-IMAP and Cyrus IMAP both read mail in the format that's the default for sendmail, Postfix, and Exim. If you use qmail and its maildir format, or if you reconfigure another SMTP server to use this format, Courier IMAP is a better choice. Dovecot can handle both formats. IMAP servers are covered in Chapter 13.
- The Post Office Protocol is another pull mail server, similar in basic concept to IMAP. POP, though, provides no means to store mail in folders on the server; typically, the client downloads all the messages and deletes them from the server. The user then stores messages locally, if desired. The four IMAP servers just mentioned also support POP. Several others, such as Qpopper (http://www.eudora.com/qpopper/) and qmail-pop3d (which ships with qmail), are also available. POP servers are covered in Chapter 13.
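As an example of Postfix's streamlined configuration, a small site typically adjusts only a handful of main.cf parameters. The domain names and addresses below are placeholders, not recommendations:

```
# /etc/postfix/main.cf excerpt; all values are examples
myhostname = mail.example.com
mydomain = example.com
# Make outgoing mail appear to come from the domain, not the host
myorigin = $mydomain
# Accept mail addressed to the host and the domain
mydestination = $myhostname, localhost.$mydomain, $mydomain
# Relay mail only for the local network
mynetworks = 192.168.1.0/24, 127.0.0.0/8
```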
Miscellaneous Linux Servers
In addition to file and printer sharing, authentication, remote login protocols, and mail servers, this book covers several other servers that are likely to be useful on Windows-dominated networks. These servers don't fit into neat categories, but some are extremely important, and, in fact, entire books have been written about some of them:
- Backup software
- Various servers can be used for backup purposes. One of these is Samba; you can mount a shared volume and back it up using local tools, or use more sophisticated techniques to do the job in other ways. Chapter 14 covers this topic, as well as a more specialized backup utility, the Advanced Maryland Automatic Network Disk Archiver (AMANDA; http://www.amanda.org). AMANDA's strength is in scheduling automated backups of many systems on a network, which can be a great boon if you need to automate the backup of a whole network. Commercial products, such as Veritas NetBackup (http://www.veritas.com) and Legato (http://www.legato.com), are also available.
- DHCP
- The Dynamic Host Configuration Protocol enables a single server to deliver IP addresses and other basic TCP/IP configuration information to clients when they boot or bring their network interfaces online. Even a modest Linux system can make an excellent DHCP server for your network. The Internet Software Consortium (ISC; http://www.isc.org) produces a reference DHCP server that's easily the most common Linux DHCP server. Chapter 15 covers this server.
- DNS
- The Domain Name System converts hostnames into IP addresses and vice versa. Each DNS server functions locally, but servers usually link together to function globally, providing name resolution for systems worldwide. The ISC's DNS server, the Berkeley Internet Name Domain (BIND), is the standard one for Linux. Alternatives do exist, though, such as djbdns (http://cr.yp.to/djbdns.html). The latter can be somewhat easier to configure, although managing a full Internet domain with either package isn't trivial. Linux can make a good DNS server, but how you do this depends on your intent. If you want to run a server so that the world can resolve your domain's IP addresses, you need to create a robust DNS server with good Internet connections. If you want to run a local DNS server so that local computers can resolve each other's addresses as well as addresses on the Internet, without providing your systems' names on the Internet, you can probably get by with a much simpler DNS server. Chapter 15 covers both BIND and djbdns.
- The Network Time Protocol enables a computer to set its clock to the time maintained by an atomic clock accessible on the Internet. (In fact, many Internet time sources are available, all of which link back to highly accurate sources in one way or another.) The Linux NTP server (http://www.ntp.org) ships with most distributions and functions as both a client and a server. It obtains its time from one or more remote sources and can operate as a server for your own local systems. Even a modest Linux system can function as an NTP server for all but very large networks. One alternative to NTP is to use a time-setting protocol that's part of SMB/CIFS. NTP is generally the cleaner approach on Linux, but you might use the SMB/CIFS time server functionality to set clocks on Windows clients from a Linux NTP server. Chapter 15 covers NTP.
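To illustrate how simple a small network's DHCP configuration can be, here is a minimal ISC dhcpd.conf sketch; all the addresses are examples:

```
# Minimal /etc/dhcpd.conf sketch; adjust addresses for your network
subnet 192.168.1.0 netmask 255.255.255.0 {
   range 192.168.1.100 192.168.1.200;
   option routers 192.168.1.1;
   option domain-name-servers 192.168.1.2;
   # Leases last one day
   default-lease-time 86400;
}
```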
Some protocols—most notably Kerberos—rely upon clients and servers having synchronized clocks. Thus, if you use Kerberos, you should also configure NTP or some other time protocol on all your Kerberos clients and servers.
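A minimal ntp.conf for such a local time server might look like the following sketch. The server names are examples, and the 127.127.1.0 entry is the NTP convention for the local system clock, used here as a low-priority fallback:

```
# /etc/ntp.conf sketch; server names are examples
server 0.pool.ntp.org
server 1.pool.ntp.org
# Fall back to the local clock (marked as low accuracy) if the
# network time sources become unreachable
server 127.127.1.0
fudge 127.127.1.0 stratum 10
```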
Linux Desktop Migration
In some ways, migrating desktop systems to Linux is more difficult than migrating a server. The problem isn't the migration process itself; that's very similar, although configuration of individual programs obviously differs. The problem is the scale of the migration; if you plan to migrate all of a site's users to Linux, you need to install and configure the OS on multiple systems, train the users, and deal with the inevitable glitches that will arise.
When considering a Linux desktop migration, you should begin by examining several factors that will influence the likelihood of a successful transition. These factors include the availability of administrative expertise, the need and your capacity for end-user training, the availability of appropriate desktop software for your site, the need for generating Windows-compatible files or reading files generated on Windows from off-site, and Linux compatibility of your existing hardware. Any of these factors might present a real challenge to Linux migration. Other changes you're planning can also interact with these factors; for instance, if you intend to upgrade some hardware, existing hardware compatibility may not be as important. In the end, you must evaluate the feasibility of a Linux migration yourself, based on your own site's needs.
If you decide to proceed with a migration, you should begin by examining your needs and developing a plan of action. Decide what software you'll need (both the distribution and the applications you'll run) and begin the migration with a small-scale test; it's better to iron out any wrinkles you encounter on a dozen machines rather than on a hundred machines. The small-scale deployment will enable you to fine-tune your deployment strategy before scaling it up. In fact, for a very large deployment, you may want to scale it up in several stages, starting with one or two test systems, then moving to a dozen or so, then a hundred, and so on.
Linux and Thin Clients
A lot of attention has been devoted to Linux on the desktop recently. The primary goal of Linux desktop operation is to give users access to typical desktop applications—word processors, spreadsheets, web browsers, etc. An alternative exists to this configuration, though: thin client computing. In many respects, thin client computing is very old; the typical mainframe model, with a large central server and many dumb terminals attached to it, closely resembles thin client computing. Thin clients, though, give users the ability to run GUI programs. Thin client computing has certain advantages and disadvantages compared to traditional workstation configurations. You can use Linux as a thin client OS or as the OS accessed by thin clients. Before going too far with a desktop Linux deployment, you may want to consider a Linux thin client solution. It's not for everybody, but some sites can benefit from it. For more details about thin client configuration, consult Chapter 12.
In a thin client configuration, most computers are thin clients—relatively limited computers that consist of a keyboard, a mouse, a monitor, and just enough computing power to display data on the screen and communicate with a central login server. This login server is a multiuser system that can handle all of the network's users' ordinary desktop computing tasks. As such, the central system must usually be quite powerful. Because a typical desktop computer's CPU is mostly idle as a user types or reads, and because a multiuser system can save memory by using shared libraries and similar tricks, the central system doesn't need to be as powerful as the combination of all the workstations it replaces. For instance, consider an office of 10 users, each of whom would otherwise need a 2-GHz Pentium 4 computer with 512 MB of RAM. In a thin client configuration, you probably don't need a 20-GHz Pentium 4 with 5 GB of RAM (if such a computer even existed!); something along the lines of a dual 3-GHz Pentium 4 with 2 GB of RAM will suffice. Actual requirements will depend on the specific applications, the network bandwidth, and other factors.
The thin clients themselves can be either dedicated hardware devices or recycled older computers. Even an 80486 system might make an acceptable thin client. Thin clients frequently boot from the network using Ethernet cards that support network boots and an appropriate set of servers. You typically need a DHCP server and a server running the Trivial File Transfer Protocol (TFTP). One type of thin client is known as an X terminal . This is basically a computer that runs an X server and little else. Other thin clients can use the RFB protocol or other protocols. As described in Chapter 12, several dedicated Linux thin client distributions exist, as well as tools that enable thin clients intended for Windows to connect to Linux servers.
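The DHCP side of such a network boot can be sketched with an ISC dhcpd host declaration like the following; the hardware address, IP addresses, and boot image name are examples, and the named file must actually exist on the TFTP server:

```
# dhcpd.conf sketch for one network-booting thin client
host thinclient1 {
   hardware ethernet 00:0c:29:aa:bb:cc;
   fixed-address 192.168.1.50;
   # The TFTP server that holds the boot image
   next-server 192.168.1.2;
   filename "/tftpboot/pxelinux.0";
}
```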
One big advantage of thin clients is that, by centralizing the bulk of the desktop software on one system, you can simplify system administration tasks. The thin clients themselves are simple enough that they require little in the way of maintenance, and as they download their OSs from a server, you can even administer them centrally. More important, the central login server is just one system—admittedly, one with many users, but one system nonetheless. Instead of rolling out a software update to dozens of computers, you can deal with just one. Particularly if you have a number of old computers on hand that you can recycle as thin clients, this approach can save money on hardware compared to upgrading desktop systems.
Thin clients are not without their drawbacks, though. Because GUI displays must be copied over the network, they require better network infrastructure than is required in a more conventional workstation configuration. The central login server will be particularly hard-hit by this requirement. You may need to upgrade your network to a higher speed or segment it and give the central server multiple network interfaces. As a rule of thumb, an unswitched 100-Mbps network can handle about a dozen thin clients; if you use switches, the number goes up to about 100 users. Configuring the thin clients to support sound and give users access to local floppy disks or other removable media may take extra effort. Because the entire network is wholly dependent on a single computer, a failure of that computer will be devastating.
Linux can function as a thin client OS. Typically, you'll prepare a custom Linux installation and configure it to load from the network or from a hard disk in the thin client itself. When connected to a Linux remote login server, you're likely to use X's networking capabilities to handle the communications. However, Linux can be used with RFB or with other protocols to provide users with remote access to a Windows remote login server.
Linux can also function as the central login server. Typically, you'll use X terminals (either dedicated hardware X terminals or old desktop systems configured as X terminals) as the thin clients, but you can use RFB instead, if you prefer or if you've found thin clients that support this protocol but not the X protocols. As a multiuser OS, Linux is particularly well-suited to function as a central login server. Of course, for all but the smallest network, you'll need a pretty powerful computer to fill this role—probably a multi-CPU system with several gigabytes of RAM.
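For an X terminal, pointing the thin client's X server at the login server is a one-line affair; the hostname below is an example:

```
# Request an XDMCP login screen from a specific login server
X :0 -query loginserver.example.com

# Or broadcast and take the first XDMCP server that responds
X :0 -broadcast
```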