The Network File System (NFS) is probably the most prominent network service using RPC. It allows you to access files on remote hosts in exactly the same way you would access local files. A mixture of kernel support and user-space daemons on the client side, along with an NFS server on the server side, makes this possible. This file access is completely transparent to the client and works across a variety of server and host architectures.
NFS offers a number of useful features:
Data accessed by all users can be kept on a central host, with clients mounting this directory at boot time (a sample boot-time mount entry follows this list). For example, you can keep all user accounts on one host and have all hosts on your network mount /home from that host. If NFS is set up alongside NIS, users can log into any system and still work on one set of files.
Data consuming large amounts of disk space can be kept on a single host. For example, all files and programs relating to LaTeX and METAFONT can be kept and maintained in one place.
Administrative data can be kept on a single host. There is no need to use rcp to install the same stupid file on 20 different machines.
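To give a concrete picture of the boot-time mounting mentioned above, a client might carry a line like the following in its /etc/fstab. The hostnames and mount point are the ones used in the mount example later in this section; treat the entry as a sketch rather than a recommended set of options:

# excerpt from /etc/fstab on vale -- mount vlager's /home on /users at boot time
vlager:/home    /users    nfs    rw    0    0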
It's not too hard to set up basic NFS operation on both the client and server; this chapter tells you how.
Linux NFS is largely the work of Rick Sladkey, who wrote the NFS kernel code and large parts of the NFS server.[1] The latter is derived from the unfsd user space NFS server, originally written by Mark Shand, and the hnfs Harris NFS server, written by Donald Becker.
Let's have a look at how NFS works. First, a client tries to mount a directory from a remote host on a local directory just as it would mount a physical device. However, the syntax used to specify the remote directory is different. For example, to mount /home from host vlager to /users on vale, the administrator issues the following command on vale:[2]
# mount -t nfs vlager:/home /users
mount will try to connect to the rpc.mountd mount daemon on vlager via RPC. The server will check whether vale is permitted to mount the directory in question and, if so, return a file handle to it. This file handle will be used in all subsequent requests to files below /users.
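On the server side, the directories that clients are allowed to mount are usually listed in the /etc/exports file, which rpc.mountd consults when it receives such a request. The following is only a minimal sketch for the example above; the exact option syntax depends on the NFS server you are running:

# excerpt from /etc/exports on vlager -- let vale mount /home read-write
/home    vale(rw)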
When someone accesses a file over NFS, the kernel places an RPC call to rpc.nfsd (the NFS daemon) on the server machine. This call takes the file handle, the name of the file to be accessed, and the user and group IDs of the user as parameters. These are used in determining access rights to the specified file. In order to prevent unauthorized users from reading or modifying files, user and group IDs must be the same on both hosts.
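A quick way to check that the numeric IDs really match is to run the id command for the same account on both hosts; the account name and numbers shown here are purely illustrative:

vlager# id janet
uid=501(janet) gid=100(users) groups=100(users)
vale# id janet
uid=501(janet) gid=100(users) groups=100(users)

Running NIS, as mentioned earlier, is one way of keeping these IDs consistent across all hosts on your network.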
On most Unix implementations, the NFS functionality of both client and server is implemented as kernel-level daemons that are started from user space at system boot. These are the NFS Daemon (rpc.nfsd) on the server host and the Block I/O Daemon (biod) on the client host. To improve throughput, biod performs asynchronous I/O using read-ahead and write-behind; also, several rpc.nfsd daemons are usually run concurrently.
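You can verify that the server-side daemons (rpc.nfsd and rpc.mountd) have registered themselves with the portmapper by querying it with rpcinfo. The output below is only a sketch; in particular, the port assigned to mountd is chosen dynamically and will differ from system to system:

# rpcinfo -p vlager
   program vers proto   port
    100003    2   udp   2049  nfs
    100005    1   udp    735  mountd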
The current NFS implementation of Linux is a little different from the classic NFS in that the server code runs entirely in user space, so running multiple copies simultaneously is more complicated. The current rpc.nfsd implementation offers an experimental feature that allows limited support for multiple servers. Olaf Kirch developed kernel-based NFS server support, which is featured in Version 2.2 Linux kernels. Its performance is significantly better than that of the existing user-space implementation. We'll describe it later in this chapter.
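As a brief preview, the rpc.nfsd that accompanies the kernel-based server typically takes the number of server threads to start as its argument, which makes running several servers in parallel trivial. The thread count of eight below is merely a common choice, not a requirement:

# rpc.nfsd 8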
[1] Rick can be reached at jrs@world.std.com.
[2] Actually, you can omit the -t nfs argument because mount sees from the colon that this specifies an NFS volume.