Section 4.1: Introduction
- This chapter presents an overview of the system architecture for the AFS-3 WADFS. Different treatments of the AFS system may be found in several documents, including [3], [4], [5], and [2]. Certain system features discussed here are examined in more detail in the set of accompanying AFS programmer specification documents.
- After the architectural overview, the system goals enumerated in Chapter 3 are revisited, and the contribution of the various AFS design decisions and resulting features is noted.
Section 4.2: The AFS System Architecture
Section 4.2.1: Basic Organization
- As stated in Section 3.2, a server-client organization was chosen for the AFS system. A group of trusted server machines provides the primary disk space for the central store managed by the organization controlling the servers. File system operation requests for specific files and directories arrive at server machines from machines running the AFS client software. If the client is authorized to perform the operation, then the server proceeds to execute it.
- In addition to this basic file access functionality, AFS server machines also provide related system services. These include authentication service, mapping between printable and numerical user identifiers, file location service, time service, and such administrative operations as disk management, system reconfiguration, and tape backup.
Section 4.2.2: Volumes
Section 4.2.2.1: Definition
- Disk partitions used for AFS storage do not directly host individual user files and directories. Rather, connected subtrees of the system's directory structure are placed into containers called volumes. Volumes vary in size dynamically as the objects they house are inserted, overwritten, and deleted. Each volume has an associated quota, or maximum permissible storage. A single unix disk partition may thus host one or more volumes, and in fact may host as many volumes as physically fit in the storage space. However, the practical maximum is currently 3,500 volumes per disk partition. This limitation is imposed by the salvager program, which examines and repairs file system metadata structures.
- There are two ways to identify an AFS volume. The first option is a 32-bit numerical value called the volume ID. The second is a human-readable character string called the volume name.
- Internally, a volume is organized as an array of mutable objects, representing individual files and directories. The file system object associated with each index in this internal array is assigned a uniquifier and a data version number. A subset of these values is used to compose an AFS file identifier, or FID. FIDs are not normally visible to user applications, but rather are used internally by AFS. They consist of ordered triplets, whose components are the volume ID, the index within the volume, and the uniquifier for the index.
- To understand AFS FIDs, let us consider the case where index i in volume v refers to a file named example.txt. This file's uniquifier is currently set to one (1), and its data version number is currently set to zero (0). The AFS client software may then refer to this file with the following FID: (v, i, 1). The next time a client overwrites the object identified with the (v, i, 1) FID, the data version number for example.txt will be promoted to one (1). Thus, the data version number serves to distinguish between different versions of the same file. A higher data version number indicates a newer version of the file.
- Consider the result of deleting file (v, i, 1). This causes the body of example.txt to be discarded, and marks index i in volume v as unused. Should another program create a file, say a.out, within this volume, index i may be reused. If it is, the creation operation will bump the index's uniquifier to two (2) and reset the data version number to zero (0). Any client caching a FID for the deleted example.txt file thus cannot affect the completely unrelated a.out file, since the uniquifiers differ.
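- To make the FID bookkeeping concrete, the following is a minimal C sketch of the (volume ID, index, uniquifier) triplet and of the overwrite, delete, and index-reuse cases described above. The structure layouts, field names, and helper functions are illustrative assumptions, not the actual AFS data structures.
```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative FID triplet: volume ID, index within the volume, and the
 * uniquifier for that index.  Field names are hypothetical. */
struct example_fid {
    uint32_t volume_id;
    uint32_t vnode_index;
    uint32_t uniquifier;
};

/* Per-index bookkeeping kept by the volume: the current uniquifier and
 * the data version number of the object stored at that index. */
struct example_slot {
    uint32_t uniquifier;
    uint32_t data_version;
    int in_use;
};

/* Overwriting an object bumps only its data version number. */
static void overwrite(struct example_slot *s) { s->data_version++; }

/* Deleting frees the index; reusing it bumps the uniquifier and resets
 * the data version, so stale FIDs held by clients can no longer match. */
static void delete_object(struct example_slot *s) { s->in_use = 0; }
static void reuse_index(struct example_slot *s)
{
    s->uniquifier++;
    s->data_version = 0;
    s->in_use = 1;
}

int main(void)
{
    struct example_slot slot = { 1, 0, 1 };   /* example.txt: (v, i, 1), data version 0 */
    overwrite(&slot);                         /* data version becomes 1 */
    delete_object(&slot);                     /* example.txt removed */
    reuse_index(&slot);                       /* a.out now occupies index i */

    struct example_fid fid = { 1, 1, slot.uniquifier };   /* (v, i, 2) names a.out */
    printf("FID (%u, %u, %u), data version %u\n",
           (unsigned)fid.volume_id, (unsigned)fid.vnode_index,
           (unsigned)fid.uniquifier, (unsigned)slot.data_version);
    return 0;
}
```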
Section 4.2.2.2: Attachment
- The connected subtrees contained within individual volumes are attached to their proper places in the file space defined by a site, forming a single, apparently seamless unix tree. These attachment points are called mount points, and are persistent file system objects implemented as symbolic links whose contents obey a stylized format. Thus, AFS mount points differ from NFS-style mounts. In the NFS environment, the user dynamically mounts entire remote disk partitions using any desired name. These mounts do not survive client restarts, and do not insure a uniform namespace between different machines.
- A single volume is chosen as the root of the AFS file space for a given organization. By convention, this volume is named root.afs. Each client machine belonging to this organization performs a unix mount() of this root volume (not to be confused with an AFS mount point) on its empty /afs directory, thus attaching the entire AFS name space at this point.
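- Because a mount point is simply a symbolic link with specially formatted contents, client code can recognize one by reading the link. The exact on-disk format is not given here, so the sketch below assumes a purely hypothetical convention (a leading '#' marker followed by a volume name) just to illustrate the idea; it is not the actual AFS mount-point syntax.
```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical convention: a mount point is a symlink whose target looks
 * like "#volume-name".  The real AFS stylized format is not reproduced here. */
static int read_mount_point(const char *path, char *volume, size_t vlen)
{
    char target[256];
    ssize_t n = readlink(path, target, sizeof(target) - 1);
    if (n <= 1)
        return -1;                 /* not a symlink, unreadable, or too short */
    target[n] = '\0';
    if (target[0] != '#')
        return -1;                 /* an ordinary symlink, not a mount point */
    strncpy(volume, target + 1, vlen - 1);
    volume[vlen - 1] = '\0';
    return 0;
}

int main(int argc, char **argv)
{
    char volume[128];
    if (argc > 1 && read_mount_point(argv[1], volume, sizeof(volume)) == 0)
        printf("%s is a mount point for volume %s\n", argv[1], volume);
    return 0;
}
```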
Section 4.2.2.3: Administrative Uses
- Volumes serve as the administrative unit for AFS file system data, providing the basis for replication, relocation, and backup operations.
Section 4.2.2.4: Replication
- Read-only snapshots of AFS volumes may be created by administrative personnel. These clones may be deployed on up to eight disk partitions, on the same server machine or across different servers. Each clone carries the identical volume ID, which must differ from that of its read-write parent. Thus, at most one clone of any given volume v may reside on a given disk partition. File references to this read-only clone volume may be serviced by any of the servers which host a copy.
Section 4.2.2.5: Backup
- Volumes serve as the unit of tape backup and restore operations. Backups are accomplished by first creating an on-line backup volume for each volume to be archived. This backup volume is organized as a copy-on-write shadow of the original volume, capturing the volume's state at the instant that the backup took place. Thus, the backup volume may be envisioned as being composed of a set of object pointers back to the original image. The first update operation on the file located in index i of the original volume triggers the copy-on-write association. This causes the file's contents at the time of the snapshot to be physically written to the backup volume before the newer version of the file is stored in the parent volume.
- Thus, AFS on-line backup volumes typically consume little disk space. On average, they are composed mostly of links and to a lesser extent the bodies of those few files which have been modified since the last backup took place. Also, the system does not have to be shut down to insure the integrity of the backup images. Dumps are generated from the unchanging backup volumes, and are transferred to tape at any convenient time before the next backup snapshot is performed.
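- As a rough illustration of the copy-on-write behavior described above, the sketch below models a backup volume as a set of references that initially point back at the parent volume's objects, with a private copy made only when a parent object is first updated. The names and structures are assumptions for illustration, not the actual server code.
```c
#include <stdio.h>
#include <string.h>

#define NOBJ 4
#define BODY 64

/* Parent volume: each index holds the current file body. */
static char parent[NOBJ][BODY];

/* Backup volume: initially just references back to the parent's objects. */
static const char *backup[NOBJ];
static char backup_copy[NOBJ][BODY];   /* private copies, filled lazily */
static int  backup_owns[NOBJ];         /* 1 once a private copy exists */

static void take_snapshot(void)
{
    for (int i = 0; i < NOBJ; i++) {
        backup[i] = parent[i];         /* point back at the original image */
        backup_owns[i] = 0;
    }
}

/* The first update after the snapshot copies the old body into the backup
 * volume before the newer version is stored in the parent. */
static void update_parent(int i, const char *new_body)
{
    if (!backup_owns[i]) {
        strncpy(backup_copy[i], parent[i], BODY - 1);
        backup[i] = backup_copy[i];
        backup_owns[i] = 1;
    }
    strncpy(parent[i], new_body, BODY - 1);
}

int main(void)
{
    strncpy(parent[0], "original contents", BODY - 1);
    take_snapshot();
    update_parent(0, "newer contents");
    printf("parent: %s\nbackup: %s\n", parent[0], backup[0]);
    return 0;
}
```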
Section 4.2.2.6: Relocation
- Volumes may be moved transparently between disk partitions on a given file server, or between different file server machines. The transparency of volume motion comes from the fact that neither the user-visible names for the files nor the internal AFS FIDs contain server-specific location information.
- Interruption to file service while a volume move is being executed is typically on the order of a few seconds, regardless of the amount of data contained within the volume. This derives from the staged algorithm used to move a volume to a new server. First, a dump is taken of the volume's contents, and this image is installed at the new site. The second stage involves actually locking the original volume and taking an incremental dump to capture file updates made since the first stage. The third stage installs the changes at the new site, and the fourth stage deletes the original volume. Further references to this volume will resolve to its new location.
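- The staged relocation algorithm lends itself to a short control-flow sketch. In the outline below the dump, lock, install, and delete steps are stubbed out as hypothetical helper functions; it is not the actual volume-move implementation, and the example volume name is illustrative.
```c
#include <stdio.h>

/* Hypothetical stage helpers; in the real system these correspond to
 * volume dump/restore operations and an update of the volume's location. */
static int full_dump_and_install(const char *vol)
{
    printf("stage 1: full dump of %s installed at the new site\n", vol);
    return 0;
}
static int lock_and_incremental_dump(const char *vol)
{
    printf("stage 2: %s locked; incremental dump captures recent updates\n", vol);
    return 0;
}
static int install_changes(const char *vol)
{
    printf("stage 3: incremental changes for %s installed at the new site\n", vol);
    return 0;
}
static int delete_original(const char *vol)
{
    printf("stage 4: original %s deleted; references resolve to the new site\n", vol);
    return 0;
}

/* Outline of the staged move: only the work done while the volume is
 * locked contributes to the brief interruption in file service. */
static int move_volume(const char *vol)
{
    if (full_dump_and_install(vol) != 0)     return -1;
    if (lock_and_incremental_dump(vol) != 0) return -1;
    if (install_changes(vol) != 0)           return -1;
    return delete_original(vol);
}

int main(void)
{
    return move_volume("user.erz");   /* an illustrative volume name */
}
```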
Section 4.2.3: Authentication
- AFS uses the Kerberos [22] [23] authentication system developed at MIT's Project Athena to provide reliable identification of the principals attempting to operate on the files in its central store. Kerberos provides for mutual authentication, not only assuring AFS servers that they are interacting with the stated user, but also assuring AFS clients that they are dealing with the proper server entities and not imposters. Authentication information is mediated through the use of tickets. Clients register passwords with the authentication system, and use those passwords during authentication sessions to secure these tickets. A ticket is an object which contains an encrypted version of the user's name and other information. The file server machines may request a caller to present their ticket in the course of a file system operation. If the file server can successfully decrypt the ticket, then it knows that it was created and delivered by the authentication system, and may trust that the caller is the party identified within the ticket.
- Such subjects as mutual authentication, encryption and decryption, and the use of session keys are complex ones. Readers are directed to the above references for a complete treatment of Kerberos-based authentication.
Section 4.2.4: Authorization
Section 4.2.4.1: Access Control Lists
- AFS implements per-directory Access Control Lists (ACLs) to improve the ability to specify which sets of users have access to the files within the directory, and which operations they may perform. ACLs are used in addition to the standard unix mode bits. ACLs are organized as lists of one or more (principal, rights) pairs. A principal may be either the name of an individual user or a group of individual users. There are seven expressible rights, as listed below; a small illustrative sketch of an ACL entry follows the list.
- Read (r): The ability to read the contents of the files in a directory.
- Lookup (l): The ability to look up names in a directory.
- Write (w): The ability to create new files and overwrite the contents of existing files in a directory.
- Insert (i): The ability to insert new files in a directory, but not to overwrite existing files.
- Delete (d): The ability to delete files in a directory.
- Lock (k): The ability to acquire and release advisory locks on a given directory.
- Administer (a): The ability to change a directory's ACL.
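- The seven rights can be pictured as bits in a small mask, with each ACL entry pairing a principal name with such a mask. The encoding and the access-check routine below are purely illustrative assumptions (the actual AFS bit values and checking code are not given here), and group expansion is deferred to the next section.
```c
#include <stdio.h>
#include <string.h>

/* Illustrative encodings of the seven ACL rights; actual values may differ. */
enum {
    RIGHT_READ   = 1 << 0,   /* r */
    RIGHT_LOOKUP = 1 << 1,   /* l */
    RIGHT_WRITE  = 1 << 2,   /* w */
    RIGHT_INSERT = 1 << 3,   /* i */
    RIGHT_DELETE = 1 << 4,   /* d */
    RIGHT_LOCK   = 1 << 5,   /* k */
    RIGHT_ADMIN  = 1 << 6    /* a */
};

/* One (principal, rights) pair on a directory's ACL. */
struct acl_entry {
    char principal[64];      /* user or group name */
    unsigned rights;
};

/* Does this ACL grant all of the 'wanted' rights to 'who'?
 * (No group expansion is performed in this sketch.) */
static int acl_allows(const struct acl_entry *acl, int n,
                      const char *who, unsigned wanted)
{
    for (int i = 0; i < n; i++)
        if (strcmp(acl[i].principal, who) == 0)
            return (acl[i].rights & wanted) == wanted;
    return 0;
}

int main(void)
{
    struct acl_entry acl[] = {
        { "erz", RIGHT_READ | RIGHT_LOOKUP | RIGHT_WRITE | RIGHT_INSERT |
                 RIGHT_DELETE | RIGHT_LOCK | RIGHT_ADMIN },
        { "system:anyuser", RIGHT_READ | RIGHT_LOOKUP }
    };
    printf("erz may write: %d\n", acl_allows(acl, 2, "erz", RIGHT_WRITE));
    return 0;
}
```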
Section 4.2.4.2: AFS Groups
- AFS users may create a certain number of groups, differing from the standard unix notion of group. These AFS groups are objects that may be placed on ACLs, and simply contain a list of AFS user names that are to be treated identically for authorization purposes. For example, user erz may create a group called erz:friends consisting of the kazar, vasilis, and mason users. Should erz wish to grant read, lookup, and insert rights to this group in directory d, he should create an entry reading (erz:friends, rli) in d's ACL.
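- Continuing the example, the sketch below shows how an ACL entry naming the erz:friends group could be evaluated by first checking whether the caller appears in the group's member list. The group representation and the hypothetical non-member user are, again, only illustrative assumptions.
```c
#include <stdio.h>
#include <string.h>

/* Illustrative group record: a name plus a list of member user names. */
struct afs_group {
    const char *name;
    const char **members;
    int nmembers;
};

static int is_member(const struct afs_group *g, const char *user)
{
    for (int i = 0; i < g->nmembers; i++)
        if (strcmp(g->members[i], user) == 0)
            return 1;
    return 0;
}

int main(void)
{
    const char *friends[] = { "kazar", "vasilis", "mason" };
    struct afs_group g = { "erz:friends", friends, 3 };

    /* The ACL entry (erz:friends, rli) applies to any caller who is a
     * member of erz:friends, e.g. vasilis but not a hypothetical user pat. */
    printf("vasilis in %s: %d\n", g.name, is_member(&g, "vasilis"));
    printf("pat in %s: %d\n",     g.name, is_member(&g, "pat"));
    return 0;
}
```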
- AFS offers three special, built-in groups, as described below.
- 1. system:anyuser: Any individual who accesses AFS files is considered by the system to be a member of this group, whether or not they hold an authentication ticket. This group is unusual in that it doesn't have a stable membership. In fact, it doesn't have an explicit list of members. Instead, the system:anyuser "membership" grows and shrinks as file accesses occur, with users being (conceptually) added and deleted automatically as they interact with the system.
- The system:anyuser group is typically put on the ACL of those directories for which some specific level of completely public access is desired, covering any user at any AFS site.
- 2. system:authuser: Any individual in possession of a valid Kerberos ticket minted by the organization's authentication service is treated as a member of this group. Just as with system:anyuser, this special group does not have a stable membership. If a user acquires a ticket from the authentication service, they are automatically "added" to the group. If the ticket expires or is discarded by the user, then the given individual will automatically be "removed" from the group.
- The system:authuser group is usually put on the ACL of those directories for which some specific level of intra-site access is desired. Anyone holding a valid ticket within the organization will be allowed to perform the set of accesses specified by the ACL entry, regardless of their precise individual ID.
- 3. system:administrators: This built-in group defines the set of users capable of performing certain important administrative operations within the cell. Members of this group have implicit 'a' (ACL administration) rights on every directory's ACL in the organization. Members of this group are the only ones which may legally issue administrative commands to the file server machines within the organization. This group is not like the other two described above in that it does have a stable membership, where individuals are added and deleted from the group explicitly.
- The system:administrators group is typically put on the ACL of those directories which contain sensitive administrative information, or on those places where only administrators are allowed to make changes. All members of this group have implicit rights to change the ACL on any AFS directory within their organization. Thus, they don't have to actually appear on an ACL, or have 'a' rights enabled in their ACL entry if they do appear, to be able to modify the ACL.
Section 4.2.5: Cells
- A cell is the set of server and client machines managed and operated by an administratively independent organization, as fully described in the original proposal [17] and specification [18] documents. The cell's administrators make decisions concerning such issues as server deployment and configuration, user backup schedules, and replication strategies on their own hardware and disk storage completely independently from those implemented by other cell administrators regarding their own domains. Every client machine belongs to exactly one cell, and uses that information to determine where to obtain default system resources and services.
- The cell concept allows autonomous sites to retain full administrative control over their facilities while allowing them to collaborate in the establishment of a single, common name space composed of the union of their individual name spaces. By convention, any file name beginning with /afs is part of this shared global name space and can be used at any AFS-capable machine. The original mount point concept was modified to contain cell information, allowing volumes housed in foreign cells to be mounted in the file space. Again by convention, the top-level /afs directory contains a mount point to the root.cell volume for each cell in the AFS community, attaching their individual file spaces. Thus, the top of the data tree managed by cell xyz is represented by the /afs/xyz directory.
- Creating a new AFS cell is straightforward, with the operation taking three basic steps:
- 1. Name selection: A prospective site has to first select a unique name for itself. Cell name selection is inspired by the hierarchical Domain naming system. Domain-style names are designed to be assignable in a completely decentralized fashion. Example cell names are transarc.com, ssc.gov, and umich.edu. These names correspond to the AFS installations at Transarc Corporation in Pittsburgh, PA, the Superconducting Supercollider Lab in Dallas, TX, and the University of Michigan in Ann Arbor, MI, respectively.
- 2. Server installation: Once a cell name has been chosen, the site must bring up one or more AFS file server machines, creating a local file space and a suite of local services, including authentication (Section 4.2.6.4) and volume location (Section 4.2.6.2).
- 3. Advertise services: In order for other cells to discover the presence of the new site, it must advertise its name and which of its machines provide basic AFS services such as authentication and volume location. An established site may then record the machines providing AFS system services for the new cell, and then set up its mount point under /afs. By convention, each cell places the top of its file tree in a volume named root.cell.
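- As a small illustration of the advertisement step, the sketch below represents the information another site needs to record about a new cell (its name and the machines providing its basic services) and derives the conventional /afs/<cellname> mount path. The record layout is an assumption for illustration only; it is not an actual AFS configuration file format.
```c
#include <stdio.h>

/* Illustrative record of what an established site learns about a new cell:
 * the cell name plus the machines providing its basic AFS services. */
struct cell_info {
    const char *name;
    const char *db_servers[3];
    int ndb;
};

int main(void)
{
    struct cell_info cell = { "xyz", { "db1.xyz", "db2.xyz" }, 2 };

    /* By convention, the cell's root.cell volume is mounted at /afs/<name>. */
    char mount_path[256];
    snprintf(mount_path, sizeof(mount_path), "/afs/%s", cell.name);
    printf("cell %s attaches at %s (%d database server machines known)\n",
           cell.name, mount_path, cell.ndb);
    return 0;
}
```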
Section 4.2.6: Implementation of Server Functionality
- AFS server functionality is implemented by a set of user-level processes which execute on server machines. This section examines the role of each of these processes.
Section 4.2.6.1: File Server
- This AFS entity is responsible for providing a central disk repository for a particular set of files within volumes, and for making these files accessible to properly-authorized users running on client machines.
Section 4.2.6.2: Volume Location Server
- The Volume Location Server maintains and exports the Volume Location Database (VLDB). This database tracks the server or set of servers on which volume instances reside. Among the operations it supports are queries returning volume location and status information, volume ID management, and creation, deletion, and modification of VLDB entries.
- The VLDB may be replicated to two or more server machines for availability and load-sharing reasons. A Volume Location Server process executes on each server machine on which a copy of the VLDB resides, managing that copy.
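- A VLDB entry can be thought of as a mapping from a volume ID to the set of servers currently housing instances of that volume. The following is a minimal sketch of such a lookup; the entry layout, example values, and function names are illustrative assumptions, not the real VLDB schema.
```c
#include <stdio.h>
#include <stdint.h>

#define MAX_SITES 8   /* read-only clones may be deployed on up to 8 partitions */

/* Illustrative VLDB entry: a volume ID and the servers housing instances. */
struct vldb_entry {
    uint32_t volume_id;
    const char *servers[MAX_SITES];
    int nservers;
};

/* Return the entry for a volume ID, or NULL if it is not registered. */
static const struct vldb_entry *vldb_lookup(const struct vldb_entry *db,
                                            int n, uint32_t volume_id)
{
    for (int i = 0; i < n; i++)
        if (db[i].volume_id == volume_id)
            return &db[i];
    return NULL;
}

int main(void)
{
    struct vldb_entry db[] = {
        { 536870913, { "fs1.xyz", "fs2.xyz" }, 2 }   /* a replicated volume */
    };
    const struct vldb_entry *e = vldb_lookup(db, 1, 536870913);
    if (e)
        printf("volume %u has %d site(s); first is %s\n",
               (unsigned)e->volume_id, e->nservers, e->servers[0]);
    return 0;
}
```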
Section 4.2.6.3: Volume Server
- The Volume Server allows administrative tasks and probes to be performed on the set of AFS volumes residing on the machine on which it is running. These operations include volume creation and deletion, renaming volumes, dumping and restoring volumes, altering the list of replication sites for a read-only volume, creating and propagating a new read-only volume image, creation and update of backup volumes, listing all volumes on a partition, and examining volume status.
Section 4.2.6.4: Authentication Server
- The AFS Authentication Server maintains and exports the Authentication Database (ADB). This database tracks the encrypted passwords of the cell's users. The Authentication Server interface allows operations that manipulate ADB entries. It also implements the Kerberos mutual authentication protocol, supplying the appropriate identification tickets to successful callers.
- The ADB may be replicated to two or more server machines for availability and load-sharing reasons. An Authentication Server process executes on each server machine on which a copy of the ADB resides, managing that copy.
Section 4.2.6.5: Protection Server
- The Protection Server maintains and exports the Protection Database (PDB), which maps between printable user and group names and their internal numerical AFS identifiers. The Protection Server also allows callers to create, destroy, query ownership and membership, and generally manipulate AFS user and group records.
- The PDB may be replicated to two or more server machines for availability and load-sharing reasons. A Protection Server process executes on each server machine on which a copy of the PDB resides, managing that copy.
Section 4.2.6.6: BOS Server
- The BOS Server is an administrative tool which runs on each file server machine in a cell. This server is responsible for monitoring the health of the AFS agent processes on that machine. The BOS Server brings up the chosen set of AFS agents in the proper order after a system reboot, answers requests as to their status, and restarts them when they fail. It also accepts commands to start, suspend, or resume these processes, and to install new server binaries.
Section 4.2.6.7: Update Server/Client
- The Update Server and Update Client programs are used to distribute important system files and server binaries. For example, consider the case of distributing a new File Server binary to the set of Sparcstation server machines in a cell. One of the Sparcstation servers is declared to be the distribution point for its machine class, and is configured to run an Update Server. The new binary is installed in the appropriate local directory on that Sparcstation distribution point. Each of the other Sparcstation servers runs an Update Client instance, which periodically polls the proper Update Server. The new File Server binary will be detected and copied over to the client. Thus, new server binaries need only be installed manually once per machine type, and the distribution to like server machines will occur automatically.
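- The distribution scheme amounts to a periodic poll-and-copy loop. The sketch below compares a local timestamp against the distribution point's and fetches the binary when the remote copy is newer; the helper functions are hypothetical placeholders, not the actual Update Server protocol.
```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical placeholders for the calls the real Update Client would make. */
static time_t remote_binary_mtime(const char *name) { (void)name; return time(NULL); }
static time_t local_binary_mtime(const char *name)  { (void)name; return 0; }
static void   fetch_binary(const char *name) { printf("fetching new %s\n", name); }

/* Periodically poll the distribution point and copy a newer binary over. */
static void update_client_loop(const char *binary, unsigned interval_sec, int rounds)
{
    for (int i = 0; i < rounds; i++) {
        if (remote_binary_mtime(binary) > local_binary_mtime(binary))
            fetch_binary(binary);
        sleep(interval_sec);
    }
}

int main(void)
{
    update_client_loop("fileserver", 1, 1);   /* short values just for the demo */
    return 0;
}
```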
Section 4.2.7: Implementation of Client Functionality
Section 4.2.7.1: Introduction
- The portion of the AFS WADFS which runs on each client machine is called the Cache Manager. This code, running within the client's kernel, is a user's representative in communicating and interacting with the File Servers. The Cache Manager's primary responsibility is to create the illusion that the remote AFS file store resides on the client machine's local disk(s).
- As implied by its name, the Cache Manager supports this illusion by maintaining a cache of files referenced from the central AFS store on the machine's local disk. All file operations executed by client application programs on files within the AFS name space are handled by the Cache Manager and are realized on these cached images. Client-side AFS references are directed to the Cache Manager via the standard VFS and vnode file system interfaces pioneered and advanced by Sun Microsystems [21]. The Cache Manager stores and fetches files to and from the shared AFS repository as necessary to satisfy these operations. It is responsible for parsing unix pathnames on open() operations and mapping each component of the name to the File Server or group of File Servers that house the matching directory or file.
- The Cache Manager has additional responsibilities. It also serves as a reliable repository for the user's authentication information, holding on to their tickets and wielding them as necessary when challenged during File Server interactions. It caches volume location information gathered from probes to the VLDB, and keeps the client machine's local clock synchronized with a reliable time source.
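- The pathname-to-FID mapping mentioned above can be pictured as a component-by-component walk, each step resolving one name within the directory identified so far. The sketch below shows only the walk itself, with the per-directory lookup stubbed out as a hypothetical placeholder; it is not the Cache Manager's actual resolution code.
```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Illustrative FID triplet, as in Section 4.2.2.1. */
struct fid { uint32_t volume, vnode, uniq; };

/* Hypothetical stand-in for a cached directory lookup: map (dir, name)
 * to the FID of the named object.  Here it merely fabricates an answer. */
static int lookup_component(const struct fid *dir, const char *name, struct fid *out)
{
    out->volume = dir->volume;
    out->vnode  = dir->vnode + (uint32_t)strlen(name);  /* placeholder only */
    out->uniq   = 1;
    return 0;
}

/* Walk a path such as "afs/xyz/user/erz" one component at a time,
 * starting from the FID of the root directory. */
static int resolve_path(struct fid root, const char *path, struct fid *out)
{
    char buf[256];
    strncpy(buf, path, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    struct fid cur = root;
    for (char *comp = strtok(buf, "/"); comp != NULL; comp = strtok(NULL, "/"))
        if (lookup_component(&cur, comp, &cur) != 0)
            return -1;
    *out = cur;
    return 0;
}

int main(void)
{
    struct fid root = { 1, 1, 1 }, result;
    if (resolve_path(root, "afs/xyz/user/erz", &result) == 0)
        printf("resolved to (%u, %u, %u)\n",
               (unsigned)result.volume, (unsigned)result.vnode, (unsigned)result.uniq);
    return 0;
}
```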
Section 4.2.7.2: Chunked Access
- In previous AFS incarnations, whole-file caching was performed. Whenever an AFS file was referenced, the entire contents of the file were stored on the client's local disk. This approach had several disadvantages. One problem was that no file larger than the amount of disk space allocated to the client's local cache could be accessed.
- AFS-3 supports chunked file access, allowing individual 64 kilobyte pieces to be fetched and stored. Chunking allows AFS files of any size to be accessed from a client. The chunk size is settable at each client machine, but the default chunk size of 64K was chosen so that most unix files would fit within a single chunk.
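- Chunked access reduces to simple arithmetic on file offsets. The following sketch computes which chunk a byte offset falls in and where that chunk begins, using the 64-kilobyte default mentioned above; the function names are illustrative.
```c
#include <stdio.h>
#include <stdint.h>

#define DEFAULT_CHUNK_SIZE (64 * 1024)   /* default; settable per client machine */

/* Which chunk does a byte offset fall in? */
static uint64_t chunk_index(uint64_t offset, uint64_t chunk_size)
{
    return offset / chunk_size;
}

/* At which byte does that chunk begin? */
static uint64_t chunk_base(uint64_t offset, uint64_t chunk_size)
{
    return (offset / chunk_size) * chunk_size;
}

int main(void)
{
    uint64_t off = 200000;   /* a byte offset into some large AFS file */
    printf("offset %llu lies in chunk %llu, which starts at byte %llu\n",
           (unsigned long long)off,
           (unsigned long long)chunk_index(off, DEFAULT_CHUNK_SIZE),
           (unsigned long long)chunk_base(off, DEFAULT_CHUNK_SIZE));
    return 0;
}
```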
Section 4.2.7.3: Cache Management
- The use of a file cache by the AFS client-side code, as described above, raises the thorny issue of cache consistency. Each client must efficiently determine whether its cached file chunks are identical to the corresponding sections of the file as stored at the server machine before allowing a user to operate on those chunks.
- AFS employs the notion of a callback as the backbone of its cache consistency algorithm. When a server machine delivers one or more chunks of a file to a client, it also includes a callback "promise" that the client will be notified if any modifications are made to the data in the file at the server. Thus, as long as the client machine is in possession of a callback for a file, it knows it is correctly synchronized with the centrally-stored version, and allows its users to operate on it as desired without any further interaction with the server. Before a file server stores a more recent version of a file on its own disks, it will first break all outstanding callbacks on this item. A callback will eventually time out, even if there are no changes to the file or directory it covers.
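- The callback bookkeeping can be summarized in a few lines: the server remembers which clients hold a promise on a file, notifies and forgets them before storing a newer version, and each promise expires on its own after a timeout. The structures and names below are a simplified assumption, not the File Server's actual callback machinery.
```c
#include <stdio.h>
#include <time.h>

#define MAX_CALLBACKS 32

/* One callback promise: which client holds it and when it expires. */
struct callback {
    int    client_id;
    time_t expires;
    int    valid;
};

static struct callback callbacks[MAX_CALLBACKS];

/* Record a promise when file data is delivered to a client. */
static void grant_callback(int client_id, int lifetime_sec)
{
    for (int i = 0; i < MAX_CALLBACKS; i++) {
        if (!callbacks[i].valid) {
            callbacks[i].client_id = client_id;
            callbacks[i].expires   = time(NULL) + lifetime_sec;
            callbacks[i].valid     = 1;
            return;
        }
    }
}

/* Before storing a newer version of the file, notify every holder and
 * invalidate the promises ("breaking" the callbacks). */
static void break_callbacks(void)
{
    for (int i = 0; i < MAX_CALLBACKS; i++) {
        if (callbacks[i].valid) {
            printf("notify client %d: cached copy is stale\n", callbacks[i].client_id);
            callbacks[i].valid = 0;
        }
    }
}

/* A client's cached chunks stay usable only while its promise is held
 * and unexpired. */
static int callback_current(int client_id)
{
    for (int i = 0; i < MAX_CALLBACKS; i++)
        if (callbacks[i].valid && callbacks[i].client_id == client_id)
            return time(NULL) < callbacks[i].expires;
    return 0;
}

int main(void)
{
    grant_callback(7, 3600);
    printf("client 7 current: %d\n", callback_current(7));
    break_callbacks();                     /* a write arrives at the server */
    printf("client 7 current: %d\n", callback_current(7));
    return 0;
}
```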
Section 4.2.8: Communication Substrate: Rx
- All AFS system agents employ remote procedure call (RPC) interfaces. Thus, servers may be queried and operated upon regardless of their location.
- The Rx RPC package is used by all AFS agents to provide a high-performance, multi-threaded, and secure communication mechanism. The Rx protocol is adaptive, conforming itself to the widely varying network communication media encountered by a WADFS. It allows user applications to define and insert their own security modules, enabling them to execute the precise end-to-end authentication algorithms required to suit their specific needs and goals. Rx offers two built-in security modules. The first is the null module, which does not perform any encryption or authentication checks. The second built-in security module is rxkad, which utilizes Kerberos authentication.
- Although pervasive throughout the AFS distributed file system, all of its agents, and many of its standard application programs, Rx is entirely separable from AFS and does not depend on any of its features. In fact, Rx can be used to build applications engaging in RPC-style communication under a variety of unix-style file systems. There are in-kernel and user-space implementations of the Rx facility, with both sharing the same interface.
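- The pluggable-security idea can be pictured as a small table of operations that the transport consults when sealing and checking packets. The vtable below is an illustrative abstraction only; it does not reproduce the real Rx security-object interface or the rxkad module.
```c
#include <stdio.h>
#include <stddef.h>

/* Illustrative security-module interface: a pair of hooks applied to
 * outgoing and incoming packets.  Not the actual Rx API. */
struct security_module {
    const char *name;
    int (*seal)(void *packet, size_t len);
    int (*check)(void *packet, size_t len);
};

/* A "null" module, analogous in spirit to Rx's built-in null layer:
 * it performs no encryption and no authentication checks. */
static int null_seal(void *p, size_t n)  { (void)p; (void)n; return 0; }
static int null_check(void *p, size_t n) { (void)p; (void)n; return 0; }

static const struct security_module null_module = { "null", null_seal, null_check };

static int send_packet(const struct security_module *sec, void *pkt, size_t len)
{
    if (sec->seal(pkt, len) != 0)
        return -1;
    printf("sent %zu bytes under the %s security module\n", len, sec->name);
    return 0;
}

int main(void)
{
    char pkt[64] = { 0 };
    return send_packet(&null_module, pkt, sizeof(pkt));
}
```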
Section 4.2.9: Database Replication: ubik
- The three AFS system databases (VLDB, ADB, and PDB) may be replicated to multiple server machines to improve their availability and share access loads among the replication sites. The ubik replication package is used to implement this functionality. A full description of ubik and of the quorum completion algorithm it implements may be found in [19] and [20].
- The basic abstraction provided by ubik is that of a disk file replicated to multiple server locations. One machine is considered to be the synchronization site, handling all write operations on the database file. Read operations may be directed to any of the active members of the quorum, namely a subset of the replication sites large enough to insure integrity across such failures as individual server crashes and network partitions. All of the quorum members participate in regular elections to determine the current synchronization site. The ubik algorithms allow server machines to enter and exit the quorum in an orderly and consistent fashion.
- All operations to one of these replicated "abstract files" are performed as part of a transaction. If all the related operations performed under a transaction are successful, then the transaction is committed, and the changes are made permanent. Otherwise, the transaction is aborted, and all of the operations for that transaction are undone.
- Like Rx, the ubik facility may be used by client applications directly. Thus, user applications may easily implement the notion of a replicated disk file in this fashion.
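- Two aspects of ubik lend themselves to a short sketch: the majority test that decides whether a quorum exists, and the commit-or-undo discipline of a transaction. The code below is a toy model of those two ideas under simplifying assumptions; it is not the ubik implementation or its quorum completion algorithm.
```c
#include <stdio.h>
#include <string.h>

/* A quorum exists when more than half of the replication sites are up. */
static int have_quorum(int sites_up, int sites_total)
{
    return 2 * sites_up > sites_total;
}

/* Toy transaction on a replicated value: stage the change, then either
 * commit it (make it permanent) or abort (undo it). */
struct txn {
    char committed[64];
    char staged[64];
    int  open;
};

static void txn_begin(struct txn *t, const char *new_value)
{
    strncpy(t->staged, new_value, sizeof(t->staged) - 1);
    t->staged[sizeof(t->staged) - 1] = '\0';
    t->open = 1;
}

static void txn_commit(struct txn *t)
{
    if (t->open) {
        memcpy(t->committed, t->staged, sizeof(t->committed));
        t->open = 0;
    }
}

static void txn_abort(struct txn *t) { t->open = 0; }   /* staged change discarded */

int main(void)
{
    printf("3 of 5 sites up, quorum: %d\n", have_quorum(3, 5));

    struct txn t = { "old entry", "", 0 };
    txn_begin(&t, "new entry");
    txn_commit(&t);                       /* success: change made permanent */
    printf("after commit: %s\n", t.committed);

    txn_begin(&t, "bad entry");
    txn_abort(&t);                        /* failure: staged change discarded */
    printf("after abort:  %s\n", t.committed);
    return 0;
}
```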
Section 4.2.10: System Management
- There are several AFS features aimed at facilitating system management. Some of these features have already been mentioned, such as volumes, the BOS Server, and the pervasive use of secure RPCs throughout the system to perform administrative operations from any AFS client machine in the worldwide community. This section covers additional AFS features and tools that assist in making the system easier to manage.
Section 4.2.10.1: Intelligent Access Programs
- A set of intelligent user-level applications was written so that the AFS system agents could be more easily queried and controlled. These programs accept user input, then translate the caller's instructions into the proper RPCs to the responsible AFS system agents, in the proper order.
- An example of this class of AFS application programs is vos, which mediates access to the Volume Server and the Volume Location Server agents. Consider the vos move operation, which results in a given volume being moved from one site to another. The Volume Server does not support a complex operation like a volume move directly. In fact, this move operation involves the Volume Servers at the current and new machines, as well as the Volume Location Server, which tracks volume locations. Volume moves are accomplished by a combination of full and incremental volume dump and restore operations, and a VLDB update. The vos move command issues the necessary RPCs in the proper order, and attempts to recover from errors at each of the steps.
- The end result is that the AFS interface presented to system administrators is much simpler and more powerful than that offered by the raw RPC interfaces themselves. The learning curve for administrative personnel is thus flattened. Also, automatic execution of complex system operations is more likely to be successful, free from human error.
Section 4.2.10.2: Monitoring Interfaces
- The various AFS agent RPC interfaces provide calls which allow for the collection of system status and performance data. This data may be displayed by such programs as scout, which graphically depicts File Server performance numbers and disk utilizations. Such monitoring capabilities allow for quick detection of system problems. They also support detailed performance analyses, which may indicate the need to reconfigure system resources.
Section 4.2.10.3: Backup System
- A special backup system has been designed and implemented for AFS, as described in [6]. It is not sufficient to simply dump the contents of all File Server partitions onto tape, since volumes are mobile, and need to be tracked individually. The AFS backup system allows hierarchical dump schedules to be built based on volume names. It generates the appropriate RPCs to create the required backup volumes and to dump these snapshots to tape. A database is used to track the backup status of system volumes, along with the set of tapes on which backups reside.
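- A name-based dump schedule can be approximated by matching volume names against a prefix that defines a volume set, then choosing a dump level from the time since the last full dump. The sketch below is only such an approximation; the real backup system's database and scheduling rules are not reproduced here, and the volume-set prefix is a hypothetical example.
```c
#include <stdio.h>
#include <string.h>

/* Does a volume belong to a (hypothetical) volume set defined by a name
 * prefix, e.g. all volumes named "user.*"? */
static int in_volume_set(const char *volume, const char *prefix)
{
    return strncmp(volume, prefix, strlen(prefix)) == 0;
}

/* Pick a dump level: a full dump every 7 days, incrementals in between. */
static const char *dump_level(int days_since_full)
{
    return (days_since_full >= 7) ? "full" : "incremental";
}

int main(void)
{
    const char *volumes[] = { "user.erz", "user.kazar", "sys.bin" };
    for (int i = 0; i < 3; i++)
        if (in_volume_set(volumes[i], "user."))
            printf("%s: schedule a %s dump of its backup volume\n",
                   volumes[i], dump_level(3));
    return 0;
}
```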
Section 4.2.11: Interoperability
- Since the client portion of the AFS software is implemented as a standard VFS/vnode file system object, AFS can be installed into client kernels and utilized without interference with other VFS-style file systems, such as vanilla unix and the NFS distributed file system.
- Certain machines either cannot or choose not to run the AFS client software natively. If these machines run NFS, it is still possible to access AFS files through a protocol translator. The NFS-AFS Translator may be run on any machine at the given site that runs both NFS and the AFS Cache Manager. All of the NFS machines that wish to access the AFS shared store proceed to NFS-mount the translator's /afs directory. File references generated at the NFS-based machines are received at the translator machine, which is acting in its capacity as an NFS server. The file data is actually obtained when the translator machine issues the corresponding AFS references in its role as an AFS client.
Section 4.3: Meeting AFS Goals
- The AFS WADFS design, as described in this chapter, serves to meet the system goals stated in Chapter 3. This section revisits each of these AFS goals, and identifies the specific architectural constructs that bear on them.
Section 4.3.1: Scale
- To date, AFS has been deployed to over 140 sites world-wide, with approximately 60 of these cells visible on the public Internet. AFS sites are currently operating in several European countries, in Japan, and in Australia. While many sites are modest in size, certain cells contain more than 30,000 accounts. AFS sites have realized client/server ratios in excess of the targeted 200:1.
Section 4.3.2: Name Space
- A single uniform name space has been constructed across all cells in the greater AFS user community. Any pathname beginning with /afs may indeed be used at any AFS client. A set of common conventions regarding the organization of the top-level /afs directory and several directories below it have been established. These conventions also assist in the location of certain per-cell resources, such as AFS configuration files.
- Both access transparency and location transparency are supported by AFS, as evidenced by the common access mechanisms and by the ability to transparently relocate volumes.
Section 4.3.3: Performance
- AFS employs caching extensively at all levels to reduce the cost of "remote" references. Measured data cache hit ratios are very high, often over 95%. This indicates that the file images kept on local disk are very effective in satisfying the set of remote file references generated by clients. The introduction of file system callbacks has also been demonstrated to be very effective in the efficient implementation of cache synchronization. Replicating files and system databases across multiple server machines distributes load among the given servers. The Rx RPC subsystem has operated successfully at network speeds ranging from 19.2 kilobytes/second to experimental gigabit/second FDDI networks.
- Even at the intra-site level, AFS has been shown to deliver good performance, especially in high-load situations. One often-quoted study [1] compared the performance of an older version of AFS with that of NFS on a large file system task named the Andrew Benchmark. While NFS sometimes outperformed AFS at low load levels, its performance fell off rapidly at higher loads, while AFS performance was not significantly affected.
Section 4.3.4: Security
- The use of Kerberos as the AFS authentication system fits the security goal nicely. Access to AFS files from untrusted client machines is predicated on the caller's possession of the appropriate Kerberos ticket(s). Setting up per-site, Kerberos-based authentication services compartmentalizes any security breach to the cell which was compromised. Since the Cache Manager will store multiple tickets for its users, they may take on different identities depending on the set of file servers being accessed.
Section 4.3.5: Access Control
- AFS extends the standard unix authorization mechanism with per-directory Access Control Lists. These ACLs allow specific AFS principals and groups of these principals to be granted a wide variety of rights on the associated files. Users may create and manipulate AFS group entities without administrative assistance, and place these tailored groups on ACLs.
Section 4.3.6: Reliability
- A subset of file server crashes is masked by the use of read-only replication on volumes containing slowly-changing files. Availability of important, frequently-used programs such as editors and compilers may thus be greatly improved. Since the level of replication may be chosen per volume, and easily changed, each site may decide the proper replication levels for certain programs and/or data. Similarly, replicated system databases help to maintain service in the face of server crashes and network partitions.
Section 4.3.7: Administrability
- Such features as pervasive, secure RPC interfaces to all AFS system components, volumes, overseer processes for monitoring and management of file system agents, intelligent user-level access tools, interface routines providing performance and statistics information, and an automated backup service tailored to a volume-based environment all contribute to the administrability of the AFS system.
Section 4.3.8: Interoperability/Coexistence
- Due to its VFS-style implementation, the AFS client code may be easily installed in the machine's kernel, and may service file requests without interfering in the operation of any other installed file system. Machines either not capable of running AFS natively or choosing not to do so may still access AFS files via NFS with the help of a protocol translator agent.
Section 4.3.9: Heterogeneity/Portability
- As most modern kernels use a VFS-style interface to support their native file systems, AFS may usually be ported to a new hardware and/or software environment in a relatively straightforward fashion. Such ease of porting allows AFS to run on a wide variety of platforms.