This is a copy of a paper that was presented at the Local Networks and Distributed Office Systems Conference, London, May 1981. You can find it in the proceedings of that conference on pages 345-361.

This online version resulted from scanning and OCRing a computer generated original. Since optical character recognition is certainly not perfect, if you notice any errors, please let me () know. Thanks!

A note on the references - to jump to the references section of the paper to look at a citation, click on the numeric reference entry in bracketed references. E.g. in "[BaH77-1]" click on the 1.

A note on the figures - since the figures were also scanned in, they are certainly not perfect. In case you are unable to read some of the detail on the versions included with the text, you can click on the figures themselves to see somewhat larger versions where more detail is visible.

A note on the font - in the original there was more flexibility available in font selection and placement than HTML currently allows (do let me know if the HTML symbolic stuff is ready for prime time). Consequently there is some variation between the text and the figures - particularly in the Public Key Encryption section. The slightly descended down arrow used to denote encryption with a public key is denoted by a lower case "d" in the text of this version. Similarly a "u" denotes "decryption" with a private key. Not the best, but hopefully readable. Suggestions welcomed.

If you are somehow looking at a hardcopy version of this paper and wish to view it on the WWW, you can find it at the URL:

http://www.webstart.com/jed/papers/Managing-Domains/
for now - naturally...

Last update to this HTML version, August 29, 2006 --Jed

Managing Domains in a Network Operating System

James E. (Jed) Donnelley
Lawrence Livermore National Laboratory, Livermore, California, USA
"James E (Jed) Donnelley"
http://www.webstart.com/jed/
Abstract:
The needs for economy, reliability, security, and privacy in distributed information systems have suggested that such systems be constructed from modules operating in separate domains. Ideally the domain of each module would permit access to only those resources needed to accomplish its task. In practice it has proved difficult to provide such precise access control.

At the level of process-to-process communication there is the thorny problem of defining a resource independent protocol for communicating resource access (capabilities). Several approaches to this problem are discussed, including a solution based on public key encryption.

At the level of human-to-process communication there is a similar problem of communicating access to resources. Here a person at a terminal needs to be able to indicate which resources should be made available to the processes performing a requested task. The advantages of a discriminating domain control at the process level are largely wasted if the human-to-process protocols fail to address this important issue.

Keywords:
Capability, network, operating system, security, sharing, resource, encryption

1. Introduction

The fundamental task of a computer operating system (OS) is resource management. One half of this task consists of assuring that people and processes are granted access to the resources that they need and are authorized to use. Because computer system users are usually quite vocal in expressing their unmet needs for resource access, OS implementers are always made acutely aware of this availability half of the resource management task.

The other half of resource management is assuring that people and processes are not granted access to resources that they do not need or are not authorized to use. System users seldom complain too loudly about being granted access to resources that they do not need or are not authorized to access. Because of this, OS implementers tend to ignore this control aspect of resource management until its indirect effects become apparent. It is the control aspect of resource management that we will consider here.

As distributed systems increase in size, the adverse effects of inadequate control of resource access accumulate until they cannot be ignored. The economic effect is that scarce resources will be consumed by users with little need (e.g., the U.S. gasoline shortage). The reliability effect is that local failures can have devastatingly global consequences (e.g., the New York power failure). The security and privacy effect is the damage that can result from control in the hands of an enemy (e.g., the broken Japanese code of WW II, which led to their defeat at Midway).

2. Single CPU vs. Network Operating Systems

Single-CPU OS's create virtual resources such as allocated files, assigned tapes, or multiprogrammed processes by multiplexing their directly attached peripheral devices (fig. 1). In this type of system the OS is lord and master of its realm and is able to dole out resource access in any way it chooses. The absolute power wielded by these stand-alone systems has given rise to numerous forms of resource access control (absolute power corrupts absolutely). These include: user indexes and groups, directories, passwords, access control lists, capability lists, and others [Bob72-2, RiT74-25, Org72-22, Wul74-28].

In recent years the advent of distributed computing systems has created a need for distributed OS's adequate to manage access control for their distributed resources. The component OS [Don79-6] for a single machine in a distributed computing facility (fig. 2a) generally has much less authority than did its omnipotent stand-alone ancestors. These component systems do control their directly attached peripherals but often must share access to most resources with other computers on a communication network (fig. 2b). The resource-access-control mechanisms possible in such a distributed environment are considerably more restricted than those available to stand-alone systems.

Figure 2 - Adding a Network Operating System component

3. The Network OS Experience at LLNL

The Octopus network [Fle73-12, Wat78-26] at the Lawrence Livermore National Laboratory (LLNL) is a tightly coupled local computer network that has been growing and maturing since about 1964. As new resources have been added to the network, protocols have been developed to allow them to be shared. Although attempts were naturally made to build new protocols as extensions to existing protocols, with hindsight it is clear that there are many areas where protocol commonality can be more effectively exploited.

The problem of software interconnection has not been too serious for new services added to the network (e.g., hardcopy output, mass storage) because they only need to support one service protocol. For new general-purpose or worker systems, however, the problem of interfacing with all of the service protocols has become prodigious, particularly when contrasted with the relative ease of hardware interconnection made possible by modern broadcast bus technologies (fig. 2) [DoY76-7]. To correct this situation an effort has been undertaken at LLNL to develop an integrated set of network OS protocols. To solve the problems of general-purpose worker systems, these protocols must be appropriate for use both across the network and between processes within a single multiprogramming system. To ensure that the protocols are effective in the time-critical environment within a single computer system, we have implemented a purely message-passing kernel inside of which most of our protocols are being tested [Don79-6].

The protocols we have developed make up the Livermore Network Communication System (LINCS). Although LINCS includes protocols from the link level to the service levels [Wat79-27, FlW80-15, Don79-6, MiD80-20, DuB80-8], most of the development has been at the service levels where, unfortunately, effective international standards have not yet emerged. We review here some of the concepts developed at LLNL in the area of protocols for access control of distributed resources [DoF80-4].

4. Interprocess Communication Primitives

To define an access control protocol, it is necessary to specify the communication primitives upon which it will be built. It is important to require as little as possible of these primitives in order to allow the defined protocol to be utilized with as wide a variety of communication facilities as possible.

The primitives that we utilize here consist of a simple bit-string or "bucket of bits" communication facility. We assume that the active elements capable of communicating (hereafter referred to as processes) are able to send and receive bit strings (messages) of unlimited length and that the messages can be addressed to and from an unlimited supply of network addresses. We assume that the issues faced by typical end-to-end and lower level protocols (error detection and correction, flow and congestion control, routing and identification, etc.) are properly handled by the message system [FlW78-14, Pos80-24, Wat79-27].
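As a concrete illustration of these assumed primitives (the names and the in-memory queues below are inventions of this sketch, not part of LINCS), a minimal Python rendering might look as follows:

    # A sketch of the assumed "bucket of bits" primitives: processes send and
    # receive arbitrary byte strings addressed to network addresses, and each
    # delivered message carries a trusted source address.  Error, flow, and
    # congestion control are assumed handled below this layer, as in the text.
    from collections import defaultdict, deque

    _queues = defaultdict(deque)       # network address -> queue of (source, data)

    def send(dst, src, data):
        _queues[dst].append((src, data))

    def receive(addr):
        return _queues[addr].popleft() # returns (source address, message data)

    send("server", "client-1", b"request bits")
    assert receive("server") == ("client-1", b"request bits")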

Although the LINCS standard end-to-end protocol cannot afford to explicitly open and close virtual circuits [Fle79-13, FlW78-14], both it and the LINCS standard message system interface are designed to map directly to existing connection-oriented protocol standards such as the CCITT X.25 standard [ITT78-16] and the U.S. Defense Department standard TCP [Pos80-24]. The LINCS standards include a begin marker which may be interpreted as "open connection," an end marker which can close a connection, and an additional marker in the data stream that can be used for signaling or synchronization [Wat79-27].

5. Defining the Distributed Domain Management Problem

The essence of the domain management problem can be characterized by considering the three processes diagrammed in fig. 3, a resource service process and two resource-using processes. The problem can be divided into three parts: (1) authorization by the service process; (2) communication of the access between any pair of processes; and (3) validation by the service process.

For our discussion it will be convenient to have a single term to denote resource access. We will use the term capability [DeV66-3, Eng72-10, Fab74-11, Lan75-17, LaS76-18, Wul74-28]. We say that a process has a capability to a resource if it has been authorized access to the resource. A capability is thus a key that can unlock access to a resource. We define the domain of a process to be the set of capabilities that it possesses.

This use of the term capability is a generalization of the way it is often used in the discussion of capability-list (C-list) OS's [Lan75-17, LaS76-18, Wul74-28]. In C-list systems, capabilities do authorize resource access, but the term is generally restricted to refer to a specific form of capability implemented as some type of pointer that is protected by the kernel of a stand-alone OS. In such systems all communication and validation of capabilities (in this restricted sense) is mediated by the system kernel. This centralized approach to access control unfortunately does not extend well into a distributed environment [Don76-5, Don79-6].

Our purpose here is to explore the types of protocols suitable for capability passing and validation in a network OS. Our protocols must allow serving processes to give out capabilities to potential users in such a way that:

  1. The capabilities can be validated when used as authorization for service requests.
  2. The capabilities can be communicated or passed between any two processes that can communicate data.

6. The Inalienable Right to Pass Capabilities

The requirement of capability communication often provokes some controversy. It is sometimes argued that allowing a process to pass a capability to another process is not always desirable. Indeed, a number of OS's go to a great deal of effort to offer features that restrict capability passing.

Because the right of access to any resource must originate from within the domain of the serving process, all capabilities must be passed at least once - namely, from the service process to a using process - to be of any use at all. To handle this "first-pass" situation, some OS's have used the expedient of a "pass-once" capability [BaH77-1]. Since a stand-alone OS can arrange to monitor capability pointer passing, it can mark "pass-once" capabilities as unpassable after they have once been passed from their service process to a user. In addition to being somewhat awkward, however, these attempts to restrict capability passing have the disadvantage that they can never really work. They can only work in the limited sense of the term capability discussed above (a pointer protected by a stand-alone OS), but not in the more functional sense (that we have adopted) of granting the right to access a resource.

To see the difficulty of restricting capability passing, we need only consider processes A, B, and S pictured in fig. 3. Suppose that A has a capability to a resource serviced by S. Also suppose that A can communicate with B (if not, then A cannot pass anything to B, so no special capability-passing restriction is necessary). If a monitoring OS kernel has denied the mechanism for passing direct access to a resource from A to B, A can still give B the right to indirect access. A can simply have B send all its service requests to A for forwarding to S. A will also have to return the results of such requests to B.
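To make the forwarding argument concrete, the toy Python sketch below (the server S, the helper ask, and the capability value are all invented for illustration) shows A interposing its own routine so that B obtains the service without ever holding the capability:

    # Indirect access by forwarding: B sends requests to A, A applies its own
    # capability at S and relays the result back to B.
    def ask(server, capability, operation):
        return server(capability, operation)       # stand-in for a network round trip

    def S(capability, operation):                  # toy server: honors only A's capability
        return b"result" if capability == "cap-held-by-A" else b"denied"

    def A_forwarder(operation):                    # runs in A's domain, on B's behalf
        return ask(S, "cap-held-by-A", operation)

    # B never possesses the capability, yet obtains the service through A:
    assert A_forwarder(b"read block 7") == b"result"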

In some cases passing indirect access is a useful mechanism: for example, in cases where A wishes to monitor or filter B's accesses to a resource or where A wishes to be able to cut off B's access at a later time. In many other cases, however, the inability to pass direct access simply places an unnecessary additional communication burden on the passing process.

Because it is our intent to make the domain-management protocols we consider as convenient as possible, and because it is not possible to effectively outlaw capability communication, we will only consider protocols that support the direct communication of resource access. Indirect capability passing is always available to any process that chooses to use it in lieu of direct passing.

7. The Tools Available for Capability Validation

In choosing a domain-management protocol, it is convenient to center attention first on the validation mechanisms available. The capability-passing mechanisms seem to follow naturally from the validation techniques chosen.

When a service process receives a request for resource access, it receives two pieces of information that can be used for capability validation (fig. 3): first, the message data (the string of bits), and second, the address of the sender (remember, we assume that our communication primitives validly identify the addresses of sending processes). These two pieces of information naturally give rise to two basic approaches for capability validation which we call data authorization and address authorization.

8. Control by Password

Password control is a simple form of data authorization. To create a password-controlled capability, a service process creates a block of data containing the resource identification and a secret password to authorize resource access. To validate an access request authorized by a password-controlled capability, the service process need only compare the password in the received capability data block with the valid authorizing password. The authorizing password can either be chosen at random and stored with the resource or be computed from the resource identification by a secret algorithm known only to the server (e.g., encrypted with a secret key). The currently implemented test version of the LINCS standard file server computes password-controlled capabilities by encrypting the resource identification with the National Bureau of Standards Data Encryption Standard (DES [NBS77-21]) using a secret key that is kept with the file.
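A minimal sketch of this scheme is given below (Python). The LINCS test file server uses DES with a secret key kept with the file; the sketch substitutes a keyed hash (HMAC) purely as a readily available stand-in, and the block layout and names are illustrative:

    # Password-controlled capabilities: the authorizing password is computed from
    # the resource identification with a secret known only to the service process.
    import hashlib, hmac, os

    SERVER_SECRET = os.urandom(16)            # known only to the service process

    def make_capability(resource_id):
        password = hmac.new(SERVER_SECRET, resource_id, hashlib.sha256).digest()
        return {"resource": resource_id, "password": password}   # the capability data block

    def validate(request):
        expected = hmac.new(SERVER_SECRET, request["resource"], hashlib.sha256).digest()
        return hmac.compare_digest(expected, request["password"])

    cap = make_capability(b"file:42")
    assert validate(cap)      # any process holding the block (password included) may use it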

Password-controlled capabilities are easy to communicate. To pass a password-controlled capability from one process to another, it suffices to pass the capability data block (including its password) in a message (fig. 3).

9. The Problem of Data Theft

The ease of communicating password-controlled capabilities also gives rise to their most significant problem. A process with a password-controlled capability in its domain (e.g., in its memory space) must be careful not to reveal the secret password. Although the domains of processes are generally protected from information theft, password-controlled capabilities make some of a process's information particularly sensitive. For example, a resource such as a file or directory may remain in existence long after the process and its memory are no longer used.

The problem of data theft is especially serious in light of typical programming practices. It is quite common, when debugging programs, to output pieces of program memory in order to examine them for inconsistencies. If such memory output contains password-controlled capabilities to sensitive resources, then it must be carefully protected (e.g., not taken to a consultant for analysis or left in view of visitors).

10. Control by Access List

A service process performing address authorization refers to the address of the requesting process (as supplied by the communication facility) in determining whether or not to grant a resource request. The conceptually simplest form of address authorization is an access list protocol. Here a service process creates and maintains a list of the addresses of processes that are allowed to access the resource. To authorize a using process a server adds the user's address to an access list and sends the resource identification to the user. To validate an access request the service process checks to see if the requester's address is in the access list for the identified resource.
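The following sketch (Python, with invented names) shows authorization and validation under access list control; it relies, as the text does, on the message system supplying a trustworthy sender address with each request:

    # Access list control: the server keeps, per resource, the set of addresses
    # that are allowed to use it, and checks the sender address of each request.
    access_lists = {}                                 # resource id -> set of authorized addresses

    def authorize(resource_id, user_address):
        access_lists.setdefault(resource_id, set()).add(user_address)
        return resource_id                            # only the identification is sent to the user

    def validate(resource_id, sender_address):
        return sender_address in access_lists.get(resource_id, set())

    authorize(b"file:42", "net.addr.A")
    assert validate(b"file:42", "net.addr.A")         # a request from A is honored
    assert not validate(b"file:42", "net.addr.B")     # B must first be added by a capability pass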

Communicating access-list-controlled capabilities is more difficult than communicating password-controlled capabilities (fig. 4). The basic difficulty is that the service process must update its access list every time a capability is passed from one using process to another. To do this the serving process must be notified that the capability transfer is taking place.

11. The Reflection Problem

At first glance, it might appear that the passing of a capability protected by an access list could be accomplished by the steps 1, 2, and 3 pictured in fig. 4. First process A, which has the capability, requests the service process to add the address of process B, which is to receive the capability, to the access list. Second, the server acknowledges the fact that it has complied. Third, process A sends the resource identification to process B.

Unfortunately, these steps do not quite suffice for secure capability passing. To understand what goes wrong we must make a more careful assessment of what is meant by our requirement that capabilities can be communicated from one process to another.

Capability communication must allow a process that lacks a capability to receive it in a message. In addition, however, if the analogy to receiving data is allowed, receiving a capability in a message should imply that the sender possessed the capability. Although this is a property of our trivial protocol for communicating password-controlled capabilities (the capability and its data block are identical), it is not a property of the access list protocol pictured as steps 1, 2, and 3 in fig. 4.

To demonstrate this point, consider the reflector process illustrated in fig. 5. The reflector just receives capabilities in messages and returns them to the sender. Various servers, such as those for directories, translators, editors, etc., have this basic reflection property. Unfortunately, if the access list protocol above is used, then the sneak pictured in fig. 5 can obtain unauthorized access to all the reflector's resources simply by reflecting their identification information. We will see a similar reflection problem appear in conjunction with data theft when we consider control by public key encryption.

This access list protocol can be repaired by replacing step 2 in fig. 4 with the illustrated step 2*, in which the server sends its acknowledgment to the receiving process B rather than back to A. A process receiving a capability can thus be assured by the server that the sender did indeed possess the capability.
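A sketch of the repaired exchange follows (Python). The mailboxes and the one-line send routine stand in for the message primitives of section 4, and the message formats are invented; the essential point is that in step 2* the server's confirmation goes to the receiving process B rather than back to A:

    # Passing an access-list-controlled capability from A to B (fig. 4, with step 2*).
    mailboxes = {"A": [], "B": [], "S": []}
    def send(to, frm, msg):
        mailboxes[to].append((frm, msg))

    access_list = {"file:42": {"A"}}                   # maintained by the server S

    # Step 1: A asks S to extend the access list to B.
    send("S", "A", ("add", "file:42", "B"))
    # At S: the request came from an address already on the list, so B is added ...
    frm, (_, res, new_user) = mailboxes["S"].pop()
    if frm in access_list[res]:
        access_list[res].add(new_user)
        # Step 2*: ... and the confirmation goes to B, not back to A.
        send("B", "S", ("authorized", res, frm))
    # Step 3: A sends B the resource identification.
    send("B", "A", ("capability", res, "S"))

    assert len(mailboxes["B"]) == 2   # B holds both the server's assurance and A's identification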

Although the access list protocol does eliminate the data theft danger of the password protocol, it has difficulties of its own: the server must create and maintain an access list for every resource, and it must be notified (with additional messages) of every capability transfer.

12. Control by Encryption of Addresses

Encrypted address control is a variation on access list control that eliminates the need to maintain the access list. Using this approach, a server encrypts the address of the authorized process into the capability data block before sending it out. When this information is sent back for validation, the server must decrypt the address and compare it with the address from which the message actually came (as reported from the message system). The protocol for communicating capabilities controlled by encrypted addresses is identical to the access list protocol (fig. 4).
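The sketch below (Python, invented names) illustrates the idea. Where the text has the server encrypt the authorized address into the block and later decrypt and compare it, the sketch seals the address with a keyed hash and recomputes it at validation time, which serves the same illustrative purpose:

    # Encrypted-address control: the authorized address is folded into the capability
    # data block, so no access list need be kept by the server.
    import hashlib, hmac, os

    SERVER_SECRET = os.urandom(16)

    def make_capability(resource_id, user_address):
        seal = hmac.new(SERVER_SECRET, resource_id + user_address.encode(), hashlib.sha256).digest()
        return {"resource": resource_id, "seal": seal}

    def validate(cap, sender_address):
        expected = hmac.new(SERVER_SECRET, cap["resource"] + sender_address.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, cap["seal"])

    cap = make_capability(b"file:42", "net.addr.A")
    assert validate(cap, "net.addr.A")        # valid only when presented from the authorized address
    assert not validate(cap, "net.addr.X")    # a stolen data block is useless elsewhere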

13. Control by Public Key Encryption

My colleague John Fletcher and I developed this last example of a control protocol to provide protection from data theft while still requiring only one message to communicate one capability [DoF80-4]. This method depends on the existence of a practical mechanism for encryption by public keys. We assume that every process A has a secret decryption algorithm, which we will denote by Au (u = up into the bright light of day). We also assume that any process can encrypt data to be sent to A with a public encryption algorithm, which we will denote by Ad (d = down into the dark world of encryption). Therefore, for any process A and data c, we have AuAdc = c. Finally, we assume that Au and Ad commute: AuAdc = AdAuc = c. The reader is referred to [Lem79-19] for further discussion of such algorithms.
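The text does not commit to any particular algorithm; as one well-known candidate discussed in [Lem79-19], textbook RSA has exactly the commuting property assumed here. The toy Python sketch below (deliberately tiny numbers, an illustration rather than a usable cipher) checks AuAdc = AdAuc = c:

    # Textbook RSA as an example of commuting public (Ad) and private (Au) transforms.
    p, q = 61, 53
    n = p * q                                  # modulus
    e = 17                                     # public exponent: Ad is m -> m^e mod n
    d = pow(e, -1, (p - 1) * (q - 1))          # private exponent: Au is m -> m^d mod n

    Ad = lambda m: pow(m, e, n)                # anyone may apply A's public transform
    Au = lambda m: pow(m, d, n)                # only A can apply its private transform

    c = 1234                                   # stand-in for a capability description
    assert Au(Ad(c)) == c                      # AuAdc = c
    assert Ad(Au(c)) == c                      # AdAuc = c: the transforms commute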

Controlling capabilities by public key encryption is similar to password control in one respect: If c is the identification information for the resource, then any process able to obtain the data Suc is assumed to have the right to access the resource. That is, if a process can produce a capability description "signed" by the server, then the capability is assumed to be valid. To guarantee that random data will not be interpreted as a valid (random) capability, we include the server's public key, Sd, in the capability description, c. Since the server's public key must also be included in the clear text of the capability data block (along with the server's address), any process can check a capability for validity by removing the server's signature (Su) and looking in c for its public key Sd.

If processes just store and communicate Suc directly, they are in danger of losing their capability to data theft. A process A can protect its capability from data theft during storage by applying Ad and storing it as AdSuc.

To communicate a public-key-encrypted capability from A to B, it might appear sufficient for A to transform its stored AdSuc with BdAu. This would remove A's protection from Suc and reprotect it for B. Unfortunately, however, this simple form of capability communication cannot assure B that A was actually authorized to access the resource. If A could steal the capability data block from B's domain, it could reflect it off B (send it to B and ask for it back as in the reflection example given previously) and thereby gain unauthorized access. Similarly, if a process could steal the capability from A after it had been readied for transfer to B, it could be reflected off B to obtain unauthorized access.

These difficulties can be remedied by performing the sending and receiving transformations pictured in fig. 6. Process A sending a capability to process B transforms its stored AdSuc with BdAuAu. The first Au removes A's protection; the second Au signs the capability as coming from A; and the final Bd protects it so that it cannot be stolen from B's domain when it arrives.

When B receives a capability from A, it transforms the received BdAuSuc with BdAdBu. The first Bu undoes the protection that A put on for B, the middle Ad unsigns the capability, and the final Bd protects the capability for residence in B's domain. To avoid the capability reflection problem, B must check to insure that A possessed the capability before the communication. To do this B can "look" at the clear text of the capability, c, by performing the transformation SdBu on the stored BdSuc. B must check c for the included key, Sd. Fig. 6 follows the transformations performed as a capability c passes from a server S to A, then to B, and finally back to S for validation.

It might appear that looking at the clear capability c would open it up to data theft. However, no process able to steal c (other than S itself) would be able to produce the Suc needed to access the resource.

Since the receiving algorithm unsigns the capability, one might hope - as I did - that the check for Sd in c would be unnecessary. It was pointed out, by Jim Ellison [Ell81-9] and others, however, that reflection will remove protection if the capabilities are not checked. For example, if a process X is able to steal the form of a capability stored in B and reflect it off of B without check, the result would be XdBuXdSuc. This can be transformed by X into the valid capability XdSuc.
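Because this algebra is easy to mis-track, the small symbolic Python sketch below (the list representation and the apply routine are invented for this note) mechanically checks both the normal passing chain of fig. 6 and the reflection attack, using only the rule that Xu and Xd of the same key pair cancel when adjacent:

    # Symbolic check of the public-key capability transformations and the reflection attack.
    # A term is a list of transforms, outermost first; c itself is the empty list.
    def apply(ops, term):
        term = list(term)
        for op in reversed(ops.split()):               # rightmost transform is applied first
            if term and term[0][0] == op[0] and term[0][1] != op[1]:
                term.pop(0)                            # Xu cancels an adjacent Xd, and vice versa
            else:
                term.insert(0, op)
        return term

    cap_at_A = apply("Ad Su", [])                      # S authorizes A: AdSuc
    in_transit = apply("Bd Au Au", cap_at_A)           # A's sending transformation
    assert in_transit == ["Bd", "Au", "Su"]            # BdAuSuc, as in the text
    cap_at_B = apply("Bd Ad Bu", in_transit)           # B's receiving transformation
    assert cap_at_B == ["Bd", "Su"]                    # stored as BdSuc
    assert apply("Sd Bu", cap_at_B) == []              # "looking" with SdBu yields the clear c

    # The reflection attack: X steals BdSuc from B and reflects it off B without a check.
    held_by_B = apply("Bd Xd Bu", cap_at_B)            # B receives the stolen block "from X"
    returned = apply("Xd Bu Bu", held_by_B)            # B sends it back to X
    assert returned == ["Xd", "Bu", "Xd", "Su"]        # XdBuXdSuc, as in the text
    forged = apply("Xu", apply("Xd Bd Xu", returned))  # X's receiving transformation, plus one extra Xu
    assert forged == ["Xd", "Su"]                      # XdSuc: a validly stored capability for X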

An important requirement of this protocol is that its transformations (receiving, looking, and sending) be indivisible. That is, intermediate results must not appear in the memories of processes from which they might be stolen. For example, if the transformation BdAuAu(AdSuc) were to yield the intermediate result Suc in memory after the first Au, the integrity of the protection could be breached.

The private decryption key and the intermediate results can be protected in several ways. For example, in a multiprogrammed OS component the transformations can be performed by the OS kernel in response to a virtual user instruction (only the kernel knows the process's private decryption key). In a smaller single-domain system (e.g., a microprocessor system), it might prove effective to have the transformations performed in a hardware device that alone knows the system's private decryption key.

It has been pointed out elsewhere that in many situations the problem of distributing public encryption keys is as severe as that of distributing secret keys [PoK79-23]. The difficulty comes from trying to match a public key with some previously identified entity. For example, is this public key really that of the person with whom I intend to correspond? Fortunately this difficulty is not present in the capability protection mechanism discussed here because the public keys are distributed in the capability data blocks. It is true that a sender of a capability can endanger a trusting receiver by passing it a bad capability (e.g., with a bad public key). This is simply an instance of the well-known Trojan Horse problem, however, and is necessarily present in the best of capability management mechanisms.

This public key protocol has the same strength against data theft as the encrypted-address scheme but requires no extraneous message passing. Its major weakness is the processing costs involved in the transformations required. It depends, of course, on the existence of a suitable public-key encryption algorithm.

14. A Capability Passing Structure

We have explored four examples of capability-passing protocols. Each seems to have strengths and weaknesses. From the viewpoint of a process wishing to communicate capabilities, this is a rather negative result. It appears that there is little likelihood of finding a best capability-passing protocol that can be used as a standard. There is a great deal of commonality in these example protocols, however. By exploiting this commonality it may be possible at least to define a common capability-passing structure.

All of our example capability-passing protocols can be divided into three analogous parts: the communication proper, the sending transformation, and the receiving transformation. The communication proper is simply a matter of communicating the capability data block (including the server's network address and the server-dependent information). The sending and receiving transformations break down as pictured in Table 1.


                                  Table 1

                Sending and Receiving Transformations for the
                         Capability-Passing Protocols

________________________________________________________________________
Protocol               Sending                  Receiving
________________________________________________________________________
Password               None                     None

Access list or         Request server to        Receive authorization 
encrypted address      forward                  from server
                       authorization

Public key A to B      BdAuAu                   SdAdBu to check c,
                                                then BdAdBu to store

To integrate these protocols into a common structure, we can supply each capability-passing process with library routines for the sending and receiving transformations that can handle the three distinct forms. The stored capability data block and the address to which the capability is to be sent are passed to the routine SendTransform. It returns a capability data block suitable for direct transfer. The capability data block must always keep a description of the form of protection that it uses in clear text. This protection form description allows the transformation routines to dynamically decide which kind of protection method is in use and thus which transformations to perform.

The received capability data block and the address from which the block was received are passed to the routine ReceiveTransform. It checks the capability (if necessary) and returns the transformed capability data block in a form suitable for storing in the domain of the receiving process.
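One way such a common structure might be arranged is sketched below (Python). The dispatch on the clear-text protection-form tag is the point; the tag values, block layout, and stubbed per-form handlers are inventions standing in for the actions listed in Table 1:

    # SendTransform/ReceiveTransform dispatching on the capability block's protection form.
    def _server_exchange(block, other):        # access list and encrypted address forms
        raise NotImplementedError("ask the server to forward authorization (fig. 4, step 2*)")

    def _kernel_transform(block, other):       # public key form
        raise NotImplementedError("indivisible key transformations of section 13")

    SEND_HANDLERS = {"password": lambda b, dest: b,        # pass the data block as is
                     "access-list": _server_exchange,
                     "encrypted-address": _server_exchange,
                     "public-key": _kernel_transform}

    RECEIVE_HANDLERS = {"password": lambda b, src: b,      # store the data block as is
                        "access-list": _server_exchange,
                        "encrypted-address": _server_exchange,
                        "public-key": _kernel_transform}

    def SendTransform(block, destination):
        return SEND_HANDLERS[block["protection"]](block, destination)

    def ReceiveTransform(block, source):
        return RECEIVE_HANDLERS[block["protection"]](block, source)

    cap = {"protection": "password", "server": "S", "resource": b"file:42", "password": b"..."}
    assert SendTransform(cap, "net.addr.B") is cap         # password blocks need no transformation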

15. Human Management of Process Domains

Up to this point we have only considered processes controlling the domains of other processes. This level of control allows system designers to manage their resource allocation internally for economy, reliability, security, and privacy. If computers really are around to do our bidding, however, then we humans will have to get into the act somewhere.

From a computer's point of view we are simply input. The input that we supply goes into the domain of some process. Generally when we begin to use a computer system our input is read by some powerful process that can identify us and make sure that we are given access to only our authorized resources. This "login" mechanism has been discussed ad nauseam elsewhere so we will not consider it further here.

Once we are identified, however, we begin communicating with a process that (unfortunately) must have access to all of our resources so it can satisfy our requests. Generally the program in this process (e.g., an OS command language, job control language, or terminal input language processor) does little work by itself, so it is usually quite trustworthy. Whenever we want anything done, however, it gives some of our resources to some other utility process to carry out our bidding. This is where our human domain management becomes important.

It is difficult to understand why people seem to trust computers so implicitly. They certainly don't trust us if they can help it. Perhaps people feel they have no choice. With many computer systems this is true, but it need not be. Why give programs that do our bidding a chance to waste resources (economy), mess up resources (reliability), or give resource access to our enemies (security and privacy)? If we have adequate protocols for process domain control, we can give processes working on our behalf exactly the resources they need in order to do our bidding.

16. Human Expression of Resource Access Control

A basic problem in this area is to define a protocol that people can use for expressing their domain specifications. We consider several approaches for such a protocol.

a. Explicit Resource Specification

Using this approach, a person would explicitly name each resource or resource group that should be given to a utility process. Since parameter information is usually required by utilities in addition to their required resources, resource specifications would have to be distinguished syntactically. An approach such as this could be facilitated by passing some resources by default or with a shorthand notation. In spite of such efforts, however, this approach would likely prove inconvenient for users. The simple expedient of passing all resources to every utility would probably be the norm.

b. Let the Utility Choose

With this approach utilities would request needed resources from the command interpreter, either directly or by instructing it how to parse the user's request. This approach might keep utilities from inadvertently doing harm to a person's resources, but it still leaves every utility in the position of a Trojan Horse.

c. Canned Command Parsing Specifications

With this approach, the command interpreter is given predefined procedures for parsing the user's commands to decide which resources to pass to utilities. These parsing specifications might come in the form of macros or tables for a table-driven parser. The specifications should be generated or at least reviewed by a trusted person (if you can't trust a systems programmer, who can you trust?). People can use any convenient command syntax with blissful indifference and still be protected from inadvertently or maliciously destructive programs. A difficulty would arise only if someone were given a new utility and parsing specification as a gift. Anyone that would use such a gift specification without review would knowingly submit to a potential Trojan Horse. Let the user beware.
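A toy sketch of such a canned specification is given below (Python). The table, the command names, and the capability handles are invented; the point is only that a reviewed specification, not the utility itself, decides which of the user's capabilities enter the utility's domain:

    # A table-driven parsing specification: for each command, which utility runs it and
    # which argument positions name capabilities that should be passed to that utility.
    PARSING_SPECS = {
        "print": ("printer-service",  [0]),     # "print report.txt" passes only that file
        "copy":  ("copy-utility",     [0, 1]),  # source and destination files only
        "list":  ("directory-lister", []),      # no user capabilities needed
    }

    def domain_for(command_line, user_capabilities):
        verb, *args = command_line.split()
        utility, cap_positions = PARSING_SPECS[verb]
        # Only the capabilities named by the specification are placed in the utility's domain.
        return utility, [user_capabilities[args[i]] for i in cap_positions]

    caps = {"report.txt": "cap-report", "backup.txt": "cap-backup"}
    assert domain_for("print report.txt", caps) == ("printer-service", ["cap-report"])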

17. Conclusions

The resource control portion of an operating system's management task can have important consequences for the overall economy, reliability, security, and privacy of the system. In distributed systems, where this control is most important, it is also the most difficult to achieve. We have discussed several methods for achieving precise control of process domains in a distributed environment.

Such precise control cannot be fully exploited if people are not able to conveniently use it to protect themselves from inadvertently or maliciously destructive processes. We have described a promising approach for human management of process domains.

The problem of domain management is common to all computer systems, but it is particularly important in distributed systems. This problem can only be squarely addressed and solved if it is disentangled from the jungle of details needed to make resources available in distributed systems.

18. Acknowledgments

The development of the LINCS protocols requires the cooperation of many people at LLNL. The turmoil of this interaction provides an environment rich in protocol lore. Domain management is one of several protocol areas whose commonality is most easily perceived in such an environment. Many of the concepts presented here on process domain control arose from early discussions on LINCS capability communication with John Fletcher and Dick Watson of LLNL.

This work was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under contract number W-7405-Eng-48.

19. References

1. [BaH77]
Baskett, F., J. H. Howard, and J. T. Montague,
"Task Communication in Demos,"
in Proc. of the Sixth Symposium on Operating System Principles, Purdue University, November 16-18, 1977 (in ACM Operating Systems Review 11(5), 1977), pp. 23-31.
2. [Bob72]
Bobrow, D. G.,
"Tenex, A Paged Time Sharing System for the PDP-10,"
Commun. ACM 15(3), 135 (March 1972).
3. [DeV66]
Dennis J. B., and E. C. Van Horn,
"Programmed Semantics for Multiprogrammed Computations,"
Commun. ACM 9(3), 143 (March 1966).
4. [DoF80]
Donnelley, J. E., and J. G. Fletcher,
"Resource Access Control in a Network Operating System",
in Proc. ACM Pacific '80 Regional Conference, San Francisco, Calif., November 10-12, 1980 (ACM 1980), pp. 115-126.
5. [Don76]
Donnelley, J. E.,
"A Distributed Capability Computing System,"
Proc. of the Third International Conference on Computer Communication, Toronto, Ontario, August 3-6, 1976 (ICCC, 1976), pp. 432-440.
6. [Don79]
Donnelley, J. E.,
"Components of a Network Operating System,"
Computer Networks 3, 389 (1979). Also in Proc. Fourth Conference on Local Computer Networks, Minneapolis, Minnesota, October 22-23, 1979 (IEEE, 1979), pp. 1-12.
7. [DoY76]
Donnelley, J. E., and J. Yeh,
"Interaction Between Protocol Levels in a CSMA Broadcast Network,"
Computer Networks 3, 9 (1979).
8. [DuB80]
DuBois, P. J.,
"NLTSS Disk File Service (Prototype),"
Lawrence Livermore National Laboratory, Report UCID-18857 (August 1980).
9. [Ell81]
Ellison, James,
private communication,
Aerospace Corporation, February 1981.
10. [Eng72]
England, D. M.,
"Architectural Features of System 250,"
Infotech State of the Art Report 14: Operating Systems (Infotech International Ltd., Maidenhead, Berkshire, England, 1972), pp. 395-428.
11. [Fab74]
Fabry, R. S.,
"Capability Based Addressing,"
Commun. ACM 17(7), 403 (July 1974).
12. [Fle73]
Fletcher, J. G.,
"The Octopus Computer Network,"
Datamation 19(4), 58 (April 1973).
13. [Fle79]
Fletcher, J. G.,
"Serial Link Protocol Design: A Critique of the X.Z5 Standard, Level 2,"
Lawrence Livermore National Laboratory Report UCRL-83604 (October 1979).
14. [FlW78]
Fletcher, J. G., and R. W. Watson,
"Mechanisms for a Reliable Timer-Based Protocol,"
Computer Networks 2, 271 (Sept./Oct. 1978). Also in Proc. Computer Network Protocols Symposium, Liege, Belgium (February 1978), pp. C5-1/C5-17.
15. [FlW80]
Fletcher, J. G., and R. W. Watson,
"Service Support in a Network Operating System,"
in VLSI: New Architectural Horizons (Digest of Papers, COMPCON Spring 80, 20th IEEE Computer Society International Conference, San Francisco, Calif., February 25-28, 1980), IEEE Catalog No. 80CH1491-0 C (IEEE, 1980), pp. 415-424.
16. [ITT78]
International Telegraph and Telephone Consultative Committee: Provisional recommendation X.3, X.25, X.28, and X.29 on packet-switched data transmission services. Geneva, Switzerland (1978).
17. [Lan75]
Landau, C. R.,
"The RATS Operating System,"
Lawrence Livermore National Laboratory, Report UCRL-77378 (1975).
18. [LaS76]
Lampson, B. W., and H. Sturgis,
"Reflections on an Operating System Design,"
Commun. ACM 19(5), 251 (May 1976).
19. [Lem79]
Lempel, A.,
"Cryptology in Transition,"
Computing Surveys, ACM 11(4), 285 (December 1979).
20. [MiD80]
Minton, J., and J. E. Donnelley,
"The Syntax and Semantics of NLTSS Message Tokens,"
Lawrence Livermore National Laboratory, Report UCID-18852 (April 1980).
21. [NBS77]
National Bureau of Standards, Data Encryption Standard, Federal Information Processing Standards Publication 46 (1977).
22. [Org72]
Organick, E. I.,
"The Multics System: An Examination of Its Structure,"
(MIT Press, Cambridge, Mass., 1972).
23. [PoK79]
Popek, G. J., and C. S. Kline,
"Encryption and Secure Computer Networks,"
Computing Surveys, ACM 11(4), 331 (December 1979).
24. [Pos80]
Postel, J. B.,
"DoD Standard Internet and Transmission Control Protocol Specification,"
IEN 128, 129 (January 1980), available through the Defense Advanced Research Projects Agency, IPTO, Arlington, Va.
25. [RiT74]
Ritchie, D. M., and K. Thompson,
"The Unix Time Sharing System,"
Commun. ACM 17(7), 365 (July 1974).
26. [Wat78]
Watson, R. W.,
"The LLL Octopus Network: Some Lessons and Future Directions,"
in Proc. Third USA-Japan Computer Conference, San Francisco, Calif. (October 1978), pp. 12-21.
27. [Wat79]
Watson, R. W.,
"Delta-t Protocol Specifications,"
Lawrence Livermore National Laboratory, Report UCRL-52881 (November 1979).
28. [Wul74]
Wulf, W., E. Cohen, W. Corwin, A. Jones, R. Levin, C. Pierson, and F. Pollack,
"Hydra: the Kernel of a Multiprocessor System,"
Commun. ACM 17(6), 337 (June 1974).

DISCLAIMER
This document was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the University of California nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or the University of California, and shall not be used for advertising or product endorsement purposes.


