Cascading Interfaces

William Kent
Database Technology Department
Hewlett-Packard Laboratories
Palo Alto, California

July 1992

The following scenario illustrates a variety of requirements on object models. Key points to keep in mind:

Requested services are often provided via cascading interfaces, frequently involving different objects at each interface. Objects may be categorized as "visible" or "invisible" with respect to a particular interface. The visible objects are identified and manipulated at the interface. The invisible objects are not, even if the user is aware of their effect on the behavior of the visible objects.

The object model is multi-purpose, designed to be applicable to a variety of areas such as user interfaces, application interfaces, network management, etc. Different areas may use different components from the model, and may involve different populations of object types and instances. This is illustrated in the following scenario:

At a graphical user interface, a user manipulates presentation objects such as icons and windows. The user may directly create and destroy icons and windows; change their size, position, and color; convert icons into windows and vice versa; manipulate text and graphics appearing in the windows; and so on. As much as possible, for the user's convenience, the GUI tries to maintain the illusion that such activities are manipulating a real thing, such as a document. Occasionally, however, the user must realize that the icon is not the document, just as a photograph or TV image is not the real person; you can draw a mustache on the photograph or turn the TV image green without altering the person. The user may notice that he has several icons for the same document in different desktops or folders; several icons can't be the same thing as one document. He knows that editing the text in a window does not alter the "real" text hidden under the interface until he issues a "save" request; until then, the real text is available to refresh the window. He is more keenly aware of the difference if he can open several windows on the same document and edit them differently, seeing two different versions of the document at the same time -- neither one being the real text.

At the outset the GUI user may identify the underlying document when he requests a window to be opened on it. He knows that save and restore requests involve that document. But the document is largely invisible; the objects the user deals with are the icons, windows, and their contents at the interface. Thus the user is aware that the icons and windows in the GUI are distinct objects from the documents they represent and map to.
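The separation can be pictured with a rough sketch (in Python, with invented class and method names rather than the interface of any particular system): a window keeps a working copy of the text, and only a save request touches the real document.

    class Document:
        """The 'real' text, largely invisible at the GUI."""
        def __init__(self, text):
            self.text = text

    class Window:
        """A presentation object; possibly one of several on the same document."""
        def __init__(self, document):
            self.document = document
            self.buffer = document.text       # working copy shown to the user

        def edit(self, new_text):
            self.buffer = new_text            # changes only what the window shows

        def save(self):
            self.document.text = self.buffer  # now the real text changes

        def restore(self):
            self.buffer = self.document.text  # refresh the window from the real text

    # Two windows on one document can show two different versions,
    # neither of which is the real text until one of them is saved.
    doc = Document("original text")
    w1, w2 = Window(doc), Window(doc)
    w1.edit("first revision")
    w2.edit("second revision")
    w1.save()                                 # doc.text is now "first revision"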

Another interface may define the semantics of an application. It could be neutral to the user interface, being usable from a variety of command-line or graphic interfaces. A publishing application may support requests to create or destroy an article, assign writers to it, specify its subject and planned length, schedule it for a certain issue of a magazine, sketch its geometric layout -- long before any text is written. At this interface, users don't think that assigning a writer to an article is a message "to" either the writer or the article. Similarly, scheduling the article for a certain issue is not a message "to" either the article or the issue. These users don't have any notion of "where" an issue or a writer is, and it doesn't make sense to move them. The notion of moving an article only makes sense with respect to its position in a magazine issue, not within the computer system. Thus at these interfaces objects may appear to jointly own operations and to be dispersed in various ways.
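A sketch of such an interface (again in Python, with invented names) makes the point that these requests belong to the application interface itself, not to any single object:

    class PublishingInterface:
        """Application-level requests; none is a message 'to' a single object."""
        def __init__(self):
            self.articles = set()
            self.assignments = set()    # (writer, article) pairs
            self.issue_of = {}          # article -> issue

        def create_article(self, article):
            self.articles.add(article)

        def assign(self, writer, article):
            # Jointly involves a writer and an article; neither one "owns" it.
            self.assignments.add((writer, article))

        def schedule(self, article, issue):
            # Jointly involves an article and an issue.
            self.issue_of[article] = issue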

Information might be maintained as a complex combination of data and procedures. For some articles, the length might be a simple property assigned or altered by the layout editor. For others, the length might be computed from the text, font size, and illustrations. In some cases, assigning a length to an article might trigger a complex procedure which truncates or pads text and adjusts fonts, sizes, headings, and illustrations to fit.
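For instance, the same question about length might be answered by stored data in one case and by procedures in another; the following sketch (invented and greatly simplified) suggests the range:

    class SimpleArticle:
        """Length is a plain stored property, assigned by the layout editor."""
        def __init__(self, length):
            self.length = length

    class TypesetArticle:
        """Length is computed; assigning a length triggers an adjustment procedure."""
        def __init__(self, text, chars_per_page=3000, illustration_pages=0):
            self.text = text
            self.chars_per_page = chars_per_page
            self.illustration_pages = illustration_pages

        @property
        def length(self):
            # Computed from the text, an assumed page density, and the illustrations.
            return len(self.text) // self.chars_per_page + self.illustration_pages

        @length.setter
        def length(self, pages):
            # A crude stand-in for the real fitting procedure: truncate the text to fit.
            text_pages = max(0, pages - self.illustration_pages)
            self.text = self.text[: text_pages * self.chars_per_page]

    # To a user of the interface, article.length reads and assigns the same way
    # in either case; the difference is entirely behind the interface.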

Different users have different assumptions about what an article "contains". When a writer or copy editor opens the document, he expects to see its text. (But if the magazine has regional or international editions, there may be different versions of the text.) When the layout editor opens the document, he expects to see a graphic image of its shape in the magazine, probably without text. The librarian expects to see a title, authors, publication data, abstract, and keywords. The reprint manager expects to see an inventory, price, page count, and order history. Thus the notion of the "content" of an object may be different for different uses.
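One way to suggest this is a sketch in which a single article object presents a different "content" to each kind of user (the roles and fields below are invented for illustration):

    class Article:
        """One identifiable object whose 'content' depends on who opens it."""
        def __init__(self, title, authors, text, layout, abstract, inventory):
            self.title, self.authors, self.text = title, authors, text
            self.layout, self.abstract, self.inventory = layout, abstract, inventory

        def open_as(self, role):
            views = {
                "writer":    {"text": self.text},
                "layout":    {"layout": self.layout},   # the shape, probably no text
                "librarian": {"title": self.title, "authors": self.authors,
                              "abstract": self.abstract},
                "reprints":  {"inventory": self.inventory},
            }
            return views[role]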

All these users are dealing with the same article as an identifiable object. To many of them, the text of the document is just an incidental property.

Requests at this application interface map into requests at a lower level which deal with computational resources. A request to edit the text of an article translates to the invocation of a certain editor with a certain text file. They have to be identified, found, brought to a common work site, dispatched, and so on. The same happens with a request to rearrange the layout of the article, or to change the assigned writers, or to reschedule it to a different issue. Each such request may involve different units of program and/or data, possibly at different locations. The text file, layout file, editor program, and graphics package are all different objects from the article itself. The program and data units are objects involved in providing these services, but they are not visible to users of the publishing application interface.
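The mapping itself might be sketched as a function at the computational-resource level (the resource-level operations below are hypothetical stand-ins for real services):

    # Hypothetical resource-level operations, each standing in for a real service.
    def locate_file(article):          return f"/texts/{article}.txt"
    def locate_program(name):          return f"/bin/{name}"
    def common_work_site(*units):      return "workstation-1"
    def dispatch(program, data, site): return (site, program, data)

    def edit_text(article):
        """Map an application-level request onto computational resources."""
        # The article is neither the file nor the editor; those objects are
        # invisible to users of the publishing application interface.
        text_file = locate_file(article)          # identify and find the data unit
        editor = locate_program("text-editor")    # identify and find the program unit
        site = common_work_site(text_file, editor)
        return dispatch(editor, text_file, site)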

The existence of distinct objects at the various levels is illustrated by the various things a user might intend when he slides an icon around the screen: he might simply be tidying his desktop, rearranging the presentation without affecting anything else; he might be moving the document into a different folder, a change at the application level; or he might intend to move the underlying file to a different directory or device, a change among the computational resources.

In summary, at any given interface there are visible objects, which are identified and manipulated at that interface, and invisible objects, which contribute to the behavior observed there but are not themselves identified or manipulated at that interface.

Different interfaces may support different objects, and even different kinds of objects with respect to other categorizations. Objects at one interface may be dispersed, being mapped into coherent objects at a lower interface. Thus there may be different object models at the different interfaces. Between interfaces there are applications (services) which map from one level to another. A GUI service maps from user interface actions to publication manager requests. The publication manager application in turn maps these to a computational resource interface. This may in turn cascade down through several more layers of implementation and communication services.
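The cascade can be caricatured as a chain of mappings, each layer translating operations on its own visible objects into requests on the objects of the layer below (all names here are illustrative):

    from collections import namedtuple

    Icon = namedtuple("Icon", ["document"])   # presentation object naming a document

    class ResourceLayer:
        def invoke(self, program, data):
            return f"run {program} on {data}"

    class PublicationManager:
        def __init__(self, resources):
            self.resources = resources

        def edit_article(self, article):
            # An application request mapped onto computational resources.
            return self.resources.invoke("text-editor", f"{article}.txt")

    class GUIService:
        def __init__(self, publications):
            self.publications = publications

        def open_window(self, icon):
            # A user-interface action mapped onto a publication-manager request.
            return self.publications.edit_article(icon.document)

    # Each interface has its own visible objects: icons at the top,
    # articles in the middle, programs and files at the bottom.
    gui = GUIService(PublicationManager(ResourceLayer()))
    gui.open_window(Icon(document="spring-fashion-article"))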