
15.3 Time Warp

   

Discrete-event simulations are among the most expensive of all computational tasks. With current technology, a single run of a large simulation can take hours or days of processor time on a sequential machine; many large military simulations, for example, take days to complete on standard single processors. If the model is probabilistic, many runs are needed to determine the output distributions. Nevertheless, many scientific, engineering, and military projects depend heavily on simulation because experiments on real systems are too expensive or too unsafe. Any technique that speeds up simulations is therefore of great importance.

We designed the Time Warp Operating System (TWOS) to address this problem. TWOS is a multiprocessor operating system that runs parallel discrete-event simulations. We developed TWOS on the Caltech/JPL Mark III Hypercube. We have since ported it to various other parallel architectures, including the Transputer and a BBN Butterfly GP1000. TWOS is not intended as a general-purpose multiuser operating system, but rather as an environment for a single concurrent application (especially a simulation) in which synchronization is specified using virtual time [Jefferson:85c].

The innovation that distinguishes TWOS from other operating systems is its complete commitment to an optimistic style of execution and to process rollback for almost all synchronization. Most distributed operating systems either cannot handle process rollback at all or implement it as a rarely used mechanism for special purposes such as exception handling, deadlock, transaction abortion, or fault recovery. But the Time Warp Operating System embraces rollback as the normal mechanism for process synchronization, and uses it as often as process blocking is used in other systems. TWOS contains a simple, general distributed rollback mechanism capable of undoing or preventing any side effect, direct or indirect, of an incorrect action. In particular, it is able to control or undo such troublesome side effects as errors, infinite loops, I/O, creation and destruction of processes, asynchronous message communication, and termination.
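
To make the mechanism concrete, the sketch below shows, in simplified C, the bookkeeping an optimistic simulation object needs for rollback: a local virtual time, a log of saved states, and a log of sent messages, so that a message arriving in the object's virtual past can restore an earlier state and cancel now-invalid output with anti-messages. All names and data structures here are purely illustrative, not TWOS's actual internals.

    /* Illustrative Time Warp bookkeeping for one simulation object.
       All names and structures here are hypothetical, not TWOS internals. */

    #include <string.h>

    #define MAX_LOG 1024

    typedef struct {
        double recv_time;            /* virtual time at which the message acts */
        int    sign;                 /* +1 = message, -1 = anti-message */
        /* ... message body ... */
    } Message;

    typedef struct {
        double  lvt;                      /* local virtual time */
        int     n_states;
        double  state_time[MAX_LOG];      /* virtual times of saved states */
        char    state_copy[MAX_LOG][256]; /* saved state snapshots */
        int     n_sent;
        Message sent_log[MAX_LOG];        /* output kept for possible cancellation */
        char    state[256];               /* current state */
    } SimObject;

    /* Called when a message arrives with recv_time earlier than obj->lvt. */
    void rollback(SimObject *obj, double straggler_time,
                  void (*send)(const Message *))
    {
        int i;

        /* Restore the most recent saved state older than the straggler. */
        for (i = obj->n_states - 1; i >= 0; i--) {
            if (obj->state_time[i] < straggler_time) {
                memcpy(obj->state, obj->state_copy[i], sizeof obj->state);
                obj->lvt = obj->state_time[i];
                obj->n_states = i + 1;
                break;
            }
        }

        /* Cancel output already sent into the virtual future by emitting
           anti-messages; a real system would also trim these log entries. */
        for (i = 0; i < obj->n_sent; i++) {
            if (obj->sent_log[i].recv_time >= straggler_time) {
                Message anti = obj->sent_log[i];
                anti.sign = -1;
                send(&anti);
            }
        }
    }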

TWOS relies on an underlying kernel for basic message passing, but uses it for nothing else. On the Caltech/JPL Mark III Hypercube, this role was played by Cubix, described in Section 5.2. The other facilities of the underlying operating system are not used because rollback forces a rethinking of almost all operating system issues, including scheduling, synchronization, message queueing, flow control, memory management, error handling, I/O, and commitment. All of these are handled by TWOS itself; only the extra work of implementing a correct message-passing facility prevents TWOS from running on the bare hardware.

We have been developing TWOS since 1983. It is now an operational system that includes many advanced features, such as dynamic creation and destruction of objects, dynamic memory management, and dynamic load management. TWOS is being used by the United States Army's Concepts Analysis Agency to develop a new generation of theater-level combat simulations. TWOS has also been used to model parallel processing hardware, computer networks, and biological systems.

Figure 15.6 shows the performance of TWOS on one simulation called STB88. This simulation models theater-level combat in central Europe [Wieland:89a]. The graph in Figure 15.6 shows how much version 2.3 of TWOS was able to speed up this simulation on varying numbers of nodes of a parallel processor. The speedup shown is relative to running a sequential simulator on a single node of the same machine used for the TWOS runs. The sequential simulator uses a splay tree for its event queue. It never performs rollback, and hence has a lower overhead than TWOS. The sequential simulator links with exactly the same application code as TWOS. It is intended to be the fastest possible general-purpose discrete-event simulator that can handle the same application code as TWOS.
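
The heart of such a sequential simulator is simply an event loop driven by a time-ordered event queue. The following sketch is our own, not the simulator's actual code, and it substitutes a simple binary heap for the splay tree mentioned above; handlers execute an event at its scheduled time and may schedule further events.

    /* Core of a sequential discrete-event simulator: a time-ordered event
       queue and a loop that executes events in increasing time order.
       This sketch uses a binary heap instead of a splay tree. */

    #include <stdio.h>

    #define MAX_EVENTS 10000

    typedef struct {
        double time;               /* simulation time of the event */
        void (*handler)(double);   /* code executed when the event fires */
    } Event;

    static Event heap[MAX_EVENTS];
    static int   n_events = 0;

    static void push_event(Event e)           /* insert, keeping heap order */
    {
        int i = n_events++;
        while (i > 0 && heap[(i - 1) / 2].time > e.time) {
            heap[i] = heap[(i - 1) / 2];
            i = (i - 1) / 2;
        }
        heap[i] = e;
    }

    static Event pop_event(void)              /* remove the earliest event */
    {
        Event top = heap[0];
        Event last = heap[--n_events];
        int i = 0;
        for (;;) {
            int child = 2 * i + 1;
            if (child >= n_events) break;
            if (child + 1 < n_events && heap[child + 1].time < heap[child].time)
                child++;
            if (last.time <= heap[child].time) break;
            heap[i] = heap[child];
            i = child;
        }
        heap[i] = last;
        return top;
    }

    static void hello(double t)
    {
        printf("event at simulation time %g\n", t);
    }

    int main(void)
    {
        Event e = { 5.0, hello };
        push_event(e);                        /* schedule two sample events */
        e.time = 2.0;
        push_event(e);

        while (n_events > 0) {                /* main event loop */
            Event next = pop_event();
            next.handler(next.time);          /* fires at time 2, then 5 */
        }
        return 0;
    }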

  
Figure 15.6: Performance of TWOS on STB88

Figure 15.6 demonstrates that TWOS can run this simulation more than 25 times faster than the sequential simulator, given a sufficient number of nodes. On other applications, even higher speedups are possible. In certain cases, TWOS has achieved up to 70% of the maximum theoretical speedup, as determined by critical path analysis.
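
In our notation (not the chapter's), the speedup on P nodes and the bound given by critical path analysis are

    S(P) = \frac{T_{\mathrm{seq}}}{T_{\mathrm{TWOS}}(P)},
    \qquad
    S_{\max} = \frac{T_{\mathrm{seq}}}{T_{\mathrm{cp}}},

where T_cp is the time to execute the longest chain of causally dependent events; achieving 70% of the maximum theoretical speedup means S(P) is roughly 0.7 S_max.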

Research continues on TWOS. Currently, we are investigating dynamic load management [Reiher:90a]. Dynamic load management is important for TWOS because good speedups generally require careful mapping of a simulation's constituent objects to processor nodes. If the balance is bad, then the run is slow. But producing a good static balance takes approximately the same amount of work as running the simulation on a single node. Dynamic load management allows TWOS to achieve nearly the same speed with simple mappings as with careful mappings.

Dynamic load management is an interesting problem for TWOS because the utilizations of TWOS' nodes are almost always high. TWOS optimistically performs work whenever work is available, so nodes are rarely idle. On the other hand, much of the work done by a node may be rolled back, contributing nothing to completing the computation. Instead of balancing simple utilization, TWOS balances effective utilization, the proportion of a node's work that is not rolled back. Use of this metric has produced very good results.
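
A minimal sketch of the metric, with names of our own choosing rather than the TWOS statistics interface: effective utilization is computed per node from the time spent executing events and the portion of that work later undone by rollback. A load manager built on it would shift objects away from nodes whose effective utilization is high toward nodes where it is low.

    /* Illustrative per-node statistic; field names are hypothetical. */

    typedef struct {
        double busy_time;         /* time spent executing events */
        double rolled_back_time;  /* part of busy_time later undone by rollback */
        double elapsed_time;      /* length of the measurement interval */
    } NodeStats;

    /* Plain utilization would be busy_time / elapsed_time; effective
       utilization discounts the work that rollback discarded. */
    double effective_utilization(const NodeStats *s)
    {
        if (s->elapsed_time <= 0.0)
            return 0.0;
        return (s->busy_time - s->rolled_back_time) / s->elapsed_time;
    }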

Future research directions for TWOS include database management, real-time user interaction with TWOS, and the application of virtual-time synchronization to other types of parallel processing problems. A more complete description of TWOS can be found in [Jefferson:87a].

Participants in this project were David Jefferson, Peter Reiher, Brian Beckman, Frederick Wieland, Mike Di Loreto, Philip Hontalas, John Wedel, Paul Springer, Leo Blume, Joseph Ruffles, Steven Belenot, Jack Tupman, Herb Younger, Richard Fujimoto, Kathy Sturdevant, Lawrence Hawley, Abe Feinberg, Pierre LaRouche, Matt Presley, and Van Warren.


