This is a review of Leslie Lamport's paper Time, Clocks, and the Ordering of Events in a Distributed System. I focus on the paper's proof of the conditions under which a distributed system of physical clocks can produce an ordering of events that does not violate causality. The proof lies in the appendix of the paper and is extremely brief: not unlike reading a mystery novel. I expand on the paper by presenting the proof with all the math worked out, explained with a less mysterious level of brevity.
We are presented with a set of processes $P_1, P_2, \ldots$; each process $P_i$ has a clock $C_i$. $C_i\langle a\rangle$ returns the timestamp of event $a$ occurring on $P_i$; $C_i(t)$ returns the timestamp of $P_i$'s clock at physical time $t$. We assume all processes are moving at the same speed, so there's no need to worry about relativity.
We define a relation $\rightarrow$, or "happens before", as a relation on the set of events such that:
If $a$ and $b$ are events occurring on $P_i$, and $a$ comes before $b$, then $a \rightarrow b$.
If $a$ is the sending of a message, and $b$ the receiving, then $a \rightarrow b$. (And $\rightarrow$ is transitive: if $a \rightarrow b$ and $b \rightarrow c$, then $a \rightarrow c$.)
A nice property of $\rightarrow$ is that $a$ can causally affect $b$ if and only if $a \rightarrow b$. This is because, for events $a$ and $b$ occurring on $P_i$ and $P_j$, $a$ can only affect $b$ if $P_j$ has learned about $a$ by receiving a message (possibly through intermediaries) from $P_i$.
To break ties between events with the same timestamp, we order events according to an arbitrary total ordering $\prec$ of the processes, yielding a total order on events, written $\Rightarrow$.
$\Rightarrow$ will never violate causality, as two events with the same timestamp can't cause each other.
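As a tiny worked example (mine, not the paper's): say event $a$ occurs on $P_1$ and event $b$ on $P_2$, with neither process having heard from the other, and

$$C_1\langle a\rangle = C_2\langle b\rangle = 5, \qquad P_1 \prec P_2 \implies a \Rightarrow b$$

Since neither $a \rightarrow b$ nor $b \rightarrow a$, ordering $a$ before $b$ is arbitrary, but safe.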
For our system of clocks to create an ordering of events consistent with $\rightarrow$, we define two clock conditions.
C1: If $a$ and $b$ are events on $P_i$, and $a$ comes before $b$, then $C_i\langle a\rangle < C_i\langle b\rangle$.
C2: If $a$ is the sending of a message by process $P_i$, and $b$ the message's receipt by process $P_j$, then $C_i\langle a\rangle < C_j\langle b\rangle$.
C1 ensures the first rule of $\rightarrow$ is satisfied, and C2 the second. Thus, a system of clocks satisfying the clock conditions will order events consistently with $\rightarrow$.
Before considering physical clocks, we consider a simpler clock. Logical clocks satisfy the clock conditions, but don't order events in a way compatible with our notion of time.
To order events with a logical clock, processes follow two rules.
IR1: Each process $P_i$ increments $C_i$ between any two successive events.
IR2: (a) If an event $a$ is the sending of a message $m$ by process $P_i$, then the message contains a timestamp $T_m = C_i\langle a\rangle$. (b) Upon receiving a message $m$ in event $b$, a process $P_j$ sets $C_j$ greater than or equal to its present value, and greater than $T_m$.
Processes implementing these rules satisfy the clock conditions C1 and C2, by IR1 and IR2 respectively.
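To make the rules concrete, here is a minimal sketch of a logical clock in Python. This is my own illustration, not code from the paper; the class and method names are invented.

```python
# A minimal sketch of a logical clock implementing IR1 and IR2.
# Illustrative only; names are not from the paper.

class Process:
    def __init__(self, name: str):
        self.name = name
        self.clock = 0  # C_i

    def local_event(self) -> int:
        # IR1: increment the clock between successive events.
        self.clock += 1
        return self.clock

    def send(self) -> int:
        # The send is itself an event (IR1)...
        self.clock += 1
        # ...and per IR2(a), the message carries the timestamp T_m.
        return self.clock

    def receive(self, t_m: int) -> int:
        # IR2(b): set C_j greater than or equal to its present value,
        # and greater than T_m.
        self.clock = max(self.clock, t_m) + 1
        return self.clock

p1, p2 = Process("P1"), Process("P2")
p1.local_event()      # C_1 = 1
t_m = p1.send()       # C_1 = 2; the message carries T_m = 2
p2.receive(t_m)       # C_2 = max(0, 2) + 1 = 3, so send < receive
```

Note how `receive` is what enforces C2: the receiving event is always timestamped after the sending event, no matter how far behind the receiver's clock was.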
In a sense, logical clock time and the time we are used to line up. IR2 guarantees events happening before a message is sent are ordered before events on the receiving process after the message is received. This preserves causality, as in order for an event on $P_i$ to cause an event on $P_j$, a message must be sent between the processes.
When events cannot cause each other, our logical clocks order events according to the number of events preceding them. This has no basis in our typical notion of time. To address this, we turn to physical clocks.
Physical clocks track time as we perceive it. A watch is a physical clock, and like a watch, physical clocks can become out of sync. This can cause causality to break: if one clock is so far ahead of another that a message's receiving timestamp is smaller than its sending timestamp, then receive $\Rightarrow$ send.
This section models a physical clock, derives the conditions under which causality can break, and proves what conditions will ensure it does not.
To model a physical clock, we put two conditions on its behavior.
PC1: There exists a constant $\kappa \ll 1$, such that for all $i$: $\left|\frac{dC_i(t)}{dt} - 1\right| < \kappa$.
PC2: There exists a constant $\varepsilon$ such that, for all $i$, $j$: $\left|C_i(t) - C_j(t)\right| < \varepsilon$.
There are two important things to note about these conditions. First, physical clocks are not naturally imbued with them. In a real system, a process might turn off indefinitely, in which case its clock ceases advancing, $\left|\frac{dC_i(t)}{dt} - 1\right|$ is no longer bounded by $\kappa$, and PC1 is violated. In this sense, the math in this paper isn't easily mapped to reality. Second, while physical clocks will generally run at rates close to correct (making PC1 reasonable), they will not automatically synchronize (making PC2 unreasonable, without a synchronization algorithm).
Clocks become out of sync when they run at the wrong rate. Mathematically speaking, we look to the derivative: a clock running at the correct speed has $\frac{dC_i(t)}{dt} = 1$, too fast $\frac{dC_i(t)}{dt} > 1$, and too slow $\frac{dC_i(t)}{dt} < 1$.
For any two processes communicating, there is a lower bound on how quickly they may exchange messages. At the limit, this is the distance between the processes, divided by the speed of light. We call this lower bound $\mu$.
Our goal is to find the values of $\varepsilon$ for which, for any message sent by $P_i$ at physical time $t$ and received by $P_j$ at physical time $t'$:

$$C_i(t) < C_j(t') \tag{1}$$

If this is true, our clocks will satisfy C2. As $\mu$ is a lower bound on message transmission time, $t' \geq t + \mu$, and clocks only advance. Thus, to satisfy C2, it is enough that for all $i$, $j$, $t$:

$$C_i(t) < C_j(t + \mu) \tag{2}$$
By PC1, the slowest possible clock advances at a rate of $1 - \kappa$, so $C_j(t + \mu) \geq C_j(t) + (1 - \kappa)\mu$. By PC2, the lowest possible value of $C_j(t)$ is $C_i(t) - \varepsilon$. As we are interested in solving the inequality for all possible values of $C_j(t)$, we plug this lower bound into the right side of (2):

$$C_i(t) < C_i(t) - \varepsilon + (1 - \kappa)\mu$$

yielding a bound on $\varepsilon$:

$$\frac{\varepsilon}{1 - \kappa} \leq \mu \tag{3}$$
If this inequality is true, our clocks will satisfy C2. And thus, for software satisfying C1, $\Rightarrow$ will preserve causality. If the inequality is false, it will be possible to order the receiving of a message before its sending, causing $\Rightarrow$ to break causality.
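To get a feel for the magnitudes (these numbers are mine, for illustration): suppose our clocks drift by at most $\kappa = 10^{-6}$, roughly crystal-oscillator quality, and the minimum transmission time between processes is $\mu = 1\,\text{ms}$. Then causality is safe so long as

$$\varepsilon \leq (1 - \kappa)\mu = (1 - 10^{-6}) \times 1\,\text{ms} \approx 0.999999\,\text{ms}$$

That is, the clocks must agree to within (slightly less than) the minimum message delay.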
Having discovered conditions under which causality breaks, we set about describing an algorithm which preserves causality, and prove it works. Our algorithm is the same one used for logical clocks, slightly modified for physical time.
IRP1: The physical clock of each process is always advancing: $\frac{dC_i(t)}{dt} > 0$.
IRP2: (a) For each $i$, if $P_i$ sends a message $m$ at physical time $t$, then $m$ contains a timestamp $T_m = C_i(t)$. (b) Upon receiving a message $m$ at $t'$, process $P_j$ sets $C_j(t') = \max(C_j(t'), T_m + \mu_m)$, where $\mu_m$ is the minimum delay of message $m$, defined below.
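As a sketch of how these rules might look in code (my illustration, not the paper's; `mu_m` is the known minimum delay of the message, and the rate-correctness of the underlying clock is assumed rather than enforced):

```python
# Sketch of the physical-clock rules IRP1/IRP2. Illustrative only.
import time

class PhysicalClock:
    def __init__(self) -> None:
        self.offset = 0.0  # correction accumulated from received messages

    def now(self) -> float:
        # IRP1: the underlying clock always advances; the offset only grows.
        return time.monotonic() + self.offset

    def timestamp_for_send(self) -> float:
        # IRP2(a): the message carries T_m = C_i(t).
        return self.now()

    def on_receive(self, t_m: float, mu_m: float) -> None:
        # IRP2(b): set C_j(t') = max(C_j(t'), T_m + mu_m).
        # The clock is only ever pushed forward, never back.
        self.offset += max(0.0, (t_m + mu_m) - self.now())
```

The key difference from the logical clock is that the receiver jumps to $T_m + \mu_m$ rather than just past $T_m$, since at least $\mu_m$ of physical time must have passed since the send.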
We model our processes as being arranged in a graph, the edges of which represent communication links. We make three assumptions about our graph and the behavior of its processes.
1. The diameter, $d$, of the graph is the smallest number such that any two processes are connected by a path of at most $d$ links.
2. For a message $m$ sent at physical time $t$, and received at $t'$, we define the total delay $\nu_m = t' - t \leq \mu_m + \xi_m$, where $\xi_m$ is the maximum unpredictable delay of the message, and $\mu_m$ the minimum predictable delay for the message. For speaking about the entire graph, we define $\xi = \max_m \xi_m$, $\mu = \min_m \mu_m$, and $\nu = \max_m \nu_m$, over all possible $m$.
3. Every $\tau$ seconds, a message is sent along every communication link.
With these assumptions, we set about finding a value for $\varepsilon$. To do so, we will find lower and upper bounds on the value of an arbitrary clock, then subtract them to find a value for $\varepsilon$.
Say process $P_1$ sends a message to $P_2$ at time $t_1$, which is received at time $t_2$. Per PC1, $C_2$ must advance at least as fast as the slowest possible clock, so for all $t \geq t_2$:

$$C_2(t) \geq C_2(t_2) + (1 - \kappa)(t - t_2) \tag{4}$$
Per IRP2, after the message has been received, the clock of the receiving process must satisfy $C_2(t_2) \geq T_m + \mu_m = C_1(t_1) + \mu_m$. Combining this with (4):

$$C_2(t) \geq C_1(t_1) + (1 - \kappa)(t - t_2) + \mu_m \tag{5}$$
Turning our attention to the second term on the right side of (5):

$$(1 - \kappa)(t - t_2) = (1 - \kappa)(t - t_1) - (1 - \kappa)(t_2 - t_1) \geq (1 - \kappa)(t - t_1) - (t_2 - t_1) \tag{6}$$
Combining (5) and (6):

$$C_2(t) \geq C_1(t_1) + (1 - \kappa)(t - t_1) - (t_2 - t_1) + \mu_m \tag{7}$$
We now briefly digress into delay. By assumption (2), the total delay of our message is $t_2 - t_1 = \nu_m \leq \mu_m + \xi_m$, and $\xi_m \leq \xi$, so:

$$\mu_m - (t_2 - t_1) \geq -\xi_m \geq -\xi \tag{8}$$
Combining (7) and (8):

$$C_2(t) \geq C_1(t_1) + (1 - \kappa)(t - t_1) - \xi \tag{9}$$
(9) sets a lower bound on the receiving clock's time, in terms of the sending clock. Now, imagine a series of processes $P_1, P_2, \ldots, P_n$. At time $t'_k$, process $P_{k+1}$ receives a message from process $P_k$, sent at time $t_k$, where $t_{k+1} \geq t'_k$: each message in the chain is sent after the previous one arrives. Consider what happens when we repeatedly apply (9) to $C_n(t)$:

$$C_n(t) \geq C_{n-1}(t_{n-1}) + (1 - \kappa)(t - t_{n-1}) - \xi \geq C_{n-2}(t_{n-2}) + (1 - \kappa)(t - t_{n-2}) - 2\xi \tag{10}$$
Every time we apply the inequality, it subtracts $\xi$, and decreases the time and clock subscripts by one. For a process $P_n$, there is a chain of processes $P_1, \ldots, P_{n-1}$ leading up to it, so (9) can be applied $n - 1$ times to find a lower bound on $P_n$'s clock:

$$C_n(t) \geq C_1(t_1) + (1 - \kappa)(t - t_1) - (n - 1)\xi \tag{11}$$
Finally, as a cleanup step, as a chain of $n$ processes is linked by $k = n - 1$ messages, we can write (11) in terms of $k$:

$$C_n(t) \geq C_1(t_1) + (1 - \kappa)(t - t_1) - k\xi \tag{12}$$
You may want to take a bong rip at this point.
Now imagine this scenario, applied to our graph of processes. Recall assumptions (1) and (3), which say our graph has diameter $d$, and every process sends a message to each of its neighbors every $\tau$ units of time. If we imagine timestamps propagating through the graph as messages are sent, every $d(\tau + \nu)$ units of time, a timestamp from every process will have had the opportunity to propagate to every other process.
We can thus apply our earlier inequality for the minimum value of a clock in a series of processes, for a series of $k = d$ messages. For any two processes $P_i$ and $P_j$, when $t \geq t' + d(\tau + \nu)$:

$$C_j(t) \geq C_i(t') + (1 - \kappa)(t - t') - d\xi \tag{13}$$
This sets a lower bound on the value of a clock, in terms of some known value of another clock and the amount of time that has passed. We are now halfway to finding $\varepsilon$. In the next section, we look for an upper bound.
Say at time $t'$, $C_x$ is the clock with the highest value. By PC1, for all $i$ and $t \geq t'$:

$$C_i(t) \leq C_x(t') + (1 + \kappa)(t - t') \tag{14}$$
However, this argument is subtly incomplete. When a process receives a message, it sets its clock's value to $\max(C_j(t), T_m + \mu_m)$ (IRP2). Imagine $P_x$ sends a message at $t_1$, setting $T_m = C_x(t_1)$. Upon receiving the message at time $t_2$, the receiver sets their clock to $T_m + \mu_m$. Is it possible for $T_m + \mu_m > C_x(t_2)$? If so, (14) would be incorrect.
Let's examine the inequality:

$$C_x(t_1) + \mu_m > C_x(t_2) \tag{15}$$
The smallest value of $C_x(t_2) - C_x(t_1)$ is $t_2 - t_1$, which occurs if the clock is running at exactly the correct rate (if $C_x$ runs slower than this, the $(1 + \kappa)$ slack in (14) absorbs the difference). We can plug this into the inequality, and say it is satisfied if $\mu_m > t_2 - t_1$. This is false, as by definition $\mu_m$ is less than the smallest possible transmission time for the message, while $t_2 - t_1$ is the actual transmission time. Hence, (14) remains true, even when considering clocks setting themselves ahead upon receiving messages from the fastest clock.
Equation (13) holds for any pair of processes, so setting the sender to $P_x$ and the receiver to an arbitrary $P_i$, and combining (13) and (14):

$$C_x(t') + (1 - \kappa)(t - t') - d\xi \;\leq\; C_i(t) \;\leq\; C_x(t') + (1 + \kappa)(t - t') \tag{16}$$
This holds for all $i$, and thus bounds the value of every clock. The farthest apart two clocks can be is the gap between the fastest and slowest possible clocks; that is, the difference between the upper and lower bounds of (16):

$$\bigl(C_x(t') + (1 + \kappa)(t - t')\bigr) - \bigl(C_x(t') + (1 - \kappa)(t - t') - d\xi\bigr) = 2\kappa(t - t') + d\xi \tag{17}$$
Thus, the maximum difference between clocks in our system is:

$$|C_i(t) - C_j(t)| \leq 2\kappa(t - t') + d\xi \tag{18}$$
Notice how (18) applies to all $t$, $t'$, where $t \geq t' + d(\tau + \nu)$. This means, for any $t$, we can find a $t'$ such that $t - t' = d(\tau + \nu)$. This is the lowest value for $t - t'$, so we can replace the $(t - t')$ term in (18):

$$|C_i(t) - C_j(t)| \leq 2\kappa d(\tau + \nu) + d\xi \tag{19}$$

giving us:

$$\varepsilon = d\bigl(2\kappa(\tau + \nu) + \xi\bigr) \tag{20}$$
This concludes our proof of a bound on $\varepsilon$. Our algorithm takes $d(\tau + \nu)$ units of time to synchronize, after which the amount two clocks may be out of sync is bounded.
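Plugging in illustrative numbers (mine, not the paper's): a graph of diameter $d = 5$, drift $\kappa = 10^{-6}$, a message on every link every $\tau = 1\,\text{s}$, total delay $\nu = 10\,\text{ms}$, and unpredictable delay $\xi = 1\,\text{ms}$ gives

$$\varepsilon = 5\bigl(2 \times 10^{-6} \times 1.01 + 10^{-3}\bigr)\,\text{s} \approx 5\,\text{ms}$$

Note that the $d\xi$ term dominates: it is unpredictable delay, more than drift, that keeps these clocks apart.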
In equation (3), we found the maximum $\varepsilon$ that preserves causality. We can combine (3) and (20) to investigate this further:

$$\frac{d\bigl(2\kappa(\tau + \nu) + \xi\bigr)}{1 - \kappa} \leq \mu \tag{21}$$
All of these variables, except $\tau$, describe physical properties of the system. $\tau$, the interval at which messages are sent, is a property of the software. So, we rearrange the equation in terms of $\tau$:

$$\tau \leq \frac{1}{2\kappa}\left(\frac{(1 - \kappa)\mu}{d} - \xi\right) - \nu \tag{22}$$
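As a sketch, this bound is easy to compute for a concrete system (the function and its parameter values are mine, for illustration):

```python
# Sketch: the largest message interval tau that still preserves causality,
# per the rearranged inequality above. Illustrative, not from the paper.

def max_tau(kappa: float, mu: float, d: int, xi: float, nu: float) -> float:
    """kappa: max clock drift rate; mu: minimum transmission time;
    d: graph diameter; xi: max unpredictable delay; nu: max total delay."""
    return ((1 - kappa) * mu / d - xi) / (2 * kappa) - nu

# Made-up numbers: mu = 1 ms, d = 5, xi = 1 microsecond, nu = 2 ms.
print(max_tau(kappa=1e-6, mu=1e-3, d=5, xi=1e-6, nu=2e-3))  # ~99.5 seconds
```

With tighter clocks (smaller $\kappa$), messages can be sent less often; with a larger diameter or more unpredictable links, they must be sent more often.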
Selecting a value for $\tau$ which satisfies this inequality would guarantee an ordering of events such that, if $a$ caused $b$, then $a \Rightarrow b$.
Thanks to M for reviewing.