Notes
2025/09/27

RC week 18: a sketch for a time loop calculus

This is a continuation of the notes from two weeks ago and last week about paradoxical solutions for recursive loops that show similarities to the halting problem.

Briefly, the idea is to extend a minimal programming language with values that are “oscillating” so that they conceptually represent multiple values at the same time. This is useful for resolving fixed points of functions, allowing us to find a value v that equals f(v) for the following function f:

def f(v):
    if v == 'x':
        'y'
    else:
        'x'

Notice that f shows a similar structure to the liar's paradox: If the value is 'x', the function returns 'y', but if the value is 'y', the function returns 'x', which makes it impossible to come up with a (terminating) value v that equals f(v).
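To make the missing fixed point concrete, here is a plain Python translation of f (just an illustration, not part of the calculus): iterating it from any starting value oscillates between 'x' and 'y' and never settles on a value v with v == f(v).

def f(v):
    return 'y' if v == 'x' else 'x'

v = 'x'
for _ in range(4):
    print(v, '->', f(v))   # prints: x -> y, y -> x, x -> y, y -> x
    v = f(v)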

Here's a sketch for a minimal calculus that supports paradoxical values in the form of sequences of values that loop back to the original value.

Time loops

The main idea is simple: If a function such as f above turns 'x' into 'y' and then 'y' back into 'x', why not represent a paradoxical fixed point of f as the sequence of values it steps through before looping back to the start?

Let's write such a value as ['x'; 'y'], which stands for “first 'x', then 'y'” (then back to the start). Such a sequence is called a “time loop sequence” (or just “time loop” for short). Since such a sequence is a loop, it represents the sequence 'x', 'y', 'x', 'y', 'x', 'y', and so forth. But even though it conceptually “extends” forever, the starting point matters and ['x'; 'y'] is not the same value as ['y'; 'x'].

Time loops extend program execution with a notion of discrete time steps: The time loop ['x'; 'y'] can be thought of as a value that is 'x' at step 0, 'y' at step 1, 'x' at step 2, 'y' at step 3, and so forth. The value “oscillates” without any need (or possibility) to iterate manually through the sequence.

Such a time loop is not a list: It is not possible to inspect the length of a time loop, nor is it possible to append a value to it or use other list-like operations. From the perspective of the language, a time loop is just a single value, which allows us to use it in a comparison such as ['x'; 'y'] == 'x'.
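As a rough mental model (outside the calculus, where a time loop stays opaque), one could picture a time loop as a value that merely knows what it is at each step. The Python sketch below is purely illustrative; the class name Loop and the helper at are made up for this post:

from dataclasses import dataclass

@dataclass(frozen=True)
class Loop:
    steps: tuple   # Loop(('x', 'y')) stands for ['x'; 'y']

    def at(self, t):
        # the value at time step t; the sequence wraps around
        return self.steps[t % len(self.steps)]

xy = Loop(('x', 'y'))
xy.at(0), xy.at(1), xy.at(2)   # -> ('x', 'y', 'x')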

Comparing time loops

How can a time loop such as ['x'; 'y'] be compared to an atomic value such as 'x'? Since ['x'; 'y'] represents a value that is 'x' at step 0, 'y' at step 1, and so forth, we can think of atomic values like 'x' as a sort of time loop that always has the same value: 'x' at step 0, 'x' at step 1, and so forth. Now we can evaluate ['x'; 'y'] == 'x' by matching up these values, which will give us true at step 0, false at step 1, true at step 2, false at step 3, and so forth, which we can represent as [true; false].
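In the illustrative tuple model from above, this comparison could be implemented by lifting atoms to constant loops and matching the two sequences step by step over a common period. This is only a sketch of one possible interpreter, not a specification of the calculus:

from math import lcm

def as_steps(v):
    # an atom behaves like a loop with the same value at every step
    return v if isinstance(v, tuple) else (v,)

def loop_eq(a, b):
    a, b = as_steps(a), as_steps(b)
    n = lcm(len(a), len(b))
    return tuple(a[t % len(a)] == b[t % len(b)] for t in range(n))

loop_eq(('x', 'y'), 'x')   # -> (True, False), i.e. [true; false]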

Using such a time-looped boolean value in a conditional follows the same approach. Let's use Church encoding, allowing us to define true as the function (x, y) => x and false as the function (x, y) => y. The if-else construct is then just sugar, and if-else(p, t, f) stands for p(t, f). To evaluate if [true; false]: 'y' else: 'x' we thus need to evaluate [true; false]('y', 'x').
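For reference, the Church-encoded booleans and the desugared conditional look like this in plain Python (the arrow functions from the text written as lambdas):

true  = lambda x, y: x   # (x, y) => x
false = lambda x, y: y   # (x, y) => y

# if-else(p, t, f) is just p(t, f):
true('y', 'x')    # -> 'y'
false('y', 'x')   # -> 'x'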

We can now again view atomic values conceptually as time loops with the same value at each step and perform function application by “zipping” together the time-looped function with its arguments, giving us the following:

if [true; false]:
    'y'
else:
    'x'
# -->
[true; false]('y', 'x')
# -->
[true; false](['y'; 'y'], ['x'; 'x'])
# -->
[true('y', 'x'); false('y', 'x')]
# -->
['y'; 'x']

We have now seen how we can go from f(['x'; 'y']) to ['y'; 'x']. This is almost what we want: It is the value we started with, except shifted by one position in the time loop. This makes sense, because applying the function “took a step”, so we end up with the original time loop shifted by one step.
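The same zipped application can be played through in the tuple model. Here apply_loop is a hypothetical helper that assumes its arguments are either atoms or loops with the same period as the function loop; atoms are lifted to constant loops before the looped function is applied position by position:

true  = lambda x, y: x
false = lambda x, y: y

def apply_loop(fs, *args):
    # lift atomic arguments to constant loops of the same length as fs,
    # then apply the looped function step by step
    lifted = [a if isinstance(a, tuple) else (a,) * len(fs) for a in args]
    return tuple(f(*(a[t] for a in lifted)) for t, f in enumerate(fs))

apply_loop((true, false), 'y', 'x')   # -> ('y', 'x'), i.e. ['y'; 'x']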

For now, we will leave aside the question of when and how time loops get shifted back (so that we get true fixed points) and instead first look at how time loops behave in more interesting examples.

Lifting nested loops

What happens if we use a time loop as part of f?

def f(v):
    if v == ['x'; 'y']:
        'y'
    else:
        'x'

The time loop ['x'; 'y'] cannot be a fixed point of f, because the comparison would evaluate to [true; true], which is equivalent to [true], so f(['x'; 'y']) would return ['y']. We need something stronger than just a single time loop, something that can express that f maps ['x'; 'y'] to ['y'] and ['y'] back to ['x'; 'y'], thus closing the loop. In other words, we need a nested time loop: [['x'; 'y']; ['y']].

To be able to “package together” time loops as fixed points, we need to ensure that a comparison such as [['x'; 'y']; ['z']] == ['a'; 'b'] compares ['x'; 'y'] with ['a'; 'b'], then ['z'] with ['a'; 'b'], not ['x'; 'y'] with 'a', then ['z'] with 'b'. In other words, we need to ensure that the ['a'; 'b'] on the right is lifted into the nested time loop on the left instead of zipping the two sides together immediately.

To achieve this, let's add a notion of depth to time loops. An atomic value like 'x' has a depth of 0, a time loop like ['x'; 'y'] has a depth of 1, a time loop like [['x']; ['x'; 'y']] has a depth of 2, etc. The depth of a time loop, depth([s; t]), is defined as max(depth(s), depth(t)) + 1 for arbitrary s and t.

Two time loops are only zipped together if they have the same depth. Otherwise, each element of the time loop with the greater depth is compared to the full time loop with the smaller depth.

[['x'; 'y']; ['y']] == ['x'; 'y']
# -->
[['x'; 'y'] == ['x'; 'y']; ['y'] == ['x'; 'y']]
# -->
[[true; true]; [false; true]]
# -->
[[true]; [false; true]]

This ensures that reasoning about the fixed points of a function using a case-by-case analysis remains compositional, because the different cases can then be bundled together in a nested time loop, without changing how individual elements behave as arguments to the function.
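Extending the earlier comparison sketch with this depth rule might look roughly as follows in the tuple model (again only an illustration: depth is computed structurally, and mismatched depths are handled by recursing into the deeper loop):

from math import lcm

def depth(v):
    # depth('x') == 0, depth(('x', 'y')) == 1, depth((('x',), ('x', 'y'))) == 2
    return 1 + max(depth(s) for s in v) if isinstance(v, tuple) else 0

def loop_eq(a, b):
    da, db = depth(a), depth(b)
    if da == 0 and db == 0:
        return a == b
    if da == db:
        # equal depth: zip the two loops over a common period
        n = lcm(len(a), len(b))
        return tuple(loop_eq(a[t % len(a)], b[t % len(b)]) for t in range(n))
    # unequal depth: compare each element of the deeper loop
    # against the full shallower loop
    deep, shallow = (a, b) if da > db else (b, a)
    return tuple(loop_eq(s, shallow) for s in deep)

loop_eq((('x', 'y'), ('y',)), ('x', 'y'))
# -> ((True, True), (False, True)), i.e. [[true; true]; [false; true]]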

Combining time loops

Time loops also make it easy to combine different subexpressions that all depend on the fixed point of a function. Let's take the following function as an example:

def f(v):
    if v == 'x':
        'y'
    else:
        if v == 'x':
            'z'
        else:
            'x'

As always, the task is to find a fixed point v so that v equals f(v), but now the additional challenge is that we would like to be able to reason about both uses of v independently. The inner conditional is just a regular liar, so ['x'; 'z'] would be a solution for the inner part on its own. But the outer conditional would turn ['x'; 'z'] into ['x'; 'y'], so we need a solution that works for the outer and the inner conditional at the same time.

Luckily, the fact that all time loops follow the same conceptual clock makes this pretty easy. ['x'; 'y'] is a fixed point for f, because at time step 0 (in the 'x' case) only the then-branch of the outer conditional is relevant, while at time step 1 (in the 'y' case) only the else-branch of the inner conditional is relevant. In other words, even though the value ['x'; 'y'] is used in two places, the sequences are synchronized and the then-branch of the inner conditional is never visited. We can thus simplify the above function:

def f(v):
    if v == 'x':
        'y'
    else:
        'x'

The synchronization of time steps ensures that we can reason about fixed points compositionally and that most standard ways of simplifying conditionals according to non-paradoxical boolean logic still apply.
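To see the synchronization at work in the tuple model, we can evaluate the body of the original two-conditional f once per time step. Because every occurrence of v is sampled at the same step, the inner then-branch ('z') is never reached; this per-step reading is just an illustration of the shared clock, not the actual evaluation rules:

def f_step(v_t):
    # the body of f, evaluated for the value of v at a single time step
    if v_t == 'x':
        return 'y'
    else:
        return 'z' if v_t == 'x' else 'x'   # 'z' is unreachable at this step

v = ('x', 'y')   # the candidate fixed point ['x'; 'y']
tuple(f_step(v_t) for v_t in v)
# -> ('y', 'x'), i.e. ['y'; 'x'], the original loop shifted by one step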

Shifting time loops

We can now return to the question of when and how to shift time loops. None of the above solutions are technically fixed points, because applying the functions to the proposed time loops gives us shifted versions of their arguments: ['x'; 'y'] becomes ['y'; 'x'], ['x'; 'y'; 'z'] becomes ['y'; 'z'; 'x'], etc.

What is still missing is a way to shift time loops back after “time has passed”, that is, after a function has been fully applied. This is not as trivial as it sounds, because function application can be a multi-step process in the presence of currying, and the evaluation rules need to ensure that a time loop is shifted by a single position only after all functions have been fully applied.
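In the tuple model, shifting back by one position is at least simple to state, even if the question of when to apply it during evaluation remains open; the helper name shift_back is made up here:

def shift_back(loop):
    # rotate a time loop back by one step:
    # ('y', 'x') becomes ('x', 'y'), ('y', 'z', 'x') becomes ('x', 'y', 'z')
    return loop[-1:] + loop[:-1]

shift_back(('y', 'x'))        # -> ('x', 'y')
shift_back(('y', 'z', 'x'))   # -> ('x', 'y', 'z')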

How this is best done is still a work in progress. For now, the above sketch of the time loop calculus leads to fixed points that are almost, but not quite, right.