Derivation

L2 chain derivation — deriving L2 blocks from L1 data — is one of the main responsibilities of the rollup node, both in validator mode, and in sequencer mode (where derivation acts as a sanity check on sequencing, and enables detecting L1 chain re-organizations).

The L2 chain is derived from the L1 chain. In particular, each L1 block following L2 chain inception is mapped to a sequencing epoch comprising at least one L2 block. Each L2 block belongs to exactly one epoch, and we call the corresponding L1 block its L1 origin. The epoch's number equals that of its L1 origin block.

To derive the L2 blocks of epoch number E, we need the following inputs:

  • L1 blocks in the range [E, E + SWS), called the sequencing window of the epoch, where SWS is the sequencing window size. (Note that sequencing windows overlap.)

  • Batcher transactions from blocks in the sequencing window.

  • These transactions allow us to reconstruct the epoch's sequencer batches, each of which will produce one L2 block. Note that:

    • The L1 origin will never contain any data needed to construct the epoch's sequencer batches, since each batch must contain the L1 origin hash, which cannot be known before the L1 origin block is produced.

    • An epoch may have no sequencer batches.

  • Deposits made in the L1 origin (in the form of events emitted by the deposit contract).

  • L1 block attributes from the L1 origin (to derive the L1 attributes deposited transaction).

  • The state of the L2 chain after the last L2 block of the previous epoch, or the L2 genesis state if E is the first epoch.

To derive the whole L2 chain from scratch, we start with the L2 genesis state and the L2 genesis block as the first L2 block. We then derive L2 blocks from each epoch in order, starting at the first L1 block following L2 chain inception. Refer to the Architecture section for more information on how we implement this in practice. The L2 chain may contain pre-Bedrock history, but the L2 genesis here refers to the Bedrock L2 genesis block.
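As an illustration, the sketch below (in Go, using simplified, hypothetical types and helpers that are not part of the actual rollup node) gathers the inputs listed above for an epoch E:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the inputs listed above; the actual
// rollup node uses richer types. For illustration only.
type L1Block struct {
	Number    uint64
	Hash      [32]byte
	Timestamp uint64
}

type EpochInputs struct {
	SequencingWindow []L1Block // L1 blocks in [E, E+SWS); window[0] is the L1 origin
	BatcherTxData    [][]byte  // data of batcher transactions found in the window
	DepositEvents    [][]byte  // deposit events emitted by the deposit contract in the L1 origin
	L1Origin         L1Block   // attributes for the L1 attributes deposited transaction
	// ...plus the L2 state after the last block of epoch E-1 (or the genesis state).
}

// sequencingWindow returns the L1 block numbers making up the window of epoch e.
func sequencingWindow(e, sws uint64) []uint64 {
	nums := make([]uint64, 0, sws)
	for n := e; n < e+sws; n++ {
		nums = append(nums, n)
	}
	return nums
}

func main() {
	// With a (deliberately tiny, illustrative) window size of 5, epoch 100 is
	// derived from L1 blocks 100..104, and its L1 origin is block 100.
	fmt.Println(sequencingWindow(100, 5)) // [100 101 102 103 104]
}
```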

Each L2 block with origin l1_origin is subject to the following constraints (whose values are denominated in seconds):

  • block.timestamp = prev_l2_timestamp + l2_block_time

    • prev_l2_timestamp is the timestamp of the L2 block immediately preceding this one. If there is no preceding block, then this is the genesis block, and its timestamp is explicitly specified.

    • l2_block_time is a configurable parameter specifying the time between L2 blocks (2 seconds on Optimism).

  • l1_origin.timestamp <= block.timestamp <= max_l2_timestamp, where

    • max_l2_timestamp = max(l1_origin.timestamp + max_sequencer_drift, prev_l2_timestamp + l2_block_time)

    • max_sequencer_drift is a configurable parameter that bounds how far the sequencer can get ahead of the L1.

Finally, each epoch must have at least one L2 block.

The first constraint means there must be an L2 block every l2_block_time seconds following L2 chain inception.

The second constraint ensures that an L2 block timestamp never precedes its L1 origin timestamp, and is never more than max_sequencer_drift ahead of it, except in the unusual case where enforcing that bound would prohibit an L2 block from being produced every l2_block_time seconds. (Such cases might arise, for example, under a proof-of-work L1 that sees a period of rapid L1 block production.) In such cases, the sequencer enforces len(batch.transactions) == 0 while max_sequencer_drift is exceeded. See Batch Queue for more details.

The final requirement that each epoch must have at least one L2 block ensures that all relevant information from the L1 (e.g. deposits) is represented in the L2, even if it has no sequencer batches.
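The following sketch illustrates the two timestamp constraints; the helper function and the parameter values in the example are hypothetical and only meant to mirror the rules above:

```go
package main

import "fmt"

// validL2Timestamp mirrors the constraints above. It is a hypothetical helper,
// not part of the rollup node; all timestamps and durations are in seconds.
func validL2Timestamp(blockTS, prevL2TS, l1OriginTS, l2BlockTime, maxSeqDrift uint64) bool {
	// First constraint: block.timestamp = prev_l2_timestamp + l2_block_time.
	if blockTS != prevL2TS+l2BlockTime {
		return false
	}
	// Second constraint: l1_origin.timestamp <= block.timestamp <= max_l2_timestamp,
	// where max_l2_timestamp lets the l2_block_time cadence win over the
	// sequencer drift bound when the two would otherwise conflict.
	maxL2TS := maxU64(l1OriginTS+maxSeqDrift, prevL2TS+l2BlockTime)
	return blockTS >= l1OriginTS && blockTS <= maxL2TS
}

func maxU64(a, b uint64) uint64 {
	if a > b {
		return a
	}
	return b
}

func main() {
	// Illustrative values: 2s L2 block time, 600s max sequencer drift.
	fmt.Println(validL2Timestamp(1002, 1000, 990, 2, 600))  // true: within drift
	fmt.Println(validL2Timestamp(1002, 1000, 1500, 2, 600)) // false: precedes L1 origin
}
```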

Post-merge, Ethereum has a fixed 12s block time, though some slots can be skipped. Under a 2s L2 block time, we thus expect each epoch to typically contain 12/2 = 6 L2 blocks. The sequencer will however produce bigger epochs in order to maintain liveness in case of either a skipped slot on the L1 or a temporary loss of connection to it. In the lost-connection case, smaller epochs may be produced after the connection is restored, to keep L2 timestamps from drifting further and further ahead.

Eager Block Derivation

Deriving an L2 block requires that we have constructed its sequencer batch and derived all L2 blocks and state updates prior to it. This means we can typically derive the L2 blocks of an epoch eagerly without waiting on the full sequencing window. The full sequencing window is required before derivation only in the very worst case where some portion of the sequencer batch for the first block of the epoch appears in the very last L1 block of the window. Note that this only applies to block derivation. Sequencer batches can still be derived and tentatively queued without deriving blocks from them.
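A minimal sketch of this idea, assuming a simple queue of decoded batches keyed by L2 block number (the function and data layout are illustrative, not the rollup node's actual pipeline):

```go
package main

import "fmt"

// Hypothetical sketch of eager derivation: block N can be derived as soon as
// its sequencer batch is available and block N-1 has been derived, without
// waiting for the rest of the sequencing window.
func deriveEagerly(lastDerived uint64, queuedBatches map[uint64]bool) []uint64 {
	var derived []uint64
	for queuedBatches[lastDerived+1] {
		lastDerived++ // parent is derived and the batch is queued: derive now
		derived = append(derived, lastDerived)
	}
	return derived
}

func main() {
	// Batches for blocks 101, 102 and 104 have been decoded and queued; 104
	// cannot be derived yet because the batch for 103 is still missing.
	queued := map[uint64]bool{101: true, 102: true, 104: true}
	fmt.Println(deriveEagerly(100, queued)) // [101 102]
}
```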

Protocol Parameters

The following table gives an overview of some protocol parameters, and how they are affected by protocol upgrades.

Batch Submission

Sequencing & Batch Submission Overview

The sequencer accepts L2 transactions from users. It is responsible for building blocks out of these. For each such block, it also creates a corresponding sequencer batch. It is also responsible for submitting each batch to a data availability provider (e.g. Ethereum calldata), which it does via its batcher component.

The difference between an L2 block and a batch is subtle but important: the block includes an L2 state root, whereas the batch only commits to transactions at a given L2 timestamp (equivalently: L2 block number). A block also includes a reference to the previous block (*).

(*) This matters in edge cases where an L1 reorg occurs and a batch is re-posted to the L1 chain but its preceding batch is not, whereas the predecessor of an L2 block can never change.

This means that even if the sequencer applies a state transition incorrectly, the transactions in the batch will still be considered part of the canonical L2 chain. Batches are still subject to validity checks (i.e. they have to be encoded correctly), and so are individual transactions within the batch (e.g. signatures have to be valid). Invalid batches and invalid individual transactions within an otherwise valid batch are discarded by correct nodes.

If the sequencer applies a state transition incorrectly and posts an output root, then this output root will be incorrect. The incorrect output root will be challenged by a fault proof, then replaced by a correct output root for the existing sequencer batches.
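To make the distinction concrete, here is a rough sketch of the two structures; the field names and shapes are illustrative only and do not match the canonical encodings, which are defined in the Batch Submission specification:

```go
package main

// A sequencer batch commits only to an ordered transaction list at a given L2
// timestamp (equivalently, L2 block number) within an epoch.
type SequencerBatch struct {
	EpochNumber  uint64   // number of the L1 origin block
	EpochHash    [32]byte // hash of the L1 origin block
	Timestamp    uint64   // L2 timestamp of the block this batch produces
	Transactions [][]byte // opaque, signed L2 transactions
}

// An L2 block additionally commits to the post-state and to its parent block,
// which is why an incorrect state transition cannot change which transactions
// are canonical, only which output root is correct.
type L2Block struct {
	ParentHash   [32]byte // reference to the previous L2 block
	StateRoot    [32]byte // L2 state root after applying the block
	Timestamp    uint64
	Transactions [][]byte
}

func main() {}
```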

Refer to the Batch Submission specification for more information.

Batch Submission Wire Format

Batch submission is closely tied to L2 chain derivation because the derivation process must decode the batches that have been encoded for the purpose of batch submission.

The batcher submits batcher transactions to a data availability provider. These transactions contain one or multiple channel frames, which are chunks of data belonging to a channel.

A channel is a sequence of sequencer batches (for any L2 blocks) compressed together. The reason to group multiple batches together is simply to obtain a better compression rate, hence reducing data availability costs.

Channels might be too large to fit in a single batcher transaction, hence we need to split them into chunks known as channel frames. A single batcher transaction can also carry multiple frames (belonging to the same or to different channels).

This design gives us maximum flexibility in how we aggregate batches into channels and split channels across batcher transactions. It notably allows us to maximize data utilization in a batcher transaction: for instance, it allows us to pack the final (small) frame of one channel with one or more frames from the next channel.

Also note that we use a streaming compression scheme, and we do not need to know how many batches a channel will end up containing when we start a channel, or even as we send the first frames in the channel.

And by splitting channels across multiple data transactions, the L2 can have larger block data than the data-availability layer may support.
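The following sketch illustrates this layering under simplified assumptions; the helper names are hypothetical, and the real frame encoding (channel ID, frame number, length prefix, is-last flag) is defined later in the wire format specification:

```go
package main

import (
	"bytes"
	"compress/zlib"
	"fmt"
)

// Hypothetical, simplified sketch of the batching pipeline: batches are
// appended to a streaming zlib compressor to form a channel, then the channel
// bytes are split into frames small enough to post in batcher transactions.
type Frame struct {
	ChannelID   [16]byte
	FrameNumber uint16
	Data        []byte
	IsLast      bool
}

func buildChannel(batches [][]byte) []byte {
	var buf bytes.Buffer
	w := zlib.NewWriter(&buf) // streaming: batches can be added as they are built
	for _, b := range batches {
		w.Write(b)
	}
	w.Close()
	return buf.Bytes()
}

func splitIntoFrames(id [16]byte, channel []byte, maxFrameSize int) []Frame {
	var frames []Frame
	for i := 0; len(channel) > 0; i++ {
		n := minInt(maxFrameSize, len(channel))
		frames = append(frames, Frame{
			ChannelID:   id,
			FrameNumber: uint16(i),
			Data:        channel[:n],
			IsLast:      n == len(channel),
		})
		channel = channel[n:]
	}
	return frames
}

func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	channel := buildChannel([][]byte{[]byte("batch-1"), []byte("batch-2")})
	frames := splitIntoFrames([16]byte{0x01}, channel, 8)
	fmt.Println("frames:", len(frames), "last frame size:", len(frames[len(frames)-1].Data))
}
```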

All of this is illustrated in the following diagram. Explanations below.
