Rollup Specification
Overview
The rollup node is the component responsible for deriving the L2 chain from L1 blocks (and their associated receipts).
The part of the rollup node that derives the L2 chain is called the rollup driver. This document is currently only concerned with the specification of the rollup driver.
Driver
The task of the driver in the rollup node is to manage the derivation process:
Keep track of L1 head block
Keep track of the L2 chain sync progress
Iterate over the derivation steps as new inputs become available
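As a sketch, the driver can be viewed as a small state machine that reacts to L1 head changes and steps the derivation pipeline when new inputs are available. The type and method names below are illustrative, not part of the spec:

```go
package main

import "fmt"

// driverState tracks the two pieces of progress the driver manages:
// the latest known L1 head and how far L2 derivation has advanced.
type driverState struct {
	l1Head  uint64 // number of the tracked L1 head block
	l2Steps int    // derivation steps executed so far
}

// onL1Head records a new L1 head, making new derivation inputs available.
func (d *driverState) onL1Head(number uint64) { d.l1Head = number }

// step runs one derivation iteration if new inputs are available.
func (d *driverState) step() {
	if d.l1Head > uint64(d.l2Steps) { // new inputs to process
		d.l2Steps++
	}
}

func main() {
	var d driverState
	d.onL1Head(2)
	d.step()
	d.step()
	d.step() // no-op: no new inputs beyond the tracked head
	fmt.Println(d.l1Head, d.l2Steps) // 2 2
}
```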
Derivation
This process happens in three steps:
Select inputs from the L1 chain, on top of the last L2 block: a list of blocks, with transactions and associated data and receipts.
Read L1 information, deposits, and sequencing batches in order to generate payload attributes (essentially a block without output properties).
Pass the payload attributes to the execution engine, so that the L2 block (including output block properties) may be computed.
While this process is conceptually a pure function from the L1 chain to the L2 chain, it is in practice incremental. The L2 chain is extended whenever new L1 blocks are added to the L1 chain. Similarly, the L2 chain re-organizes whenever the L1 chain re-organizes.
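The three steps above can be sketched as a pipeline. The types and functions below are illustrative stand-ins (the real node reads deposits, batches, and receipts, and the engine computes full blocks), not the actual derivation interfaces:

```go
package main

import "fmt"

// Illustrative stand-ins for the derivation inputs and outputs.
type l1Block struct{ Number uint64 }
type attrs struct{ L1Origin uint64 }
type l2Block struct{ L1Origin uint64 }

// Step 1: select L1 inputs on top of the last derived L2 block.
func selectInputs(chain []l1Block, lastOrigin uint64) []l1Block {
	var out []l1Block
	for _, b := range chain {
		if b.Number > lastOrigin {
			out = append(out, b)
		}
	}
	return out
}

// Step 2: turn L1 information into payload attributes.
func toAttributes(b l1Block) attrs { return attrs{L1Origin: b.Number} }

// Step 3: the execution engine computes the full L2 block.
func execute(a attrs) l2Block { return l2Block{L1Origin: a.L1Origin} }

func main() {
	chain := []l1Block{{1}, {2}, {3}}
	var l2 []l2Block
	// Incremental: only blocks past the already-derived origin are processed.
	for _, in := range selectInputs(chain, 1) {
		l2 = append(l2, execute(toAttributes(in)))
	}
	fmt.Println(len(l2)) // 2
}
```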
L2 Output RPC method
The rollup node has its own RPC method, optimism_outputAtBlock, which returns a 32-byte hash corresponding to the L2 output root.
Structures
These define the types used by rollup node API methods. The types defined here are extended from the engine API specs.
BlockID
hash: DATA, 32 Bytes
number: QUANTITY, 64 Bits
L1BlockRef
hash: DATA, 32 Bytes
number: QUANTITY, 64 Bits
parentHash: DATA, 32 Bytes
timestamp: QUANTITY, 64 Bits
L2BlockRef
hash: DATA, 32 Bytes
number: QUANTITY, 64 Bits
parentHash: DATA, 32 Bytes
timestamp: QUANTITY, 64 Bits
l1origin: BlockID
sequenceNumber: QUANTITY, 64 Bits - distance to first block of epoch
SyncStatus
Represents a snapshot of the rollup driver.
current_l1: Object - instance of L1BlockRef.
current_l1_finalized: Object - instance of L1BlockRef.
head_l1: Object - instance of L1BlockRef.
safe_l1: Object - instance of L1BlockRef.
finalized_l1: Object - instance of L1BlockRef.
unsafe_l2: Object - instance of L2BlockRef.
safe_l2: Object - instance of L2BlockRef.
finalized_l2: Object - instance of L2BlockRef.
pending_safe_l2: Object - instance of L2BlockRef.
queued_unsafe_l2: Object - instance of L2BlockRef.
Output Method API
The input and return types here are as defined by the engine API specs.
method: optimism_outputAtBlock
params: blockNumber: QUANTITY, 64 bits - L2 integer block number.
returns: version: DATA, 32 Bytes - the output root version number, beginning with 0.
outputRoot: DATA, 32 Bytes - the output root.
blockRef: Object - instance of L2BlockRef.
withdrawalStorageRoot: DATA, 32 Bytes - storage root of the L2ToL1MessagePasser contract.
stateRoot: DATA, 32 Bytes - the state root.
syncStatus: Object - instance of SyncStatus.
Batch Submitter
Overview
The batch submitter, also referred to as the batcher, is the entity submitting the L2 sequencer data to L1, to make it available for verifiers.
The format of the data transactions is defined by the derivation spec: the data is constructed from L2 blocks by inverting the process that derives L2 blocks from data.
The timing, operation and transaction signing is implementation-specific: any data can be submitted at any time, but only the data that matches the derivation rules will be valid from the verifier perspective.
The most minimal batcher implementation can be defined as a loop of the following operations:
Check whether the unsafe L2 block number is past the safe block number: if so, unsafe data needs to be submitted.
Iterate over all unsafe L2 blocks, skipping any that were previously submitted.
Open a channel, and buffer all the L2 block data to be submitted, applying the encoding and compression defined in the derivation spec.
Pull frames from the channel to fill data transactions with, until the channel is empty.
Submit the data transactions to L1.
The L2 view of safe/unsafe does not instantly update after data is submitted, nor when it gets confirmed on L1, so special care may have to be taken to not duplicate data submissions.
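The minimal batcher loop can be sketched as follows. The channel abstraction is simplified: the frame size is shrunk for illustration, and encoding, compression, signing, and the actual L1 submission are elided:

```go
package main

import "fmt"

// block is a stand-in for an L2 block's batch data.
type block struct{ data []byte }

const maxFrameSize = 4 // tiny for illustration; real frames are far larger

// batchLoop buffers unsafe blocks past the safe head into a channel,
// then pulls fixed-size frames until the channel is empty. Each frame
// would become the payload of one L1 data transaction.
func batchLoop(unsafe []block, submitted int) [][]byte {
	var channel []byte
	for _, b := range unsafe[submitted:] { // skip previously submitted blocks
		channel = append(channel, b.data...) // encode + compress in reality
	}
	var frames [][]byte
	for len(channel) > 0 { // pull frames until the channel is empty
		n := maxFrameSize
		if len(channel) < n {
			n = len(channel)
		}
		frames = append(frames, channel[:n])
		channel = channel[n:]
	}
	return frames
}

func main() {
	frames := batchLoop([]block{{[]byte("aa")}, {[]byte("bbbb")}}, 0)
	fmt.Println(len(frames)) // 2: 6 bytes split into frames of 4 and 2
}
```

Because the safe/unsafe view lags submission, a real batcher also has to remember what it already sent (the `submitted` index above) rather than trusting the sync status alone.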
Precompiles
Overview
Precompiled contracts exist on Glide chains at predefined addresses. They are similar to predeploys but are implemented as native code in the EVM rather than as bytecode. Precompiles are used for computationally expensive operations that would be cost-prohibitive to implement in Solidity. Where possible, predeploys are preferred, as precompiles must be implemented in every execution client.
Glide chains contain the standard Ethereum precompiles as well as a small number of additional precompiles, listed in the following table. The Introduced column indicates the network upgrade in which each precompile was added.
| Name | Address | Introduced |
|---|---|---|
| P256VERIFY | 0x0000000000000000000000000000000000000100 | Fjord |
P256VERIFY
The P256VERIFY precompile performs signature verification for the secp256r1 elliptic curve. This curve has widespread adoption: it is used by Passkeys, the Apple Secure Enclave, and many other systems.
It is specified as part of RIP-7212 and was added to the OP Stack protocol in the Fjord release. An implementation is available in op-geth.
Address: 0x0000000000000000000000000000000000000100
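The check the precompile performs can be reproduced off-chain with Go's standard library. The helper name below is illustrative; only the verification step mirrors what P256VERIFY computes on-chain:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
)

// signAndVerify signs a message over the P-256 (secp256r1) curve and then
// runs the same check P256VERIFY performs: given a message hash, a
// signature (r, s), and a public key (x, y), verify the signature.
func signAndVerify(msg []byte) (bool, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return false, err
	}
	digest := sha256.Sum256(msg)
	r, s, err := ecdsa.Sign(rand.Reader, key, digest[:])
	if err != nil {
		return false, err
	}
	// On-chain, the precompile input is hash || r || s || x || y and the
	// output indicates whether the signature is valid.
	return ecdsa.Verify(&key.PublicKey, digest[:], r, s), nil
}

func main() {
	ok, err := signAndVerify([]byte("example message"))
	if err != nil {
		panic(err)
	}
	fmt.Println(ok) // true
}
```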