
SLATE

Supra-Laplacian Encoding for Transformer on Dynamic Graphs

Abstract

Fully connected Graph Transformers (GT) have rapidly become prominent in the static graph community as an alternative to Message-Passing models, which suffer from a lack of expressivity, oversquashing, and under-reaching. However, in a dynamic context, by interconnecting all nodes at multiple snapshots with self-attention, GTs lose both structural and temporal information. In this work, we introduce Supra-LAplacian encoding for spatio-temporal TransformErs (SLATE), a new spatio-temporal encoding to leverage the GT architecture while keeping spatio-temporal information. Specifically, we transform Discrete Time Dynamic Graphs into multi-layer graphs and take advantage of the spectral properties of their associated supra-Laplacian matrix. Our second contribution explicitly models nodes' pairwise relationships with a cross-attention mechanism, providing an accurate edge representation for dynamic link prediction. SLATE outperforms numerous state-of-the-art methods based on Message-Passing Graph Neural Networks combined with recurrent models (e.g., LSTM), and Dynamic Graph Transformers, on 9 datasets. Code is open-source and available at https://github.com/ykrmm/SLATE.

Introduction

Dynamic graphs are crucial for modeling interactions between entities in various fields, from social sciences to computational biology (Ying et al., 2018; He et al., 2020; Jumper et al., 2021; Kaba & Ravanbakhsh, 2022). Link prediction on dynamic graphs is an all-important task, with diverse applications, such as predicting user actions in recommender systems, forecasting financial transactions, or identifying potential academic collaborations. Dynamic graphs can be modeled as a time series of static graphs captured at regular intervals (Discrete Time Dynamic Graphs, DTDG) (Skarding et al., 2021; Yang et al., 2024).

Standard approaches for learning representations on DTDGs combine Message-Passing GNNs (MP-GNNs) with temporal RNN-based models (You et al., 2022; Sankar et al., 2020; Pareja et al., 2020). In static contexts, Graph Transformers (GT) (Dwivedi & Bresson, 2020; Wu et al., 2024; Kreuzer et al., 2021) offer a compelling alternative to MP-GNNs, which face several limitations (Xu et al., 2019; Topping et al., 2022). Indeed, their fully-connected attention mechanism captures long-range dependencies, resolving issues such as oversquashing (Alon & Yahav, 2021). GTs directly connect nodes, using the graph structure as a soft bias through positional encoding (Rampášek et al., 2022). Incorporating Laplacian-based encodings in GTs provably enhances their expressiveness compared to MP-GNNs (Kreuzer et al., 2021; Dwivedi & Bresson, 2020).

Exploiting GTs on dynamic graphs would require a spatio-temporal encoding that effectively retains both structural and temporal information. The recent works that have extended GTs to dynamic graphs capture spatio-temporal dependencies between nodes by using partial attention mechanisms (Liu et al., 2021; Yang et al., 2022; Wang et al., 2021; Hu et al., 2023). Moreover, these methods also employ encodings which independently embed the graph structure and the temporal dimension. Given that the expressiveness of GTs depends on an accurate spatio-temporal encoding, designing one that interweaves time and position information could greatly enhance their potential and performance.

The vast majority of neural-based methods for dynamic link prediction rely on node representation learning (Pareja et al., 2020; Yang et al., 2021; Rossi et al., 2020; You et al., 2022; Sankar et al., 2020). Recent works enrich node embeddings with pairwise information for a given node-pair using co-occurrence neighbor matching (Yu et al., 2023; Wang et al., 2021) or cross-attention on historical sub-graphs (Wang et al., 2021). However, these methods neglect the global information of the graph by sampling different spatio-temporal substructures around targeted nodes.

Figure 1: SLATE is a fully connected transformer for dynamic link prediction, which innovatively performs a joint spatial and temporal encoding of the dynamic graph. SLATE models a DTDG as a multi-layer graph with temporal dependencies between a node and its past. Building the supra-adjacency matrix of a randomly-generated toy dynamic graph with 3 snapshots (left) and analysing the spectrum of its associated supra-Laplacian (right) provide fundamental spatio-temporal information. The projections on eigenvectors associated with smaller eigenvalues ($\lambda_1$) capture global graph dynamics: node colors are different for each time step. Larger eigenvalues (e.g., $\lambda_{\text{max}}$) capture more localized spatio-temporal information (see Appendix 7.1).

Pioneering work in the complex network community has studied temporal graphs with multi-layer models and supra-adjacency matrices (Valdano et al., 2015; Kivelä et al., 2014). The spectral analysis of such matrices can provide valuable structural and temporal information (Cozzo et al., 2016; Radicchi & Arenas, 2013). However, how to adapt this formalism for learning dynamic graphs with transformer architectures remains a widely open question.

In this work, we introduce Supra-LAplacian encoding for spatio-temporal TransformErs (SLATE), a new unified spatio-temporal encoding which allows us to fully exploit the potential of the GT architecture for the task of dynamic link prediction. As illustrated in Figure 1, adapting supra-Laplacian matrices to dynamic graphs can provide rich spatio-temporal information for positional encoding. SLATE is based on two main contributions: a unified spatio-temporal encoding derived from the supra-Laplacian of the multi-layer graph, and an edge representation module based on cross-attention dedicated to dynamic link prediction.

We conduct an extensive experimental validation of our method across 11 real and synthetic discrete-time dynamic graph datasets. SLATE outperforms state-of-the-art results by a large margin. We also validate the importance of our supra-Laplacian unified spatio-temporal encoding and of the edge module for optimal performance. Finally, SLATE remains efficient since it uses a single-layer transformer, and we show strong results on larger graph datasets, indicating good scalability and limited time-memory overhead.

Dynamic Graph Neural Networks on DTDGs.

The standard approach to learn on DTDGs (Skarding et al., 2021; Yang et al., 2024) involves using two separate spatial and temporal models. The spatial model is responsible for encoding the structure of the current graph snapshot, while the temporal model updates either the graph representations (Sankar et al., 2020; Seo et al., 2018; You et al., 2022; Li et al., 2019; Kuo et al., 2023) or the graph model parameters (Pareja et al., 2020; Hajiramezanali et al., 2019). Recently, ROLAND (You et al., 2022) introduced a generic framework to use any static graph model for spatial encoding coupled with a recurrent-based (LSTM (Hochreiter & Schmidhuber, 1997), RNN, GRU) or attention-based temporal model. These methods mainly use an MP-GNN as spatial model (Kipf & Welling, 2016; Veličković et al., 2018; Ying et al., 2018). However, MP-GNNs are known to present critical limitations: they struggle to distinguish simple structures like triangles or cycles (Morris et al., 2019; Chen et al., 2020), and fail to capture long-range dependencies due to oversquashing (Alon & Yahav, 2021; Topping et al., 2022). To overcome these limitations, some works have adopted a fully-connected GT as spatial model, benefiting from its global attention mechanism (Chu et al., 2023; Wei et al., 2022; Zheng et al., 2023). In Sankar et al., 2020, the local structure is preserved by computing the attention on direct neighbors. In contrast to these works, SLATE uses a single spatio-temporal graph transformer model, greatly simplifying the learning process.

Graph Transformers

In static contexts, Graph Transformers have been shown to provide a compelling alternative to MP-GNNs (Dwivedi & Bresson, 2020). GTs (Wu et al., 2024; Rampášek et al., 2022; Kim et al., 2022; Ying et al., 2021) enable direct connections between all nodes, using the graph's structure as a soft inductive bias, thus resolving the oversquashing issue. The expressiveness of GTs heavily depends on positional or structural encoding (Dwivedi & Bresson, 2020; Mialon et al., 2021; Dwivedi et al., 2022; Beaini et al., 2021). In Dwivedi & Bresson, 2020, the authors use the eigenvectors associated with the $k$ lowest eigenvalues of the Laplacian matrix, which allows GTs to distinguish structures that MP-GNNs are unable to differentiate. Following the success of Laplacian positional encoding on static graphs, SLATE uses the eigenvectors of the supra-Laplacian of a multi-layer graph representation of DTDGs as spatio-temporal encoding.

Dynamic Graph Transformers

To avoid separately modelling structural and temporal information, as dynamic Graph Neural Networks usually do on DTDGs, recent papers have adopted a unified model based on spatio-temporal attention (Liu et al., 2021; Hu et al., 2023). This novel approach makes those models close to transformer-based methods classically employed to learn on Continuous Time Dynamic Graphs (CTDG) (Xu et al., 2020; Wang et al., 2021; Wang et al., 2022). Among them, some preserve the local structure by computing attention only on direct neighbors (Xu et al., 2020; Sankar et al., 2020), while others sample local spatio-temporal structures around nodes (Wang et al., 2021; Liu et al., 2021; Yang et al., 2022; Hu et al., 2023) and perform fully-connected attention. However, their spatio-temporal encoding is still built by concatenating a spatial and a temporal encoding that are computed independently. The spatial encoding is either based on a graph-based distance (Wang et al., 2021; Hu et al., 2023) or on a diffusion-based measure (Liu et al., 2021). The temporal encoding is usually sinusoidal (Xu et al., 2020; Wang et al., 2022; Banerjee et al., 2022), as in the original transformer paper (Vaswani et al., 2017). Another drawback of these methods (Liu et al., 2021; Wang et al., 2021; Hu et al., 2023; Wang et al., 2022) is that they use only sub-graphs to represent the local structure around a given node. Therefore, their representations of the nodes are computed on different graphs and thus fail to capture global and long-range interactions. Contrary to those approaches, our SLATE model uses the same graph to compute node representations in a fully-connected GT between all nodes within temporal windows. It features a unified spatio-temporal encoding based on the supra-Laplacian matrix.

For dynamic link prediction, many methods are based only on node representations and use MLPs or cosine similarity to predict the existence of a link (Sankar et al., 2020; Seo et al., 2018; Rossi et al., 2020). Recent approaches complement node representations by incorporating pairwise information. Techniques like co-occurrence neighbor matching (Yu et al., 2023; Wang et al., 2021) or cross-attention on historical sub-graphs (Wang et al., 2021) are employed. However, these methods often overlook the global graph structure by focusing on sampled spatio-temporal substructures. For instance, CAW-N (Wang et al., 2021) uses anonymous random walks around a pair of nodes and matches their neighborhoods, while DyGFormer (Yu et al., 2023) applies transformers to one-hop neighborhoods and calculates co-occurrences. These localized approaches fail to capture the broader graph context. TCL (Wang et al., 2021) is the closest to SLATE, using cross-attention between spatio-temporal representations of node pairs. TCL samples historical sub-graphs using BFS and employs contrastive learning for node representation. However, it still relies on sub-graph sampling, missing the full extent of the global graph information. In contrast, SLATE leverages the entire graph's spectral properties through the supra-Laplacian, incorporating the global structure directly into the spatio-temporal encoding. This holistic approach allows SLATE to provide a richer understanding of dynamic interactions, leading to superior link prediction performance.

The SLATE Method

In this section, we describe our fully-connected dynamic graph transformer model, SLATE, for link prediction. The core idea in Section 3.1 is to adapt the supra-Laplacian matrix computation for dynamic graph transformers (DGTs), and to introduce our new spatio-temporal encoding based on its spectral analysis. In Section 3.2, we detail our full-attention transformer to capture the spatio-temporal dependencies between nodes at different time steps. Finally, we detail our edge representation module for dynamic link prediction in Section 3.3. Figure 2 illustrates the overall SLATE framework.

Figure 2: The SLATE model for link prediction with dynamic graph transformers (DGTs). To recover the lost spatio-temporal structure in DGTs, we adapt the supra-Laplacian matrix computation to DGTs by making the input graph provably connected (a), and use its spectral analysis to introduce a specific encoding for DGTs (b). (c) We apply a fully connected spatio-temporal transformer between all nodes at multiple time-steps. Finally, we design in (d) an edge representation module dedicated to link prediction using cross-attention on multiple temporal representations of the nodes.

Notations. Let us consider a DTDG $\mathcal{G}$ as an undirected graph with a fixed number of $N$ nodes across snapshots, represented by the set of adjacency matrices $\mathcal{A} = \{A_1, ..., A_T\}$. Its supra-graph, the multi-layer network $\bar{G}=(\bar{V},\bar{E})$, is associated with a supra-adjacency matrix $\bar{A}$, obtained by stacking the $A_i$ diagonally (see Eq. [eq:supradj] in Appendix 7.1). Then, the supra-Laplacian matrix $\bar{L}$ is defined as $\bar{L} = I - \bar{D}^{-1/2}\bar{A}\bar{D}^{-1/2}$, where $I$ is the identity matrix and $\bar{D}$ is the degree matrix of $\bar{G}$. Let $\mathbf{x}_u \in \mathbb{R}^{F}$ be the feature vector associated with node $u$ (which remains fixed across all snapshots). Finally, let us consider the random variable $y \in \{0,1\}$ such that $y=1$ if nodes $u$ and $v$ are connected and $y=0$ otherwise.

Supra-Laplacian as Spatio-Temporal Encoding

In this section, we cast Discrete Time Dynamic Graphs (DTDGs) as multi-layer networks, and use the spectral analysis of their supra-graph to generate a powerful spatio-temporal encoding for our fully-connected transformer.

DTDG as multi-layer graphs. If a graph is connected, its spectral analysis provides rich information on the global graph dynamics, as shown in Figure 1. The main challenge in casting a DTDG as a multi-layer graph relates to its disconnectivity, which induces as many zero eigenvalues as connected components. DTDGs have in practice a high proportion of isolated nodes per snapshot (see Figure 3 in the experiments), making the spectral analysis of the raw disconnected graph useless. Indeed, it mainly indicates positions relative to isolated nodes, losing valuable information on global dynamics and local spatio-temporal structures. We experimentally validate that it is mandatory to compute the supra-Laplacian matrix on a connected graph to recover a meaningful spatio-temporal structure.

Supra-Laplacian computation. To overcome this issue and make the supra-graph connected, we follow three steps: (1) remove isolated nodes in each adjacency matrix, (2) introduce a virtual node in each snapshot to connect clusters, and (3) add a temporal self-connection between a node and its past if it existed in the previous timestep. We avoid temporal dependencies between virtual nodes to prevent artificial connections. These 3 transformation steps make the resulting supra-graph provably connected. This process is illustrated in Figure 2(a), and we give the detailed algorithm in Appendix 7.3.
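To make the three steps concrete, here is a minimal sketch, not the released implementation: it assumes dense NumPy adjacency matrices per snapshot, builds the block-diagonal supra-adjacency with SciPy, and all helper names are ours.

```python
import numpy as np
import scipy.sparse as sp

def build_connected_supra_adjacency(adjacencies):
    """adjacencies: list of dense (N, N) NumPy adjacency matrices, one per snapshot."""
    blocks, active_masks = [], []
    for A in adjacencies:
        active = A.sum(axis=1) > 0                       # (1) drop isolated nodes
        A_act = A[np.ix_(active, active)]
        n = A_act.shape[0]
        A_vn = np.zeros((n + 1, n + 1))                  # (2) append one virtual node
        A_vn[:n, :n] = A_act
        A_vn[:n, n] = A_vn[n, :n] = 1.0                  # linked to all kept nodes
        blocks.append(A_vn)
        active_masks.append(np.flatnonzero(active))
    A_bar = sp.lil_matrix(sp.block_diag(blocks))
    # (3) temporal self-connection u_t <-> u_{t-1}, only for nodes active in both
    #     snapshots; virtual nodes stay snapshot-local (no temporal edges).
    offsets = np.cumsum([0] + [b.shape[0] for b in blocks[:-1]])
    for t in range(1, len(blocks)):
        prev = {u: i for i, u in enumerate(active_masks[t - 1])}
        for i, u in enumerate(active_masks[t]):
            if u in prev:
                a, b = offsets[t - 1] + prev[u], offsets[t] + i
                A_bar[a, b] = A_bar[b, a] = 1.0
    return A_bar.tocsr(), active_masks
```

The returned masks play the role described in Appendix 7.3: they map the original node indices of $\mathcal{G}$ to rows of $\bar{A}$ and identify which nodes receive the zero encoding.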

Spatio-temporal encoding. With a connected $\bar{\mathcal{G}}$, the second smallest eigenvalue $\lambda_1$ of the supra-Laplacian $\bar{L}$ is guaranteed to be strictly positive (see proof in Appendix 7.3), and its associated Fiedler vector $\phi_1$ reveals the dynamics of $\bar{\mathcal{G}}$ (Figure 1). In practice, similar to many static GT models (Kreuzer et al., 2021; Rampášek et al., 2022; Dwivedi et al., 2022), we retrieve the first $k$ eigenvectors of the spectrum of $\bar{L}$, with $k$ being a hyper-parameter. The spectrum can be computed in $O(k^2 N')$ time with a memory complexity of $O(k N')$, where $N'$ is the size of $\bar{A}$, and we follow the literature to normalize the eigenvectors and resolve sign ambiguities (Kreuzer et al., 2021). The supra-Laplacian spatio-temporal encoding vector of node $u$ at time $t$ is:

$$\text{SLE}_{u,t} = \begin{cases} g_{\theta_{\text{ST}}}\big(\bar{L}_{u,t} \cdot [\phi_1, \phi_2, ..., \phi_k] \oplus \text{diag}(\Lambda)\big) & \text{if } u_t \text{ is not isolated}\\ g_{\theta_{\text{ST}}}\big(\mathbf{0}_k \oplus \text{diag}(\Lambda)\big) & \text{otherwise} \end{cases}$$

where $\oplus$ denotes the concatenation operator. $\bar{L}_{u,t} \cdot [\phi_1, \phi_2, ..., \phi_k] = [\phi_1^{u,t}, \phi_2^{u,t}, ..., \phi_k^{u,t}]$ contains the projections of node $u$ at time $t$ onto the eigenspace spanned by the $k$ first eigenvectors of $\bar{L}$, $\text{diag}(\Lambda)$ contains the eigenvalues of $\bar{L}$ (which are the same for all nodes), and $g_{\theta_{\text{ST}}}$ is a linear layer that finely adapts the supra-graph spectrum features to the underlying link prediction task. Note that because we did not include isolated nodes in the computation of the supra-Laplacian, we replace the eigenvector projections by a null vector $\mathbf{0}_k$ for these nodes. All the steps involved in constructing our spatio-temporal encoding are illustrated in Figure 2(b).
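Below is a hedged sketch of how this encoding could be computed, assuming the sparse supra-Laplacian built earlier and a PyTorch linear layer playing the role of $g_{\theta_{\text{ST}}}$; the eigensolver call and the sign-fixing heuristic are illustrative choices, not necessarily those of the released code.

```python
import numpy as np
import torch
from scipy.sparse.linalg import eigsh

def supra_laplacian_encoding(L_bar, k, g_theta_st):
    """L_bar: sparse normalized supra-Laplacian; g_theta_st: torch.nn.Linear(2 * k, d_pe)."""
    eigvals, eigvecs = eigsh(L_bar, k=k, which="SM")          # k smallest eigenpairs
    cols = np.arange(k)
    signs = np.sign(eigvecs[np.abs(eigvecs).argmax(axis=0), cols])
    eigvecs = eigvecs * signs                                  # fix the sign ambiguity
    phi = torch.from_numpy(eigvecs).float()                    # (N', k) node projections
    lam = torch.from_numpy(eigvals).float().expand_as(phi)     # (N', k) shared spectrum
    # Isolated nodes are absent from L_bar; their encoding uses zeros(k) instead of phi.
    return g_theta_st(torch.cat([phi, lam], dim=-1))           # (N', d_pe) SLE vectors
```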

Fully-connected spatio-temporal transformer

In this section, we describe the architecture of our fully-connected spatio-temporal transformer, $f_{\theta_T}$, used to construct node representations that capture long-range dependencies between the nodes at each time step. We illustrate our fully-connected GT in Figure 2(c). We employ a single transformer block, such that our architecture remains lightweight. This is in line with recent findings showing that a single encoder layer with multi-head attention is sufficient for high performance, even for dynamic graphs (Wu et al., 2024).

The input representation of node $u_t$ is the concatenation of the node embedding (which remains the same for each snapshot) and our supra-Laplacian spatio-temporal encoding:

$$\mathbf{z}_{u,t} = g_{\theta_{\text{E}}}(\mathbf{x}_u) \oplus \text{SLE}_{u,t}$$

where $g_{\theta_{\text{E}}}$ is a linear projection layer and $\oplus$ denotes the concatenation operator. Then we stack the representations of each node at each time step within a time window of size $w$ to obtain the input sequence, $Z \in \mathbb{R}^{(Nw) \times d}$, of the GT.

The fully-connected spatio-temporal transformer, $f_{\theta_T}$, then produces a unique representation $\tilde{Z} \in \mathbb{R}^{(Nw) \times d}$ for each node at each time-step:

$$\tilde{Z} = f_{\theta_T}(Z).$$

Surprisingly, considering all temporal snapshots did not yield better results in our experiments (see Figure 4 in Section 4.2).

Unlike previous DGT methods that sample substructures around each node (Liu et al., 2021; Yang et al., 2022; Wang et al., 2021), SLATE leverages the full structure of the DTDG within the time window. This approach ensures that no nodes are arbitrarily discarded in the representation learning process, as we use the same information source $Z$ for all nodes.
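As an illustration of this step, the following sketch (assumed names and layer sizes, not the official code) builds the token sequence $Z$ from static node features and per-snapshot SLE encodings, and applies a single fully-connected encoder block over all $N \times w$ tokens.

```python
import torch
import torch.nn as nn

class SpatioTemporalTransformer(nn.Module):
    """One-block fully-connected encoder over all N*w spatio-temporal tokens."""
    def __init__(self, feat_dim, d_model, d_pe, nhead=4, dim_ffn=512):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model - d_pe)               # g_theta_E
        self.encoder = nn.TransformerEncoderLayer(
            d_model, nhead, dim_feedforward=dim_ffn, batch_first=True)

    def forward(self, x, sle):
        # x: (N, F) static node features; sle: (w, N, d_pe) supra-Laplacian encodings
        w, N, _ = sle.shape
        z = torch.cat([self.embed(x).expand(w, N, -1), sle], dim=-1)   # (w, N, d)
        z_tilde = self.encoder(z.reshape(1, w * N, -1))                # full attention
        return z_tilde.reshape(w, N, -1)                               # \tilde{Z}
```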

Edge Representation with Cross-Attention

In this section, we present our innovative edge representation module, Edge. It is designed for efficient dynamic link prediction and leverages the node representations learned by our fully-connected spatio-temporal GT. We illustrate our module in Figure 2(d). This module is composed of a cross-attention model, $f_{\theta_{\text{XA}}}$, that captures pairwise information between the historical representations of two targeted nodes, followed by a classifier that determines the presence of a link.

For a link prediction at time $t+1$ on a given node pair $(u, v)$, we aggregate all temporal representations of $u$ and $v$, resulting in two sequences $\tilde{Z}_{u,t} = [\tilde{\mathbf{z}}_{u,t-w}, \ldots, \tilde{\mathbf{z}}_{u,t}]$ and $\tilde{Z}_{v,t} = [\tilde{\mathbf{z}}_{v,t-w}, \ldots, \tilde{\mathbf{z}}_{v,t}]$. We use these multiple embeddings to build a pairwise representation that captures dynamic relationships over time. Then, the cross-attention module $f_{\theta_{\text{XA}}}$ produces a pairwise representation of the sequence, $E_{u,v} \in \mathbb{R}^{w \times d}$:

$$E_{u,v} = f_{\theta_{\text{XA}}}(\tilde{Z}_{u,t}, \tilde{Z}_{v,t}).$$

We obtain the final edge representation $\mathbf{e}_{u,v} \in \mathbb{R}^d$ by applying an average time-pooling operator, and we compute the probability that the nodes $u_{t+1}$ and $v_{t+1}$ are connected with:

$$p(y=1 \mid \mathbf{e}_{u,v}) = \sigma(\text{MLP}(\mathbf{e}_{u,v})).$$
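A hedged sketch of such an Edge module follows, using PyTorch multi-head cross-attention, mean time-pooling, and a two-layer MLP scorer; the head count and layer sizes are assumptions, not the released configuration.

```python
import torch
import torch.nn as nn

class EdgeModule(nn.Module):
    """Cross-attention between the temporal sequences of two nodes, then link scoring."""
    def __init__(self, d_model, nhead=2):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.scorer = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))

    def forward(self, z_u, z_v):
        # z_u, z_v: (w, d) temporal representations of nodes u and v
        q, kv = z_u.unsqueeze(0), z_v.unsqueeze(0)            # add a batch dimension
        e_uv, _ = self.cross_attn(q, kv, kv)                  # (1, w, d) pairwise sequence
        e_uv = e_uv.mean(dim=1)                               # average time pooling
        return torch.sigmoid(self.scorer(e_uv)).squeeze(-1)   # p(y = 1 | e_{u,v})
```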

SLATE differs from methods that enrich node and edge representations with pairwise information by sampling substructures around each node (Wang et al., 2021; Yu et al., 2023; Wang et al., 2021; Hu et al., 2023). Instead, we first compute node representations based on the same dynamic graph information contained in $Z$. Then, we capture fine-grained dynamics specific to each link $(u,v)$ through a cross-attention mechanism.

Our training resorts to the standard Binary Cross-Entropy loss function. In practice, for a node $u$, we sample a negative pair $v_{\text{neg}}$ and a positive pair $v_{\text{pos}}$:

$$\mathcal{L}_\theta = \text{BCE}\big(p(y=1 \mid \mathbf{e}_{u,v_{\text{pos}}})\big) + \text{BCE}\big(p(y=0 \mid \mathbf{e}_{u,v_{\text{neg}}})\big).$$

In this context, $\theta = \{\theta_{\text{XA}}, \theta_{\text{T}}, \theta_{\text{ST}}, \theta_{\text{E}}\}$ gathers all the parameters: the edge representation module $\theta_{\text{XA}}$, the fully-connected transformer $\theta_{\text{T}}$, the spatio-temporal linear layer $\theta_{\text{ST}}$, and the node embedding parameters $\theta_{\text{E}}$, as illustrated in Figure 2.
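For one anchor node $u$ with one positive and one negative counterpart, the objective above reduces to the following few lines (illustrative only; the negative sampling strategies are those described in the evaluation protocols).

```python
import torch
import torch.nn.functional as F

def link_loss(p_pos, p_neg):
    # p_pos = p(y=1 | e_{u, v_pos}), p_neg = p(y=1 | e_{u, v_neg}), both in (0, 1)
    return F.binary_cross_entropy(p_pos, torch.ones_like(p_pos)) + \
           F.binary_cross_entropy(p_neg, torch.zeros_like(p_neg))
```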

SLATE Scalability

The theoretical complexity of attention computation is $O(N^2)$ per snapshot, scaling to $O((NT)^2)$ when considering all $T$ snapshots. However, as shown in our experiments (Figure 4) and consistent with recent works (Karmim et al., 2024), a large temporal context is often unnecessary. By using a time window $w$ with $w \ll T$ (similar to other DGT architectures (Liu et al., 2021; Yang et al., 2022)), we reduce the complexity to $O((Nw)^2)$. For predictions at time $t+1$, we focus only on snapshots $G_{t-w}$ to $G_t$. Ablation studies confirm that smaller time windows deliver excellent results across various real-world datasets. We further leverage FlashAttention (Dao et al., 2022) to optimize memory usage and computation. Additionally, we incorporate Performer (Choromanski et al., 2022), which approximates the softmax computation of the attention matrix, reducing the complexity to $O(Nw)$. This enables us to scale efficiently to larger graphs, as shown in Table [tab:efficiency], while maintaining high performance (see Table 8) with manageable computational resources.

Experiments

We conduct extensive experiments to validate SLATE for link prediction on discrete dynamic graphs, including state-of-the-art comparisons in Section 4.1. In Section 4.2, we highlight the benefits of our two main contributions, the importance of connecting our supra-graph, and the ability of SLATE to scale to larger datasets with reasonable time and memory consumption compared to MP-GNNs.

Implementation details. We use one Transformer encoder layer (Vaswani et al., 2017). For larger datasets, we employ FlashAttention (Dao et al., 2022) for improved time and memory efficiency. Further details regarding model parameters and their selection are provided in Table 3. We fix the token dimension at $d=128$ and the time window at $w=3$ for all our experiments. We use an SGD optimizer for all of our experiments. Further details on the hyper-parameter search, including the number of eigenvectors for our spatio-temporal encoding, are given in Appendix 10.

Comparison to state-of-the-art

Since both the continuous and discrete communities evaluate on similar data, we compare SLATE to state-of-the-art DTDG (Table [tab:dtdg_main_auc]) and CTDG (Table [tab:ctdg_main_auc]) models. Best results are in bold, second best are underlined. More detailed results and analyses are presented in Appendix [app:additionnalexp].

Baselines and evaluation protocol. To compare the benefits of fully connected spatio-temporal attention with a standard approach using transformers, we designed the ROLAND-GT model based on the ROLAND framework (You et al., 2022). This model follows the stacked-GNN approach (Skarding et al., 2021), equipped with the encoder $f_{\theta_T}$ described in Section 3, including a static Laplacian positional encoding (Dwivedi & Bresson, 2020), and an LSTM (Hochreiter & Schmidhuber, 1997) updating the node embeddings.

We adhere to the standardized evaluation protocols for continuous models (Yu et al., 2023) and discrete models (Yang et al., 2021). Our evaluation follows these protocols, including metrics, data splitting, and the datasets provided. Results in Tables [tab:dtdg_main_auc] and [tab:ctdg_main_auc] are from the original papers, except those marked with $\dagger$. We report the average results and standard deviations from five runs to assess robustness. Additional results, including hard negative sampling evaluation, are in Appendix 11.2.

Datasets. In Table 1 (Appendix 9), we provide detailed statistics for the datasets used in our experiments. An in-depth description of the datasets is given in Appendix 9. We evaluate on the DTDG datasets provided by Yu et al., 2023 and Yang et al., 2021, and we add a synthetic dataset, SBM, based on the stochastic block model (Lee & Wilkinson, 2019), to evaluate on a denser DTDG.


Comparison to discrete models, on DTDG. Table [tab:dtdg_main_auc] showcases the performance of SLATE against various discrete models on DTDG datasets, highlighting its superior performance across multiple metrics and datasets. SLATE outperforms all state-of-the-art models on the HepPh, Enron, and Colab datasets, demonstrating superior dynamic link prediction capabilities. Notably, it surpasses HTGN by +2.1 points in AUC on HepPh and +1.1 points in AP on Enron. Moreover, SLATE shows a remarkable improvement of +7.6 points in AUC over EvolveGCN on Colab. It also performs competitively on the AS733 dataset, with scores that are closely second to HTGN, demonstrating its robustness across different types of dynamic graphs. What also emerges from this comparison and validates our method is the average gain of +6.29 points of our fully connected spatio-temporal attention model over the separate spatial attention and temporal model approach used in ROLAND-GT. We also demonstrate significant gains against sparse attention models like DySat, with an increase of +6.45 points. This study, conducted on the protocol from Yang et al., 2021, emphasizes SLATE's capability in handling discrete-time dynamic graph data, offering significant improvements over existing models.

Comparison to continuous models, on DTDG. In dynamic link prediction (Table [tab:ctdg_main_auc]), SLATE outperforms models focused on node (TGN, DyRep, TGAT), edge (CAWN), and combined node-pairwise information (DyGFormer, TCL). Notably, it surpasses TCL by over 21 points on average, showcasing the benefits of our temporal cross-attention strategy. SLATE's advantage stems from its global attention mechanism, unlike the sparse attention used by TGAT, TGN, and TCL. By employing fully-connected spatio-temporal attention, SLATE directly leverages the temporal dimension through its Edge module. This strategic approach allows SLATE to excel, as demonstrated by its consistent top performance and further evidenced in the Appendix with hard negative sampling results (see Tables [tab:ctdg_full_ap] and [tab:ctdg_full_auc] in Appendix [app:additionnalexp]). We demonstrate average results that are superior by 13 points compared to the most recent model on DTDG, DyGFormer (Yu et al., 2023).

Model Analysis

Impact of different SLATE components. Table [tab:impact_components] presents the AUC results of different configurations of SLATE on four datasets. This evaluation demonstrates the impact of our proposed spatio-temporal encoding and of the Edge module on dynamic link prediction performance.

First, we show the naive spatio-temporal encoding approach using the first $k$ Laplacian eigenvectors associated with the $k$ lowest eigenvalues (Dwivedi & Bresson, 2020) (Appendix 7.4), combined with an unparametrized sinusoidal temporal encoding (Vaswani et al., 2017) (Appendix 7.5), without the Edge module. The Laplacian is computed sequentially on the $w$ snapshots, then concatenated with the temporal encoding indicating the position of the snapshot, with $k=12$ for both SLATE and the naive encoding. The AUC scores across all datasets are significantly lower, highlighting the limitations of this naive encoding method in capturing complex spatio-temporal dependencies.

Replacing the baseline encoding with our proposed SLATE encoding, still without the Edge module, results in significant improvements: +6.47 points on CanParl, +8.08 points on USLegis, and +3.77 points on UNtrade. These improvements demonstrate the effectiveness of our spatio-temporal encoding. Adding the Edge module to the naive encoding baseline yields further improvements: +7.25 points on CanParl and +1.57 points on Enron. However, it still falls short compared to the enhancements provided by the SLATE encoding.

Finally, the complete model, with the Edge module, achieves the highest AUC scores across all datasets: +9.39 points on CanParl and +10.58 points on USLegis. These substantial gains confirm that integrating our unified spatio-temporal encoding and the Edge module effectively captures intricate dynamics between nodes over time, resulting in superior performance.

| Dataset | SLATE w/o trsf | SLATE |
|---------|----------------|-------|
| Colab   | 85.03 ± 0.72   | 90.84 ± 0.41 |
| USLegis | 63.35 ± 1.24   | 95.80 ± 0.11 |
| UNVote  | 78.30 ± 2.05   | 99.94 ± 0.05 |
| AS733   | 81.50 ± 1.35   | 97.46 ± 0.45 |

Table [tab:process_adj]: SLATE without (w/o trsf) and with the supra-adjacency transformation.

Figure 3: Proportion of isolated nodes per snapshot for each dataset.

Critical role of the supra-adjacency transformation. Here, we demonstrate the importance of the transformation steps applied to the supra-adjacency matrix, as detailed in Section 3.1: removing isolated nodes, adding virtual nodes, and incorporating temporal connections (Figure 2). Table [tab:process_adj] presents the performance of SLATE with and without transformation (trsf) on four datasets. Without these critical transformations, there is a systematic drop in performance, particularly pronounced in datasets with a high number of isolated nodes, as shown in Figure 3 (27% in Colab, 53% in USLegis, 35% in UNVote, and 59% in AS733). These results clearly highlight the significant improvements brought by our proposed transformations. More detailed experiments regarding each transformation, particularly on the importance of removing isolated nodes and adding a virtual node, are presented in Tables [tab:isolated_vs_slate] and [tab:with_wo_vn].

| Models | Mem. | t / ep. | Nb params. |
|--------|------|---------|------------|
| EvolveGCN | 46 GB | 1828 s | 1.8 M |
| DySAT | 42 GB | 1077 s | 1.8 M |
| VGRNN | 21 GB | 931 s | 0.4 M |
| ROLAND-GT w/o Flash | OOM | - | 1.9 M |
| ROLAND-GT | 44 GB | 1152 s | 1.9 M |
| SLATE w/o Flash | OOM | - | 2.1 M |
| SLATE | 48 GB | 1354 s | 2.1 M |
| SLATE-Performer | 17 GB | 697 s | 2.1 M |

Table [tab:efficiency]: Memory consumption (Mem.), training time per epoch (t / ep.), and number of parameters (Nb params.).

Figure 4: Model performance based on the window size; $w = \infty$ corresponds to considering all snapshots. Results in average precision (AP).

Impact of the time-window size.

We show in Figure 4 the impact of the time window size on the performance of the SLATE model. A window size of 1 is equivalent to applying a global attention transformer to the latest snapshot before prediction, and an infinite window size is equivalent to considering all the snapshots for global attention. This figure highlights the importance of temporal context for accurate predictions within dynamic graphs. We observe that, in most cases, too much temporal context can introduce noise into the predictions. The USLegis, UNVote and CanParl datasets are political graphs spanning decades (72 years for UNVote), making it unnecessary to look too far back. For all of our main results in Tables [tab:ctdg_main_auc] and [tab:dtdg_main_auc] we fix $w=3$ for simplicity. However, our ablations have identified $w=4$ as an optimal balance, capturing sufficient temporal context without introducing noise into the transformer encoder and ensuring scalability for our model. Therefore, SLATE's performance could be further improved by a more systematic cross-validation of its hyper-parameters, e.g., $w$.

Model efficiency.

The classic attention mechanism, with a complexity of $O(N^2)$, can be memory-consuming when applied across all nodes at different time steps. However, using FlashAttention (Dao et al., 2022) and a light transformer architecture with just one encoder layer, we successfully scaled to the Flights dataset, containing 13,000 nodes, with a window size of $w=3$. By using the Performer encoder (Choromanski et al., 2022), which approximates attention computation with linear complexity, memory usage is reduced to 17 GB. Our analysis shows that our model empirically matches the memory consumption of various DTDG architectures while maintaining comparable computation times (Table [tab:efficiency]). Furthermore, it is not over-parameterized relative to existing methods. We trained on an NVIDIA Quadro RTX A6000 with 49 GB of total memory.

Qualitative results

We present qualitative results in Figure 5 comparing the graph and its spectrum before and after applying the proposed transformation in SLATE. The projection is made on the eigenvector associated with the first non-zero eigenvalue. Before transformation, the DTDG contains isolated nodes (7, 23 and 26) and two distinct clusters in the snapshot at $t=3$. In this case, the projection is purely spatial, as there are no temporal connections, and some projections also occur on isolated nodes due to the presence of distinct connected components. After the proposed transformation into a connected multi-layer graph, the projection captures richer spatio-temporal properties of the dynamic graph. By connecting the clusters with a virtual node and adding temporal edges, our approach removes the influence of isolated nodes and enables the construction of an informative spatio-temporal encoding that better reflects the dynamic nature of the graph.

Figure 5: Projection of the eigenvector associated with the first non-zero eigenvalue on a toy DTDG before and after transformation. On the left, the DTDG is unprocessed, showing only spatial projections due to the lack of temporal connections. On the right, after applying the SLATE transformation, the graph captures rich spatio-temporal properties, allowing for a more informative spatio-temporal encoding.

Conclusion

We have presented the SLATE method, an innovative spatio-temporal encoding for transformers on dynamic graphs, based on supra-Laplacian analysis. Considering discrete-time dynamic graphs as multi-layer networks, we devise an extremely efficient unified spatio-temporal encoding thanks to the spectral properties of the supra-adjacency matrix. We integrate this encoding into a fully-connected transformer. By modeling pairwise relationships in a new edge representation module, we show how it enhances link prediction on dynamic graphs. SLATE performs better than previous state-of-the-art approaches on various standard benchmark datasets, setting new state-of-the-art results for discrete link prediction.

Despite its strong performance, SLATE currently operates in a transductive setting and cannot generalize to unseen nodes. We aim to explore combinations with MP-GNNs to leverage the strengths of local feature aggregation and global contextual information. On the other hand, SLATE scales reasonably well to graphs up to a certain size, but, as is often the case with transformers, future work is required to scale to very large graphs.


Appendices

Supra-Laplacian and other positional encoding

Spectral Theory on multi-layer networks

To leverage the benefits of fully-connected spatio-temporal attention across all nodes at multiple timestamps, we encode the spatio-temporal structure by considering a DTDG as a multi-layer graph. For a simple DTDG $\mathcal{G} = \{G_1, G_2, G_3\}$ with a fixed number of $N$ nodes per snapshot, we define the square symmetric supra-adjacency matrix $\bar{A} \in \mathbb{R}^{3N \times 3N}$ as follows:

$$\bar{A} = \begin{pmatrix} A_1 & I & 0\\ I & A_2 & I \\ 0 & I & A_3 \end{pmatrix}$$

Then, we can utilize the rich spectral properties associated with its supra-Laplacian $\bar{L} = \bar{D} - \bar{A}$. Several studies have analyzed the spectrum of such multi-layer graphs (Cozzo et al., 2016; Dong et al., 2014; Radicchi & Arenas, 2013). In particular, Radicchi & Arenas, 2013 demonstrated that $\phi_1$, the Fiedler vector, associated with the second smallest eigenvalue $\lambda_1$, known as the algebraic connectivity or Fiedler value, highlights structural changes between each layer. For a DTDG, this provides valuable information about the graph's dynamics over time. We verified this property experimentally by generating a DTDG containing 3 snapshots of a random Erdős-Rényi graph (Erdős & Rényi, 1959) with 10 nodes each and connecting them temporally according to Eq. [eq:supradj] (see the illustration in Figure 1). We then project all nodes of the DTDG onto different eigenvectors associated with eigenvalues $\lambda_i$, with $\lambda_0 \leq \lambda_1 \leq ... \leq \lambda_{\text{max}}$. We observe that projecting onto $\phi_1$ provides dynamic information, while projecting onto $\phi_i$ associated with larger eigenvalues $\lambda_i$ reveals increasingly localized structures. These properties strongly motivate the use of spectral analysis of a multi-layer graph derived from a DTDG to achieve a unified spatio-temporal encoding.
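This toy experiment can be reproduced in a few lines; the sketch below is ours (not the authors' script) and assumes NetworkX and NumPy, the unnormalized supra-Laplacian $\bar{L} = \bar{D} - \bar{A}$ used in this appendix, and a generated supra-graph that turns out to be connected.

```python
import numpy as np
import networkx as nx
import scipy.sparse as sp

# Three 10-node Erdős-Rényi snapshots, coupled by identity inter-layer links.
snapshots = [nx.to_numpy_array(nx.erdos_renyi_graph(10, 0.4, seed=s)) for s in range(3)]
A_bar = sp.block_diag(snapshots).toarray()
for t in range(2):
    for u in range(10):
        A_bar[t * 10 + u, (t + 1) * 10 + u] = A_bar[(t + 1) * 10 + u, t * 10 + u] = 1.0

L_bar = np.diag(A_bar.sum(axis=1)) - A_bar          # unnormalized supra-Laplacian
eigvals, eigvecs = np.linalg.eigh(L_bar)            # eigenvalues in ascending order
phi_1 = eigvecs[:, 1]                               # Fiedler vector (lambda_1)
print(phi_1.reshape(3, 10))                         # one row per snapshot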

Supra-graph construction

Algorithm [alg:spectrum]: supra-Laplacian spectrum computation.

1. `adjacencies` $\leftarrow$ list of the snapshot adjacency matrices in the time window, with isolated nodes removed and a virtual node added per snapshot.
2. $\bar{A} \leftarrow \texttt{BlockDiag}(\texttt{adjacencies})$
3. $\bar{A} \leftarrow \texttt{AddTempConnection}(\bar{A})$
4. $\bar{L} \leftarrow I - \bar{D}^{-1/2}\bar{A}\bar{D}^{-1/2}$
5. $\Phi, \Lambda \leftarrow \texttt{GetKFirstEigVectors}(\bar{L}, k)$

In practice, when isolated nodes are removed, we obtain a mask of size $N$. This mask helps us identify which nodes are isolated at each time step and determines whether their positional encoding will be $\mathbf{0}_k$ or the projection onto the basis of the $k$ eigenvectors. The mask also guides us in adding temporal connections between a node and its past, as isolated nodes do not have temporal connections. In summary, the matrix $\bar{A}$ has a size different from $N \times W$ because we remove isolated nodes and add virtual nodes. The masks help us map the actual indices in $\mathcal{G}$ to the rows of $\bar{A}$.

Figure 6: Transformation of a random DTDG into a connected multi-layer network. The left side shows independent snapshots with isolated nodes and disconnected clusters. The proposed transformation (right) ensures connectivity by introducing temporal edges, removing isolated nodes, and adding a virtual node to connect the clusters within each snapshot.

In Figure 6, we illustrate the process of transforming a random DTDG into a connected multi-layer network. On the left, we see three independent snapshots, with several isolated nodes (6, 23, and 26) and multiple clusters in the snapshot at $t=3$. The proposed transformation in SLATE ensures that the resulting multi-layer graph becomes fully connected by adding temporal connections, removing isolated nodes, and introducing a virtual node to bridge the different clusters within each snapshot.

Supra-Laplacian Positional Encoding

Proof of the positivity of $\lambda_1$ when the graph is connected (Fiedler, 1973).
Theorem 1: The second smallest eigenvalue, $\lambda_1$ (the Fiedler value), is strictly positive if and only if the graph is connected.

Proof 1: Assume that the graph is not connected. This implies that it can be divided into at least two disjoint connected components without any edges connecting them. For such a graph, it is possible to construct a non-constant vector, piecewise constant on these connected components, such that $\phi_i^T \bar{L} \phi_i = 0$, where $\phi_i$ is an eigenvector. This demonstrates that $\lambda_1 = 0$. Conversely, if $\lambda_1 > 0$, the only vector that satisfies $\phi_i^T \bar{L} \phi_i = 0$ (for non-zero $\phi_i$) is the constant vector, indicating that the graph cannot be divided without cutting edges; thus it is connected.

Laplacian Positional Encoding

$$L_t = I - D_t^{-1/2} A_t D_t^{-1/2} = \phi_t^{T} \Lambda \phi_t$$
$$\text{LapPE}^t_{i} = (\phi^t_{i,1}, \phi^t_{i,2}, ..., \phi^t_{i,d_{\text{pos}}})$$

$L_t$ represents the Laplacian matrix of the graph $G_t$. It is decomposed as the product of the eigenvectors $\phi_t$ and the eigenvalues $\Lambda$. The Laplacian positional encoding defined in Eq. [eq:lpe] provides a unique positional representation of the node $u_{i,t}$ with respect to the $k$ eigenvectors of $G_t$.

Unparameterized temporal encoding

$$\text{timePE}(t, k) = \begin{cases} \sin\left(\frac{t}{10000^{2k/d_{\text{time}}}}\right) & \text{if } k \text{ is even} \\ \cos\left(\frac{t}{10000^{(2k+1)/d_{\text{time}}}}\right) & \text{if } k \text{ is odd} \end{cases}$$

In Eq. [eq:timepe], $t$ refers to the $t$-th snapshot of our DTDG $\mathcal{G}$, and $k$ is the dimension in our temporal encoding vector of size $d_{\text{time}}$. This temporal encoding is from Vaswani et al., 2017. To build the ROLAND-GT separate spatio-temporal encoding, we concatenate the positional encoding LapPE (Eq. [eq:lpe]) and the time encoding (Eq. [eq:timepe]).
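A direct, hedged transcription of Eq. [eq:timepe] (function and variable names are ours):

```python
import numpy as np

def time_pe(t, d_time):
    """Sinusoidal encoding of snapshot index t, as in Eq. [eq:timepe]."""
    pe = np.zeros(d_time)
    for k in range(d_time):
        exponent = (2 * k) / d_time if k % 2 == 0 else (2 * k + 1) / d_time
        angle = t / (10000.0 ** exponent)
        pe[k] = np.sin(angle) if k % 2 == 0 else np.cos(angle)
    return pe
```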

GCN Positional Encoding

In Figure 7, we add the GCN positional encoding to our comparison against the SLATE model. This encoding is derived from a 2-layer GCN as designed by Kipf and Welling (Kipf & Welling, 2016). This method aggregates the local neighborhood information around a target node with message passing. We use the resulting node embedding as positional encoding to enhance the transformer's awareness of the local structural context. This approach aims to integrate structural insights into the transformer model. It is inspired by the prevalent hybrid architectures combining MP-GNNs and transformers in static Graph Transformers (Rampášek et al., 2022). It reflects an evolving trend in graph neural network research, where the strengths of MP-GNNs in capturing local graph structures and of transformers in modeling complex data dependencies are leveraged to enhance model performance on graph-based tasks. However, in our experiments, we found that SLATE significantly outperformed the GCN-based positional encoding.

Baselines

Discrete Time Dynamic Graph Link Prediction models. We describe the DTDG models from Yang et al., 2021:

Continuous Time Dynamic Graph Link Prediction models. We report the descriptions of the CTDG baselines provided in Yu et al., 2023.

Datasets

Datasets description

| Datasets | Domains | Nodes | Links | Snapshots |
|----------|---------|-------|-------|-----------|
| CanParl | Politics | 734 | 74,478 | 14 |
| USLegis | Politics | 225 | 60,396 | 12 |
| Flights | Transports | 13,169 | 1,927,145 | 122 |
| Trade | Economics | 255 | 507,497 | 32 |
| UNVote | Politics | 201 | 1,035,742 | 72 |
| Contact | Proximity | 692 | 2,426,279 | 8064 |
| HepPh | Citations | 15,330 | 976,097 | 36 |
| AS733 | Router | 6,628 | 13,512 | 30 |
| Enron | Mail | 184 | 790 | 11 |
| Colab | Citations | 315 | 943 | 10 |
| SBM | Synthetic | 1000 | 4,870,863 | 50 |

Table 1: Statistics of the datasets used in our experiments.

Datasets split

For the datasets from Yu et al., 2023, we follow the same graph splitting strategy, i.e., 70% of the snapshots for training, 15% for validation, and 15% for testing. We use the same number of snapshots as in HTGN (Yang et al., 2021); the value varies for each dataset (Table 2).

| Datasets | HepPh | AS733 | Enron | Colab |
|----------|-------|-------|-------|-------|
| $l$ (number of snapshots in *test*) | 6 | 10 | 3 | 3 |

Table 2: $l$ represents the number of snapshots in the test dataset. The DTDG is split temporally, following Yang et al., 2021.

Implementation details and parameters search

For each of our experiments, we used a fixed embedding size of $d=128$, a time window $w=3$, and a single transformer encoder layer. Additionally, for the calculation of our positional encoding vectors, we consider that the graph is always undirected. In Table 3, we provide the remaining hyperparameters that we adjusted per dataset, selecting the values that yielded the best validation performance in AP. `k` is the number of linearly independent eigenvectors we retrieve; note that $d$ does not increase when `dim_pe` grows because $d' = d - k$. `nhead_xa` is the number of heads inside the Edge Representation module defined in Section 3.3, `nhead_encoder` is the number of heads inside SLATE (Section 3), `dim_ffn` is the dimension of the feed-forward networks in SLATE, and `norm_first` controls whether or not a layer norm is applied before the full attention.

| Parameters | Search Range |
|------------|--------------|
| `k` | [4, 6, 10, 12, 14] |
| `nhead_xa` | [1, 2, 4, 8] |
| `nhead_encoder` | [1, 2, 4, 8] |
| `dim_ffn` | [128, 512, 1024] |
| `norm_first` | [True, False] |
| `learning_rate` | [0.1, 0.01, 0.001, 0.0001] |
| `weight_decay` | [0, 5e-7] |

Table 3: Hyperparameter search range.

Experiments: Additional results

AP results for DTDG models

We present additional results with the Average Precision metric to evaluate the dynamic link prediction capability of the models. SLATE outperforms all other DTDG models across various datasets, achieving the highest average precision (AP) scores. Specifically, SLATE surpasses the best-performing model, HTGN, with significant improvements: +1.22 on HepPh, +1.09 on Enron, and +3.33 on SBM. This highlights the effectiveness of our approach in dynamic link prediction tasks.

| Method | HepPh | AS733 | Enron | Colab | SBM | Avg |
|--------|-------|-------|-------|-------|-----|-----|
| GCN | 73.67 ± 1.05 | 97.11 ± 0.01 | 91.00 ± 0.73 | 90.17 ± 0.25 | 94.57 ± 0.30 | 89.30 ± 0.47 |
| GIN | 70.55 ± 0.84 | 93.43 ± 0.47 | 89.47 ± 1.52 | 87.82 ± 0.52 | 85.64 ± 0.11 | 85.38 ± 0.69 |
| EvolveGCN | 81.18 ± 0.89 | 95.28 ± 0.01 | 92.71 ± 0.34 | 87.53 ± 0.22 | 92.34 ± 0.17 | 89.81 ± 0.33 |
| GRUGCN | 85.87 ± 0.23 | 96.64 ± 0.22 | 93.38 ± 0.24 | 87.87 ± 0.58 | 91.73 ± 0.46 | 91.09 ± 0.35 |
| DySat | 84.47 ± 0.23 | 96.72 ± 0.12 | 93.06 ± 1.05 | 90.40 ± 1.47 | 90.73 ± 0.42 | 91.07 ± 0.66 |
| VGRNN | 80.95 ± 0.94 | 96.69 ± 0.31 | 93.29 ± 0.69 | 87.77 ± 0.79 | 90.53 ± 0.14 | 89.85 ± 0.57 |
| HTGN | 89.52 ± 0.28 | **98.41** ± 0.03 | 94.31 ± 0.26 | 91.91 ± 0.07 | 94.71 ± 0.13 | 93.77 ± 0.15 |
| ROLAND-GT | 82.75 ± 0.31 | 93.66 ± 0.14 | 89.86 ± 0.29 | 85.03 ± 1.96 | 93.62 ± 0.28 | 88.98 ± 0.59 |
| SLATE | **90.74** ± 0.51 | 98.16 ± 0.36 | **95.40** ± 0.29 | **92.15** ± 0.28 | **98.04** ± 0.29 | **94.90** ± 0.34 |

Table [tab:dtdg_full_ap]: Dynamic link prediction results in Average Precision (AP) for DTDG models.


Comparison to the state of the art: Hard Negative Sampling

We present an extensive set of results for our method in comparison to CTDG models on the task of dynamic link prediction on discrete-time dynamic graphs in Tables [tab:ctdg_full_ap] and [tab:ctdg_full_auc]. Here, we emphasize the effectiveness of our model when employing hard historical negative sampling. The historical negative sampling technique (hist) was introduced in Poursafaei et al., 2022 to enhance the evaluation of a model's dynamic capability by selecting negatives that occurred in previous time-steps but are not present at the current time for prediction. Inductive negative sampling evaluates the capability of models to predict new links that never occurred before. Our results demonstrate that our model excels at distinguishing hard negative edges compared to other CTDG models, as evidenced by improved performance in both AP and AUC metrics. SLATE also consistently outperforms other models using the inductive (ind) sampling method across multiple datasets, showcasing its superior capability in capturing dynamic graph interactions. Notably, SLATE achieves significant improvements on datasets such as USLegis and Trade, demonstrating its robustness and effectiveness in dynamic link prediction tasks.

Model Analysis: Additional results

| Datasets | SLATE w/o Edge | SLATE |
|----------|----------------|-------|
| CanParl | 89.45 ± 0.38 | 92.37 ± 0.51 |
| USLegis | 93.30 ± 0.29 | 95.80 ± 0.11 |
| Flights | 99.04 ± 0.61 | 99.07 ± 0.41 |
| Trade | 94.01 ± 0.73 | 96.73 ± 0.29 |
| UNVote | 93.56 ± 0.68 | 99.94 ± 0.05 |
| Contact | 97.41 ± 0.10 | 98.12 ± 0.37 |
| HepPh | 90.44 ± 1.07 | 93.21 ± 0.37 |
| AS733 | 96.84 ± 0.26 | 97.46 ± 0.45 |
| Enron | 90.57 ± 0.27 | 96.39 ± 0.18 |
| COLAB | 86.34 ± 0.34 | 90.84 ± 0.41 |

Table [tab:edge_impact]: Impact of the Edge representation module.

Figure 7: Comparison of the SLATE encoding against separate structural/positional encoding and time encoding.

Figure 7 provides a comparison between the SLATE spatio-temporal encoding and separate spatial and temporal encodings, including the Laplacian (Lap, Eq. [eq:lappe]) (Dwivedi et al., 2022) and GCN (Appendix 7.6) encodings. For the spatial encoding, we selected two common strategies; the first involves using, as we do, the first $k$ eigenvectors of the Laplacian (Dwivedi et al., 2022), but only for the current snapshot. Empirically, we found that the GCN encoding did not yield satisfying results, in contrast to the hybrid architecture strategies widely used for static Graph Transformers (Rampášek et al., 2022).

We show in Table [tab:edge_impact] that SLATE, with its cross-attention mechanism for edge representation, significantly enhances the predictive accuracy over the SLATE w/o Edge model. We observe improvements across various datasets, further emphasizing the importance of explicitly modeling temporal interactions: we gain, for example, +6.4 points on UNVote, +2.6 points on USLegis, and +5.8 points on Enron. SLATE's ability to capture intricate dynamics between two nodes across time results in substantial performance gains, making the Edge module a valuable addition to our model architecture.

Impact of the time-pooling function. In Table 4, we present the performance of the time-pooling function used in Section 3.3, across the USLegis, UNVote, and Trade datasets, with the time window set to $w=3$. Using $k=3$ corresponds to averaging over all snapshots within the window, whereas $k=1$ focuses exclusively on the last element of $E_{u,v}$. The results indicate that averaging (mean pooling) consistently outperforms max pooling, irrespective of the $k$ value. For our primary analysis, we therefore adopt $k=3$.

| Pool | USLegis AUC | USLegis AP | UNVote AUC | UNVote AP | Trade AUC | Trade AP |
|------|-------------|------------|------------|-----------|-----------|----------|
| Max | 93.03 | 88.68 | 87.92 | 87.72 | 93.37 | 93.32 |
| Avg. $k=3$ | 94.50 | 89.67 | **99.72** | **99.75** | 96.71 | 96.88 |
| Avg. $k=2$ | **95.35** | **92.17** | 99.69 | 99.67 | 96.76 | 96.93 |
| Avg. $k=1$ | - | - | 99.59 | 99.44 | **96.78** | **96.97** |

Table 4: Comparison of several time-pooling methods (random sampling) on USLegis, UNVote, and Trade.

Detailed analysis of the DTDG-to-multi-layer transformation in SLATE. We provide a closer examination of the performance of SLATE under the various transformations applied to the DTDG during its conversion into a multi-layer graph. Table [tab:isolated_vs_slate] demonstrates the negative effect of retaining isolated nodes, which leads to a significant drop in performance on both the Colab and USLegis datasets. By removing these nodes and focusing on the spectrum associated with the first non-zero eigenvalue, SLATE achieves a substantial performance improvement.

Table 6 highlights the importance of introducing a virtual node (VN) that connects clusters within each snapshot. Without the VN, the model underperforms, as shown in the results for the Enron dataset. This confirms that each transformation step, from removing isolated nodes to adding temporal connections and VNs, plays a critical role in enhancing the quality of the spatio-temporal encoding.

| Models | Colab | USLegis |
|---|---|---|
| SLATE with isolated nodes | 86.73 | 66.57 |
| SLATE | **90.84** | **95.80** |

: Effect of retaining isolated nodes on the Colab and USLegis datasets. Removing isolated nodes substantially improves SLATE's performance.

| Models | AP | AUC |
|---|---|---|
| w/o VN | 93.74 | 95.18 |
| w/ VN | **95.40** | **96.39** |

: Effect of introducing a virtual node (VN) on the Enron dataset. The addition of the VN improves SLATE's performance in terms of both AP and AUC.
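The transformation steps evaluated above can be summarized in a few lines. The sketch below (node naming, the virtual-node wiring, and the use of NetworkX are illustrative assumptions) shows how removing isolated nodes, adding temporal edges, adding a per-snapshot virtual node, and reading the spectrum from the first non-zero eigenvalue fit together.

```python
import numpy as np
import networkx as nx

def build_supra_graph(snapshots, drop_isolated=True, add_vn=True):
    """Sketch of the DTDG-to-multi-layer transformation discussed above."""
    supra = nx.Graph()
    for t, G in enumerate(snapshots):
        nodes = [n for n in G.nodes if not (drop_isolated and G.degree(n) == 0)]
        supra.add_nodes_from((n, t) for n in nodes)
        supra.add_edges_from(((u, t), (v, t)) for u, v in G.edges())
        if add_vn:  # virtual node linking the clusters of this snapshot
            supra.add_edges_from((("VN", t), (n, t)) for n in nodes)
        if t > 0:   # temporal edges: connect a node to its copy in the previous layer
            supra.add_edges_from(((n, t - 1), (n, t)) for n in nodes
                                 if supra.has_node((n, t - 1)))
    return supra

def supra_lap_pe(supra: nx.Graph, k: int) -> np.ndarray:
    """k eigenvectors of the supra-Laplacian, starting at the first non-zero eigenvalue."""
    L = nx.normalized_laplacian_matrix(supra).toarray()
    vals, vecs = np.linalg.eigh(L)
    first_nz = int(np.argmax(vals > 1e-8))
    return vecs[:, first_nz:first_nz + k]
```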

AUC: Impact of the time window on multiple models. The analysis in Table 7 demonstrates that the impact of the time window on model performance is consistent across different types of models, including our transformer-based approach and two MP-GNNs (EGCN and DySAT). Interestingly, we observe that a relatively short time window produces optimal results for all models on the UNVote dataset, which spans 72 snapshots. Specifically, both EGCN and DySAT achieve their highest AUC with $W=4$, while SLATE achieves peak performance at $W=2$. This indicates that capturing spatio-temporal dynamics does not necessarily require long temporal windows; in fact, shorter windows can often lead to better performance by focusing on more immediate temporal interactions.

| Model | Nb param. | $W=1$ | $W=2$ | $W=3$ | $W=4$ | $W=5$ | $W=\infty$ |
|---|---|---|---|---|---|---|---|
| EGCN | 1.8M | 86.96 | 86.48 | 86.74 | **87.66** | 85.26 | 86.74 |
| DySAT | 1.8M | 83.93 | 81.90 | 86.15 | **88.71** | 80.08 | 77.04 |
| SLATE | 2.1M | 96.68 | **99.73** | 98.74 | 95.90 | 95.79 | 92.24 |

: Effect of time window size on AUC for different models. Shorter windows provide optimal results across all models.

More qualitative analysis

We conduct a fine-grained analysis of the impact of not processing the DTDG correctly. Figure 8 demonstrates that without temporal connections, the result is purely spatial projections with no spatio-temporal information, as the three snapshots remain independent. Figure 9 illustrates the effect of retaining isolated nodes while adding temporal connections. Keeping these nodes leads to multiple disconnected components in the graph, where many projections focus solely on the isolated nodes, neglecting the core structure of the DTDG. This issue is further intensified by the fact that we consider only the $k$ eigenvectors associated with the first non-zero eigenvalue, limiting the ability to capture the full spatio-temporal dynamics.
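A quick sanity check of this spectral argument: the multiplicity of the Laplacian's zero eigenvalue equals the number of connected components, so every isolated node or disconnected snapshot consumes one slot at the bottom of the spectrum before any informative eigenvector appears. A toy example (using NetworkX, for illustration only):

```python
import numpy as np
import networkx as nx

# Three components: two paths and one isolated node.
G = nx.disjoint_union_all([nx.path_graph(4), nx.path_graph(3), nx.path_graph(1)])
vals = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
print(int(np.sum(vals < 1e-8)), nx.number_connected_components(G))  # prints: 3 3
```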

Figure 8: Effect of missing temporal connections in a DTDG. Without temporal edges, the projections are purely spatial, and the three snapshots remain independent, with no spatio-temporal interaction captured.

Figure 9: Effect of retaining isolated nodes in a DTDG with added temporal connections. Keeping isolated nodes results in multiple disconnected components, where many projections focus on these nodes, obscuring the overall spatio-temporal structure of the graph.

Scalability

In Table [tab:efficiency], we demonstrate that Performer Choromanski et al., 2022 significantly reduces memory consumption and speeds up training time per epoch. Moreover, as shown in Table 8, using Performer, an efficient approximation of the attention matrix with linear complexity, does not significantly degrade the results compared to the standard Transformer encoder. Performer is therefore a highly advantageous solution for scaling to larger graphs while maintaining the benefits of dynamic graph transformers: its linear complexity allows it to handle larger datasets efficiently without sacrificing performance.
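For reference, the core of Performer's linear-complexity attention is the positive random-feature approximation of the softmax kernel (FAVOR+). The snippet below is a minimal, self-contained sketch of that mechanism (the plain Gaussian projection and the einsum layout are illustrative choices), not the implementation used in SLATE.

```python
import math
import torch

def positive_random_features(x: torch.Tensor, proj: torch.Tensor) -> torch.Tensor:
    # x: (B, H, N, d), proj: (m, d) random Gaussian projection matrix.
    x = x / x.shape[-1] ** 0.25                      # absorbs the 1/sqrt(d) scaling
    xw = torch.einsum("bhnd,md->bhnm", x, proj)
    sq_norm = (x ** 2).sum(dim=-1, keepdim=True) / 2
    return torch.exp(xw - sq_norm) / math.sqrt(proj.shape[0])

def performer_attention(q, k, v, proj):
    """Softmax attention approximated in O(N) time and memory via random features."""
    qp, kp = positive_random_features(q, proj), positive_random_features(k, proj)
    kv = torch.einsum("bhnm,bhnd->bhmd", kp, v)                 # (B, H, m, d)
    norm = torch.einsum("bhnm,bhm->bhn", qp, kp.sum(dim=2))     # (B, H, N)
    return torch.einsum("bhnm,bhmd->bhnd", qp, kv) / norm.unsqueeze(-1)

# Usage sketch: q, k, v shaped (B, H, N, d); proj = torch.randn(m, d) with m random features.
```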

| Models | AS733 | USLegis | UNtrade |
|---|---|---|---|
| SLATE-Transformer | 97.46 ± 0.45 | 95.80 ± 0.11 | 96.73 ± 0.29 |
| SLATE-Performer | 95.39 ± 0.61 | 95.14 ± 0.84 | 96.21 ± 0.77 |

: AUC performance comparison between a standard Transformer encoder (Vaswani et al., 2017) and a Performer encoder (Choromanski et al., 2022).
| NSS | Method | CanParl | USLegis | Flights | Trade | UNVote | Contact |
|---|---|---|---|---|---|---|---|
| rnd | JODIE | 78.21 ± 0.23 | 82.85 ± 1.07 | 96.21 ± 1.42 | 69.62 ± 0.44 | 68.53 ± 0.95 | 96.66 ± 0.89 |
| rnd | DyREP | 73.35 ± 3.67 | 82.28 ± 0.32 | 95.95 ± 0.62 | 67.44 ± 0.83 | 67.18 ± 1.04 | 96.48 ± 0.14 |
| rnd | TGAT | 75.69 ± 0.78 | 75.84 ± 1.99 | 94.13 ± 0.17 | 64.01 ± 0.12 | 52.83 ± 1.12 | 96.95 ± 0.08 |
| rnd | TGN | 76.99 ± 1.80 | *83.34 ± 0.43* | 98.22 ± 0.13 | 69.10 ± 1.67 | *69.71 ± 2.65* | 97.54 ± 0.35 |
| rnd | CAWN | 75.70 ± 3.27 | 77.16 ± 0.39 | 98.45 ± 0.01 | 68.54 ± 0.18 | 53.09 ± 0.22 | 89.99 ± 0.34 |
| rnd | EdgeBank | 64.14 ± 0.00 | 62.57 ± 0.00 | 90.23 ± 0.00 | 66.75 ± 0.00 | 62.97 ± 0.00 | 94.34 ± 0.00 |
| rnd | TCL | 72.46 ± 3.23 | 76.27 ± 0.63 | 91.21 ± 0.02 | 64.72 ± 0.05 | 51.88 ± 0.36 | 94.15 ± 0.09 |
| rnd | GraphMixer | 83.17 ± 0.53 | 76.96 ± 0.79 | 91.13 ± 0.01 | 65.52 ± 0.51 | 52.46 ± 0.27 | 93.94 ± 0.02 |
| rnd | DyGformer | **97.76 ± 0.41** | 77.90 ± 0.58 | *98.93 ± 0.01* | *70.20 ± 1.44* | 57.12 ± 0.62 | **98.53 ± 0.01** |
| rnd | SLATE | *92.37 ± 0.51* | **95.80 ± 0.11** | **99.07 ± 0.41** | **96.73 ± 0.29** | **99.94 ± 0.05** | *98.12 ± 0.37* |
| hist | JODIE | 62.44 ± 1.11 | 67.47 ± 6.40 | 68.97 ± 1.87 | 68.92 ± 1.40 | 76.84 ± 1.01 | *96.35 ± 0.92* |
| hist | DyREP | 70.16 ± 1.70 | 91.44 ± 1.18 | 69.43 ± 0.90 | 64.36 ± 1.40 | 74.72 ± 1.43 | 96.00 ± 0.23 |
| hist | TGAT | 70.86 ± 0.94 | 73.47 ± 5.25 | 72.20 ± 0.16 | 60.37 ± 0.68 | 53.95 ± 3.15 | 95.39 ± 0.43 |
| hist | TGN | 73.23 ± 3.08 | 83.53 ± 4.53 | 68.39 ± 0.95 | 63.93 ± 5.41 | 73.40 ± 5.20 | 93.76 ± 1.29 |
| hist | CAWN | 72.06 ± 3.94 | 78.62 ± 7.46 | 66.11 ± 0.71 | 63.09 ± 0.74 | 51.27 ± 0.33 | 83.06 ± 0.32 |
| hist | EdgeBank | 63.04 ± 0.00 | 67.41 ± 0.00 | *74.64 ± 0.00* | *86.61 ± 0.00* | *89.62 ± 0.00* | 92.17 ± 0.00 |
| hist | TCL | 69.95 ± 3.70 | 83.97 ± 3.71 | 70.57 ± 0.18 | 61.43 ± 1.04 | 52.29 ± 2.39 | 93.34 ± 0.19 |
| hist | GraphMixer | 79.03 ± 1.01 | 85.17 ± 0.70 | 70.37 ± 0.23 | 63.20 ± 1.54 | 52.61 ± 1.44 | 93.14 ± 0.34 |
| hist | DyGformer | **97.61 ± 0.40** | *90.77 ± 1.96* | 68.09 ± 0.43 | 73.86 ± 1.13 | 64.27 ± 1.78 | **97.17 ± 0.05** |
| hist | SLATE | *88.71 ± 0.43* | ***90.69 ± 0.50*** | **76.83 ± 0.69** | **92.14 ± 0.38** | **98.62 ± 0.49** | 94.29 ± 0.09 |
| ind | JODIE | 52.88 ± 0.80 | 59.05 ± 5.52 | 69.99 ± 3.10 | 66.82 ± 1.27 | *73.73 ± 1.61* | *94.47 ± 1.08* |
| ind | DyREP | 63.53 ± 0.65 | *89.44 ± 0.71* | 71.13 ± 1.55 | 65.60 ± 1.28 | 72.80 ± 2.16 | 94.23 ± 0.18 |
| ind | TGAT | 72.47 ± 1.18 | 71.62 ± 5.42 | 73.47 ± 0.18 | 66.13 ± 0.78 | 53.04 ± 2.58 | 94.10 ± 0.41 |
| ind | TGN | 69.57 ± 2.81 | 78.12 ± 4.46 | 71.63 ± 1.72 | 66.37 ± 5.39 | 72.69 ± 3.72 | 91.64 ± 1.72 |
| ind | CAWN | 72.93 ± 1.78 | 76.45 ± 7.02 | 69.70 ± 0.75 | 71.73 ± 0.74 | 52.75 ± 0.90 | 87.68 ± 0.24 |
| ind | EdgeBank | 61.41 ± 0.00 | 68.66 ± 0.00 | **81.10 ± 0.00** | *74.20 ± 0.00* | 72.85 ± 0.00 | 85.87 ± 0.00 |
| ind | TCL | 69.47 ± 2.12 | 82.54 ± 3.91 | 72.54 ± 0.19 | 67.80 ± 1.21 | 52.02 ± 1.64 | 91.23 ± 0.19 |
| ind | GraphMixer | 70.52 ± 0.94 | 84.22 ± 0.91 | 72.21 ± 0.21 | 66.53 ± 1.22 | 51.89 ± 0.74 | 90.96 ± 0.27 |
| ind | DyGformer | **96.70 ± 0.59** | 87.96 ± 1.80 | 69.53 ± 1.17 | 62.56 ± 1.51 | 53.37 ± 1.26 | **95.01 ± 0.15** |
| ind | SLATE | *93.74 ± 0.08* | **90.23 ± 0.29** | *76.98 ± 1.64* | **91.45 ± 0.39** | **92.78 ± 0.06** | 94.03 ± 0.43 |

: AUC for dynamic link prediction on CanParl, USLegis, Flights, Trade, UNVote and Contact under random (rnd), historical (hist) and inductive (ind) negative sampling strategies (NSS). Best in bold, second best in italics.
| NSS | Method | CanParl | USLegis | Flights | Trade | UNVote | Contact |
|---|---|---|---|---|---|---|---|
| rnd | JODIE | 69.26 ± 0.31 | 75.05 ± 1.52 | 95.60 ± 1.73 | 64.94 ± 0.31 | 63.91 ± 0.81 | 95.31 ± 1.33 |
| rnd | DyREP | 66.54 ± 2.76 | 75.34 ± 0.39 | 95.29 ± 0.72 | 63.21 ± 0.93 | 62.81 ± 0.80 | 95.98 ± 0.15 |
| rnd | TGAT | 70.73 ± 0.72 | 68.52 ± 3.16 | 94.03 ± 0.18 | 61.47 ± 0.18 | 52.21 ± 0.98 | 96.28 ± 0.09 |
| rnd | TGN | 70.88 ± 2.34 | *75.99 ± 0.58* | 97.95 ± 0.14 | 65.03 ± 1.37 | *65.72 ± 2.17* | 96.89 ± 0.56 |
| rnd | CAWN | 69.82 ± 2.34 | 70.58 ± 0.48 | 98.51 ± 0.01 | 65.39 ± 0.12 | 52.84 ± 0.10 | 90.26 ± 0.28 |
| rnd | EdgeBank | 64.55 ± 0.00 | 58.39 ± 0.00 | 89.35 ± 0.00 | 60.41 ± 0.00 | 58.49 ± 0.00 | 92.58 ± 0.00 |
| rnd | TCL | 68.67 ± 2.67 | 69.59 ± 0.48 | 91.23 ± 0.02 | 62.21 ± 0.03 | 51.90 ± 0.30 | 92.44 ± 0.12 |
| rnd | GraphMixer | 77.04 ± 0.46 | 70.74 ± 1.02 | 90.99 ± 0.05 | 62.61 ± 0.27 | 52.11 ± 0.16 | 91.92 ± 0.03 |
| rnd | DyGformer | **97.36 ± 0.45** | 71.11 ± 0.59 | **98.91 ± 0.01** | *66.46 ± 1.29* | 55.55 ± 0.42 | **98.29 ± 0.01** |
| rnd | SLATE | *92.44 ± 0.25* | **92.66 ± 0.41** | *98.61 ± 0.44* | **96.91 ± 0.23** | **99.91 ± 0.09** | *97.68 ± 0.13* |
| hist | JODIE | 51.79 ± 0.63 | 51.71 ± 5.76 | 66.48 ± 2.59 | 61.39 ± 1.83 | 70.02 ± 0.81 | 95.31 ± 2.13 |
| hist | DyREP | 63.31 ± 1.23 | **86.88 ± 2.25** | 67.61 ± 0.99 | 59.19 ± 1.07 | 69.30 ± 1.12 | *96.39 ± 0.20* |
| hist | TGAT | 67.13 ± 0.84 | 62.14 ± 6.60 | *72.38 ± 0.18* | 55.74 ± 0.91 | 52.96 ± 2.14 | 96.05 ± 0.52 |
| hist | TGN | 68.42 ± 3.07 | 74.00 ± 7.57 | 66.70 ± 1.64 | 58.44 ± 5.51 | 69.37 ± 3.93 | 93.05 ± 2.35 |
| hist | CAWN | 66.53 ± 2.77 | 68.82 ± 8.23 | 64.72 ± 0.97 | 55.71 ± 0.38 | 51.26 ± 0.04 | 84.16 ± 0.49 |
| hist | EdgeBank | 63.84 ± 0.00 | 63.22 ± 0.00 | 70.53 ± 0.00 | *81.32 ± 0.00* | *84.89 ± 0.00* | 88.81 ± 0.00 |
| hist | TCL | 65.93 ± 3.00 | 80.53 ± 3.95 | 70.68 ± 0.24 | 55.90 ± 1.17 | 52.30 ± 2.35 | 93.86 ± 0.21 |
| hist | GraphMixer | 74.34 ± 0.87 | 81.65 ± 1.02 | 71.47 ± 0.26 | 57.05 ± 1.22 | 51.20 ± 1.60 | 93.36 ± 0.41 |
| hist | DyGformer | **97.00 ± 0.31** | *85.30 ± 3.88* | 66.59 ± 0.49 | 64.41 ± 1.40 | 60.84 ± 1.58 | **97.57 ± 0.06** |
| hist | SLATE | *84.38 ± 0.81* | 83.53 ± 1.64 | **75.09 ± 1.17** | **84.05 ± 0.98** | **96.85 ± 0.27** | 93.58 ± 0.16 |
| ind | JODIE | 48.42 ± 0.66 | 50.27 ± 5.13 | 69.07 ± 4.02 | 60.42 ± 1.48 | *67.79 ± 1.46* | 93.43 ± 1.78 |
| ind | DyREP | 58.61 ± 0.86 | *83.44 ± 1.16* | 70.57 ± 1.82 | 60.19 ± 1.24 | 67.53 ± 1.98 | 94.18 ± 0.10 |
| ind | TGAT | 68.82 ± 1.21 | 61.91 ± 5.82 | 75.48 ± 0.26 | 60.61 ± 1.24 | 52.89 ± 1.61 | 94.35 ± 0.48 |
| ind | TGN | 65.34 ± 2.87 | 67.57 ± 6.47 | 71.09 ± 2.72 | 61.04 ± 6.01 | 67.63 ± 2.67 | 90.18 ± 3.28 |
| ind | CAWN | 67.75 ± 1.00 | 65.81 ± 8.52 | 69.18 ± 1.52 | 62.54 ± 0.67 | 52.19 ± 0.34 | 89.31 ± 0.27 |
| ind | EdgeBank | 62.16 ± 0.00 | 64.74 ± 0.00 | **81.08 ± 0.00** | *72.97 ± 0.00* | 66.30 ± 0.00 | 85.20 ± 0.00 |
| ind | TCL | 65.85 ± 1.75 | 78.15 ± 3.34 | 74.62 ± 0.18 | 61.06 ± 1.74 | 50.62 ± 0.82 | 91.35 ± 0.21 |
| ind | GraphMixer | 69.48 ± 0.63 | 79.63 ± 0.84 | 74.87 ± 0.21 | 60.15 ± 1.29 | 51.60 ± 0.73 | 90.87 ± 0.35 |
| ind | DyGformer | **95.44 ± 0.57** | 81.25 ± 3.62 | 70.92 ± 1.78 | 55.79 ± 1.02 | 51.91 ± 0.84 | **94.75 ± 0.28** |
| ind | SLATE | *93.42 ± 0.75* | **95.21 ± 0.51** | *79.03 ± 0.95* | **92.87 ± 0.62** | **93.74 ± 0.29** | *94.52 ± 0.86* |

: Complementary results on CanParl, USLegis, Flights, Trade, UNVote and Contact under the same negative sampling strategies. Best in bold, second best in italics.

NeurIPS Paper Checklist

  1. Claims

  2. Question: Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope?

  3. Answer:

  4. Justification: Our paper focuses on a unified spatio-temporal encoding based on the spectrum of the supra-Laplacian, as developed in Section 3.1. We also introduce a fully-connected architecture utilizing this spatio-temporal encoding (Section 3.2) for the task of link prediction (Section 3.3). Each of these claims is validated in Table [tab:impact_components], as is the claim of better SLATE performance against state-of-the-art methods in Tables [tab:ctdg_main_auc] and [tab:dtdg_main_auc].

  5. Guidelines:

    • The answer NA means that the abstract and introduction do not include the claims made in the paper.

    • The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers.

    • The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings.

    • It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper.

  6. Limitations

  7. Question: Does the paper discuss the limitations of the work performed by the authors?

  8. Answer:

  9. Justification: We discuss the limitations of SLATE in the conclusion (Section 5), where we list multiple negative points of our work and suggest possible improvements, particularly in terms of better scalability and evaluation on other graph- or node-based tasks.

  10. Guidelines:

    • The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper.

    • The authors are encouraged to create a separate “Limitations” section in their paper.

    • The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be.

    • The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated.

    • The authors should reflect on the factors that influence the performance of the approach. For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon.

    • The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size.

    • If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness.

    • While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren’t acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations.

  11. Theory Assumptions and Proofs

  12. Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?

  13. Answer:

  14. Justification: We state in Section 3.1 that a connected graph has a strictly positive second-smallest Laplacian eigenvalue. We include this proof and its source in Appendix 7.3.

  15. Guidelines:

    • The answer NA means that the paper does not include theoretical results.

    • All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.

    • All assumptions should be clearly stated or referenced in the statement of any theorems.

    • The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition.

    • Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in Appendix or supplemental material.

    • Theorems and Lemmas that the proof relies upon should be properly referenced.

  16. Experimental Result Reproducibility

  17. Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?

  18. Answer:

  19. Justification: We provide a comprehensive overview of our model in Figure 2. Detailed discussions of our architecture can be found in Sections 3.2 and 3.3. The algorithm for our supra-Laplacian computation is given in Appendix 7.3. Implementation specifics are outlined in the implementation details paragraph of Section 4 and further elaborated in Appendix 10.

  20. Guidelines:

    • The answer NA means that the paper does not include experiments.

    • If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not.

    • If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable.

    • Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general, releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed.

    • While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example

      1. If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm.

      2. If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully.

      3. If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset).

      4. We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results.

  21. Open access to data and code

  22. Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?

  23. Answer:

  24. Justification: The code of SLATE is provided at this link: https://github.com/ykrmm/SLATE. Our code is designed to be comprehensible, ensuring that all presented results are reproducible.

  25. Guidelines:

    • The answer NA means that paper does not include experiments requiring code.

    • Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.

    • While we encourage the release of code and data, we understand that this might not be possible, so “No” is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark).

    • The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details.

    • The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc.

    • The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why.

    • At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable).

    • Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.

  26. Experimental Setting/Details

  27. Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?

  28. Answer:

  29. Justification: The selection of datasets, their splitting, and descriptions are presented in Appendix 9. We use the same evaluation protocols as papers well-recognized by the community Yu et al., 2023; Yang et al., 2021. Hyperparameter optimization is detailed in Appendix 10, and the optimizer settings are described at the beginning of Section 4.

  30. Guidelines:

    • The answer NA means that the paper does not include experiments.

    • The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them.

    • The full details can be provided either with the code, in appendix, or as supplemental material.

  31. Experiment Statistical Significance

  32. Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?

  33. Answer:

  34. Justification: Following the protocols we are based on, all results in the paper, including those from ablation studies, are averaged over 5 runs with the standard deviation reported.

  35. Guidelines:

    • The answer NA means that the paper does not include experiments.

    • The authors should answer “Yes” if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support the main claims of the paper.

    • The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions).

    • The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.)

    • The assumptions made should be given (e.g., Normally distributed errors).

    • It should be clear whether the error bar is the standard deviation or the standard error of the mean.

    • It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified.

    • For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates).

    • If error bars are reported in tables or plots, The authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text.

  36. Experiments Compute Resources

  37. Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?

  38. Answer:

  39. Justification: The analysis of the time-memory efficiency of our model is presented in Table [tab:efficiency], where we also compare it with other state-of-the-art models. We also report the number of parameters of SLATE.

  40. Guidelines:

    • The answer NA means that the paper does not include experiments.

    • The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage.

    • The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.

    • The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn’t make it into the paper).

  41. Code Of Ethics

  42. Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?

  43. Answer:

  44. Justification: Our research adheres strictly to the NeurIPS Code of Ethics. Our study does not involve sensitive data or unethical practices, and we have followed all relevant guidelines to ensure ethical compliance throughout our work.

  45. Guidelines:

    • The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.

    • If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics.

    • The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction).

  46. Broader Impacts

  47. Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?

  48. Answer:

  49. Justification: Our model is a discriminative model for link prediction on academic datasets, which do not contain any private or sensitive information.

  50. Guidelines:

    • The answer NA means that there is no societal impact of the work performed.

    • If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact.

    • Examples of negative societal impacts include potential malicious or unintended uses (e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations.

    • The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster.

    • The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology.

    • If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML).

  51. Safeguards

  52. Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?

  53. Answer:

  54. Justification: We don’t release new data or harmful generative models.

  55. Guidelines:

    • The answer NA means that the paper poses no such risks.

    • Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters.

    • Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images.

    • We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort.

  56. Licenses for existing assets

  57. Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?

  58. Answer:

  59. Justification: We properly cite all the datasets and baselines used in our paper. Each dataset and model is credited to its original creators, and we adhere to the specified licenses and terms of use. The evaluation protocols we employ are based on established standards from previous works, ensuring compliance with the original authors’ conditions.

  60. Guidelines:

    • The answer NA means that the paper does not use existing assets.

    • The authors should cite the original paper that produced the code package or dataset.

    • The authors should state which version of the asset is used and, if possible, include a URL.

    • The name of the license (e.g., CC-BY 4.0) should be included for each asset.

    • For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.

    • If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets{.uri} has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset.

    • For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided.

    • If this information is not available online, the authors are encouraged to reach out to the asset’s creators.

  61. New Assets

  62. Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?

  63. Answer:

  64. Justification: Our paper introduces new assets, including code implementations and datasets for DTDG. Detailed documentation is provided alongside these assets, following structured templates that include information about training and licensing.

  65. Guidelines:

    • The answer NA means that the paper does not release new assets.

    • Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc.

    • The paper should discuss whether and how consent was obtained from people whose asset is used.

    • At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file.

  66. Crowdsourcing and Research with Human Subjects

  67. Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?

  68. Answer:

  69. Justification: We do not conduct research with human subjects.

  70. Guidelines:

    • The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.

    • Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.

    • According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector.

  71. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects

  72. Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?

  73. Answer:

  74. Justification: We do not conduct research with human subjects.

  75. Guidelines:

    • The answer NA means that the paper does not involve crowdsourcing nor research with human subjects.

    • Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper.

    • We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution.

    • For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review.

Acknowledgments

We acknowledge the financial support provided by PEPR Sharp (ANR-23-PEIA-0008, ANR, FRANCE 2030). We would also like to thank the LITIS laboratory in Rouen and especially Leshanshui Yang, who helped us better position our method. We also thank Elias Ramzi for his feedback on the paper and assistance in writing the abstract.

References
  1. Ying, R., He, R., Chen, K., Eksombatchai, P., Hamilton, W. L., & Leskovec, J. (2018). Graph convolutional neural networks for web-scale recommender systems. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 974–983.
  2. He, X., Deng, K., Wang, X., Li, Y., Zhang, Y., & Wang, M. (2020). Lightgcn: Simplifying and powering graph convolution network for recommendation. Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 639–648.
  3. Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Bridgland, A., Meyer, C., Kohl, S. A. A., Ballard, A. J., Cowie, A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., … Hassabis, D. (2021). Highly accurate protein structure prediction with AlphaFold. Nature 2021 596:7873, 596(7873), 583–589. 10.1038/s41586-021-03819-2
  4. Kaba, O., & Ravanbakhsh, S. (2022). Equivariant networks for crystal structures. Advances in Neural Information Processing Systems, 35, 4150–4164.
  5. Skarding, J., Gabrys, B., & Musial, K. (2021). Foundations and Modeling of Dynamic Networks Using Dynamic Graph Neural Networks: A Survey. IEEE Access, 9, 79143–79168. 10.1109/ACCESS.2021.3082932
  6. Yang, L., Chatelain, C., & Adam, S. (2024). Dynamic Graph Representation Learning With Neural Networks: A Survey. IEEE Access, 12, 43460–43484. 10.1109/ACCESS.2024.3378111
  7. You, J., Du, T., & Leskovec, J. (2022). ROLAND: Graph Learning Framework for Dynamic Graphs. KDD. 10.48550/arxiv.2208.07239
  8. Sankar, A., Wu, Y., Gou, L., Zhang, W., & Yang, H. (2020). DySAT: Deep Neural Representation Learning on Dynamic Graphs via Self-Attention Networks. In J. Caverlee, X. (Ben) Hu, M. Lalmas, & W. Wang (Eds.), WSDM ’20: The Thirteenth ACM International Conference on Web Search and Data Mining, Houston, TX, USA, February 3-7, 2020 (pp. 519–527). ACM. 10.1145/3336191.3371845
  9. Pareja, A., Domeniconi, G., Chen, J., Ma, T., Suzumura, T., Kanezashi, H., Kaler, T., Schardl, T., & Leiserson, C. (2020). Evolvegcn: Evolving graph convolutional networks for dynamic graphs. Proceedings of the AAAI Conference on Artificial Intelligence, 34, 5363–5370.
  10. Dwivedi, V. P., & Bresson, X. (2020). A Generalization of Transformer Networks to Graphs. arXiv Preprint arXiv:2012.09699.
  11. Wu, Q., Zhao, W., Yang, C., Zhang, H., Nie, F., Jiang, H., Bian, Y., & Yan, J. (2024). Simplifying and empowering transformers for large-graph representations. Advances in Neural Information Processing Systems, 36.
  12. Kreuzer, D., Beaini, D., Hamilton, W. L., Létourneau, V., & Tossou, P. (2021). Rethinking Graph Transformers with Spectral Attention. Advances in Neural Information Processing Systems, 26, 21618–21629. 10.48550/arxiv.2106.03893
  13. Xu, K., Hu, W., Leskovec, J., & Jegelka, S. (2019). How Powerful are Graph Neural Networks? 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. https://openreview.net/forum?id=ryGs6iA5Km
  14. Topping, J., Giovanni, F. D., Chamberlain, B. P., Dong, X., & Bronstein, M. M. (2022). Understanding over-squashing and bottlenecks on graphs via curvature. The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. https://openreview.net/forum?id=7UmjRGzp-A
  15. Alon, U., & Yahav, E. (2021). On the Bottleneck of Graph Neural Networks and its Practical Implications. 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. https://openreview.net/forum?id=i80OPhOCVH2