Introduction

Network representation of complex interactions among elements is an overarching framework heavily used in many fields of science1,2,3. For social systems, the dynamics of interactions between individuals (whether electronic, online or face-to-face) can be represented as time-varying networks, often called temporal networks, in which nodes come and go and edges are activated or deactivated as time goes on4,5. Many essential features of human behaviour encoded in the representation of temporal networks have been revealed over the past decade, such as burstiness6,7,8, circadian/diurnal rhythms9, temporal communities10, higher-order interactions11,12, etc.

While the studies of temporal networks shed light on the time-varying nature of interactions between nodes, dynamics in social systems emerge not only at the local level13, but also at the global level. In a wide variety of social contexts, network size (i.e., the number of active nodes) and the number of edges observed at a given point in time are very often not constant, and accordingly the average degree increases or decreases14,15. In fact, the numbers of aggregate nodes and edges have been shown to have a scaling relationship known as the densification power law or densification scaling14. In temporal networks (i.e., a sequence of snapshot networks), any variation in the number of active nodes N and the number of edges M can be a priori attributed to changes in (i) the population in the system (e.g., the number of students present in a school, the number of attendees at a conference, etc.); (ii) the probability of two nodes being connected; or (iii) both. With a constant probability of edge creation, N and M will increase if more nodes enter the system, since each node will have a higher chance of finding partners. Likewise, for a given population, if the probability of two nodes being connected increases, M will surely increase, and N will rise as well, since isolated nodes, if any, become more likely to get connected.

These two mechanisms are fundamental factors that bring about the dynamics of N and M, yet separating their contributions based on the dynamical behaviour of N and M is a challenging problem. In a wide variety of social and economic systems, network dynamics are likely to be driven by a mixture of these two mechanisms, and moreover their relative importance may occasionally change as the network evolves15. In theory, each of these two mechanisms leads to a distinctive type of densification scaling. The first one, generated by the evolution of the population, is a scaling behaviour similar to the typical densification scaling in which the number of edges M scales with the number of active nodes N with a constant exponent \(\alpha\), i.e., \(M\propto N^\alpha\)14. The second one is an accelerating growth of M, which is caused by the evolution of the probability of edge creation15. In fact, for the human contact networks we study, neither of these two types of scaling is observed in its original form. Rather, we observe a “mixed” scaling behaviour which appears to be a composite of the two types and therefore cannot be explained by a single scaling law.

Here, we develop a Bayesian statistical method to identify the source of the dynamics generating network densification and sparsification based on the sequence of N and M. To take into account possible changes in the source of dynamics, we derive two specifications (i.e., “regimes”) for the solution of a simple generative model, namely a dynamic hidden variable model, each of which captures one of the two fundamental mechanisms. By fitting the two specifications simultaneously to the observed mixed scaling relationship using a unified estimation framework, known as the Markov regime-switching model16,17, we are able to estimate the probability that the dynamical source of densification or sparsification at a given point in time is attributable to a particular mechanism. At the same time, the Bayesian inference also allows us to trace the paths of the time-varying parameters directly related to the dynamical source, i.e., the population in the system and the activity level of nodes. An important advantage of the regime-switching model is that it allows the “true” model specification to occasionally switch, possibly depending on the social context.

In this work we analyse networks of face-to-face human interactions collected by the SocioPatterns collaboration18. We focus on four datasets: contact networks in two scientific conferences, a hospital and a workplace. Such networks can indeed be affected by the two fundamental mechanisms at the same time, because (i) individuals can always enter and exit the system, and (ii) the presence of a time schedule could facilitate or inhibit face-to-face interactions (e.g., attendees of a conference are more likely to have interactions during coffee breaks than during keynote talks). In particular, using data on academic conferences has an important advantage, as it allows us to compare the dynamical regimes detected by the proposed method with the “ground-truth” conference time schedules. We find indeed that during keynote talks, parallel sessions and coffee breaks, the temporal densification and sparsification in the contact networks formed by conference attendees are mainly related to shifts in the chance of contacts being made between attendees present at the venue. On the other hand, shifts in the population are the main driving force of densification and sparsification during registration and poster sessions. This result is consistent with our intuition that the number of attendees in the middle of the program would be mostly constant, while it may be more likely to change during registration, which is held in the morning, and poster sessions in which not all of the attendees participate. For contact networks in a hospital and a workplace, this kind of comparison with a prespecified time schedule is not possible because there are no such rigorous time constraints to follow. Nevertheless, in all the systems we examined, the proposed method reveals that the main driving force of network densification and sparsification occasionally switches, suggesting that the formation of social ties in physical space generally involves multiple dynamical sources.

Results

Empirical evidence on mixed densification scaling

We focus our analysis on temporal contact networks taken from the following four datasets:

  • WS-16: Contacts between participants of the Computational Social Science Winter Symposium 2016 at GESIS in Cologne on November 30, 201619.

  • IC2S2-17: Contacts between participants of the International Conference on Computational Social Science 2017 at GESIS in Cologne on July 12, 201719.

  • Hospital: Contacts among patients, nurses, doctors and staff in a hospital in Lyon on December 8, 201020.

  • Workplace: Contacts between workers in an office building in France on June 27, 201521.

These data consist of contacts between individuals collected every 20 seconds using RFID sensors18,22. A “contact” is here defined as a physical, face-to-face proximity event. The datasets thus give us temporal networks in which nodes are individuals and edges encode the contacts occurring between them. All datasets exhibit large and abrupt fluctuations of the number of edges that are typical in these non-stationary systems (see Fig. 1, lower panels). In these particular contexts of social interactions, these transitions between high and low activity periods are often related to specified schedules: from talk sessions to coffee breaks in the conferences, changes in shifts in the hospital, from desk work to meetings in the workplace.

Figure 1

Dynamical behaviour of the number of active nodes N and the number of active edges M. Upper panels show the dynamical relationship between N and M. Each dot represents a snapshot network created over a 10-min time window. Gray dashed and dotted lines denote N/2 (i.e., the lower bound for M) and \(N(N-1)/2\) (i.e., the upper bound for M), respectively. Lower panels show the behaviour of M over time.

In many social and economic dynamical networks, the numbers of aggregate edges and nodes have a superlinear scaling relationship called the “densification power law”14,23,24, in which the average degree is increasing with the number of nodes, i.e., “densification”. For temporal networks, where there is a sequence of network snapshots, a similar type of scaling emerges from the dynamics of the population, in which nodes enter and leave the system, keeping the chance of two nodes being connected constant15,25. However, another type of scaling emerges in real-world systems for which the population is fixed. In such systems densification is “explosive”, with the scaling exponent increasing with N15. While these two classes of scaling could be differentiated and identified from data if we observe a specific type of scaling15, in general there may exist a mixture of them that cannot be easily classified as one of the two classes. Indeed, in the four datasets we study, no clear scaling relationship appears (Fig. 1, upper panels). In the following we show that the mixed shape of the empirical densification behaviour reflects a mixture of the two classes of scaling.

Two dynamical regimes in the dynamic hidden-variable model

To explore the temporal dynamics of densification and sparsification, we consider a dynamic version of the hidden variable model. The probability that two nodes i and j are in contact within a given time window t is:

$$\begin{aligned} \mathscr {P}_{ij,t} = \kappa _t a_{i} a_{j}, \;\;\; i,j=1,\ldots , N_{{\mathrm{p}},t}, \;\; t = 1,\ldots , T, \end{aligned}$$
(1)

where \(a_{i}\) is the “fitness” that represents the intrinsic activity level of node i26,27,28, and T denotes the last time window in the data. There are two time-varying parameters in the model. The first one is \(\kappa _{t}>0\), which modulates the overall activity rhythm of nodes. A variation in \(\kappa\) would reflect the time schedule of a conference or a school, working hours in an office or a hospital, or the circadian rhythm of individuals9,22,29,30. The second time-varying parameter \(N_{{\mathrm{p}},t}\) denotes the potential number of active nodes at time t, i.e., the total of active and inactive nodes that are in the room or the building. It should be noted that although the number of active nodes (i.e., nodes having at least one edge) \(N_t\) is always observable from the data, the potential number of nodes \(N_{{\mathrm{p}},t}\) is not. We do not usually know how many people were actually in the room at a given time because people could enter and exit the room at any time without interacting with any other individual. We can observe the number of active nodes that appear in the record of contacts, but in many cases there is no record of nodes without any interaction. We assume that activity \(a_i\) is uniformly distributed on [0, 1], because (i) we do not have any prior information about the full distribution of the activity levels of all nodes including isolated ones, and (ii) introducing a more general distribution prohibits us from obtaining an analytical solution, which makes it difficult to implement parameter estimation.
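
To make the generative process concrete, the following is a minimal simulation sketch of Eq. (1) under the assumptions above (uniform activities, one snapshot per time window). It is our illustration, not the released analysis code, and the function name is ours.

```python
import numpy as np

def sample_snapshot(n_p, kappa, rng=None):
    """Draw one snapshot from the dynamic hidden variable model, Eq. (1).

    n_p   : potential number of nodes present in the window
    kappa : overall activity level (kappa <= 1 keeps kappa * a_i * a_j a valid probability)
    Returns (N, M): the numbers of active nodes and of edges.
    """
    rng = rng or np.random.default_rng()
    a = rng.uniform(0.0, 1.0, size=n_p)                 # intrinsic activities a_i ~ U[0, 1]
    p = kappa * np.outer(a, a)                          # P_ij = kappa * a_i * a_j
    upper = np.triu(rng.random((n_p, n_p)) < p, k=1)    # sample each pair once (upper triangle)
    degree = (upper | upper.T).sum(axis=1)
    return int((degree > 0).sum()), int(upper.sum())

# Example: one 10-min window with roughly 120 people present and a moderate activity level
N, M = sample_snapshot(n_p=120, kappa=0.3)
```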

The average numbers of active nodes N and edges M are analytically given as (see "Analytical expression for N and M" section in Methods for derivation):

$$\begin{aligned} N&= N_{\mathrm{p}}\left[ 1- \frac{2}{\kappa N_{\mathrm{p}}}\left( 1-\left( 1-\frac{\kappa }{2}\right) ^{N_{\mathrm{p}}}\right) \right] , \end{aligned}$$
(2)
$$\begin{aligned} M&= \frac{1}{8} \kappa N_{\mathrm{p}}(N_{\mathrm{p}}-1), \end{aligned}$$
(3)

where we drop the time subscript t for brevity. From these expressions, it is clear that the two parameters \(\kappa\) and \(N_{\mathrm{p}}\) play different roles in the determination of N and M, but it is not clear how N and M correlate. To see the direct relationship between N and M, we eliminate one of the two parameters in Eq. (2), using Eq. (3). By doing this, we can effectively endogenise either \(\kappa\) or \(N_{\mathrm{p}}\). Depending on whether we endogenise \(\kappa\) or \(N_{\mathrm{p}}\), we obtain different functional forms that connect N and M.
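
For reference, Eqs. (2) and (3) translate directly into code (a sketch; the function names are ours). Fixing one parameter and sweeping the other traces the two functional forms discussed next.

```python
def expected_N(kappa, n_p):
    """Average number of active nodes, Eq. (2)."""
    q0 = 2.0 / (kappa * n_p) * (1.0 - (1.0 - kappa / 2.0) ** n_p)  # fraction of isolated nodes
    return n_p * (1.0 - q0)

def expected_M(kappa, n_p):
    """Average number of edges, Eq. (3)."""
    return kappa * n_p * (n_p - 1) / 8.0
```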

Regime 1: \(N_{\mathrm{p}}\)-driven dynamics

First, let us consider the case of time-varying \(N_{\mathrm{p}}\). This is a situation in which the dynamics of N and M are fully driven by changes in the population. We refer to such a system as being in “Regime 1” or “state 1”:

Definition 1.

A system is in Regime 1 if \(N_{\mathrm{p}}\) is time-varying and \(\kappa\) is constant, in which case the dynamical relationship between N and M is given by:

$$\begin{aligned} N_t&= N_{\mathrm{p}}(M_t,\kappa ) \left[ 1- \frac{2}{\kappa N_{\mathrm{p}}(M_t,\kappa )}\left( 1-\left( 1-\frac{\kappa }{2}\right) ^{N_{\mathrm{p}}(M_t,\kappa )}\right) \right] \nonumber \\&\equiv h^1(M_t;\kappa ), \end{aligned}$$
(4)

where the time-varying \(N_{\mathrm{p}}\) value is expressed as a function of \(M_t\) and \(\kappa\): \(N_{\mathrm{p}}(M_t,\kappa ) \equiv \frac{1+ \sqrt{1+{32M_t/{\kappa }}}}{2}\) (see Eq. 3).

For the purpose of parameter estimation, we introduce an error term as \(N_t=h^1(M_t;\widehat{\kappa }) + \varepsilon _{1,t},\) where \(\widehat{\kappa }\) denotes the estimated value of \(\kappa\), and \(\varepsilon _{1,t}\) is a residual term following a normal distribution with mean zero and standard deviation \(\sigma _1\). The estimated value of \(N_{{\mathrm{p}},t}\) when the system is in Regime 1 is then:

$$\begin{aligned} \widehat{N}_{{\mathrm{p}},t}|_{S_t=1} = \frac{1+ \sqrt{1+{32M_{t}/{\widehat{\kappa }}}}}{2}, \end{aligned}$$
(5)

where \(S_t=1\) denotes the fact that the system is in Regime 1 at time t. In Regime 1, the network dynamics is entirely driven by the time-varying population, which we call “\(N_{\mathrm{p}}\)-driven” dynamics. For a given \(\kappa\), the slope of the densification scaling is close to constant, while different \(\kappa\) yield different slopes (Fig. 2, lower left).
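
In code, the Regime-1 curve amounts to inverting Eq. (3) for \(N_{\mathrm{p}}\) and plugging the result into Eq. (2); a sketch reusing expected_N from the previous snippet (names are ours):

```python
import numpy as np

def np_from_M(M, kappa):
    """Potential population implied by M for a fixed kappa, Eq. (5)."""
    return (1.0 + np.sqrt(1.0 + 32.0 * M / kappa)) / 2.0

def h1(M, kappa):
    """Regime-1 relationship N = h^1(M; kappa), Eq. (4)."""
    return expected_N(kappa, np_from_M(M, kappa))
```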

Figure 2

Schematic of the identification method. Empirical densification is fitted to the regime switching model in which the model switches from Regime 1 to Regime 2 (resp. from Regime 2 to Regime 1) with probability \(p_{12}\) (resp. \(p_{21}\)). Then, the estimated parameters are used to infer the probability of the system being in Regime 1 at a given time t. For panels at lower left and lower middle, different colours denote different \(N_{\mathrm{p}}\), and different symbols denote different \(\kappa\) (see Eqs. 2 and 3). Solid line in the bottom left (resp. bottom middle) denotes the \(N_{\mathrm{p}}\)-driven (resp. \(\kappa\)-driven) scaling with \(\kappa =0.3\) (resp. \(N_{\mathrm{p}}=126\)). If the scaling is \(N_{\mathrm{p}}\)-driven (resp. \(\kappa\)-driven), the time variation of N and M is fully caused by shifts in \(N_{\mathrm{p}}\) (resp. \(\kappa\)).

Regime 2: \(\kappa\)-driven dynamics

Next, let us consider the case of time-varying \(\kappa\). This corresponds to a situation in which the dynamics of the system is fully driven by changes in the overall activity of individuals. We refer to such a system as being in “Regime 2” or “state 2”:

Definition 2.

A system is in Regime 2 if \(\kappa\) is time-varying and \(N_{\mathrm{p}}\) is constant, in which case the dynamical relationship between N and M is given by

$$\begin{aligned} N_t&= N_{\mathrm{p}}\left[ 1- \frac{2}{\kappa (M_t,N_{\mathrm{p}})N_{\mathrm{p}}}\left( 1-\left( 1-\frac{\kappa (M_t,N_{\mathrm{p}})}{2}\right) ^{N_{\mathrm{p}}}\right) \right] \nonumber \\&\equiv h^2(M_t;N_{\mathrm{p}}), \end{aligned}$$
(6)

where the time-varying value of \(\kappa\) is expressed as a function of \(M_t\) and \(N_{\mathrm{p}}\): \(\kappa (M_t,N_{\mathrm{p}}) \equiv \frac{8M_t}{N_{\mathrm{p}}(N_{\mathrm{p}}-1)}\) (see Eq. 3).

For estimation, we introduce an error term as \(N_t=h^2(M_t;\widehat{N}_{\mathrm{p}}) + \varepsilon _{2,t},\) where \(\widehat{N}_{\mathrm{p}}\) denotes the estimated value of \(N_{\mathrm{p}}\), and \(\varepsilon _{2,t}\) is a residual term following a normal distribution with mean zero and standard deviation \(\sigma _2\). The estimated value of \(\kappa\) at time t when the system is in Regime 2 is then:

$$\begin{aligned} \widehat{\kappa }_t|_{S_t=2} = \frac{8M_t}{\widehat{N}_{\mathrm{p}}(\widehat{N}_{\mathrm{p}}-1)}. \end{aligned}$$
(7)

In Regime 2, the network dynamics is fully driven by the individuals’ time-varying activity levels, which we call “\(\kappa\)-driven” dynamics, and the slope of the densification scaling in fact increases with N (Fig. 2, lower middle). This kind of accelerating growth of M naturally happens when edges are created in a fixed-population system, in which case the network tends to become denser as the number of inactive nodes vanishes.
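
The corresponding sketch for Regime 2, again reusing expected_N, inverts Eq. (3) for \(\kappa\) instead:

```python
def kappa_from_M(M, n_p):
    """Overall activity implied by M for a fixed population, Eq. (7)."""
    return 8.0 * M / (n_p * (n_p - 1))

def h2(M, n_p):
    """Regime-2 relationship N = h^2(M; N_p), Eq. (6)."""
    return expected_N(kappa_from_M(M, n_p), n_p)
```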

Analysis of switching dynamics behind temporal densification and sparsification

A Markov regime switching model

In real-world networks, the mechanism of densification and sparsification may occasionally change depending on the context, such as the working schedule, coffee breaks, lunch time, etc. To incorporate such a possibility, we propose a unified framework based on the Markov regime switching model in which the hidden state of a system can switch from Regime 1 to Regime 2 (respectively from Regime 2 to Regime 1) with probability \(p_{12}\) (resp. \(p_{21}\))16,17. An important advantage of the regime switching model is that it allows us to calculate the probability of a system being in Regime \(s\in \{1,2\}\) at time t for a given parameter set \(\varvec{\theta } =\{N_{\mathrm{p}},\kappa ,\sigma _1,\sigma _2,p_{11},p_{22}\}\). This probability of the system being in Regime s can then be interpreted as the relevance of each mechanism in explaining the densification dynamics at a given time (Fig. 2). We employ a Bayesian approach for the estimation of the parameters, using Markov chain Monte Carlo (MCMC) to obtain posterior distributions (see Methods, “Bayesian estimation”, for the estimation method). It should be noted that \({\varvec{\theta }}\) does not contain \(N_{{\mathrm{p}},t}|_{S_t=1}\) and \(\kappa _t|_{S_t=2}\) since \({\varvec{\theta }}\) only contains constant parameters to be directly estimated by the Bayesian method.

In the following, we use the smoothed probability \({\mathrm{Pr}}(S_t=s|\psi _T;\varvec{\theta })\), which is calculated conditional on all the information available at time T, denoted by \(\psi _T\) (see Methods, “Smoothed probability”, for the full derivation)31. Validation analyses using synthetic networks show that the proposed method correctly detects the switching of regimes and estimates the model parameters quite accurately (Table S1, Figs. S1 and S2 in Supporting Information (SI)). Given the probability of being in Regime \(s\in \{1,2\}\), we can estimate the dynamical parameters \(N_{{\mathrm{p}},t}\) and \(\kappa _{t}\) as:

$$\begin{aligned} \widehat{N}_{{\mathrm{p}},t}&= {\mathrm{Pr}}(S_t=1|\psi _{T};\widehat{\varvec{\theta }})\cdot \widehat{N}_{{\mathrm{p}},t}|_{S_t=1} \; +\; {\mathrm{Pr}}(S_t=2|\psi _{T};\widehat{\varvec{\theta }})\cdot \widehat{N}_{\mathrm{p}}, \end{aligned}$$
(8)
$$\begin{aligned} \widehat{\kappa }_{t}&= {\mathrm{Pr}}(S_t=1|\psi _{T};\widehat{\varvec{\theta }})\cdot \widehat{\kappa } \; + \; {\mathrm{Pr}}(S_t=2|\psi _{T};\widehat{\varvec{\theta }})\cdot \widehat{\kappa }_t|_{S_t=2}, \end{aligned}$$
(9)

where \(\widehat{\varvec{\theta }}\) denotes the set of estimated parameters, which is summarised in Table 1 in Methods.
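
In practice, Eqs. (8) and (9) are probability-weighted mixtures of the two regime-specific estimates; a sketch reusing np_from_M and kappa_from_M from the earlier snippets (array names are ours):

```python
import numpy as np

def dynamic_estimates(M, pr_regime1, kappa_hat, np_hat):
    """Time-varying estimates of N_p and kappa, Eqs. (8)-(9).

    M          : array of observed edge counts M_t
    pr_regime1 : smoothed probabilities Pr(S_t = 1 | psi_T; theta_hat)
    kappa_hat  : estimated constant kappa (the Regime-1 parameter)
    np_hat     : estimated constant N_p (the Regime-2 parameter)
    """
    M, pr1 = np.asarray(M, dtype=float), np.asarray(pr_regime1)
    np_t = pr1 * np_from_M(M, kappa_hat) + (1.0 - pr1) * np_hat
    kappa_t = pr1 * kappa_hat + (1.0 - pr1) * kappa_from_M(M, np_hat)
    return np_t, kappa_t
```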

Classification of network dynamics

Figure 3

Identification of dynamical regimes. Upper panels show the smoothed probability of being in Regime 1 (i.e., \(N_{\mathrm{p}}\)-driven dynamics) at each time window. The 95% credible interval is indicated by shading. Lower panels show N-M plots with the classified regimes denoted by different colours and symbols. We identify a snapshot network as being in Regime 1 (resp. Regime 2) if the estimated probability of being in Regime 1 (resp. Regime 2) is greater than 0.5 in more than 95% of MCMC samples. Otherwise, a network is considered as being in an undetermined “gray area”.

The Bayesian estimation of the parameters suggests that the empirical systems’ dynamics are indeed occasionally switching between \(N_{\mathrm{p}}\)-driven and \(\kappa\)-driven (Fig. 3, upper panels). For the conference data, a common feature is that the probability of being in Regime 1 is almost 1 prior to the first session and after the last keynote session of the day, and mostly zero in between (see Fig. 4 for the correspondence between the dynamics and the schedule of the conferences). For WS-16, we see further fluctuations between the two regimes, one linked to the lunch break, the other to the poster session which closed the day. This suggests that the dynamics during the oral sessions, keynote talks and breaks are mainly driven by changes in the activity level of participants, while in the “open” time slots, such as registration, closing and the poster session, the dynamics are explained by a time-varying population. The same patterns linked to the schedule are found on the other days of the conferences (see Fig. S3a–c).

For the Workplace data we see a roughly similar pattern (Fig. 3, top right). The dynamics in the early morning and the evening, as well as around lunchtime and coffee breaks, are driven by variations in \(N_{\mathrm{p}}\), while changes in activity level are the main source of dynamics in between. This is, of course, not necessarily a general property of contact networks in physical space: on some days the regime remains almost constant for most of the day (Fig. S3e), while on others it changes repeatedly throughout the day (Fig. S3f). In the case of the Hospital data, there is no clear tendency for the regime-switching pattern (Fig. 3, third column, and Fig. S3d), which seems natural for such an open environment with visitors and medical workers coming and going, and no general, fixed schedule for working hours.

We next attempt to classify the snapshot networks into two groups based on their probability of being in a particular regime. We identify a snapshot network at t as being in Regime 1 (resp. Regime 2) if more than 95% of the samples of \({\mathrm{Pr}}(S_t=1|\psi _T;\varvec{\theta })\) generated by MCMC are greater than 0.5 (resp. lower than 0.5), i.e., if in more than 95% of the parameter samples the dynamics at t are attributed to Regime 1 (resp. Regime 2). Otherwise, the system is considered as being in an undetermined “gray area”. As seen in the lower panels of Fig. 3, the location of snapshot networks in the N-M space is strongly related to the regime they belong to. As expected, the snapshots in Regime 1 exhibit a scaling whose slope is almost constant (i.e., \(N_{\mathrm{p}}\)-driven scaling), while the snapshots in Regime 2 exhibit accelerating growth patterns (i.e., \(\kappa\)-driven scaling). Classifying each time window according to the underlying dynamical mechanism is thus essentially equivalent to identifying patterns in the N-M space.
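
A sketch of this classification rule, operating on MCMC draws of the smoothed probability (the array layout is our assumption):

```python
import numpy as np

def classify_regimes(pr_draws, level=0.95):
    """Label each time window as Regime 1, Regime 2 or "gray area".

    pr_draws : array of shape (n_draws, T) with MCMC samples of Pr(S_t = 1 | psi_T; theta)
    Returns an integer array of length T: 1 (Regime 1), 2 (Regime 2), 0 (gray area).
    """
    share1 = (np.asarray(pr_draws) > 0.5).mean(axis=0)   # share of draws favouring Regime 1
    labels = np.zeros(share1.shape, dtype=int)
    labels[share1 > level] = 1                            # >95% of draws say Regime 1
    labels[share1 < 1.0 - level] = 2                      # >95% of draws say Regime 2
    return labels
```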

Temporal dynamics of population and activity level

Figure 4

Estimation of \(N_{{\mathrm{p}},t}\) and \(\kappa _t\) for (a) WS-16 and (b) IC2S2-17. \(\widehat{N}_{{\mathrm{p}},t}\) and \(\widehat{\kappa }_{t}\) are shown in the upper and the lower panels, respectively, and the 95% credible interval is indicated by shading. Upper panels also show the number of active nodes (dashed blue line) at each time; the difference between the two lines thus represents the number of isolated nodes. Lower panels also show the number of edges at each time (dashed red line). Vertical dotted lines indicate the time windows of the scheduled sessions, with the labels in the middle.

We also examine the evolution of the dynamical parameters for both regimes (Fig. 4). For the two conferences (WS-16 and IC2S2-17), the estimated population size \(\widehat{N}_{{\mathrm{p}},t}\) increases at the beginning of the day and decreases at the end, consistent with the dynamics of participants entering and exiting the venue. The estimated activity parameter \(\widehat{\kappa }_{t}\) is high during these periods, and the level is consistent with the levels seen in highly active windows during social breaks. During the main program, the population is virtually constant and its size is consistent with the number of attendees (\(\sim\) 120 for WS-16, \(\sim\) 200 for IC2S2-17). The variation of network size is thus mainly driven by the schedule, which constrains the participants’ networking activity. In the case of WS-16, the fluctuations of \(\widehat{N}_{{\mathrm{p}},t}\) during the lunch break and the poster session are worth noting, since the variation of the observed network size N seems to be driven by both mechanisms; we see slight reductions in the estimated population while the overall activity is still high in these time windows. This demonstrates the ability of the proposed method to extract mixed-regime periods in which both mechanisms are at work (see Fig. 2, right, for a schematic). Similar patterns are also found on the other days (see Fig. S4).

Figure 5

Estimation of \(N_{{\mathrm{p}},t}\) and \(\kappa _t\) for (a) Hospital and (b) Workplace.

In the Hospital data, the regime-switching dynamics is much less periodic, with many transitions and mixed periods (Fig. 5a). This is however not surprising, because there is no fixed schedule regulating either the activity or the number of people present. For the Workplace data, we also do not expect a priori to see a clear segmentation of regimes because of the absence of a rigid schedule, as in the hospital. However, the dynamics uncovered by our method indicates that the situation is much simpler than that for Hospital, as there seems to be less variation in population size, aside from the “opening” and “closing” effects and a reduction in population around lunchtime (Fig. 5b). The day that exhibits many regime switches, however, presents many episodes of small variations in population size (see Fig. S4), similar to the dynamics observed in the Hospital data.

Non-monotonic behaviour of network density

Since both types of scaling emerging from the two different dynamics are superlinear, the average degree always increases with N. However, the density of networks, defined by \(2M/(N(N-1))\), is not always increasing with N (Fig. 6). In fact, when the dynamics is \(N_{\mathrm{p}}\)-driven, the network density mostly decreases as the network size N increases (Fig. 6, blue circle). Thus, when the driver of the dynamics is a change in population, a rise in N lowers the density. In contrast, when changes in \(\kappa\) play a dominant role, the network density may increase when the network size is sufficiently large (Fig. 6, pale-red cross). This is because when the number of active nodes N is close to its upper bound \(N_{\mathrm{p}}\), at which point the activity levels of the remaining inactive nodes are fairly low, the overall activity \(\kappa\) needs to be large enough for those low-activity nodes to gain at least one edge. This necessarily increases the total number of edges substantially, which leads to a “true” densification of the network.

Figure 6

Density versus the number of active nodes. Classification of dynamical regimes is conducted in the same way as in Fig. 3.

These properties are also confirmed by the analytical equation for the average network density15

$$\begin{aligned} \frac{2M}{N(N-1)} = \frac{\kappa }{4} \left( \frac{1}{1-q_0(\kappa ,{N}_{\mathrm{p}})}\right) ^2 \left( 1+\frac{q_0(\kappa ,{N}_{\mathrm{p}})}{N-1}\right) , \end{aligned}$$
(10)

where \(q_0\) denotes the fraction of isolated nodes in the system (see Eq. 16 in Methods “Analytical expression for N and M”). If the system is in Regime 1, in which \(\kappa\) is constant, the density monotonically approaches \(\kappa /4\) as \(N_{\mathrm{p}}\rightarrow \infty\) (i.e., \(q_{0}\rightarrow 0\) and \(N\rightarrow \infty\)). On the other hand, if the system is in Regime 2, in which \(N_{\mathrm{p}}\) is constant, there is no such a priori limit, and the density exhibits a non-monotonic behaviour. In Regime 2, a change in \(\kappa\) has two opposing effects on the network density. First, an increase in \(\kappa\) directly increases the density through a rise in the probability of edges being created. Second, a shift in \(\kappa\) would also increase N, which reduces the density through the third term in Eq. (10). Since \(q_0\rightarrow 0\) as N becomes sufficiently large, the latter effect vanishes, and therefore the density begins to rise with N for sufficiently large N.
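
These two limits are easy to probe numerically with a direct transcription of Eq. (10) (a sketch; sweeping \(N_{\mathrm{p}}\) at fixed \(\kappa\) mimics Regime 1, sweeping \(\kappa\) at fixed \(N_{\mathrm{p}}\) mimics Regime 2, with the example values \(\kappa = 0.3\) and \(N_{\mathrm{p}} = 126\) taken from Fig. 2):

```python
def q0(kappa, n_p):
    """Fraction of isolated nodes, Eq. (16)."""
    return 2.0 / (kappa * n_p) * (1.0 - (1.0 - kappa / 2.0) ** n_p)

def avg_density(kappa, n_p):
    """Average network density 2M / (N(N - 1)), Eq. (10)."""
    q = q0(kappa, n_p)
    n = (1.0 - q) * n_p                                   # average number of active nodes
    return kappa / 4.0 * (1.0 / (1.0 - q)) ** 2 * (1.0 + q / (n - 1.0))

# Regime 1: sweep N_p at fixed kappa; the density decreases towards kappa / 4
reg1 = [avg_density(0.3, n_p) for n_p in range(20, 400, 20)]
# Regime 2: sweep kappa at fixed N_p; the density first falls and then rises (non-monotonic)
reg2 = [avg_density(k / 100, 126) for k in range(1, 100)]
```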

Discussion

Densification and sparsification of networks can occur for two reasons, namely a variation in the population \(N_{\mathrm{p}}\) and a variation in the overall activity level \(\kappa\). A key finding of this work is that the relative importance of each of these two dynamical factors occasionally changes, depending on the social context under study. By fitting the model to the observed scaling relations, we can detect the main factor that is relevant at a given point in time. Shifts in \(N_{\mathrm{p}}\) and/or \(\kappa\) affect the activity of all individuals equally, so \(\kappa\) could be considered as an effective “temperature” and \(N_{\mathrm{p}}\) as an effective “chemical potential” of the system in a grand canonical approach. While in this work we studied face-to-face networks of individuals, the versatility of the proposed method means that it could also be applied to a wide variety of dynamical systems.

There are some remaining issues for future research. First, the baseline model, a dynamic hidden variable model, relies on a “homogeneous mixing” hypothesis, which implies that nodes are connected to each other at random, given their activity levels. If we look at the structural properties of networks, such as triadic closure and community structure, such a hypothesis—especially for social contexts—would be unrealistic. However, the fact that the proposed method works remarkably well indicates that, as long as we look at network dynamics at a sufficiently coarse scale, keeping local properties aside, this homogeneous mixing assumption is a good approximation. In fact, introducing a non-random structure would easily make it impossible to obtain analytical expressions that would be needed for identification.

Second, we assumed that the distribution of intrinsic node activities is uniform for simplicity. Ideally, one would need to set this distribution based on empirical evidence. However, measuring the empirical intrinsic activity levels of individuals is extremely difficult because one needs the activity levels of totally inactive individuals as well. Furthermore, this parameter might very well have its own temporal evolution. If available, such rich information would allow for a refinement of the method.

Third, while the current method works well for temporal networks whose dynamical regime is occasionally switching, for fixed-regime systems in which the whole dynamics could be explained by either an \(N_{\mathrm{p}}\)-driven or a \(\kappa\)-driven regime, the proposed regime-switching model is unnecessary. In such cases, one would fit the empirical scaling to each of the two models separately, and then determine which model fits better15.

Fourth, the parameters \(N_{\mathrm{p}}\) and \(\kappa\) could also be estimated using a numerical joint distribution of \((N_t,M_t)\) that would be obtained by simulation for a given \((N_{\mathrm{p}},\kappa )\), which would serve as a likelihood function. However, since a particular combination of \((N_t,M_t)\) can be generated by different combinations of \((N_{\mathrm{p}},\kappa )\), the observed combination of \((N_t,M_t)\), as opposed to the whole scaling relationship, could be insufficient to accurately uncover the true values of \(N_{{\mathrm{p}},t}\) and \(\kappa _t\), especially when the network size is small15. On the other hand, the simulated-likelihood approach would have the advantage that realistic network properties such as clustering, community structure, homophily, etc., could be introduced.

In many cases, examining the source of network dynamics at the level of each individual would be prohibitively difficult because each individual has his/her own circumstances, and privacy issues often prevent researchers from obtaining enough information to reveal particular individuals’ behaviour. In contrast, global quantities, such as the total numbers of nodes and edges, are much more widely accessible, and therefore utilising these quantities is unavoidable when high-resolution data are difficult to collect. A contribution of this work is that the proposed model allows us to detect the role of the two fundamental dynamical factors just by using information on the global network dynamics. Any dynamical process occurring on networks, regardless of whether it is a micro- or macro-phenomenon, would be largely affected by the underlying dynamics of the networks. This is in particular the case for spreading processes such as epidemics. A better understanding of the dynamics of densification and sparsification could thus benefit public health policies, which are of central importance for modern social systems.

Methods

Analytical expression for N and M

In this section we derive Eqs. (2) and (3). The numbers of active nodes N and edges M can be expressed as functions of the parameters \(\kappa\) and \(N_{\mathrm{p}}\) (we drop the time subscript t for brevity):

$$\begin{aligned} {\left\{ \begin{array}{ll} N &{}= (1- q_0(\kappa ,N_{\mathrm{p}})) N_{\mathrm{p}},\\ M &{}= \frac{\overline{k}(\kappa , N_{\mathrm{p}}) N_{\mathrm{p}}}{2}, \end{array}\right. } \end{aligned}$$
(11)

where \(\overline{k}(\kappa ,N_{\mathrm{p}})\) denotes the average degree over all the existing nodes including isolated ones, and \(q_0(\kappa ,N_{\mathrm{p}})\) denotes the fraction of isolated nodes, or equivalently the probability that a randomly chosen node is isolated.

Let \(\rho (a)\) be the density of node activities, and define \(u(a,a^\prime )\) as the probability that there is an edge between two nodes having activity levels a and \(a^\prime\). The average degree \(\overline{k}(\kappa ,N_{\mathrm{p}})\) is given by the number of possible partners times the average of \(u(a,a^\prime )\) (see section S1 in SI for a full derivation):

$$\begin{aligned} \overline{k}(\kappa ,N_{\mathrm{p}}) = (N_{\mathrm{p}}-1) \int \int d a d a^\prime \rho (a) \rho (a^\prime ) u(a, a^\prime ). \end{aligned}$$
(12)

It should be noted that Eq. (12) is equivalent to the average degree in the standard fitness model27 if \(N_{\mathrm{p}}-1\) is replaced with N, which is only asymptotically true in our model.

The fraction of isolated nodes in the system is given by (see section S1 in SI):

$$\begin{aligned} q_0(\kappa ,N_{\mathrm{p}}) = \int d a^\prime \rho (a^\prime ) \left[ 1 - \int u(a^\prime , a) \rho (a) d a \right] ^{N_{\mathrm{p}}-1}. \end{aligned}$$
(13)

Substituting \(\rho (a) = 1\) (i.e., uniform distribution on [0, 1]) and \(u(a, a^\prime ) = \kappa a a^\prime\) into Eq. (12) leads to:

$$\begin{aligned} \overline{k}(\kappa ,N_{\mathrm{p}}) = \frac{\kappa }{4} (N_{\mathrm{p}}-1). \end{aligned}$$
(14)

Similarly, \(q_0\) is given by:

$$\begin{aligned} q_0(\kappa ,N_{\mathrm{p}})&= \int _0^1 \left( 1 - \frac{\kappa a^\prime }{2} \right) ^{N_{\mathrm{p}}-1}d a^\prime . \end{aligned}$$
(15)

By defining a variable \(x \equiv 1 - \frac{\kappa a^\prime }{2}\), we have:

$$\begin{aligned} q_0(\kappa ,N_{\mathrm{p}})&= \frac{2}{\kappa }\int _{1-\frac{\kappa }{2}}^1 x^{N_{\mathrm{p}}-1} dx \nonumber \\&= \frac{2}{\kappa N_{\mathrm{p}}}\left[ 1-\left( 1-\frac{\kappa }{2}\right) ^{N_{\mathrm{p}}}\right] . \end{aligned}$$
(16)

Combining these results with Eq. (11), we have:

$$\begin{aligned} N&= N_{\mathrm{p}}\left[ 1- \frac{2}{\kappa N_{\mathrm{p}}}\left( 1-\left( 1-\frac{\kappa }{2}\right) ^{N_{\mathrm{p}}}\right) \right] ,\end{aligned}$$
(17)
$$\begin{aligned} M&= \frac{1}{8} \kappa N_{\mathrm{p}}(N_{\mathrm{p}}-1). \end{aligned}$$
(18)

It should be noted that if \(|1-\kappa /2| < 1\) and \(N_{\mathrm{p}}\) is sufficiently large, then \(q_0(\kappa ,N_{\mathrm{p}}) \simeq 0\) and thereby \(N \simeq N_{\mathrm{p}}\) and \(M \propto N^2\), as is shown in the study of the static fitness model26,27,28.
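
These closed forms can be checked numerically against the simulation sketch introduced earlier (reusing sample_snapshot, expected_N and expected_M; the parameter values are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
n_p, kappa, n_runs = 150, 0.3, 2000

# Monte Carlo averages of (N, M) versus the analytical expressions in Eqs. (17)-(18)
sims = np.array([sample_snapshot(n_p, kappa, rng) for _ in range(n_runs)])
print("simulated  <N>, <M>:", sims.mean(axis=0))
print("analytical  N,   M :", expected_N(kappa, n_p), expected_M(kappa, n_p))
```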

Bayesian estimation

This section describes how we can infer the model parameters and the dynamical regime at a given time interval t. Let \({\mathrm{Pr}}(S_t=s|\psi _{t-1};\varvec{\theta })\) be the probability that a network is in state s (i.e., in Regime s) conditional on information available at the end of time interval \(t-1\), denoted by \(\psi _{t-1}\), for a given set of constant parameters \(\varvec{\theta }=\{N_{\mathrm{p}},\kappa ,\sigma _1,\sigma _2,p_{11},p_{22}\}\). More specifically, \(\psi _{t-1}\) is the set of variables observed from time 0 up to \(t-1\), which represents the full history of the observed variables at the end of \(t-1\). The likelihood function is then given by:

$$\begin{aligned} L(\{\varvec{D}_t\}|\varvec{\theta }) = \prod _{t=1}^T \sum _{s=1}^2f(\varvec{D}_t|S_t=s,\psi _{t-1};\varvec{\theta }){\mathrm{Pr}}(S_t=s|\psi _{t-1};\varvec{\theta }), \end{aligned}$$
(19)

where \(\{\varvec{D}_t\}\) denotes the sequence of observations \(\varvec{D}_t = (N_t,M_t)\), and f is given by:

$$\begin{aligned} f(\varvec{D}_t|S_t=s,\psi _{t-1};\varvec{\theta }) = \frac{1}{\sqrt{2\pi \sigma _s^2}}\exp {\left( -\frac{(N_t-h^s)^2}{2\sigma _s^2} \right) }, \;\; s= 1,2. \end{aligned}$$
(20)

The log-likelihood function is then:

$$\begin{aligned} \log {L}(\{\varvec{D}_t\}|\varvec{\theta })&= \sum _{t=1}^T \log \sum _{s=1}^2f(\varvec{D}_t|S_t=s,\psi _{t-1};\varvec{\theta }){\mathrm{Pr}}(S_t=s|\psi _{t-1};\varvec{\theta }), \nonumber \\&= \sum _{t=1}^T \log \sum _{s=1}^2\sum _{r=1}^2f(\varvec{D}_t|S_t=s,\psi _{t-1};\varvec{\theta }){\mathrm{Pr}}(S_{t-1}=r|\psi _{t-1};\varvec{\theta })p_{rs}. \end{aligned}$$
(21)

Bayesian inference is conducted based on the relationship \(p(\varvec{\theta }|\{\varvec{D}_t\})\propto L(\{\varvec{D}_t\}|\varvec{\theta })p(\varvec{\theta })\), where \(p(\varvec{\theta }|\{\varvec{D}_t\})\) and \(p(\varvec{\theta })\) are posterior and prior densities, respectively. For each parameter we collect 20,000 samples (four chains, 5,000 samples after 5,000 burn-in for each chain) generated from the posterior using Markov chain Monte Carlo (MCMC). We implement MCMC using Pystan ver. 2.19.032, which runs the No-U-Turn sampler (NUTS)33. The dataset and the Python code used in this work are available from Zenodo34. The mean parameter values are summarised in Table 1.

Table 1 Estimated parameters. For each parameter, the mean and the 95% credible interval obtained by MCMC are shown in the upper and lower rows, respectively. \(N_{\mathrm{max}}\) denotes \(\max _t\{{N_t}\}\).

Now we describe how information is updated in each period. The probability of being in state s conditional on information at time t is written as:

$$\begin{aligned} {\mathrm{Pr}}(S_t=s|\psi _{t};\varvec{\theta })&= \frac{f(\varvec{D}_t|S_t=s,\psi _{t-1})\cdot {\mathrm{Pr}}(S_t=s|\psi _{t-1})}{\sum _{s}f(\varvec{D}_t|S_t=s,\psi _{t-1})\cdot {\mathrm{Pr}}(S_t=s|\psi _{t-1})}, \nonumber \\&= \frac{\sum _{r}f(\varvec{D}_t|S_t=s,\psi _{t-1})\cdot {\mathrm{Pr}}(S_{t-1}=r|\psi _{t-1})p_{rs}}{\sum _{s}\sum _{r}f(\varvec{D}_t|S_t=s,\psi _{t-1})\cdot {\mathrm{Pr}}(S_{t-1}=r|\psi _{t-1})p_{rs}}, \end{aligned}$$
(22)

where we drop argument \(\varvec{\theta }\) in f for brevity. Given the initial guess for \({\mathrm{Pr}}(S_0=r|\psi _{0})\), we can recursively update the probability of being in state s.
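
Outside of Stan, the recursion in Eqs. (19)-(22) is a standard Hamilton filter. The following is a minimal NumPy/SciPy sketch (our code, not the released implementation): h1 and h2 are the regime curves sketched in the Results section, the transition matrix is stored with rows indexing the current state, and using the stationary distribution as the initial guess \({\mathrm{Pr}}(S_0=r|\psi _{0})\) is our assumption.

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(N, M, kappa, n_p, sigma1, sigma2, p11, p22):
    """Hamilton-filter log-likelihood of the regime-switching model, Eqs. (19)-(22)."""
    P = np.array([[p11, 1.0 - p11],
                  [1.0 - p22, p22]])                  # P[r, s] = Pr(S_t = s+1 | S_{t-1} = r+1)
    xi = np.array([1.0 - p22, 1.0 - p11]) / (2.0 - p11 - p22)   # stationary initial guess
    loglik = 0.0
    for N_t, M_t in zip(N, M):
        f = np.array([norm.pdf(N_t, loc=h1(M_t, kappa), scale=sigma1),   # Regime 1, Eq. (20)
                      norm.pdf(N_t, loc=h2(M_t, n_p), scale=sigma2)])    # Regime 2, Eq. (20)
        pred = P.T @ xi                               # Pr(S_t = s | psi_{t-1})
        joint = f * pred
        loglik += np.log(joint.sum())                 # one term of Eq. (21)
        xi = joint / joint.sum()                      # filtered Pr(S_t = s | psi_t), Eq. (22)
    return loglik
```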

Smoothed probability

The probability \({\mathrm{Pr}}(S_t=s|\psi _{t};\varvec{\theta })\) obtained in Eq. (22) is based on information available at time t for a given parameter set \(\varvec{\theta }\). We can also obtain the probability based on all the available information, represented by the information set \(\psi _{T}\). Let \(\varvec{\xi }_{t|T}\equiv [{\mathrm{Pr}}(S_t=1|\psi _{T};\varvec{\theta }),{\mathrm{Pr}}(S_t=2|\psi _{T};\varvec{\theta })]^{\prime }\) be the vector of probabilities conditional on information at T. \(\varvec{\xi }_{t|T}\) can be calculated by backward iteration from T16:

$$\begin{aligned} \varvec{\xi }_{T-1|T}&= \varvec{\xi }_{T-1|T-1}\odot \{\varvec{P}^\prime [\varvec{\xi }_{T|T}(\div )\varvec{\xi }_{T|T-1}]\},\nonumber \\ \varvec{\xi }_{T-2|T}&= \varvec{\xi }_{T-2|T-2}\odot \{\varvec{P}^\prime [\varvec{\xi }_{T-1|T}(\div )\varvec{\xi }_{T-1|T-2}]\},\nonumber \\&\vdots \nonumber \\ \varvec{\xi }_{t|T}&= \varvec{\xi }_{t|t}\odot \{\varvec{P}^\prime [\varvec{\xi }_{t+1|T}(\div )\varvec{\xi }_{t+1|t}]\}, \end{aligned}$$
(23)

where \(\odot\) and \((\div )\) denote element-by-element multiplication and element-by-element division, respectively, and \(\varvec{P}=(p_{rs})\) is the transition matrix. Note that all the terms on the RHS of the first equality are already known from the previous estimation procedure. After calculating \(\varvec{\xi }_{T-1|T}\), we use it to calculate the RHS of the second line. We repeat this backward until we obtain \(\varvec{\xi }_{t|T}\).
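
A sketch of this backward recursion, assuming the filtered probabilities \(\varvec{\xi }_{t|t}\) and the one-step predictions \(\varvec{\xi }_{t|t-1}\) have been stored during the forward pass (the transition matrix is stored with rows indexing the current state, so the paper's \(\varvec{P}^\prime\) appears here as P):

```python
import numpy as np

def kim_smoother(xi_filt, xi_pred, P):
    """Smoothed probabilities Pr(S_t = s | psi_T), Eq. (23).

    xi_filt : array (T, 2) of filtered probabilities  Pr(S_t = s | psi_t)
    xi_pred : array (T, 2) of one-step predictions    Pr(S_t = s | psi_{t-1})
    P       : 2 x 2 transition matrix, P[r, s] = Pr(S_t = s+1 | S_{t-1} = r+1)
    """
    xi_smooth = np.empty_like(xi_filt)
    xi_smooth[-1] = xi_filt[-1]
    for t in range(xi_filt.shape[0] - 2, -1, -1):     # backward iteration from T-1 down to 1
        xi_smooth[t] = xi_filt[t] * (P @ (xi_smooth[t + 1] / xi_pred[t + 1]))
    return xi_smooth
```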

Validation

We check the accuracy of the inference method on synthetic network data generated by the regime-switching hidden variable model. For given parameters \(N_{\mathrm{p}}\), \(\kappa\), \(p_{11}\), and \(p_{22}\), and time-varying variables \(\{N_{{\mathrm{p}},t}\}\) and \(\{\kappa _{t}\}\), we generate sequences of \(\{N_t\}\) and \(\{M_t\}\) in the way prescribed by the model. When the network at t is in Regime 1 (Regime 2), the true \(N_{{\mathrm{p}},t}\) (\(\kappa _{t}\)) is given by \(N_{{\mathrm{p}},t}= 0.95N_{{\mathrm{p}}, t-1}\) (\(\kappa _{t}=0.95\kappa _{t-1}\)), and \(N_{{\mathrm{p}},t}=N_{\mathrm{p}}\) (\(\kappa _{t}=\kappa\)) otherwise. The initial probability of being in Regime 1 is set to 0.5, with \(N_{{\mathrm{p}},0} = N_{\mathrm{p}}\) and \(\kappa _{0}=\kappa\). For each parameter, we collect 20,000 samples by MCMC (5,000 samples from four chains after 5,000 burn-in iterations).
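
A sketch of this data-generating process, reusing sample_snapshot from above; the handling of the decays and resets at regime switches follows our reading of the description, and the function name is ours.

```python
import numpy as np

def generate_validation_data(T, n_p, kappa, p11, p22, seed=1):
    """Synthetic (state, N_t, M_t) sequences for the validation exercise."""
    rng = np.random.default_rng(seed)
    state = 1 if rng.random() < 0.5 else 2            # initial regime, probability 0.5 each
    np_t, kappa_t = float(n_p), kappa                 # N_{p,0} = N_p, kappa_0 = kappa
    states, N, M = [], [], []
    for _ in range(T):
        if state == 1:                                # Regime 1: population decays, activity at baseline
            np_t, kappa_t = 0.95 * np_t, kappa
        else:                                         # Regime 2: activity decays, population at baseline
            np_t, kappa_t = float(n_p), 0.95 * kappa_t
        n_obs, m_obs = sample_snapshot(int(round(np_t)), kappa_t, rng)
        states.append(state); N.append(n_obs); M.append(m_obs)
        stay = p11 if state == 1 else p22
        if rng.random() > stay:                       # switch regimes with probability 1 - p_ss
            state = 3 - state
    return np.array(states), np.array(N), np.array(M)
```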

The estimated parameters under different sets of ground-truth \(\varvec{\theta }\) are summarised in Table S1 in SI. The estimated smoothed probabilities match the true states of the generated networks well (Fig. S1 in SI). We also group the generated networks based on the probability of being in Regime 1; for each time period, if more than 95% of the sampled values for \({\mathrm{Pr}}(S_t=1|\psi _{T};\widehat{\varvec{\theta }})\) are higher (lower) than 0.5, then we classify the corresponding snapshot as being in Regime 1 (Regime 2). If it is not classified as Regime 1 or 2, the network is considered to be in a “gray area”. As shown in the middle and the right columns of Fig. S1, the classification of the generated networks based on the estimated parameters is consistent with the ground truth, while some networks fall in the gray area, especially when the observed pairs of \((N_t,M_t)\) overlap between the two regimes. A comparison between the estimated and the true paths of \(N_{{\mathrm{p}},t}\) and \(\kappa _{t}\) is also shown in Fig. S2.