Author Name
stringlengths
5
28
ORCID
stringlengths
1
23
Topic Name
stringlengths
22
96
Full Paper Text
stringlengths
2
65.5k
Analogous Paper Text
stringlengths
2
65.5k
Daniel Halpern
-
Aligning AI with Human Values via RLHF and Social Choice Theory
{'Metric Distortion with Elicited Pairwise Comparisons': 'Title: Metric Distortion with Elicited Pairwise Comparisons\\nAbstract\\nIn this work we study the metric distortion problem in voting theory under a limited amount of ordinal information. Our primary contribution is threefold. First, we consider mechanisms which perform a sequence of pairwise comparisons between candidates. We show that a widely-popular deterministic mechanism employed in most knockout phases yields distortion O(log m) while eliciting only m − 1 out of Θ(m²) possible pairwise comparisons, where m represents the number of candidates. Our analysis for this mechanism leverages a powerful technical lemma recently developed by Kempe [Kem20a]. We also provide a matching lower bound on its distortion. In contrast, we prove that any mechanism which performs fewer than m − 1 pairwise comparisons is destined to have unbounded distortion. Moreover, we study the power of deterministic mechanisms under incomplete rankings. Most notably, when every agent provides her k-top p', 'A Panel Study on the Dynamics of Social Media Use and Conspiracy Thinking': "Title: A Panel Study on the Dynamics of Social Media Use and Conspiracy Thinking\\nTilburg University\\nBeliefs in times of corona: Investigating the relationship between media use and COVID-19 conspiracy beliefs over time in a representative Dutch sample\\nvan Wezel, Marloes; Krahmer, Emiel; Vromans, Ruben; Bol, Nadine\\nPublished in: International Journal of Communication\\nPublication date: 2023\\nDocument Version: Publisher's PDF, also known as Version of record\\nLink to publication in Tilburg University Research Portal\\nCitation for published version (APA): van Wezel, M., Krahmer, E., Vromans, R., & Bol, N. (2023).
Beliefs in times of corona: Investigating the relationship between media use and COVID-19 conspiracy beliefs over time in a representative Dutch sample. International Journal of Communication, 17, 692–711. International Journal of Communication 17(2023), 692–711 1932–8036/20230005 Copyright © 2023 (Marloes van Wezel, Emiel Krahmer, Ruben Vromans, and Nadine Bol). Licensed under the Creative Commons Attribution Non-commercial No Derivatives (by-nc-nd). Available at http://ijoc.org. Beliefs in Times of Corona: Investigating the Relationship Between Media Use and COVID-19 Conspiracy Beliefs Over Time in a Representative Dutch Sample MARLOES VAN WEZEL1 EMIEL KRAHMER RUBEN VROMANS NADINE BOL Tilburg University, The Netherlands We investigated the relationship between different media sources (traditional media, online news media, online health sources, social media) and COVID-19 related conspiracy beliefs, and how these change over time, using four-wave panel data from a representative sample of the Dutch population (N = 1,166). Across waves, 0.1%–3.4% of our sample were certain the selected conspiracy theories were true, though this belief was unstable over time. 
Random intercept cross-lagged panel models revealed that individuals’ temporary level of conspiracy beliefs did not significantly depend on their temporary level of media use at a previous occasion, or vice versa. However, significant correlations at the group level indicated that more frequent use of health-related and social media sources was associated with higher levels of conspiracy beliefs. These results suggest that relationships between media use and conspiracy beliefs are nuanced. Underlying processes should be investigated to develop tailored communication strategies to combat the ongoing infodemic. Keywords: media use, digital media, conspiracy beliefs, misinformation, COVID-19, random intercept cross-lagged panel models Marloes van Wezel: m.m.c.vanwezel@tilburguniversity.edu Emiel Krahmer: e.j.krahmer@tilburguniversity.edu Ruben Vromans: r.d.Vromans@tilburguniversity.edu Nadine Bol: nadine.bol@tilburguniversity.edu Date submitted: 2021-03-15 1This work was supported by a “Corona: Fast-track data” grant from NWO (Netherlands Organization for Scientific Research) [440.20.030]. We would like to thank Ellen Hamaker and Jeroen Mulder for their advice on the statistical analyses. Furthermore, we wish to thank Joris Mulder for providing us with insights into the CBS Statistics Netherlands and Longitudinal Internet Studies for the Social Sciences panel statistics to compare our sample demographics with nation-wide demographics. The COVID-19 pandemic has been dominating our lives since early 2020, with cumulative infection rates of many millions of cases, including millions of deaths worldwide as of March 2021 (World Health Organization [WHO], 2020a). Besides this viral pandemic, the WHO has warned against an ongoing “infodemic”: the existence of an overwhelming amount of information about the coronavirus, of which some is accurate, some is not (WHO, 2020b). 
Due to this information overload, many people experience difficulties disentangling accurate information from misinformation. Generally, scholars refer to inaccurate or unverified claims as misinformation (e.g., Nyhan & Reifler, 2010; Su, 2021). Misinformation is the overarching term to which conspiracy theories and conspiracy beliefs belong. Conspiracy theories are propositions that are based on the idea that major social or political events, such as the coronavirus pandemic, are plotted by powerful and malicious individual(s) (Aaronovitch, 2010; Douglas et al., 2019). They are often believed by groups of people with a common intention (e.g., to challenge the government). Due to the potential detrimental impact of conspiracy beliefs on fighting the coronavirus pandemic, we focus on the belief in COVID-19 conspiracy theories and its relationship with media consumption. Since the start of the pandemic, many conspiracy theories about the origin, impact, and treatment of the coronavirus have circulated, ranging from the virus being a secret Chinese bioweapon (Woodward, 2020) to a patented invention from Bill Gates (Huff, 2020). Beliefs in such unverified or inaccurate information may harm societal response toward the pandemic. Several studies found that conspiracy beliefs negatively relate to adherence to COVID-19 preventive measures (e.g., Allington, Duffy, Wessely, Dhavan, & Rubin, 2020; Freeman et al., 2020) such as wearing face masks (Romer & Jamieson, 2020) or practicing physical distancing (Pummerer et al., 2020). Moreover, belief in COVID-19 conspiracy theories has been associated with vaccination hesitancy (Freeman et al., 2020; Romer & Jamieson, 2020). To reiterate: Effectively combating a pandemic such as COVID-19 does not solely consist of managing the virus but also social processes that impact its spread and eradication. 
Consequently, scholars have argued that, in addition to epidemiologists, social scientists should be consulted to effectively combat this pandemic (Van Bavel et al., 2020). Despite the proposed impact of COVID-19 misinformation, there is a lack of knowledge about the extent to which different media sources play a role in how people form conspiracy beliefs. Prior research has primarily focused on the role of social media in this respect (e.g., Su, 2021). However, conspiracy theories are also increasingly discussed in mainstream (digital) media, which might offer an alternative route to get acquainted with and start believing such theories. Furthermore, much of the earlier research is cross-sectional, which leaves open the question of directionality between media use and conspiracy beliefs. For both possible directions, some suggestive evidence can be found: Some researchers assume that deliberately misleading information and conspiracy theories diffuse broadly across social media users (Vosoughi, Roy, & Aral, 2018), while others suggest that conspiracy theories tend to spread across communities of people that already adopt these theories, leading to so-called echo chambers (Metaxas & Finn, 2017; Uscinski, DeWitt, & Atkinson, 2018). So, the question of causality arises: Do certain media sources discourage or promote conspiracy beliefs, or do those who are more or less likely to believe in conspiracy theories seek information from different media sources? In this article, we tackle these questions in a longitudinal study among a large and representative sample of the Dutch population, whose members are repeatedly asked about their media use and COVID-19 conspiracy beliefs. 
Misinformation and Conspiracy Beliefs Terms such as misinformation, conspiracies, conspiracy theories, conspiracy beliefs, and conspiracy thinking are often used interchangeably, while in fact these are different concepts (Douglas et al., 2019). Misinformation is an umbrella term that refers to narratives or claims that are unverified, not supported, or even counterargued and rejected by expert opinions, such as conspiracy theories (Nyhan & Reifler, 2010). While all conspiracy theories are misinformation, not all misinformation is necessarily a conspiracy theory (e.g., an honest mistake). Conspiracy theories are generally disseminated with conscious underlying intentions, such as stimulating a social movement or making sense of events that counter existing worldviews (Douglas et al., 2019). They often point at secret plots by a group of powerful individuals as the driving force behind significant social or political events in society (Aaronovitch, 2010). Conspiracy theories entail allegations of plotting, which may (not) be true, unlike conspiracies, which are secret plots that have been proven to exist (Keeley, 1999; Levy, 2007). Conspiracy beliefs refer to thoughts and feelings that a specific conspiracy theory is true (Douglas et al., 2019). Finally, a broader concept is conspiracy thinking, which refers to the idea that individuals who believe in one conspiracy theory tend to believe in other conspiracy theories too (e.g., Imhoff & Bruder, 2014). COVID-19 Conspiracy Theories Misinformation and conspiracy theories about COVID-19 are a global problem, and alarming numbers of people are exposed to them (e.g., the United States: 48%, Mitchell & Oliphant, 2020; the United Kingdom: 46%, Ofcom, 2020; see also Cha et al., 2021). In the Netherlands, more than 500 unverified claims spread by Twitter trolls were mentioned in more than 12,000 tweets by almost 4,000 individual Twitter accounts (Vermanen, 2020). 
Most COVID-19 conspiracy theories that circulate(d) were about miracle cures (e.g., use of [hydroxy]chloroquine or bleach), followed by origin stories (e.g., the virus escaped from a Wuhan lab, was a secret Chinese bioweapon, was created by Bill Gates, or was a result of 5G technology; Evanega, Lynas, Adams, & Smolenyak, 2020). These theories were not only disseminated by online trolls but also by prominent, powerful individuals, such as President Trump of the United States and President Bolsonaro of Brazil, and were frequently reported in mainstream media (Constine, 2020; Evanega et al., 2020). In the Netherlands, some political parties disseminated COVID-19 misinformation that questioned the necessity of preventive measures and vaccines to combat the pandemic, which according to the Dutch minister of health is worrying since it may fuel false beliefs about the pandemic that directly threaten public health (Klaassen & van Mersbergen, 2021). COVID-19 conspiracy beliefs are detrimental to the effectiveness of governmental policies to combat the spread of the coronavirus as they have been related to lower perceived risk of COVID-19 (Krause, Freiling, Beets, & Brossard, 2020) and institutional trust (Banai, Banai, & Mikloušić, 2020; Pummerer et al., 2020). Moreover, conspiracy believers show lower adherence to preventive measures (Allington et al., 2020; Freeman et al., 2020; Pummerer et al., 2020; Romer & Jamieson, 2020) and more vaccination hesitancy (Freeman et al., 2020; Romer & Jamieson, 2020). On top of that, COVID-19 conspiracy beliefs may lead to increased political polarization (e.g., Allcott et al., 2020). Although many strategies have been developed over the years to counter the harmful impacts of conspiracy beliefs (e.g., debunking; Dentith, 2020), less attention has been paid to minimizing the initial exposure to such claims. 
To that end, it is essential to know which specific media sources are involved in the dissemination of conspiracy theories. Media Selection in the Current Media Landscape Within the ongoing COVID-19 infodemic, the question arises about how individuals select particular media sources over others to seek information about the coronavirus. Importantly, the media landscape has been evolving so rapidly that the distinction between traditional, mainstream media (e.g., TV, newspapers, radio) and digital media (e.g., online news sites, social media) is fading. For instance, individuals increasingly read newspapers online (Wennekers, Huysmans, & de Haan, 2018), and traditional media sources have their own social media channels. The majority of online news consumption is accounted for by media consumers that visit the digital variant of their favorite mainstream media sources (Flaxman, Goel, & Rao, 2016). Additionally, the spread of information regarding COVID-19 was not limited to mainstream and social media sources. COVID-19 related information about the preventive measures, for example, was typically communicated via online health sources such as governmental websites (e.g., National Institute for Public Health and the Environment; Rijksinstituut voor Volksgezondheid en Milieu [RIVM] in the Netherlands). What makes the contemporary media landscape even more complex is that besides information communicated by authorities and journalists, the countless different social media platforms provide access to opinions and worldviews from virtually anyone in the world, and this can be impactful. To illustrate, research by the Center for Countering Digital Hate (2021) showed that just 12 individual social media users (i.e., the so-called Disinformation Dozen) were responsible for almost two-thirds of the anti-vaccine information circulating online. 
In this extremely large reservoir of available information, individuals tend to scan media contents selectively to expose themselves primarily to information that aligns with their beliefs or needs (e.g., reinforcement theory, Atkin, 1973; or confirmation bias, Nickerson, 1998), though some media sources seem to be more inviting for this than others. The confirmation bias—the selection of information that aligns with one’s worldview or beliefs over information that counters that—for example, has been found to be especially prevalent in online digital media and less so in printed, offline media (Pearson & Knobloch-Westerwick, 2019). Presumably, this is due to the way in which media sources present information to their audiences. That is, the underlying machine-learning algorithms of digital media are designed to personalize content to users’ preferences and previous information consumption. Content that the user probably dislikes is automatically filtered out, creating a personal filter bubble (Pariser, 2011). This filtering process causes media users to be selectively exposed to content that aligns with their existing beliefs (Pariser, 2011). Put differently, individuals (unconsciously) live in their own digital “echo chamber,” where their worldviews are echoed by the contents they encounter (Flaxman et al., 2016; Metaxas & Finn, 2017). Scholars have argued that such echo chambers are at least partially responsible for increased ideological polarization and conspiracy beliefs (e.g., Baumann, Lorenz-Spreen, Sokolov, & Starnini, 2020). It should be noted, though, that strict homogeneous communication within echo chambers is rare (Guess, Nyhan, Lyons, & Reifler, 2018). Conspiracy Beliefs and Media Use Despite this clear pressure point of how digital media may promote conspiracy beliefs, its detrimental role is debated (Douglas et al., 2019; Uscinski et al., 2018). 
Although social media are often considered the culprit of creating and disseminating misinformation and conspiracy theories, both for COVID-19 (e.g., Allington et al., 2020; Su, 2021) and in general (e.g., Allcott, Gentzkow, & Yu, 2019), traditional media also increasingly mention conspiracy theories (e.g., the QAnon movement, Wong, 2020), and both traditional and social media also disseminate correct or verified claims. Furthermore, one should distinguish between the dissemination and development of conspiracy theories. While the Internet allows conspiracy theories to spread quicker and to a larger audience, this does not mean that more conspiracy theories are being developed (Clarke, 2007; Uscinski et al., 2018). The Internet can serve as an effective debunking tool, as the countless Internet users can immediately refute conspiracy theories (Clarke, 2007). Moreover, conspiracy beliefs rarely travel beyond their own echo chamber, so their impact on the mass audience seems limited (Metaxas & Finn, 2017; Uscinski et al., 2018). Despite these debates on the precise relationship between media use and conspiracy beliefs, this association with regard to COVID-19 related conspiracy theories is rarely scrutinized. Jamieson and Albarracin (2020) found that exposure to mainstream print and broadcast media was associated with more accurate beliefs about the coronavirus (e.g., about prevention and lethality of COVID-19 infections) and with fewer misinformation beliefs (see also Allington et al., 2020). In contrast, social media use was positively related to being misinformed (Allington et al., 2020; Jamieson & Albarracin, 2020). Notably, these studies are correlational in nature and therefore do not directly corroborate the idea that social media are fueling conspiracy beliefs in society. Following the reinforcing spiral model (RSM; Slater, 2015), there are valid arguments for both (causal) directions. 
In particular, the RSM proposes that media use can both influence outcome variables—such as conspiracy beliefs—and be influenced by these same variables. According to Slater (2015), the process of media selection is dynamic and ongoing, which means that certain media content influences subsequent attitudes and beliefs, which in turn may influence future media selection. To illustrate, if someone encounters a Facebook post about COVID-19 being a biochemical weapon, their beliefs regarding the coronavirus might change (media selection → beliefs), and as a result, this individual may start following the Facebook page to receive future information via this source (beliefs → media selection). Importantly, media selection is heavily influenced by individual differences and social contexts (Slater, 2015), so the interaction between differential media use on the one hand and being susceptible to conspiracy beliefs on the other hand is not straightforward. Present Study This study aims to answer the following research question: RQ1: What are the relationships between the use of different types of media sources and COVID-19 related conspiracy beliefs, and how do these change over time? We contribute to the literature on these topics in four ways. First, given the complexity of the contemporary media landscape, and specifically the communication of COVID-19 related information, this study highlights the unique impact that different types of media have on beliefs in COVID-19 conspiracy theories and vice versa. Hence, this study distinguishes four media sources (traditional media sources, online news sources, online health sources, and social media sources) to investigate their potentially differential relationship with people’s conspiracy beliefs. 
Second, given the longitudinal (four-wave panel) design, we can move past earlier research that is largely cross-sectional, to get more nuanced insights into the reciprocal relationships between media use and conspiracy beliefs. Third, this study was conducted on a large Dutch population-based sample, which enhances the generalizability and strengthens the replicability of our results. Fourth, we use random intercept cross-lagged panel models (RI-CLPMs; Hamaker, Kuiper, & Grasman, 2015) to nuance our understanding of between- and within-person differences in media use and conspiracy beliefs. RI-CLPMs allow us to decompose longitudinal data into stable, between-person differences versus temporal, within-person dynamics. Hence, we can assess whether people who use certain media more than others have stronger conspiracy beliefs (and vice versa), which is captured in between-person differences. Furthermore, we can unravel whether people who use certain media more than they usually do also hold stronger conspiracy beliefs (and vice versa), which is captured in the within-person differences. These distinctions help us to better understand whether media use and conspiracy beliefs are related because of differences between people in general or whether these are related because of within-person changes over time. Methods Sampling Procedure Data were collected through CentERdata’s Longitudinal Internet Studies for the Social Sciences (LISS) panel, consisting of 5,000 households in the Netherlands, comprising approximately 7,500 individuals. It represents a true probability sample of households drawn from the population register by Statistics Netherlands (LISSPANEL, n.d.). Selected households that cannot otherwise participate are provided with a computer and Internet connection. Panel members complete online questionnaires every month, for which they receive financial compensation. 
In addition, the LISS panel yearly collects data on sociodemographic variables and health status, among other core topics, which allows researchers to add these to their survey data. The data for our study were collected on four occasions. For the first wave, a random sample of 1,937 panel members were invited in the midst of the COVID-19 outbreak in May 2020. A total of 1,465 fully completed questionnaires (75.6%) were returned. These panel members were invited to complete the second-wave questionnaire in June 2020 (response rate: 92.3%, n = 1,352), followed by two more waves in July 2020 (response rate: 90.4%, n = 1,222) and October 2020 (response rate: 95.4%, n = 1,166). Time intervals of one month were applied between the first three waves to capture people’s media use and misinformation beliefs during the rapid change of preventive measures in the Netherlands (May: first lockdown; June: regaining some freedom with reopening of high schools, cultural sector, and hospitality sector; July: increased infection rates across Europe, debates on obligation to wear face masks), with a two-month break jumping to the second lockdown in October. Respondents who completed all four questionnaires comprised the sample for data analysis (N = 1,166). Measures Media Use People indicated how often, in an average week in the past month, they used 15 types of media sources to receive information about COVID-19. Media sources included traditional media sources (news, current affairs programs and talk shows on television, newspapers, magazines, radio), online news sources (websites or apps of television news or newspapers, other news websites), online health sources (health websites or apps, government websites), and social media (social networking sites and chat programs). 
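The wave-on-wave retention reported in the Sampling Procedure can be reproduced with a quick calculation; the wave sizes are taken from the text, and the variable names are ours:

```python
# Completed questionnaires per wave, as reported: May, June, July, October 2020
wave_n = [1465, 1352, 1222, 1166]

# Each follow-up wave's response rate relative to the preceding wave
rates = [round(100 * later / earlier, 1) for earlier, later in zip(wave_n, wave_n[1:])]
print(rates)  # [92.3, 90.4, 95.4], matching the reported response rates
```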
Respondents rated their media use on a scale from one to seven days a week, with the additional option to answer “never.” We considered use of (a) traditional media sources, (b) online news sources, (c) health-related sources, and (d) social media sources as first-order factors, and media use as a second-order factor. The descriptive statistics of these variables are provided in Tables A1 and A2, and the zero-order correlations and scatterplots of the media use subscales across time as well as the density plots for the subscales for each wave are visualized in Figure A1 of the online supplementary material (OSM).2 Confirmatory factor analysis (CFA) showed adequate model fit of this four-dimensional structure across four waves:3 χ2 (1673) = 6063.67, p < .001, comparative fit index (CFI) = .895, Tucker-Lewis index (TLI) = .888, root mean square error of approximation (RMSEA) = .047, standardized root mean square residual (SRMR) = .101. Four media use subscales were computed based on mean scores for each wave. Conspiracy Beliefs We measured respondents’ belief in conspiracy theories about COVID-19 with three conspiracy statements per wave (see Table B1 of the OSM).4 These were rated on a scale ranging from 1 “certainly not true” to 5 “certainly true.” The statements represented various conspiracy theories about the outbreak, spread, and potential treatment of the novel coronavirus and were similar to those used in previous research to assess misinformation beliefs (e.g., Čavojová, Šrol, & Ballová Mikušková, 2020). CFA showed adequate model fit across four waves:5 χ2 (48) = 222.14, p < .001, CFI = .948, TLI = .928, RMSEA = .056, SRMR = .036. Following earlier research (e.g., Čavojová et al., 2020), we calculated a mean score for the three statements for each wave. 2The OSM can be accessed via: https://osf.io/kjtdz/?view_only=d937b5cd61ed464b9e302bb8e6013b36 3To test for measurement invariance of media use across the four waves, we compared the configural model (i.e., factor loadings, intercepts, and latent means are able to differ across the four waves) with the strong model (i.e., factor loadings and intercepts are constrained across the four waves). The difference between the configural and strong model is significant, indicating that the four waves have different loadings and intercept structures. Hence, results of the strong model are reported here. 4For each wave, we presented respondents with 10 statements about COVID-19, of which three were conspiracy-related statements. Exploratory factor analysis showed that these three items loaded well on one component (factor loadings > .45). The current analyses are based on those conspiracy statements. An overview of all 40 statements can be found in Appendix B, Table B3 in the OSM. Sociodemographic Variables We extracted the following sociodemographic variables from the LISS Core questionnaire: age, gender, and education level. Education level was based on the categories used by CBS Statistics Netherlands (n.d.): Primary education, prevocational secondary education (VMBO), senior general secondary education (HAVO), pre-university education (VWO), senior secondary vocational education (MBO), higher vocational education (HBO), and university education (WO). Statistical Analysis The analyses were conducted with R (Version 3.6.1), using packages such as lavaan (Version 0.6-3; Rosseel, Jorgensen, & Rockwood, 2020) and ggplot2 (Version 3.2.0; Wickham et al., 2020). To test the reciprocal relationship between media use and conspiracy beliefs, we used RI-CLPMs (Hamaker et al., 2015). The RI-CLPM is an extension of the cross-lagged panel model, which not only accounts for temporal stability but also for trait-like, time-invariant stability through the inclusion of a random intercept (i.e., a factor with all loadings constrained to one). 
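The core idea behind the random intercept, separating stable between-person differences from occasion-to-occasion within-person deviations, can be sketched numerically. This is an illustrative decomposition with invented scores, not the authors' lavaan estimation:

```python
# Invented scores for three persons across four waves (rows = persons, cols = waves);
# a sketch of the between/within decomposition that underlies an RI-CLPM
scores = [
    [2.0, 2.5, 2.0, 2.5],
    [4.0, 3.5, 4.0, 4.5],
    [1.0, 1.5, 1.0, 1.5],
]

grand_mean = sum(sum(row) for row in scores) / sum(len(row) for row in scores)
person_means = [sum(row) / len(row) for row in scores]

# Between-person component: each person's stable, trait-like deviation from the grand mean
between = [pm - grand_mean for pm in person_means]

# Within-person component: occasion-specific deviations from the person's own mean;
# the auto-regressive and cross-lagged paths of an RI-CLPM operate on these deviations
within = [[x - pm for x in row] for row, pm in zip(scores, person_means)]

# The decomposition reproduces every observed score exactly
for i, row in enumerate(scores):
    for j, x in enumerate(row):
        assert abs(grand_mean + between[i] + within[i][j] - x) < 1e-9
```

In the fitted models, the between component corresponds to the correlated random intercepts, and the lagged regressions are estimated on the within component only.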
The random intercept allows us to distinguish variance at the within-level from variance at the between-level, which means that relationships between variables of interest pertain to within-person dynamics rather than between-person differences (Hamaker et al., 2015). We performed four separate RI-CLPMs to test relationships between the four subcategories of media use (i.e., traditional media sources, online news sources, online health sources, and social media) and conspiracy beliefs. Mean scales of media use subcategories and conspiracy beliefs were calculated before running the RI-CLPMs. As we tested the same model four times, we corrected for potential alpha inflation due to multiple testing (i.e., Bonferroni correction) and considered all our findings significant at (α / k = 0.05 / 4) p < .0125. We followed the approach of Hamaker and colleagues (2015), according to which we specified the stable between components and the fluctuating within components. For the between components, two random intercepts (one for media use and one for conspiracy beliefs) with factor loadings constrained to one were included to represent the stable, time-invariant differences between individuals with regard to media use and conspiracy beliefs. The correlation between these random intercepts demonstrates the association between stable between-person differences in media use and stable between-person differences in conspiracy beliefs. For the within components, eight variables were defined to represent the differences between a unit’s observed measurements and the unit’s expected score based on the grand means and its random intercepts. In our model, these represent the within components of media use and conspiracy beliefs, respectively. Furthermore, lagged regressions were estimated, with auto-regressive paths reflecting within-person changes (or stability) over time in media use and conspiracy beliefs, respectively, and cross-lagged paths reflecting the extent to which media use and conspiracy beliefs are linked reciprocally, based on whether changes from an individual’s expected score on media use (or conspiracy beliefs) are predicted from preceding deviations on conspiracy beliefs (or media use) and are an average of the within-person changes. A conceptual depiction of the RI-CLPMs for this study is shown in Figure 1. Figure 1. RI-CLPM of the relationship between media use and conspiracy beliefs across four waves.6 Note. CB = conspiracy beliefs. 5As the three conspiracy statements differed per wave, our latent construct of conspiracy beliefs was by definition measurement variant. As we cannot establish measurement invariance, we reported the results of the configural model (i.e., factor loadings, intercepts, and latent means are able to differ across the four waves) here. 6Media use was categorized into four subcategories of media sources. As a result, we ran four RI-CLPMs, one for each media use subcategory. Results Sample Characteristics Respondents in our sample were on average 56 years old (M = 55.62, SD = 17.32, range = 18–103), and 50.3% were female (n = 586). About 28% completed a lower education level (primary education or VMBO), 34% a middle education level (HAVO, VWO, or MBO), and 38% completed a higher education level (HBO or WO). Overall, this sample was mostly representative of the Dutch population.7 With regard to COVID-19, most respondents (May: 93.1%; June: 93.4%; July: 91.9%; October: 88.2%) believed that they had not been infected with the virus. Although a slight increase in reported COVID-19 infections in our sample was found in October, only a small minority had tested positive for COVID-19 in May (0.3%), June (0.1%), July (0.2%), and October (1.0%). 
More respondents in October than in earlier months reported knowing people who had been diagnosed with the coronavirus: In May, June, and July, about 28% reported knowing others who had been infected with the virus (May: 28.1%; June: 28.0%; July: 27.8%), whereas in October, 38.8% knew someone who had been infected. Across the waves, 0.1%–3.4% of our sample was certain that the selected conspiracy theories were true, and an additional 1.2%–13.7% thought it likely that they were true (though this group comprised different individuals in each wave; for an overview per statement, see Table B2 in the OSM). TV news was the most used media source (May: 94.3%; June: 91.9%; July: 90.4%; October: 92.4%), and apps (such as health apps) were the least used media source (May: 14.7%; June: 13.2%; July: 12.6%; October: 10.6%). For more detailed information, see Table A2 in the OSM.

Model Testing

The model examining the relationships between the use of traditional media sources and conspiracy beliefs revealed adequate fit: χ2(9) = 81.03, p < .001, CFI = .984, TLI = .951, RMSEA = .083, SRMR = .041. The results (see Table 1) revealed several effects at the within-person level. Auto-regressive paths indicated statistically significant relationships over time in terms of conspiracy beliefs. Individuals with relatively high conspiracy beliefs (relative to their own mean) in May (wave 1) were more likely to have more conspiracy beliefs in June (wave 2; β = .22, SE = .04, p < .001). However, individuals with relatively high conspiracy beliefs in July (wave 3) were more likely to have fewer conspiracy beliefs in October (wave 4; β = −.14, SE = .06, p = .008).
For media use, no significant relations were found over time, which means that receiving COVID-19 information relatively frequently via traditional media sources (relative to an individual's own mean) on one occasion did not predict also receiving COVID-19 information relatively frequently via traditional media sources on a later occasion. No significant cross-lagged effects of media use on conspiracy beliefs were found, or vice versa, indicating that receiving more COVID-19 information from traditional media sources did not contribute to more conspiracy beliefs (e.g., waves 3–4: β = −.05, SE = .03, p = .330), and having more conspiracy beliefs did not lead to receiving more COVID-19 information via traditional media sources (e.g., waves 3–4: β = −.02, SE = .10, p = .793). Furthermore, the results showed no significant between-person correlation, no significant cross-sectional association at wave 1, and no correlated change at waves 2–4.

7 Our sample was slightly older than the mean age of the Dutch population (StatLine, 2020): Mdif = 6.17, 95% confidence interval (CI) [5.17, 7.16], t(1165) = 12.16, p < .001.

The model for online news media use also showed adequate fit: χ2(9) = 74.23, p < .001, CFI = .983, TLI = .947, RMSEA = .079, SRMR = .040. Similar to traditional media use, the results (see Table 1) showed statistically significant auto-regressive effects over time for conspiracy beliefs, which followed similar patterns to those described above.8 For media use, receiving COVID-19 information relatively frequently via online news media sources (relative to an individual's own mean) at one point in time did not predict also receiving COVID-19 information relatively frequently via online news media sources at a later point in time.
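The age comparison in the footnote can be checked from the reported descriptives (SD = 17.32, n = 1,166), taking the reported mean difference of 6.17 years as given. This is an illustrative recomputation, not part of the original analysis:

```python
import math

n, sd, mean_diff = 1166, 17.32, 6.17  # values reported in the text

se = sd / math.sqrt(n)   # standard error of the mean
t = mean_diff / se       # one-sample t statistic with df = n - 1 = 1165

# 95% CI using the large-sample critical value (about 1.96 at df = 1165)
lower, upper = mean_diff - 1.96 * se, mean_diff + 1.96 * se

print(round(t, 2))                       # → 12.16, matching t(1165) = 12.16
print(round(lower, 2), round(upper, 2))  # → 5.18 7.16, close to the reported CI
```

The tiny discrepancy in the lower bound (5.18 vs. the reported 5.17) is rounding: the published interval presumably used the exact t critical value and unrounded descriptives.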
No significant cross-lagged effects of media use on conspiracy beliefs were found, or vice versa, indicating that receiving more COVID-19 information from online news media sources did not contribute to more conspiracy beliefs (e.g., waves 3–4: β = −.07, SE = .03, p = .210), and having more conspiracy beliefs did not lead to receiving more COVID-19 information via online news media sources (e.g., waves 3–4: β = .01, SE = .14, p = .864). The results further showed no significant between-person correlation, no significant cross-sectional association at wave 1, and no correlated change at waves 2–4. With regard to online health sources, the model showed adequate fit: χ2(9) = 89.72, p < .001, CFI = .967, TLI = .896, RMSEA = .088, SRMR = .044. The results again showed statistically significant auto-regressive effects for conspiracy beliefs, with similar patterns as before (for details, see Table 1). For media use, receiving COVID-19 information relatively frequently via online health sources (relative to an individual's own mean) at an earlier point in time did not significantly predict receiving COVID-19 information more or less frequently via online health sources at a later point in time. No significant cross-lagged effects of media use on conspiracy beliefs were found, or vice versa, indicating that receiving more COVID-19 information from online health sources did not contribute to more conspiracy beliefs (e.g., waves 3–4: β = −.04, SE = .03, p = .453), and having more conspiracy beliefs did not lead to receiving more COVID-19 information via online health sources (e.g., waves 3–4: β = −.02, SE = .10, p = .724). The results further showed no significant cross-sectional association at wave 1 and no correlated change at waves 2–4. At the between-person level, we found a moderate correlation between media use and conspiracy beliefs (β = .27, SE = .02, p < .001).
This suggests that people with relatively frequent use of health-related sources also reported relatively high levels of conspiracy beliefs compared with the group average. For our final model, which examined the relationship between social media use and conspiracy beliefs, the results showed adequate model fit: χ2(9) = 72.69, p < .001, CFI = .981, TLI = .940, RMSEA = .078, SRMR = .042. Statistically significant auto-regressive effects in this model indicated stability in social media use and conspiracy beliefs over time. For media use, receiving COVID-19 information relatively frequently via social media sources (relative to an individual's own mean) in May (wave 1) predicted also receiving COVID-19 information relatively frequently via social media sources in June (wave 2: β = .15, SE = .05, p = .005). Similar patterns for conspiracy beliefs over time were found as described above (for details, see Table 1). With regard to cross-lagged effects, we found no significant effects of media use on conspiracy beliefs, or vice versa: Receiving more COVID-19 information from social media sources did not contribute to more conspiracy beliefs (e.g., waves 3–4: β = .03, SE = .02, p = .556), and having more conspiracy beliefs did not lead to receiving more COVID-19 information via social media sources (e.g., waves 3–4: β = .00, SE = .17, p = .952). We also found no significant cross-sectional association at wave 1 and no correlated change at waves 2–4.

8 The auto-regressive paths between conspiracy beliefs over time are similar across the four RI-CLPMs. Therefore, these results are not repeated in-text. Precise estimates can be found in Table 1.
At the between-person level, social media use and conspiracy beliefs correlated moderately (β = .25, SE = .04, p < .001), indicating that people with relatively frequent use of social media sources also reported relatively high levels of conspiracy beliefs compared with the group mean.

Table 1. Standardized Estimates of the RI-CLPMs Regarding the Relationships Between Media Use and Conspiracy Beliefs Across Four Waves, Specified for Four Types of Media Use (N = 1,166).

| Path | Traditional Media β | SE | p | Online News β | SE | p |
|---|---|---|---|---|---|---|
| Auto-regressive paths | | | | | | |
| Media use w1 → Media use w2 | .06 | .06 | .324 | .12 | .05 | .025 |
| Media use w2 → Media use w3 | .13 | .07 | .021 | −.01 | .06 | .939 |
| Media use w3 → Media use w4 | .04 | .06 | .527 | −.07 | .08 | .329 |
| Conspiracy beliefs w1 → Conspiracy beliefs w2 | .22 | .04 | .000 | .22 | .04 | .000 |
| Conspiracy beliefs w2 → Conspiracy beliefs w3 | .02 | .05 | .665 | .02 | .05 | .742 |
| Conspiracy beliefs w3 → Conspiracy beliefs w4 | −.14 | .06 | .008 | −.15 | .06 | .006 |
| Cross-lagged paths | | | | | | |
| Media use w1 → Conspiracy beliefs w2 | .08 | .03 | .118 | −.03 | .02 | .447 |
| Media use w2 → Conspiracy beliefs w3 | −.05 | .03 | .384 | −.01 | .02 | .803 |
| Media use w3 → Conspiracy beliefs w4 | −.05 | .03 | .330 | −.07 | .03 | .210 |
| Conspiracy beliefs w1 → Media use w2 | −.07 | .07 | .124 | −.05 | .08 | .258 |
| Conspiracy beliefs w2 → Media use w3 | .03 | .09 | .625 | −.01 | .12 | .859 |
| Conspiracy beliefs w3 → Media use w4 | −.02 | .10 | .793 | .01 | .14 | .864 |
| Additional correlations | | | | | | |
| Correlation w1 | −.01 | .02 | .874 | −.01 | .03 | .816 |
| Residual correlation w2 | .01 | .02 | .902 | −.05 | .02 | .269 |
| Residual correlation w3 | .03 | .02 | .625 | .04 | .03 | .514 |
| Residual correlation w4 | .02 | .02 | .682 | −.01 | .03 | .922 |
| Between-person correlation | −.04 | .03 | .291 | −.03 | .03 | .397 |

| Path | Health Sources β | SE | p | Social Media β | SE | p |
|---|---|---|---|---|---|---|
| Auto-regressive paths | | | | | | |
| Media use w1 → Media use w2 | .14 | .07 | .065 | .15 | .05 | .005 |
| Media use w2 → Media use w3 | .07 | .08 | .345 | .10 | .06 | .123 |
| Media use w3 → Media use w4 | −.17 | .08 | .028 | −.09 | .08 | .244 |
| Conspiracy beliefs w1 → Conspiracy beliefs w2 | .22 | .04 | .000 | .21 | .04 | .000 |
| Conspiracy beliefs w2 → Conspiracy beliefs w3 | .02 | .05 | .726 | .03 | .05 | .613 |
| Conspiracy beliefs w3 → Conspiracy beliefs w4 | −.15 | .06 | .005 | −.14 | .06 | .008 |
| Cross-lagged paths | | | | | | |
| Media use w1 → Conspiracy beliefs w2 | .02 | .02 | .613 | −.04 | .01 | .302 |
| Media use w2 → Conspiracy beliefs w3 | .01 | .03 | .841 | .01 | .02 | .891 |
| Media use w3 → Conspiracy beliefs w4 | −.04 | .03 | .453 | .03 | .02 | .556 |
| Conspiracy beliefs w1 → Media use w2 | .03 | .06 | .424 | −.04 | .10 | .272 |
| Conspiracy beliefs w2 → Media use w3 | .08 | .09 | .141 | −.02 | .14 | .760 |
| Conspiracy beliefs w3 → Media use w4 | −.02 | .10 | .724 | .00 | .17 | .952 |
| Additional correlations | | | | | | |
| Correlation w1 | −.01 | .02 | .923 | .03 | .04 | .520 |
| Residual correlation w2 | .03 | .02 | .592 | −.02 | .03 | .586 |
| Residual correlation w3 | −.01 | .02 | .827 | −.00 | .04 | .949 |
| Residual correlation w4 | .03 | .03 | .648 | .09 | .04 | .076 |
| Between-person correlation | .27 | .02 | .000 | .25 | .04 | .000 |

Note. All results but the between-person correlation reflect correlations at the within-person level. Results are considered significant at p < .0125 to correct for potential alpha inflation due to multiple testing.

Discussion

This study expanded the extant literature by adopting a longitudinal design to investigate changes in media use and conspiracy beliefs over time during the COVID-19 pandemic in a large, representative Dutch sample. Using RI-CLPMs, we identified how the use of specific media sources related to COVID-19 conspiracy beliefs and how these relationships changed over time. Our results indicated that, at the group level, the use of online health sources and social media was related to beliefs in COVID-19 conspiracies, such that more frequent use of these media sources was correlated with higher levels of conspiracy beliefs. However, the relationship between media use and conspiracy beliefs at the within-person level was not corroborated.
Put differently, within individuals, using certain media sources to gather information about COVID-19 did not lead to changes in conspiracy beliefs over time, nor did beliefs in conspiracy theories lead to changes in which media sources individuals used. As such, our results suggest that the relationship between media use and belief in conspiracy theories is more complicated than our model can show.

Theoretical Implications

Our findings have three important implications for theory. First, the use of RI-CLPMs provided evidence for a correlational association between the use of online health sources and social media on the one hand and conspiracy beliefs on the other, with more frequent use of these media sources related to stronger conspiracy beliefs. This is in line with earlier correlational research in which the use of digital media sources (e.g., social media) has been associated with conspiracy beliefs (Allington et al., 2020). As the between-person effects in our study represent measurements averaged across four waves, we were able to add more robust evidence for the positive relationship between the use of certain types of media sources and conspiracy beliefs. Despite the group-level associations between media use and conspiracy beliefs, we were not able to detect within-person effects, for which several explanations can be noted. For instance, effect sizes vary as a function of the time lag between measurements (Dormann & Griffin, 2015), and significant effects might have been detectable at smaller time intervals, such as two-week intervals. In addition, within-person effects may exist but be so small that they could not be detected with the current sample size, due to low statistical power.
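The low-power possibility can be made concrete with a back-of-the-envelope calculation. The "very small" effect size of r = .05 below is an illustrative assumption, not a figure from the study, and the Fisher-z normal approximation is a simplification of the actual latent-variable tests:

```python
from math import atanh, sqrt
from statistics import NormalDist

n, r, alpha = 1166, 0.05, 0.0125  # sample size and corrected alpha from this study

# Fisher-z approximation to the sampling distribution of a correlation-sized effect
z_effect = atanh(r)
se = 1 / sqrt(n - 3)
crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value (about 2.50)

power = (1 - NormalDist().cdf(crit - z_effect / se)
         + NormalDist().cdf(-crit - z_effect / se))

print(round(power, 2))  # → 0.21: a true effect of .05 would usually go undetected
```

Under these assumptions, standardized effects below roughly .08 would be detected only about half the time or less, which is the order of magnitude the low-power explanation refers to.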
Nonetheless, given the large sample size and the full coverage of the media use and conspiracy beliefs scales, we believe this is unlikely and that alternative, theoretical explanations should be considered. The reciprocal relationships between media use and conspiracy beliefs may therefore not be as straightforward as previously assumed (Allington et al., 2020; Jamieson & Albarracin, 2020). Second, the lack of causal effects at the within-person level might point to underlying individual-level processes that obscure any causal effects. These findings corroborate the differential susceptibility to media effects model (DSMM; Valkenburg & Peter, 2013), which posits that individual differences cause differential use and effects of media. Such differential use and impact of media might have led to aggregated null effects between media use and conspiracy beliefs, as for some individuals these relationships may be positive, for some negative, and for some stable over time. In turn, linking this to the reinforcing spirals model (RSM; Slater, 2015), for some people their beliefs may be mainly affected by the media they consume, whereas for others their beliefs mainly determine their media selection and consumption. Consequently, relations between media use and beliefs as proposed by the RSM are not by definition uni- or bidirectional but may be unique to each individual, and this should be elaborated on in future work. Third and finally, our models showed that conspiracy beliefs were unstable over time and that the statements in each wave were believed by small numbers of people (0.1%–3.4% were certain the selected conspiracy theories were true, and an additional 1.2%–13.7% of the sample thought it likely that they were true).
This may suggest that conspiracy beliefs are personal and that individuals do not necessarily “fall” for the same theories, which contrasts with previous work suggesting the existence of a “conspiracy mindset” (Imhoff & Bruder, 2014). Our findings raise the possibility that conspiracy beliefs are in fact unstable as a construct, as individuals over time moved from having stable conspiracy beliefs (within individuals, higher conspiracy beliefs in May were associated with higher conspiracy beliefs in June), to having changing conspiracy beliefs (within individuals, higher conspiracy beliefs in June were not associated with higher conspiracy beliefs in July), to having contrasting conspiracy beliefs (within individuals, higher conspiracy beliefs in July were associated with lower conspiracy beliefs in October). Nonetheless, since only a small proportion of the sample believed in conspiracy theories, we cannot make strong claims about the possible nonexistence of a conspiracy mindset. Future research could employ different sampling strategies to target conspiracy thinkers to better understand how media use shapes their beliefs, and how conspiracy beliefs shape their media use.

Limitations and Future Research Directions

Although we contributed to prior research by investigating various types of media sources over time, we did not gather any data on the specific content about the coronavirus that respondents encountered via these media sources. However, the content gathered from different social media sources can vary a great deal. For example, Twitter posts of The New York Times presumably contain very different content from the memes disseminated by friends via Facebook. Future research may therefore look into different types of media use in more detail, for example, by analyzing media headlines or deeper content.
Furthermore, the role of media credibility in people’s media use and conspiracy beliefs was not considered. If an extensive social media user recognizes that the information found there might not be credible, they may be less likely to believe conspiracy theories than someone with similar screen time who considers social media a credible source. Ignoring these unique individual differences in perceived media credibility may lead to aggregated null effects or very small effect sizes (see DSMM; Valkenburg & Peter, 2013). Indeed, research has shown that conspiracy beliefs depend on trust in social media news (Xiao, Borah, & Su, 2021). Hence, future research could investigate the influence of perceived media credibility for each type of media source to gain a better understanding of the interplay between media use and conspiracy beliefs. Similarly, the role of individual-level variables such as age, educational level, health literacy, anxiety, and perceived control could be scrutinized, as these have been found to predict belief in COVID-19 conspiracies (e.g., Duplaga, 2020; Šrol, Mikušková, & Cavojova, 2021). It is essential to gain more insight into the specific subgroups of the population that are more or less susceptible to fake news and to developing conspiracy beliefs. This may potentially pave the way for better and tailored intervention strategies to combat the ongoing infodemic.

Conclusion

This study investigated the reciprocal relationships between the use of different media sources (i.e., traditional media, online news, online health-related, and social media sources) for receiving information about COVID-19 and COVID-19-related conspiracy beliefs over time in a representative Dutch sample.
Although we found that media use and conspiracy beliefs were related at the group level, the analysis of four RI-CLPMs revealed no cross-lagged, within-person effects between media use and conspiracy beliefs: The use of certain media sources to stay informed about COVID-19 did not cause changes in COVID-19 conspiracy beliefs, and believing COVID-19 conspiracy theories did not lead to the use of specific media sources. These findings suggest that the relationships between media use and conspiracy beliefs may be more complex than previously thought.

References

Aaronovitch, D. (2010). Voodoo histories: The role of the conspiracy theory in shaping modern history. New York, NY: Riverhead Books.

Allcott, H., Boxell, L., Conway, J. C., Gentzkow, M., Thaler, M., & Yang, D. Y. (2020). Polarization and public health: Partisan differences in social distancing during the coronavirus pandemic (Working paper No. 26946). Retrieved from https://www.nber.org/papers/w26946

Allcott, H., Gentzkow, M., & Yu, C. (2019). Trends in the diffusion of misinformation on social media. Research & Politics, 6(2), 1–8. doi:10.1177/2053168019848554

Allington, D., Duffy, B., Wessely, S., Dhavan, N., & Rubin, J. (2020). Health-protective behaviour, social media usage and conspiracy belief during the COVID-19 public health emergency. Psychological Medicine, 51(10), 1–7. doi:10.1017/S003329172000224X

Atkin, C. (1973). Instrumental utilities and information seeking. In P. Clark (Ed.), New models for mass communication research (pp. 205–242). Beverly Hills, CA: SAGE.

Banai, I. P., Banai, B., & Mikloušić, I. (2020). Beliefs in COVID-19 conspiracy theories predict lower level of compliance with the preventive measures both directly and indirectly by lowering trust in government medical officials. PsyArXiv Preprints. doi:10.31234/osf.io/yevq7

Baumann, F., Lorenz-Spreen, P., Sokolov, I.
M., & Starnini, M. (2020). Modeling echo chambers and polarization dynamics in social networks. Physical Review Letters, 124(4), 048301. doi:10.1103/PhysRevLett.124.048301

Čavojová, V., Šrol, J., & Ballová Mikušková, E. (2020). How scientific reasoning correlates with health-related beliefs and behaviors during the COVID-19 pandemic? Journal of Health Psychology, 27(3), 534–547. doi:10.1177/1359105320962266

CBS Statistics Netherlands. (n.d.). Education level. Retrieved from https://www.cbs.nl/en-gb/our-services/urban-data-centres/arbeid-en-inkomen/education-level

Center for Countering Digital Hate. (2021). The disinformation dozen: Why platforms must act on twelve leading online anti-vaxxers. Retrieved from https://252f2edd-1c8b-49f5-9bb2-b57bb47e4ba.filesusr.com/ugd/f4d9b9_b7cedc0553604720b7137f8663366ee5.pdf

Cha, M., Cha, C., Singh, K., Lima, G., Ahn, Y. Y., Kulshrestha, J., . . . Varol, O. (2021). Prevalence of misinformation and factchecks on the COVID-19 pandemic in 35 countries: Observational infodemiology study. JMIR Human Factors, 8(1), e23279. doi:10.2196/23279

Clarke, S. (2007). Conspiracy theories and the Internet: Controlled demolition and arrested development. Episteme: A Journal of Social Epistemology, 4(2), 167–180. doi:10.3366/epi.2007.4.2.167

Constine, J. (2020). Facebook deletes Brazil president’s coronavirus misinfo post. TechCrunch. Retrieved from https://techcrunch.com/2020/03/30/facebook-removes-bolsonaro-video/

Dentith, M. R. X. (2020). Debunking conspiracy theories. Synthese, 198, 9897–9911. doi:10.1007/s11229-020-02694-0

Dormann, C., & Griffin, M. A. (2015). Optimal time lags in panel studies. Psychological Methods, 20(4), 489–505. doi:10.1037/met0000041

Douglas, K. M., Uscinski, J. E., Sutton, R. M., Cichocka, A., Nefes, T., Ang, C. S., & Deravi, F. (2019). Understanding conspiracy theories. Political Psychology, 40(S1), 3–35. doi:10.1111/pops.12568

Duplaga, M.
(2020). The determinants of conspiracy beliefs related to the COVID-19 pandemic in a nationally representative sample of internet users. International Journal of Environmental Research and Public Health, 17(21), 7818. doi:10.3390/ijerph17217818

Evanega, S., Lynas, M., Adams, J., & Smolenyak, K. (2020). Coronavirus misinformation: Quantifying sources and themes in the COVID-19 “infodemic.” Retrieved from https://www.uncommonthought.com/mtblog/wp-content/uploads/2020/12/Evanega-et-al-Coronavirus-misinformation-submitted_07_23_20-1.pdf

Flaxman, S., Goel, S., & Rao, J. M. (2016). Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly, 80(S1), 298–320. doi:10.1093/poq/nfw006

Freeman, D., Waite, F., Rosebrock, L., Petit, A., Causier, C., East, A., . . . Lambe, S. (2020). Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England. Psychological Medicine, 52, 251–263. doi:10.1017/S0033291720001890

Guess, A., Nyhan, B., Lyons, B., & Reifler, J. (2018). Avoiding the echo chamber about echo chambers. Knight Foundation, 2, 1–25. Retrieved from https://bit.ly/3bwtArZ

Hamaker, E. L., Kuiper, R. M., & Grasman, R. P. P. P. (2015). A critique of the cross-lagged panel model. Psychological Methods, 20(1), 102–116. doi:10.1037/a0038889

Huff, E. (2020, January 28). Bill Gates funded the PIRBRIGHT Institute, which owns a patent on coronavirus; The CDC owns the strain isolated from humans. Humans Are Free. Retrieved from https://humansarefree.com/2020/01/bill-gates-pirbright-institute-cdc-patent-coronavirus.html

Imhoff, R., & Bruder, M. (2014). Speaking (un-) truth to power: Conspiracy mentality as a generalised political attitude. European Journal of Personality, 28(1), 25–43. doi:10.1002/per.1930

Jamieson, K. H., & Albarracin, D. (2020). The relation between media consumption and misinformation at the outset of the SARS-CoV-2 pandemic in the US. The Harvard Kennedy School Misinformation Review, 1(2), 1–22.
doi:10.37016/mr-2020-012

Keeley, B. L. (1999). Of conspiracy theories. Journal of Philosophy, 96, 109–126. doi:10.2139/ssrn.1084585

Klaassen, N., & van Mersbergen, C. (2021, March 27). De Jonge: “Coronakritiek Forum is gevaar voor volksgezondheid” [De Jonge: “Corona criticism by Forum is a danger to public health”]. Het Parool. Retrieved from https://www.parool.nl/nederland/de-jonge-coronakritiek-forum-is-gevaar-voor-volksgezondheid~b1df8021/?referrer=https%3A%2F%2Fwww.google.com%2F

Krause, N. M., Freiling, I., Beets, B., & Brossard, D. (2020). Fact-checking as risk communication: The multi-layered risk of misinformation in times of COVID-19. Journal of Risk Research, 23(7–8), 1052–1059. doi:10.1080/13669877.2020.1756385

Levy, N. (2007). Radically socialized knowledge and conspiracy theories. Episteme: A Journal of Social Epistemology, 4(2), 181–192. doi:10.3366/epi.2007.4.2.181

LISSPANEL. (n.d.). About the panel. Retrieved from https://www.lissdata.nl/about-panel

Metaxas, P., & Finn, S. T. (2017). The infamous #Pizzagate conspiracy theory: Insight from a TwitterTrails investigation. Retrieved from https://bit.ly/3uTJ9CH

Mitchell, A., & Oliphant, J. B. (2020). Americans immersed in COVID-19 news; Most think media are doing fairly well covering it. Pew Research Center. Retrieved from https://www.journalism.org/2020/03/18/americans-immersed-in-covid-19-news-most-think-media-are-doing-fairly-well-covering-it/

Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. doi:10.1037/1089-2680.2.2.175

Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330. doi:10.1007/s11109-010-9112-2

Ofcom. (2020). Half of UK adults exposed to false claims about coronavirus.
Retrieved from https://www.ofcom.org.uk/about-ofcom/latest/features-and-news/half-of-uk-adults-exposed-to-false-claims-about-coronavirus

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. London, UK: Penguin.

Pearson, G. D. H., & Knobloch-Westerwick, S. (2019). Is the confirmation bias bubble larger online? Pre-election confirmation bias in selective exposure to online versus print political information. Mass Communication and Society, 22(4), 466–486. doi:10.1080/15205436.2019.1599956

Pummerer, L., Böhm, R., Lilleholt, L., Winter, K., Zettler, I., & Sassenberg, K. (2020). Conspiracy theories in times of crisis and their societal effects: Case “corona.” PsyArXiv Preprints. Retrieved from https://bit.ly/3iMITnq

Romer, D., & Jamieson, K. H. (2020). Conspiracy theories as barriers to controlling the spread of COVID-19 in the US. Social Science & Medicine, 263, 1–8. doi:10.1016/j.socscimed.2020.113356

Rosseel, Y., Jorgensen, T. D., & Rockwood, N. (2020). lavaan: Latent variable analysis. Retrieved from https://cran.r-project.org/web/packages/lavaan/index.html

Slater, M. D. (2015). Reinforcing spirals model: Conceptualizing the relationship between media content exposure and the development and maintenance of attitudes. Media Psychology, 18(3), 370–395. doi:10.1080/15213269.2014.897236

Šrol, J., Mikušková, E. B., & Cavojova, V. (2021). When we are worried, what are we thinking? Anxiety, lack of control, and conspiracy beliefs amidst the COVID-19 pandemic. Applied Cognitive Psychology, 35(3), 1–10. doi:10.1002/acp.3798

StatLine. (2020). Bevolking op 1 januari en gemiddeld; geslacht, leeftijd en regio [Population on January 1 and average; gender, age, and region]. Retrieved from https://opendata.cbs.nl/statline/#/CBS/nl/dataset/03759ned/table?dl=4F8DD

Su, Y. (2021).
It doesn’t take a village to fall for misinformation: Social media use, discussion heterogeneity preference, worry of the virus, faith in scientists, and COVID-19-related misinformation beliefs. Telematics and Informatics, 58, 1–12. doi:10.1016/j.tele.2020.101547

Uscinski, J. E., DeWitt, D., & Atkinson, M. D. (2018). A web of conspiracy? Internet and conspiracy theory. In A. Dyrendal, D. G. Robertson, & E. Asprem (Eds.), Handbook of conspiracy theory and contemporary religion (pp. 106–130). Leiden, The Netherlands: Brill. doi:10.1163/9789004382022_007

Valkenburg, P. M., & Peter, J. (2013). The differential susceptibility to media effects model. Journal of Communication, 63(2), 221–243. doi:10.1111/jcom.12024

Van Bavel, J. J., Baicker, K., Boggio, P. S., Capraro, V., Cichocka, A., Cikara, M., . . . Willer, R. (2020). Using social and behavioural science to support COVID-19 pandemic response. Nature Human Behaviour, 4(5), 460–471. doi:10.1038/s41562-020-0884-z

Vermanen, J. (2020, August 21). Zeker 50 Twitter-trollen verspreiden misinformatie COVID-19 in Nederland [At least 50 Twitter trolls spread COVID-19 misinformation in the Netherlands]. Pointer. Retrieved from https://pointer.kro-ncrv.nl/zeker-50-twitter-trollen-verspreiden-misinformatie-covid-19-in-nederland

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. doi:10.1126/science.aap9559

Wennekers, A., Huysmans, F., & de Haan, J. (2018). Lees:Tijd—Lezen in Nederland [Lees:Tijd—Reading in the Netherlands]. Sociaal en Cultureel Planbureau. Retrieved from https://www.scp.nl/publicaties/publicaties/2018/01/18/lees-tijd

Wickham, H., Chang, W., Henry, L., Pedersen, T. L., Takahashi, K., Wilke, C., . . . Dunnington, D. (2020). ggplot2: Create elegant data visualisations using the grammar of graphics.
Retrieved from https://cran.r-project.org/web/packages/ggplot2/index.html Wong, J. C. (2020, August 25). QAnon explained: The antisemitic conspiracy theory gaining traction around the world. The Guardian. Retrieved from https://www.theguardian.com/us-news/2020/aug/25/qanon-conspiracy-theory-explained-trump-what-is Woodward, A. (2020, October 15). A Chinese virologist continues to claim the coronavirus was engineered as a “bioweapon” and then released. The groups she works for were once led by Steve Bannon. Business Insider. Retrieved from https://www.businessinsider.com/scientists-steve-bannon-coronavirus-engineered-chinese-bioweapon-2020-10?international=true&r=US&IR=T World Health Organization. (2020a). WHO coronavirus disease (COVID-19) dashboard. Retrieved from https://covid19.who.int/ World Health Organization. (2020b). Novel coronavirus (2019-nCoV) situation report (Nr. 13). Retrieved from https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200202-sitrep-13-ncov-v3.pdf Xiao, X., Borah, P., & Su, Y. (2021). The dangers of blind trust: Examining the interplay among social media news use, misinformation identification, and news trust on conspiracy beliefs. Public Understanding of Science, 30(8), 977–992. doi:10.1177/0963662521998025 ", 'A Downward Spiral? A Panel Study of Misinformation and Media Trust in Chile': 'Title: A Downward Spiral? A Panel Study of Misinformation and Media Trust in Chile\\nZurich Open Repository andArchiveUniversity of ZurichUniversity LibraryStrickhofstrasse 39CH-8057 Zurichwww.zora.uzh.chYear: 2023Perceived Exposure to Misinformation and Trust in Institutions in Four CountriesBefore and During a PandemicBoulianne, Shelley ; Humprecht, EddaAbstract: Misinformation could undermine trust in institutions during a critical period when people requireupdated information about a pandemic and credible information to make informed voting decisions. 
This article uses survey data collected in 2019 (n = 6,300) and 2021 (n = 6,000) in the United States, the United Kingdom, France, and Canada to examine the relationship between perceived exposure to misinformation and trust in national news media and the national/federal government. We do not find that perceived exposure to misinformation undermines trust. We test whether these relationships differ for those with left-wing versus right-wing views, by country, period, or electoral context. Posted at the Zurich Open Repository and Archive, University of Zurich. ZORA URL: https://doi.org/10.5167/uzh-233359. Journal Article, Published Version. The following work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License. Originally published at: Boulianne, Shelley; Humprecht, Edda (2023). Perceived Exposure to Misinformation and Trust in Institutions in Four Countries Before and During a Pandemic. Inter
{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagation': 'Title: Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagation\\nAbstract. Knowledge graphs (KGs) are widely acknowledged as incom-\\nplete, and new entities are constantly emerging in the real world. Induc-\\ntive KG reasoning aims to predict missing facts for these new entities.\\nAmong existing models, graph neural networks (GNNs) based ones have\\nshown promising performance for this task. However, they are still chal-\\nlenged by inefficient message propagation due to the distance and scal-\\nability issues. In this paper, we propose a new inductive KG reasoning\\nmodel, MStar, by leveraging conditional message passing neural net-\\nworks (C-MPNNs). Our key insight is to select multiple query-specific\\nstartingentitiestoexpandthescopeofprogressivepropagation.Toprop-\\nagate query-related messages to a farther area within limited steps, we\\nsubsequently design a highway layer to propagate information toward\\nthese selected starting entities. Moreover, we introduce a training strat-\\negy called LinkVerify to mitigate the impact of noisy training samples.\\nExperimental \\nresults validate that MStar achieves superior performance\\ncompared with state-of-the-art models, especially for distant entities.\\nKeywords: Knowledge graphs ·Inductive reasoning ·Conditional mes-\\nsage passing.\\n1 \\nIntroduction\\nKnowledge graphs (KGs) have become a valuable asset for many downstream\\nAI applications, including semantic search, question answering, and logic rea-\\nsoning [4,11,33]. Real-world KGs, such as Freebase [1], NELL [21], and DBpe-\\ndia [15], often suffer from the incompleteness issue that lacks massive certain\\ntriplets [5,12]. 
The KG reasoning task aims to alleviate incompleteness by dis-\\ncoveringmissingtripletsbasedontheknowledgelearnedfromknownfacts.Early\\nstudies [38] assume that KGs are static, ignoring the potential unseen entities\\nand emerging triplets in the continuously updated real-world KGs. This moti-\\nvates the task of inductive KG reasoning [32,46], which allows for incorporating\\nemerging entities and facts during inference.arXiv:2407.10430v1 [cs.CL] 15 Jul 20242 Z. Shao et al.\\nTable 1. Hits@10 \\nresults of RED-GNN [47] in our empirical study. We divide all\\ntriplets in the FB15k-237 (v1) dataset [32] into four groups according to the shortest\\ndistance between head and tail entities. “ ∞” denotes that the head entity cannot reach\\nthe corresponding tail entity in the KG. The maximum shortest distance is 10 in the\\nFB15k-237 (v1) when ignoring triplets belonging to ∞.\\nDistance Proportions Layers =3 Layers =6 Layers =9\\n[1,4)70.25% .611 .594 .587\\n[4,7)22.44% .000 .102 .154\\n[7,10] 3.90% .000 .000 .088\\n∞ 3.41% .000 .000 .000\\nDue to their excellent efficiency and performance, conditional message pass-\\ning neural networks (C-MPNNs), such as NBFNet [50] and RED-GNN [47], have\\nemerged as one of the premier models in the field of inductive KG reasoning. To\\ntransmit conditions, existing C-MPNNs only incorporate conditional informa-\\ntion into the head entity and propagate along the relational paths progressively.\\nHowever, this single-starting-entity strategy \\nresults in a limited conditional mes-\\nsage passing scope, leading to the failure of message passing from the head entity\\nto distant target entities. This inspires us to extend the scope of conditional mes-\\nsage passing to support reasoning on target entities in a farther area.\\nWe conduct an empirical study to analyze the drawbacks of the limited\\nmessage passing scope. 
Specifically, we report the results of a C-MPNN, RED-GNN [47], on predicting target entities at different distances in Table 1. It can be observed that RED-GNN performs poorly for queries with distant target entities, even when stacking more message-passing layers. This indicates that existing C-MPNNs cannot effectively propagate conditional messages toward distant target entities, hindering performance on these queries. Although stacking more GNN layers can alleviate this issue to some extent, it causes high computation costs and performance declines on queries with nearby target entities.

In this paper, we propose a novel inductive KG reasoning model MStar based on Multi-Starting progressive propagation, which expands the scope of efficient conditional message passing. Our key insight is to utilize more conditional starting entities and create shortcuts between the head entity and them. Specifically, we design a starting entities selection (SES) module and a highway layer to select multiple starting entities and create shortcuts for conditional message passing, respectively. First, the SES module encodes entities using a pre-embedded GNN and then selects multiple query-dependent starting entities, which may include entities distant from the head entity. These entities broaden the scope of subsequent progressive propagation and allow MStar to propagate along query-related relational paths to enhance reasoning concerning distant entities. Second, we create shortcuts from the head entity to the selected multiple starting entities in the highway layer. The design of the highway layer is inspired by the skip connection from ResNet [8]. The conditional message can be passed to distant entities through the highway layer.

[Fig. 1. A motivating example of distant target tail entities for predicting (Univ. of California, Berkeley → also_known_as → State Univ.). Prefixes "U", "S", and "T" represent university, state, and basketball team entities, respectively; prefix "C" represents category-type entities. Different colors and prefixes symbolize distinct entity types.]

For example, in Fig. 1, a 3-layer RED-GNN would fail to predict the target answer, because the length of the shortest path between the head entity Univ. of California, Berkeley and the target entity State Univ. is larger than 3. In contrast, our MStar can select multiple starting entities, e.g., Michigan State Univ. and The Ohio State Univ., and transmit conditional messages to them through the highway layer. Thus, MStar can achieve better reasoning performance than other C-MPNNs on this query. After the highway layer, we apply a multi-condition GNN to perform message passing based on the embeddings of the multiple starting entities. We also propose a training sample filtering strategy called LinkVerify to reduce the impact of unvisited target entities. Overall, MStar visits more query-related distant entities in limited steps and provides more conditional information to these entities compared with existing models.

Our main contributions in this paper are summarized as follows:
– We propose a novel inductive KG reasoning framework based on C-MPNNs, named MStar. It extends the scope of conditional message passing to improve the predictions of distant target entities.
– We design two modules, SES and the highway layer. The SES module performs starting entities selection for visiting distant entities. 
The highway layer pro-\\nvides shortcuts for efficient conditional message passing, alleviating compu-\\ntation waste during additional propagation.\\n–We conduct extensive experiments on inductive datasets to demonstrate the\\neffectivenessofourframeworkandeachmodule.The\\nresultsshowthatMStar\\noutperforms the existing state-of-the-art reasoning models and improves the\\nperformance on queries with distant target entities.\\nThe rest of this paper is organized as follows. We first discuss related works in\\nSection 2. Then, we describe the reasoning task and propagation mechanisms in4 Z. Shao et al.\\nSection 3. The details of MStar are presented in Section 4, and the experimental\\nresults are reported in Section 5. Finally, in Section 6, we discuss the superiority\\nof MStar and possible extensions in future work.\\n2 Related Work\\n2.1 Knowledge Graph Reasoning\\nKG reasoninghas been anactive research area dueto the incompleteness ofKGs.\\nTypical KG reasoning models process each triplet independently and extract the\\nlatent semantics of entities and relations. To model the semantics of the triplets,\\nTransE [2], TransH [39], TransR [17], and RotatE [29] compute translational dis-\\ntance variously. RESCAL [22], DistMult [44], and ComplEx [35] follow another\\nreasoning paradigm based on semantic matching. Instead of exploring the infor-\\nmationimpliedinasingletriplet,R-GCN[28]andCompGCN[36]captureglobal\\nstructure evidence based on graph neural networks (GNNs). These models, how-\\never, learn unary fixed embedding from training, which cannot be generalized to\\nemerging entities in the inductive KGs. Instead, our model embodies relational\\ninformation to encode emerging entities.\\n2.2 Inductive Knowledge Graph Reasoning\\nOne research line of inductive KG reasoning is rule mining, independent of en-\\ntity identities. RuleN [20] and AnyBURL [19] try to prune the process of rule\\nsearching. 
Neural LP [45] and DRUM [27] propose to learn logical rules in an\\nend-to-end differentiable manner, learning weights for each relation type and\\npath. However, the rules are usually short due to the expensive computation for\\nmining and may not be generalized to distant entities.\\nAnother research line is subgraph extraction. GraIL [32] extracts subgraphs\\naround each candidate triplet and labels the entities with the distance to the\\nhead and tail entities. CoMPILE [18], TACT [3], SNRI [43], LogCo [23], and\\nConGLR [16] follow a similar subgraph-labeling paradigm. However, the sub-\\ngraphs that these models extract convey insufficient information due to sparsity.\\nThese models constitute our baselines for inductive KG reasoning.\\n2.3 Conditional Message Passing Neural Networks\\nRecently, a variant of GNNs called conditional message passing neural networks\\n(C-MPNNs) [10] propagates messages along the relational paths and encodes\\npairwise entity embeddings. Given a query head uand a query relation qas\\nconditions, C-MPNNs compute embeddings of (v|u, q)for all entity v. To incor-\\nporate conditions into embeddings, NBFNet [50] and A*Net [49] initialize the\\nhead entity with the embedding of query relation and propagate in the full KG\\nforeachGNNlayer.However,conditionalinformationpassingisstillrestrictedin\\nthe neighborhood of the head entity. Differently, RED-GNN [47], AdaProp [48],Inductive Knowledge Graph Reasoning with MStar 5\\nand RUN-GNN [41] propagate the message progressively starting from the head\\nentitywithoutspecialinitialization.Duringprogressivepropagation,theinvolved\\nentity set is augmented step by step with the neighbor entities of the current\\nset instead of being a full entity set. Thus, progressive propagation cannot even\\nvisit distant entities in limited steps. 
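To make the reach limitation concrete, the growth of the visited entity set in progressive propagation can be sketched in a few lines of Python (the toy chain KG and entity names below are illustrative, not from any benchmark):

```python
# Toy sketch: the visited set of progressive propagation grows one hop per
# layer, so entities farther than L hops from every starting entity are
# never reached by an L-layer model.

def visited_sets(edges, starting_entities, num_layers):
    """Return the set of visited entities after each propagation layer."""
    visited = set(starting_entities)
    history = [set(visited)]
    for _ in range(num_layers):
        # One layer adds the out-neighbors of every currently visited entity.
        frontier = {x for (e, _, x) in edges if e in visited}
        visited |= frontier
        history.append(set(visited))
    return history

# A chain KG: u -> a -> b -> c -> d (relation labels do not matter here).
edges = [("u", "r", "a"), ("a", "r", "b"), ("b", "r", "c"), ("c", "r", "d")]

# Single starting entity (RED-GNN style): three layers never reach "d".
single = visited_sets(edges, {"u"}, 3)[-1]

# Multiple starting entities (MStar style): adding "c" brings "d" in reach.
multi = visited_sets(edges, {"u", "c"}, 3)[-1]
```

Here `single` lacks `"d"` while `multi` contains it, mirroring why extra query-dependent starting entities extend the reachable area within the same number of layers.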
MStar alleviates the above problem by\\nselecting multiple starting entities adaptively for progressive propagation and\\ntransmitting conditional information through shortcuts.\\nEL-GNN [25] is another work related to C-MPNNs. This study proposes that\\nC-MPNNs learn the rules of treating the head entity as constant when the head\\nentity is initialized with conditional information. Thus, EL-GNN learns more\\nrules by assigning unique embeddings for entities whose out-degree in the KG\\nreaches a specific threshold. However, the degree and entity-specific embeddings\\nare fixed, which violates the nature of inductive KG reasoning. Our MStar se-\\nlects starting entities according to the query and generates conditional entity\\nembeddings, which can be applied to unseen entities.\\n2.4 Skip Connection\\nSkip connection [8] is a popular technique in deep learning that skips one or\\nmore layers. Skipping layers contributes to addressing vanishing or exploding\\ngradients [31] by providing a highway for the gradients. ResNet [8] constructs\\nthe highway by adding input xand output F(x). DenseNet [9] provides multiple\\nhighways by concatenating the input of each layer. These models transmit the\\ninput in shallow layers directly to the target deeper layer in an efficient way.\\nInspired by skip connection, MStar constructs a highway with several new edges\\nto transmit messages faster and propagate to farther entities.\\n3 Preliminaries\\nKnowledge Graph A KG G= (E,R,F)is composed of finite sets of entities\\nE, relations R, and triplets F. Each triplet f∈ Fdescribes a fact from head\\nentity to tail entity with a specific relation, i.e., f= (u, q, v )∈ E ×R×E , where\\nu,q, and vdenote the head entity, relation, and tail entity, respectively.\\n(Inductive) Knowledge Graph Reasoning To complete the missing triplet\\nin real-world KGs, KG reasoning is proposed to predict the target tail entity\\nor head entity with a given query (u, q,?)or(?, q, v). 
Given a source KG G = (E, R, F), inductive KG reasoning aims to predict the triplets involved in the target KG G′ = (E′, R′, F′), where R′ ⊆ R, E′ ⊄ E, and F′ ⊄ F.

Starting Entities in Progressive Propagation GNNs transmit messages based on the message propagation framework [7,40]. This framework prepares an entity set to transmit messages for each propagation step. Full propagation transmits messages among all entities at all times. Progressive propagation continuously incorporates the neighbor entities of the entity set from the previous step. Based on progressive propagation, we use starting entities to indicate the entities involved in the first layer of the GNN. Given the starting entities $\mathcal{S}$, the entities involved in the $\ell$-th layer of the GNN can be formulated as
$$\mathcal{V}^{\ell} = \begin{cases} \mathcal{S} & \ell = 0, \\ \mathcal{V}^{\ell-1} \cup \bigl\{\, x \mid \exists (e, r, x) \in \mathcal{N}(e) \wedge e \in \mathcal{V}^{\ell-1} \,\bigr\} & \ell > 0, \end{cases}$$
where $\mathcal{N}(e)$ denotes the neighbor edges of the entity $e$. In particular, NBFNet puts all entities into $\mathcal{S}$, i.e., $\mathcal{S} = \mathcal{E}$. RED-GNN puts only the head entity into $\mathcal{S}$, i.e., $\mathcal{S} = \{u\}$ for a given query $(u, q, ?)$. Too few starting entities limit the scope of conditional message passing. On the contrary, too many starting entities disperse the attention of the GNN from local information, which is critical for reasoning. Our model MStar strikes a balance by including the head entity and some selected query-dependent starting entities that are helpful for reasoning.

4 Methodology

4.1 Model Architecture Overview

[Fig. 2. Framework overview of MStar: the pre-embedded GNN produces pre-embeddings; the SES module selects n starting entities (e.g., n = 6, m = 3); the highway layer classifies them into types t1, ..., tm, adds shortcut relations r′1, ..., r′m, and produces conditional embeddings as initialization; the multi-condition GNN then propagates efficiently over V0, V1, ...; and an MLP decoder outputs entity scores.]

The overview of MStar is presented in Fig. 2. Specifically, we first employ the pre-embedded GNN to pre-encode all entities.
Then, SES selects n query-dependent starting entities according to the pre-embeddings. The highway layer classifies the starting entities into m types, considering the correlation between the head entity and the other starting entities. To improve message-passing efficiency, the highway layer maps each entity type to a new relation and constructs shortcut edges between the head entity and the other starting entities. Based on message passing over the shortcut edges, we use the highway layer to obtain conditional entity embeddings as the initialization for the multi-condition GNN. Finally, the multi-condition GNN propagates relational information progressively, conditioned on these starting entities, and generates pairwise embeddings for each entity. According to the final entity embeddings, the decoder operates as a multilayer perceptron (MLP) and generates scores for each candidate entity.

4.2 Starting Entities Selection

As shown in Fig. 1, progressive propagation starts from only one entity (the head entity) and cannot reach distant entities. However, the excessive utilization of starting entities introduces noisy relational paths into the reasoning. Despite the expansion of the propagation, some starting entities still miss the target entities and visit other distant entities unrelated to the query. Thus, we propose to select multiple query-dependent starting entities adaptively to cover a farther area without introducing irrelevant noise into the reasoning.

Pre-Embedded GNN To find the starting entities related to the query, we first introduce a pre-embedded GNN to learn the simple semantics of the entities. The pre-embedded GNN transmits messages among all entities in the KG following the full propagation paradigm.
To explore query-related knowledge, the pre-embedded GNN encodes the relation conditioned on the query relation q. Specifically, the computation for message passing is given by
$$h^{\ell}_{\mathrm{pre}|u,q}(e) = \frac{1}{|\mathcal{N}(e)|} \sum_{(e,r,x) \in \mathcal{N}(e)} \Bigl( h^{\ell-1}_{\mathrm{pre}|u,q}(x) + \hat{r}_q \Bigr), \qquad (1)$$
$$\hat{r}_q = W_r q + b_r, \qquad (2)$$
where $h^{\ell}_{\mathrm{pre}|u,q}(e)$ denotes the embedding of the entity $e$ in propagation step $\ell$, $q$ is a learnable embedding for relation $q$, $W_r \in \mathbb{R}^{d \times d}$ is an $r$-specific learnable weight matrix, and $b_r \in \mathbb{R}^{d}$ is an $r$-specific learnable bias. $d$ is the dimension of both entity and relation embeddings. $\hat{r}_q$ denotes the embedding of relation $r$ conditioned on $q$. The pre-embedded GNN initializes $h^{0}_{\mathrm{pre}|u,q}$ as zero vectors and produces the entity embeddings $h^{L_1}_{\mathrm{pre}|u,q}$ after $L_1$ layers of message passing.

Selection Provided with the embeddings of entities conditioned on $u$ and $q$, we design a score function to select query-dependent starting entities. The score function measures the importance of entities relative to the head entity and the query relation. Given an entity $e$, the importance score $\alpha_{e|u,q}$ is defined as
$$\alpha_{e|u,q} = W_1 \Bigl( \mathrm{ReLU} \Bigl( W_2 \bigl( h^{L_1}_{\mathrm{pre}|u,q}(e) \oplus h^{L_1}_{\mathrm{pre}|u,q}(u) \oplus q \bigr) \Bigr) \Bigr), \qquad (3)$$
where $W_1 \in \mathbb{R}^{1 \times d}$ and $W_2 \in \mathbb{R}^{d \times 3d}$ are learnable weight matrices, and $\oplus$ denotes the concatenation of two vectors. We keep the top-$n$ entities as the starting entity set $\mathcal{S}_{u,q}$, which can propagate along the relational paths conditioned on the query.

4.3 Highway Layer

Given multiple starting entities, progressive propagation can traverse more entities, particularly those located at distant positions. The distant entities, however, receive nothing about the conditional information, due to the limited scope of conditional message passing.
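The scoring and top-n selection of Eq. (3) can be sketched as follows; this is a minimal NumPy illustration in which the pre-embeddings and the matrices W1 and W2 are random stand-ins for the learned parameters:

```python
import numpy as np

# Minimal sketch of SES scoring (Eq. (3)): score every entity against the
# head entity and query relation, then keep the top-n as starting entities.
rng = np.random.default_rng(0)
d, num_entities, n = 8, 10, 4

h_pre = rng.normal(size=(num_entities, d))  # stand-in for h^{L1}_{pre|u,q}
q_emb = rng.normal(size=d)                  # stand-in query relation embedding
W1 = rng.normal(size=(1, d))                # stand-in learned parameters
W2 = rng.normal(size=(d, 3 * d))
u = 0                                       # index of the head entity

def importance_scores(h_pre, u, q_emb, W1, W2):
    """alpha_{e|u,q} = W1 ReLU(W2 (h(e) ⊕ h(u) ⊕ q)) for every entity e."""
    head = np.tile(h_pre[u], (len(h_pre), 1))
    query = np.tile(q_emb, (len(h_pre), 1))
    feats = np.concatenate([h_pre, head, query], axis=1)  # (num_entities, 3d)
    hidden = np.maximum(feats @ W2.T, 0.0)                # ReLU
    return (hidden @ W1.T).ravel()                        # (num_entities,)

scores = importance_scores(h_pre, u, q_emb, W1, W2)
starting = set(np.argsort(-scores)[:n])  # top-n entity set S_{u,q}
starting.add(u)                          # MStar always includes the head
```

In the real model these scores come from the pre-embedded GNN of Eqs. (1)–(2) rather than random vectors.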
Inspired by the skip connection [8], which allows skip-layer feature propagation, we introduce a highway layer to tackle this issue. Aiming to propagate conditional information to the starting entities, we consider constructing shortcut edges between the query head entity and the other starting ones. Due to the different semantics of the starting entities, we classify entities into m types based on their embeddings. Each type indicates that this group of entities has a specific semantic relationship with the head entity. Then, we map each entity type to a new semantic relation type and construct new edges. Given conditions $u$, $q$ and entity $e$, the entity type is defined as follows:
$$\beta_{e|u,q} = \arg\max_{t} \, W_t \, h^{L_1}_{\mathrm{pre}|u,q}(e), \quad t \in [1, m], \qquad (4)$$
where $t$ is a type of starting entities, and $W_t \in \mathbb{R}^{1 \times d}$ is a $t$-specific learnable weight matrix.

Given the starting entity types, the highway layer constructs shortcut edges as
$$\mathcal{H}_{u,q} = \bigl\{ (u, r'_{\beta_{e|u,q}}, e) \mid e \in \mathcal{S}_{u,q} - \{u\} \bigr\}, \qquad (5)$$
where $r'_{\beta_{e|u,q}}$ denotes the new relation that we introduce, corresponding to the starting entity type. These edges act as a skip connection to support skipping propagation from the head to the starting entities.

Finally, the highway layer performs message passing on $\mathcal{H}_{u,q}$ to obtain the embeddings of the selected starting entities:
$$g_{u,q}(e) = \sum_{(e,r,x) \in \mathcal{N}_{\mathrm{highway}}(e)} g_{u,q}(x) \odot \hat{r}_q, \qquad (6)$$
where $g_{u,q}(e)$ denotes the embedding of entity $e$, $\mathcal{N}_{\mathrm{highway}}(e)$ denotes the neighbor edges of the entity $e$ in the set $\mathcal{H}_{u,q}$, and $\odot$ denotes the point-wise product of two vectors. To satisfy target entity distinguishability [10], we set a learnable embedding for the head entity $u$.

4.4 Multi-Condition GNN

In MStar, we introduce a multi-condition GNN to produce the final entity embeddings. The multi-condition GNN is a C-MPNN conditioned on the head entity and query relation.
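A rough sketch of the highway layer (Eqs. (4)–(6)): type each selected starting entity, add one shortcut edge per entity, and initialize it from the head embedding. All tensors below are random stand-ins, and aggregating over a single shortcut edge is a simplification of Eq. (6):

```python
import numpy as np

# Illustrative highway-layer sketch: entity typing (Eq. (4)), shortcut edge
# construction (Eq. (5)), and initialization via pointwise product (Eq. (6)).
rng = np.random.default_rng(1)
d, m = 8, 3  # embedding dim, number of starting-entity types

h_pre = {"u": rng.normal(size=d), "a": rng.normal(size=d), "b": rng.normal(size=d)}
W_type = rng.normal(size=(m, d))                    # stacked W_t rows, t = 1..m
r_hat = {t: rng.normal(size=d) for t in range(m)}   # stand-ins for r'_t embeddings
head_emb = rng.normal(size=d)                       # learnable head embedding

def entity_type(e):
    """beta_{e|u,q} = argmax_t W_t h_pre(e)."""
    return int(np.argmax(W_type @ h_pre[e]))

starting = ["u", "a", "b"]
shortcuts = [("u", entity_type(e), e) for e in starting if e != "u"]  # H_{u,q}

# g_{u,q}: the head keeps its learnable embedding; each other starting entity
# receives the head message modulated by its shortcut relation embedding.
g = {"u": head_emb}
for (_, t, e) in shortcuts:
    g[e] = head_emb * r_hat[t]
```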
Specifically, the multi-condition GNN initializes the entity embeddings $h^{0}_{u,q}$ as $g_{u,q}$ and propagates from the starting entities progressively. Given the query triplet $(u, q, ?)$, we incorporate the query information into the propagation in two ways.

First, we model the embedding of relation $r$ in an edge as $\hat{r}_q$ conditioned on the query relation $q$, the same as in Eq. (2). Second, considering that the semantics of edges are query-dependent, we use the attention mechanism [37] and assign a weight to every edge $(e, r, x)$ in step $\ell$:
$$\gamma^{\ell}_{(e,r,x)|u,q} = \sigma \Bigl( W^{\ell}_{\mathrm{attn}} \, \mathrm{ReLU} \bigl( W^{\ell}_{\mathrm{attn}\,u} h^{\ell-1}_{u,q}(e) + W^{\ell}_{\mathrm{attn}\,r} \hat{r}_q + W^{\ell}_{\mathrm{attn}\,q} q \bigr) \Bigr), \qquad (7)$$
where $W^{\ell}_{\mathrm{attn}} \in \mathbb{R}^{1 \times d_\gamma}$, and $W^{\ell}_{\mathrm{attn}\,u}$, $W^{\ell}_{\mathrm{attn}\,r}$, $W^{\ell}_{\mathrm{attn}\,q} \in \mathbb{R}^{d_\gamma \times d}$ are learnable weight matrices, $d_\gamma$ is the dimension of the attention, $h^{\ell}_{u,q}(e)$ denotes the embedding of the entity $e$ in the multi-condition GNN at step $\ell$, and $\sigma$ denotes the sigmoid function.

Based on the two ways above, the entity embeddings are given by
$$h^{\ell}_{u,q}(e) = \mathrm{ReLU} \Bigl( W^{\ell}_{o} \sum_{(e,r,x) \in \mathcal{N}(e) \wedge \{e,x\} \subset \mathcal{V}^{\ell}_{u,q}} \gamma^{\ell}_{(e,r,x)|u,q} \bigl( h^{\ell-1}_{u,q}(x) \odot \hat{r}_q \bigr) \Bigr), \qquad (8)$$
where $W^{\ell}_{o} \in \mathbb{R}^{d \times d}$ is a learnable weight matrix, $\mathcal{V}^{\ell}_{u,q}$ is the entity set in progressive propagation step $\ell$, and $\mathcal{V}^{0}_{u,q} = \mathcal{S}_{u,q}$.

4.5 Training Strategy: LinkVerify

To score the likelihood of a triplet $(u, q, e)$, the decoder produces a score function $s(\cdot)$. Given the final output $h^{L_2}_{u,q}$ after $L_2$ layers of the multi-condition GNN, the score function is given by
$$s(u, q, e) = W_3 \bigl( \mathrm{ReLU} \bigl( W_4 \bigl( h^{L_2}_{u,q}(u) \oplus h^{L_2}_{u,q}(e) \bigr) \bigr) \bigr), \qquad (9)$$
where $W_3 \in \mathbb{R}^{1 \times d}$ and $W_4 \in \mathbb{R}^{d \times 2d}$ are learnable weight matrices. However, the multi-condition GNN propagates progressively and probably misses several distant target tail entities during training.
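One step of the attention-weighted propagation in Eqs. (7)–(8) can be sketched as follows; this is a toy NumPy illustration with random stand-in parameters, not the released implementation:

```python
import numpy as np

# Toy sketch of one multi-condition GNN step: each edge (e, r, x) gets a
# query-conditioned attention weight (Eq. (7)), and messages h(x) ⊙ r̂_q are
# aggregated, linearly transformed, and passed through ReLU (Eq. (8)).
rng = np.random.default_rng(2)
d = 6

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_attn = rng.normal(size=(1, d))
W_u, W_r, W_q = (rng.normal(size=(d, d)) for _ in range(3))
W_o = rng.normal(size=(d, d))

def gnn_step(h_prev, edges, r_hat, q_emb):
    """One progressive step over edges whose endpoints are both visited."""
    h_next = {}
    for e in h_prev:
        agg = np.zeros(d)
        for (src, r, x) in edges:
            if src == e and x in h_prev:
                gamma = sigmoid(W_attn @ np.maximum(
                    W_u @ h_prev[e] + W_r @ r_hat[r] + W_q @ q_emb, 0.0))
                agg += float(gamma) * (h_prev[x] * r_hat[r])
        h_next[e] = np.maximum(W_o @ agg, 0.0)  # ReLU
    return h_next

h0 = {"u": rng.normal(size=d), "a": rng.normal(size=d)}
r_hat = {"r1": rng.normal(size=d)}
q_emb = rng.normal(size=d)
h1 = gnn_step(h0, [("u", "r1", "a")], r_hat, q_emb)
```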
In this situation, the prediction knows nothing about the target tail entity and yields a noisy score for training.

To alleviate the problem above, we propose a mechanism, LinkVerify, to filter noisy training samples. A noisy sample is a triplet whose target tail entity is not involved in $\mathcal{V}^{L_2}_{u,q}$. Taking the inductive KG reasoning task as a multi-label classification problem, we use the multi-class log-loss [14,47] to optimize the model. Combined with LinkVerify, the final loss is given by
$$\mathcal{L} = \sum_{(u,q,v) \in \mathcal{F}} \Bigl( -s(u, q, v) + \log \sum_{e \in \mathcal{E}} \exp \bigl( s(u, q, e) \bigr) \Bigr) \times \mathbb{1} \bigl( v \in \mathcal{V}^{L_2}_{u,q} \bigr). \qquad (10)$$

5 Experiments

In this section, we perform extensive experiments to answer the questions below:
– Q1: Does MStar perform well on inductive KG reasoning?
– Q2: How does each designed module influence the performance?
– Q3: Can MStar improve reasoning about distant entities?

Table 2. Statistics of the inductive datasets. G and G′ denote the KGs in the training and test sets, respectively. Columns per dataset: |R|, |V|, |F|.

Version | KG | FB15k-237 | NELL-995 | WN18RR
v1 | G | 183, 2,000, 5,226 | 14, 10,915, 5,540 | 9, 2,746, 6,678
v1 | G′ | 146, 1,500, 2,404 | 14, 225, 1,034 | 9, 922, 1,991
v2 | G | 203, 3,000, 12,085 | 88, 2,564, 10,109 | 10, 6,954, 18,968
v2 | G′ | 176, 2,000, 5,092 | 79, 4,937, 5,521 | 10, 2,923, 4,863
v3 | G | 218, 4,000, 22,394 | 142, 4,647, 20,117 | 11, 12,078, 32,150
v3 | G′ | 187, 3,000, 9,137 | 122, 4,921, 9,668 | 11, 5,084, 7,470
v4 | G | 222, 5,000, 33,916 | 77, 2,092, 9,289 | 9, 3,861, 9,842
v4 | G′ | 204, 3,500, 14,554 | 61, 3,294, 8,520 | 9, 7,208, 15,157

5.1 Experiment Settings

Datasets We perform inductive KG reasoning experiments on the benchmark datasets proposed in GraIL [32], which are derived from WN18RR [6], FB15k-237 [34], and NELL-995 [42]. Each benchmark dataset is divided into four versions (v1, v2, v3, v4), and the size typically increases with the version number.
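Per training triplet, the LinkVerify-filtered loss of Eq. (10) reduces to a multi-class log-loss gated by an indicator on the visited set; a minimal sketch:

```python
import numpy as np

# Sketch of the per-triplet term of Eq. (10): a sample contributes only when
# its target tail entity was visited, i.e. the indicator 1(v ∈ V^{L2}_{u,q})
# drops noisy samples whose target the propagation never reached.

def linkverify_loss(scores, target, visited):
    """scores: array of s(u,q,e) over all entities e; target: index of v."""
    if target not in visited:  # LinkVerify: filter out unvisited targets
        return 0.0
    # Numerically stable log-sum-exp over all candidate entities.
    m = scores.max()
    logsumexp = np.log(np.sum(np.exp(scores - m))) + m
    return float(-scores[target] + logsumexp)

scores = np.array([2.0, 0.5, -1.0, 0.0])
filtered = linkverify_loss(scores, target=2, visited={0, 1})  # dropped -> 0.0
kept = linkverify_loss(scores, target=0, visited={0, 1})
```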
Each version consists of training and test graphs without overlapping enti-\\nties. The training graphs contain triplets for training and validation, following a\\nsplit ratio of 10:1. The statistics of the datasets are presented in Table 2.\\nBaselines We compare MStar with 10 inductive baselines organized into three\\ngroups, including (i) three rule-based models: RuleN [20], Neural LP [45], and\\nDRUM [27]; (ii) two subgraph-based models: GraIL [32] and CoMPILE [18];\\n(iii) five C-MPNN-based models: NBFNet [50], A*Net [49], RED-GNN [47],\\nAdaProp [48], and RUN-GNN [41].\\nEvaluationandTiePolicy Following[47–49],weevaluateallthemodelsusing\\nthe filtered mean reciprocal rank (MRR) and Hits@10 metrics. The best models\\nare chosen according to MRR on the validation dataset. Subgraph-based models\\ntypically rank each test triplet among 50 randomly sampled negative triplets,\\nwhereas C-MPNNs evaluate each triplet against all possible candidates. In this\\npaper, we follow the latter and take the \\nresults of rule-based and subgraph-based\\nmodels from [48]. Missing \\nresults are reproduced by their official code.\\nThere are different tie policies [30] to compute MRR when several candidate\\nentities receive equal scores. In progressive propagation, all unvisited entities are\\nassigned identical scores. Following [41,47], we measure the average rank among\\nthe entities in the tie, as suggested in [26]. To keep the tie policy consistent, we\\nre-evaluate AdaProp using the official code.Inductive Knowledge Graph Reasoning with MStar 11\\nTable 3. Inductive KG reasoning \\nresults (measured with MRR). The best scores are\\ninboldand the second-best scores are underlined. 
“-” denotes the result unavailable,\\nand values with suffix “ ⋆” are reproduced using the released code.\\nModelsFB15k-237 NELL-995 WN18RR\\nv1 v2 v3 v4 v1 v2 v3 v4 v1 v2 v3 v4\\nRuleN .363 .433 .439 .429 .615 .385 .381 .333 .668 .645 .368 .624\\nNeural LP .325 .389 .400 .396 .610 .361 .367 .261 .649 .635 .361 .628\\nDRUM .333 .395 .402 .410 .628 .365 .375 .273 .666 .646 .380 .627\\nGraIL .279 .276 .251 .227 .481 .297 .322 .262 .627 .625 .323 .553\\nCoMPILE .287 .276 .262 .213 .330 .248 .319 .229 .577 .578 .308 .548\\nNBFNet .270 .321 .335 .288 .584 .410 .425 .287 .686 .662 .410 .601\\nA*Net - - - - - - - - - - - -\\nRED-GNN .341 .411 .411 .421 .591⋆.373⋆.391⋆.195⋆.693 .687 .422 .642\\nAdaProp .279⋆.467⋆.470⋆.440⋆.725⋆.416⋆.413⋆.338⋆.706⋆.703⋆.433⋆.651⋆\\nRUN-GNN .397.473 .468 .463 .617⋆.413⋆.479⋆.282⋆.699 .697 .445 .654\\nMStar .458 .526 .506 .487 .787 .540 .496 .384 .733 .702.442 .645\\nTable 4. Inductive KG reasoning \\nresults (measured with Hits@10)\\nModelsFB15k-237 NELL-995 WN18RR\\nv1 v2 v3 v4 v1 v2 v3 v4 v1 v2 v3 v4\\nRuleN .446 .599 .600 .605 .760 .514 .531 .484 .730 .694 .407 .681\\nNeural LP .468 .586 .571 .593 .871 .564 .576 .539 .772 .749 .476 .706\\nDRUM .474 .595 .571 .593 .873 .540 .577 .531 .777 .747 .477 .702\\nGraIL .429 .424 .424 .389 .565 .496 .518 .506 .760 .776 .409 .687\\nCoMPILE .439 .457 .449 .358 .575 .446 .515 .421 .747 .743 .406 .670\\nNBFNet .530 .644 .623 .642 .795 .635 .606 .591.827.799.568.702\\nA*Net .535 .638 .610 .630 - - - - .810 .803.544.743\\nRED-GNN .483 .629 .603 .621 .866⋆.601⋆.594⋆.556⋆.799 .780 .524 .721\\nAdaProp .461⋆.665⋆.636⋆.632⋆.776⋆.618⋆.580⋆.589⋆.796⋆.792⋆.532⋆.730⋆\\nRUN-GNN .496 .639 .631 .665.833⋆.575⋆.659⋆.436⋆.807 .798 .550.735\\nMStar .583 .702 .675 .665 .900 .735 .666 .617 .817.803.547 .726\\nImplementation Details We implement our model using the PyTorch frame-\\nwork [24] and employ the Adam optimizer [13] for training. 
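The average-rank tie policy used in the evaluation above can be sketched generically as follows (illustrative code, not the authors' evaluation script):

```python
# Sketch of MRR under the average-rank tie policy: candidates whose scores
# equal the target's share one averaged rank instead of an optimistic one.

def average_rank(scores, target):
    """1-based rank of `target` among `scores` (a dict), averaging over ties."""
    s = scores[target]
    higher = sum(1 for v in scores.values() if v > s)
    tied = sum(1 for v in scores.values() if v == s)  # includes the target
    return higher + (tied + 1) / 2.0

def mrr(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

# In progressive propagation all unvisited entities get the same score,
# so they tie at the bottom of the ranking.
scores = {"a": 0.9, "b": 0.0, "c": 0.0, "d": 0.0}
rank_b = average_rank(scores, "b")  # b, c, d tie -> rank (2 + 3 + 4) / 3
```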
Due to the relatively\\nsmall size of the inductive dataset and its susceptibility to overfitting, we apply\\nearly stopping to mitigate this issue. We tune the hyper-parameters using grid\\nsearch and select the number of starting entities nin{1,2,4,8,16,32,64}, the\\nnumber of starting entity types min{2,3,5,7,9}. The best hyperparameters are\\nselected according to the MRR metric on the validation sets. All experiments\\nare conducted on a single NVIDIA RTX A6000 GPU with 48GB memory.12 Z. Shao et al.\\n5.2 Main \\nResults (Q1)\\nTables 3 and 4 depict the performance of different models on inductive KG rea-\\nsoning. MStar demonstrates the best performance across all metrics on FB15k-\\n237 and NELL-995, and compares favorably with the top models on WN18RR.\\nWe observe that (i) subgraph-based models typically perform poorly. This is\\nbecause subgraphs are often sparse or empty and provide less information, par-\\nticularly for distant entities. (ii) Rule-based models are generally more competi-\\ntive but are still weaker compared to C-MPNN-based models. However, DRUM\\noutperformsexistingmodelsexceptMStarinHits@10onNELL-995(v1).NELL-\\n995 (v1) is a special dataset and the distance between the head and tail entities\\nfor all triplets in the test graph is no longer than 3, which is very short. Thus, we\\nconjecture that the length of the learned rules limits the reasoning capabilities\\nof rule-based models. Differently, MStar holds an edge over these two groups of\\nmodels on all datasets. This suggests that multiple starting entities in MStar\\nalleviate the distance limit issues as much as possible when reasoning.\\nCompared with the best C-MPNN-based \\nresults, MStar achieves an aver-\\nage relative gain of 9.9% in MRR, 5.2% in Hits@10 on FB15k-237, and 13.9% in\\nMRR, 6.1% in Hits@10 on NELL-995. 
Existing C-MPNN-based models typically\\nuse all entities in the KG or only the head entity as starting entities, without pro-\\nviding conditional information to distant entities, which can introduce excessive\\nnoise or lack sufficient information. Instead, our MStar selects multiple query-\\ndependentstartingentitiesadaptivelyandpropagatesconditionsfartherthrough\\nthe highway for accurate reasoning. Moreover, LinkVerify in MStar additionally\\nreduces noisy samples in training. We also observe that the improvement of the\\nmodel on WN18RR is not as pronounced as on the other datasets. To provide\\ninsights into this phenomenon, we conduct further analysis in Section 5.4.\\n5.3 Ablation Study\\nVariantsofMStar(Q2) Inthissection,wedesignseveralvariantsofMStarto\\nstudy the contributions of three components: (i) selection, (ii) highway, and (iii)\\nLinkVerify in training. The \\nresults are summarized in Tables 5 and 6, which indi-\\ncate that all components contribute significantly to MStar on the three datasets.\\nFirst, the variant of w/o selection propagates only from the head entity which\\nis the same as RED-GNN. According to the \\nresults, removing selection signif-\\nicantly decreases performance, highlighting the effectiveness of using multiple\\nstarting entities to explore reasoning patterns across a broader neighborhood.\\nSecond, it can be observed that the performance of variant w/o highway is\\nworse than MStar. This observation suggests that transmitting query-dependent\\ninformation to the starting entities is a promising approach to expedite propa-\\ngation for conditions and enhance reasoning accuracy.\\nThird, the variant of w/o LinkVerify is inferior to MStar all the time, as\\ntriplets with unvisited target entities in training KG introduce noise. Removing\\nLinkVerify \\nresults in poorer performance, especially on smaller datasets. ForInductive Knowledge Graph Reasoning with MStar 13\\nTable 5. 
Ablation study of the proposed framework (measure with MRR)\\nModelsFB15k-237 NELL-995 WN18RR\\nv1 v2 v3 v4 v1 v2 v3 v4 v1 v2 v3 v4\\nMStar .458 .526 .506 .487 .787 .540 .496 .384 .733 .702 .442 .645\\nw/o Selection .432 .491 .483 .457 .719 .479 .457 .280 .721 .674 .432 .643\\nw/o Highway .411 .488 .460 .474 .774 .473 .494 .297 .726 .700 .438 .629\\nw/o LinkVerify .426 .517 .498 .481 .661 .502 .482 .375 .729 .698 .420 .641\\nTable 6. Ablation study of the proposed framework (measured with Hits@10)\\nModelsFB15k-237 NELL-995 WN18RR\\nv1 v2 v3 v4 v1 v2 v3 v4 v1 v2 v3 v4\\nMStar .583 .702 .675 .665 .900 .735 .666 .617 .817 .803 .547 .726\\nw/o Selection .534 .686 .644 .629 .775 .693 .619 .425 .811 .778 .528 .717\\nw/o Highway .532 .657 .609 .644 .855 .682 .648 .532 .814 .788 .543 .698\\nw/o LinkVerify .568 .699 .657 .658 .785 .695 .645 .608 .811 .797 .508 .724\\nTable 7. Per-distance evaluation on FB15k-237 (v1) (measured with Hits@10). “ ∞”\\nindicates that the head entity fails to reach the tail entity.\\nDistance Proportions RED-GNN AdaProp RUN-GNN NBFNet MStar\\n1 32.68% .813 .933 .851 .545 .948\\n2 12.20% .640 .520 .740 .760 .780\\n3 25.37% .433 .269 .414 .490 .471\\n4 7.32% .000 .000 .267 .333 .300\\n5 11.22% .000 .000 .217 .261 .174\\n6 3.90% .000 .000 .000 .438 .188\\n7 1.46% .000 .000 .000 .333 .000\\n8 1.46% .000 .000 .000 .333 .167\\n9 0.00% .000 .000 .000 .000 .000\\n10 0.98% .000 .000 .000 .250 .000\\n∞ 3.41% .000 .000 .000 .357 .214\\ninstance,w/oLinkVerifydecreases7.0%forFB15k-237(v1)and1.3%forFB15k-\\n237(v4)relatively.Thisisbecausethenoisytriplets negativelyinfluencetraining\\nwhen data is lacking. Thus, LinkVerify demonstrates to be more effective when\\napplied to KGs with fewer triplets.\\nPer-distance Performance (Q3) To check the reasoning ability on distant\\ntail entities, we compare MStar with several expressive models on FB15k-237\\n(v1). 
To make the comparison more precise, we split FB15k-237 (v1) into 11 subsets according to the shortest distance between the head and tail entity for each triplet. The comparisons are conducted on each subset based on official code and parameters. RED-GNN, AdaProp and MStar use 3 layers of GNN. RUN-GNN and NBFNet use 5 and 6 layers of GNN, respectively. The results are shown in Table 7.\\n14 Z. Shao et al.\\nTable 8. Proportions of long-distance triplets in the KGs. The shortest distance between head and tail entities in a long-distance triplet is longer than 3.\\nDatasets FB15k-237 NELL-995 WN18RR\\nVersions G G′ G G′ G G′\\nv1 15.78% 29.76% 39.64% 0.00% 34.31% 17.55%\\nv2 8.69% 15.48% 10.62% 2.52% 20.86% 16.33%\\nv3 3.41% 4.51% 11.16% 3.96% 22.32% 26.94%\\nv4 2.39% 2.74% 9.30% 6.98% 22.39% 20.50%\\nCompared to the models with a single starting entity (RED-GNN, AdaProp, and RUN-GNN), MStar performs significantly better on distant entities. For instance, RED-GNN fails to predict entities beyond 3 hops. Moreover, MStar can even reason about unreachable target entities. This is because MStar can select query-related starting entities that are disconnected from the head entity but in the neighborhood of the unreachable entities. These observations demonstrate that multiple starting entities can expand the reasoning area effectively, and the highway layer provides additional evidence for reasoning about distant entities.\\nIn contrast, the reasoning performance of NBFNet on close entities is significantly decreased despite its ability to reason about distant entities. For instance, NBFNet is inferior to the other models on Hits@10 for 1-distance triplets with a great gap of at least 0.268. This is because NBFNet propagates from query-independent starting entities and reasons along many noisy relational paths, which disrupts the inference about close entities.
Instead, MStar improves the reasoning performance for distant entities and keeps the reasoning abilities for close entities simultaneously. This is achieved due to MStar propagating conditions along query-related relational paths and removing noisy links by LinkVerify.\\n5.4 Further Analysis\\nPerspective of Datasets As shown in Tables 5 and 6, the improvement of MStar on WN18RR is not as great as the one on other datasets. As can be seen from Table 2, WN18RR (v1, v2, v3, v4) and NELL-995 (v1) have fewer relations. Due to the entity-independent nature of inductive KG reasoning, entity embeddings usually rely on the representation of relations. With fewer relations, entities carry more monotonous information. Therefore, it becomes challenging to select query-dependent entities and propagate messages to the target ones. To study the situation further, we count the proportion of triplets whose shortest distance between the head and tail entities exceeds 3. We regard these triplets as long-distance triplets. The result is shown in Table 8. We can see that NELL-995 (v1) owns zero long-distance triplets in the test graphs. Thus, NELL-995 (v1) can resolve the above issues by propagating conditional information to any target entity in 3 hops, even without multiple starting entities.\\nTable 9. Comparison of different starting entity selection methods\\nModels FB15k-237 (v1) NELL-995 (v1) WN18RR (v1)\\nMRR Hits@10 MRR Hits@10 MRR Hits@10\\nMStar .462 .598 .801 .921 .736 .816\\nw/ random .427 .587 .787 .901 .698 .803\\nw/ degree .403 .553 .362 .595 .709 .810\\nPerspective of Starting Entities Selection MStar leverages an importance score function to select starting entities. The score function is conditioned on the query head and relation, aiming to explore query-dependent entities. Here, we consider two other score function variants, i.e., variant w/ random and variant w/ degree.
Variant w/ random scores the entities with random values. Similar to EL-GNN [25], variant w/ degree assigns higher scores to entities with higher degrees. All variants keep the top-n entities as starting ones.\\nTable 9 shows the comparison results. We can observe that random scores lead to degraded performance. This is because random starting entities propagate along many noisy relational paths. Noisy paths hinder MStar’s ability to capture query-related rules and to reach distant target tail entities. Variant w/ degree is also inferior to our MStar, even worse than random scores. For instance, the performance of variant w/ degree on FB15k-237 (v1) decreases by 54.8% and 54.0% relative to MStar and variant w/ random, respectively. This is mainly due to the fact that the global degree feature fixes the starting entities and cannot support query-dependent propagation.\\n6 Conclusion and Future Work\\nIn this paper, we explore the issue of inefficient message propagation for KG reasoning and propose a new inductive KG reasoning model called MStar. Specifically, we propose using multiple starting entities to expand the propagation scope. Moreover, we construct a highway between the head entity and the other starting entities to accelerate conditional message passing. Additionally, we introduce a training strategy, LinkVerify, to filter inappropriate samples. Experimental results demonstrate the effectiveness of MStar. In particular, ablation results validate the superiority of MStar for reasoning about distant entities. In future work, we plan to explore alternative modules for selecting and classifying starting entities.
We also intend to investigate \\nmethods to effectively utilize\\nnoisy triplets during training instead of dropping them.', 'Adversarial Environment Design via Regret-Guided Diffusion Models': 'Title: Adversarial Environment Design via Regret-Guided Diffusion Models\\nAbstract\\nTraining agents that are robust to environmental changes remains a significant\\nchallenge in deep reinforcement learning (RL). Unsupervised environment design\\n(UED) has recently emerged to address this issue by generating a set of training\\nenvironments tailored to the agent’s capabilities. While prior works demonstrate\\nthat UED has the potential to learn a robust policy, their performance is constrained\\nby the capabilities of the environment generation. To this end, we propose a\\nnovel UED algorithm, adversarial environment design via regret-guided diffusion\\nmodels (ADD). The proposed method guides the diffusion-based environment\\ngenerator with the regret of the agent to produce environments that the agent finds\\nchallenging but conducive to further improvement. By exploiting the representation\\npower of diffusion models, ADD can directly generate adversarial environments\\nwhile maintaining the diversity of training environments, enabling the agent to\\neffectively learn a robust policy. Our experimental \\nresults demonstrate that the\\nproposed method successfully generates an instructive curriculum of environments,\\noutperforming UED baselines in zero-shot generalization across novel, out-of-\\ndistribution environments. Project page: https://rllab-snu.github.io/projects/ADD\\n1 \\nIntroduction\\nDeep reinforcement learning (RL) has achieved great success in various challenging domains, such\\nas Atari [ 1], GO [ 2], and real-world robotics tasks [ 3,4]. Despite the progress, the deep RL agent\\nstruggles with the generalization problem; it often fails in unseen environments even with a small\\ndifference from the training environment distribution [ 5,6]. 
To train well-generalizing policies,\\nvarious prior works have used domain randomization (DR) [ 7,8,9], which provides RL agents with\\nrandomly generated environments. While DR enhances the diversity of the training environments,\\nit requires a large number of trials to generate meaningful structures in high-dimensional domains.\\nCurriculum reinforcement learning [ 10,11] has been demonstrated to address these issues by pro-\\nviding instructive sequences of environments. Since manually designing an effective curriculum for\\ncomplicated tasks is challenging, prior works [ 12,13] focus on generating curricula that consider the\\ncurrent agent’s capabilities. Recently, unsupervised environment design (UED, [ 14]) has emerged\\nas a scalable approach, notable for its advantage of requiring no prior knowledge. UED algorithms\\nalternate between training the policy and designing training environments that maximize the regret\\nof the agent. This closed-loop framework ensures the agent learns a minimax regret policy [ 15],\\nassuming that the two-player game between the agent and the environment generator reaches the\\nNash equilibrium.\\n∗Corresponding author: Songhwai Oh\\n38th Conference on Neural Information Processing Systems (NeurIPS 2024).arXiv:2410.19715v2 [cs.LG] 15 Nov 2024There are two main approaches for UED: 1) learning-based \\nmethods, which employ an environment\\ngenerator trained via reinforcement learning, and 2) replay-based \\nmethods, which selectively replay\\namong previously generated environments. The learning-based \\nmethods [ 14,16,17] utilize an\\nadaptive generator that controls the parameters that fully define the environment configuration. The\\ngenerator receives a regret of the agent as a reward and is trained via reinforcement learning to\\nproduce environments that maximize the regret. 
While the learning-based \\nmethods can directly\\ngenerate meaningful environments, training the generator with RL is unstable due to the moving\\nmanifold [ 16]. Additionally, we observe that the RL-based generator has limited environment\\ncoverage, which limits the generalization capability of the trained agent. In contrast, the replay-based\\nmethods [ 18,19,20] employ a random generator and select environments to revisit among previously\\ngenerated environments. Since the random generator can produce diverse environments without\\nadditional training, they outperform the learning-based \\nmethods in zero-shot generalization tasks\\n[20]. However, the replay-based \\nmethods are sample inefficient as they require additional episodes to\\nevaluate the regret on the randomly generated environments.\\nIn this work, we propose a sample-efficient and robust UED algorithm by leveraging the strong\\nrepresentation power of diffusion models [ 21]. First, to make UED suitable for using a diffusion\\nmodel as a generator, we introduce soft UED, which augments the regret objective of UED with an\\nentropy regularization term, as done in maximum entropy RL [ 22]. By incorporating the entropy term,\\nwe can ensure the diversity of the generated environments. Then, we present adversarial environment\\ndesign via regret-guided diffusion models (ADD), which guides a diffusion-based environment\\ngenerator with the regret of the agent to produce environments that are conducive to the performance\\nimprovement of the agent. Enabling this regret guidance requires the gradient of the regret with\\nrespect to the environment parameter. However, since the true value of the regret is intractable and\\nthe regret estimation \\nmethods used in prior works on UED are not differentiable, a new form of regret\\nestimation method is needed. 
To this end, we propose a novel method that enables the estimation\\nof the regret in a differentiable form by utilizing an environment critic, which predicts a return\\ndistribution of the current policy on the given environment. This enables us to effectively integrate\\ndiffusion models within the UED framework, significantly enhancing the environment generation\\ncapability.\\nSince the regret-guided diffusion does not require an additional training of the environment generator,\\nwe can preserve the ability to cover the high-dimensional environment domain as the random generator\\nof the replay-based method. Moreover, ADD can directly generate meaningful environments via\\nregret-guided sampling as the learning-based \\nmethods. By doing so, ADD effectively combines the\\nstrengths of previous UED \\nmethods while addressing some of their limitations. Additionally, unlike\\nother UED \\nmethods, ADD allows us to control the difficulty levels of the environments it generates\\nby guiding the generator with the probability of achieving a specific return. It enables the reuse of the\\nlearned generator in various applications, such as generating benchmarks.\\nWe conduct extensive experiments across challenging tasks commonly used in UED research: par-\\ntially observable maze navigation and 2D bipedal locomotion over challenging terrain. Experimental\\nresults show that ADD achieves higher zero-shot generalization performance in unseen environments\\ncompared to the baselines. Furthermore, our analysis on the generated environments demonstrates\\nthat ADD produces an instructive curriculum with varying complexity while covering a large en-\\nvironment configuration space. 
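To make the guidance mechanism concrete, a single regret-guided reverse step can be sketched in the classifier-guidance style described above. This is purely an illustrative sketch under our own assumptions, not the paper's implementation: the score network `score_fn`, the differentiable regret gradient `regret_grad_fn` (standing in for the environment-critic-based estimate), and the guidance scale `lam` are hypothetical names we introduce here.

```python
import numpy as np

def regret_guided_step(theta_t, score_fn, regret_grad_fn, t,
                       lam=1.0, step=0.1, rng=None):
    """One illustrative Langevin-style reverse-diffusion update on environment
    parameters theta_t. The learned score is shifted by the gradient of a
    differentiable regret estimate, steering samples toward high-regret
    environments, while the noise term keeps the generated environments diverse
    (matching the entropy-regularized soft-UED objective)."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Guided score = unconditional score + lam * d(regret)/d(theta).
    guided_score = score_fn(theta_t, t) + lam * regret_grad_fn(theta_t)
    noise = rng.normal(size=theta_t.shape)
    return theta_t + step * guided_score + np.sqrt(2.0 * step) * noise
```

With `lam = 0` this reduces to ordinary unguided sampling from the diffusion prior; increasing `lam` trades sample diversity for higher estimated regret.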
As a result, it is shown that the proposed method successfully generates adversarial environments and facilitates the agent in learning a policy with solid generalization capabilities.\\n2 Related Work\\n2.1 Unsupervised Curriculum Reinforcement Learning\\nWhile curriculum reinforcement learning [13, 23, 24] has been shown to enhance the generalization performance of the RL agent, Dennis et al. [14] first introduced the concept of unsupervised environment design (UED). UED encompasses various environment generation methods, such as POET [12, 25] and GPN [26]. In this work, we follow the original concept of UED, which aims to learn a minimax regret policy [15] by generating training environments that maximize the regret of the agent. Based on this concept, the learning-based methods train an environment generator via reinforcement learning. PAIRED [14] estimates the regret with a difference between returns obtained by two distinct agents, and trains an RL-based generator by utilizing the regret as a reward. Recently, CLUTR [16] and SHED [17] utilize generative models to improve the performance of PAIRED. CLUTR trains the environment generator on a learned latent space, and SHED supplies the environment generator with augmented experiences created by diffusion models. Despite the progress, training the generator via RL is unstable due to the moving manifold [16, 27] and often struggles to generate diverse environments. On the other hand, replay-based methods based on PLR [18] utilize a random environment generator and decide which environments to replay. ACCEL [20] combines the evolutionary approaches [12, 25] and PLR by taking random mutation on replayed environments. While these replay-based methods show scalable performance on a large-scale domain [28] and outperform the learning-based methods, they do not have the ability to directly generate meaningful environments.
Unlike prior UED \\nmethods, we augment the regret objective of UED\\nwith an entropy regularization term and propose a method that employs a diffusion model as an\\nenvironment generator to enhance the environment generation capability. Our work is also closely\\nrelated to data augmentation for training robust policy. Particularly, DRAGEN [ 29] and ISAGrasp\\n[30] augment existing data in learned latent spaces to train a policy that is robust to unseen scenarios.\\nOur algorithm, on the other hand, focuses on generating curricula of environments without any prior\\nknowledge and dataset.\\n2.2 Diffusion Models\\nDiffusion models [ 21,31,32] have achieved remarkable performance in various domains, such\\nas image generation [ 33], video generation [ 34], and robotics [ 35,36,37]. Particularly, diffusion\\nmodels effectively perform conditional generation using guidance to generate samples conditioned\\non class labels [ 38,39] or text inputs [ 40,41,42]. Prior works also guide the diffusion models\\nutilizing an additional network or loss functions, such as adversarial guidance to generate images\\nto attack a classifier [ 43], safety guidance using pre-defined functions to generate safety-critical\\ndriving scenarios [ 44], and guidance using reward functions trained by human p', 'Safe CoR: A Dual-Expert Approach to Integrating Imitation Learning and Safe Reinforcement Learning Using Constraint Rewards': 'Title: Safe CoR: A Dual-Expert Approach to Integrating Imitation Learning and Safe Reinforcement Learning Using Constraint Rewards\\nAbstract — In the realm of autonomous agents, ensuring\\nsafety and reliability in complex and dynamic environments\\nremains a paramount challenge. Safe reinforcement learning\\naddresses these concerns by introducing safety constraints, but\\nstill faces challenges in navigating intricate environments such\\nas complex driving situations. 
To overcome these challenges,\\nwe present the safe constraint reward (Safe CoR) framework,\\na novel method that utilizes two types of expert demonstra-\\ntions—reward expert demonstrations focusing on performance\\noptimization and safe expert demonstrations prioritizing safety.\\nBy exploiting a constraint reward (CoR), our framework guides\\nthe agent to balance performance goals of reward sum with\\nsafety constraints. We test the proposed framework in diverse\\nenvironments, including the safety gym, metadrive, and the\\nreal-world Jackal platform. Our proposed framework enhances\\nthe performance of algorithms by 39% and reduces constraint\\nviolations by 88% on the real-world Jackal platform, demon-\\nstrating the framework’s efficacy. Through this innovative\\napproach, we expect significant advancements in real-world\\nperformance, leading to transformative effects in the realm of\\nsafe and reliable autonomous agents.\\nI. I NTRODUCTION\\nThe advance of autonomous driving technology promises\\nto revolutionize the way people commute, offering safer,\\nmore efficient, and accessible transportation options. At the\\nheart of this transformative potential is the importance of\\nensuring the safety and reliability of autonomous vehicles\\nin diverse and dynamic driving environments. To achieve\\nthis, many researchers and engineers have proposed algo-\\nrithms such as rule-based controllers [1], [2] and imitation\\nlearning methods [3]–[5]. Rule-based controllers provide\\na structured approach to decision-making based on prede-\\nfined rules and conditions, while imitation learning allows\\nthe agents to mimic human driving behaviors by learning\\nfrom vast amounts of driving data. However, these methods\\nface significant challenges in handling situations that fall\\nbeyond predefined rules [6]. 
These scenarios, which are\\nneither encapsulated within the training data nor foreseen\\nin the predefined rule sets, pose critical hurdles in achieving\\nthe comprehensive coverage and reliability that autonomous\\ndriving aspires to achieve.\\nTo address the limitations inherent in imitation learning\\nand rule-based controllers, reinforcement learning (RL) [7],\\n[8] has emerged as a compelling alternative. Unlike its\\npredecessors, RL enables autonomous driving agents to learn\\n1H. Kwon, J. Lee, and S. Oh are with the Interdisciplinary Program in\\nArtificial Intelligence and ASRI, Seoul National University, Seoul 08826,\\nKorea (e-mail: hyeokin.kwon@rllab.snu.ac.kr, junseo.lee@rllab.snu.ac.kr,\\nsonghwai@snu.ac.kr)2G. Lee and S. Oh are with the Department of\\nElectrical and Computer Engineering and ASRI, Seoul National University,\\nSeoul 08826, Korea (e-mail: gunmin.lee@rllab.snu.ac.kr).optimal behaviors through trial and error, interacting directly\\nwith their environment. This method offers significant advan-\\ntages, such as the ability to continuously improve and adapt\\nto new situations over time, potentially covering the gaps\\nleft by imitation learning and rule-based systems. Although\\nRL excels in adaptability and decision-making in complex\\nscenarios, ensuring the safety of autonomous driving agents\\nremains a critical challenge. However, the exploratory nature\\nof RL, which often requires agents to make mistakes to learn,\\nposes a significant risk in real-world driving contexts where\\nsafety is crucial. This fundamental concern highlights the\\nneed for innovative approaches within RL frameworks to\\nbalance exploration with the stringent safety requirements\\nof autonomous driving.\\nTo address the aforementioned issue, the concept of safe\\nreinforcement learning (safe RL) [9], [10] has been intro-\\nduced. This approach aims to incorporate safety constraints\\ninto the optimization process explicitly. 
By taking account of\\nsafety constraints into the policy optimization process, safe\\nRL methods enhance the agent’s ability to adhere to safety\\nconstraints, thereby improving safety during both the training\\nphase and the final deployment. For instance, incorporating a\\nlane-keeping reward directly into the reward function results\\nin mediocre lane-keeping behavior. On the other hand, when\\nthe lane-keeping component is applied as a constraint within\\nthe safe RL framework, the agent demonstrates significantly\\nimproved lane-keeping performance. Despite these advance-\\nments, challenges persist in the application of safe RL\\nalgorithms for training agents to navigate complex driving\\nenvironments safely.\\nTo overcome these challenges, we propose a novel method\\ncalled safe CoR, which innovatively combines two distinct\\ntypes of expert demonstrations to refine existing safe RL\\nalgorithms. The first type, termed reward expert demonstra-\\ntions, focuses exclusively on maximizing rewards without\\nconsidering safety constraints. Conversely, the second type,\\nsafe expert demonstrations, prioritizes adherence to safety\\nrequirements above all, with subsequent consideration for\\nreward maximization. By distinctly categorizing these ex-\\nperts—reward experts for their focus on performance opti-\\nmization and safe experts for their dual focus on safety and\\nreward maximization—we are able to calculate a constraint\\nreward (CoR). This term aids in the update process, directing\\nthe agent to emulate the reward expert for maximizing\\nrewards while using the safe expert as a regularizer to ensure\\nconstraint satisfaction. Through the strategic application of\\nCoR, our method guides the agent toward reducing constraint\\nviolations (CV) while still achieving high levels of reward, il-arXiv:2407.02245v1 [cs.RO] 2 Jul 2024lustrating a balanced approach to learning optimal behaviors\\nin diverse driving conditions. 
This dual-expert framework\\nsignificantly enhances the agent’s ability to navigate com-\\nplex driving scenarios, striking a critical balance between\\nambitious performance goals and stringent safety standards.\\nOur experimental outcomes demonstrate that the safe CoR\\nframework significantly improves algorithmic performance\\nwhile diminishing constraint violations across various plat-\\nforms, including the metadrive simulator [11] and safety gym\\nenvironments [12]. Notably, when applied to the real-world\\nJackal platform [10], our framework achieved superior results\\nover simulated environments, empirically demonstrating the\\nadvantage of the proposed framework. These findings un-\\nderscore safe CoR’s substantial potential in advancing the\\ndomain of safe RL.\\nThe contributions of this paper are summarized as follows:\\n•We propose a framework called safe CoR, which\\nuniquely integrates reward-centric and safety-conscious\\nexpert data to refine and enhance the performance of\\nexisting safe RL algorithms in the autonomous driving\\ndomain.\\n•We show empirical evidence demonstrating that agents,\\nunder the guidance of the safe CoR framework, outper-\\nform traditional safe RL algorithms by achieving supe-\\nrior performance metrics, especially in the real-world\\nplatform, with reduced rates of constraint violations in\\nthe training phase.\\n•We validate the superiority of the proposed algorithm in\\nreal-world scenarios utilizing the Jackal robot platform,\\nthereby affirming the framework’s applicability and\\nrobustness across diverse operational environments.\\nII. R ELATED WORK\\nA. Imitation learning\\nImitation learning is one of the main approaches in achiev-\\ning autonomous driving agents. It is a method that guides\\nagents to imitate the given demonstrations extracted from\\nexperts. 
One of the simplest approaches to imitation learning\\nis behavior cloning (BC), which shows promising results\\nin achieving generalization in real-world environments [13],\\n[14]. Despite its promise, BC is particularly susceptible to\\ncompounding errors, a drawback that significantly hampers\\nits effectiveness [15]. On the other hand, inverse reinforce-\\nment learning (IRL) [16] proposes another way to solve the\\nproblem of designing an autonomous agent, which is to learn\\nthe reward function from the expert demonstrations. Ho et al.\\n[17] proposed an algorithm that integrates IRL and RL, en-\\nabling the agent to acquire expert behaviors and estimate the\\nreward function concurrently. They mathematically proved\\nthe convergence of training both policies and discriminators\\nalternatively and their research opened avenues for further\\nresearchers [4], [18], [19].\\nAdditionally, there have been studies that combine imita-\\ntion learning with online learning. Yiren et al. [20] exper-\\nimentally demonstrated that expert demonstrations can as-\\nsist agents in navigating challenging environments robustly.Despite these advancements, it is crucial to note that the\\nmentioned methods have limitations as they do not directly\\naccount for safety constraints in the learning process.\\nB. Safe reinforcement learning\\nSafe reinforcement learning (safe RL) addresses the crit-\\nical aspect of satisfying the safety of agents by integrating\\nsafety considerations into the learning process. The algorithm\\nforces agents not only to maximize reward sums but also\\nto satisfy given constraints simultaneously. This approach\\ncan be categorized into two methods: Lagrangian-based and\\ntrust-region-based methods.\\nLagrangian-based method transforms the original safe RL\\nproblem into its dual problem. Ray et al. 
[12] proposed the\\nproximal policy optimization-Lagrangian (PPO-Lagrangian)\\nalgorithm, which extends the traditional PPO framework\\nby incorporating a Lagrangian multiplier approach to effi-\\nciently handle constraints, allowing for dynamic adjustment\\nof the trade-off between policy performance and constraint\\nsatisfaction. Yang et al. [21] proposed the worst-case soft\\nactor-critic (WCSAC), which relaxes constrained problems to\\nunconstrained ones using Lagrangian multipliers. However,\\nsuch algorithms suffer from being overly conservative in\\ntheir updates when constraint violations occur excessively\\nduring the initial learning stage. Additionally, the usage of\\nLagrangian multipliers makes the learning process unstable.\\nTrust-region-based method is an extended version of trust\\nregion policy optimization [22], which solves non-convex op-\\ntimization by transforming it into a simple problem. Achiam\\net al. [9] introduced constrained policy optimization (CPO),\\nwhich addresses the optimization of policy functions under\\nsafety constraints without transforming them into different\\nforms of optimization problems. CPO uniquely maintains\\nsafety constraints by utilizing a trust region method, ensuring\\nthat policy updates remain within predefined safety limits,\\nthereby facilitating the development of safe reinforcement\\nlearning policies. Kim and Oh proposed TRC and OffTRC\\n[10], [23], assuming that the discounted cost sum follows a\\nGaussian distribution. They derived the closed-form upper\\nbound of conditional value at risk (CVaR). Recently, Kim et\\nal. [24] proposed a method that utilizes a distributional critic\\nand gradient-integration technique to enhance the stability of\\nthe agent. However, the above algorithms still face challenges\\nin learning agents for safe driving in complex environments.\\nIII. P RELIMINARY\\nA. 
Constrained Markov decision process\\nA constrained Markov decision process (CMDP) is a framework that extends the traditional Markov decision process (MDP) to incorporate an additional constraint. A CMDP is defined by the tuple $\langle S, A, \rho, P, R, C, \gamma \rangle$: state space $S$, action space $A$, initial state distribution $\rho$, transition probability $P$, reward function $R$, cost function $C$, and discount factor $\gamma$. The expected reward sum $J(\pi)$ can be written in the aforementioned terms as follows:\\n$J(\pi) := \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\right]$, (1)\\nwhere $a_t \sim \pi(\cdot|s_t)$ and $s_{t+1} \sim P(\cdot|s_t, a_t)$. Similarly, to define constraints, the expected cost sum can be expressed as follows:\\n$C_\pi := \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t C(s_t, a_t)\right]$. (2)\\nThen the objective of safe RL can be represented as follows:\\n$\mathrm{maximize}_\pi \ J(\pi) \ \text{s.t.} \ C_\pi \le \frac{d}{1-\gamma}$, (3)\\nwith the constraint threshold $d$.\\nB. Constraint reward\\nConstraint reward (CoR) is an additional objective term that assesses the relative distance of an agent state between two sets of state data [4]. By utilizing two disparate sets of states, denoted as $S_A$ and $S_B$ respectively, the agent can estimate its performance relative to these two sets of demonstrations. If the distance between the agent’s state and the first set of states, $S_A$, is less than the distance to the other set of states, $S_B$, the CoR value exceeds 0.5. In contrast, when the agent’s state is closer to $S_B$ than $S_A$, the CoR is reduced to below 0.5.
In the prior work [4], by defining $S_A$ as the collection of states associated with expert performance and $S_B$ as those corresponding to suboptimal or negative behavior, such as a random policy, the CoR enables the training of agents to emulate expert trajectories over undesirable ones. For the state $s$, the CoR is defined as follows:\\n$\mathrm{CoR}(s, S_A, S_B) = \frac{\left(1 + \frac{\Delta_A}{\alpha}\right)^{-\frac{\alpha+1}{2}}}{\left(1 + \frac{\Delta_A}{\alpha}\right)^{-\frac{\alpha+1}{2}} + \left(1 + \frac{\Delta_B}{\alpha}\right)^{-\frac{\alpha+1}{2}}}$,\\n$\Delta_A = \sqrt{\frac{1}{|S_A|}\sum_{s_a \in S_A} \|s - s_a\|_2^2}, \quad \Delta_B = \sqrt{\frac{1}{|S_B|}\sum_{s_b \in S_B} \|s - s_b\|_2^2}$, (4)\\nwhere $\|\cdot\|_2$ is the $l_2$ norm, and $\alpha$ refers to a hyperparameter used to regulate the sensitivity of CoR.\\nIV. SAFE COR\\nThe goal of this work is to combine the strengths of imitation learning (IL) with those of safe reinforcement learning (safe RL) by utilizing expert demonstrations. The most straightforward method of combining IL and RL is to redesign the actor’s objective by incorporating an imitation learning term, such as the log-likelihood probability $\mathbb{E}_{(s,a)\sim D}[\log \pi(a|s)]$, where $D = \{s_0, a_0, \dots, s_N, a_N\}$ is a dataset of expert trajectories, as in [20]. However, challenges arise when applying this approach to safe RL. Using an expert focused solely on maximizing the reward, referred to as a reward expert, can lead the agent to violate given constraints. On the other hand, an expert trained through safe RL algorithms, represented as a safe expert, might suffer from the drawback of low reward, despite directly optimizing the constraint. In other words, relying solely on each type of expert does not align with the ideal framework we aim to build.\\nOne approach to tackle these challenges is to utilize both demonstrations. In scenarios where safety is assured, the agent is encouraged to prioritize the influence of the reward expert over the safe expert for higher rewards.
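The CoR of Eq. (4) is straightforward to compute from the two state sets. The following is our own minimal NumPy rendering of that formula (function and variable names are ours); in the safe CoR setting the two sets would correspond to the reward-expert and safe-expert demonstrations.

```python
import numpy as np

def cor(s, S_A, S_B, alpha=1.0):
    """Constraint reward (Eq. 4): relative closeness of state s to the state
    sets S_A and S_B. Returns a value in (0, 1); above 0.5 means s lies closer
    to S_A, below 0.5 means closer to S_B."""
    s = np.asarray(s, dtype=float)

    def delta(S):
        # Root-mean-square Euclidean distance from s to the states in S.
        S = np.asarray(S, dtype=float)
        return np.sqrt(np.mean(np.sum((S - s) ** 2, axis=1)))

    def weight(d):
        # Distance kernel from Eq. (4); alpha controls sensitivity.
        return (1.0 + d / alpha) ** (-(alpha + 1.0) / 2.0)

    w_a = weight(delta(S_A))
    w_b = weight(delta(S_B))
    return w_a / (w_a + w_b)
```

A state equidistant from both sets scores exactly 0.5, so the CoR acts as a smooth, bounded signal for which expert the agent currently resembles.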
Conversely,\\nwhen the agent struggles to adhere to a given constraint, it\\ncan be directed to emulate the behavior of the safe expert\\nrather than the reward expert. Through this strategy, the agent\\ncan be steered towards an optimal balance between the
Zhen Xiang
-,0009-0002-9077-2399,-
BadChain: Backdoor Attacks on Chain-of-Thought Prompting in LLMs
"{'Combining Structural Knowledge with Sparsity in Machine Learning and Signal Processing': 'Title: (...TRUNCATED)
"{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagat(...TRUNCATED)
Zhixiang Shen
-
Multiplex Graph Fusion
"{'Generalized Channel Coding and Decoding With Natural Redundancy in Protocols': 'Title: Generalize(...TRUNCATED)
"{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagat(...TRUNCATED)
Matthew Thomas Jackson
-
Learned Optimization for Reinforcement Learning
"{'Can Learned Optimization Make Reinforcement Learning Less Difficult?': 'Title: Can Learned Optimi(...TRUNCATED)
"{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagat(...TRUNCATED)
Yiyang Zhao
-
Carbon-Efficient Neural Architecture Search (CE-NAS)
"{'Carbon-Efficient Neural Architecture Search (CE-NAS)': 'Title: Carbon-Efficient Neural Architectu(...TRUNCATED)
"{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagat(...TRUNCATED)
Denny Zhou
-
Inducing Chain-of-Thought Reasoning via Decoding Adjustments
"{'Chain of Thought Empowers Transformers to Solve Inherently Serial Problems': 'Title: Chain of Tho(...TRUNCATED)
"{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagat(...TRUNCATED)
Siyuan Huang
-
Cluster-wise Graph Transformer
"{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagat(...TRUNCATED)
"{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagat(...TRUNCATED)
Noah A. Smith
0000-0002-2310-6380
Multi-Objective Language Model Alignment
"{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagat(...TRUNCATED)
"{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagat(...TRUNCATED)
Simon S. Du
0000-0003-0056-8299
Multi-Objective Language Model Alignment
"{'Decoding-Time Language Model Alignment with Multiple Objectives': 'Title: Decoding-Time Language (...TRUNCATED)
"{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagat(...TRUNCATED)
Ke Sun
-
Enhanced Deepfake Detection with Diffusion Models
"{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagat(...TRUNCATED)
"{'Expanding the Scope: Inductive Knowledge Graph Reasoning with Multi-Starting Progressive Propagat(...TRUNCATED)