Study 2.1: Semantic priming

The core data set in this study was that of the Semantic Priming Project (Hutchison et al., 2013; also see Yap et al., 2017). The study of Hutchison et al. (2013) comprised two tasks: lexical decision and naming. We limited our analysis to the lexical decision task because it was more relevant to a subsequent study that we were planning. In the lexical decision task, participants judged whether strings of letters constituted real words (e.g., building) or nonwords (e.g., gop). Importantly, in each trial, the target word that participants assessed was preceded by a prime word. Participants were only required to respond to the target word. The characteristic feature of the semantic priming paradigm is the analysis of responses to the targets as a function of the semantic relationship between the primes and the targets (Brunellière et al., 2017; de Wit & Kinoshita, 2015; Hoedemaker & Gordon, 2014).

In some studies, the association between prime and target words has been investigated in terms of related versus unrelated pairs (Lam et al., 2015; Pecher et al., 1998; Trumpp et al., 2013) and—in other studies—in terms of first- and second-order relationships (Hutchison et al., 2013). In contrast to these categorical associations, a third set of studies has measured the association between the prime and the target words using continuous estimates of text-based similarity (Günther et al., 2016a, 2016b; Hutchison et al., 2008; M. N. Jones et al., 2006; Lund et al., 1995; Lund & Burgess, 1996; Mandera et al., 2017; McDonald & Brew, 2002; Padó & Lapata, 2007; Petilli et al., 2021; Wingfield & Connell, 2022b). In one of these studies, Mandera et al. (2017) found that computational measures of similarity outperformed human-based associations in explaining language-based priming.

Language, vision and SOA

Priming associations beyond the linguistic realm have also been investigated, with early studies observing perceptual priming effects (Flores d’Arcais et al., 1985; Schreuder et al., 1984). Yet those early findings were soon reframed by Pecher et al. (1998), who conducted a follow-up with an improved design and observed vision-based priming only when the task was preceded by another task that required attention to visual features of concepts (Ostarek & Huettig, 2017; also see Yee et al., 2012). Furthermore, two studies have failed to observe vision-based priming (Hutchison, 2003; Ostarek & Huettig, 2017).

Nonetheless, a considerable number of studies have observed perceptual priming, even in the absence of a pretask. A set of these studies used the Conceptual Modality Switch paradigm, in which the primes and the targets are presented in separate, consecutive trials—e.g., Loud Welcome → Fine Selection (Bernabeu et al., 2017; J. Collins et al., 2011; Hald et al., 2011, 2013; Louwerse & Connell, 2011; Lynott & Connell, 2009; Pecher et al., 2003; Trumpp et al., 2013). The other set of studies implemented the more classic priming manipulation, whereby a prime word is briefly presented before the target word in each trial—e.g., Welcome → Selection. This design is more relevant to our present study, as it was used in the study we are revisiting (Hutchison et al., 2013). Below, we review studies that have used the prime–target design.

Lam et al. (2015) conducted a semantic priming experiment containing a lexical decision task, in which participants were instructed to assess whether the prime word and the target word in each trial were both real words. The semantic priming manipulation consisted of the following types of associations between the prime and the target words: (1) semantic association (e.g., bolt → screwdriver), (2) action association (e.g., housekey → screwdriver), (3) visual association (e.g., soldering iron → screwdriver), and (4) no association (e.g., charger → screwdriver). In addition, the following stimulus onset asynchronies (SOAs) were compared: 500, 650, 800 and 1,400 ms. First, Lam et al. observed semantic priming effects with all SOAs. Second, the authors observed action-based priming with the SOAs of 500, 650 and 1,400 ms. Last, they observed vision-based priming only with the SOA of 1,400 ms. Overall, semantic—i.e., language-based—priming was more prevalent than visual and action priming. The greater role of language-based information converges with other semantic priming studies (Bottini et al., 2016; Lam et al., 2015; Pecher et al., 1998; Petilli et al., 2021), as well as with studies that used other paradigms (Banks et al., 2021; Kiela & Bottou, 2014; Louwerse et al., 2015).

Similarly, the results of Lam et al. (2015) regarding the time course of language-based and vision-based priming were consistent with a wealth of literature observing that the influence of perceptual systems, such as vision, peaks later than the influence of the language system (Barsalou et al., 2008; Louwerse & Connell, 2011; Santos et al., 2011). For instance, studies using electroencephalography have observed perceptual priming effects within 300 ms from the word onset. Thereafter, the perceptual priming effect increased (Amsel et al., 2014; Bernabeu et al., 2017), stabilised (Kiefer et al., 2022) or fluctuated (Amsel, 2011). Overall, these patterns reveal a gradual accumulation of information throughout word processing (also see Hauk, 2016), which is consistent with the integration of contextual information (see Hald et al., 2006).

In a more recent study, Petilli et al. (2021) revisited the data of Hutchison et al. (2013) using new variables that indexed language-based and vision-based associations between the prime and the target words. These variables had two important characteristics: (1) they were continuous rather than categorical (see Cohen, 1983; Günther et al., 2016a; Mandera et al., 2017), and (2) they were not dependent on human ratings (cf. Hutchison et al., 2008, 2013; Lam et al., 2015; Pecher et al., 1998). By this means, Petilli et al. avoided the circularity problem, seldom addressed in the literature, that can arise when human-based ratings are used to explain human behaviour.

Petilli et al. (2021) operationalised word co-occurrence using text-based similarity (Mandera et al., 2017). Next, to operationalise vision-based similarity, the authors obtained images from ImageNet corresponding to each word (a minimum of 100 images per word), and trained vector representations on those images using neural networks (for related work, see Roads & Love, 2020). The resulting computational measure of vision-based similarity was then validated against human-based ratings (Pecher et al., 1998), with satisfactory results. In a concrete demonstration, Petilli et al. showed that the measure correctly identified drills as more visually similar to pistols than to screwdrivers, indicating that it was not misled by functional similarity. Using these measures of language-based and vision-based similarity, Petilli et al. investigated language-based and vision-based priming in two tasks—lexical decision and naming—and with both a short and a long SOA.

In lexical decision, the largest effect observed by Petilli et al. (2021) was that of language-based priming with the short SOA (200 ms). The second largest effect was that of language-based priming with the long SOA (1,200 ms). The weakest significant effect was that of vision-based priming with the short SOA. Last, there was no effect of vision-based priming with the long SOA. Petilli et al. explained the absence of vision-based priming with the long SOA by contending that visual activation had likely decayed before participants processed the target words (also see Yee et al., 2011), owing to the limited semantic processing required for lexical decision (also see Balota & Lorch, 1986; Becker et al., 1997; de Wit & Kinoshita, 2015; Joordens & Becker, 1997; Ostarek & Huettig, 2017). Therefore, the authors suggested that perceptual simulation does not peak before language-based processing in lexical decision, contrasting with the results of Lam et al. (2015) and with the results found in other tasks (Louwerse & Connell, 2011; Santos et al., 2011; Simmons et al., 2008; also see Barsalou et al., 2008).

In the naming task, the largest effect observed by Petilli et al. (2021) was that of language-based priming with the long SOA. The second largest effect was that of language-based priming with the short SOA. Last, there was no effect of vision-based priming with either SOA. This finding contrasts with Connell and Lynott (2014a), who found facilitatory effects of visual strength in both lexical decision and naming. Petilli et al. explained the lack of vision-based priming in the naming task by alluding to the lower semantic depth of this task compared to lexical decision, and to the mixture of visual and auditory processing involved in naming (also see Connell & Lynott, 2014a).

In conclusion, there is mixed evidence regarding the time course of language-based and vision-based information in conceptual processing, and particularly in semantic priming. First, regarding language, previous research predicts that language-based priming will have a larger effect with the short SOA than with the long one (Lam et al., 2015; Petilli et al., 2021). Second, regarding vision, three hypotheses are available: (a) more vision-based priming with the long SOA (Louwerse & Connell, 2011; Santos et al., 2011; Simmons et al., 2008; also see Barsalou et al., 2008), (b) vision-based priming only with the short SOA (Petilli et al., 2021), and (c) no vision-based priming (Hutchison, 2003; Ostarek & Huettig, 2017; Pecher et al., 1998; Yee et al., 2012).

Language, vision and vocabulary size

Next, we consider the role of participants’ vocabulary size with respect to language-based and vision-based information (this recaps the general Hypotheses section). First, three hypotheses exist regarding the interaction with language. On the one hand, some research predicts a larger effect of language-based priming in higher-vocabulary participants (Yap et al., 2017; also see Connell, 2019; Landauer et al., 1998; Louwerse et al., 2015; Paivio, 1990; Pylyshyn, 1973). On the other hand, other research has found the opposite pattern (Yap et al., 2009; also see Yap et al., 2012). Also relevant to these mixed findings is the notion that vocabulary knowledge is associated with increased attention to task-relevant variables (Pexman & Yap, 2018). We hypothesised that language-based information—represented by language-based similarity in this study—was indeed important for the present task, given its importance across the board (Banks et al., 2021; Kiela & Bottou, 2014; Lam et al., 2015; Louwerse et al., 2015; Pecher et al., 1998; Petilli et al., 2021). Accordingly, the relevance hypothesis predicted that higher-vocabulary participants would present a larger priming effect.

To our knowledge, no previous studies have investigated the interaction between vision-based information and participants’ vocabulary size. We entertained two hypotheses: (a) that lower-vocabulary participants would be more sensitive to visual strength than higher-vocabulary participants, thereby compensating for the disadvantage on the language side, and (b) that this interaction effect would be absent.

The present study

In the present study, we expanded on Petilli et al. (2021) by examining the role of participants’ vocabulary size. Otherwise, we used the same primary data set (Hutchison et al., 2013) and a language-based similarity measure very similar to that used by Petilli et al. (also created by Mandera et al., 2017). Our vision-based predictor, in contrast, differed: whereas Petilli et al. used a human-independent measure trained on images (see description above), we calculated the difference in visual strength (Lynott et al., 2020) between the prime and the target word in each trial.10

Methods

Data set

Code

# Calculate some of the sample sizes to be reported in the paragraph below

# Number of prime-target pairs per participant.
# Save mean as integer and SD rounded while keeping trailing zeros
semanticpriming_mean_primetarget_pairs_per_participant = 
  semanticpriming %>% group_by(Participant) %>% 
  summarise(length(unique(primeword_targetword))) %>% 
  select(2) %>% unlist %>% mean %>% round(0)

semanticpriming_SD_primetarget_pairs_per_participant = 
  semanticpriming %>% group_by(Participant) %>% 
  summarise(length(unique(primeword_targetword))) %>% 
  select(2) %>% unlist %>% sd %>% sprintf('%.2f', .)

# Number of participants per prime-target pair.
# Save mean as integer and SD rounded while keeping trailing zeros
semanticpriming_mean_participants_per_primetarget_pair = 
  semanticpriming %>% group_by(primeword_targetword) %>% 
  summarise(length(unique(Participant))) %>% 
  select(2) %>% unlist %>% mean %>% round(0)

semanticpriming_SD_participants_per_primetarget_pair = 
  semanticpriming %>% group_by(primeword_targetword) %>% 
  summarise(length(unique(Participant))) %>% 
  select(2) %>% unlist %>% sd %>% sprintf('%.2f', .)

The data set was trimmed by removing rows that lacked values on any variable, and by removing RTs that were more than 3 standard deviations away from the mean. The standard deviation trimming was performed within participants, within sessions and within SOA conditions, as done in the Semantic Priming Project (Hutchison et al., 2013). The resulting data set contained 496 participants, 5,943 prime–target pairs and 345,666 RTs. On average, there were 697 prime–target pairs per participant (SD = 33.34), and conversely, 58 participants per prime–target pair (SD = 4.25).
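As an illustration, the trimming procedure could be expressed as follows. This is a minimal sketch rather than the project’s actual preprocessing script, and the column names Session, SOA and target.RT are assumptions.

library(dplyr)
library(tidyr)

# Sketch of the trimming described above (assumed column names)
semanticpriming_trimmed = semanticpriming %>%
  # Remove rows lacking values on any variable
  drop_na() %>%
  # Trim RTs beyond 3 SDs from the mean, within participants, sessions
  # and SOA conditions (cf. Hutchison et al., 2013)
  group_by(Participant, Session, SOA) %>%
  filter(abs(target.RT - mean(target.RT)) <= 3 * sd(target.RT)) %>%
  ungroup()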

Variables

While the variables are outlined in the general introduction, a few further details are provided below regarding some of them.

  • Vocabulary size. The test used by Hutchison et al. (2013) comprised a synonym test, an antonym test, and an analogy test, all three extracted from the Woodcock–Johnson III diagnostic reading battery (Woodcock et al., 2001). We operationalised the vocabulary measure as the mean score across the three tasks per participant.

  • Language-based similarity. This measure was calculated using a semantic space from Mandera et al. (2017), which the authors found to be the second-best predictor (R² = .465) of the semantic priming effect in the lexical decision task of Hutchison et al. (2013) (we could not use the best semantic space, R² = .471, owing to computational limitations). The second-best semantic space (see first row in Table 5 in Mandera et al., 2017) was based on lemmas from a subtitle corpus, and was processed using a Continuous Bag Of Words model. It had 300 dimensions and a window size of six words. The R package ‘LSAfun’ (Günther et al., 2015) was used to import this variable (a sketch follows this list).11

  • Stimulus onset asynchrony (SOA). Following Brauer and Curtin (2018), the categories of this factor were recoded as follows: 200 ms = -0.5, 1,200 ms = 0.5.
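The sketch below illustrates how these variables could be derived. It is illustrative only: the semantic-space object (subtitle_space) and the column names for the subtest scores, the prime and target words, and the SOA are assumptions, not the project’s actual code.

library(dplyr)
library(LSAfun)

semanticpriming = semanticpriming %>%
  mutate(
    # Vocabulary size: mean score across the three subtests
    vocabulary_size = rowMeans(across(c(synonym_score, antonym_score,
                                        analogy_score))),
    # SOA recoded following Brauer and Curtin (2018)
    recoded_SOA = ifelse(SOA == 200, -0.5, 0.5)
  )

# Language-based similarity: cosine between each prime and target in the
# Mandera et al. (2017) subtitle-based CBOW space, imported via LSAfun
semanticpriming$cosine_similarity = mapply(
  function(prime, target) Cosine(prime, target, tvectors = subtitle_space),
  semanticpriming$prime_word, semanticpriming$target_word
)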

A few details regarding the covariates follow.

Figure 2 shows the correlations among the predictors and the dependent variable.

Code

# Using the following variables...
semanticpriming[, c('z_target.RT', 'z_vocabulary_size', 
                    'z_attentional_control',  'z_cosine_similarity', 
                    'z_visual_rating_diff', 'z_word_concreteness_diff', 
                    'z_target_word_frequency', 
                    'z_target_number_syllables')] %>%
  
  # renamed for the sake of clarity
  rename('RT' = z_target.RT, 
         'Vocabulary size' = z_vocabulary_size,
         'Attentional control' = z_attentional_control,
         'Language-based similarity' = z_cosine_similarity,
         'Visual-strength difference' = z_visual_rating_diff,
         'Word-concreteness difference' = z_word_concreteness_diff,
         'Word frequency' = z_target_word_frequency,
         'Number of syllables' = z_target_number_syllables) %>%
  
  # make correlation matrix (custom function from the 'R_functions' folder)
  correlation_matrix() + 
  theme(plot.margin = unit(c(0, 0, 0.1, -3.1), 'in'))

Figure 2: Zero-order correlations in the semantic priming study.

Diagnostics for the frequentist analysis

The model presented convergence warnings. To avoid removing important random slopes, which could increase the Type I error rate (i.e., false positives; Brauer & Curtin, 2018; Singmann & Kellen, 2019), we examined the model after refitting it using seven optimisation algorithms through the ‘allFit’ function of the R package ‘lme4’ (Bates et al., 2021). The results showed that all optimisers produced virtually identical means for all effects, suggesting that the convergence warnings were not consequential (Bates et al., 2021; see Appendix B).
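A minimal sketch of this check follows, using the model object named later in this section; the full comparison was performed in the project’s scripts.

library(lme4)

# Refit the model with all available optimisers
semanticpriming_allFit = allFit(semanticpriming_lmerTest)

# Compare fixed-effect estimates across optimisers; near-identical rows
# indicate that the convergence warnings were not consequential
summary(semanticpriming_allFit)$fixef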

Code

# Calculate VIF for every predictor and return only the maximum VIF rounded up
maxVIF_semanticpriming = car::vif(semanticpriming_lmerTest) %>% max %>% ceiling

The residual errors were not normally distributed, and attempts to mitigate this deviation proved unsuccessful (see Appendix B). However, this is not likely to have posed a major problem, as mixed-effects models are fairly robust to deviations from normality (Knief & Forstmeier, 2021; Schielzeth et al., 2020). Last, the model did not present multicollinearity problems, with all variance inflation factors (VIF) below 2 (see Dormann et al., 2013; Harrison et al., 2018).

Diagnostics for the Bayesian analysis

Code

# Calculate number of post-warmup draws (as in 'brms' version 2.17.0).
# Informative prior model used but numbers are identical in the three models.
semanticpriming_post_warmup_draws = 
  (semanticpriming_summary_informativepriors_exgaussian$iter -
     semanticpriming_summary_informativepriors_exgaussian$warmup) *
  semanticpriming_summary_informativepriors_exgaussian$chains

# As a convergence diagnostic, find maximum R-hat value for the 
# fixed effects across the three models.
semanticpriming_fixedeffects_max_Rhat = 
  max(semanticpriming_summary_informativepriors_exgaussian$fixed$Rhat,
      semanticpriming_summary_weaklyinformativepriors_exgaussian$fixed$Rhat,
      semanticpriming_summary_diffusepriors_exgaussian$fixed$Rhat) %>% 
  # Round
  sprintf('%.2f', .)

# Next, find the maximum R-hat value for the random effects across the three models
semanticpriming_randomeffects_max_Rhat = 
  max(semanticpriming_summary_informativepriors_exgaussian$random[['Participant']]$Rhat,
      semanticpriming_summary_weaklyinformativepriors_exgaussian$random[['Participant']]$Rhat,
      semanticpriming_summary_diffusepriors_exgaussian$random[['Participant']]$Rhat,
      semanticpriming_summary_informativepriors_exgaussian$random[['primeword_targetword']]$Rhat,
      semanticpriming_summary_weaklyinformativepriors_exgaussian$random[['primeword_targetword']]$Rhat,
      semanticpriming_summary_diffusepriors_exgaussian$random[['primeword_targetword']]$Rhat) %>% 
  # Round
  sprintf('%.2f', .)

Three Bayesian models were run, characterised by informative, weakly informative and diffuse priors, respectively. In each model, 16 chains were used. In each chain, 1,500 warmup iterations were run, followed by 4,500 post-warmup iterations. Thus, a total of 72,000 post-warmup draws were produced over all the chains.
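For illustration, a call implementing these settings in ‘brms’ could look roughly as follows; priming_formula and informative_priors are placeholders, not the project’s actual objects.

library(brms)

fit_informative_priors = brm(
  priming_formula,                 # placeholder model formula
  data = semanticpriming,
  family = exgaussian(),           # ex-Gaussian distribution for RTs
  prior = informative_priors,      # placeholder prior specification
  chains = 16,
  warmup = 1500,
  iter = 6000,                     # 1,500 warmup + 4,500 post-warmup per chain
  cores = 16,
  seed = 2021
)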

The maximum R-hat value for the fixed effects across the three models was 1.00, suggesting that these parameters had converged (Schoot et al., 2021; Vehtari et al., 2021). In contrast, the maximum R-hat value for the random effects was 1.13, exceeding the 1.01 threshold (Vehtari et al., 2021). Since the present research focuses on the fixed effects, and the random effects were close to convergence, we considered the models valid.

The results of the posterior predictive checks were sound (see Appendix C), indicating that the posterior distributions were sufficiently consistent with the observed data. Furthermore, in the prior sensitivity analysis, the results were virtually identical with the three priors that were considered (refer to the priors in Figure 1 above; to view the results in detail, see Appendix E).

Results of Study 2.1

Code

# Calculate R^2. This coefficient must be interpreted with caution 
# (Nakagawa et al., 2017; https://doi.org/10.1098/rsif.2017.0213). 
# Also, transform coefficient to rounded percentage.

Nakagawa2017_fixedeffects_R2_semanticpriming_lmerTest = 
  paste0(
    (MuMIn::r.squaredGLMM(semanticpriming_lmerTest)[1, 'R2m'][[1]] * 100) %>% 
      sprintf('%.2f', .), '%'
  )

Nakagawa2017_randomeffects_R2_semanticpriming_lmerTest = 
  paste0(
    (MuMIn::r.squaredGLMM(semanticpriming_lmerTest)[1, 'R2c'][[1]] * 100) %>% 
      sprintf('%.2f', .), '%'
  )

Table 1 presents the results. The fixed effects explained 4.22% of the variance, and the random effects explained 11.01% (Nakagawa et al., 2017). It is reasonable that the random effects explain more variance, as they involve a far larger number of estimates for each effect. That is, whereas each fixed effect consists of a single estimate, the by-item random slopes for an individual-difference variable—such as vocabulary size—comprise as many estimates as there are stimulus items (in this study, the stimuli are the prime–target pairs).12 Conversely, the by-participant random slopes for an item-level variable—such as language-based similarity—comprise as many estimates as there are participants.
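To make this structure concrete, the simplified schematic below shows the kind of specification described, with the covariates omitted; it is a sketch rather than the full model.

library(lmerTest)

# Item-level predictors receive by-participant random slopes, and the
# participant-level predictor (vocabulary size) receives by-item random
# slopes (items being the prime-target pairs). Covariates omitted.
sketch_model = lmer(
  z_target.RT ~ z_vocabulary_size * (z_cosine_similarity + z_visual_rating_diff) +
    z_recoded_interstimulus_interval +
    (z_cosine_similarity + z_visual_rating_diff +
       z_recoded_interstimulus_interval | Participant) +
    (z_vocabulary_size | primeword_targetword),
  data = semanticpriming
)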

Code

# Rename effects in plain language and specify the random slopes
# (if any) for each effect, in the footnote. For this purpose, 
# superscripts are added to the names of the appropriate effects.
# 
# In the interactions below, word-level variables are presented 
# first for the sake of consistency (the order does not affect 
# the results in any way). Also in the interactions, double 
# colons are used to inform the 'frequentist_model_table' 
# function that the two terms in the interaction must be split 
# into two lines.

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_attentional_control'] = 'Attentional control'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_vocabulary_size'] = 'Vocabulary size <sup>a</sup>'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_recoded_participant_gender'] = 'Gender <sup>a</sup>'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_target_word_frequency'] = 'Word frequency'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_target_number_syllables'] = 'Number of syllables'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_word_concreteness_diff'] = 'Word-concreteness difference'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_cosine_similarity'] = 'Language-based similarity <sup>b</sup>'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_visual_rating_diff'] = 'Visual-strength difference <sup>b</sup>'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_recoded_interstimulus_interval'] = 'Stimulus onset asynchrony (SOA) <sup>b</sup>'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_word_concreteness_diff:z_vocabulary_size'] = 
  'Word-concreteness difference :: Vocabulary size'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_word_concreteness_diff:z_recoded_interstimulus_interval'] = 
  'Word-concreteness difference : SOA'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_word_concreteness_diff:z_recoded_participant_gender'] = 
  'Word-concreteness difference : Gender'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_attentional_control:z_cosine_similarity'] = 
  'Language-based similarity :: Attentional control'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_attentional_control:z_visual_rating_diff'] = 
  'Visual-strength difference :: Attentional control'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_vocabulary_size:z_cosine_similarity'] = 
  'Language-based similarity :: Vocabulary size'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_vocabulary_size:z_visual_rating_diff'] = 
  'Visual-strength difference :: Vocabulary size'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_recoded_participant_gender:z_cosine_similarity'] = 
  'Language-based similarity : Gender'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_recoded_participant_gender:z_visual_rating_diff'] = 
  'Visual-strength difference : Gender'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_recoded_interstimulus_interval:z_cosine_similarity'] = 
  'Language-based similarity : SOA <sup>b</sup>'

rownames(KR_summary_semanticpriming_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_lmerTest$coefficients) == 
    'z_recoded_interstimulus_interval:z_visual_rating_diff'] = 
  'Visual-strength difference : SOA <sup>b</sup>'


# Next, change the names in the confidence intervals object

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_attentional_control'] = 'Attentional control'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_vocabulary_size'] = 'Vocabulary size <sup>a</sup>'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_recoded_participant_gender'] = 'Gender <sup>a</sup>'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_target_word_frequency'] = 'Word frequency'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_target_number_syllables'] = 'Number of syllables'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_word_concreteness_diff'] = 'Word-concreteness difference'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_cosine_similarity'] = 'Language-based similarity <sup>b</sup>'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_visual_rating_diff'] = 'Visual-strength difference <sup>b</sup>'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_recoded_interstimulus_interval'] = 'Stimulus onset asynchrony (SOA) <sup>b</sup>'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_word_concreteness_diff:z_vocabulary_size'] = 
  'Word-concreteness difference :: Vocabulary size'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_word_concreteness_diff:z_recoded_interstimulus_interval'] = 
  'Word-concreteness difference : SOA'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_word_concreteness_diff:z_recoded_participant_gender'] = 
  'Word-concreteness difference : Gender'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_attentional_control:z_cosine_similarity'] = 
  'Language-based similarity :: Attentional control'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_attentional_control:z_visual_rating_diff'] = 
  'Visual-strength difference :: Attentional control'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_vocabulary_size:z_cosine_similarity'] = 
  'Language-based similarity :: Vocabulary size'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_vocabulary_size:z_visual_rating_diff'] = 
  'Visual-strength difference :: Vocabulary size'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_recoded_participant_gender:z_cosine_similarity'] = 
  'Language-based similarity : Gender'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_recoded_participant_gender:z_visual_rating_diff'] = 
  'Visual-strength difference : Gender'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_recoded_interstimulus_interval:z_cosine_similarity'] = 
  'Language-based similarity : SOA <sup>b</sup>'

rownames(confint_semanticpriming_lmerTest)[
  rownames(confint_semanticpriming_lmerTest) == 
    'z_recoded_interstimulus_interval:z_visual_rating_diff'] = 
  'Visual-strength difference : SOA <sup>b</sup>'


# Create table (using custom function from the 'R_functions' folder)
frequentist_model_table(
  KR_summary_semanticpriming_lmerTest, 
  confint_semanticpriming_lmerTest,
  order_effects = c('(Intercept)',
                    'Attentional control',
                    'Vocabulary size <sup>a</sup>',
                    'Gender <sup>a</sup>',
                    'Word frequency',
                    'Number of syllables',
                    'Word-concreteness difference',
                    'Language-based similarity <sup>b</sup>',
                    'Visual-strength difference <sup>b</sup>',
                    'Stimulus onset asynchrony (SOA) <sup>b</sup>',
                    'Word-concreteness difference :: Vocabulary size',
                    'Word-concreteness difference : SOA',
                    'Word-concreteness difference : Gender',
                    'Language-based similarity :: Attentional control',
                    'Visual-strength difference :: Attentional control',
                    'Language-based similarity :: Vocabulary size',
                    'Visual-strength difference :: Vocabulary size',
                    'Language-based similarity : Gender',
                    'Visual-strength difference : Gender',
                    'Language-based similarity : SOA <sup>b</sup>',
                    'Visual-strength difference : SOA <sup>b</sup>'),
  interaction_symbol_x = TRUE,
  caption = 'Frequentist model for the semantic priming study.') %>%
  
  # Group predictors under headings
  pack_rows('Individual differences', 2, 4) %>% 
  pack_rows('Target-word lexical covariates', 5, 6) %>% 
  pack_rows('Prime--target relationship', 7, 9) %>% 
  pack_rows('Task condition', 10, 10) %>% 
  pack_rows('Interactions', 11, 21) %>% 
  
  # Apply white background to override default shading in HTML output
  row_spec(1:21, background = 'white') %>%
  
  # Highlight covariates
  row_spec(c(2, 5:7, 11:15), background = '#FFFFF1') %>%
  
  # Format
  kable_classic(full_width = FALSE, html_font = 'Cambria') %>%
  
  # Footnote describing abbreviations, random slopes, etc. 
  footnote(escape = FALSE, threeparttable = TRUE, 
           # The <p> below is used to enter a margin above the footnote 
           general_title = '<p style="margin-top: 10px;"></p>', 
           general = paste('*Note*. &beta; = Estimate based on $z$-scored predictors; *SE* = standard error;',
                           'CI = confidence interval. Yellow rows contain covariates. Some interactions are ',
                           'split over two lines, with the second line indented. <br>', 
                           '<sup>a</sup> By-word random slopes were included for this effect.',
                           '<sup>b</sup> By-participant random slopes were included for this effect.'))
Table 1: Frequentist model for the semantic priming study.
β SE 95% CI t p
(Intercept) 0.00 0.00 [0.00, 0.01] 1.59 .112
Individual differences
Attentional control 0.00 0.00 [0.00, 0.00] -0.56 .577
Vocabulary size a 0.00 0.00 [0.00, 0.00] 0.02 .987
Gender a 0.00 0.00 [0.00, 0.00] -0.03 .979
Target-word lexical covariates
Word frequency -0.16 0.00 [-0.16, -0.15] -49.40 <.001
Number of syllables 0.07 0.00 [0.07, 0.08] 22.81 <.001
Prime–target relationship
Word-concreteness difference 0.01 0.00 [0.01, 0.02] 3.48 .001
Language-based similarity b -0.08 0.00 [-0.08, -0.07] -22.44 <.001
Visual-strength difference b 0.01 0.00 [0.01, 0.02] 4.18 <.001
Task condition
Stimulus onset asynchrony (SOA) b 0.06 0.01 [0.04, 0.07] 7.47 <.001
Interactions
Word-concreteness difference × Vocabulary size 0.00 0.00 [0.00, 0.01] 1.31 .189
Word-concreteness difference × SOA 0.00 0.00 [0.00, 0.01] 2.57 .010
Word-concreteness difference × Gender 0.00 0.00 [-0.01, 0.00] -0.97 .332
Language-based similarity × Attentional control -0.01 0.00 [-0.01, 0.00] -2.46 .014
Visual-strength difference × Attentional control 0.00 0.00 [0.00, 0.00] 0.24 .810
Language-based similarity × Vocabulary size -0.01 0.00 [-0.01, 0.00] -2.34 .020
Visual-strength difference × Vocabulary size 0.00 0.00 [-0.01, 0.00] -1.37 .172
Language-based similarity × Gender 0.00 0.00 [-0.01, 0.00] -0.79 .433
Visual-strength difference × Gender 0.00 0.00 [0.00, 0.01] 1.46 .144
Language-based similarity × SOA b 0.01 0.00 [0.00, 0.01] 3.22 .001
Visual-strength difference × SOA b 0.00 0.00 [-0.01, 0.00] -2.25 .025

Note. β = Estimate based on z-scored predictors; SE = standard error; CI = confidence interval. Yellow rows contain covariates.
a By-word random slopes were included for this effect. b By-participant random slopes were included for this effect.

Both language-based similarity and visual-strength difference produced significant main effects. As expected, their effects had opposite directions. On the one hand, higher values of language-based similarity facilitated participants’ performance, as reflected in shorter RTs. On the other hand, higher values of visual-strength difference led to longer RTs. Furthermore, language-based similarity interacted with vocabulary size and with SOA. There were no effects of participants’ gender (see interaction figures below).

The effect sizes of language-based similarity and its interactions were larger than those of visual-strength difference. Figure 3 displays the frequentist and the Bayesian estimates, which are broadly similar. The Bayesian estimates are from the weakly-informative prior model. The estimates of the two other models, based on informative and diffuse priors, were virtually identical to these (see Appendix E).

Code

# Run plot through source() rather than directly in this R Markdown document
# to preserve the format.

source('semanticpriming/frequentist_bayesian_plots/semanticpriming_frequentist_bayesian_plots.R',
       local = TRUE)

include_graphics(
  paste0(
    getwd(),  # Circumvent illegal characters in file path
    '/semanticpriming/frequentist_bayesian_plots/plots/semanticpriming_frequentist_bayesian_plot_weaklyinformativepriors_exgaussian.pdf'
  ))

Figure 3: Estimates for the semantic priming study. The frequentist means (represented by red points) are flanked by 95% confidence intervals. The Bayesian means (represented by blue vertical lines) are flanked by 95% credible intervals in light blue.

Figure 4-a shows the significant interaction between language-based similarity and vocabulary size, whereby higher-vocabulary participants benefited more from the language-based similarity between prime and target words. This interaction replicates the results of Yap et al. (2017), who analysed the same data set but used a categorical measure of similarity instead. Indeed, this replication is noteworthy as it holds in spite of some methodological differences between the studies. First, Yap et al. (2017) operationalised the priming effect as a categorical difference between related and unrelated prime–target pairs, based on association ratings produced by people (Nelson et al., 2004). In contrast, the present study applied a continuous measure of relatedness—i.e., cosine similarity—which is more precise and may thus afford more statistical power (Mandera et al., 2017; Petilli et al., 2021). Therefore, this interaction demonstrates the consistency between human ratings and computational approximations to meaning (Charbonnier & Wartena, 2019, 2020; Günther et al., 2016b; Louwerse et al., 2015; Mandera et al., 2017; Petilli et al., 2021; Solovyev, 2021; Wingfield & Connell, 2022b). The second difference is that Yap et al. (2017) performed a correlational analysis, whereas the present analysis used maximal mixed-effects models that included several covariates to measure the effects of interest as rigorously as possible.

Figure 4-b presents the non-significant interaction between visual-strength difference and vocabulary size.13 Although the interaction was not significant, the effect of visual-strength difference was numerically larger in lower-vocabulary participants.

Code

# Run plot through source() rather than directly in this R Markdown document 
# to preserve the italicised text.

source('semanticpriming/frequentist_analysis/semanticpriming-interactions-with-vocabulary-size.R', 
       local = TRUE)

include_graphics(
  paste0(
    getwd(),  # Circumvent illegal characters in file path
    '/semanticpriming/frequentist_analysis/plots/semanticpriming-interactions-with-vocabulary-size.pdf'
  ))

Figure 4: Interactions of vocabulary size with language-based similarity (panel a) and with visual-strength difference (panel b). Vocabulary size is binned into deciles (10 sections) in this plot, whereas the statistical analysis used the continuous values within the same range. n = number of participants in each decile.

Figure 5 shows that the effects of language-based similarity and visual-strength difference were both larger with the short SOA. However, whereas the effect of language-based similarity was present with both SOAs (i.e., 200 ms and 1,200 ms), the effect of visual-strength difference was almost exclusive to the short SOA. These results are consistent with Petilli et al. (2021), whereas they contrast with previous findings regarding the slower pace of the visual system in semantic priming (Lam et al., 2015) and in other paradigms (Louwerse & Connell, 2011).

Code

# Run plot through source() rather than directly in this R Markdown document 
# to preserve the italicised text.

source('semanticpriming/frequentist_analysis/semanticpriming-interactions-with-SOA.R', 
       local = TRUE)

include_graphics(
  paste0(
    getwd(),  # Circumvent illegal characters in file path
    '/semanticpriming/frequentist_analysis/plots/semanticpriming-interactions-with-SOA.pdf'
  ))

Figure 5: Interactions of stimulus onset asynchrony (SOA) with language-based similarity (panel a) and with visual-strength difference (panel b) in the semantic priming study. SOA was analysed using z-scores, but for clarity, the basic labels are used in the legend.

Figure 6 shows the non-significant interactions of gender with language-based similarity and with visual-strength difference.

Code

# Run plot through source() rather than directly in this R Markdown document 
# to preserve the italicised text.

source('semanticpriming/frequentist_analysis/semanticpriming-interactions-with-gender.R', 
       local = TRUE)

include_graphics(
  paste0(
    getwd(),  # Circumvent illegal characters in file path
    '/semanticpriming/frequentist_analysis/plots/semanticpriming-interactions-with-gender.pdf'
  ))

Figure 6: Interactions of gender with language-based similarity (panel a) and with visual-strength difference (panel b) in the semantic priming study. Gender was analysed using z-scores, but for clarity, the basic labels are used in the legend.

Human-based and computational measures of visual information

Next, we reflected on the adequacy of visual-strength difference as a measurement instrument, as it had never (to our knowledge) been used before in the study of semantic priming. Even though the effect of this variable on task performance was—as expected—inhibitory (i.e., higher values of this variable leading to longer RTs), we were concerned about the low correlation between visual-strength difference and language-based similarity (r = .01). First, the negligible size of this correlation was unexpected, as we had anticipated a larger, negative correlation. Second, Petilli et al. (2021) had found a correlation of r = .50 between vision-based similarity and language-based similarity. This prompted us to compare the performance of our measure—i.e., visual-strength difference—to that of Petilli et al.—i.e., vision-based similarity.

Code

# Calculate some of the sample sizes to be reported in the paragraph below

# Number of prime--target pairs per participant.
# Save mean as integer and SD rounded while keeping trailing zeros
semanticpriming_with_visualsimilarity_mean_primetarget_pairs_per_participant = 
  semanticpriming_with_visualsimilarity %>% group_by(Participant) %>% 
  summarise(length(unique(primeword_targetword))) %>% 
  select(2) %>% unlist %>% mean %>% round(0)

semanticpriming_with_visualsimilarity_SD_primetarget_pairs_per_participant = 
  semanticpriming_with_visualsimilarity %>% group_by(Participant) %>% 
  summarise(length(unique(primeword_targetword))) %>% 
  select(2) %>% unlist %>% sd %>% sprintf('%.2f', .)

# Number of participants per prime--target pair.
# Save mean as integer and SD rounded while keeping trailing zeros
semanticpriming_with_visualsimilarity_mean_participants_per_primetarget_pair = 
  semanticpriming_with_visualsimilarity %>% group_by(primeword_targetword) %>% 
  summarise(length(unique(Participant))) %>% 
  select(2) %>% unlist %>% mean %>% round(0)

semanticpriming_with_visualsimilarity_SD_participants_per_primetarget_pair = 
  semanticpriming_with_visualsimilarity %>% group_by(primeword_targetword) %>% 
  summarise(length(unique(Participant))) %>% 
  select(2) %>% unlist %>% sd %>% sprintf('%.2f', .)

For this purpose, we first subsetted our previous data set to ensure that all trials contained data on all relevant variables—i.e., on all the existing variables and on the newly added vision-based similarity from Petilli et al. (2021). This process resulted in the loss of 83% of trials, owing to the strict selection criteria that Petilli et al. had applied when creating their variable—for instance, both the target and the prime word had to be associated with at least 100 pictures in ImageNet. The rest of the preprocessing involved the same steps as the main analysis (detailed in Methods). The resulting data set contained 496 participants, 1,091 prime–target pairs and 254,140 RTs. On average, there were 128 prime–target pairs per participant (SD = 10.37), and conversely, 58 participants per prime–target pair (SD = 4.90).
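A rough sketch of this subsetting step follows; the object petilli_visual_similarity stands for the vision-based similarity scores of Petilli et al. (2021), and its name and join key are assumptions.

library(dplyr)
library(tidyr)

semanticpriming_with_visualsimilarity = semanticpriming %>%
  # Add the vision-based similarity scores (assumed object and key names)
  inner_join(petilli_visual_similarity, by = 'primeword_targetword') %>%
  # Keep only trials with data on all relevant variables
  drop_na()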

Figure 7 shows the correlations among the predictors and the dependent variable.

Code

# Using the following variables...
semanticpriming_with_visualsimilarity %>%
  
  select(z_target.RT, z_vocabulary_size, z_attentional_control, 
         z_cosine_similarity, z_visual_similarity, 
         z_visual_rating_diff, z_word_concreteness_diff, 
         z_target_word_frequency, z_target_number_syllables) %>%
  
  # Use plain names
  rename('RT' = z_target.RT, 
         'Vocabulary size' = z_vocabulary_size,
         'Attentional control' = z_attentional_control,
         'Language-based similarity' = z_cosine_similarity,
         'Visual-strength difference' = z_visual_rating_diff,
         'Vision-based similarity' = z_visual_similarity,
         'Word-concreteness difference' = z_word_concreteness_diff,
         'Target-word frequency' = z_target_word_frequency,
         'Number of target-word syllables' = z_target_number_syllables) %>%
  
  # make correlation matrix (custom function from the 'R_functions' folder)
  correlation_matrix() + 
  theme(plot.margin = unit(c(0, 0, 0.1, -3.1), 'in'))

Figure 7: Zero-order correlations in the semantic priming data set that included vision-based similarity.

Diagnostics for the frequentist analysis

The model presented convergence warnings. To avoid removing important random slopes, which could increase the Type I error rate (i.e., false positives; Brauer & Curtin, 2018; Singmann & Kellen, 2019), we examined the model after refitting it using seven optimisation algorithms through the ‘allFit’ function of the ‘lme4’ package (Bates et al., 2021). The results showed that all optimisers produced virtually identical means for all effects, suggesting that the convergence warnings were not consequential (Bates et al., 2021; see Appendix B).

Code

# Calculate VIF for every predictor and return only the maximum VIF rounded up
maxVIF_semanticpriming_with_visualsimilarity = 
  car::vif(semanticpriming_with_visualsimilarity_lmerTest) %>% max %>% ceiling

The residual errors were not normally distributed, and attempts to mitigate this deviation proved unsuccessful (see Appendix B). However, this is not likely to have posed a major problem, as mixed-effects models are fairly robust to deviations from normality (Knief & Forstmeier, 2021; Schielzeth et al., 2020). Last, the model did not present multicollinearity problems, with all VIFs below 2 (see Dormann et al., 2013; Harrison et al., 2018).

Results

Code

# Calculate R^2. This coefficient must be interpreted with caution 
# (Nakagawa et al., 2017; https://doi.org/10.1098/rsif.2017.0213). 
# Also, transform coefficient to rounded percentage.

Nakagawa2017_fixedeffects_R2_semanticpriming_with_visualsimilarity_lmerTest = 
  paste0(
    (MuMIn::r.squaredGLMM(semanticpriming_with_visualsimilarity_lmerTest)[1, 'R2m'][[1]] * 100) %>% 
      sprintf('%.2f', .), '%'
  )

Nakagawa2017_randomeffects_R2_semanticpriming_with_visualsimilarity_lmerTest = 
  paste0(
    (MuMIn::r.squaredGLMM(semanticpriming_with_visualsimilarity_lmerTest)[1, 'R2c'][[1]] * 100) %>% 
      sprintf('%.2f', .), '%'
  )

Table 2 presents the results. For reasons of space, the covariates are shown in Table 3. The fixed effects explained 3.53% of the variance, and the random effects explained 18.47% (Nakagawa et al., 2017; for an explanation of this difference, see Results of Study 2.1). Figure 8 displays the frequentist estimates of the effects of interest (Bayesian estimates were not computed owing to time constraints).

Code

# Rename effects in plain language and specify the random slopes
# (if any) for each effect, in the footnote. For this purpose, 
# superscripts are added to the names of the appropriate effects.
# 
# In the interactions below, word-level variables are presented 
# first for the sake of consistency (the order does not affect 
# the results in any way). Also in the interactions, double 
# colons are used to inform the 'frequentist_model_table' 
# function that the two terms in the interaction must be split 
# into two lines.

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) ==
    'z_attentional_control'] = 'Attentional control'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_vocabulary_size'] = 'Vocabulary size <sup>a</sup>'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_recoded_participant_gender'] = 'Gender <sup>a</sup>'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) ==
    'z_target_word_frequency'] = 'Word frequency'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) ==
    'z_target_number_syllables'] = 'Number of syllables'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) ==
    'z_word_concreteness_diff'] = 'Word-concreteness difference'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_cosine_similarity'] = 'Language-based similarity <sup>b</sup>'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_visual_rating_diff'] = 'Visual-strength difference <sup>b</sup>'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_visual_similarity'] = 'Vision-based similarity <sup>b</sup>'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_recoded_interstimulus_interval'] = 'Stimulus onset asynchrony (SOA) <sup>b</sup>'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) ==
    'z_word_concreteness_diff:z_vocabulary_size'] =
  'Word-concreteness difference :: Vocabulary size'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) ==
    'z_word_concreteness_diff:z_recoded_interstimulus_interval'] =
  'Word-concreteness difference : SOA'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) ==
    'z_word_concreteness_diff:z_recoded_participant_gender'] =
  'Word-concreteness difference : Gender'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) ==
    'z_attentional_control:z_cosine_similarity'] =
  'Language-based similarity :: Attentional control'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) ==
    'z_attentional_control:z_visual_rating_diff'] =
  'Visual-strength difference :: Attentional control'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) ==
    'z_attentional_control:z_visual_similarity'] =
  'Vision-based similarity :: Attentional control'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_vocabulary_size:z_cosine_similarity'] = 
  'Language-based similarity :: Vocabulary size'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_vocabulary_size:z_visual_rating_diff'] = 
  'Visual-strength difference :: Vocabulary size'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_vocabulary_size:z_visual_similarity'] = 
  'Vision-based similarity :: Vocabulary size'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_recoded_participant_gender:z_cosine_similarity'] = 
  'Language-based similarity : Gender'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_recoded_participant_gender:z_visual_rating_diff'] = 
  'Visual-strength difference : Gender'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_recoded_participant_gender:z_visual_similarity'] = 
  'Vision-based similarity : Gender'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_recoded_interstimulus_interval:z_cosine_similarity'] = 
  'Language-based similarity : SOA <sup>b</sup>'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_recoded_interstimulus_interval:z_visual_rating_diff'] = 
  'Visual-strength difference : SOA <sup>b</sup>'

rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients)[
  rownames(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients) == 
    'z_recoded_interstimulus_interval:z_visual_similarity'] = 
  'Vision-based similarity : SOA <sup>b</sup>'


# Next, change the names in the confidence intervals object

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) ==
    'z_attentional_control'] = 'Attentional control'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_vocabulary_size'] = 'Vocabulary size <sup>a</sup>'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_recoded_participant_gender'] = 'Gender <sup>a</sup>'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) ==
    'z_target_word_frequency'] = 'Word frequency'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) ==
    'z_target_number_syllables'] = 'Number of syllables'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) ==
    'z_word_concreteness_diff'] = 'Word-concreteness difference'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_cosine_similarity'] = 'Language-based similarity <sup>b</sup>'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_visual_rating_diff'] = 'Visual-strength difference <sup>b</sup>'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_visual_similarity'] = 'Vision-based similarity <sup>b</sup>'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_recoded_interstimulus_interval'] = 'Stimulus onset asynchrony (SOA) <sup>b</sup>'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) ==
    'z_word_concreteness_diff:z_vocabulary_size'] =
  'Word-concreteness difference :: Vocabulary size'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) ==
    'z_word_concreteness_diff:z_recoded_interstimulus_interval'] =
  'Word-concreteness difference : SOA'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) ==
    'z_word_concreteness_diff:z_recoded_participant_gender'] =
  'Word-concreteness difference : Gender'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) ==
    'z_attentional_control:z_cosine_similarity'] =
  'Language-based similarity :: Attentional control'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) ==
    'z_attentional_control:z_visual_rating_diff'] =
  'Visual-strength difference :: Attentional control'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) ==
    'z_attentional_control:z_visual_similarity'] =
  'Vision-based similarity :: Attentional control'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_vocabulary_size:z_cosine_similarity'] = 
  'Language-based similarity :: Vocabulary size'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_vocabulary_size:z_visual_rating_diff'] = 
  'Visual-strength difference :: Vocabulary size'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_vocabulary_size:z_visual_similarity'] = 
  'Vision-based similarity :: Vocabulary size'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_recoded_participant_gender:z_cosine_similarity'] = 
  'Language-based similarity : Gender'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_recoded_participant_gender:z_visual_rating_diff'] = 
  'Visual-strength difference : Gender'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_recoded_participant_gender:z_visual_similarity'] = 
  'Vision-based similarity : Gender'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_recoded_interstimulus_interval:z_cosine_similarity'] = 
  'Language-based similarity : SOA <sup>b</sup>'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_recoded_interstimulus_interval:z_visual_rating_diff'] = 
  'Visual-strength difference : SOA <sup>b</sup>'

rownames(confint_semanticpriming_with_visualsimilarity_lmerTest)[
  rownames(confint_semanticpriming_with_visualsimilarity_lmerTest) == 
    'z_recoded_interstimulus_interval:z_visual_similarity'] = 
  'Vision-based similarity : SOA <sup>b</sup>'
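
# Aside (illustrative sketch only, not part of the original script): the
# renaming above maps model terms to display labels one entry at a time,
# once for the model summary and once for the confidence intervals. The same
# mapping could be stored in a named lookup vector and applied to both
# objects with a small helper. The vector below is truncated and only
# repeats mappings already defined above.

new_labels = c(
  'z_vocabulary_size' = 'Vocabulary size <sup>a</sup>',
  'z_cosine_similarity' = 'Language-based similarity <sup>b</sup>',
  'z_visual_rating_diff' = 'Visual-strength difference <sup>b</sup>',
  'z_visual_similarity' = 'Vision-based similarity <sup>b</sup>',
  'z_recoded_interstimulus_interval:z_visual_similarity' = 
    'Vision-based similarity : SOA <sup>b</sup>'
  # ... remaining terms as listed above
)

rename_terms = function(x, lookup) {
  hits = rownames(x) %in% names(lookup)
  rownames(x)[hits] = lookup[rownames(x)[hits]]
  x
}

# Equivalent to the renaming above (left commented out here):
# KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients =
#   rename_terms(KR_summary_semanticpriming_with_visualsimilarity_lmerTest$coefficients,
#                new_labels)
# confint_semanticpriming_with_visualsimilarity_lmerTest =
#   rename_terms(confint_semanticpriming_with_visualsimilarity_lmerTest, new_labels)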


# Create table (using custom function from the 'R_functions' folder)

# Covariates are commented out as they do not fit in the table. 
# They are instead shown in the subsequent table.

frequentist_model_table(
  KR_summary_semanticpriming_with_visualsimilarity_lmerTest, 
  confint_semanticpriming_with_visualsimilarity_lmerTest,
  select_effects = c('(Intercept)',
                     # 'Attentional control',
                     'Vocabulary size <sup>a</sup>',
                     'Gender <sup>a</sup>',
                     # 'Word frequency',
                     # 'Number of syllables',
                     # 'Word-concreteness difference',
                     'Language-based similarity <sup>b</sup>',
                     'Visual-strength difference <sup>b</sup>',
                     'Vision-based similarity <sup>b</sup>',
                     'Stimulus onset asynchrony (SOA) <sup>b</sup>',
                     # 'Word-concreteness difference :: Vocabulary size',
                     # 'Word-concreteness difference : SOA',
                     # 'Word-concreteness difference : Gender',
                     # 'Language-based similarity :: Attentional control',
                     # 'Visual-strength difference :: Attentional control',
                     # 'Vision-based similarity :: Attentional control',
                     'Language-based similarity :: Vocabulary size',
                     'Visual-strength difference :: Vocabulary size',
                     'Vision-based similarity :: Vocabulary size',
                     'Language-based similarity : Gender',
                     'Visual-strength difference : Gender',
                     'Vision-based similarity : Gender',
                     'Language-based similarity : SOA <sup>b</sup>',
                     'Visual-strength difference : SOA <sup>b</sup>',
                     'Vision-based similarity : SOA <sup>b</sup>'),
  interaction_symbol_x = TRUE,
  caption = 'Effects of interest in the semantic priming model that included vision-based similarity.') %>%
  
  # Group predictors under headings
  pack_rows('Individual differences', 2, 3) %>% 
  pack_rows('Prime--target relationship', 4, 6) %>% 
  pack_rows('Task condition', 7, 7) %>% 
  pack_rows('Interactions', 8, 16) %>% 
  
  # Apply white background to override default shading in HTML output
  row_spec(1:16, background = 'white') %>%
  
  # Format
  kable_classic(full_width = FALSE, html_font = 'Cambria') %>%
  
  # Footnote describing abbreviations, random slopes, etc. 
  footnote(escape = FALSE, threeparttable = TRUE, 
           # The <p> below is used to enter a margin above the footnote 
           general_title = '<p style="margin-top: 10px;"></p>', 
           general = paste('*Note*. &beta; = Estimate based on $z$-scored predictors; *SE* = standard error;',
                           'CI = confidence interval. Covariates shown in next table due to space. Some ',
                           'interactions are split over two lines, with the second line indented. <br>', 
                           '<sup>a</sup> By-word random slopes were included for this effect.',
                           '<sup>b</sup> By-participant random slopes were included for this effect.'))
Table 2: Effects of interest in the semantic priming model that included vision-based similarity.

                                                  β      SE    95% CI          t       p
(Intercept)                                       0.01   0.01  [-0.01, 0.02]   0.90    .370
Individual differences
Vocabulary size a                                 0.00   0.00  [-0.01, 0.01]   0.21    .834
Gender a                                          0.00   0.00  [-0.01, 0.01]   -0.05   .962
Prime–target relationship
Language-based similarity b                       -0.07  0.01  [-0.09, -0.06]  -8.33   <.001
Visual-strength difference b                      0.03   0.01  [0.01, 0.04]    3.04    .002
Vision-based similarity b                         -0.02  0.01  [-0.04, -0.01]  -2.55   .011
Task condition
Stimulus onset asynchrony (SOA) b                 0.06   0.01  [0.04, 0.08]    6.80    <.001
Interactions
Language-based similarity × Vocabulary size       -0.01  0.01  [-0.02, 0.01]   -0.96   .339
Visual-strength difference × Vocabulary size      -0.01  0.01  [-0.02, 0.01]   -1.02   .309
Vision-based similarity × Vocabulary size         0.00   0.01  [-0.01, 0.01]   -0.01   .991
Language-based similarity × Gender                0.00   0.01  [-0.02, 0.01]   -0.75   .456
Visual-strength difference × Gender               -0.01  0.01  [-0.02, 0.01]   -1.05   .294
Vision-based similarity × Gender                  0.00   0.01  [-0.01, 0.01]   0.39    .696
Language-based similarity × SOA b                 0.00   0.00  [0.00, 0.01]    0.87    .382
Visual-strength difference × SOA b                -0.01  0.00  [-0.02, 0.00]   -2.60   .010
Vision-based similarity × SOA b                   0.01   0.00  [0.00, 0.01]    1.28    .201

Note. β = Estimate based on \(z\)-scored predictors; SE = standard error; CI = confidence interval. Covariates shown in next table due to space.
a By-word random slopes were included for this effect. b By-participant random slopes were included for this effect.
Code

# Create table (using custom function from the 'R_functions' folder)

# Only the covariates are shown, and the effects of interest are
# commented out as they were shown in the table above.

frequentist_model_table(
  KR_summary_semanticpriming_with_visualsimilarity_lmerTest, 
  confint_semanticpriming_with_visualsimilarity_lmerTest,
  select_effects = c('Attentional control',
                     # 'Vocabulary size <sup>a</sup>',
                     # 'Gender <sup>a</sup>',
                     'Word frequency',
                     'Number of syllables',
                     'Word-concreteness difference',
                     # 'Language-based similarity <sup>b</sup>',
                     # 'Visual-strength difference <sup>b</sup>',
                     # 'Vision-based similarity <sup>b</sup>',
                     # 'Stimulus onset asynchrony (SOA) <sup>b</sup>',
                     'Word-concreteness difference :: Vocabulary size',
                     'Word-concreteness difference : SOA',
                     'Word-concreteness difference : Gender',
                     'Language-based similarity :: Attentional control',
                     'Visual-strength difference :: Attentional control',
                     'Vision-based similarity :: Attentional control'  # no trailing comma: last selected effect
                     # 'Language-based similarity :: Vocabulary size',
                     # 'Visual-strength difference :: Vocabulary size',
                     # 'Vision-based similarity :: Vocabulary size',
                     # 'Language-based similarity : Gender',
                     # 'Visual-strength difference : Gender',
                     # 'Language-based similarity : SOA <sup>b</sup>',
                     # 'Visual-strength difference : SOA <sup>b</sup>',
                     # 'Vision-based similarity : SOA <sup>b</sup>'
  ),
  interaction_symbol_x = TRUE,
  caption = 'Covariates in the semantic priming model that included vision-based similarity.') %>%
  
  # Group predictors under headings
  pack_rows('Individual difference covariate', 1, 1) %>% 
  pack_rows('Target-word lexical covariates', 2, 3) %>% 
  pack_rows('Prime--target covariate', 4, 4) %>% 
  pack_rows('Covariate interactions', 5, 10) %>%
  
  # Apply white background to override default shading in HTML output
  row_spec(1:10, background = 'white') %>%
  
  # Format
  kable_classic(full_width = FALSE, html_font = 'Cambria') %>%
  
  # Footnote describing abbreviations, random slopes, etc. 
  footnote(escape = FALSE, threeparttable = TRUE, 
           # The <p> below is used to enter a margin above the footnote 
           general_title = '<p style="margin-top: 10px;"></p>', 
           general = paste('*Note*. &beta; = Estimate based on $z$-scored predictors; *SE* = standard error;',
                           'CI = confidence interval. Some interactions are split over two lines, with the ',
                           'second line indented. <br>'))
Table 3: Covariates in the semantic priming model that included vision-based similarity.

                                                   β      SE    95% CI          t       p
Individual difference covariate
Attentional control                                0.00   0.00  [-0.01, 0.00]   -1.06   .288
Target-word lexical covariates
Word frequency                                     -0.15  0.01  [-0.17, -0.14]  -21.97  <.001
Number of syllables                                0.02   0.01  [0.01, 0.04]    3.54    <.001
Prime–target covariate
Word-concreteness difference                       0.02   0.01  [0.01, 0.04]    2.73    .006
Covariate interactions
Word-concreteness difference × Vocabulary size     -0.01  0.00  [-0.01, 0.00]   -1.15   .252
Word-concreteness difference × SOA                 0.01   0.00  [0.01, 0.02]    5.64    <.001
Word-concreteness difference × Gender              0.01   0.00  [0.00, 0.01]    1.34    .179
Language-based similarity × Attentional control    0.00   0.00  [-0.01, 0.01]   -0.91   .362
Visual-strength difference × Attentional control   0.00   0.00  [-0.01, 0.01]   0.71    .477
Vision-based similarity × Attentional control      0.00   0.00  [-0.01, 0.01]   0.80    .423

Note. β = Estimate based on \(z\)-scored predictors; SE = standard error; CI = confidence interval.
Code

# Run plot through source() rather than directly in this R Markdown document
# to preserve the format.

source('semanticpriming/analysis_with_visualsimilarity/semanticpriming_with_visualsimilarity_confidence_intervals_plot.R', 
       local = TRUE)

include_graphics(
  paste0(
    getwd(),  # Circumvent illegal characters in file path
    '/semanticpriming/analysis_with_visualsimilarity/plots/semanticpriming_with_visualsimilarity_confidence_intervals_plot.pdf'
  ))

Figure 8: Means and 95% confidence intervals for the effects of interest in the semantic priming model that included vision-based similarity.

The results revealed an effect of the human-based measure, visual-strength difference (as in the main analysis above), along with a smaller effect of the computational measure, vision-based similarity. There was, however, an important difference between these measures in their interaction with SOA: whereas visual-strength difference had a larger effect with the short SOA, vision-based similarity did not interact with SOA, contrary to the results of Petilli et al. (2021). This divergence was not due to collinearity between the two measures (\(r\) = -.04). Importantly, both measures appeared to be valid, judging by their correlations with language-based similarity and with word concreteness (Figure 7). We reflect on these results in the discussion.
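For orientation only—using simulated numbers rather than the study data—the checks referred to in this paragraph amount to pairwise correlations among the predictors. The variable names below are merely illustrative.

# Simulated illustration (not the study data): collinearity and validity checks
# of this kind are pairwise correlations among predictors.
set.seed(1)
n = 1000
language_similarity = rnorm(n)
visual_rating_diff = rnorm(n)
visual_similarity = 0.5 * language_similarity + rnorm(n)  # toy association
concreteness_diff = 0.5 * visual_rating_diff + rnorm(n)   # toy association

round(cor(cbind(language_similarity, visual_rating_diff,
                visual_similarity, concreteness_diff)), 2)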

Statistical power analysis

Power curves were estimated for most effects of interest in the main model—that is, the model that did not include vision-based similarity. Figures 9 and 10 show the estimated power for some main effects and interactions of interest as a function of the number of participants. To plan the sample size of a future study, these results must be considered under two assumptions: first, that the future study would apply a statistical method similar to ours—namely, a mixed-effects model with random intercepts and slopes—and, second, that the analysis would encompass at least as many prime–target pairs as the current study, namely 5,943 (distributed in various blocks across participants, with not all pairs presented to every participant). Furthermore, each figure should be considered in detail; here, we provide only a summary. First, detecting the main effect of language-based similarity—which had a strong effect on RTs—would require 50 participants. Second, detecting the interaction between language-based similarity and SOA—a considerably weaker effect—would require 600 participants. Last, the other effects would require more than 1,000 participants—or, in the case of gender differences, many more than that.
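The power curves themselves are produced by the script sourced below. Purely as an orientation for readers unfamiliar with simulation-based power analysis—and not as a reproduction of the thesis pipeline—the following is a minimal, self-contained sketch using the simr package and toy data; all names and numbers in it are invented.

# Minimal sketch of a simulation-based power curve (simr), with toy data.
# This is NOT the thesis script, which is sourced from
# 'semanticpriming/power_analysis/'.
library(lme4)
library(simr)

set.seed(123)
toy = expand.grid(participant = factor(1:30), item = factor(1:40))
toy$similarity = rnorm(nrow(toy))
toy$RT = 0.05 * toy$similarity +
  0.3 * rnorm(30)[toy$participant] +   # by-participant variation
  0.3 * rnorm(40)[toy$item] +          # by-item variation
  rnorm(nrow(toy))

fit = lmer(RT ~ similarity + (1 | participant) + (1 | item), data = toy)

# Extend the design to a larger number of participants, then estimate power
# for the fixed effect of interest at several sample sizes.
fit_extended = extend(fit, along = 'participant', n = 200)
power_curve = powerCurve(fit_extended,
                         test = fixed('similarity', 'z'),  # other methods (e.g., Kenward-Roger) are slower
                         along = 'participant',
                         breaks = c(50, 100, 200),
                         nsim = 50)
print(power_curve)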

Code

# Run plot through source() rather than directly in this R Markdown document 
# to preserve the italicised text.
source('semanticpriming/power_analysis/semanticpriming_all_powercurves.R', 
       local = TRUE)

include_graphics(
  paste0(
    getwd(),  # Circumvent illegal characters in file path
    '/semanticpriming/power_analysis/plots/semanticpriming_powercurve_plots_1_2_3.pdf'
  ))

Figure 9: Power curves for some main effects in the semantic priming study.

Code

include_graphics(
  paste0(
    getwd(),  # Circumvent illegal characters in file path
    '/semanticpriming/power_analysis/plots/semanticpriming_powercurve_plots_4_5_6_7_8_9.pdf'
  ))

Figure 10: Power curves for some interactions in the semantic priming study.

Discussion of Study 2.1

The results revealed a significant, facilitatory effect of language-based similarity and a smaller but significant, inhibitory effect of visual-strength difference. That is, greater language-based similarity resulted in shorter RTs, whereas a greater visual-strength difference resulted in longer RTs. There was also a sizable effect of stimulus onset asynchrony (SOA), with shorter RTs in the short SOA condition (200 ms) than in the long SOA condition (1,200 ms). Furthermore, there were significant interactions. First, language-based priming was larger in higher-vocabulary participants than in lower-vocabulary ones. Second, both language-based priming and vision-based priming were larger with the short SOA than with the long one. These results broadly replicated those of Petilli et al. (2021). It is especially noteworthy that vision-based information had a significant effect, consistent with some previous research (Connell & Lynott, 2014a; Flores d’Arcais et al., 1985; Petilli et al., 2021; Schreuder et al., 1984), and contrasting with other research that did not find such an effect (Ostarek & Huettig, 2017) or observed it only after visually focussed tasks (Pecher et al., 1998; Yee et al., 2012). Last, no effect of gender was found. Below, we delve into some further aspects of these results.

The importance of outliers

The interaction between language-based similarity and vocabulary size (Figure 4-a) was apparent across all deciles of vocabulary size, but it was clearest among participants who were more than one standard deviation away from the mean. Outliers in individual differences have played important roles in other areas of cognition as well, such as in the study of aphantasia and hyperphantasia—traits characterised, respectively, by a diminished and an extraordinary ability to mentally visualise objects (Milton et al., 2021; Zeman et al., 2020). This influence of outliers provides a reason to study more varied samples of participants whenever possible. Furthermore, greater interindividual variation might help detect effects of individual differences that have so far proven elusive (e.g., Hedge et al., 2018; Muraki & Pexman, 2021; Ponari, Norbury, Rotaru, et al., 2018; Rodríguez-Ferreiro et al., 2020; Rouder & Haaf, 2019).

Human-based and computational measures of vision-based information

Next, in a secondary analysis, we compared the roles of two measures of vision-based priming. The first measure—visual-strength difference—was operationalised as the difference in visual strength between the prime word and the target word in each trial. This difference score was thus based on modality-specific ratings provided by human participants (Lynott et al., 2020). The second measure—vision-based similarity—was created by Petilli et al. (2021) and was based on vector representations trained on labelled images from ImageNet. This variable is therefore computational. The effect of visual-strength difference was slightly larger than that of vision-based similarity. This result is consistent with some previous findings suggesting that human-based measures explained more variance than computational measures (De Deyne et al., 2016, 2019; Gagné et al., 2016; Schmidtke et al., 2018; cf. Michaelov et al., 2022; Snefjella & Blank, 2020). If the differing degree of human dependence of these two variables indeed underlay their respective effect sizes, a related issue would need to be considered: circularity. Petilli et al. argued that using human-based predictors—such as ratings—to investigate human behaviour is less valid than using predictors that are more independent of human behaviour—such as computational measures. On the one hand, we identify two reasons for scepticism regarding this circularity hypothesis. First, the underlying basis of all computational measures (e.g., Mandera et al., 2017; Petilli et al., 2021) is itself human behaviour, notwithstanding the degree to which this human basis is filtered by computational methods. Second, we have not found sufficient research addressing the validity question. On the other hand, the circularity hypothesis is important enough to warrant dedicated research. Specifically, future studies could systematically compare the theoretical insights provided by human-based and computational measures, as well as the effect sizes achieved by each type.
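To make the formal contrast concrete: one measure is a difference between two human ratings, whereas the other is a cosine between two image-derived vectors. The toy sketch below only illustrates these two forms; the exact operationalisation used in the analyses (e.g., whether the rating difference is signed or absolute) is the one implemented in the thesis scripts, not this illustration.

# Toy illustration of the two forms of vision-related measure (not the study data).

# Human-based: difference between the prime's and the target's visual-strength
# ratings (ratings as in the Lancaster Sensorimotor Norms; signed here for simplicity).
visual_strength_difference = function(prime_rating, target_rating) {
  prime_rating - target_rating
}

# Computational: cosine similarity between prime and target image-based vectors
# (cf. the ImageNet-derived representations of Petilli et al., 2021).
cosine_similarity = function(prime_vector, target_vector) {
  sum(prime_vector * target_vector) /
    (sqrt(sum(prime_vector^2)) * sqrt(sum(target_vector^2)))
}

visual_strength_difference(4.8, 3.2)
cosine_similarity(c(0.2, 0.5, 0.1), c(0.3, 0.4, 0.2))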

It is noteworthy that both visual-strength difference and vision-based similarity have independently proven relevant, and arguably valid, considering their correlations with other measures—especially word-concreteness difference and language-based similarity—and considering the effects of each measure on semantic priming (see Petilli et al., 2021). However, the differences between these measures also deserve attention: visual-strength difference was barely correlated with language-based similarity, whereas vision-based similarity was barely correlated with word-concreteness difference (see Figure 7). These results call for an investigation into the underlying composition of visual-strength difference and vision-based similarity.

Furthermore, whereas visual-strength difference retained its significant interaction with SOA—also observed in the main analysis presented above—vision-based similarity did not present such an interaction. The lack of an interaction between vision-based similarity and SOA contrasts with the results of Petilli et al. (2021), who found that vision-based similarity was only significant in the short SOA condition. There are several possible reasons for this difference, including (I) the more conservative method of our current analysis—i.e., a maximal mixed-effects model containing more predictors than the hierarchical regression performed by Petilli et al.—and (II) the inclusion of individual differences in the present study (i.e., vocabulary size, attentional control and gender), as opposed to the aggregation performed by Petilli et al.

Last, the interaction between language-based similarity and SOA became non-significant in this sub-analysis. This difference from the original analysis may have been caused by the sizable correlation between language-based similarity and vision-based similarity (\(r\) = .49). In this regard, it is worth noting how much influence the addition of a single variable—along with its interactions—can have on a model.

The influence of the analytical method

Taken together, the sub-analysis that included vision-based similarity offered a glimpse into the crucial role of analytical choices in the present topic. A previous example of this influence appeared in a set of studies that used Latent Semantic Analysis (LSA) as a predictor of semantic priming. Hutchison et al. (2008) operationalised LSA as a difference score and did not find an effect of this variable. In contrast, later studies did not use a difference score, and they observed a significant effect (Günther et al., 2016a; Mandera et al., 2017).

We can extrapolate this issue to a very important comparison that is often made—namely, that between language-based information and embodied simulation. The pervasive superiority of language over the other systems (perception, action, emotion and sociality)—found in the three current studies and in previous ones (Banks et al., 2021; Kiela & Bottou, 2014; Lam et al., 2015; Louwerse et al., 2015; Pecher et al., 1998; Petilli et al., 2021)—would be less trustworthy if the instruments used to measure the language system had been far more precise than those used to measure the embodiment system. In this sense, it is relevant to consider how variables are improved in research: it is done iteratively, by comparing the performance of different variables. Critically, the literature contains many comparisons of text-based variables, some dating back to the 1990s (De Deyne et al., 2013, 2016; Günther et al., 2016a, 2016b; M. N. Jones et al., 2006; Lund & Burgess, 1996; Mandera et al., 2017; Mikolov et al., 2013; Wingfield & Connell, 2022b). In contrast, the work on embodiment variables began more than a decade later, and it has been less concerned with benchmarking the explanatory power of variables (but see Vergallito et al., 2020). Instead, this literature contains more comparisons of different modalities—e.g., visual strength, auditory strength, valence, etc. (Lynott et al., 2020; Lynott & Connell, 2009; Newcombe et al., 2012). Thus, if linguistic measures are more precise than embodiment measures owing to this greater work on the variables, such a difference could account for part of the apparent superiority of linguistic information over embodied information (see Banks et al., 2021; Kiela & Bottou, 2014; Lam et al., 2015; Louwerse et al., 2015; Pecher et al., 1998; Petilli et al., 2021).

Analytical choices such as the operationalisation of variables and the complexity of statistical models can thus greatly influence the conclusions of research. Indeed, our current results and previous ones suggest that the conclusions of research are inextricable from the methods used in each study (see Barsalou, 2019; Botvinik-Nezer et al., 2020; Perret & Bonin, 2019; E.-J. Wagenmakers et al., 2022). Therefore, in the medium term, it may pay dividends to continue examining the influence of analytical choices. Unfortunately, in many research fields, reflecting on the sensitivity of our analyses may conflict with the incentives of the system, which can penalise nuanced conclusions in favour of simplified stories. To counter such a bias, it may be necessary to give methodology greater prominence in scientific papers—for instance, by commenting on the method in the abstract and by extending the methods section in the body of the paper. By the same token, our current results should make us question some decisions by scientific publishers, such as rendering the methods section in a smaller font than the results section or placing it at the end of the paper. In a nutshell, it may be useful to ensure that scientists are aware that research findings are fundamentally dependent on research methods.

Statistical power analysis

We analysed the statistical power associated with several effects of interest across various sample sizes. The results of this power analysis can help determine the number of participants required to reliably examine each of these effects in a future study. Importantly, the results rest on two assumptions. First, the future study would apply a statistical method similar to ours—namely, a mixed-effects model with random intercepts and slopes. Second, the analysis of the future study would encompass at least 5,943 prime–target pairs (distributed in various blocks across participants, with not all pairs presented to every participant).

First, the results revealed that detecting the main effect of language-based similarity would require 50 participants. Next, detecting the interaction between language-based similarity and SOA would require 600 participants. Last, the other effects would require more than 1,000 participants—or, in the case of gender differences, many more than that.

References

Amsel, B. D. (2011). Tracking real-time neural activation of conceptual knowledge using single-trial event-related potentials. Neuropsychologia, 49(5), 970–983. https://doi.org/10.1016/j.neuropsychologia.2011.01.003
Amsel, B. D., Urbach, T. P., & Kutas, M. (2014). Empirically grounding grounded cognition: The case of color. NeuroImage, 99, 149–157. https://doi.org/10.1016/j.neuroimage.2014.05.025
Balota, D. A., & Lorch, R. F. (1986). Depth of automatic spreading activation: Mediated priming effects in pronunciation but not in lexical decision. Journal of Experimental Psychology: Learning, Memory, and Cognition, 12(3), 336–345. https://doi.org/10.1037/0278-7393.12.3.336
Balota, D. A., Yap, M. J., Hutchison, K. A., Cortese, M. J., Kessler, B., Loftis, B., Neely, J. H., Nelson, D. L., Simpson, G. B., & Treiman, R. (2007). The English Lexicon Project. Behavior Research Methods, 39, 445–459. https://doi.org/10.3758/BF03193014
Banks, B., Wingfield, C., & Connell, L. (2021). Linguistic distributional knowledge and sensorimotor grounding both contribute to semantic category production. Cognitive Science, 45(10), e13055. https://doi.org/10.1111/cogs.13055
Barsalou, L. W. (2019). Establishing generalizable mechanisms. Psychological Inquiry, 30(4), 220–230. https://doi.org/10.1080/1047840X.2019.1693857
Barsalou, L. W., Santos, A., Simmons, W. K., & Wilson, C. D. (2008). Language and simulation in conceptual processing. In Symbols and Embodiment. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199217274.003.0013
Bates, D., Maechler, M., Bolker, B., Walker, S., Christensen, R. H. B., Singmann, H., Dai, B., Scheipl, F., Grothendieck, G., Green, P., Fox, J., Brauer, A., & Krivitsky, P. N. (2021). Package 'lme4'. CRAN. https://cran.r-project.org/web/packages/lme4/lme4.pdf
Becker, S., Moscovitch, M., Behrmann, M., & Joordens, S. (1997). Long-term semantic priming: A computational account and empirical evidence. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(5), 1059–1082. https://doi.org/10.1037/0278-7393.23.5.1059
Bernabeu, P., Willems, R. M., & Louwerse, M. M. (2017). Modality switch effects emerge early and increase throughout conceptual processing: Evidence from ERPs. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. J. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (pp. 1629–1634). Cognitive Science Society. https://doi.org/10.31234/osf.io/a5pcz
Bottini, R., Bucur, M., & Crepaldi, D. (2016). The nature of semantic priming by subliminal spatial words: Embodied or disembodied? Journal of Experimental Psychology: General, 145(9), 1160–1176. https://doi.org/10.1037/xge0000197
Botvinik-Nezer, R., Holzmeister, F., Camerer, C. F., Dreber, A., Huber, J., Johannesson, M., Kirchler, M., Iwanir, R., Mumford, J. A., Adcock, R. A., Avesani, P., Baczkowski, B. M., Bajracharya, A., Bakst, L., Ball, S., Barilari, M., Bault, N., Beaton, D., Beitner, J., … Schonberg, T. (2020). Variability in the analysis of a single neuroimaging dataset by many teams. Nature, 582(7810), 84–88. https://doi.org/10.1038/s41586-020-2314-9
Brauer, M., & Curtin, J. J. (2018). Linear mixed-effects models and the analysis of nonindependent data: A unified framework to analyze categorical and continuous independent variables that vary within-subjects and/or within-items. Psychological Methods, 23(3), 389–411. https://doi.org/10.1037/met0000159
Brunellière, A., Perre, L., Tran, T., & Bonnotte, I. (2017). Co-occurrence frequency evaluated with large language corpora boosts semantic priming effects. Quarterly Journal of Experimental Psychology, 70(9), 1922–1934. https://doi.org/10.1080/17470218.2016.1215479
Brysbaert, M., Warriner, A. B., & Kuperman, V. (2014). Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46, 904–911. https://doi.org/10.3758/s13428-013-0403-5
Charbonnier, J., & Wartena, C. (2019). Predicting word concreteness and imagery. Proceedings of the 13th International Conference on Computational Semantics - Long Papers, 176–187. https://doi.org/10.18653/v1/W19-0415
Charbonnier, J., & Wartena, C. (2020). Predicting the concreteness of German words. Proceedings of the 5th Swiss Text Analytics Conference (SwissText), 2624. https://doi.org/10.25968/opus-2075
Cohen, J. (1983). The cost of dichotomization. Applied Psychological Measurement, 7(3), 249–253. https://doi.org/10.1177/014662168300700301
Collins, J., Pecher, D., Zeelenberg, R., & Coulson, S. (2011). Modality switching in a property verification task: An ERP study of what happens when candles flicker after high heels click. Frontiers in Psychology, 2(10). https://doi.org/10.3389/fpsyg.2011.00010
Connell, L. (2019). What have labels ever done for us? The linguistic shortcut in conceptual processing. Language, Cognition and Neuroscience, 34(10), 1308–1318. https://doi.org/10.1080/23273798.2018.1471512
Connell, L., & Lynott, D. (2014a). I see/hear what you mean: Semantic activation in visual word recognition depends on perceptual attention. Journal of Experimental Psychology: General, 143(2), 527–533. https://doi.org/10.1037/a0034626
De Deyne, S., Navarro, D. J., Perfors, A., Brysbaert, M., & Storms, G. (2019). The Small World of Words English word association norms for over 12,000 cue words. Behavior Research Methods, 51, 987–1006. https://doi.org/10.3758/s13428-018-1115-7
De Deyne, S., Navarro, D. J., & Storms, G. (2013). Better explanations of lexical and semantic cognition using networks derived from continued rather than single-word associations. Behavior Research Methods, 45(2), 480–498. https://doi.org/10.3758/s13428-012-0260-7
De Deyne, S., Perfors, A., & Navarro, D. (2016). Predicting human similarity judgments with distributional models: The value of word associations. Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, 1861–1870.
de Wit, B., & Kinoshita, S. (2015). The masked semantic priming effect is task dependent: Reconsidering the automatic spreading activation process. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(4), 1062–1075. https://doi.org/10.1037/xlm0000074
Dormann, C. F., Elith, J., Bacher, S., Buchmann, C., Carl, G., Carré, G., Marquéz, J. R. G., Gruber, B., Lafourcade, B., Leitão, P. J., Münkemüller, T., McClean, C., Osborne, P. E., Reineking, B., Schröder, B., Skidmore, A. K., Zurell, D., & Lautenbach, S. (2013). Collinearity: A review of methods to deal with it and a simulation study evaluating their performance. Ecography, 36(1), 27–46. https://doi.org/10.1111/j.1600-0587.2012.07348.x
Flores d’Arcais, G. B., Schreuder, R., & Glazenborg, G. (1985). Semantic activation during recognition of referential words. Psychological Research, 47(1), 39–49. https://doi.org/10.1007/BF00309217
Gagné, C. L., Spalding, T. L., & Nisbet, K. A. (2016). Processing English compounds: Investigating semantic transparency. SKASE Journal of Theoretical Linguistics, 13(2), 2–22. https://link.gale.com/apps/doc/A469757337/LitRC?u=anon~b6a332f4&xid=9960afc7
Günther, F., Dudschig, C., & Kaup, B. (2015). LSAfun: An r package for computations based on latent semantic analysis. Behavior Research Methods, 47(4), 930–944. https://doi.org/10.3758/s13428-014-0529-0
Günther, F., Dudschig, C., & Kaup, B. (2016a). Latent semantic analysis cosines as a cognitive similarity measure: Evidence from priming studies. Quarterly Journal of Experimental Psychology, 69(4), 626–653. https://doi.org/10.1080/17470218.2015.1038280
Günther, F., Dudschig, C., & Kaup, B. (2016b). Predicting lexical priming effects from distributional semantic similarities: A replication with extension. Frontiers in Psychology, 7, 1646. https://doi.org/10.3389/fpsyg.2016.01646
Hald, L. A., Bastiaansen, M. C. M., & Hagoort, P. (2006). EEG theta and gamma responses to semantic violations in online sentence processing. Brain and Language, 96(1), 90–105. https://doi.org/10.1016/j.bandl.2005.06.007
Hald, L. A., Hocking, I., Vernon, D., Marshall, J. A., & Garnham, A. (2013). Exploring modality switching effects in negated sentences: Further evidence for grounded representations. Frontiers in Psychology, 4, 93. https://doi.org/10.3389/fpsyg.2013.00093
Hald, L. A., Marshall, J. A., Janssen, D. P., & Garnham, A. (2011). Switching modalities in a sentence verification task: ERP evidence for embodied language processing. Frontiers in Psychology, 2, 45. https://doi.org/10.3389/fpsyg.2011.00045
Harrison, X. A., Donaldson, L., Correa-Cano, M. E., Evans, J., Fisher, D. N., Goodwin, C., Robinson, B. S., Hodgson, D. J., & Inger, R. (2018). A brief introduction to mixed effects modelling and multi-model inference in ecology. PeerJ, 6, 4794. https://doi.org/10.7717/peerj.4794
Hauk, O. (2016). Only time will tell – why temporal information is essential for our neuroscientific understanding of semantics. Psychonomic Bulletin & Review, 23(4), 1072–1079. https://doi.org/10.3758/s13423-015-0873-9
Hedge, C., Powell, G., & Sumner, P. (2018). The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences. Behavior Research Methods, 50(3), 1166–1186. https://doi.org/10.3758/s13428-017-0935-1
Hoedemaker, R. S., & Gordon, P. C. (2014). It takes time to prime: Semantic priming in the ocular lexical decision task. Journal of Experimental Psychology: Human Perception and Performance, 40(6), 2179–2197. https://doi.org/10.1037/a0037677
Hutchison, K. A. (2003). Is semantic priming due to association strength or feature overlap? A microanalytic review. Psychonomic Bulletin & Review, 10(4), 785–813. https://doi.org/10.3758/BF03196544
Hutchison, K. A., Balota, D. A., Cortese, M. J., & Watson, J. M. (2008). Predicting semantic priming at the item level. Quarterly Journal of Experimental Psychology, 61(7), 1036–1066. https://doi.org/10.1080/17470210701438111
Hutchison, K. A., Balota, D. A., Neely, J. H., Cortese, M. J., Cohen-Shikora, E. R., Tse, C.-S., Yap, M. J., Bengson, J. J., Niemeyer, D., & Buchanan, E. (2013). The semantic priming project. Behavior Research Methods, 45, 1099–1114. https://doi.org/10.3758/s13428-012-0304-z
Jones, M. N., Kintsch, W., & Mewhort, D. J. (2006). High-dimensional semantic space accounts of priming. Journal of Memory and Language, 55(4), 534–552. https://doi.org/10.1016/j.jml.2006.07.003
Joordens, S., & Becker, S. (1997). The long and short of semantic priming effects in lexical decision. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(5), 1083–1105. https://doi.org/10.1037/0278-7393.23.5.1083
Kiefer, M., Pielke, L., & Trumpp, N. M. (2022). Differential temporo-spatial pattern of electrical brain activity during the processing of abstract concepts related to mental states and verbal associations. NeuroImage, 252, 119036. https://doi.org/10.1016/j.neuroimage.2022.119036
Kiela, D., & Bottou, L. (2014). Learning image embeddings using convolutional neural networks for improved multi-modal semantics. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 36–45. https://doi.org/10.3115/v1/D14-1005
Knief, U., & Forstmeier, W. (2021). Violating the normality assumption may be the lesser of two evils. Behavior Research Methods. https://doi.org/10.3758/s13428-021-01587-5
Lam, K. J., Dijkstra, T., & Rueschemeyer, S. A. (2015). Feature activation during word recognition: Action, visual, and associative-semantic priming effects. Frontiers in Psychology, 6, 659. https://doi.org/10.3389/fpsyg.2015.00659
Landauer, T. K., Foltz, P. W., & Laham, D. (1998). An introduction to latent semantic analysis. Discourse Processes, 25(2-3), 259–284. https://doi.org/10.1080/01638539809545028
Louwerse, M. M., & Connell, L. (2011). A taste of words: Linguistic context and perceptual simulation predict the modality of words. Cognitive Science, 35(2), 381–398. https://doi.org/10.1111/j.1551-6709.2010.01157.x
Louwerse, M. M., Hutchinson, S., Tillman, R., & Recchia, G. (2015). Effect size matters: The role of language statistics and perceptual simulation in conceptual processing. Language, Cognition and Neuroscience, 30(4), 430–447. https://doi.org/10.1080/23273798.2014.981552
Lund, K., & Burgess, C. (1996). Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, & Computers, 28(2), 203–208. https://doi.org/10.3758/BF03204766
Lund, K., Burgess, C., & Atchley, R. A. (1995). Semantic and associative priming in high-dimensional semantic space. Proceedings of the Cognitive Science Society, 660–665.
Lynott, D., & Connell, L. (2009). Modality exclusivity norms for 423 object properties. Behavior Research Methods, 41(2), 558–564. https://doi.org/10.3758/BRM.41.2.558
Lynott, D., Connell, L., Brysbaert, M., Brand, J., & Carney, J. (2020). The Lancaster Sensorimotor Norms: Multidimensional measures of perceptual and action strength for 40,000 English words. Behavior Research Methods, 52, 1271–1291. https://doi.org/10.3758/s13428-019-01316-z
Mandera, P., Keuleers, E., & Brysbaert, M. (2017). Explaining human performance in psycholinguistic tasks with models of semantic similarity based on prediction and counting: A review and empirical validation. Journal of Memory and Language, 92, 57–78. https://doi.org/10.1016/j.jml.2016.04.001
McDonald, S., & Brew, C. (2002). A distributional model of semantic context effects in lexical processing. Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, 17–24. http://dblp.uni-trier.de/db/conf/acl/acl2004.html#McDonaldB04
Michaelov, J. A., Coulson, S., & Bergen, B. K. (2022). So cloze yet so far: N400 amplitude is better predicted by distributional information than human predictability judgements. IEEE Transactions on Cognitive and Developmental Systems, 1–1. https://doi.org/10.1109/TCDS.2022.3176783
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space (Version 3). arXiv. https://doi.org/10.48550/arXiv.1301.3781
Milton, F., Fulford, J., Dance, C., Gaddum, J., Heuerman-Williamson, B., Jones, K., Knight, K. F., MacKisack, M., Winlove, C., & Zeman, A. (2021). Behavioral and neural signatures of visual imagery vividness extremes: Aphantasia versus hyperphantasia. Cerebral Cortex Communications, 2(2), tgab035. https://doi.org/10.1093/texcom/tgab035
Muraki, E. J., & Pexman, P. M. (2021). Simulating semantics: Are individual differences in motor imagery related to sensorimotor effects in language processing? Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(12), 1939–1957. https://doi.org/10.1037/xlm0001039
Nakagawa, S., Johnson, P. C. D., & Schielzeth, H. (2017). The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded. Journal of The Royal Society Interface, 14(134), 20170213. https://doi.org/10.1098/rsif.2017.0213
Nelson, D. L., McEvoy, C. L., & Schreiber, T. A. (2004). The University of South Florida free association, rhyme, and word fragment norms. Behavior Research Methods, Instruments, & Computers, 36(3), 402–407. https://doi.org/10.3758/BF03195588
Newcombe, P., Campbell, C., Siakaluk, P., & Pexman, P. (2012). Effects of emotional and sensorimotor knowledge in semantic processing of concrete and abstract nouns. Frontiers in Human Neuroscience, 6(275). https://doi.org/10.3389/fnhum.2012.00275
Ostarek, M., & Huettig, F. (2017). A task-dependent causal role for low-level visual processes in spoken word comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(8), 1215–1224. https://doi.org/10.1037/xlm0000375
Padó, S., & Lapata, M. (2007). Dependency-based construction of semantic space models. Computational Linguistics, 33(2), 161–199. https://doi.org/10.1162/coli.2007.33.2.161
Paivio, A. (1990). Mental representations: A dual coding approach. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195066661.001.0001
Pecher, D., Zeelenberg, R., & Barsalou, L. W. (2003). Verifying different-modality properties for concepts produces switching costs. Psychological Science, 14(2), 119–124. https://doi.org/10.1111/1467-9280.t01-1-01429
Pecher, D., Zeelenberg, R., & Raaijmakers, J. G. W. (1998). Does pizza prime coin? Perceptual priming in lexical decision and pronunciation. Journal of Memory and Language, 38(4), 401–418. https://doi.org/10.1006/jmla.1997.2557
Perret, C., & Bonin, P. (2019). Which variables should be controlled for to investigate picture naming in adults? A Bayesian meta-analysis. Behavior Research Methods, 51(6), 2533–2545. https://doi.org/10.3758/s13428-018-1100-1
Petilli, M. A., Günther, F., Vergallito, A., Ciapparelli, M., & Marelli, M. (2021). Data-driven computational models reveal perceptual simulation in word processing. Journal of Memory and Language, 117, 104194. https://doi.org/10.1016/j.jml.2020.104194
Pexman, P. M., & Yap, M. J. (2018). Individual differences in semantic processing: Insights from the Calgary semantic decision project. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(7), 1091–1112. https://doi.org/10.1037/xlm0000499
Ponari, M., Norbury, C. F., Rotaru, A., Lenci, A., & Vigliocco, G. (2018). Learning abstract words and concepts: Insights from developmental language disorder. Philosophical Transactions of the Royal Society B: Biological Sciences, 373, 20170140. https://doi.org/10.1098/rstb.2017.0140
Pylyshyn, Z. W. (1973). What the mind’s eye tells the mind’s brain: A critique of mental imagery. Psychological Bulletin, 80(1), 1–24. https://doi.org/10.1037/h0034650
Ratcliff, R., Thapar, A., & McKoon, G. (2010). Individual differences, aging, and IQ in two-choice tasks. Cognitive Psychology, 60, 127–157. https://doi.org/10.1016/j.cogpsych.2009.09.001
Roads, B. D., & Love, B. C. (2020). Learning as the unsupervised alignment of conceptual systems. Nature Machine Intelligence, 2(1), 76–82. https://doi.org/10.1038/s42256-019-0132-2
Rodríguez-Ferreiro, J., Aguilera, M., & Davies, R. (2020). Semantic priming and schizotypal personality: Reassessing the link between thought disorder and enhanced spreading of semantic activation. PeerJ, 8, e9511. https://doi.org/10.7717/peerj.9511
Rouder, J. N., & Haaf, J. M. (2019). A psychometrics of individual differences in experimental tasks. Psychonomic Bulletin & Review, 26(2), 452–467. https://doi.org/10.3758/s13423-018-1558-y
Santos, A., Chaigneau, S. E., Simmons, W. K., & Barsalou, L. W. (2011). Property generation reflects word association and situated simulation. Language and Cognition, 3(1), 83–119. https://doi.org/10.1515/langcog.2011.004
Schielzeth, H., Dingemanse, N. J., Nakagawa, S., Westneat, D. F., Allegue, H., Teplitsky, C., Réale, D., Dochtermann, N. A., Garamszegi, L. Z., & Araya‐Ajoy, Y. G. (2020). Robustness of linear mixed‐effects models to violations of distributional assumptions. Methods in Ecology and Evolution, 11(9), 1141–1152. https://doi.org/10.1111/2041-210X.13434
Schmidtke, D., Van Dyke, J. A., & Kuperman, V. (2018). Individual variability in the semantic processing of English compound words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(3), 421–439. https://doi.org/10.1037/xlm0000442
Schoot, R. van de, Depaoli, S., Gelman, A., King, R., Kramer, B., Märtens, K., Tadesse, M. G., Vannucci, M., Willemsen, J., & Yau, C. (2021). Bayesian statistics and modelling. Nature Reviews Methods Primers, 1, 3. https://doi.org/10.1038/s43586-020-00003-0
Schreuder, R., Flores d’Arcais, G. B., & Glazenborg, G. (1984). Effects of perceptual and conceptual similarity in semantic priming. Psychological Research, 45(4), 339–354. https://doi.org/10.1007/BF00309710
Simmons, W. K., Hamann, S. B., Harenski, C. L., Hu, X. P., & Barsalou, L. W. (2008). fMRI evidence for word association and situated simulation in conceptual processing. Journal of Physiology-Paris, 102(1), 106–119. https://doi.org/10.1016/j.jphysparis.2008.03.014
Singmann, H., & Kellen, D. (2019). An introduction to mixed models for experimental psychology. In D. H. Spieler & E. Schumacher (Eds.), New methods in cognitive psychology (pp. 4–31). Psychology Press.
Snefjella, B., & Blank, I. (2020). Semantic norm extrapolation is a missing data problem. PsyArXiv. https://doi.org/10.31234/osf.io/y2gav
Solovyev, V. (2021). Concreteness/abstractness concept: State of the art. In B. M. Velichkovsky, P. M. Balaban, & V. L. Ushakov (Eds.), Advances in Cognitive Research, Artificial Intelligence and Neuroinformatics (pp. 275–283). Springer International Publishing. https://doi.org/10.1007/978-3-030-71637-0_33
Trumpp, N. M., Traub, F., & Kiefer, M. (2013). Masked priming of conceptual features reveals differential brain activation during unconscious access to conceptual action and sound information. PLOS ONE, 8(5), e65910. https://doi.org/10.1371/journal.pone.0065910
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P.-C. (2021). Rank-normalization, folding, and localization: An improved R-hat for assessing convergence of MCMC. Bayesian Analysis, 16(2), 667–718. https://doi.org/10.1214/20-BA1221
Vergallito, A., Petilli, M. A., & Marelli, M. (2020). Perceptual modality norms for 1,121 Italian words: A comparison with concreteness and imageability scores and an analysis of their impact in word processing tasks. Behavior Research Methods, 52(4), 1599–1616. https://doi.org/10.3758/s13428-019-01337-8
Wagenmakers, E.-J., Sarafoglou, A., & Aczel, B. (2022). One statistical analysis must not rule them all. Nature, 605(7910), 423–425. https://doi.org/10.1038/d41586-022-01332-8
Wingfield, C., & Connell, L. (2022b). Understanding the role of linguistic distributional knowledge in cognition. Language, Cognition and Neuroscience, 1–51. https://doi.org/10.1080/23273798.2022.2069278
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock Johnson III tests of cognitive abilities. Riverside Publishing.
Yap, M. J., Balota, D. A., Sibley, D. E., & Ratcliff, R. (2012). Individual differences in visual word recognition: Insights from the English Lexicon Project. Journal of Experimental Psychology: Human Perception and Performance, 38(1), 53–79. https://doi.org/10.1037/a0024177
Yap, M. J., Hutchison, K. A., & Tan, L. C. (2017). Individual differences in semantic priming performance: Insights from the semantic priming project. In M. N. Jones (Ed.), Frontiers of cognitive psychology. Big data in cognitive science (pp. 203–226). Routledge/Taylor & Francis Group.
Yap, M. J., Tse, C.-S., & Balota, D. A. (2009). Individual differences in the joint effects of semantic priming and word frequency revealed by RT distributional analyses: The role of lexical integrity. Journal of Memory and Language, 61(3), 303–325. https://doi.org/10.1016/j.jml.2009.07.001
Yee, E., Ahmed, S. Z., & Thompson-Schill, S. L. (2012). Colorless green ideas (can) prime furiously. Psychological Science, 23(4), 364–369. https://doi.org/10.1177/0956797611430691
Yee, E., Huffstetler, S., & Thompson-Schill, S. L. (2011). Function follows form: Activation of shape and function features during object identification. Journal of Experimental Psychology: General, 140(3), 348–363. https://doi.org/10.1037/a0022840
Zeman, A., Milton, F., Della Sala, S., Dewar, M., Frayling, T., Gaddum, J., Hattersley, A., Heuerman-Williamson, B., Jones, K., & MacKisack, M. (2020). Phantasia—the psychological significance of lifelong visual imagery vividness extremes. Cortex, 130, 426–440. https://doi.org/10.1016/j.cortex.2020.04.003

  1. These measures are compared at the end of the Results section.↩︎

  2. Despite the name of the package, the measure we used was not based on Latent Semantic Analysis.↩︎

  3. For future reference, it should be noted that, in Studies 2.2 and 2.3, the stimuli are the stimulus words, as there are no prime words in those studies.↩︎

  4. All interaction plots across the three studies are based on the frequentist models. Further interaction plots are available in Appendix D.↩︎




Pablo Bernabeu, 2022. Licence: CC BY 4.0.
Thesis: https://doi.org/10.17635/lancaster/thesis/1795.

Online book created using the R package bookdown.