Faces constitute a significant part of human communication because they transfer nonverbal signals to other people and transmit rich information about a person, including unique identity information. A growing body of research shows that contextual effects influence how we interpret faces (Wieser & Brosch, 2012), including during instructed threat, in which merely informing participants that a face will be associated with a shock produces persistent threat-related effects (Bublatzky et al., 2014). These effects modulate specific event-related potentials (ERPs; see Schellhaas et al., 2020). Such threat signals can likewise be elicited by associating a neutral face with danger through highly negative biographical knowledge, of the kind typically conveyed in various media outlets. Although research has shown that evaluative knowledge modulates ERPs, findings regarding the time course of such effects are conflicting, leading to an ongoing debate about whether early (rather automatic) or late (controlled) stages of processing are modulated (e.g., see Abdel Rahman, 2011; Baum et al., 2020; McCrackin & Itier, 2018).
Early and late ERP components represent distinct stages of face processing (for a review, see Schweinberger & Neumann, 2016). The occipitally scored P1 is known to reflect early stages of stimulus detection and discrimination (e.g., Luck & Hillyard, 1994). The following N170 is viewed as a structural encoding component, and larger amplitudes are reported for faces compared with objects at this component (Eimer, 2011). The subsequent early posterior negativity (EPN) is observed as a differential negativity when contrasting emotional and neutral expressions and has been related to early attentional selection processes (Schupp et al., 2004). The EPN indexes sensitivity to salient emotional information (Junghöfer et al., 2001) and is modulated by emotional expressions partially independent of attentional resources (e.g., see Frühholz et al., 2011; Hammerschmidt et al., 2018; Itier & Neath-Tavares, 2017; Rellecke et al., 2012; Schacht & Sommer, 2009). Finally, the late positive potential (LPP) indicates elaborated stimulus evaluation and controlled attention processes, particularly when the appraisal of affective meaning is involved (e.g., see Hajcak et al., 2009; Schupp et al., 2006).
Evaluative person knowledge has been shown to modulate early and late ERPs. However, experimental tasks and examined ERPs vary considerably, and the resulting findings are conflicting. These experiments included tasks that required responding to an oddball stimulus (Xu et al., 2016), discriminating gender (Luo et al., 2016), recollecting nationality information (Suess et al., 2014), discriminating semantic features (well-known faces vs. new faces, nationality, and names of the individuals pictured; Abdel Rahman, 2011), or responding to evaluative emotional information (Baum et al., 2020; Kissler & Strehlow, 2017). Only one study examined the P1, and it showed no evaluative effects (Luo et al., 2016), whereas there are two conflicting findings regarding the N170 (Luo et al., 2016; Xu et al., 2016). One study used a passive-viewing task and observed no effects (Xu et al., 2016), whereas the other required participants to explicitly attend to the gender of the faces and found an increased N170 (Luo et al., 2016). It remains an open question whether these conflicting findings depend on differences in feature-based attention to the faces and to their emotional information. For later components, larger EPN amplitudes were found for faces paired with negative compared with neutral biographical information (Abdel Rahman, 2011; Luo et al., 2016; Suess et al., 2014; Xu et al., 2016). However, other studies observed no effect of negative evaluative information at the EPN level (Baum et al., 2020; Kissler & Strehlow, 2017; for unfamiliar faces, see Abdel Rahman, 2011). For the LPP, a majority of the studies show larger amplitudes for faces with negative associations (Abdel Rahman, 2011; Baum et al., 2020; Kissler & Strehlow, 2017; Xu et al., 2016; but see Luo et al., 2016). In summary, studies show that evaluative information modulates ERPs but provide a mixed picture of which information-processing stages are affected.
Research on emotional expressions showed that feature-based attention to the emotional information is crucial for late modulations, whereas early components are less vulnerable to a competing task (Rellecke et al., 2012; Schindler, Bruchmann, et al., 2020). Whether this holds true for faces associated with evaluative information is unknown.
Statement of Relevance
Humans are highly sensitive to individual faces that have been associated with negative information. In this research, we asked how our brain differentiates individual human faces on the basis of relevant biographic information, such as that provided by newspaper headlines. We gave adult participants the task of responding to faces, some of which we associated with a brutal crime. While they viewed the faces, we recorded electroencephalograms (EEGs). Very early in the viewing period, we found an attention-independent potentiation of brain responses as participants processed the faces of supposed criminals. When participants’ attention was directed to the evaluative background information, subsequent processing of the faces also showed an additional increase in neural activity. These findings suggest a mandatory early prioritization of the processing of negatively evaluated faces, whereas attention to relevant facial features evokes a second stage of potentiated brain responses to faces of individuals charged with a crime. Our findings indicate that negative biographical information that resembles typical newspaper headlines strongly affects the neuronal processing of individual faces.
Because ERPs index distinct stages of processing and these stages depend on available resources, studies in which task conditions are systematically varied might resolve the inconsistent findings on the processing of faces with acquired emotional meaning. In this preregistered study (https://osf.io/erkdt), we examined how feature-based attention modulates the effects of evaluative person knowledge on early (P1, N170), midlatency (EPN), and late (LPP) ERPs. Participants were required to (a) perform a perceptual line-discrimination task, in which faces served as distractor stimuli; (b) discriminate the age of faces, so that attention was directed to the face but not to the evaluative information; or (c) decide about the person’s evaluative backstory, thus directing attention to the emotional information. We expected no P1 effects of evaluative emotional information. For the N170 and EPN, on the basis of findings of partially resource-independent increases for emotional information, we expected increased differentiation between negative and neutral faces when attention was directed to the face and even more so when attention was directed to the emotional information. Finally, larger LPP amplitudes for negative compared with neutral faces were expected only in the emotion task.
Method
Participants
Following our preregistered data-sampling plan, we recruited 40 participants. Calculations using G*Power (Version 3.1.7; Faul et al., 2009) showed that this sample size would yield a power of greater than 90% to detect medium-sized effects of evaluative information and interactions of evaluative information and attention tasks (η
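As an informal cross-check of this sample-size rationale (not the G*Power calculation itself, which was based on the full repeated measures design), the power of a within-subject comparison of this size can be approximated by simulation. The sketch below assumes a single paired contrast with a medium standardized effect (dz = 0.5) and n = 40; these assumptions are illustrative, not taken from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, dz, n_sims, alpha = 40, 0.5, 2000, 0.05

hits = 0
for _ in range(n_sims):
    # Standardized paired differences with a medium effect (mean = dz, SD = 1).
    diffs = rng.normal(loc=dz, scale=1.0, size=n)
    _, p = stats.ttest_1samp(diffs, 0.0)
    hits += p < alpha

power = hits / n_sims  # typically close to the analytic value of about .87
```

A repeated measures ANOVA pooling several such contrasts, as computed in G*Power, yields somewhat higher power, consistent with the greater-than-90% figure reported above.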
Stimuli
The facial stimuli for the experiment were taken from the FACES database with permission for use in the current experiment (Ebner et al., 2010). Specifically, eight identities (four young men, four middle-age men)
with neutral expressions were chosen. Half of the faces were associated with highly negative evaluative knowledge and the other half with neutral information (see below). Each half contained two young and two middle-age men. Assignment of identities to conditions was counterbalanced, so that each identity was equally often associated with negative and neutral information across participants. Moreover, the grouping of identities was balanced across participants; thus, all combinations of identities were equally distributed across experimental conditions. To ensure face validity when coupling the faces with different pieces of narrative information, we displayed only headshots of each face, keeping the facial hair but removing the shirts. The faces were always presented with an overlay of horizontal or vertical thin lines (created using Presentation software; Version 21.1; Neurobehavioral Systems, 2019). Five horizontal or five vertical lines appeared within the boundaries of each face (horizontal lines: 3.3 degrees of visual angle [DVA]; vertical lines: 4.5 DVA; thickness: 0.03 DVA; centered
Procedure
Participants first responded to a demographic questionnaire. Then they were seated 60 cm in front of a gamma-corrected display (Iiyama G-Master GB2488HSU) running at 60 Hz with a Michelson contrast of .9979 (minimum

Fig. 1. Schematic overview of (a) the experimental flow and (b) trial presentation in all tasks. After electroencephalogram (EEG) preparation was complete (a), the experiment started with one of the three attention tasks, counterbalanced across participants. Each task required participants to make a binary forced-choice decision, discriminating line orientation, age, or emotional information. During each trial (b), the backstory was presented in the form of newspaper articles, after which the faces of all four group members were presented. The story and members of the criminal group were presented first, followed by the story and members of the firefighters. Next, participants were informed about the discriminative feature. The trial structure was identical across tasks. Note that stimulus displays in the illustration are not drawn to scale, and the depicted face identity was not used in the experiment but is shown only for display purposes.
Participants were instructed to avoid eye movements and blinks during the stimulus presentation and started with the perceptual task, the age task, or the emotion task (see Fig. 1). Task order and response keys (X and M) were counterbalanced across participants. In each trial, participants had to make a two-alternative forced choice, deciding (a) whether the overlaid line orientation was horizontal or vertical, (b) whether the face was old or young, or (c) whether the associated evaluative emotional information was negative or neutral (i.e., whether the face belonged to the criminal group or the firefighter-training group). Before each task started, the background stories were repeated to ensure that participants would remember the evaluative information. The trial structure and mode of presentation were kept constant across all tasks. Each trial started with the display of a fixation cross for 800 to 1,000 ms, after which a face was presented for 100 ms. The faces were presented this briefly to prevent attention shifts to other facial features. The face was followed by another fixation cross presented for 1,500 ms, during which responses were recorded. Each face was repeated 16 times during each task, yielding, per task, 64 trials with faces associated with neutral information and 64 trials with faces associated with negative information, totaling 384 trials across the three tasks. After testing, participants rated each face on valence, arousal, and perceived threat. Finally, they responded to the Beck Depression Inventory-II and the State-Trait Anxiety Inventory (Hautzinger et al., 2009; Spielberger et al., 1999) as well as to a short version of the NEO Five-Factor Inventory (Körner et al., 2008); these measures were not relevant for the current study but were collected for analyses of individual differences in effects of evaluative person knowledge in larger samples.
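The trial counts stated above follow directly from the design (8 identities × 16 repetitions per task × 3 tasks); a minimal check using only numbers from the text:

```python
identities = 8              # 4 negative, 4 neutral
repetitions_per_task = 16
tasks = 3

trials_per_task = identities * repetitions_per_task  # 64 negative + 64 neutral
total_trials = trials_per_task * tasks               # 384 trials overall
```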
Electroencephalogram (EEG) recording and preprocessing
EEG signals were recorded from 64 active electrodes using Biosemi’s ActiView software (Version 8.12; https://www.biosemi.com/download.htm). Four additional electrodes measured horizontal and vertical eye movements. The recording sampling rate was 512 Hz. Off-line, data were re-referenced to the average reference, and a band-pass filter with a low cutoff of 0.01 Hz and a high cutoff of 40 Hz was applied. Recorded eye movements were corrected using the automatic eye-artifact correction method implemented in
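The article does not specify the filter implementation, so the following is only an illustrative sketch of the two preprocessing steps it does describe (average re-referencing and 0.01–40 Hz band-pass filtering), applied here to placeholder data with SciPy; the filter order and type are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 512.0  # sampling rate from the text
rng = np.random.default_rng(1)
eeg = rng.standard_normal((64, 5 * int(fs)))  # placeholder: 64 channels, 5 s of noise

# Step 1: average reference -- subtract the mean across channels at each sample.
eeg_ref = eeg - eeg.mean(axis=0, keepdims=True)

# Step 2: zero-phase band-pass from 0.01 to 40 Hz (Butterworth order 2 is an
# assumption; the article states only the cutoff frequencies).
sos = butter(2, [0.01, 40.0], btype="bandpass", fs=fs, output="sos")
eeg_filt = sosfiltfilt(sos, eeg_ref, axis=1)
```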
Data analyses
Data were analyzed using 2 (emotion: negative, neutral) × 3 (task: perceptual, age, emotion) repeated measures analyses of variance. For ERP analyses of the P1, N170, and EPN, channel-group laterality (left, right) was included as a factor. Effect sizes were calculated as partial eta squared (ηp²).
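Partial eta squared can be recovered from a reported F value and its degrees of freedom via ηp² = F·df1 / (F·df1 + df2). The helper below is a generic utility, and the example values are hypothetical, not results from this study.

```python
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Recover partial eta squared from an F statistic and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Hypothetical illustration: F(1, 39) = 10.0 gives 10 / 49, about .20.
example = partial_eta_squared(10.0, 1, 39)
```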
EEG scalp data were statistically analyzed using Electromagnetic Encephalography Software (EMEGS; Version 2.8; Peyk et al., 2011). The main effects of emotion, task, and their interaction were examined. Time windows were segmented into intervals from 80 to 100 ms for the P1, 120 to 170 ms for the N170, 250 to 350 ms for the EPN, and 400 to 600 ms for the LPP. We measured the P1, N170, and EPN over two symmetrical occipital clusters (P1 and N170: left P9, P7, PO7; right P10, P8, PO8; EPN: left P9, P7, PO7, O1; right P10, P8, PO8, O2). Additionally, we measured the LPP component over a centroparietal cluster (C1, Cz, C2, CP1, CPz, CP2, P1, Pz, P2). We slightly deviated from our preregistration in time (preregistered N170: 130–170 ms; preregistered EPN: 200–350 ms) and space (preregistered EPN: left P9, P7, PO7; right P10, P8, PO8; preregistered LPP: C1, Cz, C2, CP1, CPz, CP2). This was because of the preregistered approach to validate ERP windows for the P1 and N170 by collapsing ERPs across all conditions (Luck & Gaspelin, 2017). For the EPN and LPP, which are typically scored as differences between negative and neutral stimuli, we collapsed waveforms to negative faces and neutral faces across all attention tasks to identify differential effects. Finally, analyses of covariance (ANCOVAs) with reaction time as a covariate were calculated to account for possible influences of reaction time differences on ERP modulations. All data (https://osf.io/rzevn) and a copy of the preregistration (https://osf.io/erkdt) are available on OSF.
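Scoring a component as the mean amplitude over a time window and electrode cluster, as described above, can be sketched as follows; the epoch array is random placeholder data, and only the N170 window (120–170 ms) and left cluster (P9, P7, PO7) are taken from the text.

```python
import numpy as np

fs = 512.0
times = np.arange(-0.1, 0.8, 1.0 / fs)  # epoch time axis in seconds
channels = ["P9", "P7", "PO7", "P10", "P8", "PO8"]  # N170 clusters from the text
rng = np.random.default_rng(2)
epochs = rng.standard_normal((128, len(channels), times.size))  # trials x channels x time

def mean_amplitude(epochs, times, channels, picks, tmin, tmax):
    """Average amplitude over an electrode cluster and time window, across trials."""
    ch_idx = [channels.index(c) for c in picks]
    t_mask = (times >= tmin) & (times <= tmax)
    return float(epochs[:, ch_idx][:, :, t_mask].mean())

# N170: 120-170 ms over the left occipito-temporal cluster.
n170_left = mean_amplitude(epochs, times, channels, ["P9", "P7", "PO7"], 0.120, 0.170)
```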
Results
Manipulation check
Face ratings
After the experiment, participants rated the valence, arousal, and perceived threat of faces. Ratings showed effects of emotional evaluative information for valence,

Fig. 2. Ratings and behavioral results in the three attention tasks. Average ratings of the (a) valence, (b) arousal, and (c) perceived threat of the stimuli are shown for each emotion condition. Average (d) reaction time (RT) and (e) hit rate (proportion of correct responses) for each emotion condition are shown separately for the three different tasks. Data bars show group means, and error bars depict 95% confidence intervals. Semitransparent dots depict individual data.
Hit rate
With regard to hit rate, there was no main effect of emotion,
Table 1. Means for Behavioral Measures Across the Three Attention Tasks
Note: Values in parentheses are standard deviations. For four participants, behavioral responses were recoded in at least one condition (emotion task in four participants, age task in one participant) because they reported having confused the response keys. No responses were recorded from one participant in one subcondition of the emotion task, presumably because the participant pressed a wrong key.
Reaction time
Regarding reaction time, main effects of emotion,
ERP results
P1
With respect to the P1, there was no main effect of emotion,
N170
For the N170, there was a large main effect of emotion,

Fig. 3. Main effects of emotion on the P1 and N170, respectively, in each of the three tasks (perceptual, age, and emotion). Scalp topographies depict the differences between event-related potentials (ERPs) in response to faces associated with criminal (negative) and neutral backstories. Black dots indicate positions of electrodes. The top row of ERP waveforms shows the mean time course for highlighted sensors in the negative and neutral conditions. The bottom row of ERP waveforms shows differences between means for the two emotion conditions; error bands show 95% bootstrapped confidence intervals. The gray shaded areas in all waveform graphs highlight the range of the P1 and N170, respectively. The bar plots show average amplitude across selected sensors in each combination of task and emotion condition. Data bars show group means, and error bars depict 95% confidence intervals. Semitransparent dots depict individual data.
EPN
Concerning the EPN, there was no main effect of emotion,
We observed a significant interaction between emotion and task,

Fig. 4. Main effects of emotion on the early posterior negativity (EPN) in each of the three tasks (perceptual, age, and emotion). Scalp topographies depict the differences between event-related potentials (ERPs) in response to faces associated with criminal (negative) and neutral backstories. Black dots indicate positions of electrodes. The top row of ERP waveforms shows the mean time course for highlighted sensors in the negative and neutral conditions. The bottom row of ERP waveforms shows differences between means for the two emotion conditions; error bands show 95% bootstrapped confidence intervals. The gray shaded areas in all waveform graphs highlight the range of the EPN. The bar plot shows average amplitude across selected sensors in each combination of task and emotion condition. Data bars show group means, and error bars depict 95% confidence intervals. Semitransparent dots depict individual data.
LPP
Regarding the LPP, the main effects of emotion,

Fig. 5. Main effects of evaluative information on the late positive potential (LPP) in each of the three tasks (perceptual, age, and emotion). Scalp topographies depict the differences between event-related potentials (ERPs) in response to faces associated with criminal (negative) and neutral backstories. Black dots indicate positions of electrodes. The top row of ERP waveforms shows the mean time course for highlighted sensors in the negative and neutral conditions. The bottom row of ERP waveforms shows differences between means for the two emotion conditions; error bands show 95% bootstrapped confidence intervals. The gray shaded areas in all waveform graphs highlight the range of the LPP. The bar plot shows average amplitude across selected sensors in each combination of task and emotion condition. Data bars show group means, and error bars depict 95% confidence intervals. Semitransparent dots depict individual data.
Control analyses: ANCOVAs with reaction time as a covariate
Because we observed reaction time differences between the tasks as well as interactions of emotion and task, we calculated ANCOVAs with reaction time as a covariate (see Table 2). For the P1, the main effect of task disappeared when the analysis accounted for reaction time differences; however, N170 and LPP main effects of emotion, EPN and LPP main effects of task, and most importantly, EPN and LPP interaction effects of emotion and task remained significant.
Table 2. Results From Repeated Measures Analyses on the Four Event-Related Potential Components, Both With and Without Reaction Time (RT) as a Covariate
Note: Significant main and interaction effects are highlighted in boldface. All
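The control analyses above adjust ERP amplitudes for reaction time. A simplified stand-in for that covariate adjustment (not a full repeated measures ANCOVA) is to residualize amplitudes on the covariate, as sketched below with hypothetical per-participant values.

```python
import numpy as np

def residualize(y, covariate):
    """Remove the linear influence of a covariate (e.g., reaction time) from y."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Hypothetical per-participant values: LPP amplitudes and mean reaction times.
rng = np.random.default_rng(3)
rt = rng.normal(500.0, 50.0, size=40)
lpp = 0.01 * rt + rng.normal(0.0, 1.0, size=40)
lpp_adj = residualize(lpp, rt)  # uncorrelated with rt by construction
```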
Discussion
We tested how feature-based attention to perceptual, face, or emotional information differentially influences evaluative-knowledge effects on early (P1, N170), midlatency (EPN), and late (LPP) ERP components. Faces of supposed criminals were rated as more negative, arousing, and threatening than neutral faces. Accuracy was at ceiling in all tasks, whereas faster responses to the faces of criminals were found only during the emotion task. Interestingly, we found a task-independent effect of negative evaluative knowledge on the N170 as well as interactions of emotion and task for the EPN and LPP, showing differentiation between faces with negative and neutral associations only when participants attended to the associated emotional information.
The P1 is hypothesized to be related to early stimulus detection and discrimination (e.g., Luck & Hillyard, 1994). We expected no significant effects of evaluative emotional information, in line with the few studies examining P1 effects for different kinds of evaluative or context manipulations (e.g., Luo et al., 2016; Wieser et al., 2014). This indicates that evaluative information alone is not sufficient to increase P1 amplitudes for neutral expressions, in contrast to findings of studies that examined associated monetary gains (Hammerschmidt et al., 2017; but see Hammerschmidt et al., 2018). C1 modulations have even been observed in studies associating monetary information with visual stimuli (Rossi et al., 2017) and in studies in which participants were conditioned to associate faces with an electric shock (Rehbein et al., 2014). Finally, we found a smaller P1 during the emotion task, which could indicate differences in tonic vigilance.
The N170, which follows the P1, is viewed as a structural encoding component related to configural processing, and its amplitude is increased for faces compared with objects (Eimer, 2011). Here, we found a task-independent evaluative-information effect, with negative biographical information increasing amplitudes for the presented faces. Interestingly, N170 modulations are rarely examined in studies on evaluative person knowledge, and the existing results are conflicting (Luo et al., 2016; Xu et al., 2016). A critical factor in our results could be the small number of face identities. Because we used two groups and presented all group members together, it might have been easy to distinguish between subsequently presented faces and to associate all face identities with their evaluative background. This possibly enabled participants to discriminate configural features between putative criminal and neutral identities. The task-independent N170 modulation indicates that this discriminatory process did not require attention to face or emotional features. This finding suggests that the processing of configural information can be influenced by instructed evaluative knowledge, enabling early sensory differentiation. Adding to this idea, a recent study showed increased N170 responses for neutral faces toward which participants exhibit more negative stereotypes (Giménez-Fernández et al., 2020).
For the following EPN component, which has been examined more frequently, studies have also provided a mixed picture so far (e.g., Baum et al., 2020; Kissler & Strehlow, 2017; Luo et al., 2016; Suess et al., 2014). We expected increasing EPN differences across tasks. However, we observed an effect of negative biographical information only when attention was directed to the biographical background of a given face. Thus, our findings conflict with those of studies showing increased EPN amplitudes for faces associated with negative person knowledge in tasks that do not require attention to the emotional information (Suess et al., 2014; Xu et al., 2016). One explanation could be the short presentation time of the faces, which was chosen to avoid attention shifts to other facial features. Further explanations for conflicting EPN findings could relate to the specific learning task used to acquire the evaluative biographical information (see the Supplemental Material at https://osf.io/abx27/), the limited number of identities, or the specific attention tasks. It might be that in tasks that require attending to a semantic face feature (e.g., see Suess et al., 2014), the associated emotional biographical information is more readily retrieved than in other tasks. We believe that during the EPN stage, attentional selection mechanisms depend on both the emotional saliency of the given stimulus and available processing capacities (see Junghöfer et al., 2001; Schindler, Caldarone, et al., 2020). Previous studies using emotional facial expressions (e.g., Rellecke et al., 2012; Schindler, Bruchmann, et al., 2020) have clearly shown that the EPN can be dissociated from both earlier stages (N170) and later stages (LPP; see below).
For example, in a similar paradigm, we showed increasing EPN amplitudes with increasing attention to relevant features of fearful faces, showing also an increased EPN during a gender-discrimination task (Schindler, Bruchmann, et al., 2020). In line with this, other studies showed significant EPN modulations by emotional expressions when participants discriminated face identity (Hammerschmidt et al., 2018) and even when participants performed perceptual-discrimination tasks (Itier & Neath-Tavares, 2017; Rellecke et al., 2012). Thus, even though experimental effects on the EPN and LPP were highly similar in the present study, this does not imply that both components have similar functional roles.
In contrast to the intermediate stage associated with the EPN, the LPP indicates stimulus evaluation and controlled attention processes (e.g., see Hajcak et al., 2009; Schupp et al., 2006), and LPP effects are based on various sources, including evaluative, episodic, personal, and biographical information (see Schweinberger & Neumann, 2016). The majority of studies that used evaluative-knowledge designs found late differential effects (Abdel Rahman, 2011; Baum et al., 2020; Kissler & Strehlow, 2017; Klein et al., 2015; Xu et al., 2016). Regarding studies with no significant LPP modulation (e.g., Luo et al., 2016), our data suggest that the attended feature matters greatly for late differentiation: Elaborated differentiation does not occur inevitably but requires attention to the emotional information. These findings are similar to those of studies that manipulated feature-based attention to threatening emotional expressions (Rellecke et al., 2012; Schindler, Bruchmann, et al., 2020) and suggest that at these late processing stages, top-down processes interact with the associated emotional significance of a face more generally.
Constraints on generality and future directions
To the best of our knowledge, this is the first EEG study that systematically manipulated attention to evaluative emotional person knowledge. However, the stimulus presentation was very short, which possibly added to the clarity of our findings, and we used specific tasks to manipulate attention to perceptual, face, or emotional features. Future studies using other tasks (e.g., attention to gender, face, or nationality information) are needed to generalize our findings. Second, for cover-story purposes, the faces we used were all male, whereas the sample was predominantly women (
Conclusion
In summary, early (N170) effects of evaluative emotional knowledge were task independent, with increased amplitudes for purportedly criminal identities. In contrast, differential EPN and LPP effects depended on attention to the emotional background information. These findings are vitally important for researchers who conduct ERP studies using evaluative information because they reveal a systematic pattern of emotional sensitivity across competing attention tasks.
