Insights from the human perception of moving faces can inform both technical animation systems and the study of the neural encoding of facial expressions in the brain. We present a psychophysical experiment that explores high-level after-effects for dynamic facial expressions. Specifically, we address to what extent such after-effects reflect adaptation of neural representations of static vs. dynamic features of faces. High-level after-effects have been reported for the recognition of static faces [Webster and MacLin 1999; Leopold et al. 2001], and also for the perception of point-light walkers [Jordan et al. 2006; Troje et al. 2006]. These after-effects were reflected by shifts in the category boundaries between different facial expressions and between male and female walks. We report a new after-effect in humans observing dynamic facial expressions generated by a highly controllable dynamic morphable face model. As a key element of our experiment, we created dynamic 'anti-expressions' in analogy to static 'anti-faces' [Leopold et al. 2001]. We tested the influence of dynamics and identity on expression-specific recognition performance after adaptation to 'anti-expressions'. In addition, a quantitative analysis of the optic-flow patterns corresponding to the adaptation and test expressions rules out that the observed changes reflect a simple low-level motion after-effect. Since we found no evidence for a critical role of the temporal order of the stimulus frames, we conclude that after-effects in dynamic faces might be dominated by adaptation to the form information in individual stimulus frames.
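The construction of a dynamic 'anti-expression' can be sketched in the spirit of static anti-faces: if the morphable model encodes each frame of an expression as a parameter vector deviating from a neutral face, the anti-expression is obtained by negating that deviation frame by frame, mirroring the trajectory through the neutral face. This is a minimal illustrative sketch, not the authors' actual model; the array shapes and the function name are assumptions.

```python
import numpy as np

def make_anti_expression(expression_frames, neutral):
    """Negate each frame's deviation from the neutral face.

    expression_frames: array of shape [n_frames, n_params], one morphable-model
        parameter vector per frame of the dynamic expression (hypothetical layout).
    neutral: array of shape [n_params], parameters of the neutral face.
    Returns the anti-expression trajectory, mirrored through the neutral face.
    """
    expression_frames = np.asarray(expression_frames, dtype=float)
    neutral = np.asarray(neutral, dtype=float)
    return neutral - (expression_frames - neutral)

# Toy example: 3 frames, 2 morph parameters, neutral face at the origin.
neutral = np.zeros(2)
smile = np.array([[0.2, 0.0], [0.5, 0.1], [1.0, 0.3]])
anti_smile = make_anti_expression(smile, neutral)
# With neutral at the origin, each parameter vector is simply negated.
```

Note that when the neutral face is not at the origin of the parameter space, the mirroring is taken relative to the neutral parameters rather than a simple sign flip.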