Our visual perception seems effortless, but the brain has a limited processing capacity that curtails the amount of sensory information that can be brought into conscious awareness at any moment. A widely studied exemplar of this limitation is the ‘attentional blink’ (AB), in which observers are unable to report the second of two rapidly sequential targets if it appears within 200-500 ms of the first. Despite the apparent ubiquity of the AB effect, its computational and neurophysiological underpinnings have remained elusive. Here we propose a simple computational model of temporal attention that unifies the AB with spatial and feature-based attention. To test this model, we took a novel, integrative approach involving human psychophysics and functional brain imaging, along with neuronal recordings in mice. Specifically, we demonstrate that the AB arises only when visual targets have dissimilar representations in the brain and is absent when both targets share the same representation. Similarity in this context can be determined either by elementary features, such as edge orientation, or by acquired, high-level factors, such as numerical or alphabetical order. In this parsimonious model of the AB, attention to an initial target establishes a perceptual filter that is tuned to that target's unique representation in the brain. Subsequent items that match the filter remain available for conscious report, whereas those that do not match elude awareness altogether.
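As an illustration only, and not the model implementation used in this work, the similarity-filter idea can be sketched in a few lines of Python: the first target (T1) defines a filter over a feature space, and a second target (T2) presented within the blink window is reportable only if it matches that filter. The feature vectors, cosine-similarity measure, threshold, and report probabilities below are all hypothetical placeholders under assumed values, not fitted estimates.

```python
import numpy as np

def report_probability(t1_features, t2_features, lag_ms, threshold=0.8):
    """Toy similarity-filter sketch of the attentional blink (illustrative only).

    T1 establishes a filter tuned to its own feature vector; within the
    200-500 ms blink window, T2 is reportable only if it matches that
    filter closely enough. All numeric values are arbitrary placeholders.
    """
    # Cosine similarity between the two target representations
    similarity = np.dot(t1_features, t2_features) / (
        np.linalg.norm(t1_features) * np.linalg.norm(t2_features)
    )
    inside_blink_window = 200 <= lag_ms <= 500  # window stated in the text
    if inside_blink_window and similarity < threshold:
        return 0.2  # mismatching T2 is rarely reported (hypothetical value)
    return 0.9      # matching T2, or T2 outside the window, is usually reported

# Example: two oriented-edge "targets" as simple feature vectors
t1 = np.array([1.0, 0.0])        # e.g. horizontal edge
t2_same = np.array([1.0, 0.05])  # similar orientation -> passes the filter
t2_diff = np.array([0.0, 1.0])   # orthogonal orientation -> blinked

print(report_probability(t1, t2_same, lag_ms=300))  # high report probability
print(report_probability(t1, t2_diff, lag_ms=300))  # low report probability
```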