Abstract

Predictor-based Neural Architecture Search (NAS) uses performance predictors to estimate architecture accuracy quickly, thereby reducing the cost of architecture evaluation. However, existing predictor models struggle to represent the spatial topological information of graph-structured data and fail to capture deep features of whole architectures, which degrades prediction accuracy and generalization. Moreover, during the search process, predictors only evaluate architectures and provide no forward guidance for discovering new ones, which limits search efficiency. We therefore propose AE-NAS, an attention-driven evolutionary neural architecture search algorithm that achieves forward-guided evolution. By incorporating an attention mechanism into the predictor model and combining it with an existing path-based architecture encoding, we enhance the representation of topological information and evaluate architecture performance accurately. AE-NAS dynamically adjusts the search direction according to each path's importance to architecture performance, prioritizing the exploration of more promising architectures. Experiments on the NAS-Bench-101 and NAS-Bench-201 search spaces show that the attention-based predictor significantly improves both architecture performance prediction accuracy and search efficiency.
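To make the core idea concrete, the following is a minimal, hypothetical sketch of an attention-based predictor over a path encoding: each input-to-output path of a cell is one-hot encoded, embedded, and scored by an attention head whose softmax weights indicate path importance. All weights, the toy path vocabulary, and the class name `AttentionPredictor` are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def path_encode(paths, vocab):
    """One-hot encode each input->output path of a cell DAG.
    `paths` and `vocab` use a toy, hypothetical path vocabulary."""
    X = np.zeros((len(paths), len(vocab)))
    for i, p in enumerate(paths):
        X[i, vocab.index(p)] = 1.0
    return X

class AttentionPredictor:
    """Toy attention-based performance predictor (random, untrained weights)."""
    def __init__(self, vocab_size, d=8):
        self.W_emb = rng.normal(size=(vocab_size, d))  # path embedding table
        self.w_att = rng.normal(size=d)                # attention scorer
        self.w_out = rng.normal(size=d)                # regression head

    def forward(self, X):
        H = X @ self.W_emb                 # (num_paths, d) path embeddings
        scores = H @ self.w_att            # one attention score per path
        att = np.exp(scores - scores.max())
        att = att / att.sum()              # softmax: per-path importance
        z = att @ H                        # attention-weighted architecture summary
        return float(z @ self.w_out), att  # predicted-accuracy proxy, path weights

vocab = ["conv3x3", "conv1x1->conv3x3", "maxpool->conv1x1"]
X = path_encode(["conv3x3", "maxpool->conv1x1"], vocab)
pred, att = AttentionPredictor(len(vocab)).forward(X)
```

In an AE-NAS-style loop, the `att` weights could serve double duty: besides producing the prediction, they rank paths by importance, so the evolutionary search can bias mutations toward architectures containing high-attention paths rather than sampling blindly.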