The segmentation of symbolic music into phrases is crucial for music information retrieval and structural analysis. However, existing BiLSTM-CRF methods rely mainly on local semantics and struggle to capture long-range dependencies, which leads to inaccurate recognition of phrase boundaries that span measures or themes. Traditional Transformer models use static embeddings, limiting their adaptability to different musical styles, structures, and melodic evolutions. Moreover, multi-head self-attention is weak at local context modeling, causing the loss of short-term information (e.g., pitch variation, melodic integrity, and rhythmic stability) and resulting in over-segmentation or erroneous merging. To address these issues, we propose a segmentation method that integrates local context enhancement with global structure awareness. It overcomes the limitations of traditional models in long-range dependency modeling, improves phrase boundary recognition, and adapts to diverse musical styles and melodies. Specifically, dynamic note embeddings enhance contextual awareness across segments, while an improved attention mechanism strengthens both global semantic and local context modeling. Together, these strategies yield reasonable phrase boundaries and prevent unnecessary segmentation or merging. Experimental results show that our method outperforms state-of-the-art methods for symbolic music phrase segmentation, producing phrase boundaries better aligned with musical structures.