In existing studies of distributed stochastic optimization, the gradient noise is usually assumed to have bounded variance. However, recent research shows that heavy-tailed noise, whose variance may be unbounded, better reflects practical scenarios in many tasks. Under heavy-tailed noise, traditional optimization methods such as stochastic gradient descent may perform poorly or even diverge. It is therefore important to develop distributed stochastic optimization algorithms that remain effective under heavy-tailed noise. However, most existing distributed algorithms for heavy-tailed noise are designed for convex and smooth problems, which limits their applicability. This paper proposes a clipping-based distributed stochastic algorithm under heavy-tailed noise that is suitable for non-smooth, weakly convex problems. The convergence of the proposed algorithm is proven, and conditions on its parameters are given. A numerical experiment demonstrates the effectiveness of the proposed algorithm.
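To make the setting concrete, the following is a minimal, generic sketch (not the paper's algorithm) of consensus-based distributed stochastic subgradient descent with per-agent gradient clipping, applied to a non-smooth convex toy problem whose stochastic subgradients are corrupted by Cauchy (heavy-tailed, infinite-variance) noise. The ring topology, mixing matrix, step-size schedule, and clipping threshold below are illustrative assumptions only.

```python
# Generic sketch: distributed stochastic subgradient descent with clipping
# under heavy-tailed noise. All problem data and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 10

# Doubly stochastic mixing matrix for a ring graph (assumed topology).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

# Agent i holds a private target a_i; its local loss f_i(x) = ||x - a_i||_1
# is non-smooth and convex, and the global objective is their average.
targets = rng.normal(size=(n_agents, dim))

def local_subgradient(x, a):
    """Subgradient of ||x - a||_1 corrupted by heavy-tailed (Cauchy) noise."""
    return np.sign(x - a) + rng.standard_cauchy(dim)

def clip(g, tau):
    """Rescale g so its Euclidean norm does not exceed the threshold tau."""
    norm = np.linalg.norm(g)
    return g if norm <= tau else (tau / norm) * g

x = rng.normal(size=(n_agents, dim))          # local iterates, one row per agent
for t in range(1, 2001):
    step = 0.5 / np.sqrt(t)                   # diminishing step size (assumed schedule)
    tau = 10.0 * np.sqrt(t)                   # growing clipping threshold (assumed schedule)
    grads = np.stack([clip(local_subgradient(x[i], targets[i]), tau)
                      for i in range(n_agents)])
    x = W @ x - step * grads                  # consensus averaging + clipped local step

# The minimizer of the average l1 loss is the coordinate-wise median of the targets.
print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))
print("distance to global minimizer:",
      np.linalg.norm(x.mean(axis=0) - np.median(targets, axis=0)))
```

Without the `clip` step, a single Cauchy noise draw can produce an arbitrarily large update and stall or destabilize the iterates; clipping bounds each update while the growing threshold keeps the induced bias shrinking over time.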