Performing machine learning (ML) with analog instead of digital signals offers advantages in speed and energy efficiency, but component and measurement imperfections can make nonlinear analog networks difficult to train. As a result, most schemes rely on a precise digital model, used either to train the system entirely in simulation or in tandem with experiments. Here we take a different perspective: working in the analog domain, we characterize the consequences of the inherent imperfection of a physical learning system and, ultimately, overcome them. We train an analog network of self-adjusting resistors -- a contrastive local learning network (CLLN) -- on multiple tasks, and observe limit cycles and characteristic scaling behaviors absent in `perfect' systems. We develop an analytical model that captures these phenomena by incorporating an uncontrolled but deterministic bias into the learning process. Our results suggest that imperfections limit precision and erase memory of previous tasks by continuously modifying the underlying representation of all learned tasks, akin to representational drift in the brain. Finally, we introduce and demonstrate a system-agnostic training method that greatly suppresses these effects. Our work points to a new, scalable approach to analog learning, one that eschews precise modeling and instead thrives in the mess of real systems.