How do hardware disparities affect federated learning systems?
Network lag and mixed GPU generations across participants create stragglers: slow nodes either miss aggregation deadlines or contribute fewer update rounds. As a result, updates from fast, well-provisioned nodes dominate the global model, and the data cohorts behind slower hardware are systematically underrepresented. The degradation is often silent because aggregate metrics stay healthy while accuracy on those cohorts drops, and the underrepresented cohorts may be exactly the ones the system most needs to learn from.
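A minimal sketch of this bias, under assumed toy conditions: a FedAvg-style server averages updates each round, fast clients always meet the deadline, and one slow client finishes only every third round. The client "targets" and the deadline schedule are hypothetical, chosen only to make the drift visible.

```python
def simulate(rounds=200, lr=0.1):
    """Toy FedAvg loop on a scalar model to show straggler bias."""
    fast_targets = [1.0, 1.0]  # clients on fast hardware: always meet the deadline
    slow_target = 4.0          # client on slow hardware: finishes 1 round in 3
    global_model = 0.0
    for r in range(rounds):
        # each client's update pulls the global model toward its local data
        updates = [t - global_model for t in fast_targets]
        if r % 3 == 0:         # slow client only occasionally makes the cut
            updates.append(slow_target - global_model)
        global_model += lr * sum(updates) / len(updates)
    return global_model

# unbiased mean of all client targets is (1 + 1 + 4) / 3 = 2.0,
# but the deadline-dropped slow client drags the converged model
# down toward the fast clients' mean instead
print(simulate())  # converges near 1.3, well below the unbiased 2.0
```

The fixed point sits near 4/3 rather than 2.0 because the slow client's pull is applied only a third as often, which is the same mechanism by which real deadline-based or over-selection aggregation underweights slow cohorts.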