Privacy-Preserving Federated Learning with Adaptive Noise Scaling and Enhanced CNN Models (Published)
Federated learning (FL) enables collaborative training across distributed clients without centralizing raw data, making it an attractive approach for privacy-sensitive applications. However, the model updates shared in FL can still leak information, leaving systems vulnerable to inference attacks. Differential privacy (DP) provides formal guarantees but often degrades performance, especially in non-independent and identically distributed (non-IID) settings. This work proposes an adaptive noise scaling mechanism that integrates DP into FL more effectively: client-level noise is adjusted dynamically based on local loss variance, balancing privacy preservation and model utility across heterogeneous clients. In addition, an enhanced Convolutional Neural Network (CNN) architecture with Group Normalization and residual connections is employed to stabilize training and improve generalization under noisy updates. Experiments on the MNIST dataset with 50 clients show that the adaptive federated DP model achieves 96.16% accuracy at a noise multiplier of 1.0 under the corresponding privacy budget. This performance surpasses the centralized DP baseline (94.15%) while approaching the non-private FL baseline (99.57%). Overall, the results highlight adaptive differential privacy as a practical and scalable approach for privacy-preserving federated learning, with strong potential in domains such as healthcare, finance, and mobile edge computing.
Keywords: adaptive noise scaling, convolutional neural networks, differential privacy, federated learning, privacy-utility trade-off
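To make the abstract's adaptive noise scaling idea concrete, a minimal Python sketch follows. It assumes an inverse relationship between a client's local loss variance and its noise multiplier, plus a DP-SGD-style clip-and-add-Gaussian-noise step; the function names, the scaling rule, and the clipping bounds are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def adaptive_noise_multiplier(local_losses, base_multiplier=1.0,
                              lower=0.5, upper=1.5):
    """Scale a client's noise multiplier by the variance of its local losses.

    Illustrative rule (assumption): clients with highly variable losses
    (often non-IID, harder local data) receive a smaller multiplier so their
    updates remain informative, while stable clients absorb relatively more
    noise. Clipping keeps every client's multiplier in a predictable range
    for privacy accounting.
    """
    variance = float(np.var(local_losses))
    scale = 1.0 / (1.0 + variance)            # higher variance -> less noise
    return float(np.clip(base_multiplier * scale, lower, upper))

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip an update to clip_norm and add Gaussian noise (DP-SGD style)."""
    rng = rng or np.random.default_rng()
    clipped = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# One client's round: its local losses drive its personal noise multiplier.
losses = [0.92, 0.81, 0.65, 0.70, 0.58]
sigma = adaptive_noise_multiplier(losses, base_multiplier=1.0)
noisy_update = privatize_update(np.random.default_rng(0).standard_normal(10),
                                noise_multiplier=sigma)
```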
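The enhanced CNN described in the abstract can likewise be sketched in PyTorch. The exact depth, channel widths, and group counts below are assumptions; the sketch only shows how Group Normalization replaces batch statistics and how residual (skip) connections are wired, which is what helps training stay stable when updates are noisy and client batches are small.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv -> GroupNorm -> ReLU -> Conv -> GroupNorm with a skip connection."""
    def __init__(self, channels, groups=8):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.gn1 = nn.GroupNorm(groups, channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.gn2 = nn.GroupNorm(groups, channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.gn1(self.conv1(x)))
        out = self.gn2(self.conv2(out))
        return self.relu(out + x)              # residual (skip) connection

class EnhancedCNN(nn.Module):
    """Small MNIST classifier with GroupNorm and residual blocks (sketch)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1),
                                  nn.GroupNorm(8, 32), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(ResidualBlock(32), nn.MaxPool2d(2),
                                    ResidualBlock(32), nn.MaxPool2d(2))
        self.head = nn.Linear(32 * 7 * 7, num_classes)   # 28x28 -> 7x7

    def forward(self, x):
        x = self.blocks(self.stem(x))
        return self.head(torch.flatten(x, 1))

logits = EnhancedCNN()(torch.randn(4, 1, 28, 28))        # shape [4, 10]
```

Group Normalization is the natural choice here because it does not rely on batch statistics, which are unreliable under small, noisy per-client batches and complicate per-example accounting in DP training.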
Societal Impact of Big Data and Distributed Computing: Addressing Bias and Enhancing Privacy (Published)
This article examines the societal implications of big data and distributed computing technologies, with particular focus on algorithmic bias mitigation and privacy protection. As these technologies transform decision-making across healthcare, finance, and criminal justice, they introduce complex ethical considerations that require thoughtful responses. The paper explores how biases in training data perpetuate social inequities and create disparate impacts for vulnerable populations, and it analyzes the mathematical constraints that make it impossible to satisfy multiple fairness criteria simultaneously. It also investigates how distributed computing architectures enhance privacy through differential privacy, federated learning, and blockchain-based consent management, enabling organizations to derive insights while maintaining privacy guarantees and regulatory compliance. The research reveals that addressing bias requires comprehensive approaches spanning the entire development lifecycle, from data curation to continuous monitoring. Similarly, privacy protection demands more than technical solutions alone, requiring governance frameworks that navigate the tensions between competing privacy principles. Through an examination of implementation challenges and governance models, the article provides a balanced assessment of responsible deployment strategies that maximize benefits while minimizing harms, emphasizing multi-stakeholder governance, transparent documentation, and contextual regulation as essential components of ethical technological advancement.
Keywords: algorithmic bias, differential privacy, ethical governance, federated learning, privacy-preserving computation