Preprint

Privacy-Preserving Semantic Segmentation from Ultra-Low-Resolution RGB Inputs

X. Huang, S. Pan, O. Zatsarynna, J. Gall, M. Bennewitz

RGB-based semantic segmentation has become a mainstream approach for visual perception and is widely applied in a variety of downstream tasks. However, existing methods typically rely on high-resolution RGB inputs, which may expose sensitive visual content in privacy-critical environments. Ultra-low-resolution RGB sensing suppresses sensitive information directly during image acquisition, making it an attractive privacy-preserving alternative. Nevertheless, recovering semantic segmentation from ultra-low-resolution RGB inputs remains highly challenging due to severe visual degradation. In this work, we introduce a novel fully joint-learning framework that mitigates the optimization conflicts exacerbated by visual degradation in ultra-low-resolution semantic segmentation. Experiments demonstrate that our method outperforms representative baselines in segmentation performance and that our ultra-low-resolution RGB input achieves a favorable trade-off between privacy preservation and segmentation quality. We deploy our privacy-preserving semantic segmentation method in a real-world robotic object-goal navigation task, demonstrating successful downstream task execution even under severe visual degradation.
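The abstract's core idea, suppressing sensitive content at acquisition time by capturing only ultra-low-resolution frames, can be illustrated with a minimal sketch. This is not the authors' pipeline; the 16×16 target resolution and the average-pooling downsampler are illustrative assumptions only.

```python
import numpy as np

def ultra_low_res(img: np.ndarray, out_size: int = 16) -> np.ndarray:
    """Average-pool an HxWxC image down to out_size x out_size.

    Emulates ultra-low-resolution acquisition: fine spatial detail
    (faces, text, screens) is destroyed before any image leaves the
    sensor, which is what makes the input privacy-preserving.
    NOTE: the 16x16 resolution is an assumed example, not the
    paper's setting.
    """
    h, w, c = img.shape
    fh, fw = h // out_size, w // out_size
    # Crop to a multiple of the pooling factor, then block-average.
    img = img[: fh * out_size, : fw * out_size]
    return img.reshape(out_size, fh, out_size, fw, c).mean(axis=(1, 3))

# Example: a 256x256 RGB frame reduced to a 16x16 privacy-preserving input.
rgb = np.random.rand(256, 256, 3)
lr = ultra_low_res(rgb)
print(lr.shape)  # (16, 16, 3)
```

A downstream segmentation network would then be trained directly on such degraded inputs, which is where the optimization difficulties the abstract refers to arise.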

arXiv preprint, 2026
