An open-source tool for experimenting with noise-based perturbation schemes
The general approach to Statistical Disclosure Control (SDC) based on random perturbation of published data can be implemented in various ways. Such methods trade off utility against privacy risk, but there is as yet no consensus in the SDC community on how risk and utility should be defined and quantified. The focus of this work is on utility. We present an open-source tool for experimentally comparing the impact of different perturbation methods on published data. In this initial contribution we outline the general workflow of the proposed tool, motivate the main design choices, and indicate directions for future extensions.
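To make the comparison concrete, the following minimal sketch contrasts two simple noise-based perturbation schemes applied to a small frequency table and summarizes their impact with a single distortion measure. The function names, noise parameters, and the choice of mean absolute deviation as a utility proxy are illustrative assumptions for this sketch only; they are not the API or the utility measures of the tool presented here.

```python
# Illustrative sketch only: names and parameters below are hypothetical and
# do not reflect the actual interface of the tool described in this paper.
import numpy as np

rng = np.random.default_rng(seed=42)

def perturb_laplace(counts: np.ndarray, scale: float = 2.0) -> np.ndarray:
    """Add Laplace-distributed noise to each cell, then round and clip at zero."""
    noisy = counts + rng.laplace(loc=0.0, scale=scale, size=counts.shape)
    return np.clip(np.round(noisy), 0, None)

def perturb_random_rounding(counts: np.ndarray, base: int = 5) -> np.ndarray:
    """Randomly round each cell to a neighbouring multiple of `base`."""
    lower = (counts // base) * base
    go_up = rng.random(counts.shape) < (counts - lower) / base
    return lower + base * go_up

def mean_absolute_deviation(original: np.ndarray, perturbed: np.ndarray) -> float:
    """A simple utility proxy: average per-cell distortion of the published table."""
    return float(np.mean(np.abs(original - perturbed)))

# A small frequency table of published counts.
table = np.array([[12, 7, 0], [3, 25, 9]], dtype=float)

for name, method in [("laplace", perturb_laplace), ("rounding", perturb_random_rounding)]:
    distortion = mean_absolute_deviation(table, method(table))
    print(f"{name:10s} mean absolute deviation: {distortion:.2f}")
```

In this spirit, the tool is meant to let users plug in alternative perturbation methods and utility measures and compare their effects on the same published tables side by side.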