Differential privacy

I need to work on a data privacy project, so I am spending some time reviewing notes I wrote a couple of years ago. I figure it would be a great exercise to reorganize them into a post series, and this will be the first post of the series. 🙂

Introduction to data privacy

From the point of view of data privacy, we can categorize the attributes of a data set into three groups: key attributes, sensitive attributes, and quasi-identifiers. Let's use medical data as an example. Key attributes include attributes such as the patient's name and social security number. A sensitive attribute can be the disease the patient has. Quasi-identifiers can be the zip code, salary range, and so on. To protect the patients' privacy, simply removing the key attributes from the database is not sufficient, because a malicious user may take advantage of prior knowledge to uniquely identify the owner of a record and/or infer sensitive information from the quasi-identifiers.

For example, consider the extremely small data set shown below, and assume it is a patient data set from a very small community. In the table, ID and name are the key attributes, age and gender are the quasi-identifiers, and whether the patient has AIDS is the sensitive attribute.

ID    Name    Age  Gender  AIDS?
0001  Janice  39   F       No
0002  Sam     44   M       No
0003  Paul    28   M       Yes
0004  Alice   25   F       Yes
0005  Jody    30   F       No

If we are going to disclose the data set for research purposes, of course we should remove the key attributes (ID and name) from the data set. But that is not sufficient. Knowing only that Alice is a woman in her 20s, an attacker can scan the last three columns, find that exactly one record matches, and conclude that she is very likely to have AIDS.
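To make this linkage attack concrete, here is a minimal Python sketch (my own toy code; the hard-coded rows simply mirror the table above with the key attributes already stripped):

```python
# The "published" data has the key attributes (ID, Name) removed, but still
# contains the quasi-identifiers (Age, Gender) and the sensitive attribute.
published = [
    {"Age": 39, "Gender": "F", "AIDS": "No"},
    {"Age": 44, "Gender": "M", "AIDS": "No"},
    {"Age": 28, "Gender": "M", "AIDS": "Yes"},
    {"Age": 25, "Gender": "F", "AIDS": "Yes"},
    {"Age": 30, "Gender": "F", "AIDS": "No"},
]

# Prior knowledge: Alice is a woman in her 20s.
matches = [r for r in published if r["Gender"] == "F" and 20 <= r["Age"] < 30]

print(matches)   # [{'Age': 25, 'Gender': 'F', 'AIDS': 'Yes'}]
# Exactly one record matches, so the attacker learns Alice's sensitive attribute.
```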

Therefore, to ensure the privacy of the records' owners, we cannot always return a truthful answer to every query. All privacy-ensuring techniques essentially end up introducing distortion into the query answers. Roughly speaking, we can categorize privacy-ensuring techniques into two classes: pre-query randomization and post-query randomization. For pre-query randomization, we essentially construct a "fake" data set to serve as a surrogate in place of the original one. This can be achieved by adding fake records, deleting true records, distorting records, suppressing some attributes, and generalizing some attributes of records. Many privacy methodologies, such as k-anonymization and l-diversity, follow this line of thought. For example, in terms of (entropy) l-diversity, the conditional entropy of the sensitive attribute W given any quasi-identifier value x, H(W|x), must be at least \log_2 (l) bits. Note that the conditional entropy H(W|x) essentially measures the randomness of the sensitive attribute W given x in terms of the number of bits, so the larger the l, the harder it is to predict W from x. The advantage of this class of methods is that users can utilize the data set directly and non-interactively. However, as users are going to use the surrogate data set directly for their studies, the surrogate data set should have all its statistics close to those of the original one. The latter, of course, is very hard to guarantee.
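As a quick illustration of the entropy flavor of l-diversity described above, here is a small sketch (my own toy helper, not taken from any library) that computes H(W|x) for each quasi-identifier group and checks whether every group carries at least \log_2(l) bits of uncertainty about the sensitive attribute:

```python
import math
from collections import Counter, defaultdict

def entropy_l_diverse(records, quasi_cols, sensitive_col, l):
    """Check entropy l-diversity: every quasi-identifier group x must have
    conditional entropy H(W|x) of at least log2(l) bits in its sensitive attribute W."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[c] for c in quasi_cols)].append(r[sensitive_col])

    threshold = math.log2(l)
    for values in groups.values():
        n = len(values)
        h = -sum((c / n) * math.log2(c / n) for c in Counter(values).values())
        if h < threshold:
            return False          # this group is not diverse enough
    return True

# Toy usage: a generalized table where Age has been coarsened into ranges.
table = [
    {"Age": "20-29", "Gender": "F", "Disease": "AIDS"},
    {"Age": "20-29", "Gender": "F", "Disease": "Flu"},
    {"Age": "30-39", "Gender": "F", "Disease": "None"},
    {"Age": "30-39", "Gender": "F", "Disease": "Flu"},
]
print(entropy_l_diverse(table, ["Age", "Gender"], "Disease", l=2))  # True
```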

If we restrict the queries that users can make, rather than distorting the data set itself, we may distort the query results afterward. This post-query randomization approach is suited to data sets that only allow interactive access. An example of this approach is the Laplace mechanism for differential privacy. Even though differential privacy can be achieved in the non-interactive setting as well, we will restrict ourselves to the interactive setting in this post. In other words, we assume that users cannot directly access the data set but can only make queries about it.
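As a small preview of the post-query approach (the details are deferred to the next post), the sketch below answers a count query with Laplace noise; the function name and the choice of a sensitivity-1 count query are my own assumptions for illustration:

```python
import numpy as np

def noisy_count(records, predicate, epsilon):
    """Laplace mechanism for a count query. A count has sensitivity 1
    (adding or deleting one record changes it by at most 1), so Laplace
    noise with scale 1/epsilon yields epsilon-differential privacy."""
    true_answer = sum(1 for r in records if predicate(r))
    return true_answer + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Interactive use: the analyst never touches the records directly,
# only submits queries and receives noisy answers.
records = [{"Age": a} for a in (39, 44, 28, 25, 30)]
print(noisy_count(records, lambda r: r["Age"] < 30, epsilon=0.5))
```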

Differential Privacy

Differential privacy is based on the premise that the query result of a "private" data set should not change drastically with the addition or deletion of a single record. Otherwise, a malicious user may have a chance to infer the identity or sensitive information of a private record. Note that differential privacy is a rather strong condition, since it makes no assumption about the methods a malicious user may employ to extract that information. Differential privacy does not try to limit what records can or cannot exist in a data set; in fact, the definition is blind to (independent of) the actual data inside the data set. To ensure that the statistics of query results do not change drastically when a record is added or deleted, the query outcome is typically distorted by the addition of noise. As a result, the query outcome K(\cdot), even for a fixed data set D, should be treated as a random variable.
Therefore, if K(\cdot) is private, for any two data sets D_1 and D_2 where one is equal to the other plus one additional record, we want

    \[ Pr(K(D_1) \in S) \approx Pr(K(D_2) \in S) \]

for every subset S of the range of K(\cdot). The above requirement can be characterized more precisely with the following definition.

Definition: \epsilon-Differential Privacy

The query K(\cdot) is \epsilon-differentially private if and only if for any subset S of the range of K(\cdot),

(1)   \begin{align*} Pr(K(D_1) \in S) \le e^\epsilon Pr(K(D_2) \in S), \end{align*}

where D_1 and D_2 are any two data sets that differ in at most one record, with one data set being a subset of the other.

Since (1) has to hold for all "adjacent" data sets D_1 and D_2 (including with the roles of D_1 and D_2 swapped), applying (1) in both directions to the same S gives Pr(K(D_1) \in S) \le e^{2\epsilon} Pr(K(D_1) \in S); choosing an S with Pr(K(D_1) \in S) > 0 (for example, the whole range) shows that (1) can only be satisfied when e^\epsilon \ge 1, that is, \epsilon \ge 0.
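To see the definition in action, the short check below assumes the Laplace-noised count query from the earlier sketch (noise scale 1/\epsilon, adjacent data sets whose true counts differ by one) and numerically confirms that the output densities never differ by more than a factor of e^\epsilon in either direction:

```python
import numpy as np

epsilon = 0.5
b = 1.0 / epsilon                 # Laplace noise scale for a sensitivity-1 count query

def laplace_pdf(x, mu):
    """Density of a Laplace distribution centered at mu with scale b."""
    return np.exp(-np.abs(x - mu) / b) / (2 * b)

# K(D_1) and K(D_2): noisy answers when the true counts are 10 and 11.
x = np.linspace(-40.0, 60.0, 200001)
p = laplace_pdf(x, 10.0)
q = laplace_pdf(x, 11.0)

# The density ratio is bounded by e^epsilon pointwise, hence (1) holds for every set S.
print(np.max(p / q) <= np.exp(epsilon) + 1e-9)   # True
print(np.max(q / p) <= np.exp(epsilon) + 1e-9)   # True
```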

Connection of Differential Privacy with Information Theory

A necessary condition for differential privacy is that the K-L divergence KL(K(D_1) || K(D_2)) is at most \epsilon for any "adjacent" D_1 and D_2. Since Pr(K(D_1) \in \{r\}) \le \exp(\epsilon) Pr(K(D_2) \in \{r\}) for any singleton set \{r\}, we have

(2)   \begin{align*} KL(K(D_1) || K(D_2)) &= \sum_{r} Pr(K(D_1)=r) \log \frac{Pr(K(D_1)=r)}{Pr(K(D_2)=r)} \\ & \le \sum_r Pr(K(D_1) = r) \log e^{\epsilon} = \epsilon. \end{align*}

Note that the K-L divergence KL(K(D_1) || K(D_2)) essentially measures the difference between the distributions of K(D_1) and K(D_2). Thus, the smaller the \epsilon, the harder it is to distinguish D_1 from D_2 through the query K(\cdot), and so the more difficult it is to identify the individual record in which D_1 and D_2 differ.
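As a numerical sanity check of the bound in (2), the sketch below again assumes a Laplace-noised count query on two adjacent data sets and verifies that the K-L divergence between the two output distributions indeed stays below \epsilon:

```python
import numpy as np

epsilon = 0.5
b = 1.0 / epsilon                 # Laplace noise scale for a sensitivity-1 count query

def laplace_pdf(x, mu):
    """Density of a Laplace distribution centered at mu with scale b."""
    return np.exp(-np.abs(x - mu) / b) / (2 * b)

# Adjacent data sets whose true counts differ by exactly one record.
x = np.linspace(-40.0, 60.0, 200001)
p = laplace_pdf(x, 10.0)          # distribution of K(D_1)
q = laplace_pdf(x, 11.0)          # distribution of K(D_2)

dx = x[1] - x[0]
kl = np.sum(p * np.log(p / q)) * dx   # numerical KL( K(D_1) || K(D_2) )

print(kl)                # roughly eps + exp(-eps) - 1, about 0.11 here
print(kl <= epsilon)     # True
```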

What next?

In the next post, we will talk about two mechanisms for achieving differential privacy, namely the Laplace mechanism and the exponential mechanism.
