On the information leakage quantification of camera fingerprint estimates

Camera fingerprints based on sensor PhotoResponse Non-Uniformity (PRNU) have gained broad popularity in forensic applications due to their ability to univocally identify the camera that captured a certain image. The fingerprint of a given sensor is extracted through some estimation method that requires a few images known to be taken with such sensor. In this paper, we show that the fingerprints extracted in this way leak a considerable amount of information from those images used in the estimation, thus constituting a potential threat to privacy. We propose to quantify the leakage via two measures: one based on the Mutual Information, and another based on the output of a membership inference test. Experiments with practical fingerprint estimators on a real-world image dataset confirm the validity of our measures and highlight the seriousness of the leakage and the importance of implementing techniques to mitigate it. Some of these techniques are presented and briefly discussed.


Introduction
The PhotoResponse Non-Uniformity (PRNU) is a multiplicative spatial pattern that is present in every picture taken with a CCD/CMOS imaging device and acts as a unique fingerprint for the sensor itself [1]. The PRNU is due to manufacturing imperfections that cause sensor elements to have minute area differences and thus capture different amounts of energy even under a perfectly uniform light field. The uniqueness of the PRNU has already led to a number of applications in multimedia forensics, both to solve camera identification/attribution problems using images [2] or stabilized videos [3], and to detect inconsistencies that reflect intentional manipulations [4].
Since the PRNU is a very weak signal, its extraction requires the availability of a number (often dozens) of images known to be taken with the camera under analysis. Although several extraction algorithms (both model- and data-driven) exist [1,5], all of them perform some sort of averaging across the residuals obtained by denoising the available images. The most prevalent method [1] performs a further normalization to take into account the multiplicative nature of the PRNU.
*Correspondence: fperez@gts.uvigo.es. 2 atlanTTic, Universidade de Vigo, Signal Processing in Communications Group, E-36310 Vigo, Spain. Full list of author information is available at the end of the article.
Unfortunately, both the ease with which the PRNU can be extracted and the existence of relatively good theoretical models that explain its contribution lead to attacks that are similar in intention to digital forgery attacks in cryptography: the so-called PRNU copy attack plants the fingerprint from a desired camera in an image taken by a different device with the purpose of incriminating someone or merely undermining the credibility of PRNU-based forensics [6].
While the PRNU copy attack can be considered a threat to trust, in this paper we identify risks to privacy by showing that there is substantial information leakage into the PRNU from the images used for its estimation. The existence of this leakage has already been indirectly exploited in the so-called triangle test [7], a countermeasure against the copy attack that detects the forgery by relying on the high correlation between the PRNU estimate and any of the image residuals used in the estimation. However, to the best of our knowledge, our work, together with its companion paper [8], constitutes the first attempt at quantifying such leakage by proposing two measures: one based on the mutual information, and another based on the success rate of a membership inference test.
Fernández-Menduiña and Pérez-González EURASIP Journal on Information Security (2021) 2021:6
To this end, we provide a detailed derivation of a lower bound for the Mutual Information between a given image and the PRNU, as well as two membership inference tests based on the Neyman-Pearson criterion and the normalized correlation coefficient, respectively. Although we do not explicitly try to recover traces of the images used to extract the PRNU, we show that the leakage is large enough to consider the possibility of recovery a serious threat. In this sense, we remark that images involved in criminal investigations are often of extremely sensitive nature, like in cases involving child abuse and other sexually-oriented crimes, so the mere existence of this leakage calls for the implementation of effective protection mechanisms of the camera fingerprints that ensure privacy is preserved at all times during investigations.
While in an ideal scenario the PRNU of a device can be extracted from flat-field images (e.g., of a cloudy sky or a white wall), in practice this is only feasible when there is access to the camera under investigation. In this scenario, where the estimated PRNU leaks very little information (as trivially shown by our theory), different law enforcement agencies (LEAs) may share the estimated fingerprints for cross-searching in databases with no privacy risks. However, there is a growing number of investigations where no access to the device is feasible and the PRNU must be estimated from images "in the wild". Cases include images retrieved from hard drives, social networks, and criminal networks on the Dark Web. As an example, we discuss the following two cases.
Case 1: During the course of an investigation, police from country A (LEA A) have seized a hard drive containing images from unknown sources involving child abuse. As metadata has been wiped off, LEA A uses some PRNU clustering software to find that the images come from three different cameras, for which the corresponding PRNUs can be extracted. After analyzing the contents of one of the clusters, it is found that some of the pictures taken by camera #1 have been shot in country B. LEA A would like to verify whether the police of country B (LEA B) have other images from camera #1 or even the device itself. Exchanging the highly sensitive pictures with LEA B is dismissed for privacy reasons; alternatively, LEA A sends the estimated PRNU in the belief that it entails no privacy infringement. This is rooted in the fact that law enforcement agencies are accustomed to sharing hashes in order to search for cross-matches in databases with images of child exploitation. However, as our work shows, contrary to robust hashes, PRNUs may leak considerable amounts of information that should be treated as private, as it may identify the victims.
Case 2: Members of a gang have been exchanging pictures over the Dark Web. Some of them, involving the gang leader (and third persons), have been taken by the same camera (itself unavailable), as confirmed by the PRNU. The police would be interested in crawling social networks in search of other pictures captured by the same device. Due to their very limited computational resources, and convinced that nothing can be inferred from an estimated PRNU, the police outsource the search to a web crawling company. However, the leakage from the PRNU allows the company to infer information about people, places and objects contained in the images acquired by the police. In particular, from the PRNU it is possible to read a car license plate.
As our paper concludes, sharing of PRNU fingerprints should be done only after carefully assessing the risks and considering all the possible remedies, some of which are evaluated and discussed in this paper.
As already pointed out and formalized in [8], existing techniques in the literature can mitigate the contextual residues of images on the PRNU. Examples are: 1) compression schemes and binarization [9][10][11][12], which were originally conceived to reduce the computational burden in the estimation process and limit the required storage of the resulting fingerprint; 2) the application of linear filters, such as high-pass filters (both fixed [13][14][15] and trainable [16]) and convolutional neural networks for feature extraction [17], which were found to be useful to force neural networks to work with noise residuals [5] in both forgery detection [13,18] and camera attribution [19]; and 3) the use of more powerful denoising schemes than the wavelet denoiser. In the present paper, we take a step further in this direction, analyzing empirically the effects of JPEG compression and the use of more powerful denoising schemes, such as BM3D [20]. Despite the relative effectiveness of those solutions, we believe that working with encrypted data at all times [21], although not yet entirely practical due to the large amount of computation needed, is the most promising avenue in terms of privacy preservation.
Our main contributions in this paper can be summarized as follows.
• We derive a model for the fingerprint estimator in terms of the true PRNU and the estimation noise. This model becomes crucial in our two approaches to quantifying the leakage, and is also assumed (but not derived) in [8].
• We take a step towards modeling and bounding the information leakage in camera fingerprints such as the PRNU, based on a waterfilling information-theoretic approach.
• We propose a membership inference test, which makes it possible to identify the images in a dataset that were used to estimate a given PRNU.
• We propose and empirically test some methods to reduce the leakage in practice.
• We confirm that information leakage is a serious privacy threat that should be properly assessed before sharing camera fingerprints.
• We show that the discovered leakage could potentially be used to detect PRNU copy attacks without resorting to the original images (as is done in the triangle test), since the extracted PRNU will have an underlying structure that will not match that of the host image.
The rest of the paper is organized as follows: in Section 2 we review the basic principles of PRNU extraction; in Section 3 we propose two metrics to quantify the leakage; Section 4 hints at the potential of our discovery to counter injection-based attacks; Section 5 briefly discusses several approaches to mitigate the leakage; Section 6 contains the results of experiments carried on images taken with popular cameras, and, finally, Section 7 presents our conclusions.

Notation
Matrices, written in boldface font, represent luminance images. All are assumed to be of size M × N. Given two matrices X, Y, their inner product is defined as ⟨X, Y⟩ ≐ tr(X^T Y), where tr(·) denotes the trace and (·)^T the transpose. The element-wise (Hadamard) product of two matrices is denoted by ∘. The all-one matrix is denoted by 1. Random variables are written in capital letters, e.g., X, while realizations are in lowercase, e.g., x. Given two random variables X, Y, X → Y means that X converges to Y in probability.

Preliminaries
In this paper, we will use the prevalent simplified sensor output model presented in [1] in matrix form:

Y = (1 + K) ∘ X + N, (1)

where Y is the output of the sensor, K is the multiplicative PRNU term, X is the noise-free image, and N collects all the non-multiplicative noise sources. This PRNU term can be estimated from a set of L images {Y^(i)}_{i=1}^{L} coming from the same sensor, as shown in Fig. 1 (no deleaking strategy is used in the conventional estimator). Firstly, the noise-free image X^(i) is estimated using a denoising filter,1 and this estimate X̂^(i) is used to obtain a residual W^(i) ≐ Y^(i) − X̂^(i). Under the assumption of N^(i) being composed of i.i.d. samples of a Gaussian process, the Maximum Likelihood (ML) estimator of K reduces to:

K̂ = ( Σ_{i=1}^{L} W^(i) ∘ X̂^(i) ) / R, (2)

where R ≐ Σ_{i=1}^{L} X̂^(i) ∘ X̂^(i), and the division is point-wise. Often, the result of this estimation contains non-unique traces left by color interpolation, compression or other systematic errors, which are removed by post-processing (e.g., zero-meaning and Wiener filtering in the full-DFT domain). Ideally, this PRNU will be a zero-mean white Gaussian process with variance σ_k², independent of the location within the matrix.
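As an illustration, the averaging estimator above can be sketched in a few lines of numpy. This is only a sketch under simplifying assumptions: a 3 × 3 box blur stands in for the wavelet/BM3D denoisers used in practice, and all function names are ours.

```python
import numpy as np

def box_denoise(y):
    """Crude content estimate: 3x3 box blur (a stand-in for the wavelet denoiser)."""
    acc = sum(np.roll(np.roll(y, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return acc / 9.0

def estimate_prnu(images, denoise=box_denoise):
    """Averaging PRNU estimator: sum_i W_i o X_hat_i divided point-wise by R."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for y in images:
        x_hat = denoise(y.astype(np.float64))  # estimated noise-free image
        w = y - x_hat                          # noise residual W_i = Y_i - X_hat_i
        num += w * x_hat
        den += x_hat * x_hat
    k_hat = num / np.maximum(den, 1e-12)       # point-wise division by R
    return k_hat - k_hat.mean()                # zero-meaning post-processing
```

With synthetic images generated under the multiplicative model, the returned estimate correlates strongly with the true fingerprint once a few dozen images are averaged.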
Unfortunately, the denoising process will not perform perfectly. In fact, the denoised image can be more accurately modeled as:

X̂^(i) = X^(i) − Δ^(i) + (1 − E^(i)) ∘ X^(i) ∘ K, (3)

where Δ^(i) takes into account the traces of the noise-free image that are left out by the denoising and 1 − E^(i) models the fraction of the PRNU-dependent component that passes through the denoiser. Then, when subtracted from Y^(i) and applied to the estimator, we have:

K̂ = ( Σ_{i=1}^{L} (Δ^(i) + E^(i) ∘ X^(i) ∘ K + N^(i)) ∘ X̂^(i) ) / R. (4)

Then, it is easy to show that (4) can be expressed as

K̂ = Ω ∘ K + N_k, (5)

where Ω depends on E^(i) and the used images, and takes into account the amount of PRNU removed in the denoising process, and N_k is estimation noise that depends on both Δ^(i) and X^(i), which in turn convey contextual information about the images. Experiments reported in [23] show that N_k can be well modeled by an independent Gaussian process with variance at the (l, j)th position denoted by γ²[l, j].

Figure 2 illustrates a rather extreme case of leakage in which the PRNU of a Xiaomi MI5S smartphone camera is estimated from 25 DNG (uncompressed) images: the one on the left panel plus 24 additional dark images. As becomes evident, there is a lot of information leaking from the first image into the estimated PRNU. Although by no means does this experiment describe a realistic case, it does expose that such alarming leaks may well occur in smaller areas of the image. A more down-to-earth example is shown in Fig. 3, where the PRNU has been estimated with L = 25 images taken with a Nikon D3200 camera (see description of the database in the experimental part), and it visibly contains traces (with semantic meaning) of four images shown in the upper part which were used in the estimation. The bottom panels represent log(1 + 1/γ²[l, j]), where the local variance γ²[l, j] of K̂ is estimated through a 9 × 9 window. The division by γ²[l, j] has the purpose of emphasizing the areas with low local variance, whereas the logarithm simply enhances the contrast for visualization purposes.
Notice that despite the use of the more sophisticated denoising algorithm BM3D [20] (bottom-right panel), as compared to the wavelet-based denoising [22] (bottom-left panel), the leakage is still very conspicuous.
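The emphasized local-variance maps used in these figures can be reproduced with a short sketch. The sliding-window variance estimator and padding choice below are assumptions of ours; any local-variance estimate would serve:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def leakage_emphasis_map(k_hat, win=9):
    """Map log(1 + 1/gamma^2[l,j]), with gamma^2[l,j] the local variance of the
    PRNU estimate over a win x win sliding window (reflect-padded borders)."""
    pad = win // 2
    kp = np.pad(k_hat, pad, mode='reflect')
    local_var = sliding_window_view(kp, (win, win)).var(axis=(-2, -1))
    return np.log1p(1.0 / (local_var + 1e-12))  # emphasizes low-variance areas
```

Regions where contextual content has suppressed the estimate's variance light up in the resulting map, which is exactly what makes the leaked structures visible.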
A more systematic approach to quantifying those leaks is presented in the next section.

Quantifying the leakage
In this section we discuss the two proposed measures to quantify the leakage into the PRNU estimate of the images used for the estimation.

Information-theoretic leakage
The first measure is based on the Mutual Information between the set of images {Y^(i)}_{i=1}^{L} used for the estimation and the resulting estimate K̂. Since N_k in (5) is a function of those images, we can resort to the data processing inequality to show that I({Y^(i)}_{i=1}^{L}; K̂) ≥ I(N_k; K̂). The right-hand side is considerably simpler to manage and produces a lower bound on the leakage.
The main difficulty for the calculation of I(N_k; K̂) is the lack of a complete statistical characterization for Ω ∘ K. It has been proven by Ihara [24] that given a Gaussian process X with covariance K_x and a noise process Z with covariance K_z, the mutual information between X and X + Z is minimized when Z is Gaussian with covariance K_z. Therefore, for a given covariance matrix of Ω ∘ K, assuming that such process is Gaussian-distributed with the same covariance will produce a lower bound on the mutual information. Now, since K is assumed to be white, its covariance matrix is σ_k² I_{MN×MN}. Hence, the covariance of Ω ∘ K will be an MN × MN diagonal matrix with elements ω²[l, j] σ_k². Then, the lower-bounding scenario corresponds to MN parallel channels, in which the 'desired' signal (i.e., N_k) is transmitted on the (l, j)th subchannel with power γ²[l, j] and there is an additive Gaussian 'disturbance' (corresponding to Ω ∘ K) with power ω²[l, j] σ_k². Unfortunately, determining ω²[l, j] σ_k² turns out to be a difficult problem because even for moderate L, the term N_k dominates Ω ∘ K in (5). One might think of using flat-field images for this purpose, as in this case the contribution of N_k would become negligible sooner as L increases. However, this path is not advisable because with flat-field images the contribution of Δ^(i) would be lost. Therefore, we must content ourselves with estimating the trace of the covariance matrix of Ω ∘ K, given by P ≐ σ_k² Σ_{l,j} ω²[l, j], and then use it to produce a further lower bound on the mutual information.
The value P can be seen as the total disturbance power budget that can be split among the different parallel channels in order to minimize the mutual information. Notice that this represents a worst case because in practice σ_k² ω²[l, j] will deviate at each position (l, j) from such power distribution and the actual leakage will be larger.
The mutual information in this case can be obtained through the use of Lagrange multipliers, which give the following lower bound in nats [25]:

I⁻ = (1/2) Σ_{l,j} log(1 + γ²[l, j]/p[l, j]), with p[l, j] ≐ (1/2) ( √(γ⁴[l, j] + 2γ²[l, j]/μ) − γ²[l, j] ), (6)

where μ is the solution to the equation

Σ_{l,j} p[l, j] = P. (7)

To estimate P, we propose to randomly split the set {Y^(i)}_{i=1}^{L} into two subsets and estimate K from each. Let K̂₁, K̂₂ be those estimates. Then, P can be estimated as P̂ = ⟨K̂₁, K̂₂⟩. A better estimate can be obtained by repeating the splitting of {Y^(i)}_{i=1}^{L} several times and averaging the resulting values of P̂.
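The split-and-correlate estimator of P can be sketched as follows. This is our own minimal implementation, assuming the per-image residuals and denoised images are available as arrays:

```python
import numpy as np

def estimate_disturbance_power(residual_terms, n_splits=10, seed=0):
    """Estimate P (the trace of the covariance of Omega o K) by splitting the
    image set in two, estimating the fingerprint from each half with the
    averaging estimator, and taking the inner product <K1, K2>.

    `residual_terms` is a list of per-image pairs (w_i, x_hat_i)."""
    rng = np.random.default_rng(seed)
    n = len(residual_terms)
    vals = []
    for _ in range(n_splits):
        perm = rng.permutation(n)
        halves = []
        for idx in (perm[: n // 2], perm[n // 2:]):
            num = sum(residual_terms[i][0] * residual_terms[i][1] for i in idx)
            den = sum(residual_terms[i][1] ** 2 for i in idx)
            halves.append(num / np.maximum(den, 1e-12))
        vals.append(float(np.sum(halves[0] * halves[1])))  # <K1, K2>
    return float(np.mean(vals))
```

Because the estimation noises of the two halves are (approximately) independent, their cross inner product retains only the common fingerprint term, whose expected energy is precisely P.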
In [8] we propose a procedure for the exact computation of the mutual information, based on injecting synthetic signals that serve as pilots for the estimation of . Unfortunately, the fact discussed above that N k dominates • K requires synthesizing a huge number of signals which make the procedure rather impractical. However, through experiments reported in [8] we were able to show that the lower bound provided here is tight for real-world images, in the sense that it is very close to its true value and, as we have seen, its computation much more affordable. Thus, even though we cannot claim that the lower bound presented here is always a fine approximation to the leakage, it is reasonable to employ it to draw conclusions, especially so when comparing scenarios in which only one subsystem or parameter is changed.
We remark here that the leakage that we have quantified through a lower bound corresponds to the complete set of images {Y^(i)}_{i=1}^{L} used for estimating K̂. This means that we are not quantifying the leakage of a specific image, say, Y^(j), j ∈ {1, · · · , L}. Such a problem, which is more difficult due to the remaining images acting as a sort of interference, will be the subject of future work.
From the mutual information formulas above, it is interesting to reason about the gain produced by increasing L, which is a possible mitigation strategy. Let us assume that for a certain L = L₀ the lower bound in (6) is I₀⁻ and is achieved when μ = μ₀ in (7). Now, suppose that we double L to 2L₀; we are interested in learning by how much the lower bound decreases. First, note that if γ₀²[l, j] denotes the power in the (l, j)th subchannel for L₀, then one would expect that when L is doubled, such power is approximately halved, i.e., γ²[l, j] = γ₀²[l, j]/2. This is due to the fact that γ²[l, j] is the variance of the estimation noise N_k, which is expected to go to zero as 1/L. Now, for small γ₀²[l, j], for all l, j, Eq. (7) is approximately solved as

μ₀ ≈ ( Σ_{l,j} γ₀[l, j] )² / (2P²),

and the lower bound in nats approximately becomes

I₀⁻ ≈ (1/2) Σ_{l,j} log(1 + √(2μ₀) γ₀[l, j]).

If we assume that now γ[l, j] = γ₀[l, j]/√2 for all l, j, it is immediate to prove that the approximate solution μ to (7) satisfies μ₀/2 ≤ μ ≤ 2μ₀, where the lower bound is achieved when P → ∞ and the upper bound when P = 0. Plugging the current γ[l, j] and μ into the approximation for the lower bound and taking into account that the logarithm is strictly increasing, we find that

I⁻ ≥ (1/2) Σ_{l,j} log(1 + √(2μ₀) γ₀[l, j]/2).

For any x > 0, from the monotonicity of the logarithm we can write log(1 + x/2) ≥ log(1 + x) − log(2), so the lower bound decreases by at most (MN/2) log(2) nats. When this change is written in bits per pixel, we arrive at a simple interpretation: whenever L is doubled, the decrease in the leakage is at most 0.5 bits per pixel. As we will confirm in the experimental part, in practice the reduction is more modest, and more so as L keeps increasing (see Fig. 4).
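Numerically, the bound in (6) can be evaluated by solving (7) for μ with a bisection search. The sketch below is our own implementation (not the paper's code); it also lets one check that halving all the subchannel powers, as when L is doubled, decreases the bound by at most 0.5 bits per pixel:

```python
import numpy as np

def leakage_lower_bound(gamma2, P):
    """Waterfilling lower bound: allocate disturbance power p[l,j] (totalling P)
    to minimize sum of 0.5*log(1 + gamma^2/p). Returns bits per pixel."""
    g2 = np.asarray(gamma2, dtype=np.float64).ravel()

    def total_power(mu):
        # sum of p[l,j] from the Lagrangian solution; decreasing in mu
        return 0.5 * (np.sqrt(g2 ** 2 + 2.0 * g2 / mu) - g2).sum()

    lo, hi = 1e-12, 1e12
    for _ in range(200):  # geometric bisection on the multiplier mu
        mid = np.sqrt(lo * hi)
        if total_power(mid) > P:
            lo = mid
        else:
            hi = mid
    mu = np.sqrt(lo * hi)
    p = 0.5 * (np.sqrt(g2 ** 2 + 2.0 * g2 / mu) - g2)
    nats = 0.5 * np.log1p(g2 / p).sum()
    return nats / np.log(2.0) / g2.size  # convert nats to bits per pixel
```

Running the function on a set of subchannel powers and then on the same powers halved reproduces the "at most 0.5 bpp per doubling of L" behavior discussed above.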

Membership inference
In the PRNU scenario a membership inference test [26] is a binary hypothesis test that, given a PRNU estimate, classifies a certain image as having been used or not in the estimation. This inference is possible due to the aforementioned leakage: the higher the success rate in the membership inference test, the larger the leakage. It is important to note that the number L of images used in the estimation becomes a key parameter, since as L increases the information provided by the other images will dilute the individual contributions. The potential recognition of the images used to estimate the PRNU allows any malicious attacker to obtain information about the input database, which may result in privacy risks in certain scenarios. As an example, knowing whether certain images were used to compute the PRNU may aid a convicted criminal in identifying the informant who handed them to law enforcement.
We derive two types of membership detectors: a Neyman-Pearson-based (NP) detector and a normalized-cross-correlation-based (NCC) detector. Even though the former is expected to perform better due to its statistical properties, along its derivation we will find that it requires information that is not readily available to a potential attacker. Therefore, assuming knowledge of such information leads to a 'genie-based' detector which is not practically realizable but is useful as it sets an upper bound on the achievable performance. In contrast, the NCC detector will behave (slightly) worse but is perfectly implementable.
Let Y^(r) be the image whose membership we want to test and which is known to contain the true PRNU K. Note that the available observations to implement the test are X̂^(r), W^(r) and K̂. Then, two hypotheses can be formulated: under H₀, Y^(r) was not used in the estimation of K̂; under H₁, Y^(r) was used, in which case K̂ contains the additive contribution Q ≐ W^(r) ∘ X̂^(r)/R. The matrix K̂ can be modeled as having independent zero-mean Gaussian elements with variances at position (l, j) denoted by λ²_{l,j} under the hypothesis H₀ and θ²_{l,j} under the hypothesis H₁. Let P ≐ K̂ − Q. Then, applying the Neyman-Pearson criterion [27], the following test is obtained:

J_NP ≐ Σ_{l,j} ( K̂²[l, j]/(2λ²_{l,j}) − P²[l, j]/(2θ²_{l,j}) + (1/2) log(λ²_{l,j}/θ²_{l,j}) ) ≷ ψ, (14)

where ψ is a threshold selected so that a certain probability of false alarm is attained. In order to implement the test above, the variances λ²_{l,j} and θ²_{l,j} are needed for all l, j. They can be computed as the respective local variances at each position of K̂ and P. Unfortunately, P is only available through Q, which in turn requires knowledge of R. Since the latter will in general be unknown to an attacker, the NP detector must be considered only of theoretical interest.
When L is large enough, it is reasonable to assume that θ²_{l,j} ≈ λ²_{l,j} for all l, j. In such case, the test in (14) simplifies to:

J_NP ≐ Σ_{l,j} ( K̂²[l, j] − P²[l, j] ) / (2λ²_{l,j}) ≷ ψ. (15)

Notice from (14) that when L → ∞, then P → K̂ (since Q vanishes) and θ²_{l,j} ≈ λ²_{l,j} for all l, j, since the information provided by an individual image becomes less significant. As a consequence, when L → ∞ the membership test is equivalent to guessing the outcome of a (fair) coin toss.2
Assuming J_NP is Gaussian-distributed under H₀ with mean μ_J and variance σ_J², which is reasonable by invoking the Central Limit Theorem, we obtain the following expression for the probability of false alarm in terms of the threshold ψ:

P_FA = Q( (ψ − μ_J)/σ_J ),

where Q(·) represents the Q-function, i.e., Q(x) = (1/√(2π)) ∫_x^∞ e^{−t²/2} dt, and Q⁻¹(·) its inverse function. Then, using the approximation for large L, we know that under H₀ the mean value is given by

μ_J = − Σ_{l,j} E[Q²[l, j]] / (2λ²_{l,j}),

while, assuming uncorrelation between all pixels, the variance can be approximated by:

σ_J² ≈ Σ_{l,j} ( E[Q²[l, j]]/λ²_{l,j} + E²[Q²[l, j]]/(2λ⁴_{l,j}) ).

As a realizable alternative to the NP detector, it is possible to resort to the NCC of K̂ and W^(r), which has already been employed in camera attribution scenarios [28]. This approach relies on the availability of sample estimates of the respective means (μ̂_k and μ̂_t) and variances (σ̂_k² and σ̂_t²) of K̂ and W^(r). The resulting detection statistic becomes

J_NCC = (1/(MN)) Σ_{l,j} (K̂[l, j] − μ̂_k)(W^(r)[l, j] − μ̂_t) / (σ̂_k σ̂_t).

(Table 1 caption: The lower bound oscillates for different camera models, ranging from 1.9167 bpp in the best case to 0.8013 bpp for L = 26, which showcases the fact that some camera models may leak more than twice as much information as others when the wavelet denoiser is used.)
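For illustration, the NCC statistic and the Gaussian threshold-setting step can be sketched as follows. The helper names are ours, and we assume the H₀ statistics are estimated empirically from scores of images known not to have been used in the estimation:

```python
import numpy as np
from statistics import NormalDist

def ncc_statistic(k_hat, w_r):
    """Normalized cross-correlation between the PRNU estimate K_hat and the
    residual W_r of the image under test; larger values support membership."""
    k = k_hat - k_hat.mean()
    w = w_r - w_r.mean()
    return float((k * w).sum() / (k_hat.size * k.std() * w.std()))

def threshold_for_pfa(null_scores, p_fa):
    """psi = mu_J + sigma_J * Q^{-1}(P_FA), with (mu_J, sigma_J) estimated from
    H0 scores. Q^{-1}(p) equals the standard Gaussian quantile at 1 - p."""
    mu, sigma = float(np.mean(null_scores)), float(np.std(null_scores))
    return mu + sigma * NormalDist().inv_cdf(1.0 - p_fa)
```

On synthetic data, residuals of member images score well above the threshold set for a 1% false-alarm rate, while non-member residuals cluster around zero.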

Potential in detecting PRNU-copy attacks
One well-known countermeasure against PRNU-copy attacks is the triangle test, which assumes the existence of a public set of images from which some have been used to extract the PRNU that is planted in the target image. The test looks for high correlations between the allegedly forged image and the images in the public set. An improved version, the pooled triangle test, looks for high joint cross-correlations between the forged image and some subset of the public set. The triangle test, and more so the pooled one, can be difficult to implement in practice because the camera owner may lose track of her set of public images. However, the existence of leakage in the case of natural images shown here might be useful for detecting the existence of a planted PRNU, independently of the availability of a public set. Indeed, in the residual computed from the forged image, there will be traces of the planted PRNU with an underlying structure that does not match that of the forged image.
For purely illustrative purposes, we have taken the same PRNU shown in Fig. 3 bottom-left and planted it in the image of Fig. 5a. Then, we have computed the residual, shown in Fig. 5b, which exhibits clear traces of the planted PRNU that obviously do not correspond to Fig. 5a. For instance, the vehicle from Fig. 3 top-right is still visible in the area of the residual corresponding to the sky. The problem remains when images are JPEG-compressed, because even though the traces of the PRNU may dissipate with compression, the leakage in the estimated PRNU is harder to eliminate (see Section 6). This is illustrated in Fig. 5c, where all intervening images (i.e., those used to extract the PRNU and the host image on which it is planted) are JPEG-compressed with QF=92.
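The planting experiment can be mimicked in a few lines. We use the standard multiplicative injection model with strength α, and a crude box-blur residual in place of the wavelet denoiser; both are assumptions of this sketch:

```python
import numpy as np

def plant_fingerprint(image, k_hat, alpha=1.0):
    """PRNU-copy attack sketch: plant an estimated fingerprint into an image
    taken by a different device (multiplicative model, strength alpha)."""
    return image * (1.0 + alpha * k_hat)

def box_residual(y):
    """Crude residual: image minus a 3x3 box-blur estimate of its content."""
    acc = sum(np.roll(np.roll(y, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return y - acc / 9.0
```

The residual of a forged image correlates strongly with the planted fingerprint, while the residual of the clean image does not, which is the cue a copy-attack detector could exploit.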
A more systematic approach to exploiting leakage towards PRNU-copy detection is out of the scope of this paper. In any case, the fact that traces of the copied PRNU will be more easily found in flat regions of the target image suggests that a deep neural network trained with residuals coming from both pristine and forged images would be a feasible detector.
Finally, we remark that leakage mitigation techniques, to be discussed in the following section, should be able to reduce the probability of success of such a detector.

Leakage mitigation
Given the privacy risks that PRNU leakage entails, it is worth considering potential mitigation strategies, some of which are discussed here. We refer the reader to [8] for complementary details. We classify countermeasures into three categories: prevention, 'deleaking', and privacy preservation.
Preventive methods aim at conditioning the estimation process so that the resulting PRNU leaks less information. This can be achieved, for instance, by increasing the number of images L whenever possible (see the discussion at the end of Section 3.1), maximizing the use of flat-field images, or improving the denoising algorithms, thus reducing Δ^(i) and, consequently, the leakage, as shown in (4). In Section 6 we will present some experimental proof of the leakage reduction afforded by those approaches.
Deleaking methods consist in modifying the estimated PRNU in a way that incurs a limited loss in PRNU detection performance while decreasing the leakage. Examples of this are PRNU compression methods (e.g., [11]), but other possibilities exist, such as high-pass filtering in order to mitigate the pollution introduced by the contextual information of the images [29], or whitening the estimated PRNU by normalizing it by its local standard deviation (i.e., equalizing) at every spatial position. This PRNU equalization offers practically the same detection performance as using the conventional PRNU but consistently decreases the leakage. A detailed treatment of binarization and equalization as deleaking methods is carried out in [8] and, therefore, is not covered in this work.
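The equalization step can be sketched as follows, assuming a 9 × 9 sliding window for the local standard deviation (the window size is our choice for illustration):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def equalize_prnu(k_hat, win=9, eps=1e-8):
    """Deleaking by equalization: divide the PRNU estimate by its local standard
    deviation, flattening the variance across the image and thereby suppressing
    content-dependent structure while preserving the fine-grained pattern."""
    pad = win // 2
    kp = np.pad(k_hat, pad, mode='reflect')
    local_std = sliding_window_view(kp, (win, win)).std(axis=(-2, -1))
    return k_hat / (local_std + eps)
```

After equalization, regions whose variance was inflated or depressed by image content end up with comparable energy, which is what removes the visible leaked structures.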
Finally, another approach is to limit the exposure of the images and the PRNU in the clear by using privacy-preserving techniques. This is possible by carrying out the PRNU estimation with encrypted images (and producing an encrypted PRNU) and detecting the encrypted PRNUs from encrypted query images [21]. This way, PRNU detection can be seen as a zero-knowledge proof mechanism. Although this is a very promising approach, substantial work is still needed to reduce the computational complexity of the underlying methods so that they become practical.

Experimental setup and results
We have carried out experiments to validate our measures on a database of images, all in both TIFF and JPEG formats, taken with several commercially available cameras listed in Table 1. The number of images per camera ranges from 122 (Canon1100D#2) to 316 (Canon1100D#1). We discuss the results separately for the mutual information and the membership inference test.

Mutual information
In our first experiment, with TIFF images, we have computed the lower bound from (6) (henceforth denoted as the Information Leakage Bound, ILB, and measured in bits per pixel, bpp) for two different values of L, namely L = 26 and L = 50. Denoising is carried out using the wavelet-based denoiser in [22]. The results, shown in Table 1, correspond to the average ILBs of 10 (resp. 5) runs of the experiment with randomly chosen subsets of size L = 26 (resp. L = 50). The decreasing trend with L can be explained by the fact that the disturbance power budget P stays approximately constant, while the 'desired' signal N_k reduces its power with L. In fact, notice that, as L → ∞, the term N_k is expected to go to zero due to the law of large numbers. The relatively small ILBs observed for the Canon 600D camera are conjectured to be due to the images in the respective dataset being very similar to each other.

Figure 4 (left) better illustrates the decrease of the leakage (as measured by the ILB) with L, as discussed at the end of Section 3.1. The plotted values correspond to the average ILBs of 5 runs of the experiment with randomly chosen subsets of size L. As discussed above, increasing L constitutes an advisable leakage mitigation mechanism that adds to the gains achieved in terms of detection performance. Notice, however, the diminishing returns with L: the leakage reduction from, say, doubling L is larger for smaller values of L. There is an important lesson here: as commercially available cameras increase their resolution, an ever smaller L is required to achieve a certain PRNU detection performance. While this fact is valuable from a practical point of view (often the number of available images in forensic cases is very small), it may be detrimental in terms of leakage, and additional measures may be required.
In order to quantify the impact of using flat-field images, in our next experiment we use DNG images taken with the camera of a Xiaomi MI5S smartphone to build the following: sets 50brt and 50drk correspond to L = 50 images of respectively white and black cardboard, while in sets 49brt+berry and 49drk+berry one of the images is replaced by the one shown in Fig. 2 (left). The corresponding ILBs are given in Table 2. As we discussed above in connection with the leakage mitigation, by comparing these values with those in Table 1 we can see that the usage of flat-field images tends to reduce leakage substantially. On the other hand, our dark images leak less information than the bright ones. Of course, this leakage does not correspond to perceptually meaningful information. Furthermore, while the inclusion of a non-flat image does not increase the information leakage of bright flat-field images, as the former gets diluted in the latter when averaging, this is not the case for dark images: the new image has a considerable impact on N_k and thus contributes to a larger leakage. This is consistent with the empirical observation that it is easier to extract traces from the image in Fig. 2 (left) when averaged with dark images (cf. Fig. 2 (right)).

Table 3 contains the results of repeating the experiment shown in Table 1 but using the BM3D denoising algorithm [20] instead of the wavelet-based one. The objective here is to show that a better denoising reduces the leakage. Even though all ILBs are smaller for the BM3D algorithm, the reduction with respect to the wavelet-based filter is not as substantial as one would expect, given the additional computational cost that it entails.

Membership inference
To assess the ability and accuracy of both the NP and NCC membership inference detectors, experiments were performed with PRNUs estimated from subsets of 25 and 50 TIFF images, randomly selected from a set of 190 images captured with the Nikon D7000 camera. Figure 6 shows the outputs of the NP and NCC detectors for one such subset: the first 50 samples of the sequence correspond to the membership test statistics of the 50 images used to estimate the PRNU. The results make clear that the detectors can distinguish which images were used to estimate the PRNU in a given dataset.
The same figure also shows the results of repeating the process with PRNUs estimated from randomly chosen sets of 25 TIFF images. As expected, the output of the detectors follows the same trend, but the gap between the two levels is now larger, since the individual contribution of each image becomes less relevant as more images are used in the estimation. These results are confirmed by the ROC curves of both detectors in Fig. 7, with L = 100 and L = 50, generated from 160 randomly selected combinations of TIFF images used to obtain the PRNU. From this figure, the degradation as L increases is again evident. Besides, the NP detector obtains marginally better results, as expected since it is derived from a likelihood ratio. Figure 4 (right) also includes the results for the Canon 600D camera. Among our whole set of cameras, this was the only one for which the membership inference method failed systematically; the reasons remain to be fully investigated. In any case, these results match those in Table 1, where the lower bound on the mutual information for this camera is the lowest among all tested devices. The excellent results (from an attacker's point of view) obtained with the Nikon D7000 are likewise explainable from the ILBs in that table, since this particular model exhibits a high ILB. This confirms the existence of a very close relationship between membership identification and the lower bound expressed in Eq. (6), which we intend to explore in the future.
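A minimal version of the NCC membership statistic can be sketched as follows. This is a toy simulation under a simplified additive residual model with illustrative dimensions and variances; the actual detectors operate on real denoising residuals, and the NP detector additionally relies on a likelihood ratio rather than a bare correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, L = 50_000, 50
prnu = 0.01 * rng.standard_normal(n_pix)            # weak "true" fingerprint (toy)

# residuals of member images (used in the estimate) and of held-out images
members = prnu + rng.standard_normal((L, n_pix))
outsiders = prnu + rng.standard_normal((L, n_pix))
k_hat = members.mean(axis=0)                        # averaging estimator

def ncc(a, b):
    """Normalized cross-correlation membership statistic."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

member_scores = [ncc(r, k_hat) for r in members]
outsider_scores = [ncc(r, k_hat) for r in outsiders]
# members share their own N_k with k_hat, so they score systematically higher
print(min(member_scores) > max(outsider_scores))
```

Each member's own N_k survives in k_hat with weight 1/L, which is exactly why the two score levels separate and why the separation shrinks as L grows.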
In Fig. 8 the experiments of Fig. 6 are repeated, but considering only L = 50 images drawn from a set of 190 JPEG-compressed images with a Quality Factor of 92, focusing again on the Nikon D7000. The results show that both detectors perform similarly in this scenario. These conclusions are further verified by the ROC curves plotted in Fig. 7a, obtained with the same experimental setup as in the uncompressed case.
In Fig. 7b we plot the ROC curves obtained following exactly the same procedure as in the previous experiments, but with the BM3D denoiser instead of the wavelet-based approach, on TIFF images from the Nikon D7000. As the results show, the performance of both detectors decreases, which was expected since BM3D provides better denoising. Nevertheless, the test still achieves an acceptable degree of discrimination, showing that the leakage is still present in the images. Similarly, under JPEG compression the performance of both detectors decreases only slightly, as the compression process enhances the denoising. In both cases, the two score levels can still be differentiated.

Conclusions
In this paper, the leakage into the PRNU from the database of images used for its estimation is revealed and lower-bounded using an information-theoretic approach. Experimental results show that this leakage is substantial and can thus entail significant privacy risks. As a consequence of this leakage, membership identification based on the PRNU becomes possible through Neyman-Pearson and correlation-based detectors, both achieving high accuracy. More importantly, the leakage uncovered here calls for a careful risk assessment and additional security and privacy measures when it comes to sharing PRNU-fingerprint databases. Different methods to mitigate the leakage were discussed and experimentally tested. First, we addressed the gain afforded by increasing the number L of images used in the estimation and showed that, while effective, this strategy yields diminishing returns. We also investigated JPEG compression as a means to mitigate the phenomenon, and showed that in practice compression schemes provide few advantages over working with uncompressed images. Finally, experiments with the BM3D denoiser were performed; despite the relative improvement over the wavelet-based denoiser, the results showed that it is not the most effective way to solve the leakage problem.
This paper is a first step towards modeling and removing the leakage from the PRNU. Some open problems we expect to tackle in the near future are:
• Image database reconstruction. Use machine learning techniques to reconstruct, as reliably as possible, the image database from the estimated PRNU. This would further illustrate the threats to privacy and support the use of leakage mitigation techniques.
• Data-driven PRNU estimators. Analyze the leakage phenomenon in machine learning-based PRNU estimators.
• Alternative mitigation methods. Investigate alternative leakage mitigation techniques, such as high-pass filters (both fixed and learning-based).
• Compression schemes. Analyze more aggressive compression schemes, and the trade-off between leakage mitigation and detection performance.