
Document authentication using graphical codes: reliable performance analysis and channel optimization

Abstract

This paper proposes to investigate the impact of the channel model for authentication systems based on codes that are corrupted by a physically unclonable noise such as the one emitted by a printing process. The core of such a system for the receiver is to perform a statistical test in order to recognize and accept an original code corrupted by noise and reject any illegal copy or a counterfeit. This study highlights the fact that the probability of type I and type II errors can be better approximated, by several orders of magnitude, when using the Cramér-Chernoff theorem instead of a Gaussian approximation. The practical computation of these error probabilities is also possible using Monte Carlo simulations combined with the importance sampling method. By deriving the optimal test within a Neyman-Pearson setup, a first theoretical analysis shows that a thresholding of the received code induces a loss of performance. A second analysis proposes to find the best parameters of the channels involved in the model in order to maximize the authentication performance. This is possible not only when the opponent’s channel is identical to the legitimate channel but also when the opponent’s channel is different, leading this time to a min-max game between the two players. Finally, we evaluate the impact of an uncertainty for the receiver on the opponent channel, and we show that the authentication is still possible whenever the receiver can observe forged codes and uses them to estimate the parameters of the model.

1 Introduction

The problem of authentication of physical products such as documents, goods, drugs, and jewels is a major concern in a world of global exchanges. The World Health Organization in 2005 claimed that nearly 25% of medicines in developing countries are forgeries [1], and according to the Organization for Economic Co-operation and Development (OECD), international trade in counterfeit and pirated goods reached more than US$250 billion in 2009 [2].

1.1 Addressed problem and related works

Authentication of physical products is generally done by using the stochastic structure either of the materials that compose the product or of a printed package associated with it. Authentication can be performed, for example, by recording the random patterns of the fibers of a paper [3], but such a system is cumbersome to deploy in practice since each product needs to be linked to its high-definition capture stored in a database. Another solution is to rely on the degradation induced by the interaction between the product and a physical process such as printing, marking, embossing, carving, etc. Because of both the defects of the physical process and the stochastic nature of the matter, this interaction can be considered as a physically unclonable function (PUF) [4] that cannot be reproduced by the forger and can consequently be used to perform authentication. In [5], the authors measure the degradation of the inks within printed color tiles and use the discrepancy between the statistics of the authentic and print-and-scan tiles to perform authentication. Other marking techniques can also be used; in [6], the authors propose to characterize the random profiles of laser marks on materials such as metals (the technique is called LPUF for laser-written PUF) and to use them as authentication features.

We study in this paper an authentication system which uses the fact that a printing process at very high resolution can be seen as a stochastic process due to the nature of different elements such as the paper fibers, the ink heterogeneity, or the dot addressability of the printer. Such an authentication system has been proposed by Picard et al. [7, 8] and uses 2D pseudo-random binary codes that are printed at the native resolution of the printer (2,400 dpi on a standard offset printer or 812 dpi on a digital HP Indigo printer). The principle of the system studied in this paper is depicted in Figure 1:

 The original code is secretly exchanged between the legitimate source and the receiver.

Figure 1. Principle of authentication using graphical codes.

 Once printed on a package to be authenticated, the degraded code will be scanned and then thresholded by an opponent (the forger). It is important to note that at this stage thresholding is necessary for the opponent because industrial printers can only print dots, i.e., binary versions of the scanned code.

 The opponent then produces a printed copy of the original code to manufacture his forgery.

 The receiver performs a test on an observed scanned code, which is either the scanned version of the original printed code or the scanned version of the fake code. Using his knowledge of the original code, he establishes a statistical test in order to perform authentication.

One advantage of this system over previously cited ones is that it is easy to deploy since the authentication process needs only a scan of the graphical code under scrutiny and the seed used to generate the original one: no fingerprint database is required in this case.

The security of this system solely relies on the use of a PUF, i.e., on the impossibility for the opponent to accurately estimate the original binary code. Different security analyses have already been performed with respect to (w.r.t.) this authentication system or very similar ones. In [9], the authors have studied the impact of multiple printed observations of the same graphical code and have shown that the power of the noise due to the printing process can be reduced in this particular setup, but not completely removed due to deterministic printing artifacts. In [10], the authors use machine learning tools in order to try to infer the original code from an observation of the printed code; their study shows that the estimation accuracy can be increased without recovering the original code perfectly. In [11], the authors propose a print and scan model adapted to graphical codes and derive attacks and adapted detection metrics to counter the attacks. In [12], the authors consider the security analysis in the rather similar setup of passive fingerprinting using binary fingerprints under informed attacks (the channel between the original code and the copied code is assumed to be a binary symmetric channel). They show that in this case the security increases with the code length, and they propose a practical threshold when type I error (original detected as a forgery) and type II error (forgery detected as an original) are equal.

1.2 Notations

We denote sets by calligraphic font, e.g., $\mathcal{X}$, random variables (RV) ranging over these sets by the same italic capitals, e.g., $X$, and their outcomes by lowercase letters, e.g., $x$. $E_X[\cdot]$ denotes the expectation over $X$. The cardinality of the set $\mathcal{X}$ is denoted by $|\mathcal{X}|$. The sequence of $N$ variables $(X_1, X_2, \ldots, X_N)$ is denoted $X^N$.

1.3 Setup

The binary graphical code can be seen as an authentication sequence $x^N$ chosen at random from the message set $\mathcal{X}^N$ and shared secretly with the legitimate receiver. In our authentication model, $x^N$ is published as a noisy version $y^N$ taking values in the set of points $\mathcal{V}^N$ (see Figure 1). An opponent may observe $y^N$ and naturally tries to retrieve the original authentication sequence. He obtains an estimated sequence $\hat{x}^N$ and publishes a forgery as a sequence $z^N$ taking values in the same set of points $\mathcal{V}^N$, hoping that it will be accepted by the receiver as coming from the legitimate source. When observing a sequence $o^N$, which may be either of the two possible sequences $y^N$ or $z^N$, the destination has to decide whether this observed sequence comes from the legitimate source or not.

The authentication model involves two channels $X \to (Y, Z)$, and in the rest of the paper, we define the main channel as the channel between the legitimate source and the receiver, and the opponent channel as the channel between the legitimate source and the receiver but passing through the counterfeiter's channel (see Figure 1). The two channels $X \to (Y, Z)$ are considered discrete and memoryless with conditional probability distribution $P_{YZ|X}(y, z \mid x)$. The marginal channels $P_{Y|X}$ and $P_{Z|X}$ constitute the transition probability matrices of the main channel and the opponent channel, respectively.

As we shall see in the rest of the paper, authentication performances are directly impacted by the discrimination between the two channels and can be maximized by channel optimization.

Note that the authentication sequence $x^N$ is generated using a secure pseudo-random number generator (PRNG) having a sufficiently large key space to prevent brute-force attacks. The seed of the PRNG can practically be transmitted via a secure lossless communication channel and a key distribution system so that the receiver can generate $x^N$ from the seed. The security of such a system is beyond the scope of this paper.

1.4 Contributions of the paper

The goal of this paper is twofold:

 Firstly, it provides reliable performance measurements of the authentication system based on a Neyman-Pearson hypothesis test, i.e., it computes accurately the probability of rejecting an authentic code and the probability of non-detecting an illegal copy, denoted as type I and type II errors, respectively. An asymptotic expression which is more accurate than the Gaussian expression is first proposed to compute these error probabilities; then, the importance sampling simulation method is provided to practically estimate them. We evaluate the impact of the Gaussian approximation of the test with respect to its asymptotic expression.

 Secondly, the computation of type I and type II errors is used to derive the most favorable channels for authentication. We show first that it is in the receiver's interest to process directly the scanned grayscale code instead of a binary version. Then, the error probabilities are used to compute, for a given channel model, the configuration which maximizes the authentication performance.

This paper is an extension of [13] in which we use the generalized Gaussian distribution family instead of the Gaussian distribution as in [13]. Moreover, the analytical formulation of these probabilities is practically confirmed by using an importance sampling method, a Monte Carlo strategy of numerical simulation that can be used to compute rare events. We also present how to design the channel in order to maximize the authentication performance for different cases of generalized Gaussian distributions and when the opponent is either passive (he undergoes the same channel as the receiver) or active (he can adapt his channel).

2 The authentication channel

2.1 Channel modeling

Let $T_{V|X}$ be the generic transition matrix modeling the whole physical process, more specifically the printing and scanning devices. The entries of this matrix are conditional probabilities $T_{V|X}(v \mid x)$ relating an input alphabet $\mathcal{X}$ and an output alphabet $\mathcal{V}$. In practical and realistic situations, $\mathcal{X}$ is a binary alphabet standing for black (0) and white (1) elements of a digital code, and the channel output set $\mathcal{V}$ stands for the set of gray-level values with cardinality K (for printed and scanned images, K = 256). The transition matrix $T_{V|X}$ may conceptually represent any discrete distribution over the set $\mathcal{V}$, but we will focus in Section 4.4 on some common and realistic distributions when analyzing the performance numerically.

The marginal distribution of the main channel $P_{Y|X}$ is equivalent to one print and scan process, and consequently, we have $P_{Y|X} = T_{V|X}$. On the other hand, $P_{Z|X}$ depends on the opponent's processing since he has to retrieve the original sequence before reprinting it. We aim here at expressing this marginal distribution, considering that the opponent tries to restore the original sequence before publishing his fraudulent sequence $z^N$.

When performing a detection to obtain an estimated sequence $\hat{x}^N$ of the original code, the opponent undergoes errors. These errors are evaluated with probabilities $P_{e,W}$ when confusing an original white dot with a black one and $P_{e,B}$ when confusing an original black dot with a white one. This distinction is due to the fact that the distribution $T_{V|X}$ of the physical devices is arbitrary and not necessarily symmetric. Let $\mathcal{D}_W$ be the optimal decision region for decoding white dots obtained after classical maximum likelihood decoding:

$$\mathcal{D}_W = \left\{ v \in \mathcal{V} : P_{Y|X}(v \mid X=1) > P_{Y|X}(v \mid X=0) \right\}.$$
(1)

Error probabilities Pe,W and Pe,B are then equal to

$$P_{e,B} = \sum_{v \in \mathcal{D}_W} P_{Y|X}(v \mid X=0),$$
(2)
$$P_{e,W} = \sum_{v \in \mathcal{D}_W^c} P_{Y|X}(v \mid X=1),$$
(3)

where $\mathcal{D}_W^c$ is the complementary region in the set $\mathcal{V}$. The channel $X \to \hat{X}$ can be modeled as a binary input binary output (BIBO) channel with transition probability matrix $P_{\hat{X}|X}$:

$$\begin{pmatrix} P_{\hat{X}|X}(\hat{x}=0 \mid x=0) & P_{\hat{X}|X}(\hat{x}=1 \mid x=0) \\ P_{\hat{X}|X}(\hat{x}=0 \mid x=1) & P_{\hat{X}|X}(\hat{x}=1 \mid x=1) \end{pmatrix} = \begin{pmatrix} 1-P_{e,B} & P_{e,B} \\ P_{e,W} & 1-P_{e,W} \end{pmatrix}$$
(4)

As we can see in Figure 1, the opponent channel $X \to Z$ is a physically degraded version of the main channel. Thus, $X \to \hat{X} \to Z$ forms a Markov chain with the relation $P_{\hat{X}Z|X}(\hat{x}, z \mid x) = P_{\hat{X}|X}(\hat{x} \mid x)\, T_{Z|\hat{X}}(z \mid \hat{x})$, where $T_{Z|\hat{X}}$ is the transition matrix of the counterfeiter's physical device. The components of the marginal channel matrix $P_{Z|X}$ are

$$P_{Z|X}(v \mid x) = \sum_{\hat{x}=0,1} P_{\hat{X}Z|X}(\hat{x}, v \mid x) = \sum_{\hat{x}=0,1} P_{\hat{X}|X}(\hat{x} \mid x)\, T_{Z|\hat{X}}(v \mid \hat{x}).$$
(5)

Finally, we have

$$P_{Z|X}(v \mid X=0) = (1-P_{e,B})\, T_{Z|\hat{X}}(v \mid \hat{X}=0) + P_{e,B}\, T_{Z|\hat{X}}(v \mid \hat{X}=1),$$
(6)
$$P_{Z|X}(v \mid X=1) = (1-P_{e,W})\, T_{Z|\hat{X}}(v \mid \hat{X}=1) + P_{e,W}\, T_{Z|\hat{X}}(v \mid \hat{X}=0).$$
(7)
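To make the construction of these channels concrete, the following minimal sketch (not taken from the paper; the discretized Gaussian shape and the values of the means and standard deviation are illustrative assumptions) builds a main channel $T_{V|X}$ over 256 gray levels, computes the opponent's decision region and error probabilities of (1)-(3), and assembles the opponent marginal channel $P_{Z|X}$ of (6)-(7) under the assumption that the counterfeiter reprints with the same physical device.

```python
import numpy as np
from scipy.stats import norm

K = 256                       # number of gray levels
v = np.arange(K)

def discretized_gaussian(mean, sigma):
    """Discretize a Gaussian over {0,...,255} and renormalize (assumed print-and-scan model)."""
    p = norm.pdf(v, loc=mean, scale=sigma)
    return p / p.sum()

# Main channel T_{V|X}: rows indexed by x in {0 (black), 1 (white)}
T = np.vstack([discretized_gaussian(50, 40),     # T_{V|X}(. | x = 0)
               discretized_gaussian(150, 40)])   # T_{V|X}(. | x = 1)

# Opponent decision region D_W, eq. (1): maximum likelihood detection of white dots
D_W = T[1] > T[0]

# Opponent decoding error probabilities, eqs. (2)-(3)
P_eB = T[0, D_W].sum()        # black dot decoded as white
P_eW = T[1, ~D_W].sum()       # white dot decoded as black

# Opponent marginal channel P_{Z|X}, eqs. (6)-(7), assuming T_{Z|X_hat} = T
P_ZX = np.vstack([(1 - P_eB) * T[0] + P_eB * T[1],    # P_{Z|X}(. | X = 0)
                  (1 - P_eW) * T[1] + P_eW * T[0]])   # P_{Z|X}(. | X = 1)
print(P_eB, P_eW)
```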

2.2 Receiver’s strategies: thresholding or not?

Two strategies are possible for the receiver.

2.2.1 Binary thresholding

As a first strategy, the legitimate receiver first decodes the observed sequence $o^N$ using a maximum likelihood criterion based on the main channel marginal distribution $P_{Y|X}$. He then restores a binary version $\tilde{x}^N$ of the original message $x^N$ using the same decision region as defined by (1) and naturally undergoes errors.

 In the main channel, i.e., when O N = Y N , error probabilities are equivalent to (2) and (3).

 In the opponent channel, i.e., when O N = Z N , we make use of (6) and (7) to express the corresponding error probabilities:

$$\tilde{P}_{e,W} = \sum_{v \in \mathcal{D}_W^c} P_{Z|X}(v \mid X=1) = (1-P_{e,W}) \sum_{v \in \mathcal{D}_W^c} T_{Z|\hat{X}}(v \mid \hat{X}=1) + P_{e,W} \sum_{v \in \mathcal{D}_W^c} T_{Z|\hat{X}}(v \mid \hat{X}=0),$$
(8)
$$\tilde{P}_{e,W} = (1-P_{e,W})\, P'_{e,W} + P_{e,W}\, (1-P'_{e,B}),$$
(9)

where $P'_{e,W} = \sum_{v \in \mathcal{D}_W^c} T_{Z|\hat{X}}(v \mid \hat{X}=1)$ and $P'_{e,B} = \sum_{v \in \mathcal{D}_W} T_{Z|\hat{X}}(v \mid \hat{X}=0)$ are the decoding error probabilities induced by the counterfeiter's physical device alone. The same development yields

$$\tilde{P}_{e,B} = (1-P_{e,B})\, P'_{e,B} + P_{e,B}\, (1-P'_{e,W}).$$
(10)

For this first strategy, the opponent channel may be viewed as the cascade of two binary input/binary output channels:

$$\begin{pmatrix} 1-\tilde{P}_{e,B} & \tilde{P}_{e,B} \\ \tilde{P}_{e,W} & 1-\tilde{P}_{e,W} \end{pmatrix} = \begin{pmatrix} 1-P_{e,B} & P_{e,B} \\ P_{e,W} & 1-P_{e,W} \end{pmatrix} \times \begin{pmatrix} 1-P'_{e,B} & P'_{e,B} \\ P'_{e,W} & 1-P'_{e,W} \end{pmatrix}.$$
(11)

As we will see in the next section, in this particular case, the test to decide whether the observed decoded sequence $\tilde{x}^N$ comes from the legitimate source or not amounts to counting the number of erroneously decoded dots.

2.2.2 Gray-level observations

In the second strategy, the receiver performs his test directly on the received sequence $o^N$ without any prior decoding. We will see in Section 3.3 that this strategy is better than the previous one (see Section 3.2).

3 Impacts of the receiver’s strategies on hypothesis testing

We consider here testing whether, for a given fixed input $(x_1, \ldots, x_N)$, an observed independent and identically distributed (i.i.d.) sequence $(o_1, \ldots, o_N \mid x_1, \ldots, x_N)$ is generated from a given distribution $P_{Y|X}$ or comes from an alternative hypothesis associated with the distribution $P_{Z|X}$, each $(o_i \mid x_i)$ belonging to a discrete finite set $\mathcal{V}$. Practically, we are interested in performing authentication after observing a sequence of N samples $(o_i \mid x_i)$, attesting whether this sequence comes from a legitimate source or from a counterfeiter. The receiver then makes a decision based on a predefined statistical test and assigns one of the two hypotheses $H_0$ or $H_1$ corresponding, respectively, to each of the former cases. According to this test, the space $\mathcal{V}^N$ is partitioned into two regions $\Lambda_0$ and $\Lambda_1$. Accepting hypothesis $H_0$ while the code is actually a fake (the observed N-sample sequence belongs to $\Lambda_0$ while $H_1$ is true) leads to an error of type II having probability β. Rejecting hypothesis $H_0$ while the observed sequence actually comes from the legitimate source (the observed N-sample sequence belongs to $\Lambda_1$ while $H_0$ is true) leads to an error of type I with probability α. It is desirable to find a test with a minimal probability β for a fixed or prescribed probability of type I error. An optimal decision rule is given by the Neyman-Pearson criterion. The eponymous theorem states that under the constraint $\alpha \leq \alpha^\star$, β is minimized if and only if the following log-likelihood test infers the choice of $H_1$:

$$\log \frac{P^N(o^N \mid x^N, H_1)}{P^N(o^N \mid x^N, H_0)} \geq \gamma,$$
(12)

where $\gamma$ is a threshold verifying the constraint $\alpha \leq \alpha^\star$.

3.1 Authentication via binary thresholding

In the first strategy, the final observed data is $\tilde{x}^N$, and the original sequence $x^N$ is a side information containing two types of data ('0' and '1'). The conditional distribution of each random component $(\tilde{X}_i \mid x_i)$ of the sequence $(\tilde{X}^N \mid x^N)$ is the same for each given type. We now compute the probabilities that describe the two random i.i.d. sequences $(\tilde{X}^N \mid x^N)$, one per data type, and for each of the two possible hypotheses. We then derive the corresponding test from (12). Under hypothesis $H_j$, $j \in \{0, 1\}$, these probabilities are expressed conditionally on the known original code $x^N$. Let $\mathcal{N}_B = \{i : x_i = 0\}$ and $\mathcal{N}_W = \{i : x_i = 1\}$, with $N_B = |\mathcal{N}_B|$ and $N_W = |\mathcal{N}_W|$. Because the sequences are i.i.d., we have

$$P^N(\tilde{x}^N \mid x^N, H_j) = \prod_{i=1}^{N} P(\tilde{x}_i \mid x_i, H_j) = \prod_{i \in \mathcal{N}_B} P(\tilde{x}_i \mid 0, H_j) \times \prod_{i \in \mathcal{N}_W} P(\tilde{x}_i \mid 1, H_j).$$

 Under hypothesis $H_0$, the channel $X \to \tilde{X}$ has distributions given by (2) and (3), and we have

$$P^N\left(\tilde{x}^N \mid x^N, H_0\right) = (P_{e,B})^{n_{e,B}} (1-P_{e,B})^{N_B - n_{e,B}} \times (P_{e,W})^{n_{e,W}} (1-P_{e,W})^{N_W - n_{e,W}},$$

where $n_{e,B}$ and $n_{e,W}$ are the numbers of errors ($\tilde{x}_i \neq x_i$) when a black dot is decoded as white and when a white dot is decoded as black, respectively.

 Under hypothesis $H_1$, the channel $X \to \tilde{X}$ has distributions given by (9) and (10), and we have

$$P^N\left(\tilde{x}^N \mid x^N, H_1\right) = (\tilde{P}_{e,B})^{n_{e,B}} (1-\tilde{P}_{e,B})^{N_B - n_{e,B}} \times (\tilde{P}_{e,W})^{n_{e,W}} (1-\tilde{P}_{e,W})^{N_W - n_{e,W}}.$$

Applying now the Neyman-Pearson criterion (12), the test is expressed as

$$L_1 = \log \frac{P^N\left(\tilde{x}^N \mid x^N, H_1\right)}{P^N\left(\tilde{x}^N \mid x^N, H_0\right)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma,$$
(13)
$$L_1 = n_{e,B} \log \frac{\tilde{P}_{e,B}(1-P_{e,B})}{P_{e,B}(1-\tilde{P}_{e,B})} + n_{e,W} \log \frac{\tilde{P}_{e,W}(1-P_{e,W})}{P_{e,W}(1-\tilde{P}_{e,W})} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \lambda_1,$$
(14)

where $\lambda_1 = \gamma - N_B \log \frac{1-\tilde{P}_{e,B}}{1-P_{e,B}} - N_W \log \frac{1-\tilde{P}_{e,W}}{1-P_{e,W}}$. This expression has the practical advantage of reducing authentication to counting the numbers of decoding errors, but at the cost of a loss of optimality.
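As a short illustration of this counting test (a sketch under the same assumptions as the previous one, not the authors' implementation), the decision of (14) only needs the two error counts and the four error probabilities of (2)-(3) and (9)-(10):

```python
import numpy as np

def counting_test(x_tilde, x, P_e, P_te, lam1):
    """Thresholding-based test of eq. (14).
    x_tilde, x: re-binarized and original codes (arrays of 0/1);
    P_e = (P_eB, P_eW) under H0, eqs. (2)-(3); P_te = (P~_eB, P~_eW) under H1, eqs. (9)-(10)."""
    n_eB = int(np.sum((x == 0) & (x_tilde == 1)))   # black dots decoded as white
    n_eW = int(np.sum((x == 1) & (x_tilde == 0)))   # white dots decoded as black
    w_B = np.log(P_te[0] * (1 - P_e[0]) / (P_e[0] * (1 - P_te[0])))
    w_W = np.log(P_te[1] * (1 - P_e[1]) / (P_e[1] * (1 - P_te[1])))
    return n_eB * w_B + n_eW * w_W >= lam1          # True: decide H1 (forgery)
```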

3.2 Authentication via gray-level observations

In the second strategy, the observed data is $o^N$. Here again, the conditional distribution of each random component $(O_i \mid x_i)$ of the sequence $(O^N \mid x^N)$ is the same for each type of data of X. The Neyman-Pearson test is expressed as

$$L_2 = \log \frac{P^N(o^N \mid x^N, H_1)}{P^N(o^N \mid x^N, H_0)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \lambda_2,$$
(15)

which can be developed as

$$L_2 = \sum_{i \in \mathcal{N}_B} \log \frac{P_{Z|X}(o_i \mid 0)}{P_{Y|X}(o_i \mid 0)} + \sum_{i \in \mathcal{N}_W} \log \frac{P_{Z|X}(o_i \mid 1)}{P_{Y|X}(o_i \mid 1)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \lambda_2,$$
(16)
$$L_2 = \sum_{i \in \mathcal{N}_B} \log \left[ (1-P_{e,B}) \frac{T_{Z|\hat{X}}(o_i \mid 0)}{T_{Y|X}(o_i \mid 0)} + P_{e,B} \frac{T_{Z|\hat{X}}(o_i \mid 1)}{T_{Y|X}(o_i \mid 0)} \right] + \sum_{i \in \mathcal{N}_W} \log \left[ (1-P_{e,W}) \frac{T_{Z|\hat{X}}(o_i \mid 1)}{T_{Y|X}(o_i \mid 1)} + P_{e,W} \frac{T_{Z|\hat{X}}(o_i \mid 0)}{T_{Y|X}(o_i \mid 1)} \right] \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \lambda_2.$$
(17)

Note that the expressions of the transition matrices modeling the physical processes, $T_{Y|X}$ and $T_{Z|\hat{X}}$, are required in order to perform the optimal test.
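A minimal sketch of this gray-level test (illustrative, not the authors' code; the channel matrices are assumed to be built as in the sketch of Section 2.1) computes the score of (16) directly from the two marginal channels:

```python
import numpy as np

def gray_level_score(o, x, P_YX, P_ZX, eps=1e-300):
    """Log-likelihood ratio L2 of eq. (16).
    o: observed gray levels (ints in 0..K-1), x: original bits (0/1), same length;
    P_YX, P_ZX: 2xK main and opponent marginal channel matrices."""
    num = P_ZX[x, o]                 # P_{Z|X}(o_i | x_i), hypothesis H1
    den = P_YX[x, o]                 # P_{Y|X}(o_i | x_i), hypothesis H0
    return float(np.sum(np.log(num + eps) - np.log(den + eps)))

# Decide H1 (forgery) when gray_level_score(o, x, P_YX, P_ZX) >= lambda_2.
```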

3.3 Authentication with thresholding vs authentication without thresholding

In this setup and without loss of generality, we consider only the Gaussian model with variance σ² for the physical devices $T_{Y|X}$ and $T_{Z|\hat{X}}$. Figure 2 compares the receiver operating characteristic (ROC) curves associated with the two different strategies. Note that the error probabilities are computed using the results given in the next section (see Section 4.2). We can notice that the gap between the two strategies is important. This is not surprising since binary thresholding removes information from the gray-level observation, yet it has a practical impact because a practitioner may be tempted to use the number of errors given in (14) as an authentication score because of its easy implementation. The information-theoretic analysis presented in the Appendix also confirms that authentication is more accurate without thresholding, and this result is in line with the remark of Blahut in [14], where on p. 108 he writes that 'information is increased if a measurement is made more precise [...] (i.e. with a refinement of the set of measurement outcomes).'

Figure 2. ROC curves for the two different strategies (N = 2,000, σ = 52). α is the probability of rejecting an authentic code and β the probability of non-detecting an illegal copy.

Moreover, as we will see in Section 5, the plain scan of the graphical code can be used whenever the receiver needs to estimate the opponent’s channel.

4 Toward reliable performance evaluation

In the previous section, we have expressed the Neyman-Pearson test for the two proposed strategies, summarized by (14) and (17). These tests may then be practically performed on the observed sequence in order to make a decision about its authenticity. We now aim at expressing the error probabilities of types I and II and comparing the two possible strategies described previously. Let m = 1, 2 be the index denoting the strategy; a straightforward calculation gives

$$\alpha_m = \sum_{l \geq \lambda_m} P_{L_m}(l \mid H_0),$$
(18)
$$\beta_m = \sum_{l < \lambda_m} P_{L_m}(l \mid H_1),$$
(19)

where $P_{L_m}(l \mid H_j)$ is the distribution of the log-likelihood ratio $L_m$ under hypothesis $H_j$.

4.1 Gaussian approximation

As the length N of the sequence is generally large, we use the central limit theorem to study the distributions P L m , m=1, 2 (a similar strategy was proposed in [15]).

 For the binary thresholding strategy, ne,W and ne,B in (14) are binomial random variables depending on the origin of the observed sequence. Let N x stand for the number of data of type x in the original code and Pe,x the cross-over probabilities emerging from type x in the BIBO channels (4) or (11). When N is large enough, the binomial random variables can be approximated with a Gaussian distribution. We have

$$n_{e,x} \sim \mathcal{N}\big(N_x P_{e,x},\; N_x P_{e,x}(1-P_{e,x})\big).$$
(20)

 From (14), L1 is a weighted sum of Gaussian random variables and one can obviously deduce the parameters of the normal approximation describing the log-likelihood L1.

 For the second strategy, i.e., when the receiver tests directly the observed gray-level sequence, the log-likelihood L2 in Equation 17 may be expressed as two sums of i.i.d. random variables and becomes

$$L_2 = \sum_{i \in \mathcal{N}_B} \ell(o_i, 0) + \sum_{i \in \mathcal{N}_W} \ell(o_i, 1) \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \lambda_2,$$
(21)

 where $\ell(v, x)$ is a function $\ell : \mathcal{X} \times \mathcal{V} \to \mathbb{R}$ having some distribution with mean and variance equal to

$$m_x = E[\ell(V, x) \mid H_j] = \sum_{v \in \mathcal{V}} \ell(v, x)\, P(v \mid x, H_j),$$
(22)

and

$$\mathrm{var}[\ell(V, x) \mid H_j] = \sum_{v \in \mathcal{V}} \big(\ell(v, x) - m_x\big)^2\, P(v \mid x, H_j),$$
(23)

 with $P = P_{Y|X}$ (respectively $P = P_{Z|X}$) for j = 0 (respectively j = 1). The central limit theorem is then used again to approximate the distribution of L2 and compute the type I and type II error probabilities.
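The following sketch (illustrative assumptions only, not the authors' code) turns these per-symbol moments into the Gaussian approximation of α and β for the gray-level test:

```python
import numpy as np
from scipy.stats import norm as gauss

def gaussian_alpha_beta(ell, P_YX, P_ZX, N_B, N_W, lam):
    """Gaussian approximation of alpha and beta for L2.
    ell: 2xK matrix of ell(v, x); P_YX, P_ZX: 2xK channels; lam: threshold lambda_2."""
    def moments(P):
        m = (P * ell).sum(axis=1)                        # per-type mean, eq. (22)
        v = (P * (ell - m[:, None]) ** 2).sum(axis=1)    # per-type variance, eq. (23)
        return N_B * m[0] + N_W * m[1], N_B * v[0] + N_W * v[1]
    m0, v0 = moments(P_YX)                               # L2 under H0
    m1, v1 = moments(P_ZX)                               # L2 under H1
    alpha = gauss.sf(lam, loc=m0, scale=np.sqrt(v0))     # Pr(L2 >= lam | H0)
    beta = gauss.cdf(lam, loc=m1, scale=np.sqrt(v1))     # Pr(L2 <  lam | H1)
    return alpha, beta
```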

4.2 Asymptotic expression

In this section, we drop the subscript m denoting the strategy, as all the subsequent analysis is common to both of them. One important problem is the fact that the Gaussian approximation proposed previously provides inaccurate error probability values when the threshold λ in (18) and (19) is far from the mean of the log-likelihood random variable L. The Chernoff bound and large deviation theory [16] are preferred in this context as very small error probabilities of types I and II may be desired [17]. Given a real number s, the Chernoff bounds on type I and type II errors may be expressed as

$$\alpha = \Pr(L \geq \lambda \mid H_0) \leq e^{-s\lambda}\, g_L(s; H_0) \quad \text{for any } s > 0,$$
(24)
$$\beta = \Pr(L \leq \lambda \mid H_1) \leq e^{-s\lambda}\, g_L(s; H_1) \quad \text{for any } s < 0,$$
(25)

where the function g L (s ; H j ), j=0, 1 is the moment generating function of the random variable L defined as

$$g_L(s; H_j) = E_{P_L(L \mid H_j)}\left[ e^{sL} \right],$$
(26)

where the expectation is performed with respect to the distribution $P_L(L \mid H_j)$. Recalling that L is a sum of N independent random variables, asymptotic analysis in probability theory (when N is large enough) shows that bounds similar to (24) and (25) are much more appropriate for estimating α and β than the Gaussian approximation, especially when λ is far from E[L], namely when bounding the tails of a distribution [16, 17]. The tightest bound is obtained by finding the value of s that provides the minimum of the right-hand side (RHS) of (24) and (25), i.e., the minimum of $e^{-s\lambda} g_L(s; H_j)$ for each j = 0, 1. Taking the derivative, the value of s that provides the tightest bound under each hypothesis is such that^a

$$\lambda = \left.\frac{\mathrm{d}g_L(s; H_j)/\mathrm{d}s}{g_L(s; H_j)}\right|_{s=\tilde{s}_j} = \left.\frac{\mathrm{d}}{\mathrm{d}s} \ln g_L(s; H_j)\right|_{s=\tilde{s}_j}.$$
(27)

Identity (27) motivates the introduction of the semi-invariant moment generating function of L, defined as

$$\mu_L(s; H_j) = \ln g_L(s; H_j).$$
(28)

This function has many interesting properties that ease the extraction of an asymptotic expression for (24) and (25) [17]. For instance, this function is additive for the sum of independent random variables, and we have

$$\mu_L(s; H_j) = \sum_{i \in \mathcal{N}_B} \mu_{i,0}(s; H_j) + \sum_{i \in \mathcal{N}_W} \mu_{i,1}(s; H_j),$$
(29)

where $\mu_{i,x}(s; H_j)$ is the semi-invariant moment generating function of the random component $\ell(O_i, x)$ when the observed sequence comes from the distribution associated with hypothesis $H_j$. In addition, relation (27) may be expressed as the sum of the derivatives at the value $\tilde{s}_j$ optimizing the bound:

$$\lambda = \sum_{i \in \mathcal{N}_B} \mu'_{i,0}(\tilde{s}_j; H_j) + \sum_{i \in \mathcal{N}_W} \mu'_{i,1}(\tilde{s}_j; H_j).$$
(30)

Chernoff bounds on type I and type II errors (24) and (25) may then be expressed as

$$\alpha = \Pr(L \geq \lambda \mid H_0) \leq \exp\left[ \sum_{i \in \mathcal{N}_B} \left( \mu_{i,0}(\tilde{s}_0; H_0) - \tilde{s}_0\, \mu'_{i,0}(\tilde{s}_0; H_0) \right) + \sum_{i \in \mathcal{N}_W} \left( \mu_{i,1}(\tilde{s}_0; H_0) - \tilde{s}_0\, \mu'_{i,1}(\tilde{s}_0; H_0) \right) \right],$$
(31)

and

$$\beta = \Pr(L \leq \lambda \mid H_1) \leq \exp\left[ \sum_{i \in \mathcal{N}_B} \left( \mu_{i,0}(\tilde{s}_1; H_1) - \tilde{s}_1\, \mu'_{i,0}(\tilde{s}_1; H_1) \right) + \sum_{i \in \mathcal{N}_W} \left( \mu_{i,1}(\tilde{s}_1; H_1) - \tilde{s}_1\, \mu'_{i,1}(\tilde{s}_1; H_1) \right) \right].$$
(32)

The distribution of each random component $(O_i \mid x_i)$ in the sequence $(O^N \mid x^N)$ is the same for each type of data x, and consequently, $\mu_{i,x}(s; H_j) = \mu_x(s; H_j)$, i.e., $\mu_{i,x}(s; H_j)$ is independent of i for each type of data x. The RHS in (31) and (32) can be simplified as

$$\exp\left[ N_B \left( \mu_0(\tilde{s}_j; H_j) - \tilde{s}_j\, \mu'_0(\tilde{s}_j; H_j) \right) + N_W \left( \mu_1(\tilde{s}_j; H_j) - \tilde{s}_j\, \mu'_1(\tilde{s}_j; H_j) \right) \right].$$
(33)

Roughly speaking, Cramér's theorem [16] states that for sufficiently large N, the upper bounds expressed for j = 0, 1 in (33) are also lower bounds for α and β, respectively. Thus, one can write, for $N_B \approx N_W \approx N/2$:

$$\lim_{N \to \infty} \frac{2}{N} \ln \alpha = \mu(\tilde{s}_0; H_0) - \tilde{s}_0\, \mu'(\tilde{s}_0; H_0),$$
(34)
$$\lim_{N \to \infty} \frac{2}{N} \ln \beta = \mu(\tilde{s}_1; H_1) - \tilde{s}_1\, \mu'(\tilde{s}_1; H_1),$$
(35)

where $\tilde{s}_0 > 0$, $\tilde{s}_1 < 0$, $\mu(\tilde{s}_j; H_j) = \mu_0(\tilde{s}_j; H_j) + \mu_1(\tilde{s}_j; H_j)$, and $\mu'(\tilde{s}_j; H_j) = \mu'_0(\tilde{s}_j; H_j) + \mu'_1(\tilde{s}_j; H_j)$. A modified asymptotic expression including a correction factor is available for the sum of an i.i.d. random sequence (see [17], Appendix 5A), and for large N, we have

$$\alpha = \Pr(L \geq \lambda \mid H_0) \underset{N \to \infty}{\approx} \frac{1}{\tilde{s}_0 \sqrt{\pi N \mu''(\tilde{s}_0; H_0)}} \exp\left[ \frac{N}{2} \left( \mu(\tilde{s}_0; H_0) - \tilde{s}_0\, \mu'(\tilde{s}_0; H_0) \right) \right]$$
(36)

and

$$\beta = \Pr(L \leq \lambda \mid H_1) \underset{N \to \infty}{\approx} \frac{1}{|\tilde{s}_1| \sqrt{\pi N \mu''(\tilde{s}_1; H_1)}} \exp\left[ \frac{N}{2} \left( \mu(\tilde{s}_1; H_1) - \tilde{s}_1\, \mu'(\tilde{s}_1; H_1) \right) \right],$$
(37)

where $\mu''(\tilde{s}_j; H_j) = \mu''_0(\tilde{s}_j; H_j) + \mu''_1(\tilde{s}_j; H_j)$ is the second derivative of the semi-invariant moment generating function of $\ell(V, x)$, defined by

$$\ell(v, 1) = \log\left[ (1-P_{e,W}) \frac{T_{Z|\hat{X}}(v \mid 1)}{T_{Y|X}(v \mid 1)} + P_{e,W} \frac{T_{Z|\hat{X}}(v \mid 0)}{T_{Y|X}(v \mid 1)} \right],$$
(38)
$$\ell(v, 0) = \log\left[ (1-P_{e,B}) \frac{T_{Z|\hat{X}}(v \mid 0)}{T_{Y|X}(v \mid 0)} + P_{e,B} \frac{T_{Z|\hat{X}}(v \mid 1)}{T_{Y|X}(v \mid 0)} \right].$$
(39)
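A small numerical sketch of this asymptotic evaluation (illustrative; it assumes the channel matrices and ℓ of the earlier sketches, a balanced code with N_B = N_W = N/2, and that the bracket passed to the root finder contains the solution of (30)) solves (30) for the tilt parameter and evaluates the corrected expression (36):

```python
import numpy as np
from scipy.optimize import brentq

def mgf_terms(s, ell_x, P_x):
    """mu(s), mu'(s), mu''(s) of ell(V, x) when V ~ P_x (per-type quantities)."""
    w = P_x * np.exp(s * ell_x)
    g = w.sum()
    m1 = (w * ell_x).sum() / g
    m2 = (w * ell_x ** 2).sum() / g
    return np.log(g), m1, m2 - m1 ** 2

def alpha_asymptotic(lam, ell, P0, N, s_lo=1e-6, s_hi=50.0):
    """Type I error via the corrected asymptotic expression (36).
    ell, P0: 2xK matrices (rows: black/white) of ell(v, x) and P_{Y|X}(v | x)."""
    def gap(s):   # eq. (30) with N_B = N_W = N/2
        _, d0, _ = mgf_terms(s, ell[0], P0[0])
        _, d1, _ = mgf_terms(s, ell[1], P0[1])
        return 0.5 * (d0 + d1) - lam / N
    s0 = brentq(gap, s_lo, s_hi)          # tilt parameter s~_0 (bracket assumed valid)
    mu0, d0, v0 = mgf_terms(s0, ell[0], P0[0])
    mu1, d1, v1 = mgf_terms(s0, ell[1], P0[1])
    mu, dmu, ddmu = mu0 + mu1, d0 + d1, v0 + v1
    return np.exp(0.5 * N * (mu - s0 * dmu)) / (s0 * np.sqrt(np.pi * N * ddmu))
```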

4.3 Numerical computations of α and β via importance sampling

This section addresses the problem of estimating numerically the type I and type II error probabilities, i.e., α and β. The Monte Carlo simulation method [18] gives an accurate solution since these probabilities can be expressed as expectations of a function of a random variable governed by a given probability distribution. We have

$$\alpha = \sum_{v^N \in \Lambda_1} P^N(v^N \mid x^N, H_0)$$
(40)
$$= \sum_{v^N \in \mathcal{V}^N} P^N(v^N \mid x^N, H_0)\, \phi(v^N; \Lambda_1),$$
(41)

where $\phi(v^N; \Lambda_1) = 1$ whenever $v^N \in \Lambda_1$ and zero otherwise. The probability of type I error is then expressed as the expectation of $\phi(v^N; \Lambda_1)$ under the distribution $P^N(v^N \mid x^N, H_0)$. In the same way, the type II error probability β is the expectation of $\phi(v^N; \Lambda_0)$ under the distribution $P^N(v^N \mid x^N, H_1)$. In the sequel, we denote $P^N(v^N \mid x^N, H_0) = P^N_{Y|X}$ and $P^N(v^N \mid x^N, H_1) = P^N_{Z|X}$, and we have

$$\alpha = E_{P^N_{Y|X}}\left[ \phi(V^N; \Lambda_1) \right],$$
(42)
$$\beta = E_{P^N_{Z|X}}\left[ \phi(V^N; \Lambda_0) \right].$$
(43)

Monte Carlo methods make use of the law of large numbers to infer an estimate of α and β by computing numerically an empirical mean of $\phi(v^N; \Lambda_1)$ and $\phi(v^N; \Lambda_0)$, respectively. The computer runs $N_\mathrm{trials}$ trials, each one generating an i.i.d. vector $v^N$ whose samples $v_n$ are drawn from the distributions $P_{Y|X}$ and $P_{Z|X}$, respectively, which gives the following estimates:

$$\hat{\alpha} = \frac{1}{N_\mathrm{trials}} \sum_{i=1}^{N_\mathrm{trials}} \phi\big((v^N)^{(i)}; \Lambda_1\big), \quad (v^N)^{(i)} \text{ generated from } P^N_{Y|X},$$
$$\hat{\beta} = \frac{1}{N_\mathrm{trials}} \sum_{i=1}^{N_\mathrm{trials}} \phi\big((v^N)^{(i)}; \Lambda_0\big), \quad (v^N)^{(i)} \text{ generated from } P^N_{Z|X}.$$

The Monte Carlo estimator is unbiased and converges almost surely ($\hat{\alpha} \to \alpha$ and $\hat{\beta} \to \beta$), with a rate of convergence of $N_\mathrm{trials}^{-1/2}$. Recalling that for a zero-mean, unit-variance Gaussian random variable U, $P(|U| \leq 1.96) = 0.95$, the confidence interval at level 0.95 obtained from each estimation is

$$\left[ \hat{\alpha} - 1.96 \frac{\sigma_\alpha}{\sqrt{N_\mathrm{trials}}},\; \hat{\alpha} + 1.96 \frac{\sigma_\alpha}{\sqrt{N_\mathrm{trials}}} \right],$$
(44)
$$\left[ \hat{\beta} - 1.96 \frac{\sigma_\beta}{\sqrt{N_\mathrm{trials}}},\; \hat{\beta} + 1.96 \frac{\sigma_\beta}{\sqrt{N_\mathrm{trials}}} \right],$$
(45)

where $\sigma_\alpha$ (resp. $\sigma_\beta$) is the standard deviation of the random variable $\phi\big((V^N)^{(i)}; \Lambda_1\big)$ (resp. $\phi\big((V^N)^{(i)}; \Lambda_0\big)$). As $\phi\big((V^N)^{(i)}; \Lambda_1\big)$ and $\phi\big((V^N)^{(i)}; \Lambda_0\big)$ are Bernoulli random variables with parameters α and β, respectively, their variances are easily deduced: $\sigma_\alpha^2 = \alpha - \alpha^2 \approx \alpha$ and $\sigma_\beta^2 = \beta - \beta^2 \approx \beta$. When α and β are very small, accurate estimations are then difficult to achieve with a realistic number of trials. Roughly speaking, the number of trials needed is $N_\mathrm{trials} > 10^3/\alpha$ (or $N_\mathrm{trials} > 10^3/\beta$) when the desired confidence interval at 0.95 is constrained to be about a tenth of the expected value of α or β. In practice, we need to evaluate numerically very small values of α and β to draw the curve β(α) evaluating the performance of a given test statistic, and the required number of trials fails to be realistic. We therefore propose to use the importance sampling method [18], which enables us to generate rare events and thus reduce considerably the required number of trials. Let us consider distributions $Q_{Y|X}$ and $Q_{Z|X}$ over the set $\mathcal{V}$ such that $Q_{Y|X} > 0$ and $Q_{Z|X} > 0$ and rewrite (42) and (43) as

$$E_{P^N_{Y|X}}\left[ \phi(V^N; \Lambda_1) \right] = E_{P^N_{Y|X}}\left[ \phi(V^N; \Lambda_1)\, \frac{Q^N_{Y|X}}{Q^N_{Y|X}} \right], \qquad E_{P^N_{Z|X}}\left[ \phi(V^N; \Lambda_0) \right] = E_{P^N_{Z|X}}\left[ \phi(V^N; \Lambda_0)\, \frac{Q^N_{Z|X}}{Q^N_{Z|X}} \right].$$

One can then alternatively express type I and type II error probabilities by

$$\alpha = E_{Q^N_{Y|X}}\left[ \phi(V^N; \Lambda_1)\, \frac{P^N_{Y|X}}{Q^N_{Y|X}} \right],$$
(46)
$$\beta = E_{Q^N_{Z|X}}\left[ \phi(V^N; \Lambda_0)\, \frac{P^N_{Z|X}}{Q^N_{Z|X}} \right].$$
(47)

Monte Carlo simulation with importance sampling method gives the following two estimates:

$$\hat{\alpha} = \frac{1}{N_\mathrm{trials}} \sum_{i=1}^{N_\mathrm{trials}} \phi\big((v^N)^{(i)}; \Lambda_1\big)\, \frac{P^N_{Y|X}\big((v^N)^{(i)} \mid x^N\big)}{Q^N_{Y|X}\big((v^N)^{(i)} \mid x^N\big)}, \quad (v^N)^{(i)} \text{ generated from } Q^N_{Y|X},$$
(48)
$$\hat{\beta} = \frac{1}{N_\mathrm{trials}} \sum_{i=1}^{N_\mathrm{trials}} \phi\big((v^N)^{(i)}; \Lambda_0\big)\, \frac{P^N_{Z|X}\big((v^N)^{(i)} \mid x^N\big)}{Q^N_{Z|X}\big((v^N)^{(i)} \mid x^N\big)}, \quad (v^N)^{(i)} \text{ generated from } Q^N_{Z|X}.$$
(49)

The problem of importance sampling is to choose an adequate distribution $Q_{V|X}$ such that the variances of the estimated probabilities in (48) and (49) are very small. The number of trials will then be considerably reduced, and accurate estimations of very low values of α and β become possible. Let

$$Q_{Y|X}(s, v \mid x) = \exp\left[ -\mu_x(s; H_0) + s\,\ell(v, x) \right] P_{Y|X}(v \mid x)$$

and

$$Q_{Z|X}(s, v \mid x) = \exp\left[ -\mu_x(s; H_1) + s\,\ell(v, x) \right] P_{Z|X}(v \mid x)$$

be tilted distributions over the set $\mathcal{V}$, and $\mu_x(s; H_j)$ the semi-invariant moment generating function of $\ell(V, x)$ under hypothesis $H_j$.

Proposition 1

The mean of the log-likelihood function $\ell(V, x)$ governed by the tilted distribution $Q_{Y|X}(s, v \mid x)$ is $\mu'_x(s; H_0)$.

Proof

We have indeed

$$\sum_{v \in \mathcal{V}} \ell(v, x)\, Q_{Y|X}(s, v \mid x) = \sum_{v \in \mathcal{V}} \ell(v, x) \exp\left[ -\mu_x(s; H_0) + s\,\ell(v, x) \right] P_{Y|X}(v \mid x) = \frac{\sum_{v \in \mathcal{V}} \ell(v, x)\, e^{s\ell(v, x)}\, P_{Y|X}(v \mid x)}{\exp\left[ \mu_x(s; H_0) \right]};$$

since $\mu_x(s; H_0) = \ln g_x(s; H_0)$, the denominator of the previous expression is simply $g_x(s; H_0)$:

$$\sum_{v \in \mathcal{V}} \ell(v, x)\, Q_{Y|X}(s, v \mid x) = \frac{\sum_{v \in \mathcal{V}} \ell(v, x)\, e^{s\ell(v, x)}\, P_{Y|X}(v \mid x)}{\sum_{v \in \mathcal{V}} e^{s\ell(v, x)}\, P_{Y|X}(v \mid x)} = \frac{\mathrm{d}g_x(s; H_0)/\mathrm{d}s}{g_x(s; H_0)}.$$

Finally, we have

$$\sum_{v \in \mathcal{V}} \ell(v, x)\, Q_{Y|X}(s, v \mid x) = \mu'_x(s; H_0).$$
(50)

The same development yields

$$\sum_{v \in \mathcal{V}} \ell(v, x)\, Q_{Z|X}(s, v \mid x) = \mu'_x(s; H_1).$$
(51)

When choosing $s = \tilde{s}_0$ for $Q_{Y|X}(s, v \mid x)$ and $s = \tilde{s}_1$ for $Q_{Z|X}(s, v \mid x)$, the mean of the log-likelihood function $\ell(v, x)$ governed by these tilted distributions will be equal to the threshold λ of the test, as expressed in (30).

Proposition 2

The variances of the estimators in (48) and (49) go to zero as the number of dots N grows sufficiently large.

Proof.

To show this, let $o^N$ be the observed samples coming from the main channel, i.e., drawn from the tilted distribution $Q^N_{Y|X}(\tilde{s}_0, v^N \mid x^N)$. We have

$$Q^N_{Y|X}(\tilde{s}_0, o^N \mid x^N) = \exp\left[ -\sum_{i \in \mathcal{N}_B} \mu_{i,0}(\tilde{s}_0; H_0) - \sum_{i \in \mathcal{N}_W} \mu_{i,1}(\tilde{s}_0; H_0) + \tilde{s}_0 \sum_{i \in \mathcal{N}_B} \ell(o_i, 0) + \tilde{s}_0 \sum_{i \in \mathcal{N}_W} \ell(o_i, 1) \right] P^N_{Y|X}(o^N \mid x^N).$$

Recalling that $\mu(\tilde{s}_j; H_j) = \mu_0(\tilde{s}_j; H_j) + \mu_1(\tilde{s}_j; H_j)$, for $N_B \approx N_W \approx N/2$, we have

$$Q^N_{Y|X}(\tilde{s}_0, o^N \mid x^N) = \exp\left[ -\frac{N}{2} \mu(\tilde{s}_0; H_0) + \tilde{s}_0 \left( \sum_{i \in \mathcal{N}_B} \ell(o_i, 0) + \sum_{i \in \mathcal{N}_W} \ell(o_i, 1) \right) \right] P^N_{Y|X}(o^N \mid x^N).$$

By the law of large numbers, each sum of N/2 log-likelihood functions of the observed samples $(o_i \mid x)$ governed by the tilted distribution converges in probability to its mean value as N grows sufficiently large:

$$\sum_{i \in \mathcal{N}_B} \ell(o_i, 0) \xrightarrow{P} \frac{N}{2} \sum_{v \in \mathcal{V}} \ell(v, 0)\, Q_{Y|X}(\tilde{s}_0, v \mid 0) = \frac{N}{2} \mu'_0(\tilde{s}_0; H_0), \qquad \sum_{i \in \mathcal{N}_W} \ell(o_i, 1) \xrightarrow{P} \frac{N}{2} \sum_{v \in \mathcal{V}} \ell(v, 1)\, Q_{Y|X}(\tilde{s}_0, v \mid 1) = \frac{N}{2} \mu'_1(\tilde{s}_0; H_0).$$

Recalling that $\mu'(\tilde{s}_j; H_j) = \mu'_0(\tilde{s}_j; H_j) + \mu'_1(\tilde{s}_j; H_j)$, and from Proposition 1, we have

$$\sum_{i \in \mathcal{N}_B} \ell(o_i, 0) + \sum_{i \in \mathcal{N}_W} \ell(o_i, 1) \xrightarrow{P} \frac{N}{2} \mu'(\tilde{s}_0; H_0).$$

Equivalently, when the observed samples come from the opponent channel, i.e., are drawn from the tilted distribution $Q^N_{Z|X}(\tilde{s}_1, v^N \mid x^N)$, we have

$$\sum_{i \in \mathcal{N}_B} \ell(o_i, 0) + \sum_{i \in \mathcal{N}_W} \ell(o_i, 1) \xrightarrow{P} \frac{N}{2} \mu'(\tilde{s}_1; H_1).$$

Finally, we have

$$Q^N_{Y|X}(\tilde{s}_0, o^N \mid x^N) \xrightarrow{P} \exp\left[ -\frac{N}{2} \left( \mu(\tilde{s}_0; H_0) - \tilde{s}_0\, \mu'(\tilde{s}_0; H_0) \right) \right] P^N_{Y|X}(o^N \mid x^N)$$
(52)

and

$$Q^N_{Z|X}(\tilde{s}_1, o^N \mid x^N) \xrightarrow{P} \exp\left[ -\frac{N}{2} \left( \mu(\tilde{s}_1; H_1) - \tilde{s}_1\, \mu'(\tilde{s}_1; H_1) \right) \right] P^N_{Z|X}(o^N \mid x^N).$$
(53)

The variance of $\phi(V^N; \Lambda_1)\, \frac{P^N_{Y|X}}{Q^N_{Y|X}}$ when $V^N$ is governed by the tilted distribution $Q^N_{Y|X}(\tilde{s}_0, v^N \mid x^N)$ is then (the function $\phi(\cdot)$ being 0 or 1)

$$\mathrm{var}_{Q^N_{Y|X}}\left[ \phi(V^N; \Lambda_1)\, \frac{P^N_{Y|X}}{Q^N_{Y|X}} \right] = E_{Q^N_{Y|X}}\left[ \phi^2(V^N; \Lambda_1) \left( \frac{P^N_{Y|X}}{Q^N_{Y|X}} \right)^{\!2} \right] - \alpha^2 = E_{P^N_{Y|X}}\left[ \phi(V^N; \Lambda_1)\, \frac{P^N_{Y|X}}{Q^N_{Y|X}} \right] - \alpha^2 \xrightarrow{P} E_{P^N_{Y|X}}\left[ \phi(V^N; \Lambda_1)\, \frac{1}{\exp\left[ -\frac{N}{2} \left( \mu(\tilde{s}_0; H_0) - \tilde{s}_0\, \mu'(\tilde{s}_0; H_0) \right) \right]} \right] - \alpha^2.$$

The denominator in the expectation, i.e., $\exp\left[ -\frac{N}{2} \left( \mu(\tilde{s}_0; H_0) - \tilde{s}_0\, \mu'(\tilde{s}_0; H_0) \right) \right]$, is simply the inverse of the Cramér-Chernoff bound proposed in (34). We then have

$$\mathrm{var}_{Q^N_{Y|X}}\left[ \phi(V^N; \Lambda_1)\, \frac{P^N_{Y|X}}{Q^N_{Y|X}} \right] \xrightarrow{P} \alpha\, E_{P^N_{Y|X}}\left[ \phi(V^N; \Lambda_1) \right] - \alpha^2.$$

Finally, since $E_{P^N_{Y|X}}\left[ \phi(V^N; \Lambda_1) \right] = \alpha$ by (42), the variance goes to zero as N grows large:

$$\mathrm{var}_{Q^N_{Y|X}}\left[ \phi(V^N; \Lambda_1)\, \frac{P^N_{Y|X}}{Q^N_{Y|X}} \right] \xrightarrow{P} 0.$$

The same development gives

$$\mathrm{var}_{Q^N_{Z|X}}\left[ \phi(V^N; \Lambda_0)\, \frac{P^N_{Z|X}}{Q^N_{Z|X}} \right] \xrightarrow{P} 0.$$
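As a practical companion to (48), the following sketch (illustrative assumptions: the channel matrices and ℓ come from the earlier sketches, and $\tilde{s}_0$ solves (30); this is not the authors' code) estimates α by drawing scanned codes from the tilted distribution $Q_{Y|X}(\tilde{s}_0, \cdot)$ and reweighting:

```python
import numpy as np

def tilted(P, ell, s):
    """Tilted distribution Q(v) proportional to exp(s * ell(v)) * P(v)."""
    q = P * np.exp(s * ell)
    return q / q.sum()

def estimate_alpha_is(P_YX, P_ZX, x, lam, s0, n_trials=10_000, seed=0):
    """Importance-sampling estimate of alpha, eq. (48).
    P_YX, P_ZX: 2xK channel matrices; x: original bits; lam: threshold; s0: tilt."""
    rng = np.random.default_rng(seed)
    K = P_YX.shape[1]
    ell = np.log(P_ZX + 1e-300) - np.log(P_YX + 1e-300)   # per-symbol log-likelihood ratio
    Q = np.vstack([tilted(P_YX[b], ell[b], s0) for b in (0, 1)])
    est = 0.0
    for _ in range(n_trials):
        # draw one scanned code from the tilted main-channel distribution Q_{Y|X}
        v = np.where(x == 0,
                     rng.choice(K, size=x.size, p=Q[0]),
                     rng.choice(K, size=x.size, p=Q[1]))
        if ell[x, v].sum() >= lam:                          # phi(v; Lambda_1)
            log_w = np.log(P_YX[x, v] + 1e-300).sum() - np.log(Q[x, v] + 1e-300).sum()
            est += np.exp(log_w)                            # importance weight P/Q
    return est / n_trials
```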

4.4 Practical performance analysis

Without loss of generality, we use in our analysis a generalized Gaussian distribution to model the physical device, i.e., the association of a printer with a scanner, used by the legitimate source, $T_{Y|X}(v \mid x)$, and by the counterfeiter, $T_{Z|\hat{X}}(v \mid \hat{x})$:

$$p(v \mid x) = \frac{b}{2a\,\Gamma(1/b)}\, e^{-\left( |v - m(x)|/a \right)^b},$$
(54)

where m(x) is the mean and the parameter a can be computed from the variance σ2=var[V]:

$$a = \sigma \sqrt{\Gamma(1/b)/\Gamma(3/b)}.$$
(55)

The parameter b is used to control the sparsity of the distribution: for example, when b = 1 the distribution is Laplacian, when b = 2 the distribution is Gaussian, and when b → +∞ the distribution is uniform. The resulting distribution is first discretized and then truncated to provide values within [0, …, 255] in order to model a scanning process. Each channel is parametrized in this case by four parameters, two per type of dot: $m_b = m(0)$ and $\sigma_b$ for black dots, and $m_w = m(1)$ and $\sigma_w$ for white dots. Note that other print and scan models that take into account the gamma transfer function or additive noise with input-dependent variance can be found in [19], but the general methodology of this paper does not depend on the model and can still be applied.
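A short sketch of this parameterization (illustrative; the discretize-then-renormalize step is an assumption consistent with (54)-(55)) builds the channel rows used in the experiments:

```python
import numpy as np
from scipy.special import gamma

def gen_gaussian_channel(m, sigma, b, K=256):
    """Discretized, truncated generalized Gaussian channel T_{V|X}(. | x) over 0..K-1."""
    v = np.arange(K)
    a = sigma * np.sqrt(gamma(1.0 / b) / gamma(3.0 / b))   # scale parameter, eq. (55)
    p = np.exp(-(np.abs(v - m) / a) ** b)                  # shape of eq. (54), up to a constant
    return p / p.sum()                                     # discretize and renormalize

# Example: b = 1 (Laplacian), b = 2 (Gaussian), b = 6 (close to uniform)
T_black = gen_gaussian_channel(m=50,  sigma=40, b=2)
T_white = gen_gaussian_channel(m=150, sigma=40, b=2)
```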

Figure 3 illustrates the different effects of the generalized Gaussian distributions on the main and the opponent channels with the same mean and variance, for b = 1 (Laplacian distribution), b = 2 (Gaussian distribution), and b = 6, i.e., close to a uniform distribution.

Figure 3. Example of a 20 × 20 code which is printed and scanned by an opponent. Main and opponent channels are identical, m_b = 50, m_w = 150, σ_b = 40, and σ_w = 40.

In order to assess the accuracy of the computations of α and β using either the Gaussian approximation given by (18) and (19), the asymptotic expression given by (36) and (37), or the Monte Carlo simulations using importance sampling given by (48) and (49), we derive ROC curves for generalized Gaussian distributions and b={1,2,6}.

Figure 4 illustrates the gap between the estimation of α and β using the Gaussian approximation and using the asymptotic expression or the Monte Carlo simulations. The Monte Carlo simulations confirm the fact that the derived Cramér-Chernoff bounds are tight, and the differences with the results obtained from the Gaussian approximation are very important, especially for close-to-uniform channels. We can also notice that for the same channel power, the authentication performances are better for b = 6 than for b = 2 and b = 1.

Figure 4. Comparison between the Gaussian approximation, the asymptotic expression, and Monte Carlo simulations for b = 1, b = 2, and b = 6. Main and opponent channels are identical, m_b = 50, m_w = 150, σ_b = 40, and σ_w = 40.

5 Optimal configurations for authentication

The goal of this section is to derive configurations that are optimal regarding authentication, i.e., to derive configurations that for a given α minimize β.

5.1 Optimal configurations by modification of the printing channel

5.1.1 Problem setting

This authentication problem can be seen as a game where the main goal of the receiver, for a given false alarm probability α, is to find a channel that minimizes the probability of missed detection β. Practically, this means that the channel can be chosen by using a given quality of paper, a different ink, and/or by adopting an appropriate resolution. For example, if the legitimate source wants to decrease the noise variance, he can choose to use oversampling to replicate the dots; on the contrary, if the legitimate source wants to increase the noise variance, he can use a paper of lesser quality. It is important to recall that because the opponent has to print a binary version of his observation, and because a printing device at this very high resolution can only print binary images, the opponent will in any case print a code containing decoding errors after estimating $\hat{X}$.

We analyze two scenarios described below:

 The legitimate source and the opponent have identical printing devices; practically, this means that they use exactly the same printing setup. In this case, the legitimate source will look for the channel $\mathcal{C}$ such that, for a given α, the probability of missed detection is

$$\beta^\star = \min_{\mathcal{C}} \beta(\alpha).$$
(56)

In this case, the opponent is passive and has no strategy but duplicating the graphical code.

 The opponent can modify his printing channel $\mathcal{C}_o$ (here, we assume that he can change the variance of his noise); practically, this means that he can modify one or several parameters of the printing setup without being detected. The opponent then tries to maximize the probability of missed detection by choosing an adequate printing channel, and the legitimate source will adopt the printing channel $\mathcal{C}_l$ which minimizes it. We end up with what is called a min-max game in game theory, where the optimal β is the solution of

$$\beta^\star = \min_{\mathcal{C}_l} \max_{\mathcal{C}_o} \beta(\alpha).$$
(57)

 In this case, the opponent is active since he tries to adapt his strategy in order to degrade the authentication performance.

Because the expression of β(α) is not simple and has to be computed using the asymptotic expressions (31) and (32), we cannot solve this problem analytically and have to resort to numerical computation instead.

We conduct this analysis for the generalized Gaussian model, where we assume that the parameters $m_b$ and $m_w$ are constant for the main and the opponent channels (which implies that the scanning process has the same calibration for the two types of images). The main channel and opponent channel variances are denoted $\sigma_m^2$ and $\sigma_o^2$, respectively, and are assumed identical for black and white dots.
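The optimization itself can be carried out by a simple grid search over the channel variances; the sketch below is illustrative (the function beta_given_alpha is a placeholder assumed to evaluate β for the prescribed α via (36)-(37)):

```python
import numpy as np

def solve_minmax(beta_given_alpha, sigmas_m, sigmas_o):
    """Numerical solution of the min-max game (57) by exhaustive search.
    beta_given_alpha(sigma_m, sigma_o) -> beta for a fixed alpha (assumed given)."""
    worst = [max(beta_given_alpha(sm, so) for so in sigmas_o) for sm in sigmas_m]
    i = int(np.argmin(worst))
    return sigmas_m[i], worst[i]     # best legitimate variance and the resulting beta*

# Example grids (illustrative):
# sigma_star, beta_star = solve_minmax(beta_given_alpha,
#                                      np.linspace(20, 80, 13), np.linspace(20, 80, 13))
```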

5.1.2 Passive opponent

Here, the opponent has to undergo a channel identical to the main channel; the only parameter of the optimization problem (56) is consequently $\sigma_m$. Figure 5 presents the evolution of β w.r.t. $\sigma_m$ for α = 10^{-6}, $m_b$ = 50, and $m_w$ = 150. For each channel model, we can find an optimal configuration; this configuration offers a smaller probability of error for b = 6 than for b = 2 or b = 1. It is not surprising to notice that in each case, β is large whenever $\sigma_m$ is very small (i.e., when the print and scan noise is very small, hence the estimation of the original code is easy) or very large (i.e., when the print and scan noise is so important that the original and the forgery become equally noisy).

Figure 5. Evolution of β w.r.t. σ_m (α = 10^{-6}). Main and opponent channels are identical, m_b = 50 and m_w = 150.

5.1.3 Active opponent

In this setup, the opponent can use a channel with a variance $\sigma_o^2$ different from that of the main channel $\sigma_m^2$ and tries to solve the game defined in (57). Figure 6 shows the evolution of β w.r.t. $\sigma_o$ for different $\sigma_m$. We can see that in each case it is in the opponent's interest to optimize his channel. Note that even if we assume that the opponent's print and scan channel is perfect ($\hat{x}^N = z^N$), because the input of the printer has to be binary and because the opponent will make decoding errors when estimating the original code, the copied printed code will necessarily be different from the original printed code (see Figure 1), which implies a perfect discrimination between the two hypotheses.

Figure 6. Evolution of the probability of non-detection β w.r.t. σ_o for different σ_m. The plots from left to right correspond to σ_m varying from 20 to 80 with an increment of 10. m_b = 50, m_w = 150, and α = 10^{-6}.

Figure 7 shows the evolution of the best opponent strategy $\max_{\sigma_o} \beta$ w.r.t. $\sigma_m$. By comparing it with Figure 5, we can see that the opponent's probability of non-detection can be multiplied by one or several orders of magnitude (×10^7 for b = 1, ×10^5 for b = 2, and ×10 for b = 6).

Figure 7. Evolution of the best opponent strategy max_{σ_o} β w.r.t. σ_m. m_b = 50, m_w = 150, and α = 10^{-6}.

6 Impact of the estimation of the print and scan channel

The previous scenarios assume that the receiver has full knowledge of the print and scan channel. Here, we assume that the receiver also has to estimate the opponent channel before performing authentication. From the estimated parameters, the receiver will compute a threshold and a log-likelihood test. Depending on the number of observations $N_o$, the estimated model and the associated test will decrease the performance of the authentication system.

We consider that the opponent uses a different printing device, unknown to the legitimate party. According to (6) and (7), the parameters to be estimated are $P_{e,W}$, $P_{e,B}$, $m_b$, $m_w$, and $\sigma = \sigma_b = \sigma_w$. We use the classical expectation-maximization (EM) algorithm combined with Newton's method for the maximization step, as these distributions are discrete with finite support over the gray-level range.

Figure 8 shows the authentication performance using a Gaussian model (b = 2) estimated from $N_o$ = 2,000 observed symbols. We can notice that the performance is very close to the one obtained with exact knowledge of the model. This analysis also shows that if the receiver has some assumptions on the opponent channel and enough observations, he should perform model estimation instead of using the thresholding strategy. Figure 9 shows the importance of model estimation by comparing it to a blind authentication test in which the receiver assumes that the opponent channel and his own channel are identical.
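For illustration, a simplified EM sketch is given below (an assumption-laden toy version, not the authors' implementation: it treats the forged gray levels of originally white dots as a two-component Gaussian mixture and ignores the discretization and the Newton-based M-step used in the paper):

```python
import numpy as np
from scipy.stats import norm

def em_opponent_channel(o_white, n_iter=50):
    """EM estimation of (P_e,W, m_b, m_w, sigma) from forged dots that are white in x.
    Model: mixture (1 - P_e,W) * N(m_w, sigma) + P_e,W * N(m_b, sigma), cf. eq. (7)."""
    m_b, m_w, sigma, p_e = 60.0, 140.0, 30.0, 0.1          # rough initial guesses (assumed)
    for _ in range(n_iter):
        # E-step: responsibility that a dot was decoded erroneously (black component)
        lk_b = p_e * norm.pdf(o_white, m_b, sigma)
        lk_w = (1 - p_e) * norm.pdf(o_white, m_w, sigma)
        r = lk_b / (lk_b + lk_w + 1e-300)
        # M-step: closed-form updates for the Gaussian mixture
        p_e = r.mean()
        m_b = (r * o_white).sum() / (r.sum() + 1e-300)
        m_w = ((1 - r) * o_white).sum() / ((1 - r).sum() + 1e-300)
        var = (r * (o_white - m_b) ** 2 + (1 - r) * (o_white - m_w) ** 2).mean()
        sigma = np.sqrt(var)
    return p_e, m_b, m_w, sigma
```

A symmetric pass over the black dots provides $P_{e,B}$, and the estimated parameters are then plugged into the test (17).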

Figure 8. Authentication performance using model estimation with the EM algorithm (N = 2,000, N_o = 2,000, σ = 52, m_b = 50, and m_w = 150). The asymptotic expression is used to derive the error probabilities.

Figure 9. Importance of model estimation when compared to a blind authentication test. ROC curves comparing different degrees of knowledge about the opponent channel while the true opponent printing process model has parameters (σ = 40, m_b = 40, and m_w = 160). 'True model': the receiver knows this model exactly; 'Blind model': the receiver arbitrarily uses his own printing process to model it; 'Est. model': the receiver estimates the opponent channel using N_o = 2,000 observations.

7 Conclusions

This paper brings numerous conclusions on authentication using binary codes corrupted by stochastic manufacturing noise:

 The nature of the receiver's input is of utmost importance, and thresholding is a bad strategy compared to using an accurate gray-level version of the genuine or forged code, except if the system requires it, for example due to computational constraints.

 The Gaussian approximation used to compute the ROC of the authentication system is no longer accurate for very low type I or type II errors. The Cramér-Chernoff bound or Monte Carlo simulations using importance sampling can be used instead to achieve accurate values of these probabilities. The proposed methodology is not tied to the nature of the noise and can be applied to different memoryless channels that may model the printing process more realistically.

 It is in the opponent's interest to adapt his channel in order to decrease the authentication performance of the system; this possibility is analyzed by solving the min-max game of (57).

 If the opponent's print and scan channel remains unknown to the receiver, the receiver can use estimation techniques such as the EM algorithm in order to estimate the channel.

Our future work will consist in evaluating the impact of the noise model on the authentication performance; this first analysis suggests that sparse distributions are less favorable for authentication than dense distributions, but this has to be confirmed by a deeper study.

Endnote

^a One can show that $e^{-s\lambda} g_L(s; H_j)$ is a convex function of s.

Appendix

Information theoretic comparison between hypothesis testing with and without thresholding

In this appendix, we aim at establishing an inequality between the averages of the two log-likelihood tests (14) and (15). The greater the discrimination between the two distributions involved in the log-likelihood test, the better the authentication performance. The expected value of the log-likelihood test (12) with respect to either of the two distributions involved in the ratio is the Kullback-Leibler divergence, or discrimination, defined as

$$L\big(P^N_{Y|X}; P^N_{Z|X}\big) = \sum_{v^N \in \mathcal{V}^N} P^N_{Y|X}(v^N \mid x^N) \log \frac{P^N_{Y|X}(v^N \mid x^N)}{P^N_{Z|X}(v^N \mid x^N)},$$
(58)

the base of the logarithm being arbitrary. In the remainder of this paper, we settle on base 2.

In ([14], p. 114), the author provides an interesting inequality relating the discrimination to type I and type II errors in hypothesis testing. This relation is stated by the following lemma:

Lemma 1

(See the former reference for the proof.) For any partition $(\Lambda_0, \Lambda_1)$ of the observation space $\mathcal{V}^N$, the probabilities of type I and type II errors satisfy

$$L\big(P^N_{Y|X}; P^N_{Z|X}\big) \geq \alpha \log \frac{\alpha}{1-\beta} + (1-\alpha) \log \frac{1-\alpha}{\beta}.$$
(59)

In our authentication model, the likelihood test is performed conditionally on available side information involving two types of data x: one type for the black dots and one for the white dots of the original code. Accordingly, we now express the discrimination quantity for the two proposed strategies in order to establish the desired inequality:

$$L\big(P^N(\tilde{X}^N \mid x^N, H_0); P^N(\tilde{X}^N \mid x^N, H_1)\big) = \sum_{\tilde{x}_1 \cdots \tilde{x}_N} P^N\big(\tilde{x}^N \mid x^N, H_0\big) \log \frac{P^N\big(\tilde{x}^N \mid x^N, H_0\big)}{P^N\big(\tilde{x}^N \mid x^N, H_1\big)},$$
(60)

and

$$L\big(P^N(O^N \mid x^N, H_0); P^N(O^N \mid x^N, H_1)\big) = \sum_{v_1 \cdots v_N} P^N_{Y|X}(v^N \mid x^N) \log \frac{P^N_{Y|X}(v^N \mid x^N)}{P^N_{Z|X}(v^N \mid x^N)}.$$
(61)

For the sake of simplicity, we develop proofs and details for the second strategy only and give results for the thresholding case, for which all developments are analogous. Using the additivity theorem ([14], Theorem 4.3.7) for independent sequences and recalling that the distribution of each component of the sequence $(O^N \mid x^N)$ is the same for each type of data x, the discrimination quantity becomes

$$L\big(P^N(O^N \mid x^N, H_0); P^N(O^N \mid x^N, H_1)\big) = N_W \sum_{v \in \mathcal{V}} P_{Y|X}(v \mid 1) \log \frac{P_{Y|X}(v \mid 1)}{P_{Z|X}(v \mid 1)} + N_B \sum_{v \in \mathcal{V}} P_{Y|X}(v \mid 0) \log \frac{P_{Y|X}(v \mid 0)}{P_{Z|X}(v \mid 0)}.$$
(62)

Given a composition (or relative frequency) for X, $P_X = \{N_W/N, N_B/N\}$, we have

$$L\big(P^N(O^N \mid X^N, H_0); P^N(O^N \mid X^N, H_1)\big) = N \times L\big(P_{Y|X}; P_{Z|X} \mid P_X\big),$$
(63)

where $L(P_{Y|X}; P_{Z|X} \mid P_X)$ is the average discrimination. Similarly, we obtain for the first strategy the relation

$$L\big(P^N(\tilde{X}^N \mid X^N, H_0); P^N(\tilde{X}^N \mid X^N, H_1)\big) = N \times L\big(P_{e,x}; \tilde{P}_{e,x} \mid P_X\big).$$
(64)

Corollary 1

Given an i.i.d. outcome $X^N = x^N$ with composition, or type, $P_X$, for any partition $(\Lambda_0, \Lambda_1)$ of the observation space, the probabilities of type I and type II errors satisfy

$$L\big(P_{Y|X}; P_{Z|X} \mid P_X\big) \geq \frac{1}{N} \left[ \alpha \log \frac{\alpha}{1-\beta} + (1-\alpha) \log \frac{1-\alpha}{\beta} \right].$$
(65)

Proof.

The proof is straightforward by combining (59) and (63).

Corollary 2

Consider a partition $(\Lambda_0, \Lambda_1)$ of the observation space with probability of type I error α; then, the probability of type II error is lower bounded by

$$\beta \geq 2^{-\left[ N L(P_{Y|X}; P_{Z|X} \mid P_X) + h(\alpha) \right] / (1-\alpha)}.$$
(66)

Proof.

From the previous corollary, we have

$$(1-\alpha) \log \beta \geq -N L\big(P_{Y|X}; P_{Z|X} \mid P_X\big) + \alpha \log \alpha + (1-\alpha) \log(1-\alpha) - \alpha \log(1-\beta).$$

Setting h(α)=−α logα−(1−α) log(1−α), which is the binary entropy (≤ 1), and observing that α log(1−β)≤0, we can write the inequality

$$(1-\alpha) \log \beta \geq -\left[ N L\big(P_{Y|X}; P_{Z|X} \mid P_X\big) + h(\alpha) \right].$$
(67)

It is desired that this lower bound be very small, which is possible with large values of the quantity $L(P_{Y|X}; P_{Z|X} \mid P_X)$.

Theorem 1

For the two strategies of the receiver, we have

$$L\big(P_{Y|X}; P_{Z|X} \mid P_X\big) \geq L\big(P_{e,x}; \tilde{P}_{e,x} \mid P_X\big).$$

Proof.

$$\begin{aligned}
L\big(P_{Y|X}; P_{Z|X} \mid P_X\big) &= \sum_{x=0,1} P_X(x) \sum_{v \in \mathcal{V}} P_{Y|X}(v \mid x) \log \frac{P_{Y|X}(v \mid x)}{P_{Z|X}(v \mid x)} \\
&= \sum_{x=0,1} P_X(x) \sum_{v \in \mathcal{D}_W} P_{Y|X}(v \mid x) \log \frac{P_{Y|X}(v \mid x)}{P_{Z|X}(v \mid x)} + \sum_{x=0,1} P_X(x) \sum_{v \in \mathcal{D}_W^c} P_{Y|X}(v \mid x) \log \frac{P_{Y|X}(v \mid x)}{P_{Z|X}(v \mid x)} \\
&\overset{(a)}{\geq} \sum_{x=0,1} P_X(x) \left( \sum_{v \in \mathcal{D}_W} P_{Y|X}(v \mid x) \right) \log \frac{\sum_{v \in \mathcal{D}_W} P_{Y|X}(v \mid x)}{\sum_{v \in \mathcal{D}_W} P_{Z|X}(v \mid x)} + \sum_{x=0,1} P_X(x) \left( \sum_{v \in \mathcal{D}_W^c} P_{Y|X}(v \mid x) \right) \log \frac{\sum_{v \in \mathcal{D}_W^c} P_{Y|X}(v \mid x)}{\sum_{v \in \mathcal{D}_W^c} P_{Z|X}(v \mid x)} \\
&\overset{(b)}{=} \sum_{x=0,1} P_X(x) \left[ P_{e,x} \log \frac{P_{e,x}}{\tilde{P}_{e,x}} + (1-P_{e,x}) \log \frac{1-P_{e,x}}{1-\tilde{P}_{e,x}} \right] \\
&= \sum_{x=0,1} P_X(x)\, L\big(P_{e,x}, \tilde{P}_{e,x} \mid x\big) = L\big(P_{e,x}, \tilde{P}_{e,x} \mid P_X\big).
\end{aligned}$$

Here, (a) follows from the log-sum inequality, $\sum_{i=1}^{N} a_i \log \frac{a_i}{b_i} \geq \left( \sum_{i=1}^{N} a_i \right) \log \frac{\sum_{i=1}^{N} a_i}{\sum_{i=1}^{N} b_i}$, and (b) follows because, for each type x, the corresponding decision-error region carries the error mass: $P_{e,B} = \sum_{v \in \mathcal{D}_W} P_{Y|X}(v \mid 0)$, $\tilde{P}_{e,B} = \sum_{v \in \mathcal{D}_W} P_{Z|X}(v \mid 0)$, $P_{e,W} = \sum_{v \in \mathcal{D}_W^c} P_{Y|X}(v \mid 1)$, and $\tilde{P}_{e,W} = \sum_{v \in \mathcal{D}_W^c} P_{Z|X}(v \mid 1)$.

Figure 10 plots a comparison between the Kullback-Leibler divergences with and without thresholding w.r.t. the standard deviation of the Gaussian model of the physical devices; we can see that the divergence is smaller with thresholding than without.

Figure 10. Comparison between the Kullback-Leibler divergences. Kullback-Leibler divergence for the two different strategies w.r.t. the standard deviation of the Gaussian model of the physical devices.
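To make Theorem 1 tangible, the sketch below (illustrative; it reuses the discretized channels of the earlier sketches and assumes both parties use the same physical device) computes the two average discriminations and lets one reproduce the trend of Figure 10 by sweeping the standard deviation:

```python
import numpy as np

def kl(p, q, eps=1e-300):
    """Kullback-Leibler divergence (base 2) between two discrete distributions."""
    return float(np.sum(p * (np.log2(p + eps) - np.log2(q + eps))))

def discriminations(T_main, T_opp, P_X=(0.5, 0.5)):
    """Average discrimination with and without thresholding.
    T_main, T_opp: 2xK print-and-scan channels of the source and of the forger."""
    D_W = T_main[1] > T_main[0]                               # ML region for white, eq. (1)
    P_eB, P_eW = T_main[0, D_W].sum(), T_main[1, ~D_W].sum()  # eqs. (2)-(3)
    P_ZX = np.vstack([(1 - P_eB) * T_opp[0] + P_eB * T_opp[1],    # eq. (6)
                      (1 - P_eW) * T_opp[1] + P_eW * T_opp[0]])   # eq. (7)
    # Without thresholding: average KL between the gray-level channels
    d_gray = P_X[0] * kl(T_main[0], P_ZX[0]) + P_X[1] * kl(T_main[1], P_ZX[1])
    # With thresholding: KL between the induced binary error channels, eqs. (9)-(10)
    Pt_eB, Pt_eW = P_ZX[0, D_W].sum(), P_ZX[1, ~D_W].sum()
    kl_bern = lambda p, q: kl(np.array([p, 1 - p]), np.array([q, 1 - q]))
    d_bin = P_X[0] * kl_bern(P_eB, Pt_eB) + P_X[1] * kl_bern(P_eW, Pt_eW)
    return d_gray, d_bin          # Theorem 1 guarantees d_gray >= d_bin
```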

References

  1. WCO: Global congress addresses international counterfeits threat - immediate action required to combat threat to finance/health. http://www.wcoomd.org/en/media/newsroom/2005/november. Accessed 14 Nov 2005

  2. WCO: Counterfeiting and piracy endangers global economic recovery, say global congress leaders. http://www.wipo.int/pressroom/en/articles/2009/article_0054.html. Accessed 3 Dec 2009

  3. Haist T, Tiziani HJ: Optical detection of random features for high security applications. Optic. Comm. 1998, 147(1-3):173-179.

  4. Suh GE, Devadas S: Physical unclonable functions for device authentication and secret key generation. In Proceedings of the 44th Annual Design Automation Conference. ACM, San Diego; 2007:9-14.

  5. Gaubatz MD, Simske SJ, Gibson S: Distortion metrics for predicting authentication functionality of printed security deterrents. In 16th IEEE International Conference on Image Processing (ICIP), 2009, Cairo. IEEE, Piscataway; 2009:1489-1492.

  6. Shariati SS, Standaert FX, Jacques L, Macq B, Salhi MA, Antoine P: Random profiles of laser marks. In Proceedings of the 31st WIC Symposium on Information Theory in the Benelux. Rotterdam; 11-12 May 2010.

  7. Picard J, Zhao J: Improved techniques for detecting, analyzing, and using visible authentication patterns. WO Patent WO/2005/067,586, 28 July 2005.

  8. Picard J, Vielhauer C, Thorwirth N: Towards fraud-proof ID documents using multiple data hiding technologies and biometrics. In SPIE Proceedings - Electronic Imaging, Security and Watermarking of Multimedia Contents VI. San Jose; 2004:123-234.

  9. Baras C, Cayre F: 2D bar-codes for authentication: a security approach. In Proceedings of EUSIPCO 2012. Bucharest; 27 Sept 2012.

  10. Diong M, Bas P, Pelle C, Sawaya W: Document authentication using 2D codes: maximizing the decoding performance using statistical inference. In Communications and Multimedia Security. Springer, Kent; 2012:39-54.

  11. Dirik AE, Haas B: Copy detection pattern-based document protection for variable media. IET Image Process. 2012, 6(8):1102-1113. doi:10.1049/iet-ipr.2012.0297

  12. Beekhof F, Voloshynovskiy S, Farhadzadeh F: Content authentication and identification under informed attacks. In 2012 IEEE International Workshop on Information Forensics and Security (WIFS). IEEE, Tenerife; 2012:133-138.

  13. Phan Ho AT, Hoang Mai BA, Sawaya W, Bas P: Document authentication using graphical codes: impacts of the channel model. In ACM Workshop on Information Hiding and Multimedia Security, Montpellier. ACM, New York; 2013.

  14. Blahut RE: Principles and Practice of Information Theory. Addison-Wesley; 1987.

  15. Picard J: Digital authentication with copy-detection patterns. Electron. Imaging 2004, 5310:176-183.

  16. Dembo A: Large Deviations Techniques and Applications. Stochastic Modelling and Applied Probability. Springer; 2010.

  17. Gallager RG: Information Theory and Reliable Communication. Wiley; 1968.

  18. Hammersley JM, Handscomb DC, Weiss G: Monte Carlo Methods. Phys. Today 1965, 18:55.

  19. Lin C-Y, Chang S-F: Distortion modeling and invariant extraction for digital image print-and-scan process. In Proceedings of International Symposium on Multimedia Information Processing. Taipei; Dec 1999.


Acknowledgements

This work was partly supported by the National French project ANR-10-CORD-019 ‘Estampille’.

Author information

Corresponding author

Correspondence to Anh Thu Phan Ho.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article


Cite this article

Phan Ho, A.T., Mai Hoang, B.A., Sawaya, W. et al. Document authentication using graphical codes: reliable performance analysis and channel optimization. EURASIP J. on Info. Security 2014, 9 (2014). https://doi.org/10.1186/1687-417X-2014-9


  • DOI: https://doi.org/10.1186/1687-417X-2014-9
