A novel quality assessment for visual secret sharing schemes

Abstract

Most existing metrics fail to produce fair and consistent quality scores for reconstructed images in visual secret sharing schemes. We propose a new approach to measuring the visual quality of the reconstructed image for visual secret sharing schemes. We develop an object detection method in the context of secret sharing that detects outstanding local features and the global object contour, and we construct a quality metric based on the resulting object detection-weight map. The effectiveness of the proposed quality metric is demonstrated by a series of experiments. The experimental results show that our quality metric based on secret object detection outperforms existing metrics. Furthermore, it is straightforward to implement and can be applied to various tasks, such as testing the security of the visual secret sharing process.

1 Introduction

Visual secret sharing, also known as visual cryptography, encrypts a secret image by generating random-looking shares. The secret can be retrieved by stacking the shares together, so decryption requires no computer.

The quality of the reconstructed image is one of the most important issues in visual secret sharing. Unlike classical image encryption, the secret "decrypted" in visual secret sharing is not identical to the original; it is reconstructed at a certain quality level such that the secret object can be perceived by human eyes. That is, the decrypted secret image does not possess perfect quality when rendered within noisy shares, so the degree to which the reconstructed image differs from the original becomes very important. Visual quality (or display quality) is used to represent the quality of the reconstructed secret image. A perfect reconstruction should maintain all the secret information of the original secret. To discern the secret object within a noisy background, the secret object needs to be integrated and clear. We define the integrity of the secret object as In(I) and the clarity of the secret object as Cl(I) for image I. A reconstructed secret object that is clear and highly integrated represents better visual quality and implies that a larger amount of secret information can be perceived.

$$ E(\text{RI}) \propto \text{In}(\text{RI}) \cdot \text{Cl}(\text{RI}) $$
(1)
$$ Q(\text{RI}) \propto E(\text{RI}). $$
(2)

Here, E(RI) denotes the secret information maintained in the reconstructed image RI, and Q(RI) denotes the visual quality of the reconstructed image. In the real world, the reconstructed image can be damaged or faded. An acceptable quality metric should be able to differentiate reconstructed images across the full range from bad to good quality.

The most popular criterion in visual quality measurement is contrast, which was first proposed by Naor and Shamir [1]. Contrast based on area representation [2–4] was proposed to measure the visual quality of reconstructed images using the traditional concept of contrast, with higher contrast often viewed as higher visual quality. The blackness [5, 6] of the reconstructed image was discussed in a few later studies. Other scholars [7, 8] have used well-known image quality metrics such as the peak signal-to-noise ratio (PSNR) and the mean squared error (MSE) to quantify the difference between the reconstructed image and the secret image. None of these metrics works properly for visual quality measurement in visual secret sharing.

In this paper, we propose a novel metric to measure the visual quality of the reconstructed image for visual secret sharing. An object detection method in the context of secret sharing is developed, and a quality metric is constructed from the weight map generated by the secret object detection. Theoretical analysis and simulation results are provided as well, demonstrating the effectiveness and possible applications of our novel visual quality assessment method. The remainder of this paper is organized as follows. Section 2 reviews related work on visual quality assessment of visual secret sharing schemes. The proposed secret object detection method is introduced in Section 3. Experimental results of the quality assessment and possible applications are provided in Section 4. Section 5 gives the conclusions.

2 Visual secret sharing schemes and existing quality metrics

Naor and Shamir [1] proposed a model for visual secret sharing schemes in 1994, which is also referred to as the deterministic visual secret sharing model [9]. In it, multiple pixels are used to reconstruct one pixel of the original secret image. Thus, recent studies focus on size invariant visual secret sharing schemes [2, 10–14] to avoid the storage overhead and dimension distortion caused by such pixel expansion [15] in the deterministic models. The earliest size invariant visual secret sharing scheme was proposed by Kafri and Keren [16] in 1987, named "encryption of pictures by random grids" at the time. Several approaches have been proposed recently to perform size invariant visual secret sharing, such as random grid-based visual secret sharing (RG-based VSS) [2, 10–14], probabilistic visual cryptography (ProbVC) [17–19], and multiple pixel sharing schemes [20–22]. This paper uses RG-based VSS as our main testing model; a sketch of the basic construction is given below. An experiment using Naor and Shamir's deterministic model is also provided at the end of this paper to demonstrate that our metric can also be applied to general visual secret sharing schemes with pixel expansion.
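For concreteness, the following Python sketch illustrates one classic Kafri–Keren style (2, 2) random grid construction; the function names and the convention 1 = black are our own, and this is only one of several equivalent RG algorithms:

```python
import numpy as np

rng = np.random.default_rng()

def rg_encrypt(secret: np.ndarray):
    """Sketch of a (2, 2) random grid scheme; secret is binary, 1 = black.

    Share 1 is a purely random grid; share 2 copies share 1 on white secret
    pixels and inverts it on black ones. Stacking with OR makes black secret
    areas fully black and leaves white areas 50% black on average.
    """
    share1 = rng.integers(0, 2, size=secret.shape)
    share2 = np.where(secret == 1, 1 - share1, share1)
    return share1, share2

def stack(*shares):
    """Superimpose shares: a stacked pixel is black if any share is black."""
    return np.bitwise_or.reduce(np.array(shares), axis=0)
```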

We examined the existing quality metrics for visual quality assessment; all of them prove to be unsuitable or inaccurate, as shown below.

2.1 Contrast based on area representation

"Contrast" was first described by Naor and Shamir in their deterministic model as the difference between black and white pixels in the reconstructed image. The Hamming weight of the stacked result V is denoted H(V), the minimum Hamming weight of a stacking result for reconstructed black pixels is denoted d, m is the pixel expansion rate, and α is the contrast. A recovered pixel is treated as black if Eq. (3) is satisfied and white if Eq. (4) is satisfied.

$$ H(V)\geq d $$
(3)
$$ H(V)\leq d-\alpha \cdot m $$
(4)
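As a hedged illustration of this decision rule (the function name and the "undecided" branch are our own framing; the schemes themselves guarantee that every block falls into one of the two cases):

```python
def classify_block(stacked_subpixels, d, alpha, m):
    """Classify a recovered m-subpixel block by its Hamming weight H(V),
    following Eqs. (3) and (4). stacked_subpixels is a sequence of 0/1
    values where 1 denotes a black subpixel."""
    h = sum(stacked_subpixels)      # H(V): number of black subpixels
    if h >= d:
        return "black"              # Eq. (3)
    if h <= d - alpha * m:
        return "white"              # Eq. (4)
    return "undecided"              # outside the scheme's guarantee
```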

Other contrast definitions, such as contrast = (h−l)/(m+l) [23] and contrast = (h−l)/(h+l) [24], focus on the relative difference between white and black pixels in the reconstructed secret image and are similar to Naor and Shamir's definition, where h is the lower bound of the darkness levels used to encrypt a black pixel and l is the upper bound of the darkness levels used to encrypt a white pixel in the reconstructed secret image.

Based on these definitions, the contrast based on area representation [2–4] was proposed for size invariant visual secret sharing schemes. For a given pixel r in a black/white image of size a×b, the light transmission of a white pixel is defined as T(r)=1; otherwise, when r is a black pixel, T(r)=0. We define the stacking result of the shares to be \(R\triangleq R_{1}\otimes \cdots \otimes R_{n}\), where \(R_{i}\) represents a share. The average light transmission of R is denoted as

$$ T(R)=\frac{\sum\limits_{i=1}^{a} \sum\limits_{j=1}^{b} T(r_{i,j})}{a \times b}. $$
(5)

Let S(0) or S(1) denote the area of all the white or black pixels in the secret image S, where \(S=S(0) \bigcup S(1)\) and \(S(0) \bigcap S(1)=\emptyset \). Therefore, R[ S(0)] or R[ S(1)] is the corresponding area of all the white or black pixels in the image R. The contrast of the reconstructed image is expressed as

$$ \alpha=\frac{T(R[\!S(0)])-T(R[\!S(1)])}{1+T(R[\!S(1)])}. $$
(6)

This contrast based on area representation calculates the transmission difference between the black and the white areas in the reconstructed image.
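The following short Python sketch (our own illustration; binary arrays where 1 denotes a white pixel and 0 a black pixel) computes the average light transmissions of Eq. (5) and the contrast of Eq. (6):

```python
import numpy as np

def area_contrast(secret: np.ndarray, stacked: np.ndarray) -> float:
    """Contrast based on area representation, Eq. (6).

    secret, stacked: binary arrays of the same shape; 1 is a white pixel
    (light transmission T = 1) and 0 is a black pixel (T = 0).
    """
    t_white_area = stacked[secret == 1].mean()  # T(R[S(0)]): avg transmission over the white area
    t_black_area = stacked[secret == 0].mean()  # T(R[S(1)]): avg transmission over the black area
    return (t_white_area - t_black_area) / (1.0 + t_black_area)
```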

Our experience tells us that the visual quality of the reconstructed secret image cannot properly be evaluated by the contrast value alone. As shown in Eq. (6), the value of α depends only on the light transmissions of the white and black areas in the reconstructed image, and the visual quality cannot be predicted from the light transmission values alone. Figure 1 shows two sample images with the same contrast value: both the symbol "T" and the "baboon" have contrast 0.54, yet the symbol "T" is far easier to recognize. Reconstructed images with the same contrast value may differ significantly in visual quality.

Fig. 1

Two reconstructed images with the same value of contrast based on area representation: a reconstruction of a “baboon”; b reconstruction of a symbol “T”

2.2 Blackness

Blackness is another important factor that has drawn researchers' attention in recent years. Chiu [5, 6] argued that the visual quality of a recovered image is affected not only by its contrast but also by its blackness. A quality assessment and optimization method based solely on these two factors, contrast and blackness, was further developed in [5].

The degree of blackness represents the percentage of black pixels in the secret image that are recovered as black. However, quality measurement based only on contrast and blackness is not reliable. The reconstruction quality of two test images with the same blackness is demonstrated in Fig. 2. The contrast value (0.62) of the reconstructed "zebra" is higher than the contrast (0.52) of the reconstructed "square" symbol, and both are reconstructed with a blackness value of 0.99. However, important features of the "zebra" secret image are severely destroyed, making it hard to tell whether the reconstruction shows a zebra or a horse. It is much harder to recognize the "zebra" than the "square" symbol from the reconstructed images.

Fig. 2

Two reconstructed images with the same blackness value

Not only do the global contrast and blackness values impact visual quality; the features of the original image affect it as well. It is therefore unreliable to judge the visual quality of a reconstructed image by contrast and blackness alone.

2.3 Objective image quality metrics

The peak signal-to-noise ratio (PSNR), which is based on the mean squared error (MSE), is one of the most common measurements of quality loss in image processing. It is widely used as a quantitative metric both in general image quality assessment and in visual secret sharing [7, 8].

However, as Wang et al. [25] stated in their research, pixel-to-pixel error measurement cannot accurately represent image quality loss. Wang et al. proposed the structural similarity (SSIM) index, in which changes in structural information describe the degradation of image quality better than pixel-based errors:

$$ \text{SSIM}(x,y)=\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}{\left(\mu_{x}^{2}+\mu_{y}^{2}+C_{1}\right) \left(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2}\right)} $$
(7)

where \(\mu_{x}\) is the average of x, \(\mu_{y}\) is the average of y, \(\sigma_{x}^{2}\) is the variance of x, \(\sigma_{y}^{2}\) is the variance of y, and \(\sigma_{xy}\) is the covariance of x and y. The parameters \(C_{1}\) and \(C_{2}\) are used to stabilize the division. Compared with PSNR and MSE, the advantage of SSIM, as shown in Eq. (7), is that the relationship among neighboring pixels is taken into consideration. SSIM is another commonly applied quality evaluation tool in image processing and image encryption [26–28].
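For reference, PSNR and SSIM can be computed with scikit-image; this is a minimal sketch (the function name is ours) assuming images with intensities scaled to [0, 1]:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def psnr_ssim(original: np.ndarray, reconstructed: np.ndarray):
    """Return (PSNR, SSIM) for two images with intensities in [0, 1]."""
    psnr = peak_signal_noise_ratio(original, reconstructed, data_range=1.0)
    ssim = structural_similarity(original, reconstructed, data_range=1.0)
    return psnr, ssim
```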

We further tested PSNR and SSIM with some sample images in Fig. 3. The PSNR and SSIM values of the tested images are shown in Table 1.

Table 1 PSNR and SSIM values of tested images
Fig. 3

Sample images for PSNR and SSIM testing

Higher PSNR and higher structural similarity are generally interpreted as higher visual quality, i.e., less error. Yet the symbol "T" has lower PSNR and SSIM values than the "baboon," even though it is clearer. Both PSNR and SSIM fail to represent the quality loss of the tested images. The global difference between the reconstructed and original images is measured pixel by pixel or patch by patch, so errors in the background are counted the same as errors on the secret object. These image quality metrics are not sensitive to image context; thus, quality scores based on PSNR and SSIM are not suitable for reconstructed visual secret images.

From the above experiments and observations, we find that a good reconstruction should maintain the most outstanding object features of the original secret. As shown in the figures, the reconstructed "baboon" can be understood if both the overall secret object contour, such as the outline of the face and eyebrows, and the outstanding local features, such as the eyes and nose, are maintained in the reconstruction. The total degradation of the secret information is formed by errors at all pixel locations, but pixels at different locations should contribute differently: a pixel error on the eyes of the "baboon" and a pixel error in the fur area clearly have different impacts on the final visual quality. A smart visual quality metric should differentiate errors at different locations and quantitatively represent how well the global object contour and the outstanding local features are maintained in the reconstructed image.

3 Object detection in the context of secret sharing

To detect the secret object within a noisy reconstructed image, we first studied common object detection methods [29–34]. In common image object detection, a variety of image features are usually extracted first to generate the initial object representation, classification methods are applied to the known features, and a training or learning process is an essential part of the detection. These methods usually rely on large databases and prior knowledge. However, common object detection methods cannot work properly in visual secret sharing, for two main reasons. First, prior knowledge cannot be relied on in secret object detection, as the secret object can be formed in any possible pattern. Second, it is not practical to perform a training process, as the secret objects are expected to be random and independent of each other. A secret object with a complete contour and clear local features contains high secret information, as stated in Eqs. (1) and (2). An objective and intuitive representation method is therefore needed to measure the quality of the reconstructed secret object.

The main strategy we applied is to detect the global contour structure based on low-level image features (discussed in Section 3.3). The integrity of the reconstructed object is represented by the global contour detection result. At the same time, outstanding local features are detected to evaluate the clarity of the secret object.

To detect the outstanding local features, image regions different from their neighborhoods should be considered to be more important, such as the eyes of the baboon. As the secret object can be rendered in any pattern, a detection method can only rely on the objective image features and does not involve any strategies of scene understanding or human visual systems. The problem is how to detect the outstanding local features objectively when there is no objective detection technique available. Inspired by the multiple scale feature extraction and weighted dissimilarity calculation methods of models for “visual saliency” [35, 36], we designed our local feature detection method for visual secret sharing schemes specifically.

3.1 Outstanding local feature detection model: visual saliency

Saliency models [37] are well studied and commonly applied to accomplish fast object detection within a noisy background. "Saliency" models simulate the human visual system to predict the most conspicuous locations within an image. Itti and Koch [35] proposed the classic saliency detection model in 1998. The "graph-based visual saliency" (GBVS) model [36], an improved version of Itti and Koch's model, demonstrates better performance. We performed a series of experiments to test the characteristics of the conspicuous feature detection process using the graph-based visual saliency model.

As shown in Fig. 4, the saliency weights of the tested symbols are marked by different colors: warm colors such as red and yellow represent high saliency weights, while gray and dark blue mark low saliency weights. The outstanding features are highlighted only if they are located in the center part of the image.

Fig. 4

Local feature detection of graph-based visual saliency model

We analyzed why the classic saliency model "ignores" important local features at the boundary. The graph-based visual saliency model generates its feature maps from the intensity variations and edge orientations of the secret image; six feature types are selected, four of them edge orientations and the other two intensity variations. To select the outstanding local locations, the model calculates a "weighted dissimilarity" for the feature data of each pair of map locations/nodes. Pixels with a high weighted dissimilarity from their surrounding pixels are detected as more salient, and nearby pixels are given higher weight than distant ones. The boundary parts of the image have fewer neighbors/surroundings than the center, as shown in Fig. 5, which is why more features are detected in the center of the image than at the boundary, as shown in Fig. 4. The steady state of a Markov chain is then calculated to produce the final most conspicuous locations. To simulate the selection process of the human visual system, the final weight map is the equilibrium distribution over all location maps weighted by dissimilarity; as a result, the final weight map becomes increasingly concentrated, as shown in Fig. 6.

Fig. 5

Dissimilarity weights at different locations

Fig. 6

Process of concentrating the weight distribution of the secret images: a symbol “T”; b bar-shape symbol

In the context of visual secret sharing, the secret can be located anywhere in the image, which is very different from center-biased natural settings. All types of raw image features should be equally weighted, and all pixel locations should be equally important. Furthermore, no human visual system modeling should be applied: the detection result should be completely objective, relying only on the inherent features of the secret image.

3.2 Local feature detection for visual secret sharing

The main purpose of the local feature extraction is to find pixels that are very different from their neighborhood; such pixels are considered "unusual" and thus more significant. To guarantee that each pixel has the same number of neighbors/surrounding pixels, the image is first symmetrically extended before feature extraction, as shown in Fig. 7. The pixel intensity variation is extracted vertically and horizontally and equally weighted at every location. Each pixel location then has a×b−1 surrounding pixel locations if the secret image has size a×b.

Fig. 7

Symmetrical extension (a) and weighted dissimilarity (b) of the secret image

To quantitatively represent the outstanding features at every pixel location, we calculate the dissimilarity of each pair of pixel locations

$$ d\left[(i,j),(p,q)\right]=|V(i,j)-V(p,q)|, $$

where the value of the raw feature (intensity variance) is denoted by V. The dissimilarity between two specific pixel locations is then assigned a weight

$$w\left[(i,j),(p,q)\right]=d\left[(i,j),(p,q)\right] \cdot F(i-p,j-q) $$

where \(F(x,y)=\exp\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right)\) is a "Gaussian-like" function. Thus, the weighted dissimilarity between two pixel locations is proportional to both their difference and their closeness: pixels differing from close neighbors receive higher weight because d[(i,j),(p,q)] and F(i−p, j−q) are both relatively large. The parameter σ determines the shape of the Gaussian-like function.
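A naive sketch of this computation follows (our own illustration; it is quadratic in the number of pixels, so it is practical only for small images or per-scale feature maps, and summing w over all pairs per location is our own aggregation choice):

```python
import numpy as np

def local_feature_map(raw: np.ndarray, sigma: float, pad: int = 4) -> np.ndarray:
    """Weighted-dissimilarity map for the raw feature (intensity variation).

    The array is symmetrically extended first so that every pixel has the
    same number of neighbors, as described in the text.
    """
    V = np.pad(raw.astype(float), pad, mode="symmetric")   # symmetric extension
    a, b = V.shape
    ii, jj = np.meshgrid(np.arange(a), np.arange(b), indexing="ij")
    coords = np.stack([ii.ravel(), jj.ravel()], axis=1).astype(float)
    vals = V.ravel()
    out = np.empty(a * b)
    for k in range(a * b):
        d = np.abs(vals - vals[k])                          # d[(i,j),(p,q)] = |V(i,j) - V(p,q)|
        dist2 = ((coords - coords[k]) ** 2).sum(axis=1)     # (i-p)^2 + (j-q)^2
        F = np.exp(-dist2 / (2.0 * sigma ** 2))             # Gaussian-like closeness weight
        out[k] = (d * F).sum()                              # accumulated weighted dissimilarity
    return out.reshape(a, b)[pad:-pad, pad:-pad]            # crop back to the original size
```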

Our experimental results show that a smaller σ leads to sharper detection and greater sensitivity to changes in small local areas, whereas in our situation a smoother outstanding local area is a better fit. We illustrate two examples in Fig. 8; the larger σ in (a) is preferred, as it preserves the outstanding local information more completely than the smaller σ in (b). Assuming that people try to understand the basic information from the reconstructed image without any hint about the secret, losing any part of the information may cause recognition failure. Experiments show that the outstanding local features represent the basic information very well when σ is set to one eighth of the width of the test image.

Fig. 8

Local outstanding information detection generated by different σ: a σ=4; b σ=0.32

3.3 Object contour detection for visual secret sharing

One may argue that abundant outstanding local features make the global contour unnecessary, because the local features of an object could reveal its global structure; for example, one might guess that the secret object is a person if there are two eyes in the image. Such an inference is based on experience or prior knowledge, which again is not reliable for secret detection. Furthermore, the local features of a secret object could be intentionally misleading or abnormal, which is very different from natural image settings. An integration of all the local features, the global contour, is therefore necessary for secret object detection in visual secret sharing.

The contour feature of the secret image should be formed by raw features (intensity variance) that are strongly connected and insensitive to noise, such as the face and head contours of the baboon; trivial intensity variance such as the baboon's fur should not be taken as a contour feature. We adopt multi-scale feature extraction to detect the broad contour regions. The test image is symmetrically extended before feature extraction. First, the raw feature map at the largest scale is extracted by averaging the vertical and horizontal intensity variances obtained by applying a Sobel filter to the entire secret image. Each lower-scale feature map is then generated by downsampling the next higher scale by 2:1 both vertically and horizontally. To eliminate features that are unconnected or tiny, a 3×3 Gaussian filter with standard deviation 0.5 is convolved with each scale map; this Gaussian low-pass filter weakens the high-frequency noise at each scale. As the smoothed feature map is downsampled repeatedly, coarse features are maintained while fine features become weaker and weaker, and the basic structure of the image emerges after a few iterations. Three iterations proved sufficient in our experiments. Experimental results (Fig. 9) show that this simple contour extraction method is competitive with more complex contour extraction methods such as the Gabor energy filter [38] and Gabor energy filtering augmented with surround inhibition [39].
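A minimal sketch of this contour extraction using SciPy's Sobel and Gaussian filters (the pad width and the choice to return the coarsest map rather than upsampling it back to full resolution are our own simplifications):

```python
import numpy as np
from scipy import ndimage

def contour_map(secret: np.ndarray, iterations: int = 3) -> np.ndarray:
    """Multi-scale contour feature extraction as described above."""
    img = np.pad(secret.astype(float), 8, mode="symmetric")   # symmetric extension
    gx = ndimage.sobel(img, axis=1)                           # horizontal intensity variance
    gy = ndimage.sobel(img, axis=0)                           # vertical intensity variance
    feat = (np.abs(gx) + np.abs(gy)) / 2.0                    # averaged raw feature map
    for _ in range(iterations):
        # 3x3 Gaussian (sigma = 0.5, kernel radius 1) suppresses tiny, unconnected features
        feat = ndimage.gaussian_filter(feat, sigma=0.5, truncate=2.0)
        feat = feat[::2, ::2]                                 # 2:1 downsampling per axis
    return feat
```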

Fig. 9

Contour extraction result: a original image, b our contour extraction, c Gabor energy filter extraction, and d Gabor energy filtering augmented with surround inhibition extraction

3.4 Basic flow of the object detection for visual secret sharing

Secret images may form any pattern; some have white backgrounds and some black. We need a fair metric that treats black and white backgrounds, such as those in Fig. 10, equally. To avoid the influence of the background color on our error calculation, we adopt a preprocessing step that unifies the background of the secret. Graph-based visual saliency is used to perform an initial prediction of the foreground: the accumulated saliency of the black pixels and of the white pixels in the secret image are compared, and whichever color has the higher accumulated saliency weight is taken as the "tentative foreground color," the other as the "tentative background color." For secret images whose tentative background color is black, we use the reversed image in the following detection processes; secret images with a white tentative background are left unchanged. Thus, the same secret object with different background colors is treated equally in the secret object detection: as long as the secret object has the same structure, the detection result will be the same. Other background/foreground detectors can be used as well; the tentative background selection does not affect the object detection result, only the uniformity of the selection matters.
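A sketch of this preprocessing, assuming a saliency map (e.g., from GBVS) normalized to [0, 1] is supplied; the function and variable names are ours:

```python
import numpy as np

def unify_background(secret: np.ndarray, saliency: np.ndarray) -> np.ndarray:
    """Reverse a binary secret (1 = white, 0 = black) if its tentative
    background color, the color with lower accumulated saliency, is black."""
    sal_white = saliency[secret == 1].sum()   # accumulated saliency of the white pixels
    sal_black = saliency[secret == 0].sum()   # accumulated saliency of the black pixels
    if sal_black > sal_white:                 # black is the tentative foreground,
        return secret                         # so the background is white: unchanged
    return 1 - secret                         # black tentative background: use the reversed image
```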

Fig. 10

Reconstructed images for black and white backgrounds

The flowchart of our object detection in the context of secret sharing is shown in Fig. 11. The preprocessed image is extended, and raw features are generated by linear filtering. The raw features are used to generate both the global contour feature map and the outstanding local feature map, each in multiple scales. The contour feature at each scale is the output of the Gaussian smoothing filter, and the multi-scale contour feature maps are normalized into a single contour feature map by across-scale linear combination. The multi-scale local feature maps are generated by repeatedly downsampling the raw feature map; the outstanding local feature map at each scale is obtained by the weighted dissimilarity calculation on the downsampled feature map, and a single local feature map is then produced by across-scale linear combination of all scales. Finally, an adaptive fusion assigns weights to the normalized contour and local feature maps; the weights can be adjusted to the application and the user's requirements, and here we use equal weights for the local feature and the contour feature as an example. The final detection-weight map for visual secret sharing is shown for a test set in Fig. 12.
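The fusion step can be summarized as follows; the normalization scheme and the equal default weights follow the description above, while the helper names are ours:

```python
import numpy as np

def fuse_maps(contour: np.ndarray, local: np.ndarray,
              w_contour: float = 0.5, w_local: float = 0.5) -> np.ndarray:
    """Adaptive fusion of the normalized contour and local feature maps.

    Both inputs are assumed to be across-scale combinations already resized
    to the secret's resolution; equal weights mirror the example in the text
    and can be adjusted per application.
    """
    def normalize(m: np.ndarray) -> np.ndarray:
        m = m - m.min()
        return m / m.max() if m.max() > 0 else m

    fused = w_contour * normalize(contour) + w_local * normalize(local)
    return normalize(fused)          # final detection-weight map in [0, 1]
```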

Fig. 11

Flow chart of secret object detection-weight map generation

Fig. 12

Weight maps of the final secret object detection

3.5 Performance analysis of object detection in the context of secret sharing

From Fig. 12, we find that most of the structure of each secret is retained in our detection-weight maps. If we mark the original secret with only "hot" and "cold," representing the secret content and the remaining content respectively, then how much of the "hot" information is shown as "hot" and how much of the "cold" information is kept as "cold" in the detection-weight map illustrates the secret detection ability. To illustrate the detection ability of our method, the symbol "T" was tested at different "hot" thresholds, as shown in Fig. 13. Our detection method achieves more than 90% secret coverage when the "hot" threshold is set to 0.4.

Fig. 13

Secret coverage of our secret object detection

According to the previous analysis, the secret object in the reconstructed image should be similar to the secret object in the original image if the reconstruction quality is good, so the detection-weight map of the original image should be consistent with that of the reconstructed image. This consistency of the weight maps between the secret and the reconstructed secret is another important factor for judging the performance of the secret detection method. We partitioned the weight values into three groups, [0, 0.45), [0.45, 0.7), and [0.7, 1], labeled "cold," "warm," and "hot," respectively. The weight distributions for the square symbol and the "T" symbol are tested in Fig. 14. The difference between the detection-weight maps of the original and the reconstructed secret is very small; our secret object detection-weight maps demonstrate good consistency between the original and the reconstructed secret image.

Fig. 14

Consistency test of the secret object detection method for a original images and b reconstructed images

4 Quality assessment based on secret object detection

Our quality assessment is constructed on top of the secret object detection. The overall visual quality score of the reconstructed image is calculated under the control of the detection weights: pixels with high detection weights are considered more important than pixels with lower detection weights, so errors at highly weighted locations cause severe quality degradation.

The detection-weight map, normalized to the range zero to one, marks the secret object with higher weight values and the non-secret parts with lower values. To quantitatively represent the overall quality, the final detection-weight map is divided into L weight levels; a larger L gives a more detailed, smoother division, while a smaller L gives a clearer, more obvious division, as shown in Fig. 15. The level index is denoted l, and the upper and lower bounds of level l are defined as B_upper(l) and B_lower(l):

$$ B_{\text{upper}}(l) = \frac{1}{L} \cdot l $$
$$ B_{\text{lower}}(l) = \frac{1}{L} \cdot (l-1) $$
Fig. 15

Detection-weight map division: a L=5 and b L=15

Pixels with floating detection weights between the upper and lower bounds of level l are categorized into the same weight level. The total level number L can be quite large. We performed experiments for L, ranging from 3 to 15. The final quality rank of tested images remains stable. The level number L=5 shows a clear division and was efficient in computation; thus, it was selected in our experiments.

To generate the weight factor for each level, we first use \(\beta_{1},\beta_{2},\cdots,\beta_{L}\) as a set of generators and assign \(\beta_{1}=0.1\). We then assign \(\beta_{2}=\beta_{1}\), and for the weight levels with \(L\geq l>2\), we select

$$\beta_{l} = \sum\limits_{i=1}^{l-1} \beta_{i}. $$

The weight factor for level l is defined as

$$W_{l}=\frac{\beta_{l}}{\sum\limits_{i=1}^{L} \beta_{i}}. $$

Observe that \(W_{l} = \sum \limits _{i=1}^{l-1} W_{i}\) for \(l \geq 2\), and the weight factors of all levels sum to one.

$$\sum\limits_{i=1}^{L} W_{i}=1. $$
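For example, with L = 5 the generators are β = (0.1, 0.1, 0.2, 0.4, 0.8), summing to 1.6, so the weight factors are W = (1/16, 1/16, 1/8, 1/4, 1/2): the highest weight level alone accounts for half of the overall score.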

The overall quality based on the secret object detection (QBSD) of the reconstructed image is defined by a linear combination of the accuracy rates of all the weight levels, as shown in Eq. (8).

$$ \text{QBSD}=\sum\limits_{l}W_{l} \cdot R_{l}, \text{where} $$
(8)
$$ R_{l}=1-\frac{N_{\text{error}}}{N_{\text{tpixel}}}. $$
(9)

The accuracy rate \(R_{l}\) is determined by the error rate \(N_{\text{error}}/N_{\text{tpixel}}\) of level l, where \(N_{\text{error}}\) is the number of pixel errors in the current level and \(N_{\text{tpixel}}\) is the total number of pixels in that level. If every level achieves 100% reconstruction accuracy, the overall quality is 1.
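Putting the pieces together, here is a sketch of the QBSD computation under the definitions above (level bounds, β generators with β1 = β2 = 0.1, and per-level accuracy rates); how boundary values and empty levels are handled is our own assumption:

```python
import numpy as np

def qbsd(secret: np.ndarray, recon: np.ndarray,
         weight_map: np.ndarray, L: int = 5) -> float:
    """Quality based on secret object detection, Eqs. (8) and (9)."""
    beta = [0.1, 0.1]                          # beta_1 = beta_2 = 0.1
    for _ in range(2, L):
        beta.append(sum(beta))                 # beta_l = sum of all earlier betas
    W = np.array(beta) / sum(beta)             # weight factors W_l, summing to one
    errors = secret != recon                   # pixel-wise reconstruction errors
    score = 0.0
    for l in range(1, L + 1):
        lo, hi = (l - 1) / L, l / L            # B_lower(l) and B_upper(l)
        upper_ok = weight_map < hi if l < L else weight_map <= hi
        in_level = (weight_map >= lo) & upper_ok
        n_tpixel = in_level.sum()
        if n_tpixel == 0:
            continue                           # an empty level contributes nothing
        R_l = 1.0 - errors[in_level].sum() / n_tpixel   # accuracy rate, Eq. (9)
        score += W[l - 1] * R_l                # Eq. (8)
    return score
```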

To evaluate the reconstruction quality, we need to measure how similar the reconstructed secret and the original secret are. One type of method measures the distance between the reconstruction and the original secret; the other measures the accuracy rate between them. We tested several commonly used quality metrics, shown in Table 2. MSE and PSNR are distance metrics, based on the Euclidean distance between the reconstructed and original images; PSNR is a logarithmic function of the reciprocal of MSE, so a lower MSE and a higher PSNR represent a smaller distance between the reconstruction and the original secret information. Blackness, SSIM, and our metric measure accuracy rates, for which a higher value represents a better reconstruction quality. The contrast method is neither a distance metric nor an accuracy-rate metric; it mainly depends on the particular characteristics of the secret sharing scheme and the secret image.

Table 2 QBSD quality assessment compared with other quality metrics for 2 out of 2 visual secret sharing

Several experiments were performed to test the proposed QBSD quality metric. We tested the quality of the reconstructed images for two out of two, three out of three (Table 3), and two out of four (Table 4) RG-based VSS schemes using our quality metric. As shown in Fig. 16, the quality of the reconstructed images varies greatly.

Table 3 QBSD quality assessment compared with other quality metrics for 3 out of 3 visual secret sharing
Table 4 QBSD quality assessment compared with other quality metrics for 2 out of 4 visual secret sharing
Fig. 16

Original and reconstructed images for different (2 out of 2, 3 out of 3, and 2 out of 4) visual secret sharing schemes

First, we found that secret objects with simple structures, such as the symbol "T" and the square, are reconstructed with much better quality than the other objects, while the "baboon" and the "zebra" have worse reconstruction quality than the other images. It is not easy to find a clear face contour of the "baboon" in any of the three visual secret sharing schemes. The quality scores reflect the fact that losing important contour features results in quality degradation.

Second, outstanding local features such as the eyes and mouth of the "portrait" are not well maintained in the reconstructed images of the three out of three visual secret sharing scheme, and the stripes of the "zebra" are not well maintained in any reconstruction of the three schemes. Errors in the important local features generate severe quality degradation. Our quality metric behaves just as expected.

Third, the two out of two visual secret sharing scheme gives better reconstruction quality than the other two schemes according to the QBSD scores. Our proposed metric thus conveys how the quality varies when the same secret is shared by different secret sharing schemes.

Overall, our quality metric is consistent with the quality degradations. The “very good reconstruction,” “good reconstruction,” “fair reconstruction,” and “poor reconstruction” are differentiated very well. None of the other metrics could offer proper and consistent quality scores for these reconstructed images of the three schemes.

To demonstrate that our metric can also be applied to deterministic visual secret sharing schemes, Naor and Shamir's two out of two deterministic sharing scheme was also tested. For a fair measurement without any size difference, the reconstructed image with pixel expansion is compared with an expanded ground-truth secret image; the pixel expansion rate m is 4 (Fig. 17). The ground-truth secret image is generated by expanding each pixel of the original secret image by the rate m, and the object detection-weight map is generated directly from the ground-truth secret image.
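A one-line sketch of the ground-truth expansion (assuming the m-pixel block is square, e.g., 2×2 for m = 4; the function name is ours):

```python
import numpy as np

def expand_ground_truth(secret: np.ndarray, m: int = 4) -> np.ndarray:
    """Expand every secret pixel into an m-pixel block to match the
    dimensions of a reconstruction with pixel expansion rate m."""
    side = int(round(np.sqrt(m)))                  # square block assumed
    return np.kron(secret, np.ones((side, side), dtype=secret.dtype))
```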

Fig. 17

Quality measurement of 2 out of 2 visual secret sharing with pixel expansion m=4

The quality testing results are shown in Table 5. The reconstructed image quality using the deterministic model with pixel expansion is slightly higher than with the size invariant model, and the quality rank of each tested image remains consistent. This shows that the proposed quality metric can be applied to general visual secret sharing schemes with pixel expansion as well as to size invariant sharing schemes.

Table 5 QBSD measurement of the same images reconstructed with and without pixel expansion, m=4

There are several practical applications for the proposed visual quality metric. For example, it can be applied to security measurement in the visual secret sharing process. Assume that one share in a two out of two secret sharing process leaks secret information; the party holding that single share (normally an insufficient number of shares to reconstruct the secret) would then be able to view the secret. A pilot experiment demonstrates that QBSD scores can also be used to measure the security level of a "leaking share." The visual quality of the leaking shares for different images was tested using our QBSD metric, and the measured quality scores are presented in Fig. 18. The measured QBSD scores differ across leaking situations; the term "leakage" here refers to the fraction of the share area that leaks. A 30% leakage at the top of the share generates a higher quality score than the same amount of leakage at the bottom, and a share leaking a small but significant secret area has a much higher quality score than a share leaking a larger insignificant area. This confirms our secret object detection result: as shown in Fig. 12, a significant part of the secret object with a higher detection weight holds more secret information and causes a severe safety issue if leaked. A higher QBSD score indicates higher secret information leakage, in other words, a lower security level. A series of leaking shares for different images was measured, and we found that the main secret content starts to leak out when the QBSD score rises above 0.58. QBSD measurement can thus provide intuitive values indicating different security levels.

Fig. 18

QBSD measurement of the shares leaking secret

The proposed QBSD metric can also be used by the dealer in the secret sharing process to ensure the reconstruction quality of different images. It can also be used to analyze the quality performance of different sharing schemes or to set a quality threshold in the decryption process. More practical applications can be explored in future studies.

5 Conclusions

In this paper, we have investigated the existing approaches to quality assessment of the reconstructed image for visual secret sharing schemes and enumerated their limitations. We have proposed a novel quality assessment based on secret object detection in the context of visual secret sharing: to check the clarity and integrity of the secret object, both the outstanding local features and the global contour are detected. Experimental results show that the proposed quality metric outperforms other common quality metrics. What distinguishes our quality assessment from the other tested metrics is that we are the first to adopt an image-adaptive quality measurement. The proposed quality metric can be applied to different visual secret sharing processes and provides practical benefits.

References

  1. M Naor, A Shamir, in Workshop on the Theory and Application of Cryptographic Techniques. Visual cryptography (Springer, Berlin Heidelberg, 1994), pp. 1–12.

  2. SJ Shyu, Image encryption by random grids. Pattern Recogn. 40(3), 1014–1031 (2007).

  3. X Wu, W Sun, Generalized random grid and its applications in visual cryptography. IEEE Trans. Inf. Forensic. Secur. 8(9), 1541–1553 (2013).

  4. X Wu, D Ou, L Dai, W Sun, in Proceedings of the First ACM Workshop on Information Hiding and Multimedia Security. XOR-based meaningful visual secret sharing by generalized random grids (ACM, 2013), pp. 181–190.

  5. P-L Chiu, K-H Lee, A simulated annealing algorithm for general threshold visual cryptography schemes. IEEE Trans. Inf. Forensic. Secur. 6(3), 992–1001 (2011).

  6. K-H Lee, P-L Chiu, Image size invariant visual cryptography for general access structures subject to display quality constraints. IEEE Trans. Image Process. 22(10), 3830–3841 (2013).

  7. NS Alex, LJ Anbarasi, in Electronics Computer Technology (ICECT), 2011 3rd International Conference on, 2. Enhanced image secret sharing via error diffusion in halftone visual cryptography (IEEE, 2011), pp. 393–397.

  8. E Myodo, K Takagi, S Miyaji, Y Takishima, in 2007 IEEE International Conference on Multimedia and Expo. Halftone visual cryptography embedding a natural grayscale image based on error diffusion technique (IEEE, 2007), pp. 2114–2117.

  9. R De Prisco, A De Santis, On the relation of random grid and deterministic visual cryptography. IEEE Trans. Inf. Forensic. Secur. 9(4), 653–665 (2014).

  10. T-H Chen, K-H Tsao, Visual secret sharing by random grids revisited. Pattern Recogn. 42(9), 2203–2217 (2009).

  11. W-P Fang, Friendly progressive visual secret sharing. Pattern Recogn. 41(4), 1410–1414 (2008).

  12. T-H Chen, K-H Tsao, Threshold visual secret sharing by random grids. J. Syst. Softw. 84(7), 1197–1208 (2011).

  13. X Wu, W Sun, Improving the visual quality of random grid-based visual secret sharing. Signal Process. 93(5), 977–995 (2013).

  14. X Wu, W Sun, Random grid-based visual secret sharing with abilities of OR and XOR decryptions. J. Vis. Commun. Image Represent. 24(1), 48–62 (2013).

  15. A Jaafar, A Samsudin, A survey of black-and-white visual cryptography models. Int. J. Digit. Content Technol. Appl. (JDCTA) 6(15), 237–249 (2012).

  16. O Kafri, E Keren, Encryption of pictures and shapes by random grids. Opt. Lett. 12(6), 377–379 (1987).

  17. C-N Yang, New visual secret sharing schemes using probabilistic method. Pattern Recogn. Lett. 25(4), 481–494 (2004).

  18. D Wang, L Zhang, N Ma, X Li, Two secret sharing schemes based on Boolean operations. Pattern Recogn. 40(10), 2776–2785 (2007).

  19. S Cimato, R De Prisco, A De Santis, Probabilistic visual cryptography schemes. Comput. J. 49(1), 97–107 (2006).

  20. H Zhang, X Wang, W Cao, Y Huang, Visual cryptography for general access structure using pixel-block aware encoding. J. Comput. 3(12), 68–75 (2008).

  21. TH Lin, NS Shiao, HH Chen, CS Tsai, in Frontier Computing. Theory, Technologies and Applications, 2010 IET International Conference on. A new non-expansion visual cryptography scheme with high quality of recovered image (IET, 2010), pp. 258–263.

  22. C-L Chou, A watermarking technique based on non-expansible visual cryptography (Thesis, Department of Information Management, National Central University, Taiwan, 2002).

  23. PA Eisen, DR Stinson, Threshold visual cryptography schemes with specified whiteness levels of reconstructed pixels. Des. Codes Crypt. 25(1), 15–61 (2002).

  24. ER Verheul, HC Van Tilborg, Constructions and properties of k out of n visual secret sharing schemes. Des. Codes Crypt. 11(2), 179–196 (1997).

  25. Z Wang, AC Bovik, HR Sheikh, EP Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004).

  26. S Nirmala, AAS Begum, Visual innovation towards secure environment. Int. J. Eng. Technol. Sci. 1(5), 75–79 (2014).

  27. B SaiChandana, S Anuradha, A new visual cryptography scheme for color images. Int. J. Eng. Sci. Technol. 2(6), 1997–2000 (2010).

  28. B SaiChandana, Visual cryptography scheme for color images. Int. J. Comput. Eng. Technol. 1(1), 207–212 (2010).

  29. CP Papageorgiou, M Oren, T Poggio, in Computer Vision, 1998. Sixth International Conference on. A general framework for object detection (IEEE, 1998), pp. 555–562.

  30. P Viola, M Jones, in Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, 1. Rapid object detection using a boosted cascade of simple features (IEEE, 2001), pp. I-511.

  31. PF Felzenszwalb, RB Girshick, D McAllester, D Ramanan, Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell. 32(9), 1627–1645 (2010).

  32. C Papageorgiou, T Poggio, A trainable system for object detection. Int. J. Comput. Vis. 38(1), 15–33 (2000).

  33. A Mohan, C Papageorgiou, T Poggio, Example-based object detection in images by components. IEEE Trans. Pattern Anal. Mach. Intell. 23(4), 349–361 (2001).

  34. A Vedaldi, V Gulshan, M Varma, A Zisserman, in 2009 IEEE 12th International Conference on Computer Vision. Multiple kernels for object detection (IEEE, 2009), pp. 606–613.

  35. L Itti, C Koch, E Niebur, A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20(11), 1254–1259 (1998).

  36. J Harel, C Koch, P Perona, in Advances in Neural Information Processing Systems. Graph-based visual saliency (2006), pp. 545–552.

  37. T Judd, F Durand, A Torralba, A benchmark of computational models of saliency to predict human fixations. Technical report MIT-CSAIL-TR-2012-001 (2012).

  38. P Kruizinga, N Petkov, Non-linear operator for oriented texture. IEEE Trans. Image Process. 8(10), 1395–1407 (1999).

  39. N Petkov, MA Westenberg, Suppression of contour perception by band-limited noise and its relation to non-classical receptive field inhibition. Biol. Cybern. 88(10), 236–246 (2003).


Authors’ contributions

FJ carried out the visual quality assessment study and the experiments on the novel quality assessment based on image processing techniques, and drafted the manuscript. BK designed the study process, participated in the analysis of the experimental results, and revised the manuscript critically for important intellectual content. Both authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author: Correspondence to Brian King.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Jiang, F., King, B. A novel quality assessment for visual secret sharing schemes. EURASIP J. on Info. Security 2017, 1 (2017). https://doi.org/10.1186/s13635-016-0053-0