
Estimating Previous Quantization Factors on Multiple JPEG Compressed Images


The JPEG compression algorithm has proven to be efficient in saving storage and preserving image quality, thus becoming extremely popular. On the other hand, the overall process leaves traces in the encoded signals which are typically exploited for forensic purposes: for instance, the compression parameters of the acquisition device (or editing software) could be inferred. To this aim, this paper proposes a novel technique to estimate the “previous” JPEG quantization factors of images compressed multiple times in the aligned case, by analyzing statistical traces hidden in Discrete Cosine Transform (DCT) histograms. Experimental results on double, triple and quadruple compressed images demonstrate the effectiveness of the proposed technique while unveiling further interesting insights.

1 Introduction

The life-cycle of a digital image is extremely complicated nowadays: images are acquired by smartphones or digital cameras, edited, shared through Instant Messaging platforms [1], etc. At each step, the image could go through a modification that potentially changes something without (in most cases) modifying the semantic content. This makes forensic analysis really difficult when trying to reconstruct the history of an image from the first acquisition device through each of the subsequent processing steps ([2, 3]). Even detecting whether an investigated image has been compressed only two times is a challenging task, namely Double Compression Detection ([4–6]). The problem is further complicated by considering the possibility to crop and/or resize images (e.g., aligned and non-aligned scenarios [7, 8]). State-of-the-art image forensics techniques usually make use of different underlying assumptions specifically tailored to the task ([7–10]). This becomes particularly relevant when dealing with multiple compressions [11]. The robust inference of how many times an image has been compressed is a problem investigated with techniques working mainly in the aligned scenario ([12–15]). In particular, [15] pushes detection up to triple compression by defining a three-class classification problem, demonstrated to work only for multiple compressed images with the same Quality Factor.

Once an image has been detected to be multiply compressed, the reconstruction of the history of the image itself becomes challenging. First Quantization Estimation (FQE) has been widely investigated for both the aligned and non-aligned cases w.r.t. different datasets in the double compressed scenario.

A first technique for FQE was proposed by Bianchi et al. ([16–18]). They proposed a method based on the Expectation Maximization algorithm to predict the most probable quantization factors of the primary compression over a set of candidates. Other techniques based on statistical considerations on Discrete Cosine Transform (DCT) histograms were proposed by Galvan et al. [4]. Their technique works effectively under specific conditions on double compressed images, exploiting the a-priori knowledge of the monotonicity of DCT coefficients through iterative histogram refinement. Strategies related to histogram analysis and filtering similar to Galvan et al. [4] have continued to be introduced up to the present day ([19–21]). Still, they lack robustness and are likely to work only in the double compression scenario and under specific conditions, exhibiting many limits. Recently, Machine Learning has been employed for the prediction task, producing many black-box models trained on statistical data from specific datasets. For instance, Lukáš and Fridrich [22] introduced a first attempt exploiting neural networks, further improved in [23] with considerations on errors similar to [4]. Finally, Convolutional Neural Networks (CNN) were also introduced in some works ([24–26]). CNNs have proven to be incredibly powerful in finding hidden correlations among data, specifically in images, but they are also very prone to overfitting, making all such techniques extremely dependent on the dataset used for training ([27]). This drawback is somewhat mitigated by employing as much training data as possible in wild conditions: in this way, Niu et al. [28] achieved top-rated results for both aligned and non-aligned double compression.

All the techniques reported above try to estimate the first quantization matrix in a double compression scenario, although estimating just the previous quantization matrix of multiple compressed images could be of extreme importance for investigations, in order to understand intermediate processing. When it comes to multiple compressions, the number of compression parameters involved at each step for every single image becomes huge. Machine Learning techniques need to see and consider almost all combinations during the training phase, and are thus not easily viable for this specific task. In this paper, an FQE technique based on simulations of multiple compression processes is proposed, designed to detect the most similar DCT histogram computed in the previous compression step. The method relies only on information coming from a single image, thus it does not need a training phase.

The proposed technique starts from the information of the (known) last quantization matrix (easily readable from the image file itself) in conjunction with simulations of compressions applied to the image itself with proper matrices. Experiments on 2, 3 and 4 times compressed images show the robustness of the technique, providing useful insights for investigators at specific compression parameter combinations. The remainder of this paper is organized as follows: Sections 2 and 3 describe the proposed approach and the datasets, Section 4 reports experimental results in different scenarios, and Section 5 concludes the paper.

2 Proposed Approach

Given a JPEG m-compressed (compressed m times) image I, the main objective of this work is the estimation of a reliable number k of quantization factors (in zig-zag order) of the 8×8 quantization matrix Qm−1 (i.e., the quantization table of the (m−1)-th compression), which can be defined as qm−1={q1,q2,…,qk}. The only information available about I is the last quantization matrix qm, which can be one of the standard JPEG quantization tables or a custom one ([29, 30]), available by accessing the JPEG file, and the DCT coefficients of each 8×8 block (Dref), extracted e.g. with the LibJpeg C library. No inverse-DCT operation is done at this step, thus no further rounding and truncation errors can be introduced. The set of the obtained DCT blocks and the respective coefficients (multiplied by qm) are collected to compute a histogram for each of the first k coefficients in classic zig-zag order, denoted by href,k(Dref) with k∈{1,2,…,64}. A square patch CI of size d×d is cropped from the image I previously decompressed (e.g., with the Python Pillow library), offset by 4 pixels in each direction in order to break the JPEG block structure [22]. CI is then used as input to simulate JPEG compressions, carried out with a certain number n>0 of constant 8×8 matrices Mi with i∈{1,2,…,n}. The parameter n is simply set to the greatest value that can be assumed by the quantization factors employed in the previous quantization step in the worst scenario (i.e., lowest Quality Factor). Once the parameter n is defined, the simulation of compression of CI is arranged as follows: for i=1,2,…,n, an 8×8 quantization matrix Mi with each element equal to i is defined, allowing to generate n compressed images C′I,i. The current (second) compression is then simulated by employing the known qm on each of the n images C′I,i, thus generating new compressed images C″I,i. Each C″I,i represents a simulation of compression with known previous and last quantization parameters.
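The simulation step above can be sketched in Python with NumPy and SciPy. Note this is a minimal re-implementation under stated assumptions (block-wise orthonormal DCT with level shift, row-major 8×8 tables), not the authors' exact code; the function names are illustrative.

```python
import numpy as np
from scipy.fftpack import dct, idct

def blockwise_dct(img):
    """DCT-II on non-overlapping 8x8 blocks, JPEG-style (level shift by 128)."""
    out = np.empty_like(img, dtype=np.float64)
    for r in range(0, img.shape[0], 8):
        for c in range(0, img.shape[1], 8):
            block = img[r:r+8, c:c+8].astype(np.float64) - 128.0
            out[r:r+8, c:c+8] = dct(dct(block.T, norm='ortho').T, norm='ortho')
    return out

def blockwise_idct(coeffs):
    """Inverse of blockwise_dct (undoes the level shift)."""
    out = np.empty_like(coeffs)
    for r in range(0, coeffs.shape[0], 8):
        for c in range(0, coeffs.shape[1], 8):
            block = idct(idct(coeffs[r:r+8, c:c+8].T, norm='ortho').T, norm='ortho')
            out[r:r+8, c:c+8] = block + 128.0
    return out

def simulate_double_compression(patch, i, qm):
    """Compress `patch` with the constant table M_i (all entries equal to i),
    decompress (introducing rounding/truncation), then quantize with the known
    last table q_m; returns the quantized DCT coefficients of C''_{I,i}."""
    # previous (simulated) compression with M_i
    c1 = np.round(blockwise_dct(patch) / i) * i
    pixels = np.clip(np.round(blockwise_idct(c1)), 0, 255)
    # last compression with q_m, tiled over all 8x8 blocks
    Q = np.tile(qm, (patch.shape[0] // 8, patch.shape[1] // 8))
    return np.round(blockwise_dct(pixels) / Q).astype(int)
```

Running `simulate_double_compression` for every i = 1, …, n yields the n coefficient sets Di from which the histograms hi,k are computed.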

As done with I, the DCT coefficients Di are extracted from C″I,i and the distributions hi,k(Di) are computed, with i∈{1,…,n}: a set of n distributions for each k-th coefficient, k∈{1,…,64}. The hi,k(Di) are then analytically compared, one by one, with the real one href,k(Dref) through the χ2 distance defined as follows:

$$ \chi^{2}(x,y)=\sum_{i=1}^{m} \frac{\left(x_{i} - y_{i}\right)^{2}}{x_{i} + y_{i}} $$

where x and y represent the distributions to be compared.
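As a sketch, the χ2 distance can be implemented directly; the guard against empty bins (where xi + yi = 0 the term is taken as 0) is an assumption not spelled out in the text:

```python
import numpy as np

def chi2_distance(x, y):
    """Chi-square distance between two histograms x and y of equal length.
    Bins where both counts are zero contribute nothing to the sum."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    den = x + y
    num = (x - y) ** 2
    # divide only where the denominator is positive; other bins stay 0
    return float(np.sum(np.divide(num, den, out=np.zeros_like(num), where=den > 0)))
```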

Finally, the estimation of qm−1={q1,q2,…,qk} can be done for each quantization factor qk as follows:

$$ q_{k}=\underset{i=1,\ldots,n}{\operatorname{argmin}}\; \chi^{2}\left(h_{i,k}\left(D_{i}\right),h_{ref,k}\left(D_{ref}\right)\right) $$

For the sake of clarity, the pseudo-code of the process is reported in Algorithm 1.
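As a hedged sketch of the final argmin step (assuming the n simulated histograms h_{i,k} and the reference histogram h_{ref,k} have already been computed on a common binning; the function name is illustrative):

```python
import numpy as np

def estimate_factor(h_ref_k, simulated_hists_k):
    """Estimate q_k as the (1-indexed) candidate i whose simulated histogram
    h_{i,k} minimizes the chi-square distance to the reference h_{ref,k}."""
    def chi2(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        den = x + y
        return np.sum(np.divide((x - y) ** 2, den,
                                out=np.zeros_like(den), where=den > 0))
    distances = [chi2(h, h_ref_k) for h in simulated_hists_k]
    return int(np.argmin(distances)) + 1  # candidate tables M_i are 1-indexed
```

Repeating this over the first k zig-zag coefficients yields the whole estimate qm−1.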

3 Datasets

The effectiveness of the proposed approach was demonstrated through experiments performed on four datasets (BOSSBase [31], RAISE [32], Dresden [33] and UCID [34]) for the first quantization estimation in the double compression scenario: patches of different dimensions were obtained by extracting a proper region from the central part of the original images. A new set of doubly compressed images was then created starting from the cropped images with a certain number of combinations of parameters in terms of crop size and compression quality factors (employing only standard quantization tables [29]).

Other experimental datasets were similarly created from RAISE, using custom quantization tables employed in Photoshop and the collection shared by Park et al. [35]. The first dataset was obtained from all RAISE images cropped into 64×64 patches, employing the 8 highest-quality Photoshop custom quantization tables (out of 12 total) for the first compression (where higher values correspond to better quality factors) and QF2∈{80,90}. The second dataset was built from 500 randomly picked full-size RAISE images by considering, for the first and second compression, a collection of 1070 custom tables with substantial differences from the standard ones, split into 3 quality clusters (LOW, MID, HIGH) computed from the mean of the first 15 DCT coefficients and selected randomly from the clusters in the compression phase.

Finally, a dataset for the multiple compression scenario was created starting from UCID [34], compressing patches of different sizes two, three and four times, with QFm∈{80,90} for the last compression and QF∈{60,65,70,75,80,85,90,95,100} for all previous compression steps.
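The repeated compressions used to build these datasets can be sketched with Pillow, compressing entirely in memory (the helper name and the example quality list are illustrative, not the exact dataset-generation code):

```python
import io
from PIL import Image

def jpeg_chain(img, qualities):
    """Sequentially JPEG-compress and decode a PIL image with the given
    quality factors, e.g. qualities=[65, 95, 90] for a triple compression."""
    for qf in qualities:
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=qf)
        buf.seek(0)
        img = Image.open(buf)
        img.load()  # force the decode before moving to the next step
    return img
```

The final buffer also carries the last quantization table qm, which the estimator reads directly from the JPEG stream.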

4 Experimental Results

To properly assess the performance of the proposed solution, a series of tests was conducted on the datasets described in the previous Section, in multiple compression scenarios. Four approaches were considered for comparison: Bianchi et al. [17], which is a milestone among analytical methods and has great similarity with the proposed approach; Galvan et al. [4] and Dalmia et al. [19], which achieve state-of-the-art results when QF1<QF2; and Niu et al. [28], which represents the state of the art among CNN-based methods with the best results to date. It is worth noting that Niu et al. [28] use a different trained neural model for each QF2 (80 and 90), while the proposed solution works for any QF2 with the same technique. Although [28] has been designed to work in a more general scenario and the related CNN has been trained considering also non-aligned double compression, it achieves the best results among CNN-based approaches in the aligned scenario as well.

As regards the implementations used for testing the above mentioned techniques: the publicly available Matlab implementation was employed for Bianchi et al. [17]; the code from the ICVGIP-2016.RAR archive available on Dr. Manish Okade’s website was employed for Dalmia et al. [19]; the models and implementation available on Github were employed for Niu et al. [28]; and finally an implementation from scratch was employed for Galvan et al. [4].

Experiments were carried out for both standard and custom tables, all employing 64×64 patches extracted from the RAISE dataset [32]. As reported in Table 1 and Figs. 1, 2, 3 and 4, the proposed method almost always outperforms the state of the art when the first quantization is computed with standard tables, while the results obtained on images employing Photoshop custom tables show a much greater gap in accuracy values (see Table 2 and Figs. 5, 6). Results on custom tables show better generalization capabilities w.r.t. [28] which, being CNN-based, seems to be dependent on the tables used for training.

Fig. 1

Average accuracy of the estimation for each DCT coefficient (first is DC) employing standard tables with QF1={55,60,65,75} and QF2=80. Plot shows results of our method, Bianchi et al. [17], Dalmia et al. [19], Galvan et al. [4] and Niu et al. [28]

Fig. 2

Average accuracy of the estimation for each DCT coefficient (first is DC) employing standard tables with QF1={60,65,75,80,85} and QF2=90. Plot shows results of our method, Bianchi et al. [17], Dalmia et al. [19], Galvan et al. [4] and Niu et al. [28]

Fig. 3

Average accuracy of the estimation for each DCT coefficient (first is DC) employing standard tables with QF1={55,60,65,75,80,85,90,95} and QF2=80. Plot shows results of our method, Bianchi et al. [17] and Niu et al. [28]

Fig. 4

Average accuracy of the estimation for each DCT coefficient (first is DC) employing standard tables with QF1={60,65,75,80,85,90,95} and QF2=90. Plot shows results of our method, Bianchi et al. [17] and Niu et al. [28]

Fig. 5

Average accuracy of the estimation for each DCT coefficient (first is DC) employing custom tables with QF2=80

Fig. 6

Average accuracy of the estimation for each DCT coefficient (first is DC) employing custom tables with QF2=90

Table 1 Accuracy obtained by the proposed approach compared to Bianchi et al. [17], Galvan et al. [4], Dalmia et al. [19] and Niu et al. [28] with different combinations of QF1/QF2, considering the standard quantization tables
Table 2 Accuracy obtained by the proposed approach compared to Bianchi et al. [17], Galvan et al. [4] and Niu et al. [28], employing custom tables for the first compression

Further tests have been performed to demonstrate the robustness of the proposed solution w.r.t. image content and acquisition conditions (e.g., different devices). Specifically, three datasets have been considered: Dresden [33], UCID [34] and BOSSBase [31]. Results reported in Tables 3, 4 and 5 confirm the effectiveness of the proposed solution. The impact of the resolution/crop pair is evident when observing the results on a single dataset (Table 4), where each increase in crop size corresponds to an improvement in accuracy. At the same time, considering the same crop size across different datasets (64×64 in Tables 1, 3, 4 and 5), the best results are obtained on the crops extracted from the dataset with the lowest resolution. A d×d crop extracted from a high-resolution image contains less information than one extracted from a smaller image, delivering a flatter histogram that is more difficult to discriminate.

Table 3 Accuracy obtained by the proposed approach on the Dresden [33] dataset with different patch sizes and QF1/QF2 combinations
Table 4 Accuracy obtained by the proposed approach on the UCID [34] dataset with different patch sizes and QF1/QF2 combinations
Table 5 Accuracy obtained by the proposed approach on the BOSSBase [31] dataset with different patch sizes and QF1/QF2 combinations

A final test regarding double compressed images has been performed in a much more challenging scenario: a dataset of 500 full-size RAISE images was employed for first and second compression by using 1070 custom tables collected by Park et al. [35] (as described in Section 3). For this test, the parameter of the proposed approach was n=136 which is the maximum value of the first 15 coefficients among the 1070 quantization tables in this context. Results obtained, in terms of accuracy, are reported in Table 6 and definitively demonstrate the robustness of the technique even in a wild scenario of non-standard tables.
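The rule used here to set n (the maximum over the candidate tables of the first 15 zig-zag quantization factors, as described in Section 2) can be sketched as follows; the `ZIGZAG15` index list assumes row-major 8×8 table storage and the function name is illustrative:

```python
import numpy as np

# flattened indices of the first 15 coefficients in JPEG zig-zag order
ZIGZAG15 = [0, 1, 8, 16, 9, 2, 3, 10, 17, 24, 32, 25, 18, 11, 4]

def pick_n(tables, k=15):
    """Set n to the largest value taken by the first k zig-zag quantization
    factors across all candidate tables (worst-case previous compression)."""
    flat = np.stack([np.asarray(t).reshape(64) for t in tables])
    return int(flat[:, ZIGZAG15[:k]].max())
```

Applied to the 1070 Park et al. tables, this selection yields the value n=136 reported above.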

Table 6 Accuracy of the proposed approach using RAISE full-size images compressed with custom tables from Park et al. [35]

4.1 Experiments with Multiple Compressions

The hypothesis that only one compression was performed before the last one could be a strong limitation. Thus, a method able to extract information about previous quantization matrices in a multiple compression scenario may be a considerable contribution. For this reason, the proposed approach was tested in a triple JPEG compression scenario, where the new goal was the estimation of the quantization factors related to the second compression matrix. Figure 7 shows the accuracy obtained employing different crop sizes (64×64, 128×128, 256×256) on all the combinations QF1/QF2/QF3 with QF1,QF2∈{60,65,70,75,80,85,90,95,100} and QF3∈{80,90}, with the method predicting the first 15 quantization factors of Q2.

Fig. 7

Overall accuracy of the proposed method on JPEG triple compressed images when trying to estimate the Qm−1 quantization factors. The first row corresponds to patch sizes 64×64, 128×128, 256×256 and QF3=80 respectively [(a), (b), (c)], while the second row is related to the same patch sizes and QF3=90 [(d), (e), (f)]

As shown in Fig. 7, the method in general achieves satisfactory results. Some limits are visible when the first compression is strong (low QF) and the second one has been performed with a high quality factor, QF2∈{90,95,100}. By analyzing the results in these particular cases, it is worth noting that the method estimates Qm−2 instead of Qm−1. Figure 8 shows the accuracies obtained in these last cases (QF2∈{90,95,100}) considering as correct estimations the quantization factors related to Qm−1 (a), Qm−2 (b) and both (c). Results shown in (c) demonstrate how the method is able to return information about quantization factors (not only those of Qm−1) even in this challenging scenario. Starting from this phenomenon, in order to discriminate whether a predicted factor qk belongs to Qm−2 or Qm−1, a simple test has been carried out on 100 triple compressed images with QF1=65, QF2=95 and QF3=90. Starting from the cropped image CI (see Section 2), we simulated, similarly to the double compression case of the proposed approach, all the possible triple compressions taking into account only two hypotheses (i.e., qk belongs to Q2 or Q1), considering a constant matrix built from qk as Q1 or Q2 respectively. The obtained simulated distributions were then compared with the real one through the χ2 distance (1). In this scenario, the proposed solution correctly estimated the Q1 quantization factors with an accuracy of 95.5%. Moreover, as a side effect of the triple compression, Q2 was also predicted with 76.6% accuracy.

Fig. 8

Overall accuracy of the proposed method on JPEG triple compressed images with high QF2 (90,95,98), patch size 256×256 and QF3=90, considering as ground truth (i.e., correct estimations) the quantization factors related to QF2 (a), QF1 (b) and both (c)

The insights found in the triple compression experiments were confirmed on 4 times JPEG compressed images (Fig. 9). Even in this scenario, if high QFs are employed in the third compression (e.g., 90, 95, 100), the Q2 factors are actually predicted, similarly to what was described before. Besides, if both QF3 and QF2 are high, the Q1 elements can be estimated, confirming how the method in each case obtains information about previous compressions.

Fig. 9

Accuracy of the proposed method on JPEG 4-compressed images employing all the combinations QF1,QF2,QF3∈{60,65,70,75,80,85,90,95,100} and QF4=90, considering QF3 as ground truth (a). Further analyses have been conducted with QF3∈{90,95,100} (low accuracy regions): (b) and (c) show the results employing QF2 and QF1 as ground truth respectively

The proposed method estimates the strongest previous compression, which is basically the behavior of most First Quantization Estimation (FQE) methods. For this reason, a comparison was made with [28] on triple compressed images considering Qm−1 as the correct estimation. Figure 10 reports the accuracy in the QF3=90 scenario, showing how our method (left graph) maintains good results even in triple compression while [28] has a significant performance drop compared to double compression.

Fig. 10

Accuracy of our method (left) and [28] (right) on JPEG triple compressed images employing all the combinations QF1,QF2{60,65,70,75,80,85,90,95,100} and QF3=90 considering QF2 as ground truth

4.2 Cross JPEG Validation

Recent works in the literature demonstrate how different JPEG implementations could employ various Discrete Cosine Transform implementations and mathematical operators to perform the floating-point to integer conversion of DCT coefficients [36].

In order to further validate the proposed method, a cross JPEG implementation test was conducted, considering two different libraries (Pillow and libjpeg-turbo) and 2 DCT configurations to compress the input images, while Pillow was used to simulate the double compression described in the pipeline. The test was performed using the same 8156 RAISE images cropped to 64×64 and double compressed by means of the aforementioned JPEG implementations with QF1∈{60,65,70,75,80,85,90,95} and QF2=90. Results reported in Table 7 confirm the overall robustness of the proposed solution with respect to different JPEG implementations.

Table 7 Accuracy obtained employing different JPEG implementations with QF2=90

5 Conclusions

In this paper a novel method for previous quantization factor estimation was proposed. The technique outperforms the state of the art in the aligned double compressed JPEG scenario, specifically in the challenging cases where custom JPEG quantization tables are involved. The good results obtained even in the multiple compression scenarios (up to 4 compressions) highlight that previous compressions leave detectable traces in the DCT coefficient distributions. Furthermore, the use of these distributions for previous quantization estimation makes the proposed technique simple and relatively cheap computationally, avoiding extremely computationally hungry techniques while maintaining the same accuracy results. The strengths of the proposed method compared to machine learning approaches are its simplicity and the fact that it does not need a training set.

Availability of data and materials

Not applicable.










Abbreviations

JPEG: Joint Photographic Experts Group

DCT: Discrete Cosine Transform

FQE: First Quantization Estimation

CNN: Convolutional Neural Network

QF: Quality Factor


  1. O. Giudice, A. Paratore, M. Moltisanti, S. Battiato, in International Conference on Image Analysis and Processing. A classification engine for image ballistics of social data (Springer, 2017), pp. 625–636.

  2. S. Battiato, G. Messina, in Proc. of the First ACM Workshop on Multimedia in Forensics. Digital forgery estimation into DCT domain: a critical analysis, (2010), pp. 389–399.

  3. H. Farid, Digital Image Ballistics from JPEG Quantization: A Followup Study. Computer Science Technical Report TR2008-638 (2008).

  4. F. Galvan, G. Puglisi, A. R. Bruna, S. Battiato, First quantization matrix estimation from double compressed JPEG images. IEEE Trans. Inf. Forensics Secur. 9(8), 1299–1310 (2014).


  5. O. Giudice, F. Guarnera, A. Paratore, S. Battiato, in Proc. of International Conference on Image Analysis and Processing. 1-D DCT domain analysis for JPEG double compression detection (Springer, 2019), pp. 716–726.

  6. E. Kee, M. K. Johnson, H. Farid, Digital image authentication from JPEG headers. IEEE Trans. Inf. Forensics Secur. 6(3), 1066–1075 (2011).


  7. A. Piva, An overview on image forensics. International Scholarly Research Notices. 2013 (2013).

  8. L. Verdoliva, Media forensics and deepfakes: An overview. IEEE J. Sel. Top. Signal Process. 14(5), 910–932 (2020).


  9. S. Battiato, O. Giudice, A. Paratore, in Proc. of the 17th International Conference on Computer Systems and Technologies 2016. Multimedia forensics: discovering the history of multimedia contents (ACM, 2016), pp. 5–16.

  10. M. C. Stamm, M. Wu, K. J. R. Liu, Information forensics: An overview of the first decade. IEEE Access. 1, 167–200 (2013).


  11. Z. Fan, R. L. De Queiroz, Identification of bitmap compression history: JPEG detection and quantizer estimation. IEEE Trans. Image Process. 12(2), 230–235 (2003).


  12. S. Mandelli, N. Bonettini, P. Bestagini, V. Lipari, S. Tubaro, in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Multiple JPEG compression detection through task-driven non-negative matrix factorization (IEEE, 2018), pp. 2106–2110.

  13. C. Pasquini, P. Schöttle, R. Böhme, G. Boato, F. Pèrez-Gonzàlez, in Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. Forensics of high quality and nearly identical JPEG image recompression, (2016), pp. 11–21.

  14. V. Verma, N. Agarwal, N. Khanna, DCT-domain deep convolutional neural networks for multiple JPEG compression classification. Signal Process. Image Commun. 67, 22–33 (2018).


  15. H. Wang, J. Wang, J. Zhai, X. Luo, Detection of triple JPEG compressed color images. IEEE Access. 7, 113094–113102 (2019).


  16. T. Bianchi, A. De Rosa, A. Piva, in IEEE International Conference on Acoustics, Speech and Signal Processing. Improved DCT coefficient analysis for forgery localization in JPEG images, (2011), pp. 2444–2447.

  17. T. Bianchi, A. Piva, Image forgery localization via block-grained analysis of JPEG artifacts. IEEE Trans. Inf. Forensics Secur. 7(3), 1003 (2012).


  18. A. Piva, T. Bianchi, in Proc. of 18th IEEE International Conference on Image Processing (ICIP), 2011. Detection of non-aligned double JPEG compression with estimation of primary compression parameters (IEEE, 2011), pp. 1929–1932.

  19. N. Dalmia, M. Okade, in Proc. of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing. First quantization matrix estimation for double compressed JPEG images utilizing novel DCT histogram selection strategy, (2016), pp. 1–8.

  20. T. H. Thai, R. Cogranne, F. Retraint, T. Doan, JPEG quantization step estimation and its applications to digital image forensics. IEEE Trans. Inf. Forensics Secur. 12(1), 123–133 (2016).


  21. H. Yao, H. Wei, T. Qiao, C. Qin, JPEG quantization step estimation with coefficient histogram and spectrum analyses. J. Vis. Commun. Image Represent. 69, 102795 (2020).


  22. J. Lukáš, J. Fridrich, in Proc. of the Digital Forensic Research Workshop. Estimation of primary quantization matrix in double compressed JPEG images, (2003), pp. 5–8.

  23. G. Varghese, A. Kumar, Detection of double JPEG compression on color image using neural network classifier. Int. J. 3, 175–181 (2016).


  24. M. Barni, L. Bondi, N. Bonettini, P. Bestagini, A. Costanzo, M. Maggini, B. Tondi, S. Tubaro, Aligned and non-aligned double JPEG detection using convolutional neural networks. J. Vis. Commun. Image Represent. 49, 153–163 (2017).


  25. T. Uricchio, L. Ballan, R. Caldelli, I. Amerini, in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Localization of JPEG double compression through multi-domain convolutional neural networks, (2017), pp. 53–59.

  26. Q. Wang, R. Zhang, Double JPEG compression forensics based on a convolutional neural network. EURASIP J. Inf. Secur. 2016(1), 23 (2016).


  27. A. Plebe, G. Grasso, The unbearable shallow understanding of deep learning. Mind. Mach. 29(4), 515–553 (2019).


  28. Y. Niu, B. Tondi, Y. Zhao, M. Barni, Primary quantization matrix estimation of double compressed JPEG images via CNN. IEEE Signal Process. Lett. 27, 191–195 (2020).


  29. G. Hudson, A. Léger, B. Niss, I. Sebestyén, JPEG at 25: Still going strong. IEEE MultiMedia. 24(2), 96–103 (2017).


  30. G. K. Wallace, The JPEG still picture compression standard. Commun. ACM. 34(4), 30–44 (1991).


  31. P. Bas, T. Filler, T. Pevnỳ, in International Workshop on Information Hiding. Break our steganographic system: the ins and outs of organizing BOSS (Springer, 2011), pp. 59–70.

  32. D. Dang-Nguyen, C. Pasquini, V. Conotter, G. Boato, in Proc. of the 6th ACM Multimedia Systems Conference. RAISE: a raw images dataset for digital image forensics, (2015), pp. 219–224.

  33. T. Gloe, R. Böhme, The Dresden image database for benchmarking digital image forensics. J. Digit. Forensic Pract. 3(2–4), 150–159 (2010).


  34. G. Schaefer, M. Stich, in Storage and Retrieval Methods and Applications for Multimedia 2004, 5307. UCID: An uncompressed color image database (International Society for Optics and Photonics, 2003), pp. 472–480.

  35. J. Park, D. Cho, W. Ahn, H. K. Lee, in Proceedings of the European Conference on Computer Vision (ECCV). Double JPEG detection in mixed JPEG quality factors using deep convolutional neural network, (2018), pp. 636–652.

  36. S. Agarwal, H. Farid, in 2017 IEEE Workshop on Information Forensics and Security (WIFS). Photo forensics from JPEG dimples (IEEE, 2017), pp. 1–6.




Author information


Authors’ contributions

All authors contributed equally. All authors read and approved the final manuscript.

Authors’ information

Sebastiano Battiato (M’04–SM’06) received the degree (summa cum laude) in computer science from the University of Catania, in 1995, and the Ph.D. degree in computer science and applied mathematics from the University of Naples, in 1999. From 1999 to 2003, he was the Leader of the “Imaging” Team, STMicroelectronics, Catania. He joined the Department of Mathematics and Computer Science, University of Catania, as an Assistant Professor, an Associate Professor, and a Full Professor, in 2004, 2011, and 2016, respectively. He has been the Chairman of the Undergraduate Program in Computer Science, from 2012 to 2017, and a Rector’s Delegate of education (Postgraduates and Ph.D.), from 2013 to 2016. He is currently a Full Professor of computer science with the University of Catania, where he is also the Scientific Coordinator of the Ph.D. Program in Computer Science. He is involved in research and directorship with the Image Processing Laboratory (IPLab). He coordinates IPLab’s participation in large-scale projects funded by national and international funding bodies and private companies. He has edited six books and coauthored about 200 articles in international journals, conference proceedings, and book chapters. He is a co-inventor of 22 international patents. His current research interests include computer vision, imaging technology, and multimedia forensics. Prof. Battiato has been a regular member of numerous international conference committees. He was a recipient of the 2017 PAMI Mark Everingham Prize for the series of annual ICVSS schools and the 2011 Best Associate Editor Award of the IEEE Transactions on Circuits and Systems for Video Technology. He has been the Chair of several international events, including ICIAP 2017, VINEPA 2016, ACIVS 2015, VAAM2014-2015-2016, VISAPP2012-2015, IWCV2012, ECCV2012, ICIAP 2011, ACM MiFor 2010-2011, and SPIE EI Digital Photography 2011-2012-2013.
He has been a guest editor of several special issues published in international journals. He is an Associate Editor of the SPIE Journal of Electronic Imaging and the IET Image Processing journal. He is the Director (and Co-Founder) of the International Computer Vision Summer School (ICVSS). He is a reviewer of several international journals. He participated as a principal investigator in many international and national research projects.

Oliver Giudice received his degree in Computer Engineering (summa cum laude) in 2011 at the University of Catania and his Ph.D. in Maths and Computer Science in 2017, defending a thesis entitled “Digital Forensics Ballistics: Reconstructing the source of an evidence exploiting multimedia data”. From 2011 to 2014 he was involved in various research projects at the University of Catania in collaboration with the Civil and Environmental Engineering department and the National Sanitary System. In 2014 he started his job as a researcher at the IT Department of Banca d’Italia. Since 2011 he has collaborated with the IPLab, working on Multimedia Forensics topics and being involved in various forensics cases as a Digital Forensics Expert. Since 2016 he has been a co-founder of “iCTLab s.r.l.”, a spin-off of the University of Catania that works in the field of Digital Forensics, Privacy and Security consulting and software development. His research interests include machine learning, computer vision, image coding, urban security, crypto-currencies and multimedia forensics.

Francesco Guarnera received the bachelor’s degree (summa cum laude) in computer science from the Università degli Studi di Catania, in 2009, and the master’s degree (summa cum laude) in computer science from the Università degli Studi di Catania, in 2018. From 2009 to 2016 he was a developer/analyst/project manager of web applications based on AMP (Apache-MySQL-PHP). He is currently a Ph.D. student with the Università degli Studi di Catania and ICTLab s.r.l., and works on Digital Forensics and Computer Vision.

Giovanni Puglisi received the M.S. degree in computer science engineering (summa cum laude) from Catania University, Catania, Italy, in 2005, and the Ph.D. degree in computer science in 2009. From 2009 to 2014, he worked at the University of Catania, Italy, as a post-doc researcher. He joined the Department of Mathematics and Computer Science, University of Cagliari, as an associate professor in 2014. His research interests include image/video enhancement and processing, camera imaging technology and multimedia forensics. He has edited one book and coauthored more than 50 papers in international journals, conference proceedings and book chapters. He is a co-inventor of several patents and serves as a reviewer for different international journals and conferences.

Corresponding author

Correspondence to Sebastiano Battiato.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit

About this article


Cite this article

Battiato, S., Giudice, O., Guarnera, F. et al. Estimating Previous Quantization Factors on Multiple JPEG Compressed Images. EURASIP J. on Info. Security 2021, 8 (2021).