Abstract
Introduction
Image compression techniques are concerned with reducing the size (information content) of an image. In the current technological age, the world faces an enormous amount of information. Handling such volumes of data often causes problems of storage and efficient retrieval, and its practical use becomes difficult.1,2 Although modern technology has largely removed the storage constraint, when information is handled by portable devices, such as cameras connected to the Internet or communicating with other systems, communication bandwidth and storage space become the primary concerns, for example, in high-definition television (HDTV). To achieve low storage requirements and communication bandwidth, information should be compressed. Based on its information content, image data can be classified into redundant, irrelevant, and useful information. Information that is deterministic and can be regenerated without any significant loss in the image is called redundant information; on the contrary, information that contains a large amount of detail in the image and cannot be significantly determined perceptually is called irrelevant information. Information that is neither redundant nor irrelevant is called useful information. Humans normally perceive decompressed images visually; therefore, their fidelity is subject to the capabilities and limitations of the Human Visual System.
The contents of an image consist of information along with redundant data, which is encapsulated with the original information in the image. The portion of data that must be kept permanently to retain the original image is known as information. To achieve a high image compression rate, one should reduce the redundant (duplicate) data. During reconstruction of an image, the redundant information is needed to recover the original image, so the system must be able to re-insert the redundant information where it was removed. This process of removing redundant information is known as image compression. Various techniques have been suggested by researchers in medical sciences, computer networking and communication, image processing, and many other fields to achieve high image compression. Monika et al. 3 presented a compressed sensing method for medical image compression based on a coefficient random permutation compression system. Karthikeyan and Palanisamy 4 presented an optimized discrete wavelet transform (DWT) with a Huffman encoder. Coutinho et al. 5 presented the discrete Tchebichef transform for image compression in embedded systems. Lin and Chen 6 proposed an algorithm that uses mutual correlation information with human vision for medical image compression. Most of these methods were challenged for their high implementation cost and complexity.
This paper presents a low-cost and efficient algorithm for achieving a high image compression rate with only a minor loss of redundant information. The proposed method consists of an optimized Haar wavelet transformation (HWT). The HWT is applied along with thresholding and some other filtering techniques (averaging and subtracting) on Joint Photographic Experts Group (JPEG) images to achieve a high compression rate with very minor loss of information. The applicability of this technique is tested by calculating its results and comparing them with the discrete cosine transform (DCT) and run length encoding (RLE) on the basis of peak signal-to-noise ratio (PSNR) value and compression rate. The system provides 97.7% image compression for the HWT, which is much better than DCT with 50% image compression and RLE with an 80.9% compression rate. The proposed system provides a high PSNR value of −35.01, much better than the DCT value of −42.05 and the RLE value of −39.03. This high PSNR value indicates that the Haar wavelet transform can reconstruct the original image with negligible loss of the original information in the image. These algorithms were chosen because they incur only a minor loss of information in the image, require less implementation effort, and demand low bandwidth.
The rest of the paper is organized as follows: section "Related work" presents state-of-the-art techniques and algorithms used for image compression. The proposed methodology of the research work, along with background details, is explained in section "Proposed methodology". The comparisons of all algorithms based on efficiency and compression are discussed in section "Efficiency and compression-based comparison of each technique", followed by the conclusion in section "Conclusion."
Related work
The vast digital information of the modern digital world and the rapid growth of digital imaging applications, like multimedia, satellite communication, HDTV, tele-conferencing, and desktop publishing, compel researchers to develop a standard and effective algorithm for image compression. To address this challenge, many compression algorithms have been suggested, such as JPEG, JPEG 2000, DWT, Huffman coding, quantization, lossy compression, lossless compression, DCT, and RLE. 7 However, researchers are still working to develop a more suitable and more effective algorithm for achieving high compression and for decompressing the original information back. Some of this work is presented in this section. Some research papers present the use of a one-dimensional (1D) DCT technique to achieve a compressed model for data compression. P.A. Kumar presents the use of wavelet transformation to compress images in VHDL and Verilog. 8
Gupta and Garg 7 presented the use of 2D DCT for the compression of images. Porwik et al. 9 and Kaur et al. 10 present the use of different wavelet techniques to achieve image compression; their work shows the efficiency and results of different wavelet transformations. Similar work is presented by Tamboli et al. 11 Both DCT and wavelet transforms have been investigated and implemented for image compression purposes. 12 The choice between these methods depends on whether high transmission speed or high decompression quality is required. Lossless algorithms give back the exact original information on decompression and suit binary data such as executable files and documents, but they achieve only limited compression on images. In both wavelet- and DCT-based image compression, some finer detail is sacrificed for the sake of saving storage space or bandwidth. The DCT technique was first used for image compression in 1974 due to its simplicity. 13 Although it is useful in some image compression standards, such as lossy image compression techniques, the main problems with the DCT technique are as follows:
Its high loss of quality (image quality) during compression.
Its effect (blurring) can be visually observed during high compression.
Mohan and Kiran 14 presented the concept of DWT for image fusion. Kant et al. 15 presented a wavelet transform based on a local correlation-based fusion strategy for the enhancement of image details. Kolekar et al. 16 presented an introductory paper on wavelet-based image processing and its applications. Sometimes, Fourier methods are not useful in recapturing a signal, 1 especially when the signal is highly non-smooth and lacks the Fourier information required to reconstruct the signal locally. Wavelet analysis plays a key role in such cases because of its simplicity, and it is very effective in dealing with the conventional properties of a signal. In particular, the Haar wavelet transform allows efficient analysis of the image in the spectral domain. 10 Karthikeyan and Palanisamy 4 presented the concept of optimized DWT and Huffman coding for image compression. Ambadekar et al. 17 presented the concept of encryption and DWT for copyright protection based on digital image watermarking.
Yu et al. 18 developed a novel approach for alcoholism detection based on magnetic resonance imaging (MRI). Features are calculated using the Haar wavelet transform and principal component analysis based on the contrast equalization of brain slices, and a back-propagation neural network is suggested for classification. Saha et al. 19 present the concept of wavelet and curvelet transforms for biomedical image processing. Ali et al. 20 suggested the concept of DCT for facial emotion recognition. Jin et al. 21 presented the concept of DCT, discrete stationary wavelet transform (DSWT), and local spatial frequency techniques for infrared and visual fusion methods. Priyadharsini et al. 22 proposed wavelet transformation for contrast enhancement of underwater acoustic images. Ho et al. 23 proposed a discrete wavelet transform-based deoxyribonucleic acid (DNA) sequence representation for text recognition and classification. The experimental results for the proposed algorithm are calculated on the standard PAN database (Plagiarism Analysis, Authorship Identification, and Near-Duplicate Detection).
Proposed methodology
This section describes the proposed methodology for JPEG image compression for measurement and metrology in materials for advanced manufacturing processes, along with background details of image compression and how compression is achieved in images. Applications of image compression in other fields, as well as in advanced manufacturing processes, are mentioned. In addition, this section details the compression techniques followed in this research work: DCT, the RLE scheme, and the Haar wavelet transform.
The proposed methodology for JPEG image compression can be implemented in six main steps as shown in Figure 1. In the proposed methodology, input images (JPEG) were fed to the designed compression system which results in compressed images. Then, the compressed images were given to the decompression system to reconstruct the original images. The applicability of the proposed algorithm is tested by comparing the information present in the recovered image and in the original image, and also the compression ratio (CR) achieved.

Block diagram of operating system.
From Figure 1, it is clear that an input image is fed to the compression system, which converts the original image into a bitstream and then applies a compression technique to obtain a compressed image. The decompression system applies the reciprocal of the compression technique to reconstruct an accurate resultant image. Finally, the compression rate and the efficiency of the system in reconstructing/decompressing the original image are calculated.
From the block diagram shown in Figure 1, it is concluded that image compression and decompression systems depend solely on the techniques/algorithms followed for image compression and decompression. If the algorithm can compress the image to a minimal size on the sender side and reconstruct the original image properly on the receiver side, then the algorithm followed is 100% accurate. In practice, however, no such system/algorithm exists that provides 100% accurate results, because every system has limitations that affect its accuracy. The algorithm proposed in this research work results in a high compression rate with 97.7% reconstruction capability; this high accuracy in reconstructing the original image means only a minor loss in the original image on the receiver side.
Image compression coding
Image compression coding aims to convert an image into a digital form called a bitstream. The aim is not only to store the image in a compact form but also to display the decoded image properly. The image compression system involves two steps: image encoding and image decoding. The image encoder converts the image into digital form, while the decoder converts the bitstream back into the original image. Figure 2 shows the process of encoding and decoding an image.

Image compression basic flow diagram.
The CR of a system can be calculated using the formula shown in equation 1
where n1 is the number of bits in the original image and n2 is the number of bits in the encoded image.
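The displayed formula for equation 1 did not survive in this version; assuming the standard definition CR = n1/n2 (the ratio of original to encoded bits), a minimal sketch is:

```python
def compression_ratio(n1, n2):
    """Compression ratio CR = n1 / n2, where n1 is the number of bits
    (or bytes) in the original image and n2 in the encoded image."""
    return n1 / n2

# Using the file sizes reported later in the paper (59.8 KB -> 44.5 KB):
print(round(compression_ratio(59.8, 44.5), 3))  # 1.344
```

A ratio above 1 indicates that the encoded representation is smaller than the original.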
In order to evaluate the efficiency and compression rate of the image compression technique, it is required to define an equation which can calculate the difference between the recovered and the input images. Two equations are given for this process: one is called mean square error (MSE) and the other is PSNR, as expressed in equations 2 and 3, respectively
where f(x, y) denotes the original image and f′(x, y) the decoded image. In most compression algorithms, it is required to maximize the PSNR value and minimize the MSE to gain a high compression rate with a minor loss of required detail in an image. The correlation between neighboring pixels is very high; that is, a pixel's properties are highly similar to those of its adjacent pixels. Compression is achieved by eliminating this correlation between pixels. Researchers have suggested variable-length encoding, predictive coding, orthogonal transforms, and subband coding to eliminate this inter-pixel similarity and achieve compression. In this paper, we focus on the RLE technique and the Haar wavelet transform to remove this correlation between pixels and achieve a high compression rate with minor loss of finer detail. Figure 3 represents a generalized form of the image compression system. Each of these steps is discussed in detail in the following sections.
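Since the displayed forms of equations 2 and 3 are not shown here, the following sketch uses the conventional definitions of MSE and PSNR for 8-bit images (peak value 255); the tiny 2 × 2 example matrices are illustrative only:

```python
import math

def mse(f, g):
    """Mean square error between two equal-sized images given as 2D lists."""
    rows, cols = len(f), len(f[0])
    return sum((f[x][y] - g[x][y]) ** 2
               for x in range(rows) for y in range(cols)) / (rows * cols)

def psnr(f, g, peak=255):
    """Peak signal-to-noise ratio in dB; higher means better fidelity."""
    e = mse(f, g)
    return float("inf") if e == 0 else 10 * math.log10(peak ** 2 / e)

original = [[52, 55], [61, 59]]
decoded  = [[52, 54], [61, 58]]
print(mse(original, decoded))               # 0.5
print(round(psnr(original, decoded), 2))    # 51.14
```

Identical images give MSE = 0 and an infinite PSNR; heavier compression loss drives MSE up and PSNR down.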

Generalized encoding form of image compression.
RLE
Data files often hold the same character repeated many times in a row. For example, text documents use multiple spaces to indent paragraphs, separate sentences, format charts and tables, and so on. Digital signals can also have runs of the same value, indicating that the signal remains constant. Likewise, an image of the night sky will contain long runs of characters representing the black background, and digitized music holds long runs of zeroes between songs. RLE is the simplest technique for compressing this type of data. Figure 4 illustrates a conventional model of the RLE scheme. In this diagram, the input data contain frequent zeroes in the stream. Each time a zero occurs in the input stream, two values are added to the output stream (the encoded/compressed file): a first value of zero that acts as a flag to indicate that compression has started, and a second value that gives the number of occurrences of the flag/zero value, shown in red in the encoded section of Figure 4. If the average run length is greater than two, compression takes place; otherwise, the encoded file becomes larger than the selected input file.

Conventional RLE encoding model.
This technique can also be implemented for consecutive characters or alphanumeric strings, as explained below. It works by storing a consecutive run as a single pixel element together with a count (the number of consecutive occurrences of the same pixel). For example, the string SSSSSSSSSSSSSSSSSSSS reserves 20 bytes of memory (if each character takes one byte of storage), because there are 20 characters in the string; but the same data can be encoded as 20S using RLE, requiring only two bytes of storage. 20S is known as a "run packet": the first byte stores the run count, 20, and the second the run value, S. A string of multiple different characters, SSSSSaaa66666666X000000000, can be encoded using RLE as 5S3a861X90. Before RLE encoding, 26 bytes are needed to store this string; after encoding, only 10 bytes are needed.
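The count-value scheme described above can be sketched as follows; this is a minimal illustration that emits an explicit count for every run, including runs of length one, rather than a production RLE codec:

```python
def rle_encode(s):
    """Encode runs of repeated characters as <count><char> run packets."""
    if not s:
        return ""
    out, run_char, count = [], s[0], 0
    for ch in s:
        if ch == run_char:
            count += 1
        else:
            out.append(f"{count}{run_char}")  # close the finished run
            run_char, count = ch, 1
    out.append(f"{count}{run_char}")          # close the final run
    return "".join(out)

print(rle_encode("S" * 20))                      # 20S
print(rle_encode("SSSSSaaa66666666X000000000"))  # 5S3a861X90
```

Note that single characters such as X cost two bytes (1X), which is why RLE expands data whose runs are short, as discussed below for natural-language text.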
Many researchers have suggested the use of RLE schemes for different purposes in image processing and computer vision techniques. Qin et al. 24 developed enhanced RLE scheme for image compression in image encryption scheme using QR code. Brown et al. 25 proposed RLE technique for the awareness of direct memory access engine. This engine was working on scratchpad-enabled multicore processors. Lawrence 26 suggested the concept of RLE scheme for compression of supplemental data in embedding systems. Kondo et al. 27 presented the concept of variable length coding and decoding for moving pictures. Husseen et al. 28 presented the concept of enhanced RLE scheme for image compression. Aggarwal and Srivastava 29 presented an overview paper on different compression techniques. Liaghati et al. 30 presented the concept of biased RLE for bilevel classification of hyperspectral images.
Unfortunately, this technique fails to compress natural-language text, because natural languages contain only short repetitive runs or no repetitive runs at all. The technique is efficient for compressing images that contain redundant information. The resultant image of RLE is shown in Figure 5(b). On the sender side, the technique compresses long consecutive runs of repeating items by sending only one item along with a counter that gives the number of times the specific bit is repeated in the bitstream. On the receiver side, the inverse process is applied to reconstruct/decompress the original image.
After applying the RLE scheme, the original image is reduced to a considerable degree (from 59.8 to 44.5 KB). Figure 5(a) and (b) shows the original and the resultant image after applying the RLE method. The loss of image quality and finer detail is visible in Figure 5(b); one can easily observe this loss of information in the resultant image with the naked eye. This loss of required information demonstrates RLE's inability to faithfully reconstruct the original image after encoding. By contrast, when the HWT is applied to the same image, a compressed size of 25.1 KB is achieved, with only a minor loss of finer detail in the image.

(a) Original: 59.8 KB and (b) resultant: 44.5 KB.
DCT
A DCT is defined as an algorithm that represents a finite sequence of data points as a sum of cosine functions oscillating at different frequencies. In this modern technological age, there has been keen interest in the use of orthogonal transformations in digital image processing. The DCT is widely used in image processing applications such as pattern recognition and Wiener filtering. 31 Ramesh Babu and Srinivasa Rao 32 compared the results of DCT, DWT, and the stationary wavelet transform (SWT) for satellite image fusion. The DCT helps in classifying the image into multiple sections of varying importance (with regard to the image's visual perception). It performs operations similar to the discrete Fourier transform on a signal or an image, converting it from the spatial domain to the frequency domain, as shown in Figure 6.

Conversion from time domain to frequency domain.
First, an encoding process is applied, in which the DCT encodes the digital image. The general equation for the 1D DCT of N data items is given in equation 4
For decoding, an inverse operation is performed. The inverse 1D discrete cosine transformation can simply be calculated as X−1(u), as represented in equation 5
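The displayed forms of equations 4 and 5 are not shown here; the sketch below uses the standard orthonormal DCT-II and its inverse (DCT-III), which is the usual textbook formulation and may differ in scaling from the paper's exact equations:

```python
import math

def dct_1d(x):
    """1D DCT-II with orthonormal scaling: alpha(0) = sqrt(1/N), else sqrt(2/N)."""
    N = len(x)
    return [math.sqrt((1 if u == 0 else 2) / N) *
            sum(x[n] * math.cos((2 * n + 1) * u * math.pi / (2 * N))
                for n in range(N))
            for u in range(N)]

def idct_1d(X):
    """Inverse 1D DCT (DCT-III), recovering the original sequence."""
    N = len(X)
    return [sum(math.sqrt((1 if u == 0 else 2) / N) * X[u] *
                math.cos((2 * n + 1) * u * math.pi / (2 * N))
                for u in range(N))
            for n in range(N)]

# Round trip: transform a sample 8-point signal and invert it.
signal = [52, 55, 61, 66, 70, 61, 64, 73]
restored = idct_1d(dct_1d(signal))
print(all(abs(a - b) < 1e-9 for a, b in zip(signal, restored)))  # True
```

Compression with the DCT comes not from the transform itself, which is invertible, but from quantizing or discarding the high-frequency coefficients before decoding.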
After applying the discrete cosine transformation to the JPEG image, a resultant image is obtained as shown in Figure 7(a) and (b). Visually, DCT performs better than RLE and produces a more compressed resultant image: RLE produces a compressed image of 44.5 KB, while DCT produces one of 42.2 KB. Still, the quality of the image is degraded at the receiver end.

(a) Original: 59.8 KB and (b) resultant: 42.2 KB.
Wavelets
Wavelets are mathematical functions that represent data as a series of wavelets known as "detail levels" or "frequency components." Wavelet techniques were originally used for hierarchically decomposing functions in fields such as approximation theory, physics, and signal processing, but are now applied in many areas of computer graphics, such as image compression and editing, as well as in biometrics. Biomedical researchers often apply a particular wavelet transform suited to the needs and nature of a specific problem, and the concept of wavelet application has now been successfully implemented in many biomedical problems.33–35
Wavelet theory was developed as a result of studying multi-resolution analysis, which provides information about the nature and relationship of time and frequency at different scales with high resolution. 33 Wavelet transforms convert an image into a series of wavelets and play a vital role in the field of signal and image processing. They are helpful for compressing an image with low storage space requirements while retaining the more significant information of the image. 9
In almost every wavelet transformation application, the main goal is that the wavelet coefficients should be able to reconstruct or synthesize the original signal properly. In pattern recognition problems, one can use the same wavelet transformation for both analysis and synthesis. In perfect-reconstruction settings, however, one can instead impose the bi-orthogonality condition of wavelets. Bi-orthogonality helps researchers achieve high compression with perfect reconstruction using two different sets of wavelets: one set for synthesis and the other for analysis. Defining a wavelet function that provides full information and a complete description of the image of interest remains a challenging task for modern research. Although it is not easy to define such a general wavelet algorithm for a defined class of signals, it is possible, by comparing the characteristics of several wavelet transformations, to determine which one is most applicable for a given application.
Haar wavelet transform
Haar wavelet functions have been used in many fields, such as mathematics, the sciences, biometrics, and computer applications, since 1910. 26 The Haar wavelet was introduced by the Hungarian mathematician Alfred Haar, who worked in analysis, studying orthogonal function systems, linear inequalities, and Chebyshev approximations. Haar defined the Haar wavelet in 1909, and it is the simplest of all wavelets. The mathematical representation of the Haar technique is termed the Haar wavelet transform. 36
Recently, various generalizations and several definitions 37 as well as modifications38–40 have appeared. One significant modification is the lifting scheme.41–43 These functions have been used in spectral techniques for multiple-valued logic, edge extraction, image coding, and so on.
The mathematical representation and evaluation of the HWT is simple and can easily be implemented in terms of subtraction, addition, and division by 2. Like all wavelet transformations, the HWT decomposes a discrete-time signal into two sub-signals of half its length, also known as "detail levels."
The wavelet family
Mathematically, the Haar wavelet transform interval (a, b) can be expressed as
where
j represents the level of the wavelet, and p represents the translation parameter. The relation between i, p, and q can be calculated by
where ai are real constants and Hi(x) is known as the Haar wavelet. The mother wavelet for i = 1 can be expressed as
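The displayed equations for the wavelet family, the index relation, and the mother wavelet did not survive in this version. As a sketch, the standard Haar family (in the Lepik-style formulation), written with the symbols the text uses (level j, translation p, scale q) and normalized to the interval [0, 1), is:

```latex
% Haar wavelet family on [0, 1): H_i is a scaled, shifted square pulse,
% with scale q = 2^j and translation p = 0, 1, ..., q - 1.
H_i(x) =
\begin{cases}
  1,  & x \in \bigl[\,p/q,\ (p + \tfrac{1}{2})/q\,\bigr) \\[2pt]
 -1,  & x \in \bigl[\,(p + \tfrac{1}{2})/q,\ (p + 1)/q\,\bigr) \\[2pt]
  0,  & \text{otherwise,}
\end{cases}
\qquad q = 2^{j}, \quad i = q + p + 1.

% A signal f on [0, 1) is then expanded in the Haar basis as
f(x) = \sum_{i} a_i \, H_i(x),

% and the mother wavelet (i = 1) is the single square pulse
H_1(x) =
\begin{cases}
  1,  & x \in [\,0,\ \tfrac{1}{2}\,) \\[2pt]
 -1,  & x \in [\,\tfrac{1}{2},\ 1\,) \\[2pt]
  0,  & \text{otherwise.}
\end{cases}
```

This is the common textbook form rather than a verbatim copy of the original equations; a general interval (a, b) is obtained by rescaling x.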
The representation of the scaling and dilation of the first eight Haar wavelets is shown in Figure 8. The HWT has many advantages in the field of computer graphics, such as the following: 35
Operationally fast and easy to understand.
Memory efficient.
Provides high CR.
Provides high PSNR value.
Preserves more significant detail of the signal than other wavelet transforms in the reversible process.
The Haar wavelet transform is a simple transformation that maps from the spatial domain to a local frequency domain. As discussed earlier, it decomposes a signal into two sub-levels: one known as the average and the other as the difference.
Figure 8 gives a graphical representation of the first eight Haar wavelet "detail levels." The process of compression and decompression is achieved by applying these "detail level" values to an 8 × 8 block of an image. This process is explained in detail in the following sections.

First eight Haar wavelets representation.
Data compression plays a vital role in computer graphics and multimedia applications, where compression techniques are widely used to compress big data files such as images. Effective techniques in this area compress images with high quality and low storage space requirements. In particular, the HWT allows efficient analysis of the image in the spectral domain. 10 Although the HWT is simple and efficient, it is discontinuous in nature, which is its drawback. 9
Mathematical operations of HWT
Performance analysis is the procedure one follows to examine the performance of a system or application. Note that different attributes are considered in different situations: evaluating an athlete's performance differs from evaluating a computer's performance, a human resource's performance in a project, or the performance of funds.
For clarity and ease of understanding, the main operation of the proposed method is divided into sub-groups, which are explained in the sub-sections below.
Converting the image into matrix form
A computer system treats an image as a matrix or array of discrete values known as "pixels" or "picture elements"; these discrete values range from 0 (black) to 255 (white). To compute the Haar wavelet transform of an image, we must convert the image into a matrix of discrete values, as the Haar wavelet transform cannot deal with continuous data; the discrete matrix of an image can be obtained using MATLAB (a programming tool). In this paper, we consider only an 8 × 8 image block for processing, but note that the same technique is applied to the whole image to calculate the HWT of the image. Table 1 presents an 8 × 8 portion of an image for the HWT.
Pixel matrix P represents an 8 × 8 image. 42
Calculating the wavelet transform matrix (T)
Before converting matrix "P" to its wavelet transform, we first find the "detail level," that is, the average and difference of each row and column of the matrix, as these are required for computing the Haar wavelet transform of an image. The following steps are applied to obtain a transformed matrix:
Treat each vector (row) of the matrix as a data string.
A new matrix is generated after calculating the average and difference of each row.
These steps are applied iteratively to the new matrix.
Finally, a transformed matrix is generated.
To understand the effect of averaging and differencing on a data string, consider the first row of the preceding matrix "P" in Table 2. Each successive row of the table gives the initial, intermediate, and final results obtained. The data string has length 2³ = 8, which means there are three steps in the transformation process. The first row of the table shows the original data string of matrix "P," treated as four pairs of numbers. The first four numbers of the second row are the averages of those pairs. The first two numbers of the second-last row are the averages of those four averages, taken two at a time; similarly, the first number of the last row is the average of the two averages in the second-last row. The remaining numbers, shown in bold, represent the deviations from the various averages. The four bold-faced numbers in the second row result from subtracting each pair's average from the pair's first element: subtracting 640, 1216, 1408, and 1536 from 576, 1152, 1344, and 1536 yields −64, −64, −64, and 0, which are known as detail coefficients. These detail coefficients are repeated in every subsequent row of the table. The third and fourth entries of row three are obtained by subtracting the first two elements of row three from the first elements of the corresponding pairs in row two: subtracting 928 and 1472 from 640 and 1408 gives −288 and −64. Like the previous detail coefficients, these two new detail coefficients are repeated in each later row of the table. Finally, −272 (the second entry of the last row) is obtained by subtracting 1200 from 928, the start of the second-last row.
Averaging and differencing results of first row of table “P.”
As averaging and differencing is a reversible process, we can work back from any row of the table to the previous row, and hence to the first row, using the appropriate additions and subtractions; nothing is lost in the whole transformation process. After completing the transformation on the rows, we repeat the same transformation on the columns to achieve high compression, which can be done by applying the transform to the transpose of the row-transformed matrix. This gives another 8 × 8 matrix "T," called the Haar wavelet transform of Table "P," as shown in Table 3.
Transformed matrix (T).
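The row transformation described above can be sketched as follows. Since Table 2 itself is not reproduced here, the first row of "P" is reconstructed from the averages and detail coefficients quoted in the text (pairs averaging to 640, 1216, 1408, and 1536); it is used only for illustration:

```python
def haar_step(data):
    """One averaging/differencing pass: pairwise averages, followed by the
    deviation (detail coefficient) of each pair's first element."""
    averages = [(a + b) / 2 for a, b in zip(data[::2], data[1::2])]
    details = [a - avg for a, avg in zip(data[::2], averages)]
    return averages, details

def haar_transform_row(row):
    """Apply the pass log2(len(row)) times, each time to the averages only,
    keeping earlier detail coefficients in place."""
    out = list(row)
    n = len(out)
    while n > 1:
        averages, details = haar_step(out[:n])
        out[:n] = averages + details
        n //= 2
    return out

# First row of matrix P, reconstructed from the averages quoted in the text:
row = [576, 704, 1152, 1280, 1344, 1472, 1536, 1536]
print(haar_transform_row(row))  # [1200, -272, -288, -64, -64, -64, -64, 0]
```

The output reproduces the last row of Table 2: one overall average followed by seven detail coefficients. Applying the same function to the columns (via the transpose) yields matrix "T."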
The transformed matrix "T" has a single overall average value in its top-left corner and 63 detail elements. The entries corresponding to little variation in the original data appear as zero or small elements, and this is precisely the point of the wavelet transform. The value "0" in table "T" is due to identical adjacent pixel elements in matrix "P," and the values −2, −4, and 4 can be explained by some nearly identical values in matrix "P." A matrix with a high proportion of zeros is called a sparse matrix. In many cases, the wavelet-transformed matrices of images are sparser than the original image matrices, which is why it is easier and more efficient to transmit and store a sparse matrix than an ordinary matrix of the same size.
The aim of the wavelet transformation is not only to exploit the expected sparsity of the transformed matrices but also to enable us to shrink the "significant detail levels" so as to set many entries to zero. This allows modifying the transformed matrix entries by taking advantage of "regions of low activity"; the approximate original data can then be reconstructed by applying the inverse wavelet transformation to the processed transformed matrix.
So we arrive at the idea of wavelet compression: define a (positive) threshold value "α" and set to zero every entry of the transformation matrix "T" whose magnitude is less than or equal to "α." Equation 5 represents a mathematical form of thresholding. Thresholding results in a sparse matrix; applying the inverse transformation then reconstructs the approximate original data. We discard a sizable number of detail coefficients and satisfy ourselves with a result that is visually acceptable. The process is lossless compression if no information is lost and the original information is reconstructed exactly (e.g. if α = 0); otherwise, it is lossy compression (if α > 0)
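The thresholding rule just described can be sketched as follows; magnitudes are compared (since detail coefficients can be negative), and the small example matrix is hypothetical, built from coefficient values mentioned in the text:

```python
def threshold(T, alpha):
    """Hard thresholding: zero every entry whose magnitude is <= alpha.
    alpha = 0 leaves the matrix unchanged (lossless); alpha > 0 is lossy."""
    return [[0 if abs(v) <= alpha else v for v in row] for row in T]

# Illustrative fragment: the average (1200) and a large detail (-272)
# survive alpha = 20, while the small details (-4, -2, 4) are zeroed.
T = [[1200, -272, -4], [-2, 4, -64]]
print(threshold(T, 20))  # [[1200, -272, 0], [0, 0, -64]]
```

Larger α values zero more entries, giving a sparser matrix and a higher compression rate at the cost of reconstruction quality.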
In the process of thresholding, a non-negative number known as the threshold value is selected, and any detail coefficient in the transposed matrix "T" whose magnitude is less than or equal to the selected threshold is set to zero. An optimum threshold value of 20 is selected in this case, because a threshold larger than required loses a great deal of significant information, and as a result the reconstructed image becomes more blurred in proportion to the threshold selected. The results for different threshold values are shown in Figure 9. It is evident from Figure 9 that a threshold value of 10 gives a higher-quality image than threshold values of 20 and 40, but, on the other hand, a value of 10 results in a low compression rate.

(a) Threshold value = 20, (b) threshold value = 40, and (c) threshold value = 10.
A graphical representation of the effect of multiple threshold values on the resultant image, for the same input image, is shown in Figure 10.

Threshold versus compression achieved graph.
From the graph shown in Figure 10, it can be concluded that as the threshold value increases, the compression rate increases, but the PSNR value decreases, resulting in a lower quality image. A threshold value of 20 is therefore an optimal trade-off between PSNR and compression rate.
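The threshold/quality trade-off can be reproduced on a toy signal. The signal values and thresholds below are illustrative only (not the paper's data), and a single Haar level stands in for the full transform:

```python
import numpy as np

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    mse = np.mean((orig - recon) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy 1-D "row" of pixel values in 0..255.
x = np.array([100.0, 130.0, 98.0, 97.0, 180.0, 120.0, 50.0, 52.0])
avg = (x[0::2] + x[1::2]) / 2.0
diff = (x[0::2] - x[1::2]) / 2.0   # details: [-15.0, 0.5, 30.0, -1.0]

results = []
for alpha in (0.0, 10.0, 20.0, 40.0):
    d = np.where(np.abs(diff) <= alpha, 0.0, diff)  # hard thresholding
    recon = np.empty_like(x)
    recon[0::2] = avg + d
    recon[1::2] = avg - d
    zeros = int(np.count_nonzero(d == 0))  # more zeros -> higher compression
    results.append((alpha, zeros, psnr(x, recon)))
```

Even on this toy example, raising α monotonically increases the number of zeroed coefficients while monotonically lowering the PSNR, which is exactly the trend the graph describes.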
A matrix with a high proportion of zero entries is called sparse. It is noted that in lossy image compression the transformed matrix is sparser than the original matrix. In our case, we set a threshold value of 20, meaning that every detail coefficient in the transformed matrix whose value is less than or equal to 20 is reset to zero. The resultant matrix "D" is shown in Table 4.
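As an illustration of sparsity, the fraction of zero entries can be measured directly (the matrix below is a made-up example, not the paper's Table 4):

```python
import numpy as np

# Hypothetical thresholded matrix: most detail coefficients are now zero.
D = np.array([[88.0,  0.0,  0.0, 0.0],
              [ 0.0,  0.0, 21.0, 0.0],
              [ 0.0,  0.0,  0.0, 0.0],
              [35.0,  0.0,  0.0, 0.0]])
sparsity = np.count_nonzero(D == 0) / D.size  # fraction of zero entries
# 13 of the 16 entries are zero, so sparsity = 0.8125
```

Runs of zeros like these are what entropy coders exploit to shrink the stored file.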
Thresholding.
After applying the inverse transform to obtain the original approximate data, we get the resultant matrix “R” shown in Table 5.
Reconstructed image.
After applying this process, the results of the Haar wavelet transform are shown in Figure 11. It can be concluded from Figure 11 that the HWT gives a high compression rate while preserving fine detail in the image. The results were obtained by validating the proposed algorithm on multiple images on the basis of efficiency, storage requirements, and PSNR values. The results of HWT on an image are shown in Figure 11(a) and (b).

(a) Original: 59.8 KB and (b) resultant: 25.1 KB.
The images selected for our result analysis are red, green, and blue (RGB) images, so we test the algorithm by applying it to each color channel in the image; after compressing a specific color channel in the tom image, we obtain the compressed images shown in Figure 12(a)–(c).
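Processing each RGB channel independently can be sketched as follows. This is only a simplified sketch with names of our own choosing: it applies one row-wise Haar level plus hard thresholding per channel, whereas the paper applies the full 2-D transform:

```python
import numpy as np

def compress_channel(channel, alpha=20.0):
    """One row-wise Haar level plus hard thresholding on a single channel."""
    c = channel.astype(float)
    avg = (c[:, 0::2] + c[:, 1::2]) / 2.0
    diff = (c[:, 0::2] - c[:, 1::2]) / 2.0
    diff[np.abs(diff) <= alpha] = 0.0   # drop small details
    rec = np.empty_like(c)              # reconstruct the channel
    rec[:, 0::2] = avg + diff
    rec[:, 1::2] = avg - diff
    return rec

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(4, 4, 3))   # toy RGB image
out = np.stack([compress_channel(rgb[..., k]) for k in range(3)], axis=-1)
```

Because only details with magnitude at most α are discarded, no reconstructed pixel in this sketch deviates from the original by more than α per channel.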

(a) Red color compressed, (b) green color compressed, and (c) blue color compressed.
Efficiency and compression-based comparison of each technique
The proposed algorithm is tested on the same input JPEG images, and the results are compared with the DCT and RLE schemes. The experimental results conclude that the HWT outperforms RLE and DCT due to its higher PSNR value, as shown in Table 6. A high PSNR value indicates a good quality picture that retains significant information.
Comparison table showing the compression results of Haar wavelet transform versus discrete cosine transform versus run length encoding.
PSNR: peak signal to noise ratio.
Figure 13 shows a graphical comparison of compressing the redundant information in the same input image (JPEG format, 59.8 KB) using RLE, DCT, and the Haar wavelet transform. It is clear from Figure 13 that the Haar wavelet transform achieves a significant size reduction of 34.7 KB, much better than the 15.3 KB and 17.6 KB reductions achieved by RLE and DCT, respectively. From Figures 8–10, it can also be concluded that the Haar wavelet transform is able to approximately recover the original image.

Comparison of proposed compression techniques.
The Haar wavelet transform is implemented using only simple addition, subtraction, and averaging operations, which produce output in digital (graphical) form. It is robust to noise and has proven effective in reconstructing the original image.
Conclusion
Image compression techniques are used to compress images without loss of significant information. This paper presents an optimized Haar wavelet-based compression technique for JPEG image compression in measurement and metrology of materials and their applications in advanced manufacturing processes. The performance results are evaluated using PSNR and MSE values, and the proposed algorithm is compared with the state-of-the-art RLE and DCT algorithms. The results show that the optimized HWT achieves a higher PSNR and a higher CR than DCT and RLE.
The proposed algorithm is more efficient, cost-effective, simpler, and easier to implement. In addition, it achieves a high CR along with a high PSNR value, preserving significant detail and high quality during image reconstruction. Some image compression algorithms are hard to implement or lose significant information during reconstruction, but the proposed method achieves reconstruction with 98% accuracy, losing only a minor amount of redundant information. In future work, more robust and efficient algorithms will be explored for image compression.
