Abstract
1. Introduction
In general, Wireless Sensor Networks (WSNs) consist of spatially distributed autonomous sensor nodes (SNs) deployed over a sensing field to monitor physical or environmental conditions, such as temperature, sound, and pressure. As can be seen in Figure 1, the SNs' readings are conveyed in a cooperative manner to a base station. Some deployed nodes may have additional resources (super SNs) or additional tasks (routing nodes). Such networks have features that differentiate them from other wireless networks, including (1) deployment in harsh environments and (2) strong restrictions on hardware and software capabilities in terms of processing speed, memory storage, and energy supply. Such sensors usually carry limited, irreplaceable energy resources. Therefore, lifetime adequacy is a significant restriction of almost all WSNs.

A general architecture of Wireless Sensor Networks.
Being one of the constraints that conflict with energy consumption in WSNs, adequate QoS provision has been addressed in recent research. For example, Rao et al. [1] consider the trade-off between network performance optimization and lifetime maximization in real-time WSNs as a joint nonlinear optimization problem. Based on the solution of this mathematical optimization problem, they developed an online distributed algorithm to achieve the appropriate trade-off. Even though such efforts have been made to balance network performance and lifetime, practical applications might require both an expected lifetime and a high QoS level. Under this circumstance, merely balancing the trade-off is not sufficient.
Chen et al. [2] design an adaptive fault-tolerant QoS control algorithm to satisfy the application QoS requirements in query-based WSNs. They developed a mathematical model, in which the lifetime of the system is treated as a system parameter, to determine the optimal redundancy level that satisfies the QoS requirements while prolonging the lifetime. However, Chen et al. aimed at maximizing the lifetime while maintaining QoS parameters such as fault tolerance; the network dynamics in their application have not been fully considered. Jemal et al. [3] adopt a self-adaptation strategy to optimize the energy consumption. They developed a probabilistic approach that estimates part of the QoS, namely, the residual energy, to conserve the transmission energy, thus extending the sensor node's lifetime. Their approach is based on a
In this paper, we target breaking the “downward spiral” between reducing a certain QoS measure and extending the network's lifetime. Actually, the application-relevant QoS parameters cannot be ignored. In lieu of solving complex optimization problems to manipulate the natural trade-off between energy efficiency and other QoS parameters, we propose to divide and conquer the procedure. The strategy is to develop ideas mainly for energy efficiency and then refine such ideas to improve the QoS parameters. Three detailed examples are given in order to support the proposed strategy [7–9].
First, we focus on reducing the radio energy consumption via data encoding. The literature reports on lossy compressors such as wavelet transform and model-based methods. In this context, we propose to exploit a recently developed Fuzzy transform for sensor data compression. Actually, the Fuzzy compressor shows a comparable precision performance with the aforementioned methods. However, we seek to further improve the recovery precision through adapting the transformation. Accordingly, a modified transformation, referred to as FuzzyCAT, is proposed. FuzzyCAT has high compression ratios and high precision as well by detecting the input signal curvature and dynamically modifying the transform's approximating function.
Second, virtual sensing is exploited as a novel technique for reducing the energy consumed by energy-hungry sensors (GPS, gas sensors, etc.) and simultaneously reducing the event-miss probability. Generally, virtual sensors are orchestrations of HW/SW components that are able to sense a phenomenon which would otherwise be sensed directly by an "energy-hungry" sensor. The method is evaluated through two case studies, object tracking and gas leak detection. In both studies, the lifetime of the main sensors is significantly extended. Moreover, virtual sensing reliability is improved by adopting ontology-based generated rules for sensor selection, where sensing quality and environmental conditions are the selection criteria.
Third, we develop a new concept for the interplay between energy efficiency and QoS requirements. Instead of maximizing the lifetime, we restrict ourselves to only meet the application expected lifetime, but improving the QoS provided. A self-adaptive framework is proposed to respond to the environmental dynamics. Moreover, a hierarchical MAPE (monitor-analyze-plan-execute) architecture forms the global control loop. The obtained results show improvements on the provided quality while confining the lifetime within the expected time frame. As a proof of concept, the three aforementioned methods are examined using different tools including the Cooja simulator of Contiki OS, probabilistic model checking, and real experiments with TelosB sensor nodes.
For the paper to be self-contained, the WSN technical literature is surveyed to identify the heavy energy consumers in different application scenarios. Based on this survey, a new taxonomy of energy efficiency methods in WSNs is constructed. In summary, the contributions of this paper are the following. We sidestep complex optimization problems by proposing to divide and conquer the procedure through which the energy-QoS trade-off is manipulated. We give three detailed examples in order to support the proposed strategy. Finally, we survey the recent endeavors for energy efficiency and QoS control in WSNs and classify these methods into the new taxonomy.
The paper is structured into two main parts. The first part elaborates on three examples of refined energy efficiency methods, including FuzzyCAT data encoding, reliable virtual sensing, and lifetime planning. In Section 2, the problem of saving energy while considering other application-relevant QoS parameters is formulated. Furthermore, it discusses the main energy consumers in WSNs. The Fuzzy transform-based data compression technique is explained in Section 3. Moreover, ideas for improving the recovery precision and the detection latency are discussed. Section 4 introduces the concept of virtual sensing for energy efficiency and demonstrates how the system reliability can be significantly improved. The third example of planning the entire lifetime with self-adaptive mechanisms is demonstrated in Section 5. Section 6 comprises the second part of this paper. It introduces a novel taxonomy of energy efficiency methods. Moreover, it discusses the negative impacts of such methods on other QoS parameters. Finally, a conclusion together with an outlook is addressed in Section 7.
2. Preliminaries
2.1. Problem Definition
Energy efficiency in WSNs is generally a fertile research area. The WSN literature has been submerged with many energy conservation and harvesting techniques. Symbolically, the energy consumption problem can be denoted as shown in (1). The actual lifetime is denoted in (2b), together with the quality set.

Taxonomy of energy consumption sources in WSNs.
At first glance, the above problem looks contradictory. However, we show through the three given examples that providing the best-effort service quality while offering an extended lifetime beyond the task lifetime is feasible. Figure 2 depicts a comprehensive taxonomy of the various energy consumption sources in WSNs. We divide them into two levels, the "component" level and the "functional" level. The former comprises the local sources of energy dissipation at each node.
The contribution of each of these consumers to the overall energy consumption depends on the application scenario. From a data-centric perspective, WSN applications can be distinguished, according to the aggregation manner, into (1) time-driven, (2) event-driven, and (3) hybrid applications.
In the category of event-driven applications, data transmission is far less frequent compared to the previous category. However, other sources may dominate the energy consumption. For instance, Kim et al. [14] list the sources of energy consumption of the H-mote with a Hybus sensor board, which contains five sets of air quality monitoring sensors. Power consumption of both the
Finally, the third category of WSN applications considers heterogeneous WSNs in which the measured data is frequently transmitted. Simultaneously, the network continuously samples the environment to detect predefined events. The office monitoring scenario [15] belongs to this category. In this application, individuals should be localized in an event-driven manner. Furthermore, environmental conditions also have to be repeatedly collected for controlling the heat and light systems.
2.2. Divide-and-Conquer Technique
In this paper, we propose the divide-and-conquer (DnC) technique.

Difference between the proposed divide-and-conquer (DnC) technique and the general MOO method.
Kellner and Hogrefe [16] propose multiobjective ant colony optimization (MOACO) algorithms that are capable of considering multiple objectives at the same time. The MOACO algorithms provide a compromise between security and efficient routing. Ferentinos and Tsiligiridis [17] utilize
Aside from the benefits of employing these MOO algorithms, their computational overhead is questionable. The impact of this additional burden is especially notable on resource-constrained sensor nodes. Alternatively, we propose the DnC technique to simplify these optimizations. As depicted in Figure 3(b), the DnC technique considers energy efficiency as the first objective to be accomplished. Afterwards, QoS can easily be improved by refining the energy-conservation method in light of design-time knowledge. To prove the applicability of the proposed DnC technique, we provide three examples of energy-conservation methods that consider QoS improvement. Each of these examples covers one of the aforementioned categories of WSN application scenarios.
For time-driven scenarios, we propose a novel compression technique exploiting the recently developed Fuzzy transform (
3. Fuzzy-Based Data Compression
Data compression methods have basically been classified into (1) transform-based and (2) model-based methods.
The first category is typically linear transformations that map data on a space where computation is simpler. Given the temporal correlation of sensed data, most resulting coefficients approach zero and are discarded, so the mapped space can be easily entropy-coded. In the second category, the signal's temporal correlation is exploited to approximate it by a sequence of line segments. The information loss is controlled by a user-set error margin: whenever the approximating line deviates from the next data point by more than the error margin, the current line parameters are sent and a new approximation is started. We note that the authors of [20] utilize the same idea in designing derivative-based prediction (DBP) modelling, a greedy algorithm that linearly approximates the signal, although their work is not data compression per se. The key difference between LTC and DBP is that LTC transmits the parameters of a line once it has exhausted its approximating potential, whereas DBP sends out the line parameters (the model) immediately after the learning phase and waits until the model adheres to the “incorrectness” definition to compute a new one. Admittedly, both algorithms proved to be successful by finding application in real WSNs.
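The segment-replacement idea described above can be sketched as follows. This is an illustrative reimplementation under our own simplifications (shared breakpoints between segments, a symmetric error margin), not the authors' reference code for LTC or DBP.

```python
def ltc_compress(samples, eps):
    """Greedy LTC-style compressor: replace runs of samples with straight
    segments, keeping every point within +/- eps of its segment."""
    segments = []                  # each entry: (start_index, start_value, end_value)
    start = 0
    for end in range(2, len(samples) + 1):
        n = end - 1 - start        # candidate segment spans start .. end-1
        fits = all(
            abs(samples[start]
                + (samples[end - 1] - samples[start]) * (i - start) / n
                - samples[i]) <= eps
            for i in range(start + 1, end - 1))
        if not fits:
            # the point at end-1 broke the line; close the segment at end-2
            segments.append((start, samples[start], samples[end - 2]))
            start = end - 2
    segments.append((start, samples[start], samples[-1]))
    return segments


def ltc_decompress(segments, length):
    """Sink-side reconstruction: linearly interpolate each stored segment."""
    out = [0.0] * length
    for k, (s, v0, v1) in enumerate(segments):
        e = segments[k + 1][0] if k + 1 < len(segments) else length - 1
        span = max(e - s, 1)
        for i in range(s, e + 1):
            out[i] = v0 + (v1 - v0) * (i - s) / span
    return out
```

Because a segment is transmitted only when it has exhausted its approximating potential, smooth stretches of the signal collapse into a handful of line parameters while the error margin bounds the information loss.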
Compressive sensing (CS) is a novel method which displaces the traditional mantra of "sample then compress" with "compress while sampling." Its core idea is to sample below the Nyquist rate and then use numerical optimization methods to recover full-length signals from a small number of randomly collected samples [4]. For the CS method, the computational burden is customarily transferred to the sink; however, the recovery process is relatively complex. Figure 4 depicts a realistic comparison between various compression algorithms including K-run-length encoding (KRLE), the LTC method, the DWT method, wavelet quantization thresholding with run-length encoding (WQTR), and the low-pass filtered fast Fourier transform (FFT) [4]. As can be seen, the different methods result in approximately the same battery lifetime. However, both CS and FFT have the advantage of avoiding negative compression, which results when the compression process increases the length of the original data.
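As an illustration of the CS recovery step, the sketch below reconstructs a synthetic sparse signal from random projections using orthogonal matching pursuit, one of several possible recovery algorithms; the dimensions, the Gaussian sampling matrix, and the sparsity pattern are illustrative assumptions, not taken from [4].

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(Phi, y, max_iter):
    """Orthogonal matching pursuit: greedily pick the column of Phi most
    correlated with the residual, refit by least squares, and repeat until
    the measurements are explained (or the iteration budget runs out)."""
    n = Phi.shape[1]
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(max_iter):
        if np.linalg.norm(residual) < 1e-10:   # measurements fully explained
            break
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

n, m = 64, 24                                   # signal length, measurements
x = np.zeros(n)
x[[5, 20, 41]] = [1.5, -2.0, 0.7]               # sparse "sensor" signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random sampling matrix
y = Phi @ x                                     # compress while sampling
x_hat = omp(Phi, y, max_iter=12)                # sink-side recovery
```

The node only computes and transmits the m < n random projections `y`; the comparatively heavy iterative recovery runs at the sink, matching the asymmetry described above.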

The SN lifetime running various compression algorithms where FS shows the classification accuracy for the full (uncompressed) signal [4].
The aforementioned comparative analysis among existent data compressors motivates us to adopt the DnC strategy when compressing sensor data. To this end, a Fuzzy transform-based compression (FTC) method is devised. Subsequently, we introduce an adaptive version, referred to as Fuzzy Compression Adaptive Transform (FuzzyCAT). FuzzyCAT targets better accuracy while preserving periodicity and resilience to lost packets. The next discussion highlights the idea behind both methods.
3.1. FTC Compression
The
Definition 1.
Assume a discrete function
In [7], we perform a comparative analysis between Fuzzy transform-based compression (FTC) and the LTC method. The comparison metrics include recovery accuracy, that is, root mean square error (RMSE) between an original signal and a recovered signal, and compression ratio (CR). Simulation results show similar performance, where CR delivered by FTC is 10, whereas LTC compresses the input data by a ratio of 9.17. RMSE is 3.67% for FTC and 3.89% for LTC. However, in case of FTC, we notice that peak values of the RMSE metric occur at curvature areas of the original signal.
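To make the FTC idea concrete, the following sketch implements a discrete F-transform with a uniform triangular fuzzy partition together with its inverse; the partition shape and the window size are illustrative choices, not necessarily those used in [7].

```python
def triangular_partition(N, n):
    """n triangular basis functions forming a uniform fuzzy partition of the
    index range [0, N-1]; at every index they sum to 1 (Ruspini condition)."""
    h = (N - 1) / (n - 1)                 # spacing between basis centres
    return [[max(0.0, 1.0 - abs(i - k * h) / h) for i in range(N)]
            for k in range(n)]

def f_transform(signal, n):
    """Direct F-transform: n weighted-average components form the
    compressed vector that the node transmits."""
    A = triangular_partition(len(signal), n)
    return [sum(a * s for a, s in zip(Ak, signal)) / sum(Ak) for Ak in A]

def inverse_f_transform(F, N):
    """Inverse F-transform: reconstruct N samples from the n components."""
    A = triangular_partition(N, len(F))
    return [sum(F[k] * A[k][i] for k in range(len(F))) for i in range(N)]
```

On a linear signal the interior is reconstructed exactly, while the error concentrates at the window borders and, for general signals, at high-curvature regions, which is consistent with the RMSE peaks reported above.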
One way to ensure smoothness, and thus minimize the reconstruction error of FTC on interesting intervals of the signal, is to apply data preconditioning such as sorting. It is known that many lossy compressors act as low-pass filters, preserving only signals of low frequency or low curvature. Therefore, the idea of sorting the discrete signal is not specific to FTC; it can be adapted to other lossy algorithms as well. In our case, applying quicksort to the data before compression indeed reduces the RMSE dramatically, since all high fluctuations are removed. However, this comes at a high cost to the overall performance: the sorted and compressed signal requires "back-sorting" at the sink node on top of the decompression, so the vector of sorting indices of the data points has to be sent along with the Fuzzy vector. As a consequence of this naïve approach, the compression ratio plummets.
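The sorting preconditioner and the sink-side back-sorting can be sketched as follows; note that the index vector roughly doubles the payload, which is exactly why the compression ratio plummets.

```python
def precondition(signal):
    """Sort the window before compression; the permutation must travel
    with the compressed vector so the sink can undo the sort."""
    order = sorted(range(len(signal)), key=lambda i: signal[i])
    return [signal[i] for i in order], order

def back_sort(sorted_values, order):
    """Sink-side 'back-sorting': restore the original sample order."""
    restored = [0.0] * len(order)
    for pos, original_index in enumerate(order):
        restored[original_index] = sorted_values[pos]
    return restored
```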
Figure 5 depicts the normalized RMSE as a function of the CR where FTC has been applied to different window sizes (

Reconstruction error of regular and sorted data for different window sizes.
3.2. FuzzyCAT Compression
The core idea behind FuzzyCAT is to increase resolution of the

The first and the second derivatives as measures of smoothness.

Structure of the adaptive basic function.
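Since the exact adaptation rule of FuzzyCAT is not reproduced here, the following sketch only illustrates the curvature-detection step suggested by the figures above: the discrete second difference flags regions where extra Fuzzy sets could be allocated. The threshold is an assumed tuning parameter, not a value from the paper.

```python
def second_differences(signal):
    """Discrete second derivative, a simple curvature proxy."""
    return [signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(1, len(signal) - 1)]

def high_curvature_regions(signal, threshold):
    """Indices whose curvature magnitude exceeds the threshold; an adaptive
    transform could place additional basis functions around these points."""
    return [i + 1 for i, d in enumerate(second_differences(signal))
            if abs(d) > threshold]
```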
Comparing the performance of LTC, FTC, and FuzzyCAT involved compressing and recovering a 1000-point segment of the temperature dataset while varying the parameters of each algorithm: error margin for LTC, number of coefficients for FTC, and the number of additional Fuzzy sets per half period for FuzzyCAT. The two remaining parameters of FuzzyCAT,

Determining the optimum thresholds for Berkeley lab data.
The algorithm's parameters such as

Normalized error versus compression ratio of LTC, FTC, and FuzzyCAT.
Aside from the accuracy metric, FuzzyCAT and LTC were also evaluated in terms of consumed energy. We ran a series of experiments using TelosB sensor nodes (CM5000 MSP) with the Contiki OS. A WSN was deployed to collect temperature, humidity, and light intensity readings. To create fluctuations in the measured signal, the indoor light level was changed frequently during the experiment. The setup involved a network under the ContikiMAC radio duty cycling protocol in the Rime stack. For fairness, the parameters of each algorithm were set such that both resulted in the same normalized RMSE. Figure 10(a) delineates the power consumed via broadcasting the LTC and the FuzzyCAT vectors. In fact, FuzzyCAT consumes 96.07% less energy than LTC for fixed throughput. This significant gain comes along with a further decrease in the processing power consumption, as shown in Figure 10(b). Although these results apply only to our settings, they give a good insight into FuzzyCAT's superiority over the model-based compressors.

Power consumption of TelosB sensor nodes running LTC and FuzzyCAT.
3.3. Limitations of FuzzyCAT
Intuitively, the proposed FuzzyCAT method has some drawbacks that should be identified here; however, we provide solutions which could highly mitigate their incurred burden. (1) The most obvious disadvantage of FuzzyCAT hides in the fact that the computation of the transform requires iterating through the whole window of data points before sending the compressed vector, which increases the reporting delay. (2) Since FuzzyCAT records information about the curvature of each half period of the time series window before applying the transform, it needs to store the whole vector of uncompressed measurements in the mote's memory. This is problematic, given that most motes have very limited memory. (3) Spreading the overhead by sending vectors of values comes at the price of having to respect the maximum packet payload size. (4) Another potential difficulty is that the (5) Finally, as mentioned previously, the measurements that include both positive and negative values must be
As one can see, the disadvantages of FuzzyCAT are certainly workable and by no means undermine the overall value of the method, especially in view of its beneficial characteristics such as stellar energy efficiency, low complexity, and periodicity. Next, we explain our proposed method for shortening the reporting delay.
3.4. Cooperative Prediction Scheme
In this section, we set up an efficient prediction method at the aggregating node for reducing the delay incurred through compression. The main idea is to enable aggregating nodes to forecast the vector
For evaluating this study, we exploit the predictive analysis of the collected time series. Initially, prediction can be classified into (1)
As depicted in Figure 11, the original temperature values

Cooperative multisource prediction of temperature readings.
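As one simple instance of the aggregator-side forecasting described in this section, the sketch below extrapolates a least-squares line through the most recent window of readings; the actual cooperative multisource predictor may be considerably more elaborate.

```python
def linear_forecast(history, steps):
    """Fit a least-squares line through the recent window and extrapolate.
    The aggregating node can answer queries from the forecast while the
    compressed vector is still being assembled at the source node."""
    n = len(history)
    xm = (n - 1) / 2                       # mean of the sample indices
    ym = sum(history) / n                  # mean of the readings
    slope = (sum((i - xm) * (y - ym) for i, y in enumerate(history))
             / sum((i - xm) ** 2 for i in range(n)))
    return [ym + slope * (n - 1 - xm + s) for s in range(1, steps + 1)]
```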
In the next section, the second DnC example of reliable virtual sensors is discussed in detail. The goal of such virtual sensors is to reduce the energy burden of the energy-hungry sensors while offering a low event-miss probability.
4. Reliable Virtual Sensing
Recent ideas for reducing the energy consumption of sensing modules circle around
The aforementioned methods often do not consider event-miss probabilities. In real-time scenarios, unavailability of the sensors due to adaptive acquisition plans may lead to missing interesting events. Hierarchical sampling, on the other hand, has been employed with multimodal SNs which comprise different sensors for measuring certain phenomena [11]. Each of those sensors is characterized by specific performance features, that is, resolution and energy consumption. Hence, hierarchical sampling adapts the acquisition rate based on a trade-off between accuracy and energy consumption. In this paper, we contribute to this category through a novel approach, referred to as the virtual sensing (VS) approach.

Virtual sensing flowchart.
4.1. Example: Virtual Gas Leak Sensor
Gas sensors typically consume between 500 and 800 mW per sample. Somov et al. [28] develop a WSN for detecting combustible gases inside a building. They utilized a pulse heating profile to reduce the sensor's energy consumption. However, the gas sensor is still energy-hungry due to the continuous data acquisition. Hence, invoking the DnC strategy is beneficial to drastically reduce both the energy consumption and the event-miss probability. Inspired by the work done in [29], a gas sensor can be replaced by a light sensor placed in front of a chemical film whose color changes in the presence of gases. Accordingly, the light sensor indirectly detects the gas presence whenever the film's color changes.
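A minimal sketch of this virtual gas sensor logic, with assumed names and thresholds: the cheap light sensor is sampled continuously, and the energy-hungry gas sensor is powered up only to confirm a detected film-color change.

```python
class VirtualGasSensor:
    """Hypothetical sketch: a low-power light sensor watches the chemical
    film; the expensive gas sensor is woken only to confirm a colour change."""

    def __init__(self, baseline, margin, read_gas_sensor):
        self.baseline = baseline                 # film light level in clean air
        self.margin = margin                     # tolerated light-level drift
        self.read_gas_sensor = read_gas_sensor   # costly confirmation call
        self.gas_sensor_wakeups = 0

    def sample(self, light_level):
        if abs(light_level - self.baseline) <= self.margin:
            return False                         # film unchanged: no wake-up
        self.gas_sensor_wakeups += 1             # wake the real gas sensor
        return self.read_gas_sensor()
```

Counting the wake-ups makes the energy argument visible: the main sensor draws power only on suspected events rather than on every acquisition cycle.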
Probabilistic model checking has been customized to evaluate the proposed VS method in terms of energy consumption and detection latency. Figure 13(a) demonstrates the lifetime of the μ-radar sensor versus different probabilities of gas leak. The lifetime gradually decreases as the gas leak probability increases. In fact, the resultant lifetime presented in [28] is a deterministic value, empirically computed under the assumption that no gas leaks occur. We compare the lifetime obtained in [28] with our result determined at

Comparison between virtual sensing together with the PWM method and the pure PWM technique.
Replacing real sensors with an orchestration of heterogeneous sensors imposes sensing reliability risks. To select the appropriate sensor to answer a sensing query, quality conditions of each sensor have to be estimated by looking up the relationship between the sensors and their readings. Using ontology, it is possible to generate rules for switching among sensors depending on observable properties in the feature of interest. To transform this information into processing instructions, we have to concatenate the required sensing devices to have the observed property values at hand. The run-time evaluation of the sensor selection rules could result in a highly dynamic selection of the most reliable sensor. A more detailed overview of the ontology-based decision-making algorithm is given in [8]. In the sequel, an object tracking mechanism is discussed as well as how virtual sensing can be beneficial in this field.
4.2. Case Study: Reliable Virtual Object Tracking
An object tracking system consisting of real and virtual sensors is delineated in Figure 14. The key idea underlying the virtual object tracking sensor

System structure with real and virtual sensors.

Examples of DTW matching for the recorded vibration patterns.
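The DTW matching illustrated above can be sketched with the standard dynamic-programming recurrence; the absolute-difference local cost is an illustrative choice.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two vibration patterns; a small
    distance indicates the recorded pattern matches a stored template even
    when the two sequences are shifted or stretched in time."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

A time-shifted copy of a pattern scores (near) zero, whereas an unrelated signal accumulates a large warped cost, which is the property the template matcher relies on.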
Real experiments over TelosB sensor nodes have confirmed the virtual sensors' advantages: they save circa 99.93% of the energy needed for tracking a mobile object. Furthermore, a benchmark of the reliability parameters versus the lifetime and the event-miss probability has been constructed via large-scale simulation. Table 1 lists the impact of the quality thresholds on the μ-radar lifetime and the overall event-miss probability. For large values of the accuracy and selectivity margins, the virtual sensor
Impact of varying the selectivity and accuracy margins of the virtual sensors on the event-miss probability and the lifetime.
In the next section, we elaborate on our third proposed idea of lifetime planning for extending the operational lifetime while simultaneously improving the provided service quality.
5. Lifetime Planning
The third given example of the DnC strategy is WSN lifetime planning in light of environmental and application-related dynamics. In such cases, sensor nodes often keep consuming energy to continue functioning even after the task lifetime. The WSN literature has explored possible trade-offs between energy efficiency and other QoS parameters [71, 72]. In such articles, the design-time knowledge is completely discarded. As a result, we propose a novel approach, referred to as the lifetime planning strategy.
The self-adaptation mechanism relies on the autonomic MAPE (monitor-analyze-plan-execute) control loop.
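The MAPE (monitor-analyze-plan-execute) loop mentioned in the introduction can be skeletonized as follows; the monitored metric (delivery ratio), the duty-cycle knob, and the step sizes are illustrative assumptions, not the paper's implementation.

```python
class MapeLoop:
    """Minimal sketch of a monitor-analyze-plan-execute control loop that
    keeps an observed QoS metric between two planned boundaries."""

    def __init__(self, qos_low, qos_high):
        self.qos_low, self.qos_high = qos_low, qos_high
        self.duty_cycle = 0.5                 # knob the executor adjusts

    def monitor(self, readings):
        return sum(readings) / len(readings)  # e.g. observed delivery ratio

    def analyze(self, qos):
        if qos < self.qos_low:
            return "raise"                    # below the planned boundary
        if qos > self.qos_high:
            return "lower"                    # wasting energy above the band
        return "hold"

    def plan(self, symptom):
        return {"raise": +0.1, "lower": -0.1, "hold": 0.0}[symptom]

    def execute(self, delta):
        self.duty_cycle = min(1.0, max(0.0, self.duty_cycle + delta))

    def step(self, readings):
        self.execute(self.plan(self.analyze(self.monitor(readings))))
        return self.duty_cycle
```

Keeping the metric inside the band, rather than maximizing it, is what converts surplus energy into a lifetime that matches the planned task duration.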
As a proof of concept, the next section discusses our implementation of an office monitoring scenario. This case study is investigated to examine the network performance with and without integrating the lifetime planning strategy.
5.1. Case Study: Office Monitoring Scenario
In real office monitoring applications, several interesting events emerge due to the environmental dynamics. The occurrence of such events is exploited to reconfigure the network in light of the lifetime planning strategy. For instance, detecting whether the person is "stationary" or "walking" triggers a set of reconfigurations. During the "walking" activity, the connectivity is continuously altered based on the distance between the
The design-time knowledge is a valuable resource for drastically decreasing the computational burden on the reasoning engine. Such knowledge is exploited to engineer the QoS lower and upper boundaries. At run-time, the QoS metrics are continuously monitored. A set of secondary sensors, such as temperature and light sensors, forward their readings to the reasoning engine. If these readings exhibit an interesting event, then the quality metrics are accordingly adjusted. In the next section, we elaborate on the reasoning engine and how it reacts to unexpected events.
5.2. Performance Evaluations
An experimental study of the office monitoring scenario has been performed to evaluate the proposed lifetime planning strategy. As explained earlier, the goal of our strategy is to improve the WSN's service qualities while providing an adequate network lifetime. Hence, a proactive adaptation mechanism based on the MAPE framework has been adopted. To fulfill this goal, we have to determine whether lifetime planning improves the provided QoS relative to static heuristics and blind adaptation methods. Furthermore, we have to check whether the resultant network lifetime is beyond or below the user/application requirements. Finally, we have to check the proposed method's ability to keep the provided QoS between two predefined boundaries.
As mentioned previously, a scenario of office monitoring is engineered for evaluation purposes. The inherent dynamics, in such a scenario, are to be exploited to show the consequences of planning the service quality levels throughout the entire lifetime. Figure 16 shows the structure of the proposed office monitoring scenario. A network of

Office monitoring testbed implemented in
For a comparative analysis, we contrast the lifetime planning strategy with three different methods: (1) lifetime maximization, (2) lifetime minimization, and (3) blind adaptation.
Mode selection for office monitoring scenario.
Table 2 summarizes the operational mode (rows) and all possible scenarios (columns) for lifetime maximization, blind adaptation, and lifetime planning methods. In fact, adopting general criteria, such as the traffic size and the speed of mobile nodes, mostly covers all possible events. The settings are classified in light of the MSN's state, that is, whether being mobile or stationary. The mobility state has been further classified in accordance with the speed and number of mobile nodes. Thus, four cases emerge by considering only two linguistic variables,
5.2.1. Evaluating the QoS Metrics
Figure 17 depicts the impact of applying lifetime planning, blind adaptation, and the maximization strategy on the service qualities. Figures 17(b) and 17(c) show a comparison between the four strategies in terms of the average packet delivery ratio (PDR) and the average delay, respectively.

Comparison between lifetime maximization (Max), blind adaptation (Blind), lifetime minimization (Min), and lifetime planning (Planning).
As expected, the lifetime planning strategy achieves substantially better reliability and delay than the other approaches, as can be seen in Figures 17(b) and 17(c), respectively. In particular, lifetime planning attains approximately 9.6% and 20% higher PDR than the blind adaptation and maximization methods, respectively. Similarly, lifetime planning incurs about 53% and 78% less delay than the other methods. This superiority is reasonable, since the lifetime planning strategy spends more energy. However, we still need to check the impact of such improvements on the lifetime.
5.2.2. Evaluating the Lifetime
Figure 17(a) delineates the lifetime of cluster heads and children for each strategy. The average lifetime obtained with lifetime planning is about 41.6% and 54.5% less than that of the other methods. Nevertheless, the achieved network lifetime (approximately 100 days) meets the planned task lifetime used for estimating the QoS boundaries. These results are confirmed by Figure 17(d), which shows the radio duty cycle of each node as a percentage. With lifetime planning, the nodes activate their transceivers for a longer time than under blind adaptation. This additional energy cost contributes to the enhancement of the communication reliability and the detection delay.
5.2.3. Evaluating the QoS Boundaries
Finally, we need to show how the expected lifetime is met. Figure 18 depicts the average reliability and average delay for node 6 during several runs over the various scenarios. In both subfigures, the QoS boundaries are represented by horizontal thresholds colored in green. The blue and red lines, representing lifetime planning and blind adaptation, exhibit approximately the same rising and falling behavior. For the delivery ratio, the lifetime planning values (in blue) are confined between the two green thresholds, as shown in Figure 18(a). In contrast, the blind adaptation values (in red) are reduced without any restriction in order to reduce the energy consumption. Figure 18(b) shows a similar behavior for the delay metric.

Comparison between the controlled and blind (a) delivery ratio and (b) delay.
To sum up, our proposed lifetime planning approach improves the QoS metrics by exploiting an additional amount of energy, gained from limiting the lifetime to the application's total task time. Simulation results show that lifetime planning substantially improves the QoS metrics. This profit comes at the expense of reducing the WSN lifetime, while the resulting lifetime is still adequate to complete the assigned task. In the next section, we provide a novel taxonomy of energy-saving methods in WSNs. We briefly discuss the advantages and disadvantages of each method along with its impact on other QoS parameters such as latency, throughput, and accuracy.
6. The State of the Art
In this section, we provide a new taxonomy of energy conservation in WSNs, covering the main recent endeavors. Several taxonomies of WSN energy conservation exist; however, these taxonomies are outdated, that is, they do not comprise recent methods such as compressive sensing and network coding. Moreover, the impact of reducing the energy consumption on other QoS parameters is not clearly discussed [5, 10–13]. Initially, energy management in WSNs has been divided into
As can be seen in Figure 19, the energy-saving approaches can be classified into three root categories as follows:

Taxonomy of energy conservation techniques in WSNs.
In the sequel, the various methods are explained in more detail and examples of recent work are given. Moreover, the impact on application-relevant QoS parameters is discussed.
6.1. Node-Oriented Methods
In this category, we discuss energy-saving methods whose scope is within the individual SNs. In fact, the methods listed here are general enough to cover both event-driven and time-driven WSN application scenarios. These energy-saving methods are designed to optimize the SN's performance without prior knowledge of the assigned task or application scenario. They are divided into (1) low-power hardware and (2) energy-aware software techniques.
6.1.1. Low-Power Hardware
The channel on which the wake-up signal is sent can be the same one as the main radio communication channel (i.e., shared channel), or a separate channel can be used for wake-up signaling [32]. Although a separate channel increases the cost and complexity of the SN, the energy gain of deactivating the main receiver outweighs this additional overhead. The wake-up signal can be a single wake-up tone or a bit sequence. In range-based wake-up receivers, all the SNs that receive the tone activate their main transceiver. In identity-based wake-up receivers, the wake-up signal may consist of a bit sequence to address the destination. After the reception of a wake-up signal, nodes check whether the bit sequence refers to them. If so, then the destination wakes up. Mostly, radio signals are employed as wake-up signals in radio-based wake-up receivers [32]. Alternatively, acoustic wake-up receivers are triggered by acoustic signals. When the observed level of the external sound reaches a threshold, the wake-up circuitry is turned on.
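The two wake-up flavors described above can be sketched as follows; the tone threshold, the address width, and the all-ones broadcast pattern are assumed for illustration only.

```python
TONE_THRESHOLD = 0.7   # assumed normalized energy level of the wake-up tone

def range_based_wake(tone_energy):
    """Range-based receiver: every node that hears a sufficiently strong
    wake-up tone activates its main transceiver."""
    return tone_energy >= TONE_THRESHOLD

def identity_based_wake(node_address, received_bits, address_bits=16):
    """Identity-based receiver: decode the bit sequence and wake the main
    transceiver only if it addresses this node (all ones = broadcast)."""
    broadcast = (1 << address_bits) - 1
    return received_bits == node_address or received_bits == broadcast
```

The address check is cheap enough to run on the low-power wake-up circuitry, so non-addressed neighbors avoid the cost of powering up their main radio at all.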
Ba et al. [33] consider programmable RFID tags to implement a passive wake-up radio for WSNs. The wake-up radio is realized using a passive RFID tag as the wake-up signal receiver, whereas an RFID reader acts as the wake-up radio transmitter. However, the results reveal that the wake-up range is relatively limited compared to the communication range of ZigBee-compliant sensor motes. Moreover, the wake-up radio transmitter consumes considerable energy, so it cannot be deployed on all SNs. Later, in [31], the authors propose an RFID range extension method based on energy harvesting.
6.1.2. Energy-Aware Software
6.1.3. Discussion
The aforementioned methods deal with saving energy at the node level. Several ideas have been examined either on real WSN testbeds or in simulations. These methods contribute to the energy efficiency problem to varying degrees. However, focusing only on the energy problem often has negative impacts on other service qualities. Table 3 summarizes the discussed methods and their influences on other service qualities. For instance, the additional complexity of directional antennas stems from their need for an accurate localization system; hence, both the energy consumption and the processing time are increased. Moreover, the coverage of SNs equipped with directional antennas is questionable, since many hops may be needed for reliable communication between two nodes without line of sight.
Summarizing the various node-oriented energy-saving methods.
To sum up, the node-oriented methods are highly beneficial for reducing energy consumption. Moreover, they do not depend on the application scenario. However, the benefit of each method depends on the application's characteristics. For instance, wake-up receivers are ideal for high-frequency data reporting, whereas their gain diminishes with less frequent data transmission. In this paper, lifetime planning is considered a contribution to the category of energy-aware software.
6.2. Data-Oriented Methods
In this section, several ideas for reducing the amount of sampled and transmitted data are discussed. In this category, the methods are broadly divided, according to the data aggregation scheme, into
6.2.1. Event-Driven Methods
6.2.2. Time-Driven Methods
Network coding (NC) is a recent in-network processing method that exploits the characteristics of the wireless medium, in particular the broadcast communication channel. It has mainly been developed to reduce the traffic in broadcast scenarios by sending a linear combination of several packets instead of a copy of each packet. Figure 20 depicts an example of the network coding strategy [5]. In this example, a five-node topology is constructed such that node 1 has to broadcast two packets, a and b. Without NC, if nodes 1, 2, and 3 store and omnidirectionally forward the data packets, this generates six packet transmissions (two per node). With NC, nodes 2 and 3 can instead transmit a linear combination of data items a and b, so each of them has to send only a single packet. Nodes 4 and 5 can decode the packets by solving linear equations. As a result, two packet transmissions are saved in total in this example. In general, the NC approach exploits the trade-off between computation and radio communication, since data transmission is by far the more energy-hungry of the two.

An example of a network coding method [5].
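Over GF(2), the simplest linear combination is a bitwise XOR. The coding and decoding steps of the example above can be sketched as follows (the packet payloads are illustrative):

```python
def xor_combine(p, q):
    """Linear combination of two equal-length packets over GF(2)."""
    return bytes(x ^ y for x, y in zip(p, q))

a, b = b"packet-a", b"packet-b"    # the two packets broadcast by node 1
coded = xor_combine(a, b)          # nodes 2 and 3 each forward one coded packet
recovered = xor_combine(coded, a)  # node 4 overheard a, so XOR-ing recovers b
```

Because XOR is its own inverse, any node that already holds one of the two packets can decode the other from the coded packet, which is how nodes 4 and 5 obtain both packets from fewer transmissions.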
Despite the advantages of the NC strategy for saving energy and for improving the communication reliability, Voigt et al. [47] report on several drawbacks, including (1) strongly increased delay and (2) high overhead due to lack of adaptability. Accordingly, research efforts have to be exerted in this arena to overcome such limitations.
The core idea behind the algorithmic methods is to estimate a stochastic characterization of the phenomenon to be measured [11]. This estimation can be achieved in two different ways. In the first, the data is mapped onto a random process described in terms of a
In time series forecasting, a set of historical values is used to predict a future value in the same series. The time series method explicitly considers the internal structure of the data [11]. In general, a time series can be decomposed into three components: a trend, a season, and a remainder. The trend component can be described by a monotonically increasing or decreasing function that can be approximated using common regression techniques. Once the trend is fully characterized, the resulting model can be used to predict future values in the time series. The moving average (MA), autoregressive (AR), and autoregressive moving average (ARMA) models are simple examples of time series predictors that are easy to implement and provide acceptable accuracy. Santini and Römer [49] choose the Least Mean Square (LMS) filter over the Kalman filter since it does not require a priori knowledge of the desired measurements. This implies that the sink and the sensors do not need to agree on a predefined model. Miranda et al. [78] show that a well-tuned AR estimator may be used to estimate data series in cluster-based one-hop WSNs.
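The dual-prediction idea behind such LMS-based schemes can be sketched as follows. This is a simplified, sensor-side illustration rather than the algorithm of [49]; the filter order, step size, and error tolerance are our own illustrative choices:

```python
import numpy as np

def dual_prediction(series, order=3, mu=0.5, eps=0.5):
    """Sensor-side view of dual prediction: the sink runs the same
    filter, so the sensor transmits a reading only when the prediction
    error exceeds the tolerance eps (the normalized LMS update keeps
    the filter stable regardless of the signal's magnitude)."""
    w = np.zeros(order)
    sent = 0
    for t in range(order, len(series)):
        x = np.asarray(series[t - order:t], dtype=float)
        err = series[t] - w @ x
        if abs(err) > eps:
            sent += 1                       # transmit the true reading
        w += mu * err * x / (x @ x + 1e-9)  # normalized LMS weight update
    return sent

readings = [20.0 + 0.01 * t for t in range(200)]  # slowly drifting temperature
sent = dual_prediction(readings)
```

Once the filter has converged, the slowly drifting series stays within the tolerance and most transmissions are suppressed; only the first few samples, where the model is still untrained, need to be sent.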
The stochastic methods rely on a heuristic or a state-transition model describing the sensed phenomenon [49]. Such algorithmic approaches derive methods to construct and update the model on the basis of the chosen characterization. For instance, Han et al. [50] propose an energy-efficient data collection (EEDC) method that is well suited to query-based applications. In such scenarios, each SN maintains an upper and a lower bound, and the difference between the bounds denotes the accuracy of the sensed values. In EEDC, the bounds are transferred to the sink and are later updated on request. In general, algorithmic approaches are computationally complex and sometimes generate communication overhead.
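The bound-based reporting idea can be sketched as follows. This is a simplified, hypothetical rendering of the EEDC scheme in [50]; the bound width and the data series are our own:

```python
def bounded_reporting(readings, accuracy=1.0):
    """Report a reading to the sink only when it leaves the currently
    agreed [lo, hi] bounds; the bounds are then re-centered around the
    new value.  The bound width encodes the attainable accuracy."""
    lo, hi = readings[0] - accuracy / 2, readings[0] + accuracy / 2
    reports = 1                              # initial bounds sent to the sink
    for r in readings[1:]:
        if not lo <= r <= hi:
            lo, hi = r - accuracy / 2, r + accuracy / 2
            reports += 1                     # update the sink's bounds
    return reports

# A slow 0.1-degree drift stays inside 1-degree bounds most of the time:
reports = bounded_reporting([20 + 0.1 * t for t in range(50)])
```

The sink always knows the current bounds, so it can answer queries to within the agreed accuracy without receiving every sample; only a small fraction of the 50 readings trigger an update.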
6.2.3. Discussion
The aforementioned methods deal with saving energy through data manipulation. Table 4 summarizes the discussed methods and their influences on other service qualities. Although these methods prove successful in reducing the energy consumption, they often have negative impacts on other service qualities. As an example, data compression greatly reduces the burden of radio communication by removing data redundancy and shrinking the number of packets. In most cases, lossy data compressors are utilized due to their ability to achieve higher compression ratios than lossless compressors. Therefore, users have to accept a certain level of accuracy degradation in the data recovered after decompression. Moreover, data compression methods in many cases spend a considerable amount of time storing the uncompressed samples and running the compression algorithm. The resulting delay may be harmful in time-critical WSN applications such as smart grid monitoring. In Section 3, the aforementioned limitations of data compression methods are tackled by proposing a novel compression algorithm, referred to as
Summarizing the various data-oriented energy-saving methods.
As listed in Table 4, triggered sampling is another method in which the duty cycle of energy-hungry sensors is greatly reduced. However, most ideas in this arena suffer from the complexity of the resulting heterogeneous sensing system. In Section 4, this problem is targeted by proposing the invocation of virtual sensors as secondary sensors. Such sensors are extremely energy-efficient and can easily be built from commercial off-the-shelf (COTS) components. In addition, the reliability of the new heterogeneous sensing system is improved. An ontology-based automatic rule generation method is developed to dynamically select between the main sensors and the virtual ones in light of the virtual sensors' accuracy and the environmental conditions.
6.3. Network-Oriented Methods
In this category, energy-saving methods, whose scope is within the entire network, are discussed. Indeed, the methods listed are general enough to cover the event-driven and the time-driven WSN application scenarios. These energy-saving methods are designed to optimize the WSN's performance without prior knowledge of the assigned task or application scenario. The taxonomy has three main classes, namely,
6.3.1. Mobility-Based Methods
Mobility-based methods rely on employing either mobile sinks or mobile relay nodes in order to reduce the number of multihops, thereby minimizing the transmission cost [51]. These mobile nodes are often attached to mobile entities in the environment such as vehicles, animals, or dedicated robots. Specifically, mobility-based methods increase the network lifetime by reducing the burden on bottleneck nodes. In general, SNs closer to the sink have to relay more packets and are therefore subject to premature energy depletion, even when the other energy efficiency techniques mentioned above are applied. By adding mobility, the traffic flow can be altered through mobile data collectors: ordinary nodes wait for the passage of the mobile device and route messages towards it. Accordingly, the number of multihop radio communications is greatly reduced, and ordinary nodes save energy thanks to reduced link errors, contention overhead, and forwarding.
Silva et al. [52] introduce a comprehensive survey of mobility models in WSNs. This survey considers the mobility feature from different perspectives, including the MAC layer and the network layer. The authors also propose the network of proxies (NoP) concept to relieve SNs from performing complex mobility tasks by moving these tasks to the network side. Jain et al. [53] present the MULE architecture as an alternative to an ad hoc network. The MULE architecture is a three-tiered design comprising sensors, mobile ubiquitous entities, and sink nodes. The key idea of MULE is to exploit the presence of mobile nodes in the environment by using them as forwarding agents. Sugihara and Gupta [54] investigate the trade-off between the energy saved by employing mobile collectors and the increased data delivery latency. Generally, with controllable mobile nodes, the trajectory can be planned so as to prevent high latency, buffer overflow, and energy depletion.
6.3.2. Energy-Aware Routing Methods
In most cases, sensor nodes in the proximity of the sink node die rapidly because they relay the packets of the other nodes in the network. In battery-powered WSNs, this unbalanced energy consumption eventually leads to network partitioning. In WSNs powered by energy-harvesting devices, access to the sink node may likewise be constrained. The authors in [60] apply CT range extension to extend the network lifetime by exploiting the energy of less-burdened nodes, so that data routing can exclude the burdened nodes. Accordingly, the duty cycling of nodes over the entire network is balanced, as normal relay sensors can be replaced by other cooperative nodes.
6.3.3. Sleep/Wake-Up Protocols
The sleep/wake-up protocols exploit network redundancy to extend the network longevity by switching a number of redundant SNs into sleep mode. Radio transceivers, in most cases, consume the majority of the energy available. Hence, switching the transceiver into sleep mode helps to greatly prolong the network lifetime. Specifically, active SNs can be switched off according to the workload.
The asynchronous protocols enable each SN to wake up whenever it wants and still be able to communicate with its neighbors. In such a scheme, no explicit information exchange is required among the neighboring SNs. Although asynchronous methods are simpler to implement, they are not as efficient as synchronous methods, and in the worst case their guaranteed delay can be very high. Paruchuri et al. [66] present a randomized approach, referred to as RAW, to address the protocol design issues of asynchronous wake-up methods. The RAW protocol enables each SN to make local decisions on whether to sleep or to be active. It allows the existence of several paths between a source and a destination and, thus, a packet can be forwarded along any of such available paths.
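The effect of such purely local, randomized wake-up decisions can be illustrated with a short simulation. The neighborhood size and duty cycle below are illustrative assumptions, not the parameters evaluated in [66]:

```python
import random

def awake_forwarder_rate(neighbors=8, duty_cycle=0.25, frames=10_000, seed=1):
    """Fraction of frames in which at least one neighbor is awake when
    every node independently chooses to be active with probability
    duty_cycle, as in randomized wake-up schemes."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < duty_cycle for _ in range(neighbors))
        for _ in range(frames)
    )
    return hits / frames

rate = awake_forwarder_rate()
# Analytically, 1 - (1 - 0.25) ** 8 ≈ 0.90: with enough neighbors, some
# forwarding path is almost always available despite low per-node duty cycles.
```

This is why the availability of several alternative paths matters for RAW-like protocols: the denser the neighborhood, the lower the per-node duty cycle can be for the same forwarding probability.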
In the on-demand methods, SNs are deactivated and switched on only when another SN wants to communicate with them. The main challenge is how to wake the sleeping SNs in a timely manner whenever other SNs are willing to communicate. In particular, multiple radios with different energy/performance trade-offs are utilized: a low-rate, low-power radio is dedicated to signaling, while a high-rate but more power-hungry radio is used for data communication. Schurgers et al. [67] introduce STEM, a topology management technique that trades power saving against path setup latency. The technique uses a separate wake-up radio operating at a low duty cycle. Upon receiving a wake-up message, a node turns on its primary radio, which takes care of the regular data transmissions.
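The trade-off that STEM navigates can be made concrete with a back-of-the-envelope model. The power figures and timings below are invented for illustration and are not STEM's published numbers:

```python
def dual_radio_tradeoff(period_ms, listen_ms, p_listen_mw=1.0, p_sleep_mw=0.01):
    """Wake-up radio that listens for listen_ms out of every period_ms:
    the average power falls with the duty cycle, while the worst-case
    path-setup latency grows to a full period (the sender may just
    miss a listening slot)."""
    duty = listen_ms / period_ms
    avg_power_mw = duty * p_listen_mw + (1 - duty) * p_sleep_mw
    worst_latency_ms = period_ms
    return avg_power_mw, worst_latency_ms

fast = dual_radio_tradeoff(100, 10)    # ~0.109 mW, 100 ms worst-case setup
slow = dual_radio_tradeoff(2000, 10)   # ~0.015 mW, but a 2 s worst-case setup
```

Stretching the listening period saves energy almost proportionally, but every factor gained in power is paid for in path-setup latency, which is exactly the knob STEM exposes.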
6.3.4. Topology Management
In many cases, SNs are deployed with a high level of redundancy to ensure space coverage and to cope with possible node failures occurring during or after deployment. The idea behind topology management protocols is to deactivate some nodes while maintaining network coverage and connectivity. The decision to activate or deactivate nodes typically depends on the application's needs. Accordingly, topology management protocols dynamically modulate the WSN topology so as to minimize the number of active nodes, hence prolonging the network lifetime. Choosing the active SNs can be accomplished in two ways by a

Location-based duty cycling [6].
6.3.5. Low Duty-Cycle MAC Protocols
MAC protocols are responsible for the coordination between neighbors. Optimizing MAC protocols leads to significant reduction in power consumption. For instance,
Alternatively, contention-based MAC protocols allow nodes to access the shared wireless medium independently [70]. These protocols aim to minimize collisions rather than avoid them completely. Contention-based protocols depend on a carrier sensing mechanism called
6.4. Discussion
The aforementioned methods deal with saving energy at the network level. Although all these methods address the networking overhead, negative impacts on other service qualities may emerge. Table 5 summarizes the discussed methods and their influences on other service qualities. As an example, deactivation methods strive to minimize the network duty cycle to overcome idle listening and redundancies. Nevertheless, this increases the communication delay, since each node has to wait for its neighbors to wake up before it can start transmitting. Even the synchronized versions of these algorithms may suffer from latency due to clock deviations and the additional computational overhead of the synchronization algorithm.
Summarizing the various network-oriented energy-saving methods.
7. Conclusion
The paper has two main contributions. We provide an up-to-date taxonomy of the main energy-saving methods in WSNs. Analyzing the impacts of such methods on other QoS metrics motivated us to propose a novel method, referred to as the divide-and-conquer (DnC) method. The core idea behind the DnC method is to control the QoS parameters while providing an adequate network lifetime. The DnC method spends energy only to meet a predefined WSN task time; thus, energy is saved that can be used to enhance other QoS metrics. As a proof of concept, three examples of extending the operational lifetime while improving the application-relevant QoS parameters are given. Each of these methods targets a certain set of WSN application scenarios. First, a highly precise data compression method was verified to reduce the WSN power consumption in periodic sampling applications. The lessons learned here are as follows: (1) FuzzyCAT recovers the input data with high accuracy while offering high compression ratios, (2) model-based approaches are only recommended for delay-sensitive applications that report low-frequency data, and (3) it is feasible to considerably mitigate the delay incurred by forming a compression window in transform-based compressors. In many cases, the delay is critical whenever the measured data is significant. In such cases, transform-based compression has to be accompanied by a decision-making algorithm. Implementing such an idea for different transform-based methods is left for future extensions. Moreover, we plan to perform an extensive comparative study of the main compression methods.
For event-driven scenarios, reliable virtual sensors can significantly reduce the power consumption and the event-miss probabilities. We prove the ability of virtual sensors to substitute for the main, energy-expensive sensors. Furthermore, ontology-based rules are efficiently employed to select between main and virtual sensors according to their quality. We conclude that (1) adaptive sampling is an effective method, but it is not suitable for event-driven applications; (2) it is feasible to design virtual sensors for any energy-hungry sensor; and (3) ontologies can be used to solve hard problems in WSNs, especially in large-scale heterogeneous networks. For future work, we plan to investigate other QoS metrics that could be affected by the insertion of virtual sensors. Moreover, we may investigate a combination of Kalman filtering with statistical hypothesis testing to provide a general, systematic way of designing virtual sensor setups.
Finally, a self-adaptive mechanism has been devised to implement lifetime planning. Through this method, the lifetime is deliberately planned beyond the task time; thus, more energy can be spent on improving the service quality. To sum up, we found that (1) blind adaptation may increase the lifetime, but the QoS metrics are not considered, and (2) planned adaptation mechanisms can improve both by exploiting design-time knowledge. For future work, we consider setting up a real testbed to evaluate the proposed approach in a more realistic manner. Moreover, the overhead of our hierarchical MAPE adaptation architecture will be analyzed.
