In distributed sensing systems with constrained communication capabilities, sensors' noisy measurements must be quantized locally before transmission to the fusion centre. When the same parameter is observed by a number of sensors, the local quantization rules must be jointly designed to optimize a global objective function. In this work we jointly design the local quantizers using the mutual information as the optimization criterion, so that the quantized measurements carry the most information about the unknown parameter. A low-complexity iterative approach is suggested for finding the local quantization rules. With mutual information as the design criterion, the effect of the communication channels can easily be integrated into the design, yielding channel-aware quantization rules. We observe that the optimal design depends on both the measurement and channel noises. Moreover, our algorithm can be used to design quantizers for a variety of applications. We demonstrate the success of our technique by simulating estimation and detection applications, where our method achieves estimation and detection errors as low as those of quantizers designed specifically for those purposes.
1. Introduction
A random source with continuous amplitude requires an infinite number of bits to be described exactly. However, due to practical constraints in communication systems, for example, limited storage or channel capacity, only a finite number of bits can be accommodated. To compress a continuous-amplitude random source into a limited amount of information, its amplitude X must be quantized to a value taken from a discrete finite set. A side effect of this quantization is some loss of information about X, which depends on the quantizer's quality and the compression rate.
Quantization theory has a long history [1, 2]. Rate-distortion theory [3] describes the relation between the distortion caused by quantization and the rate at which the quantized source can be represented. The theoretical limits described by the rate-distortion function can only be achieved asymptotically by optimal source encoding. For designing optimal quantizers in practice, an iterative algorithm was proposed by Lloyd and Max [1, 4] for finding the quantization rule that achieves the lowest distortion, measured as the mean squared error (MSE). Using the Lloyd-Max algorithm, the optimal L-level quantization rule minimizing the estimation MSE can be found for a random scalar X distributed according to a given probability density function (pdf). (It must be mentioned that all iterative algorithms for quantization design find a locally optimal solution, which depends on the initial quantization rules.) The joint quantization of multiple variables has been studied under vector quantization [5, 6].
A more interesting quantization scenario arises when the continuous-amplitude source is not directly observable and only a noisy version of it can be measured, for example, as X + W, where W is some random noise. In this scenario the noisy observation is quantized; however, the goal is to achieve the best representation of X. It has been shown in [7–9] that the optimal quantizer in this case can be obtained using the generalized Lloyd-Max algorithm, where the distortion measure is modified to include both the quantization and measurement noises.
A relatively recent and more challenging problem appears in distributed sensing systems, for example, sensor networks. The problem can be described as distributed noisy source quantization, also referred to as multiterminal source coding or the CEO problem [10, 11]. In a distributed sensing system, the same unknown source is observed by different measurement devices, each obtaining a noisy observation corrupted by its own measurement noise. Each observation has to be quantized according to a local quantization rule and sent to the fusion centre (FC). At the FC, the quantized values are used for estimation, detection, or classification, depending on the application of the sensor network. Since an optimal solution requires the local quantization rules to be jointly optimized, this problem is more challenging than centralized quantization design. The rate-distortion bound is analytically intractable in this case; however, upper and lower bounds have been derived in [12].
The design of optimal distributed quantizers for the above scenario has been considered by the authors of [13–18], who suggest cyclic algorithms based on alternating minimization [19] to find the optimal N quantization rules. The algorithm starts from initial guesses for the N quantizers. During each iteration, for each sensor, the best quantization rule that optimizes a performance criterion, for example, MSE [13, 14], Fisher information [15], or Ali-Silvey distances [16], is found while the other quantizers are held fixed, and it replaces that sensor's current rule.
Compared with the previous design algorithms for distributed quantization, that is, [13–16], in this work we use the mutual information (MI) as the optimization criterion for the distributed quantization design. We jointly design quantizers that maximize the MI of the quantized data and the unknown parameter. Our motivation for using the MI is that it is a fundamental measure showing how much information one variable contains about another variable. We design a set of quantizers for the noisy measurements, in a way that the quantized variables contain the most information about the unknown parameter.
A theory of designing quantizers based on optimizing mutual information measures has been discussed as the information bottleneck method [17], which has mostly been used in clustering and classification applications. In this work we take the channel noise into account and design distributed quantizers that are optimal in the presence of imperfect communication channels.
The MI measure has the following benefits. It allows designing the quantizers independently of the choice of decoder or estimator at the FC. Also, as we discuss later, when using the MI measure the global optimization criterion can be broken down into smaller criteria. Finally, it allows incorporating the effect of communication channels in the design, so that we obtain the optimal distributed channel-aware quantizers. By maximizing the MI of the received data at the FC and the unknown parameter, we observe that, depending on the channel noise, the optimal quantizers can differ from the channel-unaware quantizers.
Performance evaluation through simulation of different scenarios demonstrates the effectiveness of our MI-based distributed quantizer design. This is evaluated for two applications, namely, estimation and detection. We show that the quantization rules obtained by maximizing the MI achieve the same, and in some cases better, performance compared with quantizers specifically designed for estimation or detection, that is, using the MSE or Ali-Silvey distances as the optimization criteria.
The paper is organized around two cases. First, we assume perfect communication channels between the sensors and the FC and develop our method; we then extend the method to the general case where the channels are not perfect. In Section 2, we justify the choice of MI as the design criterion. In Section 3, the problem is defined and formulated based on MI. A design algorithm is devised in Section 4 assuming ideal communication channels. In Section 5, the algorithm is modified to include the channel effect. Finally, the numerical results are presented and discussed in Section 6.
2. Mutual Information as the Optimization Criterion
Most of the literature on optimal quantizer design has used distortion measures, such as MSE, to design the optimal quantizer [1, 4, 7–9, 13]. However, other measures have also been used as criteria to design quantizers; among them are Ali-Silvey distances [16, 20, 21], Cramer-Rao lower bound, and Fisher information [15, 22, 23]. The motivation for using these measures is the fact that they work better in some applications. For example, Ali-Silvey distance measures are shown to design better quantizers for detection applications [16, 20].
A fundamental measure, showing how much information about the unknown is conveyed in the quantized data, is the MI of the unknown and the quantized data. Therefore, in this work we base the design of distributed quantizers on maximizing the MI and show, in Section 6, that the MI criterion results in quantizers with the same or even higher performance than other measures, including distortion measures and Fisher information. The choice of MI as the optimization measure also has computational benefits in the joint optimization of quantizers, as it allows breaking the large global problem into smaller ones, as explained in Section 3. Keeping the number of quantization levels per sensor constant, we achieve the highest information rate by properly designing the quantizers. The MI criterion, to the best of our knowledge, has not previously been studied for distributed quantizer design.
A benefit of using the MI is that it makes the quantizer design independent of the estimation method or decoder. In design solutions based on distortion measures, such as squared error or Hamming distortion [3], the estimation method is fixed, for example, minimum mean squared error (MMSE) or maximum likelihood, and the quantizers are optimized for that estimator. Using the MI measure, however, an estimation or detection method can be developed at the FC after the quantizers are designed, based on each specific application. This enables us to design a quantizer that is useful for estimation, detection, classification, or feature extraction. Specifically for estimation purposes, the optimal quantizers designed by minimizing the MSE (the Lloyd-Max algorithm) are also those with high MI [1, 24]. This makes sense: when the quantized data carry more information about the unknown parameter, the FC has a better representation of the unknown and hence can estimate it more accurately. The performance of our MI-based algorithm in estimation and detection applications is discussed in Sections 6.1 and 6.2, respectively.
Using the MI measure in the distributed quantization design enables breaking down an N-sensor quantization problem into smaller problems. In fact, since the formula of the MI can be recursively broken down using the chain rule of MI, a simpler suboptimal solution can be derived by maximizing each component. The related formulations are discussed in the following section.
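The chain-rule decomposition underlying this breakdown can be checked numerically. The toy joint distribution below is an arbitrary assumption, not the paper's sensor model; it verifies that I(X; U1, U2) = I(X; U1) + I(X; U2 | U1):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((3, 2, 2))        # arbitrary joint p(x, u1, u2)
p /= p.sum()

def mi(pxy):
    """I(X;Y) in nats for a 2-D joint distribution."""
    px = pxy.sum(1, keepdims=True)
    py = pxy.sum(0, keepdims=True)
    m = pxy > 0
    return (pxy[m] * np.log(pxy[m] / (px @ py)[m])).sum()

# left side: MI of X and the pair (U1, U2)
lhs = mi(p.reshape(3, 4))
# right side: I(X; U1) + I(X; U2 | U1)
i_x_u1 = mi(p.sum(2))
i_x_u2_given_u1 = 0.0
for u1 in range(2):
    p_u1 = p[:, u1, :].sum()
    i_x_u2_given_u1 += p_u1 * mi(p[:, u1, :] / p_u1)
rhs = i_x_u1 + i_x_u2_given_u1
assert np.isclose(lhs, rhs)
```

The same identity is what lets each quantizer be designed one at a time against the already-designed ones.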
3. Problem Formulation Based on Mutual Information
The distributed quantization problem addressed in this work is defined as follows.
Suppose X (for brevity of notation, we use the same symbol for a random variable and its value) is a random scalar with a known pdf. A number of noisy measurements of X are observed at distributed locations, where the nth measurement is produced by a measurement function h of X and corrupted by the nth sensor's measurement noise. The measurement noise at different sensors may be correlated, but its distribution is assumed known.
Due to communication constraints, the continuous-amplitude measurements have to be quantized before transmission. Therefore, each measurement is encoded with a local quantization rule. A quantization rule is defined by a set of real-valued numbers called breakpoints that divide the real line into partitions and assign a discrete quantization level to each partition. Each quantized datum is then transmitted over a communication channel, and the corresponding received symbol arrives at the FC. The complete problem model is shown in Figure 1.
Figure 1: The complete model of the problem.
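A quantization rule of this form can be applied directly with a breakpoint search; the breakpoints below are illustrative:

```python
import numpy as np

# three breakpoints divide the real line into L = 4 partitions
breakpoints = np.array([-1.0, 0.0, 1.0])

def quantize(y):
    """Map each measurement to the index of the partition it falls in."""
    return np.digitize(y, breakpoints)   # values in {0, 1, 2, 3}

u = quantize(np.array([-2.0, -0.5, 0.5, 3.0]))
```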
Let the measurements, the quantized data, and the received symbols of all N sensors be denoted collectively in vector form. Then the random variable X, the measurements, the quantized data, and the received symbols form a first-order Markov chain; hence
We will use this property to develop the design algorithm in Sections 4 and 5. We first consider ideal communication channels between the sensors and the FC and derive an algorithm for designing the quantization rules in Section 4. In Section 5, we extend the algorithm to consider the channel effect and design channel-aware quantization rules.
From this point to Section 5, the communication channels are assumed to be ideal. The goal is to derive the N local quantization rules so that, on average, the quantized random variables together give the best representation of X. To achieve that, we maximize the MI of X and the N quantized data over the N local quantization rules, that is,
Due to the stepwise characteristic of the quantization rules, finding an analytical solution to the problem in (2) is difficult. Therefore, in this work we find some approximations and numerical methods to tackle this problem.
The MI in (2) can be recursively written based on the chain rule of mutual information [3]:
Hence, a suboptimal solution can be derived based on this recursive breakdown as
where in the nth line the quantized data of the first n-1 sensors are generated by the previously found quantization rules.
Finding the nth quantization rule from (4) is less complex than finding all quantization rules jointly from (2); therefore, this suboptimal approach has lower complexity. In the following section, we develop a method to find a solution for the maximization problem in (4).
4. Design Algorithm
To find the nth quantization rule, one should solve the maximization in the nth line of (4). However, since a quantization rule is a stepwise function, the optimization is not analytically tractable. In this section we provide a numerical method for finding a locally optimal solution for the nth quantization rule. In (4), assume that the first n-1 quantization rules are known. To find the nth rule, according to (4) we need to maximize the conditional MI in the nth line. When the previous quantization rules are fixed, this is equivalent to the following optimization:
Using the chain rule of MI and the definition of MI based on the entropy [3],
In (6), the entropy terms involving only X and the previously quantized data are independent of the choice of the nth quantizer. Therefore, we can reduce the optimization problem to (for brevity, the fixed conditioning terms are dropped from the formulas)
where the last equation is a consequence of the Markov chain property in (1). Note that since each quantized value depends solely on its own sensor's measurement, we have
Therefore, (7) can also be written as
Note that, in the above formula, the maximization is over the nth quantization rule. It is straightforward to see that the conditional probability of the quantized value given the measurement is just another form of defining the nth quantization rule. Since the rule maps each measurement y to exactly one quantization level, we can write
where δ is the Kronecker delta function, which is equal to one when its two arguments are equal and zero otherwise. Note that, in (9), this conditional distribution also depends on the quantization rule.
To solve the optimization problem in (9), motivated by [25] we use the double-maxima approach, converting (9) into a larger maximization problem. The maximization in (9) can be achieved by the following three steps, as proven in the Appendix.
(i) The maximum of the objective function in (9) can be written as
where p is a short form for the conditional distribution that defines the nth quantization rule and f is a short form for an auxiliary function of X and the quantized data, normalized over X for every realization of the quantized data. And
(ii) Now, for a fixed p, the objective is maximized by
(iii) And, for a fixed f, the p that maximizes the objective is obtained as
It is shown in the Appendix that this procedure attains the maximum. Using (i), (ii), and (iii), an iterative algorithm to find the optimal nth quantization rule is derived as Algorithm 1. In Algorithm 1, ϵ sets the stopping condition for the iterations. Since the mutual information increases at each iteration and is upper-bounded, the algorithm converges to a local maximum.
Algorithm 1: Iterative algorithm to find nth quantization rule.
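The alternating structure of Algorithm 1 can be sketched for a single sensor and an ideal channel. The Gaussian measurement model, the grids, and the level count below are illustrative assumptions; the updates follow the double-maxima pattern (fix the auxiliary function f, then reassign each measurement):

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny, L = 41, 81, 4
x = np.linspace(-3, 3, nx)
y = np.linspace(-5, 5, ny)
px = np.exp(-x**2 / 2)
px /= px.sum()                                  # discretized prior on X
pygx = np.exp(-(y[None, :] - x[:, None])**2 / 2)
pygx /= pygx.sum(1, keepdims=True)              # p(y|x), model Y = X + W
pxy = px[:, None] * pygx                        # joint p(x, y)

def mi(pj):
    """Mutual information (nats) of a 2-D joint distribution."""
    pa, pb = pj.sum(1, keepdims=True), pj.sum(0, keepdims=True)
    m = pj > 0
    return (pj[m] * np.log(pj[m] / (pa @ pb)[m])).sum()

u = rng.integers(0, L, size=ny)                 # random initial rule
for _ in range(50):
    # step (ii): f(x, u) = p(x|u) under the current assignment
    pxu = np.stack([pxy[:, u == k].sum(1) for k in range(L)], axis=1)
    f = pxu / np.maximum(pxu.sum(0), 1e-300)
    # step (iii): assign each y to the u maximizing sum_x p(x|y) log f(x, u)
    pxgy = pxy / pxy.sum(0)
    u = np.argmax(pxgy.T @ np.log(np.maximum(f, 1e-300)), axis=1)

pxu = np.stack([pxy[:, u == k].sum(1) for k in range(L)], axis=1)
i_xu, i_xy = mi(pxu), mi(pxy)                   # I(X;U) <= I(X;Y)
```

By the data processing inequality, the quantized MI `i_xu` can never exceed the unquantized `i_xy`; the iterations push it toward that bound.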
5. Channel-Aware Quantization Design

The discussions up to this point have assumed ideal communication channels between the sensors and the FC. In real distributed sensing systems, the quantized data generated by the sensors might not be received correctly at the FC because of nonideal communication channels, which degrades the overall performance of the system. Hence, considering the channel effect in designing the quantizers is crucial [26]. For centralized quantization, the authors of [27–29] revise the MSE to include the channel effect and then jointly optimize the source encoders and the reconstruction levels at the receiver by minimizing this new MSE. For distributed quantization, channel-optimized quantizer design has been developed for hypothesis testing by minimizing the Bayesian cost [30, 31]. Recently, distributed channel-aware quantizer design for multiple correlated sources has been addressed in [32, 33], where M source encoders are designed to quantize M correlated sources in the presence of noisy communication channels. Reference [34] discusses the problem under a total power constraint and designs the quantizers by minimizing the signal distortion at the receiver. For multimedia applications in distributed networks, multiple description coding has been used to combat channel loss [33]. References [35–37] address the effect of imperfect transmission channels in multiple description coding algorithms for distributed video transmission. In this section, we incorporate the channel into our quantizer design to recover the signal more accurately at the destination.
In this section, we design optimal channel-aware quantizers for the distributed quantization of a noisy source using the MI measure. We assume that the communication channels between the sensors and the FC are mutually independent. In the presence of these noisy channels, we optimize the quantizers by maximizing the MI of the unknown parameter and the channels' outputs. We use the Markov chain property in (1) and, to solve the optimization problem, follow an approach similar to that of Section 4.
Due to channel errors, the received symbol at the FC might not be the same as the transmitted symbol. We assume that the channel transition probabilities are known. Based on these transition probabilities, we can write the MI of X and the received symbols at the channels' outputs, which we then maximize to find the N optimal channel-aware quantizers. Arguments similar to those preceding (5) apply here. Hence, the nth optimal channel-aware quantization rule is obtained as
By substituting the received symbols for the quantized data in (6), we get
Since the measurement, the quantized datum, and the received symbol form a Markov chain, we have
and therefore
Assuming that the channel between each sensor and the FC is independent of the other channels, the joint channel transition probability is the product of the transition probabilities of the individual sensor-to-FC channels. Following the same steps as in Section 4, we can derive analogous results. Note that the conditional distribution term in (18) depends on the quantization rule. First we write
where p is a short form for the conditional distribution that defines the quantization rule and f is a short form for an auxiliary function of X and the received data, normalized over X for every realization of the received data. And
Then, for a fixed p, the objective is maximized by
And, for a fixed f, the p that maximizes the objective is obtained as
Finally, an iterative solution similar to Algorithm 1 can be used to find the N channel-aware quantization rules.
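The channel-aware assignment step differs from the ideal-channel one only in taking an expectation over the channel output: each measurement is assigned to the symbol u maximizing the expected value, over the received symbol v given u, of sum_x p(x|y) log f(x, v). The sketch below uses random stand-ins for the transition matrix T, the auxiliary function f, and the posterior p(x|y), since only the structure of the update is of interest:

```python
import numpy as np

rng = np.random.default_rng(2)
nx, ny, L = 5, 7, 4
T = np.full((L, L), 0.1 / 3)
np.fill_diagonal(T, 0.9)                  # p(v|u): rows sum to one
f = rng.random((nx, L))
f /= f.sum(0)                             # auxiliary f(x, v)
pxgy = rng.random((nx, ny))
pxgy /= pxgy.sum(0)                       # stand-in posterior p(x|y)

score_v = pxgy.T @ np.log(f)              # (ny, L): sum_x p(x|y) log f(x, v)
score_u = score_v @ T.T                   # expectation over v given u
u = np.argmax(score_u, axis=1)            # channel-aware rule
```

With an identity transition matrix this reduces to the ideal-channel assignment of Section 4.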
6. Simulation Results
In this section the performance of the proposed algorithm is demonstrated using computer simulations and compared with other methods. In particular, we examine the performance of our MI-based quantization design for estimation and detection applications in Sections 6.1 and 6.2, respectively. The effect of nonideal channels on the optimal quantization rules is investigated in Section 6.3.
6.1. Estimation Application
For a distributed sensing system with estimation purposes, the quantized values are used at the FC to estimate the unknown. To compare with [15], where the quantization rules are obtained by minimizing the MSE, we use a similar simulation scenario. The unknown parameter X has a known prior pdf. Two sensors are involved; that is, N = 2. The measurement noises are additive Gaussian with correlation ρ and identical marginal distributions. The number of quantization levels for both sensors is L. At the FC we use the MMSE estimator to estimate X from the two quantized measurements. Similar to [15], the initial quantization breakpoints are chosen from the optimal quantization rules of the Lloyd-Max algorithm [4].
Our algorithm finds the two optimal quantizers by maximizing the MI between X and the pair of quantized measurements. According to (4), this MI can be broken down into two components. Based on Algorithm 1, the first component is maximized to find the first quantization rule, and subsequently the second component is maximized to find the second rule. Due to this decomposition, the MI, which is the sum of the two components, is maximized in two steps. Figure 2 shows the value of the MI at each iteration of the algorithm.
Figure 2: Designing the quantization rules by maximizing the MI.
At each iteration of the algorithm, the current quantization rules are used to quantize the two measurements, and the quantized values are then used to estimate X with the MMSE estimator. The estimation performance at each iteration, computed in terms of MSE, is shown in Figure 3. It can be seen from Figures 2 and 3 that maximizing the MI decreases the estimation MSE.
Figure 3: MSE change at each iteration.
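The MMSE fusion step can be sketched on a discretized grid. For simplicity this assumes independent unit-variance Gaussian measurement noises (the paper also allows correlated noises) and illustrative breakpoints; `math.erf` is vectorized to evaluate the standard normal CDF:

```python
import numpy as np
from math import erf

Phi = np.vectorize(lambda t: 0.5 * (1 + erf(t / 2**0.5)))   # normal CDF

x = np.linspace(-4, 4, 201)
px = np.exp(-x**2 / 2)
px /= px.sum()                                  # discretized prior on X
bp = np.array([-0.98, 0.0, 0.98])               # illustrative 4-level rule
edges = np.concatenate(([-np.inf], bp, [np.inf]))
L = len(bp) + 1

def cell_prob(xv, k):
    """P(U = k | X = xv) for Y = X + W with W ~ N(0, 1)."""
    return Phi(edges[k + 1] - xv) - Phi(edges[k] - xv)

# joint p(x, u1, u2), assuming conditionally independent noises
p = np.zeros((len(x), L, L))
for k1 in range(L):
    for k2 in range(L):
        p[:, k1, k2] = px * cell_prob(x, k1) * cell_prob(x, k2)

post = p / p.sum(0)                             # p(x | u1, u2)
xhat = np.tensordot(x, post, axes=(0, 0))       # MMSE estimate per (u1, u2)
mse = (p * (x[:, None, None] - xhat[None]) ** 2).sum()
```

The estimate table `xhat` has one entry per pair of received quantization levels, and `mse` is the resulting average squared error.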
The optimal quantization rules at the end of the iterations are represented by their sets of breakpoints in Table 1. For different simulation scenarios the final quantization rules and the final MSE are shown and compared with the results of the Lam algorithm [15]. Compared with [15], the final quantization rules are different, but the MSE performances are essentially the same.
Table 1: MSE for estimation.
6.2. Detection Application
In a distributed sensing system with detection purposes, the FC uses the quantized data to perform a hypothesis test. We use our method of maximizing the MI to find the optimal quantization rules for the detection scenario and compare the performance with that of the Poor algorithm [20], where Ali-Silvey distances [38] are used as the optimization criterion.
To simulate the detection scenario we assume that the unknown X is a Bernoulli random variable representing the absence or presence of a signal θ. Each sensor observes X in additive Gaussian noise and sends its quantized observation to the FC. We use the algorithm in Section 4 to design the optimal rules for quantizing the measurements. Note that since X takes values in a finite set, the integral over X in all equations becomes a summation over this set. At the FC, the Neyman-Pearson method is used to test the hypotheses.
To compare with Poor [20], we assume equally likely hypotheses. Also, each sensor quantizes its observation to L levels, and the measurement noises are i.i.d. Gaussian. The probability of detection error is shown in Table 2 for two different signal energies. Our method is indicated by "MI" in the table, alongside the results based on the Matsushita distance and J-divergence criteria from [20]. The error probabilities show that the detection performance is similar to, and in some cases better than, that of [20].
Table 2: Probability of error for signal detection.
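A minimal version of this detection setup can be sketched as follows. It uses a likelihood-ratio test with unit threshold, which is equivalent to maximum likelihood under the equal priors assumed above, rather than a full Neyman-Pearson formulation; the signal amplitude θ and the single breakpoint are illustrative:

```python
import numpy as np
from math import erf
from itertools import product

Phi = np.vectorize(lambda t: 0.5 * (1 + erf(t / 2**0.5)))   # normal CDF

theta = 1.0                      # signal amplitude under H1 (illustrative)
bp = np.array([0.5])             # one breakpoint -> 2-level quantizer
N = 2                            # number of sensors
edges = np.concatenate(([-np.inf], bp, [np.inf]))

def pmf(mean):
    """P(U = k) for Y ~ N(mean, 1) quantized by `edges`."""
    return Phi(edges[1:] - mean) - Phi(edges[:-1] - mean)

p0, p1 = pmf(0.0), pmf(theta)    # symbol pmfs under H0 and H1
pe = 0.0
for u in product(range(len(bp) + 1), repeat=N):
    l0 = np.prod([p0[k] for k in u])          # P(u | H0)
    l1 = np.prod([p1[k] for k in u])          # P(u | H1)
    # decide H1 iff l1 > l0 and accumulate the error probability
    pe += 0.5 * (l0 if l1 > l0 else l1)
```

Sweeping the breakpoint (or using the MI-designed rule) changes `pe`, which is how the quantizer designs are compared.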
6.3. Channel Effect
The presence of a nonideal communication channel between each sensor and the FC affects the design of optimal local quantizers for each sensor. Using the design algorithm developed in Section 5, we find the channel-aware local quantizers. The simulation results confirm that the optimal quantizers assuming ideal channels are different from the optimal quantizers in the presence of nonideal channels.
To compare the channel-aware and channel-unaware quantization schemes, we consider an estimation application. We assume that each sensor's quantized datum is mapped to a binary word and sent to the FC via a binary symmetric channel (BSC) with a known crossover probability; the transition probabilities of the received symbols are derived from this crossover probability. In the simulation examples we assume two sensors and identical crossover probabilities for all channels.
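The symbol transition probabilities induced by the per-bit BSC follow from Hamming distances between the binary words of the symbol indices; the symbol count and crossover probability below are illustrative:

```python
import numpy as np

def bsc_transition(L, eps):
    """Symbol transition matrix when each of the b = ceil(log2 L) bits of
    the symbol index passes through an independent BSC with crossover eps."""
    b = int(np.ceil(np.log2(L)))
    d = np.array([[bin(uu ^ vv).count("1") for vv in range(L)]
                  for uu in range(L)])        # pairwise Hamming distances
    return eps**d * (1 - eps)**(b - d)

T = bsc_transition(4, 0.1)        # 4 symbols, 2 bits per symbol
```

Each row of `T` sums to one, and the diagonal entry (1 - eps)^b is the probability of receiving the symbol unchanged.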
Figures 4 and 5 show the maximization of the MI and the resulting reduction of the MSE, respectively, during the iterations and for different values of ϵ. The final quantizers are given in Table 3. It can be seen from Table 3 that the optimal quantization solution changes with the channel error probability. Consequently, if one deploys quantizers designed for one crossover probability in a scenario with a different crossover probability, the MSE is larger than that achieved by the quantizers designed for the actual crossover probability.
Table 3: Optimal quantizers in the presence of a nonideal channel.
Figure 4: Maximizing the MI in the presence of nonideal communication channels.
Figure 5: MSE of estimation at each iteration of the algorithm.
For the detection application over nonideal communication channels, we have compared our results with those of [31] for different problem setups. Reference [31] develops an iterative algorithm that minimizes the error probability at the fusion centre, after finding the optimal fusion rule at each iteration based on that iteration's quantizers. For all setups, we use i.i.d. Gaussian measurement noises; the remaining parameters vary across scenarios. The optimized quantizers and the final error probabilities are given in Table 4. For all cases our results are close to, and slightly better than, those of [31].
Table 4: Error detection in the presence of a nonideal channel.
Method | Parameters | Probability of error
Liu and Chen | 0.5, 0.1 | 0.2932
MI | 0.5, 0.1 | 0.2856
Liu and Chen | 0.5, 0.2 | 0.3376
MI | 0.5, 0.2 | 0.3351
Liu and Chen | 0.3, 0.1 | 0.2404
MI | 0.3, 0.1 | 0.2358
Liu and Chen | 0.3, 0.27 | 0.2855
MI | 0.3, 0.27 | 0.2756
7. Conclusion
In this paper, we proposed an algorithm based on maximizing the MI measure for jointly designing optimal channel-aware local quantization rules for a distributed sensing system. The MI allows us to design general-purpose quantizers that can later be deployed in different applications, for example, estimation or detection. We have shown that the performance of the optimal quantizers based on the MI is essentially the same as that of optimal quantizers from other methods that specifically target the estimation or detection application. We also observed that the optimal local quantizers in the presence of nonideal channels differ from the local quantizers optimized without considering the channel effect.
Footnotes
Competing Interests
The authors declare that they have no competing interests.
References
1. J. Max, "Quantizing for minimum distortion," IRE Transactions on Information Theory, vol. 6, no. 1, pp. 7–12, 1960.
2. R. M. Gray and D. L. Neuhoff, "Quantization," IEEE Transactions on Information Theory, vol. 44, no. 6, pp. 2325–2383, 1998.
3. T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley-Interscience, New York, NY, USA, 1991.
4. S. P. Lloyd, "Least squares quantization in PCM," IEEE Transactions on Information Theory, vol. 28, no. 2, pp. 129–137, 1982.
5. Y. Linde, A. Buzo, and R. M. Gray, "An algorithm for vector quantizer design," IEEE Transactions on Communications, vol. 28, no. 1, pp. 84–95, 1980.
7. E. Ayanoglu, "On optimal quantization of noisy sources," IEEE Transactions on Information Theory, vol. 36, no. 6, pp. 1450–1452, 1990.
8. T. Fine, "Optimum mean-square quantization of a noisy input," IEEE Transactions on Information Theory, vol. 11, no. 2, pp. 293–294, 1965.
9. Y. Ephraim and R. M. Gray, "A unified approach for encoding clean and noisy sources by means of waveform and autoregressive model vector quantization," IEEE Transactions on Information Theory, vol. 34, no. 4, pp. 826–834, 1988.
10. T. Berger, Z. Zhang, and H. Viswanathan, "The CEO problem [multiterminal source coding]," IEEE Transactions on Information Theory, vol. 42, no. 3, pp. 887–902, 1996.
11. T. A. Courtade and T. Weissman, "Multiterminal source coding under logarithmic loss," IEEE Transactions on Information Theory, vol. 60, no. 1, pp. 740–761, 2014.
12. G. B. Giannakis, I. D. Schizas, and N. Jindal, "Distortion-rate bounds for distributed estimation using wireless sensor networks," EURASIP Journal on Advances in Signal Processing, vol. 2008, Article ID 748605, 2008.
13. W. Lam and A. Reibman, "Quantizer design for decentralized estimation systems with communications constraints," in Proceedings of the 23rd Annual Conference on Information Sciences and Systems, pp. 489–494, March 1989.
14. J. Gubner, "Distributed estimation and quantization," IEEE Transactions on Information Theory, vol. 39, no. 4, pp. 1456–1459, 1993.
15. W.-M. Lam and A. R. Reibman, "Design of quantizers for decentralized estimation systems," IEEE Transactions on Communications, vol. 41, no. 11, pp. 1602–1605, 1993.
16. M. Longo, T. D. Lookabaugh, and R. M. Gray, "Quantization for decentralized hypothesis testing under communication constraints," IEEE Transactions on Information Theory, vol. 36, no. 2, pp. 241–255, 1990.
18. X. Shen, Y. Zhu, and Z. You, "An efficient sensor quantization algorithm for decentralized estimation fusion," Automatica, vol. 47, no. 5, pp. 1053–1059, 2011.
19. I. Csiszár and G. Tusnády, "Information geometry and alternating minimization procedures," Statistics and Decisions, supplement 1, pp. 205–237, 1984.
20. H. V. Poor and J. B. Thomas, "Applications of Ali-Silvey distance measures in the design of generalized quantizers for binary decision systems," IEEE Transactions on Communications, vol. 25, no. 9, pp. 893–900, 1977.
21. H. V. Poor, "Fine quantization in signal detection and estimation," IEEE Transactions on Information Theory, vol. 34, no. 5, pp. 960–972, 1988.
22. P. Venkitasubramaniam, L. Tong, and A. Swami, "Quantization for maximin ARE in distributed estimation," IEEE Transactions on Signal Processing, vol. 55, no. 7, pp. 3596–3605, 2007.
23. M. L. Fowler and M. Chen, "Fisher-information-based data compression for estimation using two sensors," IEEE Transactions on Aerospace and Electronic Systems, vol. 41, no. 3, pp. 1131–1137, 2005.
24. D. Messerschmitt, "Quantizing for maximum output entropy," IEEE Transactions on Information Theory, vol. 17, no. 5, p. 612, 1971.
25. R. E. Blahut, "Computation of channel capacity and rate-distortion functions," IEEE Transactions on Information Theory, vol. 18, no. 4, pp. 460–473, 1972.
26. B. Chen, L. Tong, and P. K. Varshney, "Channel-aware distributed detection in wireless sensor networks," IEEE Signal Processing Magazine, vol. 23, no. 4, pp. 16–26, 2006.
27. N. Farvardin and V. Vaishampayan, "Optimal quantizer design for noisy channels: an approach to combined source-channel coding," IEEE Transactions on Information Theory, vol. 33, no. 6, pp. 827–838, 1987.
28. A. J. Kurtenbach and P. A. Wintz, "Quantizing for noisy channels," IEEE Transactions on Communication Technology, vol. 17, no. 2, pp. 291–302, 1969.
29. N. Farvardin and V. Vaishampayan, "On the performance and complexity of channel-optimized vector quantizers," IEEE Transactions on Information Theory, vol. 37, no. 1, pp. 155–160, 1991.
30. T. M. Duman and M. Salehi, "Decentralized detection over multiple-access channels," IEEE Transactions on Aerospace and Electronic Systems, vol. 34, no. 2, pp. 469–476, 1998.
31. B. Liu and B. Chen, "Channel-optimized quantizers for decentralized detection in sensor networks," IEEE Transactions on Information Theory, vol. 52, no. 7, pp. 3349–3358, 2006.
32. N. Wernersson, J. Karlsson, and M. Skoglund, "Distributed quantization over noisy channels," IEEE Transactions on Communications, vol. 57, no. 6, pp. 1693–1700, 2009.
33. M. Valipour and F. Lahouti, "Channel optimized distributed multiple description coding," IEEE Transactions on Signal Processing, vol. 60, no. 5, pp. 2539–2551, 2012.
34. M. H. Chaudhary and L. Vandendorpe, "Power constrained linear estimation in wireless sensor networks with correlated data and digital modulation," IEEE Transactions on Signal Processing, vol. 60, no. 2, pp. 570–584, 2012.
35. H. Bai, A. Wang, Y. Zhao, J. Pan, and A. Abraham, Distributed Multiple Description Coding—Principles, Algorithms and Systems, Springer, London, UK, 2011.
36. Z. Xue, A. Anhong, Z. Bing, and L. Lei, "Adaptive distributed compressed video sensing," Journal of Information Hiding and Multimedia Signal Processing, vol. 5, no. 1, pp. 98–106, 2014.
37. M. H. Taieb, J.-Y. Chouinard, D. Wang, K. Loukhaoukha, and G. Huchet, "Progressive coding and side information updating for distributed video coding," Journal of Information Hiding and Multimedia Signal Processing, vol. 3, no. 1, pp. 1–11, 2012.
38. S. M. Ali and S. D. Silvey, "A general class of coefficients of divergence of one distribution from another," Journal of the Royal Statistical Society, Series B, vol. 28, no. 1, pp. 131–142, 1966.