Introduction
In recent years, we have witnessed the development and proliferation of billions of mobile and Internet of Things (IoT) devices. This rapid increase in IoT sensors, things, objects, and resources puts a demanding strain on existing IoT and distributed sensor network infrastructures. Emerging technologies and intelligent techniques can play a compelling role in addressing these new requirements. The result is a new interdisciplinary field and paradigm termed the artificial intelligence Internet of Things (AIoT). 1 The AIoT is beginning to receive significant interest from research communities and industry. The widespread acceptance and penetration of artificial intelligence (AI) technology have enabled more AIoT applications to emerge. These applications are computationally demanding, as they must execute machine-learning or deep learning algorithms while meeting real-time processing constraints. However, IoT devices are resource constrained (in computation, storage, and communications), which increases the design and deployment challenges of fulfilling the quality-of-service (QoS) requirements of the AIoT and its related applications.
In this article, we analyze the convergence of AI throughout the IoT architecture to form the AIoT with a focus on four aspects: (1) architectures, techniques, and hardware platforms for AIoT; (2) sensors, devices, and energy approaches for AIoT; (3) communication and networking for AIoT; and (4) applications for AIoT. The first aspect concerns the combination of AI and edge computing as key enabling technologies for the AIoT. The remaining three aspects seek to apply and embed AI and machine-learning techniques in the design and implementation of the various IoT layers. Figure 1 shows the overall concept of the AIoT and its general layer structure, which consists of three layers: (1) the sensing and device layer; (2) the communication and network layer; and (3) the application layer. The AIoT aims to embed AI and machine-learning techniques into the sensing, communication, and application layers to achieve high-performing IoT infrastructures.

AIoT architecture and layers.
In the sensing and device layer, the AIoT paradigm can take advantage of recently developed edge computing architectures 2 and machine-learning approaches, such as active learning (AL), 3 transfer learning (TL), 4 and federated learning (FL). 5 AL techniques can deal with the time-varying and unpredictable data over the IoT network. TL utilizes pre-trained models developed at the edge servers to deliver accurate results. FL can provide the necessary privacy at the edge server for the information being processed. In the communication and network layer, the AIoT paradigm can take advantage of newly emerging communication technologies and networks, such as software-defined networking (SDN) and 5G/6G cellular communications.
A critical factor for the AIoT paradigm is security within the IoT network. The heterogeneity of IoT devices and the massive scale of IoT networks make them highly vulnerable to attack in several ways, as the likelihood of finding an IoT device that has not been properly secured or that has weak security measures is very high. With the deployment of IoT on 5G networks, which provide high speed, low latency, and large bandwidth, it is easier, in the case of a network security breach, for hackers to steal and download information, such as customer and personal information, at a faster rate. 6 Traditional security schemes are not efficient at solving the security issues of IoT networks, especially emerging security threats. AI tools have been integrated with the IoT (AIoT) to offer more effective and efficient security solutions for IoT networks.
In the application layer, the potential for the AIoT is very high. It can be applied to make urban environments and vehicles smart. In smart homes, home appliances such as smart TVs, lights, thermostats, refrigerators, coffee makers, and so on are equipped with sensors and intelligence to learn a user's habits. The intelligence developed can be used to offer automated home support for daily tasks, minimize energy usage, and so on. Many organizations apply the AIoT to assist in managing energy, lighting, and access in office buildings. Sensors placed in the office can detect the number of staff present at any given time and make the necessary adjustments to lighting and temperature to enhance energy efficiency. Sensors also use face recognition to determine which personnel to grant access to certain areas of the office environment. 7 The introduction of AIoT and edge computing for intelligent driving allows the vehicle to offload tasks to a server closer to the vehicle side, creating a new paradigm for task offloading and resource allocation.
The survey work has used relevant papers searched from databases and other sources including IEEE Xplore, ScienceDirect, Scopus, and Google Scholar. We gave priority to papers published more recently (from 2015 onwards). In performing the search, we used keywords such as “Artificial Intelligence,” “Internet of Things,” and “Distributed Sensor Networks,” combined with AND/OR operators, to find papers relevant to the area of AIoT. The search also used two major contexts, the problem context and the solution context, to locate studies focusing on techniques, methods, and applications for the AIoT.
The remainder of the paper is organized into five sections. The sections are mostly structured to follow the different layers in the AIoT as shown in Figure 1. Section “The convergence of architectures, techniques, and platforms for AIoT” discusses the convergence of architectures, techniques, and hardware platforms for AIoT. The discussions here include edge, fog, MEC, and cloud architectures for AIoT deployment. Section “The convergence of sensors, devices, and energy approaches for AIoT” discusses the convergence of sensors, devices, and energy approaches for the AIoT. Section “The convergence of communication and networking for AIoT” discusses the convergence of communication and networking for the AIoT. Section “The convergence of applications for AIoT” discusses the convergence of various applications for the AIoT. Section “Conclusion” concludes the article with some final remarks. Table 1 shows a summary and discussion for the AIoT—a new paradigm of distributed sensor networks—and also serves as a concise summary for the article.
Summary and discussions for AIoT—a new paradigm of distributed sensor networks.
The convergence of architectures, techniques, and platforms for AIoT
This section discusses the convergence of architectures, techniques, and platforms for AIoT from two aspects: (1) edge, fog, and mobile-edge computing (MEC) architectures for AIoT and (2) novel machine-learning and training techniques for AIoT. A summary of the representative studies discussed in this section can be found in Table 1.
Edge, fog, and MEC architectures for AIoT
Chen et al. 8 proposed an approach for resource-efficient edge computing in the AIoT termed ThriftyEdge. In this work, they proposed a device-centric, resource-efficient edge computing scheme that minimizes cloud resource utilization while satisfying QoS requirements. Their approach is based on a novel mechanism consisting of an efficient topological sorting–based task graph partition algorithm. An optimal virtual machine selection method is also proposed to minimize the IoT device's edge resource occupancy while meeting its QoS requirement: the virtual machine types are ranked, and the selection based on this ranking is used together with the graph partition algorithm. Figure 2(a) shows the task graph for topological sorting and partitioning, and Figure 2(b) shows the flow chart of the computation offloading profile.

Task graph and flow chart for ThriftyEdge: 8 (a) task graph for topological sorting and (b) flow chart for computation offloading.
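As a minimal illustration of the topological sorting step that underlies such task graph partitioning (a generic Kahn-style sketch, not ThriftyEdge's actual algorithm; the task identifiers and dependency dictionary are hypothetical):

```python
from collections import deque

def topological_order(tasks, deps):
    """Return a topological order of task IDs.

    tasks: iterable of task IDs.
    deps: dict mapping a task to the list of tasks it depends on.
    """
    indegree = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for t, parents in deps.items():
        for p in parents:
            children[p].append(t)
            indegree[t] += 1
    ready = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    if len(order) != len(indegree):
        raise ValueError("task graph contains a cycle")
    return order

# Hypothetical task graph: A -> B, A -> C, B -> D, C -> D
order = topological_order("ABCD", {"B": ["A"], "C": ["A"], "D": ["B", "C"]})
print(order)  # ['A', 'B', 'C', 'D']
```

A partitioner can then walk the ordered tasks and cut the sequence into local and offloaded segments.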
Fragkos et al. 9 proposed an AI-MEC framework by exploiting the computing capabilities of an unmanned aerial vehicle (UAV)-mounted MEC server for IoT applications. In this work, the authors formulated optimal offloading strategies to the UAV MEC servers based on a game-theoretic model. The pure Nash equilibrium (PNE) strategies were determined following the theory of submodular games. Their experimental results demonstrated the operational characteristics and performance of the models. Gong et al. 10 proposed another approach to perform cooperative edge computing in AIoT devices termed ICE (intelligent cooperative edge). In their approach, the AI computations are re-designed to be distributed from the cloud and operate on edge devices. Their distribution approach utilized lightweight pipelines for cloud compression and edge reconstruction. The authors' prototype and their evaluation showed that their approach could enable a useful combination of AI and edge computing.
Sanchez et al. 12 proposed an approach for the real-time acceleration of convolutional neural networks (CNNs) on edge devices termed AWARE-CNN. Their approach utilized the reconfigurability features of field-programmable gate array (FPGA) devices to construct application-specific AIoT architectures that guarantee latency-aware execution for IoT data. Figure 3 shows the architecture of AWARE-CNN from a high-level view down to its microarchitecture. Figure 3(a) shows an overview of AWARE-CNN, which creates a translation from the high-level logical description of CNNs to an application-specific dataflow execution engine. Figure 3(b) shows the logical execution of cross-layer parallelism in AWARE-CNN. Figure 3(c) shows the microarchitecture, where each layer is mapped to a pipeline stage containing a buffer-sized tile.

AWARE-CNN acceleration on edge devices: 12 (a) overview of AWARE-CNN, (b) cross-layer parallelism in AWARE-CNN, and (c) microarchitecture in AWARE-CNN.
A current limitation of DL systems in IoT environments is the efficient assignment of compute tasks. Zhou et al. 13 proposed an approach for allocating the inference computation among the IoT devices termed AAIoT (accelerating artificial intelligence Internet of Things). Figure 4 shows the proposed AAIoT approach. In their approach, the first device collects the data, performs the computation for the first layer, and delivers the results to the next device. This process is carried out across the different devices, and the final result is communicated back to the first device. The operations of each layer depend on the results of the previous layer.

AAIoT inference computation on edge devices. 13
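A simplified way to see the trade-off that such layer-wise allocation must balance is to compute the end-to-end latency of a candidate layer-to-device assignment. The device compute times and transfer costs below are hypothetical, and AAIoT optimizes the assignment rather than merely evaluating it:

```python
def pipeline_latency(assignment, compute, transfer):
    """Total latency when consecutive DNN layers are assigned to devices.

    assignment: list of device indices, one per layer (in layer order).
    compute[d][l]: time for device d to compute layer l.
    transfer[a][b]: time to move an intermediate result from device a to b.
    """
    total = 0.0
    for layer, dev in enumerate(assignment):
        total += compute[dev][layer]
        if layer + 1 < len(assignment):
            nxt = assignment[layer + 1]
            if nxt != dev:
                total += transfer[dev][nxt]
    return total

# Hypothetical setting: 2 devices, 3 layers.
compute = [[1.0, 4.0, 4.0],   # device 0 (weak, holds the sensor data)
           [0.5, 1.0, 1.0]]   # device 1 (stronger edge node)
transfer = [[0.0, 2.0], [2.0, 0.0]]
split = [0, 1, 1]             # first layer near the sensor, rest offloaded
local = [0, 0, 0]             # everything on the weak device
print(pipeline_latency(split, compute, transfer),   # 5.0
      pipeline_latency(local, compute, transfer))   # 9.0
```

Enumerating or searching over assignments with such a cost model is the essence of the allocation problem.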
He et al. 14 proposed an approach to reduce the end-to-end inference delay for DL implementation with multiple deep neural network (DNN) partitions. In this work, the authors used a mixed-integer nonlinear programming approach and divided the problem into two subproblems: (1) a computing resource allocation (CRA) problem with fixed partitions and (2) a DNN partition deployment (DPD) problem. The authors then proposed to solve the CRA problem based on a Markov approximation algorithm and proposed a low-complexity DPD algorithm to solve the DPD problem and achieve a near-optimal solution. The computational complexity for cooperative multiaccess edge computing can increase significantly when there is a large number of users, which leads to delays in decision-making. Shi et al. 11 proposed a deep reinforcement learning (DRL) approach for the task placement problem to enable servers to reduce the average service delay in cooperative multiaccess edge computing. Their approach utilized a mean-field game-guided DRL approach to address the task placement problem.
Liu et al. 16 considered another challenge for edge computing in mobile wireless sensor networks (WSNs) with disjoint connectivity. In their work, the authors proposed a rendezvous selection strategy for data collection in disjoint WSNs with mobile edge nodes, optimizing the path length and network connectivity. Their approach utilized the ant colony optimization (ACO) algorithm to give two advantages: (1) simplifying the path construction and computational cost and (2) reducing the search space and increasing the convergence speed. Yang et al. 15 proposed an approach for the energy-efficient execution of deep learning inference tasks deployed on mobile edge devices. Figure 5 shows the system model of edge inference for DNNs. In this work, the authors proposed to minimize the sum of computation and transmission power consumption under probabilistic QoS constraints. The authors developed a reweighted power minimization approach that iteratively solves a sequence of regularized reweighted minimization problems.

System model of edge inference for DNNs. 15
Yang et al. 17 proposed an Edge-Based IoT Platform for AI (EBI-PAI) by integrating SDN and serverless technology into QoE-aware edge computing to address the issues of resource scheduling and service definition in the AIoT. Figure 6 shows the system architecture of EBI-PAI, which consists of three layers: (1) control layer; (2) mapping layer; and (3) edge layer. The authors investigated the QoE-aware server deployment problem to optimize the architecture's performance during incremental deployment. They formulated the deployment problem, proved its complexity, and designed heuristic algorithms with near-optimal performance (greedy minimum dominating set algorithms (GDSAs) and greedy cover for QoE-aware (GCQA)). Their work was validated with a case study using real user demands.

Edge-based AIoT (EBI-PAI) architecture. 17
Novel machine-learning and training techniques for AIoT
A desirable goal is to accelerate the training process of machine- and deep-learning approaches on the AIoT network. Liu et al. 18 proposed a hierarchical training framework termed HierTrain which can efficiently deploy DNN training tasks on the mobile-edge-cloud computing (MECC) model. Figure 7 shows the system overview of the HierTrain framework, which consists of three stages: (1) profiling stage—profiling the execution time of different layers on the device, edge, and cloud; (2) optimization stage—selecting the optimal partition model and determining the training samples for the edge device, edge server, and cloud server; and (3) hierarchical training stage—sending the partitioned samples to the edge and cloud servers. Their work was implemented and deployed on a hardware prototype over the three levels. Their results showed that HierTrain could achieve a performance speedup of up to 6.9 times compared to the cloud-based approach.

HierTrain AIoT training and deployment over MECC. 18
Foukalas and Tziouvaras 20 proposed a combined FATL (federated active transfer learning) model for industrial IoT (IIoT) applications. Figure 8 shows the architecture of the proposed FATL for the IIoT. Their proposed architecture aimed to increase the learning speed and reduce the amount of labeled data required during the learning iterations for training on the IIoT devices. The authors utilized various machine-learning techniques to achieve these benefits: TL using a pre-trained artificial neural network (ANN) and FL were used for the edge IIoT architecture and scheduling across multiple devices, whereas AL was deployed on the IIoT end devices. Their experimental and simulation results showed that the proposed FATL architecture gave high accuracy and scaled across a number of IIoT devices.

FATL architecture for IIoT applications. 20
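The FL component of such an architecture can be sketched with a minimal federated averaging loop on a toy linear model. The model, shard sizes, and learning rate below are hypothetical and not taken from the FATL paper:

```python
import numpy as np

def local_step(w, X, y, lr=0.1, epochs=5):
    """A few rounds of least-squares gradient descent on one device's data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fed_avg(w, shards, rounds=50):
    """Each round: every device trains locally, then the server averages
    the resulting models weighted by local data size (FedAvg)."""
    for _ in range(rounds):
        updates = [local_step(w, X, y) for X, y in shards]
        sizes = np.array([len(y) for _, y in shards], dtype=float)
        w = np.average(updates, axis=0, weights=sizes)
    return w

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
shards = []
for n in (30, 50, 20):                  # three devices with uneven data
    X = rng.standard_normal((n, 3))
    shards.append((X, X @ w_true))
w = fed_avg(np.zeros(3), shards)
print(np.round(w, 3))                   # close to [1., -2., 0.5]
```

Raw data never leaves a device; only model parameters are exchanged, which is the privacy property FL contributes here.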
Another usage of FL was proposed by Wang et al. 21 for the problem of cooperative edge caching in the IoT. In this work, the authors proposed an approach termed FADE (Federated Deep Reinforcement Edge). The FADE approach enabled base stations to learn a shared predictive model by utilizing the first-round training parameters as the initial input for the training process. Their experimental results showed that their approach reduced performance loss and average delay, and improved the cache hit rate. Zhou et al. 22 proposed an approach for coordinating edge and cloud architectures for the optimization and cost efficiency of FL. In this work, the authors utilized Lyapunov optimization theory and developed an optimization framework for load balancing, data scheduling, and accuracy tuning for dynamically arriving training data samples. The authors showed that their approach had the advantage of reducing both the computation and communication cost.
Chiang et al. 19 proposed an approach for co-task processing in multiaccess edge computing systems. The authors proposed an approach termed DDL (deep dual learning), where the learner updates primal and dual variables based on randomly perturbed samples. Beyond machine-learning approaches, algorithms from other fields, such as game theory, can also be useful for IoT architectures. IoT fog nodes are characterized by low resource capabilities. An important challenge is to match IoT services to fog node resources while guaranteeing minimal delay for IoT services and efficient resource utilization. Arisdakessian et al. 23 proposed a multicriteria intelligent IoT-fog scheduling approach using game theory. Their approach designs preference-list functions for the IoT and fog layers, with rankings based on the criteria of latency and resource utilization.
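The preference-list matching idea can be sketched with a classical deferred-acceptance procedure. The service and fog-node names and preference orders below are hypothetical, and the authors' multicriteria scheduler is more elaborate:

```python
def stable_match(iot_prefs, fog_prefs):
    """Deferred acceptance: IoT services propose to fog nodes in order of
    preference; each fog node keeps its best proposer so far."""
    rank = {f: {s: i for i, s in enumerate(p)} for f, p in fog_prefs.items()}
    next_choice = {s: 0 for s in iot_prefs}
    engaged = {}                      # fog node -> IoT service
    free = list(iot_prefs)
    while free:
        s = free.pop()
        f = iot_prefs[s][next_choice[s]]   # s's best fog node not yet tried
        next_choice[s] += 1
        if f not in engaged:
            engaged[f] = s
        elif rank[f][s] < rank[f][engaged[f]]:
            free.append(engaged[f])        # f prefers s; bump the old match
            engaged[f] = s
        else:
            free.append(s)                 # f rejects s; s tries its next choice
    return {s: f for f, s in engaged.items()}

# Hypothetical preferences, e.g. ranked by latency and resource fit.
iot_prefs = {"cam": ["fog1", "fog2"], "sensor": ["fog1", "fog2"]}
fog_prefs = {"fog1": ["sensor", "cam"], "fog2": ["cam", "sensor"]}
print(stable_match(iot_prefs, fog_prefs))  # {'sensor': 'fog1', 'cam': 'fog2'}
```

The resulting matching is stable: no service and fog node would both prefer each other over their assigned partners.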
The evolution of edge, fog, and MEC architectures will play an important role in future research for the development of the AIoT to fully realize its potential and advantages for real-world deployments and applications. The following gives some aspects and main interests for research and future trends on the convergence of architectures, techniques, and platforms for AIoT:
A critical requirement would be the development of platforms and architectures that will support and coordinate the usage of distributed machine-learning algorithms and techniques throughout the AIoT ecosystem. Some illustrative examples are the work by Teerapittayanon et al., 78 who proposed an approach for implementing distributed DNNs over cloud and edge devices, and the work by Savaglio et al., 79 who proposed an approach for distributed data mining on edge devices;
A second trend and focus would be the development of AIoT architectures, techniques, and platforms which are robust and secure against hostile, adversarial, and imposter attacks. An interesting research direction would be to utilize blockchain technologies to protect against cloned or counterfeit edge nodes or devices and demand proof of authenticity; 80
A third focus would be the development of techniques and mechanisms for privacy-preserving data collection and mining within the AIoT. This is specifically important for AIoT containing heterogeneous nodes and devices.
The convergence of sensors, devices, and energy approaches for AIoT
The convergence of IoT and AI for the AIoT needs to be supported by different types of sensors, devices, and sensing approaches based on AI and other techniques. A review of AI-based sensors and their deployment for next-generation IoT applications can be found in Mukhopadhyay et al.'s study. 81 This section discusses the convergence of sensors, devices, and energy approaches for the AIoT from two critical aspects: (1) scalable sensing, computation, and management for AIoT and (2) wireless power transfer (WPT) and energy-harvesting approaches for AIoT.
Scalable sensing, computation, and management for AIoT
The scalability of IoT sensing, computation, and management is an important issue to be addressed due to the geographically dispersed and rapidly increasing number of IoT devices. The AIoT paradigm, where sensing and computation can be offloaded to the IoT devices, is expected to offer users a higher quality of experience (QoE). However, current QoE-aware computation offloading approaches using machine learning and deep learning may suffer from instability and slow convergence. Lu et al. 24 proposed an approach for offloading computation by utilizing DRL. The authors proposed an algorithm based on double-dueling deterministic policy gradients (D3PG) to address the instability and convergence issues of DRL while offering users better QoE. Their QoE approach can incorporate multiple factors and elements into the model: (1) computational and transmission latencies; (2) computational and transmission energy consumption; and (3) success rate for tasks.
Compressive sensing (CS) techniques give a potential way to address the energy efficiency issue in AIoT sensor nodes. However, this approach is limited by the overhead of signal reconstruction in the sensor node and the analysis in the remote server. Wang et al. 25 proposed a compressive sensing architecture utilizing deep learning for implantable neural decoding. The objective of their work was to improve the wireless transmission efficiency and reduce the overheads. The authors proposed a two-stage classification procedure: (1) a coarse-grained screening module and (2) a fine-grained analysis module. The screening module performed the front-end classification task and transmitted the compressed data to a remote server for the fine-grained analysis.
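The division of labor in such a CS pipeline (a cheap random projection on the node, heavier reconstruction on the server) can be sketched as follows. The dimensions, sparsity pattern, and the use of orthogonal matching pursuit for reconstruction are illustrative assumptions, not the authors' exact design:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 256, 64                       # native vs compressed dimension
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix

x = np.zeros(n)                      # spike-sparse signal (mostly zeros)
x[[10, 80, 200]] = [1.0, -0.7, 0.4]
y = Phi @ x                          # node transmits 64 values, not 256

def omp(Phi, y, k):
    """Orthogonal matching pursuit (server side): greedily select the k
    columns most correlated with the residual, refit by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, 3)
print(np.flatnonzero(np.abs(x_hat) > 0.1).tolist())  # [10, 80, 200]
```

The node-side cost is a single matrix-vector product, while the iterative reconstruction burden stays on the remote server.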
Xu et al. 26 proposed an approach utilizing a blockchain-enabled edge computing platform termed Edgence for the management of decentralized IoT applications. Figure 9(a) shows the decentralized Edgence architecture, which is deployed on the edge clouds of MEC. The Edgence platform consists of several masternodes; a masternode contains a full node of the blockchain and a collateral. In this work, the authors proposed a three-stage validation process: (1) script validation; (2) smart contract validation; and (3) masternode validation. Figure 9(b) shows the decentralized AI training under the management of Edgence, which uses feed-forward and back-propagation for updating the AI models. The initial layers are trained using the data sets of mobile users at the edge clouds, whereas the latter layers are trained at the remote cloud center.

Decentralized Edgence AI architecture and training: 26 (a) decentralized Edgence platform architecture and (b) decentralized Edgence AI models training.
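The integrity checking behind such blockchain-based validation can be illustrated with a minimal hash-chain sketch. This is a generic illustration, not Edgence's actual masternode protocol, and the record contents are hypothetical:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over the block's canonical JSON payload."""
    payload = json.dumps({k: block[k] for k in ("index", "prev", "data")},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev": prev, "data": data}
    block["hash"] = block_hash(block)
    return chain + [block]

def valid(chain):
    """A chain is valid when every block's stored hash matches its contents
    and links to the previous block's hash."""
    for i, b in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if b["prev"] != prev or b["hash"] != block_hash(b):
            return False
    return True

chain = []
for record in ("register device d1", "model update u1"):
    chain = make_block(chain, record)
print(valid(chain))            # True
chain[0]["data"] = "tampered"
print(valid(chain))            # False: the stored hash no longer matches
```

Any retroactive change to a block invalidates its hash and every link after it, which is what makes the validation stages tamper-evident.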
WPT and energy-harvesting approaches for AIoT
There are several challenges which remain to be addressed to enable the practical deployment of WPT technologies in the AIoT, such as ensuring security requirements to overcome selfish and untrusted devices and providing monetization incentives for charging services. Zhang et al. 27 proposed an approach for offloading time-varying computation task loads and requirements for deployment in MEC and energy-harvesting IoT devices. The authors' approach utilized hybrid-DRL techniques consisting of: (1) hybrid decision-based actor-critic learning (Hybrid-AC)—an improved actor–critic architecture where the actor outputs continuous actions corresponding to every server, and the critic evaluates the continuous actions and outputs the discrete action of server selection, and (2) multidevice hybrid-AC (MD-Hybrid-AC)—which constructs a centralized critic to output server selections, considering the continuous action policies of all devices.
Wu et al. 28 proposed a novel approach integrating WPT and battery-constrained on-device learning techniques, termed WPEG (Wirelessly Powered Edge intelliGence). Figure 10(a) shows the framework of the proposed WPEG nodes. Each node is responsible for managing the edge devices and WPT nodes within its coverage. In this work, the authors also proposed a permissioned blockchain approach to establish secure peer-to-peer (P2P) energy trading for edge intelligence. Figure 10(b) shows the permissioned blockchain structure in WPEG. The authors further proposed a two-stage Stackelberg game model to optimize the transmission power and incentive strategies.

Framework and blockchain structure for WPEG: 28 (a) framework of proposed WPEG nodes and (b) permissioned blockchain structure in WPEG.
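The backward-induction logic of a two-stage Stackelberg game can be sketched on a toy energy-pricing problem: the leader picks a price anticipating the follower's best response. The utility function, unit cost, and price grid below are hypothetical and unrelated to WPEG's actual formulation:

```python
a, c = 4.0, 0.5    # follower's utility weight, leader's unit energy cost

def follower_best(p):
    """Device maximizes a*ln(1+e) - p*e  =>  e* = max(a/p - 1, 0)."""
    return max(a / p - 1.0, 0.0)

def leader_profit(p):
    """Leader (energy seller) profit, anticipating the follower's response."""
    return (p - c) * follower_best(p)

# Stage 1 by backward induction: evaluate the anticipated profit on a
# price grid and pick the best (analytically p* = sqrt(a*c) here).
prices = [0.1 * k for k in range(6, 40)]
p_star = max(prices, key=leader_profit)
print(round(p_star, 1), round(follower_best(p_star), 2))  # 1.4 1.86
```

Solving the follower's stage first and substituting its best response into the leader's objective is exactly the two-stage structure such incentive designs rely on.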
The following gives some aspects and main interests for research and future trends on the convergence of sensors, devices, and energy approaches for AIoT:
A potential solution for scalable sensing to cover large-scale areas and environments, such as smart cities, would be to exploit mobile, opportunistic, and/or vehicular crowdsensing technologies and/or the Internet of Vehicles (IoV) for the deployment of AIoT sensors. Ang et al. 82 discuss the applications, architecture, and challenges of IoV sensing and deployment in smart cities;
The need for highly efficient energy management, harvesting, and WPT techniques remains another critical requirement for the continuing development of the AIoT. Ijemaru et al. 83 discuss the utilization of mobile collectors for opportunistic IoT sensing with WPT. Security aspects of WPT will become more significant and present opportunities for further research. Encryption techniques for WPT are required so that only authorized receptors can acquire the transmitted energy and unauthorized receptors are prevented from illegally accessing it.
The convergence of communication and networking for AIoT
There are several advantages and challenges in converging AI into the network layer for routing, scheduling, and enhancing QoS. Ijaz et al. 84 discussed some challenges of AI in wireless networks for the IoT (as shown in Figure 11). Their article focused on the communication aspects of the IoT and proposed a generic network architecture showing a few key areas of AI in the IoT: (1) software-defined networking; (2) network function virtualization (NFV); (3) multiaccess edge computing; and (4) content delivery networks (CDNs). They discussed some challenges in the above areas, including challenges for latency-critical IoT, challenges in routing and network traffic control, challenges in caching, and so on. This section discusses the convergence of the AIoT in communication and networking from four aspects: (1) frameworks for AI-enabled IoT networks; (2) AI-enabled IoT and cellular networks; (3) SDN for AIoT; and (4) AIoT and MEC computing.

Network architecture showing AI in the IoT. 84
Frameworks for AI-enabled IoT networks
Song et al. 29 proposed an approach for centralized and distributed AI-enabled IoT networks. In this work, the authors discussed the challenges of contention-based random access and proposed some AI approaches for centralized IoT networks. The authors proposed to utilize DRL techniques for the online training process of the deep Q-network (DQN); two neural networks (NNs) are used for the DRL, namely an evaluation NN and a target NN. For distributed IoT networks, the authors pointed out some key technical challenges of spectrum access and spectrum sensing. The work also proposed a DQN-based strategy for intelligent spectrum access. Yang et al. 30 proposed an over-the-air computation approach termed AirComp for a communication-efficient federated machine-learning architecture. Figure 12 shows the AirComp architecture and communication process of federated machine learning in the intelligent IoT. The global and local models are iteratively updated through a number of communication rounds until consensus on the global model is reached. As shown in the figure, three steps are performed in each learning round: (1) device selection; (2) local model upload; and (3) global model download. AirComp achieves fast local aggregation for federated machine learning by using the superposition property of a multiaccess channel.

AirComp architecture and communication in AIoT. 30
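The superposition idea behind AirComp can be illustrated in an idealized noiseless setting; real AirComp must additionally handle fading, noise, and power control, and the update vectors below are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
updates = [rng.standard_normal(8) for _ in range(5)]   # local model updates

# Classical aggregation: each device transmits separately; the server
# receives 5 messages and averages them.
avg_separate = np.mean(updates, axis=0)

# Over-the-air computation: all devices transmit simultaneously and the
# multiaccess channel physically sums the (pre-scaled) analog signals,
# so the server obtains the aggregate in a single channel use.
received = np.sum([u / len(updates) for u in updates], axis=0)
print(np.allclose(received, avg_separate))  # True
```

The bandwidth saving comes from the channel itself performing the sum, so the cost per round no longer scales with the number of devices.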
Ramírez et al. 31 proposed a WLAN-based AIoT architecture for resource allocation and sharing among IoT devices. Figure 13 shows the AIoT-WLAN architecture. The proposal is based on an IoT network with a centralized management architecture and AI control agents. Their approach utilizes three control agents for the different environments: the AI control agents are located inside the smart things, in the centralized gateway administration, and in the cloud platform. The AI agents are responsible for learning the machine-to-machine (M2M) relations. Each smart thing connects to the network using the MQTT protocol.

AIoT-WLAN architecture. 31
Non-orthogonal multiple access (NOMA) schemes have high potential for the massive machine-type communication (mMTC) framework because they allow multiple users and devices to share resource blocks. Emir et al. 32 proposed a deep learning–aided multi-user detection approach termed DeepMuD for uplink grant-free NOMA IoT networks to support massive machine-type communication. Figure 14 shows the proposed DeepMuD model. In the proposed model, the authors used a model-driven deep learning technique, an offline-trained LSTM-based network, to perform the multi-user detection. Their work showed that the proposed approach could outperform conventional detectors (even those with perfect channel state information (CSI)). Their experimental results showed that the same performance was obtained for an increasing number of users with a 10-dB decrease in power consumption.

Non-orthogonal multiple access (NOMA) AIoT. 32
Kwon et al. 33 proposed a distributed approach to design a multi-hop ad hoc IoT network as shown in Figure 15. In this work, the authors formulated a decision-making process based on a Markov decision process (MDP) framework which considers the trade-off between network throughput and individual transmission power consumption. A DRL algorithm using a double deep Q-network (DDQN) was deployed for decision-making at the relay nodes. Each node owns a DDQN to build its own policy (e.g. transmission range). The learning process of their system enables individual relay nodes to make decisions using a minimal amount of information.

Distributed AIoT multi-hop ad hoc network. 33
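The DDQN update combines the two networks as follows: the online network selects the next action, while the target network evaluates it, which reduces the overestimation bias of plain Q-learning. A minimal sketch with hypothetical Q-values:

```python
import numpy as np

def ddqn_targets(rewards, q_online_next, q_target_next, gamma=0.9, done=None):
    """Double DQN targets: the online network chooses the next action
    (argmax), and the target network supplies that action's value."""
    best = np.argmax(q_online_next, axis=1)            # action selection
    bootstrap = q_target_next[np.arange(len(rewards)), best]  # evaluation
    if done is not None:
        bootstrap = np.where(done, 0.0, bootstrap)     # no bootstrap at episode end
    return rewards + gamma * bootstrap

rewards = np.array([1.0, 0.0])
q_online_next = np.array([[0.2, 0.9],    # online net picks action 1
                          [0.5, 0.1]])   # online net picks action 0
q_target_next = np.array([[0.3, 0.6],
                          [0.4, 0.2]])
print(ddqn_targets(rewards, q_online_next, q_target_next))  # [1.54 0.36]
```

In the relay-node setting, each node would regress its online network toward such targets using only locally observable rewards and states.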
AI-enabled IoT and cellular networks
The development of cellular network technologies such as 5G and 6G provides the potential to deploy complex sensors in the AIoT and to optimize the communication channels. Wang et al. 34 proposed an architecture for the 5G intelligent Internet of Things (5G I-IoT) which consists of three major components: (1) processing center; (2) object processor; and (3) sensing regions. Figure 16(a) shows the architecture of the 5G I-IoT, and Figure 16(b) shows the details of the processing center component. The processing center in the cloud includes the intelligent computing module and execution module to process data automatically with intelligent algorithms. DL algorithms are used to design the intelligent computing module.

5G I-IoT architecture and processing center: 34 (a) architecture overview and (b) processing center component.
Zhang et al. 35 discussed their model of the AIoT with 6G cellular technologies. Figure 17 shows their AIoT model with 6G technologies, which includes three aspects: (1) mobile ultra-broadband—terabits-per-second wireless transmissions; (2) super IoT—supplemented by symbiotic radio and satellite-assisted IoT communications to support wider coverage and a larger number of connected IoT devices; and (3) AI—advanced machine learning focusing on deep learning and reinforcement learning.

AIoT model with 6G cellular technologies. 35
Software-defined networking for AIoT
In recent years, software-defined networking has emerged as a potential solution to provide cost-effective and flexible IoT services. SDN-IoT can optimize network resources that may otherwise be underutilized in delivering IoT data. Tang et al. 36 proposed an approach for SDN-IoT which exploits partially overlapping channel assignment (POCA). In this work, the authors first investigated and showed that conventional fixed POCA algorithms are not viable for highly dynamic, large-scale SDN-IoT. Figure 18 shows the wireless SDN-IoT with a heterogeneous structure and the problem with fixed POCA. The authors proposed a DL-based intelligent POCA for the wireless SDN-IoT, where the IoT data traffic dynamically changes. Their approach utilized two DL learning strategies to predict the IoT traffic load and adaptively assign the POCs.

Wireless SDN-IoT. 36
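The intelligent POCA idea above can be sketched in miniature. This is an illustrative toy, not the authors' implementation: the moving average stands in for the deep-learning traffic predictor, and the greedy assignment, channel numbers, and the `min_sep` parameter are all hypothetical.

```python
# Illustrative sketch: predict per-link traffic load (moving average as a
# stand-in for the DL predictor), then greedily assign partially
# overlapping channels (POCs) so that heavily loaded interfering links
# receive channels with larger spectral separation.

def predict_load(history, window=3):
    """Moving-average placeholder for the deep-learning traffic predictor."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def assign_pocs(links, interference, channels=range(1, 12), min_sep=3):
    """Greedy POC assignment: serve heavier links first; prefer a channel
    far from the channels already given to interfering neighbors."""
    order = sorted(links, key=lambda l: links[l], reverse=True)  # heavy first
    assignment = {}
    for link in order:
        neighbor_ch = [assignment[n] for n in interference.get(link, [])
                       if n in assignment]
        # pick the channel maximizing the minimum separation from neighbors
        best = max(channels,
                   key=lambda c: min((abs(c - n) for n in neighbor_ch),
                                     default=min_sep))
        assignment[link] = best
    return assignment

loads = {"A": predict_load([5, 7, 9]), "B": predict_load([2, 2, 2]),
         "C": predict_load([8, 8, 8])}
pocs = assign_pocs(loads, {"A": ["C"], "C": ["A"], "B": []})
```

In this toy run, the two interfering links A and C end up on widely separated channels, while the isolated link B can reuse any channel.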
Quality of experience (QoE) for different services is an important factor for the AIoT. He et al. 37 proposed QoE models to evaluate the quality of the IoT from both the network and user perspectives. In this work, the authors focused on the optimization of cache capacity among content-centric computing nodes and the transmission rates under a constrained total network cost. Figure 19 shows the cache resource allocation for content-centric IoT. The authors formulated the QoE as a cache resource allocation problem under different transmission rates to acquire the best QoE. Their approach used DRL to adaptively improve the QoE.

Cache resource allocation for content-centric IoT. 37
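The constrained cache-allocation problem above can be illustrated with a toy sketch. The paper uses DRL to learn the allocation; here a greedy marginal-gain heuristic stands in, and the node names, gain values, and diminishing-returns model are hypothetical.

```python
# Illustrative sketch (the paper uses DRL; this is a greedy stand-in):
# allocate cache capacity units across content-centric nodes under a
# total budget, each step giving the next unit to the node with the
# largest marginal QoE gain. Diminishing returns are modeled crudely as
# gain / (units already allocated + 1).

def allocate_cache(nodes, budget):
    """nodes: {name: base QoE gain per cache unit}; returns units per node."""
    alloc = {n: 0 for n in nodes}
    for _ in range(budget):
        # marginal gain shrinks as a node accumulates cache units
        best = max(nodes, key=lambda n: nodes[n] / (alloc[n] + 1))
        alloc[best] += 1
    return alloc

alloc = allocate_cache({"edge1": 6.0, "edge2": 3.0, "edge3": 1.0}, budget=6)
```

The budget is exhausted exactly, and nodes with higher base QoE gain receive proportionally more cache, mirroring the trade-off the DRL agent learns.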
Wu and Zhang 38 proposed a collaborative and intelligent network-based intrusion detection system (NIDS) architecture termed as SeArch for SDN-based cloud IoT networks. Figure 20(a) shows the proposed SeArch NIDS architecture, and Figure 20(b) shows the NIDS node graph. The architecture consists of three layers: (1) edge-IDS; (2) fog-IDS; and (3) cloud-IDS. For the detection engines in the SeArch architecture, different machine-learning algorithms were used (e.g. SVM for the edge-IDS, SOM for the fog-IDS, and a stacked autoencoder for the cloud-IDS).

SeArch NIDS architecture and node graph: 38 (a) SeArch NIDS architecture and (b) NIDS node graph.
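A minimal stand-in for one such detection engine is sketched below. SeArch's engines use SVM, SOM, and stacked autoencoders; this toy learns per-feature means from normal flows and flags flows whose squared deviation (a crude proxy for an autoencoder's reconstruction error) exceeds a threshold. The feature values and threshold are hypothetical.

```python
# Minimal anomaly-detection stand-in for an NIDS engine: "train" on
# normal flows by learning per-feature means, then flag a flow whose
# squared deviation from those means exceeds a threshold.

def fit_normal(flows):
    """Learn per-feature means from normal traffic flows."""
    n = len(flows)
    dims = len(flows[0])
    return [sum(f[d] for f in flows) / n for d in range(dims)]

def anomaly_score(flow, means):
    """Squared deviation from the normal profile (reconstruction-error proxy)."""
    return sum((x - m) ** 2 for x, m in zip(flow, means))

def is_attack(flow, means, threshold):
    return anomaly_score(flow, means) > threshold

# hypothetical flow features, e.g. (packet rate, average packet size)
normal = [[1.0, 10.0], [1.2, 9.0], [0.8, 11.0]]
means = fit_normal(normal)
```

A flow close to the normal profile scores near zero, while a flood-like flow with an order-of-magnitude higher packet rate scores far above any reasonable threshold.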
Li et al. 39 proposed an intrusion detection approach by utilizing an AI SDN-IoT architecture. Figure 21 shows the proposed AI SDN-IoT architecture. The authors utilized different types of AI algorithms to perform the feature selection and flow classification tasks: (1) an improved Bat algorithm with swarm division and differential mutation and (2) an improved random forest. The improved Bat algorithm was used to select the typical features. The algorithm splits the swarm into several subgroups, each of which then learns efficiently among the different populations.

AI-based SD-IoT architecture. 39
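The swarm-division mechanic of the Bat algorithm can be illustrated with a toy continuous optimizer. This is a simplified sketch, not the authors' improved algorithm: their version adds differential mutation and is applied to feature selection, whereas this toy merely minimizes a test function, with each subgroup tracking its own leader.

```python
import random

# Toy bat-style search with swarm division: the swarm is split into
# subgroups, each tracking a local best, and each bat moves toward its
# subgroup leader with a random pulse-frequency factor, keeping the move
# only if it improves fitness (greedy acceptance).

def bat_search(fitness, dim=2, n_bats=12, n_groups=3, iters=50, seed=1):
    rng = random.Random(seed)
    bats = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
    size = n_bats // n_groups
    groups = [bats[i * size:(i + 1) * size] for i in range(n_groups)]
    global_best = list(min(bats, key=fitness))
    for _ in range(iters):
        for group in groups:
            local_best = list(min(group, key=fitness))   # subgroup leader
            for bat in group:
                freq = rng.random()                      # random pulse frequency
                cand = [b + freq * (lb - b) for b, lb in zip(bat, local_best)]
                if fitness(cand) < fitness(bat):         # greedy acceptance
                    bat[:] = cand
        best_now = min(bats, key=fitness)
        if fitness(best_now) < fitness(global_best):
            global_best = list(best_now)
    return global_best

sphere = lambda x: sum(v * v for v in x)   # toy objective, minimum at origin
best = bat_search(sphere)
```

Because moves are only accepted when they improve fitness, running the search can never return a worse solution than the best initial bat.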
Guo et al. 40 proposed an approach for an endogenous trusted network (ETN) to support virtual network operations and resource allocation and to provide shared services. In this work, the authors utilized blockchain technology together with SDN and network function virtualization (NFV) to achieve the trusted sharing of network resources. Figure 22 shows an overview of the proposed ETN framework. The ETN consists of five layers: (1) terminal device layer; (2) network layer; (3) blockchain layer; (4) platform layer; and (5) application layer. The blockchain is used to achieve trust between resource requesters and resource providers in a distributed network.

Endogenous trusted network (ETN) AIoT. 40
AIoT and MEC data streams
MEC is a promising technology to improve QoE in AIoT networks. Jiang et al. 41 proposed an approach termed as Crowd-MECS which uses a crowdsourcing framework for mobile edge caching and sharing. Figure 23 shows the Crowd-MECS architecture. The two edge devices (in green), a mobile edge device and a fixed edge device, store the cached contents. These cached contents are shared with the red edge devices. Incentives are offered to the green edge devices for the content caching.

Crowd-MECS architecture. 41
Other works for AIoT and MEC can be found in the studies of Jiang et al. 42 and Du et al. 43 Jiang et al. 42 proposed a resource-scheduling approach based on DRL for large-scale MEC networks. In this work, the authors proposed a DRL approach to minimize the sum of weighted task latency for all IoT users by optimizing the task offloading decision, transmission power, and resource allocation in the large-scale MEC system. Du et al. 43 proposed a DRL approach for optimizing the energy consumption of immersive VR video streaming over terahertz wireless networks. In this work, the authors utilized DRL to learn the optimal viewport rendering offloading and transmit power control policies and proposed a joint optimization algorithm based on the asynchronous advantage actor–critic (A3C) method.
The following summarizes some key research interests and future trends for the convergence of communication and networking for the AIoT:
The future will see the development of large-scale (e.g. citywide) IoT and AIoT networks. In these scenarios, interoperability is an important consideration in view of the rapid advances in new technologies such as 5G and 6G. The challenges of interconnecting multiple IoT networks which are owned by different organizations/stakeholders and/or developed using different communication networks/protocols and heterogeneous technologies remain an important issue to be satisfactorily resolved. SDN and NFV are interesting research directions and potential solutions to the interoperability issues in IoT and AIoT networks;
There is a growing need to build trust into the communication networks for the IoT. A growing trend in this regard is the replacement of centralized IoT trust architectures with decentralized architectures. An interesting research direction would be to utilize distributed ledger technologies (DLTs) for trust development and protection in IoT networks. Danzi et al. 85 discuss the challenges of integrating IoT networks and DLT.
The convergence of applications for AIoT
The convergence of AI and IoT enables the utilization of AI tools to introduce intelligence and decision-making capability into diverse IoT applications and systems. This convergence is helping to solve multiple IoT problems that have slowed its global adoption. The AIoT focuses on providing fast services, helping to unleash the potential to utilize data in better and faster ways. 86 For example, in transportation applications, vehicles can be equipped with cameras, GPS, radar, and sonar sensors to gather and analyze data to determine the condition of roads, pedestrian behavior, and so on, and make decisions in response to such situations. Transportation companies have deployed the AIoT to monitor and manage their fleets. Data gathered by the fleets are analyzed to manage fuel cost, track maintenance, and identify drivers' unsafe behaviors. This section discusses the convergence of AIoT applications from eight aspects: (1) energy, smart grids, and AIoT; (2) industry, smart buildings, and AIoT; (3) vehicles, smart transportation, and AIoT; (4) education, smart entertainment, and AIoT; (5) biomedical, smart health, and AIoT; (6) environmental, smart agriculture, and AIoT; (7) robotics and computer vision in AIoT; and (8) security in AIoT. A summary of the representative studies which are discussed in this section can be found in Table 1.
Energy, smart grids, and AIoT
Researchers have deployed the AIoT in multiple ways to help energy producers optimize their equipment for better service delivery and to help users efficiently manage energy to minimize cost. 87 Puri et al. 44 developed an AIoT system that generates electrical energy from different sensors for use in industrial areas and household appliances. The authors utilized different sensors: a piezoelectric sensor, which generates energy from the stress caused by human body weight; a body-heat-to-electricity converter, which generates energy from the heat produced by the movement of the human body; and a solar panel, which generates energy from sunlight. The authors trained and validated two AI models (ANN and ANFIS) on the data collected from the sensors to predict the generated power output. The authors showed that their system could produce accurate results in predicting the power generated from renewable resources.
Lei et al. 46 proposed a DRL-based energy dispatch approach for IoT microgrids (MGs). The authors formulated and solved a finite-horizon partially observable Markov decision process (POMDP) model by learning from past data to identify uncertainties in future renewable power generation and electricity consumption. To address the instability challenges of DRL algorithms and the distinctive features of finite-horizon models, the authors designed two novel DRL algorithms. The first is a finite-horizon deep deterministic policy gradient (FH-DDPG) algorithm for finite-horizon MDPs based on DDPG. The authors conducted experiments applying the idea of a time-dependent actor to the proposed algorithm and compared it against the baseline DDPG. The results showed that the proposed FH-DDPG provided better stability for the energy management problem than the baseline DDPG. The second algorithm is a finite-horizon recurrent deterministic policy gradient (FH-RDPG) algorithm to solve the partially observable problem. This algorithm was evaluated against RDPG, and the results showed that FH-RDPG performed better than the baseline RDPG algorithm in a POMDP domain.
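The finite-horizon formulation underlying these algorithms can be summarized with a generic objective (the notation below is illustrative and not the authors' exact formulation):

```latex
% Generic finite-horizon objective: a policy maximizes the expected
% return over a horizon of H decision epochs,
J(\pi) = \mathbb{E}\!\left[\sum_{k=1}^{H} \gamma^{\,k-1}\, r(s_k, a_k)\right],
\qquad a_k = \pi_k(s_k),
% where FH-DDPG learns a time-dependent deterministic actor \pi_k for
% each epoch k, and FH-RDPG instead conditions the actor on the
% observation history h_k = (o_1, a_1, \ldots, o_k) to handle partial
% observability in the POMDP setting.
```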
Liu et al. 47 proposed an IoT-based system for managing energy on edge computing devices using DRL. Their proposed architecture consists of three major components: (1) energy device; (2) energy edge server; and (3) energy cloud server. The energy device is any entity in the network that requires and can supply energy. The energy edge server is a server deployed either at the network gateway or at a base station. The energy cloud server is connected to the energy management central controller and is responsible for providing real-time analysis, maintaining records of devices connected to the network, and providing computational support to the energy edge servers. The authors designed two DRL-based energy-scheduling schemes with two phases—an offline DNN phase and an online dynamic deep Q-learning phase. The DRL scheme is deployed at the energy edge server and energy cloud server. In the first approach, the task of scheduling energy is transferred from the devices to the edge server, which utilizes the DRL approach to find the best scheduling solution for the devices. In the second approach, the edge server, in order to minimize computation cost, relinquishes the task of training the DNN to the cloud server and then performs the deep Q-learning operation using the estimated Q-values from the server. The results of their experiments showed that the DRL scheme outperformed the traditional cloud scheduling scheme.
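The Q-learning operation at the heart of such a scheduling scheme can be sketched with a toy tabular example. The work above uses a DNN with deep Q-learning; this tabular stand-in only illustrates the Q-update itself, and the prices and slot structure are hypothetical.

```python
import random

# Toy tabular Q-learning sketch of edge-side energy scheduling: a
# deferrable task must run in one of four hourly slots with hypothetical
# prices [5, 1, 4, 3]. Action 0 defers the task, action 1 runs it now,
# and waiting in the last slot forces a run. The learned policy should
# defer the task to the cheapest slot (hour 1).

PRICES = [5.0, 1.0, 4.0, 3.0]

def step(hour, action):
    """Environment step: returns (next_hour, reward, done)."""
    if action == 1 or hour == len(PRICES) - 1:
        return hour, -PRICES[hour], True     # pay the current price
    return hour + 1, 0.0, False              # defer to the next slot

def train(episodes=2000, alpha=0.2, gamma=0.95, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in PRICES]         # Q[hour][action]
    for _ in range(episodes):
        hour, done = 0, False
        while not done:
            a = (rng.randrange(2) if rng.random() < eps      # explore
                 else max((0, 1), key=lambda x: q[hour][x])) # exploit
            nxt, r, done = step(hour, a)
            target = r if done else r + gamma * max(q[nxt])
            q[hour][a] += alpha * (target - q[hour][a])      # Q-update
            hour = nxt
    return q

q = train()
policy = [max((0, 1), key=lambda a: q[h][a]) for h in range(len(PRICES))]
```

After training, the greedy policy waits at the expensive first slot and runs the task in the cheap second slot, which is the behavior the deep Q-learning scheme learns at scale.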
Miao et al. 48 proposed a blockchain- and AI-based architecture for the natural gas IIoT to address the supply chain failures of existing centralized supply architectures. The authors premised that numerous supply requests to centralized energy supply sites could be overwhelming and cause indicators such as pressure, temperature, and natural gas load to exceed safety limits. They stated that problems such as obsolete infrastructure, unreliable transactions, variable prices, inaccurate gas data, and unsecured user information are associated with centralized supply architectures. To address these problems, the authors proposed an architecture with three dimensions—infrastructure, data, and value. They implemented a blockchain solution to provide trusted transactions, distributed storage, and the authenticity and reliability of stored data and of data generated during energy generation and transmission. They further proposed an LSTM-based model to predict natural gas load and dispatch and a transaction model that utilizes natural gas value and supply–demand interaction. Their simulation experiments validated the effectiveness of the models.
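The LSTM gating used by such load-prediction models can be written out in a few lines. This is a single-cell, scalar-state illustration with arbitrary fixed weights, not the authors' trained gas-load model.

```python
import math

# One LSTM cell step in plain Python, illustrating the gating used in
# sequence models for load forecasting. Weights are arbitrary scalars
# chosen for illustration; a real model would learn vector-valued
# parameters from historical load data.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One step for scalar input/state; w holds (w_x, w_h, b) per gate."""
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    c = f * c_prev + i * g       # new cell state
    h = o * math.tanh(c)         # new hidden state (the step's output)
    return h, c

w = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "g", "o")}
h, c = 0.0, 0.0
for load in [0.2, 0.4, 0.6]:     # normalized past load readings (made up)
    h, c = lstm_step(load, h, c, w)
```

The cell state `c` accumulates information across the sequence of past readings, which is what lets an LSTM capture load trends that a memoryless predictor cannot.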
Han et al. 45 proposed a deep learning–based framework to predict future energy consumption in smart residential homes and industries. Their aim was to solve the problem of load forecasting in energy management. Their proposed framework is composed of edge devices connected to a cloud server in an IoT network. The IoT network is connected to smart grids to effectively maintain energy demand and supply activities. To deal with the several parameters contained in the energy data set, the authors applied preprocessing techniques to the data before initial training. Their experimental results showed that the approach was able to predict energy consumption with high accuracy.
Industry, smart buildings, and AIoT
The AIoT has been adopted in industries and manufacturing to help foresee challenges and deal with them before they cause breakdowns of machinery and operations. It enables M2M sensors and other smart industrial devices to receive data, analyze it, and make decisions to optimize operations, avoid unplanned costly downtime, plan logistics and supply chains, respond to cyber threats, and manage staff safety. Much research has been done to better utilize the AIoT in this domain. 88
An automatic real-time online method to evaluate the reliability and trustworthiness of cyber-physical systems (CPS) in an IIoT network was proposed by Lv et al. 49 The evaluation framework and online rank algorithm were designed using machine learning to achieve online assessment and analysis in real time. The results of the assessment could be utilized in a timely manner to take preventive measures against any system faults or malicious attacks. To obtain a broad reliability model assessment strategy, the authors analyzed a quintessential CPS control model that utilizes a spatiotemporal correlation model. Based on the assessment strategy, they developed a smart control strategy and a trusted smart predictive model. The simulation results showed that the proposed model was very effective in guiding the CPS defenders to select the applicable defense resource input depending on the type of CPS threat and defense domain.
Zhang et al. 50 presented a learning framework that automatically learns the thermal model of thermal zones to control the heating, ventilation, and air-conditioning (HVAC) systems in a smart building, maintaining residents' comfort under different environmental conditions while lessening energy consumption. The proposed framework learns the thermal model from the temperature readings of the smart thermostat. The authors installed an IoT platform in a real-world building and used the data collected to validate the learning framework. Their experimental results showed that the learned model for a single thermal zone can be delivered in minutes, whether the learning process is done in a cloud infrastructure or on an edge device. The results also showed that the learned model produces reliable evaluations of thermal comfort in intelligent control implementations.
Jiang 51 proposed an intelligent model for dynamically solving evacuation paths in the case of a fire outbreak in public buildings using AI technology. The author discussed that, in order to bring people to safety in a fire outbreak situation, there should be a means of determining the shortest effective route to evacuate people depending on the verified fire situation, the internal structure of the building, and fire-product influences such as temperature, smoke, and carbon monoxide concentration. The model was built based on appropriate fire emergency evacuation practices and implemented using an ant colony algorithm, and an AI-based mobile terminal system was developed for large public buildings to assist in guiding people to safe exit routes.
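The underlying routing idea, finding the shortest effective route where corridor costs grow with fire hazard, can be sketched deterministically. The paper solves this with an ant colony algorithm; the sketch below substitutes Dijkstra's algorithm on a hazard-weighted corridor graph for clarity, and the building layout, distances, and hazard scores are hypothetical.

```python
import heapq

# Deterministic sketch of hazard-aware evacuation routing (the original
# work uses an ant colony algorithm; Dijkstra is used here to illustrate
# the same idea): edge costs are walking distance inflated by a hazard
# severity in [0, 1) representing temperature, smoke, and CO levels.

def evacuation_path(graph, hazards, start, exit_node):
    """graph: {node: [(neighbor, distance), ...]}; hazards: {(a, b): h}."""
    def cost(a, b, dist):
        h = hazards.get((a, b), hazards.get((b, a), 0.0))
        return dist / (1.0 - h)            # hazardous corridors cost more
    best = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == exit_node:
            break
        if d > best.get(node, float("inf")):
            continue                        # stale heap entry
        for nbr, dist in graph.get(node, []):
            nd = d + cost(node, nbr, dist)
            if nd < best.get(nbr, float("inf")):
                best[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [exit_node], exit_node     # walk back through predecessors
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

rooms = {"lobby": [("hall", 10), ("stairs", 12)],
         "hall": [("exit", 5)], "stairs": [("exit", 6)]}
route = evacuation_path(rooms, {("lobby", "hall"): 0.8}, "lobby", "exit")
```

With the hall corridor on fire (hazard 0.8), the route diverts through the stairs even though the hall path is geometrically shorter; without any hazard, the hall path is chosen.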
Vehicles, smart transportation, and AIoT
AI algorithms and techniques integrated into the IoT (AIoT) have found important applications in vehicles and transportation. An example of such applications is self-driving or autonomous vehicles. Future self-driving cars and vehicles will be embedded with several types of sensing instruments (e.g. cameras, lidar, and radar) and will generate huge amounts of data (e.g. 100 GB per second). 89 A significant challenge to be addressed is the real-time and secure processing of these sensor data to enable fast responses to complex scenarios such as obstacle avoidance and velocity adaptation. The AIoT deployed at the network edge together with federated machine learning and secure trust models offers potential solutions. Xiao et al. 52 proposed an approach based on using blockchain for intelligent driving edge systems. The approach utilized a double auction mechanism to optimize the satisfaction of users and service providers for edge computing. Their experimental results showed that the approach could give better performance for resource utilization.
Dass et al. 53 proposed a trust evaluation scheme called T-Safe to evaluate the trustworthiness of data generated from sensing nodes in an IoT-based intelligent transportation system. Safety-related information is provided to end users by a safety-as-a-service infrastructure based on data generated by the sensor nodes, using a method termed as decision virtualization. The authors premised that the accuracy and efficiency of such information depend on the trustworthiness, security, and privacy of the sensing nodes and the communication medium through which the data are transmitted. The authors designed a trust evaluation model to address this problem. They utilized direct and indirect trust mechanisms on every node in order to update the trust measures at regular time intervals. The trust measures are then used to evaluate the trust of every data item generated from the network. An integer linear programming (ILP) model is formulated to identify optimal data with a reduced effect of unauthorized nodes for decision-making. Experimental results showed that the proposed scheme outperformed existing greedy methods.
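The direct/indirect trust update can be illustrated with a minimal sketch. The scheme's exact formulas are not reproduced here; the blending weight, neutral prior, and acceptance threshold below are all assumptions for illustration.

```python
# Illustrative periodic trust update: each node's trust blends direct
# trust (the node's own observed interaction success rate) with indirect
# trust (recommendations from neighboring nodes), and data items are
# accepted only from nodes above a trust threshold.

def direct_trust(successes, total):
    return successes / total if total else 0.5   # neutral prior, no history

def update_trust(node_obs, recommendations, weight_direct=0.7):
    """node_obs: (successes, total); recommendations: peer trust scores."""
    d = direct_trust(*node_obs)
    ind = sum(recommendations) / len(recommendations) if recommendations else d
    return weight_direct * d + (1 - weight_direct) * ind

def trustworthy(trust, threshold=0.6):
    return trust >= threshold

t_good = update_trust((9, 10), [0.8, 0.9])      # reliable sensing node
t_bad = update_trust((2, 10), [0.3, 0.1])       # likely compromised node
```

Running the update at regular intervals lets trust scores track node behavior over time, and the threshold then screens out data items from low-trust nodes before decision-making.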
Biomedical, smart health, and AIoT
The AIoT has been employed to manage and process the large amounts of data generated in the Internet of Medical Things (IoMT). 90 Yang et al. 54 proposed an intelligent architecture for processing visual data from IoT-assisted health systems. Their architecture consists of three modules: (1) end processing module; (2) edge control module; and (3) cloud management module. The intelligence at the end side was generated from the analysis of human, machine, and sensor characteristics. The intelligence on the edge and cloud sides was determined by an intelligent measurement model proposed by the authors. Their experimental results showed that the proposed method could outperform existing methods.
Mustafa et al. 55 proposed an AI-based IoT system for detecting and classifying stress. In their approach, a wearable device was equipped with various sensors (skin conductance, electrocardiograph (ECG), and skin temperature sensors) to measure physiological characteristics. The physiological data collected were transferred through the user's mobile device to the cloud. An AI algorithm was used to analyze the data to ascertain the level of stress. The user is alerted of the predicted stress level through a mobile phone. In the situation of a severe stress level, the user's physician would be alerted for necessary action. Their system achieved a 97.6% binary classification accuracy on real-time sensor data.
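The pipeline from wearable readings to user and physician alerts can be sketched with a rule-based stand-in. The actual system trains an AI model on the sensor data; the scoring rules, resting baselines, and thresholds below are hypothetical and serve only to show the flow.

```python
# Hedged sketch of the stress-alerting pipeline: combine deviations of
# wearable sensor readings from typical resting values into a score,
# then map the score to a stress level and the corresponding alerts.

def stress_score(skin_conductance_uS, heart_rate_bpm, skin_temp_c):
    """Combine normalized deviations from assumed resting baselines."""
    score = 0.0
    score += max(0.0, (skin_conductance_uS - 5.0) / 10.0)  # ~5 uS at rest
    score += max(0.0, (heart_rate_bpm - 70.0) / 50.0)      # ~70 bpm at rest
    score += max(0.0, (34.0 - skin_temp_c) / 4.0)          # stress cools skin
    return min(score, 1.0)

def classify(score):
    if score < 0.3:
        return "relaxed", []
    if score < 0.7:
        return "stressed", ["notify_user"]
    return "severe", ["notify_user", "alert_physician"]

level, actions = classify(stress_score(14.0, 120.0, 31.0))
```

Elevated conductance, a racing heart, and cooled skin together push the score into the severe band, which triggers both the user notification and the physician alert described above.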
Environmental, smart agriculture, and AIoT
Much research utilizing AIoT technology has focused on smart agriculture, food processing, and the optimization of environmental conditions to increase food production. Chen et al. 56 proposed an AIoT-based scheme, termed RiceTalk, that detects rice blast disease. The scheme is based on a soil-cultivation IoT platform. Agriculture IoT sensors are used to gather data, which the AI mechanism trains on and analyzes automatically in real time. Existing studies made use of hyperspectral image or non-image data to determine plant diseases, which requires human effort to capture the photographs and data for analysis. The novel capability of RiceTalk is that the AI model is treated and managed as an IoT device, which significantly minimizes the cost of managing the platform to deliver real-time training and prediction. Their experimental results showed that RiceTalk could give a rice blast prediction accuracy of 89.4%.
Nurminen and Malhi 57 designed an AIoT-based system to monitor and analyze measurements of humidity, temperature, moisture, and luminosity to support clients in optimizing the environmental conditions for houseplants. In this work, the authors proposed a solution that provides open interfaces so as to build an Open Houseplant Database for intelligent agents. Their approach also used an open messaging interface (O-MI) node for data collection from the sensors and a smart device for data analysis and prediction. The users were provided with an online user interface with standard data visualization using dialogues and graphs. Their experimental results showed that the system has great potential to support, in a smart manner, the cultivation and management of herbs and houseplants.
Chen et al. 58 proposed an AIoT-based platform, termed PigTalk, to detect and mitigate piglet crushing in pig farms. The PigTalk approach was deployed in a farrowing house made up of different farrowing cages. Smart microphones were placed on top of the cages to detect piglet screams, and rotating Internet Protocol (IP) cameras were used to monitor the cages. Floor vibration and water drop actuators were used to prompt a sow to stand up when a piglet scream is detected. In their approach, the raw audio data (the piglet scream) are preprocessed by an audio clip transform method to detect piglet screaming. When a sow lies on a piglet, the data (piglet scream) are received by PigTalk through the microphone. PigTalk processes the data in real time and automatically activates the actuators (floor vibration, water drop, etc.) or the heat light to force the sow to stand up. In contrast, existing systems would require a farm worker to locate the particular cage and push up the sow to avoid crushing. PigTalk achieved its operation using two IoT devices: (1) DataBank and (2) ML-device. The DataBank device provided the improved generation of spectrograms of vocalizations received from the farrowing cages, whereas the ML-device executed the CNN machine-learning model. Their experimental results showed that crushed piglets could be saved by PigTalk within 0.05 s with a success rate of 99%.
Education, smart entertainment, and AIoT
Several authors have proposed using the AIoT in the education field to make teaching and learning more effective and rewarding. Wangoo and Reddy 59 proposed an intelligent learning environment framework for IoT wearable devices to enhance interactive learning environments. The authors made use of GUI tools such as Node Red, NEMA GUI Builder, and Embedded Wizard to develop the GUI for the IoT platform. Their framework is designed to provide students the opportunity to interact effectively during teaching and learning activities. IoT wearable devices used in educational applications can utilize the framework for enhanced virtualization. Their experiments showed that the framework enabled IoT wearables to provide better learning experiences to users.
Liu et al. 60 applied AI analysis and the IoT to determine students' concentration levels in individualized learning environments and to enhance learning quality using an automated IoT control model. In this work, the authors employed image recognition, brainwave signal analysis, and environmental quality detection technologies to capture data from students learning different courses under different environmental conditions in order to determine their concentration levels. The data collected were passed through an ANN computation. Afterwards, the results were applied to IoT control devices to adjust the learning conditions. Their experimental results showed that learners' concentration improved as the learning conditions improved.
Zhao et al. 61 designed an AI collection of self-built learning resources. The aim of the work was to popularize the teaching and learning of AI in China. To reduce certain learning factors that discouraged teachers and students from participating in AI teaching activities, the authors developed an open hardware resource called STEAM Suite and a graphical drag-block programming language called December. The STEAM Suite provides several types of motors, plates, tires, beams, gears, shafts, batteries, and other basic components. This enables students to build different shapes of mechanical structures, which is expected to motivate them to develop an interest in AI as they cultivate a strong ability in using the components. The authors verified that, due to the practice-oriented nature of the learning resources, the level of interest in the teaching and learning of AI increased.
The application of the AIoT in the entertainment sector has brought revolutionary changes in many ways. The enormous data generated can be analyzed and used to improve user experiences and make targeted business decisions. Researchers are utilizing the AIoT to develop smart systems that empower businesses in this sector for better service delivery. 62 De Lima et al. 63 proposed an adaptive method to generate personalized interactive storytelling using users' preferences. The authors investigated the application of personality modeling and generated personalized accounts of experience with respect to users' personalities using the Big Five factors. The proposed method was evaluated in an online interactive storytelling platform. Their experimental results showed that the proposed approach could effectively identify users' preferences for story episodes with 91.9% average accuracy and enhance the experience and satisfaction of users.
Robotics and computer vision in AIoT
The AIoT has enjoyed tremendous application in robotics and computer vision. Robots are empowered with sensors and AI algorithms to capture and learn from new data to develop intelligence. This ability has motivated the use of robots in manufacturing, health care, and so on to perform tasks meant for human experts faster and at lower cost. 91 In smart cities, AIoT-enabled drones are used for several surveillance purposes such as real-time traffic monitoring. Traffic data are transmitted, analyzed, and used to make decisions on the best way to reduce congestion by automatically adjusting the speed limits and the timing of the traffic lights. 64
Lee and Chien 65 designed an AIoT architecture to coordinate surface, underwater, aerial, and ground robots to respond to disaster situations where it is impractical to send personnel. The robots are connected to an IoT network and used to collect data from a disaster site. The data are transmitted from the field workstations through the IoT network to the cloud, where they are used to train a deep learning model. After training and verification, the model is transmitted back to the robots through the field workstations so that they can continue on-site object classification and make response decisions as they continuously confirm identifications against the environment. Kim et al. 66 proposed an approach termed as CONTVERB (continuous virtual emotion detection system) for an IoT environment. The IoT devices transmit a wireless signal to a person within signal range and receive the reflection of the signal. The IoT devices process the reflected signal using a set of heartbeat segmentation and respiration procedures to recognize at least four different kinds of human emotion, such as sadness, joy, pleasure, and anger. Simulations and implementations of the proposed system showed its effectiveness.
Ke et al. 67 proposed an intelligent edge computing approach for the detection of parking space occupancy in a smart city. Edge AI and the IoT are employed to distribute the computation load. The volume of data to be transmitted was designed to be low to handle the bandwidth problems associated with the real-time processing of video data. TensorFlow Lite was used to implement an SSD-MobileNet detector on the IoT devices, trained with the MIO-TCD data set. A tracking algorithm was deployed on the server end to track vehicles in parking facilities. The system was tested in a real-world environment for 3 months and achieved 95% accuracy. Li et al. 92 proposed an approach for defect detection in large-scale photovoltaic plants. Their approach utilized UAVs and edge computing to perform the defect detection. In this work, the authors developed an approach that combines deep learning and transfer learning (TL) with data augmentation for deployment on resource-constrained edge devices. Techniques were also used in the network to reduce the parameters and the size of the model. Some other examples of the AIoT for robotics and computer vision using various techniques can be found in the literature. Velasco-Montero et al. 93 proposed a methodology to predict the performance of CNNs on vision-based AIoT devices. Chiu et al. 94 proposed a distributed learning approach for AIoT video-based service platforms.
Security in AIoT
Security in the AIoT is a critical factor. Research in this field shows that with the AIoT, IoT devices are enabled to learn from data and make swift decisions when an abnormal behavior is detected in the network in real time. 95 Chakrabarty and Engels 68 proposed a framework for the IoT-enabled smart city using AI to mitigate a range of present and future cyber threats. The massive adoption of the IoT in smart cities provides a wide attack surface for a heterogeneous, large, and complex smart city system. The authors' proposed framework focused on the security of the IoT communication protocols. An AI engine was strategically positioned at different stages (devices and big data center) within the framework to learn the normal network behavior and to monitor the security and health of the network. It is also able to acquire, store, and analyze the big data generated by IoT devices. Their framework showed better effectiveness compared to existing smart city frameworks.
A self-adaptive and parallel-processing intrusion detection system for an SDN network was proposed by Suresh and Madhavu. 69 The AI-based IDS was designed using the self-adaptive energy BAT algorithm. In the initial stage of their design, the software layer analyzes arriving traffic packets and performs feature selection. The classification of the packets is performed by the system in the second stage, and if any attack is confirmed, the system takes appropriate control measures and makes decisions regarding network restrictions, such as resource allocation, routing, traffic management, and packet management. The authors performed training using the KDD CUP 99 data set, and testing was performed using data from a real-time IoT platform. The AI-based IDS showed a reduction in energy overhead in terms of the computation time to discover important features, and a faster response time to intrusions compared to the swarm intelligence-based BAT algorithm, which lacks parallel processing and self-adaptive abilities. Eskandari et al. 70 proposed an intelligent anomaly-based IDS termed as Passban. The particular quality of the system is its direct deployment on low-cost IoT devices and its platform-independent ability. The aim of the authors was to ensure the protection of data very close to the IoT data sources, so that data generated from several IoT sources can be analyzed to detect anomalies. The IDS training phase was carried out on a normal network flow. After the training, the learned model is placed into the gateway's internal memory, where it is used to detect attacks in arriving network traffic. The system was evaluated against common cyber threats (SYN flood attacks, port scanning, and brute force attacks) and demonstrated its efficiency.
Trust recommendation is another area of importance for IoT security. Xu et al. 71 proposed an approach termed as TT-SVD for trust recommendation in AI-enabled IoT systems which incorporates trust and rating information. In this work, the authors used a dual truster model (TrusterSVD) and trustee model (TrusteeSVD) based on a rating-only recommendation model. The influences of the two models were combined by a weighted average to produce the hybrid TT-SVD model. Some other examples of AIoT security using various techniques can be found in the literature. Li et al. 72 proposed an approach for resource optimization in the blockchain IoT based on a dueling DQN. Wang et al. 73 proposed another approach using blockchains to address the issues of decentralized multiparty learning in the AIoT. Hu et al. 74 and Lim et al. 75 proposed using FL to address the problems of privacy and incentive mechanism design, respectively. Jia et al. 76 proposed an approach termed as FlowGuard to defend against IoT DDoS attacks. Libri et al. 77 proposed an approach termed as pAElla for real-time malware detection in edge IoT devices.
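The weighted-average combination step of such a hybrid recommender can be shown in a few lines. The component scores below are hypothetical placeholders, not outputs of the actual SVD-based sub-models, and the weighting parameter is an assumption for illustration.

```python
# Illustrative combination step for a hybrid trust recommender: a
# truster-side score and a trustee-side score are blended by a weighted
# average into a single hybrid prediction.

def hybrid_predict(truster_score, trustee_score, alpha=0.5):
    """Weighted average of the two sub-model outputs (alpha weights truster)."""
    return alpha * truster_score + (1 - alpha) * trustee_score

# Hypothetical predicted ratings of one item for one user from each side:
p = hybrid_predict(truster_score=4.2, trustee_score=3.6, alpha=0.6)
```

Tuning the weight lets the hybrid lean on whichever side (truster or trustee information) is more reliable for a given user–item pair.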
Conclusion
This paper has reviewed and discussed the convergence of the AIoT, where the advantages of intelligent machine-learning algorithms are integrated into resource-constrained IoT sensors and devices to enable large-scale and complex sensor deployments for IoT infrastructures. The paper has discussed the AIoT from several aspects and layers (sensors and devices; communication and networking; and AIoT applications), discussed the combination of the AIoT with different emerging technologies (e.g. edge, fog, and MEC computing, SDN, 5G/6G cellular networks, etc.), and given insights into the various challenges and issues which remain to be resolved to enable practical deployments of the AIoT into increasingly varied and complex environments.
