Introduction
As the name implies, the Internet of Things (IoT) is the Internet connected to objects. This has two implications: first, the core and foundation of the IoT is still the Internet; the IoT is a network that extends and expands over the Internet. Second, its network composition and communication use a unique addressing scheme built on the objects and devices around us (such as RFID (radio-frequency identification) tags, smartphones, sensors, actuators, etc.), extending connectivity to any object and constituting a global network of interconnected entities. Entities in this network can interact with each other anytime and anywhere, completing various tasks of information exchange and communication, 1 that is, the connection of objects. The IoT integrates networks through communication and sensing technologies such as smart sensing and pervasive computing. It is therefore called the third wave of the development of the world information industry after the computer and the Internet, and is also known as the fourth revolutionary technology after the steam revolution, the power revolution, and the Internet revolution.
As a network of objects, the IoT carries great expectations. Its anticipated application areas include intelligent logistics, intelligent transportation, precision agriculture, smart home, environmental protection, smart power, retail management, healthcare, financial management, public safety, industrial supervision, intelligent building, urban management, military management, and so on. It can be said that future life, the world, and the Internet are closely related. By definition, the IoT is a network that extends and expands on the Internet. Constrained networks (such as sensor networks) and resource-constrained devices (such as sensors, RFID tags, actuators, etc.) at the edge of the Internet are the main components of the IoT and are also the main bottleneck for IoT security. Resource-constrained networks are those in which the common link-layer characteristics of the Internet are currently difficult to implement; they usually have low throughput, high packet loss rates, and extremely asymmetric links. Resource-constrained nodes are those in which many common characteristics of Internet nodes are currently difficult to implement.
Resource-constrained nodes are often seriously deficient in energy, storage, and processing capability. Although resource-constrained networks and resource-constrained nodes are different things, resource-limited networks usually contain a large number of resource-constrained nodes, and those nodes are the main bottleneck of such networks. Given their limited computing power and severely restricted storage space, running end-to-end Internet security mechanisms such as TLS or IPsec on these devices faces challenges of power consumption, high packet loss rates, and more. Therefore, given the actual conditions of IoT networks, studying end-to-end security technology applicable to the IoT is of great practical significance and value.
Related work
Research on IoT security architecture based on fog computing has been a hot topic in academic and industrial research.2,3 C Dsouza et al. 4 proposed policy-based resource management in fog computing, extending the current fog computing platform to support secure collaboration and interoperability between resources requested by different users. A Alrawais et al. 5 proposed a mechanism that uses fog computing to improve the distribution of certificate revocation information among IoT devices and thereby improve security. C Thota et al. 6 proposed an efficient centralized security architecture for end-to-end integration of IoT-based healthcare systems deployed in cloud environments. OH Alhazmi and Aloufi 7 analyzed the performance of fog-based IoT and proposed a security scheme based on the MQTT protocol. N Moustafa 8 proposed a systemic IoT-fog-cloud architecture that clarifies the interaction between the IoT, fog, and cloud layers to effectively implement big data analytics and network security applications. PY Zhang et al. 9 discussed and analyzed the architecture of fog computing and pointed out potential security and trust issues. Z Wen et al. 10 outlined the core issues, challenges, and future research directions of fog-enabled business processes in IoT services.
Fog computing system architecture
Introduction to the system architecture of fog computing
Compared with the three-layer architecture of cloud computing (end user layer, network layer, and cloud layer), the fog computing system can be divided into five layers after the introduction of the intermediate fog layer: the end user layer, the access network layer, the fog layer, the core network layer, and the cloud layer, as shown in Figure 1. It is easy to see that the closer a layer is to the bottom, the larger its distribution area and the smaller the delay with which end user data reaches it. 11 Table 1 shows the central equipment and important functions of these five layers.

System architecture of fog computing.
Fog computing of the main equipment and functions of each layer.
In the proposed fog computing framework, a fog layer with computing and storage capabilities is inserted between the cloud server and the terminal device, and the key data and computing services that previously had to reside on the cloud server are moved to fog servers closer to the terminal devices. By providing data caching, localized computing, and other functions, the high-traffic and low-latency demands of mobile applications are better met.
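To make the caching role of the fog layer concrete, the following sketch (with invented latency figures and class names) shows how a fog node can answer repeated requests locally and forward only cache misses to the cloud:

```python
# Hypothetical sketch: a fog node serving cached content locally and
# forwarding cache misses to the cloud, illustrating how the fog layer
# reduces traffic and latency for end devices. Latencies are invented.
FOG_LATENCY_MS = 10     # assumed round trip to a nearby fog node
CLOUD_LATENCY_MS = 120  # assumed round trip to a remote cloud data center

class FogNode:
    def __init__(self):
        self.cache = {}  # key -> cached payload

    def request(self, key, cloud):
        """Return (payload, latency_ms) for a request from an end device."""
        if key in self.cache:                      # served at the fog layer
            return self.cache[key], FOG_LATENCY_MS
        payload = cloud[key]                       # miss: fetch from cloud
        self.cache[key] = payload                  # cache for later requests
        return payload, FOG_LATENCY_MS + CLOUD_LATENCY_MS

cloud_store = {"map_tile_42": b"...tile data..."}
node = FogNode()
_, first = node.request("map_tile_42", cloud_store)   # cold: goes to cloud
_, second = node.request("map_tile_42", cloud_store)  # warm: served locally
print(first, second)  # 130 10
```

The second request never leaves the fog layer, which is precisely the localization benefit the framework aims at.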
The end user layer is mainly composed of the user’s mobile phone, portable computer, and other terminal devices. With the development of sensor network technology, the sensor node will also play an important role in this layer. These devices may be fixed devices placed somewhere, such as sensors placed on traffic lights, on both sides of the road, or on mobile terminals such as users’ mobile phones and laptops. At this level, these devices will act as content creators and content users. The task will be generated at this layer and the processed results will be returned to this layer. In addition, the terminal device also needs to discover and specify the fog node corresponding to the task forwarding. 12
Design and implementation of fog computing system architecture
In terms of implementation, CISCO® proposed the IOx framework, on which users can develop and deploy applications. 13 For small fog-based applications, a small universal fog computing platform has been implemented using the Raspberry Pi. 14 In addition, existing work has designed fog computing platforms for specific application scenarios, such as smart cities and intelligent transportation. These platforms will be described in detail in the "Prospects and conclusion" section. In the research on virtual simulation platforms, a simulation platform, iFogSim, has been proposed. It is designed in Java and implements the configuration of terminal devices, the cloud computing center, and network links, so that fog computing resource scheduling and management algorithms can be evaluated in different scenarios according to actual needs. The performance of a designed algorithm can then be measured by indicators such as delay, power consumption, network resource occupancy, and control cost. Although this brings new opportunities to scheduling-algorithm researchers, the simulator's fidelity is limited; there are still differences between simulated effects and actual results.
Resource scheduling in fog computing
When fog computing is performed, tasks are loaded into virtual machines (VMs) or containers 15 and run there. To use physical machine resources efficiently, VMs or containers are migrated between physical machines, and where such a VM or container is migrated depends on the design of the scheduling algorithm. This design is more complicated than in cloud computing. On one hand, fog computing is highly time sensitive, so the time from the user to the fog node, the time a task takes to migrate between fog nodes, and the time a fog node needs to forward a task to the cloud data center must all be accurately taken into account. On the other hand, fog nodes include not only computing nodes but also storage-related nodes, and the placement of these nodes likewise affects service quality to some extent. In summary, the following factors need to be considered in the design of the scheduling algorithm.
Storage capacity
One of the main functions of the fog layer is storage. For a computing task, the required source data are scattered across the fog nodes. The storage requirement is to bring the data as close as possible to where users need it while using as little storage space as possible. In this regard, data collection time is the most important assessment indicator: the location of the data directly determines its response time. The cost of extra copies, in turn, can be controlled by limiting the number of data backups. The measurement of storage therefore uses these two indicators: access time and the number of backups.
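The two storage indicators above can be combined into a single placement score. The following sketch is purely illustrative; the node names, latencies, and weights are assumptions:

```python
# Hypothetical sketch of the two storage metrics discussed above: the
# response time implied by where data replicas are placed, and the number
# of backups kept. Node-to-user latencies (ms) are illustrative values.
def placement_score(replica_nodes, latency_to_user, w_time=1.0, w_copies=5.0):
    """Lower is better: fast access for the user, few replicas kept."""
    access_time = min(latency_to_user[n] for n in replica_nodes)
    return w_time * access_time + w_copies * len(replica_nodes)

latency = {"fog_a": 8, "fog_b": 25, "cloud": 120}  # assumed latencies
# One nearby copy beats two more distant ones under this weighting.
print(placement_score({"fog_a"}, latency))           # 13.0
print(placement_score({"fog_b", "cloud"}, latency))  # 35.0
```

A real scheduler would of course also weigh storage capacity and durability requirements per data item.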
Delay
In addition to storage, the fog node also takes on computation in the fog layer, and one of the most important metrics of computing power is latency. Compared with cloud computing, a fog computing task can be executed directly on the fog node, without uploading the task to the cloud data center and returning the result from there, so the delay can be effectively reduced. The most direct measure is the round-trip time: the total time from the moment a terminal device issues a task to the moment the result is returned to it. Although round-trip time for the same task is intuitive and convenient for horizontal comparison, it completely ignores differences in the amount of data a task must transmit and in its computational load. Counting instructions executed per unit time is also problematic, because the specifics of a user's program may prevent it from keeping the CPU constantly busy. Today, a more reasonable way to measure calculation delay is through service-level agreement (SLA) violations, which makes resource shortage quantifiable. 16 When the SLA is violated, the degree of shortage is defined as

S = (R − U) / R,

where R is the amount of resource requested and U is the amount actually supplied (U < R when the SLA is violated).
In this regard, the proof is as follows. Assuming that the tasks are independent of each other, the total requested resource amount is

R = r1 + r2 + ⋯ + rn,

where ri is the resource requested by task i.
The expected number of instructions delayed due to insufficient resources is then

E[N] = c · E[R − U] = c · R · E[S],

where c is the number of instructions that one unit of resource executes per unit time.
It can be seen that there is a proportional relationship between the expected value of the resource shortage degree and the expected value of the delay instruction number, and the number of delay instructions is also proportional to the delay time. Therefore, it can be concluded that there is a proportional relationship between the expected value of the resource shortage level and the time delay.
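The shortage metric above can be computed directly from per-interval demand and supply. A minimal sketch, with an invented four-interval workload:

```python
# Sketch of the SLA-based shortage metric described above: the shortage
# degree S = (R - U) / R per interval, averaged over all scheduling
# intervals. The sample workload below is illustrative.
def shortage_degree(requested, supplied):
    """Per-interval shortage: 0 when demand is met, (R-U)/R otherwise."""
    return [(r - u) / r if u < r else 0.0
            for r, u in zip(requested, supplied)]

# CPU demand vs. capacity actually granted in four scheduling intervals.
R = [100, 200, 150, 100]
U = [100, 150, 120, 100]
S = shortage_degree(R, U)
avg_shortage = sum(S) / len(S)
print(S)             # [0.0, 0.25, 0.2, 0.0]
print(avg_shortage)  # 0.1125
```

By the proportionality argued above, a scheduler minimizing this average also keeps the expected delay low.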
It is not difficult to find that this measurement method ignores both the delay cost of differentiated services in the network and the nonlinearity of delay costs. The definition of delay cost therefore needs further research and improvement.
Power consumption
Power consumption is also an important indicator of computing power. Task execution is generally loaded onto a VM, and multiple VMs on the same device share the device's resources, which can greatly improve resource use efficiency. Figure 2 shows the power consumption parameters of some servers.

Power consumption parameters.
As can be seen from Figure 2, there is a linear relationship between device power consumption and CPU resource usage; the IBM server x3250 consumes the least power and the HP ProLiant ML110 G3 the most. Power consumption can therefore be modeled as

P(u) = P_idle + (P_max − P_idle) · u,

where u is the CPU utilization of the device, P_idle is the power drawn when the device is idle, and P_max is the power drawn at full utilization.
Therefore, the general strategy is to consolidate VMs onto as few devices as possible; however, the higher the degree of consolidation, the greater the risk of resource shortage, and hence of delay. A unified consideration of computing power therefore generally combines the two indicators of power consumption and delay. Similar indicators, such as CO2 emissions proportional to resource consumption, can serve the same purpose.
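The combined consideration of power consumption and delay can be sketched as a single cost function built on the linear power model suggested by Figure 2. All coefficients and candidate utilizations below are illustrative assumptions, not measured values:

```python
# Hypothetical sketch combining the two indicators above: the linear power
# model P(u) = P_idle + (P_max - P_idle) * u plus an SLA-shortage penalty.
# All coefficients and the candidate utilizations are illustrative.
def power(u, p_idle=60.0, p_max=120.0):
    """Linear power model (watts) as a function of CPU utilization u."""
    return p_idle + (p_max - p_idle) * u

def placement_cost(u, demand, w_power=1.0, w_delay=500.0):
    """Trade off power draw against the risk of resource shortage."""
    shortage = max(0.0, demand - (1.0 - u))  # demand that would not fit
    return w_power * power(u) + w_delay * shortage

# Packing a task with 30% CPU demand onto a half-loaded vs. a nearly
# full host: consolidation saves power until shortage risk dominates.
print(round(placement_cost(0.5, 0.3), 1))  # 90.0
print(round(placement_cost(0.9, 0.3), 1))  # 214.0
```

The weight `w_delay` encodes how the operator prices SLA risk against electricity; tuning it shifts the consolidation point.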
Utility
Fog computing has commercial value just as cloud computing does. Most current fog computing deployments target specific application scenarios and are mainly private fogs. However, as fog computing becomes popular, a private fog node may not be able to meet the needs of all the tasks uploaded to it, and people may rent fog nodes just as they rent cloud servers. For example, when using the computing power provided by surrounding sensors and smart devices on both sides of the road in the Internet of Vehicles, a pricing problem arises. Since there is not yet a business model as mature as cloud computing's, this indicator mainly applies to specific scenarios, but it is likely to become a research focus in the future.
With the continuous development of the IoT, the number of IoT users has gradually increased and the volume of transmitted data has grown rapidly, overburdening cloud servers. As an emerging computing model, fog computing provides a new way to relieve this pressure. Fog nodes deployed at the edge of the network can share the work of cloud computing and handle some simple tasks near the terminal. However, the rational use of fog node resources remains both a difficult point and a key point. Resource management and scheduling in fog computing are the key factors affecting the performance of fog computing services; especially under large-scale service requests, an unsolved resource scheduling problem will increase service delay and reduce resource utilization. The optimization goal can therefore be reached through in-depth study of fog computing resource management and scheduling. Some researchers have proposed an agent-based cloud federation framework that improves the existing fog computing framework. In addition, a service popularity-based smart resources partitioning (SPSRP) scheme has been proposed for fog-computing-enabled IoT. 17
Multi-layer security measures based on fog computing
Main level
According to the relevant literature, it is logical to divide the IoT into three main levels: the perception layer, the transport layer, and the processing layer. In addition, the application of the data formed by the processing layer can be regarded as an application layer. Each logical layer is covered by the underlying security architecture of the IoT. The fog computing layer is mainly distributed over the perception layer and the side of the transport layer close to it, as shown in Figure 3.

Relative position of the fog computing layer in the Internet of Things system.
Some researchers have proposed different security measures for the hardware and embedded device layers below the fog computing layer to protect against the security challenges that the IoT system faces from the bottom up. 18 For example, to ensure the traceability and integrity of data, the physical-layer anti-cloning function of the sensor must be used; to strengthen reliability management, physical unclonable functions and hardware performance counters are needed; and to improve confidentiality and privacy protection, lightweight encryption algorithms are necessary. Beyond these protection elements, various algorithms such as encryption algorithms, hash functions, and key exchange algorithms can serve as cryptographic elements of IoT security protection. Using different cryptographic algorithms, and choosing to process data at different locations, can make a large difference in the energy consumed. Therefore, to avoid excessive energy consumption, an appropriate processing location and a suitable cryptographic algorithm must be selected according to the amount of data. For example, data within 1 KB can be handled inside the sensor; data within 1 MB can be processed at a fog node; and data of 1 GB or more must be processed at the gateway or a higher-level joint infrastructure.
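The size-based placement rule above translates directly into code. A minimal sketch using the thresholds from the text (1 KB, 1 MB, 1 GB):

```python
# Sketch of the size-based placement rule described above: process up to
# about 1 KB on the sensor itself, up to about 1 MB on a fog node, and
# anything larger at the gateway or a higher-level joint infrastructure.
KB, MB, GB = 1024, 1024**2, 1024**3

def processing_location(data_bytes):
    """Pick the lowest (cheapest) location able to handle the payload."""
    if data_bytes <= 1 * KB:
        return "sensor"
    if data_bytes <= 1 * MB:
        return "fog node"
    return "gateway or joint infrastructure"

print(processing_location(512))       # sensor
print(processing_location(200 * KB))  # fog node
print(processing_location(2 * GB))    # gateway or joint infrastructure
```

In practice the cutoffs would be calibrated per device class, but the tiered structure is exactly what the energy argument above prescribes.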
To shorten system response time, data should be processed locally as far as possible, which greatly improves the efficiency of the IoT system. Powerful microcontrollers are making the on-chip systems of smart sensors increasingly capable. For example, a flash microcontroller produced by AD has built-in 64-KB program flash and 4-KB data flash, 2304 bytes of data RAM, and a large number of on-chip peripherals such as a 12-bit ADC/DAC (analog-to-digital converter/digital-to-analog converter), time interval counters, a watchdog timer, and so on; its 8052 core runs at up to 20 MHz. A system-on-chip of this level is sufficient to support lighter-weight cryptographic operations. Because sensor network management in the IoT generally uses a 16/32-bit advanced RISC machine (reduced instruction set processor) + embedded Linux architecture, coupled with full power and hardware support, it is fully capable of providing stronger encryption protection, operating essentially at the level of a personal computer. Researchers abroad have conducted in-depth studies, summarized the encryption elements available at each layer of the IoT, and made relatively reliable recommendations (see Table 2).
Encryption elements of each layer of the Internet of Things.
Expected problems to be solved
To build an IoT security architecture based on the fog computing layer, the first issue is to choose the right fog computing hardware configuration, locate appropriate security measures and deployment locations, and build and validate suitable encryption methods. Using the new fog computing layer, existing lightweight encryption methods can be tested for delay and power consumption, and the safety factor can be raised by improving simple encryption methods or switching to higher-strength security algorithms that meet the above requirements. Afterward, we will strive to optimize the underlying architecture of the IoT system, consolidating and strengthening its foundation by reducing the computational delays caused by security measures without lowering security indicators or increasing power consumption. To build an IoT security system based on fog computing, all kinds of resources introduced by the fog computing layer must be fully used, maximizing security strength while satisfying existing safety measures. The key issues expected to be addressed include the following four aspects.
Security algorithm optimization strategy for hardware such as sensors
According to the relevant data, the safety factor of traditional sensors is not high enough, because they only generate digital measurement results from the objective quantities they collect and then directly encrypt and upload those results. One way to improve a sensor's safety factor is to extract the unique ID of each sensor and modify the sensor's security algorithm so that the unique ID also serves as a parameter of the encryption calculation, which greatly improves the security strength of the output.
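A minimal sketch of this idea using only the Python standard library: the sensor's unique ID is mixed into the key derivation, so the device identity becomes a parameter of the security computation. The shared secret, sensor IDs, and choice of HMAC-SHA-256 are illustrative assumptions, not the scheme from the literature:

```python
import hashlib
import hmac

# Sketch: bind a sensor's unique ID into the keying material so that the
# device identity becomes a parameter of the security computation, as the
# text suggests. Secret, IDs, and algorithm choice are illustrative.
def device_key(shared_secret: bytes, sensor_id: bytes) -> bytes:
    """Derive a per-device key from a shared secret and the unique ID."""
    return hmac.new(shared_secret, b"sensor-key|" + sensor_id,
                    hashlib.sha256).digest()

def tag_measurement(key: bytes, measurement: bytes) -> bytes:
    """Authenticate a measurement with the per-device key."""
    return hmac.new(key, measurement, hashlib.sha256).digest()

secret = b"network-wide shared secret"
k1 = device_key(secret, b"sensor-0001")
k2 = device_key(secret, b"sensor-0002")
# Different sensors produce different tags for the same reading, so a
# tag captured from one device cannot be replayed as another's.
print(tag_measurement(k1, b"23.5C") != tag_measurement(k2, b"23.5C"))  # True
```

The same derivation pattern applies when the ID feeds an encryption key rather than an authentication key.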
Improvement strategies for lightweight security algorithms
Due to the limited resources of IoT terminals, lightweight security algorithms are still the most widely used methods on them. If an IoT terminal can draw on additional computing power and storage through fog computing, it has enough capacity to support security algorithms of higher strength and greater computational complexity, greatly improving its computing power and security performance. To realize this idea, however, the existing lightweight security algorithms must first be thoroughly studied and then properly improved and implemented, speeding up their operation, raising their security strength, and reducing energy consumption.
Measurement of resource supply for fog nodes
The fog computing layer is not just a simple collection of fog nodes, because multiple nodes can form a cluster to achieve efficient integration of resources. To grasp the maximum resource potential that the fog computing layer can provide, a relatively stable node resource evaluation algorithm must be formed through appropriate stress testing, which will also provide a feasible tool for future IoT network planning.
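A toy version of such a stress-test-based evaluation might time a fixed CPU workload on the node and combine it with reported free memory. The workload size and scoring weights below are invented:

```python
import time

# Toy stress test for scoring a fog node's resource supply, as described
# above: time a fixed CPU workload and combine the result with available
# memory. The workload size and scoring weights are illustrative.
def cpu_benchmark(iterations=200_000):
    """Seconds taken to run a fixed arithmetic workload on this node."""
    start = time.perf_counter()
    acc = 0
    for i in range(iterations):
        acc += i * i % 7
    return time.perf_counter() - start

def node_score(bench_seconds, free_mem_mb, w_cpu=1.0, w_mem=0.01):
    """Higher is better: fast CPU (short benchmark) plus ample memory."""
    return w_cpu / bench_seconds + w_mem * free_mem_mb

score = node_score(cpu_benchmark(), free_mem_mb=512)
print(score > 0)  # True
```

Running the same harness across a cluster yields comparable scores for placement decisions, which is the "feasible tool" the text envisions.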
Standards for measuring the degree of improvement in security algorithms
The improvement of a security algorithm can be measured by many different indicators, such as the computing time on the device, the power consumption, and the strength of the anti-attack capability. In the continuous correction and testing of security algorithms, studying encryption and decryption on the sensors, nodes, gateways, and joint architecture described above will greatly benefit their improvement.
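Of the indicators above, computing time is the easiest to instrument. The following sketch times two standard-library hash functions over the same payload; on constrained hardware, the same harness would compare a lightweight algorithm against a stronger one. The payload size and round count are arbitrary:

```python
import hashlib
import time

# Sketch of one indicator named above (computing time): time two
# standard-library hash functions over the same payload. On constrained
# hardware the same harness would compare a lightweight algorithm
# against a higher-strength one. Sizes and counts are arbitrary.
def time_hash(algorithm, payload, rounds=2000):
    """Wall-clock seconds to hash the payload `rounds` times."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(algorithm, payload).digest()
    return time.perf_counter() - start

payload = b"x" * 4096
t_sha256 = time_hash("sha256", payload)
t_sha512 = time_hash("sha512", payload)
print(t_sha256 > 0 and t_sha512 > 0)  # True
```

Power consumption and attack resistance need dedicated instrumentation and cryptanalysis, but the timing harness shows the shape of the comparison.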
With the rapid development of the IoT and Internet technologies, a large amount of data is generated, and how to use such big data quickly, safely, and efficiently has become a pressing question. Improvements can be made in hardware-oriented algorithm optimization for sensors, improvement of security algorithms, measurement of fog node resource supply, and measurement of the degree of improvement of security algorithms, so as to achieve the required security level of fog computing. In addition, some scholars have proposed a fog-computing-enabled secure demand response scheme for the IoE (Internet of Energy) that uses consensus and access control encryption to prevent collusion attacks. 19
Application of fog computing
The fog computing architecture can include a variety of functions and components: a fog computing gateway that receives data collected from IoT devices; a variety of wired and wireless fine-grained acquisition endpoints, including ruggedized routers and other switching equipment; and client devices and gateways for accessing edge nodes. Higher-level fog computing architectures reach out to core networks and routers and eventually to global cloud services and server systems. The OpenFog Consortium's reference architecture group has proposed three goals for the development of the fog framework: the fog environment should be horizontally scalable, meaning it will support vertical applications across multiple industries; it should achieve coherent operation from cloud to things; and it is a system-level technology that will gradually extend from things and the network edge. Taking intelligent driving as an example, fog computing can complete computing tasks with strict timing requirements. Below, we present the application of fog computing in intelligent driving.
Intelligent driving demonstration application
The smart driving demonstration application is shown in Figure 4. In the traditional driving mode, the position of the vehicle is obtained from satellites through the GPS (global positioning system) sensor, and this information is sent to the cloud data center of the navigation software. After collecting the data, the data center computes the navigation information and sends it to the vehicle over the network. Because of network delay and security limitations, the information obtained this way is relatively coarse: it can only roughly show the driving path of the roads the vehicle passes through and cannot judge acceleration, deceleration, and avoidance behaviors in real time, so unmanned driving cannot be achieved under such conditions. Compared with the traditional mode, intelligent driving obtains the real-time situation of the road through sensing devices such as cameras and ultrasonic sensors, thereby achieving driverless operation more safely. The key issue in these scenarios is the need to quickly transfer, process, and verify the collected information.

Intelligent driving demonstration application.
In a specific fog computing application, the delay bottleneck is likely to occur in network transmission and task processing. In fog computing, network transmission mainly involves the networks from users to fog nodes, between fog nodes, and from fog nodes to cloud data centers. Since the fog node is responsible for processing delay-sensitive tasks, the transmission quality of the end user's wireless access network to the fog node is particularly critical. In smart driving, the vehicle's navigation information has little delay requirement and can therefore be handled by the remote cloud data center. However, avoiding other vehicles and pedestrians while driving places a high demand on delay, since even a slight delay may cause an accident; data collection and processing for these tasks must therefore be performed at a nearby fog node. However, the transmission speed and signal quality of current wireless networks still cannot meet such requirements. In the future, when fifth-generation (5G) technology matures and is commercialized, it can serve fog computing in applications such as smart driving that are extremely sensitive to delay, because 5G transmission is roughly 10 times faster than 4G (fourth generation). Combining fog computing with 5G technology is therefore an important future research direction. In task processing, existing resource allocation algorithms should be refined so that fog computing service providers can complete tasks more efficiently. Due to the limited computing power of the vehicle itself, tasks can be migrated to a fog node outside the vehicle for execution; however, both VM migration and container migration require a large amount of state to be moved, resulting in unacceptable delays. Research on virtualization technology for fog computing is still essentially blank.
In addition, because fog computing must deal with the actual network environment, environmental changes are more severe and vary over time and space. This needs to be solved by an adaptive algorithm, and presupposing the distribution is not recommended. Current adaptive algorithms are mainly implemented with reinforcement learning, but that method consumes a large amount of computing resources to achieve the adaptive effect, so its applicability in fog computing is questionable. Moreover, since the position of a vehicle varies greatly over time in smart driving, this change may also introduce additional transmission delays. Therefore, how to use an efficient, low-cost adaptive scheduling algorithm to select the appropriate fog node for migration, so that information users obtain data as soon as possible, is one of the urgent problems to be solved.
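A far lighter alternative to full reinforcement learning, sketched here purely for illustration, is an epsilon-greedy rule that usually picks the fog node with the best observed average latency and occasionally explores. The node names and latencies are invented:

```python
import random

# Illustrative epsilon-greedy fog-node selection: mostly pick the node
# with the best observed average latency, occasionally explore another
# node so the choice adapts as conditions (or the vehicle's position)
# change. Node names and latency figures are invented.
class NodeSelector:
    def __init__(self, nodes, epsilon=0.1):
        self.epsilon = epsilon
        self.avg = {n: 0.0 for n in nodes}   # running mean latency per node
        self.count = {n: 0 for n in nodes}

    def pick(self):
        if random.random() < self.epsilon or not any(self.count.values()):
            return random.choice(list(self.avg))  # explore
        return min(self.avg, key=self.avg.get)    # exploit best node

    def observe(self, node, latency_ms):
        self.count[node] += 1
        n = self.count[node]
        self.avg[node] += (latency_ms - self.avg[node]) / n  # running mean

random.seed(0)
sel = NodeSelector(["roadside_a", "roadside_b"])
true_latency = {"roadside_a": 12.0, "roadside_b": 40.0}
for _ in range(200):
    node = sel.pick()
    sel.observe(node, true_latency[node] + random.uniform(-2, 2))
print(min(sel.avg, key=sel.avg.get))  # roadside_a
```

Such a rule costs almost nothing per decision, which is exactly the efficiency constraint the text raises against heavier learning methods.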
Since fog nodes in intelligent driving are mainly deployed in open scenes, disastrous consequences will follow if criminals steal information or information is tampered with. 20 For example, if the rear car has decided to overtake but the front car does not receive that information in time, or receives an erroneous message and fails to yield in time, a traffic accident is likely. The key to this technology is therefore to ensure the integrity and availability of messages. Although the current literature has analyzed possible attack behaviors in fog computing, the delay and robustness of the proposed solutions in specific applications have not been tested. Efficient encryption and verification algorithms in fog computing will therefore also be a research focus.
Industrial Internet demonstration application
Current industrial production is primarily based on cloud computing. In practice, however, cloud server latency is too large, and many fine-grained operations cannot be completed on time (e.g. cutting micro-components). In addition, the amount of data collected by front-line enterprises keeps increasing, and the enterprise backbone network is also under strain. The industrial Internet demonstration application is shown in Figure 4. To address these issues, for latency-sensitive applications and big data feature extraction, organizations can deploy fog-layer computing and storage devices between end devices and cloud data centers. Beyond the requirements for delay and security, this also places higher requirements on the unity of the platform and the synergy between clouds.21,22 First, because the underlying components in an industrial production environment are complex, a unified control platform must be established to effectively cover and connect the cloud architecture, 23 allowing data collection from, and control of, these components. At the same time, software-defined fog node groups can effectively cover and connect the cloud platform, heterogeneous networks, and large-scale terminal equipment, forming standard API interfaces and specifications with cloud fusion capability. 24 However, current research on unified interfaces is still in its infancy, and research on the number and placement of controller deployments for managing networking, storage, and computing resources is still insufficient. 25 These are all issues to be solved in the future.
Second, in addition to serving delay-sensitive applications, fog nodes in industrial production environments must perform preliminary feature extraction on the data submitted to the cloud data center 26 in order to reduce the load on the enterprise backbone. This involves key technologies of cloud-fog convergence:27,28 which tasks the fog layer should submit to the cloud data center, when they should be submitted, to which server, and over which route, among other core issues.
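Fog-side feature extraction of the kind described above can be as simple as per-window summaries of raw sensor streams. The window size and feature choice below are illustrative assumptions:

```python
# Sketch of fog-side feature extraction: instead of forwarding every raw
# sample to the cloud data center, the fog node submits a small summary
# per window, reducing load on the enterprise backbone. Window size and
# feature choice are illustrative.
def extract_features(samples, window=50):
    """Summarize raw sensor samples as per-window (min, max, mean)."""
    summaries = []
    for i in range(0, len(samples), window):
        w = samples[i:i + window]
        summaries.append((min(w), max(w), sum(w) / len(w)))
    return summaries

raw = [float(i % 10) for i in range(200)]  # 200 raw sensor readings
features = extract_features(raw)
print(len(raw), "->", len(features))       # 200 -> 4
```

Here 200 raw values shrink to 4 tuples before crossing the backbone; which summaries are safe to compute at the fog layer is exactly the task-partitioning question the convergence research must answer.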
Prospects and conclusion
Fog computing is still in the early stage of official deployment, but various application scenarios are already considered ideal fits for it. The following outlines the development of the combination of fog computing and the IoT in several areas:
Connected cars: the emergence of semi-automatic and self-driving vehicles will only result in more and more data being generated by the vehicle. Independent driving of a car requires the ability to analyze certain data locally, such as the environment, driving conditions, and direction. Other data may need to be sent back to the manufacturer to help improve vehicle maintenance or track vehicle usage. The fog computing environment supports communication of all of these data sources in the edge (vehicle) and communication with the terminal (manufacturer).
Smart City and Smart Grid: like connected cars, power systems increasingly use real-time data to run more efficiently. Sometimes these data originate in remote areas, and processing them requires being close to where they were generated; at other times, data from a large number of sensors need to be brought together. A fog computing architecture can be designed to solve both problems at the same time.
Real-time analysis: from manufacturing systems that respond when events occur, to financial institutions that use real-time data to inform transaction decisions or monitor fraud, a large number of application scenarios require real-time analysis. The fog computing deployment helps to transfer data between the data creation location and the run location.
Research on IoT security system based on fog computing has important theoretical research significance. At the same time, it has a very wide application prospect in the field of data traceability and integrity of IoT system, identity authentication, credibility management, confidentiality, and privacy protection, and also has very important engineering value. The improvement of the IoT security system can win more trust from users in the IoT system and is of great significance to the development and promotion of the IoT itself.
