Introduction
Cloud computing1 is a type of Internet-based computing that enables on-demand network access to computing resources (e.g., networks, servers, storage, and applications) from anywhere (Figure 1). Users access the cloud over the Internet and receive the services they need. Input and output occur mainly on personal devices, while the actual data are processed and stored in the cloud.

Cloud computing concept map.
Recently, the Internet of things (IoT) market built on the cloud has been growing.2,3 IoT refers to a technology in which devices are connected to the Internet to exchange information between people and things, and among things themselves. The cloud is a good platform for IoT. As the IoT market grows, large amounts of data will be generated by IoT devices. However, building and operating a dedicated platform for analyzing and storing such large volumes of data is inefficient because of the cost of constructing and maintaining it. It is therefore more effective to rent computing resources through a cloud service.
Despite these advantages, however, the combination of the cloud and IoT4 can cause many problems. It has been predicted that approximately 50 billion IoT devices will be connected to the Internet by 2020. If an exponentially growing number of IoT devices are connected to the Internet and send large volumes of data to the cloud, there will be tremendous traffic across the network. The time and expense of consuming large amounts of bandwidth to reach the cloud, and of storing and analyzing that data, will also grow exponentially. Moreover, as new services are created, the number of services that require real-time processing will increase. However, because the cloud is physically remote, there is a limit to its ability to provide these real-time services.
To overcome these problems, a new paradigm called fog computing has emerged.5–7 Fog computing is a computing model proposed to handle the massive traffic and real-time services that result from combining IoT and cloud computing. Its core concept is that some services in the cloud are handled by network devices (routers, switches, hubs, and WiFi routers) at the edge of the network, as shown in Figure 2. Creating a fog layer between the cloud and the endpoints allows real-time services to be provided by processing service requests close to where the data are generated, drastically reducing the amount of data delivered to the cloud. However, fog computing raises several considerations of its own.

Fog computing layer.
First, fog computing handles services on network devices close to the IoT devices. However, network devices have far fewer computing resources than the cloud, making it difficult to handle many services on a single network device. Fog computing is therefore suited to locally distributed, small-capacity processing rather than centralized large-capacity processing. Hence, each service must be deployed in a proper location that takes the computing resources of the network equipment into account.
Second, the service response time drops sharply when a service is processed in a fog computing environment rather than in the cloud. Even within a fog computing environment, however, the response time varies depending on where the service is processed. In addition, handling services at locations close to the IoT devices reduces traffic on the network. Research on fog computing architectures and fog computing platforms has been active, but there is a lack of research on where to deploy services.
Third, it is necessary to consider each service's requirements for the various computing resources. For example, a service that collects and pre-processes data from IoT devices has relatively low resource requirements. Services that perform image processing or execute complex algorithms use more central processing unit (CPU) time and memory because of their greater computational complexity. Cache servers and database servers, in turn, require relatively little CPU and memory but a relatively large amount of storage space. When deploying such services, placing only services that heavily use one resource on a single network device can leave that device's other resources wasted.
Fourth, the CPU utilization of cloud servers varies considerably by day of the week and by hour, as shown in Figure 3. Because fog computing also handles cloud services on the network, keeping services at fixed locations can lead to inefficient results. It is therefore necessary to relocate services periodically according to the service requests of each time period.

Host server CPU utilization of the Amazon EC2 cloud.
In this article, we propose a practical service deployment method that addresses the problems above: it provides faster service response times while utilizing the computing resources of network equipment as efficiently as possible, so that more services can be processed in a fog computing environment.
This technique makes the following contributions:
First, by assigning each service to a location close to the users of that service, it provides fast service response times and reduces network traffic.
Second, it efficiently utilizes the resources of the network devices through a packing algorithm that uses as much of each device's computing resources as possible.
Third, it can periodically relocate services according to the changing service environment of each time period, thereby continuously providing fast service response times in a changing environment.
Related work
Research into deploying virtual machines in the cloud through vector bin packing algorithms has been active. Lei et al.8 reduced power consumption in data centers by minimizing the number of physical machines on which virtual machines are deployed using vector bin packing algorithms. They also compared various packing algorithms and studied which are efficient. In a fog computing environment, however, fog devices have relatively few computing resources, so storage must be considered in addition to CPU and memory; a cache server, for example, requires little CPU and memory but a large amount of storage, so all three resources (CPU, memory, and storage) must be taken into account. Song et al.9 studied the deployment of virtual machines in the cloud by extending the bin packing problem to multiple dimensions. They defined the variable item size bin packing (VISBP) algorithm, extended it to three dimensions, and compared it with various algorithms.
However, deploying a virtual machine in the cloud differs from deploying a fog server in a fog computing environment. In the cloud, the number of physical machines is kept to a minimum while virtual machines are deployed. In fog computing, by contrast, services run on network devices that must continue operating regardless of how services are placed, so the goal is to deploy services on the most efficient devices rather than to reduce power consumption by minimizing the number of devices used. Therefore, in this article, we propose an algorithm that assigns services to the fog devices closest to their users and uses as much of the fog devices' computing resources as possible through a packing algorithm.
In research related to fog computing, studies have addressed service implementation on fog computing architectures, fog device platforms for deploying and executing fog servers, and fog application migration techniques based on client patterns. However, there is a lack of research on which fog device each fog server should be deployed on.
Hu et al.10 implemented a face recognition system in a fog computing environment. Images were processed on the fog node adjacent to the user, and the extracted face identifier was transmitted to the cloud. Distributing computation from the cloud to the fog nodes improved overall processing efficiency and reduced network transmission. However, the fog node adjacent to the user was simply assumed; no consideration was given to which fog node should actually host the service.
Xu et al.11 proposed a framework that automates fog server deployment by analyzing packets for service requests and obtaining the necessary applications from a repository. This study proposed a fog device platform that supports various services, but it does not consider computing resources, and it deploys a fog server for only a single user connected to the fog device.
Bellavista and Zanni12 built real fog middleware using the message queuing telemetry transport (MQTT) protocol in a fog computing environment. Because services are packaged as Docker containers, the middleware offers good interoperability and portability of services; it can run multiple containers on resource-limited nodes such as a Raspberry Pi and scales well. However, while that article proposes a fog computing platform for deploying services, the question of where to deploy each service is left as future work.
Mahmud et al.13 proposed a latency-aware application module management policy that targets both the deadline-based quality of service (QoS) of applications and resource optimization. The proposed policy satisfies the QoS deadlines of service applications and investigates how to minimize resource usage without violating application QoS. They explain that if the application modules of a particular fog node are relocated to other fog nodes, that node can be turned off. In general, however, services in a fog computing environment run on network devices, which cannot be turned off even when they host no application modules, because they still perform their networking functions. Moreover, moving application modules to reduce the number of operating fog nodes lengthens the service delay. In this article, we propose a fog server deployment technique that keeps latency low for all services, considering all fog nodes.
Fog server deployment technique
In this section, we propose a fog server deployment technique to determine where to deploy services. In section "Service flow," we introduce the concept of a service flow for expressing a specific service. In section "Vector bin packing model," we introduce the vector bin packing model for deploying services with computing resources in mind. In section "Fog server deployment technique," we describe the proposed technique, which consists of two processes. First, an algorithm for determining the fog server deployment location allocates each service to the location nearest the devices constituting the service, to provide fast service response times. Second, a vector bin packing algorithm efficiently deploys services, considering computing resources, when services are concentrated at a specific location.
We define two terms to account for the location of the network device that processes a service. First, a network device that can process services is defined as a fog device. When a fog device is selected through the algorithm and the service actually runs on that device, the running service is called a fog server. The fog server receives or fetches data from the data sources, processes the service, and returns the result to the user.
Service flow
From this point on, fog server deployment is based on the concept of service flow. A service flow is formed between clients and data sources for processing any service that clients request. In addition, when a service is processed based on data collected from IoT devices, a service flow is formed between the IoT devices. As a result, a service flow is formed between devices associated with a service.
A service flow also records the CPU, memory, and storage used to process the service. In the network topology of Figure 4, given the service flows in Table 1, f1 is a service flow among devices 5, 7, 10, 16, 17, and 19 that uses 15 units of CPU, 18 of memory, and 22 of storage.

Network topology.
Example service flows.
CPU: central processing unit.
More specifically, in service flow f1, clients 5 and 17 request a service whose data reside at devices 7, 16, and 19, and processing the service uses 15 units of CPU, 18 of memory, and 22 of storage.
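As a concrete illustration, a service flow can be represented as the set of participating device IDs plus its resource requirements. The class and field names below are a hypothetical sketch, not the article's implementation:

```python
from dataclasses import dataclass

@dataclass
class ServiceFlow:
    """A service flow: the devices it spans and the resources it needs."""
    name: str
    devices: tuple   # IDs of clients, data sources, and IoT devices
    cpu: int
    memory: int
    storage: int

    def resources(self):
        """Return the (CPU, memory, storage) requirement vector."""
        return (self.cpu, self.memory, self.storage)

# f1 from the example: devices 5, 7, 10, 16, 17, 19 with resources (15, 18, 22)
f1 = ServiceFlow("f1", (5, 7, 10, 16, 17, 19), 15, 18, 22)
print(f1.resources())  # (15, 18, 22)
```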
Vector bin packing model
Vector bin packing is a widely used model for virtual machine deployment algorithms in the cloud. As shown in Figure 5, vector bin packing can pack services such that the sum of each computing resource does not exceed the total capacity. Thus, the vector bin packing model is suitable for deploying services with varying requirements to servers of known total capacity.

One-dimensional and two-dimensional vector bin packing.
The vector bin packing model is extended to three dimensions as shown in Figure 6. The packing problem can be modeled by filling the fog servers corresponding to each service flow, considering the x-axis as the CPU, the y-axis as the memory, and the z-axis as the storage.

Three-dimensional vector bin packing.
By filling the CPU, memory, and storage diagonally without overlapping, fog servers can be deployed without exceeding the total computing resources.
However, unlike the one-dimensional packing problem, the popular first-fit decreasing (FFD) algorithm cannot be applied directly, because three dimensions must be considered: CPU, memory, and storage. Therefore, we propose a vector bin packing algorithm that uses a fog device's computing resources as efficiently as possible.
Fog server deployment technique
The algorithm for the fog server deployment technique is shown as Algorithm 1. The fog server deployment technique proposed in this article is divided into two processes: an algorithm for determining the fog server deployment location and a vector bin packing algorithm.
Algorithm for determining fog server deployment location
Through line 18, the algorithm determines the location of the fog server by assigning it to the fog device for which the sum of the distances from the devices in the service flow is minimal. A flowchart for the algorithm to determine the fog server deployment location is shown in Figure 7.

Flowchart of algorithm to determine fog server deployment location.
The algorithm up to line 18 is described through a simple example: given the service flows of Table 2 in the topology of Figure 4, we determine the fog device on which to deploy each fog server. First, the shortest distances from the service flows to each fog device are calculated. The shortest distance from each device {1, 2, 3, … , 26, 27} to each fog device in Figure 4 is shown in Table 3.
20 service flows.
CPU: central processing unit.
The shortest distance from each device to the fog devices.
Table 3 shows the shortest distance from each device to each fog device, computed by Dijkstra's algorithm. Based on Table 3, the sum of the shortest distances from the devices in each service flow to each fog device is calculated, as shown in Table 4. For example, service flow f1 includes devices 16 and 24. Since the sum of the shortest distances from 16 and 24 to fog device A is 4 + 5 = 9, the distance from f1 to fog device A is 9. In the same way, distanceTable (Table 4) is generated by calculating the distance from every service flow to every fog device.
Shortest distances from service flows to each fog device.
Shaded cells indicate the fog devices closest to each service flow.
In Table 4, the shaded cells are the fog devices closest to each service flow. To determine the fog server deployment positions, the service flows in distanceTable are sorted in ascending order by the number of closest fog devices. The number of closest fog devices is 1 for f6–f20, 2 for f7 and f11, … , and 5 for f2, so the flows are sorted in ascending order as shown in Table 5.
Service flows that have been aligned and assigned.
Shaded cells indicate the fog devices closest to each service flow.
Slashed cells indicate the number of fog devices closest to the service flow.
When the order of the service flows is determined as shown in Table 5, the assignment starts from the service flow having the smallest number of fog devices that have the closest distance.
Because each of f6 through f20 has only one closest fog device, its fog server is assigned to that fog device. From f7 onward, each flow is assigned to the closest fog device with the fewest service flows assigned in the previous steps. Because fog device E has one assigned service flow and H has three, f7 is assigned to E. If the numbers of assigned service flows are equal, the fog server is assigned to the first such fog device. After the deployment location determination completes in this manner, the flows are finally assigned as shown in Tables 5 and 6.
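The assignment step just described (sort flows ascending by how many tied-closest fog devices they have, then assign each flow to its closest, least-loaded device, first device on ties) can be sketched as follows. The function name and the small distance table are illustrative, not the article's implementation:

```python
def assign_flows(distance_table, fog_devices):
    """Assign each service flow to its nearest fog device, breaking ties
    by the fog device with the fewest flows assigned so far."""
    # Fog devices at minimum distance for each flow
    nearest = {}
    for flow, dists in distance_table.items():
        m = min(dists.values())
        nearest[flow] = [f for f in fog_devices if dists[f] == m]

    # Flows with fewer tied-closest devices are assigned first
    order = sorted(distance_table, key=lambda fl: len(nearest[fl]))

    load = {f: 0 for f in fog_devices}
    assignment = {}
    for flow in order:
        # Among the closest fog devices, pick the least-loaded (first on tie)
        best = min(nearest[flow], key=lambda f: load[f])
        assignment[flow] = best
        load[best] += 1
    return assignment

table = {
    "f1": {"A": 9, "B": 4, "C": 4},  # two tied-closest devices: B, C
    "f2": {"A": 3, "B": 7, "C": 8},  # one closest device: A
    "f3": {"A": 5, "B": 2, "C": 6},  # one closest device: B
}
print(assign_flows(table, ["A", "B", "C"]))
# f2 -> A and f3 -> B are assigned first; f1 then goes to the
# less-loaded of its tied-closest devices, C.
```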
Location of service flows.
CPU: central processing unit.
The service response time and the network traffic can be reduced by assigning the fog server to the fog device closest to the service flow. In addition, a fog device with fewer assigned service flows is selected so that service flows are not concentrated on one fog device.
The algorithm for determining the fog server deployment location computes the distances from each end device to each fog device with Dijkstra's algorithm and sorts the distanceTable entries from the service flows to each fog device in ascending order, so its running time is dominated by the shortest-path computations and the sort.
However, even with this assignment, many service flows may be concentrated on a single device, as on fog device C. In this case, some service flows must be reassigned to the next-closest fog device to prevent overload.
From line 19 of the algorithm onward, we propose a vector bin packing algorithm that considers the direction of the computing resource vector, in order to use each fog device's computing resources as efficiently as possible.
Vector bin packing algorithm
The flowchart for the vector bin packing algorithm is shown in Figure 8. The core of the algorithm is to deploy the service flow whose resource vector is most similar in direction to the current computing resource vector. CPU, memory, and storage are treated as a single three-dimensional (3D) vector, and the service flow closest in direction to the resource vector remaining on the fog device is selected and deployed.

Flowchart of vector bin packing algorithm.
First, the angles between the fog device's remaining computing resource vector and the resource vectors of the assigned service flows are calculated, and the service flow with the smallest angular difference is deployed on the fog device. Once a service flow is deployed, the same step is repeated while another service flow still fits in the remaining computing resources.
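The angle comparison at the heart of this step can be sketched as follows: (CPU, memory, storage) is treated as a 3D vector, and the candidate flow with the smallest angle to the device's remaining resource vector is chosen. The helper names are illustrative, and the candidate values echo the fog device C example:

```python
import math

def angle(v1, v2):
    """Angle in radians between two 3D resource vectors (CPU, mem, storage)."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def pick_flow(remaining, candidates):
    """Pick the (name, vector) flow closest in direction to the fog
    device's remaining (CPU, memory, storage) vector."""
    return min(candidates, key=lambda f: angle(f[1], remaining))

remaining = (100, 100, 100)
candidates = [("f20", (26, 26, 30)), ("f17", (27, 23, 22)), ("f9", (40, 10, 5))]
print(pick_flow(remaining, candidates))
# f20's vector is nearly proportional to (100, 100, 100), so it is chosen.
```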
If no more computing resources are available for the remaining service flows, those flows are assigned to the next-nearest fog device (Table 5). This process continues while the sum of the computing resources of the assigned service flows exceeds the computing resources of the fog device. Once the service flows are assigned so as not to exceed the computing resources of any fog device, the fog server deployment is complete. Then, the software-defined networking (SDN) controller modifies the flow tables of the network devices so that the devices in each service flow and the fog server can communicate with each other.
In Table 6, all the assigned service flows can be deployed in all fog devices except for fog device C. However, since the sum of the computing resources required by the service flows assigned to the fog device C exceeds the resources (100, 100, 100) of the fog device, vector bin packing is performed in the fog device C.
Since the total computing resources of fog device C are (100, 100, 100), service flow f20 (26, 26, 30), whose vector has the closest angle, is deployed first. Thereafter, f17 (27, 23, 22), whose angle is closest to the remaining resources (74, 74, 70), is deployed. Similarly, f9 and f10 are deployed.
After these four service flows are deployed, the remaining f8 and f13 are assigned to the next-nearest fog devices: f8 to fog device D, and f13 to fog device A (see Table 5). The algorithm then terminates because fog devices D and A can accommodate both the already deployed and the newly assigned service flows.
If the newly assigned fog device cannot accept service flows, the vector bin packing algorithm is performed once again and repeats the same operation.
The vector bin packing algorithm runs in time quadratic in the number of service flows, since each placement compares the remaining resource vector against every unplaced flow.
To see how few fog devices the vector bin packing algorithm needs to accommodate the service flows, we compare it with the first-fit (FF) algorithm, which deploys a service flow on the next fog device when no resources are available. There are 40 service flows, as shown in Table 7; their CPU, memory, and storage requirements were randomly generated from 1 to 50.
Computing resources by service flow—random.
CPU: central processing unit.
Figure 9 shows the number of fog devices needed when 1 to 40 service flows, as shown in Table 7, are deployed. The capacity of the CPU, memory, and storage of the fog devices was assumed to be 100. With the FF algorithm, 16 fog devices were needed to accommodate 40 service flows, because if a fog device has no resources, then the service flow will be deployed on the next device. However, with the proposed vector bin packing algorithm, only 14 fog devices were needed to accommodate the entire service flow.

Number of fog devices required based on number of service flows.
In particular, the vector bin packing algorithm is more effective when there are many service flows that use more specific resources. Table 8 shows 13 CPU-intensive service flows, 13 memory-intensive service flows, and 14 storage-intensive service flows.
Computing resources by service flow—using more specific resources.
CPU: central processing unit.
Figure 10 shows the number of fog devices needed when 1 to 40 service flows, as shown in Table 8, are deployed. The number of fog devices required is reduced from 20 to 14 compared to the FF algorithm because when CPU-intensive service flows are deployed, the remaining space accommodates memory and storage-intensive service flows.

Number of fog devices required based on number of service flows.
In practice, there are many services that require relatively specific resources rather than uniformly using all resources. The proposed algorithm makes it possible to efficiently use the computing resources of all the fog devices considering the resource requirement of the services in the fog computing environment.
In this manner, each service is assigned to a location that can provide the minimum response time while using the minimum number of fog devices. The vector bin packing algorithm packs service flows so that their combined computing resources do not exceed the total computing resources of the fog device, using as much of the device's resources as possible. By assigning unpacked service flows to the next-closest fog device and packing again, as many service flows as possible are assigned to the nearest fog device, which yields the following effects.
First, the response time of services can be reduced by assigning as many service flows as possible to the nearest fog device.
Second, the route to service processing is also reduced, so traffic on the entire network can be reduced.
Third, many service flows can be processed in the fog computing environment by packing the service flows to use as much of the computing resources of the fog devices as possible.
Fourth, by reassigning fog servers periodically according to the service environment of each time period, it is possible to continuously provide fast service response times in a changing environment.
Experimental results
Experiment environment
In this section, we evaluate the performance of the fog server deployment technique described in section "Fog server deployment technique" through simulation. The network environment used in the experiment is shown in Figure 11. There are 12 fog devices, A to L, in the fog computing environment. In addition, assuming a typical situation, two clients and one data source are adjacent to the outer fog devices.

Network topology.
Evaluation of fog server deployment technique
To evaluate the fog server deployment technique, we compare the results of two algorithms for each case in which 30 service flows are assigned, through the algorithm for determining the fog server deployment location, either to random fog devices or to specific fog devices.
The first is a sequential algorithm: if a fog device has enough computing resources left to host a fog server, the fog server is assigned to it; otherwise, it is assigned to the next fog device. The second is the vector bin packing algorithm proposed in this article.
The computing resource capacity of all fog devices is assumed to be 100 for CPU, memory, and storage, and the computing resources required to process the service flow are set to random values between 1 and 50 for CPU, memory, and storage.
Random service flow environment
First, the service flow environment of Table 9 is used to evaluate the case where 30 service flows are assigned to random fog devices. In the service flow environment shown in Table 9, each service flow is assigned to a fog device through an algorithm for determining fog server deployment location and is assigned as shown in Figure 12.
30 random service flow environments.
CPU: central processing unit.

The service flows assigned through the algorithm for determining the fog server deployment location.
For fog devices C, E, F, and K, the sum of the computing resource requirements of the assigned service flows exceeds the fog device capacity, so the service flows are packed through the vector bin packing algorithm. Unassigned service flows are assigned to the next-nearest fog device. This process is repeated until the sum of the computing resources of the assigned service flows does not exceed the resources of any fog device. The completed deployment is shown in Figure 13.

Service flows deployed through vector bin packing algorithm.
Figure 14 shows the final deployment locations when the sequential algorithm is used. The sum of the distances from all service flows to their fog devices is 548 for both the vector bin packing algorithm and the sequential algorithm. Because the service flows are spread evenly in a random service flow environment, most flows can remain on a high-priority fog device, so the distance sums are similar.

Service flows deployed through a sequential algorithm.
Table 10 shows the distance from the service flows in Table 9 to the fog devices. The shaded area is the location where the service flows are finally deployed. The sum of the distances from all the service flows to the fog devices can go up to 772 in the worst case. However, since the algorithm for determining fog server deployment location first assigns the service flows at the shortest distance, the distance is shorter than when randomly deployed. Even after the vector bin packing algorithm is performed, a faster response time can be obtained because service flows are deployed in fog devices which are relatively close to the devices forming the service flow.
Shortest distances from service flows to each fog device.
Service flows are finally deployed in the shaded location.
Figure 15 shows the distribution of computing resources when deployed through the vector bin packing algorithm, and Figure 16 shows the distribution of computing resources when deployed sequentially. For a more numerical representation, the standard deviations for three computing resources of all fog devices are shown in Table 11.
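The per-resource standard deviations reported in Table 11 can be computed as one population standard deviation per resource dimension across the fog devices; the usage values below are made up for illustration:

```python
from statistics import pstdev

# Per-fog-device resource usage after deployment (hypothetical values):
# each row is the (cpu, memory, storage) consumed on one fog device.
usage = [(80, 75, 90), (60, 85, 70), (95, 65, 80), (70, 90, 60)]

# One standard deviation per resource dimension, as in Table 11.
cpu, mem, sto = zip(*usage)
for name, vals in (("CPU", cpu), ("memory", mem), ("storage", sto)):
    print(f"{name}: {pstdev(vals):.2f}")
```

A smaller standard deviation means the three resources are consumed more evenly across devices, which is the effect the vector bin packing algorithm aims for.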

Distribution of computing resources when deployed through a vector bin packing algorithm.

Distribution of computing resources when deployed through a sequential algorithm.
Computing resource standard deviations for the results deployed through the two algorithms.
VBP: vector bin packing.
The environment in which service flows are assigned to specific fog devices
In the situation where service flows are randomly generated, little packing occurs because the flows are initially assigned uniformly, so there is not a large difference in standard deviation between the sequential deployment method and the vector bin packing method. Nevertheless, because packing tries to use the three resources as efficiently as possible, the standard deviation is smaller when the vector bin packing algorithm is used.
In the next experiment, fog servers are deployed in a situation where service flows are assigned to four fog devices through an algorithm for determining fog server deployment locations. The computing resource usage of the service flows was randomly set to a value between 1 and 50.
When assigned through an algorithm for determining fog server deployment location, service flows were assigned to four fog devices as shown in Figure 17. Then, when the fog servers were deployed using the vector bin packing algorithm and the sequential method, the service flows were finally deployed as shown in Figures 18 and 19.

The service flows assigned through the algorithm for determining the fog server deployment location.

Service flows deployed through the vector bin packing algorithm.

Service flows deployed through a sequential algorithm.
The sum of the shortest distances from all service flows to their fog devices after deployment was 546 for both vector bin packing and sequential deployment. The sums were the same, even though the service flows were initially concentrated on four fog devices, because service flow 11 was not deployed on any fog device under sequential deployment: its inefficient use of computing resources meant that not all service flows could be placed. In contrast, the vector bin packing algorithm packs service flows to use the fog devices' computing resources as efficiently as possible, so all 30 service flows were deployed and more service flows can be accommodated.
Table 12 shows the standard deviation of the computing resources for both deployment methods. With the proposed vector bin packing algorithm, the standard deviation is as small as 7.71. The difference is greater than in the previous random service flow environment because more packing occurs.
Computing resource standard deviations for the results deployed through the two algorithms.
VBP: vector bin packing.
An environment in which service flows are assigned to one fog device
Finally, we conducted experiments in an environment where the same devices use all 30 services. The service flows were initially assigned to fog device I by the algorithm for determining the fog server deployment location.
When 30 service flows are assigned to fog device I, the deployment is completed by two algorithms, as shown in Figures 20 and 21.

Service flows deployed through the vector bin packing algorithm.

Service flows deployed through the sequential algorithm.
When the same devices used 30 services, all the service flows were assigned to one fog device by the algorithm that determines the fog server deployment location, and the service flows remaining after packing were all reassigned to the nearest fog device. In this case, the effect of the vector bin packing algorithm proposed in this article was greatest.
With sequential deployment, service flows were deployed on all of the fog devices; with the vector bin packing algorithm, no fog servers were deployed on fog devices A or B because the computing resources of the other fog devices were used efficiently. As a result, more service flows could be accommodated. The sum of the shortest distances from all service flows to their fog devices after deployment was 611 with the vector bin packing algorithm and 643 with sequential deployment. Moreover, as shown in Table 13, the standard deviation of the computing resources of each fog device also decreased by an average of 8.65.
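The distance figures above can be computed as follows; `dist[i][d]` is assumed to be the precomputed shortest distance from service flow i's devices to fog device d (both the interface and the names are assumptions).

```python
def total_distance(placement, dist):
    """Sum of shortest distances from each deployed service flow to the
    fog device that hosts it.

    placement: list of flow-index lists, one per fog device.
    dist: dist[i][d] is flow i's shortest distance to fog device d.
    """
    return sum(dist[i][d]
               for d, flows in enumerate(placement)
               for i in flows)
```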
Computing resource standard deviations for the results deployed through the two algorithms.
VBP: vector bin packing.
Experiment in general environment
In general, however, many users use various services, so service flows are not concentrated on a specific fog device. We therefore conducted experiments in arbitrary service flow environments to account for more general situations.
Experiments were conducted 100 times for each case, varying the number of service flows present in the network environment, and the standard deviation and usage of the computing resources of the fog devices were measured. The device corresponding to each service flow was randomly chosen from 1 to 27 (Figure 11), and the computing resources were randomly set between 1 and 50.
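A repeatable generator for the random test cases described above might look like the sketch below, which reduces each flow to a (device id, resource demand) pair; the function name, defaults, and fixed seed are assumptions.

```python
import random

def random_flows(n, n_devices=27, max_resource=50, seed=0):
    """Generate n random service flows as in the experiment: a device id
    drawn from 1..27 and a computing-resource demand drawn from 1..50."""
    rng = random.Random(seed)  # fixed seed keeps trials repeatable
    return [(rng.randint(1, n_devices), rng.randint(1, max_resource))
            for _ in range(n)]
```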
Figure 22 shows the average standard deviation of the computing resource usage of the fog devices after fog server deployment. The standard deviation is lower with the vector bin packing algorithm proposed in this article than with sequential deployment because the computing resources of the fog devices are used more uniformly.

Standard deviation of computing resources after fog server deployment.
Figure 23 shows the total computing resource usage of the fog devices after fog server deployment. When the number of service flows in the network is small, the usage is the same for both methods because all the service flows can be deployed on the fog devices.

Sum of computing resource usage of fog devices after deploying fog server.
As the number of service flows grows, however, more computing resources are consumed with the vector bin packing algorithm. This is because the algorithm considers the remaining amount of computing resources when placing flows, so more service flows can be deployed in the network and the computing resources of all the fog devices are used as efficiently as possible.
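For contrast, the sequential baseline used in these comparisons can be sketched as a plain first-fit pass in arrival order, with no demand-aware reordering; the names are assumptions for illustration.

```python
def pack_sequential(demands, capacities):
    """Sequential baseline: place each service flow, in arrival order, on
    the first fog device that still has room; no reordering by demand.

    demands: list of (cpu, mem, storage) tuples, one per service flow.
    capacities: list of per-fog-device capacity tuples.
    Returns (placement, unpacked) as flow-index lists.
    """
    remaining = [list(c) for c in capacities]
    placement = [[] for _ in capacities]
    unpacked = []
    for i, demand in enumerate(demands):
        for d, rem in enumerate(remaining):
            if all(r >= q for r, q in zip(rem, demand)):
                remaining[d] = [r - q for r, q in zip(rem, demand)]
                placement[d].append(i)
                break
        else:
            unpacked.append(i)
    return placement, unpacked
```

Because flows are taken in arrival order, a large early flow can fragment capacity and strand later flows, which is the inefficiency the vector bin packing comparison exposes.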
Conclusion and future work
Conclusion
In this article, we proposed an algorithm for determining the fog server deployment location and a vector bin packing algorithm for deploying various services in a fog computing environment. When clients, data sources, and IoT devices request a service or provide the data needed to process it, the devices and the computing resources (CPU, memory, and storage) required to process the service are defined as one service flow. Each service flow was assigned to the closest fog device for processing. When the sum of the computing resources of the assigned service flows exceeded the total computing resources of a fog device, the service flows were packed through the vector bin packing algorithm to utilize as much of the fog device's computing resources as possible. Unpacked service flows were assigned to the next-nearest fog device and packed again.
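The two-stage procedure summarized above, assigning each service flow to its nearest fog device and falling back to the next-nearest when capacity runs out, can be sketched as a single greedy pass. This is a simplified illustration with hypothetical names, not the article's implementation.

```python
def deploy(demands, capacities, dist):
    """Greedy sketch of the overall deployment: try fog devices in order of
    increasing distance from each service flow, placing the flow on the
    first one with enough remaining computing resources.

    demands: list of (cpu, mem, storage) tuples, one per service flow.
    capacities: list of per-fog-device capacity tuples.
    dist: dist[i][d] is flow i's shortest distance to fog device d.
    Returns placement[d], the flow indices hosted by fog device d.
    """
    remaining = [list(c) for c in capacities]
    placement = [[] for _ in capacities]
    for i, demand in enumerate(demands):
        # Nearest device first; fall back to the next-nearest on overflow.
        for d in sorted(range(len(capacities)), key=lambda d: dist[i][d]):
            if all(r >= q for r, q in zip(remaining[d], demand)):
                remaining[d] = [r - q for r, q in zip(remaining[d], demand)]
                placement[d].append(i)
                break
    return placement
```

Keeping each flow as close to its nearest fog device as capacity allows is what shortens the distance data must travel, which in turn lowers response time and network traffic.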
Efficient use of computing resources thus makes it possible to operate as many services as possible in a fog computing environment. In addition, by keeping the distance from each service flow to its fog device as short as possible, a faster response time can be obtained, and network traffic is also reduced because the distance over which the data travel is shortened.
Experiments showed that the sum of the distances from service flows to fog devices was reduced and that computing resources were used much more efficiently. We also confirmed that the proposed vector bin packing algorithm became more effective as more service flows were assigned to the fog devices and more packing operations occurred. The algorithm proposed in this article therefore packs many service flows into a fog computing environment efficiently, so that many service flows can operate.
Future work
In this study, we implemented an offline algorithm for the current service flow environment and periodically monitored that environment to relocate the fog servers. However, an online algorithm that reacts to changes in the service flow environment in real time (deleting a fog server when its service terminates and deploying a new service in the freed space) would make fog server deployment more efficient. In addition, communication with the cloud, such as sending data from the fog computing environment to the cloud or receiving data from the cloud to process a service, is another factor to consider in fog server deployment.
Future work will focus on communication with the cloud and on fog server deployment and migration techniques tailored to real-time service flow environments using online algorithms.
