Abstract
Introduction
Digital signage (DS)1,2 refers to a fixed display device installed indoors or outdoors, or a mobile display device mounted on various forms of transportation. A DS shows various information such as advertisements, news, living information, and disaster information. The contents displayed on a DS client can be remotely controlled through a network.3 Moreover, DS systems have evolved from a simple display function to a bidirectional communication medium through convergence with various information technology (IT) frameworks. In the past, advertisements were the main displayed contents, but recently, a DS can also display various contents such as traffic, sightseeing, local, and general living information.4
Currently, DS technology development can be broadly divided into contents, platform, network, and display. Technologies such as interworking with mobile devices, multi-sensor technology, and large, high-definition screens are typical examples. In particular, service contents are shifting from fixed contents to dynamic contents.
Recently, the interactive multimodal DS system5–9 has been introduced as a new model which provides dynamic on-demand contents according to context or user requirements, based on sensing data obtained from surrounding wireless sensor networks.10,11 Various sensors and Internet of things (IoT) devices can be used to interact with and implement smart features for an interactive DS system, such as microphone arrays, video cameras, depth,5,6 temperature, motion, and light sensors,7–9 and even the orientation and acceleration sensors in users' mobile devices.7 Sensor-based interactive DS systems are becoming a common type of human–machine interface and are promising for deployment in various places across industrial sectors such as healthcare, smart cities, entertainment, intelligent transport systems, and even the retail industry. For example, smart DS systems can be deployed at various places for remote interaction between a hospital and clients for health checks and health advice;12 deployed at shopping centers to assist customers in shopping, finding products, and automated checkout, and to display relevant advertisements based on context;9,13 or deployed around a city to provide visitors with context-aware information about transportation, weather sensing data, or other recommendations.6
In the existing architecture, the server manages all clients, IoT devices, and all the contents of the DS clients, and plays the role of delivering the playback schedule. The client gets schedules and plays contents received from the server. The contents are stored on the server and transmitted to clients on demand. With the development of IoT, more and more features will be added to interactive DS systems, which are increasingly popular.
In addition, an efficient software provisioning (i.e. deployment) method between the server and the client has also been considered, since each feature requires its own service application.14
An increasingly high number of DS clients and sensor devices attached to the system generates a huge amount of traffic flowing to the server, which may create bottleneck, management, and scalability issues at the server, especially in large-scale DS systems for smart cities, e-health, or intelligent transport systems. Meanwhile, real-time interaction is an important requirement of human–machine interfaces, including the interactive DS system.
In addition, the current system architecture requires application installation and configuration at the client side. With an increasingly high number of clients as well as additional features, the current architecture may incur high cost and complexity of deployment as well as management. In some cases, if resources at the DS clients are limited, services may be difficult to deploy.
In this article, we propose and implement a container-based distributed virtual client architecture to solve the above issues. In the proposed architecture, a number of DS clients and IoT devices are virtualized and managed by container-based virtual client middleware. Each container-based virtual client middleware is responsible for managing and processing data locally for a cluster of DS clients and the corresponding IoT devices, thus reducing the number of sessions sent to the DS server.
We implement the DS server system using network function virtualization (NFV) in the cloud environment with OpenStack15 and the middleware with Docker containers,16,17 which can be located at edge clouds or local servers. As a result, applications (i.e. DS client software, video encoder, IoT data analyzer, etc.) which provide related DS services can be installed on the cloud and the middleware, thus facilitating service configuration and enabling lightweight clients. In particular, contents received from the server are encoded and processed at the middleware. The contents are then converted into streaming data and transmitted to DS clients. DS clients are now designed simply as streaming data receivers, called viewers, which require only a lightweight program. This approach is expected to reduce significantly the cost of deployment and maintenance in large-scale interactive DS systems.
Moreover, the middleware server can also play the role of a local caching server. When several clients request the same content, only one session from the middleware server to the DS server is required, for the first request. Later clients can retrieve the content directly from the local middleware server through streaming. This leads to two benefits: (1) improving the quality of service (i.e. latency) and (2) reducing the load on the server.
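The caching role described above can be sketched as follows. This is an illustrative Python sketch, not the system's actual implementation; the `fetch_from_server` callable stands in for the real DS server interface.

```python
# Illustrative sketch: a middleware that caches content locally so only the
# first request for a given content opens a session to the DS server.
class CachingMiddleware:
    def __init__(self, fetch_from_server):
        self.fetch_from_server = fetch_from_server  # callable: content_id -> bytes
        self.cache = {}                             # local content store
        self.server_sessions = 0                    # sessions opened to the DS server

    def get_content(self, content_id):
        if content_id not in self.cache:
            # Cache miss: one session to the DS server for the first request.
            self.server_sessions += 1
            self.cache[content_id] = self.fetch_from_server(content_id)
        # Cache hit (and all later requests): served locally via streaming.
        return self.cache[content_id]

# Ten viewers requesting the same content cost only one server session.
mw = CachingMiddleware(lambda cid: f"content-{cid}".encode())
for _ in range(10):
    mw.get_content("ad42")
```

In this sketch the second benefit (reduced server load) appears directly as the session counter staying at one regardless of how many local viewers request the content.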
In summary, this article makes the following contributions:
We identify the drawbacks of the existing architecture for large-scale interactive DS systems.
We propose a container-based distributed virtual client architecture to address the drawbacks. Benefits of the proposed architecture are as follows: (1) reducing the load on the DS server, (2) improving the service quality (i.e. latency and bandwidth), and (3) enabling lightweight DS clients to reduce the cost of deployment and maintenance.
We implement the architecture with a cloud-based DS server using NFV in OpenStack, Docker-based container middleware, and simple viewers as DS clients. Experimental and analysis results show the advantages of the proposed architecture.
The rest of the article is organized as follows. Section “Related works” introduces the related research, and section “The proposed architecture” presents the proposed structure. In section “Implementation and performance evaluation,” we implement and evaluate the designed architecture, and section “Conclusion” concludes the article.
Related works
DS
The DS system has been proposed to provide various convergent services based on existing architectures.1–9 Recently, the interactive DS system5–9 has been introduced to provide dynamic on-demand contents based on contextual sensing information which can be obtained from surrounding sensor networks.18–20 In the current architecture, as illustrated in Figure 1, the server manages all clients, IoT devices, and all the contents of the DS clients, and plays the role of delivering the playback schedule. The client gets schedules and plays contents received from the server. DS clients are of various types, from mobile type to PC type, and a client type is selected according to the service it provides. If a DS service is provided through interworking with IoT devices, such as sensors and cameras, various settings are required to enable the IoT devices to interwork with the applications. As a result, clients also need to adopt additional functions and handle data transmission between the server and the client, so the load on clients may become heavier than before. With an increasingly high number of DS clients and sensor devices attached to the system, generating a huge amount of traffic flowing to the server, bottlenecks, management and scalability problems, and high deployment cost are issues of the existing interactive DS architecture, especially in large-scale DS systems for smart cities, e-health, or intelligent transport systems.

An existing interactive digital signage architecture.
As services become diverse, a content distribution method in which anyone can publish contents through a publish/subscribe (pub/sub) mechanism has been proposed.21 In addition, an architecture which installs content servers between the server and the client has been proposed to enable low-cost DS clients.22 In the interactive DS, the transmission of contents or sensing information occurs frequently. One of the solutions is the peer-to-peer (P2P)–based content sharing method, which uses P2P content distribution to reduce the cost of transmission between the server and the client in DS systems.23
In our proposed architecture, we distribute the management and processing tasks of the server into container-based middleware servers to reduce the load on the DS server, improve performance, and enable lightweight clients. We implement the server system in the cloud environment with OpenStack15 and the middleware with Docker containers,16,17 which can be located at edge clouds or local servers.
Docker-based virtualization
Docker16,17 is an open-source project based on the container mechanism. Docker is lighter than virtual machines (VMs), and since each container is isolated, an application compromised by a virus does not affect other containers or the host. Docker is natively supported on Linux and is widely used by large companies such as Google and Amazon. Unlike traditional VMs, Docker is a virtualization solution that runs applications in isolation based on Linux container technology, without a hypervisor. Docker shares the host operating system rather than providing a guest operating system. In other words, Docker can configure a virtualization environment much lighter than a hypervisor-based virtual server because it shares the host's operating system and operates with minimal resources. Docker can run multiple workloads on a single host and also supports portability. In order to provide various services in a DS environment, additional functions are required at the client. For example, when a provider builds an interactive DS system with the capability to recognize the gender, motion, and health condition of users, the provider needs to install IoT device drivers, a sensing data analysis tool, and a scheduler. Those functions should be installed on the client side to communicate with the server. However, if all functions are installed on the client side, the server may not be able to manage them all, and a bottleneck may be created since all messages flow to the server. For this problem, Docker is a good solution which can easily support the required functions for an interactive DS system: using Docker helps set up independent services and supports fast deployment of services.
The proposed architecture
In this section, we present our design for the proposed architecture. As illustrated in Figure 2, the proposed architecture consists of three components. The upper component is a cloud-based DS server system (described in section “Design of cloud system architecture”). The middle components are container-based virtual clients (i.e. middleware) for DS (described in section “Design of container-based middleware architecture”), and the lower components are DS viewers and IoT devices. The cloud system is implemented with OpenStack and hosts the DS server function, device management, as well as container middleware management through a container manager (i.e. Docker registry server). The middle level is used for the container-based DS clients, data processing functions, and IoT device management. DS viewers and IoT devices connect to the container-based middleware local server. When an IoT device is connected with a DS viewer, its middleware local server sends the device identity data for verification, and the DS server provides the contents to the DS viewer.

Overall architecture of container-based digital signage system.
Design of cloud system architecture
The cloud-based DS server system is implemented using OpenStack 15 which is an open source for NFV environments. The virtual network function (VNF) implementation for the DS server follows the ETSI NFV architecture. 24
The detailed description of the NFV architecture and its components can be found in the ETSI NFV specification.24 The DS server is implemented on the cloud so that it can be easily managed and expanded. In order to transfer data efficiently between VNFs in the cloud system, the design considers data plane acceleration. The orchestrator and the virtual network function manager (VNFM) are used for VNF management.
Figure 3 shows the logical architecture for the host architecture and services of the cloud system. We construct two servers for the experiments and test bed. We configure the Controller and Compute1 nodes on host Host1 and the Compute2 node on host Host2. Each server's network configuration consists of three network interfaces: eth1 is for the external network connection, eth2 is for communication between VNFs, and eth0 is a bridge for data packet acceleration. Communication between the VNFs is directly connected to the virtual functions vf(n) managed by single root I/O virtualization (SR-IOV) through the vf-driver of the VNF and is connected to other VNFs by the SR-IOV switching function.24

System design of container-based digital signage system.
The service logical architecture consists of VNF-based structures. To create a VM, we use a virtual network function descriptor (VNFD). A VNF is created by the VNFM according to the contents defined in the VNFD, such as image, network, and flavor.24 After the VNF is created and the DS server is implemented in the VNF, the VNF performs services such as DS server, IoT management server, and data storage.
Figure 4 describes how the DS server VNF is created on the cloud using OpenStack. The descriptor specifies information such as the Virtual Deployment Unit (VDU), Connection Point (CP), and Virtual Link (VL). The VDU defines resource information for the VM creation: the image and flavor (CPU, memory, storage) for service provision and information on the node where the VM is created. Using the network information defined in the VL, network interfaces are allocated to the VM. The CP information specifies connection points between the VL and the VM.
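A VNFD of the shape described above can be sketched in TOSCA style as follows. The field layout follows common Tacker VNFD conventions; the image name, flavor, and network name are illustrative assumptions, not the authors' actual descriptor values.

```yaml
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Illustrative DS-server VNFD (values are assumptions)
topology_template:
  node_templates:
    VDU1:                        # Virtual Deployment Unit: VM resources
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        image: ds-server-image   # image for service provision
        flavor: m1.medium        # CPU/memory/storage
    CP1:                         # Connection Point: binds VDU1 to VL1
      type: tosca.nodes.nfv.CP.Tacker
      requirements:
        - virtualLink: {node: VL1}
        - virtualBinding: {node: VDU1}
    VL1:                         # Virtual Link: the network to attach
      type: tosca.nodes.nfv.VL
      properties:
        network_name: ds-mgmt-net
        vendor: Tacker
```

The VNFM reads a descriptor of this form to create the VM, allocate its interfaces from the VL, and wire the CP between the VL and the VM.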

Description of VNF configuration.
Design of container-based middleware architecture
Our middleware server provides services through the connection between the DS server and DS viewers. DS viewers are simple end devices which display DS contents based on contextual sensing information from users and IoT devices such as cameras and sensors. The middleware provides the virtualization function of DS clients to DS viewers as well as local data processing. Figure 5 shows the middleware architecture. The virtualization functions for DS clients and IoT data processing are implemented using Docker containers.16,17

Architecture of middleware system.
A basic container is implemented with a DS client, an IoT analyzer for data processing of IoT devices, a device agent (DA) for IoT device management, and a listener which starts serving after confirming the information of IoT devices. The IoT analyzer analyzes data from DS cameras, sensors, or near-field communication (NFC) readers to recognize user types. For example, the IoT analyzer gets data from DS cameras and analyzes the data to recognize gender, age, or the number of people. According to the result, the IoT analyzer sends a content-change request to the DS server so that the DS server knows which content should be loaded. Those components can be implemented within a single container or multiple containers. A single container can support multiple clients, and a middleware server can host multiple containers. As a result, a middleware server hosted on an edge cloud or a local server can serve multiple clients in a local area. In our architecture, all required applications (i.e. DS client software, video encoder, IoT data analyzer, etc.) are installed and run in the middleware.
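The IoT analyzer's decision step can be sketched as follows. This is an illustrative Python sketch; the attribute names and content identifiers are assumptions for demonstration, not the system's actual classification rules.

```python
# Illustrative sketch of the IoT analyzer's decision step: map sensing
# results (e.g. from a DS camera) to a content-change request for the DS server.
def analyze_event(sensing):
    """Return a content-change request, or None if nothing should change."""
    audience = sensing.get("people", 0)
    if audience == 0:
        return None                        # nobody watching: keep the schedule
    if sensing.get("gender") == "female" and sensing.get("age", 0) < 30:
        # Hypothetical rule: young female audience -> cosmetics advertisement.
        return {"action": "change", "content": "cosmetics-ad"}
    return {"action": "change", "content": "default-ad"}

req = analyze_event({"people": 2, "gender": "female", "age": 25})
```

In the real system the returned request would be sent to the DS server, which then selects the content to load.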
IoT data and related information are processed locally at the middleware, thus reducing the load on the server. Contents received from the server are encoded and processed at the middleware. The contents are then converted into streaming data and transmitted to DS clients. DS clients are now designed simply as streaming data receivers, called viewers, which require only a lightweight program.
Moreover, the middleware architecture helps to reduce the number of sessions from clients to the server considerably, thus reducing the load on the server. The middleware server can also play the role of a local caching server. When several clients request the same content, only one session from the middleware server to the DS server is required, for the first request. Later, clients can retrieve the content directly from the local middleware server through streaming. This leads to two benefits: (1) improving the quality of service (i.e. latency and bandwidth) and (2) reducing the load on the server.
Figure 6 presents the sequence diagram for DS services in the proposed architecture. Services are divided into several components including client, middleware, and cloud system.

Sequence diagram for digital signage service.
As shown in the figure, when a new DS client viewer is deployed, the client viewer sends its hardware key and device identity to the DA of its corresponding local middleware to request the serving DS client container information. Upon receiving the request, the DA requests the container information from the device listener on the server. Upon reception of the container information, the DA checks the current status of the received container. If there is no image for the container in the middleware, the DA requests an image from the Docker Local Registry Server in the cloud server and activates a corresponding DS client container for the new client. After that, the DA sends the container information to the DS client viewer and the related IoT devices.
Once the DS client viewer requests the content information to be played back from the DS client container, the container requests schedule information and content information from the DS server. After receiving the content, the container encodes and converts the content to streaming data, which is then transmitted to the DS viewer.
IoT devices are also connected to the IoT analyzer in the middleware using their information (ID, hardware key). Sensing data from IoT devices are collected, processed, and classified at the middleware so that only minimum information needs to be sent to the cloud server. Upon new events (i.e. a newly arriving user with a different gender and motion) obtained from sensing information processing, the DS server may change the content schedule on demand and provide corresponding services to the new user. Various services can be served according to the sensing information obtained by IoT devices. Moreover, according to the services, the functions of the provided containers can also be different. A single container can support multiple clients, and a middleware can host multiple containers for different services.
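The DA's provisioning flow in the sequence above can be sketched as follows. This is an illustrative Python sketch; the `listener` and `registry` callables are hypothetical stand-ins for the device listener and the Docker Local Registry Server interfaces.

```python
# Sketch of the middleware DA's provisioning flow: resolve the serving
# container for a new viewer, pull its image only if it is missing locally,
# activate the container, and return its connection information.
def provision(viewer_id, hw_key, listener, local_images, registry, running):
    info = listener(viewer_id, hw_key)        # ask the cloud device listener
    image = info["image"]
    if image not in local_images:
        local_images.add(registry(image))     # pull image from the registry
    if info["container"] not in running:
        running.add(info["container"])        # activate the DS client container
    return {"container": info["container"]}   # sent to viewer and IoT devices

images, active = set(), set()
result = provision(
    "viewer-1", "hw-abc",
    listener=lambda v, k: {"image": "ds-client:1.0", "container": "ds-c1"},
    local_images=images, registry=lambda img: img, running=active)
```

A second viewer mapped to the same container would hit both the image cache and the running set, so neither the pull nor the activation is repeated.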
Implementation and performance evaluation
Implementation
In this section, we present the details of implementation and test bed to evaluate the proposed architecture. Table 1 describes the implementation environment, applications, and parameters for each component.
Implementation environment.
Cloud server and VNF configuration
We implement the cloud server using OpenStack. Tacker,25 one of the OpenStack VNF projects, is used as the VNFM. VNFs are implemented with four functions: one for the DS server, one for IoT data management and events, one for DS viewer and IoT device management, and one for middleware management. The DS server is deployed using Xibo CMS26 with modified functions for changing schedules and contents according to the IoT data results. We also modify the authentication process between DS clients and the server.
To provide an appropriate container for each service in the cloud, we use a tagging mechanism. The tagging information consists of the DS client viewer key value, IoT device ID, middleware ID, and container name.
When a DS viewer needs a container for services, the viewer sends a request message to a corresponding local middleware. The middleware attaches its middleware ID to the request message from the viewer and then sends the request to the device manager in the cloud. The device manager finds an appropriate container for the services and sends the container name information back to the middleware. After receiving the container name information from the server, the middleware checks the container name in its current process list. If the same container is already running in the middleware, the middleware will not run it again. If the container is not found, the middleware needs to run the container. At this point, the middleware also needs to send a request message to the server to get the required files, such as a container image.
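The tagging lookup and the run-or-reuse check can be sketched as follows. This is an illustrative Python sketch; the tag table entries are assumptions for demonstration.

```python
# Sketch of the tagging mechanism: the device manager maps a tag
# (viewer key, IoT device ID, middleware ID) to a container name, and the
# middleware skips re-running containers already in its process list.
TAG_TABLE = {
    ("viewer-key-01", "cam-7", "mw-1"): "ds-client-gender",
    ("viewer-key-02", "nfc-3", "mw-1"): "ds-client-checkout",
}

def find_container(viewer_key, device_id, middleware_id):
    """Device-manager side: resolve the tag to a container name."""
    return TAG_TABLE.get((viewer_key, device_id, middleware_id))

def ensure_running(container_name, process_list):
    """Middleware side: decide the action for the resolved container name."""
    if container_name in process_list:
        return "reuse"   # already running: do not start it again
    return "run"         # not found: pull the image if needed, then run

name = find_container("viewer-key-01", "cam-7", "mw-1")
action = ensure_running(name, ["ds-client-checkout"])
```

The "run" branch is where the middleware would additionally request the container image from the cloud registry if it is not present locally.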
In order to change the client schedule according to the information collected by IoT devices in the DS environment, the following processes should be performed. Figure 7 shows the procedures required for the content-change function triggered by external events, implemented in the DS server. To change the play schedules of contents, the DS server asks for information such as the content index, content type, and client ID from the device manager. The DS server matches the content index information against a library of contents managed by the DS server. If there is no library entry for the index and type, the content is searched for on an external server using the index and content type, and is then registered with the library of the server. After that, the content schedules can be changed. When the schedule information replacement is completed, a schedule change notification is sent to the client. The client then receives the content according to the new schedule information and plays it.
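The content-change procedure above can be sketched as follows. This is an illustrative Python sketch; `fetch_external` and `notify` are hypothetical stand-ins for the external-server search and the client notification interfaces.

```python
# Sketch of the IoT event-driven content-change procedure in the DS server:
# look up the content index in the local library, fall back to an external
# search and register the result, then replace the schedule and notify.
def change_content(library, schedules, event, fetch_external, notify):
    idx, ctype, client = event["index"], event["type"], event["client_id"]
    if (idx, ctype) not in library:
        # Not in the library: search the external server and register it.
        library[(idx, ctype)] = fetch_external(idx, ctype)
    schedules[client] = (idx, ctype)       # replace the play schedule
    notify(client, schedules[client])      # schedule-change notification

lib, sched, sent = {}, {}, []
change_content(lib, sched,
               {"index": 7, "type": "video", "client_id": "c1"},
               fetch_external=lambda i, t: f"{t}-{i}",
               notify=lambda c, s: sent.append((c, s)))
```

On a repeated event for the same index and type, the library lookup succeeds and the external search is skipped, mirroring the branch in Figure 7.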

Function of IoT event-driven content change.
Implementation of container-based middleware
The middleware is implemented using Docker containers. Basically, the following applications are implemented in a middleware: a DS client for virtual clients, an analyzer for IoT data, and a DA for container configuration. The DS client image is configured based on the DS viewer environment, and contents are encoded and processed in the container and then delivered to DS viewers as streams. Figure 8 shows the container execution list in the middleware.

Container list of middleware platform.
Container creation and management in the middleware are handled by the DA. As shown in Figure 8, the DA creates containers and provides services according to the requests of clients. Figure 9 shows the container life cycle management process in the DA.

Life cycle management function of containers in device management.
To receive DS services, viewers request the connection information of their container from a corresponding middleware and the device manager in the cloud server using device keys. In the middleware, the DA searches for the image of the corresponding container and processes the information. If the information does not match and the image does not exist, the DA imports the image from the Docker Registry Server in the cloud server and creates a container with the received container information container_info. The DA also creates an internal network for communication between containers and allocates the internal network to the created container.
The internal network address assignment is performed by creating a list structure of available IPv4 addresses. When the container creation process is finished, the DA sends the connection information of the container to the DS viewer. After receiving the connection information of the container, the viewer connects to its corresponding container in the middleware and can receive contents through streaming. Attachment and communication setup for IoT devices follow the same procedures.
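The list-based address assignment can be sketched as follows. This is an illustrative Python sketch; the subnet is an assumption, not the system's actual internal network range.

```python
# Sketch of the DA's internal-network address assignment: a list of
# available IPv4 addresses from which containers are allocated, and to
# which addresses return when a container is removed.
import ipaddress

class AddressPool:
    def __init__(self, subnet="172.18.0.0/29"):
        net = ipaddress.ip_network(subnet)
        self.free = [str(h) for h in net.hosts()]   # available addresses
        self.assigned = {}                          # container name -> address

    def allocate(self, container):
        addr = self.free.pop(0)       # take the next free address
        self.assigned[container] = addr
        return addr

    def release(self, container):
        # Return the container's address to the free list.
        self.free.append(self.assigned.pop(container))

pool = AddressPool()
ip1 = pool.allocate("ds-client-1")
```

Releasing a container returns its address to the list so that the pool does not leak addresses as containers are created and destroyed.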
Experimental evaluation
Test bed
Figure 10 illustrates our experimental setup with a cloud server, a number of middleware servers, and a number of viewers. We set up the cloud server system on two Intel general-purpose servers and the middleware system for image processing of DS clients on a graphics processing unit (GPU) board. We used only lightweight mini-PCs for DS clients.

Test bed environment.
Obtained results
We compare the proposed architecture with the cloud-based server architecture, where clients connect and send data as well as requests directly to the cloud server. A test bed is set up with 10 DS clients. A number of sensors are used to trigger events for content changes, and clients request the corresponding videos for detected users. The Transmission Control Protocol (TCP) is used for requests and responses between the DS server and the middleware in the proposed architecture, and between the DS server and the DS clients in the cloud-based architecture. Between DS clients and the middleware in the proposed architecture, only the User Datagram Protocol (UDP) is used for streaming.
Cost for a DS client
In our proposed architecture, a simple mini-PC is required for receiving streaming data from the middleware. Only a lightweight streaming receiver application is implemented for the client. In the case of the cloud-based server architecture, more powerful PCs are required to run related applications such as DS client, video encoder, and data processing.
We test the memory usage of DS clients in the two architectures in a case where a 720p-resolution video is requested. DS clients in the cloud-based server architecture consume more than 1 GB, while those in the proposed architecture consume only 200 MB for temporary storage. Using only one middleware, the cost for every DS client is reduced considerably.
This indicates that the proposed architecture is promising to reduce the deployment and maintenance cost for interactive DS systems.
Bandwidth
Figure 11 shows the measured total bandwidth under various numbers of clients. The obtained results indicate that with one or two clients per middleware, there is no bandwidth benefit. However, when the cluster size increases to more than three DS clients per middleware server, the proposed architecture achieves a considerable bandwidth benefit. With 10 DS clients and a single middleware server, the proposed architecture achieves more than a threefold bandwidth benefit compared to the cloud-based server architecture with the same number of DS clients. The results obtained with the small test bed can be extrapolated to large-scale scenarios: when the number of DS clients increases and a proper number of middleware servers is used (e.g. 2000 DS clients and 10 middleware servers), the bandwidth advantages of the proposed architecture are multifold.

Bandwidth versus various numbers of DS clients.
Latency
We measure the end-to-end latency between the DS server and DS clients. Average results are presented in Figure 12. The obtained results indicate that with one client per middleware server, the end-to-end latency of the proposed architecture is worse than that of the cloud-based architecture. However, when the cluster size increases to more than two DS clients per middleware server, the proposed architecture achieves a significant benefit in terms of end-to-end latency. With eight DS clients per middleware server, the proposed architecture improves end-to-end latency by about 20% compared to the cloud-based server architecture with the same number of DS clients. The proposed architecture helps to reduce the number of sessions and requests sent directly to the server. In addition, when several clients request the same content, only one session from the middleware server to the DS server is required, for the first request; later clients can retrieve the content directly from the local middleware server through streaming. As a result, the service performance is improved significantly.

End-to-end latency versus a various number of DS clients.
The performance trend shown in the figure indicates that the end-to-end latency increases significantly as the number of clients increases. With a high number of DS clients, long latency is a crucial performance issue of the cloud-based server architecture. In the proposed architecture, the number of clients per middleware server can be controlled by deploying a proper number of middleware servers. When the number of DS clients is increased to 20 while the number of clients per middleware does not change (10 DS clients per middleware server), the proposed architecture approximately doubles the improvement ratio compared to the test case with 10 clients.
Table 2 shows the measured total delay of the interactive DS from the perspective of users. C-trigger is the delay for the client (i.e. the IoT camera of the DS viewer) to recognize the user and request the content change from the server. S-Proc Delay is the delay for sending content information to clients from the DS server, and Resp is the delay to transmit content from the DS server to clients. The results show that S-Proc Delay and Resp increase proportionally to the number of clients in the cloud-based architecture because the processing cost and network usage grow with the number of clients. In the proposed architecture, the results show that the delay increases only slightly as the number of clients increases. This is achieved through the content caching function of the middleware.
Content change response time by IoT camera recognition.
IoT: Internet of things.
Performance analysis
Cost analysis model
In this section, we evaluate the performance of the proposed architecture under various hop counts to the server and various packet sizes based on an analytical model, since resource limitations prevent the setup of large and complicated test beds. We use the analytical model built by Abolfazli et al.27 for systems of clouds and clients. We present here the basic formulas, while the details can be found in Abolfazli et al.27
The model parameters include the number of hops between the client and the server, the packet size, and the per-hop packet propagation delay. In the case of the cloud-based server architecture, every request traverses the full path between the client and the cloud DS server, so the round-trip latency grows with both the hop count and the packet size. In the case of the proposed architecture, clients do not need to establish a session between the client and the server, since the middleware establishes the session on behalf of the client. Therefore, most requests traverse only the short path to the local middleware, and the round-trip latency depends far less on the distance to the cloud server.
Analysis
In the case of the cloud-based server architecture, every client request is sent directly to the cloud DS server. Therefore, the service performance at each DS client depends on the distance (i.e. number of hops) between the client and the DS server and on the packet size. In the proposed architecture, only a fraction of requests need to be sent to the cloud server, as the architecture reduces the number of sessions between DS clients and the DS server. Therefore, the service performance at DS clients depends less on the distance to the server as well as on the packet size. As explained above, DS clients requesting the same content can retrieve it immediately from the local middleware server, so the effective distance to the server (i.e. the local middleware server in this case) is reduced considerably. For example, for interactive DS display based on gender sensing data, about 50% of DS clients may request the same content. Note that the middleware can be deployed at an edge cloud or a local server located near the served clients. In addition, sensing data can be processed in local middleware servers, and the corresponding content can be returned quickly through streaming from the local middleware server.
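The dependence on hop count and cache hit ratio described above can be illustrated with a simple latency model in the spirit of the cited analytical model. The exact formulas are in Abolfazli et al.27; the per-hop delay, bandwidth, and hit ratio below are illustrative assumptions, not measured values.

```python
# Illustrative round-trip latency model: each hop adds per-hop latency plus
# packet transmission time, and in the proposed architecture requests served
# from the middleware cache avoid the long path to the cloud DS server.
def rtt(hops, packet_bits, per_hop_s=0.002, bandwidth_bps=100e6):
    """One round trip over `hops` hops for a packet of `packet_bits`."""
    return 2 * hops * (per_hop_s + packet_bits / bandwidth_bps)

def avg_rtt_proposed(server_hops, mw_hops, packet_bits, hit_ratio=0.5):
    """Average latency when a fraction of requests hit the middleware cache."""
    return (hit_ratio * rtt(mw_hops, packet_bits)
            + (1 - hit_ratio) * rtt(server_hops, packet_bits))

cloud = rtt(10, 8e6)                  # all requests go to the cloud server
prop = avg_rtt_proposed(10, 1, 8e6)   # half served by a one-hop middleware
```

With these assumed parameters the cloud-only path costs 1.64 s per round trip, while the proposed architecture averages about 0.90 s; raising the hop count or the packet size widens the gap, matching the trends in Figures 13 and 14.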
Numerical results
Detailed parameters used in our analysis are presented in Table 3. Other default parameters can be found in Abolfazli et al.27
Variable value.
DS: digital signage.
Figure 13 presents the average round-trip latency of the cloud-based DS server architecture and the proposed architecture under various hop counts to the DS server. The figure shows that the cloud-based DS server architecture highly depends on the distance from DS clients to the server, while the proposed architecture shows much less dependency. The higher the hop count, the greater the performance improvement the proposed architecture achieves compared to the cloud-based DS server architecture. Explanations are discussed in the “Analysis” section.

Round-trip latency versus the number of hop counts to the DS server.
Figure 14 presents the average round-trip latency of the two architectures under various packet sizes. The two systems show performance behaviors similar to the results presented in Figure 13. By exploiting local middleware servers to reduce the number of requests sent to the cloud DS server, the proposed architecture maintains a low round-trip latency even when the packet size increases. Although a large packet size may be used, packets are not always sent to the cloud DS server but often only to the local middleware server, and content may be retrieved directly from the local middleware server. This design helps to reduce the traffic load on the DS server and improve the service performance considerably.

Round-trip latency versus packet size.
Conclusion
This article presents our proposal and implementation of a container-based distributed virtual client architecture to address existing issues of current architectures for interactive DS systems, especially large-scale ones. The proposed architecture distributes the traffic load on the DS server by designing container-based middleware servers which can be deployed in edge clouds or local servers. Current DS clients are virtualized into container-based images which perform the related features of traditional DS clients. As a result, the proposed architecture enables lightweight clients, which helps reduce deployment and maintenance cost for a DS system. In addition, the number of sessions and requests sent to the cloud DS server is also reduced, while content requested by DS clients can be retrieved directly from local middleware servers. The proposed architecture thus improves service performance at DS clients significantly. For future work, we plan to add more interactive features for DS clients using a number of new sensors and to deploy a large-scale interactive DS system to test the system.
