QoE-aware Control of Video Streaming Systems / Manfredi, Gioacchino. - ELECTRONIC. - (2023). [10.60576/poliba/iris/manfredi-gioacchino_phd2023]
QoE-aware Control of Video Streaming Systems
Manfredi, Gioacchino
2023-01-01
Abstract
This Ph.D. thesis is composed of two parts. The first and main part is devoted to video streaming and Internet media delivery services; in particular, it tackles optimal network resource allocation and the synchronisation of live video streaming. The second part represents the beginning of a study investigating the asymptotic stability of nonlinear systems with Deep Reinforcement Learning controllers.

Part 1. Video streaming is gaining more and more ground, driving an unprecedented growth of multimedia streaming services such as YouTube, Netflix, and Twitch. As a consequence, more than half of today's global Internet traffic is due to video content. To keep engagement high and avoid service abandonment, services delivering videos to massive audiences must provide users with a satisfactory Quality of Experience (QoE), a metric that evaluates a provider's service from the user's standpoint, given the constraints imposed by the user's device and the network. Current video platforms distribute network resources fairly in terms of Quality of Service (QoS): concurrent users sharing the same network resources (i.e. network links) receive a fair share of bandwidth with no regard to user heterogeneity. As a consequence, the quality perceived by users is not equalised, since users with large-screen devices (e.g. smart TVs) require a larger video bitrate than users with small-screen devices (e.g. smartphones) to obtain the same level of QoE. On the user side, players run a control algorithm that selfishly strives to improve the quality perceived by the individual user. This control architecture leads, in the best case, to maximising the average quality collectively perceived by all users, not to a resource distribution that is fair in terms of user-perceived quality. In the context of the Cloud-based pLatform for Immersive adaPtive video Streaming (CLIPS) project, an optimisation framework has been proposed to design a QoE-fair network bandwidth allocation strategy based on the Multi-Commodity Flow Problem (MCFP). Although the state of the art provides several ways to estimate the QoE associated with a user, throughout this work QoE is modelled as visual quality, which represents its main contributing factor; visual quality is evaluated with the Video Multi-method Assessment Fusion (VMAF), a full-reference video quality assessment tool. Moreover, to substantially reduce the number of variables involved in the optimisation procedure, a traffic clustering approach has been integrated into the framework.
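To illustrate what QoE fairness means in practice, the following is a minimal sketch of a max-min quality allocation on a single shared link, assuming hypothetical linearised quality-bitrate curves. The framework in the thesis instead solves a Multi-Commodity Flow Problem over the whole network with VMAF-based curves and traffic clustering, so every number and slope below is illustrative only.

```python
# Minimal sketch of QoE-fair bandwidth allocation on a single shared link.
# Hypothetical linearised quality curves q_i = a_i * b_i stand in for the
# VMAF-vs-bitrate models used in the thesis; the real framework solves a
# Multi-Commodity Flow Problem over all network links, not one bottleneck.
import numpy as np
from scipy.optimize import linprog

capacity = 20.0                      # shared link capacity [Mbps]
slopes = np.array([4.0, 4.0, 1.5])   # quality per Mbps: two phones, one smart TV
n = len(slopes)

# Variables x = [b_1, ..., b_n, t]; maximise t, the minimum user quality.
c = np.zeros(n + 1)
c[-1] = -1.0                         # linprog minimises, so minimise -t

# t - a_i * b_i <= 0 for every user i (each user's quality is at least t)
A_quality = np.hstack([-np.diag(slopes), np.ones((n, 1))])
# sum_i b_i <= capacity (the users share one bottleneck link)
A_capacity = np.hstack([np.ones((1, n)), np.zeros((1, 1))])

A_ub = np.vstack([A_quality, A_capacity])
b_ub = np.concatenate([np.zeros(n), [capacity]])
bounds = [(0, None)] * (n + 1)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
bitrates, t = res.x[:n], res.x[-1]
print("bitrates [Mbps]:", bitrates.round(2), "| equalised quality:", round(t, 2))
```

A QoS-fair allocation would simply split the 20 Mbps three ways; the max-min solution instead assigns the large-screen device a larger share (about 11.4 Mbps versus roughly 4.3 Mbps per phone in this toy instance), so that the quality perceived by the users is equalised.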
In addition to optimal network resource distribution, the synchronisation of live video streaming has been investigated. Live events, such as football matches, are nowadays enjoyed together by users who are not physically in the same place, a habit made possible by social media applications and mobile devices. A crucial point for this service is synchronising video playback among geographically distributed users, which helps prevent service abandonment: when users leave comments and reactions on social networks, unsynchronised playback is easily noticed and is detrimental to the users' feeling of togetherness. To this end, a distributed control approach has been proposed to achieve synchronisation among users. In particular, the well-known consensus problem for simple integrators with saturated inputs has been used to design a distributed playback synchronisation framework. Furthermore, a leader-follower approach has been adopted to ensure a controlled synchronisation among users with the least possible delay with respect to the video content provider. Finally, an event-triggered control scheme is introduced as an enhancement of the previous controller, reducing the amount of information exchanged among users.
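A minimal numerical sketch of the leader-follower idea is given below, assuming an illustrative four-follower topology, pinning term, and saturation bound; the actual controller design, gains, and trigger conditions in the thesis differ.

```python
# Minimal sketch of leader-follower playback synchronisation: each follower
# is a simple integrator whose playback position x_i tracks the leader via
# a saturated consensus input. The topology, gains, and saturation bound
# u_max (playback-rate deviation) are illustrative, not thesis parameters.
import numpy as np

dt, steps = 0.05, 2000
u_max = 0.25                          # players may speed up/slow down by at most 25%
A = np.array([[0, 1, 1, 0],           # adjacency among the four followers
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], float)
pin = np.array([1.0, 0, 0, 0])        # only follower 0 hears the leader (pinning)

x = np.array([0.0, 3.0, -2.0, 5.0])   # initial playback offsets [s]
leader = 0.0                          # leader playback position [s]

for _ in range(steps):
    # consensus coupling with neighbours plus the pinning term to the leader
    u = A @ x - A.sum(axis=1) * x + pin * (leader - x)
    u = np.clip(u, -u_max, u_max)     # input saturation
    x = x + dt * (1.0 + u)            # each player advances at rate 1 + u
    leader += dt                      # the leader plays in real time

print("final offsets w.r.t. leader [s]:", (x - leader).round(3))
```

The saturation models the fact that a player can only nudge its playback rate slightly around real time; the event-triggered variant would transmit a neighbour's playback position only when a trigger condition is violated, rather than at every step.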
Part 2. Recent years have witnessed a considerable spread of machine learning techniques for solving problems and enhancing procedures in various fields. In control systems, machine learning has brought several advantages, such as the possibility of controlling nonlinear systems that would be hard to control with conventional techniques, or systems whose model is unknown. In this context, Deep Reinforcement Learning (DRL) algorithms learn control policies through interaction with an environment. However, despite their encouraging performance, such algorithms are still mainly employed in simulation, since most real-world applications are safety critical. For this reason, it is important to have guarantees on the asymptotic stability of a system controlled by a DRL policy. The framework proposed to take a step in this direction first extracts a DRL control policy that achieves the control goal; a Learner-Verifier scheme then leverages a counterexample-based strategy to synthesise a Lyapunov function certifying the asymptotic stability of the system controlled by the extracted policy. This framework also provides useful insights into the safety guarantees that are often necessary in real applications.
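The following toy sketch illustrates the counterexample-guided Learner-Verifier loop on a hand-written two-dimensional system: a crude gradient-based learner repairs a quadratic Lyapunov candidate, while a sampling-based check stands in for the formal verifier (typically an SMT solver) used in such schemes. The dynamics, the linear stand-in policy, and all constants are hypothetical.

```python
# Toy sketch of a counterexample-guided Learner-Verifier loop: the learner
# fits a quadratic Lyapunov candidate V(x) = x' P x for a closed-loop system,
# and the verifier searches for states violating the Lyapunov conditions and
# feeds them back as counterexamples. The sampling-based "verifier" stands in
# for a formal one; the dynamics and policy are toys, not a DRL network.
import numpy as np

rng = np.random.default_rng(0)

def policy(x):                        # stand-in for the extracted DRL policy
    return -np.array([1.0, 2.0]) @ x

def f(x):                             # closed-loop nonlinear dynamics (toy)
    return np.array([x[1], -np.sin(x[0]) - x[1] + policy(x)])

def violations(P, xs, eps=1e-3):
    """Return states (outside a small ball) where V <= 0 or Vdot >= 0."""
    bad = []
    for x in xs:
        V, Vdot = x @ P @ x, 2 * x @ P @ f(x)
        if np.linalg.norm(x) > 0.1 and (V <= eps or Vdot >= -eps):
            bad.append(x)
    return bad

P = np.eye(2)                         # initial Lyapunov candidate
data = list(rng.uniform(-2, 2, size=(50, 2)))

for it in range(100):
    # Verifier: sample the domain looking for counterexamples.
    cex = violations(P, rng.uniform(-2, 2, size=(2000, 2)))
    if not cex:
        print(f"certified on samples after {it} iterations; P =\n{P.round(3)}")
        break
    data += cex[:10]
    # Learner: nudge P along the gradients that repair the violated conditions.
    for x in violations(P, data):
        V, Vdot = x @ P @ x, 2 * x @ P @ f(x)
        if V <= 1e-3:                 # push V(x) upwards
            P += 0.01 * np.outer(x, x) / (x @ x)
        if Vdot >= -1e-3:             # push Vdot(x) downwards
            P -= 0.01 * (np.outer(x, f(x)) + np.outer(f(x), x)) / (x @ x)
    P = (P + P.T) / 2                 # keep the candidate symmetric
else:
    print("no certificate found within the iteration budget")
```

In a real instantiation the candidate is typically a neural network or a richer template, the verifier returns genuine counterexamples or a formal certificate rather than a sample-based check, and convergence of the learner is handled far more carefully than in this crude sketch.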
| File | Description | Type | Access | License | Size | Format |
|---|---|---|---|---|---|---|
| 35 ciclo-MANFREDI Gioacchino.pdf | Complete doctoral thesis, including the title page | Doctoral thesis | Open access | Creative Commons | 3.46 MB | Adobe PDF |

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.