Response Time Improvement of Multimodal Interactive Systems
Zeman, Tomáš; Hák, Roman
2017 - English
Multimodal interaction, which lets our highly skilled and coordinated communicative behavior control computer systems, has proven to be a key to natural and very flexible human-computer interaction. However, multimodal input processing poses considerable research and development challenges compared to traditional user interfaces. Besides the processing of complex input signals from individual modality sensors (e.g. speech recognition, image processing), it also requires a more detailed understanding of human communication paradigms and interaction schemes.
The submitted thesis analyzes users' integration patterns observed during multimodal interaction and explores how they can be exploited to increase the accuracy and robustness of multimodal input processing algorithms.
The work contains three main parts. The first is dedicated to an analysis of the most fundamental multimodal integration patterns, followed by quantitative research and evaluation of important characteristics of these patterns in the form of a user study conducted by the author. In light of the new findings, the classification of one of the most important integration patterns, the synchronization pattern dividing users into simultaneous (SIM) and sequential (SEQ) integrators, is modified and readjusted. The modified classification addresses issues with consistency and accuracy and offers a significantly better solution than the original definition provided in the related literature.
Based on the evaluations and results obtained in the quantitative empirical research, the following part of the thesis designs and proposes a method for modeling multimodal integration patterns with machine learning, namely Bayesian networks. The constructed probability model provides very precise and robust multimodal input prediction, with an accuracy of 99%.
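As an illustration of this kind of model, the sketch below classifies users as SIM or SEQ integrators with a Gaussian naive Bayes classifier (a simple special case of a Bayesian network); the timing features and training data are invented for illustration and are not taken from the thesis.

```python
# Minimal sketch: classifying users as simultaneous (SIM) or sequential (SEQ)
# integrators from multimodal timing features. Gaussian naive Bayes stands in
# for the thesis's Bayesian-network model; features and data are hypothetical.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row: [onset lag between speech and gesture (s), overlap duration (s)]
X = np.array([
    [0.05, 0.80],   # strongly overlapping input -> SIM
    [0.10, 0.60],
    [1.40, 0.00],   # gesture finished before speech started -> SEQ
    [1.10, 0.00],
])
y = np.array(["SIM", "SIM", "SEQ", "SEQ"])

model = GaussianNB().fit(X, y)

# Predict the integration pattern of a new multimodal event.
new_event = np.array([[0.20, 0.45]])
print(model.predict(new_event))          # e.g. ['SIM']
print(model.predict_proba(new_event))    # class posteriors
```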
A procedure for applying the predictive capabilities of the constructed classification model to multimodal input segmentation is then introduced. The proposed procedure is subjected to tests and measurements in order to evaluate its segmentation accuracy and the impact of its employment on the response time of the system. As part of the measurements, experiments with the selection of training sets and a comparison of four approaches to encoding continuous input variables in the model are conducted.
The results show that the introduced segmentation method provides a significant improvement in response time (to 0.8 s for SEQ and under 0.5 s for SIM integrators) over state-of-the-art approaches, while maintaining remarkably high accuracy (98–99%). This significant decrease in response time allows a system to respond to a user's multimodal input almost immediately, with near real-time feedback, and brings a very important improvement in usability, which should positively influence users' experience and satisfaction with the multimodal interaction interface.
Available in digital repository of ČVUT.
Scene text localization and recognition in images and videos
Matas, Jiří; Neumann, Lukáš
2017 - English
Scene text localization and recognition methods find all areas in an image or a video that a human would consider text, mark the boundaries of those areas, and output a sequence of characters associated with their content. They are used to process images and videos taken by a digital camera or a mobile phone and to "read" the content of each text area into a digital format, typically a list of Unicode character sequences, that can be processed in further applications.
Three different methods for scene text localization and recognition were proposed in the course of the research, each advancing the state of the art and improving accuracy. The first method detects individual characters as Extremal Regions (ERs), where the probability of each ER being a character is estimated using novel features with O(1) complexity, and only ERs with locally maximal probability across several image projections are selected for the second stage, where the classification is improved using more computationally expensive features. It was the first published method to address the complete problem of scene text localization and recognition as a whole; all previous work in the literature focused solely on individual subproblems.
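The thesis's ER detector and its O(1) features are not reproduced here; as a rough illustration of the same family of techniques, OpenCV's related MSER extremal-region detector can propose character candidates. The input file name and the aspect-ratio filter are placeholders.

```python
# Rough illustration of extremal-region character candidates using OpenCV's
# MSER detector (a relative of the ER detector described above; the thesis's
# O(1) features and two-stage classification are not reproduced here).
import cv2

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)

# Keep only regions with roughly character-like aspect ratios (crude filter).
candidates = [(x, y, w, h) for (x, y, w, h) in bboxes if 0.1 < w / h < 10]
print(f"{len(candidates)} character candidates")
```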
Secondly, a novel, easy-to-implement stroke detector was proposed. The detector is significantly faster and produces significantly fewer false detections than the commonly used ER detector. It efficiently produces character stroke segmentations, which are exploited in a subsequent classification phase based on features effectively calculated as part of the segmentation process. Additionally, an efficient text clustering algorithm based on text direction voting is proposed, which, like the previous stages, is scale- and rotation-invariant and supports a wide variety of scripts and fonts.
The third method exploits a deep-learning model trained for both text detection and recognition in a single trainable pipeline. The method localizes and recognizes text in an image in a single feed-forward pass; it is trained purely on synthetic data, so it does not require expensive human annotations for training, and it achieves state-of-the-art accuracy in end-to-end text recognition on two standard datasets while being an order of magnitude faster than previous methods: the whole pipeline runs at 10 frames per second.
Available in digital repository of ČVUT.
Automatic Scaling in Cloud Computing
Šedivý, Jan; Vondra, Tomáš
2017 - English
This dissertation thesis deals with automatic scaling in cloud computing, focusing mainly on the performance of interactive workloads, that is, web servers and services running in an elastic cloud environment. In the first part of the thesis, the possibility of forecasting the daily curve of workload is evaluated using long-range seasonal techniques of statistical time series analysis. The accuracy is high enough to enable either green computing or filling the unused capacity with batch jobs, hence the need for long-range forecasts. The second part focuses on simulations of automatic scaling, which is necessary for the interactive workload to actually free up capacity when it is not being utilized at its peak. Cloud users are mostly wary of letting a machine control their servers, which is why realistic simulations are needed. We have explored two methods: event-driven simulation and queue-theoretic models. During work on the first, we extended the widely used CloudSim simulation package to dynamically scale the simulated setup at run time, and we corrected its engine using knowledge from queueing theory. Our own simulator then relies solely on theoretical models, making it much more precise and much faster than the more general CloudSim. Together, the tools from the two parts constitute a theoretical foundation which, once implemented in practice, can help leverage cloud technology to actually increase the efficiency of data center hardware.
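A minimal sketch of such long-range seasonal forecasting follows, assuming hourly samples and a 24-hour season; the SARIMA order and the synthetic load curve are illustrative, not the configuration chosen in the thesis.

```python
# Minimal sketch: long-range forecast of a daily web-server load curve with a
# seasonal ARIMA model. Hourly samples, 24-hour season; model order and
# synthetic data are illustrative only.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
hours = np.arange(14 * 24)                       # two weeks of hourly load
load = 100 + 40 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

model = SARIMAX(load, order=(1, 0, 0), seasonal_order=(1, 1, 0, 24))
fitted = model.fit(disp=False)

# Forecast the next full day; such a curve can drive green computing decisions
# (powering nodes down) or backfilling the predicted slack with batch jobs.
print(fitted.forecast(steps=24))
```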
In particular, the main contributions of the dissertation thesis are as follows:
1. New methodology for forecasting time series of web server load and its validation
2. Extension of the often-used simulator CloudSim to interactive load, and an increase in the accuracy of its output
3. Design and implementation of a fast and accurate simulator of automatic scaling
using queueing theory
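The thesis's simulator itself is not reproduced here; the sketch below shows the kind of queueing-theory building block such a simulator rests on, namely the mean response time of an M/M/c server pool via the Erlang C formula. The arrival rate, service rate, and pool size are invented numbers.

```python
# Sketch of a queueing-theory building block for autoscaling simulation: mean
# response time of an M/M/c server pool via the Erlang C formula.
from math import factorial

def erlang_c(c: int, lam: float, mu: float) -> float:
    """Probability that an arriving request has to wait (M/M/c queue)."""
    a = lam / mu                       # offered load in Erlangs; requires a < c
    top = (a ** c / factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

def mean_response_time(c: int, lam: float, mu: float) -> float:
    """Mean time in system = queueing delay + service time."""
    wq = erlang_c(c, lam, mu) / (c * mu - lam)
    return wq + 1 / mu

# 4 web servers, 50 requests/s arriving, each server handles 15 requests/s.
print(mean_response_time(c=4, lam=50.0, mu=15.0))
```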
Keywords:
cloud computing; autoscaling; time series forecasting; green computing; simulation
Available in digital repository of ČVUT.
Distributed algorithms for Wireless Physical Layer Network Coding self-organisation in cloud communication networks
Sýkora, Jan; Hynek, Tomáš
2017 - English
Present-day communication networks are growing in complexity to meet a never-ending demand for fast, smooth, energy-efficient, and cheap connectivity. This demand is a driving force of new technologies. Two of them, the cloud network concept and Wireless Physical Layer Network Coding (WPLNC), are the focus of this thesis. Together they represent a paradigm shift that can bring benefits to the networks of the future.
With WPLNC, communicating nodes are no longer separated into orthogonal resources such as time, frequency, or code space. Instead, they are allowed to overlap directly at the level of electromagnetic waves. Overlapping signals are no longer considered a nuisance; rather, they form new super-signals that are processed as one. At the cost of slightly more complicated signal processing, this technique can achieve significant advantages in throughput, reliability, and/or efficiency.
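A textbook-style numeric sketch of this idea follows, assuming two BPSK transmitters and a simple additive channel: the relay decodes the XOR of the two bits directly from the superimposed waveform (denoise-and-forward style) without ever separating the individual transmissions. The SNR and frame length are illustrative.

```python
# Numeric sketch of the WPLNC idea: two BPSK signals overlap at a relay and
# the relay decodes the XOR of the two bits from the superimposed waveform.
import numpy as np

rng = np.random.default_rng(1)
bits_a = rng.integers(0, 2, 1000)
bits_b = rng.integers(0, 2, 1000)

# BPSK mapping 0 -> +1, 1 -> -1; the channel adds the two signals plus noise.
y = (1 - 2 * bits_a) + (1 - 2 * bits_b) + rng.normal(0, 0.3, 1000)

# The superposition lands near -2 or +2 when the bits agree (XOR = 0) and
# near 0 when they differ (XOR = 1), so the relay only thresholds |y|.
xor_hat = (np.abs(y) < 1.0).astype(int)
print("XOR bit error rate:", np.mean(xor_hat != (bits_a ^ bits_b)))
```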
The cloud network concept introduces an intelligent network that is able to react to changing conditions. The network is formed by self-aware relay nodes that cooperate to provide a communication service to the terminals. The network is assumed to be distributed and decentralised so that it can react fast and adapt to local variations. WPLNC appears to be a favourable scheme for such a network.
In this thesis I focus on distributed algorithms for implementing WPLNC in the context of a cloud network. The algorithms equip the relay nodes with the ability to self-adapt and self-organise their WPLNC processing so that it is aligned optimally across the whole network. In particular, an example algorithm that assigns WPLNC mappings to individual relays is provided, and this algorithm is transformed into a suitable communication protocol. Further versions of the algorithm are proposed for more complex situations: when relays do not behave fairly and try to harm the network, or when some aspect of behaviour depends on relay node state, such as battery level.
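The actual assignment algorithm and protocol are in the thesis; as a loose analogy only, assigning non-conflicting WPLNC mappings to neighbouring relays resembles distributed graph colouring, which the following greedy sketch illustrates. The topology and the conflict rule are invented.

```python
# Loose analogy for WPLNC mapping assignment: neighbouring relays should not
# use conflicting mappings, which resembles distributed graph colouring. Each
# relay greedily picks the lowest mapping index unused by its neighbours;
# this is an illustrative sketch, not the thesis's protocol.
relay_neighbours = {
    "r1": ["r2", "r3"],
    "r2": ["r1", "r3"],
    "r3": ["r1", "r2", "r4"],
    "r4": ["r3"],
}

assignment: dict[str, int] = {}
for relay in relay_neighbours:                    # relays decide in turn
    taken = {assignment[n] for n in relay_neighbours[relay] if n in assignment}
    assignment[relay] = min(m for m in range(len(relay_neighbours)) if m not in taken)

print(assignment)   # e.g. {'r1': 0, 'r2': 1, 'r3': 2, 'r4': 0}
```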
Wherever possible, not only a mathematical solution of the studied issues is provided, but a software simulation is also shown and, more importantly, verification by hardware testing in real conditions is performed and presented.
Available in digital repository of ČVUT.
Lock-chart solving
Železný, Filip; Černoch, Radomír
2017 - English
Lock-chart solving (also known as master key system solving) is
a process for designing mechanical keys and locks so that every
key can open and be blocked in a user-defined set of locks. This
work is an algorithmic study of lock-chart solving.
Literature on this topic [34, 38, 44, 53] has established that the
extension variant of the problem is NP-complete, reformulated
lock-chart solving as a constraint satisfaction problem (CSP) with
set variables, applied a local search algorithm, and defined a
symmetry-breaking algorithm using automorphisms.
However, the otherwise standard decision problem with a discrete
search space has a twist. After a lock-chart is solved and its
solution is fixed, new keys and locks may be added as a part of
an extension, and the original solution should be prepared for
this. In the first formal treatment of extensions, several scenarios
are proposed, and effects on lock-chart solving algorithms are
discussed.
First, we formalise lock-chart solving. Six variants of lock-charts and four constraint frameworks of increasing generality and applicability to real-world problems are formulated. Their hierarchy is used to extend the classification of lock-chart solving problems into computational complexity classes. A close relationship between the most realistic framework and the Boolean satisfiability problem (SAT) is established. Mechanical profiles are shown to express NP-complete problems, complementing the previous result on the extension variant of the problem. We give the first proof that diagonal lock-charts (systems with only one master key) can be solved in P using an algorithm known as the rotating constant method.
The practical part proposes several algorithms for lock-chart solving. The problem is translated into SAT, into CSP (with standard variables), and partly into the maximum independent set problem. The SAT translation inspires a model-counting algorithm tailored for lock-charts. Finally, we describe a customised depth-first search (DFS) algorithm that uses the model counter to prune unpromising parts of the search space. In the empirical evaluation, CSP and the customised DFS improve on the
performance of the previous automorphism algorithm.
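As a hedged illustration of the problem being solved, the following checker verifies a candidate solution against a lock-chart under a simplified mechanical model: a key opens a lock iff, at every pin position, its cut depth is among the depths that lock accepts. The chart, key cuttings, and depth sets are invented.

```python
# Sketch of a lock-chart feasibility check under a simplified mechanical
# model; not one of the thesis's solvers, just the specification they target.
def opens(key: tuple[int, ...], lock: list[set[int]]) -> bool:
    return all(depth in allowed for depth, allowed in zip(key, lock))

def satisfies(chart, keys, locks) -> bool:
    """chart[k][l] is True iff key k must open lock l (else it must be blocked)."""
    return all(opens(keys[k], locks[l]) == chart[k][l]
               for k in range(len(keys)) for l in range(len(locks)))

# Master key opens both locks; each tenant key opens only its own door.
chart = [[True, True],    # master
         [True, False],   # tenant 1
         [False, True]]   # tenant 2
keys = [(1, 1), (1, 2), (2, 1)]
locks = [[{1}, {1, 2}],   # lock 1: depth 1 at pos 0, depths 1-2 at pos 1
         [{1, 2}, {1}]]   # lock 2: depths 1-2 at pos 0, depth 1 at pos 1
print(satisfies(chart, keys, locks))   # True
```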
Available in digital repository of ČVUT.
Semantic Integration in the Context of Cyber-Physical Systems
Mařík, Vladimír; Vrba, Pavel; Jirkovský, Václav
2017 - English
Industrial systems have been developing into more and more complex systems during the last decades. They have changed from centralized solutions into distributed, more robust, and more flexible eco-systems comprising a high number of embedded systems. In recent years, we have been witnessing a research trend in the area of embedded systems that concerns the very close integration of physical and computing systems.
This dissertation thesis deals with the semantic integration of components (sensors and actuators) of cyber-physical systems within the industrial automation domain and presents the resulting benefits. Cyber-physical systems arose from the aforementioned trend of closely integrating computing and physical systems. This tight integration involves infrastructures responsible for control, computation, communication, and sensing. These systems are composed of many subsystems produced by various manufacturers, and the subsystems produce an enormous volume of data. Furthermore, the data generated by the various system parts differ in dimensions, sampling rates, levels of detail, etc. Moreover, cyber-physical systems form the building blocks of the fourth industrial revolution (Industry 4.0), for example the (Industrial) Internet of Things, Smart Cities, and Smart Factories. Thus, a correct understanding of data (data meanings, the given context, subsystem purposes, and possible ways of integrating subsystems) belongs to the essential requirements for enabling the Industry 4.0 vision. In this thesis, the utilization of ontologies is proposed to deal with semantic heterogeneity and enable easier integration of cyber-physical system components.
Moreover, the current widespread effort to create flexible, highly customized manufacturing requires novel methods for data handling together with subsequent data utilization. Storing knowledge and data in an ontology offers a needed solution. For example, employing an ontology brings easy management of the system data model, increases the efficiency of cyber-physical system component interoperability, and enables advanced data processing, reusability of sensors and actuators, and the use of ontology matching methods for integrating other data models. This work concerns the problem of how to describe cyber-physical system components using ontologies to enable effective integration. Next, an ontology matching system suitable for integrating heterogeneous data models in the industrial automation domain is described. The proposed solution for semantic interoperability is demonstrated on the Plug&Play cyber-physical system components.
On the other hand, storing data in an ontology, and mainly the processing of RDF statements, brings one significant bottleneck: performance. Thus, Big Data technologies are employed to overcome this issue, together with a proposal of suitable storage data models. The overall approach is demonstrated on the proposed and developed prototype named the Semantic Big Data Historian.
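A tiny sketch of the vertical-partitioning storage model named in the contributions below: triples are split into one narrow table per predicate instead of a single wide triple table, so a query on one property scans only its partition. The sensor names and readings are invented.

```python
# Tiny sketch of vertical partitioning for RDF data: one (subject, object)
# table per predicate instead of one wide (s, p, o) triple table.
from collections import defaultdict

triples = [
    ("sensor1", "hasValue", 21.5),
    ("sensor1", "hasUnit", "degC"),
    ("sensor2", "hasValue", 3.2),
    ("sensor2", "hasUnit", "bar"),
]

partitions: dict[str, list[tuple]] = defaultdict(list)
for s, p, o in triples:
    partitions[p].append((s, o))        # one narrow table per predicate

# A query on a single property now reads a single partition.
print(partitions["hasValue"])   # [('sensor1', 21.5), ('sensor2', 3.2)]
```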
In particular, the main contributions of the dissertation thesis are as follows:
1. A proposal of a solution for low-level semantic integration of CPS based on Semantic Web technologies, together with a verification of the feasibility of the proposed approach using the Semantic Big Data Historian.
2. Overcoming the performance issues of processing shop-floor data represented as RDF triples with the help of Big Data technologies and suitable storage data models: vertical partitioning and the hybrid SBDH model.
3. A proposal and implementation of a suitable way to integrate heterogeneous data models from the industrial automation domain where the highest precision and recall are required. The approach is based on aggregating similarity measures using self-organizing maps and on user involvement through active learning and visualization of the self-organizing map output layer.
4. Enabling reusability of cyber-physical system components, together with effortless configuration, based on the utilization of Semantic Web technologies. This approach was named Plug&Play cyber-physical system components.
Available in digital repository of ČVUT.
Centralized and Decentralized Algorithms for Multi-Robot Trajectory Coordination
Pěchouček, Michal; Čáp, Michal
2017 - English
One of the standing challenges in multi-robot systems is how to reliably avoid collisions among individual robots without jeopardizing the mission of the system. Existing collision-avoidance techniques are either prone to deadlocks, i.e., the robots may never reach their desired goal positions, or computationally intractable, i.e., a solution may not be provided in practical time. We study whether it is possible to design a method for collision avoidance in multi-robot systems that is both deadlock-free and computationally tractable. The central results of our work are 1) the observation that in appropriately structured environments deadlock-free and computationally tractable collision avoidance is, in fact, achievable, and 2) consequently, practical yet guaranteed centralized and decentralized algorithms for collision avoidance in multi-robot systems.
We take the deliberative approach, i.e., coordinated collision-free trajectories are first computed, either by a central motion planner or by decentralized negotiation among the robots, and then each robot controls its advancement along its planned trajectory. We start by reviewing the existing techniques in both single- and multi-robot motion planning, identify their limitations, and subsequently design new centralized and decentralized trajectory coordination algorithms for different use cases.
Firstly, we prove that a revised version of the classical prioritized planning technique, which may not return a solution in general, is guaranteed to always return a solution in polynomial time under certain conditions that we characterize. Specifically, it is guaranteed to provide a solution if the start and destination of each coordinated robot is an endpoint of a so-called well-formed infrastructure. That is, it can be reliably used in systems where robots at their start and destination positions do not prevent other robots from reaching their goals, which, notably, is a property satisfied by most man-made environments.
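A compact sketch of the prioritized-planning idea follows, assuming a toy grid world and a space-time breadth-first search in place of the thesis's planner: robots plan one at a time in priority order, and each later robot treats the already-planned trajectories as moving obstacles. It checks vertex conflicts only; a full planner would also forbid edge swaps.

```python
# Compact, illustrative sketch of prioritized planning on a grid.
from collections import deque

MOVES = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]   # wait + 4-neighbourhood

def plan(start, goal, size, reserved, horizon=50):
    """Breadth-first search in space-time, avoiding reserved (x, y, t) cells."""
    queue, seen = deque([(start, 0, [start])]), {(start, 0)}
    while queue:
        pos, t, path = queue.popleft()
        if pos == goal:
            return path
        for dx, dy in MOVES:
            nxt = (pos[0] + dx, pos[1] + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size and t < horizon
                    and (*nxt, t + 1) not in reserved and (nxt, t + 1) not in seen):
                seen.add((nxt, t + 1))
                queue.append((nxt, t + 1, path + [nxt]))
    return None   # classical prioritized planning can fail; the revision avoids this

reserved = set()
for start, goal in [((0, 0), (4, 4)), ((4, 0), (0, 4))]:   # priority = list order
    path = plan(start, goal, size=5, reserved=reserved)
    for t, (x, y) in enumerate(path):
        reserved.add((x, y, t))
    for t in range(len(path), 51):     # robot keeps occupying its goal endpoint
        reserved.add((*goal, t))
    print(path)
```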
Secondly, we design an asynchronous decentralized variant of both classical and revised prioritized planning that can find coordinated trajectories solely by peer-to-peer message passing among the robots. The method inherits the guarantees of its centralized version but can compute the solution faster by exploiting the computational power distributed across the multi-robot team.
Thirdly, in contrast to the above algorithms, which coordinate robots in a batch, we design a decentralized algorithm that can coordinate the robots in the system incrementally. That is, the robots may be ordered to relocate at any time during the operation of the system. We prove that if the robots are tasked to relocate between endpoints of a well-formed infrastructure, the algorithm is guaranteed to always find a collision-free trajectory for each relocation task in quadratic time.
Fourthly, we show that incremental replanning of trajectories of individual robots while they are
subject to gradually increasing collision penalty can serve as a powerful heuristic that is able to generate
near-optimal solutions.
Finally, we design a novel control law for controlling the advancement of individual robots in the
team along their planned trajectories in the presence of delaying disturbances, e.g., humans stepping in
the way of robots. While naive control strategies for handling the disturbances may lead to deadlocks,
we prove that under the proposed control law, the robots are guaranteed to always reach their destination.
We evaluate the presented techniques both in synthetic simulated environments and in real-world field experiments. In simulation experiments with up to 60 robots, we observe that the proposed technique generates shorter motions than state-of-the-art reactive collision-avoidance techniques and reliably solves instances where reactive techniques fail. Further, unlike many proposed coordination techniques, we validate the assumptions of our algorithms, and their consequent practical applicability, by implementing and testing the proposed coordination approach in two real-world multi-robot systems. In particular, we successfully deployed and field-tested asynchronous decentralized prioritized planning as a collision-avoidance mechanism in 1) a multi-UAV system with fixed-wing unmanned aircraft and 2) an experimental mobility-on-demand system using self-driving golf carts.
Available in digital repository of ČVUT.
Impact of IP Channel Parameters on the Final Quality of the Transferred Voice
Holub, Jan; Slavata, Oldřich
2017 - English
The subject of this dissertation is the measurement of the quality of voice transmission over IP networks. Development in the digitization, encoding, and transmission of speech is very fast, as is the development of network elements and the growth of transmission capacity and network reliability. This places increased demands on the development and implementation of methods for measuring the quality of voice transmission. Objective algorithms and subjective test methods must respond to new kinds of faults and disturbances. The problem is that the methodology of subjective, and partly objective, tests is based on ITU-T Recommendation P.800 from 1993 and was designed for use in conventional telephone networks. With the current low rate and irregular occurrence of errors, the samples used are too short.
The first part of this thesis describes the protocols and codecs most often used for speech transmission, followed by the methods and algorithms currently used to measure transmission quality. This thesis mainly uses the POLQA algorithm, and also PESQ and 3SQM.
In the second part of this thesis, the effect of various IP network parameters on voice quality is verified in a series of experiments. The tested parameters are delay, delay variation (jitter), packet loss, QoS, and various codecs. The influence of sample length on the measurement results of an objective algorithm is also evaluated. The effects of static delay on conversation quality, and the effects of stress on the evaluators' results, are investigated using subjective tests.
The second part of this work also proposes a methodology for subjective tests using long samples. Besides sample length, the subject of the investigation is the method of collecting and analyzing data from the evaluators. The investigated methods are: a classic one with a rating at the end, an equidistant one with ratings at regular intervals, and a random one with ratings at any time during the sample. These methods differ in the final rating of the sample and its uncertainty.
Thanks to the use of longer samples, it is possible to reduce the number of repetitions, and the subjective tests are simplified while maintaining the required accuracy. Experimental results showed the influence of jitter, packet loss, the codec used, and QoS methods on transmission quality. The influence of static delay on conversation quality was also confirmed. Differences in results are noticeable when comparing objective algorithms; this is caused by the progressive development of the algorithms.
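As a small related illustration (not one of the thesis's methods), the ITU-T G.107 E-model maps packet loss to a transmission rating R and then to an estimated MOS; the defaults below ignore delay impairment, and the codec constants are illustrative values.

```python
# Illustrative E-model (ITU-T G.107) estimate: packet loss -> R-factor -> MOS.
# Delay impairment is ignored; Ie/Bpl values are illustrative codec constants.
def r_factor(ppl: float, ie: float = 0.0, bpl: float = 4.3) -> float:
    """R for a default connection, reduced by effective equipment impairment."""
    ie_eff = ie + (95.0 - ie) * ppl / (ppl + bpl)
    return 93.2 - ie_eff

def mos_from_r(r: float) -> float:
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60.0) * (100.0 - r) * 7e-6

for loss in (0.0, 1.0, 5.0):     # packet loss in percent
    print(loss, round(mos_from_r(r_factor(loss)), 2))
```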
Available in digital repository of ČVUT.
Intelligent Distribution Systems with Dispersed Electricity Generation
Švec, Jan; Tlustý, Josef; Fandi, Ghaeth
2017 - English
The huge wind energy resource represents a potential to power vast areas of the earth with renewable energy using wind farms. Many wind farm concepts have been proposed, but much work still needs to be done on intelligent distribution systems with dispersed electricity generation.
This work presents the development of a comprehensive simulation tool for modeling the dynamic response of wind farms, the verification of the simulation tool through model-to-model comparisons, and the application of the simulation tool to an integrated loads analysis of a real commercial power system for promising system concepts with and without a wind farm. A MATLAB/Simulink simulation tool was developed from a benchmark commercial power system, with the features required to perform loads analyses for system configurations with and without a wind farm. The simulation capability was tested using model-to-model comparisons with and without a wind farm. The favorable results of all of the verification exercises provided confidence in the enhanced voltage profile and reduced power losses.
The simulation tool was then applied in a preliminary loads analysis of a wind farm. The loads analysis aimed to characterize the dynamic response and to identify potential loads and instabilities resulting from the dynamic couplings of the wind farm. Voltage instabilities observed under larger extreme loads were found to improve when the wind farm was installed at various distances and locations in the system.
The design modifications, improving the wind farm response at various distances and locations, were able to eliminate voltage instabilities in the system. The aim was an effective design achieving favorable performance of the proposed electric power system with regard to an improved voltage profile and reduced power losses.
Available in digital repository of ČVUT.
Place Recognition by Per-Location Classifiers
Pajdla, Tomáš; Šivic, Josef; Gronát, Petr
2017 - English
Place recognition is formulated as the task of finding the location where a query image was captured. This is an important task with many practical applications in robotics, autonomous driving, augmented reality, 3D reconstruction, and systems that organize imagery in a geographically structured manner. Place recognition is typically done by finding a reference image in a large, structured, geo-referenced database.
In this work, we first address the problem of building a geo-referenced dataset for place recognition. We describe a framework for building the dataset from the street-side imagery of Google Street View, which provides panoramic views from positions along many streets, cities, and rural areas worldwide. Besides downloading the panoramic views and transforming them into sets of perspective images, the framework can also retrieve the underlying scene depth information.
Second, we aim at localizing a query photograph by finding other images depicting the same place in a large geotagged image database. This is a challenging task due to changes in viewpoint, imaging conditions, and the large size of the image database. The contribution of this work is two-fold: (i) we cast the place recognition problem as a classification task and use the available geotags to train a classifier for each location in the database, in a manner similar to per-exemplar SVMs in object recognition, and (ii) as only a few positive training examples are available for each location, we propose two methods to calibrate all the per-location SVM classifiers without the need for additional positive training data. The first method relies on p-values from statistical hypothesis testing and uses only the available negative training data. The second method performs an affine calibration by appropriately normalizing the learned classifier hyperplane and does not need any additional labeled training data. We test the proposed place recognition method with the bag-of-visual-words and Fisher vector image representations suitable for large-scale indexing.
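A condensed sketch of contribution (ii) follows, assuming random vectors in place of bag-of-visual-words or Fisher-vector descriptors: one linear SVM per location, trained with its few positives against a shared negative pool, then calibrated by normalizing the learned hyperplane so scores are comparable across locations.

```python
# Condensed sketch of per-location classifiers with affine calibration.
# Random vectors stand in for BoW / Fisher-vector image descriptors.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
negatives = rng.normal(0, 1, (200, 64))          # shared negative pool

def train_location_classifier(positives):
    X = np.vstack([positives, negatives])
    y = np.r_[np.ones(len(positives)), np.zeros(len(negatives))]
    svm = LinearSVC(C=1.0).fit(X, y)
    w, b = svm.coef_[0], svm.intercept_[0]
    norm = np.linalg.norm(w)
    return w / norm, b / norm                    # calibrated hyperplane

# Two locations, each with only a handful of positive (geotagged) examples.
classifiers = [train_location_classifier(rng.normal(m, 1, (5, 64)))
               for m in (2.0, -2.0)]

query = rng.normal(2.0, 1, 64)                   # query descriptor
scores = [w @ query + b for w, b in classifiers]
print("predicted location:", int(np.argmax(scores)))
```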
Experiments are performed on three datasets: 25,000 and 55,000 geotagged street
view images of Pittsburgh, and the 24/7 Tokyo benchmark containing 76,000 images with
varying illumination conditions. The results show improved place recognition accuracy of
the learned image representation over direct matching of raw image descriptors.
Available in digital repository of ČVUT.