Number of found documents: 319

Novel Web Metrics Based On Sentiment Analysis
Jelínek, Ivan; Malinský, Radek
2016 - English
In recent years, the Internet has been experiencing a huge boom in social networking, blogging and discussion on online forums. With the growing popularity of these communication channels, a large number of comments on various topics have been appearing from many different types of users. Such an information source is useful not only for academic researchers, but also for commercial companies that would like to gain direct user feedback on the price, quality, and other factors of their products. However, obtaining comprehensive information from such a source is a challenging task nowadays. Several models have been proposed for social media analysis on the Web, but many of these solutions are tailored to a specific purpose or data type, and there is still a lack of generality and no clear approach to handling the data. Moreover, the diversity of web content and the variety of technologies, along with differences in website structure, make the Web a network of heterogeneous data in which things are difficult to find. It is therefore necessary to design a suitable metric that better reflects the semantic content of individual pages. In this thesis, the main emphasis has been placed on the evaluation of Internet trends, where a trend may be defined as anything from an event, a product name or the name of a person to any expression that is mentioned online. A general model has been proposed to collect and analyse data from the Web. The analysis part of the model is based on webometric principles enhanced by methods of sentiment and social network analysis. Extending webometrics with the combination of these methods leads to insights into public opinion on a given topic and to better machine understanding of text. In particular, the main contributions of the dissertation thesis are as follows: 1. Proposal of a new theoretical model for gathering and processing data from Web 2.0. 2. Definition of a methodology for the evaluation of Internet trends. 3. Adaptation of the newly designed methodology for evaluation in the social network sphere. 4. Proposal of new sentiment sense disambiguation methods to improve sentiment classification for words related to multiple topics. 5. Architecture design of a new framework that provides an end-to-end approach to the analysis of selected Internet trends. Available in digital repository of ČVUT.
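A minimal illustration of the kind of lexicon-based sentiment scoring that such an analysis can build on is sketched below; the tiny lexicon, the topic-sensitive entries and the scoring rule are hypothetical placeholders, not the disambiguation methods proposed in the thesis. It only shows why words related to multiple topics need sense disambiguation: the same word can carry opposite polarity depending on the topic it refers to.

```python
# Lexicon-based sentiment scoring sketch (hypothetical lexicon and rules).
# "cheap" is positive when the topic is price but negative for build quality,
# which is exactly the ambiguity that sense disambiguation has to resolve.

BASE_LEXICON = {"great": 1.0, "love": 1.0, "poor": -1.0, "terrible": -1.0}

# Polarity of ambiguous words depends on the topic they co-occur with.
TOPIC_SENSITIVE = {
    "cheap": {"price": 0.8, "quality": -0.8},
    "long": {"battery": 0.7, "delivery": -0.7},
}

def score_comment(text: str, topic: str) -> float:
    """Return the average polarity of a comment with respect to a topic."""
    tokens = text.lower().split()
    scores = []
    for tok in tokens:
        if tok in BASE_LEXICON:
            scores.append(BASE_LEXICON[tok])
        elif tok in TOPIC_SENSITIVE:
            scores.append(TOPIC_SENSITIVE[tok].get(topic, 0.0))
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    comment = "love the phone it is cheap but the case feels cheap"
    print(score_comment(comment, topic="price"))    # leans positive
    print(score_comment(comment, topic="quality"))  # pulled down by "cheap"
```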
České vysoké učení technické v Praze, 2016

Network Traffic Representations for Adaptive Intrusion Detection
Rehák, Martin; Pevný, Tomáš; Bartoš, Karel
2016 - English
New and unseen polymorphic malware, zero-day attacks, or other types of advanced persistent threats are usually not detected by traditional security systems. This represents a challenge to the network security industry, as the amount and variability of attacks have been increasing. In this thesis, we propose three key approaches, each dealing with this challenge at a different level of abstraction. In order to cope with an increasing volume of network traffic, we propose an adaptive sampling method based on two concepts that mitigate the negative impact of sampling on the raw input data: (i) Features used by the analytic algorithms are extracted before the sampling and attached to the surviving flows. The surviving flows thus carry the representation of the original statistical distribution in these attached features. (ii) Adaptive sampling deliberately skews the distribution of the surviving data to over-represent rare flows or flows with rare feature values. This preserves the variability of the data and is critical for the analysis of malicious traffic, such as the detection of stealthy, hidden threats. Our approach has been extensively validated on standard NetFlow data, as well as on HTTP proxy logs that approximate the use case of enriched IPFIX for network forensics. Next, we propose a novel representation and classification system designed to detect both known and previously unseen security threats. The classifiers use a statistical feature representation computed from the network traffic and learn to recognize malicious behavior. The representation is designed and optimized to be invariant to the most common changes of malware behaviors. This is achieved in part by a feature histogram constructed for each group of network connections (flows) and in part by a feature self-similarity matrix computed for each group. The parameters of the representation (histogram bins) are optimized and learned from the training samples along with the classifiers. The proposed approach was deployed on large corporate networks, where it detected 2,090 new variants of malware with 90% precision. Finally, we propose a distributed and self-organized mechanism for the collaboration of multiple heterogeneous detection systems. The mechanism is based on a game-theoretical approach that optimizes the behavior of each detection system with respect to the other systems in highly dynamic environments. The game-theoretical model specializes the detection systems on specific types of malicious behaviors to collaboratively cover a wider range of attack classes. According to our experimental evaluation on real network traffic, the proposed mechanism shows clear improvements caused by the mutual specialization of individual detection systems. All three approaches can be combined into a unified collaborative fusion system, analyzing the input network traffic at different levels of abstraction. The benefits of such a combination were demonstrated in the final experiment, where we combined the proposed adaptive sampling with a collaborative mechanism for detection systems deployed in multiple networks. Available in digital repository of ČVUT.
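To make the sampling idea concrete, the sketch below shows one possible form of feature-aware adaptive sampling: flow features are counted before sampling, attached to each surviving flow, and the sampling probability is skewed toward rare feature values. The weighting scheme and the toy NetFlow-like records are illustrative assumptions, not the exact algorithm from the thesis.

```python
# Sketch of feature-aware adaptive flow sampling (hypothetical weighting scheme).
# Feature statistics are computed on the full data first and attached to each
# surviving flow; sampling then favours flows with rare feature values, so the
# sample over-represents unusual (possibly malicious) behaviour instead of the
# dominant bulk traffic.
import random
from collections import Counter

def adaptive_sample(flows, feature, budget, rng=random.Random(0)):
    """flows: list of dicts; feature: key to bias on; budget: flows to keep."""
    counts = Counter(f[feature] for f in flows)          # statistics before sampling
    weights = [1.0 / counts[f[feature]] for f in flows]  # rare values weigh more
    total = sum(weights)
    probs = [w / total for w in weights]
    chosen, pool = [], list(range(len(flows)))
    # Simple sequential weighted sampling without replacement.
    while pool and len(chosen) < budget:
        r, acc = rng.random() * sum(probs[i] for i in pool), 0.0
        for i in pool:
            acc += probs[i]
            if acc >= r:
                chosen.append(flows[i] | {"attached_count": counts[flows[i][feature]]})
                pool.remove(i)
                break
    return chosen

flows = [{"dst_port": 443}] * 95 + [{"dst_port": 6667}] * 5   # 6667 is the rare value
sample = adaptive_sample(flows, "dst_port", budget=10)
print(sum(1 for f in sample if f["dst_port"] == 6667), "rare flows kept out of 10")
```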
České vysoké učení technické v Praze, 2016

Travelling Waves in Distributed Control
Šebek, Michal; Martinec, Dan
2016 - English
Available in digital repository of ČVUT.
České vysoké učení technické v Praze, 2016

Automatic User Interface Generation
Slavík, Pavel; Macík, Miroslav
2016 - English
Available in digital repository of ČVUT.
České vysoké učení technické v Praze, 2016

PROPERTY STUDIES OF PLASMA SPRAYED TITANATES
Ctibor, Pavel; Kotlan, Jiří; Sedláček, Josef
2016 - English
The use of plasma sprayed coatings is an important part of industrial production in many applications. The technique is used mainly for applications such as thermal barriers and wear-resistant layers. The application of plasma sprayed coatings in the electronics industry lags behind its possibilities because less attention has been paid worldwide to the electrical properties of these coatings. This work provides new knowledge about the electrical and structural properties of titanates, which in the sintered state are used as dielectrics. Calcium titanate and barium-strontium titanate were selected in this dissertation as promising materials in the form of coatings. Plasma deposited coatings exhibit a different microstructure, and often a different phase and chemical composition, compared to the sintered material. The coatings were prepared with a conventional gas-stabilized plasma torch as well as with water-stabilized plasma technology. Many experimental methods were used to bring new knowledge about these materials. The electrical properties were studied through the frequency dependence of the relative permittivity and loss factor, and by measuring the electrical breakdown strength and electrical resistivity and determining the band gap. The morphology of the coatings was studied by both light and electron microscopy. The phase composition was characterized by X-ray diffraction analysis, including high-temperature in-situ experiments. Raman spectroscopy and X-ray photoelectron spectroscopy were used to analyze the chemical composition. The mechanical properties were analyzed by microhardness and elastic modulus measurements. All these measurements and the discussion of the results bring new knowledge about plasma deposited titanates and thereby contribute to a greater potential application of these materials in the electronics industry. Available in digital repository of ČVUT.
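As a small worked example of the dielectric characterization mentioned above, the relative permittivity of a coating can be estimated from a parallel-plate capacitance measurement using the textbook relation eps_r = C·d / (eps_0·A); the sample thickness, electrode area and measured capacitance below are hypothetical values, not results from the thesis.

```python
# Relative permittivity from a parallel-plate capacitance measurement
# (textbook relation eps_r = C*d / (eps_0 * A); sample values are hypothetical).
import math

EPS_0 = 8.854e-12          # vacuum permittivity, F/m

def relative_permittivity(capacitance_F, thickness_m, electrode_area_m2):
    return capacitance_F * thickness_m / (EPS_0 * electrode_area_m2)

# Hypothetical coating sample: 300 um thick, 10 mm diameter electrode, 50 pF measured.
d = 300e-6
area = math.pi * (5e-3) ** 2
C = 50e-12
print(f"eps_r = {relative_permittivity(C, d, area):.1f}")
```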
České vysoké učení technické v Praze, 2016

Algorithms for Analysis of Nonlinear High-Frequency Circuits
Dobeš, Josef; Černý, David
2016 - English
The most efficient simulation solvers use composite procedures that adaptively rearrange computation algorithms to maximize simulation performance. Fast and stable processing optimized for a given simulation problem is essential for any modern simulator. It is characteristic of electronic circuit analysis that the complexity of a simulation is affected by the circuit size and the device models used. The implementation of electronic device models in the SPICE program follows a traditional approach that allows fast computation, but further modification of a model can be questionable. The first fundamental aim of the thesis is the scalability of the simulation, based on an adaptive internal solver that composes different algorithms according to the properties of the simulation problem in order to maximize simulation performance. For small circuits, simple and straightforward methods prove faster; they rely on arithmetic operations without unnecessary conditional jumps and memory rearrangements that cannot be effectively optimized by a compiler. The limit of small simulation problems is given by the capabilities of the computation machine; a present-day PC sets this limit at about fifty independent voltage nodes, below which the inefficiency of the calculation procedure plays no role in overall processor performance. A scalable solver must also be able to handle correctly the simulation of large-scale circuits, which requires an entirely different approach than standard-size circuits. Properties of electronic circuit simulation that until now played only a minor role suddenly gain significance for circuits with several thousand voltage nodes. In those cases, iterative algorithms based on Krylov subspace methods provide better performance than standard direct methods. The thesis also proposes techniques for indexing large-scale sparse matrix systems; their primary purpose is to reduce the memory required to store sparse matrices during the simulation. The second fundamental aim of the thesis is the automatic adaptivity of device model definitions with respect to the current simulation state and settings. This principle, denoted the Functional Chaining mechanism, is based on an automatically self-modifying procedure that utilizes a functional computation layer during the simulation process. It can significantly improve the performance of mapping circuit variables to device models, and it also allows autonomous redefinition of simulation algorithms during analysis with the intention of reducing computation time. The core idea builds on programming principles of functional programming languages; the thesis also presents possibilities of reimplementation in modern object-oriented languages. The third fundamental aim of the thesis focuses on simulation accuracy and reliability. Arbitrary-precision variable types can directly increase simulation accuracy, but on the other hand they can significantly decrease simulation performance. The last chapters provide several algorithms that aim to improve simulation accuracy and suppress the computation errors of floating-point data types. Available in digital repository of ČVUT.
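The point about Krylov subspace methods for large circuits can be illustrated with a short sketch: a sparse, nodal-analysis-like conductance system G·v = i is stored in CSR format and solved iteratively with GMRES instead of a direct factorization. The tridiagonal toy matrix and the SciPy-based solver are illustrative assumptions, not the solver developed in the thesis.

```python
# Sketch of solving a large sparse nodal system G*v = i with a Krylov method
# (GMRES) instead of a direct factorization. The tridiagonal conductance matrix
# is a toy stand-in for a real modified-nodal-analysis matrix: each node has a
# conductance to ground plus coupling to its neighbours.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 5000                                    # number of voltage nodes
main = 4.0 * np.ones(n)                     # diagonal conductances (node + ground)
off = -1.0 * np.ones(n - 1)                 # coupling to neighbouring nodes
G = sp.diags([off, main, off], [-1, 0, 1], format="csr")   # CSR keeps memory low
i_src = np.zeros(n)
i_src[0] = 1e-3                             # 1 mA injected into node 0

v, info = spla.gmres(G, i_src)              # iterative Krylov solve
print("converged" if info == 0 else f"gmres stopped with info={info}")
print("node 0 voltage:", v[0])
```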
České vysoké učení technické v Praze, 2016

Evaluation of motor speech disorders by acoustic analysis: differential diagnosis and monitoring of medical intervention
Čmejla, Roman; Tykalová, Tereza
2016 - English
Dysarthria is a motor speech disorder resulting from neurologic impairment affecting mainly the control and execution of movements related to speech production. Dysarthria in adulthood commonly manifests as a consequence of a degenerative disorder such as Parkinson's disease (PD), Huntington's disease (HD), multiple system atrophy (MSA), progressive supranuclear palsy (PSP) or cerebellar ataxia (CA). Interestingly, the identification of specific deviant speech characteristics can provide important clues about the underlying pathophysiology and localization of neurological diseases. Speech may also serve as a valuable marker of disease onset or treatment efficacy. Therefore, the main aims of this doctoral thesis were (a) to design feasible algorithms, methodologies or measurements that would be sensitive and accurate enough to capture pathological changes in speech, (b) to objectively quantify the effect of a neurological disorder on speech production and (c) to relate the observed speech changes to overall motor performance or medication doses in order to provide deeper insight into the pathophysiology of speech disturbances. Several databases of PD, HD, MSA, PSP and CA patients as well as age-matched healthy controls were obtained. During recording, all participants were instructed to perform several speaking tasks such as sustained phonation, fast syllable repetition, reading a passage or giving a monologue. In addition, various clinical information about the patients' motor skills, cognitive abilities or medication doses was available. Acoustic analyses were carried out to provide a quantitative objective evaluation of speech performance. Statistical analyses were applied to search for possible group differences or correlations between speech and clinical metrics. The results of this doctoral thesis are presented in the form of nine peer-reviewed journal papers. In summary, we managed to objectively quantify the effect of a neurological disorder on speech production in PD, HD, MSA, PSP and CA patients. Furthermore, we proved that the separation of patients from healthy controls based solely on speech is possible. The differentiation among several types of parkinsonian disorders is also possible, as we were able to discriminate between MSA/PSP and PD with 95% accuracy and between PSP and MSA subjects with 75% accuracy. In addition, a number of correlations were found between clinical and speech characteristics. Considering PD, an adverse effect of levodopa on speech fluency was found in patients after 3-6 years of taking the medication. On the other hand, we found improved or maintained speech performance (related mainly to consonant and vowel articulation, pitch variability and number of pauses) in two-thirds of those PD patients, whereas speech deteriorated only in one-third, indicating a generally positive effect of long-term dopaminergic therapy on dysarthria in the early stages of PD. In conclusion, objective acoustic analysis of motor speech disorders can significantly contribute to the early and correct diagnosis of the particular disorder and provide more insight into the underlying pathophysiology of such diseases.
Available in digital repository of ČVUT.
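Two of the acoustic measures mentioned above, pitch variability and the number of pauses, can be sketched roughly as follows; the librosa-based implementation, the 60-400 Hz pitch range and the 200 ms pause threshold are illustrative assumptions, not the exact algorithms designed in the thesis.

```python
# Sketch of two simple dysarthria-related acoustic measures: pitch variability
# and number of pauses. Thresholds (30 dB silence threshold, 60-400 Hz pitch
# range, 200 ms minimum pause) are illustrative, not the thesis's settings.
import numpy as np
import librosa

def speech_measures(wav_path):
    y, sr = librosa.load(wav_path, sr=None)

    # Pitch variability: standard deviation of F0 in semitones over voiced frames.
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    f0_voiced = f0[voiced & ~np.isnan(f0)]
    pitch_sd_semitones = np.std(12 * np.log2(f0_voiced / np.median(f0_voiced)))

    # Number of pauses: gaps between non-silent segments longer than 200 ms.
    intervals = librosa.effects.split(y, top_db=30)
    gaps = (intervals[1:, 0] - intervals[:-1, 1]) / sr
    n_pauses = int(np.sum(gaps > 0.2))

    return pitch_sd_semitones, n_pauses

# Example use (the file path is hypothetical):
# sd, pauses = speech_measures("monologue.wav")
# print(f"pitch SD = {sd:.2f} st, pauses = {pauses}")
```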
České vysoké učení technické v Praze, 2016

QUALITY CONTROL METHODS AND TOOLS FOR IMPROVEMENT OF EFFECTIVENESS OF MANUFACTURING PROCESSES
Mach, Pavel; Tarba, Larisa
2016 - English
The objective of this thesis is to explore the combined use of several modern methods for controlling and optimizing manufacturing processes. The goal is to manufacture high value-added products and to monitor and control the performance and quality of the processes themselves in order to reduce defects, increase defect-free production and improve overall quality. Implementing innovative technologies in the manufacturing process, together with a mathematical model of the criteria for the effectiveness of their implementation, allows the possible changes to be evaluated. Combining Lean Production, the Six Sigma methodology and fuzzy logic gives not only a broader view of all aspects, but also shows how to improve the manufacturing process and make it defect-free, seamless and as efficient as possible. The first part of the thesis describes the current situation of the electronics market, clarifies and explains the basic terms of the methods, and shows how they can be combined and used in manufacturing processes in order to increase and control quality. The second part of the thesis describes one particular manufacturing process, printed circuit board manufacturing, and develops a mathematical model and criteria for evaluating innovative technologies and their implementation in the manufacturing process. The third part of the thesis focuses on a methodology for optimizing the printed circuit board assembly process by minimizing the duration of sub-processes, which can decrease the lead time of the whole assembly process. The fourth part of the thesis focuses on an optimal strategy for implementing innovative technologies in the manufacturing process and suggests the creation of a quality management system policy manual describing the interaction between individual processes within the organization. Available in digital repository of ČVUT.
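As one concrete example of the quantitative process monitoring that Six Sigma contributes to such a combination, the standard capability indices Cp and Cpk can be computed for a measured characteristic against its specification limits; the solder-joint data and limits below are hypothetical.

```python
# Standard Six Sigma process-capability indices Cp and Cpk for a measured
# characteristic against its specification limits. Sample data and limits are
# hypothetical; they only illustrate the kind of quantitative quality
# monitoring the thesis combines with Lean and fuzzy-logic methods.
import numpy as np

def capability(values, lsl, usl):
    mu, sigma = np.mean(values), np.std(values, ddof=1)
    cp = (usl - lsl) / (6 * sigma)                       # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)          # capability with centring
    return cp, cpk

# Hypothetical solder-joint heights (mm) with spec limits 0.90-1.10 mm.
rng = np.random.default_rng(1)
heights = rng.normal(loc=1.01, scale=0.02, size=200)
cp, cpk = capability(heights, lsl=0.90, usl=1.10)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")   # Cpk < Cp when the process is off-centre
```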
České vysoké učení technické v Praze, 2016

Quality Assessment of Post-Processed Images
Klíma, Miloš; Le Callet, Patrick; Fliegel, Karel; Krasula, Lukáš
2016 - English
The vast majority of the work done in the field of quality assessment during the last two decades has been dedicated to quantifying the distortion caused by the processing of an image. The original image was, therefore, always considered to be of the best possible quality. In this kind of scenario, the notion of quality can be expressed as the fidelity of the processed version to the reference. However, some post-processing algorithms adjust the aesthetic properties of an image in order to enhance its perceived quality. In such cases, an image of the best possible quality is not available and the classical fidelity approach is no longer applicable. The goal of this thesis is to revise quality assessment methodologies to cope with the challenges that post-processing brings into quality evaluation. The post-processing algorithms relevant to the topic of this thesis come from two groups: image enhancement, represented by image sharpening, and dynamic range compression (also known as tone mapping). Both subjective and objective quality assessment methodologies applicable in these areas are studied, and suitable solutions outperforming the state-of-the-art methods are proposed. Moreover, a novel methodology for evaluating the performance of objective quality metrics, overcoming the shortcomings of the currently available methods, is presented. Available in digital repository of ČVUT.
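For context, the classical way of evaluating an objective quality metric, whose shortcomings the proposed methodology addresses, is to correlate the metric's outputs with subjective mean opinion scores; a minimal sketch with hypothetical scores follows.

```python
# Classical baseline for judging an objective quality metric: correlate its
# scores with subjective mean opinion scores (MOS) via Pearson (PLCC) and
# Spearman (SROCC) coefficients. The scores below are hypothetical.
import numpy as np
from scipy import stats

mos = np.array([4.5, 3.8, 2.1, 1.5, 3.2, 4.0, 2.7])            # subjective scores
metric = np.array([0.92, 0.80, 0.45, 0.30, 0.66, 0.85, 0.55])  # metric outputs

plcc, _ = stats.pearsonr(metric, mos)
srocc, _ = stats.spearmanr(metric, mos)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```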
České vysoké učení technické v Praze, 2016

Robust recognition of strongly distorted speech
Pollák, Petr; Borský, Michal
2016 - English
Automatic speech recognition systems have become part of our daily lives. People often rely on virtual personal assistants in smartphones, use their voice to control intelligent devices in cars and smart homes, or communicate with automatic dialogue systems in call centres. Since these systems often suffer a performance drop in realistic acoustic conditions characterized by strong distortions, a large portion of research must still focus on robust front-end algorithms and acoustic modelling methods for distorted speech recognition. This thesis is focused on compensation methods working at the level of front-end processing and acoustic modelling, whose aim is to compensate for the degradation introduced by a distant microphone, noisy environments and lossy compression. The techniques for noisy and distant speech recognition studied in this thesis cover front-end noise suppression, feature normalization, acoustic model adaptation and discriminative training. These techniques were evaluated in three different car conditions and two different public environments. The experiments proved that extended spectral subtraction can bring significant improvement even for state-of-the-art systems in public environments with strong noise and for far-distance microphone recordings. The evaluation of compressed speech recognition examined the degrading effects of lossy compression on the fundamental frequency, formants and smoothed LPC spectrum, as well as on the standard MFCC and PLP features used for ASR. Low-pass filtering and areas of very low energy in the spectrogram were identified as the two main causes of degradation. The practical experiments evaluated the contributions of specific feature extraction setups, combinations of normalization and compensation techniques, supervised and unsupervised adaptation, discriminative training methods and, finally, matched training. The largest contributions came from the application of adaptation techniques, subspace GMM and discriminative training. A novel algorithm named Spectrally Selective Dithering (SSD), which compensates for the effect of spectral valleys, was proposed within this thesis. The contribution of the algorithm was verified for both GMM-HMM and DNN-HMM speech recognition systems for Czech and English and for a GMM-HMM system for German. The practical experiments proved that the proposed algorithm can lower the WER for all languages with GMM-HMM systems. For the DNN-HMM system, a significant contribution was achieved only for Czech. Available in digital repository of ČVUT.
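A minimal sketch of plain magnitude spectral subtraction is given below for context; it is the basic textbook form, not the extended spectral subtraction evaluated in the thesis, and the frame sizes, noise-estimation window and test signal are illustrative assumptions.

```python
# Minimal magnitude spectral-subtraction sketch (basic textbook form, not the
# extended variant from the thesis). The noise spectrum is estimated from the
# first few frames, which are assumed to contain no speech.
import numpy as np

def spectral_subtraction(signal, frame_len=512, hop=256, noise_frames=10, floor=0.02):
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len, hop)]
    spectra = np.array([np.fft.rfft(f) for f in frames])
    mags, phases = np.abs(spectra), np.angle(spectra)

    noise_mag = mags[:noise_frames].mean(axis=0)             # noise estimate
    clean_mag = np.maximum(mags - noise_mag, floor * mags)   # subtract, keep a spectral floor

    # Overlap-add resynthesis with the original phase.
    out = np.zeros(len(signal))
    for k, frame_spec in enumerate(clean_mag * np.exp(1j * phases)):
        start = k * hop
        out[start:start + frame_len] += np.fft.irfft(frame_spec, n=frame_len) * window
    return out

# Example: noise-only lead-in followed by a 440 Hz tone in white noise (hypothetical signal).
sr = 16000
t = np.arange(sr) / sr
tone = np.where(t > 0.3, np.sin(2 * np.pi * 440 * t), 0.0)
noisy = tone + 0.3 * np.random.default_rng(0).standard_normal(sr)
denoised = spectral_subtraction(noisy)
```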
České vysoké učení technické v Praze, 2016

About project

NRGL provides central access to information on grey literature produced in the Czech Republic in the fields of science, research and education. You can find more information about grey literature and NRGL on the service website.

Send your suggestions and comments to nusl@techlib.cz

Provider

http://www.techlib.cz
