Number of found documents: 11487

Lexicalized Syntactic Analysis by Restarting Automata
Mráz, F.; Otto, F.; Pardubská, D.; Plátek, Martin
2019 - English
We study h-lexicalized two-way restarting automata that can rewrite at most i times per cycle for some i ≥ 1 (hRLWW(i)-automata). This model, based on certain reduction patterns, is useful for the study of lexical (syntactic) disambiguation, a concept from linguistics. We study lexical disambiguation through the formal notion of h-lexicalized syntactic analysis (hLSA). An hLSA is composed of a basic language and the corresponding h-proper language, which is obtained from the basic language by mapping all basic symbols to input symbols. We stress the sensitivity of hLSA by hRLWW(i)-automata to the size of their windows, the number of possible rewrites per cycle, and the degree of (non-)monotonicity. We introduce the concepts of contextually transparent languages (CTL) and contextually transparent lexicalized analyses based on very special reduction patterns, and we present two-dimensional hierarchies of their subclasses based on the size of windows and on the degree of synchronization. The bottoms of these hierarchies correspond to the context-free languages. CTL forms a proper subclass of the context-sensitive languages with syntactically natural properties. Keywords: Restarting automaton; h-lexicalization; lexical disambiguation Fulltext is available at external website.
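The "analysis by reduction" idea behind restarting automata can be illustrated by a toy recognizer for { aⁿbⁿ : n ≥ 0 }: in each cycle the automaton scans the word, performs one length-reducing rewrite inside a small window, and restarts. This is a didactic sketch of the general principle only, not the hRLWW(i) model of the paper:

```python
import re

# Toy restarting-automaton-style recognizer for { a^n b^n : n >= 0 }.
# Each cycle: scan (verify the word has shape a*b*), perform a single
# length-reducing rewrite in a small window (delete "ab" at the border),
# then restart.  Accept once the word is reduced to the empty word.

def accepts(word: str, window: int = 2) -> bool:
    while word:
        if not re.fullmatch(r"a*b*", word):   # scan phase: check the shape
            return False
        rewritten = False
        for j in range(len(word) - window + 1):
            if word[j:j + window] == "ab":    # reduction pattern in the window
                word = word[:j] + word[j + window:]
                rewritten = True
                break                         # restart after a single rewrite
        if not rewritten:
            return False                      # no reduction applies: reject
    return True
```

Each cycle shortens the word, so the procedure always terminates; the reduction pattern plays the role of the paper's "reduction patterns" in miniature.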
Ústav informatiky, 2019

Second Order Optimality in Markov and Semi-Markov Decision Processes
Sladký, Karel
2019 - English
Semi-Markov decision processes can be considered an extension of discrete- and continuous-time Markov reward models. Unfortunately, traditional optimality criteria such as the long-run average reward per unit time may be quite insufficient to characterize the problem from the point of view of a decision maker. To this end it may be preferable, if not necessary, to select more sophisticated criteria that also reflect the variability-risk features of the problem. Perhaps the best known approaches stem from the classical work of Markowitz on mean-variance selection rules, i.e. we optimize the weighted sum of the average or total reward and its variance. Such an approach has already been studied for very special classes of semi-Markov decision processes, in particular for Markov decision processes in the discrete- and continuous-time settings. In this note these approaches are summarized and possible extensions to the wider class of semi-Markov decision processes are discussed. Attention is mostly restricted to uncontrolled models in which the chain is aperiodic and contains a single class of recurrent states. Considering finite time horizons, explicit formulas for the first and second moments of the total reward, as well as for the corresponding variance, are derived. Keywords: semi-Markov processes with rewards; discrete- and continuous-time Markov reward chains; risk-sensitive optimality; average reward and variance over time Fulltext is available at external website.
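For the simplest discrete-time special case, the first and second moments of the finite-horizon total reward can be computed by a short backward recursion. The model below (transition matrix P, per-step reward r earned in the current state, horizon n) is an illustrative assumption, not the paper's semi-Markov setting:

```python
import numpy as np

# Discrete-time Markov reward chain: transition matrix P, reward r(i)
# earned per step in the current state.  The moments of the n-step total
# reward satisfy the recursions
#   V_n(i) = r(i) + sum_j P[i,j] V_{n-1}(j)
#   S_n(i) = r(i)^2 + 2 r(i) (P V_{n-1})(i) + (P S_{n-1})(i)
# and the variance is Var_n = S_n - V_n^2.

def reward_moments(P, r, n):
    P, r = np.asarray(P, float), np.asarray(r, float)
    V = np.zeros(len(r))            # E[total reward], horizon 0
    S = np.zeros(len(r))            # E[(total reward)^2], horizon 0
    for _ in range(n):
        S = P @ S + 2.0 * r * (P @ V) + r**2   # uses V of the previous horizon
        V = r + P @ V
    return V, S - V**2
```

A sanity check: for a single absorbing state with reward 2 and horizon 3, the total reward is deterministically 6 with zero variance.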
Ústav teorie informace a automatizace, 2019

Mean-Risk Optimization Problem via Scalarization, Stochastic Dominance, Empirical Estimates
Kaňková, Vlasta
2019 - English
Many economic and financial situations depend simultaneously on a random element and on a decision parameter. It is usually possible to influence such a situation through an optimization model depending on a probability measure. We focus on a special case of a one-stage, two-objective stochastic "Mean-Risk" problem. Of course, determining a solution that is optimal with respect to both criteria simultaneously is usually impossible, so other approaches must be employed. Several are known from the literature, but two are particularly important: the first is based on a scalarizing technique, the second on stochastic dominance. The first approach was suggested (in a special case) by Markowitz; the second is based on second-order stochastic dominance, which corresponds (under some assumptions) to a partial order on the set of utility functions. The aim of this contribution is to deal with both of these main approaches. We first recall their properties, and then suggest how the values of both criteria can be improved simultaneously. We focus mainly on the case where the probability characteristics have to be estimated from data. Keywords: Two-objective stochastic optimization problems; scalarization; stochastic dominance; empirical estimates Fulltext is available at external website.
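The empirical second-order stochastic dominance test mentioned above can be sketched directly: X dominates Y (SSD) iff E[max(t − X, 0)] ≤ E[max(t − Y, 0)] for every threshold t, and for empirical distributions it suffices to check the sample points. This is an illustrative sketch of the standard criterion, not the contribution's own estimator:

```python
import numpy as np

# Empirical second-order stochastic dominance check between two samples
# of returns.  Both expected-shortfall curves are piecewise linear with
# kinks at the sample points, so comparing them on the pooled sample
# grid decides the dominance relation for the empirical distributions.

def ssd_dominates(x, y, tol=1e-12):
    x, y = np.asarray(x, float), np.asarray(y, float)
    grid = np.union1d(x, y)                              # candidate thresholds t
    short_x = np.maximum(grid[:, None] - x, 0).mean(axis=1)  # E[(t - X)+]
    short_y = np.maximum(grid[:, None] - y, 0).mean(axis=1)  # E[(t - Y)+]
    return bool(np.all(short_x <= short_y + tol))
```

For example, a sample shifted up by a constant dominates the original sample, while the converse fails.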
Ústav teorie informace a automatizace, 2019

Possibilities of using 3D laser scanning in geotechnical practice
Kukutsch, Radovan; Kajzar, Vlastimil; Šňupárek, Richard; Waclawik, Petr
2019 - English
We are experiencing the penetration of modern and smart technologies into all sectors of human activity, including mining and geotechnics. One of these technologies is 3D laser scanning, which has seen significant technological advances over the last decade and has become an integral part of underground construction monitoring, as a tool enabling comprehensive, accurate and unbiased capture of the spatial situation in digital form. For this reason, a 3D laser scanner has been used since 2013 by the Institute of Geonics of the Czech Academy of Sciences as a necessary part of the geotechnical monitoring of mine works, since, compared to traditional established measuring methods, it can precisely detect and quantify the time-space changes caused by human intervention in the rock mass. A leading project of recent years was the monitoring of the stress-strain state of the rock massif during extraction of the 30th seam in the trial operation of the room and pillar extraction method in the CSM Mine shaft safety pillar, where, besides many other measuring instruments, 3D laser scanning was used for convergence measurement of roadways, and especially for capturing any deformation changes of the permanent pillars. A complementary function was comparative evaluation against the results of other tools, e.g. data measured by horizontal extensometers. The article gives a general description of the possibilities of using 3D laser scanning in geotechnical practice, based on spatial data acquired during monitoring lasting almost 3.5 years, during which important phenomena were detected in the movement of the pillar walls and in floor heave in the CSM Mine, amounting to tens of cm and sometimes up to 100 cm. Keywords: 3D laser scanning; room and pillar; roadways deformation Available at various institutes of the ASCR
Ústav geoniky, 2019

Laplacian preconditioning of elliptic PDEs: Localization of the eigenvalues of the discretized operator
Gergelits, Tomáš; Mardal, K.-A.; Nielsen, B. F.; Strakoš, Z.
2019 - English
This contribution represents an extension of our earlier studies on the paradigmatic example of the inverse problem of diffusion parameter estimation from spatio-temporal measurements of fluorescent particle concentration, see [6, 1, 3, 4, 5]. More precisely, we continue to look for an optimal bleaching pattern used in FRAP (Fluorescence Recovery After Photobleaching), i.e. the initial condition of the Fickian diffusion equation maximizing a sensitivity measure. In what follows, we define an optimization problem and show a special feature (the so-called complementarity principle) of the optimal binary-valued initial conditions. Keywords: second order elliptic PDEs; preconditioning by the inverse Laplacian; eigenvalues of the discretized preconditioned problem; nodal values of the coefficient function; Hall’s theorem; convergence of the conjugate gradient method Available in digital repository of the ASCR
Ústav informatiky, 2019

Study of lithium encapsulation in porous membrane using ion and neutron beams
Ceccio, Giovanni; Cannavó, Antonino; Horák, Pavel; Torrisi, Alfio; Tomandl, Ivo; Hnatowicz, Vladimír; Vacík, Jiří
2019 - English
Ion track-etched membranes are porous systems obtained by etching latent ion tracks with a suitable etchant solution. In this work, control of the pores' spatial profiles and dimensions in PET polymers was achieved by varying the etching temperature and etching time. For determination of the pores' shape, the Ion Transmission Spectroscopy technique was employed. In this method, alterations of the energy loss spectra of the transmitted ions reflect alterations in the material density of the porous foils, as well as alterations of their thickness. A simulation code developed by the team allowed a tomographic study of the 3D geometry of the ion tracks and of its evolution during chemical etching. By doping the porous membranes with a lithium-based solution and analyzing them with the Thermal Neutron Depth Profiling method, the ability of porous PET membranes to encapsulate nano-sized material was also inspected. The study is important for various applications, e.g. catalysis, active agents, biosensors, etc. Keywords: doping; etching; ion transmission spectroscopy; thermal neutron depth profiling Available at various institutes of the ASCR
Ústav jaderné fyziky, 2019

Instrumentation for study of nanomaterials in NPI REZ (New laboratory for material study in Nuclear Physics Institute in REZ)
Bejšovec, Václav; Cannavó, Antonino; Ceccio, Giovanni; Hnatowicz, Vladimír; Horák, Pavel; Lavrentiev, Vasyl; Macková, Anna; Tomandl, Ivo; Torrisi, Alfio; Vacík, Jiří
2019 - English
Nano-sized materials have become an irreplaceable component of a number of devices in every aspect of human life. The development of new materials and the deepening of current knowledge require a set of specialized techniques: deposition methods for preparation and modification of the materials, and analytical tools for a proper understanding of their properties. Thoroughly equipped research centers have become a requirement for advancement and development, and not only in the nano-sized field. The Center of Accelerators and Nuclear Analytical Methods (CANAM) at the Nuclear Physics Institute (NPI) comprises a unique set of techniques for the synthesis or modification of nanostructured materials and systems, and for their characterization using ion beam, neutron beam and microscopy imaging techniques. The methods are used for investigation of a broad range of nano-sized materials and structures based on metal oxides, nitrides, carbides, carbon-based materials (polymers, fullerenes, graphenes, etc.) and nano-laminate composites (MAX phases). These materials can be prepared at NPI using ion beam sputtering, physical vapor deposition and molecular beam epitaxy. Depending on the deposition method and parameters, the samples can be tuned to possess specific properties, e.g. composition, thickness (nm-μm), surface roughness, optical and electrical properties, etc. Various nuclear analytical methods are applied for sample characterization: RBS, RBS-channeling, PIXE, PIGE, micro-beam analyses and Transmission Spectroscopy are carried out at the Tandetron 4130MC accelerator, and Neutron Depth Profiling (NDP) and Prompt Gamma Neutron Activation (PGNA) analyses are additionally performed at an external neutron beam of the LVR-15 research reactor. The multimode AFM facility provides further surface-related information, magnetic/electrical properties with nano-metric precision, nano-indentation, etc. Keywords: AFM; ion beam analysis; LEIF Available at various institutes of the ASCR
Ústav jaderné fyziky, 2019

Wind tunnel tests for lifetime estimation of bridge and mast cables exposed to vortex induced vibrations
Trush, Arsenii; Pospíšil, Stanislav; Kuznetsov, Sergeii
2019 - English
A significant number of TV and radio broadcasting masts in the Czech Republic were built in the 1970s and 1980s, and the reconstruction and determination of the residual life of these structures is now a pressing issue. Guyed masts, and particularly guy ropes, have significant dimensions and comparatively low mass and damping combined with high flexibility. Aerodynamic and aeroelastic loads, such as vortex induced vibrations, galloping and wind gusts, are therefore key for them. Traditional open spiral strand cables are used as the tensile elements (guy ropes) of guyed masts. This type of cable has a characteristic helical surface roughness pattern that can act as a vortex suppressor, and high fatigue endurance, although somewhat lower corrosion resistance compared to modern locked coil cables, which have non-circular wires in the outer layer, and cables with protective polymer coatings. At the same time, on numerous bridges with the above-mentioned modern cable types, fatigue damage to wires in anchorage zones and destruction of protective coatings has been detected. The present paper provides results of wind tunnel testing of three helical strand cable models, designed to evaluate separately the impact of lay angle and of surface roughness, together with a reference smooth cylinder model, in flow with grid-generated turbulence of different intensities. A reduction of the lock-in range of the helical strand cables compared to the reference smooth model was observed, with an increase of the lay angle having the greatest impact. Keywords: bridge cable; wind tunnel; vortex shedding Available at various institutes of the ASCR
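The lock-in phenomenon studied here follows from the standard Strouhal relation: vortices shed at frequency f = St·U/D, and lock-in is expected when f approaches a natural frequency of the cable. The sketch below uses these textbook relations with illustrative parameter names, not the paper's measured data:

```python
# Standard vortex-shedding relations for a circular cylinder.
# St (Strouhal number) ~ 0.2 is typical at subcritical Reynolds numbers;
# U is wind speed [m/s], D is cable diameter [m], f_n a natural frequency [Hz].

def shedding_frequency(U, D, St=0.2):
    """Vortex shedding frequency f = St * U / D, in Hz."""
    return St * U / D

def lockin_wind_speed(f_n, D, St=0.2):
    """Critical wind speed U = f_n * D / St at which shedding locks in to f_n."""
    return f_n * D / St
```

For a 100 mm cable in a 10 m/s flow this gives a shedding frequency of about 20 Hz, so a mode at 2 Hz would lock in near 1 m/s.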
Ústav teoretické a aplikované mechaniky, 2019

A Nonparametric Bootstrap Comparison of Variances of Robust Regression Estimators.
Kalina, Jan; Tobišková, Nicole; Tichavský, Jan
2019 - English
While various robust regression estimators are available for the standard linear regression model, performance comparisons of individual robust estimators over real or simulated datasets still seem to be lacking. In general, a reliable robust estimator of regression parameters should be consistent and at the same time have relatively small variability, i.e. the variances of the individual regression parameters should be small. The aim of this paper is to compare the variability of S-estimators, MM-estimators, least trimmed squares, and least weighted squares estimators. While they are all consistent under general assumptions, the asymptotic covariance matrix of the least weighted squares estimator remains infeasible, because the only available formula for its computation depends on the unknown random errors. Thus, we resort to a nonparametric bootstrap comparison of the variability of the different robust regression estimators. It turns out that the best results are obtained either with MM-estimators or with the least weighted squares estimator with suitable weights; the latter is especially recommendable for small sample sizes. Keywords: robustness; linear regression; outliers; bootstrap; least weighted squares Fulltext is available at external website.
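The nonparametric bootstrap idea can be sketched in a few lines: resample (x, y) pairs with replacement, refit the estimator on each resample, and compare the empirical variances of the resulting slopes. The "robust" estimator below is a naive least trimmed squares slope via a concentration step; it is an illustration of the scheme, not the authors' implementation:

```python
import numpy as np

# Nonparametric bootstrap of the variance of a regression slope estimator.
# lts_slope is a crude least-trimmed-squares fit: iteratively refit on the
# h observations with the smallest absolute residuals.

rng = np.random.default_rng(0)

def lts_slope(x, y, keep=0.75, iters=20):
    """Crude least trimmed squares slope by repeated trimming (illustrative)."""
    h = int(keep * len(x))
    idx = np.arange(len(x))
    for _ in range(iters):
        b, a = np.polyfit(x[idx], y[idx], 1)   # slope, intercept on the subset
        resid = np.abs(y - (a + b * x))
        idx = np.argsort(resid)[:h]            # keep the smallest residuals
    return b

def bootstrap_var(x, y, estimator, B=200):
    """Empirical variance of the slope over B bootstrap resamples of pairs."""
    slopes = [estimator(x[i], y[i])
              for i in (rng.integers(0, len(x), len(x)) for _ in range(B))]
    return float(np.var(slopes))
```

Running `bootstrap_var` with two different estimators on the same data then gives the kind of variability comparison the paper performs.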
Ústav informatiky, 2019

Implicitly weighted robust estimation of quantiles in linear regression
Kalina, Jan; Vidnerová, Petra
2019 - English
Estimation of quantiles represents a very important task in econometric regression modeling, and the standard regression quantiles machinery is well developed as well as popular in a large number of econometric applications. Although regression quantiles are commonly considered robust tools, they are vulnerable to the presence of leverage points in the data. We propose here a novel approach for linear regression based on a specific version of the least weighted squares estimator, together with an additional estimator based only on observations between two different novel quantiles. The new methods are conceptually simple and comprehensible. While we do not aim to derive their theoretical properties here, numerical computations reveal that they perform comparably to standard regression quantiles when the data are not contaminated by outliers, and they appear much more robust on a simulated dataset with severe leverage points. Keywords: regression quantiles; robust regression; outliers; leverage points Fulltext is available at external website.
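The implicit weighting idea can be sketched as follows: weights are assigned to *ordered* squared residuals, downweighting the largest ones, and the fit is iterated until the ranking stabilizes. The linearly decreasing rank weights and the simple iteration below are our own illustrative choices, not the authors' estimator:

```python
import numpy as np

# Sketch of an implicitly weighted least squares fit: at each iteration,
# rank observations by absolute residual and give the smallest residual
# weight 1 down to weight 0 for the largest, then refit by weighted LS.

def lws_fit(X, y, iters=10):
    n = X.shape[0]
    w = np.ones(n)
    rank_weights = np.linspace(1.0, 0.0, n)    # weight as a function of rank
    beta = None
    for _ in range(iters):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted LS step
        order = np.argsort(np.abs(y - X @ beta))          # rank the residuals
        w = np.empty(n)
        w[order] = rank_weights                # smallest residual -> weight 1
    return beta
```

On noise-free data with a single gross outlier, the outlier receives the zero weight after the first iteration and the fit recovers the true coefficients, which illustrates the leverage-point robustness discussed in the abstract.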
Ústav informatiky, 2019

About project

NRGL provides central access to information on grey literature produced in the Czech Republic in the fields of science, research and education. More information about grey literature and NRGL can be found on the service website.


Provider

http://www.techlib.cz
