Number of found documents: 1654

Globální implicitní funkce
Rohn, Jiří
2020 - Czech
This text dates from 1973 and has not been published until now. Its main result is a theorem on the existence and uniqueness of a global implicit function in R^n. The result is preceded by a number of auxiliary propositions. Keywords: strongly locally connected sets; irredundant covering; continuation of an implicit function; existence and uniqueness; global implicit function; inverse mapping Available in a digital repository NRGL
Ústav informatiky, 2020
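
As background, the classical local implicit function theorem that a global result of this kind strengthens can be stated as follows (a standard textbook formulation, not quoted from the report):

    Let $F\colon \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^m$ be continuously differentiable,
    let $F(x_0, y_0) = 0$, and let the partial Jacobian $\frac{\partial F}{\partial y}(x_0, y_0)$ be nonsingular.
    Then there exist neighbourhoods $U \ni x_0$ and $V \ni y_0$ and a unique $C^1$ map $f\colon U \to V$
    with $f(x_0) = y_0$ and $F(x, f(x)) = 0$ for all $x \in U$.
    A global implicit function theorem asks under which additional conditions a single such $f$
    exists and is unique on all of $\mathbb{R}^n$.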

Regression for High-Dimensional Data: From Regularization to Deep Learning
Kalina, Jan; Vidnerová, Petra
2020 - English
Regression modeling is well known as a fundamental task in current econometrics. However, classical estimation tools for the linear regression model are not applicable to high-dimensional data. Although there is no agreement on a formal definition of high-dimensional data, these are usually understood either as data with the number of variables p (possibly greatly) exceeding the number of observations n, or as data with a large p in the order of (at least) thousands. In both situations, which appear in various fields including econometrics, the analysis of the data is difficult due to the so-called curse of dimensionality (cf. Kalina (2013) for discussion). Compared to linear regression, nonlinear regression modeling with an unknown shape of the relationship of the response to the regressors requires even more intricate methods. Keywords: regression; neural networks; robustness; high-dimensional data; regularization Fulltext is available at external website.
Ústav informatiky, 2020
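
As a minimal sketch of what regularization buys for p > n, the closed-form ridge estimator below remains well defined even though ordinary least squares is not unique; the simulated data, the penalty value lam and the variable names are illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 200                                  # far more variables than observations
    X = rng.standard_normal((n, p))
    beta_true = np.zeros(p)
    beta_true[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]     # only a few informative regressors
    y = X @ beta_true + 0.1 * rng.standard_normal(n)

    # For p > n the normal equations X'X beta = X'y are singular; the ridge penalty
    # lam * ||beta||^2 makes them uniquely solvable.
    lam = 1.0
    beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    print("five largest |coefficients|:", np.round(np.sort(np.abs(beta_ridge))[-5:], 2))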

The Equation |x| - |Ax| = b
Rohn, Jiří
2020 - English
We formulate conditions on A and b under which the double absolute value equation |x| - |Ax| = b possesses in each orthant a unique solution which, moreover, belongs to the interior of that orthant. Keywords: absolute value equation; double absolute value equation; orthantwise solvability; theorem of the alternatives Available in a digital repository NRGL
Ústav informatiky, 2020
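
For a tiny example, orthantwise solvability of |x| - |Ax| = b can be inspected by brute force: once a sign pattern s is fixed for x and t for Ax, the equation becomes the linear system (diag(s) - diag(t) A) x = b, and a candidate is kept only if both sign patterns are consistent with it. This enumeration is only an illustration for small n; the matrix A and vector b below are made-up test data, and this is not the paper's method of proof.

    import numpy as np
    from itertools import product

    def orthant_solutions(A, b, tol=1e-10):
        """Enumerate solutions of |x| - |Ax| = b over sign patterns of x and Ax."""
        n = len(b)
        sols = []
        for s in product([-1.0, 1.0], repeat=n):        # assumed sign pattern of x
            for t in product([-1.0, 1.0], repeat=n):    # assumed sign pattern of Ax
                M = np.diag(s) - np.diag(t) @ A
                try:
                    x = np.linalg.solve(M, b)
                except np.linalg.LinAlgError:
                    continue
                if np.all(np.array(s) * x >= -tol) and np.all(np.array(t) * (A @ x) >= -tol):
                    sols.append(x)                      # sign patterns are consistent with x
        return sols

    A = np.array([[0.2, 0.1], [0.0, 0.3]])
    b = np.array([1.0, 1.0])
    for x in orthant_solutions(A, b):
        print(x, "residual:", np.abs(x) - np.abs(A @ x) - b)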

The scalar-valued score functions of continuous probability distribution
Fabián, Zdeněk
2019 - English
In this report we give a theoretical basis of the probability theory of continuous random variables based on scalar-valued score functions. We maintain consistently the following point of view: it is not the observed value which is to be used in probabilistic and statistical considerations, but its 'treated form', the value of the scalar-valued score function of the distribution of the assumed model. Actually, the opinion that an observed value of a random variable should be 'treated' with respect to the underlying model is one of the main ideas of likelihood-based inference in classical statistics. However, the vector nature of the Fisher score functions of classical statistics does not enable a consistent use of this point of view. Instead, various inference functions are suggested and used in solutions of various statistical problems. The inference function of this report is the scalar-valued score function of the distribution. Keywords: Shortcomings of probability theory; Scalar-valued score functions; Characteristics of continuous random variables; Parametric estimation; Transformed distributions; Skew-symmetric distributions Available at various institutes of the ASCR
Ústav informatiky, 2019
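
For a location model with support on the whole real line, the classical score reduces to the scalar function -d/dx log f(x); the scalar-valued score functions of the report generalize this object to other supports and parameters. The small numerical check below uses the standard normal and standard logistic densities purely as illustrative examples.

    import numpy as np

    def location_score(logpdf, x, h=1e-5):
        """Numerical scalar score -d/dx log f(x) of a location family (central difference)."""
        return -(logpdf(x + h) - logpdf(x - h)) / (2 * h)

    log_normal   = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
    log_logistic = lambda x: -x - 2 * np.log1p(np.exp(-x))

    xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
    print("normal   score:", np.round(location_score(log_normal, xs), 3))    # equals x (unbounded)
    print("logistic score:", np.round(location_score(log_logistic, xs), 3))  # tanh(x/2) (bounded)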

Lexicalized Syntactic Analysis by Restarting Automata
Mráz, F.; Otto, F.; Pardubská, D.; Plátek, Martin
2019 - English
We study h-lexicalized two-way restarting automata that can rewrite at most i times per cycle for some i ≥ 1 (hRLWW(i)-automata). This model is considered useful for the study of lexical (syntactic) disambiguation, which is a concept from linguistics based on certain reduction patterns. We study lexical disambiguation through the formal notion of h-lexicalized syntactic analysis (hLSA). The hLSA is composed of a basic language and the corresponding h-proper language, which is obtained from the basic language by mapping all basic symbols to input symbols. We stress the sensitivity of hLSA by hRLWW(i)-automata to the size of their windows, the number of possible rewrites per cycle, and the degree of (non-)monotonicity. We introduce the concepts of contextually transparent languages (CTL) and contextually transparent lexicalized analyses based on very special reduction patterns, and we present two-dimensional hierarchies of their subclasses based on the size of windows and on the degree of synchronization. The bottoms of these hierarchies correspond to the context-free languages. CTL forms a proper subclass of the context-sensitive languages with syntactically natural properties. Keywords: Restarting automaton; h-lexicalization; lexical disambiguation Fulltext is available at external website.
Ústav informatiky, 2019
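
The idea of analysis by reduction behind restarting automata can be shown on a toy example for the language { a^n b^n }: in every cycle the automaton deletes one factor "ab" at the border between the a-block and the b-block and restarts, accepting when the word becomes empty. This sketch ignores h-lexicalization, window size and the whole hRLWW(i) machinery of the paper.

    def accepts(word: str) -> bool:
        """Toy restarting automaton for { a^n b^n }: each cycle deletes one "ab" and restarts."""
        while word:
            i = word.find("ab")
            # the reduction applies only at the border: 'a's to the left, 'b's to the right
            if i < 0 or set(word[:i]) - {"a"} or set(word[i + 2:]) - {"b"}:
                return False                      # no applicable reduction: reject
            word = word[:i] + word[i + 2:]        # shorten the word and restart
        return True                               # empty word: accept

    for w in ["", "ab", "aabb", "aab", "abab", "ba"]:
        print(repr(w), accepts(w))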

Laplacian preconditioning of elliptic PDEs: Localization of the eigenvalues of the discretized operator
Gergelits, Tomáš; Mardal, K.-A.; Nielsen, B. F.; Strakoš, Z.
2019 - English
This contribution represents an extension of our earlier studies on the paradigmatic example of the inverse problem of diffusion parameter estimation from spatio-temporal measurements of fluorescent particle concentration, see [6, 1, 3, 4, 5]. More precisely, we continue to look for an optimal bleaching pattern used in FRAP (Fluorescence Recovery After Photobleaching), that is, the initial condition of the Fickian diffusion equation maximizing a sensitivity measure. In what follows, we define an optimization problem and show a special feature (the so-called complementarity principle) of the optimal binary-valued initial conditions. Keywords: second order elliptic PDEs; preconditioning by the inverse Laplacian; eigenvalues of the discretized preconditioned problem; nodal values of the coefficient function; Hall's theorem; convergence of the conjugate gradient method Available in digital repository of the ASCR
Ústav informatiky, 2019
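
The title and keywords refer to the localization of the eigenvalues of the inverse-Laplacian-preconditioned operator by the nodal values of the coefficient function. A rough one-dimensional finite-difference illustration of that phenomenon (the coefficient k and the mesh are arbitrary choices, and this is not the authors' experiment):

    import numpy as np

    def stiffness_1d(k_mid):
        """Discretization of -(k u')' on (0,1) with zero Dirichlet conditions; k_mid holds
        the coefficient on the element midpoints (length n+1 for n interior nodes)."""
        n = len(k_mid) - 1
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = k_mid[i] + k_mid[i + 1]
            if i + 1 < n:
                A[i, i + 1] = A[i + 1, i] = -k_mid[i + 1]
        return A

    n = 40
    x_mid = (np.arange(n + 1) + 0.5) / (n + 1)
    k = 1.0 + 4.0 * x_mid**2                   # a smooth positive coefficient function
    A = stiffness_1d(k)                        # variable-coefficient operator
    L = stiffness_1d(np.ones(n + 1))           # Laplacian used as preconditioner

    eigs = np.sort(np.linalg.eigvals(np.linalg.solve(L, A)).real)
    print("eigenvalues of L^{-1} A lie in:", eigs[0].round(3), "...", eigs[-1].round(3))
    print("range of the coefficient k   :", k.min().round(3), "...", k.max().round(3))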

A Logical Characteristic of Read-Once Branching Programs
Žák, Stanislav
2019 - English
We present a mathematical model of intuitive notions such as the knowledge or the information arising at different stages of computations on branching programs (b.p.). The model has two appropriate properties: (i) the "knowledge" arising at a given stage of computation is derivable from the "knowledge" arising at the previous stage according to the rules of the model and to the local arrangement of the b.p.; (ii) the model confirms the intuitively well-known fact that the knowledge arising at a node of a computation depends not only on that node but in some cases also on some "mystery" information (i.e. different computations reaching the same node may have different knowledge arising at it). We prove that with respect to our model no such information exists in read-once b.p.'s, but that in b.p.'s which are not read-once such information must be present. The read-once property thus forms a frontier. More concretely, we may see the instances of our model as systems S = (U, D), where U is a universe of knowledge and D is a set of derivation rules. We say that a b.p. P is compatible with a system S iff along each computation of P, S derives F (false) or T (true) at the end, correctly according to the label of the reached sink. This key notion modifies the classic paradigm which takes the computational complexity with respect to different classes of restricted b.p.'s (e.g. read-once b.p.'s, k-b.p.'s, b.p.'s computing in limited time, etc.). Now the restriction is defined by a subset of systems, and only those programs are taken into account which are compatible with at least one of the chosen systems. Further, we understand the sets U of knowledge as sets of admissible logical formulae. It is clear that richer sets U imply stronger restrictions on b.p.'s, and consequently smaller complexities of Boolean functions are detected. Richer logical equipment implies stronger computational effectiveness. Another question arises: given a set of Boolean functions (e.g. codes of some graphs), what logical equipment is optimal from the point of view of complexity? Keywords: branching programs; computational complexity; logic Available in digital repository of the ASCR
Ústav informatiky, 2019
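
A branching program is a DAG whose inner nodes query input bits and whose sinks are labelled true or false; it is read-once if every variable is queried at most once along every path. A minimal evaluation sketch (the program below computes x0 XOR x1 and is only an illustration, not the formal model S = (U, D) of the report):

    # Each inner node: (queried variable, successor on 0, successor on 1); sinks are True/False.
    # Every source-to-sink path queries each variable at most once, so this b.p. is read-once.
    program = {
        "v0":  (0, "v1a", "v1b"),
        "v1a": (1, False, True),
        "v1b": (1, True, False),
    }

    def evaluate(bp, source, x):
        node = source
        while node not in (True, False):
            var, zero_succ, one_succ = bp[node]
            node = one_succ if x[var] else zero_succ
        return node

    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, "->", evaluate(program, "v0", x))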

A Nonparametric Bootstrap Comparison of Variances of Robust Regression Estimators.
Kalina, Jan; Tobišková, Nicole; Tichavský, Jan
2019 - English
While various robust regression estimators are available for the standard linear regression model, performance comparisons of individual robust estimators over real or simulated datasets still seem to be lacking. In general, a reliable robust estimator of regression parameters should be consistent and at the same time should have a relatively small variability, i.e. the variances of the individual regression parameters should be small. The aim of this paper is to compare the variability of S-estimators, MM-estimators, least trimmed squares, and least weighted squares estimators. While they are all consistent under general assumptions, the asymptotic covariance matrix of the least weighted squares remains infeasible, because the only available formula for its computation depends on the unknown random errors. Thus, we resort to a nonparametric bootstrap comparison of the variability of different robust regression estimators. It turns out that the best results are obtained either with MM-estimators or with the least weighted squares with suitable weights. The latter estimator is especially recommendable for small sample sizes. Keywords: robustness; linear regression; outliers; bootstrap; least weighted squares Fulltext is available at external website.
Ústav informatiky, 2019
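
The nonparametric bootstrap comparison can be set up generically: resample observations with replacement, refit each estimator on every resample, and compare the empirical variances of the resulting coefficients. The sketch below uses ordinary least squares and a crude trimmed variant only as stand-ins; the paper compares S-, MM-, least trimmed squares and least weighted squares estimators, and the data here are simulated.

    import numpy as np

    rng = np.random.default_rng(1)

    def ols(X, y):
        return np.linalg.lstsq(X, y, rcond=None)[0]

    def trimmed_ls(X, y, keep=0.75, steps=10):
        """Crude trimmed least squares via concentration steps (illustrative stand-in only)."""
        beta, h = ols(X, y), int(keep * len(y))
        for _ in range(steps):
            idx = np.argsort((y - X @ beta) ** 2)[:h]   # keep the h smallest squared residuals
            beta = ols(X[idx], y[idx])
        return beta

    def bootstrap_variances(X, y, estimator, B=500):
        n = len(y)
        betas = []
        for _ in range(B):
            idx = rng.integers(0, n, n)                  # resample observations with replacement
            betas.append(estimator(X[idx], y[idx]))
        return np.array(betas).var(axis=0)

    n = 100
    X = np.column_stack([np.ones(n), rng.standard_normal(n)])
    y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)
    y[:5] += 15.0                                        # a few outliers in the response

    print("bootstrap variances, OLS       :", bootstrap_variances(X, y, ols).round(4))
    print("bootstrap variances, trimmed LS:", bootstrap_variances(X, y, trimmed_ls).round(4))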

Implicitly weighted robust estimation of quantiles in linear regression
Kalina, Jan; Vidnerová, Petra
2019 - English
Estimation of quantiles represents a very important task in econometric regression modeling, and the standard regression quantiles machinery is well developed and popular in a large number of econometric applications. Although regression quantiles are commonly known as robust tools, they are vulnerable to the presence of leverage points in the data. We propose here a novel approach for linear regression based on a specific version of the least weighted squares estimator, together with an additional estimator based only on observations between two different novel quantiles. The new methods are conceptually simple and comprehensible. Without the ambition to derive theoretical properties of the novel methods, numerical computations reveal them to perform comparably to standard regression quantiles if the data are not contaminated by outliers. Moreover, the new methods seem much more robust on a simulated dataset with severe leverage points. Keywords: regression quantiles; robust regression; outliers; leverage points Fulltext is available at external website.
Ústav informatiky, 2019
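
The least weighted squares idea assigns weights to observations according to the ranks of their squared residuals, so that the worst-fitting observations get the smallest (here zero) weights. The iteration and the particular weight function below are illustrative choices, not the paper's algorithm, and the data are simulated.

    import numpy as np

    def lws_fit(X, y, weight_fn, n_iter=20):
        """Rough iterative approximation of a least weighted squares fit."""
        n = len(y)
        beta = np.linalg.lstsq(X, y, rcond=None)[0]              # start from OLS
        for _ in range(n_iter):
            ranks = np.argsort(np.argsort((y - X @ beta) ** 2))  # 0 = smallest squared residual
            w = weight_fn(ranks / (n - 1))                       # weights from relative ranks
            sw = np.sqrt(w)
            beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        return beta

    # linearly decreasing weights, zero for the worst 25 % of observations (an arbitrary choice)
    weight_fn = lambda u: np.clip(1.0 - u / 0.75, 0.0, 1.0)

    rng = np.random.default_rng(2)
    n = 200
    X = np.column_stack([np.ones(n), rng.standard_normal(n)])
    y = X @ np.array([0.5, 1.0]) + 0.5 * rng.standard_normal(n)
    X[:10, 1] += 8.0; y[:10] -= 10.0                             # severe leverage points
    print("OLS        :", np.linalg.lstsq(X, y, rcond=None)[0].round(2))
    print("LWS sketch :", lws_fit(X, y, weight_fn).round(2))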

A Robustified Metalearning Procedure for Regression Estimators
Kalina, Jan; Neoral, A.
2019 - English
Metalearning represents a useful methodology for selecting and recommending a suitable algorithm or method for a new dataset, exploiting a database of training datasets. While metalearning is potentially beneficial for the analysis of economic data, we must be aware of its instability and sensitivity to outlying measurements (outliers) as well as measurement errors. The aim of this paper is to robustify the metalearning process. First, we prepare some useful theoretical tools exploiting the idea of implicit weighting, inspired by the least weighted squares estimator. These include a robust coefficient of determination, a robust version of the mean square error, and a simple rule for outlier detection in linear regression. We perform a metalearning study for recommending the best linear regression estimator for a new dataset (not included in the training database). The prediction of the optimal estimator is learned over a set of 20 real datasets with economic motivation, and least squares is compared with several (highly) robust estimators. We investigate the effect of variable selection on the metalearning results. If the training as well as validation data are considered after a proper robust variable selection, the metalearning performance improves remarkably, especially if a robust prediction error is used. Keywords: model choice; computational statistics; robustness; variable selection Available in digital repository of the ASCR
Ústav informatiky, 2019
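
The robust prediction measures mentioned above can be thought of as implicitly weighted versions of the usual ones, with weights that shrink to zero for the observations with the largest residuals. The particular weight function, trimming level and data below are illustrative assumptions, not the paper's definitions.

    import numpy as np

    def implicit_weights(residuals, trim=0.25):
        """Linearly decreasing weights by rank of squared residual; the worst `trim` share gets 0."""
        u = np.argsort(np.argsort(residuals ** 2)) / (len(residuals) - 1)
        return np.clip(1.0 - u / (1.0 - trim), 0.0, 1.0)

    def weighted_mse(y, y_hat):
        r = y - y_hat
        w = implicit_weights(r)
        return np.sum(w * r ** 2) / np.sum(w)

    def weighted_r2(y, y_hat):
        r = y - y_hat
        w = implicit_weights(r)
        y_bar = np.sum(w * y) / np.sum(w)
        return 1.0 - np.sum(w * r ** 2) / np.sum(w * (y - y_bar) ** 2)

    rng = np.random.default_rng(3)
    y = rng.standard_normal(50) + 3.0
    y_hat = y + 0.1 * rng.standard_normal(50)
    y[:3] += 20.0                                   # a few gross outliers in the response
    print("classical MSE:", np.mean((y - y_hat) ** 2).round(3),
          " weighted MSE:", weighted_mse(y, y_hat).round(3))
    print("weighted R^2 :", weighted_r2(y, y_hat).round(3))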

About project

NRGL provides central access to information on grey literature produced in the Czech Republic in the fields of science, research and education. You can find more information about grey literature and NRGL at service web
