###
**Nonparametric Bootstrap Techniques for Implicitly Weighted Robust Estimators**

Kalina, Jan

2018 - English
The paper is devoted to highly robust statistical estimators based on implicit weighting, which have the potential to find econometric applications. Two particular methods are a robust correlation coefficient based on the least weighted squares regression and the minimum weighted covariance determinant estimator; the latter estimates the mean and covariance matrix of multivariate data. New tools are proposed for testing hypotheses about these robust estimators and for estimating their variance. The techniques considered in the paper include resampling approaches with and without replacement, i.e. permutation tests, bootstrap variance estimation, and bootstrap confidence intervals. The performance of the newly described tools is illustrated on numerical examples. They reveal that the robust procedures are suitable even for non-contaminated data, as their confidence intervals are not much wider than those for standard maximum likelihood estimators. While resampling without replacement turns out to be more suitable for hypothesis testing, bootstrapping with replacement yields reliable confidence intervals but not corresponding hypothesis tests.
Keywords:
*robust statistics; econometrics; correlation coefficient; multivariate data*
Fulltext is available at external website.
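The two resampling schemes contrasted in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's method: plain Pearson correlation stands in for the implicitly weighted robust coefficient, while the bootstrap (resampling with replacement, for confidence intervals) and the permutation test (resampling without replacement, for hypothesis testing) are generic.

```python
import random
import statistics

def pearson(x, y):
    """Plain Pearson correlation; a stand-in for the robust coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def bootstrap_ci(x, y, stat=pearson, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample (x_i, y_i) pairs WITH replacement."""
    rng = random.Random(seed)
    n = len(x)
    reps = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        reps.append(stat([x[i] for i in idx], [y[i] for i in idx]))
    reps.sort()
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

def permutation_test(x, y, stat=pearson, n_perm=2000, seed=0):
    """Two-sided permutation test of zero correlation: shuffling y
    (resampling WITHOUT replacement) breaks the pairing with x."""
    rng = random.Random(seed)
    observed = abs(stat(x, y))
    y_perm = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y_perm)
        if abs(stat(x, y_perm)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0
```

Swapping `pearson` for a robust correlation estimator turns the same machinery into the setting the paper studies.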

###
**How to down-weight observations in robust regression: A metalearning study**

Kalina, Jan; Pitra, Zbyněk

2018 - English
Metalearning is becoming an increasingly important methodology for transferring knowledge from a database of available training data sets to a new (independent) data set. The concept is becoming popular in statistical learning, and the number of metalearning applications in the analysis of economic data sets is growing. Still, not much attention has been paid to its limitations and disadvantages. To this end, we apply various linear regression estimators (including highly robust ones) to a set of 30 data sets with an economic background and perform a metalearning study over them, as well as over the same data sets after an artificial contamination. We focus on comparing the prediction performance of the least weighted squares estimator under various weighting schemes. A broad spectrum of classification methods is applied, and a support vector machine turns out to yield the best results. Because the results of leave-one-out cross validation differ substantially from those of autovalidation, we conclude that metalearning is highly unstable and its results should be interpreted with care. We also discuss the general limitations of the metalearning methodology.
Keywords:
*metalearning; robust statistics; linear regression; outliers*
Available at various institutes of the ASCR
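The gap between autovalidation and leave-one-out cross validation noted in the abstract can be demonstrated on a toy metalearning setup. The 1-NN rule and the one-dimensional meta-features below are illustrative assumptions, not the paper's actual design; with 1-NN, autovalidation is trivially perfect (every data set is its own nearest neighbor), while leave-one-out exposes the real instability.

```python
def nearest_neighbor(train_X, train_y, x):
    """1-NN classifier over data-set meta-features (illustrative choice)."""
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_X]
    return train_y[dists.index(min(dists))]

def autovalidation_accuracy(X, y):
    """Evaluate the rule on the same meta-data it was built from."""
    return sum(nearest_neighbor(X, y, x) == t for x, t in zip(X, y)) / len(y)

def loo_accuracy(X, y):
    """Leave-one-out: hold each data set out in turn, predict it from the rest."""
    hits = 0
    for i in range(len(X)):
        Xtr, ytr = X[:i] + X[i + 1:], y[:i] + y[i + 1:]
        hits += nearest_neighbor(Xtr, ytr, X[i]) == y[i]
    return hits / len(X)
```

On meta-features with noisy "best method" labels, `autovalidation_accuracy` reports 100% while `loo_accuracy` can drop sharply — the over-optimism the study warns about.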

###
**Robust Metalearning: Comparing Robust Regression Using A Robust Prediction Error**

Peštová, Barbora; Kalina, Jan

2018 - English
The aim of this paper is to construct a classification rule for predicting the best regression estimator for a new data set, based on a database of 20 training data sets. The estimators considered here include some popular methods of robust statistics. The methodology used for constructing the classification rule can be described as metalearning. Nevertheless, standard approaches to metalearning should be robustified when working with data sets contaminated by outlying measurements (outliers). Therefore, our contribution can also be described as a robustification of the metalearning process by means of a robust prediction error. In addition to performing the metalearning study with both standard and robust approaches, we give a detailed interpretation of two particular situations. The detailed investigation shows that the knowledge obtained by a metalearning approach based on standard principles is prone to great variability and instability, which makes it hard to believe that the results are not just a consequence of mere chance. This aspect of metalearning seems not to have been analyzed in the literature before.
Keywords:
*metalearning; robust regression; outliers; robust prediction error*
Fulltext is available at external website.
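One common way to build a robust prediction error of the kind the abstract invokes is a trimmed mean of squared residuals; whether the paper uses exactly this form is an assumption, but the sketch shows why the choice matters when ranking estimators on contaminated data.

```python
def trimmed_mse(y_true, y_pred, trim=0.25):
    """Robust prediction error (assumed form): average only the smallest
    (1 - trim) share of squared residuals, so a few outlying cases
    cannot dominate the comparison of regression estimators."""
    sq = sorted((a - b) ** 2 for a, b in zip(y_true, y_pred))
    keep = max(1, int(len(sq) * (1 - trim)))
    return sum(sq[:keep]) / keep
```

With a single gross outlier, the plain mean squared error explodes while the trimmed version stays small — so the "best estimator" label a metalearning database records can flip depending on which error is used.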

###
**An adaptive recursive multilevel approximate inverse preconditioning: Computation of the Schur complement**

Kopal, Jiří; Rozložník, Miroslav; Tůma, Miroslav

2017 - English
Available in digital repository of the ASCR

###
**The Computational Power of Neural Networks and Representations of Numbers in Non-Integer Bases.**

Šíma, Jiří

2017 - English
We briefly survey the basic concepts and results concerning the computational power of neural networks which basically depends on the information content of weight parameters. In particular, recurrent neural networks with integer, rational, and arbitrary real weights are classified within the Chomsky and finer complexity hierarchies. Then we refine the analysis between integer and rational weights by investigating an intermediate model of integer-weight neural networks with an extra analog rational-weight neuron (1ANN). We show a representation theorem which characterizes the classification problems solvable by 1ANNs, by using so-called cut languages. Our analysis reveals an interesting link to an active research field on non-standard positional numeral systems with non-integer bases. Within this framework, we introduce a new concept of quasi-periodic numbers which is used to classify the computational power of 1ANNs within the Chomsky hierarchy.
Keywords:
*neural network; Chomsky hierarchy; beta-expansion; cut language*
Available at various institutes of the ASCR
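The non-integer bases mentioned in the abstract refer to beta-expansions: representations of a number as a digit series in a base beta > 1 that need not be an integer. A minimal sketch of the greedy expansion (a standard construction, not taken from the paper) illustrates the objects behind cut languages and quasi-periodic numbers:

```python
import math

def greedy_beta_expansion(x, beta, n_digits):
    """Greedy beta-expansion of x in [0, 1) for base beta > 1:
    digits d_i in {0, ..., ceil(beta) - 1} with x = sum_i d_i * beta**(-i)."""
    digits = []
    for _ in range(n_digits):
        x *= beta
        d = math.floor(x)   # largest admissible digit at this position
        digits.append(d)
        x -= d              # carry the remainder to the next position
    return digits

def eval_expansion(digits, beta):
    """Reconstruct the value from its digit sequence."""
    return sum(d * beta ** -(i + 1) for i, d in enumerate(digits))
```

For the golden ratio as base, for instance, the digits are binary and the truncated series converges to the represented number; a number is quasi-periodic (in the paper's sense) when the tails of such an expansion eventually cycle through a finite set.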

###
**Návrh integrovaného emisního procesoru nové generace** (Design of a New-Generation Integrated Emission Processor)

Resler, Jaroslav; Juruš, Pavel; Benešová, N.; Vlček, O.; Belda, M.; Huszár, P.; Krč, Pavel; Eben, Kryštof

2017 - Czech
The only widely used and publicly available tool for emission modelling for CTM (chemical transport model) purposes is the SMOKE processor (Coats & Carlie, 1996). The problem with SMOKE, however, is its strong dependence on the conditions of the USA. Several attempts have been made in the past to adapt SMOKE to other conditions (see e.g. the work reported in Bieser et al., 2011, or Borge et al., 2008), but the efforts always ran into the limits of the processor's design. Our goal is to develop an emission processor based on open technologies that is convenient for typical CTM usage in our conditions and flexible and configurable enough to serve the specific needs of users elsewhere in the world.
Keywords:
*emission model; CTM; postgresql; postgis; inventory*
Available at various institutes of the ASCR

###
**Exact Inference In Robust Econometrics under Heteroscedasticity**

Kalina, Jan; Peštová, Barbora

2017 - English
The paper is devoted to the least weighted squares estimator, one of the highly robust estimators for the linear regression model. Novel permutation tests of heteroscedasticity are proposed. The asymptotic behavior of the permutation test statistics of the Goldfeld-Quandt and Breusch-Pagan tests is also investigated. A numerical experiment on real economic data is presented, which also shows how to build a robust prediction model under heteroscedasticity. The theoretical results can be readily extended to the context of multivariate quantiles.
Keywords:
*heteroscedasticity; robust statistics; regression; diagnostic tools; economic data*
Fulltext is available at external website.
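A permutation test of heteroscedasticity in the spirit of the abstract can be sketched as follows. This is an illustrative simplification, not the paper's construction: the statistic is the correlation between the regressor and the squared OLS residuals (the flavor of the Breusch-Pagan auxiliary regression), and its null distribution is obtained by permuting the squared residuals, which breaks any link between error variance and the regressor.

```python
import random

def ols_residuals(x, y):
    """Residuals from a simple least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    a0 = my - b * mx
    return [c - (a0 + b * a) for a, c in zip(x, y)]

def bp_statistic(x, e2):
    """Correlation between regressor and squared residuals."""
    n = len(x)
    mx, me = sum(x) / n, sum(e2) / n
    num = sum((a - mx) * (s - me) for a, s in zip(x, e2))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((s - me) ** 2 for s in e2)) ** 0.5
    return num / den

def permutation_bp_test(x, y, n_perm=1000, seed=0):
    """Permutation p-value for the heteroscedasticity statistic."""
    rng = random.Random(seed)
    e2 = [r * r for r in ols_residuals(x, y)]
    obs = abs(bp_statistic(x, e2))
    e2_perm, hits = list(e2), 0
    for _ in range(n_perm):
        rng.shuffle(e2_perm)
        if abs(bp_statistic(x, e2_perm)) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

Replacing the OLS residuals with least weighted squares residuals would move the sketch toward the robust setting the paper actually analyzes.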

###
**On the Optimization of Initial Conditions for a Model Parameter Estimation**

Matonoha, Ctirad; Papáček, Š.; Kindermann, S.

2017 - English
The design of an experiment, e.g. the setting of initial conditions, strongly influences the accuracy of the process of determining model parameters from data. The key concept relies on the analysis of the sensitivity of the measured output with respect to the model parameters. Based on this approach, we optimize an experimental design factor: the initial condition for an inverse problem of model parameter estimation. Our approach, although case independent, is illustrated on the FRAP (Fluorescence Recovery After Photobleaching) experimental technique. The core idea resides in maximizing a sensitivity measure that depends on the initial condition. Numerical experiments show that the discretized optimal initial condition attains only two values, and that the number of jumps between these values is inversely proportional to the value of the diffusion coefficient D (characterizing the biophysical and numerical process): the smaller D is, the more jumps occur.
Keywords:
*FRAP; sensitivity analysis; optimal experimental design; parameter estimation; finite differences*
Available in digital repository of the ASCR
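The sensitivity measure the abstract maximizes can be illustrated on a one-dimensional diffusion model. The discretization, the squared-sensitivity measure, and the parameter values below are illustrative assumptions, not the paper's setup; the sketch only shows why a two-valued ("bang-bang") initial profile is informative about D while a flat one is not.

```python
def diffuse(u0, D, dt=0.1, steps=100, dx=1.0):
    """Explicit FTCS scheme for u_t = D u_xx with zero-flux ends
    (stable while D*dt/dx**2 <= 0.5)."""
    u = list(u0)
    r = D * dt / dx ** 2
    for _ in range(steps):
        un = u[:]
        for i in range(1, len(u) - 1):
            un[i] = u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
        un[0], un[-1] = un[1], un[-2]  # zero-flux boundaries
        u = un
    return u

def sensitivity(u0, D, h=1e-4, **kw):
    """Assumed sensitivity measure: sum_i (du_i/dD)**2, with the
    derivative approximated by a central finite difference in D."""
    up = diffuse(u0, D + h, **kw)
    um = diffuse(u0, D - h, **kw)
    return sum(((a - b) / (2 * h)) ** 2 for a, b in zip(up, um))
```

A spatially constant initial condition yields zero sensitivity (diffusion leaves it unchanged for every D), whereas a two-valued profile with a sharp jump produces a strictly positive one, which is the qualitative shape of the optimum the abstract describes.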


###
**On the optimal initial conditions for an inverse problem of model parameter estimation**

Matonoha, Ctirad; Papáček, Š.

2017 - English
Available in digital repository of the ASCR
