Hybrid Methods for Nonlinear Least Squares Problems
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2019 - English
This contribution contains a description and analysis of effective methods for minimization of the nonlinear least squares function F(x) = (1/2) f^T(x) f(x), where x ∈ R^n and f ∈ R^m, together with extensive computational tests and comparisons of the introduced methods. All hybrid methods are described in detail and their global convergence is proved in a unified way. Some proofs concerning trust region methods, which are difficult to find in the literature, are also added. In particular, the report contains an analysis of a new simple hybrid method with Jacobian corrections (Section 8) and an investigation of the simple hybrid method for sparse least squares problems proposed previously in [33] (Section 14).
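As a purely illustrative companion to the abstract, the following is a minimal sketch of a damped Gauss-Newton (Levenberg-Marquardt) iteration for minimizing F(x) = (1/2) f^T(x) f(x). It is not one of the report's hybrid or trust region methods; the residual function and Jacobian used in the example are hypothetical.

```python
# Illustrative sketch only: a basic Levenberg-Marquardt iteration for
# F(x) = (1/2) f(x)^T f(x). Not the report's hybrid method; the example
# residuals f and Jacobian J below are hypothetical.
import numpy as np

def levenberg_marquardt(f, J, x0, lam=1e-3, tol=1e-8, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, A = f(x), J(x)                    # residuals and Jacobian
        g = A.T @ r                          # gradient of F
        if np.linalg.norm(g) < tol:
            break
        # damped Gauss-Newton (Levenberg-Marquardt) step
        step = np.linalg.solve(A.T @ A + lam * np.eye(x.size), -g)
        r_new = f(x + step)
        if r_new @ r_new < r @ r:
            x, lam = x + step, lam * 0.5     # accept step, reduce damping
        else:
            lam *= 10.0                      # reject step, increase damping
    return x

# Example: Rosenbrock-type residuals f(x) = (10*(x2 - x1^2), 1 - x1)
f = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
J = lambda x: np.array([[-20.0 * x[0], 10.0], [-1.0, 0.0]])
print(levenberg_marquardt(f, J, [-1.2, 1.0]))
```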
Keywords:
numerical optimization; nonlinear least squares; trust region methods; hybrid methods; sparse problems; partially separable problems; numerical experiments
Full texts are available in the NUŠL digital repository.
Does a Singular Symmetric Interval Matrix Contain a Symmetric Singular Matrix?
Rohn, Jiří
2019 - English
We consider the conjecture formulated in the title concerning the existence of a symmetric singular matrix in a singular symmetric interval matrix. We show by means of a counterexample that it is generally not valid, and we prove that it becomes true under the additional assumption of positive semidefiniteness of the midpoint matrix. The proof is constructive.
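For intuition only, the sketch below randomly samples symmetric matrices from a symmetric interval matrix and looks for a nearly singular one, and it also checks the positive semidefiniteness of the midpoint matrix, i.e. the extra assumption used in the paper. This is a heuristic exploration, not the paper's counterexample or constructive proof, and the 2x2 data are hypothetical.

```python
# Exploratory sketch (not the paper's proof): sample symmetric matrices from
# [Ac - Delta, Ac + Delta] and record the smallest singular value found;
# also check whether the midpoint matrix Ac is positive semidefinite.
import numpy as np

def sample_symmetric(Ac, Delta, rng):
    E = rng.uniform(-1.0, 1.0, Ac.shape) * Delta   # |E_ij| <= Delta_ij
    E = np.triu(E) + np.triu(E, 1).T               # symmetrize
    return Ac + E

def search_symmetric_singular(Ac, Delta, trials=10000, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(trials):
        A = sample_symmetric(Ac, Delta, rng)
        smin = np.linalg.svd(A, compute_uv=False).min()
        if best is None or smin < best[0]:
            best = (smin, A)
    return best   # smallest singular value found and the matrix attaining it

# Hypothetical example data
Ac = np.array([[1.0, 0.0], [0.0, 1.0]])
Delta = np.array([[1.0, 0.5], [0.5, 1.0]])
print("midpoint PSD:", bool(np.all(np.linalg.eigvalsh(Ac) >= 0)))
print("smallest singular value found:", search_symmetric_singular(Ac, Delta)[0])
```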
Keywords:
symmetric interval matrix; singularity; positive semidefiniteness
Full texts are available in the NUŠL digital repository.
Transforming hierarchical images to program expressions using deep networks
Křen, Tomáš
2018 - English
We present a technique for effectively training a neural network to produce a formal description of a given image. The basic motivation of the proposed technique is the intention to design a new tool for automatic program synthesis capable of transforming sensory data (in our case a static image, but generally a phenotype) into a formal code expression (i.e., the syntactic tree of a program), such that the code (from the evolutionary perspective a genotype) evaluates to a value similar to the input data, ideally identical. Our approach is partially based on our technique for generating program expressions in the context of typed functional genetic programming. We present promising results on a simple image description language, achieved with a deep network combining a convolutional encoder of images with a recurrent decoder that generates program expressions in sequential prefix notation, and we propose possible future applications.
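The following is a minimal sketch of the kind of encoder-decoder architecture described above: a convolutional image encoder whose feature vector initializes a recurrent decoder emitting program tokens in prefix notation. The layer sizes, vocabulary, and image resolution are assumptions for illustration, not the paper's exact model.

```python
# Minimal sketch (assumed architecture, not the paper's exact model): a
# convolutional image encoder initializes a GRU decoder that outputs logits
# over program tokens in prefix notation.
import torch
import torch.nn as nn

class Image2Program(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(              # 1x32x32 image -> hidden vector
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 8 * 8, hidden))
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, image, tokens):
        h0 = self.encoder(image).unsqueeze(0)      # (1, batch, hidden)
        emb = self.embed(tokens)                   # (batch, seq, hidden)
        dec, _ = self.decoder(emb, h0)
        return self.out(dec)                       # logits over the token vocabulary

# Toy forward pass: 4 images, target prefix expressions of length 10
model = Image2Program(vocab_size=20)
logits = model(torch.randn(4, 1, 32, 32), torch.randint(0, 20, (4, 10)))
print(logits.shape)  # torch.Size([4, 10, 20])
```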
Keywords:
deep networks; automatic program synthesis; image processing
Full texts are available in the NUŠL digital repository.
Nonparametric Bootstrap Techniques for Implicitly Weighted Robust Estimators
Kalina, Jan
2018 - English
The paper is devoted to highly robust statistical estimators based on implicit weighting, which have the potential to find econometric applications. Two particular methods are a robust correlation coefficient based on the least weighted squares regression and the minimum weighted covariance determinant estimator, where the latter allows one to estimate the mean and covariance matrix of multivariate data. New tools are proposed that allow one to test hypotheses about these robust estimators or to estimate their variance. The techniques considered in the paper include resampling approaches with and without replacement, i.e., permutation tests, bootstrap variance estimation, and bootstrap confidence intervals. The performance of the newly described tools is illustrated on numerical examples. They reveal the suitability of the robust procedures also for non-contaminated data, as their confidence intervals are not much wider than those for standard maximum likelihood estimators. While resampling without replacement turns out to be more suitable for hypothesis testing, bootstrapping with replacement yields reliable confidence intervals but not corresponding hypothesis tests.
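As an illustration of the two resampling schemes mentioned above, the sketch below computes a nonparametric bootstrap confidence interval (resampling with replacement) and a permutation test of zero correlation (resampling without replacement). A plain Pearson correlation is used as a stand-in for the paper's robust correlation based on least weighted squares, and the data are simulated.

```python
# Illustrative sketch: bootstrap CI and permutation test for a correlation
# coefficient. The Pearson correlation is a stand-in estimator, not the
# paper's robust correlation; the data below are simulated.
import numpy as np

def bootstrap_ci(x, y, stat, B=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = [stat(x[idx], y[idx]) for idx in rng.integers(0, n, (B, n))]
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

def permutation_pvalue(x, y, stat, B=2000, seed=0):
    rng = np.random.default_rng(seed)
    observed = abs(stat(x, y))
    exceed = sum(abs(stat(x, rng.permutation(y))) >= observed for _ in range(B))
    return (exceed + 1) / (B + 1)

corr = lambda x, y: np.corrcoef(x, y)[0, 1]    # stand-in estimator

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)
print("95% bootstrap CI:", bootstrap_ci(x, y, corr))
print("permutation p-value:", permutation_pvalue(x, y, corr))
```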
Keywords:
robust statistics; econometrics; correlation coefficient; multivariate data
The document is available on external websites.
How to down-weight observations in robust regression: A metalearning study
Kalina, Jan; Pitra, Zbyněk
2018 - English
Metalearning is becoming an increasingly important methodology for extracting knowledge from a database of available training data sets and applying it to a new (independent) data set. The concept of metalearning is becoming popular in statistical learning, and there is an increasing number of metalearning applications also in the analysis of economic data sets. Still, not much attention has been paid to its limitations and disadvantages. To investigate these, we use various linear regression estimators (including highly robust ones) over a set of 30 data sets with an economic background and perform a metalearning study over them, as well as over the same data sets after artificial contamination. We focus on comparing the prediction performance of the least weighted squares estimator with various weighting schemes. A broader spectrum of classification methods is applied, and a support vector machine turns out to yield the best results. Because the results of leave-one-out cross-validation are very different from the results of autovalidation, we conclude that metalearning is highly unstable and its results should be interpreted with care. We also discuss the general limitations of the metalearning methodology.
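The schematic sketch below illustrates the metalearning setup described in the abstract: each row of the meta-data describes one data set, the label says which regression estimator predicted best, and a support vector machine is evaluated by leave-one-out cross-validation. The meta-features and labels are randomly generated placeholders, not the paper's 30 economic data sets.

```python
# Schematic sketch of a metalearning study (hypothetical meta-features and
# labels, not the paper's data): predict the best estimator per data set with
# an SVM, assessed by leave-one-out cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# meta-features per data set, e.g. n, p, outlier share, error skewness (made up)
X_meta = rng.normal(size=(30, 4))
y_best = rng.integers(0, 3, size=30)   # index of the best-performing estimator

clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, X_meta, y_best, cv=LeaveOneOut())
print("leave-one-out accuracy:", scores.mean())
```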
Keywords:
metalearning; robust statistics; linear regression; outliers
Full texts are available on request via the repository of the Academy of Sciences.
Sparse Test Problems for Nonlinear Least Squares
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2018 - English
This report contains a description of subroutines which can be used for testing large-scale optimization codes. These subroutines can easily be obtained from the web page http://www.cs.cas.cz/~luksan/test.html. Furthermore, all test problems contained in these subroutines are presented in analytic form.
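To give a flavour of what a sparse nonlinear least squares test problem in analytic form looks like, the sketch below implements the chained Rosenbrock residuals, a generic example of this class (not necessarily one of the report's problems): each residual depends on only two neighbouring variables, so the Jacobian is sparse.

```python
# Sketch of a sparse nonlinear least squares test problem in analytic form
# (chained Rosenbrock residuals, used here as a generic example).
import numpy as np

def chained_rosenbrock_residuals(x):
    x = np.asarray(x, dtype=float)
    r1 = 10.0 * (x[1:] - x[:-1] ** 2)    # residuals coupling x_i and x_{i+1}
    r2 = 1.0 - x[:-1]                    # residuals depending on x_i only
    return np.concatenate([r1, r2])

x0 = np.full(10, -1.2)
f = chained_rosenbrock_residuals(x0)
print("F(x0) =", 0.5 * f @ f)
```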
Keywords:
large-scale optimization; least squares; test problems
Full texts are available in the NUŠL digital repository.
Problems for Nonlinear Least Squares and Nonlinear Equations
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2018 - English
This report contains a description of subroutines which can be used for testing large-scale optimization codes. These subroutines can easily be obtained from the web page http://www.cs.cas.cz/~luksan/test.html. Furthermore, all test problems contained in these subroutines are presented in analytic form.
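For the nonlinear equations side, the sketch below gives a standard sparse test system in analytic form, the Broyden tridiagonal function (used as a generic example, not necessarily one of the report's problems): component i depends only on x_{i-1}, x_i, and x_{i+1}.

```python
# Sketch of a sparse system of nonlinear equations of the kind used for
# testing solvers (Broyden tridiagonal function, a standard example).
import numpy as np

def broyden_tridiagonal(x):
    x = np.asarray(x, dtype=float)
    xm = np.concatenate([[0.0], x[:-1]])   # x_{i-1} with zero boundary
    xp = np.concatenate([x[1:], [0.0]])    # x_{i+1} with zero boundary
    return (3.0 - 2.0 * x) * x - xm - 2.0 * xp + 1.0

x0 = np.full(10, -1.0)
print("||f(x0)|| =", np.linalg.norm(broyden_tridiagonal(x0)))
```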
Keywords:
large-scale optimization; least squares; nonlinear equations; test problems
Full texts are available in the NUŠL digital repository.
Robust Metalearning: Comparing Robust Regression Using A Robust Prediction Error
Peštová, Barbora; Kalina, Jan
2018 - English
The aim of this paper is to construct a classification rule for predicting the best regression estimator for a new data set based on a database of 20 training data sets. The various estimators considered here include some popular methods of robust statistics. The methodology used for constructing the classification rule can be described as metalearning. Nevertheless, standard approaches to metalearning should be robustified when working with data sets contaminated by outlying measurements (outliers). Therefore, our contribution can also be described as a robustification of the metalearning process by means of a robust prediction error. In addition to performing the metalearning study by means of both standard and robust approaches, we search for a detailed interpretation in two particular situations. The results of this detailed investigation show that the knowledge obtained by a metalearning approach based on standard principles is prone to great variability and instability, which makes it hard to believe that the results are not just a consequence of mere chance. This aspect of metalearning seems not to have been previously analyzed in the literature.
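The sketch below shows one possible form of a robust prediction error: a trimmed mean of squared prediction residuals that ignores the largest errors, so that a few outliers do not dominate the comparison of regression estimators. The trimming level and the least-squares stand-in estimator are illustrative assumptions, not the paper's choices.

```python
# Hedged sketch of a robust (trimmed) prediction error used to compare
# regression fits; the trimming level and data are illustrative.
import numpy as np

def trimmed_mse(y_true, y_pred, keep=0.75):
    sq = np.sort((np.asarray(y_true) - np.asarray(y_pred)) ** 2)
    h = int(np.ceil(keep * len(sq)))       # keep the h smallest squared errors
    return sq[:h].mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(scale=0.1, size=100)
y[:5] += 20.0                              # a few outlying measurements
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print("MSE:", np.mean((y - X @ beta_hat) ** 2))
print("trimmed MSE:", trimmed_mse(y, X @ beta_hat))
```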
Keywords:
metalearning; robust regression; outliers; robust prediction error
The document is available on external websites.
A limited-memory optimization method using the infinitely many times repeated BNS update and conjugate directions
Vlček, Jan; Lukšan, Ladislav
2018 - English
Keywords:
Unconstrained minimization; variable metric methods; limited-memory methods; the repeated BFGS update; global convergence; numerical results
Full texts are available in the digital repository of the Academy of Sciences.
Numerical solution of generalized minimax problems
Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan
2018 - English
Keywords:
Numerical optimization; nonlinear approximation; nonsmooth optimization; generalized minimax problems; recursive quadratic programming methods; interior point methods; smoothing methods; algorithms; numerical experiments
Full texts are available in the digital repository of the Academy of Sciences.