DeepBoost
We present a new ensemble learning algorithm, DeepBoost, which can use as base classifiers a hypothesis set containing deep decision trees, or members of other rich or complex families, and succeed in achieving high accuracy without overfitting the data. The key to the success of the algorithm is a capacity-conscious criterion for the selection of the hypotheses. We give new data-dependent learning bounds for convex ensembles, expressed in terms of the Rademacher complexities of the sub-families composing the base classifier set and the mixture weight assigned to each sub-family. Our algorithm directly benefits from these guarantees, since it seeks to minimize the corresponding learning bound. We give a full description of our algorithm, including the details of its derivation, and report the results of several experiments showing that its performance compares favorably to that of AdaBoost, Logistic Regression, and their L1-regularized variants.
DeepBoost
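The abstract's central idea is that each candidate base classifier is judged not only by its weighted error but also by the Rademacher complexity of the sub-family it is drawn from. Below is a minimal Python sketch of such a capacity-conscious selection step; the additive penalty form `err + lam * r_k`, the value of `lam`, and the helper name are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def capacity_conscious_select(hypotheses, complexities, weights, X, y, lam=0.1):
    """Pick the base classifier minimizing weighted error plus a
    complexity penalty (a sketch of the DeepBoost selection idea).

    hypotheses   -- list of callables h(X) -> predictions in {-1, +1}
    complexities -- Rademacher complexity estimate r_k of the
                    sub-family each hypothesis belongs to
    weights      -- current boosting distribution over the examples
    """
    best_idx, best_score = None, np.inf
    for idx, (h, r_k) in enumerate(zip(hypotheses, complexities)):
        err = np.sum(weights * (h(X) != y))   # weighted training error
        score = err + lam * r_k               # penalize rich sub-families
        if score < best_score:
            best_idx, best_score = idx, score
    return best_idx, best_score
```

In a full boosting loop, this selection would replace AdaBoost's pure error minimization, so a deep tree is chosen only when its error reduction outweighs its capacity penalty.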


Targeted Learning
The statistics profession is at a unique point in history. The need for valid statistical tools is greater than ever; data sets are massive, often containing hundreds of thousands of measurements for a single subject. The field is ready to move toward clear, objective benchmarks under which tools can be evaluated. Targeted learning allows
1) the full generalization and utilization of cross-validation as an estimator selection tool so that the subjective choices made by humans are now made by the machine, and
2) targeting the fitting of the probability distribution of the data toward the target parameter representing the scientific question of interest.
Targeted learning methods build machine-learning-based estimators of parameters defined as features of the probability distribution of the data, while also providing influence-curve- or bootstrap-based confidence intervals. The theory offers a general template for creating targeted maximum likelihood estimators for a data structure, a nonparametric or semiparametric statistical model, and a parameter mapping. These estimators of causal inference parameters are doubly robust and have a variety of other desirable statistical properties.
Targeted maximum likelihood estimation was built on the loss-based 'super learning' system so that lower-dimensional parameters could be targeted (e.g., a marginal causal effect); the targeting step removes the remaining bias in the (low-dimensional) target feature of the probability distribution. Targeted learning for effect estimation and causal inference allows the complete integration of machine learning advances in prediction while providing statistical inference for the target parameter(s) of interest.
http://…/9781441997814
http://…/papers
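To make the 'targeting' idea concrete, here is a minimal sketch of a targeted maximum likelihood estimator for one canonical target parameter, the average treatment effect, following the standard TMLE recipe (initial fit, propensity score, clever covariate, fluctuation). The plain logistic regressions are simplifying stand-ins for the super-learner ensembles the text describes, and `tmle_ate` is a hypothetical helper, not code from the book.

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import expit, logit
from sklearn.linear_model import LogisticRegression

def tmle_ate(Y, A, W):
    """One-step TMLE for the average treatment effect E[Y(1) - Y(0)],
    assuming binary outcome Y, binary treatment A, covariate matrix W."""
    AW = np.column_stack([A, W])

    # Step 1: initial estimate of the outcome regression Q(A, W).
    Q_fit = LogisticRegression().fit(AW, Y)
    Q_a = Q_fit.predict_proba(AW)[:, 1]
    Q_1 = Q_fit.predict_proba(np.column_stack([np.ones_like(A), W]))[:, 1]
    Q_0 = Q_fit.predict_proba(np.column_stack([np.zeros_like(A), W]))[:, 1]

    # Step 2: propensity score g(W) = P(A = 1 | W), bounded away from 0 and 1.
    g = np.clip(LogisticRegression().fit(W, A).predict_proba(W)[:, 1],
                0.025, 0.975)

    # Step 3: "clever covariate" that targets the fit toward the ATE.
    H = A / g - (1 - A) / (1 - g)

    # Step 4: targeting step -- fluctuate the initial fit along H via a
    # logistic regression with the initial fit as an offset.
    eps = sm.GLM(Y, H.reshape(-1, 1), offset=logit(Q_a),
                 family=sm.families.Binomial()).fit().params[0]

    # Step 5: updated (targeted) predictions and the plug-in estimate.
    Q_1_star = expit(logit(Q_1) + eps / g)
    Q_0_star = expit(logit(Q_0) - eps / (1 - g))
    return float(np.mean(Q_1_star - Q_0_star))
```

The targeting step (Step 4) is what distinguishes TMLE from a plain machine-learning plug-in: it spends one extra low-dimensional fit to remove the residual bias in the single feature of the distribution that the scientific question asks about.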


Parallel Pareto Local Search based on Decomposition (PPLS/D)
Pareto Local Search (PLS) is a basic building block in many multiobjective metaheuristics. In this paper, Parallel Pareto Local Search based on Decomposition (PPLS/D) is proposed. PPLS/D decomposes the original search space into L subregions and executes L parallel search processes in these subregions simultaneously. Inside each subregion, the PPLS/D process is first guided by a scalar objective function to approach the Pareto set quickly; it then finds non-dominated solutions within that subregion. Our experimental studies on multiobjective Unconstrained Binary Quadratic Programming problems (mUBQPs) with two to four objectives demonstrate the efficiency of PPLS/D. We investigate the behavior of PPLS/D to understand its working mechanism. Moreover, we propose a variant of PPLS/D called PPLS/D with Adaptive Expansion (PPLS/D-AE), in which each process can search other subregions after it converges in its own subregion, and we study its advantages and disadvantages. …
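A minimal sketch of the decomposition idea for a bi-objective binary problem follows. The weighted-sum scalarization, the angle-based subregion assignment rule, and the toy objectives are illustrative assumptions; the paper's scalar function, subregion definition, and neighborhood may differ.

```python
import numpy as np

def dominates(fa, fb):
    """Pareto dominance for minimization."""
    return np.all(fa <= fb) and np.any(fa < fb)

def closest_subregion(fvals, weights):
    """Assign an objective vector to the subregion whose weight vector
    it is closest to by angle -- one common decomposition rule."""
    cos = [np.dot(fvals, w) / (np.linalg.norm(fvals) * np.linalg.norm(w) + 1e-12)
           for w in weights]
    return int(np.argmax(cos))

def ppls_d_process(f, n, weights, k, rng):
    """One PPLS/D-style process for subregion k on bit strings of length n.
    Phase 1 descends on a weighted-sum scalarization to approach the
    Pareto set quickly; phase 2 archives mutually non-dominated bit-flip
    neighbors that fall inside subregion k."""
    x = rng.integers(0, 2, n)
    improved = True
    while improved:                                  # phase 1: scalarized descent
        improved = False
        for i in range(n):
            y = x.copy(); y[i] ^= 1
            if np.dot(weights[k], f(y)) < np.dot(weights[k], f(x)):
                x, improved = y, True
    archive = [x]                                    # phase 2: Pareto local search
    for i in range(n):
        y = x.copy(); y[i] ^= 1
        fy = f(y)
        if (closest_subregion(fy, weights) == k
                and not any(dominates(f(a), fy) for a in archive)):
            archive.append(y)
    return archive

# Toy bi-objective demo with L = 4 subregions. The processes run
# sequentially here for brevity; in PPLS/D each would be parallel.
L, n = 4, 12
f = lambda x: np.array([np.sum(x), np.sum((x - 1) ** 2)])  # conflicting objectives
weights = [np.array([i, L - 1 - i], float) / (L - 1) for i in range(L)]
archives = [ppls_d_process(f, n, weights, k, np.random.default_rng(k))
            for k in range(L)]
```

Because each process only archives solutions assigned to its own subregion, the L archives partition the approximation of the Pareto front, which is what makes the search parallelizable with little duplicated effort.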
