|0.632 Bootstrap||Sampling with replacement. Drawing n times from a dataset of n instances, a particular instance has probability (1 − 1/n) of not being picked in a single draw, and therefore probability (1 − 1/n)^n ≈ e^(−1) ≈ 36.8% of never being picked; these out-of-bag instances serve as test data. Consequently the training sample contains approximately 1 − (1 − 1/n)^n ≈ 63.2% of the distinct original instances.|
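The 63.2% figure is easy to verify empirically. A minimal sketch in Python (the function name `bootstrap_split` is illustrative, not from any particular library):

```python
import random

def bootstrap_split(data, seed=0):
    """Draw len(data) samples with replacement; out-of-bag points form the test set."""
    rng = random.Random(seed)
    n = len(data)
    train = [data[rng.randrange(n)] for _ in range(n)]
    train_set = set(train)
    test = [x for x in data if x not in train_set]  # out-of-bag instances
    return train, test

# For large n, the training set covers about 1 - (1 - 1/n)^n ~ 63.2% of distinct points.
data = list(range(10_000))
train, test = bootstrap_split(data)
unique_frac = len(set(train)) / len(data)
```

Running this, `unique_frac` lands close to 0.632, and the out-of-bag `test` list holds roughly the remaining 36.8% of instances.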
|1-Nearest-Neighbor-Based Multiclass Learning||This paper deals with Nearest-Neighbor (NN) learning algorithms in metric spaces. Initiated by Fix and Hodges in 1951, this seemingly simplistic learning paradigm remains competitive against more sophisticated methods and, in its celebrated k-NN version, has been placed on a solid theoretical foundation. Although the classic 1-NN is well known to be inconsistent in general, in recent years a series of papers has presented variations on the theme of a regularized 1-NN classifier as an alternative to the Bayes-consistent k-NN. Gottlieb et al. showed that approximate nearest neighbor search can act as a regularizer, actually improving generalization performance rather than just injecting noise. A follow-up work showed that applying Structural Risk Minimization to (essentially) the margin-regularized data-dependent bound yields a strongly Bayes-consistent 1-NN classifier. A further development analyzed margin-based regularization through the lens of sample compression: a near-optimal nearest neighbor condensing algorithm was presented and later extended to cover semimetric spaces; an activized version also appeared. Margin-regularized 1-NN methods enjoy a number of statistical and computational advantages over the traditional k-NN classifier. Salient among these are explicit data-dependent generalization bounds, and considerable runtime and memory savings. Sample compression affords additional advantages, in the form of tighter generalization bounds and increased efficiency in time and space.|
|1-of-n Code||A special case of constant-weight codes are the one-of-N codes, which encode log_2 N bits in a code word of N bits. The one-of-two code uses the code words 01 and 10 to encode the bits ‘0’ and ‘1’. A one-of-four code can use the words 0001, 0010, 0100, 1000 to encode the two-bit values 00, 01, 10, and 11.|
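The encoding rule above (value v maps to an N-bit word whose only '1' sits at position v, counted from the right) can be sketched in Python; the function name `one_of_n_encode` is illustrative:

```python
def one_of_n_encode(value, n):
    """Encode an integer in 0..n-1 as an n-bit word with exactly one '1' bit.

    Bit positions are counted from the right, matching the one-of-four
    table 00 -> 0001, 01 -> 0010, 10 -> 0100, 11 -> 1000.
    """
    if not 0 <= value < n:
        raise ValueError("value out of range for a one-of-%d code" % n)
    return "0" * (n - 1 - value) + "1" + "0" * value

# One-of-four code: each two-bit value 0..3 gets a distinct 4-bit word.
codewords = [one_of_n_encode(v, 4) for v in range(4)]
```

With n = 2 the same function reproduces the one-of-two code (01 and 10), since every code word has constant weight 1 regardless of N.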