Vaccination Heatmaps  The WSJ graphics team put together a series of interactive visualisations on the impact of vaccination that blew up on Twitter and Facebook, and were roundly lauded as great-looking and effective data visualisation. Some of these had enough data available to look particularly good. https://…/recreatingafamousvisualisation https://…/recreatingthevaccinationheatmapsinr 
Validation Set  A set of examples used to tune the parameters of a classifier. http://…/StatLearn12.pdf 
Value at Risk (VaR) 
In financial mathematics and financial risk management, value at risk (VaR) is a widely used measure of the risk of loss on a specific portfolio of financial assets. For a given portfolio, time horizon, and probability p, the 100p% VaR is defined as a threshold loss value, such that the probability that the loss on the portfolio over the given time horizon exceeds this value is p. This assumes mark-to-market pricing, normal markets, and no trading in the portfolio. 
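As a minimal sketch (not from the entry above), the common historical-simulation estimate of VaR is simply a quantile of the return distribution; the returns below are simulated stand-ins for real portfolio data:

```r
# Minimal sketch: one-day 95% historical VaR from simulated daily returns.
# Replace the synthetic series with real portfolio returns in practice.
set.seed(42)
returns <- rnorm(1000, mean = 0, sd = 0.02)   # hypothetical daily returns

p <- 0.05                                     # tail probability
var_95 <- -quantile(returns, probs = p)       # loss exceeded with probability p
cat(sprintf("1-day 95%% VaR: %.2f%% of portfolio value\n", 100 * var_95))
```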
Value Charts Indicator (VCI) 
The indicator displays the trend-adjusted price activity of a security. It oscillates around the zero line and is displayed as a candlestick chart. 
Value Iteration Network (VIN) 
We introduce the value iteration network: a fully differentiable neural network with a ‘planning module’ embedded within. Value iteration networks are suitable for making predictions about outcomes that involve planning-based reasoning, such as predicting a desired trajectory from an observation of a map. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate our value iteration networks on the task of predicting optimal obstacle-avoiding trajectories from an image of a landscape, both on synthetic data, and on challenging raw images of the Mars terrain. Value Iteration Networks 
Value Prediction Network (VPN) 
This paper proposes a novel deep reinforcement learning (RL) architecture, called Value Prediction Network (VPN), which integrates model-free and model-based RL methods into a single neural network. In contrast to typical model-based RL methods, VPN learns a dynamics model whose abstract states are trained to make option-conditional predictions of future values (discounted sum of rewards) rather than of future observations. Our experimental results show that VPN has several advantages over both model-free and model-based baselines in a stochastic environment where careful planning is required but building an accurate observation-prediction model is difficult. Furthermore, VPN outperforms Deep Q-Network (DQN) on several Atari games even with short-lookahead planning, demonstrating its potential as a new way of learning a good state representation. 
Value-Added Modeling  Value-added modeling (also known as value-added analysis and value-added assessment) is a method of teacher evaluation that measures the teacher’s contribution in a given year by comparing the current test scores of their students to the scores of those same students in previous school years, as well as to the scores of other students in the same grade. In this manner, value-added modeling seeks to isolate the contribution, or value added, that each teacher provides in a given year, which can be compared to the performance measures of other teachers. VAMs are considered to be fairer than simply comparing students’ achievement scores or gain scores without considering potentially confounding context variables like past performance or income. It is also possible to use this approach to estimate the value added by the school principal or the school as a whole. Critics say that the use of tests to evaluate individual teachers has not been scientifically validated, and much of the results are due to chance or conditions beyond the teacher’s control, such as outside tutoring. Research shows, however, that differences in teacher effectiveness as measured by the value-added of teachers are associated with very large economic effects on students. RealVAMS 
Value-by-Alpha Map  Value-by-Alpha is essentially a bivariate choropleth technique that “equalizes” a base map so that the visual weight of a map unit corresponds to some data value. Whereas cartograms accomplish this by varying size, VbA modifies the alpha channel (transparency, basically) of map units overlain on a neutral color background. Thus shapes and sizes are not distorted (except necessarily by the map projection, of course), but the lower-impact units with lower alpha values fade into the background and make for a map that is visually equalized by the data. 
Value-Gradient Backpropagation (GProp) 
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and the actor’s policy, respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm. 
Vapnik-Chervonenkis Dimension (VC) 
In statistical learning theory, or sometimes computational learning theory, the VC dimension (for Vapnik-Chervonenkis dimension) is a measure of the capacity of a statistical classification algorithm, defined as the cardinality of the largest set of points that the algorithm can shatter. It is a core concept in Vapnik-Chervonenkis theory, and was originally defined by Vladimir Vapnik and Alexey Chervonenkis. Informally, the capacity of a classification model is related to how complicated it can be. For example, consider the thresholding of a high-degree polynomial: if the polynomial evaluates above zero, that point is classified as positive, otherwise as negative. A high-degree polynomial can be wiggly, so it can fit a given set of training points well. But one can expect that the classifier will make errors on other points, because it is too wiggly. Such a polynomial has a high capacity. A much simpler alternative is to threshold a linear function. This function may not fit the training set well, because it has a low capacity. 
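Formally (a standard statement of the definition, added here for reference), with a hypothesis class $\mathcal{H}$ over an input space $\mathcal{X}$:

```latex
\mathrm{VC}(\mathcal{H}) = \max\bigl\{\, |S| \;:\; S \subseteq \mathcal{X},\ \mathcal{H}\ \text{shatters}\ S \,\bigr\},
```

where $\mathcal{H}$ shatters $S$ if for every labelling $y : S \to \{0,1\}$ some $h \in \mathcal{H}$ realizes it. A classic example: linear threshold functions (halfspaces) in $\mathbb{R}^d$ have VC dimension $d+1$.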
Vapnik-Chervonenkis Theory (VC Theory) 
Vapnik-Chervonenkis theory (also known as VC theory) was developed during 1960-1990 by Vladimir Vapnik and Alexey Chervonenkis. The theory is a form of computational learning theory, which attempts to explain the learning process from a statistical point of view. VC theory covers at least four parts: • Theory of consistency of learning processes • Non-asymptotic theory of the rate of convergence of learning processes • Theory of controlling the generalization ability of learning processes • Theory of constructing learning machines 
Variable Importance Plot  randomForest 
Variable Selection Deviation (VSD) 
Variable selection deviation measures and instability tests for high-dimensional model selection methods such as LASSO, SCAD and MCP, etc., to decide whether the sparse patterns identified by those methods are reliable. glmvsd 
Variance Component Analysis (VCA) 
Variance components models are a way to assess the amount of variation in a dependent variable that is associated with one or more random-effects variables. The central output is a variance components table which shows the proportion of variance attributable to a random-effects variable’s main effect and, optionally, the random variable’s interactions with other factors. Random-effects variables are categorical variables (factors) whose categories (levels) are conceived as a random sample of all categories. Examples might include grouping variables like schools in a study of students, days of the month in a marketing study, or subject id in repeated-measures studies. Variance components analysis will show whether such random school-level effects, day-of-month effects, or subject effects are important or if they may be discounted. Variance components analysis usually applies to a mixed effects model – that is, one in which there are random and fixed effects, differences in either of which might account for variance in the dependent variable. There must be at least one random-effects variable. To illustrate, a researcher might study time-to-promotion for a random sample of firemen in randomly selected fire stations, also looking at hours of training of the firemen. Stations would be a random effect. Training would be a fixed effect. Variance components analysis would reveal if the between-stations random effect accounted for an important or a trivial amount of the variance in time-to-promotion, based on a model which included random-effects variables, fixed-effects variables, covariates, and interactions among them. Note that variance components analysis has largely been superseded by linear mixed models and generalized linear mixed models analysis. The variance components procedure is often an adjunct to these procedures. Unlike them, it estimates only variance components, not model regression coefficients. Variance components analysis may be seen as a more computationally efficient procedure useful for models in special designs, such as split-plot, univariate repeated-measures, random block, and other mixed-effects designs. VCA 
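A minimal sketch of the firemen example, using lme4 (rather than the VCA package named above) and simulated data:

```r
# Simulated study: 'station' is the random effect, 'training' the fixed effect.
library(lme4)

set.seed(1)
d <- data.frame(
  station  = factor(rep(1:20, each = 10)),
  training = runif(200, 0, 100)
)
station_effect   <- rnorm(20, sd = 2)            # between-station variability
d$promotion_time <- 60 - 0.1 * d$training +
  rep(station_effect, each = 10) + rnorm(200, sd = 1)

fit <- lmer(promotion_time ~ training + (1 | station), data = d)
print(VarCorr(fit), comp = "Variance")   # variance from station vs. residual
```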
Variance Component Model  ➘ “Variance Component Analysis” 
Variance Inflation Factor (VIF) 
In statistics, the variance inflation factor (VIF) quantifies the severity of multicollinearity in an ordinary least squares regression analysis. It provides an index that measures how much the variance (the square of the estimate’s standard deviation) of an estimated regression coefficient is increased because of collinearity. https://…/collinearityandstepwisevifselection 
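Since VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing predictor j on the other predictors, it can be computed from first principles in base R (a sketch, not the URL's code):

```r
# VIF for each column of a predictor matrix X.
vif_manual <- function(X) {
  sapply(seq_len(ncol(X)), function(j) {
    r2 <- summary(lm(X[, j] ~ X[, -j]))$r.squared
    1 / (1 - r2)
  })
}

# Example on the built-in mtcars data: disp, hp and wt are correlated.
X <- as.matrix(mtcars[, c("disp", "hp", "wt")])
setNames(vif_manual(X), colnames(X))
```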
Variance Inflation Factor Change Point Detection (VIFCP) 
VIFCP 
Variance Reduction (VR) 
In mathematics, more specifically in the theory of Monte Carlo methods, variance reduction is a procedure used to increase the precision of the estimates that can be obtained for a given number of iterations. Every output random variable from the simulation is associated with a variance which limits the precision of the simulation results. In order to make a simulation statistically efficient, i.e., to obtain a greater precision and smaller confidence intervals for the output random variable of interest, variance reduction techniques can be used. The main ones are: Common random numbers, antithetic variates, control variates, importance sampling and stratified sampling. Under these headings are a variety of specialized techniques; for example, particle transport simulations make extensive use of ‘weight windows’ and ‘splitting/Russian roulette’ techniques, which are a form of importance sampling. 
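A minimal sketch of one of the techniques named above, antithetic variates, estimating E[exp(Z)] for Z ~ N(0,1) (true value exp(0.5) ≈ 1.6487); pairing Z with -Z induces negative correlation that shrinks the estimator's variance:

```r
set.seed(7)
n <- 50000

z     <- rnorm(n)
plain <- exp(z)                     # crude Monte Carlo estimate terms
anti  <- (exp(z) + exp(-z)) / 2     # antithetic pair averages

# Estimator variances; note the antithetic version spends two function
# evaluations per pair, yet still wins here because exp is monotone.
c(crude = var(plain) / n, antithetic = var(anti) / n)
```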
Variation of Information Distance (VI) 
In probability theory and information theory, the variation of information or shared information distance is a measure of the distance between two clusterings (partitions of elements). It is closely related to mutual information; indeed, it is a simple linear expression involving the mutual information. Unlike the mutual information, however, the variation of information is a true metric, in that it obeys the triangle inequality. Even more, it is a universal metric, in that if any other distance measure places two items close by, then the variation of information will also judge them close. 
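The "simple linear expression" is VI(X, Y) = H(X) + H(Y) - 2 I(X; Y), which a short base R sketch can compute from the joint label table of two clusterings:

```r
variation_of_information <- function(x, y) {
  p_xy <- table(x, y) / length(x)   # joint distribution of the two labelings
  p_x  <- rowSums(p_xy)
  p_y  <- colSums(p_xy)
  h    <- function(p) -sum(p[p > 0] * log(p[p > 0]))          # entropy
  mi   <- sum(p_xy[p_xy > 0] *
              log(p_xy[p_xy > 0] / outer(p_x, p_y)[p_xy > 0])) # mutual info
  h(p_x) + h(p_y) - 2 * mi
}

# Two clusterings of the same 8 items; identical partitions give VI = 0.
a <- c(1, 1, 1, 2, 2, 2, 3, 3)
b <- c(1, 1, 2, 2, 2, 3, 3, 3)
variation_of_information(a, b)
```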
Variational Autoencoder (VAE) 
How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approximation to the intractable posterior. Unfortunately, the common mean-field approach requires analytical solutions of expectations w.r.t. the approximate posterior, which are also intractable in the general case. We show how a reparameterization of the variational lower bound yields a simple differentiable unbiased estimator of the lower bound; this SGVB (Stochastic Gradient Variational Bayes) estimator can be used for efficient approximate posterior inference in almost any model with continuous latent variables and/or parameters, and is straightforward to optimize using standard stochastic gradient ascent techniques. For the case of an i.i.d. dataset and continuous latent variables per datapoint, we propose the Auto-Encoding VB (AEVB) algorithm. In the AEVB algorithm we make inference and learning especially efficient by using the SGVB estimator to optimize a recognition model that allows us to perform very efficient approximate posterior inference using simple ancestral sampling, which in turn allows us to efficiently learn the model parameters, without the need of expensive iterative inference schemes (such as MCMC) per datapoint. The learned approximate posterior inference model can also be used for a host of tasks such as recognition, denoising, representation and visualization purposes. When a neural network is used for the recognition model, we arrive at the variational autoencoder. 
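In the notation standard for this model (a reference restatement, not quoted from the abstract), the variational lower bound (ELBO) that SGVB estimates, together with the reparameterization that makes it differentiable, is:

```latex
\mathcal{L}(\theta, \phi; x) =
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
  - D_{\mathrm{KL}}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right),
\qquad
z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon,\quad \epsilon \sim \mathcal{N}(0, I).
```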
Variational Deep Embedding (VaDE) 
Clustering is among the most fundamental tasks in computer vision and machine learning. In this paper, we propose Variational Deep Embedding (VaDE), a novel unsupervised generative clustering approach within the framework of the Variational Auto-Encoder (VAE). Specifically, VaDE models the data generative procedure with a Gaussian Mixture Model (GMM) and a deep neural network (DNN): 1) the GMM picks a cluster; 2) from which a latent embedding is generated; 3) then the DNN decodes the latent embedding into observables. Inference in VaDE is done in a variational way: a different DNN is used to encode observables to latent embeddings, so that the evidence lower bound (ELBO) can be optimized using the Stochastic Gradient Variational Bayes (SGVB) estimator and the reparameterization trick. Quantitative comparisons with strong baselines are included in this paper, and experimental results show that VaDE significantly outperforms the state-of-the-art clustering methods on 4 benchmarks from various modalities. Moreover, by VaDE’s generative nature, we show its capability of generating highly realistic samples for any specified cluster, without using supervised information during training. Lastly, VaDE is a flexible and extensible framework for unsupervised generative clustering; more general mixture models than GMM can easily be plugged in. 
Variational Gaussian Process (VGP) 
Representations offered by deep generative models are fundamentally tied to their inference method from data. Variational inference methods require a rich family of approximating distributions. We construct the variational Gaussian process (VGP), a Bayesian nonparametric model which adapts its shape to match complex posterior distributions. The VGP generates approximate posterior samples by generating latent inputs and warping them through random nonlinear mappings; the distribution over random mappings is learned during inference, enabling the transformed outputs to adapt to varying complexity. We prove a universal approximation theorem for the VGP, demonstrating its representative power for learning any model. For inference we present a variational objective inspired by autoencoders and perform black box inference over a wide class of models. The VGP achieves new state-of-the-art results for unsupervised learning, inferring models such as the deep latent Gaussian model and the recently proposed DRAW. 
Variational Message Passing (VMP) 
Variational message passing (VMP) is an approximate inference technique for continuous- or discrete-valued Bayesian networks, with conjugate-exponential parents, developed by John Winn. VMP was developed as a means of generalizing the approximate variational methods used by such techniques as latent Dirichlet allocation, and works by updating an approximate distribution at each node through messages in the node’s Markov blanket. 
Varimax Rotation  In statistics, a varimax rotation is used to simplify the expression of a particular subspace in terms of just a few major items each. The actual coordinate system is unchanged; it is the orthogonal basis that is being rotated to align with those coordinates. The subspace found with principal component analysis or factor analysis is expressed as a dense basis with many nonzero weights, which makes it hard to interpret. Varimax is so called because it maximizes the sum of the variances of the squared loadings (squared correlations between variables and factors). Preserving orthogonality requires that it is a rotation that leaves the subspace invariant. Intuitively, this is achieved if (a) any given variable has a high loading on a single factor but near-zero loadings on the remaining factors and (b) any given factor is constituted by only a few variables with very high loadings on this factor while the remaining variables have near-zero loadings on this factor. If these conditions hold, the factor loading matrix is said to have “simple structure,” and varimax rotation brings the loading matrix closer to such simple structure (as much as the data allow). From the perspective of individuals measured on the variables, varimax seeks a basis that most economically represents each individual – that is, each individual can be well described by a linear combination of only a few basis functions. 
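Base R ships a varimax implementation (stats::varimax); a minimal sketch rotating principal-component loadings toward simple structure:

```r
# Rotate the first three PCA loadings of the built-in mtcars data.
pca      <- prcomp(scale(mtcars), rank. = 3)   # first three components
raw_load <- pca$rotation                       # unrotated loadings
rotated  <- varimax(raw_load)                  # orthogonal varimax rotation

round(rotated$loadings[, 1:3], 2)   # many near-zero entries, a few large ones
```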
Vector Autoregression (VAR) 
➘ “Vector Autoregressive Model” 
Vector Autoregressive Model (VAR) 
Vector autoregression (VAR) is an econometric model used to capture the linear interdependencies among multiple time series. VAR models generalize the univariate autoregression (AR) models by allowing for more than one evolving variable. All variables in a VAR are treated symmetrically in a structural sense (although the estimated quantitative response coefficients will not in general be the same); each variable has an equation explaining its evolution based on its own lags and the lags of the other model variables. VAR modeling does not require as much knowledge about the forces influencing a variable as do structural models with simultaneous equations: the only prior knowledge required is a list of variables which can be hypothesized to affect each other intertemporally. bvarsv, graphicalVAR, mlVAR 
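Because each equation depends only on lagged values, a VAR can be estimated equation-by-equation with OLS; a minimal base R sketch on a simulated bivariate VAR(1) (the packages named above wrap this machinery):

```r
set.seed(3)
n <- 200
y <- matrix(0, n, 2, dimnames = list(NULL, c("y1", "y2")))
for (t in 2:n) {                                  # simulate a stable VAR(1)
  y[t, 1] <- 0.5 * y[t - 1, 1] + 0.2 * y[t - 1, 2] + rnorm(1, sd = 0.5)
  y[t, 2] <- 0.1 * y[t - 1, 1] + 0.4 * y[t - 1, 2] + rnorm(1, sd = 0.5)
}

y_t   <- y[-1, ]                                  # left-hand side: t = 2..n
y_lag <- y[-n, ]                                  # regressors: own and cross lags
fit1  <- lm(y_t[, "y1"] ~ y_lag)                  # equation for y1
fit2  <- lm(y_t[, "y2"] ~ y_lag)                  # equation for y2
rbind(y1 = coef(fit1), y2 = coef(fit2))           # recovers simulated coefficients
```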
Vector Generalized Additive Model (VGAM) 
Vector smoothing is used to extend the class of generalized additive models in a very natural way to include a class of multivariate regression models. The resulting models are called ‘vector generalized additive models’. The class of models for which the methodology gives generalized additive extensions includes the multiple logistic regression model for nominal responses, the continuation ratio model and the proportional and non-proportional odds models for ordinal responses, and the bivariate probit and bivariate logistic models for correlated binary responses. They may also be applied to generalized estimating equations. VGAM 
Vector Generalized Linear Model (VGLM) 
VGAM 
Vector Quantization (VQ) 
Vector quantization (VQ) is a classical quantization technique from signal processing which allows the modeling of probability density functions by the distribution of prototype vectors. It was originally used for data compression. It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them. Each group is represented by its centroid point, as in k-means and some other clustering algorithms. The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error, and rare data high error. This is why VQ is suitable for lossy data compression. It can also be used for lossy data correction and density estimation. Vector quantization is based on the competitive learning paradigm, so it is closely related to the self-organizing map model. 
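A minimal base R sketch of the k-means view of VQ, where each point is replaced by (the index of) its nearest codebook centroid:

```r
set.seed(9)
x  <- matrix(rnorm(2000), ncol = 2)          # 1000 2-D points to quantize
vq <- kmeans(x, centers = 16, nstart = 5)    # 16-entry codebook

codes     <- vq$cluster                      # index transmitted per point
quantized <- vq$centers[codes, ]             # reconstruction from the codebook
mean(rowSums((x - quantized)^2))             # average distortion (squared error)
```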
Vector Space Model (VSM) 
Vector space model or term vector model is an algebraic model for representing text documents (and any objects, in general) as vectors of identifiers, such as, for example, index terms. It is used in information filtering, information retrieval, indexing and relevancy rankings. Its first use was in the SMART Information Retrieval System. 
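A minimal sketch of the idea: represent documents as term-frequency vectors and rank relevance by cosine similarity (the toy documents and tokenization are illustrative only):

```r
docs  <- c("data science is fun", "science of data", "cats sleep all day")
terms <- unique(unlist(strsplit(docs, " ")))

# Rows = documents, columns = index terms (raw term frequencies).
tf <- t(sapply(strsplit(docs, " "),
               function(w) table(factor(w, levels = terms))))

cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))
cosine(tf[1, ], tf[2, ])   # shares "data", "science" -> high similarity
cosine(tf[1, ], tf[3, ])   # no shared terms -> 0
```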
Vega  Vega is a visualization grammar, a declarative format for creating, saving, and sharing interactive visualization designs. With Vega, you can describe the visual appearance and interactive behavior of a visualization in a JSON format, and generate views using HTML5 Canvas or SVG. 
vega.js  Vega is a visualization grammar, a declarative format for creating, saving and sharing visualization designs. With Vega you can describe data visualizations in a JSON format, and generate interactive views using either HTML5 Canvas or SVG. 
Velox  To support complex data-intensive applications such as personalized recommendations, targeted advertising, and intelligent services, the data management community has focused heavily on the design of systems to support training complex models on large datasets. Unfortunately, the design of these systems largely ignores a critical component of the overall analytics process: the deployment and serving of models at scale. We present Velox, a new component of the Berkeley Data Analytics Stack. Velox is a data management system for facilitating the next steps in real-world, large-scale analytics pipelines: online model management, maintenance, and prediction serving. Velox provides end-user applications and services with a low-latency, intuitive interface to models, transforming the raw statistical models currently trained using existing offline large-scale compute frameworks into full-blown, end-to-end data products. To provide up-to-date results for these complex models, Velox also facilitates lightweight online model maintenance and selection (i.e., dynamic weighting). Velox has the ability to span online and offline systems, to adaptively adjust model materialization strategies, and to exploit inherent statistical properties such as model error tolerance, all while operating at “Big Data” scale. http://…/veloxampcamp5final 
Vertex Similarity Method  We consider methods for quantifying the similarity of vertices in networks. We propose a measure of similarity based on the concept that two vertices are similar if their immediate neighbors in the network are themselves similar. This leads to a self-consistent matrix formulation of similarity that can be evaluated iteratively using only a knowledge of the adjacency matrix of the network. We test our similarity measure on computer-generated networks for which the expected results are known, and on a number of real-world networks. 
Vertical Hoeffding Tree (VHT) 
The Vertical Hoeffding Tree (VHT) is a distributed extension of the VFDT (Domingos and Hulten, 2000). The VHT uses vertical parallelism to split the workload across several machines. Vertical parallelism leverages the parallelism across attributes in the same example, rather than across different examples in the stream. In practice, each training example is routed through the tree model to a leaf. There, the example is split into its constituting attributes, and each attribute is sent to a different Processor instance that keeps track of sufficient statistics. This architecture has two main advantages over one based on horizontal parallelism. First, attribute counters are not replicated across several machines, thus reducing the memory footprint. Second, the computation of the fitness of an attribute for a split decision (via, e.g., entropy or information gain) can be performed in parallel. The drawback is that in order to get good performance, there must be sufficient inherent parallelism in the data. That is, the VHT works best for sparse data (e.g., bag-of-words models). The underlying Hoeffding Tree, or VFDT, is the standard decision tree algorithm for data stream classification; see the ➚ “Very Fast Decision Tree” entry below. For a more comprehensive summary of VFDT, read chapter 3 of ‘Data Stream Mining: A Practical Approach’. http://…/StreamMining.pdf http://…/p71domingos.pdf http://…/arintoemdcthesis.pdf https://…ableadvancedmassiveonlineanalysis.pdf 
Very Fast Decision Tree (VFDT) 
Hoeffding Tree, or VFDT, is the standard decision tree algorithm for data stream classification. VFDT uses the Hoeffding bound to decide the minimum number of arriving instances needed to achieve a certain level of confidence in splitting a node. This confidence level determines how close the statistics of the attribute chosen by VFDT are to those of the attribute that would be chosen by a decision tree for batch learning. ➚ “Hoeffding Tree” https://…/VerticalHoeffdingTreeClassifier.html 
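The bound in question (from Domingos and Hulten's paper) is ε = sqrt(R² ln(1/δ) / (2n)); a minimal sketch of the split decision, with illustrative numbers only:

```r
# R: range of the split criterion (log2(num_classes) for information gain);
# delta: allowed error probability; n: examples seen at the leaf so far.
hoeffding_bound <- function(R, delta, n) sqrt(R^2 * log(1 / delta) / (2 * n))

# Split when the observed gain gap between the best and second-best
# attribute exceeds the bound, e.g. binary classes after 300 examples:
eps      <- hoeffding_bound(R = log2(2), delta = 1e-7, n = 300)
gain_gap <- 0.25                   # hypothetical observed difference
gain_gap > eps                     # TRUE -> split with confidence 1 - delta
```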
Very Good Importance Sampling (VGIS) 
loo 
Video Ladder Network (VLN) 
We present the Video Ladder Network (VLN) for video prediction. VLN is a neural encoder-decoder model augmented by both recurrent and feedforward lateral connections at all layers. The model achieves competitive results on the Moving MNIST dataset while having a very simple structure and providing fast inference. 
Viral Search  After a brief introduction to genetic algorithms and how they work, the article presents a kind of genetic algorithm called Viral Search. We present the key concepts, formally derive the algorithm, and perform numerical tests designed to illustrate its potential and limits. 
vis.js  A dynamic, browser-based visualization library. The library is designed to be easy to use, to handle large amounts of dynamic data, and to enable manipulation of and interaction with the data. The library consists of the components DataSet, Timeline, Network, Graph2d and Graph3d. 
Visual Analog Scale (VAS) 
The visual analogue scale or visual analog scale (VAS) is a psychometric response scale which can be used in questionnaires. It is a measurement instrument for subjective characteristics or attitudes that cannot be directly measured. When responding to a VAS item, respondents specify their level of agreement to a statement by indicating a position along a continuous line between two endpoints. http://…/jcn_10_706.pdf ordinalCont 
Visual Analytics  Visual analytics is “the science of analytical reasoning facilitated by visual interactive interfaces.” It can attack certain problems whose size, complexity, and need for closely coupled human and machine analysis may make them otherwise intractable. Visual analytics advances science and technology developments in analytical reasoning, interaction, data transformations and representations for computation and visualization, analytic reporting, and technology transition. As a research agenda, visual analytics brings together several scientific and technical communities from computer science, information visualization, cognitive and perceptual sciences, interactive design, graphic design, and social sciences. 
Visual Communication (VC) 
Visual communication is communication through visual aid and is described as the conveyance of ideas and information in forms that can be read or looked upon. Visual communication in part or whole relies on vision, and is primarily presented or expressed with two-dimensional images; it includes: signs, typography, drawing, graphic design, illustration, industrial design, advertising, animation, colour, and electronic resources. It also explores the idea that a visual message accompanying text has a greater power to inform, educate, or persuade a person or audience. 
Visual Intelligence (VI) 

Visual Interaction Network (VIN) 
From just a glance, humans can make rich predictions about the future state of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains and require direct measurements of the underlying states. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions and dynamics, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. Our results demonstrate that the perceptual module and the object-based dynamics predictor module can induce factored latent representations that support accurate dynamical predictions. This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments. 
Visual Predictive Checks (VPC) 
The visual predictive check (VPC) is a model diagnostic that can be used to: (i) allow comparison between alternative models, (ii) suggest model improvements, and (iii) support appropriateness of a model. The VPC is constructed from stochastic simulations from the model; therefore all model components contribute, and it can help in diagnosing both structural and stochastic contributions. As the VPC is being increasingly used as a key diagnostic to illustrate model appropriateness, it is important that its methodology, strengths and weaknesses be discussed by the pharmacometric community. In a typical VPC, the model is used to repeatedly (usually n≥1000) simulate observations according to the original design of the study. Based on these simulations, percentiles of the simulated data are plotted versus an independent variable, usually time since start of treatment. It is then desirable that the same percentiles are calculated and plotted for the observed data to aid comparison of predictions with observations. With suitable data a plot including the observations may be helpful by indicating the data density at different times and thus giving some indirect feel for the uncertainty in the percentiles. Apparently poor model performance where there is very sparse data may not as strongly indicate model inadequacy as poor performance with dense data. A drawback of adding all observations to the VPC, in particular for large studies, is that it may cloud the picture without making data density obvious. A possible intermediate route is to plot a random subsample of all observations. asVPC 
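A minimal sketch of the VPC mechanics on entirely synthetic data (the exponential-decay "model" is a stand-in, not a real pharmacometric model): simulate the model many times under the original design, then check whether observed per-time percentiles fall inside the simulated band:

```r
set.seed(11)
times <- rep(1:10, each = 20)                       # original sampling design
model <- function() 5 * exp(-0.3 * times) * exp(rnorm(length(times), sd = 0.2))
obs   <- model()                                    # stands in for real data

sims    <- replicate(1000, model())                 # n = 1000 simulated datasets
sim_med <- apply(sims, 2, function(s) tapply(s, times, median))
band    <- apply(sim_med, 1, quantile, probs = c(0.05, 0.95))  # 90% band
obs_med <- tapply(obs, times, median)
obs_med >= band[1, ] & obs_med <= band[2, ]         # mostly TRUE if adequate
```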
Visual Question Answering (VQA) 
Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. An Analysis of Visual Question Answering Algorithms 
Visualization of Analysis of Variance (VISOVA) 
VISOVA (VISualization Of VAriance) is a novel method for exploratory data analysis. It is basically an extension of trellis graphics, developing their grid concept with parallel coordinates and permitting visualization of many dimensions at once. This package includes functions allowing users to perform VISOVA analysis and compare different column/variable ordering methods for making high-dimensional structures easier to perceive even when the data is complicated. visova 
VizPacker  VizPacker is a handy tool that helps visualization developers easily design, build and preview a chart based on the CVOM SDK, and auto-create a package for a CVOM chart extension, which includes a set of fundamental code and files based on the visualization module and data schema. VizPacker is meant to help users quickly get hands-on with the implementation workflow and avoid unnecessary issues and struggling. 
Vocabulary-Informed Extreme Value Learning (ViEVL) 
Novel unseen classes can be formulated as the extreme values of known classes. This inspired recent work on open-set recognition \cite{Scheirer_2013_TPAMI,Scheirer_2014_TPAMIb,EVM}, which, however, has no way of naming the novel unseen classes. To solve this problem, we propose the Extreme Value Learning (EVL) formulation to learn the mapping from visual feature to semantic space. To model the margin and coverage distributions of each class, Vocabulary-informed Learning (ViL) is adopted, using the vast open vocabulary in the semantic space. Essentially, by incorporating EVL and ViL, we for the first time propose a novel semantic embedding paradigm – Vocabulary-informed Extreme Value Learning (ViEVL), which embeds the visual features into semantic space in a probabilistic way. The learned embedding can be directly used to solve supervised learning, zero-shot and open-set recognition simultaneously. Experiments on two benchmark datasets demonstrate the effectiveness of the proposed frameworks. 
Volatility  Volatility is the annualized standard deviation of returns – it is often expressed in percent. A volatility of 20 means that there is about a one-third probability that an asset’s price a year from now will have fallen or risen by more than 20% from its present value. 
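A minimal sketch of the annualization arithmetic, assuming 252 trading days per year (the return series is simulated, not market data):

```r
set.seed(5)
daily_returns <- rnorm(252, mean = 0.0003, sd = 0.0126)   # ~20% annual vol

# Annualize by scaling the daily standard deviation by sqrt(252).
annualized_vol <- sd(daily_returns) * sqrt(252)
cat(sprintf("Annualized volatility: %.1f%%\n", 100 * annualized_vol))
```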
Voronoi Diagram  In mathematics, a Voronoi diagram is a way of dividing space into a number of regions. A set of points (called seeds, sites, or generators) is specified beforehand and for each seed there will be a corresponding region consisting of all points closer to that seed than to any other. The regions are called Voronoi cells. 
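A minimal base R sketch of the definition, labelling every point of a grid with its nearest seed (dedicated packages such as deldir compute the cell polygons directly):

```r
set.seed(2)
seeds <- cbind(runif(5), runif(5))                       # 5 random generators
grid  <- expand.grid(x = seq(0, 1, 0.01), y = seq(0, 1, 0.01))

# Index of the closest seed for each grid point (squared Euclidean distance).
nearest <- apply(as.matrix(grid), 1, function(p)
  which.min(colSums((t(seeds) - p)^2)))

image(seq(0, 1, 0.01), seq(0, 1, 0.01),
      matrix(nearest, nrow = 101), col = hcl.colors(5),
      xlab = "x", ylab = "y", main = "Voronoi cells")
points(seeds, pch = 19)                                  # mark the seeds
```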
Voronoi Diagram-Based Evolutionary Algorithm (VorEAl) 
This paper presents the Voronoi diagram-based evolutionary algorithm (VorEAl). VorEAl partitions input space into abnormal/normal subsets using Voronoi diagrams. The diagrams are evolved using a multi-objective bio-inspired approach in order to conjointly optimize classification metrics while also being able to represent areas of the data space that are not present in the training dataset. As part of the paper, VorEAl is experimentally validated and contrasted with similar approaches. 
Vowpal Wabbit  Vowpal Wabbit (aka VW) is an open source, fast, out-of-core learning system library and program developed originally at Yahoo! Research, and currently at Microsoft Research. It was started and is led by John Langford. Vowpal Wabbit is notable as an efficient, scalable implementation of online machine learning, with support for a number of machine learning reductions, importance weighting, and a selection of different loss functions and optimization algorithms. 
VoxML  We present the specification for a modeling language, VoxML, which encodes semantic knowledge of real-world objects represented as three-dimensional models, and of events and attributes related to and enacted over these objects. VoxML is intended to overcome the limitations of existing 3D visual markup languages by allowing for the encoding of a broad range of semantic knowledge that can be exploited by a variety of systems and platforms, leading to multimodal simulations of real-world scenarios using conceptual objects that represent their semantic values. 