Vaccination Heatmaps  The WSJ graphics team put together a series of interactive visualisations on the impact of vaccination that blew up on Twitter and Facebook, and were roundly lauded as great-looking and effective dataviz. Some of these had enough data available to look particularly good. https://…/recreatingafamousvisualisation https://…/recreatingthevaccinationheatmapsinr 
Validation Set  A set of examples used to tune the parameters of a classifier. http://…/StatLearn12.pdf 
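As an illustrative sketch (toy quadratic data, with NumPy's polynomial fitting standing in for a real classifier), a validation set is held out from training and used to select a hyperparameter, here the polynomial degree:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, 200)

# Hold out 25% of the examples as a validation set.
idx = rng.permutation(len(x))
val_idx, train_idx = idx[:50], idx[50:]

def val_mse(degree):
    # Fit on the training split only; score on the validation split.
    coeffs = np.polyfit(x[train_idx], y[train_idx], degree)
    pred = np.polyval(coeffs, x[val_idx])
    return np.mean((pred - y[val_idx]) ** 2)

# The degree with the lowest validation error is selected.
best_degree = min(range(1, 8), key=val_mse)
```

The validation set is distinct from the test set: once best_degree is chosen, final performance should be reported on data used for neither fitting nor tuning.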
Value Aggregation  Value aggregation is a general framework for solving imitation learning problems. Based on the idea of data aggregation, it generates a policy sequence by iteratively interleaving policy optimization and evaluation in an online learning setting. 
Value at Risk (VaR) 
In financial mathematics and financial risk management, value at risk (VaR) is a widely used measure of the risk of loss on a specific portfolio of financial assets. For a given portfolio, time horizon, and probability p, the 100p% VaR is defined as a threshold loss value, such that the probability that the loss on the portfolio over the given time horizon exceeds this value is p. This assumes mark-to-market pricing, normal markets, and no trading in the portfolio. 
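As a hedged sketch (simulated daily returns, not a real portfolio), the historical-simulation estimate of VaR at exceedance probability p is simply the empirical (1 - p) quantile of the loss distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated daily portfolio returns, as fractions of portfolio value.
returns = rng.normal(0.0005, 0.01, 10_000)

def historical_var(returns, p=0.05):
    """Loss threshold exceeded with probability p (the 100p% VaR above)."""
    losses = -returns
    return np.quantile(losses, 1 - p)

var_5pct = historical_var(returns, p=0.05)
```

For these simulated returns the 5% VaR comes out near 1.6% of portfolio value, i.e. on roughly one day in twenty the loss exceeds that threshold.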
Value Charts Indicator (VCI) 
The indicator displays the trend-adjusted price activity of a security. It oscillates around the zero line and is displayed as a candlestick chart. 
Value Iteration Network (VIN) 
We introduce the value iteration network: a fully differentiable neural network with a ‘planning module’ embedded within. Value iteration networks are suitable for making predictions about outcomes that involve planning-based reasoning, such as predicting a desired trajectory from an observation of a map. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation. We evaluate our value iteration networks on the task of predicting optimal obstacle-avoiding trajectories from an image of a landscape, both on synthetic data, and on challenging raw images of the Mars terrain. Value Iteration Networks 
Value Prediction Network (VPN) 
This paper proposes a novel deep reinforcement learning (RL) architecture, called Value Prediction Network (VPN), which integrates model-free and model-based RL methods into a single neural network. In contrast to typical model-based RL methods, VPN learns a dynamics model whose abstract states are trained to make option-conditional predictions of future values (discounted sum of rewards) rather than of future observations. Our experimental results show that VPN has several advantages over both model-free and model-based baselines in a stochastic environment where careful planning is required but building an accurate observation-prediction model is difficult. Furthermore, VPN outperforms Deep Q-Network (DQN) on several Atari games even with short-lookahead planning, demonstrating its potential as a new way of learning a good state representation. 
Value Propagation Network (VProp) 
We present Value Propagation (VProp), a parameter-efficient differentiable planning module built on Value Iteration which can successfully be trained using reinforcement learning to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments. Furthermore, we show that the module enables learning to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems. We evaluate on static and dynamic configurations of MazeBase gridworlds, with randomly generated environments of several different sizes, and on a StarCraft navigation scenario, with more complex dynamics, and pixels as input. 
Value-Added Modeling  Value-added modeling (also known as value-added analysis and value-added assessment) is a method of teacher evaluation that measures the teacher’s contribution in a given year by comparing the current test scores of their students to the scores of those same students in previous school years, as well as to the scores of other students in the same grade. In this manner, value-added modeling seeks to isolate the contribution, or value added, that each teacher provides in a given year, which can be compared to the performance measures of other teachers. VAMs are considered to be fairer than simply comparing students’ achievement scores or gain scores without considering potentially confounding context variables like past performance or income. It is also possible to use this approach to estimate the value added by the school principal or the school as a whole. Critics say that the use of tests to evaluate individual teachers has not been scientifically validated, and much of the results are due to chance or conditions beyond the teacher’s control, such as outside tutoring. Research shows, however, that differences in teacher effectiveness as measured by the value-added of teachers are associated with very large economic effects on students. RealVAMS 
Value-by-Alpha Map  Value-by-Alpha is essentially a bivariate choropleth technique that “equalizes” a base map so that the visual weight of a map unit corresponds to some data value. Whereas cartograms accomplish this by varying size, VbA modifies the alpha channel (transparency, basically) of map units overlain on a neutral color background. Thus shapes and sizes are not distorted (except necessarily by the map projection, of course), but the lower-impact units with lower alpha values fade into the background and make for a map that is visually equalized by the data. 
ValueGradient Backpropagation (GProp) 
This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value function. Secondly, we present the deviator-actor-critic (DAC) model, which comprises three neural networks that estimate the value function, its gradient, and determine the actor’s policy respectively. We evaluate GProp on two challenging tasks: a contextual bandit problem constructed from nonparametric regression datasets that is designed to probe the ability of reinforcement learning algorithms to accurately estimate gradients; and the octopus arm, a challenging reinforcement learning benchmark. GProp is competitive with fully supervised methods on the bandit task and achieves the best performance to date on the octopus arm. 
Vanlearning  Although we have tons of machine learning tools to analyze data, most of them require users to have some programming background. Here we introduce a SaaS application which allows users to analyze their data without any coding and even without any knowledge of machine learning. Users can upload, train, predict and download their data with just a few mouse clicks. Our system uses a data preprocessor and validator to reduce the computational load on our server. The simple architecture of Vanlearning helps developers easily maintain and extend it. 
Vapnik-Chervonenkis Dimension (VC Dimension) 
In statistical learning theory, or sometimes computational learning theory, the VC dimension (for Vapnik-Chervonenkis dimension) is a measure of the capacity of a statistical classification algorithm, defined as the cardinality of the largest set of points that the algorithm can shatter. It is a core concept in Vapnik-Chervonenkis theory, and was originally defined by Vladimir Vapnik and Alexey Chervonenkis. Informally, the capacity of a classification model is related to how complicated it can be. For example, consider the thresholding of a high-degree polynomial: if the polynomial evaluates above zero, that point is classified as positive, otherwise as negative. A high-degree polynomial can be wiggly, so it can fit a given set of training points well. But one can expect that the classifier will make errors on other points, because it is too wiggly. Such a polynomial has a high capacity. A much simpler alternative is to threshold a linear function. This function may not fit the training set well, because it has a low capacity. 
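Shattering can be checked mechanically. In the illustrative sketch below (not part of the original entry), the hypothesis class of closed intervals [a, b] on the real line shatters any two points but no three (the labeling +, -, + is unachievable), so its VC dimension is 2:

```python
from itertools import product

def in_interval(a, b, x):
    return a <= x <= b

def shatters(points):
    """True if some interval [a, b] realizes every +/- labeling of `points`."""
    for labels in product([False, True], repeat=len(points)):
        # It suffices to try endpoints drawn from the points themselves,
        # plus a sentinel that yields the all-negative (empty) labeling.
        candidates = list(points) + [min(points) - 1]
        if not any(
            all(in_interval(a, b, x) == lab for x, lab in zip(points, labels))
            for a in candidates
            for b in candidates
        ):
            return False
    return True
```

Here shatters([0.0, 1.0]) is True while shatters([0.0, 1.0, 2.0]) is False, matching a VC dimension of 2 for intervals.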
Vapnik-Chervonenkis Theory (VC Theory) 
Vapnik-Chervonenkis theory (also known as VC theory) was developed during 1960-1990 by Vladimir Vapnik and Alexey Chervonenkis. The theory is a form of computational learning theory, which attempts to explain the learning process from a statistical point of view. VC theory covers at least four parts: · Theory of consistency of learning processes · Non-asymptotic theory of the rate of convergence of learning processes · Theory of controlling the generalization ability of learning processes · Theory of constructing learning machines 
varbvs  We introduce varbvs, a suite of functions written in R and MATLAB for regression analysis of large-scale data sets using Bayesian variable selection methods. We have developed numerical optimization algorithms based on variational approximation methods that make it feasible to apply Bayesian variable selection to very large data sets. With a focus on examples from genome-wide association studies, we demonstrate that varbvs scales well to data sets with hundreds of thousands of variables and thousands of samples, and has features that facilitate rapid data analyses. Moreover, varbvs allows for extensive model customization, which can be used to incorporate external information into the analysis. We expect that the combination of an easy-to-use interface and robust, scalable algorithms for posterior computation will encourage broader use of Bayesian variable selection in areas of applied statistics and computational biology. The most recent R and MATLAB source code is available for download at Github (https://…/varbvs ), and the R package can be installed from CRAN (https://…/package=varbvs ). 
Variable Importance Plot  randomForest 
Variable Selection Deviation (VSD) 
Variable selection deviation measures and instability tests for high-dimensional model selection methods such as LASSO, SCAD and MCP, etc., to decide whether the sparse patterns identified by those methods are reliable. glmvsd 
Variance Component Analysis (VCA) 
Variance components models are a way to assess the amount of variation in a dependent variable that is associated with one or more random-effects variables. The central output is a variance components table which shows the proportion of variance attributable to a random-effects variable’s main effect and, optionally, the random variable’s interactions with other factors. Random-effects variables are categorical variables (factors) whose categories (levels) are conceived as a random sample of all categories. Examples might include grouping variables like schools in a study of students, days of the month in a marketing study, or subject id in repeated measures studies. Variance components analysis will show whether such random school-level effects, day-of-month effects, or subject effects are important or if they may be discounted. Variance components analysis usually applies to a mixed effects model – that is, one in which there are random and fixed effects, differences in either of which might account for variance in the dependent variable. There must be at least one random-effects variable. To illustrate, a researcher might study time-to-promotion for a random sample of firemen in randomly selected fire stations, also looking at hours of training of the firemen. Stations would be a random effect. Training would be a fixed effect. Variance components analysis would reveal if the between-stations random effect accounted for an important or a trivial amount of the variance in time-to-promotion, based on a model which included random-effects variables, fixed-effects variables, covariates, and interactions among them. It should be noted that variance components analysis has largely been superseded by linear mixed models and generalized linear mixed models analysis. The variance components procedure is often an adjunct to these procedures. Unlike them, the variance components procedure estimates only variance components, not model regression coefficients. 
Variance components analysis may be seen as a more computationally efficient procedure useful for models in special designs, such as split plot, univariate repeated measures, random block, and other mixed effects designs. VCA 
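For the balanced one-way case, the variance components can be estimated with the classical ANOVA method of moments. The sketch below simulates the fire-station example from the entry (assumed station-effect variance 4, residual variance 1):

```python
import numpy as np

rng = np.random.default_rng(1)
stations, firemen_per_station = 100, 20

# Random station effects (variance 4) plus residual noise (variance 1).
station_effect = rng.normal(0.0, 2.0, stations)
y = station_effect[:, None] + rng.normal(0.0, 1.0, (stations, firemen_per_station))

# Balanced one-way ANOVA mean squares.
group_means = y.mean(axis=1)
msw = ((y - group_means[:, None]) ** 2).sum() / (stations * (firemen_per_station - 1))
msb = firemen_per_station * ((group_means - y.mean()) ** 2).sum() / (stations - 1)

sigma2_residual = msw                               # estimates 1
sigma2_station = (msb - msw) / firemen_per_station  # estimates 4
```

The method of moments inverts E[MSW] = σ²_e and E[MSB] = σ²_e + n·σ²_a; in practice a linear mixed model (e.g. fit by REML) would be used instead, as the entry notes.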
Variance Component Model  
Variance Inflation Factor (VIF) 
In statistics, the variance inflation factor (VIF) quantifies the severity of multicollinearity in an ordinary least squares regression analysis. It provides an index that measures how much the variance (the square of the estimate’s standard deviation) of an estimated regression coefficient is increased because of collinearity. https://…/collinearityandstepwisevifselection 
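A minimal NumPy sketch (assumed data) of the computation: regress each predictor on all of the others and set VIF_j = 1 / (1 - R_j²):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (predictor columns only)."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        # Regress column j on the remaining columns plus an intercept.
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r2 = 1.0 - np.var(y - Z @ beta) / np.var(y)
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
a, b = rng.normal(size=(2, 500))
# The third predictor is nearly a sum of the first two, so all VIFs explode.
vifs = vif(np.column_stack([a, b, a + b + rng.normal(0.0, 0.01, 500)]))
```

A common rule of thumb flags VIF values above 5 or 10 as problematic collinearity; for independent predictors the VIFs sit near 1.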
Variance Inflation Factor Change Point Detection (VIFCP) 
VIFCP 
Variance Network  In this paper, we propose variance networks, a new model that stores the learned information in the variances of the network weights. Surprisingly, no information gets stored in the expectations of the weights, therefore if we replace these weights with their expectations, we would obtain a random guess quality prediction. We provide a numerical criterion that uses the loss curvature to determine which random variables can be replaced with their expected values, and find that only a small fraction of weights is needed for ensembling. Variance networks represent a diverse ensemble that is more robust to adversarial attacks than conventional low-variance ensembles. The success of this model raises several counterintuitive implications for the training and application of Deep Learning models. 
Variance Reduction (VR) 
In mathematics, more specifically in the theory of Monte Carlo methods, variance reduction is a procedure used to increase the precision of the estimates that can be obtained for a given number of iterations. Every output random variable from the simulation is associated with a variance which limits the precision of the simulation results. In order to make a simulation statistically efficient, i.e., to obtain a greater precision and smaller confidence intervals for the output random variable of interest, variance reduction techniques can be used. The main ones are: Common random numbers, antithetic variates, control variates, importance sampling and stratified sampling. Under these headings are a variety of specialized techniques; for example, particle transport simulations make extensive use of ‘weight windows’ and ‘splitting/Russian roulette’ techniques, which are a form of importance sampling. 
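As a small illustration of one of the listed techniques (antithetic variates, on a toy integrand not taken from the entry), pairing each uniform draw U with 1 - U induces negative correlation and shrinks the estimator's variance for E[exp(U)] = e - 1:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)

# Plain Monte Carlo estimator of E[exp(U)].
plain_samples = np.exp(u)

# Antithetic variates: average each draw with its mirrored counterpart 1 - U.
antithetic_samples = 0.5 * (np.exp(u) + np.exp(1.0 - u))

plain_est, anti_est = plain_samples.mean(), antithetic_samples.mean()
var_plain, var_anti = plain_samples.var(), antithetic_samples.var()
```

Here var_anti is far smaller than var_plain, so the antithetic estimator attains the same precision with many fewer draws, at the cost of one extra function evaluation per draw.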
Variation of Information Distance (VI) 
In probability theory and information theory, the variation of information or shared information distance is a measure of the distance between two clusterings (partitions of elements). It is closely related to mutual information; indeed, it is a simple linear expression involving the mutual information. Unlike the mutual information, however, the variation of information is a true metric, in that it obeys the triangle inequality. Even more, it is a universal metric, in that if any other distance measures two items as close, then the variation of information will also judge them close. 
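Concretely, VI(X; Y) = H(X) + H(Y) - 2I(X; Y) = H(X|Y) + H(Y|X), which can be accumulated directly from the joint cluster counts (illustrative sketch):

```python
import numpy as np
from collections import Counter

def variation_of_information(labels_a, labels_b):
    """VI distance (in nats) between two clusterings of the same items."""
    n = len(labels_a)
    joint = Counter(zip(labels_a, labels_b))
    pa, pb = Counter(labels_a), Counter(labels_b)
    vi = 0.0
    for (a, b), n_ab in joint.items():
        p_ab, p_a, p_b = n_ab / n, pa[a] / n, pb[b] / n
        # -p(a,b) * [log p(a,b)/p(a) + log p(a,b)/p(b)] sums to H(X|Y) + H(Y|X).
        vi -= p_ab * (np.log(p_ab / p_a) + np.log(p_ab / p_b))
    return vi
```

Identical clusterings (up to relabeling) have distance 0: [0, 0, 1, 1] vs. [1, 1, 0, 0] gives 0, while [0, 0, 1, 1] vs. [0, 1, 0, 1] gives 2·log 2.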
Variational Adaptive Newton (VAN) 
We present the Variational Adaptive Newton (VAN) method which is a blackbox optimization method especially suitable for explorativelearning tasks such as active learning and reinforcement learning. Similar to Bayesian methods, VAN estimates a distribution that can be used for exploration, but requires computations that are similar to continuous optimization methods. Our theoretical contribution reveals that VAN is a secondorder method that unifies existing methods in distinct fields of continuous optimization, variational inference, and evolution strategies. Our experimental results show that VAN performs well on a widevariety of learning tasks. This work presents a generalpurpose explorativelearning method that has the potential to improve learning in areas such as active learning and reinforcement learning. 
Variational Autoencoder (VAE) 
How can we perform efficient approximate inference and learning with directed probabilistic models whose continuous latent variables and/or parameters have intractable posterior distributions? The variational Bayesian (VB) approach involves the optimization of an approximation to the intractable posterior. Unfortunately, the common mean-field approach requires analytical solutions of expectations w.r.t. the approximate posterior, which are also intractable in the general case. We show how a reparameterization of the variational lower bound yields a simple differentiable unbiased estimator of the lower bound; this SGVB (Stochastic Gradient Variational Bayes) estimator can be used for efficient approximate posterior inference in almost any model with continuous latent variables and/or parameters, and is straightforward to optimize using standard stochastic gradient ascent techniques. For the case of an i.i.d. dataset and continuous latent variables per datapoint, we propose the Auto-Encoding VB (AEVB) algorithm. In the AEVB algorithm we make inference and learning especially efficient by using the SGVB estimator to optimize a recognition model that allows us to perform very efficient approximate posterior inference using simple ancestral sampling, which in turn allows us to efficiently learn the model parameters, without the need of expensive iterative inference schemes (such as MCMC) per datapoint. The learned approximate posterior inference model can also be used for a host of tasks such as recognition, denoising, representation and visualization purposes. When a neural network is used for the recognition model, we arrive at the variational autoencoder. 
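The reparameterization at the core of the SGVB estimator can be sketched in a few lines (NumPy standing in for an autodiff framework; the closed-form diagonal-Gaussian KL term below is the standard result, an assumption not spelled out in this abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, diag(exp(log_var))) as a deterministic, differentiable
    transform of parameter-free noise eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Analytic KL(q(z|x) || N(0, I)) term of the variational lower bound."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

mu, log_var = np.array([0.5, -0.2]), np.array([0.0, -1.0])
z = reparameterize(mu, log_var)
```

Because z depends on (mu, log_var) only through smooth operations, gradients of a downstream loss flow back to the encoder parameters, which is what makes the lower bound optimizable by stochastic gradient ascent.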
Variational Bayesian Sparse Gaussian Process Regression (VBSGPR) 
This paper presents a novel variational inference framework for deriving a family of Bayesian sparse Gaussian process regression (SGPR) models whose approximations are variationally optimal with respect to the full-rank GPR model enriched with various corresponding correlation structures of the observation noises. Our variational Bayesian SGPR (VBSGPR) models jointly treat both the distributions of the inducing variables and hyperparameters as variational parameters, which enables the decomposability of the variational lower bound that in turn can be exploited for stochastic optimization. Such a stochastic optimization involves iteratively following the stochastic gradient of the variational lower bound to improve its estimates of the optimal variational distributions of the inducing variables and hyperparameters (and hence the predictive distribution) of our VBSGPR models and is guaranteed to achieve asymptotic convergence to them. We show that the stochastic gradient is an unbiased estimator of the exact gradient and can be computed in constant time per iteration, hence achieving scalability to big data. We empirically evaluate the performance of our proposed framework on two real-world, massive datasets. 
Variational BiLSTM  Recurrent neural networks like long short-term memory (LSTM) are important architectures for sequential prediction tasks. LSTMs (and RNNs in general) model sequences along the forward time direction. Bidirectional LSTMs (BiLSTMs) on the other hand model sequences along both forward and backward directions and are generally known to perform better at such tasks because they capture a richer representation of the data. In the training of BiLSTMs, the forward and backward paths are learned independently. We propose a variant of the BiLSTM architecture, which we call Variational BiLSTM, that creates a channel between the two paths (during training, but which may be omitted during inference); thus optimizing the two paths jointly. We arrive at this joint objective for our model by minimizing a variational lower bound of the joint likelihood of the data sequence. Our model acts as a regularizer and encourages the two networks to inform each other in making their respective predictions using distinct information. We perform ablation studies to better understand the different components of our model and evaluate the method on various benchmarks, showing state-of-the-art performance. 
Variational Continual Learning  This paper develops variational continual learning (VCL), a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks. The framework can successfully train both deep discriminative models and deep generative models in complex continual learning settings where existing tasks evolve over time and entirely new tasks emerge. Experimental results show that variational continual learning outperforms state-of-the-art continual learning methods on a variety of tasks, avoiding catastrophic forgetting in a fully automatic way. 
Variational Deep Embedding (VaDE) 
Clustering is among the most fundamental tasks in computer vision and machine learning. In this paper, we propose Variational Deep Embedding (VaDE), a novel unsupervised generative clustering approach within the framework of Variational Auto-Encoder (VAE). Specifically, VaDE models the data generative procedure with a Gaussian Mixture Model (GMM) and a deep neural network (DNN): 1) the GMM picks a cluster; 2) from which a latent embedding is generated; 3) then the DNN decodes the latent embedding into observables. Inference in VaDE is done in a variational way: a different DNN is used to encode observables to latent embeddings, so that the evidence lower bound (ELBO) can be optimized using the Stochastic Gradient Variational Bayes (SGVB) estimator and the reparameterization trick. Quantitative comparisons with strong baselines are included in this paper, and experimental results show that VaDE significantly outperforms the state-of-the-art clustering methods on 4 benchmarks from various modalities. Moreover, by VaDE’s generative nature, we show its capability of generating highly realistic samples for any specified cluster, without using supervised information during training. Lastly, VaDE is a flexible and extensible framework for unsupervised generative clustering; mixture models more general than GMM can easily be plugged in. 
Variational Deep Q Network  We propose a framework that directly tackles the probability distribution of the value function parameters in Deep Q Network (DQN), with powerful variational inference subroutines to approximate the posterior of the parameters. We will establish the equivalence between our proposed surrogate objective and variational inference loss. Our new algorithm achieves efficient exploration and performs well on large scale chain Markov Decision Process (MDP). 
Variational Gaussian Process (VGP) 
Representations offered by deep generative models are fundamentally tied to their inference method from data. Variational inference methods require a rich family of approximating distributions. We construct the variational Gaussian process (VGP), a Bayesian nonparametric model which adapts its shape to match complex posterior distributions. The VGP generates approximate posterior samples by generating latent inputs and warping them through random nonlinear mappings; the distribution over random mappings is learned during inference, enabling the transformed outputs to adapt to varying complexity. We prove a universal approximation theorem for the VGP, demonstrating its representative power for learning any model. For inference we present a variational objective inspired by autoencoders and perform black box inference over a wide class of models. The VGP achieves new state-of-the-art results for unsupervised learning, inferring models such as the deep latent Gaussian model and the recently proposed DRAW. 
Variational Generative Adversarial net (VGAN) 
In this paper, we propose a model using generative adversarial net (GAN) to generate realistic text. Instead of using standard GAN, we combine variational autoencoder (VAE) with generative adversarial net. The use of high-level latent random variables is helpful to learn the data distribution and solve the problem that generative adversarial net always emits the similar data. We propose the VGAN model where the generative model is composed of recurrent neural network and VAE. The discriminative model is a convolutional neural network. We train the model via policy gradient. We apply the proposed model to the task of text generation and compare it to other recent neural network based models, such as recurrent neural network language model and SeqGAN. We evaluate the performance of the model by calculating negative log-likelihood and the BLEU score. We conduct experiments on three benchmark datasets, and results show that our model outperforms other previous models. 
Variational Inverse Control With Events (VICE) 
The design of a reward function often poses a major practical challenge to real-world applications of reinforcement learning. Approaches such as inverse reinforcement learning attempt to overcome this challenge, but require expert demonstrations, which can be difficult or expensive to obtain in practice. We propose variational inverse control with events (VICE), which generalizes inverse reinforcement learning methods to cases where full demonstrations are not needed, such as when only samples of desired goal states are available. Our method is grounded in an alternative perspective on control and reinforcement learning, where an agent’s goal is to maximize the probability that one or more events will happen at some point in the future, rather than maximizing cumulative rewards. We demonstrate the effectiveness of our methods on continuous control tasks, with a focus on high-dimensional observations like images where rewards are hard or even impossible to specify. 
Variational Message Passing (VMP) 
Variational message passing (VMP) is an approximate inference technique for continuous- or discrete-valued Bayesian networks, with conjugate-exponential parents, developed by John Winn. VMP was developed as a means of generalizing the approximate variational methods used by such techniques as Latent Dirichlet allocation and works by updating an approximate distribution at each node through messages in the node’s Markov blanket. 
Variational Mode Decomposition (VMD) 
During the late 1990s, Huang introduced the algorithm called Empirical Mode Decomposition (EMD), which is widely used today to recursively decompose a signal into different modes of unknown but separate spectral bands. EMD is known for limitations like sensitivity to noise and sampling. These limitations could only partially be addressed by more mathematical attempts to this decomposition problem, like synchrosqueezing, empirical wavelets or recursive variational decomposition. Here, we propose an entirely non-recursive variational mode decomposition model, where the modes are extracted concurrently. The model looks for an ensemble of modes and their respective center frequencies, such that the modes collectively reproduce the input signal, while each being smooth after demodulation into baseband. In Fourier domain, this corresponds to a narrow-band prior. We show important relations to Wiener filter denoising. Indeed, the proposed method is a generalization of the classic Wiener filter into multiple, adaptive bands. Our model provides a solution to the decomposition problem that is theoretically well founded and still easy to understand. The variational model is efficiently optimized using an alternating direction method of multipliers approach. Preliminary results show attractive performance with respect to existing mode decomposition models. In particular, our proposed model is much more robust to sampling and noise. Finally, we show promising practical decomposition results on a series of artificial and real data. vmd 
Variational Recurrent Neural Machine Translation (VRNMT) 
Partially inspired by successful applications of variational recurrent neural networks, we propose a novel variational recurrent neural machine translation (VRNMT) model in this paper. Different from the variational NMT, VRNMT introduces a series of latent random variables to model the translation procedure of a sentence in a generative way, instead of a single latent variable. Specifically, the latent random variables are included into the hidden states of the NMT decoder with elements from the variational autoencoder. In this way, these variables are recurrently generated, which enables them to further capture strong and complex dependencies among the output translations at different timesteps. In order to deal with the challenges in performing efficient posterior inference and large-scale training during the incorporation of latent variables, we build a neural posterior approximator, and equip it with a reparameterization technique to estimate the variational lower bound. Experiments on Chinese-English and English-German translation tasks demonstrate that the proposed model achieves significant improvements over both the conventional and variational NMT models. 
Variational SimulationBased Calibration (VSBC) 
While it’s always possible to compute a variational approximation to a posterior distribution, it can be difficult to discover problems with this approximation. We propose two diagnostic algorithms to alleviate this problem. The Pareto-smoothed importance sampling (PSIS) diagnostic gives a goodness-of-fit measurement for joint distributions, while simultaneously improving the error in the estimate. The variational simulation-based calibration (VSBC) assesses the average performance of point estimates. 
Variational Walkback  We propose a novel method to directly learn a stochastic transition operator whose repeated application provides generated samples. Traditional undirected graphical models approach this problem indirectly by learning a Markov chain model whose stationary distribution obeys detailed balance with respect to a parameterized energy function. The energy function is then modified so the model and data distributions match, with no guarantee on the number of steps required for the Markov chain to converge. Moreover, the detailed balance condition is highly restrictive: energy-based models corresponding to neural networks must have symmetric weights, unlike biological neural circuits. In contrast, we develop a method for directly learning arbitrarily parameterized transition operators capable of expressing non-equilibrium stationary distributions that violate detailed balance, thereby enabling us to learn more biologically plausible asymmetric neural networks and more general non-energy-based dynamical systems. The proposed training objective, which we derive via principled variational methods, encourages the transition operator to ‘walk back’ in multi-step trajectories that start at datapoints, as quickly as possible back to the original data points. We present a series of experimental results illustrating the soundness of the proposed approach, Variational Walkback (VW), on the MNIST, CIFAR-10, SVHN and CelebA datasets, demonstrating superior samples compared to earlier attempts to learn a transition operator. We also show that although each rapid training trajectory is limited to a finite but variable number of steps, our transition operator continues to generate good samples well past the length of such trajectories, thereby demonstrating the match of its non-equilibrium stationary distribution to the data distribution. Source Code: http://…/walkback_nips17 
Variational Wasserstein Clustering  We propose a new clustering method based on optimal transportation. We solve optimal transportation with variational principles and investigate the use of power diagrams as transportation plans for aggregating arbitrary domains into a fixed number of clusters. We iteratively drive centroids through target domains while maintaining the minimum clustering energy by adjusting the power diagrams. Thus, we simultaneously pursue clustering and the Wasserstein distances between centroids and target domains, resulting in a robust measure-preserving mapping. In general, there are two approaches for solving the optimal transportation problem: Kantorovich’s vs. Brenier’s. While most researchers focus on Kantorovich’s approach, we propose a solution to the clustering problem following Brenier’s approach and achieve a competitive result with the state-of-the-art method. We demonstrate our applications to different areas such as domain adaptation, remeshing, and representation learning on synthetic and real data. 
Varimax Rotation  In statistics, a varimax rotation is used to simplify the expression of a particular subspace in terms of just a few major items each. The actual coordinate system is unchanged, it is the orthogonal basis that is being rotated to align with those coordinates. The subspace found with principal component analysis or factor analysis is expressed as a dense basis with many non-zero weights which makes it hard to interpret. Varimax is so called because it maximizes the sum of the variances of the squared loadings (squared correlations between variables and factors). Preserving orthogonality requires that it is a rotation that leaves the subspace invariant. Intuitively, this is achieved if, (a) any given variable has a high loading on a single factor but near-zero loadings on the remaining factors and if (b) any given factor is constituted by only a few variables with very high loadings on this factor while the remaining variables have near-zero loadings on this factor. If these conditions hold, the factor loading matrix is said to have “simple structure,” and varimax rotation brings the loading matrix closer to such simple structure (as much as the data allow). From the perspective of individuals measured on the variables, varimax seeks a basis that most economically represents each individual – that is, each individual can be well described by a linear combination of only a few basis functions. 
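As a sketch, the varimax criterion can be maximized with the classic SVD-based fixed-point iteration used in several statistics packages. The `varimax` function and the toy loading matrix below are illustrative, not taken from a specific library:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    """Rotate a p x k loading matrix towards varimax simple structure.

    Classic SVD-based fixed-point scheme; returns the rotated loadings."""
    p, k = loadings.shape
    R = np.eye(k)                      # accumulated orthogonal rotation
    d = 0.0
    for _ in range(max_iter):
        Lam = loadings @ R
        # Gradient of the varimax criterion with respect to the rotation
        G = loadings.T @ (Lam**3 - (gamma / p) * Lam @ np.diag((Lam**2).sum(axis=0)))
        u, s, vt = np.linalg.svd(G)
        R = u @ vt                     # nearest orthogonal matrix to G
        d_new = s.sum()
        if d_new < d * (1 + tol):      # criterion stopped improving
            break
        d = d_new
    return loadings @ R

# Toy example: clean two-factor structure deliberately mixed by a 30° rotation
simple = np.vstack([np.tile([0.9, 0.0], (4, 1)),
                    np.tile([0.0, 0.7], (3, 1))])
theta = np.pi / 6
mix = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
rotated = varimax(simple @ mix)        # should approximately undo the mixing
```

Because the rotation is orthogonal, each row's sum of squared loadings (its communality) is unchanged; only how the variance is distributed across factors changes.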
VCExplorer  Graphs have been widely used to model different information networks, such as the Web, biological networks and social networks (e.g. Twitter). Due to the size and complexity of these graphs, how to explore and utilize these graphs has become a very challenging problem. In this paper, we propose VCExplorer, a new interactive graph exploration framework that integrates the strengths of graph visualization and graph summarization. Unlike existing graph visualization tools where vertices of a graph may be clustered into a smaller collection of super/virtual vertices, VCExplorer displays a small number of actual source graph vertices (called hubs) and summaries of the information between these vertices. We refer to such a graph as a HA-graph (Hub-based Aggregation Graph). This allows users to appreciate the relationship between the hubs, rather than super/virtual vertices. Users can navigate through the HA-graph by ‘drilling down’ into the summaries between hubs to display more hubs. We illustrate how the graph aggregation techniques can be integrated into the exploring framework as the consolidated information to users. In addition, we propose efficient graph aggregation algorithms over multiple subgraphs via computation sharing. Extensive experimental evaluations have been conducted using both real and synthetic datasets and the results indicate the effectiveness and efficiency of VCExplorer for exploration. 
Vector Autoregression (VAR) 
➘ “Vector Autoregressive Model” 
Vector Autoregressive Model (VAR) 
Vector autoregression (VAR) is an econometric model used to capture the linear interdependencies among multiple time series. VAR models generalize the univariate autoregression (AR) models by allowing for more than one evolving variable. All variables in a VAR are treated symmetrically in a structural sense (although the estimated quantitative response coefficients will not in general be the same); each variable has an equation explaining its evolution based on its own lags and the lags of the other model variables. VAR modeling does not require as much knowledge about the forces influencing a variable as do structural models with simultaneous equations: The only prior knowledge required is a list of variables which can be hypothesized to affect each other intertemporally. bvarsv, graphicalVAR, mlVAR 
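Since each equation regresses a variable on the lags of all variables, a VAR can be estimated equation-by-equation with ordinary least squares. A minimal VAR(1) sketch (the coefficient matrix and series below are illustrative):

```python
import numpy as np

# Toy VAR(1): y_t = A @ y_{t-1} + e_t, with A estimated by least squares.
rng = np.random.default_rng(42)
A_true = np.array([[0.5, 0.2],
                   [0.0, 0.4]])   # each variable depends on its own and the other's lag
T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Stack lagged values as regressors and solve least squares:
# row-wise, y[t] = y[t-1] @ A.T, so lstsq recovers A.T
X, Y = y[:-1], y[1:]
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = B.T
```

With enough observations of a stationary process, `A_hat` recovers the true lag coefficients closely; dedicated implementations (e.g. `statsmodels.tsa.api.VAR` in Python, or the R packages listed above) add lag-order selection, inference and forecasting on top of this core regression.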
Vector Field Based Neural Network  A novel Neural Network architecture is proposed using the mathematically and physically rich idea of vector fields as hidden layers to perform nonlinear transformations in the data. The data points are interpreted as particles moving along a flow defined by the vector field which intuitively represents the desired movement to enable classification. The architecture moves the data points from their original configuration to a new one following the streamlines of the vector field with the objective of achieving a final configuration where classes are separable. An optimization problem is solved through gradient descent to learn this vector field. 
Vector Generalized Additive Model (VGAM) 
Vector smoothing is used to extend the class of generalized additive models in a very natural way to include a class of multivariate regression models. The resulting models are called ‘vector generalized additive models’. The class of models for which the methodology gives generalized additive extensions includes the multiple logistic regression model for nominal responses, the continuation ratio model and the proportional and non-proportional odds models for ordinal responses, and the bivariate probit and bivariate logistic models for correlated binary responses. They may also be applied to generalized estimating equations. VGAM 
Vector Generalized Linear Model (VGLM) 
VGAM 
Vector Quantised-Variational AutoEncoder (VQ-VAE) 
Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of ‘posterior collapse’ — where the latents are ignored when they are paired with a powerful autoregressive decoder — typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations. 
Vector Quantization (VQ) 
Vector quantization (VQ) is a classical quantization technique from signal processing which allows the modeling of probability density functions by the distribution of prototype vectors. It was originally used for data compression. It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them. Each group is represented by its centroid point, as in k-means and some other clustering algorithms. The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error, and rare data high error. This is why VQ is suitable for lossy data compression. It can also be used for lossy data correction and density estimation. Vector quantization is based on the competitive learning paradigm, so it is closely related to the self-organizing map model. 
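A toy vector quantizer can be built with exactly the k-means (Lloyd) updates described above: learn a small codebook of centroids, then encode each vector as the index of its nearest codebook entry. The data and the deterministic initialization below are illustrative:

```python
import numpy as np

# Two well-separated clusters of 2-D "signal" vectors
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2)),
                  rng.normal(loc=[5.0, 5.0], scale=0.3, size=(100, 2))])

k = 2
codebook = data[[0, 100]].copy()       # deterministic init: one point per cluster
for _ in range(10):                    # Lloyd iterations
    dist = ((data[:, None, :] - codebook[None, :, :])**2).sum(axis=2)
    idx = dist.argmin(axis=1)          # nearest-centroid assignment
    for j in range(k):
        if (idx == j).any():
            codebook[j] = data[idx == j].mean(axis=0)

# Lossy encoding: each vector is replaced by the index of its centroid
codes = ((data[:, None, :] - codebook[None, :, :])**2).sum(axis=2).argmin(axis=1)
reconstruction = codebook[codes]
```

Transmitting `codes` plus the small `codebook` instead of `data` is the compression step; the reconstruction error is exactly the within-cluster scatter, which is why common (dense) regions quantize with low error.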
Vector Space Model (VSM) 
Vector space model or term vector model is an algebraic model for representing text documents (and any objects, in general) as vectors of identifiers, such as, for example, index terms. It is used in information filtering, information retrieval, indexing and relevancy rankings. Its first use was in the SMART Information Retrieval System. 
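A minimal sketch of the idea: represent each document as a term-frequency vector over index terms and rank by cosine similarity. The documents and query below are illustrative, and idf weighting is omitted for brevity:

```python
import math
from collections import Counter

def tf_vector(text):
    """Document as a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term vectors."""
    terms = set(u) | set(v)
    dot = sum(u[t] * v[t] for t in terms)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

docs = {
    "d1": "information retrieval ranks documents by relevance",
    "d2": "retrieval systems index documents for search",
    "d3": "the cat sat on the mat",
}
query = tf_vector("documents retrieval")
scores = {name: cosine(query, tf_vector(text)) for name, text in docs.items()}
```

Documents sharing index terms with the query score above zero, while a document with no overlapping terms scores exactly zero; real systems add tf-idf weighting and stemming on top of this skeleton.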
Vega  Vega is a visualization grammar, a declarative format for creating, saving, and sharing interactive visualization designs. With Vega, you can describe the visual appearance and interactive behavior of a visualization in a JSON format, and generate views using HTML5 Canvas or SVG. 
vega.js  Vega is a visualization grammar, a declarative format for creating, saving and sharing visualization designs. With Vega you can describe data visualizations in a JSON format, and generate interactive views using either HTML5 Canvas or SVG. 
Vega-Lite  We present Vega-Lite, a high-level grammar that enables rapid specification of interactive data visualizations. Vega-Lite combines a traditional grammar of graphics, providing visual encoding rules and a composition algebra for layered and multi-view displays, with a novel grammar of interaction. Users specify interactive semantics by composing selections. In Vega-Lite, a selection is an abstraction that defines input event processing, points of interest, and a predicate function for inclusion testing. Selections parameterize visual encodings by serving as input data, defining scale extents, or by driving conditional logic. The Vega-Lite compiler automatically synthesizes requisite data flow and event handling logic, which users can override for further customization. In contrast to existing reactive specifications, Vega-Lite selections decompose an interaction design into concise, enumerable semantic units. We evaluate Vega-Lite through a range of examples, demonstrating succinct specification of both customized interaction methods and common techniques such as panning, zooming, and linked selection. 
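To give a feel for the grammar, here is a minimal Vega-Lite specification sketch (a bar chart with inline data; the field names and values are illustrative):

```json
{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "description": "A simple bar chart with inline data.",
  "data": {
    "values": [
      {"category": "A", "amount": 28},
      {"category": "B", "amount": 55},
      {"category": "C", "amount": 43}
    ]
  },
  "mark": "bar",
  "encoding": {
    "x": {"field": "category", "type": "nominal"},
    "y": {"field": "amount", "type": "quantitative"}
  }
}
```

The `mark` and `encoding` properties are the visual-encoding half of the grammar; interaction is layered on top by adding selection definitions that other encoding channels can reference.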
Velox  To support complex data-intensive applications such as personalized recommendations, targeted advertising, and intelligent services, the data management community has focused heavily on the design of systems to support training complex models on large datasets. Unfortunately, the design of these systems largely ignores a critical component of the overall analytics process: the deployment and serving of models at scale. We present Velox, a new component of the Berkeley Data Analytics Stack. Velox is a data management system for facilitating the next steps in real-world, large-scale analytics pipelines: online model management, maintenance, and prediction serving. Velox provides end-user applications and services with a low-latency, intuitive interface to models, transforming the raw statistical models currently trained using existing offline large-scale compute frameworks into full-blown, end-to-end data products. To provide up-to-date results for these complex models, Velox also facilitates lightweight online model maintenance and selection (i.e., dynamic weighting). Velox has the ability to span online and offline systems, to adaptively adjust model materialization strategies, and to exploit inherent statistical properties such as model error tolerance, all while operating at ‘Big Data’ scale. http://…/veloxampcamp5final 
VerdictDB  Despite 25 years of research in academia, approximate query processing (AQP) has had little industrial adoption. One of the major causes for this slow adoption is the reluctance of traditional vendors to make radical changes to their legacy codebases, and the preoccupation of newer vendors (e.g., SQL-on-Hadoop products) with implementing standard features. On the other hand, the few AQP engines that are available are each tied to a specific platform and require users to completely abandon their existing databases—an unrealistic expectation given the infancy of the AQP technology. Therefore, we argue that a universal solution is needed: a database-agnostic approximation engine that will widen the reach of this emerging technology across various platforms. Our proposal, called VerdictDB, uses a middleware architecture that requires no changes to the backend database, and thus, can work with all off-the-shelf engines. Operating at the driver level, VerdictDB intercepts analytical queries issued to the database and rewrites them into another query that, if executed by any standard relational engine, will yield sufficient information for computing an approximate answer. VerdictDB uses the returned result set to compute an approximate answer and error estimates, which are then passed on to the user or application. However, lack of access to the query execution layer introduces significant challenges in terms of generality, correctness, and efficiency. This paper shows how VerdictDB overcomes these challenges and delivers up to 171 times speedup (18.45 times on average) for a variety of existing engines, such as Impala, Spark SQL, and Amazon Redshift while incurring less than 2.6% relative error. 
VERtex Similarity Embedding (VERSE) 
Embedding a web-scale information network into a low-dimensional vector space facilitates tasks such as link prediction, classification, and visualization. Past research has addressed the problem of extracting such embeddings by adopting methods from words to graphs, without defining a clearly comprehensible graph-related objective. Yet, as we show, the objectives used in past works implicitly utilize similarity measures among graph nodes. In this paper, we carry the similarity orientation of previous works to its logical conclusion; we propose VERtex Similarity Embeddings (VERSE), a simple, versatile, and memory-efficient method that derives graph embeddings explicitly calibrated to preserve the distributions of a selected vertex-to-vertex similarity measure. VERSE learns such embeddings by training a single-layer neural network. While its default, scalable version does so via sampling similarity information, we also develop a variant using the full information per vertex. Our experimental study on standard benchmarks and real-world datasets demonstrates that VERSE, instantiated with diverse similarity measures, outperforms state-of-the-art methods in terms of precision and recall in major data mining tasks and supersedes them in time and space efficiency, while the scalable sampling-based variant achieves equally good results as the non-scalable full variant. 
Vertex Similarity Method  We consider methods for quantifying the similarity of vertices in networks. We propose a measure of similarity based on the concept that two vertices are similar if their immediate neighbors in the network are themselves similar. This leads to a self-consistent matrix formulation of similarity that can be evaluated iteratively using only a knowledge of the adjacency matrix of the network. We test our similarity measure on computer-generated networks for which the expected results are known, and on a number of real-world networks. 
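The self-consistent idea ("two vertices are similar if their neighbors are similar") can be sketched as a fixed-point iteration driven only by the adjacency matrix. The normalized update below is a toy illustration of that principle, not the paper's exact formulation:

```python
import numpy as np

# Small graph: edges 0-1, 0-2, 1-2, 2-3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

alpha = 0.8
S = np.eye(len(A))                        # start: each vertex similar only to itself
for _ in range(50):
    S_new = alpha * A @ S @ A.T           # similarity flows between neighbourhoods
    S_new = S_new / np.abs(S_new).max()   # keep the iteration bounded
    S_new = S_new + np.eye(len(A))        # every vertex fully similar to itself
    S = S_new / np.abs(S_new).max()
```

In this toy graph, vertices 0 and 1 share both each other and neighbour 2, so the fixed point rates them more similar than the loosely connected pair 0 and 3.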
Vertex-Diminished Random Walk (VDRW) 
Imbalanced data widely exists in many high-impact applications. An example is in air traffic control, where we aim to identify the leading indicators for each type of accident cause from historical records. Among all three types of accident causes, historical records with ‘personnel issues’ are much more than the other two types (‘aircraft issues’ and ‘environmental issues’) combined. Thus, the resulting dataset is highly imbalanced, and can be naturally modeled as a network. Up until now, most existing work on imbalanced data analysis focused on the classification setting, and very little is devoted to learning the node representation from imbalanced networks. To address this problem, in this paper, we propose Vertex-Diminished Random Walk (VDRW) for imbalanced network analysis. The key idea is to encourage the random particle to walk within the same class by adjusting the transition probabilities at each step. It resembles the existing Vertex Reinforced Random Walk in terms of the dynamic nature of the transition probabilities, as well as some convergence properties. However, it is more suitable for analyzing imbalanced networks as it leads to more separable node representations in the embedding space. Then, based on VDRW, we propose a semi-supervised network representation learning framework named ImVerde for imbalanced networks, in which context sampling uses VDRW and the label information to create node-context pairs, and balanced-batch sampling adopts a simple under-sampling method to balance these pairs in different classes. Experimental results demonstrate that ImVerde based on VDRW outperforms state-of-the-art algorithms for learning network representation from imbalanced data. 
Vertical Hoeffding Tree (VHT) 
The Vertical Hoeffding Tree (VHT) is a distributed extension of the VFDT (Domingos and Hulten, 2000). The VHT uses vertical parallelism to split the workload across several machines. Vertical parallelism leverages the parallelism across attributes in the same example, rather than across different examples in the stream. In practice, each training example is routed through the tree model to a leaf. There, the example is split into its constituting attributes, and each attribute is sent to a different Processor instance that keeps track of sufficient statistics. This architecture has two main advantages over one based on horizontal parallelism. First, attribute counters are not replicated across several machines, thus reducing the memory footprint. Second, the computation of the fitness of an attribute for a split decision (via, e.g., entropy or information gain) can be performed in parallel. The drawback is that in order to get good performance, there must be sufficient inherent parallelism in the data. That is, the VHT works best for sparse data (e.g., bag-of-words models). The Vertical Hoeffding Tree (VHT) classifier is a distributed classifier that utilizes vertical parallelism on top of the Very Fast Decision Tree (VFDT) or Hoeffding Tree classifier. The Hoeffding Tree, or VFDT, is the standard decision tree algorithm for data stream classification. VFDT uses the Hoeffding bound to decide the minimum number of arriving instances needed to achieve a certain level of confidence in splitting the node. This confidence level determines how close the statistics of the attribute chosen by VFDT are to those of the attribute that would be chosen by a decision tree for batch learning. For a more comprehensive summary of VFDT, read chapter 3 of ‘Data Stream Mining: A Practical Approach’. http://…/StreamMining.pdf http://…/p71domingos.pdf http://…/arintoemdcthesis.pdf https://…ableadvancedmassiveonlineanalysis.pdf 
Very Fast Decision Tree (VFDT) 
The Hoeffding Tree, or VFDT, is the standard decision tree algorithm for data stream classification. VFDT uses the Hoeffding bound to decide the minimum number of arriving instances needed to achieve a certain level of confidence in splitting the node. This confidence level determines how close the statistics of the attribute chosen by VFDT are to those of the attribute that would be chosen by a decision tree for batch learning. ➚ “Hoeffding Tree” https://…/VerticalHoeffdingTreeClassifier.html 
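The Hoeffding bound itself is a one-line formula: after n observations of a statistic with range R, the true mean lies within eps of the observed mean with probability 1 − delta. A quick sketch (the parameter values are illustrative):

```python
import math

def hoeffding_bound(R, delta, n):
    """eps = sqrt(R^2 * ln(1/delta) / (2n)) for n observations of range R."""
    return math.sqrt(R * R * math.log(1.0 / delta) / (2.0 * n))

# For information gain with two classes, the range is R = log2(2) = 1.
# VFDT splits a leaf once the observed gap between the two best split
# attributes exceeds eps, so more examples shrink eps and permit a split.
eps_small_n = hoeffding_bound(R=1.0, delta=1e-7, n=200)
eps_large_n = hoeffding_bound(R=1.0, delta=1e-7, n=5000)
```

Because eps shrinks as 1/sqrt(n), attributes that are nearly tied simply require more examples before the tree commits to a split.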
Very Good Importance Sampling (VGIS) 
loo 
Veto Interval Graphs (VI Graphs) 
We introduce a variation of interval graphs, called veto interval (VI) graphs. A VI graph is represented by a set of closed intervals, each containing a point called a veto mark. The edge $ab$ is in the graph if the intervals corresponding to the vertices $a$ and $b$ intersect, and neither contains the veto mark of the other. We find families of graphs which are VI graphs, and prove results towards characterizing the maximum chromatic number of a VI graph. We define and prove similar results about several related graph families, including unit VI graphs, midpoint unit VI (MUVI) graphs, and single and double approval graphs. We also highlight a relationship between approval graphs and a family of tolerance graphs. 
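The adjacency rule can be checked directly from the definition: vertices are adjacent iff their closed intervals intersect and neither interval contains the other's veto mark. A small sketch with illustrative intervals:

```python
# Intervals as (left, right, veto) triples with left <= veto <= right.
def vi_edge(a, b):
    """Edge rule for veto interval graphs."""
    (la, ra, va), (lb, rb, vb) = a, b
    intersect = la <= rb and lb <= ra                  # closed intervals overlap
    veto = (la <= vb <= ra) or (lb <= va <= rb)        # one contains the other's mark
    return intersect and not veto

u = (0, 4, 1)   # interval [0, 4] with veto mark at 1
v = (3, 8, 7)   # overlaps u on [3, 4]; both veto marks lie outside the overlap
w = (2, 6, 3)   # overlaps u, but u contains w's veto mark 3, so no edge u-w
```

So `u` and `v` are adjacent, while `u` and `w` are not despite their intervals intersecting — the veto mark is what distinguishes VI graphs from ordinary interval graphs.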
Video Ladder Network (VLN) 
We present the Video Ladder Network (VLN) for video prediction. VLN is a neural encoder-decoder model augmented by both recurrent and feedforward lateral connections at all layers. The model achieves competitive results on the Moving MNIST dataset while having very simple structure and providing fast inference. 
VideoCapsuleNet  The recent advances in Deep Convolutional Neural Networks (DCNNs) have shown extremely good results for video human action classification, however, action detection is still a challenging problem. The current action detection approaches follow a complex pipeline which involves multiple tasks such as tube proposals, optical flow, and tube classification. In this work, we present a more elegant solution for action detection based on the recently developed capsule network. We propose a 3D capsule network for videos, called VideoCapsuleNet: a unified network for action detection which can jointly perform pixel-wise action segmentation along with action classification. The proposed network is a generalization of capsule network from 2D to 3D, which takes a sequence of video frames as input. The 3D generalization drastically increases the number of capsules in the network, making capsule routing computationally expensive. We introduce capsule-pooling in the convolutional capsule layer to address this issue which makes the voting algorithm tractable. The routing-by-agreement in the network inherently models the action representations and various action characteristics are captured by the predicted capsules. This inspired us to utilize the capsules for action localization and the class-specific capsules predicted by the network are used to determine a pixel-wise localization of actions. The localization is further improved by parameterized skip connections with the convolutional capsule layers and the network is trained end-to-end with a classification as well as localization loss. The proposed network achieves state-of-the-art performance on multiple action detection datasets including UCF Sports, JHMDB, and UCF-101 (24 classes) with an impressive ~20% improvement on UCF-101 and ~15% improvement on JHMDB in terms of v-mAP scores. 
VieClus  It is common knowledge that there is no single best strategy for graph clustering, which justifies a plethora of existing approaches. In this paper, we present a general memetic algorithm, VieClus, to tackle the graph clustering problem. This algorithm can be adapted to optimize different objective functions. A key component of our contribution is a set of natural recombine operators that employ ensemble clusterings as well as multilevel techniques. Lastly, we combine these techniques with a scalable communication protocol, producing a system that is able to compute high-quality solutions in a short amount of time. We instantiate our scheme with local search for modularity and show that our algorithm successfully improves or reproduces all entries of the 10th DIMACS implementation challenge under consideration using a small amount of time. 
VIoLET  IoT deployments have been growing manifold, encompassing sensors, networks, edge, fog and cloud resources. Despite the intense interest from researchers and practitioners, most do not have access to large-scale IoT testbeds for validation. Simulation environments that allow analytical modeling are a poor substitute for evaluating software platforms or application workloads in realistic computing environments. Here, we propose VIoLET, a virtual environment for defining and launching large-scale IoT deployments within cloud VMs. It offers a declarative model to specify container-based compute resources that match the performance of the native edge, fog and cloud devices using Docker. These can be interconnected by complex topologies on which private/public networks, and bandwidth and latency rules are enforced. Users can configure synthetic sensors for data generation on these devices as well. We validate VIoLET for deployments with > 400 devices and > 1500 device-cores, and show that the virtual IoT environment closely matches the expected compute and network performance at modest costs. This fills an important gap between IoT simulators and real deployments. 
VIPE  This paper presents a new interactive opinion mining tool that helps users to classify large sets of short texts originated from Web opinion polls, technical forums or Twitter. From a manual multi-label pre-classification of a very limited text subset, a learning algorithm predicts the labels of the remaining texts of the corpus and the texts most likely associated to a selected label. Using a fast matrix factorization, the algorithm is able to handle large corpora and is well-adapted to interactivity by integrating the corrections proposed by the users on the fly. Experimental results on classical datasets of various sizes and feedbacks of users from marketing services of the telecommunication company Orange confirm the quality of the obtained results. 
Viral Search  The article, after a brief introduction on genetic algorithms and their functioning, presents a kind of genetic algorithm called Viral Search. We present the key concepts, we formally derive the algorithm and we perform numerical tests designed to illustrate the potential and limits. 
vis.js  A dynamic, browser based visualization library. The library is designed to be easy to use, to handle large amounts of dynamic data, and to enable manipulation of and interaction with the data. The library consists of the components DataSet, Timeline, Network, Graph2d and Graph3d. 
Visual Analog Scale (VAS) 
The visual analogue scale or visual analog scale (VAS) is a psychometric response scale which can be used in questionnaires. It is a measurement instrument for subjective characteristics or attitudes that cannot be directly measured. When responding to a VAS item, respondents specify their level of agreement to a statement by indicating a position along a continuous line between two endpoints. http://…/jcn_10_706.pdf ordinalCont 
Visual Analytics  Visual analytics is “the science of analytical reasoning facilitated by visual interactive interfaces.” It can attack certain problems whose size, complexity, and need for closely coupled human and machine analysis may make them otherwise intractable. Visual analytics advances science and technology developments in analytical reasoning, interaction, data transformations and representations for computation and visualization, analytic reporting, and technology transition. As a research agenda, visual analytics brings together several scientific and technical communities from computer science, information visualization, cognitive and perceptual sciences, interactive design, graphic design, and social sciences. 
Visual Communication (VC) 
Visual communication is communication through visual aid and is described as the conveyance of ideas and information in forms that can be read or looked upon. Visual communication in part or whole relies on vision, and is primarily presented or expressed with two-dimensional images; it includes signs, typography, drawing, graphic design, illustration, industrial design, advertising, animation, colour and electronic resources. It also explores the idea that a visual message accompanying text has a greater power to inform, educate, or persuade a person or audience. 
Visual Intelligence (VI) 

Visual Interaction Network (VIN) 
From just a glance, humans can make rich predictions about the future state of a wide range of physical systems. On the other hand, modern approaches from engineering, robotics, and graphics are often restricted to narrow domains and require direct measurements of the underlying states. We introduce the Visual Interaction Network, a general-purpose model for learning the dynamics of a physical system from raw visual observations. Our model consists of a perceptual front-end based on convolutional neural networks and a dynamics predictor based on interaction networks. Through joint training, the perceptual front-end learns to parse a dynamic visual scene into a set of factored latent object representations. The dynamics predictor learns to roll these states forward in time by computing their interactions and dynamics, producing a predicted physical trajectory of arbitrary length. We found that from just six input video frames the Visual Interaction Network can generate accurate future trajectories of hundreds of time steps on a wide range of physical systems. Our model can also be applied to scenes with invisible objects, inferring their future states from their effects on the visible objects, and can implicitly infer the unknown mass of objects. Our results demonstrate that the perceptual module and the object-based dynamics predictor module can induce factored latent representations that support accurate dynamical predictions. This work opens new opportunities for model-based decision-making and planning from raw sensory observations in complex physical environments. 
Visual Knowledge Memory Network (VKMN) 
Visual question answering (VQA) requires joint comprehension of images and natural language questions, where many questions can’t be directly or clearly answered from visual content but require reasoning from structured human knowledge with confirmation from visual content. This paper proposes visual knowledge memory network (VKMN) to address this issue, which seamlessly incorporates structured human knowledge and deep visual features into memory networks in an end-to-end learning framework. Comparing to existing methods for leveraging external knowledge for supporting VQA, this paper stresses more on two missing mechanisms. First is the mechanism for integrating visual contents with knowledge facts. VKMN handles this issue by embedding knowledge triples (subject, relation, target) and deep visual features jointly into the visual knowledge features. Second is the mechanism for handling multiple knowledge facts expanding from question and answer pairs. VKMN stores joint embedding using key-value pair structure in the memory networks so that it is easy to handle multiple facts. Experiments show that the proposed method achieves promising results on both VQA v1.0 and v2.0 benchmarks, while outperforms state-of-the-art methods on the knowledge-reasoning related questions. 
Visual Predictive Checks (VPC) 
The visual predictive check (VPC) is a model diagnostic that can be used to: (i) allow comparison between alternative models, (ii) suggest model improvements, and (iii) support appropriateness of a model. The VPC is constructed from stochastic simulations from the model therefore all model components contribute and it can help in diagnosing both structural and stochastic contributions. As the VPC is being increasingly used as a key diagnostic to illustrate model appropriateness, it is important that its methodology, strengths and weaknesses be discussed by the pharmacometric community. In a typical VPC, the model is used to repeatedly (usually n≥1000) simulate observations according to the original design of the study. Based on these simulations, percentiles of the simulated data are plotted versus an independent variable, usually time since start of treatment. It is then desirable that the same percentiles are calculated and plotted for the observed data to aid comparison of predictions with observations. With suitable data a plot including the observations may be helpful by indicating the data density at different times and thus giving some indirect feel for the uncertainty in the percentiles. Apparently poor model performance where there is very sparse data may not as strongly indicate model inadequacy as poor performance with dense data. A drawback of adding all observations to the VPC, in particular for large studies, is that it may cloud the picture without making data density obvious. A possible intermediate route is to plot a random subsample of all observations. asVPC 
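The simulation-and-percentile machinery behind a VPC can be sketched in a few lines. The "model" below is a toy mono-exponential decay with lognormal variability, chosen purely for illustration; a real VPC would simulate from the fitted pharmacometric model under the original study design:

```python
import numpy as np

rng = np.random.default_rng(7)
times = np.linspace(0, 24, 13)          # observation times (e.g. hours post-dose)
n_sim = 1000                            # typically n >= 1000 replicate studies

# n_sim simulated studies: concentration = dose * exp(-k * t) * residual noise,
# with the elimination rate k varying between simulated subjects
k = rng.lognormal(mean=np.log(0.15), sigma=0.3, size=(n_sim, 1))
sims = 100 * np.exp(-k * times) * rng.lognormal(0, 0.1, size=(n_sim, len(times)))

# Percentile bands of the simulated data at each time point; these are what
# gets plotted against the matching percentiles of the observed data
p5, p50, p95 = np.percentile(sims, [5, 50, 95], axis=0)
```

Plotting `p5`, `p50` and `p95` versus `times`, with the same percentiles of the observed data overlaid, gives the typical VPC panel; observed percentiles falling outside the simulated bands flag model misspecification.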
Visual Question Answering (VQA) 
Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. An Analysis of Visual Question Answering Algorithms 
Visual Question Answering With Explanation (VQA-E) 
Most existing works in visual question answering (VQA) are dedicated to improving the accuracy of predicted answers, while disregarding the explanations. We argue that the explanation for an answer is of the same or even more importance compared with the answer itself, since it makes the question and answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), where the computational models are required to generate an explanation with the predicted answer. We first construct a new dataset, and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We have conducted a user study to validate the quality of explanations synthesized by our method. We quantitatively show that the additional supervision from explanations can not only produce insightful textual sentences to justify the answers, but also improve the performance of answer prediction. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset. 
VIsual Tracking via Adversarial Learning (VITAL) 
The tracking-by-detection framework consists of two stages, i.e., drawing samples around the target object in the first stage and classifying each sample as the target object or as background in the second stage. The performance of existing trackers using deep classification networks is limited by two aspects. First, the positive samples in each frame are highly spatially overlapped, and they fail to capture rich appearance variations. Second, there exists extreme class imbalance between positive and negative samples. This paper presents the VITAL algorithm to address these two problems via adversarial learning. To augment positive samples, we use a generative network to randomly generate masks, which are applied to adaptively drop out input features to capture a variety of appearance changes. With the use of adversarial learning, our network identifies the mask that maintains the most robust features of the target objects over a long temporal span. In addition, to handle the issue of class imbalance, we propose a high-order cost-sensitive loss to decrease the effect of easy negative samples to facilitate training the classification network. Extensive experiments on benchmark datasets demonstrate that the proposed tracker performs favorably against state-of-the-art approaches. 
Visualization of Analysis of Variance (VISOVA) 
VISOVA (VISualization Of VAriance) is a novel method for exploratory data analysis. It is an extension of trellis graphics, developing their grid concept with parallel coordinates to permit visualization of many dimensions at once. This package includes functions that allow users to perform VISOVA analysis and compare different column/variable ordering methods, making high-dimensional structures easier to perceive even when the data are complicated. visova 
VizPacker  VizPacker is a handy tool that helps visualization developers easily design, build and preview a chart based on the CVOM SDK, and auto-create a package for a CVOM chart extension, which includes a set of fundamental code and files based on the visualization module and data schema. VizPacker is meant to help users quickly get hands-on with the implementation workflow and avoid struggling with unnecessary issues. 
VLocNet++  Visual localization is one of the fundamental enablers of robot autonomy, which has been mostly tackled using local feature-based pipelines that efficiently encode knowledge about the environment and the underlying geometrical constraints. Although deep learning based approaches have shown considerable robustness in the context of significant perceptual changes, repeating structures and textureless regions, their performance has been subpar in comparison to local feature-based pipelines. In this paper, we propose the novel VLocNet++ architecture that attempts to overcome this limitation by simultaneously embedding geometric and semantic knowledge of the world into the pose regression network. We adopt a multi-task learning approach that exploits the inter-task relationship between learning semantics, regressing 6DoF global pose and odometry, for the mutual benefit of each of these tasks. VLocNet++ incorporates the Geometric Consistency Loss function that utilizes the predicted motion from the odometry stream to enforce global consistency during pose regression. Furthermore, we propose a self-supervised warping technique that uses the relative motion to warp intermediate network representations in the segmentation stream for learning consistent semantics. In addition, we propose a novel adaptive weighted fusion layer to leverage inter- and intra-task dependencies based on region activations. Finally, we introduce a first-of-a-kind urban outdoor localization dataset with pixel-level semantic labels and multiple loops for training deep networks. Extensive experiments on the challenging indoor Microsoft 7-Scenes benchmark and our outdoor DeepLoc dataset demonstrate that our approach exceeds the state-of-the-art, outperforming local feature-based methods while exhibiting substantial robustness in challenging scenarios. 
Vocabulary-Informed Extreme Value Learning (ViEVL) 
The novel unseen classes can be formulated as the extreme values of known classes. This inspired the recent works on open-set recognition \cite{Scheirer_2013_TPAMI,Scheirer_2014_TPAMIb,EVM}, which however have no way of naming the novel unseen classes. To solve this problem, we propose the Extreme Value Learning (EVL) formulation to learn the mapping from visual features to semantic space. To model the margin and coverage distributions of each class, Vocabulary-informed Learning (ViL) is adopted by using a vast open vocabulary in the semantic space. Essentially, by incorporating EVL and ViL, we for the first time propose a novel semantic embedding paradigm — Vocabulary-informed Extreme Value Learning (ViEVL), which embeds the visual features into semantic space in a probabilistic way. The learned embedding can be directly used to solve supervised learning, zero-shot and open-set recognition simultaneously. Experiments on two benchmark datasets demonstrate the effectiveness of the proposed frameworks. 
Volatility  Volatility is the annualized standard deviation of returns – it is often expressed in percent. A volatility of 20 means that there is about a one-third probability that an asset’s price a year from now will have fallen or risen by more than 20% from its present value. 
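The annualization and the one-third figure can be sketched in a few lines; the simulated daily returns and the 252-trading-day convention are illustrative assumptions. Under a normal model, the probability of a move beyond one standard deviation (here ±20%) is about 1 − 0.68 ≈ 32%, i.e. roughly one third.

```python
import math
import random
import statistics

def annualized_volatility(period_returns, periods_per_year=252):
    """Annualize the standard deviation of per-period returns by
    scaling with the square root of the number of periods per year."""
    return statistics.stdev(period_returns) * math.sqrt(periods_per_year)

# Illustrative data: simulated daily returns with ~20% annual volatility.
rng = random.Random(0)
daily_sigma = 0.20 / math.sqrt(252)
returns = [rng.gauss(0, daily_sigma) for _ in range(252)]

vol = annualized_volatility(returns)
print(f"annualized volatility ~ {vol:.1%}")
```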
Voronoi Cell Topology Visualization and Analysis Toolkit (VoroTop) 
This paper introduces a new open-source software program called VoroTop, which uses Voronoi topology to analyze local structure in atomic systems. Strengths of this approach include its abilities to analyze high-temperature systems and to characterize complex structure such as grain boundaries. This approach enables the automated analysis of systems and mechanisms previously not possible. 
Voronoi Diagram  In mathematics, a Voronoi diagram is a way of dividing space into a number of regions. A set of points (called seeds, sites, or generators) is specified beforehand and for each seed there will be a corresponding region consisting of all points closer to that seed than to any other. The regions are called Voronoi cells. 
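The nearest-seed rule above can be demonstrated with a brute-force sketch that assigns every point on an integer grid to its closest seed (real libraries compute the cell boundaries geometrically instead; the function name and grid are illustrative assumptions):

```python
import math

def voronoi_regions(seeds, width, height):
    """Assign every integer grid point to the Voronoi cell of its
    nearest seed. Brute force: O(points x seeds)."""
    regions = {i: [] for i in range(len(seeds))}
    for x in range(width):
        for y in range(height):
            # Index of the seed closest to (x, y) in Euclidean distance.
            nearest = min(
                range(len(seeds)),
                key=lambda i: math.dist((x, y), seeds[i]),
            )
            regions[nearest].append((x, y))
    return regions

seeds = [(2, 2), (7, 7)]
regions = voronoi_regions(seeds, width=10, height=10)
# Each grid point belongs to exactly one cell.
print({i: len(pts) for i, pts in regions.items()})
```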
Voronoi DiagramBased Evolutionary Algorithm (VorEAl) 
This paper presents the Voronoi diagram-based evolutionary algorithm (VorEAl). VorEAl partitions input space into abnormal/normal subsets using Voronoi diagrams. Diagrams are evolved using a multi-objective bio-inspired approach in order to conjointly optimize classification metrics while also being able to represent areas of the data space that are not present in the training dataset. As part of the paper, VorEAl is experimentally validated and contrasted with similar approaches. 
Vowpal Wabbit  Vowpal Wabbit (aka VW) is an open-source, fast, out-of-core learning system library and program developed originally at Yahoo! Research and currently at Microsoft Research. It was started and is led by John Langford. Vowpal Wabbit is notable as an efficient, scalable implementation of online machine learning, with support for a number of machine learning reductions, importance weighting, and a selection of different loss functions and optimization algorithms. 
VoxML  We present the specification for a modeling language, VoxML, which encodes semantic knowledge of real-world objects represented as three-dimensional models, and of events and attributes related to and enacted over these objects. VoxML is intended to overcome the limitations of existing 3D visual markup languages by allowing for the encoding of a broad range of semantic knowledge that can be exploited by a variety of systems and platforms, leading to multimodal simulations of real-world scenarios using conceptual objects that represent their semantic values. 