QA4IE  Information Extraction (IE) refers to automatically extracting structured relation tuples from unstructured texts. Common IE solutions, including Relation Extraction (RE) and open IE systems, can hardly handle cross-sentence tuples, and are severely restricted by limited relation types as well as informal relation specifications (e.g., free-text based relation tuples). In order to overcome these weaknesses, we propose a novel IE framework named QA4IE, which leverages flexible question answering (QA) approaches to produce high-quality relation triples across sentences. Based on the framework, we develop a large IE benchmark with high-quality human evaluation. This benchmark contains 293K documents, 2M golden relation triples, and 636 relation types. We compare our system with some IE baselines on our benchmark and the results show that our system achieves great improvements. 
QCP  Research on multi-robot systems has demonstrated promising results in manifold applications and domains. Still, efficiently learning effective robot behaviors is very difficult, due to unstructured scenarios, high uncertainties, and large state dimensionality (e.g. hyper-redundant robots and groups of robots). To alleviate this problem, we present QCP, a cooperative model-based reinforcement learning algorithm, which exploits action values to both (1) guide the exploration of the state space and (2) generate effective policies. Specifically, we exploit Q-learning to attack the curse of dimensionality in the iterations of a Monte-Carlo Tree Search. We implement and evaluate QCP on different stochastic cooperative (general-sum) games: (1) a simple cooperative navigation problem among 3 robots, (2) a cooperation scenario between a pair of KUKA YouBots performing handovers, and (3) a coordination task between two mobile robots entering a door. The obtained results show the effectiveness of QCP in the chosen applications, where action values drive the exploration and reduce the computational demand of the planning process while achieving good performance. 
QGIS  QGIS (previously known as ‘Quantum GIS’) is a cross-platform free and open-source desktop geographic information system (GIS) application that provides data viewing, editing, and analysis capabilities. Similar to other GIS software, QGIS allows users to create maps with many layers using different map projections. Maps can be assembled in different formats and for different uses. QGIS allows maps to be composed of raster or vector layers. As is typical for this kind of software, vector data is stored as point, line, or polygon features. Different kinds of raster images are supported and the software can perform georeferencing of images. QGIS integrates with other open-source GIS packages, including PostGIS, GRASS, and MapServer, to give users extensive functionality. Plugins, written in Python or C++, extend the capabilities of QGIS. There are plugins to geocode using the Google Geocoding API, perform geoprocessing (fTools) similar to the standard tools found in ArcGIS, and interface with PostgreSQL/PostGIS, SpatiaLite and MySQL databases. 
QGraph  Arising user-centric graph applications such as route planning and personalized social network analysis have initiated a shift of paradigms in modern graph processing systems towards multi-query analysis, i.e., processing multiple graph queries in parallel on a shared graph. These applications generate a dynamic number of localized queries around query hotspots such as popular urban areas. However, existing graph processing systems are not yet tailored towards these properties: the employed methods for graph partitioning and synchronization management disregard query locality and dynamism, which leads to high query latency. To this end, we propose the system QGraph for multi-query graph analysis that considers query locality on three levels. (i) The query-aware graph partitioning algorithm Q-cut maximizes query locality to reduce communication overhead. (ii) The method for synchronization management, called hybrid barrier synchronization, allows for full exploitation of local queries spanning only a subset of partitions. (iii) Both methods adapt at runtime to changing query workloads in order to maintain and exploit locality. Our experiments show that Q-cut reduces average query latency by up to 57 percent compared to static query-agnostic partitioning algorithms. 
QICD (QICD) 
QICD is an extremely fast iterative coordinate descent algorithm for high-dimensional nonconvex penalized quantile regression. The algorithm combines the coordinate descent algorithm in the inner iteration with a majorization-minimization step in the outer iteration. Each inner univariate minimization problem only requires computing a one-dimensional weighted median, which ensures fast computation. Tuning parameter selection is based on two different methods: cross-validation and BIC for the quantile regression model. Details are described in Peng, B. and Wang, L. (2015), linked via the URL below, with <DOI:10.1080/10618600.2014.913516>. 
QLearning  Q-learning is a model-free reinforcement learning technique. Specifically, Q-learning can be used to find an optimal action-selection policy for any given (finite) Markov decision process (MDP). It works by learning an action-value function that ultimately gives the expected utility of taking a given action in a given state and following the optimal policy thereafter. A policy is a rule that the agent follows in selecting actions, given the state it is in. When such an action-value function is learned, the optimal policy can be constructed by simply selecting the action with the highest value in each state. One of the strengths of Q-learning is that it is able to compare the expected utility of the available actions without requiring a model of the environment. Additionally, Q-learning can handle problems with stochastic transitions and rewards without requiring any adaptations. It has been proven that for any finite MDP, Q-learning eventually finds an optimal policy, in the sense that the expected value of the total reward over all successive steps, starting from the current state, is the maximum achievable. DeepQLearning, DynTxRegime 
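The update rule described above can be sketched in a few lines. A minimal tabular example on a hypothetical two-state chain MDP (the transition table `P`, reward table `R`, and all hyperparameters are illustrative assumptions, not taken from any particular source):

```python
import numpy as np

def q_learning(P, R, n_states, n_actions, episodes=500, alpha=0.1,
               gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a small deterministic MDP.

    P[s][a] -> next state, R[s][a] -> reward (hypothetical toy interface).
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = 0
        for _ in range(50):  # cap episode length
            # epsilon-greedy action selection
            a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
            s2, r = P[s][a], R[s][a]
            # core update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q

# Two-state chain: action 1 moves right; in state 1, action 1 yields reward 1.
P = [[0, 1], [1, 1]]
R = [[0.0, 0.0], [0.0, 1.0]]
Q = q_learning(P, R, n_states=2, n_actions=2)
policy = Q.argmax(axis=1)   # greedy policy extracted from the learned Q-table
```

With discount 0.9, the learned values approach Q(1,1) ≈ 10 and Q(0,1) ≈ 9, so the greedy policy takes action 1 in both states.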
QLearning SineCosine Algorithm (QLSCA) 
The sine-cosine algorithm (SCA) is a new population-based metaheuristic algorithm. In addition to exploiting sine and cosine functions to perform local and global searches (hence the name sine-cosine), the SCA introduces several random and adaptive parameters to facilitate the search process. Although it shows promising results, the search process of the SCA is vulnerable to local minima/maxima due to the adoption of a fixed switch probability and the bounded magnitude of the sine and cosine functions (from -1 to 1). In this paper, we propose a new hybrid Q-learning sine-cosine based strategy, called the Q-learning sine-cosine algorithm (QLSCA). Within the QLSCA, we eliminate the switching probability. Instead, we rely on the Q-learning algorithm (based on the penalty and reward mechanism) to dynamically identify the best operation during runtime. Additionally, we integrate two new operations (Lévy flight motion and crossover) into the QLSCA to facilitate jumping out of local minima/maxima and enhance the solution diversity. To assess its performance, we adopt the QLSCA for the combinatorial test suite minimization problem. Experimental results reveal that the QLSCA is statistically superior with regard to test suite size reduction compared to recent state-of-the-art strategies, including the original SCA, the particle swarm test generator (PSTG), adaptive particle swarm optimization (APSO) and the cuckoo search strategy (CS) at the 95% confidence level. However, concerning the comparison with discrete particle swarm optimization (DPSO), there is no significant difference in performance at the 95% confidence level. On a positive note, the QLSCA statistically outperforms the DPSO in certain configurations at the 90% confidence level. 
QMiner  QMiner is a data analytics platform for processing large-scale real-time streams containing structured and unstructured data. 
QMIX  In many real-world settings, a team of agents must coordinate their behaviour while acting in a decentralised way. At the same time, it is often possible to train the agents in a centralised fashion in a simulated or laboratory setting, where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a network that estimates joint action-values as a complex nonlinear combination of per-agent values that condition only on local observations. We structurally enforce that the joint action-value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning, and guarantees consistency between the centralised and decentralised policies. We evaluate QMIX on a challenging set of StarCraft II micromanagement tasks, and show that QMIX significantly outperforms existing value-based multi-agent reinforcement learning methods. 
QR Decomposition  In linear algebra, a QR decomposition (also called a QR factorization) of a matrix is a decomposition of a matrix A into a product A = QR of an orthogonal matrix Q and an upper triangular matrix R. QR decomposition is often used to solve the linear least squares problem, and is the basis for a particular eigenvalue algorithm, the QR algorithm. If A has n linearly independent columns, then the first n columns of Q form an orthonormal basis for the column space of A. More generally, the first k columns of Q form an orthonormal basis for the span of the first k columns of A for any 1 ≤ k ≤ n. The fact that any column k of A only depends on the first k columns of Q is responsible for the triangular form of R. 
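As a concrete illustration of the least-squares use mentioned above, the decomposition reduces the normal equations to a single triangular solve (the data points below are made up for the example):

```python
import numpy as np

# Fit a line to the points (1,1), (2,2), (3,2) by least squares via A = QR:
# min ||Ax - b||  ->  Rx = Q^T b, solved by back-substitution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])   # columns: intercept, slope
b = np.array([1.0, 2.0, 2.0])

Q, R = np.linalg.qr(A)       # reduced QR: Q is 3x2 orthonormal, R is 2x2 upper triangular
x = np.linalg.solve(R, Q.T @ b)

# Q has orthonormal columns, and R is upper triangular as the text describes
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(R, np.triu(R))
```

The solution `x` is the familiar least-squares fit (intercept 2/3, slope 1/2), obtained without forming the ill-conditioned product AᵀA.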
QRkit  Embedded computer vision applications increasingly require the speed and power benefits of single-precision (32-bit) floating point. However, applications which make use of Levenberg-like optimization can lose significant accuracy when reducing to single precision, sometimes unrecoverably so. This accuracy can be regained using solvers based on QR rather than Cholesky decomposition, but the absence of sparse QR solvers for common sparsity patterns found in computer vision means that many applications cannot benefit. We introduce an open-source suite of solvers for Eigen, which efficiently compute the QR decomposition for matrices with some common sparsity patterns (block diagonal, horizontal and vertical concatenation, and banded). For problems with very particular sparsity structures, these elements can be composed together in ‘kit’ form, hence the name QRkit. We apply our methods to several computer vision problems, showing competitive performance and suitability especially in single-precision arithmetic. 
qSpace Novelty Detection  In machine learning, novelty detection is the task of identifying novel unseen data. During training, only samples from the normal class are available. Test samples are classified as normal or abnormal by assignment of a novelty score. Here we propose novelty detection methods based on training variational autoencoders (VAEs) on normal data. Since abnormal samples are not used during training, we define novelty metrics based on the (partially complementary) assumptions that the VAE is less capable of reconstructing abnormal samples well; that abnormal samples more strongly violate the VAE regularizer; and that abnormal samples differ from normal samples not only in input-feature space, but also in the VAE latent space and VAE output. These approaches, combined with various possibilities of using (e.g. sampling) the probabilistic VAE to obtain scalar novelty scores, yield a large family of methods. We apply these methods to magnetic resonance imaging, namely to the detection of diffusion-space (q-space) abnormalities in diffusion MRI scans of multiple sclerosis patients, i.e. to detect multiple sclerosis lesions without using any lesion labels for training. Many of our methods outperform previously proposed q-space novelty detection methods. 
Quadratic Assignment Problem (QAP) 
The quadratic assignment problem (QAP) is one of the fundamental combinatorial optimization problems in the branch of optimization or operations research in mathematics, from the category of facility location problems. The problem models the following real-life situation: there is a set of n facilities and a set of n locations. For each pair of locations, a distance is specified, and for each pair of facilities a weight or flow is specified (e.g., the amount of supplies transported between the two facilities). The problem is to assign all facilities to different locations with the goal of minimizing the sum of the distances multiplied by the corresponding flows. Intuitively, the cost function encourages facilities with high flows between each other to be placed close together. The problem statement resembles that of the assignment problem, except that the cost function is expressed in terms of quadratic inequalities, hence the name. qap 
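For tiny instances, the cost function defined above can be checked directly by brute force over all assignments; a sketch (the distance and flow matrices are made-up toy data):

```python
from itertools import permutations

import numpy as np

def qap_bruteforce(D, F):
    """Exact QAP by enumerating all n! assignments (feasible only for tiny n).

    D[i][j]: distance between locations i and j; F[a][b]: flow between
    facilities a and b; perm[a] is the location assigned to facility a.
    """
    n = len(D)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        # sum of flow * distance over all ordered facility pairs
        cost = sum(F[a][b] * D[perm[a]][perm[b]]
                   for a in range(n) for b in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_cost, best_perm

# Three locations on a line (0, 1, 2); facilities 0 and 1 have heavy mutual
# flow, so any optimal assignment places them at adjacent locations.
D = np.abs(np.subtract.outer([0, 1, 2], [0, 1, 2]))
F = np.array([[0, 10, 1],
              [10, 0, 1],
              [1, 1, 0]])
cost, perm = qap_bruteforce(D, F)   # optimal cost 26, e.g. the identity assignment
```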
Quadratic Discriminant Analysis (QDA) 
Quadratic discriminant analysis (QDA) is closely related to linear discriminant analysis (LDA), where it is assumed that the measurements from each class are normally distributed. Unlike LDA, however, in QDA there is no assumption that the covariance of each of the classes is identical. When the normality assumption is true, the best possible test for the hypothesis that a given measurement is from a given class is the likelihood ratio test. QUDA: A Direct Approach for Sparse Quadratic Discriminant Analysis, SQDA 
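A minimal scikit-learn sketch of the setting described above, with synthetic Gaussian classes whose covariances deliberately differ (the data are simulated, so the example only illustrates the model, not a real benchmark):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Two Gaussian classes with *different* covariances: exactly the case where
# QDA's per-class covariance estimate pays off over LDA's shared one.
rng = np.random.default_rng(0)
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.0], [0.0, 1.0]], size=200)
X1 = rng.multivariate_normal([3, 3], [[2.0, 1.5], [1.5, 2.0]], size=200)
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

qda = QuadraticDiscriminantAnalysis().fit(X, y)
acc = qda.score(X, y)   # training accuracy on well-separated classes
```

Because the class covariances differ, the resulting decision boundary is a quadratic curve rather than the straight line LDA would produce.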
Quadratic Exponential Model  cquad 
Quadratic Programming (QP) 
Quadratic programming (QP) is a special type of mathematical optimization problem. It is the problem of optimizing (minimizing or maximizing) a quadratic function of several variables subject to linear constraints on these variables. 
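For the equality-constrained case, the optimality (KKT) conditions of a QP are themselves linear, so a toy instance can be solved with one linear system (the matrices below are illustrative):

```python
import numpy as np

# Minimize 0.5 x'Qx + c'x subject to Ax = b. For an equality-constrained QP
# the KKT conditions are linear, so the optimum solves one linear system:
#   [Q  A'] [x]   [-c]
#   [A  0 ] [l] = [ b]
Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])       # objective: x1^2 + x2^2  (= 0.5 * x'Qx)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])       # constraint: x1 + x2 = 1
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
KKT = np.block([[Q, A.T],
                [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(KKT, rhs)
x = sol[:n]                      # [0.5, 0.5]: the closest point to the origin on the line
```

Inequality-constrained QPs require iterative methods (active set, interior point), but this system is the building block they solve at each step.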
Qualitative Comparative Analysis (QCA) 
Qualitative Comparative Analysis (QCA) is a technique originally developed by Charles Ragin in 1987. QCA currently has more adherents in Europe than in the United States. It is used for analyzing data sets by listing and counting all the combinations of variables observed in the data set, and then applying the rules of logical inference to determine which descriptive inferences or implications the data supports. QCAtools, QCAfalsePositive, iaQCA 
Qualitative Data Science  The often celebrated artificial intelligence of machine learning is impressive but does not come close to human intelligence and ability to understand the world. Many data scientists are working on automated text analysis to solve this issue (the topicmodels package is an example of such an attempt). These efforts are impressive but even the smartest text analysis algorithm is not able to derive meaning from text. To fully embrace all aspects of data science we need to be able to methodically undertake qualitative data analysis. RDQA 
Quantification  In mathematics and empirical science, quantification is the act of counting and measuring that maps human sense observations and experiences into members of some set of numbers. Quantification in this sense is fundamental to the scientific method. 
Quantification  Quantification is the machine learning task of estimating testdata class proportions that are not necessarily similar to those in training. Apart from its intrinsic value as an aggregate statistic, quantification output can also be used to optimize classifier probabilities, thereby increasing classification accuracy. We unify major quantification approaches under a constrained multivariate regression framework, and use mathematical programming to estimate class proportions for different loss functions. With this modeling approach, we extend existing binaryonly quantification approaches to multiclass settings as well. We empirically verify our unified framework by experimenting with several multiclass datasets including the Stanford Sentiment Treebank and CIFAR10. 
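A classical baseline in this task family (not the regression framework proposed above) is Adjusted Classify & Count, which corrects the raw predicted-positive rate using the classifier's estimated true- and false-positive rates; a sketch:

```python
def adjusted_classify_and_count(pred_pos_rate, tpr, fpr):
    """Adjusted Classify & Count: invert p_obs = tpr*p + fpr*(1-p) to
    recover the true positive prevalence p. tpr and fpr would normally be
    estimated on held-out data."""
    p = (pred_pos_rate - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, p))   # clip to a valid proportion

# A classifier with tpr=0.8, fpr=0.2 flags 50% of the test set as positive;
# the bias-corrected prevalence estimate is (0.5 - 0.2) / (0.8 - 0.2) = 0.5.
p_hat = adjusted_classify_and_count(0.5, tpr=0.8, fpr=0.2)
```

The correction matters precisely when test-data class proportions drift from those seen in training, which is the scenario quantification targets.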
Quantile Copula Causal Discovery (QCCD) 
Telling cause from effect using observational data is a challenging problem, especially in the bivariate case. Contemporary methods often assume independence between the cause and the generating mechanism of the effect given the cause. From this postulate, they derive asymmetries to uncover causal relationships. In this work, we propose such an approach, based on the link between Kolmogorov complexity and quantile scoring. We use a nonparametric conditional quantile estimator based on copulas to implement our procedure, thus avoiding restrictive assumptions about the joint distribution between cause and effect. In an extensive study on real and synthetic data, we show that quantile copula causal discovery (QCCD) compares favorably to state-of-the-art methods, while at the same time being computationally efficient and scalable. 
Quantile Fourier Neural Network  A novel quantile Fourier neural network is presented for nonparametric probabilistic forecasting. Predictions are provided in the form of composite quantiles using time as the only input to the model. This is effectively a form of extrapolation-based quantile regression applied to forecasting. Empirical results show that for time series data with clear seasonality and trend, the model provides high-quality probabilistic predictions. This work introduces a new class of forecasting that uses only time as the input, versus using past data as in an autoregressive model. Extrapolation-based regression has not been studied before for probabilistic forecasting. 
Quantile Function (QF) 
In probability and statistics, the quantile function specifies, for a given probability p, the value at or below which the random variable falls with probability p. It is also called the percent point function or inverse cumulative distribution function. The quantile function is one way of prescribing a probability distribution, and it is an alternative to the probability density function (pdf) or probability mass function, the cumulative distribution function (cdf) and the characteristic function. The quantile function, Q, of a probability distribution is the inverse of its cumulative distribution function F. The derivative of the quantile function, namely the quantile density function, is yet another way of prescribing a probability distribution. It is the reciprocal of the pdf composed with the quantile function. 
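In SciPy the quantile function is exposed as `ppf` (percent point function); a quick check that it inverts the CDF, as the definition above requires:

```python
from scipy.stats import norm

# The quantile function Q = F^{-1} is SciPy's ppf ("percent point function").
q = norm.ppf(0.975)         # ~1.96, the familiar two-sided 95% z-value
roundtrip = norm.cdf(q)     # cdf(ppf(p)) recovers p

assert abs(roundtrip - 0.975) < 1e-12
```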
Quantile Markov Decision Processes (QMDP) 
In this paper, we consider the problem of optimizing the quantiles of the cumulative rewards of Markov Decision Processes (MDPs), which we refer to as Quantile Markov Decision Processes (QMDP). Traditionally, the goal of a Markov Decision Process (MDP) is to maximize the expected cumulative reward over a defined (possibly infinite) horizon. In many applications, however, a decision maker may be interested in optimizing a specific quantile of the cumulative reward instead of its expectation. We provide analytical results characterizing the optimal QMDP solution and present algorithms for solving the QMDP. We illustrate the model with two experiments: a grid game and an HIV optimal treatment experiment. 
Quantile Regression (QR) 
Quantile regression is a type of regression analysis used in statistics and econometrics. Whereas the method of least squares results in estimates that approximate the conditional mean of the response variable given certain values of the predictor variables, quantile regression aims at estimating either the conditional median or other quantiles of the response variable. GLDreg 
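The estimator can be characterized by the pinball (check) loss: the constant that minimizes the mean pinball loss at level τ is exactly the empirical τ-quantile. A small NumPy demonstration (the sample data are made up):

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Pinball (check) loss, the objective minimized by quantile regression.
    Underpredictions are weighted tau, overpredictions (1 - tau)."""
    e = y - q
    return np.mean(np.where(e >= 0, tau * e, (tau - 1) * e))

# The constant minimizing pinball loss at tau = 0.5 is the median:
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])     # heavy-tailed sample
grid = np.linspace(0, 100, 10001)
losses = [pinball_loss(y, q, tau=0.5) for q in grid]
q_hat = grid[int(np.argmin(losses))]          # ~3.0, the median; the mean is 22.0
```

This is why quantile regression is robust to outliers in the response: the single extreme value pulls the mean to 22 but leaves the median fit at 3.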
Quantile Reinforcement Learning (QRL) 
In reinforcement learning, the standard criterion for evaluating policies in a state is the expectation of the (discounted) sum of rewards. However, this criterion may not always be suitable, so we consider an alternative criterion based on the notion of quantiles. For episodic reinforcement learning problems, we propose an algorithm based on stochastic approximation with two timescales. We evaluate our proposition on a simple model of the TV show ‘Who Wants to Be a Millionaire’. 
Quantile Treatment Effect (QTE) 
qte 
Quantile Variables  In the framework of Symbolic Data Analysis (SDA), distributional variables are a particular case of multi-valued variables: each unit is represented by a set of distributions (e.g. histograms, density functions or quantile functions), one for each variable. Factor analysis (FA) methods are primary exploratory tools for dimension reduction and visualization. In the present work, we use the Multiple Factor Analysis (MFA) approach for the analysis of data described by distributional variables. Each distributional variable induces a set of new numeric variables related to the quantiles of each distribution. We call these new variables quantile variables, and the set of quantile variables related to a distributional one is a block in the MFA approach. Thus, MFA is performed on juxtaposed tables of quantile variables. We show that the criterion decomposed in the analysis is an approximation of the variability based on a suitable metric between distributions: the squared $L_2$ Wasserstein distance. Applications on simulated and real distributional data corroborate the method. The interpretation of the results on the factorial planes is performed with new interpretative tools related to the several characteristics of the distributions (location, scale and shape). 
Quantiles Return (QR) 

Quantitative Analysis (QA) 

Quantitative Analyst (Quant) 
The quant, or quantitative analyst, is a financial professional who makes use of a mathematical approach to evaluating the current conditions in a trading market. As part of this evaluation, the quant will also employ the same general methods to individual investment opportunities within the market. The general concept is to make use of a numerical analysis in order to help an investor identify the most profitable purchases and sales to make within the market. 
Quantitative CBA  Quantitative CBA (QCBA) is a postprocessing algorithm for the association rule classification algorithm CBA (Liu et al., 1998). QCBA uses the original, undiscretized numerical attributes to optimize the discovered association rules, refining the boundaries of literals in the antecedents of the rules produced by CBA. Some rules, as well as literals within rules, can consequently be removed, which makes the resulting classifier smaller. One-rule classification and crisp rules make CBA classification models possibly the most comprehensible among all association rule classification algorithms. These desirable properties are retained by QCBA. The postprocessing is conceptually fast, because it is performed on a relatively small number of rules that passed data coverage pruning in CBA. A benchmark of our QCBA approach on 22 UCI datasets shows an average 53% decrease in the total size of the model, as measured by the total number of conditions in all rules. Model accuracy remains on the same level as for CBA. 
Quantitative Discourse Analysis  Quantitative Discourse Analysis is the quantitative study of patterns in language use. qdap 
Quantized Compressive KMeans  The recent framework of compressive statistical learning aims at designing tractable learning algorithms that use only a heavily compressed representation, or sketch, of massive datasets. Compressive K-Means (CKM) is such a method: it estimates the centroids of data clusters from pooled, nonlinear, random signatures of the learning examples. While this approach significantly reduces computational time on very large datasets, its digital implementation wastes acquisition resources because the learning examples are compressed only after the sensing stage. The present work generalizes the sketching procedure initially defined in Compressive K-Means to a large class of periodic nonlinearities, including hardware-friendly implementations that compressively acquire entire datasets. This idea is exemplified in a Quantized Compressive K-Means procedure, a variant of CKM that leverages 1-bit universal quantization (i.e. retaining the least significant bit of a standard uniform quantizer) as the periodic sketch nonlinearity. Trading for this resource-efficient signature (standard in most acquisition schemes) has almost no impact on the clustering performance, as illustrated by numerical experiments. 
Quantized MANN (QMANN) 
Memory-augmented neural networks (MANNs) refer to a class of neural network models equipped with external memory (such as neural Turing machines and memory networks). These neural networks outperform conventional recurrent neural networks (RNNs) in terms of learning long-term dependencies, allowing them to solve intriguing AI tasks that would otherwise be hard to address. This paper concerns the problem of quantizing MANNs. Quantization is known to be effective when we deploy deep models on embedded systems with limited resources. Furthermore, quantization can substantially reduce the energy consumption of the inference procedure. These benefits justify recent developments of quantized multilayer perceptrons, convolutional networks, and RNNs. However, no prior work has reported the successful quantization of MANNs. The in-depth analysis presented here reveals various challenges that do not appear in the quantization of the other networks. Without addressing them properly, quantized MANNs would normally suffer from excessive quantization error, which leads to degraded performance. In this paper, we identify memory addressing (specifically, content-based addressing) as the main reason for the performance degradation and propose a robust quantization method for MANNs to address the challenge. In our experiments, we achieved a computation-energy gain of 22x with 8-bit fixed-point and binary quantization compared to the floating-point implementation. Measured on the bAbI dataset, the resulting model, named the quantized MANN (QMANN), improved the error rate by 46% and 30% with 8-bit fixed-point and binary quantization, respectively, compared to the MANN quantized using conventional techniques. ➚ “Memory Augmented Neural Network” 
Quantized Neural Network (QNN) 
We introduce a method to train Quantized Neural Networks (QNNs), neural networks with extremely low-precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bitwise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6 bits as well, which enables gradient computation using only bitwise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy to their 32-bit counterparts using only 4 bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online. 
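The core weight-quantization idea can be illustrated in isolation. Below is a deterministic sign binarization with a per-tensor scale, in the spirit of binary-weight networks; it is a simplification for illustration only, not the paper's full training procedure (which also quantizes activations and gradients and trains with the quantized values in the loop):

```python
import numpy as np

def binarize(w):
    """Deterministic 1-bit quantization of a weight tensor: keep only the
    sign, with a shared per-tensor scale alpha = mean(|w|)."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

w = np.array([0.7, -0.3, 0.5, -0.9])
wb = binarize(w)   # [0.6, -0.6, 0.6, -0.6]: each weight costs 1 bit plus one shared scale
```

With weights restricted to {-alpha, +alpha}, a dot product reduces to sign flips and one final multiply, which is what enables the bitwise (XNOR/popcount) kernels described above.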
Quantum Low Entropy based Associative Reasoning (QLEAR learning) 
In this paper, we propose a classification method based on a learning paradigm we call Quantum Low Entropy based Associative Reasoning, or QLEAR learning. The approach is based on the idea that classification can be understood as supervised clustering, where quantum entropy, in the context of the quantum probabilistic model, is used as a ‘capturer’ (measure, or external index) of the ‘natural structure’ of the data. By using quantum entropy we make no assumption about linear separability of the data to be classified. The basic idea is to find close neighbors of a query sample and then use the relative change in quantum entropy as a measure of similarity of the newly arrived sample to the representatives of interest. In other words, the method is based on calculating the quantum entropy of a referent system and its relative change upon addition of the newly arrived sample. The referent system consists of vectors that represent individual classes and that are the most similar, in the Euclidean distance sense, to the vector being analyzed. Here, we analyze the classification problem in the context of measuring similarities to prototype examples of categories. While nearest neighbor classifiers are natural in this setting, they suffer from high variance (in the bias-variance decomposition) in the case of limited sampling. Alternatively, one could use machine learning techniques (like support vector machines), but they involve time-consuming optimization. Here we propose a hybrid of nearest neighbor and machine learning techniques which deals naturally with the multi-class setting, has reasonable computational complexity both in training and at run time, and yields excellent results in practice. 
Quantum Neural Network (QNN) 
Quantum neural networks (QNNs) are neural network models which are based on the principles of quantum mechanics. There are two different approaches to QNN research, one exploiting quantum information processing to improve existing neural network models (sometimes also vice versa), and the other one searching for potential quantum effects in the brain. 
Quantum Variational Autoencoder (QVAE) 
Variational autoencoders (VAEs) are powerful generative models with the salient ability to perform inference. Here, we introduce a quantum variational autoencoder (QVAE): a VAE whose latent generative process is implemented as a quantum Boltzmann machine (QBM). We show that our model can be trained end-to-end by maximizing a well-defined loss function: a ‘quantum’ lower bound to a variational approximation of the log-likelihood. We use quantum Monte Carlo (QMC) simulations to train and evaluate the performance of QVAEs. To achieve the best performance, we first create a VAE platform with discrete latent space generated by a restricted Boltzmann machine (RBM). Our model achieves state-of-the-art performance on the MNIST dataset when compared against similar approaches that only involve discrete variables in the generative process. We consider QVAEs with a smaller number of latent units to be able to perform QMC simulations, which are computationally expensive. We show that QVAEs can be trained effectively in regimes where quantum effects are relevant, despite training via the quantum bound. Our findings open the way to the use of quantum computers to train QVAEs to achieve competitive performance for generative models. Placing a QBM in the latent space of a VAE leverages the full potential of current and next-generation quantum computers as sampling devices. 
QuasiFully Supervised Learning (QFSL) 
Most existing Zero-Shot Learning (ZSL) methods have a strong bias problem, in which instances of unseen (target) classes tend to be categorized as one of the seen (source) classes. So they yield poor performance after being deployed in the generalized ZSL settings. In this paper, we propose a straightforward yet effective method named Quasi-Fully Supervised Learning (QFSL) to alleviate the bias problem. Our method follows the way of transductive learning, which assumes that both the labeled source images and unlabeled target images are available for training. In the semantic embedding space, the labeled source images are mapped to several fixed points specified by the source categories, and the unlabeled target images are forced to be mapped to other points specified by the target categories. Experiments conducted on AwA2, CUB and SUN datasets demonstrate that our method outperforms existing state-of-the-art approaches by a huge margin of 9.3–24.5% following generalized ZSL settings, and by a large margin of 0.2–16.2% following conventional ZSL settings. 
quasi-MCMC  Quasi-Monte Carlo (QMC) methods for estimating integrals are attractive since the resulting estimators converge at a faster rate than pseudorandom Monte Carlo. However, they can be difficult to set up on arbitrary posterior densities within the Bayesian framework, in particular for inverse problems. We introduce a general parallel Markov chain Monte Carlo (MCMC) framework, for which we prove a law of large numbers and a central limit theorem. We further extend this approach to the use of adaptive kernels and state conditions under which ergodicity holds. As a further extension, an importance sampling estimator is derived, for which asymptotic unbiasedness is proven. We consider the use of completely uniformly distributed (CUD) numbers and non-reversible transitions within the above stated methods, which leads to a general parallel quasi-MCMC (QMCMC) methodology. We prove consistency of the resulting estimators and demonstrate numerically that this approach scales close to $n^{-1}$ as we increase parallelisation, instead of the usual $n^{-1/2}$ that is typical of standard MCMC algorithms. In practical statistical models we observe up to 2 orders of magnitude improvement compared with pseudorandom methods. 
Quasi-Monte Carlo Variational Inference (QMC) 
Many machine learning problems involve Monte Carlo gradient estimators. As a prominent example, we focus on Monte Carlo variational inference (MCVI) in this paper. The performance of MCVI crucially depends on the variance of its stochastic gradients. We propose variance reduction by means of Quasi-Monte Carlo (QMC) sampling. QMC replaces N i.i.d. samples from a uniform probability distribution by a deterministic sequence of samples of length N. This sequence covers the underlying random variable space more evenly than i.i.d. draws, reducing the variance of the gradient estimator. With our novel approach, both the score function and the reparameterization gradient estimators lead to much faster convergence. We also propose a new algorithm for Monte Carlo objectives, where we operate with a constant learning rate and increase the number of QMC samples per iteration. We prove that this way, our algorithm can converge asymptotically at a faster rate than SGD. We furthermore provide theoretical guarantees on QMC for Monte Carlo objectives that go beyond MCVI, and support our findings by several experiments on large-scale data sets from various domains. 
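To illustrate the low-discrepancy idea (a generic sketch, not the paper's estimator), here is a hand-rolled van der Corput/Halton sequence compared against i.i.d. uniforms when estimating E[U] = 0.5:

```python
import random

def halton(n, base=2):
    """First n points of the Halton (van der Corput) sequence in a given base."""
    seq = []
    for i in range(1, n + 1):
        x, f = 0.0, 1.0
        while i > 0:
            f /= base
            x += f * (i % base)  # digit-reversal ("radical inverse") of i
            i //= base
        seq.append(x)
    return seq

N = 256
qmc_est = sum(halton(N)) / N                        # low-discrepancy estimate
rng = random.Random(0)
mc_est = sum(rng.random() for _ in range(N)) / N    # i.i.d. baseline
```

The QMC estimate is within a few thousandths of the true mean at N = 256; i.i.d. draws typically show errors an order of magnitude larger at the same sample size. In practice, libraries such as SciPy's `scipy.stats.qmc` provide ready-made Sobol and Halton generators.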
Quasi-Newton Methods  Quasi-Newton methods are used to find either zeroes or local maxima and minima of functions. They are an alternative to Newton’s method when the Jacobian (when searching for zeroes) or the Hessian (when searching for extrema) is unavailable or too expensive to compute at every iteration. 
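The one-dimensional prototype is the secant method, which replaces the derivative in Newton's method with a finite-difference estimate built from previous iterates (a generic illustration, not tied to any particular library):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Find a zero of f without derivatives, using secant (quasi-Newton) updates.
    Assumes f(x0) != f(x1) along the iteration path."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            return x1
        # Secant step: the slope (f1 - f0)/(x1 - x0) stands in for f'(x1).
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)  # ~ sqrt(2)
```

Multidimensional quasi-Newton methods such as BFGS generalize the same idea: successive gradient differences are used to build up an approximation of the (inverse) Hessian.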
Quasi-Recurrent Neural Networks (QRNN) 
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep’s computation on the previous timestep’s output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks. 
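A minimal sketch of the QRNN pooling recurrence, assuming the candidate sequence z and forget-gate sequence f have already been produced by the convolutional layers (hypothetical inputs, purely illustrative):

```python
def f_pooling(z_seq, f_seq, h0=None):
    """QRNN f-pooling: h_t = f_t * h_{t-1} + (1 - f_t) * z_t, elementwise per channel.
    z_seq, f_seq: lists over timesteps of equal-length channel vectors."""
    n_ch = len(z_seq[0])
    h = list(h0) if h0 is not None else [0.0] * n_ch
    outputs = []
    for z_t, f_t in zip(z_seq, f_seq):
        # The multiply-add has no trainable weights, so it is cheap,
        # and each channel is independent, so channels run in parallel.
        h = [f * hp + (1.0 - f) * z for z, f, hp in zip(z_t, f_t, h)]
        outputs.append(h)
    return outputs

# Forget gates of 0 pass the candidates straight through;
# forget gates of 1 carry the previous hidden state forward.
out = f_pooling([[1.0, 2.0], [3.0, 4.0]], [[0.0, 0.0], [0.0, 0.0]])
```

The heavy lifting (computing z and f from the input) is done by convolutions that parallelize across timesteps; only this lightweight elementwise scan remains sequential.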
Quegel  Pioneered by Google’s Pregel, many distributed systems have been developed for large-scale graph analytics. These systems expose the user-friendly ‘think like a vertex’ programming interface to users, and exhibit good horizontal scalability. However, these systems are designed for tasks where the majority of graph vertices participate in computation, and are not suitable for processing light-workload graph queries where only a small fraction of vertices need to be accessed. The programming paradigm adopted by these systems can seriously under-utilize the resources in a cluster for graph query processing. In this work, we develop a new open-source system, called Quegel, for querying big graphs, which treats queries as first-class citizens in the design of its computing model. Users only need to specify the Pregel-like algorithm for a generic query, and Quegel processes light-workload graph queries on demand, using a novel superstep-sharing execution model to effectively utilize the cluster resources. Quegel further provides a convenient interface for constructing graph indexes, which significantly improve query performance but are not supported by existing graph-parallel systems. Our experiments verified that Quegel is highly efficient in answering various types of graph queries and is up to orders of magnitude faster than existing systems. 
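The ‘think like a vertex’ model can be sketched as a superstep loop in which active vertices exchange messages (a toy single-machine illustration of the paradigm, not Quegel's API):

```python
def pregel_bfs(adj, source):
    """Vertex-centric BFS: in each superstep, newly reached vertices
    message their out-neighbors with distance + 1."""
    dist = {v: None for v in adj}
    dist[source] = 0
    active = {source}
    while active:
        messages = {}
        for v in active:                      # compute phase: send messages
            for u in adj[v]:
                d = dist[v] + 1
                if messages.get(u, d) >= d:
                    messages[u] = d
        active = set()
        for u, d in messages.items():         # update phase: combine messages
            if dist[u] is None or d < dist[u]:
                dist[u] = d
                active.add(u)                 # vertex votes to stay active
    return dist

adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
dist = pregel_bfs(adj, "a")
```

A point-to-point query like this touches only a small fraction of vertices per superstep, which is exactly the light-workload regime where whole-graph systems waste cluster resources.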
Query Autofiltering  Query Autofiltering is auto-tagging of the incoming query, where the knowledge source is the search index itself. What does this mean and why should we care? Content tagging processes are traditionally done at index time, either manually or automatically by machine learning or knowledge-based (taxonomy/ontology) approaches. To ‘tag’ a piece of content means to attach a piece of metadata that defines some attribute of that content (such as product type, color, price, date and so on). We use this now for faceted search – if I search for ‘shirts’, the search engine will bring back all records that have the token ‘shirts’ or the singular form ‘shirt’ (using a technique called stemming). At the same time, it will display all of the values of the various tags that we added to the content at index time, under the field name or ‘category’ of these tags. We call these things facets. When the user clicks on a facet link, say color = red, we generate a Solr filter query with the name/value pair <field name> = <facet value> and add that to the original query. This narrows the search result set to all records that have ‘shirt’ or ‘shirts’ and the ‘color’ facet value of ‘red’. Another benefit of faceting is that the user can see all of the colors that shirts come in, so they can also find blue shirts in the same way. 
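The facet-click step described above amounts to appending a Solr `fq` (filter query) parameter to the original `q`. A minimal sketch (the helper function is hypothetical, not part of Solr itself):

```python
from urllib.parse import urlencode

def facet_filter_params(base_query, field, value):
    """Build Solr-style query parameters: the original q plus an fq
    of the form <field name>:<facet value>."""
    return urlencode({"q": base_query, "fq": f"{field}:{value}"})

# Clicking the color=red facet on a 'shirts' search yields:
params = facet_filter_params("shirts", "color", "red")
```

Query autofiltering goes one step further: it recognizes field values inside the query text itself (e.g. "red shirts") and generates the equivalent filter automatically, without waiting for a facet click.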
Query Expansion (QE) 
Query expansion (QE) is the process of reformulating a seed query to improve retrieval performance in information retrieval operations. In the context of web search engines, query expansion involves evaluating a user’s input (what words were typed into the search query area, and sometimes other types of data) and expanding the search query to match additional documents. Query expansion involves techniques such as:
· Finding synonyms of words, and searching for the synonyms as well
· Finding all the various morphological forms of words by stemming each word in the search query
· Fixing spelling errors and automatically searching for the corrected form or suggesting it in the results
· Reweighting the terms in the original query
Query expansion is a methodology studied in the field of computer science, particularly within the realm of natural language processing and information retrieval. 
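A toy sketch of the first two techniques, using a hypothetical synonym table and a deliberately naive suffix-stripping stemmer (real systems use curated thesauri and e.g. Porter stemming):

```python
SYNONYMS = {"shirt": ["tee", "blouse"]}  # hypothetical synonym table

def stem(word):
    """Very naive stemmer: strip a plural 's'. Illustration only."""
    return word[:-1] if word.endswith("s") and len(word) > 3 else word

def expand_query(query):
    """Expand each query term with its stem and any known synonyms."""
    terms = []
    for word in query.lower().split():
        root = stem(word)
        terms.append(root)
        terms.extend(SYNONYMS.get(root, []))
    return terms

expanded = expand_query("red shirts")
```

The expanded term list is then OR-ed into the retrieval query, trading some precision for improved recall.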
Quetelet Index (QI) 
The Quetelet Index, now better known as the Body Mass Index (BMI), is a measure of relative weight defined as body mass in kilograms divided by the square of height in metres (kg/m²). 
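The Quetelet index (body mass index) is simple enough to compute directly, with mass in kilograms and height in metres:

```python
def quetelet_index(mass_kg, height_m):
    """Quetelet index (BMI) = mass / height^2, in kg/m^2."""
    return mass_kg / height_m ** 2

bmi = quetelet_index(70.0, 1.75)  # ~22.9 kg/m^2
```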
QuickeNing  We propose an approach to accelerate gradient-based optimization algorithms by giving them the ability to exploit curvature information using quasi-Newton update rules. The proposed scheme, called QuickeNing, is generic and can be applied to a large class of first-order methods such as incremental and block-coordinate algorithms; it is also compatible with composite objectives, meaning that it has the ability to provide exactly sparse solutions when the objective involves a sparsity-inducing regularization. QuickeNing relies on limited-memory BFGS rules, making it appropriate for solving high-dimensional optimization problems; with no line search, it is also simple to use and to implement. Besides, it enjoys a worst-case linear convergence rate for strongly convex problems. We present experimental results where QuickeNing gives significant improvements over competing methods for solving large-scale high-dimensional machine learning problems. 
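The limited-memory BFGS rules that the scheme builds on are usually implemented via the standard two-loop recursion, sketched here generically (this is textbook L-BFGS, not QuickeNing itself):

```python
def lbfgs_direction(grad, s_hist, y_hist):
    """Two-loop recursion: approximate -H^{-1} * grad from the last m
    curvature pairs s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k."""
    q = list(grad)
    alphas = []
    for s, y in reversed(list(zip(s_hist, y_hist))):
        rho = 1.0 / sum(yi * si for yi, si in zip(y, s))
        a = rho * sum(si * qi for si, qi in zip(s, q))
        alphas.append((a, rho, s, y))
        q = [qi - a * yi for qi, yi in zip(q, y)]
    if s_hist:  # initial Hessian scaling gamma = (s.y) / (y.y)
        s, y = s_hist[-1], y_hist[-1]
        gamma = sum(si * yi for si, yi in zip(s, y)) / sum(yi * yi for yi in y)
    else:
        gamma = 1.0
    r = [gamma * qi for qi in q]
    for a, rho, s, y in reversed(alphas):
        b = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return [-ri for ri in r]

d = lbfgs_direction([1.0, -2.0], [], [])  # empty memory -> steepest descent
```

Storing only the last m pairs keeps the cost linear in the dimension, which is what makes the approach viable for high-dimensional problems.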
QuickIM  The Influence Maximization (IM) problem aims at finding k seed vertices in a network, starting from which influence can be spread in the network to the maximum extent. In this paper, we propose QuickIM, the first versatile IM algorithm that attains all the desirable properties of a practically applicable IM algorithm at the same time, namely high time efficiency, good result quality, low memory footprint, and high robustness. On real-world social networks, QuickIM achieves the $\Omega(n + m)$ lower bound on time complexity and $\Omega(n)$ space complexity, where $n$ and $m$ are the number of vertices and edges in the network, respectively. Our experimental evaluation verifies the superiority of QuickIM. Firstly, QuickIM runs 1-3 orders of magnitude faster than the state-of-the-art IM algorithms. Secondly, except EasyIM, QuickIM requires 1-2 orders of magnitude less memory than the state-of-the-art algorithms. Thirdly, QuickIM always produces as good quality results as the state-of-the-art algorithms. Lastly, the time and the memory performance of QuickIM is independent of influence probabilities. On the largest network used in the experiments that contains more than 3.6 billion edges, QuickIM is able to find hundreds of influential seeds in less than 4 minutes, while all the state-of-the-art algorithms fail to terminate in an hour. 
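For context, the classic greedy baseline for IM estimates each candidate's marginal influence by simulating the independent-cascade model (an illustrative sketch of the baseline, far simpler than QuickIM):

```python
import random

def simulate_spread(adj, seeds, p, rng):
    """One independent-cascade run: each newly active node activates
    each inactive out-neighbor independently with probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for v in frontier:
            for u in adj[v]:
                if u not in active and rng.random() < p:
                    active.add(u)
                    nxt.append(u)
        frontier = nxt
    return len(active)

def greedy_im(adj, k, p=1.0, runs=10, seed=0):
    """Greedily pick k seeds, each maximizing estimated marginal spread."""
    rng = random.Random(seed)
    seeds = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in adj:
            if v in seeds:
                continue
            gain = sum(simulate_spread(adj, seeds + [v], p, rng)
                       for _ in range(runs)) / runs
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds

adj = {0: [1, 2], 1: [3], 2: [], 3: [], 4: []}
top = greedy_im(adj, 1, p=1.0)  # with p=1, node 0 reaches {0, 1, 2, 3}
```

The repeated Monte Carlo simulations are exactly what makes this baseline too slow and memory-hungry at billion-edge scale, which motivates algorithms like QuickIM.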
QuickNet  We present QuickNet, a fast and accurate network architecture that is both faster and significantly more accurate than other fast deep architectures like SqueezeNet. Furthermore, it uses fewer parameters than previous networks, making it more memory-efficient. We do this by making two major modifications to the reference Darknet model (Redmon et al., 2015): (1) the use of depthwise separable convolutions and (2) the use of parametric rectified linear units. We make the observation that parametric rectified linear units are computationally equivalent to leaky rectified linear units at test time, and that separable convolutions can be interpreted as a compressed Inception network (Chollet, 2016). Using these observations, we derive a network architecture, which we call QuickNet, that is both faster and more accurate than previous models. Our architecture provides at least four major advantages: (1) a smaller model size, which is more tenable on memory-constrained systems; (2) a significantly faster network, which is more tenable on computationally constrained systems; (3) a high accuracy of 95.7 percent on the CIFAR-10 dataset, which outperforms all but one result published so far, although we note that our works are orthogonal approaches and can be combined; and (4) orthogonality to previous model compression approaches, allowing for further speed gains to be realized. 
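The parameter saving from depthwise separable convolutions is easy to check arithmetically (generic formulas for one layer, not QuickNet's exact configuration):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias terms omitted)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel + 1x1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 64, 128)             # 73728 weights
separable = separable_conv_params(3, 64, 128)  # 576 + 8192 = 8768 weights
```

For a 3x3 layer mapping 64 to 128 channels, the separable form uses roughly 8x fewer weights, which is where the smaller model size and faster inference come from.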
Quilt Plot  Quilt plots. Sounds interesting. If you looked at that and thought “Hey, that’s a heat map!”, you are correct. That is a heat map. Let’s be quite clear about that. It’s a heat map. 
Quintly Query Language (QQL) 
QQL stands for quintly query language and gives you the power to define your own metrics based on the quintly data pool. As the name suggests, it is based on the SQL language, technically on SQLite. 
Quiver Plot  ggquiver 