QGIS  QGIS (previously known as ‘Quantum GIS’) is a cross-platform free and open-source desktop geographic information system (GIS) application that provides data viewing, editing, and analysis capabilities. Like other GIS software, QGIS allows users to create maps with many layers using different map projections. Maps can be assembled in different formats and for different uses. QGIS allows maps to be composed of raster or vector layers. As is typical for this kind of software, vector data is stored as point, line, or polygon features. Different kinds of raster images are supported, and the software can perform georeferencing of images. QGIS integrates with other open-source GIS packages, including PostGIS, GRASS, and MapServer, to give users extensive functionality. Plugins, written in Python or C++, extend the capabilities of QGIS. There are plugins to geocode using the Google Geocoding API, perform geoprocessing (fTools) similar to the standard tools found in ArcGIS, and interface with PostgreSQL/PostGIS, SpatiaLite, and MySQL databases. 
QICD 
‘QICD’ is an extremely fast iterative coordinate descent algorithm for high-dimensional nonconvex penalized quantile regression. This algorithm combines the coordinate descent algorithm in the inner iteration with a majorization-minimization step in the outer iteration. Each inner univariate minimization problem only requires computing a one-dimensional weighted median, which ensures fast computation. Tuning parameter selection is based on two different methods: cross-validation and BIC for the quantile regression model. Details are described in Peng, B. and Wang, L. (2015), linked to via the URL below, with <DOI:10.1080/10618600.2014.913516>. 
Q-Learning  Q-learning is a model-free reinforcement learning technique. Specifically, Q-learning can be used to find an optimal action-selection policy for any given (finite) Markov decision process (MDP). It works by learning an action-value function that ultimately gives the expected utility of taking a given action in a given state and following the optimal policy thereafter. A policy is a rule that the agent follows in selecting actions, given the state it is in. When such an action-value function is learned, the optimal policy can be constructed by simply selecting the action with the highest value in each state. One of the strengths of Q-learning is that it is able to compare the expected utility of the available actions without requiring a model of the environment. Additionally, Q-learning can handle problems with stochastic transitions and rewards without requiring any adaptations. It has been proven that for any finite MDP, Q-learning eventually finds an optimal policy, in the sense that the expected value of the total reward over all successive steps, starting from the current state, is the maximum achievable. DeepQLearning, DynTxRegime 
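The action-value update described above can be sketched with a small tabular Q-learning loop; the chain-shaped toy environment, its reward, and all hyperparameters below are hypothetical choices for illustration:

```python
import random

def q_learning(n_states=4, n_actions=2, episodes=300,
               alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    """Tabular Q-learning on a toy chain MDP: action 1 moves right,
    action 0 moves left; reward 1 for reaching the last state."""
    random.seed(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def step(s, a):
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        return s2, (1.0 if s2 == n_states - 1 else 0.0), s2 == n_states - 1

    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # action-value update: bootstrap from the best next-state value
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
# optimal policy: pick the highest-valued action in each state
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(4)]
```

Note that the update needs only sampled transitions, not a transition model of the environment, which is exactly the model-free property mentioned above.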
QMiner  QMiner is a data analytics platform for processing large-scale real-time streams containing structured and unstructured data. 
QR Decomposition  In linear algebra, a QR decomposition (also called a QR factorization) of a matrix is a decomposition of a matrix A into a product A = QR of an orthogonal matrix Q and an upper triangular matrix R. QR decomposition is often used to solve the linear least squares problem, and is the basis for a particular eigenvalue algorithm, the QR algorithm. If A has n linearly independent columns, then the first n columns of Q form an orthonormal basis for the column space of A. More generally, the first k columns of Q form an orthonormal basis for the span of the first k columns of A for any 1 ≤ k ≤ n. The fact that any column k of A only depends on the first k columns of Q is responsible for the triangular form of R. 
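As a small illustration, classical Gram-Schmidt orthogonalization computes exactly this factorization (a numerical sketch in NumPy; production code would use a more stable Householder-based routine such as `numpy.linalg.qr`):

```python
import numpy as np

def gram_schmidt_qr(A):
    """QR via classical Gram-Schmidt, for A with linearly independent
    columns: Q gets orthonormal columns, R is upper triangular."""
    m, n = A.shape
    Q, R = np.zeros((m, n)), np.zeros((n, n))
    for k in range(n):
        v = A[:, k].astype(float).copy()
        for j in range(k):
            # column k of A involves only the first k columns of Q,
            # which is why R comes out triangular
            R[j, k] = Q[:, j] @ A[:, k]
            v -= R[j, k] * Q[:, j]
        R[k, k] = np.linalg.norm(v)
        Q[:, k] = v / R[k, k]
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q, R = gram_schmidt_qr(A)
```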
Quadratic Assignment Problem (QAP) 
The quadratic assignment problem (QAP) is one of the fundamental combinatorial optimization problems in the branch of optimization or operations research in mathematics, from the category of facilities location problems. The problem models the following real-life situation: there is a set of n facilities and a set of n locations. For each pair of locations, a distance is specified, and for each pair of facilities a weight or flow is specified (e.g., the amount of supplies transported between the two facilities). The problem is to assign all facilities to different locations with the goal of minimizing the sum of the distances multiplied by the corresponding flows. Intuitively, the cost function encourages facilities with high flows between each other to be placed close together. The problem statement resembles that of the assignment problem, except that the cost function is expressed in terms of quadratic inequalities, hence the name. qap 
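For a tiny instance the objective can be evaluated by brute force over all permutations (the 3-facility flow and distance matrices below are made up for illustration; real QAP instances are NP-hard and need heuristics):

```python
from itertools import permutations

def qap_bruteforce(flows, dists):
    """Exhaustive search for the QAP: minimize
    sum_{i,j} flows[i][j] * dists[p[i]][p[j]] over permutations p."""
    n = len(flows)
    best_cost, best_perm = float("inf"), None
    for p in permutations(range(n)):
        cost = sum(flows[i][j] * dists[p[i]][p[j]]
                   for i in range(n) for j in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, p
    return best_cost, best_perm

# toy instance: high flow between facilities 0 and 1, so the optimum
# places them at the two closest locations
flows = [[0, 5, 1], [5, 0, 1], [1, 1, 0]]
dists = [[0, 1, 3], [1, 0, 2], [3, 2, 0]]
cost, perm = qap_bruteforce(flows, dists)
```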
Quadratic Discriminant Analysis (QDA) 
Quadratic discriminant analysis (QDA) is closely related to linear discriminant analysis (LDA), where it is assumed that the measurements from each class are normally distributed. Unlike LDA however, in QDA there is no assumption that the covariance of each of the classes is identical. When the normality assumption is true, the best possible test for the hypothesis that a given measurement is from a given class is the likelihood ratio test. QUDA: A Direct Approach for Sparse Quadratic Discriminant Analysis SQDA 
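A minimal sketch of the idea in NumPy (the two synthetic Gaussian classes are invented for illustration; note that each class gets its own covariance matrix, which is the QDA-versus-LDA difference):

```python
import numpy as np

def qda_fit(X, y):
    """Fit one Gaussian per class: mean, covariance, and prior.
    Unlike LDA, each class keeps its own covariance matrix."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[int(c)] = (Xc.mean(axis=0),
                          np.cov(Xc, rowvar=False),
                          len(Xc) / len(X))
    return params

def qda_predict(params, x):
    """Assign x to the class with the largest Gaussian log-density
    plus log-prior; the decision boundary is quadratic in x."""
    def score(mu, S, prior):
        d = x - mu
        return (-0.5 * np.log(np.linalg.det(S))
                - 0.5 * d @ np.linalg.solve(S, d)
                + np.log(prior))
    return max(params, key=lambda c: score(*params[c]))

rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], [1.0, 1.0], size=(200, 2))
X1 = rng.normal([4.0, 4.0], [0.5, 2.0], size=(200, 2))  # different covariance
X, y = np.vstack([X0, X1]), np.array([0] * 200 + [1] * 200)
params = qda_fit(X, y)
```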
Quadratic Exponential Model  cquad 
Quadratic Programming (QP) 
Quadratic programming (QP) is a special type of mathematical optimization problem. It is the problem of optimizing (minimizing or maximizing) a quadratic function of several variables subject to linear constraints on these variables. 
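A minimal sketch: with only equality constraints and a positive-definite quadratic term, the QP reduces to a single linear solve of the KKT system (the toy problem below is hypothetical):

```python
import numpy as np

def eq_qp(Q, c, A, b):
    """Solve min 0.5 x'Qx + c'x  s.t.  Ax = b by solving the KKT
    system (valid when Q is positive definite on the null space of A)."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]          # primal solution; sol[n:] are the multipliers

# minimize x1^2 + x2^2 subject to x1 + x2 = 1  ->  x = (0.5, 0.5)
Q = 2 * np.eye(2)           # 0.5 x'Qx = x1^2 + x2^2
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x = eq_qp(Q, c, A, b)
```

Inequality-constrained QPs need more machinery (active-set or interior-point methods), but this equality-constrained core is the building block those methods iterate on.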
Qualitative Comparative Analysis (QCA) 
Qualitative Comparative Analysis (QCA) is a technique, originally developed by Charles Ragin in 1987. QCA currently has more adherents in Europe than in the United States. It is used for analyzing data sets by listing and counting all the combinations of variables observed in the data set, and then applying the rules of logical inference to determine which descriptive inferences or implications the data supports. QCAtools, QCAfalsePositive, iaQCA 
Quantification  In mathematics and empirical science, quantification is the act of counting and measuring that maps human sense observations and experiences into members of some set of numbers. Quantification in this sense is fundamental to the scientific method. 
Quantification  Quantification is the machine learning task of estimating test-data class proportions that are not necessarily similar to those in training. Apart from its intrinsic value as an aggregate statistic, quantification output can also be used to optimize classifier probabilities, thereby increasing classification accuracy. We unify major quantification approaches under a constrained multivariate regression framework, and use mathematical programming to estimate class proportions for different loss functions. With this modeling approach, we extend existing binary-only quantification approaches to multiclass settings as well. We empirically verify our unified framework by experimenting with several multiclass datasets including the Stanford Sentiment Treebank and CIFAR-10. 
Quantile Function (QF) 
In probability and statistics, the quantile function specifies, for a given probability, the value of the random variable such that the probability of the variable being less than or equal to that value equals the given probability. It is also called the percent-point function or inverse cumulative distribution function. The quantile function is one way of prescribing a probability distribution, and it is an alternative to the probability density function (pdf) or probability mass function, the cumulative distribution function (cdf) and the characteristic function. The quantile function, Q, of a probability distribution is the inverse of its cumulative distribution function F. The derivative of the quantile function, namely the quantile density function, is yet another way of prescribing a probability distribution. It is the reciprocal of the pdf composed with the quantile function. 
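A sketch of the empirical version: sorting a sample gives the empirical CDF, and the quantile function returns the smallest order statistic whose CDF value reaches the requested probability:

```python
import math

def quantile_fn(sample, p):
    """Empirical quantile function Q(p): smallest x with F(x) >= p,
    i.e. the inverse of the empirical CDF."""
    xs = sorted(sample)
    k = max(math.ceil(p * len(xs)) - 1, 0)
    return xs[k]

data = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
median = quantile_fn(data, 0.5)   # the 0.5-quantile of the sample
```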
Quantile Regression (QR) 
Quantile regression is a type of regression analysis used in statistics and econometrics. Whereas the method of least squares results in estimates that approximate the conditional mean of the response variable given certain values of the predictor variables, quantile regression aims at estimating either the conditional median or other quantiles of the response variable. GLDreg 
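The difference from least squares can be seen with the check (pinball) loss, whose minimizer over a constant is the sample quantile rather than the mean (toy data with an outlier; a grid search is used here only for simplicity):

```python
import numpy as np

def pinball_loss(q, y, tau):
    """Check (pinball) loss of quantile regression:
    tau*(y-q) when y >= q, (tau-1)*(y-q) otherwise."""
    r = y - q
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])   # one heavy outlier
grid = np.linspace(0.0, 110.0, 2201)
# minimizing the tau=0.5 pinball loss recovers the median (3.0),
# while least squares would give the outlier-dragged mean (22.0)
q_hat = grid[np.argmin([pinball_loss(q, y, 0.5) for q in grid])]
```

Replacing the constant q with a linear function of predictors, and tau with any quantile level, gives the conditional quantile estimates described above.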
Quantile Reinforcement Learning (QRL) 
In reinforcement learning, the standard criterion to evaluate policies in a state is the expectation of the (discounted) sum of rewards. However, this criterion may not always be suitable; we consider an alternative criterion based on the notion of quantiles. In the case of episodic reinforcement learning problems, we propose an algorithm based on stochastic approximation with two timescales. We evaluate our proposition on a simple model of the TV show ‘Who Wants to Be a Millionaire’. 
Quantile Treatment Effect (QTE) 
qte 
Quantiles Return (QR) 

Quantitative Analysis (QA) 

Quantitative Analyst (Quant) 
The quant, or quantitative analyst, is a financial professional who makes use of a mathematical approach to evaluating the current conditions in a trading market. As part of this evaluation, the quant will also employ the same general methods to individual investment opportunities within the market. The general concept is to make use of a numerical analysis in order to help an investor identify the most profitable purchases and sales to make within the market. 
Quantitative Discourse Analysis  Quantitative Discourse Analysis is basically looking at patterns in language. qdap 
Quantized Neural Network (QNN) 
We introduce a method to train Quantized Neural Networks (QNNs) — neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bitwise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6 bits as well, which enables gradient computation using only bitwise operations. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4 bits. Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online. 
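A sketch of the forward-pass idea for the 1-bit case (the tiny weight matrix and input are invented; the straight-through gradient trick that makes this trainable is not shown):

```python
import numpy as np

def binarize(w):
    """Deterministic 1-bit quantization: sign(w) in {-1, +1}
    (zero mapped to +1)."""
    return np.where(w >= 0, 1.0, -1.0)

# the forward pass uses the binarized weights; a full-precision copy
# of w is kept on the side to accumulate the gradient updates
w = np.array([[0.3, -0.7], [-0.1, 0.9]])
x = np.array([1.0, 2.0])
y = binarize(w) @ x
```

Because every weight is ±1, the matrix product needs only additions and sign flips, which is what allows the bitwise GPU kernels mentioned above.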
Quantum Low Entropy based Associative Reasoning (QLEAR learning) 
In this paper, we propose a classification method based on a learning paradigm we call Quantum Low Entropy based Associative Reasoning, or QLEAR learning. The approach is based on the idea that classification can be understood as supervised clustering, where a quantum entropy, in the context of the quantum probabilistic model, is used as a ‘capturer’ (measure, or external index) of the ‘natural structure’ of the data. By using quantum entropy we do not make any assumption about linear separability of the data that are to be classified. The basic idea is to find close neighbors to a query sample and then use the relative change in the quantum entropy as a measure of similarity of the newly arrived sample with the representatives of interest. In other words, the method is based on calculation of the quantum entropy of the referent system and its relative change with the addition of the newly arrived sample. The referent system consists of vectors that represent individual classes and that are the most similar, in the Euclidean distance sense, to the vector that is analyzed. Here, we analyze the classification problem in the context of measuring similarities to prototype examples of categories. While nearest neighbor classifiers are natural in this setting, they suffer from the problem of high variance (in the bias-variance decomposition) in the case of limited sampling. Alternatively, one could use machine learning techniques (like support vector machines), but they involve time-consuming optimization. Here we propose a hybrid of nearest neighbor and machine learning techniques which deals naturally with the multiclass setting, has reasonable computational complexity both in training and at run time, and yields excellent results in practice. 
Quantum Neural Network (QNN) 
Quantum neural networks (QNNs) are neural network models which are based on the principles of quantum mechanics. There are two different approaches to QNN research, one exploiting quantum information processing to improve existing neural network models (sometimes also vice versa), and the other one searching for potential quantum effects in the brain. 
Quasi-Newton Methods  Quasi-Newton methods are methods used to either find zeroes or local maxima and minima of functions. They are an alternative to Newton’s method when the Jacobian (when searching for zeroes) or the Hessian (when searching for extrema) is unavailable or too expensive to compute at every iteration. 
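In one dimension the idea reduces to the secant method: Newton's step with the derivative replaced by a slope estimate built from the last two iterates (a minimal root-finding sketch):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: a one-dimensional quasi-Newton root finder that
    approximates f'(x) by the slope through the last two iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            return x1
        # quasi-Newton step: x - f(x) / (approximate derivative)
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)   # converges to sqrt(2)
```

Multidimensional quasi-Newton methods such as BFGS generalize the same idea by maintaining a low-cost approximation of the Hessian (or its inverse) from successive gradient differences.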
QuasiRecurrent Neural Networks (QRNN) 
Recurrent neural networks are a powerful tool for modeling sequential data, but the dependence of each timestep’s computation on the previous timestep’s output limits parallelism and makes RNNs unwieldy for very long sequences. We introduce quasi-recurrent neural networks (QRNNs), an approach to neural sequence modeling that alternates convolutional layers, which apply in parallel across timesteps, and a minimalist recurrent pooling function that applies in parallel across channels. Despite lacking trainable recurrent layers, stacked QRNNs have better predictive accuracy than stacked LSTMs of the same hidden size. Due to their increased parallelism, they are up to 16 times faster at train and test time. Experiments on language modeling, sentiment classification, and character-level neural machine translation demonstrate these advantages and underline the viability of QRNNs as a basic building block for a variety of sequence tasks. 
Quegel  Pioneered by Google’s Pregel, many distributed systems have been developed for large-scale graph analytics. These systems expose the user-friendly ‘think like a vertex’ programming interface to users, and exhibit good horizontal scalability. However, these systems are designed for tasks where the majority of graph vertices participate in computation, but are not suitable for processing light-workload graph queries where only a small fraction of vertices need to be accessed. The programming paradigm adopted by these systems can seriously under-utilize the resources in a cluster for graph query processing. In this work, we develop a new open-source system, called Quegel, for querying big graphs, which treats queries as first-class citizens in the design of its computing model. Users only need to specify the Pregel-like algorithm for a generic query, and Quegel processes light-workload graph queries on demand using a novel superstep-sharing execution model to effectively utilize the cluster resources. Quegel further provides a convenient interface for constructing graph indexes, which significantly improve query performance but are not supported by existing graph-parallel systems. Our experiments verified that Quegel is highly efficient in answering various types of graph queries and is up to orders of magnitude faster than existing systems. 
Query Autofiltering  Query Autofiltering is autotagging of the incoming query where the knowledge source is the search index itself. What does this mean and why should we care? Content tagging processes are traditionally done at index time either manually or automatically by machine learning or knowledge based (taxonomy/ontology) approaches. To ‘tag’ a piece of content means to attach a piece of metadata that defines some attribute of that content (such as product type, color, price, date and so on). We use this now for faceted search – if I search for ‘shirts’, the search engine will bring back all records that have the token ‘shirts’ or the singular form ‘shirt’ (using a technique called stemming). At the same time, it will display all of the values of the various tags that we added to the content at index time under the field name or ‘category’ of these tags. We call these things facets. When the user clicks on a facet link, say color = red, we then generate a Solr filter query with the name / value pair of <field name> = <facet value> and add that to the original query. What this does is narrow the search result set to all records that have ‘shirt’ or ‘shirts’ and the ‘color’ facet value of ‘red’. Another benefit of faceting is that the user can see all of the colors that shirts come in, so they can also find blue shirts in the same way. Query Autofiltering Revisited 
Query Expansion (QE) 
Query expansion (QE) is the process of reformulating a seed query to improve retrieval performance in information retrieval operations. In the context of web search engines, query expansion involves evaluating a user’s input (what words were typed into the search query area, and sometimes other types of data) and expanding the search query to match additional documents. Query expansion involves techniques such as:
• Finding synonyms of words, and searching for the synonyms as well
• Finding all the various morphological forms of words by stemming each word in the search query
• Fixing spelling errors and automatically searching for the corrected form or suggesting it in the results
• Re-weighting the terms in the original query
Query expansion is a methodology studied in the field of computer science, particularly within the realm of natural language processing and information retrieval. 
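A toy sketch of the first technique in the list above (the synonym table is invented for illustration; real systems draw on thesauri such as WordNet or learned embeddings):

```python
# hypothetical synonym table standing in for a real thesaurus
SYNONYMS = {
    "fast": {"quick", "rapid"},
    "car": {"automobile", "vehicle"},
}

def expand_query(query):
    """Expand each query term with its synonyms, keeping the originals."""
    terms = query.lower().split()
    expanded = set(terms)
    for t in terms:
        expanded |= SYNONYMS.get(t, set())
    return sorted(expanded)

expanded = expand_query("fast car")
```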
Quetelet Index (QI) 

QuickeNing  We propose an approach to accelerate gradient-based optimization algorithms by giving them the ability to exploit curvature information using quasi-Newton update rules. The proposed scheme, called QuickeNing, is generic and can be applied to a large class of first-order methods such as incremental and block-coordinate algorithms; it is also compatible with composite objectives, meaning that it has the ability to provide exactly sparse solutions when the objective involves a sparsity-inducing regularization. QuickeNing relies on limited-memory BFGS rules, making it appropriate for solving high-dimensional optimization problems; with no line search, it is also simple to use and to implement. Besides, it enjoys a worst-case linear convergence rate for strongly convex problems. We present experimental results where QuickeNing gives significant improvements over competing methods for solving large-scale high-dimensional machine learning problems. 
QuickNet  We present QuickNet, a fast and accurate network architecture that is both faster and significantly more accurate than other fast deep architectures like SqueezeNet. Furthermore, it uses fewer parameters than previous networks, making it more memory efficient. We do this by making two major modifications to the reference Darknet model (Redmon et al., 2015): (1) the use of depthwise separable convolutions and (2) the use of parametric rectified linear units. We make the observation that parametric rectified linear units are computationally equivalent to leaky rectified linear units at test time, and the observation that separable convolutions can be interpreted as a compressed Inception network (Chollet, 2016). Using these observations, we derive a network architecture, which we call QuickNet, that is both faster and more accurate than previous models. Our architecture provides at least four major advantages: (1) a smaller model size, which is more tenable on memory-constrained systems; (2) a significantly faster network, which is more tenable on computationally constrained systems; (3) a high accuracy of 95.7 percent on the CIFAR-10 dataset, which outperforms all but one result published so far, although we note that our works are orthogonal approaches and can be combined; and (4) orthogonality to previous model compression approaches, allowing for further speed gains to be realized. 
Quilt Plot  Quilt plots. Sounds interesting. If you looked at that and thought “Hey, that’s a heat map!”, you are correct. That is a heat map. Let’s be quite clear about that. It’s a heat map. 
Quintly Query Language (QQL) 
QQL stands for quintly query language and gives you the power to define your own metrics based on the quintly data pool. As the name suggests, it is based on the SQL language, technically on SQLite. 