Objective Function  A function that is to be optimized (minimized or maximized, depending on the particular task or problem). For example, an objective function in pattern classification tasks could be to minimize the error rate of a classifier. 
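A minimal sketch of the idea (the data and the threshold classifier here are made up for illustration): the error rate of a one-dimensional threshold rule is treated as the objective function and minimized by grid search over the threshold.

```python
# Minimal sketch: the error rate of a threshold classifier as an objective
# function, minimized by a simple grid search over the threshold.

def error_rate(threshold, xs, ys):
    """Fraction of points misclassified by the rule: predict 1 if x >= threshold."""
    preds = [1 if x >= threshold else 0 for x in xs]
    return sum(p != y for p, y in zip(preds, ys)) / len(ys)

xs = [0.1, 0.4, 0.35, 0.8, 0.9, 0.7]
ys = [0,   0,   0,    1,   1,   1]

# Minimize the objective over candidate thresholds.
candidates = [i / 100 for i in range(101)]
best = min(candidates, key=lambda t: error_rate(t, xs, ys))
print(best, error_rate(best, xs, ys))
```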
Objective-Reinforced Generative Adversarial Network (ORGAN) 
In unsupervised data generation tasks, besides the generation of a sample based on previous observations, one would often like to give hints to the model in order to bias the generation towards desirable metrics. We propose a method that combines Generative Adversarial Networks (GANs) and reinforcement learning (RL) in order to accomplish exactly that. While RL biases the data generation process towards arbitrary metrics, the GAN component of the reward function ensures that the model still remembers information learned from data. We build upon previous results that incorporated GANs and RL in order to generate sequence data and test this model in several settings for the generation of molecules encoded as text sequences (SMILES) and in the context of music generation, showing for each case that we can effectively bias the generation process towards desired metrics. 
Object-oriented Neural Programming (OONP) 
We propose Object-oriented Neural Programming (OONP), a framework for semantically parsing documents in specific domains. Basically, OONP reads a document and parses it into a pre-designed object-oriented data structure (referred to as ontology in this paper) that reflects the domain-specific semantics of the document. An OONP parser models semantic parsing as a decision process: a neural net-based Reader sequentially goes through the document, and during the process it builds and updates an intermediate ontology to summarize its partial understanding of the text it covers. OONP supports a rich family of operations (both symbolic and differentiable) for composing the ontology, and a wide variety of forms (both symbolic and differentiable) for representing the state and the document. An OONP parser can be trained with supervision of different forms and strength, including supervised learning (SL), reinforcement learning (RL) and a hybrid of the two. Our experiments on both synthetic and real-world document parsing tasks have shown that OONP can learn to handle fairly complicated ontologies with training data of modest sizes. 
Octave  GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable. http://wiki.octave.org 
OctNet  We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high-resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows focusing memory allocation and computation on the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling. 
Octree Generating Networks  We present a deep convolutional decoder architecture that can generate volumetric 3D outputs in a compute- and memory-efficient manner by using an octree representation. The network learns to predict both the structure of the octree, and the occupancy values of individual cells. This makes it a particularly valuable technique for generating 3D shapes. In contrast to standard decoders acting on regular voxel grids, the architecture does not have cubic complexity. This allows representing much higher resolution outputs with a limited memory budget. We demonstrate this in several application domains, including 3D convolutional autoencoders, generation of objects and whole scenes from high-level representations, and shape from a single image. 
Oddball SGD  Stochastic Gradient Descent (SGD) is arguably the most popular of the machine learning methods applied to training deep neural networks (DNN) today. It has recently been demonstrated that SGD can be statistically biased so that certain elements of the training set are learned more rapidly than others. In this article, we place SGD into a feedback loop whereby the probability of selection is proportional to error magnitude. This provides a noveltydriven oddball SGD process that learns more rapidly than traditional SGD by prioritising those elements of the training set with the largest novelty (error). In our DNN example, oddball SGD trains some 50x faster than regular SGD. 
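The feedback loop can be sketched on a toy problem (this is an illustrative one-parameter linear model, not the paper's DNN setup): each training example's probability of being selected for the next SGD step is proportional to its current error magnitude.

```python
import random

# Toy sketch of oddball SGD: examples are sampled for the next gradient step
# with probability proportional to their current error magnitude ("novelty").

random.seed(0)
data = [(x, 3.0 * x) for x in [0.1, 0.5, 1.0, 2.0]]  # targets from w* = 3
w = 0.0
lr = 0.1

def errors(w):
    return [abs(w * x - y) for x, y in data]

for _ in range(500):
    errs = errors(w)
    total = sum(errs)
    if total == 0:
        break
    # Sample one example with probability proportional to its error.
    x, y = random.choices(data, weights=errs, k=1)[0]
    w -= lr * 2 * (w * x - y) * x  # squared-loss gradient step

print(round(w, 3))
```

Because high-error ("oddball") examples are revisited more often, the largest residuals shrink first; ordinary SGD would instead sample examples uniformly.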
Odds  Odds are a numerical expression used in gambling and statistics to reflect the likelihood that a particular event will take place. Conventionally, they are expressed in the form “X to Y”, where X and Y are numbers. In gambling, odds represent the ratio between the amounts staked by parties to a wager or bet. Thus, odds of 6 to 1 mean the first party (normally a bookmaker) is staking six times the amount that the second party is. In statistics, odds represent the probability that an event will take place. Thus, odds of 6 to 1 mean that there are six possible outcomes in which the event will not take place to every one where it will. In other words, the probability that X will not happen is six times the probability that it will. The gambling and statistical uses of odds are closely interlinked. If a bet is a fair one, then the odds offered to the gamblers will perfectly reflect relative probabilities. If the odds being offered to the gamblers do not correspond to probability in this way then one of the parties to the bet has an advantage over the other. 
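The "X to Y" arithmetic above can be made concrete with exact fractions (a small illustration; the helper names are mine):

```python
from fractions import Fraction

# Odds of "6 to 1" against an event correspond to a probability of 1/7:
# six outcomes where the event does not happen for every one where it does.

def odds_to_probability(x, y):
    """Convert odds of 'x to y' against an event into the event's probability."""
    return Fraction(y, x + y)

def probability_to_odds(p):
    """Convert a probability into odds against, as an (x, y) pair in lowest terms."""
    p = Fraction(p)
    ratio = (1 - p) / p
    return (ratio.numerator, ratio.denominator)

print(odds_to_probability(6, 1))
print(probability_to_odds(Fraction(1, 7)))
```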
Offline Algorithm  In computer science, an online algorithm is one that can process its input piece-by-piece in a serial fashion, i.e., in the order that the input is fed to the algorithm, without having the entire input available from the start. In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand. 
Oja Median  Consider p+1 points in R^p. These points form a simplex, which has a p-dimensional volume. For example, in R^3 four points form a tetrahedron, and in R^2 three points form a triangle whose area is ‘2-dimensional volume’. Now consider a data set in R^p for which we seek the median. Oja proposed the following measure for a point X in R^p: • for every subset of p points from the data set, form a simplex with X. • sum together the volumes of each such simplex. • the Oja simplex median is any point X* in R^p for which this sum is minimum. 
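The definition can be sketched directly in R^2 (a brute-force illustration; a coarse grid search stands in for a proper optimizer):

```python
from itertools import combinations

# Brute-force sketch of the Oja median in R^2: for a candidate point X, sum the
# areas of the triangles formed by X with every pair of data points, then pick
# the candidate with the smallest sum (here via a coarse grid search).

def triangle_area(a, b, c):
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2

def oja_objective(x, points):
    return sum(triangle_area(p, q, x) for p, q in combinations(points, 2))

points = [(0, 0), (2, 0), (0, 2), (2, 2), (1, 1)]
grid = [(i / 4, j / 4) for i in range(9) for j in range(9)]
oja_median = min(grid, key=lambda x: oja_objective(x, points))
print(oja_median)
```

For this symmetric data set the sum of triangle areas is minimized at the center (1, 1), as expected.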
On-Disk Data Processing (ODDP) 
In this paper, we present a survey of ‘on-disk’ data processing (ODDP). ODDP, which is a form of near-data processing, refers to the computing arrangement where the secondary storage drives have the data processing capability. Proposed ODDP schemes vary widely in terms of the data processing capability, target applications, architecture and the kind of storage drive employed. Some ODDP schemes provide only a specific but heavily used operation like sort whereas some provide a full range of operations. Recently, with the advent of Solid State Drives, powerful and extensive ODDP solutions have been proposed. In this paper, we present a thorough review of architectures developed for different on-disk processing approaches along with current and future challenges and also identify the future directions which ODDP can take. 
One-Class Adversarial net (OCAN) 
Many online applications, such as online social networks or knowledge bases, are often attacked by malicious users who commit different types of actions such as vandalism on Wikipedia or fraudulent reviews on eBay. Currently, most of the fraud detection approaches require a training dataset that contains records of both benign and malicious users. However, in practice, there are often no or very few records of malicious users. In this paper, we develop one-class adversarial nets (OCAN) for fraud detection using training data with only benign users. OCAN first uses an LSTM-Autoencoder to learn the representations of benign users from their sequences of online activities. It then detects malicious users by training a discriminator with a complementary GAN model that is different from the regular GAN model. Experimental results show that our OCAN outperforms the state-of-the-art one-class classification models and achieves comparable performance with the latest multi-source LSTM model that requires both benign and malicious users in the training phase. 
One-Class Support Vector Machine (OCSVM) 
Traditionally, many classification problems address the two- or multi-class setting: the goal of the machine learning application is to distinguish test data between a number of classes, using training data. But what if you only have data of one class, and the goal is to test new data and find out whether it is similar to the training data or not? A method for this task, which has gained much popularity over the last two decades, is the One-Class Support Vector Machine. See: Estimating the Support of a High-Dimensional Distribution 
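A minimal usage sketch with scikit-learn's OneClassSVM (the data and the parameter values here are illustrative choices, not recommendations): the model is fit on "normal" points only and then labels new points as inliers (+1) or outliers (-1).

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Fit a One-Class SVM on data from a single class, then score new points.
# The nu parameter bounds the fraction of training points treated as outliers.

rng = np.random.RandomState(0)
train = 0.3 * rng.randn(100, 2)             # one class: points near the origin
test = np.array([[0.1, -0.1], [4.0, 4.0]])  # one familiar point, one far away

clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05)
clf.fit(train)
pred = clf.predict(test)
print(pred)
```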
One-Factor-At-a-Time (OFAT) 
The one-factor-at-a-time method (or OFAT) is a method of designing experiments involving the testing of factors, or causes, one at a time instead of all simultaneously. Prominent textbooks and academic papers currently favor factorial experimental designs, a method pioneered by Sir Ronald A. Fisher, where multiple factors are changed at once. The reasons stated for favoring the use of factorial design over OFAT are: 1. OFAT requires more runs for the same precision in effect estimation; 2. OFAT cannot estimate interactions; 3. OFAT can miss optimal settings of factors. Despite these criticisms, some researchers have articulated a role for OFAT and showed that it can be more effective than fractional factorials under certain conditions (number of runs is limited, primary goal is to attain improvements in the system, and experimental error is not large compared to factor effects, which must be additive and independent of each other). Designed experiments remain nearly always preferred to OFAT, with many types and methods available, in addition to fractional factorials which, though usually requiring more runs than OFAT, do address the three concerns above. One modern design over which OFAT has no advantage in number of runs is the Plackett-Burman which, by having all factors vary simultaneously (an important quality in experimental designs), gives generally greater precision in effect estimation. reval 
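The procedure, and its blindness to interactions, can be sketched on a toy response function (the function and factor levels are made up for illustration):

```python
# Sketch of one-factor-at-a-time experimentation: starting from a baseline,
# each factor is varied alone and its effect on the response is recorded.
# The toy response contains an a*b interaction, which OFAT cannot detect.

def response(a, b):
    return 2 * a + 3 * b + 4 * a * b  # includes an interaction term

baseline = {"a": 0, "b": 0}
effects = {}
for factor in baseline:
    trial = dict(baseline)
    trial[factor] = 1  # change one factor at a time
    effects[factor] = response(**trial) - response(**baseline)

print(effects)  # main effects only: {'a': 2, 'b': 3}
```

A factorial design, by also running the (a=1, b=1) trial, would reveal the interaction of size 4 that this OFAT sweep misses.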
One-Pass Algorithm  In computing, a one-pass algorithm is one which reads its input exactly once, in order, without unbounded buffering. A one-pass algorithm generally requires O(n) time and less than O(n) storage (typically O(1)), where n is the size of the input. A basic one-pass (clustering) algorithm operates as follows: (1) the object descriptions are processed serially; (2) the first object becomes the cluster representative of the first cluster; (3) each subsequent object is matched against all cluster representatives existing at its processing time; (4) a given object is assigned to one cluster (or more if overlap is allowed) according to some condition on the matching function; (5) when an object is assigned to a cluster the representative for that cluster is recomputed; (6) if an object fails a certain test it becomes the cluster representative of a new cluster. 
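The six steps above can be sketched as a leader-style clustering of scalars (the distance function and threshold test are simple stand-ins):

```python
# One-pass (leader) clustering: each object is read exactly once, matched
# against the existing cluster representatives, and either joins the closest
# cluster (recomputing its representative) or starts a new cluster.

def one_pass_cluster(objects, threshold):
    reps, clusters = [], []
    for obj in objects:
        if reps:
            d, i = min((abs(obj - r), i) for i, r in enumerate(reps))
        if not reps or d > threshold:
            reps.append(obj)          # object becomes representative of a new cluster
            clusters.append([obj])
        else:
            clusters[i].append(obj)   # assign to the matching cluster
            reps[i] = sum(clusters[i]) / len(clusters[i])  # recompute representative
    return clusters

print(one_pass_cluster([1.0, 1.2, 8.0, 0.9, 8.3], threshold=2.0))
```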
One-Shot Imitation Learning  Imitation learning has been commonly applied to solve different tasks in isolation. This usually requires either careful feature engineering, or a significant number of samples. This is far from what we desire: ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. Specifically, we consider the setting where there is a very large set of tasks, and each task has many instantiations. For example, a task could be to stack all blocks on a table into a single tower, another task could be to place all blocks on a table into two-block towers, etc. In each case, different instances of the task would consist of different sets of blocks with different initial states. At training time, our algorithm is presented with pairs of demonstrations for a subset of all tasks. A neural net is trained that takes as input one demonstration and the current state (which initially is the initial state of the other demonstration of the pair), and outputs an action with the goal that the resulting sequence of states and actions matches as closely as possible with the second demonstration. At test time, a demonstration of a single instance of a new task is presented, and the neural net is expected to perform well on new instances of this new task. The use of soft attention allows the model to generalize to conditions and tasks unseen in the training data. We anticipate that by training this model on a much greater variety of tasks and settings, we will obtain a general system that can turn any demonstrations into robust policies that can accomplish an overwhelming variety of tasks. Videos available at https://bit.ly/oneshotimitation. 
One-Shot Learning  One-shot learning is an object categorization problem in computer vision. Whereas most machine-learning-based object categorization algorithms require training on hundreds or thousands of images and very large datasets, one-shot learning aims to learn information about object categories from one, or only a few, training images. 
One-Sided Dynamic Principal Components  odpc 
Online Algorithm  In computer science, an online algorithm is one that can process its input piece-by-piece in a serial fashion, i.e., in the order that the input is fed to the algorithm, without having the entire input available from the start. In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand. 
Online Analytical Mining (OLAM) 
Online Analytical Processing (OLAP) technology is an essential element of the decision support system and permits decision makers to visualize huge operational data for quick, consistent, interactive and meaningful analysis. More recently, data mining techniques are also used together with OLAP to analyze large data sets, which makes OLAP more useful and easier to apply in decision support systems. Several works in the past demonstrated the feasibility and value of integrating OLAP with data mining, and as a result a new promising direction of Online Analytical Mining (OLAM) has emerged. OLAM provides a multidimensional view of its data and creates an interactive data mining environment whereby users can dynamically select data mining and OLAP functions, perform OLAP operations (such as drilling, slicing, dicing and pivoting on the data mining results), as well as perform mining operations on OLAP results, that is, mining different portions of data at multiple levels of abstraction. 
Online Analytical Processing (OLAP) 
In computing, online analytical processing, or OLAP, is an approach to answering multidimensional analytical (MDA) queries swiftly. OLAP is part of the broader category of business intelligence, which also encompasses relational databases, report writing and data mining. Typical applications of OLAP include business reporting for sales, marketing, management reporting, business process management (BPM), budgeting and forecasting, financial reporting and similar areas, with new applications coming up, such as agriculture. The term OLAP was created as a slight modification of the traditional database term Online Transaction Processing (“OLTP”). 
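Two core OLAP operations, roll-up and slicing, can be illustrated on a tiny made-up fact table (dimensions: region, product, quarter; measure: sales):

```python
from collections import defaultdict

# Toy OLAP-style cube operations on a small fact table.

facts = [
    ("EU", "widget", "Q1", 100), ("EU", "widget", "Q2", 120),
    ("EU", "gadget", "Q1",  80), ("US", "widget", "Q1", 200),
    ("US", "gadget", "Q2", 150),
]

def roll_up(rows, dim):
    """Aggregate the sales measure along one dimension (0=region, 1=product, 2=quarter)."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[dim]] += row[3]
    return dict(totals)

def slice_(rows, dim, value):
    """Fix one dimension at a value, yielding a lower-dimensional sub-cube."""
    return [row for row in rows if row[dim] == value]

print(roll_up(facts, 0))                    # total sales per region
print(roll_up(slice_(facts, 2, "Q1"), 1))   # Q1 sales per product
```

Dicing and pivoting are similar compositions of filtering and re-aggregation; real OLAP engines precompute and index such aggregates so these queries return swiftly.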
Online Connected Dominating Set Leasing (OCDSL) 
We introduce the \emph{Online Connected Dominating Set Leasing} problem (OCDSL) in which we are given an undirected connected graph $G = (V, E)$, a set $\mathcal{L}$ of lease types each characterized by a duration and cost, and a sequence of subsets of $V$ arriving over time. A node can be leased using lease type $l$ for cost $c_l$ and remains active for time $d_l$. The adversary gives in each step $t$ a subset of nodes that need to be dominated by a connected subgraph consisting of nodes active at time $t$. The goal is to minimize the total leasing costs. OCDSL contains the \emph{Parking Permit Problem}~\cite{PPP} as a special case and generalizes the classical offline \emph{Connected Dominating Set} problem~\cite{Guha1998}. It has an $\Omega(\log ^2 n + \log \mathcal{L})$ randomized lower bound resulting from lower bounds for the \emph{Parking Permit Problem} and the \emph{Online Set Cover} problem~\cite{Alon:2003:OSC:780542.780558,Korman}, where $\mathcal{L}$ is the number of available lease types and $n$ is the number of nodes in the input graph. We give a randomized $\mathcal{O}(\log ^2 n + \log \mathcal{L} \log n)$-competitive algorithm for OCDSL. We also give a deterministic algorithm for a variant of OCDSL in which the dominating subgraph need not be connected, the \emph{Online Dominating Set Leasing} problem. The latter is based on a simple primal-dual approach and has an $\mathcal{O}(\mathcal{L} \cdot \Delta)$-competitive ratio, where $\Delta$ is the maximum degree of the input graph. 
Online Convex Ensemble StrongLy Adaptive Dynamic Learning (OCELAD) 
Recent work in distance metric learning has focused on learning transformations of data that best align with specified pairwise similarity and dissimilarity constraints, often supplied by a human observer. The learned transformations lead to improved retrieval, classification, and clustering algorithms due to the better adapted distance or similarity measures. Here, we address the problem of learning these transformations when the underlying constraint generation process is non-stationary. This non-stationarity can be due to changes in either the ground-truth clustering used to generate constraints or changes in the feature subspaces in which the class structure is apparent. We propose Online Convex Ensemble StrongLy Adaptive Dynamic Learning (OCELAD), a general adaptive, online approach for learning and tracking optimal metrics as they change over time that is highly robust to a variety of non-stationary behaviors in the changing metric. We apply the OCELAD framework to an ensemble of online learners. Specifically, we create a retro-initialized composite objective mirror descent (COMID) ensemble (RICE) consisting of a set of parallel COMID learners with different learning rates, demonstrate RICE-OCELAD on both real and synthetic data sets and show significant performance improvements relative to previously proposed batch and online distance metric learning algorithms. 
Online Convex Optimization (OCO) 
A sequential decision-making framework in which, at each round t, a learner chooses a point x_t from a fixed convex feasible set, an adversary then reveals a convex loss function f_t, and the learner incurs the loss f_t(x_t). Performance is measured by regret: the gap between the learner’s cumulative loss and that of the best fixed point in hindsight. Online gradient descent and online mirror descent are standard algorithms in this framework. 
Online Deep Metric Learning (ODML) 
Metric learning learns a metric function from training data to calculate the similarity or distance between samples. From the perspective of feature learning, metric learning essentially learns a new feature space by feature transformation (e.g., Mahalanobis distance metric). However, traditional metric learning algorithms are shallow, which just learn one metric space (feature transformation). Can we further learn a better metric space from the learnt metric space? In other words, can we learn metrics progressively and nonlinearly, as in deep learning, by just using the existing metric learning algorithms? To this end, we present a hierarchical metric learning scheme and implement an online deep metric learning framework, namely ODML. Specifically, we take one online metric learning algorithm as a metric layer, followed by a nonlinear layer (i.e., ReLU), and then stack these layers modelled after deep learning. The proposed ODML enjoys some nice properties, indeed can learn metrics progressively and performs superiorly on some datasets. Various experiments with different settings have been conducted to verify these properties of the proposed ODML. 
Online Failure Prediction  The task of identifying at runtime whether a failure will occur in the near future, based on an assessment of the monitored current system state. This type of failure prediction is called online failure prediction. 
Online FAult Detection (FADO) 
This paper proposes and studies a detection technique for adversarial scenarios (dubbed deterministic detection). This technique provides an alternative detection methodology in case the usual stochastic methods are not applicable: this can be because the studied phenomenon does not follow a stochastic sampling scheme, samples are high-dimensional and subsequent multiple-testing corrections render results overly conservative, sample sizes are too low for asymptotic results (as e.g. the central limit theorem) to kick in, or one cannot allow for the small probability of failure inherent to stochastic approaches. This paper instead designs a method based on insights from machine learning and online learning theory: this detection algorithm – named Online FAult Detection (FADO) – comes with theoretical guarantees of its detection capabilities. A version of the margin is found to regulate the detection performance of FADO. A precise expression is derived for bounding the performance, and experimental results are presented assessing the influence of involved quantities. A case study of scene detection is used to illustrate the approach. The technology is closely related to the linear perceptron rule, inherits its computational attractiveness and flexibility towards various extensions. 
Online Generative Discriminative Restricted Boltzmann Machine (OGDRBM) 
We propose a novel online learning algorithm for Restricted Boltzmann Machines (RBM), namely, the Online Generative Discriminative Restricted Boltzmann Machine (OGDRBM), that provides the ability to build and adapt the network architecture of an RBM according to the statistics of streaming data. The OGDRBM is trained in two phases: (1) an online generative phase for unsupervised feature representation at the hidden layer and (2) a discriminative phase for classification. The online generative training begins with zero neurons in the hidden layer, adds and updates the neurons to adapt to statistics of streaming data in a single-pass unsupervised manner, resulting in a feature representation best suited to the data. The discriminative phase is based on stochastic gradient descent and associates the represented features with the class labels. We demonstrate the OGDRBM on a set of multi-category and binary classification problems for data sets having varying degrees of class-imbalance. We first apply the OGDRBM algorithm on the multi-class MNIST dataset to characterize the network evolution. We demonstrate that the online generative phase converges to a stable, concise network architecture, wherein individual neurons are inherently discriminative to the class labels despite unsupervised training. We then benchmark OGDRBM performance against other machine learning, neural network and ClassRBM techniques for credit scoring applications using 3 public non-stationary two-class credit datasets with varying degrees of class-imbalance. We report that OGDRBM improves accuracy by 2.5-3% over batch learning techniques while requiring 24%-70% fewer neurons and fewer training samples. This online generative training approach can be extended greedily to multiple layers for training Deep Belief Networks in non-stationary data mining applications without the need for a priori fixed architectures. 
Online Gradient Descent (OGD) 
In stochastic (or “online”) gradient descent, the true gradient of Q(w) is approximated by a gradient at a single example. … As the algorithm sweeps through the training set, it performs the above update for each training example. Several passes can be made over the training set until the algorithm converges. If this is done, the data can be shuffled for each pass to prevent cycles. Typical implementations may use an adaptive learning rate so that the algorithm converges. 
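A minimal sketch of (projected) online gradient descent, on a made-up stream of quadratic losses f_t(w) = (w - z_t)^2 over the interval [0, 1]: at each step the learner takes a gradient step on the current loss only, then projects back onto the feasible set.

```python
# Projected online gradient descent on a stream of one-dimensional convex
# losses f_t(w) = (w - z_t)^2, with the iterate kept inside [0, 1].

def ogd(zs, lr=0.1):
    w, history = 0.5, []
    for z in zs:
        history.append(w)
        grad = 2 * (w - z)                     # gradient of the current loss at w
        w = min(1.0, max(0.0, w - lr * grad))  # gradient step + projection onto [0, 1]
    return history

zs = [0.9, 0.8, 1.0, 0.85, 0.9] * 20
ws = ogd(zs)
print(ws[-1])
```

The iterate drifts toward the region favored by the loss stream (here around 0.89) even though no loss is ever revisited, which is the defining feature of the online setting.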
Online Machine Learning  Online machine learning is a model of induction that learns one instance at a time. The goal in online learning is to predict labels for instances. For example, the instances could describe the current conditions of the stock market, and an online algorithm predicts tomorrow’s value of a particular stock. The key defining characteristic of online learning is that soon after the prediction is made, the true label of the instance is discovered. This information can then be used to refine the prediction hypothesis used by the algorithm. The goal of the algorithm is to make predictions that are close to the true labels. 
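The predict-then-update protocol can be sketched with a perceptron learner (the stream below is made up; labels are in {-1, +1}):

```python
# Online learning protocol: for each arriving instance the learner first
# predicts a label, then sees the true label and refines its hypothesis.

def online_perceptron(stream):
    w, b, mistakes = [0.0, 0.0], 0.0, 0
    for (x1, x2), y in stream:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
        if pred != y:                 # true label revealed after the prediction
            mistakes += 1
            w[0] += y * x1            # perceptron update on a mistake
            w[1] += y * x2
            b += y
    return w, mistakes

stream = [((1, 1), 1), ((2, 2), 1), ((-1, -1), -1), ((1, 2), 1), ((-2, -1), -1)] * 3
w, mistakes = online_perceptron(stream)
print(w, mistakes)
```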
Online Maximum a Posterior Estimation (OPE) 
One of the core problems in statistical models is the estimation of a posterior distribution. For topic models, the problem of posterior inference for individual texts is particularly important, especially when dealing with data streams, but is often intractable in the worst case. As a consequence, existing methods for posterior inference are approximate and have no guarantee on either quality or convergence rate. In this paper, we introduce a provably fast algorithm, namely Online Maximum a Posterior Estimation (OPE), for posterior inference in topic models. OPE has more attractive properties than existing inference approaches, including theoretical guarantees on quality and fast convergence rate. The discussions about OPE are very general and hence can be easily employed in a wide class of probabilistic models. Finally, we employ OPE to design three novel methods for learning Latent Dirichlet allocation from text streams or large corpora. Extensive experiments demonstrate some superior behaviors of OPE and of our new learning methods. 
Online Mirror Descent  A family of first-order algorithms for online convex optimization that generalizes online gradient descent: each update takes a gradient step in a dual space defined by a strongly convex regularizer (the mirror map) and then maps back to the decision set, so that the update respects the geometry of that set. With the squared Euclidean norm as regularizer it recovers online gradient descent; with the negative entropy it yields multiplicative-weights (exponentiated gradient) updates on the probability simplex. 
Online Multi-Armed Bandit  We introduce a novel variant of the multi-armed bandit problem, in which bandits are streamed one at a time to the player, and at each point, the player can either choose to pull the current bandit or move on to the next bandit. Once a player has moved on from a bandit, they may never visit it again, which is a crucial difference between our problem and classic multi-armed bandit problems. In this online context, we study Bernoulli bandits (bandits with payout Ber($p_i$) for some underlying mean $p_i$) with underlying means drawn i.i.d. from various distributions, including the uniform distribution, and in general, all distributions that have a CDF satisfying certain differentiability conditions near zero. In all cases, we suggest several strategies and investigate their expected performance. Furthermore, we bound the performance of any optimal strategy and show that the strategies we have suggested are indeed optimal up to a constant factor. We also investigate the case where the distribution from which the underlying means are drawn is not known ahead of time. We again, are able to suggest algorithms that are optimal up to a constant factor for this case, given certain mild conditions on the universe of distributions. 
Online Multiple Kernel Classification (OMKC) 
Online learning and kernel learning are two active research topics in machine learning. Although each of them has been studied extensively, there is a limited effort in addressing the intersecting research. In this paper, we introduce a new research problem, termed Online Multiple Kernel Learning (OMKL), that aims to learn a kernel based prediction function from a pool of predefined kernels in an online learning fashion. OMKL is generally more challenging than typical online learning because both the kernel classifiers and their linear combination weights must be learned simultaneously. In this work, we consider two setups for OMKL, i.e. combining binary predictions or real-valued outputs from multiple kernel classifiers, and we propose both deterministic and stochastic approaches in the two setups for OMKL. The deterministic approach updates all kernel classifiers for every misclassified example, while the stochastic approach randomly chooses a classifier(s) for updating according to some sampling strategies. Mistake bounds are derived for all the proposed OMKL algorithms. 
Online Network Optimization (ONO) 
Future 5G wireless networks will rely on agile and automated network management, where the usage of diverse resources must be jointly optimized with surgical accuracy. A number of key wireless network functionalities (e.g., traffic steering, energy savings) give rise to hard optimization problems. What is more, high spatiotemporal traffic variability coupled with the need to satisfy strict per slice/service SLAs in modern networks, suggest that these problems must be constantly (re)solved, to maintain close-to-optimal performance. To this end, in this paper we propose the framework of Online Network Optimization (ONO), which seeks to maintain both agile and efficient control over time, using an arsenal of data-driven, adaptive, and AI-based techniques. Since the mathematical tools and the studied regimes vary widely among these methodologies, a theoretical comparison is often out of reach. Therefore, the important question ‘what is the right ONO technique?’ remains open to date. In this paper, we discuss the pros and cons of each technique and further attempt a direct quantitative comparison for a specific use case, using real data. Our results suggest that carefully combining the insights of problem modeling with state-of-the-art AI techniques provides significant advantages at reasonable complexity. 
Online Portfolio Selection (OLPS) 
Online portfolio selection, which sequentially selects a portfolio over a set of assets in order to achieve certain targets, is a natural and important task for asset portfolio management. Aiming to maximize the cumulative wealth, several categories of algorithms have been proposed to solve this task. One category of algorithms—Follow the Winner—tries to asymptotically achieve the same growth rate (expected log return) as that of an optimal strategy, which is often based on the CGT. The second category—Follow the Loser—transfers the wealth from winning assets to losers, which seems contradictory to the common sense but empirically often achieves significantly better performance. Finally, the third category—Pattern Matching-based approaches—tries to predict the next market distribution based on a sample of historical data and explicitly optimizes the portfolio based on the sampled distribution. Although these three categories are focused on a single strategy (class), there are also some other strategies that focus on combining multiple strategies (classes)—Meta-Learning Algorithms (MLAs). Book: Online Portfolio Selection 
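The sequential setting can be sketched with a classic example (the per-period price relatives below are made up): on a volatile two-asset stream, a constantly rebalanced portfolio grows cumulative wealth while buy-and-hold stagnates, which is the intuition behind strategies that actively reallocate each period.

```python
# Cumulative wealth of two simple portfolio selection strategies on a stream
# of per-period price relatives (gross returns) for two assets.

relatives = [
    (1.0, 2.0), (1.0, 0.5), (1.0, 2.0), (1.0, 0.5),
]

def buy_and_hold(relatives, weights):
    wealth = list(weights)                # wealth held in each asset
    for r in relatives:
        wealth = [v * x for v, x in zip(wealth, r)]
    return sum(wealth)

def constant_rebalanced(relatives, weights):
    wealth = 1.0
    for r in relatives:
        wealth *= sum(w * x for w, x in zip(weights, r))  # rebalance every period
    return wealth

print(buy_and_hold(relatives, [0.5, 0.5]))
print(constant_rebalanced(relatives, [0.5, 0.5]))
```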
Online Principal Component Analysis (oPCA) 
In the online setting of the well known Principal Component Analysis (PCA) problem, the vectors x_t are presented to the algorithm one by one. onlinePCA 
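One classic online PCA algorithm is Oja's rule, sketched here on a synthetic stream (the data, learning-rate schedule, and per-step normalization are illustrative choices): a single principal direction is updated incrementally as each vector x_t arrives, without ever storing the data.

```python
import numpy as np

# Online PCA via Oja's rule: update an estimate w of the top principal
# direction one sample at a time.

rng = np.random.RandomState(0)
# Stream of 2-D points with most variance along the direction (1, 1).
xs = rng.randn(2000, 1) * np.array([1.0, 1.0]) + rng.randn(2000, 2) * 0.1

w = np.array([1.0, 0.0])
for t, x in enumerate(xs, start=1):
    lr = 1.0 / t                # decaying learning rate
    y = w @ x
    w += lr * y * (x - y * w)   # Oja's update
    w /= np.linalg.norm(w)      # keep w unit length (numerical safety)

print(w)  # close to +/- (0.707, 0.707)
```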
Online Reputation Monitoring (ORM) 
Online Reputation Monitoring (ORM) is concerned with the use of computational tools to measure the reputation of entities online, such as politicians or companies. 
Online Transactional Processing (OLTP) 
Online transaction processing, or OLTP, is a class of information systems that facilitate and manage transaction-oriented applications, typically for data entry and retrieval transaction processing. The term is somewhat ambiguous; some understand a “transaction” in the context of computer or database transactions, while others (such as the Transaction Processing Performance Council) define it in terms of business or commercial transactions. OLTP has also been used to refer to processing in which the system responds immediately to user requests. An automated teller machine (ATM) for a bank is an example of a commercial transaction processing application. Online transaction processing applications are high-throughput and insert- or update-intensive in database management. These applications are used concurrently by hundreds of users. The key goals of OLTP applications are availability, speed, concurrency, and recoverability. Reduced paper trails and faster, more accurate forecasts for revenues and expenses are both examples of how OLTP makes things simpler for businesses. However, like many modern online information technology solutions, some systems require offline maintenance, which further affects the cost-benefit analysis of online transaction processing systems. 
Ontology  In computer science and information science, an ontology is a formal naming and definition of the types, properties, and interrelationships of the entities that really or fundamentally exist for a particular domain of discourse. An ontology compartmentalizes the variables needed for some set of computations and establishes the relationships between them. The fields of artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture all create ontologies to limit complexity and to organize information. The ontology can then be applied to problem solving. 
Ontology Classification  Ontology classification – the computation of the subsumption hierarchies for classes and properties – is a core reasoning service provided by all OWL (Web Ontology Language) reasoners known to us. A popular algorithm for computing the class hierarchy is the so-called Enhanced Traversal (ET) algorithm. 
Ontology Engineering  Ontology engineering in computer science and information science is a field which studies the methods and methodologies for building ontologies: formal representations of a set of concepts within a domain and the relationships between those concepts. A large-scale representation of abstract concepts such as actions, time, physical objects and beliefs would be an example of ontological engineering. 
Ontology Learning  Manual construction of ontologies for the Semantic Web is a time-consuming task. In order to help humans, the ontology learning field tries to automate the construction of new ontologies. The amount of data caused by the success of the Internet is demanding methodologies and tools to automatically extract unknown and potentially useful knowledge out of it, generating structured representations of that knowledge. Although ontological engineering tools have matured over the last decade, manual ontology acquisition remains a tedious, time-consuming, error-prone, and complex task that can easily result in a knowledge acquisition bottleneck. Besides, while new information needs keep growing, the available ontologies need to be updated and enriched with new content. Research in the ontology learning field has made possible the development of several approaches that allow the partial automation of the ontology construction process, aiming to reduce the time and effort in ontology development. Some methods and tools have been proposed in recent years to speed up the ontology building process, using different sources and several techniques. Computational linguistics techniques, information extraction, statistics, and machine learning are the most prominent paradigms applied so far. There is also a great variety of information sources used for ontology learning. Though web pages, dictionaries, knowledge bases, and semi-structured and structured sources can be used to learn an ontology, most methods only use textual sources for the learning process. All methods and tools have a strong relationship to the type of processing performed. 
In summary, the ontology learning field puts a number of research activities together which focus on different types of knowledge and information sources, but share the target of a common domain conceptualisation. Ontology learning is a complex multidisciplinary field that draws on natural language processing, text and web data extraction, machine learning, and ontology engineering. 
OntoSeg  Text segmentation (TS) aims at dividing long text into coherent segments which reflect the subtopic structure of the text. It is beneficial to many natural language processing tasks, such as Information Retrieval (IR) and document summarisation. Current approaches to text segmentation are similar in that they all use word-frequency metrics to measure the similarity between two regions of text, so that a document is segmented based on the lexical cohesion between its words. Various NLP tasks are now moving towards the semantic web and ontologies, such as ontology-based IR systems, to capture the conceptualizations associated with user needs and contents. Text segmentation based on lexical cohesion between words is hence not sufficient anymore for such tasks. This paper proposes OntoSeg, a novel approach to text segmentation based on the ontological similarity between text blocks. The proposed method uses ontological similarity to explore conceptual relations between text segments and a Hierarchical Agglomerative Clustering (HAC) algorithm to represent the text as a tree-like hierarchy that is conceptually structured. The rich structure of the created tree further allows the segmentation of text in a linear fashion at various levels of granularity. The proposed method was evaluated on a well-known dataset, and the results show that using ontological similarity in text segmentation is very promising. We also enhance the proposed method by combining ontological similarity with lexical similarity, and the results show an improvement in segmentation quality. 
Open Data  Open data is the idea that certain data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control. The goals of the open data movement are similar to those of other “open” movements such as open source, open hardware, open content, and open access. The philosophy behind open data has been long established (for example in the Mertonian tradition of science), but the term “open data” itself is recent, gaining popularity with the rise of the Internet and World Wide Web and, especially, with the launch of open-data government initiatives such as Data.gov and Data.gov.uk. 
Open Data Center Alliance (ODCA) 
The Open Data Center Alliance is focused on the widespread adoption of enterprise cloud computing through best-practice sharing and collaboration with the industry on the availability of solution choice based on ODCA requirements. From its inception to today, the Alliance has seen a maturation of the cloud marketplace. To meet this new stage of enterprise cloud readiness, the ODCA has announced a new organizational charter. This new charter has driven the creation of the ODCA Cloud Expert Network and workgroups to deliver work focused on this charter. 
Open Data Platform (ODP) 
The Open Data Platform Initiative (ODP) is a shared industry effort focused on promoting and advancing the state of Apache Hadoop and Big Data technologies for the enterprise, enabling Big Data solutions to flourish atop a common core platform. The current ecosystem is challenged and slowed by fragmented and duplicated efforts. The ODP Core will take the guesswork out of the process and accelerate many use cases by running on a common platform, freeing up enterprises and ecosystem vendors to focus on building business-driven applications. 
Open Domain INformer (ODIN) 
Rule-based information extraction (IE) has long enjoyed wide adoption throughout industry, though it has remained largely ignored in academia, in favor of machine learning (ML) methods (Chiticariu et al., 2013). However, rule-based systems have several advantages over pure ML systems, including: (a) the rules are interpretable and thus suitable for rapid development and/or domain transfer; and (b) humans and machines can contribute to the same model. Why then have such systems failed to hold the attention of the academic community? One argument raised by Chiticariu et al. is that, despite notable previous efforts (Appelt and Onyshkevych, 1998; Levy and Andrew, 2006; Hunter et al., 2008; Cunningham et al., 2011; Chang and Manning, 2014), there is not a standard language for this task, or a “standard way to express rules”, which raises the entry cost for new rule-based systems. ODIN aims to address these issues with a new language and framework. We follow the simplicity principles promoted by other natural language processing toolkits, such as Stanford’s CoreNLP, which aim to “avoid over-design”, “do one thing well”, and have a user “up and running in ten minutes or less” (Manning et al., 2014). 
Open Neural Network Exchange (ONNX) 
ONNX is an open format to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools and choose the combination that is best for them. ONNX is developed and supported by a community of partners. 
Open Speech and Music Interpretation by Large Space Extraction (openSMILE) 
The openSMILE feature extraction tool enables you to extract large audio feature spaces and apply machine learning methods to classify and analyze your data in real time. It combines features from Music Information Retrieval and Speech Processing. SMILE is an acronym for Speech & Music Interpretation by Large-space Extraction. It is written in C++ and is available as both a standalone command-line executable and a dynamic library. The main features of openSMILE are its capability for online incremental processing and its modularity. Feature extractor components can be freely interconnected to create new and custom features, all via a simple configuration file. New components can be added to openSMILE via an easy binary plugin interface and a comprehensive API. http://…tureextractoratutorialforversion21 http://…/citation.cfm?id=1874246 
Open Web Analytics (OWA) 
Open Web Analytics (OWA) is open source web analytics software that you can use to track and analyze how people use your websites and applications. OWA is licensed under the GPL and provides website owners and developers with easy ways to add web analytics to their sites using simple JavaScript, PHP, or REST-based APIs. OWA also comes with built-in support for tracking websites made with popular content management frameworks such as WordPress and MediaWiki. 
OpenAI  OpenAI is a nonprofit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. In the short term, we’re building on recent advances in AI research and working towards the next set of breakthroughs. 
OpenAI Gym  OpenAI Gym is a toolkit for reinforcement learning research. It includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms. This whitepaper discusses the components of OpenAI Gym and the design decisions that went into the software. 
OpenBLAS  OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version. 
openCV  OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. It is free for use under the open-source BSD license. The library is cross-platform. It focuses mainly on real-time image processing. 
OpenFace  OpenFace is a Python and Torch implementation of face recognition with deep neural networks and is based on the CVPR 2015 paper FaceNet: A Unified Embedding for Face Recognition and Clustering by Florian Schroff, Dmitry Kalenichenko, and James Philbin at Google. Torch allows the network to be executed on a CPU or with CUDA. 
OpenGIS Simple Features Reference Implementation (OGR) 
OGR used to stand for OpenGIS Simple Features Reference Implementation. However, since OGR is not fully compliant with the OpenGIS Simple Feature specification and is not approved as a reference implementation of the spec the name was changed to OGR Simple Features Library. The only meaning of OGR in this name is historical. OGR is also the prefix used everywhere in the source of the library for class names, filenames, etc. 
OpenLava  OpenLava is an open source workload job scheduling software for a cluster of computers. OpenLava was derived from an early version of Platform LSF. Its configuration file syntax, API, and CLI have been kept unchanged. Therefore OpenLava is mostly compatible with Platform LSF. OpenLava was based on the Utopia research project at the University of Toronto. OpenLava is licensed under GNU General Public License v2. http://www.openlava.org 
OpenMarkov  OpenMarkov is a software tool for probabilistic graphical models (PGMs) developed by the Research Centre for Intelligent Decision-Support Systems of the UNED in Madrid, Spain. It has been designed for: • editing and evaluating several types of PGMs, such as Bayesian networks, influence diagrams, factored Markov models, etc.; • learning Bayesian networks from data interactively; • cost-effectiveness analysis. ➘ “Probabilistic Graphical Model” 
OpenMx  OpenMx is an open source program for extended structural equation modeling (SEM). It runs as a package under R. Cross-platform, it runs under Linux, Mac OS and Windows. OpenMx consists of an R library of functions and optimizers supporting the rapid and flexible implementation and estimation of SEM models. Models can be estimated based on either raw data (with FIML modelling) or on correlation or covariance matrices. Models can handle mixtures of continuous and ordinal data. The current version is OpenMx 2, and is available on CRAN. Path analysis, confirmatory factor analysis, latent growth modeling and mediation analysis are all implemented, and multiple-group models are supported readily. When a model is run, it returns a model, and models can be updated (adding and removing paths, adding constraints and equalities; giving parameters the same label equates them). An innovation is that labels can consist of the addresses of other parameters, allowing easy implementation of constraints on parameters by address. RAM models return standardized and raw estimates, as well as a range of fit indices (AIC, RMSEA, TLI, CFI, etc.). Confidence intervals are estimated robustly. The program has parallel processing built in via links to parallel environments in R, and in general takes advantage of the R programming environment. Users can expand the package with functions; these have been used, for instance, to implement modification indices. Models can be written in either a ‘pathic’ or ‘matrix’ form. For those who think in terms of path models, paths are specified using mxPath(). For models that are better suited to description in terms of matrix algebra, this is done using similar functional extensions in the R environment, for instance mxMatrix and mxAlgebra. OpenMx, ifaTools 
OpenNMT  OpenNMT is an open-source toolkit for neural machine translation (NMT). The system prioritizes efficiency, modularity, and extensibility with the goal of supporting NMT research into model architectures, feature representations, and source modalities, while maintaining competitive performance and reasonable training requirements. The toolkit consists of modeling and translation support, as well as detailed pedagogical documentation about the underlying techniques. OpenNMT has been used in several production MT systems, modified for numerous research papers, and is implemented across several deep learning frameworks. 
OpenRefine  OpenRefine (formerly Google Refine) is a powerful tool for working with messy data: cleaning it; transforming it from one format into another; extending it with web services; and linking it to databases like Freebase. Please note that since October 2nd, 2012, Google is not actively supporting this project, which has now been rebranded to OpenRefine. Project development, documentation and promotion is now fully supported by volunteers. Find out more about the history of OpenRefine and how you can help the community. rrefine 
Open-Source Toolkit for Neural Machine Translation (openNMT) 
We describe an open-source toolkit for neural machine translation (NMT). The toolkit prioritizes efficiency, modularity, and extensibility with the goal of supporting NMT research into model architectures, feature representations, and source modalities, while maintaining competitive performance and reasonable training requirements. The toolkit consists of modeling and translation support, as well as detailed pedagogical documentation about the underlying techniques. 
OpenStreetMap (OSM) 
OpenStreetMap (OSM) is a collaborative project to create a free editable map of the world. Two major driving forces behind the establishment and growth of OSM have been restrictions on the use or availability of map information across much of the world and the advent of inexpensive portable satellite navigation devices. Created by Steve Coast in the UK in 2004, it was inspired by the success of Wikipedia and the preponderance of proprietary map data in the UK and elsewhere. Since then, it has grown to over 1.6 million registered users, who can collect data using manual survey, GPS devices, aerial photography, and other free sources. This crowdsourced data is then made available under the Open Database License. The site is supported by the OpenStreetMap Foundation, a nonprofit organization registered in England. Rather than the map itself, the data generated by the OpenStreetMap project is considered its primary output. This data is available for use both in traditional applications, such as by Craigslist, OsmAnd, Geocaching, MapQuest Open, JMP statistical software, and Foursquare to replace Google Maps, and in more unusual roles, like replacing default data included with GPS receivers. This data has been favourably compared with proprietary data sources, though data quality varies worldwide. http://…tmapvisualizationcasestudysamplecode 
OpenTSDB  OpenTSDB is a distributed, scalable Time Series Database (TSDB) written on top of HBase. OpenTSDB was written to address a common need: store, index and serve metrics collected from computer systems (network gear, operating systems, applications) at a large scale, and make this data easily accessible and graphable. Thanks to HBase’s scalability, OpenTSDB allows you to collect thousands of metrics from tens of thousands of hosts and applications, at a high rate (every few seconds). OpenTSDB will never delete or downsample data and can easily store hundreds of billions of data points. OpenTSDB is free software and is available under both LGPLv2.1+ and GPLv3+. Find more about OpenTSDB at http://opentsdb.net 
Operational Analytics  Operational analytics is a more specific term for a type of business analytics which focuses on improving existing operations. This type of business analytics, like others, involves the use of various data mining and data aggregation tools to get more transparent information for business planning. 
Operational Intelligence (OI) 
Operational intelligence (OI) is a category of real-time, dynamic business analytics that delivers visibility and insight into data, streaming events and business operations. Operational Intelligence solutions run queries against streaming data feeds and event data to deliver real-time analytic results as operational instructions. Operational Intelligence gives organizations the ability to make decisions and immediately act on these analytic insights, through manual or automated actions. 
Operational Modal Analysis (OMA) 
Ambient modal identification, also known as Operational Modal Analysis (OMA), aims at identifying the modal properties of a structure based on vibration data collected when the structure is under its operating conditions, i.e., no initial excitation or known artificial excitation. The modal properties of a structure include primarily the natural frequencies, damping ratios and mode shapes. In an ambient vibration test the subject structure can be under a variety of excitation sources which are not measured but are assumed to be ‘broadband random’. The latter is a notion that one needs to apply when developing an ambient identification method. The specific assumptions vary from one method to another. Regardless of the method used, however, proper modal identification requires that the spectral characteristics of the measured response reflect the properties of the modes rather than those of the excitation. 
Operations Research (OR) 
Operations research, or operational research in British usage, is a discipline that deals with the application of advanced analytical methods to help make better decisions. It is often considered to be a subfield of mathematics. The terms management science and decision science are sometimes used as synonyms. Employing techniques from other mathematical sciences, such as mathematical modeling, statistical analysis, and mathematical optimization, operations research arrives at optimal or near-optimal solutions to complex decision-making problems. Because of its emphasis on human-technology interaction and because of its focus on practical applications, operations research has overlap with other disciplines, notably industrial engineering and operations management, and draws on psychology and organization science. Operations research is often concerned with determining the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost) of some real-world objective. Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries. 
Opinion Pool  
Opportunistic Sensing  
Optical Neural Network (ONN) 
We develop a novel optical neural network (ONN) framework which introduces a degree of scalar invariance to image classification estimation. Taking a hint from the human eye, which has higher resolution near the center of the retina, images are broken out into multiple levels of varying zoom based on a focal point. Each level is passed through an identical convolutional neural network (CNN) in a Siamese fashion, and the results are recombined to produce a high accuracy estimate of the object class. ONNs act as a wrapper around existing CNNs, and can thus be applied to many existing algorithms to produce notable accuracy improvements without having to change the underlying architecture. 
Optimal Control Theory  Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice for studying the neural control of movement, in much the same way that probabilistic inference is emerging as the computational framework of choice for studying sensory information processing. Despite the growing popularity of optimal control models, however, the elaborate mathematical machinery behind them is rarely exposed and the big picture is hard to grasp without reading a few technical books on the subject. 
Optimal Matching Analysis (OMA) 
Optimal matching is a sequence analysis method used in social science to assess the dissimilarity of ordered arrays of tokens that usually represent a time-ordered sequence of socioeconomic states two individuals have experienced. Once such distances have been calculated for a set of observations (e.g. individuals in a cohort), classical tools (such as cluster analysis) can be used. The method was tailored to the social sciences from a technique originally introduced to study molecular biology (protein or genetic) sequences. Optimal matching uses the Needleman-Wunsch algorithm. 
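A minimal sketch of the underlying Needleman-Wunsch dynamic programme (the function name and costs are illustrative; social-science applications typically use state-specific substitution costs):

```python
def optimal_matching_distance(a, b, indel=1.0, sub=2.0):
    """Edit distance between two state sequences via the Needleman-Wunsch
    dynamic programme: insertions/deletions cost `indel`, substitutions `sub`."""
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel
    for j in range(1, m + 1):
        d[0][j] = j * indel
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + indel,      # delete from a
                          d[i][j - 1] + indel,      # insert into a
                          d[i - 1][j - 1] + cost)   # match / substitute
    return d[n][m]

# Employment histories coded as monthly states (E = employed, U = unemployed):
print(optimal_matching_distance("EEUUE", "EEUEE"))  # one substitution: 2.0
```

The resulting pairwise distance matrix is what downstream tools such as cluster analysis operate on.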
Optimistic Optimization  OOR 
Optimization  In mathematics, computer science, economics, or management science, mathematical optimization (alternatively, optimization or mathematical programming) is the selection of a best element (with regard to some criterion) from some set of available alternatives. In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. More generally, optimization includes finding ‘best available’ values of some objective function given a defined domain (or a set of constraints), including a variety of different types of objective functions and different types of domains. 
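In the simplest case described above, an optimizer systematically chooses input values to minimize a real function; a toy gradient-descent sketch (the step size and iteration count are arbitrary choices for the demo, not a general-purpose solver):

```python
def minimize(f, grad, x0, lr=0.1, steps=200):
    """Minimise a differentiable function of one variable by gradient
    descent. Real solvers add line search, tolerances, and safeguards."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step against the gradient
    return x

f = lambda x: (x - 3.0) ** 2 + 1.0
grad = lambda x: 2.0 * (x - 3.0)
x_star = minimize(f, grad, x0=0.0)
print(x_star, f(x_star))  # converges to the minimiser x = 3, value 1
```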
Optunity  Optunity is a free software package for hyperparameter search in the context of machine learning developed at STADIUS. GitXiv 
OPUS Miner  OPUS Miner is an open source implementation of the OPUS Miner algorithm, which applies OPUS search for Filtered Top-k Association Discovery of Self-Sufficient Itemsets. OPUS Miner finds self-sufficient itemsets: these are an effective way of summarizing the key associations in high-dimensional data. opusminer 
Order Statistic  In statistics, the kth order statistic of a statistical sample is equal to its kth smallest value. Together with rank statistics, order statistics are among the most fundamental tools in nonparametric statistics and inference. Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications) the sample median and other sample quantiles. When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution. 
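Computing a kth order statistic amounts to indexing into the sorted sample; a small illustration covering the special cases mentioned above:

```python
def order_statistic(sample, k):
    """Return the kth order statistic: the kth smallest value (1-indexed)."""
    if not 1 <= k <= len(sample):
        raise ValueError("k must be between 1 and len(sample)")
    return sorted(sample)[k - 1]

xs = [7, 2, 9, 4, 4, 1]
print(order_statistic(xs, 1))          # first order statistic = minimum -> 1
print(order_statistic(xs, len(xs)))    # nth order statistic = maximum -> 9
print(order_statistic(xs, 3))          # third smallest -> 4
```

Sorting costs O(n log n); selection algorithms such as quickselect can find a single order statistic in expected O(n) time.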
Ordered Decision Diagram (ODD) 
A Symbolic Approach to Explaining Bayesian Network Classifiers 
Ordered Weighted Averaging Aggregation Operator (OWA) 
In applied mathematics – specifically in fuzzy logic – the ordered weighted averaging (OWA) operators provide a parameterized class of mean-type aggregation operators. They were introduced by Ronald R. Yager. Many notable mean operators, such as the max, arithmetic average, median and min, are members of this class. They have been widely used in computational intelligence because of their ability to model linguistically expressed aggregation instructions. 
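A minimal implementation: sort the arguments in decreasing order, then apply a fixed weight vector. Choosing the weights recovers the max, min and arithmetic mean as the entry notes:

```python
def owa(weights, values):
    """Ordered weighted averaging: sort the values in decreasing order,
    then take the weighted sum with the fixed weight vector."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

vals = [0.6, 0.9, 0.2]
print(owa([1.0, 0.0, 0.0], vals))        # weight on the largest: max -> 0.9
print(owa([0.0, 0.0, 1.0], vals))        # weight on the smallest: min -> 0.2
print(owa([1/3, 1/3, 1/3], vals))        # equal weights: arithmetic mean
```

The weights attach to *positions* in the sorted order, not to particular arguments, which is what distinguishes OWA from an ordinary weighted mean.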
Ordering points to identify the clustering structure (OPTICS) 
Ordering points to identify the clustering structure (OPTICS) is an algorithm for finding density-based clusters in spatial data. It was presented by Mihael Ankerst, Markus M. Breunig, Hans-Peter Kriegel and Jörg Sander. Its basic idea is similar to DBSCAN, but it addresses one of DBSCAN’s major weaknesses: the problem of detecting meaningful clusters in data of varying density. In order to do so, the points of the database are (linearly) ordered such that points which are spatially closest become neighbors in the ordering. Additionally, a special distance is stored for each point that represents the density that needs to be accepted for a cluster in order to have both points belong to the same cluster. This is represented as a dendrogram. 
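The special per-point distance OPTICS stores is built from two quantities, the core distance and the reachability distance; a minimal sketch of just these definitions (not a full OPTICS ordering, and the function names are mine):

```python
import math

def dist(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def core_distance(points, p, min_pts, eps):
    """Distance from p to its min_pts-th nearest neighbour, if within eps;
    None means p is not a core point."""
    ds = sorted(dist(p, q) for q in points if q != p)
    if len(ds) < min_pts or ds[min_pts - 1] > eps:
        return None
    return ds[min_pts - 1]

def reachability_distance(points, p, o, min_pts, eps):
    """Reachability of p w.r.t. core point o: max(core-dist(o), dist(o, p))."""
    cd = core_distance(points, o, min_pts, eps)
    if cd is None:
        return None
    return max(cd, dist(o, p))

pts = [(0, 0), (0, 1), (1, 0), (10, 10)]
print(core_distance(pts, (0, 0), min_pts=2, eps=5.0))            # 1.0
print(reachability_distance(pts, (10, 10), (0, 0), 2, 5.0))      # ~14.14
```

In the full algorithm these reachability values, plotted in processing order, form the reachability plot from which clusters of varying density are extracted.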
Ordinal Data Clustering  ordinalClust 
Ordinal Forests (OF) 
Ordinal forests (OF) are a method for ordinal regression with high-dimensional and low-dimensional data that is able to predict the values of the ordinal target variable for new observations and at the same time estimate the relative widths of the classes of the ordinal target variable. Using a (permutation-based) variable importance measure it is moreover possible to rank the importances of the covariates. ordinalForest 
Ordinal Pooling Network (OPN) 
In the framework of convolutional neural networks that lie at the heart of deep learning, downsampling is often performed with a max-pooling operation that completely discards the information from the other activations in a pooling region. To address this issue, a novel pooling scheme, the Ordinal Pooling Network (OPN), is introduced in this work. An OPN rearranges all the elements of a pooling region in a sequence and assigns different weights to the elements based upon their orders in the sequence, where the weights are learned via gradient-based optimisation. The results of our small-scale experiments on an image classification task on the MNIST database demonstrate that this scheme leads to a consistent improvement in accuracy over the max-pooling operation. This improvement is expected to increase in deep networks, where several layers of pooling become necessary. 
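The pooling scheme itself is simple to sketch: sort the activations of a pooling region and take a weighted sum. In an OPN the weights are learned; here they are fixed by hand only to show the special cases:

```python
def ordinal_pool(region, weights):
    """Weighted sum over the sorted (descending) activations of a pooling
    region. In an OPN the weights are learned; here they are fixed."""
    ordered = sorted(region, reverse=True)
    return sum(w * a for w, a in zip(weights, ordered))

region = [0.1, 0.8, 0.3, 0.5]  # a 2x2 pooling region, flattened
print(ordinal_pool(region, [1.0, 0.0, 0.0, 0.0]))  # reduces to max pooling -> 0.8
print(ordinal_pool(region, [0.25] * 4))            # reduces to average pooling
print(ordinal_pool(region, [0.6, 0.3, 0.1, 0.0]))  # a learned-like, max-biased mix
```

Because max and average pooling are both special cases, a learned weight vector can interpolate between them per layer.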
Oriented Response Network (ORN) 
Deep Convolutional Neural Networks (DCNNs) are capable of learning unprecedentedly effective image representations. However, their ability in handling significant local and global image rotations remains limited. In this paper, we propose Active Rotating Filters (ARFs) that actively rotate during convolution and produce feature maps with location and orientation explicitly encoded. An ARF acts as a virtual filter bank containing the filter itself and its multiple unmaterialised rotated versions. During backpropagation, an ARF is collectively updated using errors from all its rotated versions. DCNNs using ARFs, referred to as Oriented Response Networks (ORNs), can produce within-class rotation-invariant deep features while maintaining inter-class discrimination for classification tasks. The oriented response produced by ORNs can also be used for image and object orientation estimation tasks. Over multiple state-of-the-art DCNN architectures, such as VGG, ResNet, and STN, we consistently observe that replacing regular filters with the proposed ARFs leads to a significant reduction in the number of network parameters and an improvement in classification performance. We report the best results on several commonly used benchmarks. 
Orthant Probabilities  
Orthogonal Array (OA) 
In mathematics, in the area of combinatorial designs, an orthogonal array is a ‘table’ (array) whose entries come from a fixed finite set of symbols (typically, {1,2,…,n}), arranged in such a way that there is an integer t so that for every selection of t columns of the table, all ordered t-tuples of the symbols, formed by taking the entries in each row restricted to these columns, appear the same number of times. The number t is called the strength of the orthogonal array. The Orthogonal Array Package oapackage 
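The strength-t property can be checked directly from the definition; a small sketch using a classic 4-run, 3-factor, 2-symbol array (function name is mine):

```python
from itertools import combinations, product
from collections import Counter

def is_orthogonal_array(rows, symbols, t):
    """Check strength t: in every choice of t columns, every ordered
    t-tuple of symbols must occur equally often."""
    n_cols = len(rows[0])
    for cols in combinations(range(n_cols), t):
        counts = Counter(tuple(row[c] for c in cols) for row in rows)
        expected = len(rows) / len(symbols) ** t
        if any(counts[tup] != expected for tup in product(symbols, repeat=t)):
            return False
    return True

# Strength-2 orthogonal array: 4 runs, 3 factors, 2 symbols
oa = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_orthogonal_array(oa, (0, 1), t=2))  # True: every column pair is balanced
print(is_orthogonal_array(oa, (0, 1), t=3))  # False: 8 triples, only 4 runs
```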
Orthogonal Nonlinear LeastSquares Regression (ONLS) 
Orthogonal nonlinear least squares (ONLS) is an infrequently applied and perhaps overlooked regression technique that comes into question when one encounters an “error in variables” problem. While classical nonlinear least squares (NLS) aims to minimize the sum of squared vertical residuals, ONLS minimizes the sum of squared orthogonal residuals. The method is based on finding, for each data point, the point on the fitted curve that minimizes the Euclidean distance to it, i.e., its orthogonal projection onto the curve. onls 
Orthogonal Regression  Total least squares, also known as rigorous least squares and (in a special case) orthogonal regression, is a type of errors-in-variables regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. It is a generalization of Deming regression, and can be applied to both linear and nonlinear models. The total least squares approximation of the data is generically equivalent to the best, in the Frobenius norm, low-rank approximation of the data matrix. 
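For the two-dimensional linear case, the orthogonal-regression line has a closed form: its direction is the principal axis of the sample covariance, whose angle satisfies tan(2θ) = 2S_xy / (S_xx − S_yy). A sketch (function name is mine; degenerate cases such as a vertical line are not handled):

```python
import math

def orthogonal_regression(points):
    """Fit a line minimising orthogonal (perpendicular) distances,
    via the closed form for the 2-D case. Returns (slope, intercept)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    # angle of the direction of largest variance (principal eigenvector)
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    slope = math.tan(theta)
    return slope, my - slope * mx

pts = [(0, 0.1), (1, 1.9), (2, 4.1), (3, 5.9), (4, 8.0)]
slope, intercept = orthogonal_regression(pts)
print(slope, intercept)  # close to the generating line y = 2x
```

Unlike ordinary least squares, the result is symmetric in x and y: swapping the coordinates yields the reciprocal slope.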
Oscillatory Recurrent GAted Neural Integrator Circuits (ORGaNICs) 
Working memory is a cognitive process that is responsible for temporarily holding and manipulating information. Most of the empirical neuroscience research on working memory has focused on measuring sustained activity in prefrontal cortex (PFC) and/or parietal cortex during simple delayed-response tasks, and most of the models of working memory have been based on neural integrators. But working memory means much more than just holding a piece of information online. We describe a new theory of working memory, based on a recurrent neural circuit that we call ORGaNICs (Oscillatory Recurrent GAted Neural Integrator Circuits). ORGaNICs are a variety of Long Short-Term Memory units (LSTMs), imported from machine learning and artificial intelligence. ORGaNICs can be used to explain the complex dynamics of delay-period activity in prefrontal cortex (PFC) during a working memory task. The theory is analytically tractable so that we can characterize the dynamics, and the theory provides a means for reading out information from the dynamically varying responses at any point in time, in spite of the complex dynamics. ORGaNICs can be implemented with a biophysical (electrical circuit) model of pyramidal cells, combined with shunting inhibition via a thalamocortical loop. Although introduced as a computational theory of working memory, ORGaNICs are also applicable to models of sensory processing, motor preparation and motor control. ORGaNICs offer computational advantages compared to other varieties of LSTMs that are commonly used in AI applications. Consequently, ORGaNICs are a framework for canonical computation in brains and machines. 
OSEMN Process (OSEMN) 
We’ve variously heard it said that data science requires some command-line fu for data procurement and preprocessing, or that one needs to know some machine learning or stats, or that one should know how to ‘look at data’. All of these are partially true, so we thought it would be useful to propose one possible taxonomy of what a data scientist does, in roughly chronological order: • Obtain • Scrub • Explore • Model • iNterpret (or, if you like, OSEMN, which rhymes with possum). Using the OSEMN Process to Work Through a Data Problem 
Outlier  In statistics, an outlier is an observation point that is distant from other observations. An outlier may be due to variability in the measurement or it may indicate experimental error; the latter are sometimes excluded from the data set. 
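One common operational rule for flagging outliers (a sketch of Tukey's fences, not the only possible definition) marks points outside [Q1 − 1.5·IQR, Q3 + 1.5·IQR]:

```python
import numpy as np

def iqr_outliers(data, k=1.5):
    """Flag points outside Tukey's fences: [Q1 - k*IQR, Q3 + k*IQR]."""
    data = np.asarray(data, float)
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return data[(data < lo) | (data > hi)]

outs = iqr_outliers([1, 2, 2, 3, 3, 3, 4, 4, 5, 42])  # only 42 is flagged
```

Whether a flagged point is then excluded should depend on whether it reflects genuine variability or a measurement/experimental error, as the entry notes.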
Out-of-core Algorithm  Out-of-core or external memory algorithms are algorithms that are designed to process data that is too large to fit into a computer’s main memory at one time. Such algorithms must be optimized to efficiently fetch and access data stored in slow bulk memory such as hard drives or tape drives. A typical example is geographic information systems, especially digital elevation models, where the full data set easily exceeds several gigabytes or even terabytes of data. This notion naturally extends to a network connecting a data server to a treatment or visualization workstation. Popular data-heavy web applications such as Google Maps or Google Earth fall under this topic. It also extends to GPU computing – utilizing powerful graphics cards with little memory (compared to CPU memory) and slow CPU-GPU memory transfer (compared to computation bandwidth). 
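The basic pattern is to stream the data in fixed-size chunks so that only one chunk is ever resident in memory. A minimal sketch of an out-of-core running mean (here the chunks come from a generator, but they could equally be reads from a file on disk or a network stream):

```python
import numpy as np

def chunked_mean(chunks):
    """Mean of a dataset processed one chunk at a time."""
    total, count = 0.0, 0
    for chunk in chunks:
        arr = np.asarray(chunk, float)
        total += arr.sum()    # accumulate running sums, not the data itself
        count += arr.size
    return total / count

# Simulate ten chunks of 1000 values each (0..9999 overall).
chunks = (np.arange(i, i + 1000) for i in range(0, 10_000, 1000))
m = chunked_mean(chunks)
```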
Out-of-Distribution Detector for Neural Networks (ODIN) 
We consider the problem of detecting out-of-distribution examples in neural networks. We propose ODIN, a simple and effective out-of-distribution detector for neural networks, that does not require any change to a pre-trained model. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution samples, allowing for more effective detection. We show in a series of experiments that our approach is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach [1] by a large margin, establishing new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on DenseNet (applied to CIFAR-10) when the true positive rate is 95%. We theoretically analyze the method and prove that performance improvement is guaranteed under mild conditions on the image distributions. 
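A toy sketch of the temperature-scaling ingredient on a made-up logit vector (the input-perturbation step and the detection threshold are omitted; this only illustrates how dividing logits by a large temperature T flattens the softmax score that ODIN thresholds on):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over a logit vector."""
    z = np.asarray(z, float) / T
    e = np.exp(z - z.max())      # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([5.0, 2.0, 1.0])        # illustrative logits
s1 = softmax(logits, T=1.0).max()         # sharp distribution, high max score
s1000 = softmax(logits, T=1000.0).max()   # near-uniform: max -> 1/num_classes
```

The key empirical observation is that, after scaling (and a small gradient-based input perturbation), in-distribution inputs retain higher maximum softmax scores than out-of-distribution inputs, so a simple threshold separates them.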
Out-of-sample Testing  Out-of-sample testing evaluates a model on data that was not used to fit it, in order to estimate how well the model generalizes beyond the training sample. 
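A minimal sketch of out-of-sample evaluation: fit a line on a training split of synthetic data, then measure error only on a held-out split the model never saw (the data and split sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 100)   # truth: y = 2x + 1 plus noise

train, test = slice(0, 80), slice(80, 100)    # hold out the last 20 points
slope, intercept = np.polyfit(x[train], y[train], 1)

pred = slope * x[test] + intercept
oos_rmse = float(np.sqrt(np.mean((pred - y[test]) ** 2)))  # out-of-sample error
```

Training-set error alone would understate the true error, since the fit has adapted to the training noise; the held-out RMSE is the honest estimate.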
Output Range Analysis  Deep neural networks (NN) are extensively used for machine learning tasks such as image classification, perception and control of autonomous systems. Increasingly, these deep NNs are also being deployed in high-assurance applications. Thus, there is a pressing need for developing techniques to verify neural networks to check whether certain user-expected properties are satisfied. In this paper, we study a specific verification problem of computing a guaranteed range for the output of a deep neural network given a set of inputs represented as a convex polyhedron. Range estimation is a key primitive for verifying deep NNs. We present an efficient range estimation algorithm that uses a combination of local search and linear programming problems to efficiently find the maximum and minimum values taken by the outputs of the NN over the given input set. In contrast to recently proposed ‘monolithic’ optimization approaches, we use local gradient descent to repeatedly find and eliminate local minima of the function. The final global optimum is certified using a mixed integer programming instance. We implement our approach and compare it with Reluplex, a recently proposed solver for deep neural networks. We demonstrate the effectiveness of the proposed approach for verification of NNs used in automated control as well as those used in classification. 
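The paper's algorithm combines local search with LP/MIP certification; as a much simpler illustration of the range-estimation primitive itself, here is naive interval-bound propagation through a tiny ReLU network. It produces sound but generally looser bounds than the paper's method, and the weights are made up for the example:

```python
import numpy as np

def interval_forward(W_list, b_list, lo, hi):
    """Propagate an input box [lo, hi] through a ReLU network, returning
    a guaranteed (if loose) interval containing every possible output."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    for i, (W, b) in enumerate(zip(W_list, b_list)):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        # Positive weights pair lo with lo; negative weights swap the bounds.
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(W_list) - 1:          # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

W1 = np.array([[1.0, -1.0], [0.5, 0.5]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.0])
lo, hi = interval_forward([W1, W2], [b1, b2], lo=[0, 0], hi=[1, 1])
# Here the true output range over the input box is [0, 1.5], so the
# computed [0, 2] bound is sound but not tight -- the gap that the
# paper's LP/MIP refinement is designed to close.
```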
Outranking Methods (OM) 
A classical problem in the field of Multiple Criteria Decision Making (MCDM) is to build a preference relation on a set of multi-attributed alternatives on the basis of preferences expressed on each attribute and inter-attribute information such as weights. Based on this preference relation (or, more generally, on various relations obtained following a robustness analysis) a recommendation is elaborated (e.g. exhibiting a subset likely to contain the best alternatives). OutrankingTools 
Overconfidence Effect  The overconfidence effect is a well-established bias in which someone’s subjective confidence in their judgments is reliably greater than their objective accuracy, especially when confidence is relatively high. For example, in some quizzes, people rate their answers as “99% certain” but are wrong 40% of the time. It has been proposed that a metacognitive trait mediates the accuracy of confidence judgments, but this trait’s relationship to variations in cognitive ability and personality remains uncertain. Overconfidence is one example of a miscalibration of subjective probabilities. 
Overdispersion  In statistics, overdispersion is the presence of greater variability (statistical dispersion) in a data set than would be expected based on a given statistical model. A common task in applied statistics is choosing a parametric model to fit a given set of empirical observations. This necessitates an assessment of the fit of the chosen model. It is usually possible to choose the model parameters in such a way that the theoretical population mean of the model is approximately equal to the sample mean. However, especially for simple models with few parameters, theoretical predictions may not match empirical observations for higher moments. When the observed variance is higher than the variance of a theoretical model, overdispersion has occurred. Conversely, underdispersion means that there was less variation in the data than predicted. Overdispersion is a very common feature in applied data analysis because in practice, populations are frequently heterogeneous (non-uniform) contrary to the assumptions implicit within widely used simple parametric models. 
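A quick numerical check for count data (a sketch; the distributions and parameters are illustrative): under a Poisson model the variance equals the mean, so a dispersion index (variance/mean) well above 1 signals overdispersion.

```python
import numpy as np

rng = np.random.default_rng(1)
poisson = rng.poisson(5.0, 10_000)
# Negative-binomial counts with the same mean (5) but variance 17.5:
# the same kind of data, with extra, unmodeled heterogeneity.
overdispersed = rng.negative_binomial(2, 2 / 7, 10_000)

d_poisson = poisson.var() / poisson.mean()        # close to 1
d_over = overdispersed.var() / overdispersed.mean()  # well above 1
```

In practice an overdispersed count model (e.g. negative binomial or quasi-Poisson) is preferred once the dispersion index is materially above 1.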
Overfitting  In statistics and machine learning, overfitting occurs when a statistical model describes random error or noise instead of the underlying relationship. Overfitting generally occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model which has been overfit will generally have poor predictive performance, as it can exaggerate minor fluctuations in the data. 
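The classic demonstration is polynomial regression with too many parameters for the data. A sketch (the data, noise level, and degrees are illustrative): ten noisy points from a straight line, fit with a line versus a degree-9 polynomial that can pass through every point.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.05, 10)   # truth is y = x, plus noise
x_test = np.linspace(0.05, 0.95, 50)          # fresh points on the true line
y_test = x_test

def fit_rmse(deg, x_eval, y_eval):
    """RMSE of a degree-`deg` polynomial (fit on the training set) at x_eval."""
    coeffs = np.polyfit(x_train, y_train, deg)
    pred = np.polyval(coeffs, x_eval)
    return float(np.sqrt(np.mean((pred - y_eval) ** 2)))

train_err_1 = fit_rmse(1, x_train, y_train)
train_err_9 = fit_rmse(9, x_train, y_train)   # near zero: memorizes the noise
test_err_1 = fit_rmse(1, x_test, y_test)
test_err_9 = fit_rmse(9, x_test, y_test)      # typically far worse than the line
```

The degree-9 model's training error is essentially zero precisely because it has described the random noise rather than the underlying relationship, which is what degrades its predictive performance on new points.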
Overlapping K-Means (OKM) 
Cleuziou, G. (2007) <doi:10.1109/icpr.2008.4761079> COveR 
Owl  Owl is a new numerical library developed in the OCaml language. It focuses on providing a comprehensive set of high-level numerical functions so that developers can quickly build up data analytical applications. In this abstract, we will present Owl’s design, core components, and its key functionality. 