
WhatIs-O

Objective Function A function that is to be optimized (minimized or maximized, depending on the particular task or problem). For example, an objective function in a pattern classification task could be the error rate of a classifier, which is to be minimized.
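A minimal sketch (with made-up scores and labels) of such an objective for a toy classification task: the error rate of a threshold classifier, minimized here by a simple grid search:

```python
import numpy as np

def error_rate(threshold, scores, labels):
    """Objective function: fraction of misclassified examples for a
    simple threshold classifier (to be minimized)."""
    predictions = (scores >= threshold).astype(int)
    return np.mean(predictions != labels)

# Toy data (hypothetical): scores from some model, true binary labels.
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])
labels = np.array([0, 0, 1, 1, 1, 0])

# Minimize the objective over a grid of candidate thresholds.
grid = np.linspace(0, 1, 101)
best = min(grid, key=lambda t: error_rate(t, scores, labels))
print(best, error_rate(best, scores, labels))
```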
Objective-Reinforced Generative Adversarial Network
(ORGAN)
In unsupervised data generation tasks, besides the generation of a sample based on previous observations, one would often like to give hints to the model in order to bias the generation towards desirable metrics. We propose a method that combines Generative Adversarial Networks (GANs) and reinforcement learning (RL) in order to accomplish exactly that. While RL biases the data generation process towards arbitrary metrics, the GAN component of the reward function ensures that the model still remembers information learned from data. We build upon previous results that incorporated GANs and RL in order to generate sequence data and test this model in several settings for the generation of molecules encoded as text sequences (SMILES) and in the context of music generation, showing for each case that we can effectively bias the generation process towards desired metrics.
Octave GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable.
http://wiki.octave.org
OctNet We present OctNet, a representation for deep learning with sparse 3D data. In contrast to existing models, our representation enables 3D convolutional networks which are both deep and high resolution. Towards this goal, we exploit the sparsity in the input data to hierarchically partition the space using a set of unbalanced octrees where each leaf node stores a pooled feature representation. This allows memory allocation and computation to be focused on the relevant dense regions and enables deeper networks without compromising resolution. We demonstrate the utility of our OctNet representation by analyzing the impact of resolution on several 3D tasks including 3D object classification, orientation estimation and point cloud labeling.
Octree Generating Networks We present a deep convolutional decoder architecture that can generate volumetric 3D outputs in a compute- and memory-efficient manner by using an octree representation. The network learns to predict both the structure of the octree, and the occupancy values of individual cells. This makes it a particularly valuable technique for generating 3D shapes. In contrast to standard decoders acting on regular voxel grids, the architecture does not have cubic complexity. This allows representing much higher resolution outputs with a limited memory budget. We demonstrate this in several application domains, including 3D convolutional autoencoders, generation of objects and whole scenes from high-level representations, and shape from a single image.
Oddball SGD Stochastic Gradient Descent (SGD) is arguably the most popular of the machine learning methods applied to training deep neural networks (DNN) today. It has recently been demonstrated that SGD can be statistically biased so that certain elements of the training set are learned more rapidly than others. In this article, we place SGD into a feedback loop whereby the probability of selection is proportional to error magnitude. This provides a novelty-driven oddball SGD process that learns more rapidly than traditional SGD by prioritising those elements of the training set with the largest novelty (error). In our DNN example, oddball SGD trains some 50x faster than regular SGD.
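A minimal sketch of the core idea, sampling training examples with probability proportional to their current error magnitude, applied to a toy linear regression rather than the paper's DNN setting (all data and hyperparameters below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical): y = X @ w_true + noise
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

w = np.zeros(5)
lr = 0.05
for step in range(2000):
    # Per-example error magnitudes define the selection probabilities.
    errors = np.abs(X @ w - y)
    p = errors / errors.sum()
    i = rng.choice(len(X), p=p)          # novelty-driven ("oddball") sampling
    grad = (X[i] @ w - y[i]) * X[i]      # gradient of 0.5 * squared error at example i
    w -= lr * grad

print(np.round(w - w_true, 3))
```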
Odds Odds are a numerical expression used in gambling and statistics to reflect the likelihood that a particular event will take place. Conventionally, they are expressed in the form “X to Y”, where X and Y are numbers.
In gambling, odds represent the ratio between the amounts staked by parties to a wager or bet. Thus, odds of 6 to 1 mean the first party (normally a bookmaker) is staking six times the amount that the second party is. In statistics, odds represent the probability that an event will take place. Thus, odds of 6 to 1 mean that there are six possible outcomes in which the event will not take place to every one where it will. In other words, the probability that X will not happen is six times the probability that it will.
The gambling and statistical uses of odds are closely interlinked. If a bet is a fair one, then the odds offered to the gamblers will perfectly reflect relative probabilities. If the odds being offered to the gamblers do not correspond to probability in this way then one of the parties to the bet has an advantage over the other.
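A small sketch of the statistical convention, converting "X to Y" odds against an event into a probability and back:

```python
def odds_against_to_prob(x, y):
    """Odds of 'x to y' against an event -> probability the event occurs."""
    return y / (x + y)

def prob_to_odds_against(p):
    """Probability -> odds against the event, as a single ratio (1 - p) : p."""
    return (1 - p) / p

print(odds_against_to_prob(6, 1))   # 6 to 1 against -> 1/7, roughly 0.143
print(prob_to_odds_against(1 / 7))  # -> 6.0
```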
Offline Algorithm In computer science, an online algorithm is one that can process its input piece-by-piece in a serial fashion, i.e., in the order that the input is fed to the algorithm, without having the entire input available from the start. In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand.
Oja Median Consider p+1 points in R^p. These points form a simplex, which has a p-dimensional volume. For example, in R^3 four points form a tetrahedron, and in R^2 three points form a triangle whose area is its ‘2-dimensional volume’. Now consider a data set in R^p for which we seek the median. Oja proposed the following measure for a point X in R^p (a brute-force sketch follows the list below):
• for every subset of p points from the data set, form a simplex with X.
• sum together the volumes of each such simplex.
• the Oja simplex median is any point X* in R^p for which this sum is minimum.
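A brute-force sketch of this definition for p = 2 (made-up data, grid search over candidate points; real implementations use far more efficient algorithms):

```python
import numpy as np
from itertools import combinations

def oja_objective(x, data):
    """Sum of triangle areas formed by x and every pair of data points (p = 2)."""
    total = 0.0
    for a, b in combinations(data, 2):
        # twice the signed area of triangle (x, a, b) via the 2D cross product
        cross = (a[0] - x[0]) * (b[1] - x[1]) - (a[1] - x[1]) * (b[0] - x[0])
        total += 0.5 * abs(cross)
    return total

rng = np.random.default_rng(1)
data = rng.normal(size=(20, 2))

# Brute force over a grid of candidate points; the minimizer approximates the Oja median.
grid = np.linspace(-2.0, 2.0, 21)
candidates = [np.array([u, v]) for u in grid for v in grid]
oja_median = min(candidates, key=lambda x: oja_objective(x, data))
print(oja_median)
```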
One-Class Support Vector Machine
(OCSVM)
Traditionally, many classification problems try to solve the two- or multi-class situation. The goal of the machine learning application is to distinguish test data between a number of classes, using training data. But what if you only have data of one class, and the goal is to test new data and find out whether it is like or unlike the training data? A method for this task, which has gained much popularity over the last two decades, is the One-Class Support Vector Machine.
Estimating the Support of a High-Dimensional Distribution
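A hedged usage sketch with scikit-learn's OneClassSVM, trained on a single "normal" class and used to flag unlike observations (data and hyperparameters are illustrative):

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Train only on "normal" data from a single class.
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

clf = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05)  # nu bounds the outlier fraction
clf.fit(X_train)

# New observations: two near the training distribution, one far away.
X_new = np.array([[0.2, -0.1], [0.5, 0.4], [6.0, 6.0]])
print(clf.predict(X_new))   # +1 = similar to training data, -1 = outlier
```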
One-Factor-At-a-Time
(OFAT)
The one-factor-at-a-time method (or OFAT) is a method of designing experiments involving the testing of factors, or causes, one at a time instead of all simultaneously. Prominent text books and academic papers currently favor factorial experimental designs, a method pioneered by Sir Ronald A. Fisher, where multiple factors are changed at once. The reasons stated for favoring the use of factorial design over OFAT are:
1. OFAT requires more runs for the same precision in effect estimation
2. OFAT cannot estimate interactions
3. OFAT can miss optimal settings of factors
Despite these criticisms, some researchers have articulated a role for OFAT and shown that it can be more effective than fractional factorials under certain conditions (the number of runs is limited, the primary goal is to attain improvements in the system, and experimental error is not large compared to factor effects, which must be additive and independent of each other). Designed experiments remain nearly always preferred to OFAT, with many types and methods available, in addition to fractional factorials which, though usually requiring more runs than OFAT, do address the three concerns above. One modern design over which OFAT has no advantage in number of runs is the Plackett-Burman design which, by having all factors vary simultaneously (an important quality in experimental designs), gives generally greater precision in effect estimation.
reval
One-Pass Algorithm In computing, a one-pass algorithm is one which reads its input exactly once, in order, without unbounded buffering. A one-pass algorithm generally requires O(n) time and less than O(n) storage (typically O(1)), where n is the size of the input. A basic one-pass (clustering) algorithm operates as follows (see the sketch after this list):
(1) the object descriptions are processed serially;
(2) the first object becomes the cluster representative of the first cluster;
(3) each subsequent object is matched against all cluster representatives existing at its processing time;
(4) a given object is assigned to one cluster (or more if overlap is allowed) according to some condition on the matching function;
(5) when an object is assigned to a cluster the representative for that cluster is recomputed;
(6) if an object fails a certain test it becomes the cluster representative of a new cluster
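A minimal sketch of such a one-pass (leader-style) clustering procedure, assuming Euclidean distance and a fixed matching threshold (both choices are illustrative):

```python
import numpy as np

def one_pass_cluster(objects, threshold):
    """Leader-style one-pass clustering: each object is read exactly once, in order."""
    representatives = []            # one representative (centroid) per cluster
    members = []                    # objects assigned to each cluster
    for x in objects:
        if not representatives:
            representatives.append(np.array(x, dtype=float))
            members.append([x])
            continue
        # Match the object against all existing cluster representatives.
        dists = [np.linalg.norm(x - r) for r in representatives]
        best = int(np.argmin(dists))
        if dists[best] <= threshold:
            members[best].append(x)
            # Recompute the representative of the cluster that received the object.
            representatives[best] = np.mean(members[best], axis=0)
        else:
            # The object fails the matching test: it starts a new cluster.
            representatives.append(np.array(x, dtype=float))
            members.append([x])
    return representatives, members

data = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9], [0.05, 0.1]])
reps, clusters = one_pass_cluster(data, threshold=1.0)
print(len(reps), [len(c) for c in clusters])
```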
One-Shot Imitation Learning Imitation learning has been commonly applied to solve different tasks in isolation. This usually requires either careful feature engineering, or a significant number of samples. This is far from what we desire: ideally, robots should be able to learn from very few demonstrations of any given task, and instantly generalize to new situations of the same task, without requiring task-specific engineering. In this paper, we propose a meta-learning framework for achieving such capability, which we call one-shot imitation learning. Specifically, we consider the setting where there is a very large set of tasks, and each task has many instantiations. For example, a task could be to stack all blocks on a table into a single tower, another task could be to place all blocks on a table into two-block towers, etc. In each case, different instances of the task would consist of different sets of blocks with different initial states. At training time, our algorithm is presented with pairs of demonstrations for a subset of all tasks. A neural net is trained that takes as input one demonstration and the current state (which initially is the initial state of the other demonstration of the pair), and outputs an action with the goal that the resulting sequence of states and actions matches as closely as possible with the second demonstration. At test time, a demonstration of a single instance of a new task is presented, and the neural net is expected to perform well on new instances of this new task. The use of soft attention allows the model to generalize to conditions and tasks unseen in the training data. We anticipate that by training this model on a much greater variety of tasks and settings, we will obtain a general system that can turn any demonstrations into robust policies that can accomplish an overwhelming variety of tasks. Videos available at https://bit.ly/one-shot-imitation.
One-Shot Learning One-shot learning is an object categorization problem in computer vision. Whereas most machine learning based object categorization algorithms require training on hundreds or thousands of images and very large datasets, one-shot learning aims to learn information about object categories from one, or only a few, training images.
Online Algorithm In computer science, an online algorithm is one that can process its input piece-by-piece in a serial fashion, i.e., in the order that the input is fed to the algorithm, without having the entire input available from the start. In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand.
Online Analytical Mining
(OLAM)
Online Analytical Processing (OLAP) technology is an essential element of the decision support system and permits decision makers to visualize huge operational data for quick, consistent, interactive and meaningful analysis. More recently, data mining techniques are also used together with OLAP to analyze large data sets, which makes OLAP more useful and easier to apply in decision support systems. Several works in the past have demonstrated the feasibility and value of integrating OLAP with data mining, and as a result a promising new direction, Online Analytical Mining (OLAM), has emerged. OLAM provides a multi-dimensional view of its data and creates an interactive data mining environment whereby users can dynamically select data mining and OLAP functions, perform OLAP operations (such as drilling, slicing, dicing and pivoting on the data mining results), as well as perform mining operations on OLAP results, that is, mining different portions of data at multiple levels of abstraction.
Online Analytical Processing
(OLAP)
In computing, online analytical processing, or OLAP, is an approach to answering multi-dimensional analytical (MDA) queries swiftly. OLAP is part of the broader category of business intelligence, which also encompasses relational database, report writing and data mining. Typical applications of OLAP include business reporting for sales, marketing, management reporting, business process management (BPM), budgeting and forecasting, financial reporting and similar areas, with new applications coming up, such as agriculture. The term OLAP was created as a slight modification of the traditional database term Online Transaction Processing (“OLTP”).
Online Convex Ensemble StrongLy Adaptive Dynamic Learning
(OCELAD)
Recent work in distance metric learning has focused on learning transformations of data that best align with specified pairwise similarity and dissimilarity constraints, often supplied by a human observer. The learned transformations lead to improved retrieval, classification, and clustering algorithms due to the better adapted distance or similarity measures. Here, we address the problem of learning these transformations when the underlying constraint generation process is nonstationary. This nonstationarity can be due to changes in either the ground-truth clustering used to generate constraints or changes in the feature subspaces in which the class structure is apparent. We propose Online Convex Ensemble StrongLy Adaptive Dynamic Learning (OCELAD), a general adaptive, online approach for learning and tracking optimal metrics as they change over time that is highly robust to a variety of nonstationary behaviors in the changing metric. We apply the OCELAD framework to an ensemble of online learners. Specifically, we create a retro-initialized composite objective mirror descent (COMID) ensemble (RICE) consisting of a set of parallel COMID learners with different learning rates, demonstrate RICE-OCELAD on both real and synthetic data sets and show significant performance improvements relative to previously proposed batch and online distance metric learning algorithms.
Online Convex Optimization
(OCO)
Online Failure Prediction Online failure prediction is the task of identifying, during runtime, whether a failure will occur in the near future, based on an assessment of the monitored current system state.
Online Gradient Descent
(OGD)
In stochastic (or “on-line”) gradient descent, the true gradient of Q(w) is approximated by a gradient at a single example. … As the algorithm sweeps through the training set, it performs the above update for each training example. Several passes can be made over the training set until the algorithm converges. If this is done, the data can be shuffled for each pass to prevent cycles. Typical implementations may use an adaptive learning rate so that the algorithm converges.
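A minimal sketch of the update on a toy linear regression problem, with per-pass shuffling and a simple decaying learning rate (all data and hyperparameters are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression stream (hypothetical): Q_i(w) = 0.5 * (x_i . w - y_i)^2
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=1000)

w = np.zeros(3)
for epoch in range(5):
    order = rng.permutation(len(X))        # shuffle each pass to prevent cycles
    lr = 0.1 / (1 + epoch)                 # simple decaying learning rate
    for i in order:
        grad_i = (X[i] @ w - y[i]) * X[i]  # gradient of Q at a single example
        w -= lr * grad_i

print(np.round(w, 2))
```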
Online Machine Learning Online machine learning is a model of induction that learns one instance at a time. The goal in on-line learning is to predict labels for instances. For example, the instances could describe the current conditions of the stock market, and an online algorithm predicts tomorrow’s value of a particular stock. The key defining characteristic of on-line learning is that soon after the prediction is made, the true label of the instance is discovered. This information can then be used to refine the prediction hypothesis used by the algorithm. The goal of the algorithm is to make predictions that are close to the true labels.
Online Maximum a Posterior Estimation
(OPE)
One of the core problems in statistical models is the estimation of a posterior distribution. For topic models, the problem of posterior inference for individual texts is particularly important, especially when dealing with data streams, but is often intractable in the worst case. As a consequence, existing methods for posterior inference are approximate and do not have any guarantee on neither quality nor convergence rate. In this paper, we introduce a provably fast algorithm, namely Online Maximum a Posterior Estimation (OPE), for posterior inference in topic models. OPE has more attractive properties than existing inference approaches, including theoretical guarantees on quality and fast convergence rate. The discussions about OPE are very general and hence can be easily employed in a wide class of probabilistic models. Finally, we employ OPE to design three novel methods for learning Latent Dirichlet allocation from text streams or large corpora. Extensive experiments demonstrate some superior behaviors of OPE and of our new learning methods.
Online Multi-Armed Bandit We introduce a novel variant of the multi-armed bandit problem, in which bandits are streamed one at a time to the player, and at each point, the player can either choose to pull the current bandit or move on to the next bandit. Once a player has moved on from a bandit, they may never visit it again, which is a crucial difference between our problem and classic multi-armed bandit problems. In this online context, we study Bernoulli bandits (bandits with payout Ber($p_i$) for some underlying mean $p_i$) with underlying means drawn i.i.d. from various distributions, including the uniform distribution, and in general, all distributions that have a CDF satisfying certain differentiability conditions near zero. In all cases, we suggest several strategies and investigate their expected performance. Furthermore, we bound the performance of any optimal strategy and show that the strategies we have suggested are indeed optimal up to a constant factor. We also investigate the case where the distribution from which the underlying means are drawn is not known ahead of time. We again, are able to suggest algorithms that are optimal up to a constant factor for this case, given certain mild conditions on the universe of distributions.
Online Multiple Kernel Classification
(OMKC)
Online learning and kernel learning are two active research topics in machine learning. Although each of them has been studied extensively, there is a limited effort in addressing the intersecting research. In this paper, we introduce a new research problem, termed Online Multiple Kernel Learning (OMKL), that aims to learn a kernel-based prediction function from a pool of predefined kernels in an online learning fashion. OMKL is generally more challenging than typical online learning because both the kernel classifiers and their linear combination weights must be learned simultaneously. In this work, we consider two setups for OMKL, i.e., combining binary predictions or real-valued outputs from multiple kernel classifiers, and we propose both deterministic and stochastic approaches in the two setups for OMKL. The deterministic approach updates all kernel classifiers for every misclassified example, while the stochastic approach randomly chooses one or more classifiers for updating according to some sampling strategies. Mistake bounds are derived for all the proposed OMKL algorithms.
Online Portfolio Selection
(OLPS)
Online portfolio selection, which sequentially selects a portfolio over a set of assets in order to achieve certain targets, is a natural and important task for asset portfolio management. Aiming to maximize the cumulative wealth, several categories of algorithms have been proposed to solve this task. One category of algorithms—Follow the Winner—tries to asymptotically achieve the same growth rate (expected log return) as that of an optimal strategy, which is often based on the CGT. The second category—Follow the Loser—transfers the wealth from winning assets to losers, which seems contradictory to common sense but empirically often achieves significantly better performance. Finally, the third category—Pattern Matching-based approaches—tries to predict the next market distribution based on a sample of historical data and explicitly optimizes the portfolio based on the sampled distribution. Although these three categories are focused on a single strategy (class), there are also some other strategies that focus on combining multiple strategies (classes)—Meta-Learning Algorithms (MLAs).
Book: Online Portfolio Selection
Online Principal Component Analysis
(oPCA)
In the online setting of the well-known Principal Component Analysis (PCA) problem, the vectors x_t are presented to the algorithm one by one.
onlinePCA
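One classical online-PCA update is Oja's rule, which tracks the first principal direction from a stream of (centred) vectors x_t; the sketch below is illustrative and is not necessarily the algorithm implemented by the onlinePCA package:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stream of (already centred) observations x_t, one at a time.
cov = np.array([[3.0, 1.0], [1.0, 1.0]])
stream = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01
for x_t in stream:
    y = w @ x_t
    w += eta * y * (x_t - y * w)   # Oja's rule: converges to the top principal direction

w /= np.linalg.norm(w)
print(w)                            # compare with the leading eigenvector of cov
```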
Online Transactional Processing
(OLTP)
Online transaction processing, or OLTP, is a class of information systems that facilitate and manage transaction-oriented applications, typically for data entry and retrieval transaction processing. The term is somewhat ambiguous; some understand a “transaction” in the context of computer or database transactions, while others (such as the Transaction Processing Performance Council) define it in terms of business or commercial transactions. OLTP has also been used to refer to processing in which the system responds immediately to user requests. An automated teller machine (ATM) for a bank is an example of a commercial transaction processing application. Online transaction processing applications are high-throughput and insert- or update-intensive in database management. These applications are used concurrently by hundreds of users. The key goals of OLTP applications are availability, speed, concurrency, and recoverability. Reduced paper trails and faster, more accurate forecasts for revenues and expenses are both examples of how OLTP makes things simpler for businesses. However, like many modern online information technology solutions, some systems require offline maintenance, which further affects the cost-benefit analysis of online transaction processing systems.
Ontology In computer science and information science, an ontology is a formal naming and definition of the types, properties, and interrelationships of the entities that really or fundamentally exist for a particular domain of discourse. An ontology compartmentalizes the variables needed for some set of computations and establishes the relationships between them. The fields of artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture all create ontologies to limit complexity and to organize information. The ontology can then be applied to problem solving.
Ontology Classification Ontology classification – the computation of the subsumption hierarchies for classes and properties—is a core reasoning service provided by all OWL (Web Ontology Language) reasoners known to us. A popular algorithm for computing the class hierarchy is the so-called Enhanced Traversal (ET) algorithm.
Ontology Engineering Ontology engineering in computer science and information science is a field which studies the methods and methodologies for building ontologies: formal representations of a set of concepts within a domain and the relationships between those concepts. A large-scale representation of abstract concepts such as actions, time, physical objects and beliefs would be an example of ontological engineering.
Ontology Learning Manual construction of ontologies for the Semantic Web is a time-consuming task. In order to help humans, the ontology learning field tries to automate the construction of new ontologies. The amount of data caused by the success of the Internet is demanding methodologies and tools to automatically extract unknown and potentially useful knowledge out of it, generating structured representations with that knowledge. Although ontological engineering tools have matured over the last decade, manual ontology acquisition remains a tedious, time-consuming, error-prone, and complex task that can easily result in a knowledge acquisition bottleneck. Besides, while the new necessities of information are growing, the available ontologies need to be updated and enriched with new contents. The research on the ontology learning field has made possible the development of several approaches that allow the partial automation of the ontology construction process. It aims at reducing the time and effort in the ontology development process. Some methods and tools have been proposed in the last years to speed up the ontology building process, using different sources and several techniques. Computational linguistics techniques, information extraction, statistics, and machine learning are the most prominent paradigms applied until now. There is also a great variety of information sources used for ontology learning. Though Web pages, dictionaries, knowledge bases, semi-structured and structured sources can be used to learn an ontology, most of the methods only use textual sources for the learning process. All methods and tools have a strong relationship to the type of processing performed. In summary, the ontology learning field puts a number of research activities together, which focus on different types of knowledge and information sources, but share their target of a common domain conceptualisation. Ontology learning is a complex multi-disciplinary field that uses natural language processing, text and web data extraction, machine learning and ontology engineering.
OntoSeg Text segmentation (TS) aims at dividing long text into coherent segments which reflect the subtopic structure of the text. It is beneficial to many natural language processing tasks, such as Information Retrieval (IR) and document summarisation. Current approaches to text segmentation are similar in that they all use word-frequency metrics to measure the similarity between two regions of text, so that a document is segmented based on the lexical cohesion between its words. Various NLP tasks are now moving towards the semantic web and ontologies, such as ontology-based IR systems, to capture the conceptualizations associated with user needs and contents. Text segmentation based on lexical cohesion between words is hence not sufficient anymore for such tasks. This paper proposes OntoSeg, a novel approach to text segmentation based on the ontological similarity between text blocks. The proposed method uses ontological similarity to explore conceptual relations between text segments and a Hierarchical Agglomerative Clustering (HAC) algorithm to represent the text as a tree-like hierarchy that is conceptually structured. The rich structure of the created tree further allows the segmentation of text in a linear fashion at various levels of granularity. The proposed method was evaluated on a well-known dataset, and the results show that using ontological similarity in text segmentation is very promising. We also enhance the proposed method by combining ontological similarity with lexical similarity, and the results show an enhancement of the segmentation quality.
Open Data Open data is the idea that certain data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control. The goals of the open data movement are similar to those of other “Open” movements such as open source, open hardware, open content, and open access. The philosophy behind open data has been long established (for example in the Mertonian tradition of science), but the term “open data” itself is recent, gaining popularity with the rise of the Internet and World Wide Web and, especially, with the launch of open-data government initiatives such as Data.gov and Data.gov.uk.
Open Data Center Alliance
(ODCA)
The Open Data Center Alliance is focused on the widespread adoption of enterprise cloud computing through best practice sharing and collaboration with the industry on availability of solution choice based on ODCA requirements. From its inception to today, the Alliance has seen a maturation of the cloud market place. To meet this new stage of enterprise cloud readiness, the ODCA has announced a new organizational charter. This new charter has driven the creation of the ODCA Cloud Expert Network and workgroups to deliver work focused on this charter.
Open Data Platform
(ODP)
The Open Data Platform Initiative (ODP) is a shared industry effort focused on promoting and advancing the state of Apache Hadoop and Big Data technologies for the enterprise, enabling Big Data solutions to flourish atop a common core platform. The current ecosystem is challenged and slowed by fragmented and duplicated efforts. The ODP Core will take the guesswork out of the process and accelerate many use cases by running on a common platform, freeing up enterprises and ecosystem vendors to focus on building business-driven applications.
Open Domain INformer
(ODIN)
Rule-base information extraction (IE) has long enjoyed wide adoption throughout industry, though it has remained largely ignored in academia, in favor of machine learning (ML) methods (Chiticariu et al., 2013). However, rule-based systems have several advantages over pure ML systems, including: (a) the rules are interpretable and thus suitable for rapid development and/or domain transfer; and (b) humans and machines can contribute to the same model. Why then have such systems failed to hold the attention of the academic community? One argument raised by Chiticariu et al. is that, despite notable previous efforts (Appelt and Onyshkevych, 1998; Levy and Andrew, 2006; Hunter et al., 2008; Cunningham et al., 2011; Chang and Manning, 2014), there is not a standard language for this task, or a “standard way to express rules”, which raises the entry cost for new rule-based systems. ODIN aims to address these issues with a new language and framework. We follow the simplicity principles promoted by other natural language processing toolkits, such as Stanford’s CoreNLP, which aim to “avoid over-design”, “do one thing well”, and have a user “up and running in ten minutes or less” (Manning et al., 2014).
Open Speech and Music Interpretation by Large Space Extraction
(openSMILE)
The openSMILE feature extraction tool enables you to extract large audio feature spaces, and apply machine learning methods to classify and analyze your data in real-time. It combines features from Music Information Retrieval and Speech Processing. SMILE is an acronym for Speech & Music Interpretation by Large-space Extraction. It is written in C++ and is available as both a standalone commandline executable as well as a dynamic library. The main features of openSMILE are its capability of on-line incremental processing and its modularity. Feature extractor components can be freely interconnected to create new and custom features, all via a simple configuration file. New components can be added to openSMILE via an easy binary plugin interface and a comprehensive API.
http://…ture-extractor-a-tutorial-for-version-2-1
http://…/citation.cfm?id=1874246
Open Web Analytics
(OWA)
Open Web Analytics (OWA) is open source web analytics software that you can use to track and analyze how people use your websites and applications. OWA is licensed under GPL and provides website owners and developers with easy ways to add web analytics to their sites using simple Javascript, PHP, or REST based APIs. OWA also comes with built-in support for tracking websites made with popular content management frameworks such as WordPress and MediaWiki.
OpenAI OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. In the short term, we’re building on recent advances in AI research and working towards the next set of breakthroughs.
OpenAI Gym OpenAI Gym is a toolkit for reinforcement learning research. It includes a growing collection of benchmark problems that expose a common interface, and a website where people can share their results and compare the performance of algorithms. This whitepaper discusses the components of OpenAI Gym and the design decisions that went into the software.
OpenBLAS OpenBLAS is an optimized BLAS library based on GotoBLAS2 1.13 BSD version.
openCV OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. It is free for use under the open source BSD license. The library is cross-platform. It focuses mainly on real-time image processing.
OpenFace OpenFace is a Python and Torch implementation of face recognition with deep neural networks and is based on the CVPR 2015 paper FaceNet: A Unified Embedding for Face Recognition and Clustering by Florian Schroff, Dmitry Kalenichenko, and James Philbin at Google. Torch allows the network to be executed on a CPU or with CUDA.
OpenGIS Simple Features Reference Implementation
(OGR)
OGR used to stand for OpenGIS Simple Features Reference Implementation. However, since OGR is not fully compliant with the OpenGIS Simple Feature specification and is not approved as a reference implementation of the spec the name was changed to OGR Simple Features Library. The only meaning of OGR in this name is historical. OGR is also the prefix used everywhere in the source of the library for class names, filenames, etc.
OpenLava OpenLava is an open source workload job scheduling software for a cluster of computers. OpenLava was derived from an early version of Platform LSF. Its configuration file syntax, API, and CLI have been kept unchanged. Therefore OpenLava is mostly compatible with Platform LSF. OpenLava was based on the Utopia research project at the University of Toronto. OpenLava is licensed under GNU General Public License v2.
http://www.openlava.org
OpenMarkov OpenMarkov is a software tool for probabilistic graphical models (PGMs) developed by the Research Centre for Intelligent Decision-Support Systems of the UNED in Madrid, Spain. It has been designed for:
• editing and evaluating several types of PGMs, such as Bayesian networks, influence diagrams, factored Markov models, etc.;
• learning Bayesian networks from data interactively;
• cost-effectiveness analysis.
“Probabilistic Graphical Model”
OpenMx OpenMx is an open source program for extended structural equation modeling. It runs as a package under R. Cross-platform, it runs under Linux, Mac OS and Windows. OpenMx consists of an R library of functions and optimizers supporting the rapid and flexible implementation and estimation of SEM models. Models can be estimated based on either raw data (with FIML modelling) or on correlation or covariance matrices. Models can handle mixtures of continuous and ordinal data. The current version is OpenMx 2, and is available on CRAN. Path analysis, confirmatory factor analysis, latent growth modeling and mediation analysis are all implemented. Multiple group models are implemented readily. When a model is run, it returns a fitted model, and models can be updated (adding and removing paths, adding constraints and equalities; giving parameters the same label equates them). An innovation is that labels can consist of the address of other parameters, allowing easy implementation of constraints on parameters by address. RAM models return standardized and raw estimates, as well as a range of fit indices (AIC, RMSEA, TLI, CFI etc.). Confidence intervals are estimated robustly. The program has parallel processing built in via links to parallel environments in R, and in general takes advantage of the R programming environment. Users can expand the package with functions. These have been used, for instance, to implement modification indices. Models can be written in either a ‘pathic’ or ‘matrix’ form. For those who think in terms of path models, paths are specified using mxPath(). For models that are better suited to description in terms of matrix algebra, this is done using similar functional extensions in the R environment, for instance mxMatrix and mxAlgebra.
OpenMx, ifaTools
OpenRefine OpenRefine (formerly Google Refine) is a powerful tool for working with messy data: cleaning it; transforming it from one format into another; extending it with web services; and linking it to databases like Freebase. Please note that since October 2nd, 2012, Google is not actively supporting this project, which has now been rebranded to OpenRefine. Project development, documentation and promotion is now fully supported by volunteers. Find out more about the history of OpenRefine and how you can help the community.
rrefine
Open-Source Toolkit for Neural Machine Translation
(openNMT)
We describe an open-source toolkit for neural machine translation (NMT). The toolkit prioritizes efficiency, modularity, and extensibility with the goal of supporting NMT research into model architectures, feature representations, and source modalities, while maintaining competitive performance and reasonable training requirements. The toolkit consists of modeling and translation support, as well as detailed pedagogical documentation about the underlying techniques.
OpenStreetMap
(OSM)
OpenStreetMap (OSM) is a collaborative project to create a free editable map of the world. Two major driving forces behind the establishment and growth of OSM have been restrictions on use or availability of map information across much of the world and the advent of inexpensive portable satellite navigation devices. Created by Steve Coast in the UK in 2004, it was inspired by the success of Wikipedia and the preponderance of proprietary map data in the UK and elsewhere. Since then, it has grown to over 1.6 million registered users, who can collect data using manual survey, GPS devices, aerial photography, and other free sources. This crowdsourced data is then made available under the Open Database License. The site is supported by the OpenStreetMap Foundation, a non-profit organization registered in England. Rather than the map itself, the data generated by the OpenStreetMap project is considered its primary output. This data is then available for use in both traditional applications, like its usage by Craigslist, OsmAnd, Geocaching, MapQuest Open, JMP statistical software, and Foursquare to replace Google Maps, and more unusual roles, like replacing default data included with GPS receivers. This data has been favourably compared with proprietary datasources, though data quality varies worldwide.
http://…tmap-visualization-case-study-sample-code
OpenTSDB OpenTSDB is a distributed, scalable Time Series Database (TSDB) written on top of HBase. OpenTSDB was written to address a common need: store, index and serve metrics collected from computer systems (network gear, operating systems, applications) at a large scale, and make this data easily accessible and graphable. Thanks to HBase’s scalability, OpenTSDB allows you to collect thousands of metrics from tens of thousands of hosts and applications, at a high rate (every few seconds). OpenTSDB will never delete or downsample data and can easily store hundreds of billions of data points. OpenTSDB is free software and is available under both LGPLv2.1+ and GPLv3+. Find more about OpenTSDB at http://opentsdb.net
Operational Analytics Operational analytics is a more specific term for a type of business analytics which focuses on improving existing operations. This type of business analytics, like others, involves the use of various data mining and data aggregation tools to get more transparent information for business planning.
Operational Intelligence
(OI)
Operational intelligence (OI) is a category of real-time, dynamic business analytics that delivers visibility and insight into data, streaming events and business operations. Operational Intelligence solutions run queries against streaming data feeds and event data to deliver real-time analytic results as operational instructions. Operational Intelligence provides organizations the ability to make decisions and immediately act on these analytic insights, through manual or automated actions.
Operations Research
(OR)
Operations research, or operational research in British usage, is a discipline that deals with the application of advanced analytical methods to help make better decisions. It is often considered to be a sub-field of mathematics. The terms management science and decision science are sometimes used as synonyms. Employing techniques from other mathematical sciences, such as mathematical modeling, statistical analysis, and mathematical optimization, operations research arrives at optimal or near-optimal solutions to complex decision-making problems. Because of its emphasis on human-technology interaction and because of its focus on practical applications, operations research has overlap with other disciplines, notably industrial engineering and operations management, and draws on psychology and organization science. Operations research is often concerned with determining the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost) of some real-world objective. Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries.
Opinion Pool
Opportunistic Sensing
Optimal Control Theory Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice for studying the neural control of movement, in much the same way that probabilistic inference is emerging as the computational framework of choice for studying sensory information processing. Despite the growing popularity of optimal control models, however, the elaborate mathematical machinery behind them is rarely exposed and the big picture is hard to grasp without reading a few technical books on the subject.
Optimal Matching Analysis
(OMA)
Optimal matching is a sequence analysis method used in social science, to assess the dissimilarity of ordered arrays of tokens that usually represent a time-ordered sequence of socio-economic states two individuals have experienced. Once such distances have been calculated for a set of observations (e.g. individuals in a cohort) classical tools (such as cluster analysis) can be used. The method was tailored to social sciences from a technique originally introduced to study molecular biology (protein or genetic) sequences. Optimal matching uses the Needleman-Wunsch algorithm.

optmatch

Optimistic Optimization
OOR
Optimization In mathematics, computer science, economics, or management science, mathematical optimization (alternatively, optimization or mathematical programming) is the selection of a best element (with regard to some criteria) from some set of available alternatives. In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. More generally, optimization includes finding ‘best available’ values of some objective function given a defined domain (or a set of constraints), including a variety of different types of objective functions and different types of domains.
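A minimal sketch of numerical optimization in practice, minimizing the Rosenbrock test function with scipy.optimize.minimize:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Rosenbrock function: a standard non-convex test problem."""
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

result = minimize(objective, x0=np.array([-1.2, 1.0]), method="Nelder-Mead")
print(result.x)   # approximately [1, 1], the global minimizer
```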
Optunity Optunity is a free software package for hyperparameter search in the context of machine learning developed at STADIUS.
GitXiv
OPUS Miner OPUS Miner is an open source implementation of the OPUS Miner algorithm which applies OPUS search for Filtered Top-k Association Discovery of Self-Sufficient Itemsets. OPUS Miner finds self-sufficient itemsets. These are an effective way of summarizing the key associations in high-dimensional data.
opusminer
Order Statistic In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference. Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications) the sample median and other sample quantiles. When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.
Ordered Weighted Averaging Aggregation Operator
(OWA)
In applied mathematics – specifically in fuzzy logic – the ordered weighted averaging (OWA) operators provide a parameterized class of mean type aggregation operators. They were introduced by Ronald R. Yager. Many notable mean operators such as the max, arithmetic average, median and min, are members of this class. They have been widely used in computational intelligence because of their ability to model linguistically expressed aggregation instructions.
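A small sketch of the OWA operator: the weights are applied to the arguments after sorting them in decreasing order, so different weight vectors recover the max, the min, or the arithmetic mean:

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: weights are applied by rank, not by position."""
    v = np.sort(values)[::-1]            # order the arguments from largest to smallest
    return float(np.dot(weights, v))

a = [0.6, 0.9, 0.2, 0.7]
print(owa(a, [1, 0, 0, 0]))              # all weight on the largest value -> max = 0.9
print(owa(a, [0, 0, 0, 1]))              # all weight on the smallest value -> min = 0.2
print(owa(a, [0.25, 0.25, 0.25, 0.25]))  # uniform weights -> arithmetic mean = 0.6
```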
Ordering points to identify the clustering structure
(OPTICS)
Ordering points to identify the clustering structure (OPTICS) is an algorithm for finding density-based clusters in spatial data. It was presented by Mihael Ankerst, Markus M. Breunig, Hans-Peter Kriegel and Jörg Sander. Its basic idea is similar to DBSCAN, but it addresses one of DBSCAN’s major weaknesses: the problem of detecting meaningful clusters in data of varying density. In order to do so, the points of the database are (linearly) ordered such that points which are spatially closest become neighbors in the ordering. Additionally, a special distance is stored for each point that represents the density that needs to be accepted for a cluster in order to have both points belong to the same cluster. This is represented as a dendrogram.
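A hedged usage sketch with scikit-learn's OPTICS implementation on synthetic data containing clusters of different densities (parameters are illustrative):

```python
import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
# Two blobs of very different density plus background noise.
dense = rng.normal(loc=[0.0, 0.0], scale=0.2, size=(100, 2))
sparse = rng.normal(loc=[5.0, 5.0], scale=1.0, size=(100, 2))
noise = rng.uniform(low=-3, high=9, size=(20, 2))
X = np.vstack([dense, sparse, noise])

clust = OPTICS(min_samples=10)
clust.fit(X)
print(np.unique(clust.labels_))   # cluster ids; -1 marks noise points
# clust.reachability_[clust.ordering_] gives the reachability plot used to extract clusters
```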
Ordinal Forests
(OF)
Ordinal forests (OF) are a method for ordinal regression with high-dimensional and low-dimensional data that is able to predict the values of the ordinal target variable for new observations and at the same time estimate the relative widths of the classes of the ordinal target variable. Using a (permutation-based) variable importance measure it is moreover possible to rank the importances of the covariates.
ordinalForest
Oriented Response Network
(ORN)
Deep Convolution Neural Networks (DCNNs) are capable of learning unprecedentedly effective image representations. However, their ability in handling significant local and global image rotations remains limited. In this paper, we propose Active Rotating Filters (ARFs) that actively rotate during convolution and produce feature maps with location and orientation explicitly encoded. An ARF acts as a virtual filter bank containing the filter itself and its multiple unmaterialised rotated versions. During back-propagation, an ARF is collectively updated using errors from all its rotated versions. DCNNs using ARFs, referred to as Oriented Response Networks (ORNs), can produce within-class rotation-invariant deep features while maintaining inter-class discrimination for classification tasks. The oriented response produced by ORNs can also be used for image and object orientation estimation tasks. Over multiple state-of-the-art DCNN architectures, such as VGG, ResNet, and STN, we consistently observe that replacing regular filters with the proposed ARFs leads to significant reduction in the number of network parameters and improvement in classification performance. We report the best results on several commonly used benchmarks.
Orthant Probabilities
Orthogonal Array
(OA)
In mathematics, in the area of combinatorial designs, an orthogonal array is a ‘table’ (array) whose entries come from a fixed finite set of symbols (typically, {1,2,…,n}), arranged in such a way that there is an integer t so that for every selection of t columns of the table, all ordered t-tuples of the symbols, formed by taking the entries in each row restricted to these columns, appear the same number of times. The number t is called the strength of the orthogonal array.
The Orthogonal Array Package
oapackage
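A small worked example: the array below is an orthogonal array with 4 runs, 3 columns, 2 symbols and strength 2, and the check confirms that every ordered pair of symbols appears exactly once in every pair of columns:

```python
import numpy as np
from itertools import combinations, product

# A small orthogonal array OA(4, 3, 2, 2): 4 runs, 3 columns, 2 symbols, strength 2.
oa = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Check the strength-2 property: every pair of columns contains each
# ordered pair of symbols the same number of times (here, exactly once).
for c1, c2 in combinations(range(oa.shape[1]), 2):
    counts = {pair: 0 for pair in product([0, 1], repeat=2)}
    for row in oa:
        counts[(row[c1], row[c2])] += 1
    print((c1, c2), counts)
```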
Orthogonal Nonlinear Least-Squares Regression
(ONLS)
Orthogonal nonlinear least squares (ONLS) is a not so frequently applied and maybe overlooked regression technique that comes into question when one encounters an “error in variables” problem. While classical nonlinear least squares (NLS) aims to minimize the sum of squared vertical residuals, ONLS minimizes the sum of squared orthogonal residuals. The method is based on finding points on the fitted line that are orthogonal to the data by minimizing for each the Euclidean distance to some point on the fitted curve.
onls
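A hedged sketch of orthogonal (errors-in-variables) fitting in Python using scipy.odr, which minimizes orthogonal distances for a nonlinear model; this illustrates the idea rather than reproducing the onls package (model and data are made up):

```python
import numpy as np
from scipy import odr

# Hypothetical exponential model with errors in both variables.
def model_func(beta, x):
    return beta[0] * np.exp(beta[1] * x)

rng = np.random.default_rng(0)
x_true = np.linspace(0, 2, 50)
y_true = 2.0 * np.exp(0.8 * x_true)
x_obs = x_true + 0.05 * rng.normal(size=x_true.size)   # noise in x as well as y
y_obs = y_true + 0.10 * rng.normal(size=x_true.size)

data = odr.RealData(x_obs, y_obs)
fit = odr.ODR(data, odr.Model(model_func), beta0=[1.0, 1.0]).run()
print(fit.beta)   # estimated parameters, fitted by minimizing orthogonal residuals
```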
Orthogonal Regression Total least squares, also known as rigorous least squares and (in a special case) orthogonal regression, is a type of errors-in-variables regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. It is a generalization of Deming regression, and can be applied to both linear and non-linear models. The total least squares approximation of the data is generically equivalent to the best, in the Frobenius norm, low-rank approximation of the data matrix.
OSEMN Process
(OSEMN)
We’ve variously heard it said that data science requires some command-line fu for data procurement and preprocessing, or that one needs to know some machine learning or stats, or that one should know how to `look at data’. All of these are partially true, so we thought it would be useful to propose one possible taxonomy — we call it the Snice* taxonomy — of what a data scientist does, in roughly chronological order:
• Obtain
• Scrub
• Explore
• Model
• iNterpret
(or, if you like, OSEMN, which rhymes with possum).
Using the OSEMN Process to Work Through a Data Problem
Outlier In statistics, an outlier is an observation point that is distant from other observations. An outlier may be due to variability in the measurement or it may indicate experimental error; the latter are sometimes excluded from the data set.
Out-of-core Algorithm Out-of-core or external memory algorithms are algorithms that are designed to process data that is too large to fit into a computer’s main memory at one time. Such algorithms must be optimized to efficiently fetch and access data stored in slow bulk memory such as hard drives or tape drives.
A typical example is geographic information systems, especially digital elevation models, where the full data set easily exceeds several gigabytes or even terabytes of data.
This notion naturally extends to a network connecting a data server to a treatment or visualization workstation. Popular data-intensive web applications such as Google Maps or Google Earth fall into this category.
It also extends to GPU computing – utilizing powerful graphics cards with little memory (compared to CPU memory) and slow CPU-GPU memory transfer (compared to computation bandwidth).
Out-of-Distribution Detector for Neural Networks
(ODIN)
We consider the problem of detecting out-of-distribution examples in neural networks. We propose ODIN, a simple and effective out-of-distribution detector for neural networks, that does not require any change to a pre-trained model. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution samples, allowing for more effective detection. We show in a series of experiments that our approach is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach[1] by a large margin, establishing a new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on the DenseNet (applied to CIFAR-10) when the true positive rate is 95%. We theoretically analyze the method and prove that performance improvement is guaranteed under mild conditions on the image distributions.
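A minimal sketch of one ingredient of ODIN, the temperature-scaled maximum-softmax score with a threshold; the input-perturbation step is omitted, and the logits and threshold below are made up rather than taken from the paper:

```python
import numpy as np

def odin_score(logits, temperature=1000.0):
    """Maximum softmax probability after temperature scaling (input
    perturbation, the other ingredient of ODIN, is omitted here)."""
    z = logits / temperature
    z = z - z.max()                       # numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return probs.max()

# Hypothetical logits from a pre-trained classifier.
in_dist_logits = np.array([9.5, 1.2, 0.3, -0.5])   # confident in-distribution example
ood_logits = np.array([2.1, 1.9, 1.8, 1.7])        # flatter, out-of-distribution example

threshold = 0.2505   # would be chosen on validation data, e.g. at 95% true positive rate
for name, logits in [("in", in_dist_logits), ("out", ood_logits)]:
    s = odin_score(logits)
    print(name, round(s, 4), "in-distribution" if s >= threshold else "out-of-distribution")
```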
Out-of-sample Testing
Outranking Methods
(OM)
A classical problem in the field of Multiple Criteria Decision Making (MCDM) is to build a preference relation on a set of multi-attributed alternatives on the basis of preferences expressed on each attribute and inter-attribute information such as weights. Based on this preference relation (or, more generally, on various relations obtained following a robustness analysis) a recommendation is elaborated (e.g. exhibiting a subset likely to contain the best alternatives).
OutrankingTools
Overconfidence Effect The overconfidence effect is a well-established bias in which someone’s subjective confidence in their judgments is reliably greater than their objective accuracy, especially when confidence is relatively high. For example, in some quizzes, people rate their answers as “99% certain” but are wrong 40% of the time. It has been proposed that a metacognitive trait mediates the accuracy of confidence judgments, but this trait’s relationship to variations in cognitive ability and personality remains uncertain. Overconfidence is one example of a miscalibration of subjective probabilities.
Overdispersion In statistics, overdispersion is the presence of greater variability (statistical dispersion) in a data set than would be expected based on a given statistical model. A common task in applied statistics is choosing a parametric model to fit a given set of empirical observations. This necessitates an assessment of the fit of the chosen model. It is usually possible to choose the model parameters in such a way that the theoretical population mean of the model is approximately equal to the sample mean. However, especially for simple models with few parameters, theoretical predictions may not match empirical observations for higher moments. When the observed variance is higher than the variance of a theoretical model, overdispersion has occurred. Conversely, underdispersion means that there was less variation in the data than predicted. Overdispersion is a very common feature in applied data analysis because in practice, populations are frequently heterogeneous (non-uniform) contrary to the assumptions implicit within widely used simple parametric models.
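A quick sketch of checking for overdispersion in count data by comparing the sample variance with the sample mean (the counts below are made up):

```python
import numpy as np

# Count data (hypothetical). Under a Poisson model, variance is approximately
# equal to the mean, so a dispersion index well above 1 suggests overdispersion.
counts = np.array([0, 1, 0, 2, 9, 0, 1, 12, 0, 3, 0, 15, 1, 0, 2])

mean = counts.mean()
var = counts.var(ddof=1)
dispersion_index = var / mean
print(mean, var, dispersion_index)   # index >> 1: consider negative binomial, quasi-Poisson, etc.
```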
Overfitting In statistics and machine learning, overfitting occurs when a statistical model describes random error or noise instead of the underlying relationship. Overfitting generally occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model which has been overfit will generally have poor predictive performance, as it can exaggerate minor fluctuations in the data.