R Packages worth a look

Taxicab Correspondence Analysis (TaxicabCA)
Computation and visualization of Taxicab Correspondence Analysis, Choulakian (2006) <doi:10.1007/s11336-004-1231-4>. Classical correspondence ana …

Two-Group Ta-Test (tatest)
The ta-test is a modified two-sample or two-group t-test of Gosset (1908). In small samples with fewer than 15 replicates, the ta-test significantly redu …

Unified Interface to Distance, Dissimilarity, Similarity Matrices (disto)
Provides a high level API to interface over sources storing distance, dissimilarity, similarity matrices with matrix style extraction, replacement and …

Cooperative Aspects of Linear Production Programming Problems (coopProductGame)
Computes cooperative game and allocation rules associated with linear production programming problems.

GreedyExperimentalDesign JARs (GreedyExperimentalDesignJARs)
These are GreedyExperimentalDesign Java dependency libraries. Note: this package has no functionality of its own and should not be installed as a stand …


Document worth reading: “Does modelling need a Reformation? Ideas for a new grammar of modelling”

The quality of mathematical modelling is looked at from the perspective of science’s own quality control arrangement and recent crises. It is argued that the crisis in the quality of modelling is at least as serious as that which has come to light in fields such as medicine, economics, psychology, and nutrition. In the context of the nascent sociology of quantification, the linkages between big data, algorithms, mathematical and statistical modelling (use and misuse of p-values) are evident. Looking at existing proposals for best practices the suggestion is put forward that the field needs a thorough Reformation, leading to a new grammar for modelling. Quantitative methodologies such as uncertainty and sensitivity analysis can form the bedrock on which the new grammar is built, while incorporating important normative and ethical elements. To this effect we introduce sensitivity auditing, quantitative storytelling, and ethics of quantification.

Distilled News

Detecting True and Deceptive Hotel Reviews using Machine Learning

In this tutorial, you’ll apply a machine learning algorithm to a real-life problem in Python. You will learn how to read multiple text files in Python, extract labels, use dataframes and a lot more!
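A minimal sketch of the file-reading step the tutorial covers: walking a directory of text files, taking each subdirectory name as the label, and collecting rows for a dataframe. The “truthful”/“deceptive” layout is a hypothetical example, not necessarily the tutorial’s actual dataset.

```python
import os

def load_reviews(root):
    """Read every text file under root, taking the subdirectory name as the label."""
    rows = []
    for label in sorted(os.listdir(root)):        # e.g. "truthful", "deceptive"
        label_dir = os.path.join(root, label)
        if not os.path.isdir(label_dir):
            continue
        for name in sorted(os.listdir(label_dir)):
            with open(os.path.join(label_dir, name), encoding="utf-8") as f:
                rows.append({"text": f.read(), "label": label})
    return rows
```

A call like `pandas.DataFrame(load_reviews("reviews/"))` then gives the dataframe to work with.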

Automated Text Feature Engineering using textfeatures in R

It may be the era of Deep Learning, where it supposedly doesn’t matter how big your dataset is or how many columns you’ve got. Still, the one thing many Kaggle competition winners and data scientists emphasize, the thing that could put you at the top of a leaderboard, is ‘Feature Engineering’. Irrespective of how sophisticated your model is, good features will always help your machine learning model building process more than anything else.
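textfeatures is an R package; as a language-agnostic illustration of the kind of features it derives automatically, here is a minimal Python sketch that maps raw text to a handful of simple numeric features (the feature set here is illustrative, not the package’s actual output).

```python
import re

def text_features(text):
    """Derive a few simple numeric features from a raw string."""
    words = text.split()
    return {
        "n_chars": len(text),
        "n_words": len(words),
        "n_uppercase": sum(c.isupper() for c in text),
        "n_digits": sum(c.isdigit() for c in text),
        "n_exclaims": text.count("!"),
        "n_hashtags": len(re.findall(r"#\w+", text)),
        "n_mentions": len(re.findall(r"@\w+", text)),
    }
```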

Best (and Free!!) Resources to understand Nuts and Bolts of Deep learning

The internet is filled with tutorials to get started with Deep Learning. If you are an absolute beginner, you can choose to get started with the superb Stanford courses CS221 or CS224, the Fast AI courses or the Deep Learning AI courses. All except Deep Learning AI are free and accessible from the comfort of your home. All you need is a good computer (preferably with a Nvidia GPU) and you are good to take your first steps into Deep Learning. This blog, however, is not addressing the absolute beginner. Once you have a bit of intuition about how Deep Learning algorithms work, you might want to understand how things work under the hood. While most work in Deep Learning (the 10% that isn’t data munging, which makes up the other 90%) consists of adding layers like Conv2d, changing hyperparameters in different optimization strategies like ADAM, or using batchnorm and other techniques with one-line commands in Python (thanks to the awesome frameworks available), many people might feel a deep desire to know what happens behind the scenes. This is a list of resources which might help you learn what happens under the hood when you, say, add a Conv2d layer or call T.grad in Theano.

Call Centre Workforce Planning Using Erlang C in R language

Call centre performance can be expressed by the Grade of Service, which is the percentage of calls that are answered within a specific time, for example, 90% of calls are answered within 30 seconds. This Grade of Service depends on the volume of calls made to the centre, the number of available agents and the time it takes to process a contact. Although working in a call centre can be chaotic, the Erlang C formula describes the relationship between the Grade of Service and these variables quite accurately.
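The relationship the post describes can be sketched in a few lines; a minimal Python illustration of the standard Erlang C formula (the post itself uses R, and the example parameters below are made up):

```python
from math import exp, factorial

def erlang_c(agents, arrival_rate, handle_time):
    """Probability that an arriving call has to wait (Erlang C)."""
    a = arrival_rate * handle_time          # offered traffic in Erlangs
    if agents <= a:
        return 1.0                          # unstable: every call waits
    top = (a ** agents / factorial(agents)) * (agents / (agents - a))
    bottom = sum(a ** k / factorial(k) for k in range(agents)) + top
    return top / bottom

def grade_of_service(agents, arrival_rate, handle_time, target_wait):
    """Fraction of calls answered within target_wait time units."""
    a = arrival_rate * handle_time
    if agents <= a:
        return 0.0
    p_wait = erlang_c(agents, arrival_rate, handle_time)
    return 1.0 - p_wait * exp(-(agents - a) * target_wait / handle_time)
```

For example, `grade_of_service(14, 100/60, 3, 0.5)` estimates the fraction of calls answered within 30 seconds when 100 calls an hour arrive (all times in minutes) and each contact takes 3 minutes to handle.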

Machine Learning Results in R: one plot to rule them all!

To automate the process of model selection and evaluate the results with visualizations, I have created some functions in my personal library, and today I’m sharing the code with you. I run them to evaluate and compare machine learning models as quickly and easily as possible. Currently, they are designed to evaluate the results of binary classification models.
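The post’s functions are in R; as a rough Python analogue of the evaluation step, here is a stdlib-only sketch that summarises a binary classifier’s results with a few of the usual numbers (the function name and metric set are this sketch’s, not the post’s).

```python
def evaluate_binary(y_true, scores, threshold=0.5):
    """Summarise binary classification results: accuracy, precision, recall, AUC."""
    preds = [int(s >= threshold) for s in scores]
    tp = sum(p == 1 and t == 1 for p, t in zip(preds, y_true))
    tn = sum(p == 0 and t == 0 for p, t in zip(preds, y_true))
    fp = sum(p == 1 and t == 0 for p, t in zip(preds, y_true))
    fn = sum(p == 0 and t == 1 for p, t in zip(preds, y_true))
    # AUC via the rank (Mann-Whitney) formulation: fraction of
    # positive/negative pairs the classifier orders correctly
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "auc": wins / (len(pos) * len(neg)) if pos and neg else float("nan"),
    }
```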

The ultimate list of Web Scraping tools and software

Here’s your guide to pick the right web scraping tool for your specific data needs.

Monte Carlo Shiny: Part Three

In previous posts, we covered how to run a Monte Carlo simulation and how to visualize the results. Today, we will wrap that work into a Shiny app wherein a user can build a custom portfolio, and then choose a number of simulations to run and a number of months to simulate into the future.
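As a bare-bones, stand-alone sketch of the simulation the Shiny app wraps (the original series uses R; the monthly mean and volatility here are placeholder assumptions, not figures from the posts): draw monthly returns from a normal distribution and compound them over the horizon, once per simulation.

```python
import random

def simulate_growth(n_sims, n_months, mean=0.005, sd=0.04, seed=42):
    """Monte Carlo simulation: terminal wealth of $1 over n_months, n_sims times."""
    random.seed(seed)
    paths = []
    for _ in range(n_sims):
        wealth = 1.0
        for _ in range(n_months):
            wealth *= 1.0 + random.gauss(mean, sd)
        paths.append(wealth)
    return paths
```

In the app, the user’s custom portfolio would determine `mean` and `sd`, and sliders would set `n_sims` and `n_months`.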

Book Memo: “Tensor Numerical Methods in Scientific Computing”

The most difficult computational problems nowadays are those of higher dimensions. This research monograph offers an introduction to tensor numerical methods designed for the solution of the multidimensional problems in scientific computing. These methods are based on the rank-structured approximation of multivariate functions and operators by using the appropriate tensor formats. The old and new rank-structured tensor formats are investigated. We discuss in detail the novel quantized tensor approximation method (QTT) which provides function-operator calculus in higher dimensions in logarithmic complexity rendering super-fast convolution, FFT and wavelet transforms. This book suggests the constructive recipes and computational schemes for a number of real life problems described by the multidimensional partial differential equations. We present the theory and algorithms for the sinc-based separable approximation of the analytic radial basis functions including Green’s and Helmholtz kernels. The efficient tensor-based techniques for computational problems in electronic structure calculations and for the grid-based evaluation of long-range interaction potentials in multi-particle systems are considered. We also discuss the QTT numerical approach in many-particle dynamics, tensor techniques for stochastic/parametric PDEs as well as for the solution and homogenization of the elliptic equations with highly-oscillating coefficients.
Contents:
- Theory on separable approximation of multivariate functions
- Multilinear algebra and nonlinear tensor approximation
- Superfast computations via quantized tensor approximation
- Tensor approach to multidimensional integrodifferential equations

Whats new on arXiv

Linear Model Regression on Time-series Data: Non-asymptotic Error Bounds and Applications

Data-driven methods for modeling dynamic systems have received considerable attention as they provide a mechanism for control synthesis directly from the observed time-series data. In the absence of prior assumptions on how the time-series had been generated, regression on the system model has been particularly popular. In the linear case, the resulting least squares setup for model regression not only provides a computationally viable method to fit a model to the data, but also provides useful insights into the modal properties of the underlying dynamics. Although probabilistic estimates for this model regression have been reported, deterministic error bounds have not been examined in the literature, particularly as they pertain to the properties of the underlying system. In this paper, we provide deterministic non-asymptotic error bounds for fitting a linear model to the observed time-series data, with particular attention to the role of symmetry and eigenvalue multiplicity in the underlying system matrix.
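The least-squares setup the abstract refers to is easy to sketch: given a trajectory of a linear system x_{t+1} = A x_t, regress successive states on their predecessors. A minimal illustration (assuming noiseless observations, which is this sketch’s simplification):

```python
import numpy as np

def fit_linear_model(trajectory):
    """Least-squares estimate of A from a trajectory of x_{t+1} = A x_t."""
    X = np.asarray(trajectory)
    past, future = X[:-1], X[1:]
    # Solve min_Z ||past @ Z - future||_F; since future = past @ A.T, Z = A.T
    Z, *_ = np.linalg.lstsq(past, future, rcond=None)
    return Z.T
```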

Efficient Deep Learning on Multi-Source Private Data

Machine learning models benefit from large and diverse datasets. Using such datasets, however, often requires trusting a centralized data aggregator. For sensitive applications like healthcare and finance this is undesirable as it could compromise patient privacy or divulge trade secrets. Recent advances in secure and privacy-preserving computation, including trusted hardware enclaves and differential privacy, offer a way for mutually distrusting parties to efficiently train a machine learning model without revealing the training data. In this work, we introduce Myelin, a deep learning framework which combines these privacy-preservation primitives, and use it to establish a baseline level of performance for fully private machine learning.

Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions

Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks. To this end, many defense approaches that attempt to improve the robustness of DNNs have been proposed. In a separate and yet related area, recent works have explored quantizing neural network weights and activation functions into low bit-width to compress model size and reduce computational complexity. In this work, we find that these two different tracks, namely the pursuit of network compactness and robustness, can be merged into one and give rise to networks with both advantages. To the best of our knowledge, this is the first work that uses quantization of activation functions to defend against adversarial examples. We also propose to train robust neural networks by using adaptive quantization techniques for the activation functions. Our proposed Dynamic Quantized Activation (DQA) is verified through a wide range of experiments with the MNIST and CIFAR-10 datasets under different white-box attack methods, including FGSM, PGD, and C&W attacks. Furthermore, Zeroth Order Optimization and substitute-model-based black-box attacks are also considered in this work. The experimental results clearly show that the robustness of DNNs can be greatly improved using the proposed DQA.

Machine Learning Interpretability: A Science rather than a tool

The term ‘interpretability’ is often used by machine learning researchers, each with their own intuitive understanding of it. There is no universal, well-agreed-upon definition of interpretability in machine learning. Any scientific discipline is mainly driven by the set of questions it formulates rather than by its tools; e.g., astrophysics is the discipline that studies the composition of stars, not the discipline that uses spectroscopes. Similarly, we propose that machine learning interpretability should be a discipline that answers specific questions related to interpretability. These questions can be of a statistical, causal or counterfactual nature. Therefore, there is a need to look at the interpretability problem of machine learning in the context of the questions that need to be addressed rather than of different tools. We discuss a hypothetical interpretability framework driven by a question-based scientific approach rather than by some specific machine learning model. Using a question-based notion of interpretability, we can step towards understanding the science of machine learning rather than its engineering. This notion will also help us understand any specific problem in more depth rather than relying solely on machine learning methods.

Improving Explainable Recommendations with Synthetic Reviews

An important task for a recommender system is to provide interpretable explanations for the user. This is important for the credibility of the system. Current interpretable recommender systems tend to focus on certain features known to be important to the user and offer their explanations in a structured form. It is well known that user generated reviews and feedback from reviewers have strong leverage over the users’ decisions. On the other hand, recent text generation works have been shown to generate text of similar quality to human written text, and we aim to show that generated text can be successfully used to explain recommendations. In this paper, we propose a framework consisting of popular review-oriented generation models aiming to create personalised explanations for recommendations. The interpretations are generated at both character and word levels. We build a dataset containing reviewers’ feedback from the Amazon books review dataset. Our cross-domain experiments are designed to bridge from natural language processing to the recommender system domain. Besides language model evaluation methods, we employ DeepCoNN, a novel review-oriented recommender system using a deep neural network, to evaluate the recommendation performance of generated reviews by root mean square error (RMSE). We demonstrate that the synthetic personalised reviews have better recommendation performance than human written reviews. To our knowledge, this presents the first machine-generated natural language explanations for rating prediction.

Deep Reinforcement Learning for Swarm Systems

Recently, deep reinforcement learning (RL) methods have been applied successfully to multi-agent scenarios. Typically, these methods rely on a concatenation of agent states to represent the information content required for decentralized decision making. However, concatenation scales poorly to swarm systems with a large number of homogeneous agents as it does not exploit the fundamental properties inherent to these systems: (i) the agents in the swarm are interchangeable and (ii) the exact number of agents in the swarm is irrelevant. Therefore, we propose a new state representation for deep multi-agent RL based on mean embeddings of distributions. We treat the agents as samples of a distribution and use the empirical mean embedding as input for a decentralized policy. We define different feature spaces of the mean embedding using histograms, radial basis functions and a neural network learned end-to-end. We evaluate the representation on two well known problems from the swarm literature (rendezvous and pursuit evasion), in a globally and locally observable setup. For the local setup we furthermore introduce simple communication protocols. Of all approaches, the mean embedding representation using neural network features enables the richest information exchange between neighboring agents facilitating the development of more complex collective strategies.

Improving Named Entity Recognition by Jointly Learning to Disambiguate Morphological Tags

Previous studies have shown that linguistic features of a word such as possession, genitive or other grammatical cases can be employed in word representations of a named entity recognition (NER) tagger to improve the performance for morphologically rich languages. However, these taggers require external morphological disambiguation (MD) tools to function which are hard to obtain or non-existent for many languages. In this work, we propose a model which alleviates the need for such disambiguators by jointly learning NER and MD taggers in languages for which one can provide a list of candidate morphological analyses. We show that this can be done independent of the morphological annotation schemes, which differ among languages. Our experiments employing three different model architectures that join these two tasks show that joint learning improves NER performance. Furthermore, the morphological disambiguator’s performance is shown to be competitive.

Adaptive Neural Trees

Deep neural networks and decision trees operate on largely separate paradigms; typically, the former performs representation learning with pre-specified architectures, while the latter is characterised by learning hierarchies over pre-specified features with data-driven architectures. We unite the two via adaptive neural trees (ANTs), a model that incorporates representation learning into edges, routing functions and leaf nodes of a decision tree, along with a backpropagation-based training algorithm that adaptively grows the architecture from primitive modules (e.g., convolutional layers). We demonstrate that, whilst achieving over 99% and 90% accuracy on MNIST and CIFAR-10 datasets, ANTs benefit from (i) faster inference via conditional computation, (ii) increased interpretability via hierarchical clustering e.g. learning meaningful class associations, such as separating natural vs. man-made objects, and (iii) a mechanism to adapt the architecture to the size and complexity of the training dataset.

Receiver Operating Characteristic Curves and Confidence Bands for Support Vector Machines

Many problems that appear in biomedical decision making, such as diagnosing disease and predicting response to treatment, can be expressed as binary classification problems. The costs of false positives and false negatives vary across application domains and receiver operating characteristic (ROC) curves provide a visual representation of this trade-off. Nonparametric estimators for the ROC curve, such as a weighted support vector machine (SVM), are desirable because they are robust to model misspecification. While weighted SVMs have great potential for estimating ROC curves, their theoretical properties were heretofore underdeveloped. We propose a method for constructing confidence bands for the SVM ROC curve and provide the theoretical justification for the SVM ROC curve by showing that the risk function of the estimated decision rule is uniformly consistent across the weight parameter. We demonstrate the proposed confidence band method and the superior sensitivity and specificity of the weighted SVM compared to commonly used methods in diagnostic medicine using simulation studies. We present two illustrative examples: diagnosis of hepatitis C and a predictive model for treatment response in breast cancer.

Dependency Leakage: Analysis and Scalable Estimators

In this paper, we prove the first theoretical results on dependency leakage — a phenomenon in which learning on noisy clusters biases cross-validation and model selection results. This is a major concern for domains involving human record databases (e.g. medical, census, advertising), which are almost always noisy due to the effects of record linkage and which require special attention to machine learning bias. The proposed theoretical properties justify regularization choices in several existing statistical estimators and allow us to construct the first hypothesis test for cross-validation bias due to dependency leakage. Furthermore, we propose a novel matrix sketching technique which, along with standard function approximation techniques, enables dramatically improving the sample and computational scalability of existing estimators. Empirical results on several benchmark datasets validate our theoretical results and proposed methods.

SySeVR: A Framework for Using Deep Learning to Detect Software Vulnerabilities

The detection of software vulnerabilities (or vulnerabilities for short) is an important problem that has yet to be tackled, as manifested by many vulnerabilities reported on a daily basis. This calls for machine learning methods to automate vulnerability detection. Deep learning is attractive for this purpose because it does not require human experts to manually define features. Despite the tremendous success of deep learning in other domains, its applicability to vulnerability detection is not systematically understood. In order to fill this void, we propose the first systematic framework for using deep learning to detect vulnerabilities. The framework, dubbed Syntax-based, Semantics-based, and Vector Representations (SySeVR), focuses on obtaining program representations that can accommodate syntax and semantic information pertinent to vulnerabilities. Our experiments with 4 software products demonstrate the usefulness of the framework: we detect 15 vulnerabilities that are not reported in the National Vulnerability Database. Among these 15 vulnerabilities, 7 are unknown and have been reported to the vendors, and the other 8 have been ‘silently’ patched by the vendors when releasing newer versions of the products.

General Value Function Networks

In this paper we show that restricting the representation layer of a Recurrent Neural Network (RNN) improves accuracy and reduces the depth of recursive training procedures in partially observable domains. Artificial Neural Networks have been shown to learn useful state representations for high-dimensional visual and continuous control domains. If the task at hand exhibits long dependencies back in time, these instantaneous feed-forward approaches are augmented with recurrent connections and trained with Backpropagation Through Time (BPTT). This unrolled training can become computationally prohibitive if the dependency structure is long, and while recent work on LSTMs and GRUs has improved upon naive training strategies, there is still room for improvements in computational efficiency and parameter sensitivity. In this paper we explore a simple modification to the classic RNN structure: restricting the state to be comprised of multi-step General Value Function predictions. We formulate an architecture called General Value Function Networks (GVFNs) and a corresponding objective that generalizes beyond previous approaches. We show that our GVFNs are significantly more robust to train, and facilitate accurate prediction with no gradients needed back in time in domains with substantial long-term dependencies.

Self-supervised Knowledge Distillation Using Singular Value Decomposition

To address deep neural networks’ (DNNs) need for huge training datasets and their high computational cost, the so-called teacher-student (T-S) DNN, which transfers the knowledge of a T-DNN to an S-DNN, has been proposed. However, the existing T-S-DNN has a limited range of use, and the knowledge of the T-DNN is insufficiently transferred to the S-DNN. To improve the quality of the knowledge transferred from the T-DNN, we propose a new knowledge distillation using singular value decomposition (SVD). In addition, we define knowledge transfer as a self-supervised task and suggest a way to continuously receive information from the T-DNN. Simulation results show that an S-DNN with a computational cost of 1/5 of the T-DNN can be up to 1.1% better than the T-DNN in terms of classification accuracy. Also, assuming the same computational cost, our S-DNN outperforms the S-DNN trained with the state-of-the-art distillation by 1.79%. Code is available at https://…/SSKD_SVD.
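As a generic illustration of the SVD building block the paper relies on (not the paper’s actual distillation pipeline), a rank-r truncation compresses a feature matrix while keeping its dominant structure:

```python
import numpy as np

def truncated_svd(features, rank):
    """Best rank-`rank` approximation of a feature matrix via SVD."""
    U, s, Vt = np.linalg.svd(features, full_matrices=False)
    # Keep only the top `rank` singular triplets
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```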

Trust-Based Collaborative Filtering: Tackling the Cold Start Problem Using Regular Equivalence

User-based Collaborative Filtering (CF) is one of the most popular approaches to create recommender systems. This approach is based on finding the most relevant k users from whose rating history we can extract items to recommend. CF, however, suffers from data sparsity and the cold-start problem since users often rate only a small fraction of available items. One solution is to incorporate additional information into the recommendation process such as explicit trust scores that are assigned by users to others or implicit trust relationships that result from social connections between users. Such relationships typically form a very sparse trust network, which can be utilized to generate recommendations for users based on people they trust. In our work, we explore the use of a measure from network science, i.e. regular equivalence, applied to a trust network to generate a similarity matrix that is used to select the k-nearest neighbors for recommending items. We evaluate our approach on Epinions and we find that we can outperform related methods for tackling cold-start users in terms of recommendation accuracy.
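The abstract does not spell out the regular-equivalence computation; as a rough sketch of the idea, assume it is approximated by a Katz-style truncated series over the trust network’s adjacency matrix (sim = Σ αᵏAᵏ), from which the k nearest neighbors for a user can be read off:

```python
import numpy as np

def trust_similarity(adj, alpha=0.1, n_terms=5):
    """Katz-style similarity: truncated sum of alpha^k * A^k over trust links."""
    A = np.asarray(adj, dtype=float)
    term = np.eye(len(A))
    sim = np.zeros_like(A)
    for _ in range(n_terms):
        term = alpha * term @ A      # term becomes alpha^k A^k
        sim += term
    return sim

def k_nearest(sim, user, k):
    """Indices of the k users most similar to `user` (excluding the user)."""
    order = np.argsort(sim[user])[::-1]
    return [int(i) for i in order if i != user][:k]
```

Here `alpha` damps longer trust chains; the helper names are this sketch’s, not the paper’s.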

Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search

While existing work on neural architecture search (NAS) tunes hyperparameters in a separate post-processing step, we demonstrate that architectural choices and other hyperparameter settings interact in a way that can render this separation suboptimal. Likewise, we demonstrate that the common practice of using very few epochs during the main NAS and much larger numbers of epochs during a post-processing step is inefficient due to little correlation in the relative rankings for these two training regimes. To combat both of these problems, we propose to use a recent combination of Bayesian optimization and Hyperband for efficient joint neural architecture and hyperparameter search.

Backplay: ‘Man muss immer umkehren’

A long-standing problem in model free reinforcement learning (RL) is that it requires a large number of trials to learn a good policy, especially in environments with sparse rewards. We explore a method to increase the sample efficiency of RL when we have access to demonstrations. Our approach, which we call Backplay, uses a single demonstration to construct a curriculum for a given task. Rather than starting each training episode in the environment’s fixed initial state, we start the agent near the end of the demonstration and move the starting point backwards during the course of training until we reach the initial state. We perform experiments in a competitive four player game (Pommerman) and a path-finding maze game. We find that this weak form of guidance provides significant gains in sample complexity with a stark advantage in sparse reward environments. In some cases, standard RL did not yield any improvement while Backplay reached success rates greater than 50% and generalized to unseen initial conditions in the same amount of training time. Additionally, we see that agents trained via Backplay can learn policies superior to those of the original demonstration.
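The curriculum idea is simple enough to sketch. Assuming a demonstration stored as a list of states, a hypothetical schedule (this sketch’s, not necessarily the paper’s exact one) slides the episode start point from the demonstration’s end back to its beginning as training proceeds:

```python
def backplay_start(demonstration, epoch, total_epochs):
    """Pick the state a training episode should start from."""
    progress = min(epoch / max(total_epochs - 1, 1), 1.0)
    # progress 0 -> start near the demonstration's end; 1 -> its beginning
    index = round((1.0 - progress) * (len(demonstration) - 1))
    return demonstration[index]
```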

A Probabilistic Theory of Supervised Similarity Learning for Pointwise ROC Curve Optimization

The performance of many machine learning techniques depends on the choice of an appropriate similarity or distance measure on the input space. Similarity learning (or metric learning) aims at building such a measure from training data so that observations with the same (resp. different) label are as close (resp. far) as possible. In this paper, similarity learning is investigated from the perspective of pairwise bipartite ranking, where the goal is to rank the elements of a database by decreasing order of the probability that they share the same label with some query data point, based on the similarity scores. A natural performance criterion in this setting is pointwise ROC optimization: maximize the true positive rate under a fixed false positive rate. We study this novel perspective on similarity learning through a rigorous probabilistic framework. The empirical version of the problem gives rise to a constrained optimization formulation involving U-statistics, for which we derive universal learning rates as well as faster rates under a noise assumption on the data distribution. We also address the large-scale setting by analyzing the effect of sampling-based approximations. Our theoretical results are supported by illustrative numerical experiments.
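The pointwise criterion above (maximize the true positive rate at a fixed false positive rate) can be read off an empirical ROC directly. A small stdlib-only sketch of that evaluation, not the paper’s U-statistic formulation:

```python
def tpr_at_fpr(y_true, scores, max_fpr):
    """True positive rate at the largest threshold whose FPR <= max_fpr."""
    neg = sorted((s for s, t in zip(scores, y_true) if t == 0), reverse=True)
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    k = int(max_fpr * len(neg))                 # false-positive budget
    # Strict > against the (k+1)-th largest negative score keeps FPR <= max_fpr
    threshold = neg[k] if k < len(neg) else float("-inf")
    return sum(s > threshold for s in pos) / len(pos)
```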

Cross Validation Based Model Selection via Generalized Method of Moments

Structural estimation is an important methodology in empirical economics, and a large class of structural models are estimated through the generalized method of moments (GMM). Traditionally, selection among structural models has been performed based on model fit upon estimation, using the entire observed sample. In this paper, we propose a model selection procedure based on cross-validation (CV), which uses a sample-splitting technique to avoid issues such as over-fitting. While CV is widely used in the machine learning community, we are the first to prove its consistency for model selection in the GMM framework. Its empirical properties are compared to existing methods through simulations of IV regressions and an oligopoly market model. In addition, we propose a way to apply our method to the Mathematical Programming with Equilibrium Constraints (MPEC) approach. Finally, we apply our method to online-retail sales data to compare a dynamic market model to a static model.

Time-Varying Optimization: Algorithms and Engineering Applications

This is the write-up of the talk I gave at the 23rd International Symposium on Mathematical Programming (ISMP) in Bordeaux, France, July 6th, 2018. The talk was a general overview of the state of the art of time-varying, mainly convex, optimization, with special emphasis on discrete-time algorithms and applications in energy and transportation. This write-up is mathematically correct, while its style is somewhat less formal than a standard paper.

Fast Model-Selection through Adapting Design of Experiments Maximizing Information Gain
Analysis of social media content and search behavior related to seasonal topics using the sociophysics approach
Backscatter-assisted Relaying in Wireless Powered Communications Network
Degree Correlations Amplify the Growth of Cascades in Networks
On SDEs with Lipschitz coefficients, driven by continuous, model-free price paths
Discrete linear-complexity reinforcement learning in continuous action spaces for Q-learning algorithms
Design and Analysis of Efficient Maximum/Minimum Circuits for Stochastic Computing
Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)
Data-Efficient Weakly Supervised Learning for Low-Resource Audio Event Detection Using Deep Learning
Analysis of Optimized Threshold with SLM based Blanking Non-Linearity for Impulsive Noise Reduction in Power Line Communication Systems
On maximum $k$-edge-colorable subgraphs of bipartite graphs
A tight lower bound for the hardness of clutters
A Fast Segmentation-free Fully Automated Approach to White Matter Injury Detection in Preterm Infants
Monochromatic cycle partitions in random graphs
Learning Noise-Invariant Representations for Robust Speech Recognition
Influence Models on Layered Uncertain Networks: A Guaranteed-Cost Design Perspective
Rapid Bipedal Gait Design Using C-FROST with Illustration on a Cassie-series Robot
SRN: Side-output Residual Network for Object Reflection Symmetry Detection and Beyond
Emergent Meaning Structures: A Socio-Semantic Network Analysis of Artistic Collectives
Distributed Triangle Detection via Expander Decomposition
Parallel Restarted SGD for Non-Convex Optimization with Faster Convergence and Less Communication
Expressive power of outer product manifolds on feed-forward neural networks
Multimatricvariate distribution under elliptical models
Developing a Portable Natural Language Processing Based Phenotyping System
Discovering Job Preemptions in the Open Science Grid
A Framework for Moment Invariants
The Online $k$-Taxi Problem
Remote Sampling with Applications to General Entanglement Simulation
Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourage convex latent distributions
Item Recommendation with Variational Autoencoders and Heterogenous Priors
From modelling of systems with constraints to generalized geometry and back to numerics
Invariant Information Distillation for Unsupervised Image Segmentation and Clustering
Devam vs. Tamam: 2018 Turkish Elections
Mixed-Stationary Gaussian Process for Flexible Non-Stationary Modeling of Spatial Outcomes
Airline Passenger Name Record Generation using Generative Adversarial Networks
Model selection for sequential designs in discrete finite systems using Bernstein kernels
Payoff Control in the Iterated Prisoner’s Dilemma
Accel: A Corrective Fusion Network for Efficient Semantic Segmentation on Video
Derandomizing the Lovasz Local Lemma via log-space statistical tests
Query-Conditioned Three-Player Adversarial Network for Video Summarization
Modular Semantics and Characteristics for Bipolar Weighted Argumentation Graphs
Supermodular Locality Sensitive Hashes
A transformation-based approach to Gaussian mixture density estimation for bounded data
Tensor Methods for Additive Index Models under Discordance and Heterogeneity
Regularized Zero-Forcing Precoding Aided Adaptive Coding and Modulation for Large-Scale Antenna Array Based Air-to-Air Communications
Integrating Algorithmic Planning and Deep Learning for Partially Observable Navigation
Pink Work: Same-Sex Marriage, Employment and Discrimination
Massively Parallel Symmetry Breaking on Sparse Graphs: MIS and Maximal Matching
Stochastic defense against complex grid attacks
A Modulation Module for Multi-task Learning with Applications in Image Retrieval
Digit sums and generating functions
Evaluating Gaussian Process Metamodels and Sequential Designs for Noisy Level Set Estimation
Multivariate approximation in total variation using local dependence
Pattern Synthesis via Complex-Coefficient Weight Vector Orthogonal Decomposition–Part I: Fundamentals
Encrypted Control System with Quantizer
Recurrent Capsule Network for Relations Extraction: A Practical Application to the Severity Classification of Coronary Artery Disease
Deterministic oblivious distribution (and tight compaction) in linear time
Synthesis of Successful Actuator Attackers on Supervisors
Pattern Synthesis via Complex-Coefficient Weight Vector Orthogonal Decomposition–Part II: Robust Sidelobe Synthesis
Timed Discrete-Event Systems are Synchronous Product Structures
The MOEADr Package – A Component-Based Framework for Multiobjective Evolutionary Algorithms Based on Decomposition
Motivating the Rules of the Game for Adversarial Example Research
Generating Levels That Teach Mechanics
Forward Attention in Sequence-to-sequence Acoustic Modelling for Speech Synthesis
3D Global Convolutional Adversarial Network for Prostate MR Volume Segmentation
A generalisation of the relation between zeros of the complex Kac polynomial and eigenvalues of truncated unitary matrices
Gradient Band-based Adversarial Training for Generalized Attack Immunity of A3C Path Finding
A Learning-Based Coexistence Mechanism for LAA-LTE Based HetNets
On Evaluation of Embodied Navigation Agents
Convergence guarantees for RMSProp and ADAM in non-convex optimization and their comparison to Nesterov acceleration on autoencoders
Comparing Respiratory Monitoring Performance of Commercial Wireless Devices
A pliable lasso for the Cox model
Bag-of-Visual-Words for Signature-Based Multi-Script Document Retrieval
Visual Affordance and Function Understanding: A Survey
Detecting strong signals in gene perturbation experiments: An adaptive approach with power guarantee and FDR control
Planning and Synthesis Under Assumptions
Resilient Feedback Controller Design For Linear Model of Power Grids
An Attention-Based Approach for Single Image Super Resolution
Effect of Sensor Error on the Assessment of Seismic Building Damage
Deep Content-User Embedding Model for Music Recommendation
Lower bounds for dilation, wirelength, and edge congestion of embedding graphs into hypercubes
DroNet: Efficient convolutional neural network detector for real-time UAV applications
Multi-Task Unsupervised Contextual Learning for Behavioral Annotation
A Central Limit Theorem for $L_p$ transportation cost with applications to Fairness Assessment in Machine Learning
Robust Distributed Compression of Symmetrically Correlated Gaussian Sources
Approximating Systems Fed by Poisson Processes with Rapidly Changing Arrival Rates
Delay Minimization for NOMA-MEC Offloading
Tri-Compress: A Cascaded Data Compression Framework for Smart Electricity Distribution Systems
Local symmetry theory of resonator structures for the real-space control of edge-states in binary aperiodic chains
Traditional Wisdom and Monte Carlo Tree Search Face-to-Face in the Card Game Scopone
Determining ellipses from low resolution images with a comprehensive image formation model
The Scheduler is Very Powerful in Competitive Analysis of Distributed List Accessing
Computed Tomography Image Enhancement using 3D Convolutional Neural Network
Customer Sharing in Economic Networks with Costs
News-based trading strategies
Semilinear evolution equations for the Anderson Hamiltonian in two and three dimensions
Spaceborne Staring Spotlight SAR Tomography – A First Demonstration with TerraSAR-X
Network Identification: A Passivity and Network Optimization Approach
What circle bundles can be triangulated over $\partial \Delta^3$?
Learning Interpretable Anatomical Features Through Deep Generative Models: Application to Cardiac Remodeling
Practical MIMO-NOMA: Low Complexity & Capacity-Approaching Solution
Reactive random walkers on complex networks
Random walks on graphs: new bounds on hitting, meeting, coalescing and returning
Mix $\star$-autonomous quantales and the continuous weak order
Vertex Turán problems for the oriented hypercube
Approximation algorithms on $k$-cycle covering and $k$-clique covering
An Information-theoretic Framework for the Lossy Compression of Link Streams
Distinct patterns of syntactic agreement errors in recurrent networks and humans
Guaranteed Error Bounds on Approximate Model Abstractions through Reachability Analysis
Large deviation principle for fractional Brownian motion with respect to capacity
An entropic interpolation proof of the HWI inequality
Interacting diffusions on random graphs with diverging degrees: hydrodynamics and large deviations
Melanoma Recognition with an Ensemble of Techniques for Segmentation and a Structural Analysis for Classification
Comment on ‘Nodal infection in Markovian susceptible-infected-susceptible and susceptible-infected-removed epidemics on networks are non-negatively correlated’
Quantum cluster algebras from unpunctured triangulated surfaces
Intriguing yet simple skewness – kurtosis relation in economic and demographic data distributions; pointing to preferential attachment processes
On graphs admitting two disjoint maximum independent sets
RARD II: The 2nd Related-Article Recommendation Dataset
Learning Hybrid Sparsity Prior for Image Restoration: Where Deep Learning Meets Sparse Coding
Time-Bounded Influence Diffusion with Incentives
Fake news as we feel it: perception and conceptualization of the term ‘fake news’ in the media
Stochastic Dominance Under Independent Noise
An ETH-Tight Exact Algorithm for Euclidean TSP
Identifying Position-Dependent Mechanical Systems: A Modal Approach with Applications to Wafer Stage Control
Linearity of Saturation for Berge Hypergraphs
On the Gardner-Zvavitch conjecture: symmetry in the inequalities of Brunn-Minkowski type
Method for motion artifact reduction using a convolutional neural network for dynamic contrast enhanced MRI of the liver
Quantifying Biases in Online Information Exposure
Active Learning for Segmentation by Optimizing Content Information for Maximal Entropy
The parameterised complexity of computing the maximum modularity of a graph
The Generalized Lasso for Sub-gaussian Measurements with Dithered Quantization
Quantile-Regression Inference With Adaptive Control of Size
Video Time: Properties, Encoders and Evaluation
A Quantitative Central Limit Theorem for the Excursion Area of Random Spherical Harmonics over Subdomains of $\mathbb{S}^2$
Intermediate spectral statistics in the many-body localization transition
An exploratory factor analysis model for slum severity index in Mexico City
Evolving Large-Scale Data Stream Analytics based on Scalable PANFIS
Is it worth it? Budget-related evaluation metrics for model selection
Spectrum accessing optimization in congestion times in radio cognitive networks based on chaotic neural networks
Skin Lesion Segmentation and Classification for ISIC 2018 Using Traditional Classifiers with Hand-Crafted Features
Quantified boolean formula problem
Cruise Missile Target Trajectory Movement Prediction based on Optimal 3D Kalman Filter with Firefly Algorithm
Probability Density Function Estimation in OFDM Transmitter and Receiver in Radio Cognitive Networks based on Recurrent Neural Network
Learning Sums of Independent Random Variables with Sparse Collective Support
Cross-layer Optimization for High Speed Adders: A Pareto Driven Machine Learning Approach
Extrinsic Spin-Charge Coupling in Diffusive Superconducting Systems
Throttling for Zero Forcing and Variants
Sample Path Properties of the Average Generation of a Bellman-Harris Process
Skeletal Movement to Color Map: A Novel Representation for 3D Action Recognition with Inception Residual Networks
Heuristic Policies for Stochastic Knapsack Problem with Time-Varying Random Demand
Location Augmentation for CNN
Robot Learning in Homes: Improving Generalization and Reducing Dataset Bias
A New Index of Human Capital to Predict Economic Growth
Weighted Persistent Homology Sums of Random Cech Complexes
Hypergraphs not containing a tight tree with a bounded trunk II: 3-trees with a trunk of size 2
Semi-Markov processes, integro-differential equations and anomalous diffusion-aggregation

If you did not already know

Adversary Model google
In computer science, an online algorithm measures its competitiveness against different adversary models. For deterministic algorithms the adversary is always the same: the adaptive offline adversary. For randomized online algorithms, competitiveness can depend upon the adversary model used. …

N-Gram google
In the fields of computational linguistics and probability, an n-gram is a contiguous sequence of n items from a given sequence of text or speech. The items can be phonemes, syllables, letters, words or base pairs according to the application. The n-grams typically are collected from a text or speech corpus. An n-gram of size 1 is referred to as a “unigram”; size 2 is a “bigram” (or, less commonly, a “digram”); size 3 is a “trigram”. Larger sizes are sometimes referred to by the value of n, e.g., “four-gram”, “five-gram”, and so on. …
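The sliding-window extraction described above is a one-liner; a minimal Python sketch (splitting on whitespace is an assumption — real pipelines use a proper tokeniser):

```python
def ngrams(items, n):
    """Return the contiguous n-grams of a sequence as tuples."""
    return [tuple(items[i:i + n]) for i in range(len(items) - n + 1)]

words = "to be or not to be".split()
print(ngrams(words, 1))  # unigrams: ('to',), ('be',), ...
print(ngrams(words, 2))  # bigrams: ('to', 'be'), ('be', 'or'), ...
print(ngrams(words, 3))  # trigrams
```

The same function works at the character level — pass `list("speech")` instead of a word list to get character n-grams.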

Similarity-Based Imbalanced Classification (SBIC) google
When the training data in a two-class classification problem is overwhelmed by one class, most classification techniques fail to correctly identify the data points belonging to the underrepresented class. We propose Similarity-based Imbalanced Classification (SBIC) that learns patterns in the training data based on an empirical similarity function. To take the imbalanced structure of the training data into account, SBIC utilizes the concept of absent data, i.e. data from the minority class which can help better find the boundary between the two classes. SBIC simultaneously optimizes the weights of the empirical similarity function and finds the locations of absent data points. As such, SBIC uses an embedded mechanism for synthetic data generation which does not modify the training dataset, but alters the algorithm to suit imbalanced datasets. Therefore, SBIC uses the ideas of both major schools of thought in imbalanced classification: like cost-sensitive approaches, SBIC operates on an algorithm level to handle imbalanced structures; and similar to synthetic data generation approaches, it utilizes the properties of unobserved data points from the minority class. The application of SBIC to imbalanced datasets suggests it is comparable to, and in some cases outperforms, other commonly used classification techniques for imbalanced datasets. …
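To make the idea of an empirical similarity function concrete, here is a heavily simplified, hypothetical sketch — not the SBIC algorithm itself (it omits the learned weights and the absent-data mechanism): each class is scored by its summed similarity to the query point.

```python
import math

def similarity(x, y, gamma=1.0):
    # Gaussian-style empirical similarity: closer points count more.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def classify(x, labeled):
    """Score each class by summed similarity to its training points
    and predict the class with the highest total (illustrative only)."""
    scores = {}
    for xi, yi in labeled:
        scores[yi] = scores.get(yi, 0.0) + similarity(x, xi)
    return max(scores, key=scores.get)

# Imbalanced toy data: 2 minority points vs. 3 majority points.
train = [((0.0,), "minority"), ((0.2,), "minority"),
         ((5.0,), "majority"), ((5.1,), "majority"), ((4.9,), "majority")]
print(classify((0.1,), train))  # "minority"
```

Note that summed similarity naturally tilts toward the larger class near the decision boundary — exactly the imbalance problem SBIC addresses with learned weights and synthetic "absent" minority points.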

Distilled News

How to add Trend Lines to Visualizations in Displayr

Visualizations should make the most important features of your data stand out, but too often what's important gets lost in a minefield of data. You can highlight systematic changes amid random noise by adding trend lines to your chart. In Displayr, Visualizations of chart type Column, Bar, Area, Line and Scatter all support trend lines. Trend lines can be linear or non-parametric (cubic spline, Friedman's super-smoother, or LOESS).
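Displayr fits these lines for you; for intuition, here is what the linear case computes — an ordinary least-squares fit, sketched with the Python standard library (the data points are made up):

```python
def linear_trend(xs, ys):
    """Ordinary least-squares slope and intercept for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

xs = [0, 1, 2, 3, 4]
ys = [1.0, 3.1, 4.9, 7.2, 8.8]  # noisy points around y = 2x + 1
a, b = linear_trend(xs, ys)
print(a, b)  # slope ≈ 1.97, intercept ≈ 1.06
```

The non-parametric options (spline, super-smoother, LOESS) replace the single global line with locally fitted curves, which is why they can follow bends in the data.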

Practical Apache Spark in 10 minutes. Part 2 – RDD

Spark's primary abstraction is a distributed collection of items called a Resilient Distributed Dataset (RDD). It is a fault-tolerant collection of elements which supports parallel operations. RDDs can be created from Hadoop InputFormats (such as HDFS files) or by transforming other RDDs.
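The shape of an RDD computation — create a collection, transform it, reduce it — can be mimicked with plain Python. This is a conceptual stand-in, not PySpark; with a live SparkContext `sc`, the equivalent chain would be `sc.parallelize(range(1, 11)).map(...).filter(...).reduce(...)`:

```python
from functools import reduce

data = range(1, 11)
# map: square each element; filter: keep the even squares; reduce: sum them.
result = reduce(lambda a, b: a + b,
                filter(lambda x: x % 2 == 0,
                       map(lambda x: x * x, data)))
print(result)  # 220
```

In Spark the `map` and `filter` steps are lazy transformations recorded as a lineage graph; nothing executes until an action such as `reduce` or `collect` is called, which is also how RDDs recover lost partitions.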

Practical Apache Spark in 10 minutes. Part 3 – DataFrames and SQL

Spark SQL is a part of the Apache Spark big data framework designed for processing structured and semi-structured data. It provides a DataFrame API that simplifies and accelerates data manipulations. A DataFrame is a special type of object, conceptually similar to a table in a relational database. It represents a distributed collection of data organized into named columns. DataFrames can be created from external sources, retrieved with a query from a database, or converted from an RDD; the inverse transform is also possible. This abstraction is designed for sampling, filtering, aggregating, and visualizing the data. In this blog post, we're going to show you how to load a DataFrame and perform basic operations on it with both the API and SQL. We'll also go through DataFrame-to-RDD and vice-versa conversions.
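The "table of named columns queried with SQL" model is easiest to see in miniature with the standard library's sqlite3 — an analogy only, since Spark runs the same kind of query over a distributed DataFrame (table and column names here are made up):

```python
import sqlite3

# A DataFrame is conceptually a table of named columns; in miniature:
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT, age INTEGER)")
con.executemany("INSERT INTO people VALUES (?, ?)",
                [("Ann", 34), ("Bob", 19), ("Cid", 42)])
rows = con.execute(
    "SELECT name FROM people WHERE age > 30 ORDER BY name").fetchall()
print(rows)  # [('Ann',), ('Cid',)]
```

In Spark SQL the same filter could be written either through the API (`df.filter(df.age > 30)`) or by registering the DataFrame as a temporary view and issuing the SQL string — the two styles are interchangeable.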

Practical Apache Spark in 10 minutes. Part 4 – MLlib

The vast possibilities of artificial intelligence are of increasing interest in modern information technology. One of its most promising and rapidly evolving directions is machine learning (ML), which has become an essential part of many aspects of our lives. ML has found successful applications in natural language processing, face recognition, autonomous vehicles, fraud detection, machine vision, and many other fields. Machine learning uses mathematical algorithms to solve specific tasks in a way loosely analogous to the human brain. Depending on the training setup, ML algorithms can be divided into supervised (labeled data), unsupervised (unlabeled data), semi-supervised (both labeled and unlabeled data in the dataset), and reinforcement (based on receiving rewards) learning. Solving the most basic and popular ML tasks, such as classification and regression, relies mainly on supervised learning algorithms. Among the variety of existing ML tools, Spark MLlib is a popular and easy-to-start library for training models to solve the problems mentioned above. In this post, we consider a classification task: we will classify Iris plants into 3 categories according to the size of their sepals and petals. The public dataset with Iris classification is available here. To move forward, download the file to the working folder.
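As a toy illustration of supervised classification — not MLlib's API, which would fit an estimator such as a logistic regression over a DataFrame — here is a nearest-centroid classifier on made-up, Iris-like measurements:

```python
def centroid(points):
    """Coordinate-wise mean of a list of equal-length tuples."""
    return [sum(c) / len(points) for c in zip(*points)]

def nearest_centroid_fit(X, y):
    """'Train' by computing one centroid per class label."""
    classes = {}
    for xi, yi in zip(X, y):
        classes.setdefault(yi, []).append(xi)
    return {label: centroid(pts) for label, pts in classes.items()}

def predict(model, x):
    """Assign x to the class with the closest centroid."""
    return min(model, key=lambda label: sum(
        (a - b) ** 2 for a, b in zip(x, model[label])))

# Hypothetical (sepal length, petal length) pairs, not the real dataset.
X = [(5.0, 1.4), (5.1, 1.5), (6.7, 5.2), (6.9, 5.4)]
y = ["setosa", "setosa", "virginica", "virginica"]
model = nearest_centroid_fit(X, y)
print(predict(model, (5.2, 1.6)))  # "setosa"
```

The labeled `(X, y)` pairs are exactly what "supervised" means in the taxonomy above; MLlib automates the same fit/predict cycle at cluster scale.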

Practical Apache Spark in 10 minutes. Part 5 – Streaming

Spark is a powerful tool which can be applied to solve many interesting problems, some of which have been discussed in our previous posts. Today we will consider another important application, namely streaming. Streaming data is data that arrives continuously as small records from different sources. There are many use cases for streaming technology, such as sensor monitoring in industrial or scientific devices, server log checking, financial market monitoring, etc. In this post, we will examine the case of sensor temperature monitoring. For example, we have several sensors (1, 2, 3, 4, …) in our device. Their state is defined by the following parameters: date (dd/mm/year), sensor number, state (1 – stable, 0 – critical), and temperature (degrees Celsius). The sensor-state data arrives as a stream, and we want to analyze it. Streaming data can be loaded from different sources; as we don't have a real streaming data source, we should simulate one. For this purpose we could use Kafka, Flume, or Kinesis, but the simplest streaming data simulator is Netcat.
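Before wiring up Netcat, the per-batch logic can be prototyped on a simulated list of records in the format described above (the dates, sensor numbers, states, and temperatures below are made up):

```python
from collections import defaultdict

# Simulated micro-batch of records: (date, sensor, state, temperature)
stream = [
    ("01/01/2018", 1, 1, 21.5),
    ("01/01/2018", 2, 0, 48.0),
    ("01/01/2018", 1, 1, 22.0),
    ("01/01/2018", 2, 0, 51.5),
]

def alert_critical(records):
    """Average temperature per sensor over a micro-batch and flag
    sensors that reported a critical state (0)."""
    temps, critical = defaultdict(list), set()
    for _, sensor, state, temp in records:
        temps[sensor].append(temp)
        if state == 0:
            critical.add(sensor)
    avgs = {s: sum(ts) / len(ts) for s, ts in temps.items()}
    return avgs, critical

avgs, critical = alert_critical(stream)
print(avgs, critical)  # sensor 2 averages 49.75 °C and is critical
```

With Spark Streaming, the same function would run on each micro-batch read from a socket (Netcat listening via `nc -lk <port>`, consumed by `socketTextStream`) after parsing each line into a record tuple.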

Data collection and data markets in the age of privacy and machine learning

While models and algorithms garner most of the media coverage, this is a great time to be thinking about building tools in data.

Mounting multiple data and outputs volumes

For some advanced use cases, users might need to mount more than one data and/or outputs volume. Polyaxon provides a way to mount multiple volumes so that users can choose which volume(s) to mount for a specific job or experiment.

Classification with Shogun machine learning library

Shogun is an open-source machine learning library that offers a wide range of machine learning algorithms. From my point of view it's not very popular among professionals, but it has a lot of fans among enthusiasts and students. The library offers a unified API for its algorithms, so they can be easily managed, somewhat like the scikit-learn approach. There is a set of examples which can help you learn the library, but comprehensive documentation is missing.

5 Quick and Easy Data Visualizations in Python with Code

Data Visualization is a big part of a data scientist's job. In the early stages of a project, you'll often be doing an Exploratory Data Analysis (EDA) to gain some insights into your data. Creating visualizations really helps make things clearer and easier to understand, especially with larger, high-dimensional datasets. Towards the end of your project, it's important to be able to present your final results in a clear, concise, and compelling manner that your audience, who are often non-technical clients, can understand. Matplotlib is a popular Python library that can be used to create your data visualizations quite easily. However, setting up the data, parameters, figures, and plotting can get quite messy and tedious every time you start a new project. In this blog post, we're going to look at 5 data visualizations and write some quick and easy functions for them with Python's Matplotlib. In the meantime, here's a great chart for selecting the right visualization for the job!
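A reusable wrapper function is one way to tame that setup boilerplate; a minimal sketch (the function name, sample data, and output path are all illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt

def scatterplot(x, y, path, title=""):
    """One call per plot: create the figure, draw, title, save, close."""
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.scatter(x, y, s=20, alpha=0.7)
    ax.set_title(title)
    fig.savefig(path)
    plt.close(fig)  # free the figure so repeated calls don't leak memory

scatterplot([1, 2, 3, 4], [1.1, 3.9, 9.2, 15.8], "scatter.png", "demo")
```

The same pattern extends naturally to line, bar, and histogram wrappers, which is essentially what the post's "quick and easy functions" do.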

Comparing AI Strategies – Vertical vs. Horizontal

Getting an AI startup to scale for an IPO is currently elusive. Several different strategies are being discussed around the industry and here we talk about the horizontal strategy and the increasingly favored vertical strategy.

Comparison of Top 6 Python NLP Libraries

Natural language processing (NLP) is getting very popular today, which has become especially noticeable against the background of deep learning's development. NLP is a field of artificial intelligence aimed at understanding and extracting important information from text, and further training on text data. The main tasks include speech recognition and generation, text analysis, sentiment analysis, machine translation, etc. In past decades, only experts with an appropriate philological education could engage in natural language processing; besides mathematics and machine learning, they had to be familiar with key linguistic concepts. Now, we can just use already-written NLP libraries. Their main purpose is to simplify text preprocessing, so we can focus on building machine learning models and fine-tuning hyperparameters. There are many tools and libraries designed to solve NLP problems. Today, we want to outline and compare the most popular and helpful natural language processing libraries, based on our experience. You should understand that all the libraries we look at overlap only partially in their tasks, so sometimes it is hard to compare them directly. We will walk through some features and compare only those libraries for which this is possible.

Blockchain + Analytics: Enabling Smart IOT

Autonomous cars are racing down the highway at speeds exceeding 100 MPH when suddenly a car a half-mile ahead blows out a tire sending dangerous debris across 3 lanes of traffic. Instead of relying upon sending this urgent, time-critical distress information to the world via the cloud, the cars on that particular section of the highway use peer-to-peer, immutable communications to inform all vehicles in the area of the danger so that they can slow down and move to unobstructed lanes (while also sending a message to the nearest highway maintenance robots to remove the debris).

a Little SQL with a Little R

Book Memo: “R Markdown”

The Definitive Guide
R Markdown: The Definitive Guide is the first official book authored by the core R Markdown developers that provides a comprehensive and accurate reference to the R Markdown ecosystem. With R Markdown, you can easily create reproducible data analysis reports, presentations, dashboards, interactive applications, books, dissertations, websites, and journal articles, while enjoying the simplicity of Markdown and the great power of R and other languages. In this book, you will learn:
• Basics: Syntax of Markdown and R code chunks, how to generate figures and tables, and how to use other computing languages
• Built-in output formats of R Markdown: PDF/HTML/Word/RTF/Markdown documents and ioslides/Slidy/Beamer/PowerPoint presentations
• Extensions and applications: Dashboards, Tufte handouts, xaringan/reveal.js presentations, websites, books, journal articles, and interactive tutorials
• Advanced topics: Parameterized reports, HTML widgets, document templates, custom output formats, and Shiny documents.