**A Comparison of Adaptation Techniques and Recurrent Neural Network Architectures**

Recently, recurrent neural networks (RNNs) have become state of the art in acoustic modeling for automatic speech recognition. Long short-term memory (LSTM) units are the most popular, although alternatives such as the gated recurrent unit (GRU) and its modifications have outperformed LSTM in some publications. In this paper, we compare five neural network (NN) architectures with various adaptation and feature normalization techniques. We evaluate feature-space maximum likelihood linear regression, five variants of i-vector adaptation, and two variants of cepstral mean normalization. Most adaptation and normalization techniques were developed for feed-forward NNs and, according to the results in this paper, not all of them also work with RNNs. For our experiments, we chose the well-known and available TIMIT phone recognition task. Phone recognition is much more sensitive to the quality of the acoustic model (AM) than a large-vocabulary task with a complex language model. We also publish open-source scripts to easily replicate the results and to help continue the development.
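
Of the normalization techniques compared, cepstral mean normalization is the simplest to illustrate. The sketch below is a generic per-utterance implementation in Python/NumPy, not the scripts published with the paper; the function name and the optional variance-normalization flag are our own.

```python
import numpy as np

def cepstral_mean_normalization(features, use_variance=False):
    """Per-utterance cepstral mean normalization (CMN).

    features: (num_frames, num_coeffs) array, e.g. MFCCs.
    If use_variance is True, also divide by the per-utterance
    standard deviation (cepstral mean and variance normalization).
    """
    normalized = features - features.mean(axis=0, keepdims=True)
    if use_variance:
        normalized = normalized / np.maximum(
            features.std(axis=0, keepdims=True), 1e-8)
    return normalized
```

Applied per utterance, this removes channel-dependent offsets from the cepstral features before they reach the acoustic model.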

**DLA: Compiler and FPGA Overlay for Neural Network Inference Acceleration**

Overlays have shown significant promise for field-programmable gate arrays (FPGAs), as they allow for fast development cycles and remove many of the challenges of the traditional FPGA hardware design flow. However, this often comes with a significant performance burden, resulting in very little adoption of overlays for practical applications. In this paper, we tailor an overlay to a specific application domain and show how we maintain its full programmability without paying the performance overhead traditionally associated with overlays. Specifically, we introduce an overlay targeted at deep neural network inference with only ~1% overhead to support the control and reprogramming logic, using a lightweight very-long instruction word (VLIW) network. Additionally, we implement a sophisticated domain-specific graph compiler that compiles deep learning frameworks such as Caffe or TensorFlow to easily target our overlay. We show how our graph compiler performs architecture-driven software optimizations to significantly boost the performance of both convolutional and recurrent neural networks (CNNs/RNNs): we demonstrate a 3x improvement on ResNet-101 and a 12x improvement for long short-term memory (LSTM) cells, compared to naive implementations. Finally, we describe how we can tailor our hardware overlay and use our graph compiler to achieve ~900 fps on GoogLeNet on an Intel Arria 10 1150, the fastest ever reported on comparable FPGAs.

**Combining a Context Aware Neural Network with a Denoising Autoencoder for Measuring String Similarities**

Measuring similarities between strings is central for many established and fast-growing research areas including information retrieval, biology, and natural language processing. The traditional approach for string similarity measurement is to define a metric over a word space that quantifies and sums up the differences between characters in two strings. The state of the art in the area has, surprisingly, not evolved much during the last few decades. The majority of the metrics are based on a simple comparison between character distributions without consideration for the context of the words. This paper proposes a string metric that encompasses similarities between strings based on (1) the character similarities between the words, including non-standard and standard spellings of the same words, and (2) the context of the words. Our proposal is a neural network composed of a denoising autoencoder and what we call a context encoder, specifically designed to find similarities between words based on their context. The experimental results show that the resulting metric succeeds in 85.4% of the cases in finding the correct version of a non-standard spelling among the closest words, compared to 63.2% with the established Normalised-Levenshtein distance. Moreover, we show that words used in similar contexts are, with our approach, calculated to be more similar than words with different contexts, which is a desirable property missing in established string metrics.
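
For reference, the Normalised-Levenshtein baseline used in the comparison can be sketched as follows. This is a standard dynamic-programming implementation; normalizing by the longer string's length is one common convention and may differ from the exact variant used in the paper.

```python
def levenshtein(a, b):
    """Classic edit distance via dynamic programming, O(len(a)*len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalized_levenshtein(a, b):
    """Edit distance scaled to [0, 1] by the longer string's length."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))
```

Note that this metric compares characters only, with no notion of word context, which is precisely the gap the proposed context encoder addresses.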

**Pangloss: Fast Entity Linking in Noisy Text Environments**

Entity linking is the task of mapping potentially ambiguous terms in text to their constituent entities in a knowledge base like Wikipedia. This is useful for organizing content, extracting structured data from textual documents, and machine learning relevance applications like semantic search, knowledge graph construction, and question answering. Traditionally, this work has focused on well-formed text, like news articles, but in common real-world datasets such as messaging, resumes, or short-form social media, non-grammatical, loosely structured text adds a new dimension to the problem. This paper presents Pangloss, a production system for entity disambiguation on noisy text. Pangloss combines a probabilistic linear-time key-phrase identification algorithm with a semantic similarity engine based on context-dependent document embeddings to achieve better-than-state-of-the-art results (>5% in F1) compared to other research or commercially available systems. In addition, Pangloss leverages a local embedded database with a tiered architecture to house its statistics and metadata, which allows rapid disambiguation in streaming contexts and on-device disambiguation in low-memory environments such as mobile phones.

**Slice Finder: Automated Data Slicing for Model Validation**

As machine learning (ML) systems become democratized, it becomes increasingly important to help users easily debug their models. However, current data tools are still primitive when it comes to helping users trace model performance problems all the way to the data. We focus on the particular problem of slicing data to identify subsets of the validation data where the model performs poorly. This is an important problem in model validation because the overall model performance can fail to reflect that of the smaller subsets, and slicing allows users to analyze the model performance at a more granular level. Unlike general techniques (e.g., clustering) that can find arbitrary slices, our goal is to find interpretable slices (which are easier to act on than arbitrary subsets) that are problematic and large. We propose Slice Finder, an interactive framework for identifying such slices using statistical techniques. Applications include diagnosing model fairness and fraud detection, where identifying slices that are interpretable to humans is crucial.
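
To make "interpretable slices" concrete, here is a minimal illustrative sketch of slice enumeration: it scans one-feature slices (feature = value) and flags those whose mean error exceeds the overall mean. Slice Finder itself adds statistical significance testing and interactive exploration, which this toy version omits; all names here are our own.

```python
def find_problem_slices(rows, errors, min_size=2):
    """Flag one-feature slices (feature == value) whose mean error
    exceeds the overall mean error.

    rows: list of dicts mapping feature name -> categorical value.
    errors: per-example loss (e.g. 0/1 misclassification).
    Returns (feature, value, size, mean_error) tuples, worst first.
    """
    overall = sum(errors) / len(errors)
    slices = []
    for key in rows[0]:
        for value in {r[key] for r in rows}:
            idx = [i for i, r in enumerate(rows) if r[key] == value]
            if len(idx) < min_size:          # keep only large slices
                continue
            slice_err = sum(errors[i] for i in idx) / len(idx)
            if slice_err > overall:          # worse than average
                slices.append((key, value, len(idx), slice_err))
    return sorted(slices, key=lambda s: s[3], reverse=True)
```

Because each slice is a single feature-value predicate, a flagged slice is immediately actionable (e.g., "the model underperforms on country = UK").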

**Using Textual Summaries to Describe a Set of Products**

When customers are faced with the task of making a purchase in an unfamiliar product domain, it might be useful to provide them with an overview of the product set to help them understand what they can expect. In this paper we present and evaluate a method to summarise sets of products in natural language, focusing on the price range, common product features across the set, and product features that impact on price. In our study, participants reported that they found our summaries useful, but we found no evidence that the summaries influenced the selections made by participants.

**Don’t get Lost in Negation: An Effective Negation Handled Dialogue Acts Prediction Algorithm for Twitter Customer Service Conversations**

In the last several years, Twitter has been adopted by companies as an alternative platform for interacting with customers and addressing their concerns. With the abundance of such unconventional conversation resources, the push for developing effective virtual agents is stronger than ever. To address this challenge, a better understanding of such customer service conversations is required. Lately, there have been several works proposing novel taxonomies for fine-grained dialogue acts as well as developing algorithms for their automatic detection. The outcomes of these works provide stepping stones toward the ultimate goal of building efficient and effective virtual agents. However, none of these works incorporates the notion of negation into the proposed algorithms. In this work, we developed an SVM-based dialogue act prediction algorithm for Twitter customer service conversations where negation handling is an integral part of the end-to-end solution. For negation handling, we propose several efficient heuristics and also adopt recent state-of-the-art third-party machine learning based solutions. Empirically, we show the model's performance gain when handling negation compared to when we do not. Our experiments show that for informal text such as tweets, the heuristic-based approach is more effective.
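
One widely used negation heuristic for informal text (not necessarily the exact heuristics proposed in the paper) prefixes every token between a negation cue and the next punctuation mark, so that negated words become distinct features for a classifier such as an SVM:

```python
NEGATION_CUES = {"not", "no", "never", "n't", "cannot",
                 "dont", "don't", "won't", "isn't", "cant"}
PUNCTUATION = {".", ",", "!", "?", ";", ":"}

def mark_negation(tokens):
    """Prefix tokens inside a negation scope with 'NEG_'.

    The scope opens at a negation cue and closes at the next
    punctuation token, a common heuristic for informal text.
    """
    out, in_scope = [], False
    for tok in tokens:
        if tok.lower() in NEGATION_CUES:
            in_scope = True
            out.append(tok)
        elif tok in PUNCTUATION:
            in_scope = False
            out.append(tok)
        elif in_scope:
            out.append("NEG_" + tok)
        else:
            out.append(tok)
    return out
```

With this transformation, "like" and "NEG_like" are separate entries in the feature vocabulary, so the model can learn opposite weights for them.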

**Quickest Detection of Dynamic Events in Networks**

The problem of quickest detection of dynamic events in networks is studied. At some unknown time, an event occurs, and a number of nodes in the network are affected by the event, in that they undergo a change in the statistics of their observations. It is assumed that the event is dynamic, in that it can propagate along the edges in the network and affect more and more nodes with time. The event propagation dynamics is assumed to be unknown. The goal is to design a sequential algorithm that can detect a ‘significant’ event, i.e., when the event has affected no fewer than a prescribed number of nodes, as quickly as possible, while controlling the false alarm rate. Fully connected networks are studied first, and the results are then extended to arbitrarily connected networks. The designed algorithms are shown to be adaptive to the unknown propagation dynamics, and their first-order asymptotic optimality is demonstrated as the false alarm rate goes to zero. The algorithms can be implemented with linear computational complexity in the network size at each time step, which is critical for online implementation. Numerical simulations are provided to validate the theoretical results.
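
For intuition, the classical single-node building block for quickest change detection is Page's CUSUM statistic, sketched below. The paper's algorithms extend this idea to unknown, dynamic propagation over a network; this toy version captures only the single-stream case.

```python
def cusum(log_likelihood_ratios):
    """Page's CUSUM statistic W_t = max(W_{t-1} + l_t, 0), where l_t
    is the log-likelihood ratio of the observation at time t
    (post-change vs. pre-change distribution)."""
    w, path = 0.0, []
    for l in log_likelihood_ratios:
        w = max(w + l, 0.0)   # reset to 0 keeps the statistic recent
        path.append(w)
    return path

def first_alarm(path, threshold):
    """Index of the first threshold crossing, or None (no alarm).

    The threshold trades detection delay against false alarm rate."""
    for t, w in enumerate(path):
        if w >= threshold:
            return t
    return None
```

Before the change, the log-likelihood ratios have negative drift and the statistic hugs zero; after the change they have positive drift and the statistic climbs toward the threshold.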

**Explanations for Temporal Recommendations**

Recommendation systems are an integral part of Artificial Intelligence (AI) and have become increasingly important in the growing age of commercialization of AI. Deep learning (DL) techniques for recommendation systems (RS) provide powerful latent-feature models for effective recommendation but suffer from the major drawback of being non-interpretable. In this paper we describe a framework for explainable temporal recommendations in a DL model. We consider an LSTM-based Recurrent Neural Network (RNN) architecture for recommendation and a neighbourhood-based scheme for generating explanations in the model. We demonstrate the effectiveness of our approach through experiments on the Netflix dataset by jointly optimizing for both prediction accuracy and explainability.

**A Data-Driven Approach for Predicting Vegetation-Related Outages in Power Distribution Systems**

This paper presents a novel data-driven approach for predicting the number of vegetation-related outages that occur in power distribution systems on a monthly basis. In order to develop an approach that is able to successfully fulfill this objective, there are two main challenges that ought to be addressed. The first challenge is to define the extent of the target area; an unsupervised machine learning approach is proposed to overcome this difficulty. The second challenge is to correctly identify the main causes of vegetation-related outages and to thoroughly investigate their nature. In this paper, these outages are categorized into two main groups, growth-related and weather-related outages, and two types of models, namely, time-series and non-linear machine learning regression models, are proposed to conduct the respective prediction tasks. Moreover, various features that can explain the variability in vegetation-related outages are engineered and employed. Actual outage data, obtained from a major utility in the U.S., in addition to different types of weather and geographical data, are utilized to build the proposed approach. Finally, a comprehensive case study is carried out to demonstrate how the proposed approach can be used to successfully predict the number of vegetation-related outages and to help decision-makers detect vulnerable zones in their systems.

**Container-based Cluster Orchestration Systems: A Taxonomy and Future Directions**

Containers, enabling lightweight environment and performance isolation, fast and flexible deployment, and fine-grained resource sharing, have gained popularity for application management and deployment, complementing hardware virtualization. They are being widely used by organizations to deploy their increasingly diverse workloads derived from modern-day applications such as web services, big data, and IoT in either proprietary clusters or private and public cloud data centers. This has led to the emergence of container orchestration platforms, which are designed to manage the deployment of containerized applications in large-scale clusters. These systems are capable of running hundreds of thousands of jobs across thousands of machines. To do so efficiently, they must address several important challenges including scalability, fault tolerance and availability, efficient resource utilization, and request throughput maximization, among others. This paper studies these management systems and proposes a taxonomy that identifies different mechanisms that can be used to meet the aforementioned challenges. The proposed classification is then applied to various state-of-the-art systems, leading to the identification of open research challenges and gaps in the literature, intended as future directions for researchers working on this topic.

**Knockoffs for the mass: new feature importance statistics with false discovery guarantees**

An important problem in machine learning and statistics is to identify features that causally affect the outcome. This is often impossible to do from purely observational data, and a natural relaxation is to identify features that are correlated with the outcome even when conditioned on all other observed features. For example, we want to identify that smoking really is correlated with cancer conditioned on demographics. The knockoff procedure is a recent breakthrough in statistics that, in theory, can identify truly correlated features while guaranteeing that the false discovery rate is controlled. The idea is to create synthetic data, called knockoffs, that capture correlations amongst the features. However, there are substantial computational and practical challenges to generating and using knockoffs. This paper makes several key advances that enable knockoff applications to be more efficient and powerful. We develop an efficient algorithm to generate valid knockoffs from Bayesian networks. Then we systematically evaluate knockoff test statistics and develop new statistics with improved power. The paper combines new mathematical guarantees with systematic experiments on real and synthetic data.
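
The selection step of the knockoff filter is easy to state concretely. Given importance statistics W_j (each feature's importance minus its knockoff's), the knockoff+ rule picks the smallest threshold whose estimated false discovery proportion is at most the target level q. The sketch below shows only that thresholding step, not the paper's knockoff-generation algorithm:

```python
def knockoff_threshold(W, q):
    """Knockoff+ threshold: the smallest t > 0 such that
    (1 + #{j: W_j <= -t}) / max(1, #{j: W_j >= t}) <= q.

    Large negative W_j (knockoff beats original) estimate the
    number of false discoveries among the large positive W_j."""
    for t in sorted(abs(w) for w in W if w != 0):
        neg = sum(1 for w in W if w <= -t)
        pos = sum(1 for w in W if w >= t)
        if (1 + neg) / max(1, pos) <= q:
            return t
    return float("inf")   # no threshold works: select nothing

def select_features(W, q):
    """Indices selected by the knockoff filter at FDR level q."""
    t = knockoff_threshold(W, q)
    return [j for j, w in enumerate(W) if w >= t]
```

The power of the procedure then hinges on how the statistics W_j are constructed, which is exactly what the paper's new test statistics improve.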

**RuleMatrix: Visualizing and Understanding Classifiers with Rules**

With the growing adoption of machine learning techniques, there is a surge of research interest towards making machine learning systems more transparent and interpretable. Various visualizations have been developed to help model developers understand, diagnose, and refine machine learning models. However, a large group of potential but neglected users are domain experts who have little knowledge of machine learning but are expected to work with machine learning systems. In this paper, we present an interactive visualization technique to help users with little expertise in machine learning to understand, explore, and validate predictive models. By viewing the model as a black box, we extract a standardized rule-based knowledge representation from its input-output behavior. We design RuleMatrix, a matrix-based visualization of rules, to help users navigate and verify the rules and the black-box model. We evaluate the effectiveness of RuleMatrix via two use cases and a usability study.

**Knowledge-aware Autoencoders for Explainable Recommender Systems**

Recommender systems have been widely used to help users find what they are looking for, thus tackling the information overload problem. After several years of research and industrial findings seeking better algorithms to improve accuracy and diversity metrics, explanation services for recommendation are gaining momentum as a tool to provide human-understandable feedback on results computed, in most cases, by black-box machine learning techniques. As a matter of fact, explanations may guarantee user satisfaction, trust, and loyalty in a system. In this paper, we evaluate how different types of information encoded in a knowledge graph are perceived by users when adopted to show them an explanation. More precisely, we compare how the use of categorical information, factual information, or a mixture of both in building explanations affects the explanatory criteria of a recommender system. Experimental results are validated through an A/B testing platform that uses a recommendation engine based on a semantics-aware autoencoder to build user profiles, which are in turn exploited to compute recommendation lists and to provide explanations.

**User Fairness in Recommender Systems**

Recent works in recommendation systems have focused on diversity in recommendations as an important aspect of recommendation quality. In this work we argue that post-processing algorithms aimed only at improving diversity among recommendations lead to discrimination among the users. We introduce the notion of user fairness, which has been overlooked in the literature so far, and propose measures to quantify it. Our experiments on two diversification algorithms show that an increase in aggregate diversity results in increased disparity among the users.

**Are Efficient Deep Representations Learnable**

Many theories of deep learning have shown that a deep network can require dramatically fewer resources to represent a given function compared to a shallow network. But a question remains: can these efficient representations be learned using current deep learning techniques? In this work, we test whether standard deep learning methods can in fact find the efficient representations posited by several theories of deep representation. Specifically, we train deep neural networks to learn two simple functions with known efficient solutions: the parity function and the fast Fourier transform. We find that using gradient-based optimization, a deep network does not learn the parity function, unless initialized very close to a hand-coded exact solution. We also find that a deep linear neural network does not learn the fast Fourier transform, even in the best-case scenario of infinite training data, unless the weights are initialized very close to the exact hand-coded solution. Our results suggest that not every element of the class of compositional functions can be learned efficiently by a deep network, and further restrictions are necessary to understand what functions are both efficiently representable and learnable.
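
The parity target itself is simple to state: it is the XOR of the input bits, which a deep network can represent with a number of units linear in the input length, while common lower bounds suggest a single hidden layer needs many more. A minimal generator for the kind of training data described above (our own illustrative code, not the authors' experimental setup):

```python
def parity(bits):
    """XOR of the input bits: 1 if an odd number of bits are set."""
    p = 0
    for b in bits:
        p ^= b
    return p

def parity_dataset(n):
    """All 2**n binary inputs paired with their parity labels."""
    data = []
    for x in range(2 ** n):
        bits = [(x >> i) & 1 for i in range(n)]
        data.append((bits, parity(bits)))
    return data
```

Because flipping any single input bit flips the label, parity is maximally sensitive everywhere, which is part of what makes it hard for gradient-based training to discover.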

**Contextual Memory Trees**

We design and study a Contextual Memory Tree (CMT), a learning memory controller that inserts new memories into an experience store of unbounded size. It is designed to efficiently query for memories from that store, supporting logarithmic time insertion and retrieval operations. Hence CMT can be integrated into existing statistical learning algorithms as an augmented memory unit without substantially increasing training and inference computation. We demonstrate the efficacy of CMT by augmenting existing multi-class and multi-label classification algorithms with CMT and observe statistical improvement. We also test CMT learning on several image-captioning tasks to demonstrate that it performs computationally better than a simple nearest neighbors memory system while benefitting from reward learning.

**Is Uncertainty Always Bad?: Effect of Topic Competence on Uncertain Opinions**

The proliferation of information disseminated by public/social media has made decision-making highly challenging due to the wide availability of noisy, uncertain, or unverified information. Although the issue of uncertainty in information has been studied for several decades, little work has investigated how noisy (or uncertain) and valuable (or credible) information is incorporated into people's opinions, modeling uncertainty in both the quantity and quality of evidence leading to a specific opinion. In this work, we model and analyze an opinion and information model using Subjective Logic, where the initial set of evidence mixes different types of evidence (i.e., pro vs. con, or noisy vs. valuable) that is incorporated into the opinions of the original propagators, who spread information over a network. With the help of an extensive simulation study, we examine how different ratios of information types, agents' prior beliefs, and topic competence affect the overall information diffusion. Based on our findings, agents' high uncertainty is not necessarily bad for making the right decision, as long as they are competent enough to at least not be biased towards false information (e.g., remaining neutral between two extremes).

**An Acceleration Scheme for Memory Limited, Streaming PCA**

In this paper, we propose an acceleration scheme for online memory-limited PCA methods. Our scheme converges to the leading eigenvectors in a single data pass. We provide empirical convergence results of our scheme based on the spiked covariance model. Our scheme does not require any predefined parameters such as the eigengap and hence is well suited to streaming data scenarios. Furthermore, we apply our scheme to challenging time-varying systems where online PCA methods fail to converge. Specifically, we discuss a family of time-varying systems based on Molecular Dynamics simulations, where batch PCA converges to the actual analytic solution of such systems.
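
For context, a standard memory-limited streaming PCA baseline of the kind such schemes accelerate is Oja's rule, which tracks the top eigenvector of the data covariance with memory linear in the dimension. The sketch below is this classical baseline (with a fixed step size, one of the parameters the proposed scheme avoids), not the paper's acceleration scheme:

```python
import numpy as np

def oja_streaming_pca(stream, dim, lr=0.01):
    """Oja's rule: track the top covariance eigenvector from a
    stream of samples using O(dim) memory."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)
    for x in stream:
        y = w @ x                    # projection onto current estimate
        w += lr * y * (x - y * w)    # Oja update
        w /= np.linalg.norm(w)       # renormalize for stability
    return w
```

On spiked-covariance data (a dominant direction plus small isotropic noise), the estimate aligns with the spike direction after enough samples; how fast this happens depends on the step size and the eigengap, which motivates parameter-free accelerated schemes.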

**Jensen: An Easily-Extensible C++ Toolkit for Production-Level Machine Learning and Convex Optimization**

This paper introduces Jensen, an easily extensible and scalable toolkit for production-level machine learning and convex optimization. Jensen implements a framework of convex (or loss) functions, convex optimization algorithms (including Gradient Descent, L-BFGS, Stochastic Gradient Descent, Conjugate Gradient, etc.), and a family of machine learning classifiers and regressors (Logistic Regression, SVMs, Least Squares Regression, etc.). This framework makes it possible to deploy and train models with a few lines of code, and to extend it by integrating new loss functions and optimization algorithms.
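
The (loss, gradient) pairing that such a framework is built around can be illustrated with plain-Python logistic regression and gradient descent. This sketch mirrors the design pattern only (a loss function exposing its value and gradient, consumed by a generic optimizer) and is not Jensen's C++ API:

```python
import math

def logistic_loss_grad(w, X, y):
    """Average logistic loss and its gradient; labels y in {-1, +1}.
    This (loss, gradient) pair is the interface a generic optimizer
    consumes."""
    n = len(X)
    loss, grad = 0.0, [0.0] * len(w)
    for xi, yi in zip(X, y):
        margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
        loss += math.log1p(math.exp(-margin)) / n
        coef = -yi / (1.0 + math.exp(margin)) / n
        for j, xj in enumerate(xi):
            grad[j] += coef * xj
    return loss, grad

def gradient_descent(w, X, y, lr=0.5, steps=200):
    """Plain full-batch gradient descent on any (loss, grad) function."""
    for _ in range(steps):
        _, grad = logistic_loss_grad(w, X, y)
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w
```

Swapping in a different loss or optimizer only requires providing the same value-and-gradient interface, which is the extensibility point the toolkit advertises.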

**Comparison of RNN Encoder-Decoder Models for Anomaly Detection**

In this paper, we compare different types of Recurrent Neural Network (RNN) encoder-decoders from an anomaly detection viewpoint. We focus on finding which model can learn the same data more effectively. We compared multiple models under the same conditions, such as the number of parameters, optimizer, and learning rate; the models differ, however, in whether they predict the future sequence or reconstruct the current sequence. We constructed a dataset of simple vectors and used it for the experiments. Finally, we experimentally confirmed that a model performs better when it reconstructs the current sequence rather than predicting the future sequence.

**Using semantic clustering to support situation awareness on Twitter: The case of World Views**

In recent years, situation awareness has been recognised as a critical part of effective decision making, in particular for crisis management. One way to extract value and allow for better situation awareness is to develop a system capable of analysing a dataset of multiple posts, and clustering consistent posts into different views or stories (or 'world views'). However, this can be challenging as it requires an understanding of the data, including determining what data is consistent and what data corroborates other data. Attempting to address these problems, this article proposes Subject-Verb-Object Semantic Suffix Tree Clustering (SVOSSTC) and a system to support it, with a special focus on Twitter content. The novelty and value of SVOSSTC is its emphasis on utilising the Subject-Verb-Object (SVO) typology in order to construct semantically consistent world views, in which individuals—particularly those involved in crisis response—might achieve an enhanced picture of a situation from social media data. To evaluate our system and its ability to provide enhanced situation awareness, we tested it against existing approaches, including human data analysis, using a variety of real-world scenarios. The results indicated a noteworthy degree of evidence (e.g., in cluster granularity and meaningfulness) affirming the suitability and rigour of our approach. Moreover, these results highlight this article's proposals as innovative and practical system contributions to the research field.

• Dynamic Visual Analytics for Elicitation Meetings with ELICA

• Gnirut: The Trouble With Being Born Human In An Autonomous World

• A Model for Evaluating Algorithmic Systems Accountability

• Human Trajectories Characteristics

• On Ternary Coding and Three-Valued Logic

• Bridging the Gap Between Layout Pattern Sampling and Hotspot Detection via Batch Active Sampling

• Motion Invariance in Visual Environments

• Fundamental Limits of Dynamic Interference Management with Flexible Message Assignments

• Performance Evaluation and Optimization of LPWA IoT Networks: A Stochastic Geometry Approach

• Invertibility of adjacency matrices for random $d$-regular graphs

• Iterative Residual Network for Deep Joint Image Demosaicking and Denoising

• ActiveStereoNet: End-to-End Self-Supervised Learning for Active Stereo Systems

• EC-Net: an Edge-aware Point set Consolidation Network

• Tiered Object Storage using Persistent Memory

• Guiding New Physics Searches with Unsupervised Learning

• Slow growth of entanglement and out-of-time-order correlators in integrable disordered systems

• Zap: Making Predictions Based on Online User Behavior

• On the Information Theoretic Distance Measures and Bidirectional Helmholtz Machines

• Unlimited Road-scene Synthetic Annotation (URSA) Dataset

• Analysis and Modeling of Behavioral Changes in a News Service

• Modeling the social media relationships of Irish politicians using a generalized latent space stochastic blockmodel

• Online Robust Policy Learning in the Presence of Unknown Adversaries

• Weakly Supervised Deep Learning for Thoracic Disease Classification and Localization on Chest X-rays

• Performance Optimization of MapReduce-based Apriori Algorithm on Hadoop Cluster

• Leveraging Pre-Trained 3D Object Detection Models For Fast Ground Truth Generation

• A Dataset of Laryngeal Endoscopic Images with Comparative Study on Convolution Neural Network Based Semantic Segmentation

• Scrambling dynamics across a thermalization-localization quantum phase transition

• Repeatability of Multiparametric Prostate MRI Radiomics Features

• Remaining Useful Life Estimation of Aero-Engines with Self-Joint Prediction of Continuous and Discrete States

• Shielded Decision-Making in MDPs

• A PTAS for $\ell_p$-Low Rank Approximation

• An agent-based model of an endangered population of the Arctic fox from Mednyi Island

• Information Theoretic Model Predictive Control on Jump Diffusion Processes

• Rectification from Radially-Distorted Scales

• Markov chain random fields, spatial Bayesian networks, and optimal neighborhoods for simulation of categorical fields

• Entropy and Energy Detection-based Spectrum Sensing over F Composite Fading Channels

• Improvement in the UAV position estimation with low-cost GPS, INS and vision-based system: Application to a quadrotor UAV

• Avoiding long Berge cycles II, exact bounds for all $n$

• Computational Social Dynamics: Analyzing the Face-level Interactions in a Group

• Design and Implementation of a Digital Signature Scheme Based on Low-density Generator Matrix Codes

• Effective Use of Synthetic Data for Urban Scene Semantic Segmentation

• Density estimation by Randomized Quasi-Monte Carlo

• Universal Behavior of the Corners of Orbital Beta Processes

• Stochastic Linearization of Multivariate Nonlinearities

• Introducing Quantum-Like Influence Diagrams for Violations of the Sure Thing Principle

• Longitudinal detection of radiological abnormalities with time-modulated LSTM

• Probably approximately correct learning of Horn envelopes from queries

• LSTMs with Attention for Aggression Detection

• A Mathematical Model for Optimal Decisions in a Representative Democracy

• Generative Adversarial Imitation from Observation

• Layer-wise Relevance Propagation for Explainable Recommendations

• A Stationary Planar Random Graph with Singular Stationary Dual: Dyadic Lattice Graphs

• A Note on Tail Triviality for Determinantal Point Processes

• Anaconda: A Non-Adaptive Conditional Sampling Algorithm for Distribution Testing

• Learning Convex Partitions and Computing Game-theoretic Equilibria from Best Response Queries

• Parikh Motivated Study on Repetitions in Words

• Experimental Resilience Assessment of An Open-Source Driving Agent

• A Discriminative Approach to Bayesian Filtering with Applications to Human Neural Decoding

• Schur function at generic points and limit shape of perfect matchings on contracting square hexagon lattices with piecewise boundary conditions

• Note on: Optimal choice for appointment scheduling window under patient no-show behavior

• Real-Time Index Authentication for Event-Oriented Surveillance Video Query using Blockchain

• Agent-based opinion formation modeling in social network: a perspective of social psychology and evolutionary game theory

• On nonlocal systems with jump processes of finite range and with decays

• Privacy-preserving classifiers recognize shared mobility behaviours from WiFi network imperfect data

• Faster Algorithms via Waring Decompositions

• Photo-unrealistic Image Enhancement for Subject Placement in Outdoor Photography

• Penalized matrix decomposition for denoising, compression, and improved demixing of functional imaging data

• Low-Resource Contextual Topic Identification on Speech

• Learning Generic Diffusion Processes for Image Restoration

• An exposition of the false confidence theorem

• Phase Retrieval for Signals in Union of Subspaces

• The tilted flashing Brownian ratchet

• Genetic algorithms in Forth

• Robust Deep Multi-modal Learning Based on Gated Information Fusion Network

• Hierarchical Multitask Learning for CTC-based Speech Recognition

• Definition of the moist-air available enthalpy (exergy) norm: a comparison with existing ‘moist energy norms’

• Context-adaptive neural network based prediction for image compression

• Equiangular lines and the Lemmens-Seidel conjecture

• Sparsifying Distributed Algorithms with Ramifications in Massively Parallel Computation and Centralized Local Computation

• On Lebesgue Integral Quadrature

• Classical lower bounds from quantum upper bounds

• Avoiding Brooms, Forks, and Butterflies in the Linear Lattices

• A Two-Stage Polynomial Approach to Stochastic Optimization of District Heating Networks

• Bench-Marking Information Extraction in Semi-Structured Historical Handwritten Records

• Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision

• Parameterized Query Complexity of Hitting Set using Stability of Sunflowers

• Domain Adaptation for Deviating Acquisition Protocols in CNN-based Lesion Classification on Diffusion-Weighted MR Images

• Fractional chromatic number of a random subgraph

• Preference-Based Monte Carlo Tree Search

• PointSeg: Real-Time Semantic Segmentation Based on 3D LiDAR Point Cloud

• Random-Field Ising like effective theory of the glass transition: I Mean-Field Models

• GeoDesc: Learning Local Descriptors by Integrating Geometry Constraints

• Random Field Ising-like effective theory of the glass transition II: Finite Dimensional Models

• Learning Neuron Non-Linearities with Kernel-Based Deep Neural Networks

• Joint Power and Time Allocation for NOMA-MEC Offloading

• Analytical approach to network inference: Investigating degree distribution

• Near-optimal Bootstrapping of Hitting Sets for Algebraic Circuits

• Leakage Mitigation and Internal Delay Compensation in FMCW Radar for Small Drone Detection

• Difference between families of weakly and strongly maximal integral lattice-free polytopes

• Saliency Map Estimation for Omni-Directional Image Considering Prior Distributions

• Activation energy distribution of dynamical structural defects in RuO$_2$ films

• Reinforcement Learning for LTLf/LDLf Goals

• Limit Theorems for Factor Models

• Learning with SGD and Random Features

• Battery health prediction under generalized conditions using a Gaussian process transition model

• On discovering functions in actin filament automata

• Magnetic Resonance Fingerprinting Reconstruction via Spatiotemporal Convolutional Neural Networks

• IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis

• Using statistical encoding to achieve tree succinctness never seen before

• Confidence Intervals for Testing Disparate Impact in Fair Learning

• The altmetric performance of publications authored by Brazilian researchers: analysis of CNPq productivity scholarship holders

• More good news on the (only) affine invariant test for multivariate reflected symmetry about an unknown center

• Next generation reconstruction of event-by-event particle yield fluctuations

• To Post or Not to Post: Using Online Trends to Predict Popularity of Offline Content

• Cycle-complete Ramsey numbers

• Learning to Listen, Read, and Follow: Score Following as a Reinforcement Learning Game

• Expressing Linear Orders Requires Exponential-Size DNNFs

• Modularity Classes and Boundary Effects in Multivariate Stochastic Dominance

• A Dense CNN approach for skin lesion classification

• Improving the ‘Correct Eventual Consistency’ Tool

• A Nordhaus-Gaddum conjecture for the minimum number of distinct eigenvalues of a graph

• Resource Allocation for Secure Gaussian Parallel Relay Channels with Finite-Length Coding and Discrete Constellations

• Quantum Chebyshev’s Inequality and Applications

• Centralized Scheduling Strategies for Cooperative HARQ Retransmissions in Multi-Source Multi-Relay Wireless Networks

• Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks

• On Computing Centroids According to the $p$-Norms of Hamming Distance Vectors

• Improved estimation of the extreme value index using related variables

• Strong convergence of a positive preserving drift-implicit Euler scheme for the fixed delay CIR process

• The Simplex Geometry of Graphs

• Tracking the $\ell_2$ Norm with Constant Update Time

• Roos’ Matrix Permanent Approximation Bounds for Data Association Probabilities

• Local Rejection Sampling with Soft Filters

• Exit Time Risk-Sensitive Control for Systems of Cooperative Agents

• Automated Treatment Planning in Radiation Therapy using Generative Adversarial Networks

• On Recovery Guarantees for One-Bit Compressed Sensing on Manifolds

• Information and Energy Transfer with Experimentally-Sampled Harvesting Functions

• One-Dimensional Packing: Maximality and Rationality

• Continuous Assortment Optimization with Logit Choice Probabilities under Incomplete Information

• Chinese Poetry Generation with Flexible Styles

• Optimization of the n-dimensional sliding window inter-channel correlation algorithm for multi-core architecture

• BAM: Bottleneck Attention Module

• The Active Bijection 2.a – Decomposition of activities for matroid bases, and Tutte polynomial of a matroid in terms of beta invariants of minors

• Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing

• A Generalized Forced Oscillation Method for tuning Proportional Resonant Controllers

• CBAM: Convolutional Block Attention Module

• On a generalization of a Ramanujan conjecture for binomial random variables

• A framework for remote sensing images processing using deep learning technique

• Effects of Dirac cone tilting in two-dimensional Dirac semimetal

• PIMMS: Permutation Invariant Multi-Modal Segmentation

• Pseudo-Feature Generation for Imbalanced Data Analysis in Deep Learning

• On the Beta Prime Prior for Scale Parameters in High-Dimensional Bayesian Regression Models

• Icing on the Cake: An Easy and Quick Post-Learnig Method You Can Try After Deep Learning

• Large deviations conditioned on large deviations I: Markov chain and Langevin equation

• The active bijection for graphs

• Boundary-to-Displacement Asymptotic Gains for Wave Systems With Kelvin-Voigt Damping

• A Deep Learning Driven Active Framework for Segmentation of Large 3D Shape Collections

• Computing the fully optimal spanning tree of an ordered bipolar directed graph

• Training Recurrent Neural Networks against Noisy Computations during Inference

• Power Networks: A Novel Neural Architecture to Predict Power Relations

• On Tutte polynomial expansion formulas in perspectives of matroids and oriented matroids

• Temporally Evolving Community Detection and Prediction in Content-Centric Networks

• Loop correlations in random wire models

• Explicating feature contribution using Random Forest proximity distances

• Fisher zeros and correlation decay in the Ising model

• The Active Bijection 2.b – Decomposition of activities for oriented matroids, and general definitions of the active bijection

• Counting higher order tangencies for plane curves

• Interpretable Latent Spaces for Learning from Demonstration

• Deep Exemplar-based Colorization