**The K Shortest Paths Problem with Application to Routing**

We present a simple algorithm for explicitly computing all k shortest paths bounded by length L from a fixed source to a target in O(m + kL) and O(m log m + kL) time for unweighted and weighted directed graphs with m edges, respectively. For many graphs, this outperforms existing algorithms by exploiting the fact that real-world networks have short average path length. Consequently, we would like to adapt our almost shortest paths algorithm into an efficient solution to the almost shortest simple paths problem, where we exclude paths that visit any node more than once. To this end, we consider realizations from the Chung-Lu random graph model, which is not only amenable to analysis but also emulates many of the properties frequently observed in real-world networks, including the small-world phenomenon and degree heterogeneity. We provide theoretical and numerical evidence regarding the efficiency of using our almost shortest paths algorithm to find almost shortest simple paths in Chung-Lu random graphs over a wide range of parameters. Finally, we consider a special application of our almost shortest paths algorithm to study internet routing (withdrawals) in the Autonomous System graph.
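As a toy illustration of the problem the abstract poses (not the paper's O(m + kL) algorithm), bounded-length walks from a source to a target can be enumerated by a naive depth-first search; the graph and bound below are illustrative:

```python
from collections import defaultdict

def paths_up_to_length(edges, source, target, L):
    """Enumerate all source->target walks using at most L edges in an
    unweighted digraph. A naive DFS illustration of the problem
    statement, not the paper's O(m + kL) algorithm."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    out = []
    def dfs(node, path):
        if node == target:
            out.append(list(path))
        if len(path) - 1 == L:  # path length = number of edges used
            return
        for nxt in adj[node]:
            path.append(nxt)
            dfs(nxt, path)
            path.pop()
    dfs(source, [source])
    return out

# A diamond graph with two length-2 routes from 'a' to 'd'.
edges = [('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd'), ('d', 'a')]
routes = paths_up_to_length(edges, 'a', 'd', 2)
```

Note that walks, not just simple paths, are enumerated, matching the abstract's distinction between almost shortest paths and almost shortest simple paths.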

**Automatic Identification of Sarcasm Target: An Introductory Approach**

Past work in computational sarcasm deals primarily with sarcasm detection. In this paper, we introduce a novel, related problem: sarcasm target identification (i.e., extracting the target of ridicule in a sarcastic sentence). We present an introductory approach for sarcasm target identification. Our approach employs two types of extractors: one based on rules, and another consisting of a statistical classifier. To compare our approach, we use two baselines: a naïve baseline and another baseline based on work in sentiment target identification. We perform our experiments on book snippets and tweets, and show that our hybrid approach performs better than the two baselines, as well as better than either extractor used individually. Our introductory approach establishes the viability of sarcasm target identification, and will serve as a baseline for future work.

**Independent Component Analysis by Entropy Maximization with Kernels**

Independent component analysis (ICA) is the most popular method for blind source separation (BSS), with a diverse set of applications such as biomedical signal processing, video and image analysis, and communications. Maximum likelihood (ML), an optimal theoretical framework for ICA, requires knowledge of the true underlying probability density function (PDF) of the latent sources, which, in many applications, is unknown. ICA algorithms cast in the ML framework often deviate from its theoretical optimality properties due to poor estimation of the source PDF. Therefore, accurate estimation of source PDFs is critical in order to avoid model mismatch and poor ICA performance. In this paper, we propose a new and efficient ICA algorithm based on entropy maximization with kernels (ICA-EMK), which uses both global and local measuring functions as constraints to dynamically estimate the PDF of the sources with reasonable complexity. In addition, the new algorithm performs optimization with respect to each of the cost function gradient directions separately, enabling parallel implementations on multi-core computers. We demonstrate the superior performance of ICA-EMK over competing ICA algorithms using simulated as well as real-world data.

**Online Classification with Complex Metrics**

We present a framework and analysis of consistent binary classification for complex and non-decomposable performance metrics such as the F-measure and the Jaccard measure. The proposed framework is general, as it applies to both batch and online learning, and to both linear and non-linear models. Our work follows recent results showing that the Bayes optimal classifier for many complex metrics is given by a thresholding of the conditional probability of the positive class. This manuscript extends this thresholding characterization, showing that the utility is strictly locally quasi-concave with respect to the threshold for a wide range of models and performance metrics. This, in turn, motivates simple normalized gradient ascent updates for threshold estimation. We present a finite-sample regret analysis for the resulting procedure. In particular, the risk for the batch case converges to the Bayes risk at the same rate as that of the underlying conditional probability estimation, and the risk of the proposed online algorithm converges at a rate that depends on the conditional probability estimation risk. For instance, in the special case where the conditional probability model is logistic regression, our procedure achieves … sample complexity, both for batch and online training. Empirical evaluation shows that the proposed algorithms outperform alternatives in practice, with comparable or better prediction performance and reduced run time for various metrics and datasets.
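The thresholding characterization can be illustrated with a brute-force stand-in for the normalized gradient-ascent updates the abstract describes: given estimated positive-class probabilities, sweep candidate thresholds and keep the F1-maximizing one. The toy scores below are illustrative:

```python
import numpy as np

def best_f1_threshold(p, y, grid=None):
    """Given estimated P(y=1|x) scores p and binary labels y, pick the
    threshold maximizing F1. A brute-force illustration of threshold
    tuning, not the paper's online update rule."""
    if grid is None:
        grid = np.unique(p)  # every distinct score is a candidate cut
    best_t, best_f1 = 0.5, -1.0
    for t in grid:
        pred = (p >= t).astype(int)
        tp = np.sum((pred == 1) & (y == 1))
        fp = np.sum((pred == 1) & (y == 0))
        fn = np.sum((pred == 0) & (y == 1))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

p = np.array([0.1, 0.4, 0.6, 0.9])
y = np.array([0, 1, 1, 1])
t, f1 = best_f1_threshold(p, y)
```

The Bayes-optimality results cited in the abstract say that for metrics like F1 some such threshold on the conditional probability is optimal; the online algorithm estimates it incrementally instead of by exhaustive sweep.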

**How to be Fair and Diverse?**

Due to the recent cases of algorithmic bias in data-driven decision-making, machine learning methods are being put under the microscope in order to understand the root cause of these biases and how to correct them. Here, we consider a basic algorithmic task that is central in machine learning: subsampling from a large data set. Subsamples are used both as an end-goal in data summarization (where fairness could be a legal, political, or moral requirement) and to train algorithms (where biases in the samples are often a source of bias in the resulting model). Consequently, there is a growing effort to modify either the subsampling methods or the algorithms themselves in order to ensure fairness. However, in doing so, a question that seems to be overlooked is whether it is possible to produce fair subsamples that are also adequately representative of the feature space of the data set, an important and classic requirement in machine learning. Can diversity and fairness be simultaneously ensured? We start by noting that, in some applications, guaranteeing one does not necessarily guarantee the other, and a new approach is required. Subsequently, we present an algorithmic framework which allows us to produce both fair and diverse samples. Our experimental results on an image summarization task show marked improvements in fairness without compromising feature diversity by much, giving us the best of both worlds.
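One naive way to combine the two requirements, offered here purely as an illustration and not as the paper's algorithm, is greedy max-min-distance (k-center style) selection with per-group quotas; all names and data below are hypothetical:

```python
import numpy as np

def fair_diverse_sample(X, groups, quota, k):
    """Greedily pick k rows of X that are far apart (diversity) while
    never exceeding each group's quota (a crude fairness constraint).
    Quotas must sum to at least k. Illustrative only."""
    n = len(X)
    chosen = [0]  # seed with the first point (arbitrary choice)
    counts = {g: 0 for g in quota}
    counts[groups[0]] += 1
    while len(chosen) < k:
        # distance from each point to its nearest already-chosen point
        d = np.min(
            [np.linalg.norm(X - X[c], axis=1) for c in chosen], axis=0)
        # mask out points whose group quota is already filled
        for i in range(n):
            if counts[groups[i]] >= quota[groups[i]]:
                d[i] = -1.0
        nxt = int(np.argmax(d))
        chosen.append(nxt)
        counts[groups[nxt]] += 1
    return chosen

# Two tight clusters, each containing both groups.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
groups = ['a', 'a', 'b', 'b']
chosen = fair_diverse_sample(X, groups, quota={'a': 1, 'b': 1}, k=2)
```

With quota 1 per group, the greedy step is forced to take the second point from the other group, landing in the far cluster, so the sample is both balanced and spread out.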

**Inertial Regularization and Selection (IRS): Sequential Regression in High-Dimension and Sparsity**

In this paper, we develop a new sequential regression modeling approach for data streams. Data streams are commonly found around us; for example, in a retail enterprise, sales data is continuously collected every day. A demand forecasting model is an important outcome from such data and needs to be continuously updated with the new incoming data. The main challenges in such modeling arise when there is a) high dimensionality and sparsity, b) a need for adaptive use of prior knowledge, and/or c) structural change in the system. The proposed approach addresses these challenges by incorporating an adaptive L1-penalty and an inertia term in the loss function, and is thus called Inertial Regularization and Selection (IRS). The former term performs model selection to handle the first challenge, while the latter is shown to address the last two. A recursive estimation algorithm is developed and shown to outperform commonly used state-space models, such as Kalman filters, in experimental studies and on real data.
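A hedged sketch of the kind of per-step objective the abstract describes (the exact penalty weighting and the recursive estimator in the paper may differ) combines a fit term, an L1 sparsity penalty, and an inertia term tying the new coefficients to the previous estimate:

```python
import numpy as np

def irs_objective(beta, X, y, beta_prev, lam, gamma):
    """IRS-style loss for one time step: squared error plus a
    lasso-like L1 penalty (model selection) and an inertia term
    (adaptive use of the previous estimate). Weighting is illustrative."""
    fit = np.sum((y - X @ beta) ** 2)
    sparsity = lam * np.sum(np.abs(beta))            # handles challenge a)
    inertia = gamma * np.sum((beta - beta_prev) ** 2)  # handles b) and c)
    return fit + sparsity + inertia

# Tiny worked example: perfect fit, so only the penalties contribute.
beta = np.array([1.0, 0.0])
val = irs_objective(beta, np.eye(2), np.array([1.0, 0.0]),
                    beta_prev=np.zeros(2), lam=0.1, gamma=0.5)
```

A small gamma lets the coefficients move quickly after a structural change; a large gamma keeps them close to the prior estimate.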

**Representation Learning with Deconvolution for Multivariate Time Series Classification and Visualization**

We propose a new model based on deconvolutional networks and SAX discretization to learn representations for multivariate time series. Deconvolutional networks fully exploit the powerful expressiveness of deep neural networks in an unsupervised manner. We design a network structure specifically to capture the cross-channel correlation with deconvolution, forcing the pooling operation to perform dimension reduction along each position in the individual channel. Discretization based on Symbolic Aggregate Approximation is applied to the feature vectors to further extract a bag of features. We show how this representation and bag of features help with classification. A full comparison with the sequence-distance-based approach is provided to demonstrate the effectiveness of our approach on standard datasets. We further build the Markov matrix from the discretized deconvolution representation to visualize the time series as complex networks, which show more class-specific statistical properties and clear structures with respect to different labels.
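The SAX step is a standard technique independent of the paper's deconvolutional features: z-normalize the sequence, average it into segments (PAA), then map each segment mean to a symbol via Gaussian breakpoints. A minimal sketch, with the usual breakpoint lookup table for small alphabets:

```python
import numpy as np

# Equiprobable N(0,1) breakpoints for common SAX alphabet sizes.
BREAKPOINTS = {3: [-0.43, 0.43], 4: [-0.6745, 0.0, 0.6745]}

def sax(series, n_segments, alphabet="abcd"):
    """Symbolic Aggregate Approximation: z-normalize, piecewise-average
    into n_segments, then discretize each mean into a symbol."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)  # z-normalize
    paa = np.array([s.mean() for s in np.array_split(x, n_segments)])
    cuts = BREAKPOINTS[len(alphabet)]
    return "".join(alphabet[i] for i in np.searchsorted(cuts, paa))

word = sax([0, 0, 1, 1, 5, 5, 9, 9], n_segments=4)
```

In the paper's pipeline the input to this discretization would be the learned feature vectors rather than the raw series; the resulting symbol strings feed the bag-of-features representation.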

**Large Scale Parallel Computations in R through Elemental**

Even though the scale of statistical analysis problems has increased tremendously in recent years, many statistical software tools are still limited to single-node computations. However, statistical analyses are largely based on dense linear algebra operations, which have been deeply studied, optimized, and parallelized in the high-performance computing community. To make high-performance distributed computations available for statistical analysis, and thus enable large-scale statistical computations, we introduce RElem, an open source package that integrates the distributed dense linear algebra library Elemental into R. On the one hand, RElem provides direct wrappers of Elemental's routines; on the other hand, it overloads various operators and functions to provide an entirely native R experience for distributed computations. We showcase how simple it is to port existing R programs to RElem and demonstrate that RElem indeed makes it possible to scale beyond the single-node limitation of R with the full performance of Elemental and without any overhead.

**SSH (Sketch, Shingle, & Hash) for Indexing Massive-Scale Time Series**

Similarity search on time series is a frequent operation in large-scale data-driven applications. Sophisticated similarity measures are standard for time series matching, as time series are usually misaligned. Dynamic Time Warping (DTW) is the most widely used similarity measure for time series because it performs alignment and matching at the same time. However, the alignment makes DTW slow. To speed up the expensive similarity search with DTW, branch-and-bound pruning strategies are adopted. However, branch-and-bound pruning is only useful for very short queries (low-dimensional time series), and the bounds are quite weak for longer queries; due to the loose bounds, the branch-and-bound pruning strategy boils down to a brute-force search. To circumvent this issue, we design SSH (Sketch, Shingle, & Hash), an efficient and approximate hashing scheme which is much faster than the state-of-the-art branch-and-bound search technique, the UCR suite. SSH uses a novel combination of sketching, shingling, and hashing techniques to produce (probabilistic) indexes which align (near perfectly) with the DTW similarity measure. The generated indexes are then used to create hash buckets for sub-linear search. Our results show that SSH is very effective for longer time sequences, pruning around 95% of candidates and leading to a massive speedup in search with DTW. Empirical results on two large-scale benchmark time series datasets show that our proposed method can be around 20 times faster than the state-of-the-art package (the UCR suite) without any significant loss in accuracy.
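The three stages named in the acronym can be sketched end to end in a toy form; every parameter below (filter length, shingle size, number of hash functions) is an illustrative choice, not the paper's configuration:

```python
import numpy as np

def ssh_signature(series, filt_len=3, shingle_len=2, n_hashes=8, seed=0):
    """Toy SSH pipeline: slide a random filter over the series and keep
    sign bits (Sketch), form overlapping n-grams of the bit string
    (Shingle), then MinHash the shingle set (Hash)."""
    rng = np.random.default_rng(seed)
    filt = rng.standard_normal(filt_len)
    x = np.asarray(series, dtype=float)
    # 1) Sketch: signs of sliding-window correlations with the filter.
    bits = [int(np.dot(x[i:i + filt_len], filt) >= 0)
            for i in range(len(x) - filt_len + 1)]
    # 2) Shingle: set of overlapping n-grams of the bit string.
    shingles = {tuple(bits[i:i + shingle_len])
                for i in range(len(bits) - shingle_len + 1)}
    # 3) Hash: MinHash signature of the shingle set.
    hashers = [(int(a), int(b))
               for a, b in rng.integers(1, 2**31, size=(n_hashes, 2))]
    sig = [min((a * hash(s) + b) % (2**31 - 1) for s in shingles)
           for a, b in hashers]
    return sig

# Identical series map to identical signatures, which is what lets
# the signature components define hash buckets for sub-linear search.
s1 = ssh_signature([0, 1, 2, 3, 4, 5, 6, 7])
s2 = ssh_signature([0, 1, 2, 3, 4, 5, 6, 7])
```

The sign-bit sketch tolerates small temporal shifts better than raw values, which is the intuition behind the claimed alignment with DTW.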

**Introduction: Cognitive Issues in Natural Language Processing**

This special issue is dedicated to getting a better picture of the relationship between computational linguistics and cognitive science. It specifically raises two questions: ‘what is the potential contribution of computational language modeling to cognitive science?’ and, conversely, ‘what is the influence of cognitive science on contemporary computational linguistics?’

**Truncated Variance Reduction: A Unified Approach to Bayesian Optimization and Level-Set Estimation**

We present a new algorithm, truncated variance reduction (TruVaR), that treats Bayesian optimization (BO) and level-set estimation (LSE) with Gaussian processes in a unified fashion. The algorithm greedily shrinks a sum of truncated variances within a set of potential maximizers (BO) or unclassified points (LSE), which is updated based on confidence bounds. TruVaR is effective in several important settings that are typically non-trivial to incorporate into myopic algorithms, including pointwise costs and heteroscedastic noise. We provide a general theoretical guarantee for TruVaR covering these aspects, and use it to recover and strengthen existing results on BO and LSE. Moreover, we provide a new result for a setting where one can select from a number of noise levels having associated costs. We demonstrate the effectiveness of the algorithm on both synthetic and real-world data sets.

**Transforming a matrix into a standard form**

We show that every matrix all of whose entries are in a fixed subgroup of the group of units of a commutative ring with identity is equivalent to a standard form. As a consequence, we improve the proof of Theorem 5 in D. Best, H. Kharaghani, H. Ramp [Disc. Math. 313 (2013), 855–864].

**Virtual Embodiment: A Scalable Long-Term Strategy for Artificial Intelligence Research**

Meaning has been called the ‘holy grail’ of a variety of scientific disciplines, ranging from linguistics to philosophy, psychology and the neurosciences. The field of Artificial Intelligence (AI) is very much a part of that list: the development of sophisticated natural language semantics is a sine qua non for achieving a level of intelligence comparable to humans. Embodiment theories in cognitive science hold that human semantic representation depends on sensorimotor experience; the abundant evidence that human meaning representation is grounded in the perception of physical reality leads to the conclusion that meaning must depend on a fusion of multiple (perceptual) modalities. Despite this, AI research in general, and its subdisciplines such as computational linguistics and computer vision in particular, have focused primarily on tasks that involve a single modality. Here, we propose virtual embodiment as an alternative, long-term strategy for AI research that is multi-modal in nature and that allows for the kind of scalability required to develop the field coherently and incrementally, in an ethically responsible fashion.

**Parallelizing Spectral Algorithms for Kernel Learning**

We consider a distributed learning approach in supervised learning for a large class of spectral regularization methods in an RKHS framework. The data set of size n is partitioned into m disjoint subsets. On each subset, some spectral regularization method (belonging to a large class, including in particular Kernel Ridge Regression, L2-boosting and spectral cut-off) is applied. The regression function f is then estimated via simple averaging, leading to a substantial reduction in computation time. We show that minimax optimal rates of convergence are preserved if m grows sufficiently slowly as n → ∞, with the admissible growth depending on the smoothness assumptions on f and the intrinsic dimensionality. In spirit, our approach is classical.
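The divide-and-conquer scheme the abstract analyzes can be sketched for the Kernel Ridge Regression special case; the RBF kernel, regularization constant, and toy data below are illustrative choices, not the paper's setup:

```python
import numpy as np

def krr_fit_predict(X, y, X_test, lam=1e-2, gamma=1.0):
    """Kernel ridge regression with an RBF kernel on one data subset."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    K = k(X, X)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return k(X_test, X) @ alpha

def distributed_krr(X, y, X_test, m):
    """Partition the data into m disjoint subsets, fit KRR on each,
    and average the m predictions, cutting the cubic solve cost."""
    parts = np.array_split(np.arange(len(X)), m)
    preds = [krr_fit_predict(X[idx], y[idx], X_test) for idx in parts]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
pred = distributed_krr(X, y, np.array([[0.0]]), m=4)
```

Each subset solve is O((n/m)^3) instead of O(n^3), which is where the computational saving comes from; the abstract's result says the statistical rate survives this shortcut as long as m does not grow too fast.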

**High-Dimensional Adaptive Function-on-Scalar Regression**

Applications of functional data with large numbers of predictors have grown precipitously in recent years, driven, in part, by rapid advances in genotyping technologies. Given the large numbers of genetic mutations encountered in genetic association studies, statistical methods which more fully exploit the underlying structure of the data are imperative for maximizing statistical power. However, there is currently very limited work on functional data with large numbers of predictors. Tools are presented for simultaneous variable selection and parameter estimation in a functional linear model with a functional outcome and a large number of scalar predictors; the technique is called AFSL, for Adaptive Function-on-Scalar Lasso. It is demonstrated how techniques from convex analysis over Hilbert spaces can be used to establish a functional version of the oracle property for AFSL over any real separable Hilbert space, even when the number of predictors is exponentially large compared to the sample size. AFSL is illustrated via a simulation study and via data from the Childhood Asthma Management Program (CAMP), selecting those genetic mutations which are important for lung growth.

**On Multiplicative Multitask Feature Learning**

We investigate a general framework of multiplicative multitask feature learning which decomposes each task's model parameters into a product of two components: one used across all tasks and one task-specific. Several previous methods can be viewed as special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived for the across-task component as related to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effect. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. Empirical studies reveal the relative advantages of the two new formulations in comparison with the state of the art, providing instructive insights into the feature learning problem with multiple tasks.
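The decomposition can be made concrete with a small numeric sketch; treating the product as elementwise and the particular regularizer choices below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

# Multiplicative multitask decomposition: each task's weight vector is
# the elementwise product w_t = c * v_t of a shared across-task
# component c and a task-specific component v_t. Values are illustrative.
c = np.array([1.0, 0.0, 0.5])           # shared: feature 2 switched off
V = np.array([[2.0, 3.0, 4.0],           # task 1 specific component
              [1.0, -1.0, 2.0]])         # task 2 specific component
W = c * V                                # per-task model parameters

def penalty(c, V, lam1, lam2):
    """Separate regularizers on the two components, e.g. L1 on the
    shared component (feature selection across tasks) and squared L2
    on the task-specific ones. The framework studies how such choices
    map to joint regularizers on W."""
    return lam1 * np.abs(c).sum() + lam2 * (V ** 2).sum()

reg = penalty(c, V, lam1=0.1, lam2=0.01)
```

Because c multiplies every task's parameters, a zero entry in c removes that feature from all tasks at once, which is the mechanism behind shared feature selection in this family of methods.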

**A data augmentation methodology for training machine/deep learning gait recognition algorithms**

There are several confounding factors that can reduce the accuracy of gait recognition systems. These factors can reduce the distinctiveness of gait or alter the features used to characterise it; they include variations in clothing, lighting, pose, and environment, such as the walking surface. Full invariance to all confounding factors is challenging in the absence of high-quality labelled training data. We introduce a simulation-based methodology and a subject-specific dataset which can be used for generating synthetic video frames and sequences for data augmentation. With this methodology, we generated a multi-modal dataset. In addition, we supply simulation files that provide the ability to simultaneously sample from several confounding variables. The basis of the data is real motion capture data of subjects walking and running on a treadmill at different speeds. Results from gait recognition experiments suggest that information about the identity of subjects is retained within synthetically generated examples. The dataset and methodology allow studies into fully invariant identity recognition spanning a far greater number of observation conditions than would otherwise be possible.

• Automatic Image De-fencing System

• Safety Verification of Deep Neural Networks

• Sensitivity analysis for an unobserved moderator in RCT-to-target-population generalization of treatment effects

• Cut-off method for endogeny of recursive tree processes

• Mean-Field Variational Inference for Gradient Matching with Gaussian Processes

• A Noisy-Influence Regularity Lemma for Boolean Functions

• Improved Method to extract Nucleon Helicity Distributions using Event Weighting

• Permutation tests in the two-sample problem for functional data

• Learning Cost-Effective Treatment Regimes using Markov Decision Processes

• Tracy-Widom fluctuations for perturbations of the log-gamma polymer in intermediate disorder

• Distance signless Laplacian spectral radius and Hamiltonian properties of graphs

• Spectral Angle Based Unary Energy Functions for Spatial-Spectral Hyperspectral Classification using Markov Random Fields

• Multitask Learning of Vegetation Biochemistry from Hyperspectral Data

• Modeling and Analysis of Uplink Non-Orthogonal Multiple Access (NOMA) in Large-Scale Cellular Networks Using Poisson Cluster Processes

• Ranking of classification algorithms in terms of mean-standard deviation using A-TOPSIS

• Optimization on Submanifolds of Convolution Kernels in CNNs

• P_3-Games on Chordal Bipartite Graphs

• Understanding Sea Ice Melting via Functional Data Analysis

• Ergodic maximum principle for stochastic systems

• Windings of planar processes and applications to the pricing of Asian options

• Exercise Motion Classification from Large-Scale Wearable Sensor Data Using Convolutional Neural Networks

• Study of Tomlinson-Harashima Precoding Strategies for Physical-Layer Security in Wireless Networks

• A class of Weiss-Weinstein bounds and its relationship with the Bobrovsky-Mayer-Wolf-Zakai bounds

• Certified Roundoff Error Bounds using Bernstein Expansions and Sparse Krivine-Stengle Representations

• p-Causality: Identifying Spatiotemporal Causal Pathways for Air Pollutants with Urban Big Data

• Convergence of the Euler-Maruyama method for multidimensional SDEs with discontinuous drift and degenerate diffusion coefficient

• The effect of delay on contact tracing

• The limits of weak selection and large population size in evolutionary game theory

• Fluctuations of Functions of Wigner Matrices

• Deep image mining for diabetic retinopathy screening

• Reinforcement Learning in Conflicting Environments for Autonomous Vehicles

• A statistical approach to covering lemmas

• Local Maxima and Improved Exact Algorithm for MAX-2-SAT

• General Central Limit Theorems for Associated Sequences

• Fast and Reliable Parameter Estimation from Nonlinear Observations

• Cross Device Matching for Online Advertising with Neural Feature Ensembles : First Place Solution at CIKM Cup 2016

• Analysis of Count Data by Transmuted Geometric Distribution

• Multi-View Subspace Clustering via Relaxed $L_1$-Norm of Tensor Multi-Rank

• The quadratic regulator problem and the Riccati equation for a process governed by a linear Volterra integrodifferential equation

• Asymptotic of Non-Crossings probability of Additive Wiener Fields

• The first Cheeger constant of a simplex

• Another characterization of homogeneous Poisson processes

• Two are Better than One: An Ensemble of Retrieval- and Generation-Based Dialog Systems

• Real-time Halfway Domain Reconstruction of Motion and Geometry

• On Zermelo’s theorem

• Stochastic inference with spiking neurons in the high-conductance state

• Colouring simplicial complexes: on the Lechuga-Murillo’s model

• Not All Multi-Valued Partial CFL Functions Are Refined by Single-Valued Functions

• Death and rebirth of neural activity in sparse inhibitory networks

• Hybrid-DCA: A Double Asynchronous Approach for Stochastic Dual Coordinate Ascent

• Learning Deep Architectures for Interaction Prediction in Structure-based Virtual Screening

• The Security of Hardware-Based Omega(n^2) Cryptographic One-Way Functions: Beyond Satisfiability and P=NP

• Simpler PAC-Bayesian Bounds for Hostile Data

• On the general solution of the Heideman-Hogan family of recurrences

• Distinguishing number and distinguishing index of Kronecker product of two graphs

• On the dynamic consistency of hierarchical risk-averse decision problems

• Output-sensitive Complexity of Multiobjective Combinatorial Optimization

• On the maximum number of colorings of a graph

• Stochastic Modeling and Statistical Inference of Intrinsic Noise in Gene Regulation System via Chemical Master Equation

• 3D Hand Pose Tracking and Estimation Using Stereo Matching

• Sets of Priors Reflecting Prior-Data Conflict and Agreement

• Eulerian polynomials and descent statistics

• Maximizing the number of $x$-colorings of $4$-chromatic graphs

• Partitioning Trillion-edge Graphs in Minutes

• Robust Bayesian Reliability for Complex Systems under Prior-Data Conflict

• A Polynomial Kernel for Distance-Hereditary Vertex Deletion

• Template Matching Advances and Applications in Image Analysis

• Hybrid Static/Dynamic Schedules for Tiled Polyhedral Programs

• SPiKeS: Superpixel-Keypoints Structure for Robust Visual Tracking

• Are mmWave Low-Complexity Beamforming Structures Energy-Efficient? Analysis of the Downlink MU-MIMO

• Power of one non-clean qubit

• Irregular Stochastic differential equations driven by a family of Markov processes

• Random Multiple Access for M2M Communications with QoS Guarantees

• Information-theoretic Physical Layer Security for Satellite Channels

• Dual Ore’s theorem for distributive intervals of small index

• Minimum triplet covers of binary phylogenetic $X$-trees

• Differential Modulation for Asynchronous Two-Way-Relay Systems over Frequency-Selective Fading Channels

• Bayesian Nonparametric Modeling of Heterogeneous Groups of Censored Data

• Evolutionary State-Space Model and Its Application to Time-Frequency Analysis of Local Field Potentials

• Bridging Neural Machine Translation and Bilingual Dictionaries

• Encoding Temporal Markov Dynamics in Graph for Time Series Visualization

• Cohort aggregation modelling for complex forest stands: Spruce-aspen mixtures in British Columbia

• Decentralized Transmission Policies for Energy Harvesting Devices

• Molecular solutions for the Maximum K-colourable Subgraph Problem in the Adleman-Lipton model

• Channel capacity of polar coding with a given polar mismatched successive cancellation decoder

• Coincidences between characters to hooks and 2-part partitions on families arising from 2-regular classes

• A Rate-Distortion Approach to Caching

• Novel probabilistic models of spatial genetic ancestry with applications to stratification correction in genome-wide association studies

• Cubic edge-transitive bi-$p$-metacirculant

• Stability analysis of delay differential equations via Semidefinite programming

• Optimal insider control of systems with delay

• Discrete least-squares approximations over optimized downward closed polynomial spaces in arbitrary dimension

• Limiting behavior of 3-color excitable media on arbitrary graphs

• A coarse-to-fine algorithm for registration in 3D street-view cross-source point clouds

• Interference Management and Power Allocation for NOMA Visible Light Communications Network

• An Assmus-Mattson theorem for codes over commutative association schemes

• MultiCol-SLAM – A Modular Real-Time Multi-Camera SLAM System

• Optimizing egalitarian performance in the side-effects model of colocation for data center resource management

• STDP allows close-to-optimal spatiotemporal spike pattern detection by single coincidence detector neurons

• Large and moderate deviations for the left random walk on GL_d(R)

• Learning Reporting Dynamics during Breaking News for Rumour Detection in Social Media

• Challenges to be addressed for realising an Ephemeral Cloud Federation

• Theoretical Analysis of Active Contours on Graphs

• Cutoff phenomenon for the asymmetric simple exclusion process and the biased card shuffling

• On Solving Non-preemptive Mixed-criticality Match-up Scheduling Problem with Two and Three Criticality Levels

• QoE-aware Scalable Video Transmission in MIMO Systems

• Characterization of an inconsistency ranking for pairwise comparison matrices

• Percolation results for the Continuum Random Cluster Model

• Record Counting in Historical Handwritten Documents with Convolutional Neural Networks

• Possibilities of Recursive GPU Mapping for Discrete Orthogonal Simplices

• The Function-on-Scalar LASSO with Applications to Longitudinal GWAS

• Tracking of Wideband Multipath Components in a Vehicular Communication Scenario

• C-mix: a high dimensional mixture model for censored durations, with applications to genetic data

• Simplices in a small set of points in $\mathbb{F}_p^2$

• Statistical Machine Translation for Indian Languages: Mission Hindi

• Using Machine Learning to Detect Noisy Neighbors in 5G Networks

• Reordering rules for English-Hindi SMT

• Fluctuations around mean walking behaviours in diluted pedestrian flows

• Coalescence on the real line

• Finite size scaling of random XORSAT

• An Attempt to Design a Better Algorithm for the Uncapacitated Facility Location Problem

• Greedy Gaussian Segmentation of Multivariate Time Series

• Deep Multi-scale Location-aware 3D Convolutional Neural Networks for Automated Detection of Lacunes of Presumed Vascular Origin

• A Framework for Parallel and Distributed Training of Neural Networks

• Hybrid Quantile Regression Estimation for Time Series Models with Conditional Heteroscedasticity

• Optimistic Aborts for Geo-distributed Transactions

• Conditions on square geometric graphs

• Distilling Information Reliability and Source Trustworthiness from Digital Traces

• Feature Sensitive Label Fusion with Random Walker for Atlas-based Image Segmentation

• Strongly robust toric ideals in codimension 2

• One-dimensional reflected rough differential equations

• Relating Diversity and Human Appropriation from Land Cover Data

• Laplacian regularized low rank subspace clustering

• Analyzing the structure of multidimensional compressed sensing problems through coherence

• Dynamic Complexity of the Dyck Reachability

• ‘Weak yet strong’ restrictions of Hindman’s Finite Sums Theorem

• Balancing Suspense and Surprise: Timely Decision Making with Endogenous Information Acquisition

• A Variational Bayesian Approach for Restoring Data Corrupted with Non-Gaussian Noise

• Nonlinear Adaptive Algorithms on Rank-One Tensor Models

• Fair prediction with disparate impact: A study of bias in recidivism prediction instruments

• Target Set Selection in Dense Graph Classes

• PhaseMax: Convex Phase Retrieval via Basis Pursuit

• Nonconvex penalized regression using depth-based penalty functions: multitask learning and support union recovery in high dimensions

• Collapse transition of the interacting prudent walk

• Statistical inference in partially observed stochastic compartmental models with application to cell lineage tracking of in vivo hematopoiesis

• Robustness of critical bit rates for practical stabilization of networked control systems

• Some Relationships and Properties of the Hypergeometric Distribution

• On the smoothness of the value function for affine optimal control problems

• Automatic and Manual Segmentation of Hippocampus in Epileptic Patients MRI

• Automated OCT Segmentation for Images with DME

• Asymptotics of the number of standard Young tableaux of skew shape

• Quantized Precoding for Massive MU-MIMO

• Geometry of Polysemy

• Node Isolation of Secure Wireless Sensor Networks under a Heterogeneous Channel Model

• On capacity of optical communications over a lossy bosonic channel with a receiver employing the most general coherent electro-optic feedback control

• Adjusting for Unmeasured Spatial Confounding with Distance Adjusted Propensity Score Matching

• Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling

• On the Network Reliability Problem of the Heterogeneous Key Predistribution Scheme