Implementing AI systems with interpretability, transparency, and trust

This webcast will focus on different machine learning and visualization techniques that can be used to make complex artificial intelligence systems interpretable, transparent, and trustworthy. We will show how these techniques can be applied across the AI life cycle, specifically in the pre-modeling, modeling, and post-modeling stages. Machine learning models have been used successfully in areas such as object recognition, speech perception, language modeling, and automated decision optimization leveraging reinforcement learning. However, increasingly complicated nonlinear models and heavily engineered features limit transparency, slowing the adoption of machine learning models in application areas where critical decisions are made. Data scientists who understand the workings of complex models, their limitations, and the reasons for individual predictions can use predictive models more effectively.
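One widely used post-modeling interpretability technique of the kind the webcast describes is permutation feature importance: shuffle one feature at a time and measure how much the model's accuracy drops. This is a minimal illustrative sketch, not taken from the webcast; the toy model and data are invented.

```python
# Permutation feature importance: a simple post-modeling interpretability
# technique. The tiny dataset and the model below are illustrative only.
import random

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Average drop in the metric when one feature column is shuffled."""
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the link between feature j and the target
            Xp = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(baseline - metric(y, [model(row) for row in Xp]))
        importances.append(sum(drops) / n_repeats)
    return importances

def accuracy(y_true, y_pred):
    return sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)

# Toy model that only looks at feature 0, so feature 1 should score zero.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.1, 5.0], [0.9, 3.0], [0.2, 9.0], [0.8, 1.0]] * 5
y = [model(row) for row in X]
imps = permutation_importance(model, X, y, accuracy)
```

Because the toy model ignores feature 1 entirely, its importance comes out exactly zero, which is the kind of individual-level explanation the webcast argues data scientists need.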

MapR ML Workshop 3: Machine Learning in Production

This third deep-dive workshop in our machine learning logistics webinar series focuses on how to better manage models in production in real-world business settings. The workshop will show how key design characteristics of the Rendezvous Architecture make it easier to manage many models simultaneously and roll them out to production safely, how to achieve scaling, and what the limitations of Rendezvous are. Join us Tuesday, February 6th, as Ted covers the role of containerization for models, serverless model deployment, how speculative execution works and when to use it, how to fold in SLA requirements for practical business value from machine learning, and how the Rendezvous Architecture can deliver reliability in production. Ted will explain how these methods for machine learning management play out in a range of case histories, such as retail analytics, fraud scoring, ad scoring, and recommendation.
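The speculative-execution idea at the heart of a rendezvous-style setup can be sketched in a few lines: every model scores the same request in parallel, and the rendezvous step returns the preferred answer that arrives within the SLA deadline, falling back to a faster model otherwise. The models and timings below are made up for illustration; this is not the workshop's implementation.

```python
# Minimal sketch of speculative execution in a rendezvous-style setup:
# all models score the same request, and we keep only answers that beat
# the SLA deadline, preferring the stronger (slower) model when it makes it.
import time
from concurrent.futures import ThreadPoolExecutor, wait

def fast_baseline(request):
    return {"model": "baseline", "score": 0.70}

def slow_challenger(request):
    time.sleep(0.2)  # pretend this model is heavier
    return {"model": "challenger", "score": 0.91}

def rendezvous(request, models, sla_seconds):
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = [pool.submit(m, request) for m in models]
        done, _ = wait(futures, timeout=sla_seconds)
        results = [f.result() for f in done]
    # Prefer the challenger if it met the deadline, else fall back.
    for preferred in ("challenger", "baseline"):
        for r in results:
            if r["model"] == preferred:
                return r

answer = rendezvous({"user": 42}, [fast_baseline, slow_challenger],
                    sla_seconds=0.05)
```

With a 50 ms SLA the slow challenger misses the deadline, so the baseline answer is returned; this is how an SLA requirement folds directly into model selection at serving time.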

A Collection of 10 Data Visualizations You Must See

Writing code is fun. Creating models with it is even more intriguing. But things start getting tricky when it comes to presenting our work to a non-technical person. This is where visualizations come in. They are one of the best ways of telling a story with data. In this article, we look at some of the best charts and graphs people have created using tools like Python, R, and Tableau, among others. I have also included links to the source code or the official research paper, so you can attempt to create these visualizations on your own machine or just get a general understanding of how they were created. Let’s get into it.

When Variable Reduction Doesn’t Work

Exceptions sometimes make the best rules. Here’s an example of well-accepted variable reduction techniques resulting in an inferior model, and a case for dramatically expanding the number of variables we start with.
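One of the standard variable-reduction techniques the article pushes back on is correlation-based pruning: drop one of each pair of near-duplicate predictors before modeling. A generic sketch of that kind of pruning (the threshold and toy columns are illustrative, not from the article):

```python
# Correlation-based variable reduction: keep a column only if no
# already-kept column is nearly the same signal. Threshold is illustrative.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def reduce_variables(columns, threshold=0.95):
    """Greedily keep columns that are not near-duplicated by a kept column."""
    kept = {}
    for name, values in columns.items():
        if all(abs(pearson(values, kept_vals)) < threshold
               for kept_vals in kept.values()):
            kept[name] = values
    return list(kept)

cols = {
    "income":   [30, 40, 50, 60, 70],
    "income_k": [30.1, 40.2, 49.9, 60.1, 70.0],  # near copy of income
    "age":      [25, 60, 30, 55, 40],
}
print(reduce_variables(cols))  # income_k is dropped as redundant
```

The article's point is that exactly this kind of pruning, however reasonable it looks, can discard variables that carry signal in combination; sometimes the better model starts from far more variables, not fewer.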

Statistical Averages – Mean, Median and Mode

As I have mentioned several times, Data Science has 3 important pillars: Coding, Statistics and Business. To succeed, you have to be well-versed in all three. In this new series, I want to help you learn the most important parts of Statistics. This is the first step – and in this episode we are going to get to know the most basic statistical concept: statistical averages.
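The three averages the episode introduces can be computed by hand in a few lines; the sample data below is illustrative.

```python
# Mean, median, and mode computed from scratch on a small sample.
def mean(values):
    return sum(values) / len(values)

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    # Even-length samples average the two middle values.
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def mode(values):
    # The most frequently occurring value.
    return max(set(values), key=values.count)

data = [2, 3, 3, 5, 7, 10]
print(mean(data))    # 5.0
print(median(data))  # 4.0  (average of the middle values 3 and 5)
print(mode(data))    # 3
```

Note how the three can disagree on the same data: the mean is pulled up by the large value 10, while the median and mode stay near the bulk of the sample.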

Practical Text Generation with TensorFlow Serving

In this entry, I am going to talk about exposing and serving deep learning models via TensorFlow Serving, while showcasing my setup for a flexible and practical text generation solution. By text generation I mean the automated task of generating new, semantically valid pieces of text of variable length, given an optional seed string. The idea is to be able to draw on different models for different use cases (Q&A, chatbot utilities, simplification, next-word suggestion), also based on different types of content (e.g. narrative, scientific, code), sources, or authors. Here is a first preview of the setup in action for sentence suggestion.
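For context, TensorFlow Serving exposes saved models over REST: a prediction is a POST to `/v1/models/<name>:predict` with an `"instances"` payload. A minimal sketch of building such a request follows; the model name `textgen` and the input fields are assumptions for illustration, not the author's actual setup.

```python
# Build a TensorFlow Serving REST prediction request.
# The model name "textgen" and the instance fields are hypothetical.
import json

def build_predict_request(host, model_name, seed_text, max_length):
    # 8501 is TensorFlow Serving's default REST port.
    url = "http://{}:8501/v1/models/{}:predict".format(host, model_name)
    payload = {"instances": [{"seed": seed_text, "length": max_length}]}
    return url, json.dumps(payload)

url, body = build_predict_request("localhost", "textgen", "Once upon a", 20)
```

The body can then be sent with any HTTP client, e.g. `requests.post(url, data=body).json()["predictions"]`, which makes it easy to swap in different served models per use case, as the post describes.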

Create your Machine Learning library from scratch with R! (1/3)

When dealing with Machine Learning problems in R, most of the time you rely on existing libraries. This speeds up the analysis process, but do you really understand what is behind the algorithms? Could you implement a logistic regression from scratch in R? The goal of this post is to create our own basic machine learning library from scratch with R.
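The post builds its library in R; as a companion, here is the same idea (logistic regression trained by plain gradient descent, no libraries) sketched in Python, with made-up toy data.

```python
# Logistic regression from scratch: sigmoid, averaged gradient descent
# on the cross-entropy loss, and a 0/1 prediction rule. Toy data only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        grad_w = [0.0] * len(w)
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of cross-entropy w.r.t. the logit
            for j, xj in enumerate(xi):
                grad_w[j] += err * xj
            grad_b += err
        w = [wj - lr * g / n for wj, g in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5 else 0

# Linearly separable toy data: class 1 when both features are large.
X = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.3], [0.9, 1.0], [1.0, 0.8], [0.8, 0.9]]
y = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(X, y)
preds = [predict(w, b, xi) for xi in X]
```

Translating this loop into R's vectorized idiom (a matrix multiply instead of the inner loops) is essentially the exercise the post works through.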

The 8 Neural Network Architectures Machine Learning Researchers Need to Learn

Machine learning is needed for tasks that are too complex for humans to code directly. Some tasks are so complex that it is impractical, if not impossible, for humans to work out all of the nuances and code for them explicitly. So instead, we provide a large amount of data to a machine learning algorithm and let the algorithm work it out by exploring that data and searching for a model that achieves what the programmers have set out to achieve.

Data as a Feature

The consumerization of applications is making the role of product managers more difficult than ever. How do you build products or services that meet user demands for both power and simplicity? Business applications are evolving and user expectations for quality, easy-to-use software are at an all-time high. Companies are now gaining competitive advantage by providing intuitive application experiences that help users achieve goals. The best applications—the ones that stick—are those that empower users to realize the full value of their data. This book explores embedded analytics and how treating ‘data as a feature’ can help product managers create indispensable applications that guide users to reach their most critical goals.

Deep Learning from first principles in Python, R and Octave – Part 3

This is the third part in my series on Deep Learning from first principles in Python, R and Octave. In the first part, Deep Learning from first principles in Python, R and Octave-Part 1, I implemented logistic regression as a 2-layer neural network. The 2nd part, Deep Learning from first principles in Python, R and Octave-Part 2, dealt with the implementation of 3-layer Neural Networks with 1 hidden layer to perform classification tasks, where the 2 classes cannot be separated by a linear boundary. In this third part, I implement a multi-layer, Deep Learning (DL) network of arbitrary depth (any number of hidden layers) and arbitrary height (any number of activation units in each hidden layer). The implementations of these Deep Learning networks, in all 3 parts, are based on vectorized versions in Python, R and Octave. The implementation in the 3rd part is for an L-layer Deep Network, but without any regularization, early stopping, momentum or learning rate adaptation techniques. However, even the barebones multi-layer DL network is a handful and has enough hyperparameters to fine-tune and adjust.
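The core of such an L-layer network is a vectorized forward pass: each layer applies a weight matrix and an activation to the whole batch at once. A minimal sketch along those lines (layer sizes, initialization scale, and activations are illustrative choices, not the series' exact code):

```python
# Vectorized forward pass through an L-layer network: ReLU in the hidden
# layers, sigmoid at the output for binary classification. Shapes follow
# the convention (units, examples); all sizes here are illustrative.
import numpy as np

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def init_layers(layer_dims, seed=1):
    """layer_dims, e.g. [4, 5, 3, 1]: input size, hidden sizes, output size."""
    rng = np.random.default_rng(seed)
    params = []
    for l in range(1, len(layer_dims)):
        W = rng.standard_normal((layer_dims[l], layer_dims[l - 1])) * 0.01
        b = np.zeros((layer_dims[l], 1))
        params.append((W, b))
    return params

def forward(X, params):
    A = X
    for W, b in params[:-1]:   # hidden layers of arbitrary depth
        A = relu(W @ A + b)
    W, b = params[-1]          # output layer
    return sigmoid(W @ A + b)

X = np.random.default_rng(0).standard_normal((4, 6))  # 4 features, 6 examples
params = init_layers([4, 5, 3, 1])
AL = forward(X, params)  # shape (1, 6), probabilities in (0, 1)
```

Adding more entries to `layer_dims` deepens or widens the network without touching `forward`, which is exactly the "arbitrary depth and height" property the post implements (there with backpropagation as well).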

PK/PD reserving models

In this post I’d like to discuss an extension to Jake Morris’ hierarchical compartmental reserving model, as described in his original paper (Morris (2016)) and my previous post, which allows for a time-varying parameter ker(t) describing the changing rate of earning and reporting, and allows for correlation between RLR, the reported loss ratio, and RRF, the reserve robustness factor. A positive correlation would give evidence of a reserving cycle, i.e. in years with higher initial reported loss ratios, a higher proportion of reported losses are paid.
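To fix ideas, the compartmental structure has premium exposure (EX) earned and reported at rate ker(t) into outstanding claims (OS), which are then paid down at rate kp into paid losses (PD), with RLR and RRF scaling the flows between compartments. A sketch of that structure with simple Euler integration follows; the parameter values and the particular decaying form of ker(t) below are illustrative only, not fitted values from the post.

```python
# Sketch of the compartmental reserving dynamics:
#   dEX/dt = -ker(t) * EX
#   dOS/dt =  ker(t) * RLR * EX - kp * OS
#   dPD/dt =  kp * RRF * OS
# integrated with a simple Euler scheme. All parameter values are illustrative.
import math

def simulate(premium, RLR, RRF, ker, kp, t_max=10.0, dt=0.01):
    EX, OS, PD = premium, 0.0, 0.0
    t = 0.0
    while t < t_max:
        dEX = -ker(t) * EX
        dOS = ker(t) * RLR * EX - kp * OS
        dPD = kp * RRF * OS
        EX += dEX * dt
        OS += dOS * dt
        PD += dPD * dt
        t += dt
    return EX, OS, PD

# An illustrative time-varying reporting rate that decays over development time.
ker = lambda t: 1.5 * math.exp(-0.3 * t)
EX, OS, PD = simulate(premium=100.0, RLR=0.8, RRF=0.9, ker=ker, kp=0.5)
```

Over development time the exposure runs off and paid losses approach RLR × RRF × premium, which is why a positive correlation between RLR and RRF across accident years shows up as a reserving cycle: high-reported years also pay out a larger share of what was reported.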