Predictive Maintenance: Why It’s Important and How to Implement It

For companies that have been collecting machine data for years, an incredible opportunity exists in making this data actionable. Actionable data can offer an invaluable competitive advantage by enabling companies to streamline operational processes, optimize demand forecasting, and better understand their customers’ propensity to buy. In particular, predictive maintenance (PdM) is a core benefit of making machine data actionable, as it can decrease downtime and waste, leading to greater organizational efficiency. Turning the idea of PdM into an actual deployment can be complex; however, several best practices can help drive results early in the process. For instance, it’s best to start small and learn a repeatable process on one dataset focused on a single use case. This exposes all stakeholders to the steps required and can help frame future PdM project discussions.
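To make “start small” concrete, a first experiment can be as simple as training a classifier on historical sensor readings to flag machines likely to fail. The sketch below is purely illustrative: the file name, sensor columns and failure label are all hypothetical.

```python
# Minimal predictive-maintenance sketch: predict failures from sensor history.
# The CSV layout, sensor columns and label name are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("machine_telemetry.csv")          # assumed file
X = df[["temperature", "vibration", "pressure"]]   # assumed sensor columns
y = df["failed_within_30_days"]                    # assumed binary label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```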


Kickstart your GDPR program

Data Discovery is the essential first step to building a successful General Data Protection Regulation (GDPR) compliance program, but it’s one that many companies are struggling to take. Companies are faced with terabytes or petabytes of data spread throughout their organization – and beyond – and don’t clearly know what personal data they hold or where it is. OpenText has launched a GDPR Discovery and Analysis Service, and I’d like to explain how we help kickstart your GDPR program. Whether you’re prepared or not, GDPR is coming, and we’re likely to see enforcement procedures commence shortly after 25 May this year. If you have EU residents as customers, suppliers or partners, then you need to comply with the new regulation or face potential fines of up to €20 million or 4% of annual turnover. GDPR compliance begins by knowing exactly what sensitive data you have, where it’s being stored and what you’re doing with it. Yet over 60% of security professionals say that they don’t know where their sensitive data is. Data Discovery has therefore become the first essential – and highly pressing – step to building an effective GDPR compliance program.
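Real data discovery tooling is far more sophisticated, but a toy sketch conveys the basic idea: walk your storage and flag files containing patterns that look like personal data. Here only email addresses are matched, and the starting directory is hypothetical.

```python
# Toy data-discovery sketch: walk a directory tree and flag files that
# appear to contain email addresses (one simple proxy for personal data).
import os
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scan(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    hits = EMAIL.findall(f.read())
            except OSError:
                continue
            if hits:
                print(f"{path}: {len(hits)} possible email address(es)")

scan("/data/shared")  # hypothetical starting point
```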


Off the Beaten Path – HTM-based Strong AI Beats RNNs and CNNs at Prediction and Anomaly Detection

This is the second in our “Off the Beaten Path” series looking at innovators in machine learning who have chosen strategies and methods outside the mainstream. In this article, we look at Numenta’s unique approach to scalar prediction and anomaly detection, based on their own brain research.


Data Science Simplified Part 11: Logistic Regression

In the last blog post of this series, we discussed classifiers: the categories of classifiers and how they are evaluated. We have also discussed regression models in depth. In this post, we delve a little deeper into how regression models can be used for classification tasks. Logistic regression is a widely used regression model for classification tasks. As usual, we will discuss by example.
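For readers who want to try it immediately, a minimal scikit-learn example (using a built-in dataset rather than the post’s own data) looks like this:

```python
# Minimal logistic regression for binary classification with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000)  # high max_iter so the solver converges
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
print("P(class=1) for first test row:", clf.predict_proba(X_test[:1])[0, 1])
```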


Ensemble Learning in R with SuperLearner

Boost your machine learning results and discover ensembles in R with the SuperLearner package: learn about the Random Forest algorithm, bagging, and much more!
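The super learner idea, fitting several base learners and training a meta-learner to combine their predictions, has a close Python analogue in scikit-learn’s stacking. The following is not the R package itself, just a minimal sketch of the same concept:

```python
# Python analogue of the super learner / stacking idea using scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]
# A meta-learner combines the base learners' cross-validated predictions.
ensemble = StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```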


Flexdashboard in R – What, Why and How?

The biggest problem for I-am-an-R-coder data scientists is the wall they hit when it comes to web-friendly interactive visualization. In most organizations, a data scientist’s role involves not just building sophisticated statistical models but, even more, extracting valuable insights from the data, and the end result is usually a (nice) visualization. The world hasn’t completely ruled out PowerPoint presentations, yet the need of the hour is interactive dashboards: less is more, and showing a value only on mouse hover is a lot better than having every value carved onto the chart. Interactive visualization also lets the analyst pack in more information (which reveals itself when required) than a static image can.


JupyterLab is Ready for Users

We are proud to announce the beta release series of JupyterLab, the next-generation web-based interface for Project Jupyter.


Recommender Engine – Under The Hood

Many of us are bombarded with recommendations in our day-to-day lives, be it on e-commerce sites or social media. Some of these recommendations look relevant, but some provoke a range of emotions, varying from confusion to anger. There are basically two types of recommender systems, content-based and collaborative filtering, and both have their pros and cons depending on the context in which you want to use them.
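To make the collaborative-filtering variant concrete, here is a tiny item-based sketch over a made-up user-item rating matrix (real systems work with far larger, sparser data):

```python
# Tiny item-based collaborative filtering sketch on a made-up rating matrix.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rows = users, columns = items; 0 means "not rated" (toy data).
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
])

item_sim = cosine_similarity(ratings.T)   # item-item similarity matrix
user = ratings[0]                         # recommend for user 0
scores = item_sim @ user                  # similarity-weighted rating scores
scores[user > 0] = -np.inf                # mask items the user already rated
print("recommend item:", int(np.argmax(scores)))
```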


Where AI is already rivaling humans

Since 2011, AI has been in hypergrowth, and researchers have created several AI solutions that are almost as good as – or better than – humans in several domains, including games, healthcare, computer vision and object recognition, speech-to-text conversion, and speaker recognition, along with improved robots and chatbots for solving specific problems.


Google Colab Free GPU Tutorial

Hello! I will show you how to use Google Colab, Google’s free cloud service for AI developers. With Colab, you can develop deep learning applications on the GPU for free.
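Once you have switched the notebook to a GPU runtime, it is worth a quick sanity check that a framework can actually see the device. A minimal check with PyTorch (any framework would do) might look like this:

```python
# Quick sanity check that the Colab runtime exposes a GPU (PyTorch version).
import torch

if torch.cuda.is_available():
    print("GPU found:", torch.cuda.get_device_name(0))
    device = torch.device("cuda")
else:
    print("No GPU; falling back to CPU.")
    device = torch.device("cpu")

x = torch.randn(1024, 1024, device=device)  # allocated on the GPU if present
print((x @ x).sum().item())
```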


Introduction to Deep Learning Using PyTorch

This video will serve as an introduction to PyTorch, a dynamic deep learning framework in Python. In this video, you will learn to create simple neural networks, which are the backbone of artificial intelligence. We will start with fundamental concepts of deep learning (including feed-forward networks, back-propagation, loss functions, etc.) and then dive into using PyTorch tensors to easily create our networks. Finally, we will move our code to CUDA so it can run on the GPU for even faster model training.
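As a taste of what the video covers, here is a minimal feed-forward network in PyTorch with the optional move to CUDA, using toy data rather than the video’s own code:

```python
# Minimal feed-forward network in PyTorch with an optional move to the GPU.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(                 # 2 inputs -> hidden layer -> 1 output
    nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1)
).to(device)

X = torch.randn(100, 2, device=device)        # toy inputs
y = (X.sum(dim=1, keepdim=True) > 0).float()  # toy binary targets

loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(200):               # training loop: forward pass, loss,
    loss = loss_fn(model(X), y)       # backward pass, parameter update
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```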


Build a recurrent neural network using Apache MXNet

A step-by-step tutorial to develop an RNN that predicts the probability of a word or character given the previous word or character.
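For orientation, the heart of such a model in MXNet’s Gluon API can be sketched as follows; this is a character-level version with assumed vocabulary size and dimensions, not the tutorial’s exact code:

```python
# Sketch of a character-level RNN in MXNet Gluon: given previous characters,
# produce unnormalized scores over the next character.
import mxnet as mx
from mxnet import gluon

class CharRNN(gluon.Block):
    def __init__(self, vocab_size, embed_dim=32, hidden=128, **kwargs):
        super().__init__(**kwargs)
        self.embed = gluon.nn.Embedding(vocab_size, embed_dim)
        self.rnn = gluon.rnn.LSTM(hidden, layout="NTC")  # batch, time, channel
        self.out = gluon.nn.Dense(vocab_size, flatten=False)

    def forward(self, x):               # x: (batch, time) integer char ids
        h = self.rnn(self.embed(x))     # (batch, time, hidden)
        return self.out(h)              # (batch, time, vocab_size) scores

net = CharRNN(vocab_size=65)            # assumed vocabulary size
net.initialize()
logits = net(mx.nd.array([[1, 2, 3, 4]]))  # toy batch of one 4-char sequence
print(logits.shape)
```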


Deep Learning Image Classification with Keras and Shiny

I have to admit my initial thoughts on deep learning were pessimistic, and in order not to succumb to impostor syndrome, I put off learning any new techniques in this growing subfield of machine learning until recently. After attending and speaking at Data Day Texas and listening to Lukas Biewald’s keynote, “Deep Learning in the Real World,” I began to see through the complexities of deep learning and understand its real-world applications. My favorite example from the keynote was Coca-Cola deploying a deep learning model to easily capture under-the-cap promotional codes. I left the conference with an initial idea for detecting deer in my backyard using a webcam and running an image classification algorithm, as my first step into learning by doing.
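For anyone with a similar starter project in mind, classifying a single webcam frame with a pretrained Keras model takes only a few lines; the image file name here is hypothetical:

```python
# Classify one image with a pretrained Keras model (ImageNet weights).
import numpy as np
from keras.applications.resnet50 import (ResNet50, preprocess_input,
                                         decode_predictions)
from keras.preprocessing import image

model = ResNet50(weights="imagenet")

img = image.load_img("backyard_frame.jpg", target_size=(224, 224))  # hypothetical file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, prob in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {prob:.2f}")
```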