Forecasting & Time Series Analysis – Manufacturing Case Study Example (Part 1)

Today we are starting a new case study series on YOU CANalytics involving forecasting and time series analysis. In this case study, we will learn about time series analysis for a manufacturing operation. Time series analysis and modeling have many business and social applications: they are extensively used to forecast company sales, product demand, stock market trends, agricultural production, etc. Before we learn more about forecasting, let's evaluate our own lives on a time scale: …


Time Series Decomposition – Manufacturing Case Study Example (Part 2)

In the previous article, we started a new case study on sales forecasting for a tractor and farm equipment manufacturing company called PowerHorse. Our final goal is to forecast tractor sales over the next 36 months. In this article, we will delve deeper into time series decomposition. As discussed earlier, the idea behind time series decomposition is to extract the different regular patterns embedded in the observed time series. But in order to understand why this is easier said than done, we need to understand some fundamental properties of mathematics and nature as an answer to the question: …
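The decomposition step itself is easy to run once the data is loaded. Below is a minimal sketch in Python (the original case study works in R; the file name, column name, and multiplicative form are assumptions for illustration, and a recent statsmodels is assumed):

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Monthly tractor sales (illustrative file/column names, not the case study data).
sales = pd.read_csv("tractor_sales.csv", parse_dates=["Month"], index_col="Month")

# Split the observed series into trend, seasonal, and residual components.
decomposition = seasonal_decompose(sales["Sales"], model="multiplicative", period=12)
decomposition.plot()
plt.show()
```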


More on Fully Automated Machine Learning

Recently we’ve been profiling Automated Machine Learning (AML) platforms, both of the professional variety and particularly of the proprietary one-click-to-model variety being pitched to untrained analysts and line-of-business managers. Since our first article, readers have suggested some additional companies we should look at; they are profiled here, along with some interesting observations about who is buying and why.


Nice Generalization of the K-NN Clustering Algorithm — Also Useful for Data Reduction

I describe here an interesting and intuitive clustering algorithm (that can be used for data reduction as well) offering several advantages over traditional classifiers:
• More robust against outliers and erroneous data
• Much faster to execute
• A generalization of well-known algorithms
You don’t need to know K-NN to understand this article — but click here if you want to learn more about it. You don’t need a background in statistical science either.
Let's describe this new algorithm and its various components in simple English.
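For readers who do want a quick baseline before reading on, here is a plain K-NN classifier in scikit-learn. This is background only, not the generalized algorithm the article introduces, and the toy data is made up:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X = rng.rand(200, 2)                      # 200 random points in the unit square
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # label by which side of the diagonal they fall on

knn = KNeighborsClassifier(n_neighbors=5) # classify by majority vote of the 5 nearest points
knn.fit(X, y)
print(knn.predict([[0.2, 0.9]]))          # predicted class for a new point
```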


Building Convolutional Neural Networks with Tensorflow

In the past I have mostly written about ‘classical’ Machine Learning, like Naive Bayes classification, Logistic Regression, and the Perceptron algorithm. In the past year I have also worked with Deep Learning techniques, and I would like to share with you how to make and train a Convolutional Neural Network from scratch, using TensorFlow. Later on we can use this knowledge as a building block to make interesting Deep Learning applications.
For this you will need to have TensorFlow installed (see installation instructions), and you should also have a basic understanding of Python programming and the theory behind Convolutional Neural Networks. After you have installed TensorFlow, you can run the smaller Neural Networks without a GPU, but for the deeper networks you will definitely need some GPU power.
The Internet is full of awesome websites and courses which explain how a convolutional neural network works. Some of them have good visualisations which make it easy to understand [click here for more info]. I don't feel the need to explain the same things again, so before you continue, make sure you understand how a convolutional neural network works. For example (a minimal layer-by-layer sketch follows this list):
• What is a convolutional layer, and what is the filter of this convolutional layer?
• What is an activation layer (ReLU, which is most widely used, sigmoid activation, or tanh)?
• What is a pooling layer (max pooling / average pooling), dropout?
• How does Stochastic Gradient Descent work?
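To make those terms concrete, here is a minimal sketch of a single convolution / ReLU / max-pooling block written against the TensorFlow 1.x-style API; the image size, filter shape, and variable names are illustrative assumptions, not code from the article:

```python
import tensorflow as tf  # assumes TensorFlow 1.x-style APIs

# Placeholder for a batch of 28x28 grayscale images (batch size left open).
x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])

# Convolutional layer: 32 filters of size 5x5 (the "filter" of the layer).
W = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
b = tf.Variable(tf.constant(0.1, shape=[32]))
conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME') + b

# Activation layer: ReLU, the most widely used choice.
relu = tf.nn.relu(conv)

# Pooling layer: 2x2 max pooling halves each spatial dimension.
pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
```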


Dogs vs. Cats: Image Classification with Deep Learning using TensorFlow in Python

Given a set of labeled images of cats and dogs, the task is to learn a machine learning model and then use it to classify a set of new images as cats or dogs.
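As a rough sketch of that setup (this is not the code from the post; the directory layout, image size, and tiny architecture are assumptions, and it presumes a recent TensorFlow 2.x), the labeled images could be read from class-named folders and a small binary CNN fitted on them:

```python
import tensorflow as tf

# Assumed layout: train/cats/*.jpg and train/dogs/*.jpg; labels are inferred from folder names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train/", image_size=(128, 128), batch_size=32)

# A deliberately small CNN with a sigmoid output for the binary cat-vs-dog decision.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# New images can then be classified with model.predict on a batch of resized images.
```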


Improving life with IoT and Analytics

Last weekend I attended an event hosted by GE Digital in association with T-HUB and Idea labs. The event was about industrial IoT (IIoT) and Predix, the platform GE Digital has developed for it. It was a very informative event, and I had the privilege of meeting some awesome people who are part of the platform's development. GE itself needs no introduction. Now, on to how IoT and analytics come into this. GE builds turbines and other equipment used to generate electricity from different sources such as water, gas, and nuclear. The new generation of these turbines and nuclear reactors is heavily instrumented with IoT sensors that send data to an IoT hub and the cloud. Once the data reaches the Predix platform (the platform GE developed for aggregating data and running analytics), it is analyzed, and the company can use it for predictive maintenance.


Manipulating and processing data in R

Data structures are the means by which data is represented in data analytics. In R, we can manipulate and process that data for analysis and visualization.


Using prediction models with CoreML

Machine learning allows computers to learn without being explicitly programmed. It's a hot and complex topic that you see in action almost everywhere, from movie recommendations to personal assistants. Recently, Apple released Core ML, a new framework for integrating machine learning models into any iOS app so predictions can happen on the device, without using any external service. You can use trained models from frameworks like Caffe, Keras, and scikit-learn, among others, and using coremltools, a Python library provided by Apple, you can convert those models to the Core ML format. In this tutorial, we're going to review the process of creating a prediction model with scikit-learn, converting it to Core ML format, and integrating it into an app. It is aimed at beginners, so it will explain some concepts and guide you through installing a Python environment for creating the model. A little knowledge of machine learning and Python will help you, but it's not absolutely required. However, this tutorial won't explain how to choose an algorithm, preprocess the data, train the model, test it, or tune the process, all of which are essential parts of a machine learning project.
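As a rough illustration of that pipeline (the model type, feature names, and toy data are assumptions for this sketch, not the tutorial's actual example), a scikit-learn model can be trained and converted with coremltools like so:

```python
import coremltools
from sklearn.linear_model import LinearRegression

# Toy training data: square footage -> price (made up for illustration).
X = [[800], [1000], [1250], [1500]]
y = [150000, 185000, 210000, 250000]

model = LinearRegression()
model.fit(X, y)

# Convert the trained scikit-learn model to the Core ML format.
coreml_model = coremltools.converters.sklearn.convert(
    model, input_features=["square_feet"], output_feature_names="price")
coreml_model.save("HousePricer.mlmodel")  # drag the .mlmodel file into the Xcode project
```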


Must-read Path-breaking Papers About Image Classification

Deep Learning models for Image Classification have achieved a dramatic decline in error rates over the last few years, and Deep Learning has since become a prime focus area for AI research. However, Deep Learning has been around for a few decades: Yann LeCun presented a paper pioneering Convolutional Neural Networks (CNNs) in 1998. But it wasn't until the start of the current decade that Deep Learning really took off. The recent disruption can be attributed to increased processing power (GPUs), the availability of abundant data (the ImageNet dataset), and new algorithms and techniques. It all started in 2012 with AlexNet, a large, deep Convolutional Neural Network that won the annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC). ILSVRC is a competition where research teams evaluate their algorithms on the given data set and compete to achieve higher accuracy on several visual recognition tasks. Since then, variants of CNNs have dominated the ILSVRC and have surpassed the level of human accuracy, which is considered to lie in the 5-10% error range.


Five Principles for Good Data Visualization

1. Good data visualization is informative
2. Good data visualization is well balanced
3. Good data visualization is equally concerned with what is not displayed
4. Good data visualization is created with pure data
5. Good data visualization is human


Bayesian A/B Testing Calculator


Learning Deep Learning with Keras

I teach deep learning both for a living (as the main deepsense.io instructor, in a Kaggle-winning team) and as a part of my volunteering with the Polish Children's Fund, giving workshops to gifted high-school students. I want to share a few things I've learnt about teaching (and learning) deep learning. Whether you want to start learning deep learning for your career, to have a nice adventure (e.g. with detecting huggable objects) or to get insight into machines before they take over, this post is for you! Its goal is not to teach neural networks by itself, but to provide an overview and to point to didactically useful resources.


Understanding AI Toolkits

Modern artificial intelligence makes many benefits available to business, bringing cognitive abilities to machines at scale. As a field of computer science, AI is moving at an unprecedented rate: the time you must wait for a research result in an academic paper to translate into production-ready code can now be measured in mere months. However, with this velocity comes a corresponding level of confusion for newcomers to the field. As well as developing familiarity with AI techniques, practitioners must choose their technology platforms wisely. This post surveys today’s foremost options for AI in the form of deep learning, examining each toolkit’s primary advantages as well as their respective industry supporters.


DeepMind’s Relational Reasoning Networks – Demystified

Every time DeepMind publishes a new paper, there is frenzied media coverage around it, and you will often read phrases that are misleading. For example, its new paper on relational reasoning networks had Futurism reporting it as "DeepMind Develops a Neural Network That Can Make Sense of Objects Around It". This is not only misleading, it also intimidates the everyday non-PhD reader. In this post I will go through the paper in an attempt to explain this new architecture in simple terms.


Heterogeneous change point inference

We propose a heterogeneous simultaneous multiscale change point estimator called 'H-SMUCE' for the detection of multiple change points of the signal in a heterogeneous Gaussian regression model. A piecewise constant function is estimated by minimizing the number of change points over the acceptance region of a multiscale test which locally adapts to changes in the variance. The multiscale test is a combination of local likelihood ratio tests which are properly calibrated by scale-dependent critical values to keep a global nominal level α, even for finite samples. We show that H-SMUCE controls the error of overestimation and underestimation of the number of change points. For this, new deviation bounds for F-type statistics are derived. Moreover, we obtain confidence sets for the whole signal. All results are non-asymptotic and uniform over a large class of heterogeneous change point models. H-SMUCE is fast to compute, achieves the optimal detection rate and estimates the number of change points at almost optimal accuracy for vanishing signals, while still being robust. We compare H-SMUCE with several state-of-the-art methods in simulations and analyse current recordings of a transmembrane protein in the bacterial outer membrane with pronounced heterogeneity for its states. An R package is available online.
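As a rough sketch of the setting (the precise assumptions are in the paper; the notation here is ours), the observations are independent Gaussians whose mean is piecewise constant and whose variance may also change between segments:

$$ Y_i = \theta(x_i) + \sigma(x_i)\,\varepsilon_i, \qquad \varepsilon_i \overset{\mathrm{iid}}{\sim} \mathcal{N}(0,1), \qquad i = 1, \dots, n, $$

where $\theta$ is the piecewise constant signal whose change points H-SMUCE estimates and $\sigma$ is not assumed constant across the series (the "heterogeneous" part).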


Magick 1.0: Advanced Graphics and Image Processing in R

Last week, version 1.0 of the magick package appeared on CRAN: an ambitious effort to modernize and simplify high quality image processing in R. This R package builds upon the Magick++ STL which exposes a powerful C++ API to the famous ImageMagick library.


Lessons Learned From Benchmarking Fast Machine Learning Algorithms

Boosted decision trees are responsible for more than half of the winning solutions in machine learning challenges hosted at Kaggle, according to KDNuggets. In addition to superior performance, these algorithms have practical appeal as they require minimal tuning. In this post, we evaluate two popular tree boosting software packages: XGBoost and LightGBM, including their GPU implementations. Our results, based on tests on six datasets, are summarized as follows:
1. XGBoost and LightGBM achieve similar accuracy metrics.
2. LightGBM has lower training time than XGBoost and its histogram-based variant, XGBoost hist, on all test datasets, in both the CPU and GPU implementations. The training time difference between the two libraries depends on the dataset and can be as large as 25x.
3. The XGBoost GPU implementation does not scale well to large datasets and ran out of memory in half of the tests.
4. XGBoost hist may be significantly slower than the original XGBoost when feature dimensionality is high.
All our code is open-source and can be found in this repo. We will explain the algorithms behind these libraries and evaluate them across different datasets. Do you like your machine learning to be quick? Then keep reading.
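To make the comparison concrete, here is a minimal sketch (not the benchmark code from the repo; the toy data and parameter values are assumptions) of how XGBoost, its histogram variant, and LightGBM are typically trained in Python:

```python
import numpy as np
import xgboost as xgb
import lightgbm as lgb

# Toy binary classification data, for illustration only.
rng = np.random.RandomState(0)
X = rng.rand(1000, 20)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

# XGBoost: exact tree construction vs. the histogram-based "XGBoost hist" variant.
dtrain = xgb.DMatrix(X, label=y)
xgb.train({"objective": "binary:logistic", "tree_method": "exact"}, dtrain, num_boost_round=50)
xgb.train({"objective": "binary:logistic", "tree_method": "hist"}, dtrain, num_boost_round=50)

# LightGBM, which is histogram-based by design.
ltrain = lgb.Dataset(X, label=y)
lgb.train({"objective": "binary", "verbosity": -1}, ltrain, num_boost_round=50)
```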