Debugging & Visualising training of Neural Network with TensorBoard

I started my deep learning journey a few years back, and I have learnt a lot in that period. But even after all these efforts, every neural network I train provides me with a new experience. If you have tried to train a neural network, you must know my plight! Through all this time I have developed a workflow, which I will share with you today, along with my learning and experience of building neural networks. I cannot guarantee it will work every time, but at least it may guide you on how to approach the problem. I will also share a tool which I find is a useful addition to the deep learning toolbox – TensorBoard.
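As a minimal sketch of the kind of TensorBoard setup the article builds towards (this is not the article's code; the toy data, model, and log directory are assumptions), the tf.keras TensorBoard callback is enough to get loss curves and weight histograms into the dashboard:

# A minimal sketch (not the article's code): log training metrics for TensorBoard
# using tf.keras. Assumes TensorFlow 2.x and a toy regression dataset.
import numpy as np
import tensorflow as tf

# Toy data: y = 3x + noise
x = np.random.rand(1000, 1).astype("float32")
y = 3 * x + 0.1 * np.random.randn(1000, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# The TensorBoard callback writes loss curves, weight histograms, and the graph
# to the log directory after every epoch.
tb_cb = tf.keras.callbacks.TensorBoard(log_dir="logs/run1", histogram_freq=1)
model.fit(x, y, epochs=20, validation_split=0.2, callbacks=[tb_cb])

# Then inspect the run with:  tensorboard --logdir logs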


Structural Changes in Global Warming

In time series analysis, structural changes represent shocks that affect how the data generating process evolves over time. That matters because one of the key assumptions of the Box-Jenkins methodology is that the structure of the data generating process does not change over time. How can structural changes be identified? The strucchange package can help with that, and the present tutorial shows how.
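The tutorial itself works with R's strucchange; purely as an illustration of the underlying idea (comparing one regression over the full sample against separate regressions before and after a candidate break), here is a rough NumPy sketch with simulated data and an assumed break point:

# A rough NumPy sketch of the idea behind a structural-break (Chow-type) test.
# The tutorial uses R's strucchange; the synthetic series and the break index
# here are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(n, dtype=float)
# Simulate a level shift (structural change) at t = 120
y = 0.02 * t + rng.normal(scale=0.5, size=n)
y[120:] += 2.0

def rss(tt, yy):
    """Residual sum of squares of an OLS fit of yy on [1, tt]."""
    X = np.column_stack([np.ones_like(tt), tt])
    beta, *_ = np.linalg.lstsq(X, yy, rcond=None)
    resid = yy - X @ beta
    return float(resid @ resid)

k = 2            # parameters per regression (intercept + slope)
split = 120      # candidate break point (assumed known here)
rss_pooled = rss(t, y)
rss_split = rss(t[:split], y[:split]) + rss(t[split:], y[split:])

# Chow F-statistic: a large value suggests the pre- and post-break regressions differ
f_stat = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
print(f"Chow F-statistic at t={split}: {f_stat:.1f}")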


Data Science: Performance of Python vs Pandas vs Numpy

Speed and time are key factors for any data scientist. In business, you do not usually work with toy datasets of a few thousand samples; it is more likely that your datasets will contain millions or hundreds of millions of samples. Customer orders, web logs, billing events, stock prices – datasets these days are huge. I assume you do not want to spend hours or days waiting for your data processing to complete. The biggest dataset I have worked with so far contained over 30 million records. When I ran my data processing script on this dataset for the first time, the estimated time to complete was around 4 days! I do not have a very powerful machine (a MacBook Air with an i5 and 4 GB of RAM), but the most I could accept was running the script overnight, not for multiple days. Thanks to some clever tricks, I was able to decrease this running time to a few hours. This post will explain the first step to achieving good data processing performance – choosing the right library/framework for your dataset.
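As a tiny, hedged illustration of the point (synthetic data, nothing like the 30-million-row dataset above), the same aggregation can be timed with a pure-Python loop, with pandas, and with NumPy:

# A tiny, illustrative benchmark: sum a derived column with a pure-Python loop,
# with pandas, and with NumPy. The data are synthetic stand-ins.
import time
import numpy as np
import pandas as pd

n = 1_000_000
prices = np.random.rand(n)
quantities = np.random.randint(1, 10, size=n)
df = pd.DataFrame({"price": prices, "quantity": quantities})

def timeit(label, fn):
    start = time.perf_counter()
    result = fn()
    print(f"{label:12s} {time.perf_counter() - start:8.3f}s  total={result:.1f}")

# Pure Python: iterate element by element
timeit("python loop", lambda: sum(p * q for p, q in zip(prices.tolist(),
                                                        quantities.tolist())))
# pandas: vectorized column arithmetic
timeit("pandas", lambda: (df["price"] * df["quantity"]).sum())
# NumPy: vectorized on the raw arrays
timeit("numpy", lambda: np.dot(prices, quantities))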


General Aspects in Selecting Best Variables

This chapter covers the following topics:
• Ranking the best variables with conventional machine learning algorithms, either predictive or clustering (see the sketch below).
• The nature of selecting variables with and without predictive models.
• The effect of variables working in groups (intuition and information theory).
• Exploring the best variable subset in practice using R.
Selecting the best variables is also known as feature selection, selecting the most important predictors, or selecting the best predictors, among other names.
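The chapter itself works in R; as a hedged Python sketch of the first bullet above, a variable ranking from a conventional ML algorithm might look like this (the dataset and model, iris plus a random forest, are illustrative assumptions only):

# Rank variables with a conventional ML algorithm. Not the chapter's R code;
# iris and a random forest are stand-ins chosen for illustration.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import pandas as pd

data = load_iris()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank variables by impurity-based importance, best first
ranking = pd.Series(model.feature_importances_, index=X.columns)
print(ranking.sort_values(ascending=False))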


Twitter analysis using R (Semantic analysis of French elections)

To perform the analysis, I needed a large number of tweets, and I wanted to use all of the tweets concerning the election. The Twitter search API is limited, since you only have access to a sample of tweets. The streaming API, on the other hand, allows you to collect the data in real time and to capture almost all tweets. Hence, I used the streamR package. I collected tweets in 60-second batches and saved them to .json files. Using batches instead of one large file reduces RAM consumption: instead of reading and then subsetting one large file, you can subset each batch and then merge them. Here is the code to collect the data with streamR.
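The streamR collection code appears in the original post; the following is only a rough Python sketch of the batch-then-merge step described above, assuming line-delimited .json batches named tweets_*.json with a text field:

# Not the post's streamR code: a hedged illustration of filtering each small
# .json batch before merging, instead of loading one huge file. File names,
# the "text" field, and the keywords are assumptions.
import glob
import pandas as pd

keywords = ("macron", "lepen")                     # assumed filter terms
batches = []
for path in sorted(glob.glob("tweets_*.json")):
    batch = pd.read_json(path, lines=True)         # one 60-second batch
    mask = batch["text"].str.lower().str.contains("|".join(keywords), na=False)
    batches.append(batch.loc[mask])                # keep only relevant tweets

tweets = pd.concat(batches, ignore_index=True)     # merge the filtered batches
print(len(tweets), "tweets kept")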


Facets: An Open Source Visualization Tool for Machine Learning Training Data

Getting the best results out of a machine learning (ML) model requires that you truly understand your data. However, ML datasets can contain hundreds of millions of data points, each consisting of hundreds (or even thousands) of features, making it nearly impossible to understand an entire dataset in an intuitive fashion. Visualization can help unlock nuances and insights in large datasets. A picture may be worth a thousand words, but an interactive visualization can be worth even more. Working with the PAIR initiative, we’ve released Facets, an open source visualization tool to aid in understanding and analyzing ML datasets. Facets consists of two visualizations that allow users to see a holistic picture of their data at different granularities. Get a sense of the shape of each feature of the data using Facets Overview, or explore a set of individual observations using Facets Dive. These visualizations allow you to debug your data which, in machine learning, is as important as debugging your model. They can easily be used inside of Jupyter notebooks or embedded into webpages. In addition to the open source code, we’ve also created a Facets demo website. This website allows anyone to visualize their own datasets directly in the browser without any software installation or setup, and without the data ever leaving their computer.


What is the future of deep learning? Are most machine learning experts turning to deep learning?

Yes, most faculty, graduate students, and a lot of engineering teams in industry have already abandoned everything else and shifted to deep learning. Most new graduate students I meet in applied areas such as computer vision know nothing about probabilistic graphical models, for instance, and their proposed solution to any problem is a CNN/LSTM/GAN.


Machine Learning Applied to Big Data, Explained

Machine learning with Big Data is, in many ways, different from ‘regular’ machine learning. This informative image helps identify the steps in machine learning with Big Data, and how they fit together into a process of their own.


R Programming Notes – Part 2

In an older post, I discussed a number of functions that are useful for programming in R. I wanted to expand on that topic by covering other functions, packages, and tools that are useful. Over the past year, I have been working as an R programmer, and these are some of the new lessons that have become fundamental to my work.


Textual entailment with TensorFlow

Textual entailment is a simple exercise in logic that attempts to discern whether one sentence can be inferred from another. A computer program that takes on the task of textual entailment attempts to categorize an ordered pair of sentences into one of three categories. The first category, called “positive entailment,” occurs when you can use the first sentence to prove that a second sentence is true. The second category, “negative entailment,” is the inverse of positive entailment. This occurs when the first sentence can be used to disprove the second sentence. Finally, if the two sentences have no correlation, they are considered to have a “neutral entailment.” Textual entailment is useful as a component in much larger applications. For example, question-answering systems may use textual entailment to verify an answer from stored information. Textual entailment may also enhance document summarization by filtering out sentences that don’t include new information. Other natural language processing (NLP) systems find similar uses for entailment. This article will guide you through building a simple and fast-to-train neural network to perform textual entailment using TensorFlow.
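As a minimal sketch of the task setup only (not the network the article builds), an ordered pair of sentences can be encoded with a shared LSTM and classified into the three entailment classes; the vocabulary size and sequence length below are assumptions:

# A minimal sketch of the entailment setup: a shared LSTM encodes the premise
# and the hypothesis, and a softmax picks one of the three classes.
# Vocabulary size, sequence length, and layer sizes are assumptions.
import tensorflow as tf

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 20_000, 30, 64

premise = tf.keras.Input(shape=(MAX_LEN,), dtype="int32", name="premise")
hypothesis = tf.keras.Input(shape=(MAX_LEN,), dtype="int32", name="hypothesis")

embed = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)
encoder = tf.keras.layers.LSTM(64)        # shared encoder for both sentences

merged = tf.keras.layers.concatenate([encoder(embed(premise)),
                                      encoder(embed(hypothesis))])
hidden = tf.keras.layers.Dense(64, activation="relu")(merged)
# Three classes: positive, negative, neutral entailment
output = tf.keras.layers.Dense(3, activation="softmax")(hidden)

model = tf.keras.Model([premise, hypothesis], output)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()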


Automatically Fitting the Support Vector Machine Cost Parameter

In an earlier post I discussed how to avoid overfitting when using Support Vector Machines. This was achieved using cross validation. In cross validation, prediction accuracy is maximized by varying the cost parameter. Importantly, prediction accuracy is calculated on a different subset of the data from that used for training. In this blog post I take that concept a step further, by automating the manual search for the optimal cost. The data set I’ll be using describes different types of glass based upon physical attributes and chemical composition. You can read more about the data here, but for the purposes of my analysis all you need to know is that the outcome variable is categorical (7 types of glass) and the 4 predictor variables are numeric.
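The post's analysis is done in R; as a hedged Python analogue of the same idea, a cross-validated grid search over the cost parameter C might look like this (the synthetic data below is only a stand-in for the glass dataset):

# A hedged Python analogue of the post's idea: pick the SVM cost parameter by
# cross-validation. The synthetic multiclass data stands in for the glass data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: 7 classes, a handful of numeric predictors
X, y = make_classification(n_samples=200, n_features=4, n_informative=4,
                           n_redundant=0, n_classes=7, n_clusters_per_class=1,
                           random_state=0)

pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": np.logspace(-2, 3, 12)}    # candidate cost values

# Accuracy is estimated on held-out folds, not on the training data
search = GridSearchCV(pipeline, param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print("best C:", search.best_params_["svc__C"])
print("cross-validated accuracy:", search.best_score_)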


Generalized Additive Models and Mixed-Effects in Agriculture

In the previous post I explored the use of linear models in the forms most commonly used in agricultural research. Clearly, when we talk about linear models we are implicitly assuming that all relations between the dependent variable y and the predictors x are linear. In fact, in a linear model we can specify different shapes for the relation between y and x, for example by including polynomials (read for example: https://…/fitting-polynomial-regression-r ). However, we can do that only in cases where we can clearly see a particular shape of the relation, for example quadratic. The problem is that in many cases we can see from a scatterplot that the points follow a non-linear pattern, but it is difficult to pin down its form. Moreover, in a linear model the interpretation of polynomial coefficients becomes more difficult, which may decrease their usefulness. An alternative approach is provided by Generalized Additive Models, which allow us to fit models with non-linear smoothers without specifying a particular shape a priori.
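The post itself works in R; purely as a hedged Python illustration of the same idea, a spline smoother can be fitted without committing to a polynomial shape (pyGAM and the simulated data are assumptions, not the post's code):

# A hedged illustration: fit a non-linear relation with a smooth term whose
# shape is learned from the data rather than specified a priori.
# pyGAM and the simulated data are assumptions, not the post's code.
import numpy as np
from pygam import LinearGAM, s   # pip install pygam

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 300)
y = np.sin(x) + 0.1 * x**2 + rng.normal(scale=0.3, size=x.size)

# s(0) requests a smooth term for the first (and only) predictor
gam = LinearGAM(s(0)).fit(x.reshape(-1, 1), y)
gam.summary()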


Artificial Intuition – A Breakthrough Cognitive Paradigm

In a previous post, I introduced the Meta Meta-Model of Deep Learning, but I did not introduce its details. A word of warning for the reader: the concepts in this section are in flux and undergoing a lot of changes, so this article is just a reflection of my current understanding of the language of the Deep Learning Meta Meta-Model. That is definitely a mouthful, so to make life simpler for everyone, I just call this the Deep Learning Canonical Patterns. These patterns are documented in the Deep Learning Design Patterns Wiki. In this post I will explore further the characteristics of Artificial Intuition, with the goal of describing a set of patterns that can aid us in formulating novel architectures for Deep Learning. In a previous post, “Deep Learning and Artificial Intuition”, I introduced the idea that there are two distinct cognitive mechanisms, one based on logical inference and another based on intuition. At least six decades have been spent exploring cognitive mechanisms based on logical inference without making much progress towards AGI. Deep Learning, a breakthrough that arrived in 2012, revealed an alternative and promising research approach based on a different cognitive paradigm.