Understanding neural networks with TensorFlow Playground

You may have heard the buzz about neural networks and deep learning, and want to learn more. But if you try to learn about the technology from a textbook, you may find yourself overwhelmed by mathematical models and formulas. I certainly was. For people like me, there's an awesome tool to help you grasp the idea of neural networks without any hard math: TensorFlow Playground, a web app written in JavaScript that lets you play with a real neural network running in your browser, clicking buttons and tweaking parameters to see how it works.
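For readers who want to move from the Playground to code, here is a minimal sketch of the same kind of tiny network it visualizes: a couple of dense layers trained on a toy 2-D dataset. It assumes TensorFlow 2.x and scikit-learn are installed; the dataset, layer sizes, and learning rate are illustrative choices, not values taken from the Playground itself.

# A tiny network in the spirit of TensorFlow Playground (illustrative, not the app's code).
import tensorflow as tf
from sklearn.datasets import make_circles

# Toy 2-D classification problem, similar in spirit to the Playground's "circle" dataset.
X, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=0)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(4, activation="tanh"),      # small hidden layer of 4 neurons
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary output
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.03),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=100, batch_size=10, verbose=0)
print(model.evaluate(X, y, verbose=0))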


Understanding Support Vector Machine via Examples

In the previous post on Support Vector Machines (SVM), we looked at the mathematical details of the algorithm. In this post, I will discuss practical implementations of SVM for both classification and regression. I will use the iris dataset as an example for the classification problem, and randomly generated data as an example for the regression problem.
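As a quick illustration of the two use cases described above, here is a minimal sketch using scikit-learn: SVC on the iris dataset for classification, and SVR on synthetic data for regression. The kernel choices and parameter values are illustrative defaults, not necessarily the post's exact settings.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, SVR

# Classification: iris dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("Iris test accuracy:", clf.score(X_test, y_test))

# Regression: randomly generated data (a noisy sine curve as a stand-in).
rng = np.random.RandomState(0)
X_reg = np.sort(5 * rng.rand(200, 1), axis=0)
y_reg = np.sin(X_reg).ravel() + 0.1 * rng.randn(200)
reg = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_reg, y_reg)
print("SVR R^2 on training data:", reg.score(X_reg, y_reg))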


Retail store sales forecasting

Sales forecasting is an essential task for the management of a store. Being able to estimate the quantity of products that a retail store will sell in the future allows the owners of these shops to prepare the inventory they will need. Predictive analytics can help us study and discover the factors that determine the number of sales a retail store will have in the future. In this article, we are going to use two years of sales data from a drug store to predict its sales one week in advance.
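To make the one-week-ahead setup concrete, here is a minimal sketch of building lagged features for such a forecast. It assumes a pandas DataFrame named sales with daily "date" and "sales" columns; the column names, lag choices, and model are illustrative assumptions, not the article's actual pipeline.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def make_features(sales: pd.DataFrame) -> pd.DataFrame:
    df = sales.sort_values("date").copy()
    df["target"] = df["sales"].shift(-7)             # sales one week ahead (the forecast target)
    df["lag_1"] = df["sales"].shift(1)               # yesterday's sales
    df["lag_7"] = df["sales"].shift(7)               # same weekday last week
    df["rolling_mean_28"] = df["sales"].rolling(28).mean()
    df["dayofweek"] = pd.to_datetime(df["date"]).dt.dayofweek
    return df.dropna()

# Hypothetical usage: train on most of the two years, predict the held-out tail.
# df = make_features(sales)
# features = ["lag_1", "lag_7", "rolling_mean_28", "dayofweek"]
# model = RandomForestRegressor(n_estimators=200, random_state=0)
# model.fit(df[features][:-60], df["target"][:-60])
# preds = model.predict(df[features][-60:])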


Automated Machine Learning – A Paradigm Shift That Accelerates Data Scientist Productivity @ Airbnb

At Airbnb, we are always searching for ways to improve our data science workflow. A fair number of our data science projects involve machine learning, and many parts of this workflow are repetitive. These repetitive tasks include, but are not limited to:
Exploratory Data Analysis: Visualizing data before embarking on a modeling exercise is a crucial step in machine learning. Automating tasks such as plotting all your variables against the target variable being predicted as well as computing summary statistics can save lots of time.
Feature Transformations: There are many choices in how you can encode categorical variables, impute missing values, encode sequences and text, etc. Many of these feature transformations are canonical such that they can be reliably applied to many problems.
Algorithm Selection & Hyper-parameter Tuning: There is a dizzying number of algorithms to choose from, along with related hyper-parameters that can be tuned. These tasks are very amenable to automation, as the sketch after this list illustrates.
Model Diagnostics: Learning curves, partial dependence plots, feature importances, ROC and other diagnostics are extremely useful to generate automatically.
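Here is a minimal sketch of automating two of the tasks above, feature transformations and hyper-parameter tuning, with a scikit-learn Pipeline and RandomizedSearchCV. The column names, search space, and model are illustrative assumptions, not Airbnb's actual tooling.

from scipy.stats import randint
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

numeric_cols = ["nights", "price"]            # hypothetical numeric features
categorical_cols = ["room_type", "city"]      # hypothetical categorical features

preprocess = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), numeric_cols),           # impute missing values
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols), # encode categoricals
])
pipeline = Pipeline([("prep", preprocess),
                     ("model", RandomForestClassifier(random_state=0))])

search = RandomizedSearchCV(
    pipeline,
    {"model__n_estimators": randint(100, 500),
     "model__max_depth": randint(3, 12)},
    n_iter=20, cv=5, scoring="roc_auc", random_state=0)

# Hypothetical usage, given a pandas DataFrame df with the columns above and a binary target:
# search.fit(df[numeric_cols + categorical_cols], df["booked"])
# print(search.best_params_, search.best_score_)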


CAN (Creative Adversarial Network) – Explained

Lately, GANs (Generative Adversarial Networks) have been very successful at creating interesting content that is fairly abstract and hard to create procedurally. This paper, aptly named CAN (Creative, rather than Generative, Adversarial Network), explores the possibility of machine-generated creative content.


Neuroevolution: A different kind of deep learning

Neuroevolution is making a comeback. Prominent artificial intelligence labs and researchers are experimenting with it, a string of new successes has bolstered enthusiasm, and new opportunities for impact in deep learning are emerging. Maybe you haven't heard of neuroevolution in the midst of all the excitement over deep learning, but it has been lurking just below the surface, the subject of study for a small, enthusiastic research community for decades. And it is starting to gain more attention as people recognize its potential.

Put simply, neuroevolution is a subfield within artificial intelligence (AI) and machine learning (ML) that consists of trying to trigger an evolutionary process similar to the one that produced our brains, except inside a computer. In other words, neuroevolution seeks to develop the means of evolving neural networks through evolutionary algorithms.

When I first waded into AI research in the late 1990s, the idea that brains could be evolved inside computers resonated with my sense of adventure. At the time, it was an unusual, even obscure field, but I felt a deep curiosity and affinity. The result has been 20 years of my life thinking about this subject, and a slew of algorithms developed with outstanding colleagues over the years, such as NEAT, HyperNEAT, and novelty search.

In this article, I hope to convey some of the excitement of neuroevolution as well as provide insight into its issues, but without the opaque technical jargon of scientific articles. I have also taken, in part, an autobiographical perspective, reflecting my own deep involvement in the field. I hope my story provides a window for a wider audience into the quest to evolve brains within computers.
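To make the core idea concrete, here is a toy sketch of evolving neural-network weights with an evolutionary loop instead of gradient descent. It is a simple NumPy weight-evolution example on the XOR problem; it is not an implementation of NEAT, HyperNEAT, or novelty search, and all the population and mutation settings are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Tiny fixed-topology network: 2 inputs -> 4 hidden (tanh) -> 1 output (sigmoid).
def forward(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))

def random_params():
    return [rng.normal(0, 1, (2, 4)), np.zeros(4), rng.normal(0, 1, (4, 1)), np.zeros(1)]

def mutate(params, sigma=0.1):
    return [p + rng.normal(0, sigma, p.shape) for p in params]

# Fitness: accuracy on XOR, a classic toy problem for neuroevolution.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
def fitness(params):
    return np.mean((forward(params, X) > 0.5) == y)

# Simple evolutionary loop: keep the fittest individuals, mutate them, repeat.
population = [random_params() for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(p) for p in parents for _ in range(4)]
print("best fitness:", fitness(population[0]))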


Perform sentiment analysis with LSTMs, using TensorFlow

In this notebook, we'll look at how to apply deep learning techniques to the task of sentiment analysis. Sentiment analysis can be thought of as the exercise of taking a sentence, paragraph, document, or any piece of natural language and determining whether its emotional tone is positive, negative, or neutral. This notebook covers topics such as word vectors, recurrent neural networks, and long short-term memory units (LSTMs). After getting a good understanding of these terms, we'll walk through concrete code examples and a full TensorFlow sentiment classifier at the end. Before getting into the specifics, let's discuss why deep learning fits naturally into natural language processing (NLP) tasks.
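As a preview of where the notebook ends up, here is a minimal sketch of an LSTM sentiment classifier in TensorFlow/Keras. It assumes TensorFlow 2.x and uses the built-in IMDB reviews dataset as a stand-in for the notebook's data; the vocabulary size, sequence length, and layer sizes are illustrative, not the notebook's exact configuration.

import tensorflow as tf

vocab_size, max_len = 10000, 200
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=vocab_size)
x_train = tf.keras.utils.pad_sequences(x_train, maxlen=max_len)
x_test = tf.keras.utils.pad_sequences(x_test, maxlen=max_len)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),        # learned word vectors
    tf.keras.layers.LSTM(64),                         # long short-term memory layer
    tf.keras.layers.Dense(1, activation="sigmoid"),   # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=128,
          validation_data=(x_test, y_test))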