TensorFlow Tutorial: Part 1 – Introduction

In this multi-part series, we will explore how to get started with TensorFlow. This TensorFlow tutorial will lay a solid foundation for this popular tool that everyone seems to be talking about. The first part focuses on introducing TensorFlow, goes through some applications, and touches upon its architecture.
This post is the first part of the multi-part series on a complete TensorFlow tutorial –
◾ TensorFlow Tutorial – Part 1: Introduction
◾ TensorFlow Tutorial – Part 2: Getting Started
◾ TensorFlow Tutorial – Part 3: Building the first model
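
As a small taste before Part 2, here is a minimal sketch of TensorFlow's central idea, a dataflow graph that is first constructed and then executed. It is written against the TF 1.x API that was current when this series appeared; the tutorial's own examples may differ.

```python
# Minimal sketch of TensorFlow's dataflow-graph model (TF 1.x API):
# operations are first declared as graph nodes, then run in a session.
import tensorflow as tf

a = tf.constant(2.0, name="a")
b = tf.constant(3.0, name="b")
total = a + b  # adds a node to the graph; nothing is computed yet

with tf.Session() as sess:
    print(sess.run(total))  # 5.0 -- the graph is evaluated only here
```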


Towards Artificial General Intelligence in Enterprise – Data Science Driven by Statistics Requires New Qualitative Analytics to Model Disruptive Changes

This solution is made possible by the availability of our CIF technology. It is a novel technology capable of learning from text documents on the fly to discover new ideas, subjects, names, and acronyms and to draw relations between contexts, regardless of the size and volume of the text. It is domain agnostic and does not require a data dictionary, an ontology, or prior machine learning on domain-specific themes. The results have been promising: we have documented the deployment of such a solution on the earnings calls of several public companies.


Dimensionality Reduction Using t-SNE

t-SNE is a machine learning technique for dimensionality reduction that helps you identify relevant patterns. Its main advantage is the ability to preserve local structure: roughly, points that are close to one another in the high-dimensional data set will tend to be close to one another in the chart. t-SNE also produces beautiful-looking visualizations. When setting up a predictive model, the first step should always be to understand the data. Although scanning raw data and calculating basic statistics can lead to some insights, nothing beats a chart. However, fitting many dimensions of data into a simple chart is always a challenge; this is the problem of dimensionality reduction, and it is where t-SNE (t-distributed stochastic neighbor embedding, in full) comes in. In this blog post, I explain how t-SNE works, and how to conduct and interpret your own t-SNE.
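
To make this concrete, here is a minimal sketch of running t-SNE with scikit-learn. The dataset (scikit-learn's bundled digits) and the parameter values are illustrative stand-ins, not necessarily the ones used in the post.

```python
# Minimal t-SNE sketch: embed 64-dimensional digit images into 2-D.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

digits = load_digits()  # 1797 samples, 64 dimensions each
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(digits.data)

# Each point is one image, colored by its digit label; local structure
# (clusters of the same digit) should be visible in the 2-D chart.
plt.scatter(embedding[:, 0], embedding[:, 1], c=digits.target, s=5)
plt.show()
```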


How macroeconomics can push forward the frontiers of Data Science

Macroeconomics of the 70s was a field very much in flux: forecasts of growth, unemployment, and the effects of monetary policy were pressing issues; aggregate data on economic conditions was getting more accurate and extensive; and better computers allowed more and more complex regression analyses to be performed. But the predictions of these statistical forecasts proved unreliable for policymakers.

As a famous example, the Phillips curve found a negative historical correlation between unemployment and inflation – high inflation tended to coincide with periods of low unemployment, and vice versa. But when central banks reacted by pursuing inflationary policies, unemployment didn’t drop!

Future Nobel laureate Robert Lucas provided an explanation for this sort of forecasting failure in a seminal 1976 paper which came to be known as the “Lucas critique”. His argument was essentially that contemporary macroeconomic forecasting confused correlation with causation – high inflation today may be correlated with low unemployment tomorrow, but this doesn’t mean it causes it. Lucas argued that forecasting models had to be revamped. Models estimating correlations between aggregate variables would be thrown out. In their place would be models capturing individual-level economic choices, with data used to estimate the “structural parameters” governing their behavior. These structural parameters would ideally be stable across time and environments, particularly following an intervention by a government or central bank. Only then could forecasting models really become useful for predicting the results of policy changes.

We think the data science world of today is facing many of the same challenges confronted by macroeconomists of the 1970s. The field is overdue for a Lucas critique of its own to point the way toward a new generation of models providing better predictions and more successful and cost-effective interventions.


PyTorch: First program and walk through

I saw that Fast.ai is shifting to PyTorch, and that PyTorch is extremely well suited to research prototyping, so I decided to implement a research paper in PyTorch. I had already worked on the C-DSSM model at Parallel Dots, but that implementation was in Keras. In this blog post I will emphasize the hacker perspective of porting the code from Keras to PyTorch rather than the research perspective. My implementation is at nishnik/Deep-Semantic-Similarity-Model-PyTorch, and I have documented the code too.
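
For flavor, here is a minimal sketch of what a model looks like after such a port; this toy network is a hypothetical stand-in, not the C-DSSM model from the repository above. Layers that Keras would stack declaratively become attributes of an nn.Module, and the forward pass is plain imperative Python.

```python
# Minimal PyTorch module sketch (toy network, not C-DSSM).
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self, in_dim=128, hidden=64, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        x = torch.relu(self.fc1(x))  # define-by-run: executes eagerly
        return self.fc2(x)

model = TinyNet()
out = model(torch.randn(32, 128))  # batch of 32 -> (32, 10) logits
```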


Neural Network Foundations, Explained: Activation Function

This is a very basic overview of activation functions in neural networks, one that can be read in a couple of minutes. It won’t make you an expert, but it will give you a starting point toward actual understanding.
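
As a quick illustration, here is a small numpy sketch of three common activation functions; the exact set the post covers may differ.

```python
# Three common activations: each squashes or gates a neuron's
# weighted input in a different way.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # output in (0, 1)

def tanh(x):
    return np.tanh(x)                # output in (-1, 1), zero-centered

def relu(x):
    return np.maximum(0.0, x)        # passes positives, zeroes negatives

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x), tanh(x), relu(x))
```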


Object detection: an overview in the age of Deep Learning

There’s no shortage of interesting problems in computer vision, from simple image classification to 3D-pose estimation. One of the problems we’re most interested in and have worked on a bunch is object detection. Like many other computer vision problems, there still isn’t an obvious or even “best” way to approach the problem, meaning there’s still much room for improvement. Before getting into object detection, let’s do a quick rundown of the most common problems in the field.


K-Nearest Neighbors – the Laziest Machine Learning Technique

K-Nearest Neighbors (K-NN) is one of the simplest machine learning algorithms. When a new situation occurs, it scans through all past experiences and looks up the k closest ones. Those experiences (or data points) are what we call the k nearest neighbors.
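
A minimal sketch of this idea with scikit-learn, on made-up toy data: the "training" step only stores the data, and prediction scans for the k closest stored points.

```python
# K-NN classification sketch: k=3 on toy 2-D data.
from sklearn.neighbors import KNeighborsClassifier

X = [[0, 0], [1, 1], [1, 0], [9, 9], [8, 9], [9, 8]]
y = [0, 0, 0, 1, 1, 1]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[8, 8]]))  # 3 nearest neighbors are all class 1 -> [1]
```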


How to internationalize your Shiny apps with shiny.i18n?

We pride ourselves on the fact that we work on some of the most difficult business applications of Shiny, ranging from sales, manufacturing, and management tools to satellite data analysis tools. Many of our clients are multinational enterprises that are continuously looking for new markets to enter. Most markets, however, have a common issue that needs addressing: the language barrier. In particular, we have seen an increasing need for dashboards that are tuned to the needs of diverse employees worldwide, which means that every country or region needs its dashboard to change accordingly. In short, there is a need for internationalization – hence shiny.i18n. We’ve built an internationalization package for Shiny and have open sourced it for all to use and, hopefully, contribute to. This is not our first open source package; we encourage you to take a look at shiny.semantic and shiny.collections. The i18n package supports CSV- and JSON-based translations, and it even formats data according to local standards. We are still working on localized numbers and are in the process of adding the package to CRAN. Feel free to check our demo, which currently supports English, Polish, and Italian. Our code is on GitHub under an MIT license. Take a look and let us know what you think.


Complier average causal effect? Exploring what we learn from an RCT with participants who don’t do what they are told

Inspired by a free online course titled Complier Average Causal Effects (CACE) Analysis and taught by Booil Jo and Elizabeth Stuart (through Johns Hopkins University), I’ve decided to explore the topic a little bit. My goal here isn’t to explain CACE analysis in extensive detail (you should definitely go take the course for that), but to describe the problem generally and then (of course) simulate some data. A plot of the simulated data gives a sense of what we are estimating and assuming. And I end by describing two simple methods to estimate the CACE, which we can compare to the truth (since this is a simulation); next time, I will describe a third way.
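
To give a flavor of the setup, here is a small simulation sketch in Python; the post's own simulations may be set up differently, and the parameter values here are purely illustrative. It recovers the CACE with the standard instrumental-variable ratio: the intent-to-treat effect divided by the compliance rate.

```python
# CACE sketch: simulate an RCT with compliers and never-takers, then
# estimate the CACE as ITT effect / compliance rate (Wald estimator).
# All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_cace = 2.0

complier = rng.random(n) < 0.6   # 60% would take treatment if assigned
z = rng.integers(0, 2, n)        # randomized assignment
d = z * complier                 # treatment actually received
y = 1.0 + true_cace * d + rng.normal(0, 1, n)

itt = y[z == 1].mean() - y[z == 0].mean()         # intent-to-treat effect
compliance = d[z == 1].mean() - d[z == 0].mean()  # share of compliers
print(itt / compliance)                           # ~2.0, the true CACE
```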