How Adversarial Attacks Work – How to Trick Any ML Classifier

Recent studies by Google Brain have shown that any machine learning classifier can be tricked into giving incorrect predictions, and with a little bit of skill, you can get it to give pretty much any result you want. This fact becomes steadily more worrisome as more and more systems are powered by artificial intelligence, many of them crucial to our safety and comfort: banks, surveillance systems, ATMs, face recognition on your laptop, and very soon, self-driving cars. Lately, safety concerns about AI have revolved around ethics; today we are going to talk about more pressing and immediate issues.
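To make the idea concrete, one well-known technique is the fast gradient sign method (FGSM), which nudges an input in the direction that increases the model's loss. Below is a minimal sketch in PyTorch; the model, the epsilon value, and the usage comments are assumptions for illustration, not the specific attack used in the studies above.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Minimal FGSM sketch: perturb each input value in the direction
    that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the sign of the gradient and keep pixel values in a valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Hypothetical usage with an image classifier:
# x_adv = fgsm_attack(model, image.unsqueeze(0), torch.tensor([true_class]))
# print(model(x_adv).argmax(dim=1))  # often differs from the original prediction
```

Even this tiny, barely perceptible perturbation is frequently enough to flip a classifier's prediction, which is what makes the attack so unsettling.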


Under the Hood With Chatbots

This is the second post in our chatbot series. Here we explore Natural Language Understanding (NLU), the front end of every chatbot. We'll discuss the programming needed to build rules-based chatbots and then look at the deep learning algorithms that are the basis for AI-enabled chatbots.
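As a rough illustration of the rules-based approach, here is a minimal sketch in Python: a few hand-written patterns mapped to canned responses. The patterns and replies are purely hypothetical; real rules-based bots use far richer pattern languages and dialogue state.

```python
import re

# A tiny rules-based chatbot: each rule is a regex paired with a response.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\bhours?\b", re.I), "We are open 9am-5pm, Monday to Friday."),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye!"),
]

def respond(message: str) -> str:
    for pattern, reply in RULES:
        if pattern.search(message):
            return reply
    return "Sorry, I didn't understand that."

print(respond("Hi there, what are your hours?"))  # the greeting rule matches first
```

The brittleness of this approach, where every intent needs a hand-written rule, is exactly what motivates the deep learning NLU models discussed later in the article.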


Neural networks for beginners: popular types and applications

Today, neural networks are used to solve many business problems, such as sales forecasting, customer research, data validation, and risk management. For example, at Statsbot we apply neural networks to time series prediction, anomaly detection in data, and natural language understanding. In this post, we'll explain what neural networks are, the main challenges beginners face when working with them, popular types of neural networks, and their applications. We'll also describe how you can apply neural networks in different industries and departments.
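For a concrete starting point, the sketch below defines a small feed-forward network for a tabular prediction task with Keras. The data, layer sizes, and training settings are assumptions chosen for illustration, not the architecture used at Statsbot.

```python
import numpy as np
from tensorflow import keras

# Hypothetical tabular data: 1,000 rows, 10 features, binary target.
X = np.random.rand(1000, 10)
y = (X.sum(axis=1) > 5).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```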


SLING: A Natural Language Frame Semantic Parser

Until recently, most practical natural language understanding (NLU) systems used a pipeline of analysis stages, from part-of-speech tagging and dependency parsing to steps that computed a semantic representation of the input text. While this facilitated easy modularization of different analysis stages, errors in earlier stages would have cascading effects in later stages and the final representation, and the intermediate stage outputs might not be relevant on their own. For example, a typical pipeline might perform the task of dependency parsing in an early stage and the task of coreference resolution towards the end. If one were only interested in the output of coreference resolution, it would be affected by cascading effects of any errors during dependency parsing. Today we are announcing SLING, an experimental system for parsing natural language text directly into a representation of its meaning as a semantic frame graph. The output frame graph directly captures the semantic annotations of interest to the user, while avoiding the pitfalls of pipelined systems by not running any intermediate stages, additionally preventing unnecessary computation. SLING uses a special-purpose recurrent neural network model to compute the output representation of input text through incremental editing operations on the frame graph. The frame graph, in turn, is flexible enough to capture many semantic tasks of interest (more on this below). SLING’s parser is trained using only the input words, bypassing the need for producing any intermediate annotations (e.g. dependency parses).


Product Launch: Increased Dataset Resources

Today we’re pleased to announce a 20x increase to the size limit of datasets you can share on Kaggle Datasets for free! At Kaggle, we’ve seen time and again how open, high-quality datasets are the catalysts for scientific progress, and we’re striving to make it easier for anyone in the world to contribute and collaborate with data.


Capsule Networks Are Shaking up AI – Here’s How to Use Them

If you follow AI, you might have heard about the advent of the potentially revolutionary Capsule Networks, and I will show you how you can start using them today. Geoffrey Hinton is known as the father of “deep learning.” Back in the 50s, the idea of deep neural networks began to surface; in theory, they could solve a vast range of problems. However, nobody was able to figure out how to train them, and people started to give up. Hinton didn’t give up, and in 1986 he showed that backpropagation could train these deep nets. However, it wasn’t until 2012, five years ago, that Hinton was able to demonstrate his breakthrough, because until then the necessary computational power was lacking. This breakthrough set the stage for this decade’s progress in AI.


Basic Concepts of Feature Selection

Feature selection is a key part of data science, but is it still relevant in the age of support vector machines (SVMs) and Deep Learning? Yes, absolutely. We explain why.
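For readers who want something hands-on, here is a minimal sketch of univariate feature selection with scikit-learn. The synthetic dataset and the choice of `k` are assumptions for illustration, not the article's recommended method.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic data: 20 features, only 5 of which are informative.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

# Keep the 5 features that share the most mutual information with the target.
selector = SelectKBest(score_func=mutual_info_classif, k=5)
X_selected = selector.fit_transform(X, y)

print(X_selected.shape)                     # (500, 5)
print(selector.get_support(indices=True))   # indices of the retained columns
```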


The tools that make TensorFlow productive

Deployment is a big chunk of using any technology, and tools to make deployment easier have always been an area of innovation in computing. For instance, the difficulties and uncertainties of installing software and keeping it up-to-date were one factor driving companies to offer software as a service over the Web. Likewise, big data projects present their own set of issues: how do you prepare and ingest the data? How do you view the choices made by algorithms that are complex and dynamic? Can you use hardware acceleration (such as GPUs) to speed analytics, which may need to operate on streaming, real-time data? Those are just a few deployment questions associated with deep learning.
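One of the questions above, how to prepare and ingest data, has a direct answer in TensorFlow's `tf.data` API. The sketch below is a minimal, hypothetical input pipeline; the file name and the number of CSV columns are assumptions.

```python
import tensorflow as tf

# Hypothetical CSV with 10 numeric feature columns followed by a label column.
def parse_line(line):
    fields = tf.io.decode_csv(line, record_defaults=[0.0] * 11)
    features = tf.stack(fields[:-1])
    label = fields[-1]
    return features, label

dataset = (
    tf.data.TextLineDataset("training_data.csv")   # assumed file name
    .skip(1)                                        # drop the header row
    .map(parse_line, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(10_000)
    .batch(256)
    .prefetch(tf.data.AUTOTUNE)                     # overlap preprocessing with training
)

# The dataset can be passed straight to model.fit(dataset, ...).
```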


Using Shiny with Scheduled and Streaming Data

Shiny applications are often backed by fluid, changing data. Data updates can occur at different time scales: from scheduled daily updates to live streaming data and ad-hoc user inputs. This article describes best practices for handling data updates in Shiny, and discusses deployment strategies for automating data updates.