An Introductory Guide to Understand how ANNs Conceptualize New Ideas (using Embedding)

Here’s something you don’t hear every day – everything we perceive is just a best-guess probabilistic prediction by our brain, based on our past encounters and knowledge gained through other mediums. This might sound extremely counterintuitive, because we have always imagined that our brain mostly gives us deterministic answers.


DeepCode Analyzes and Cleans your Code with the Help of Machine Learning

When we write code, we follow guidelines we have internalized since we started programming. Even so, there are things we inevitably miss, even when reviewing the script again. This is where machines are proving to be an unprecedented success. Once they are trained to perform a task, they do so with incredible, time-saving speed.


Reinforcement Learning – Reward for Learning

Reinforcement Learning (RL) is more general than supervised or unsupervised learning. An RL agent learns from interaction with its environment to achieve a goal, or, put simply, it learns from rewards and punishments. In other words, the algorithm learns how to react to its environment. TD-learning seems to be closest to how humans learn in this type of situation, but Q-learning and other methods have their own advantages. Reinforcement learning can refer both to a learning problem and to a subfield of machine learning. As a learning problem, it refers to learning to control a system so as to maximize some numerical value that represents a long-term objective.
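
To make the reward-driven learning loop concrete, here is a minimal tabular Q-learning sketch on a toy chain environment (the environment, rewards and hyperparameters are illustrative assumptions, not taken from the article):

```python
import numpy as np

# Toy chain world: 5 states, actions 0 = move left, 1 = move right;
# reaching the last state pays a reward of 1 and ends the episode.
n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    done = s_next == n_states - 1
    return s_next, reward, done

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection: mostly exploit, occasionally explore
        a = np.random.randint(n_actions) if np.random.rand() < epsilon else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # TD update: move Q(s,a) toward the bootstrapped target r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(Q)  # the greedy policy should learn to keep moving right toward the rewarding state
```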


Towards fairness in ML with adversarial networks

From credit ratings to housing allocation, machine learning models are increasingly used to automate ‘everyday’ decision-making processes. With the growing impact on society, more and more concerns are being voiced about the loss of transparency, accountability and fairness of the algorithms making the decisions. We as data scientists need to step up our game and look for ways to mitigate emergent discrimination in our models. We need to make sure that our predictions do not disproportionately hurt people with certain sensitive characteristics (e.g., gender, ethnicity).


Desk Side Deep Learning: Done.

The APEXX W3 enables data scientists to develop and iterate their Deep Learning algorithms prior to scaling out on larger hardware deployments. With the APEXX W3, you don’t need a dedicated server room or costly infrastructure to get started with AI. The small, GPU-dense chassis is designed to train neural networks quietly by your desk.


Deep Learning from first principles in Python, R and Octave – Part 7

Specifically, I discuss and implement the following gradient descent optimization techniques (a small illustrative sketch follows the list):
a. Vanilla Stochastic Gradient Descent
b. Learning rate decay
c. Momentum method
d. RMSProp
e. Adaptive Moment Estimation (Adam)
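
The post itself implements these in Python, R and Octave; as a quick taste of the idea, here is a minimal NumPy sketch combining learning rate decay with the momentum method on a toy quadratic loss (the loss function and hyperparameters are illustrative, not the article's):

```python
import numpy as np

# Toy loss: f(w) = 0.5 * ||w - w_star||^2, whose gradient is simply (w - w_star).
w_star = np.array([3.0, -2.0])
grad = lambda w: w - w_star

w = np.zeros(2)
v = np.zeros(2)                      # velocity term used by the momentum method
lr, beta, decay = 0.1, 0.9, 0.01

for t in range(1, 201):
    lr_t = lr / (1.0 + decay * t)    # (b) learning rate decay: shrink the step size over iterations
    v = beta * v + lr_t * grad(w)    # (c) momentum: exponentially decaying accumulation of gradients
    w -= v                           # parameter update

print(w)  # converges toward w_star
```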


The UCR Matrix Profile Page

The Matrix Profile (and the algorithms to compute it: STAMP, STAMPI, STOMP, SCRIMP, SCRIMP++ and GPU-STOMP) has the potential to revolutionize time series data mining because of its generality, versatility, simplicity and scalability. In particular, it has implications for time series motif discovery, time series joins, shapelet discovery (classification), density estimation, semantic segmentation, visualization, rule discovery, clustering, etc. (Note: for pure similarity search, we suggest you see MASS for Euclidean distance and the UCR Suite for DTW.)
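
As a rough illustration of what the Matrix Profile actually is (not the STAMP/STOMP/SCRIMP algorithms themselves, which are far more efficient), here is a brute-force NumPy sketch that records, for every subsequence of length m, the z-normalized Euclidean distance to its nearest non-overlapping neighbor:

```python
import numpy as np

def naive_matrix_profile(T, m):
    """Brute-force matrix profile: for each length-m subsequence of T, the distance
    to its nearest neighbor. Real implementations (STAMP, STOMP, SCRIMP) are far faster."""
    n = len(T) - m + 1
    subs = np.array([T[i:i + m] for i in range(n)])
    # z-normalize every subsequence so comparisons are based on shape, not scale
    subs = (subs - subs.mean(axis=1, keepdims=True)) / subs.std(axis=1, keepdims=True)
    profile = np.full(n, np.inf)
    for i in range(n):
        dists = np.linalg.norm(subs - subs[i], axis=1)
        # exclusion zone: ignore trivial matches that overlap the query subsequence
        lo, hi = max(0, i - m // 2), min(n, i + m // 2 + 1)
        dists[lo:hi] = np.inf
        profile[i] = dists.min()
    return profile

T = np.sin(np.linspace(0, 20, 400)) + 0.1 * np.random.randn(400)   # made-up series
mp = naive_matrix_profile(T, m=50)
# Low values flag motifs (repeated patterns); high values flag discords (anomalies).
print(mp.argmin(), mp.argmax())
```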


Rough.js

Rough.js is a lightweight (9kB) graphics library that lets you draw in a sketchy, hand-drawn-like style. The library defines primitives to draw lines, curves, arcs, polygons, circles, and ellipses. It also supports drawing SVG paths.


The Adversarial Robustness Toolbox: Securing AI Against Adversarial Threats

Recent years have seen tremendous advances in the development of artificial intelligence (AI). Modern AI systems achieve human-level performance on cognitive tasks such as recognizing objects in images, annotating videos, converting speech to text, or translating between different languages. Many of these breakthrough results are based on Deep Neural Networks (DNNs). DNNs are complex machine learning models bearing a certain similarity to the interconnected neurons in the human brain. DNNs are capable of dealing with high-dimensional inputs (e.g. millions of pixels in high-resolution images), representing patterns in those inputs at various levels of abstraction, and relating those representations to high-level semantic concepts. An intriguing property of DNNs is that, while they are normally highly accurate, they are vulnerable to so-called adversarial examples. Adversarial examples are inputs (say, images) which have deliberately been modified to produce a desired response from a DNN. An example is shown in Figure 1: here the addition of a small amount of adversarial noise to the image of a giant panda leads the DNN to misclassify this image as a capuchin. Often, the target of adversarial examples is misclassification or a specific incorrect prediction which would benefit an attacker.
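
To make "adversarial noise" concrete, here is a minimal sketch of the fast gradient sign method, one classic way of crafting such examples (the loss-gradient function below is a hypothetical placeholder for a real model's backpropagated gradient, and this is not the toolbox's own API):

```python
import numpy as np

def fgsm(x, loss_gradient, epsilon=0.01):
    """Fast Gradient Sign Method: nudge every pixel a small step in the direction
    that most increases the model's loss, keeping the change nearly imperceptible."""
    x_adv = x + epsilon * np.sign(loss_gradient(x))
    return np.clip(x_adv, 0.0, 1.0)          # keep the perturbed input a valid image

# Hypothetical stand-in for a real model's dLoss/dx (normally obtained via backprop).
dummy_loss_gradient = lambda x: np.random.randn(*x.shape)

x = np.random.rand(224, 224, 3)              # a fake "image" with values in [0, 1]
x_adv = fgsm(x, dummy_loss_gradient, epsilon=0.007)
print(np.abs(x_adv - x).max())               # perturbation stays within epsilon
```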


Kayenta

Kayenta is a platform for Automated Canary Analysis (ACA). It is used by Spinnaker to enable automated canary deployments. Please see the comprehensive canary documentation for more details. A canary release is a technique to reduce the risk from deploying a new version of software into production. A new version of software, referred to as the canary, is deployed to a small subset of users alongside the stable running version. Traffic is split between these two versions such that a portion of incoming requests are diverted to the canary. This approach can quickly uncover any problems with the new version without impacting the majority of users. The quality of the canary version is assessed by comparing key metrics that describe the behavior of the old and new versions. If there is significant degradation in these metrics, the canary is aborted and all of the traffic is routed to the stable version in an effort to minimize the impact of unexpected behavior. Canaries are usually run against deployments containing changes to code, but they can also be used for operational changes, including changes to configuration.
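
Kayenta's actual judging is configurable and far richer, but here is a toy illustration of the core idea of comparing a key metric between the stable and canary versions with a statistical test (the latency numbers below are made up):

```python
from scipy.stats import mannwhitneyu

# Hypothetical request-latency samples (ms) collected from the stable and canary versions.
baseline_latency = [102, 98, 105, 99, 101, 97, 103, 100]
canary_latency   = [121, 118, 125, 119, 123, 120, 124, 122]

# Nonparametric test: is the canary's latency distribution shifted relative to baseline?
stat, p_value = mannwhitneyu(baseline_latency, canary_latency, alternative="two-sided")

if p_value < 0.05:
    print("Significant metric degradation: fail the canary, route all traffic to stable.")
else:
    print("No significant difference: let the canary proceed.")
```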


Introducing Kayenta: An open automated canary analysis tool from Google and Netflix

To perform continuous delivery at any scale, you need to be able to release software changes not just at high velocity, but safely as well. Today, Google and Netflix are pleased to announce Kayenta, an open-source automated canary analysis service that allows teams to reduce risk associated with rolling out deployments to production at high velocity. Developed jointly by Google and Netflix, Kayenta is an evolution of Netflix’s internal canary system, reimagined to be completely open, extensible and capable of handling more advanced use cases. It gives enterprise teams the confidence to quickly push production changes by reducing error-prone, time-intensive and cumbersome manual or ad-hoc canary analysis. Kayenta is integrated with Spinnaker, an open-source multi-cloud continuous delivery platform. This allows teams to easily set up an automated canary analysis stage within a Spinnaker pipeline. Kayenta fetches user-configured metrics from their sources, runs statistical tests, and provides an aggregate score for the canary. Based on the score and set limits for success, Kayenta can automatically promote or fail the canary, or trigger a human approval path.


A Discussion about Accessibility in AI at Stanford

I was recently a guest speaker at the Stanford AI Salon on the topic of accessibility in AI, which included a free-ranging discussion among assembled members of the Stanford AI Lab. There were a number of interesting questions and topics, so I thought I would share a few of my answers here.


Z is for Z-Scores and Standardizing

Of course, we often standardize variables in statistics, and the results are similar to Z-scores (though technically not the same if the mean and standard deviation aren’t population values). In fact, when I demonstrated the GLM function earlier this month, I skipped a very important step when conducting an analysis with interactions: I should have standardized my continuous predictors first, which means subtracting the variable mean and dividing by the variable standard deviation, creating a new variable with a mean of 0 and a standard deviation of 1 (just like the standard normal distribution).
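
In code, that standardization step is a one-liner; here is a minimal pandas sketch with made-up data:

```python
import numpy as np
import pandas as pd

# Made-up continuous predictor; standardizing = subtract the mean, divide by the standard deviation.
df = pd.DataFrame({"x": np.random.normal(loc=50, scale=10, size=200)})
df["x_std"] = (df["x"] - df["x"].mean()) / df["x"].std()

print(round(df["x_std"].mean(), 3), round(df["x_std"].std(), 3))   # ~0 and ~1
# Standardizing continuous predictors before fitting a model with interaction terms
# makes the lower-order coefficients interpretable at the mean of each predictor.
```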


The Next Generation of DevOps: ML Ops

The age of AI is upon us. As AI becomes more ubiquitous, many are finding new and innovative ways to operationalize data science in order to increase efficiency, speed and scale. As I look at traditional DevOps methodologies, I see synergies and parallels that can also be applied to the data science world. The new chasm spans multiple disciplines: Data Engineering, Data Science and Software Engineering. Traditional DevOps is the battleground between developers and operations, and that battle continues in the world of data science in a more pronounced manner, now involving data engineers, data scientists, software developers and operations. These four personas come with different requirements, constraints and velocities. It is extremely hard to balance all four in a way that satisfies business requirements while complying with corporate and organizational policies.


Operational Machine Learning: Seven Considerations for Successful MLOps

In this article, we describe seven key areas to take into account for successful operationalization and lifecycle management (MLOps) of your ML initiatives.