Volatility modelling in R exercises (Part-2)

This is the second part of the series on volatility modelling. For the other parts of the series, follow the tag volatility. In this exercise set we will use the dmbp dataset from Part 1 and extend our analysis to GARCH (Generalized Autoregressive Conditional Heteroscedasticity) models.
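The exercises themselves are in R, but the mechanics of a GARCH(1,1) process can be sketched in a few lines of Python. The recursion below, sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}, is the standard GARCH(1,1) variance equation; the parameter values are illustrative defaults, not estimates fitted to the dmbp data.

```python
import math
import random

def simulate_garch11(n, omega=0.05, alpha=0.10, beta=0.85, seed=42):
    """Simulate n returns from a GARCH(1,1) process.

    Variance recursion: sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}.
    Requires alpha + beta < 1 for a finite unconditional variance,
    which we use as the starting value.
    """
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha - beta)  # unconditional variance
    returns = []
    for _ in range(n):
        eps = math.sqrt(sigma2) * rng.gauss(0, 1)  # return for period t
        returns.append(eps)
        sigma2 = omega + alpha * eps ** 2 + beta * sigma2  # update variance
    return returns
```

Because alpha + beta = 0.95 here, volatility shocks are persistent: a large squared return keeps the conditional variance elevated for many subsequent periods, which is the clustering pattern GARCH models are designed to capture.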

SQL for Data Analysis – Tutorial for Beginners – ep4

You have already learnt a lot about the basics of SQL for data analysis. I figured it would be nice to have an episode focusing only on SQL best practices. So I wrote one! In this article you will learn:
• How to format your SQL queries to make them more reusable
• When to use uppercase and lowercase characters
• How to use aliases
• How to add comments
• And more…
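As a taste of those conventions, here is a query formatted with uppercase keywords, one clause per line, short table aliases, and inline comments, run against a throwaway SQLite database. The schema and style choices are illustrative, not taken from the episode itself.

```python
import sqlite3

# Formatting conventions shown: uppercase keywords, one clause per line,
# short table aliases (c, o), and comments explaining intent.
QUERY = """
-- total order value per customer, biggest spenders first
SELECT
    c.name,
    SUM(o.amount) AS total_amount  -- column alias for readability
FROM customers AS c
JOIN orders AS o
    ON o.customer_id = c.id
GROUP BY c.name
ORDER BY total_amount DESC;
"""

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Ann'), (2, 'Bob');
INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")
rows = conn.execute(QUERY).fetchall()
```

SQL keywords are case-insensitive, so the uppercase/lowercase split is purely for the human reader: keywords stand out, while table and column names stay lowercase.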

How Feature Engineering can help you do well in a Kaggle competition – Part III

In the first and second parts of this series, I introduced the Outbrain Click Prediction machine learning competition and my initial tasks to tackle the challenge. I presented the main techniques used for exploratory data analysis, feature engineering, cross-validation strategy and modeling of baseline predictors using basic statistics and machine learning. In this last post of the series, I describe how I used more powerful machine learning algorithms for the click prediction problem as well as the ensembling techniques that took me up to the 19th position on the leaderboard (top 2%).
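The post does not spell out the exact blending scheme here, but the simplest ensembling technique for a click-prediction task is a weighted average of each model's predicted probabilities. The sketch below is a generic illustration of that idea, not the author's actual ensemble.

```python
def weighted_ensemble(predictions, weights):
    """Blend per-model prediction lists with a weighted average.

    `predictions` is a list of equal-length lists of click probabilities,
    one inner list per model; `weights` is normalized to sum to 1, so the
    blended output is still a valid probability.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    return [
        sum(w * model[i] for w, model in zip(norm, predictions))
        for i in range(len(predictions[0]))
    ]

# Two hypothetical models scored on the same two ads, blended 3:1
blended = weighted_ensemble([[0.2, 0.8], [0.4, 0.6]], [3, 1])
```

In practice the weights are usually tuned on a hold-out set, and averaging models that make different kinds of errors is what yields the leaderboard gains ensembles are known for.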


As you know by now, machine learning is a subfield of Computer Science (CS). Deep learning, in turn, is a subfield of machine learning: a family of algorithms, inspired by the structure and function of the brain, usually called Artificial Neural Networks (ANNs). Deep learning is one of the hottest trends in machine learning at the moment, and there are many problems where deep learning shines, such as robotics, image recognition and Artificial Intelligence (AI).
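To make the ANN idea concrete, the basic building block is a layer of artificial neurons: each neuron takes a weighted sum of its inputs plus a bias and passes it through a nonlinearity. A minimal sketch with hand-picked (hypothetical) weights:

```python
import math

def dense_layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation.

    Each output neuron computes sigmoid(w . x + b); stacking such
    layers is what makes a network "deep".
    """
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Tiny 2-input -> 2-hidden -> 1-output network with made-up weights
hidden = dense_layer([1.0, 0.0], [[0.5, -0.3], [0.8, 0.2]], [0.1, -0.1])
output = dense_layer(hidden, [[1.2, -0.7]], [0.05])
```

Real deep learning frameworks learn the weights from data via backpropagation rather than hand-picking them, but the forward pass is exactly this composition of layers.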