Navigating the Data Lake with Datamaran: Automatically Extracting Structure from Log Datasets

Organizations routinely accumulate semi-structured log datasets generated as the output of code; these datasets remain unused and uninterpreted, and occupy wasted space – a phenomenon colloquially referred to as the ‘data lake’ problem. One approach to leveraging these semi-structured datasets is to convert them into a structured relational format, after which they can be analyzed in conjunction with other datasets. We present Datamaran, a tool that extracts structure from semi-structured log datasets with no human supervision. Datamaran automatically identifies field and record endpoints, separates the structured parts from the unstructured noise or formatting, and can tease apart multiple structures within a dataset, efficiently extracting structured relational datasets from semi-structured log datasets at scale and with high accuracy. Unlike other unsupervised log-extraction tools developed in prior work, Datamaran does not require record boundaries to be known beforehand, making it far more applicable to the noisy log files that are ubiquitous in data lakes. Datamaran successfully extracts structured information from all datasets used in prior work, and achieves 95% extraction accuracy on log datasets automatically collected from GitHub – a substantial 66% increase in accuracy over the unsupervised schemes of prior work. Our user study further demonstrates that Datamaran's extraction results are closer to the desired structure than those of competing algorithms.
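To illustrate the task itself (not Datamaran's algorithm), here is a minimal R sketch that pulls one invented log format into relational form using a hand-written regular expression; Datamaran's contribution is discovering such structure without any human-supplied pattern:

    # Illustration only: the kind of record a log-extraction tool targets.
    # The log lines and field names below are invented for this example.
    log_lines <- c(
      "2018-07-02 14:31:07 INFO  worker-3 job=1842 status=done",
      "2018-07-02 14:31:09 ERROR worker-1 job=1843 status=failed"
    )

    pattern <- "^(\\S+ \\S+) (\\S+)\\s+(\\S+) job=(\\d+) status=(\\S+)$"
    fields  <- regmatches(log_lines, regexec(pattern, log_lines))

    # Assemble the captured groups into a relational (data frame) form
    records <- do.call(rbind, lapply(fields, function(m) {
      data.frame(timestamp = m[2], level = m[3], worker = m[4],
                 job = m[5], status = m[6], stringsAsFactors = FALSE)
    }))
    print(records)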


Scalable Architecture for Personalized Healthcare Service Recommendation Using Big Data Lake

Personalized health care services use relational patient data and big data analytics to tailor medication recommendations. However, most health care data are unstructured, and pulling them into relational form consumes a great deal of time and effort. This study proposes a novel data lake architecture to reduce data ingestion time and improve the precision of healthcare analytics. The architecture also removes data silos and enhances analytics by allowing connectivity to third-party data providers (such as clinical labs, chemists, and insurance companies). It uses the Hadoop Distributed File System (HDFS) to store both structured and unstructured data. The study applies the k-means clustering algorithm to find clusters of patients with similar health conditions, and then employs a support vector machine to find the most successful healthcare recommendations for each cluster. Our experimental results demonstrate the data lake's ability to reduce the time needed to ingest data from various vendors, regardless of format. Moreover, the data lake shows the potential to generate patient clusters more precisely than existing approaches. It provides a unified storage location for data in its native format, and can improve personalized medication recommendations by removing the data silos.
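As a rough illustration of the two-stage pipeline described above, here is a minimal R sketch; the real system operates over HDFS-backed data, and the patient features and outcome below are invented for the example:

    # Minimal sketch: cluster patients, then fit one SVM per cluster.
    library(e1071)  # for svm()

    set.seed(42)
    patients <- data.frame(age     = rnorm(200, 55, 12),
                           bp      = rnorm(200, 130, 15),
                           glucose = rnorm(200, 105, 20))
    # Hypothetical outcome: whether a past recommendation succeeded
    outcome <- factor(sample(c("success", "failure"), 200, replace = TRUE))

    # Stage 1: group patients with similar health conditions
    km <- kmeans(scale(patients), centers = 3)

    # Stage 2: per cluster, learn which recommendations succeed
    models <- lapply(1:3, function(k) {
      idx <- km$cluster == k
      svm(x = patients[idx, ], y = outcome[idx])
    })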


Minimally-Supervised Attribute Fusion for Data Lakes

Aggregate analysis, such as comparing country-wise sales versus global market share across product categories, is often complicated by the unavailability of common join attributes, e.g., category, across diverse datasets from different geographies or retail chains, even after disparate data is technically ingested into a common data lake. Sometimes this is a missing-data issue, while in other cases it may be inherent, e.g., the records in different geographical databases may actually describe different product ‘SKUs’, or follow different norms for categorization. Record linkage techniques can be used to automatically map products in different data sources to a common set of global attributes, thereby enabling federated aggregation joins to be performed. Traditional record-linkage techniques are typically unsupervised, relying on textual similarity features across attributes to estimate matches. In this paper, we present an ensemble model combining minimal supervision using Bayesian network models together with unsupervised textual matching for automating such ‘attribute fusion’. We present results of our approach on a large volume of real-life data from a market-research scenario and compare with a standard record matching algorithm. Finally, we illustrate how attribute fusion using machine learning could be included as a data-lake management feature, especially since our approach also provides confidence values for matches, enabling human intervention if required.
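The unsupervised textual-matching component can be illustrated with a minimal R sketch over invented category lists; the paper's full ensemble adds a Bayesian network trained with minimal supervision, which is not reproduced here:

    # Minimal sketch: match local categories to global ones by edit-distance
    # similarity, reporting a confidence value per match.
    local_cats  <- c("soft drinks", "laundry detergent", "infant formula")
    global_cats <- c("Beverages - Soft Drinks", "Detergents", "Baby Food")

    # Normalized similarity between every local/global pair
    sim <- 1 - adist(tolower(local_cats), tolower(global_cats)) /
               outer(nchar(local_cats), nchar(global_cats), pmax)

    best <- apply(sim, 1, which.max)
    data.frame(local      = local_cats,
               match      = global_cats[best],
               confidence = round(apply(sim, 1, max), 2))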


Doing good data science

Data scientists, data engineers, AI and ML developers, and other data professionals need to live ethical values, not just talk about them.


Continuous deployment of package documentation with pkgdown and Travis CI

Follow these simple steps to enable continuous deployment of your package documentation.
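For reference, the local half of the setup is a few R calls using the standard pkgdown and usethis APIs; the Travis side additionally needs a deploy section in .travis.yml (YAML, not shown here):

    install.packages(c("pkgdown", "usethis"))

    usethis::use_pkgdown()         # adds _pkgdown.yml and .gitignore entries
    pkgdown::build_site()          # renders the site locally into docs/

    # pkgdown can also push the rendered site to the gh-pages branch, the
    # usual target for Travis-driven deployment (assumes the GitHub repo
    # and a deploy key are already configured):
    pkgdown::deploy_site_github()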


Top 10 Data Science Use Cases in Insurance

The insurance industry is regarded as one of the most competitive and least predictable business spheres, as it is inherently tied to risk. It has therefore always depended on statistics – and today, data science has changed that dependence forever. Insurance companies now have a wider range of information sources for relevant risk assessment. Big Data technologies are applied to predict risks and claims, and to monitor and analyze them, in order to develop effective strategies for customer attraction and retention. Undoubtedly, insurance companies benefit from data science applications in the spheres of greatest interest to them. We have therefore prepared the top 10 data science use cases in the insurance industry, covering a wide variety of activities.


The 4 Levels of Data Usage in Data Science

Overton noted that five years ago the idea of extracting value from data was new to businesses, which had to be convinced that collecting data and analyzing it for meaningful patterns was worth the effort. He contrasted that with today's view: it is now understood that not using data means losing a potential competitive edge. It seems a foregone conclusion that data science is necessary to some degree, yet businesses now face a different set of questions: What should our data scientists be doing? What areas of our business should we be focusing on? How exactly can our data add value?
1 – Monitor & Predict
2 – Improve Efficiency
3 – Augment Decision Making
4 – Automation


R 3.5.1 is released

Last week the R Core Team released the latest update to the R statistical data analysis environment, R version 3.5.1. This update (codenamed ‘Feather Spray’, a Peanuts reference) fixes a few bugs and makes no significant user-visible changes. It is backwards-compatible with R 3.5.0, and updates for Windows, Linux, and Mac systems are available at your local CRAN mirror.


A Comparative Review of the R Commander GUI for R

The R Commander is a free and open-source graphical user interface for R, one that focuses on helping users learn R commands by pointing and clicking their way through analyses. The R Commander is available on Windows, Mac, and Linux; there is no server version. This is one of a series of reviews that aim to help non-programmers choose the R user interface that is best for them. Each review also includes a cursory description of the programming support each interface offers.
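For context, installing and launching it takes two lines of R – the GUI opens when the package is attached:

    install.packages("Rcmdr")
    library(Rcmdr)   # opens the R Commander window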


The Ten Commandments for a well-formatted database

Our diligent readers already know how important well-formatted data is for efficient statistical analysis. Here we have gathered some advice on how to build a well-structured database, so that you can perform accurate analyses and avoid driving your fellow analysts crazy. A minimal example follows the list.
Commandment 1: All your data shall fit into one single dataframe
Commandment 2: Thou shalt respect a precise formatting
Commandment 3: A line = a statistical individual
Commandment 4: A column = a variable
Commandment 5: Thou shalt not encode thy qualitative variables
Commandment 6: Thy database shall only contain data
Commandment 7: Homogeneous thou shalt be
Commandment 8: Thy numerical variables with respect thou shalt treat
Commandment 9: Anonymous thy database thou shalt keep
Commandment 10: Human readable thy database shall be
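Here is the promised minimal example: an invented survey table in R that follows Commandments 1, 3, 4, 5, 9, and 10:

    # One data frame (1), one row per statistical individual (3),
    # one column per variable (4). All values and names are invented.
    survey <- data.frame(
      id        = c("P01", "P02", "P03"),        # anonymous IDs (9)
      age       = c(34, 51, 27),                 # one variable per column
      treatment = c("control", "drug", "drug"),  # labels, not codes (5)
      weight_kg = c(70.2, 81.5, 64.8)            # units in the name (10)
    )
    str(survey)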


Interpreting machine learning models

Regardless of the end goal of your data science solution, end users will always prefer solutions that are interpretable and understandable. Moreover, as a data scientist you will always benefit from your model's interpretability to validate and improve your work. In this blog post I attempt to explain the importance of interpretability in machine learning and discuss some simple actions and frameworks that you can experiment with yourself.
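As a concrete starting point, here is a minimal R sketch of permutation feature importance, one simple model-agnostic technique; the post itself may focus on other frameworks, and the toy data below is invented:

    # Importance = how much error grows when one feature is shuffled,
    # breaking its relationship with the target.
    set.seed(1)
    n <- 500
    X <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
    y <- 2 * X$x1 + 0.5 * X$x2 + rnorm(n)   # x3 is pure noise

    model <- lm(y ~ ., data = X)
    rmse  <- function(pred) sqrt(mean((y - pred)^2))
    base  <- rmse(predict(model, X))

    importance <- sapply(names(X), function(v) {
      Xp <- X; Xp[[v]] <- sample(Xp[[v]])     # shuffle one feature
      rmse(predict(model, Xp)) - base         # error increase vs. baseline
    })
    print(sort(importance, decreasing = TRUE))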