Passionately curious about Data, Databases and Systems Complexity. Data is ubiquitous, the database universe is dichotomous (structured and unstructured), expanding and complex. Find my Database Research at SQLToolkit.co.uk

"The important thing is not to stop questioning. Curiosity has its own reason for existing" Einstein

Friday, 16 March 2018

Big Data LDN Keynotes

The 2018 opening keynotes of Big Data LDN have been announced.

Jay Kreps and Michael Stonebraker will be delivering the two opening keynotes.

Jay Kreps opens the event on day 1, Tuesday 13th November. The ex-Lead Architect for Data Infrastructure at LinkedIn, co-creator of Apache Kafka and co-founder & CEO of Confluent will take to the stage in the keynote theatre at 09:30.

Michael Stonebraker, the Turing Award winner, IEEE John von Neumann Medal holder, co-founder of Tamr and Professor at MIT will address the keynote theatre at 09:30 on day 2, Wednesday 14th November.

Tuesday, 13 March 2018

Azure Cosmos DB Data Explorer

A new tool is available to use. Data Explorer provides a rich and unified experience for inserting, querying, and managing Azure Cosmos DB data within the Azure portal. It brings together three tools: Document Explorer, Query Explorer, and Script Explorer.
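The same SQL-style queries that Data Explorer runs in the portal can also be issued programmatically. Here is a minimal sketch using the azure-cosmos Python package; the account endpoint, key, database, container and field names are all hypothetical placeholders:

    from azure.cosmos import CosmosClient

    # Hypothetical account endpoint and key.
    client = CosmosClient("https://myaccount.documents.azure.com:443/",
                          credential="<account-key>")
    database = client.get_database_client("mydb")
    container = database.get_container_client("items")

    # The same kind of SQL-style query you could type into
    # Data Explorer's query pane in the portal.
    results = container.query_items(
        query="SELECT c.id, c.name FROM c WHERE c.category = @category",
        parameters=[{"name": "@category", "value": "books"}],
        enable_cross_partition_query=True,
    )
    for item in results:
        print(item["id"], item["name"])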

Monday, 12 March 2018

Apache Hive and HDInsight

Apache Hive is a data warehouse system for Hadoop. Hive enables summarization, querying, and analysis of data. Hive queries are written in HiveQL, a query language similar to SQL. Hive maintains only metadata about your data; the data itself is stored on HDFS. Apache Spark has built-in functionality for working with Hive, and HiveQL can be used to query data stored in HBase. Hive can handle large data sets, but the data must have some structure. Query execution can be via Apache Tez, Apache Spark, or MapReduce.
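Because Spark has built-in Hive support, HiveQL can be run directly from PySpark. A minimal sketch, assuming a cluster with a Hive metastore and a hypothetical sales table:

    from pyspark.sql import SparkSession

    # Create a session that can talk to the Hive metastore.
    spark = (SparkSession.builder
             .appName("hive-example")
             .enableHiveSupport()
             .getOrCreate())

    # HiveQL: summarise a hypothetical 'sales' table stored on HDFS.
    spark.sql("""
        SELECT region, SUM(amount) AS total_sales
        FROM sales
        GROUP BY region
        ORDER BY total_sales DESC
    """).show()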

There are two types of tables within Hive.
  • Internal: data is stored in the Hive data warehouse, which is located at /hive/warehouse/ on the default storage for the cluster. This is mainly for temporary data.
  • External: data is stored outside the data warehouse. This is used when the data is also accessed outside of Hive or needs to stay in its underlying location.

A Hive table consists of a schema stored in the metastore and data stored on HDFS. The supported file formats are Text File, SequenceFile, RCFile, Avro Files, ORC Files, Parquet, and custom INPUTFORMAT and OUTPUTFORMAT.
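To make the internal/external distinction concrete, here is a hedged HiveQL sketch, continuing the PySpark session above; the table names, HDFS path and the choice of ORC are all hypothetical:

    # Internal (managed) table: the data lives under /hive/warehouse/
    # and is deleted when the table is dropped.
    spark.sql("""
        CREATE TABLE staging_events (id INT, payload STRING)
        STORED AS ORC
    """)

    # External table: only the schema is registered in the metastore;
    # the data stays at its HDFS location when the table is dropped.
    spark.sql("""
        CREATE EXTERNAL TABLE raw_events (id INT, payload STRING)
        STORED AS ORC
        LOCATION '/data/raw/events'
    """)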

Apache Hive and HiveQL are available on Azure HDInsight.

Friday, 2 March 2018

The Magic of Data

It was the 10th anniversary of SQLBits this year, with the conference tag line being "the magic of data". The conference was held at Olympia, London, between 21 and 24 February. I am proud to have attended every conference since its inception and to have been a helper for the last 8 years. At the start of each conference it is always an interesting challenge to understand the new venue layout and work out what we can do to make this the best SQLBits conference ever for the attendees and speakers.

There were the usual two days of expert, instructor-led training. I looked after a Power BI day with Adam Saxton and a Python day with Dejan Sarka. The Python training included in-database Machine Learning for SQL Server. It was very helpful to have an overview of data mining, machine learning and statistics for data scientists. Having an appreciation of the maths and algorithms is important in this new, diverse data world.

Friday arrived with a mix of general sessions running in multiple tracks. I initially attended a session by Mark Wilcock on Text Analytics Of A Bank's Public Reports. The recording can be seen here. The text analysis in R that was demonstrated was very similar to the type of qualitative analysis I undertook in my PhD. The session An Introduction to HDInsight was a great starting point for managing big data.

The date that every data person has in their head this year is 25 May 2018: the GDPR deadline. The big question for Microsoft was understanding the telemetry data collected and its pipeline, to ensure that they comply with GDPR. It was great to hear about all the work they have done to address GDPR for the data platform.

I attended more sessions on data science and SQL Graph. Graph databases are very useful in certain scenarios. The on-premises SQL Server 2017 graph engine is different from the graph API in Cosmos DB and uses different syntax. There are many new features still to come for SQL Graph.
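For a taste of the on-premises side, here is a hedged sketch of a SQL Server 2017 graph query run from Python; the Person node table, follows edge table and the connection string are hypothetical:

    import pyodbc

    # Hypothetical connection to a SQL Server 2017 instance.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=GraphDemo;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()

    # MATCH traverses the graph: find everyone Alice follows.
    # Assumes Person was created AS NODE and follows AS EDGE.
    cursor.execute("""
        SELECT p2.name
        FROM Person AS p1, follows, Person AS p2
        WHERE MATCH(p1-(follows)->p2) AND p1.name = 'Alice'
    """)
    for row in cursor.fetchall():
        print(row.name)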

Other very interesting sessions were on performance tuning with the Tiger Toolbox, R in Power BI, the flexibility of SQL Server 2017, inside the classic machine learning algorithms with Professor Mark Whitehorn, and Don't Cross the Streams, a closer look at Stream Analytics by Johan Ludvig Brattås. That concluded the breadth of topics I covered at this year's conference. The conference covers an amazing breadth and depth of topics across database management, development, BI, data management and data science. My lightning talk experience from this year is shared here.

The rest of my time was spent mingling and sharing data experiences. It was an honor to be part of the conference again.

Databricks in Azure

Databricks is a big data unified analytics platform that harnesses the power of AI. It is built on top of Apache Spark, is serverless, and is highly elastic and cloud based. Azure Databricks is currently in preview. This new Azure service aims to accelerate innovation by enabling data science with a high-performance analytics platform that’s optimized for Azure. It has native integration with other Azure services such as Power BI, SQL Data Warehouse and Cosmos DB, and benefits from enterprise-grade Azure security, including Active Directory integration, compliance, and enterprise-grade SLAs. More information can be found in these two links:

A technical overview of Azure Databricks

Introduction to Azure Databricks

Databricks is a collaborative workspace.
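In practice the workspace is notebook-driven. A minimal sketch of a PySpark cell in an Azure Databricks notebook; the spark session is provided by the notebook, and the mount path and column names are hypothetical:

    # 'spark' is pre-defined in a Databricks notebook.
    # Read a hypothetical CSV dataset from mounted Azure storage.
    df = spark.read.csv("/mnt/datalake/sales.csv",
                        header=True, inferSchema=True)

    # A simple aggregation, executed on the elastic Spark cluster.
    summary = df.groupBy("region").sum("amount")

    # Persist the result as Parquet for downstream tools such as Power BI.
    summary.write.mode("overwrite").parquet("/mnt/datalake/sales_summary")

    # Databricks' built-in notebook visualization helper.
    display(summary)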

Databricks have an ebook, Simplifying Data Engineering to Accelerate Innovation, which covers:

  • The three primary keys to better data engineering
  • How to build and run faster and more reliable data pipelines
  • How to reduce operational complexity and total cost of infrastructure ownership
  • 5 examples of enterprises building reliable and highly performant data pipelines

Thursday, 1 March 2018

An Introduction to HDInsight

I attended a great session at SQLBits 2018 covering the basics of HDInsight by Edinson Medina. He introduced his talk by explaining the term big data: data that is too complex for analysis in traditional databases. There are two types of processing: batch processing, to shape the data for analysis, and real-time processing, to capture streams of data for low-latency querying.
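To illustrate the distinction, here is a hedged PySpark sketch of the two modes over the same kind of event data; the paths and the eventType column are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("batch-vs-stream").getOrCreate()

    # Batch processing: shape a static, already-landed dataset for analysis.
    batch_df = spark.read.json("/data/events/2018/")
    batch_df.groupBy("eventType").count().show()

    # Real-time processing: consume the same kind of data as an unbounded
    # stream, keeping counts up to date for low-latency querying.
    stream_df = (spark.readStream.schema(batch_df.schema)
                 .json("/data/events/incoming/"))
    query = (stream_df.groupBy("eventType").count()
             .writeStream.outputMode("complete")
             .format("console").start())
    query.awaitTermination(30)  # run briefly for the example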

Hadoop is described on the Hortonworks site as "Apache Hadoop is an open source software platform for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware."

A diagram of a typical Hadoop cluster was shown.

The underlying execution model is MapReduce. The Tez engine is a newer, faster engine for MapReduce-style processing. The model is explained in the paper "Analyzing performance of Apache Tez and MapReduce with hadoop multinode cluster on Amazon cloud".
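The classic way to see the map-reduce model is a word count. A minimal Hadoop Streaming sketch in Python, with illustrative file names; the mapper emits a (word, 1) pair per word:

    #!/usr/bin/env python
    # mapper.py: emit a (word, 1) pair for every word read from stdin.
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print("%s\t1" % word)

and the matching reducer, which relies on the framework sorting its input by key between the two phases:

    #!/usr/bin/env python
    # reducer.py: input arrives sorted by word, so counts can be
    # accumulated per word and flushed when the word changes.
    import sys

    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word == current_word:
            count += int(value)
        else:
            if current_word is not None:
                print("%s\t%d" % (current_word, count))
            current_word, count = word, 1
    if current_word is not None:
        print("%s\t%d" % (current_word, count))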

HDInsight is 100% Apache Hadoop, but powered by the cloud.

There are many tools within the Hadoop ecosystem.

Hive
A metadata service that projects tabular schemas over folders and enables the folders to be queried as tables using a SQL-like query.

Pig (an ETL Tool)
Performs a series of transformations to data relations based on Pig Latin statements.

Oozie
A workflow engine for actions in a Hadoop cluster supporting parallel work streams.

Sqoop
A database integration service which enables bi-directional data transfer between a Hadoop cluster and databases via JDBC.

HBase
A low-latency NoSQL database built on Hadoop, modeled on Google's BigTable. HBase stores its files on HDFS.

Storm
An event processor for data streams, used for real-time monitoring and for event aggregation and logging. It defines a streaming topology that consists of spouts and bolts.

Spark
A fast, general-purpose computation engine that supports in-memory operations. It is a unified stack for interactive, streaming and predictive analysis.

Ambari
A management platform for provisioning, managing, monitoring and securing Apache Hadoop clusters.

Zeppelin notebooks
A multi-purpose web-based notebook which brings data ingestion, data exploration, visualization, sharing and collaboration features to Hadoop and Spark.