Friday, 30 March 2018

Comparison of big data engines


A comparison of big data querying engines is below.

Apache HBase is the Hadoop database, a distributed, scalable, big data store. HBase is an open-source, non-relational, distributed database modelled after Google's Bigtable and is written in Java.

Apache Hive is a data warehouse software project built on top of Apache Hadoop for providing data summarization, query and analysis. Hive gives an SQL-like interface to query data stored in various databases and file systems that integrate with Hadoop.

Splunk is a platform for turning machine data into answers, aimed at IT, IoT and security challenges.



Tuesday, 27 March 2018

Machine Learning


Predictive analytics uses various statistical techniques, such as machine learning, to analyze collected data for patterns or trends and to forecast future events. Machine learning uses predictive models that learn from existing data to forecast future behaviors, outcomes, and trends.

Machine learning libraries enable data scientists to use dozens of algorithms, each with its own strengths and weaknesses. Download the machine learning algorithm cheat sheet to help choose an appropriate algorithm.
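As a minimal sketch of the predictive-model idea above: fit a simple linear regression to historical observations, then use it to forecast a future value. The data points below are invented purely for illustration.

```python
# Fit a simple linear regression (closed form) to historical data,
# then forecast a future value from the learned trend.

def fit_linear(xs, ys):
    """Return (slope, intercept) minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Historical data: month number vs. observed sales (illustrative).
months = [1, 2, 3, 4, 5]
sales = [10.0, 12.1, 13.9, 16.2, 18.0]

slope, intercept = fit_linear(months, sales)

def predict(month):
    return slope * month + intercept

print(round(predict(6), 1))  # forecast for month 6
```

Real libraries offer far richer algorithms, but the workflow is the same: learn parameters from existing data, then apply them to unseen inputs.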



Saturday, 24 March 2018

SQL Relay Session Submission Open



SQL Relay session submission is open at https://sessionize.com/sqlrelay2018/. Please submit a session to this travelling conference. Speakers can present at a single event or at multiple events; it's up to you.

We are running 5 events in the same week in 2018, Monday to Friday, covering 5 different cities within the UK.

    Mon 8 Oct - Newcastle
    Tue 9 Oct - Leeds
    Wed 10 Oct - Birmingham
    Thu 11 Oct - Reading
    Fri 12 Oct - Bristol

We cover a broad range of topics at different levels, from SQL Server DBA to Advanced Analytics in Azure, taking in all aspects of the Microsoft Data Platform. 


Friday, 16 March 2018

Big Data LDN Keynotes

The 2018 opening keynotes of Big Data LDN have been announced.

Jay Kreps and Michael Stonebraker will be delivering the two opening keynotes.

Jay Kreps opens the event on day 1, Tuesday 13th November. The ex-Lead Architect for Data Infrastructure at LinkedIn, co-creator of Apache Kafka and co-founder & CEO of Confluent will take to the stage in the keynote theatre at 09:30.

Michael Stonebraker, the Turing Award winner, IEEE John von Neumann Medal holder, co-founder of Tamr and Professor at MIT, will address the keynote theatre at 09:30 on day 2, Wednesday 14th November.

Tuesday, 13 March 2018

Azure Cosmos DB Data Explorer

A new tool is available. Data Explorer provides a rich, unified experience for inserting, querying, and managing Azure Cosmos DB data within the Azure portal. It brings together three tools: Document Explorer, Query Explorer, and Script Explorer.


Monday, 12 March 2018

Apache Hive and HDInsight


Apache Hive is a data warehouse system for Hadoop. Hive enables data summarization, querying, and analysis of data. Hive queries are written in HiveQL, a query language similar to SQL. Hive maintains only metadata about your data, which is stored on HDFS. Apache Spark has built-in functionality for working with Hive, and HiveQL can be used to query data stored in HBase. Hive can handle large data sets, provided the data has some structure. Query execution can be via Apache Tez, Apache Spark, or MapReduce.

There are two types of tables within Hive.
  • Internal: data is stored in the Hive data warehouse, located at /hive/warehouse/ on the default storage for the cluster. This is mainly for temporary data.
  • External: data is stored outside the data warehouse, for when the data is also used outside of Hive or needs to stay in its underlying location.

A Hive table consists of a schema stored in the metastore and data stored on HDFS. The supported file formats are text file, SequenceFile, RCFile, Avro, ORC, Parquet, and custom INPUTFORMAT and OUTPUTFORMAT.
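The separation of schema and data can be sketched in plain Python. This is not Hive's real implementation: the "metastore" dictionary, file names and helper functions below are invented stand-ins that just illustrate schema-on-read, and the internal/external distinction on drop.

```python
# Rough sketch of Hive's schema/data separation: the "metastore"
# records only the table schema and data location, while the data
# itself stays in plain files (here, an in-memory stand-in for HDFS).
import csv
import io

metastore = {
    "logs": {
        "columns": ["ts", "level", "message"],
        "location": "logs.csv",   # stand-in for an HDFS path
        "external": True,
    }
}

# Stand-in for files stored on HDFS.
filesystem = {
    "logs.csv": "1,INFO,started\n2,ERROR,disk full\n"
}

def query(table):
    """Apply the stored schema to the raw file at read time."""
    meta = metastore[table]
    reader = csv.reader(io.StringIO(filesystem[meta["location"]]))
    return [dict(zip(meta["columns"], row)) for row in reader]

def drop(table):
    """Dropping an internal table removes its data; external data survives."""
    meta = metastore.pop(table)
    if not meta["external"]:
        filesystem.pop(meta["location"])

rows = query("logs")
print(rows[1]["level"])          # ERROR
drop("logs")
print("logs.csv" in filesystem)  # True: external table data survives
```

The key point the sketch shows is that the schema is applied only when the data is read, and dropping an external table removes only the metadata.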

Apache Hive and HiveQL are available on Azure HDInsight.

Friday, 2 March 2018

The Magic of Data


It was the 10th anniversary of SQLBits this year, with the conference tagline being "the magic of data". The conference was held at Olympia, London, from 21 to 24 February. I am proud to have attended every conference since its inception and to have been a helper for the last 8 years. At the start of each conference it is always an interesting challenge to understand the new venue layout and work out what we can do to make this the best SQLBits conference ever for the attendees and speakers.

There were the usual two days of expert instructor-led training. I looked after a Power BI day with Adam Saxton and a Python day with Dejan Sarka. The Python training included machine learning for in-database SQL Server. It was very helpful to have an overview of data mining, machine learning and statistics for data scientists. Having an appreciation of the maths and algorithms is important in this new, diverse data world.

Friday arrived with a mix of general sessions running in multiple tracks. I initially attended a session by Mark Wilcock on text analytics of a bank's public reports. The recording can be seen here. The text analysis in R that was demonstrated was very similar to the type of qualitative analysis I undertook in my PhD. The session An Introduction to HDInsight was a great starting point for managing big data.

The date that every data person has in their head this year is 25 May 2018: the GDPR deadline. The big question for Microsoft was understanding the telemetry data collected and its pipeline to ensure that they comply with GDPR. It was great to hear about all the work they have done to address GDPR for the data platform.

I attended more sessions in data science and SQL Graph. Graph databases are very useful in certain scenarios. The on-premises SQL Server 2017 graph engine is different to that of the graph API in Cosmos DB and has different syntax. There are many new features still to come for SQL Graph.

Other very interesting sessions covered performance tuning with the Tiger Toolbox, R in Power BI, the flexibility of SQL Server 2017, inside the classic machine learning algorithms with Professor Mark Whitehorn, and "Don't Cross the Streams", a closer look at Stream Analytics by Johan Ludvig Brattås. That concluded the breadth of topics I covered at this year's conference. The conference covers an amazing breadth and depth of topics, from database management and development to BI, data management and data science. My lightning talk experience from this year is shared here.

The rest of my time was spent mingling and sharing data experiences. It was an honour to have been able to be a part of the conference again.
There are revised patterns available for big data advanced analytics using the Azure Databricks platform with Azure Machine Learning.

The new capabilities will enable advanced analytics to be carried out using Azure Machine Learning. The different types of data requirements and consumption are integrated using Cosmos DB.

Databricks in Azure

Databricks is a unified big data analytics platform that harnesses the power of AI. It is built on top of Spark, is serverless, and is a highly elastic cloud-based service. Azure Databricks is currently in preview. This new Azure service aims to accelerate innovation by enabling data science on a high-performance analytics platform optimized for Azure. It has native integration with other Azure services such as Power BI, SQL Data Warehouse and Cosmos DB, and benefits from enterprise-grade Azure security, including Active Directory integration, compliance, and enterprise-grade SLAs. More information can be found at these two links:

A technical overview of Azure Databricks
https://azure.microsoft.com/en-gb/blog/a-technical-overview-of-azure-databricks/

Introduction to Azure Databricks
https://channel9.msdn.com/Events/Connect/2017/T257

Databricks is a collaborative workspace.

Databricks has an ebook, Simplifying Data Engineering to Accelerate Innovation, which covers:

  • The three primary keys to better data engineering
  • How to build and run faster and more reliable data pipelines
  • How to reduce operational complexity and total cost of infrastructure ownership
  • 5 examples of enterprises building reliable and highly performant data pipelines

Thursday, 1 March 2018

An Introduction to HDInsight

 
I attended a great session at SQLBits 2018 covering the basics of HDInsight by Edinson Medina. He introduced his talk by explaining the term big data: data too large or complex for analysis in traditional databases. There are two types of processing: batch processing, to shape the data for analysis, and real-time processing, to capture streams of data for low-latency querying.

Hadoop is described on the Hortonworks site as "Apache Hadoop is an open source software platform for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware."

A Hadoop cluster consists of a head node coordinating the cluster and multiple worker nodes that store and process the data on HDFS.
The underlying processing model is MapReduce. The Tez engine is a newer, faster execution engine for the same model. The model is explained in the paper "Analyzing performance of Apache Tez and MapReduce with hadoop multinode cluster on Amazon cloud".
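The MapReduce model can be shown with an in-process sketch: a map phase emits (key, value) pairs, a shuffle groups them by key, and a reduce phase combines each group. Real Hadoop distributes these phases across a cluster; this only illustrates the data flow, using the classic word-count example.

```python
# In-process word count in the MapReduce style:
# map emits (word, 1) pairs, shuffle groups by word, reduce sums.
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield word.lower(), 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

documents = ["big data on Hadoop", "big clusters process big data"]
pairs = (pair for doc in documents for pair in map_phase(doc))
counts = reduce_phase(shuffle(pairs))
print(counts["big"])  # 3
```

Because map and reduce operate on independent keys, each phase can be parallelised across many machines, which is what the cluster (and engines like Tez) does at scale.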



HDInsight is 100% Apache Hadoop, but powered by the cloud.



There are many tools within the Hadoop ecosystem.

Hive
A metadata service that projects tabular schemas over folders and enables the folders to be queried as tables using SQL-like queries.

Pig (an ETL Tool)
Performs a series of transformations to data relations based on Pig Latin statements.

Oozie
A workflow engine for actions in a Hadoop cluster supporting parallel work streams.

Sqoop
A database integration service that enables bi-directional data transfer between a Hadoop cluster and relational databases via JDBC.

HBase
A low-latency NoSQL database built on Hadoop, modelled on Google's BigTable. HBase stores its files on HDFS.

Storm
An event processor for data streams such as real time monitoring and for event aggregation and logging. It defines a streaming topology that consists of spouts and bolts.
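The spout-and-bolt idea can be illustrated with a toy Python pipeline. This is only an analogy: Storm's real API is Java-based and far richer, and the function names below are invented for the sketch.

```python
# Toy Storm-style topology: a spout emits a stream of events,
# and bolts transform or aggregate them.
from collections import Counter

def spout():
    """Emit a stream of raw log events (a stand-in for a live feed)."""
    for event in ["login ok", "login fail", "logout ok", "login fail"]:
        yield event

def parse_bolt(stream):
    """Split each raw event into (action, status)."""
    for event in stream:
        action, status = event.split()
        yield action, status

def count_bolt(stream):
    """Aggregate the stream by status."""
    return Counter(status for _, status in stream)

totals = count_bolt(parse_bolt(spout()))
print(totals["fail"])  # 2
```

In a real topology the spout would read from a live source such as a message queue, and each bolt could run on many workers in parallel.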

Spark
A fast, general-purpose computation engine that supports in-memory operations. It is a unified stack for interactive, streaming and predictive analysis.

Ambari
A management platform for provisioning, managing, monitoring and securing Apache Hadoop clusters.

Zeppelin notebooks
A multi-purpose web-based notebook that brings data ingestion, data exploration, visualization, sharing and collaboration features to Hadoop and Spark.