Passionately curious about Data, Databases and Systems Complexity. Data is ubiquitous, the database universe is dichotomous (structured and unstructured), expanding and complex. Find my Database Research at SQLToolkit.co.uk. Microsoft Data Platform MVP

"The important thing is not to stop questioning. Curiosity has its own reason for existing" Einstein

Wednesday, 23 May 2018

Deep learning

Deep Learning is a subset of machine learning that aims to solve thought-related problems. To understand this bleeding-edge technology, here are a few links.
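As a toy illustration in plain Python (not tied to any particular framework; the weights and inputs below are invented for the example), a deep network is just stacked layers of weighted sums passed through non-linear activations:

```python
def relu(x):
    # Rectified linear unit: the non-linearity between layers.
    return max(0.0, x)

def layer(inputs, weights, biases):
    # One dense layer: weighted sum of inputs plus bias, then ReLU.
    return [relu(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two stacked layers form a "deep" network in miniature.
hidden = layer([1.0, 2.0], weights=[[0.5, -0.2], [0.3, 0.8]], biases=[0.1, 0.0])
output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
print(output)  # → [0.0]
```

Real deep learning frameworks add many layers, learned weights, and training by backpropagation, but the forward pass has this shape.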

The Microsoft AI School teaches an intuitive approach to building the complex models that help machines solve real-world problems with human-like intelligence.

Free webinar
On 31 May there is a free webinar with Jen Stirrup on Deep Learning and Artificial Intelligence in the Workplace. The webinar asks what Microsoft's approach to Deep Learning is, and how it differs from open-source alternatives. The session will look at Deep Learning and how it can be implemented with Microsoft and Azure technologies: the Cognitive Toolkit, TensorFlow in Azure, and CaffeOnSpark on Azure HDInsight.

How deep learning will change customer experience
This article discusses how artificial neural networks will allow machines and devices to function, in some ways, as humans do.

Monday, 14 May 2018

MVP Award

My Microsoft MVP (Most Valuable Professional) Data Platform award has now arrived. What a privilege it is to have received it. I like to share my passion for data and Microsoft technology. There is something about data technology that excites my curiosity. I feel privileged to have found such a career focus. The award recognizes exceptional community leadership and expertise in a technology focus area.

Monday, 7 May 2018

Microsoft Build Azure Cosmos DB

Microsoft Build is underway, sharing many useful features. Azure Cosmos DB offers a versatile set of APIs with a number of options. There are some quickstart tutorials and samples for these.

Azure Cosmos DB now has multi-master write support. Multi-master in Azure Cosmos DB provides single-digit-millisecond write latency and high availability, with built-in, flexible conflict resolution support. There are some good examples in the article to help understand this functionality better.
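This toy Python sketch (not the Azure Cosmos DB SDK; the region names, timestamps, and function are invented for illustration) shows the idea behind a last-writer-wins conflict resolution policy, one of the resolution options when two regions write the same item:

```python
def resolve_lww(versions, conflict_path="_ts"):
    # Last-writer-wins: keep the version with the highest value
    # on the conflict resolution path (here a timestamp property).
    return max(versions, key=lambda doc: doc[conflict_path])

# Two regions concurrently wrote the same logical item.
region_a = {"id": "item1", "value": "written in West Europe", "_ts": 1527000000}
region_b = {"id": "item1", "value": "written in East US", "_ts": 1527000005}

winner = resolve_lww([region_a, region_b])
print(winner["value"])  # → written in East US (the later write wins)
```

Custom conflict resolution generalizes this by letting application logic, rather than a timestamp comparison, pick or merge the winner.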

Azure Operational Data Services includes Azure SQL DB; PostgreSQL; MySQL; Redis Cache; and Cosmos DB.

Saturday, 5 May 2018

Azure Cosmos DB Change Feed

The Azure Cosmos DB change feed provides a persistent log of the records within an Azure Cosmos DB container. You can learn about this from this concise presentation.
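As a rough analogy in plain Python (a toy stand-in, not the Cosmos DB API; the documents and helper are invented), a change feed behaves like an append-only log that consumers read from a continuation point:

```python
# A toy change feed: an append-only log of (sequence, document) records.
change_log = [
    (1, {"id": "a", "city": "London"}),
    (2, {"id": "b", "city": "Leeds"}),
    (3, {"id": "a", "city": "Bristol"}),  # an update appears as a new record
]

def read_change_feed(log, continuation=0):
    # Return records after the continuation point, plus a new continuation
    # token so the next read resumes where this one stopped.
    changes = [doc for seq, doc in log if seq > continuation]
    new_continuation = max((seq for seq, _ in log), default=continuation)
    return changes, new_continuation

changes, token = read_change_feed(change_log)
more, token = read_change_feed(change_log, continuation=token)
print(len(changes), len(more))  # → 3 0
```

Reading the feed again with the saved token returns only records written since the last read, which is what makes it useful for incremental processing.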

Tuesday, 1 May 2018

Microsoft MVP Award

I received my first Data Platform MVP award yesterday. What an honour it is to be a part of an amazing community. I am humbled by the significance of the award, and it is a privilege to be able to share my passion for data. I am listed here.

Big Data Exploration

The big data landscape is growing and exploration of the data can help make better decisions. I came across this great infographic from IBM.

Monday, 30 April 2018

Machine Learning Algorithm Cheat Sheet

Another machine learning cheat sheet to help you choose your algorithm. The cheat sheet is designed for beginner data scientists and analysts.

The types of learning.

Wednesday, 25 April 2018


The General Data Protection Regulation (GDPR) comes into effect on 25 May 2018, one month from now. The EU General Data Protection Regulation is the most important change in data privacy regulation in 20 years. GDPR is fundamentally about protecting and enabling the privacy rights of the individual.

A Guide to enhancing privacy and addressing GDPR requirements with the Microsoft SQL platform is an interesting read. The obligations related to controls and security around the handling of personal data are some of the concepts discussed in the document.

GDPR Article 25—“Data protection by design and default”: Control exposure to personal data.
• Control accessibility—who is accessing data and how.
• Minimize the data being processed in terms of the amount of data collected, the extent of processing, the storage period, and accessibility.
• Include safeguards for control management integrated into processing.
GDPR Article 32—“Security of processing”: Security mechanisms to protect personal data.
• Employ pseudonymization and encryption.
• Restore availability and access in the event of an incident.
• Provide a process for regularly testing and assessing the effectiveness of security measures.
GDPR Article 33—“Notification of a personal data breach to the supervisory authority”: Detect and notify of a breach in a timely manner (within 72 hours).
• Detect breaches.
• Assess the impact on, and identify, the personal data records concerned.
• Describe measures to address the breach.
GDPR Article 30—“Records of processing activities”: Log and monitor operations.
• Maintain an audit record of processing activities on personal data.
• Monitor access to processing systems.
GDPR Article 35—“Data protection impact assessment”: Document risks and security measures.
• Describe processing operations, including their necessity and proportionality.
• Assess the risks associated with processing.
• Apply measures to address risks and protect personal data, and demonstrate compliance with the GDPR.
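To illustrate the pseudonymization that Article 32 mentions, here is a minimal Python sketch using a keyed hash; the key, field names, and record are hypothetical, and a real deployment would manage the key in a secrets store:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-me-separately"  # hypothetical key

def pseudonymize(value: str) -> str:
    # Keyed hash (HMAC-SHA256): the same input always maps to the same
    # token, but the original value cannot be recovered without the key.
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer": "Jane Smith", "order_total": 42.50}
record["customer"] = pseudonymize(record["customer"])
print(record["customer"][:12])  # a stable token, not the name
```

Because the mapping is stable, pseudonymized data can still be joined and analyzed, while the identifying value itself is no longer stored in the processing system.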

Friday, 20 April 2018

DataWorks Summit 2018

This was the first time I had attended the DataWorks Summit (“Ideas. Insights. Innovation.”) for big data. I had the privilege of attending the Luminaries dinner on arrival at the conference. The dinner was held for the European data heroes awards. The Hortonworks data heroes initiative recognizes the data visionaries, data scientists, and data architects transforming their businesses and organizations through Big Data.

Each day started with a set of keynotes.

Day 1 Opening Keynotes
  • The Single Most Important Formula for Business Success – Scott Gnau, Hortonworks
  • Changing the Data Game with Open Metadata and Governance – Mandy Chessell, IBM
  • Big Data Success In Practice: The Biggest Mistakes To Avoid Across The Top 5 Business Use Cases – Bernard Marr, Bernard Marr & Co.
  • Munich Re: Driving a Big Data Transformation – Andreas Kohlmaier, Munich Re

Scott Gnau opened his talk with a hypothesis: “Data is your cloud is your business”. Connecting disparate data to provide real-time information enables us to innovate faster. A data strategy is imperative; it needs to include governance and security, and to adopt rapid change. Data drives our lives every day, from smart edge devices to all businesses.

He concluded with: your data strategy is your cloud strategy is your business strategy; if (A) = (B) and (B) = (C), then (A) = (C).

Bernard Marr then shared his insights about AI automating more things faster and the fourth industrial revolution.  He mentioned the top 5 business use cases as

  • Informing: to make better decisions
  • Understand: know your customers better
  • Improvement: customer value proposition
  • Automation: key business processes
  • Monetization: data as an asset
A couple of interesting points raised were about specialist data-hunting units to find new data sources, and automation requirements to improve operations. Data diversity is key to improving analytics, along with data governance.

Day 2 Keynotes
  • Renault: A Data Lake Journey – Kamelia Benchekroun, Renault Group
  • Are You Ready For GDPR? – Jamie Engesser and Srikanth Venkat, Hortonworks
  • Embracing GDPR to Improve Your Business Practices in the Digital Age – Enza Iannopollo, Forrester Research
  • Driving High Impact Business Outcomes from Artificial Intelligence – Frank Saeuberlich, Teradata

On day 2, Forrester's Enza Iannopollo discussed embracing GDPR to improve your business practices in the digital age. Privacy by design and by default requires new business processes to be established and cultural change to happen. GDPR requires compliance across the organization and with external partners. Compliance strategies are only as good as your risk assessment and mitigation. The classification of data is a key place to start. She concluded the session with a quote:
“Good data protection normally enables you to do more things with data, not less” – Tim Gough, Head of Data Protection, Guardian News and Media

Data Steward Studio (DSS) was launched at the conference. It is one of several services available for Hortonworks DataPlane Service; it provides a suite of capabilities that allows users to understand and govern data across enterprise data lakes.

Saturday, 14 April 2018

SQL Information Protection with Data Discovery and Classification

The public preview of SQL Information Protection brings advanced capabilities built into Azure SQL Database for discovering, classifying, labeling, and protecting the sensitive data in your databases. SQL Data Discovery and Classification are also added to SQL Server Management Studio.

These tools will help meet data privacy standards and regulatory compliance requirements, such as the GDPR. They will enable data-centric security scenarios, such as monitoring (auditing) and alerting on anomalous access to sensitive data, to be viewed in dashboards. They will help with controlling access to, and hardening the security of, databases containing highly sensitive data.

SQL Information Protection (SQL IP) introduces a set of advanced services and new SQL capabilities, forming a new information protection paradigm in SQL aimed at protecting the data. The four areas covered are:

  • Discovery and recommendations 
  • Labeling
  • Monitoring/Auditing (Azure SQL Db only)
  • Visibility
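As a toy sketch of the discovery-and-recommendations idea (plain Python, not the actual SQL IP engine; the rules, information types, and labels below are invented for illustration), classification can be thought of as matching column names against sensitivity rules:

```python
import re

# Hypothetical rules: pattern on the column name → (information type, label).
RULES = [
    (re.compile(r"email", re.I), ("Contact Info", "Confidential")),
    (re.compile(r"(ssn|national_id|passport)", re.I), ("National ID", "Highly Confidential")),
    (re.compile(r"(salary|credit_card)", re.I), ("Financial", "Highly Confidential")),
]

def classify(columns):
    # Recommend an information type and sensitivity label per column.
    recommendations = {}
    for col in columns:
        for pattern, labels in RULES:
            if pattern.search(col):
                recommendations[col] = labels
                break
    return recommendations

print(classify(["customer_email", "order_date", "salary"]))
```

The real feature goes further: recommended labels can be accepted or adjusted, persisted as metadata on the columns, and then used to drive auditing and dashboards.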

Ph.D Graduation

“A story has no beginning or end: arbitrarily one chooses that moment of experience from which to look back or from which to look ahead.”
― Graham Greene, The End of the Affair

After 7 years of hard work, bringing industry and research together, I was excited to attend my Ph.D graduation. What an awesome and humbling day. Words can't express how it felt as a Ph.D graduate, with a Doctor of Philosophy, to sit on the stage alongside the university academic staff. It is something I will never forget.

Now it is time to utilize the research skills gained throughout the Ph.D and begin something new. My aspirations in the academic field are to write many papers, share my research findings, and become a research fellow.

Thursday, 12 April 2018

Microsoft Professional Program for Artificial Intelligence

With Artificial Intelligence (AI) defining the next generation, this Microsoft course seems a great way to jump-start your skills.

The course covers these modules:

  • Introduction to AI
  • Use Python to Work with Data
  • Use Math and Statistics Techniques
  • Consider Ethics for AI
  • Plan and Conduct a Data Study
  • Build Machine Learning Models
  • Build Deep Learning Models
  • Build Reinforcement Learning Models
  • Develop Applied AI Solutions
  • Final Project
At the end you gain the Microsoft Professional Program Certificate in Artificial Intelligence.

Tuesday, 10 April 2018

Leverage data for building

The leverage data to build intelligent apps presentation gives an insightful overview of the Microsoft Data Platform and how to innovate with analytics and AI. 

Monday, 9 April 2018

Advice and guidance on becoming a speaker or volunteer

I watched this great session giving 'advice and guidance on becoming a speaker or volunteer' from SQLBits this year. 

I felt humbled when I listened to the SQLBits session recording as I am named as an absolute legend for attending all 16 SQLBits and helping for over 8 years. I had never spoken, never presented or been involved in the public facing side of the conference. It is such a great feeling helping the conference be successful, helping others enjoy what working with data brings and being a part of the sqlfamily. Thanks to SQLBits for enabling me to be a part of such an amazing event for all of these years.

The PhD Bookshelf

Following on from the creation of a literature map for my PhD, I started to formulate a plan of literature to read. These are some of the books on my bookshelf.

I also read many academic papers, stored in seven box files and in Mendeley.

Mendeley is a free reference manager. It enables you to manage your research, showcase your work, and connect and collaborate with over six million researchers worldwide.

I found the Communications of the ACM journal and SIGMOD, the journal of the ACM Special Interest Group on Management of Data, great reads.

Friday, 6 April 2018

Demystify complex relationships with SQL Server 2017 and graph

This great infographic shows some quick tips about SQL Server 2017 and graph databases. The picture demonstrates nodes and edges and provides a clear example of the code changes between the traditional SQL query and the graph query.
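The node-and-edge idea can be sketched in plain Python (a toy stand-in, not SQL Server graph syntax; the people and relationships are invented): a node table holds entities, an edge table holds the relationships between them, and a query follows edges instead of joining through foreign keys.

```python
# A "node table" of people and an "edge table" of likes relationships.
person = {1: "Alice", 2: "Bob", 3: "Carol"}
likes = [(1, 2), (2, 3), (1, 3)]  # (from_id, to_id)

def who_likes(name):
    # Follow the likes edges backwards to the named person,
    # much as a graph MATCH traverses node-edge-node patterns.
    target = next(pid for pid, n in person.items() if n == name)
    return [person[src] for src, dst in likes if dst == target]

print(who_likes("Carol"))  # → ['Bob', 'Alice']
```

With many-hop relationships, expressing the traversal directly (rather than as chained self-joins) is where the graph model pays off.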

Tuesday, 3 April 2018

Cosmos DB SQL query cheat sheet

The new Azure Cosmos DB: SQL Query Cheat Sheet helps you write queries for SQL API data by displaying common database queries, keywords, built-in functions, and operators in an easy to print PDF reference sheet. Reference information for the MongoDB API, Table API, and Gremlin/Graph API are also included.
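As a toy illustration of the SQL API query shape (pure Python standing in for the service; the documents and helper are invented), a query such as `SELECT * FROM c WHERE c.city = 'London'` simply filters JSON documents:

```python
documents = [
    {"id": "1", "city": "London", "visits": 3},
    {"id": "2", "city": "Leeds", "visits": 7},
    {"id": "3", "city": "London", "visits": 1},
]

def where_city(docs, city):
    # Equivalent in spirit to: SELECT * FROM c WHERE c.city = @city
    return [d for d in docs if d.get("city") == city]

print([d["id"] for d in where_city(documents, "London")])  # → ['1', '3']
```

The cheat sheet covers the rest of the surface area on top of this shape: projections, built-in functions, and operators.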

Sunday, 1 April 2018

Literature Map

When you start any research project, you need to set the research in the context of the current literature. This will establish a framework for the importance of the study. This document was the starting place for organizing the literature of interest in my research.

Thesis Title: A Study in Best Practices and Procedures for the Management of Database Systems

Friday, 30 March 2018

Comparison of big data engines

A comparison of big data querying engines is below.

Apache HBase is the Hadoop database, a distributed, scalable, big data store. HBase is an open-source, non-relational, distributed database modelled after Google's Bigtable and is written in Java.

Apache Hive is a data warehouse software project built on top of Apache Hadoop for providing data summarization, query and analysis. Hive gives an SQL-like interface to query data stored in various databases and file systems that integrate with Hadoop.

Splunk turns machine data into answers with the leading platform to tackle the toughest IT, IoT and security challenges.

Tuesday, 27 March 2018

Machine Learning

Predictive analytics uses various statistical techniques, such as machine learning, to analyze collected data for patterns or trends to forecast future events. Machine learning uses predictive models that learn from existing data to forecast future behaviors, outcomes, and trends.

Machine Learning libraries enable data scientists to use dozens of algorithms, each with their strengths and weaknesses. Download the machine learning algorithm cheat sheet to help identify how to choose a machine learning algorithm.
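As a minimal example of a predictive model that learns from existing data, here is ordinary least squares, one of the simplest algorithms on such cheat sheets, fitted by hand in Python (the data points are invented):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b, learned from existing data.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# "Existing data": past observations; the model then forecasts a new x.
a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
forecast = a * 5 + b
print(round(forecast, 1))
```

The cheat sheet's value is in choosing among the dozens of richer algorithms; the learn-then-forecast loop above is the pattern they all share.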

Saturday, 24 March 2018

SQL Relay Session Submission Open

SQL Relay session submission is open: https://sessionize.com/sqlrelay2018/. Please submit a session to this travelling conference. Speakers can present at a single event or at multiple events; it's up to you.

We are running 5 events in the same week in 2018, Monday to Friday, covering 5 different cities within the UK.

    Mon 8 Oct - Newcastle
    Tue 9 Oct - Leeds
    Wed 10 Oct - Birmingham
    Thu 11 Oct - Reading
    Fri 12 Oct - Bristol

We cover a broad range of topics at different levels, from SQL Server DBA to Advanced Analytics in Azure, taking in all aspects of the Microsoft Data Platform. 

Friday, 16 March 2018

Big Data LDN Keynotes

The 2018 opening keynotes of Big Data LDN have been announced.

Jay Kreps and Michael Stonebraker will be delivering the two opening keynotes.

Jay Kreps opens the event on day 1, Tuesday 13th November. The ex-Lead Architect for Data Infrastructure at LinkedIn, co-creator of Apache Kafka, and co-founder & CEO of Confluent will take to the stage in the keynote theatre at 09:30.

Michael Stonebraker, the Turing Award winner, IEEE John von Neumann Medal holder, co-founder of Tamr, and professor at MIT, will address the keynote theatre at 09:30 on day 2, Wednesday 14th November.

Tuesday, 13 March 2018

Azure Cosmos DB Data Explorer

A new tool is available to use. Data Explorer provides a rich and unified experience for inserting, querying, and managing Azure Cosmos DB data within the Azure portal. Data Explorer brings together three tools: Document Explorer, Query Explorer, and Script Explorer.

Monday, 12 March 2018

Apache Hive and HDInsight

Apache Hive is a data warehouse system for Hadoop. Hive enables data summarization, querying, and analysis of data. Hive queries are written in HiveQL, a query language similar to SQL. Hive maintains only metadata information about your data stored on HDFS. Apache Spark has built-in functionality for working with Hive, and HiveQL can be used to query data stored in HBase. Hive can handle large data sets, but the data must have some structure. Query execution can be via Apache Tez, Apache Spark, or MapReduce.

There are two types of tables within Hive.
  • Internal: Data is stored in the Hive data warehouse, located at /hive/warehouse/ on the default storage for the cluster. Internal tables are mainly for temporary data.
  • External: Data is stored outside the data warehouse, used when the data is also accessed outside of Hive or needs to stay in its underlying location.

A Hive table consists of a schema stored in the metastore and the data is stored on HDFS. The supported file formats are Text File, SequenceFile, RCFile, Avro Files, ORC Files, Parquet, Custom INPUTFORMAT and OUTPUTFORMAT.
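The separation of schema (in the metastore) from data (files on storage) can be sketched in plain Python; the directory layout, dictionary metastore, and helper below are hypothetical stand-ins, with a temp directory playing the role of HDFS:

```python
import csv
import os
import tempfile

warehouse = tempfile.mkdtemp()  # stands in for /hive/warehouse/ on HDFS
metastore = {}                  # stands in for the Hive metastore

def create_table(name, columns, external_location=None):
    # Internal tables live under the warehouse; external tables just
    # point the schema at files that live (and stay) somewhere else.
    location = external_location or os.path.join(warehouse, name)
    os.makedirs(location, exist_ok=True)
    metastore[name] = {"columns": columns, "location": location,
                       "external": external_location is not None}

create_table("visits", ["page", "count"])
data_file = os.path.join(metastore["visits"]["location"], "part-0000.csv")
with open(data_file, "w", newline="") as f:
    csv.writer(f).writerows([["home", 10], ["blog", 42]])

# Dropping an internal table would delete its files too; dropping an
# external table would remove only the metastore entry.
print(metastore["visits"]["external"])  # → False
```

This is why external tables suit data shared with other tools: the table definition can come and go while the files remain in place.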

Apache Hive and HiveQL are available on Azure HDInsight.