Posted: 21st of November 2017 by Aimi Walker
The Data Revolution: Highlights from Big Data LDN, Day 1
In its second year, Big Data LDN took over an entire wing of Kensington Olympia for its conference and exhibition. The event brings industry experts together with the analytics community to share stories and knowledge, this year built around four key themes. Over the two-day event, those themes were fast data, machine learning and AI, self-service analytics, and the data-driven capital city.
Learning all things Data Analytics
Starting at the Keynote Theatre for the opening seminar on day one, I learnt about the rise of the streaming platform and how companies are restructuring their internal systems around data streams. Storing data in an open platform has become the dominant way companies manage their data, despite disparate restrictions and data rules at the global scale. Speaking specifically about processing data in retail, Neha Narkhede from Confluent discussed its impact on monitoring the supply chain and customers.
Then it was over to the Self-Service Theatre for a talk by Mark Dalton at DOMO about data governance and self-service analytics at scale. He questioned how effective companies are at doing this, then explained that it doesn't have to be one camp or the other. He argued that the two supposedly divided camps, data governance controlled by an IT team versus the freedom of data democratisation, are an AND, not an OR; what matters most is having the confidence to trust the data.
Moving into the AI Lab Theatre, Alexandre Hubert spoke about applying advanced analytics to marketing attribution. With so much demand for growth across companies, the channels in which they choose to spend their money are critical to staying aligned with and supporting their strategy.
With the rapid expansion of digitisation itself, the number of consumer touch-points is growing so quickly that there's barely enough time to monitor them all! His solution is to use probabilistic models to drive decision-making around the distribution of the marketing budget, and he showed demonstrations of how this has been implemented at Dataiku.
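As a rough illustration of the general idea (this is my own minimal sketch, not Dataiku's actual implementation), one simple probabilistic approach estimates each channel's "removal effect": how much the overall conversion rate would drop if that channel vanished from customers' journeys, then splits the budget in proportion. The journey data below is invented for the example:

```python
def conversion_rate(journeys, excluded=None):
    """Fraction of journeys that convert. If `excluded` is given, any
    journey that touched that channel is treated as non-converting,
    a crude 'removal effect' in the spirit of Markov attribution."""
    hits = sum(
        1 for path, converted in journeys
        if converted and (excluded is None or excluded not in path)
    )
    return hits / len(journeys)

def budget_weights(journeys):
    """Allocate budget in proportion to each channel's removal effect."""
    base = conversion_rate(journeys)
    channels = {c for path, _ in journeys for c in path}
    effects = {c: base - conversion_rate(journeys, excluded=c) for c in channels}
    total = sum(effects.values())
    return {c: e / total for c, e in effects.items()} if total else {}

# Invented customer journeys: (channels touched, did they convert?)
journeys = [
    (["search", "display"], True),
    (["social"], True),
    (["display"], False),
    (["search"], True),
    (["social", "display"], False),
]
weights = budget_weights(journeys)  # "search" carries the most weight here
```

Real attribution models are considerably richer (full Markov chains, Shapley values, time decay), but the core logic of scoring channels by their marginal contribution to conversion is the same.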
Hackathons and vanity metrics
Back in the Self-Service Theatre, complete with headphones as the noise of the exhibition built around the space, we listened to Christopher Dean on introducing hackathons at Travis Perkins, and how he has created a space where his data team can learn and take ownership of new discoveries as they develop their data knowledge.
His advice for developing your own in-house hackathon: start small, demonstrate value, fail fast, keep it simple, focus on special interest groups, and define it as a centre of excellence where your team can inject insights from everywhere. With 20 businesses across the group, there is no shortage of big data opportunities.
Remaining seated in the Self-Service Theatre, Kabalan Casparo at Looker spoke about ignoring vanity metrics, which are arguably only useful to external audiences such as investors, and prioritising metrics which inform decisions. He used a metaphor everybody can relate to: vanity metrics are like grading systems at school, too broad, with too little insight into what contributes to the final score and how much room there is to raise that grade.
When metrics are kept vague and abstract rather than completely defined, the answers they give are unclear. By putting more specific definition and reasoning behind your metrics, you can explore the impact of your assumptions on a particular goal and drill down deeper into actionable metrics. He championed visibility of the customer experience to identify warning signs earlier, which he felt had to come from the culture of the organisation and not just the tech.
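To make the distinction concrete, here is a toy example of my own (with made-up signup data): the headline signup count is a vanity metric, while activation rate broken down by acquisition channel actually points at a decision:

```python
from collections import defaultdict

# Hypothetical signup records: (acquisition_channel, did the user activate?)
signups = [
    ("ads", False), ("ads", False), ("ads", True),
    ("referral", True), ("referral", True), ("referral", False),
]

# Vanity metric: one big number that informs no decision on its own.
total_signups = len(signups)

# Actionable metric: activation rate per channel, which tells you
# where acquisition spend is actually producing engaged users.
by_channel = defaultdict(lambda: [0, 0])  # channel -> [activated, total]
for channel, activated in signups:
    by_channel[channel][0] += activated
    by_channel[channel][1] += 1
activation = {c: done / n for c, (done, n) in by_channel.items()}
```

Here the referral channel activates users at twice the rate of ads, a drill-down the raw signup total would never reveal.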
The Fast Data Theatre
Venturing into the Fast Data Theatre for the first time, Graham Sharpe at ADIDAS demystified the industry buzzword 'Big Data' and what it means for them. The positives he had experienced in the retail space included accelerating the building of customer relationships and targeting more specific customer bases using complex segmentations that go beyond the simple categories of age, gender and location.
He spoke about the customer's social identity online, which lets brands use data to unlock drivers, buying triggers and motivations, all of which is valuable in progressing a customer insight strategy. His tips: don't expect all your questions to be answered simply by new tech, since success fundamentally comes down to the cleanliness of the data, and don't let hype or politics drive the decision to start using big data.
After some lunch, it was time to cut across to the AI Lab Theatre once again for a talk on unleashing a data science team on your organisation with Dimitris Pertinis at Telegraph Media, in partnership with Syntasa. He spoke about the ROI pressures a new team faces when it is brought into the business, and how to use your data science team effectively: focus them on the questions which matter, outsource more of the routine database tasks externally, and apply business logic when exploring.
Wrapping up Big Data LDN Day 1
Once again returning to the Self-Service Theatre, Rich Dill from Snaplogic gave insight into the new dominant companies and how they are thriving on their use of and approach to data, both pre- and post-analysis. He spoke about the advancement of traditionally offline industries into online giants, and how platforms have been upgraded to meet the customer demand for ease.
He campaigned for the adoption of self-service analytics on the basis that more heads are better than one, and also spoke about technology as a tool: only by using the right one for the job will you achieve greatness with your data. He described how software evolves like a funnel, with later releases imposing fewer restrictions, and asked data-driven companies to be patient with a constantly changing industry in order to be successful.
Lastly, it was time to talk about the endless applications of data science beyond the commonly held view of it as simply strong statistics, and the qualities which make a data scientist different from a statistician. He proposed that, as humans, we need to be asking deeper questions, whilst computers should be doing the coding and automating insights. If we can reach a stage where computers think like we do, they can begin to reveal insights automatically without supervision, instead of purely following protocols and processes.
He also spoke about the need to convince your organisation to implement whatever it is you've discovered through analytics, because despite your hard work, your job is not finished until this is achieved. For the hoped-for outcome to be realised, reports should be interactive documents which change and evolve for stakeholders.
Stay tuned for insights on Big Data LDN Day 2!
In the meantime, get in touch with Aimi for all things Data & Web Analytics...