Principal Data Scientist and Executive Advisor

Kirk Borne is the Principal Data Scientist and an Executive Advisor at the global technology consulting firm Booz Allen Hamilton. He has a PhD in Astrophysics, with 20 years of service on NASA astronomy data systems, followed by 12 years as Professor of Astrophysics and Data Science at George Mason University. He has been at Booz Allen since 2015, where he advises, mentors, and trains numerous clients and colleagues in data science. Kirk is a global speaker and a worldwide top influencer in big data and data science.


10:00 AM | Track #3

Many unsupervised learning models can converge more readily and deliver more value if we know in advance which parameterizations are best to choose. If we cannot know that (i.e., because it truly is unsupervised learning), then we would at least like to know that our final model is optimal (in some way) at explaining the data. In both of these applications (supervised and unsupervised machine learning), if we don't have these initial insights and validation metrics, then how does such model-building get started and move toward the optimal solution? This challenge is known as the cold-start problem!

The solution to the problem is easy (sort of): we make a guess — an initial guess! Usually, that would be a totally random guess. We will itemize several examples at the end. But before we do that, let's address the objective function. That is the true key that unlocks performance in a cold-start challenge. It's the magic ingredient in most of the examples that we will list.
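The interplay described above — a random initial guess refined step by step under the guidance of an objective function — can be sketched in a few lines. The following is a minimal illustrative example (not from the talk itself): a simple random-search optimizer in plain Python, where `objective` is a hypothetical stand-in for whatever quantity measures how well a model explains the data.

```python
import random

def objective(x):
    # Hypothetical objective: squared distance from an (unknown to the
    # optimizer) optimum at x = 3.0. In a real model, this might be a
    # clustering inertia, a log-likelihood, or a validation error.
    return (x - 3.0) ** 2

def cold_start_search(steps=2000, step_size=0.1, seed=0):
    rng = random.Random(seed)
    # Cold start: a totally random initial guess, since we have no
    # prior insight about good parameterizations.
    x = rng.uniform(-10.0, 10.0)
    best = objective(x)
    for _ in range(steps):
        # Propose a small random perturbation of the current guess.
        candidate = x + rng.uniform(-step_size, step_size)
        score = objective(candidate)
        if score < best:
            # The objective function is what tells us this move helped;
            # without it, the search would have no direction at all.
            x, best = candidate, score
    return x, best

x, best = cold_start_search()
print(f"found x = {x:.3f}, objective = {best:.5f}")
```

The point of the sketch is that the random start alone gets us nothing; it is the objective function, evaluated at each step, that converts blind guessing into convergence.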