How we used Bayesian models to balance customer experience and courier earnings at Glovo

Javier Mas Adell ’17 (Data Science)

Neon sign depicts Bayes' Theorem

Glovo is a three-sided marketplace composed of couriers, customers, and partners. Balancing the interests of all sides of our platform is at the core of most strategic decisions taken at Glovo. To balance those interests optimally, we need to understand quantitatively the relationship between the main KPIs that represent the interests of each side.

I recently published an article on Glovo’s Engineering blog where I explain how we used Bayesian modeling to help us tackle the modeling problems we were facing due to the inherent heterogeneity and volatility of Glovo’s operations. The example in the article talks about balancing interests on two of the three sides of our marketplace: the customer experience and courier earnings.

The skillset I developed during the Barcelona GSE Master’s in Data Science is what has enabled me to do work like this, which requires knowledge of machine learning as well as other fields like Bayesian statistics and optimization.

Connect with the author


Javier Mas Adell ’17 is Lead Data Scientist at Kannact. He is an alum of the Barcelona GSE Master’s in Data Science.

Statistical Racism

Nandan Rao ’17 (Data Science) has posted a simulation over on the BGSE Data Science blog to see if racial profiling really helps catch more criminals.

Source: Nandan Rao ’17

“In the real-life algorithms being implemented by police departments, as in our toy simulation, the data used to find criminals is not the data on crimes, but the data on crimes caught.”

Read the post and see the code he uses to produce the simulation and graphics over on the BGSE Data Science blog.


A Bayesian Search for the Needle in the Haystack

Master project by Timothée Stumpf-Fétizon. Barcelona GSE Master’s Degree in Data Science

Editor’s note: This post is part of a series showcasing Barcelona GSE master projects by students in the Class of 2015. The project is a required component of every master program.


Author: 
Timothée Stumpf-Fétizon

Master’s Program:
Data Science

Paper Abstract:

I develop an extension to Monte Carlo methods that sample from large and complex model spaces. I assess the extension using a new and fully functional module for Bayesian model choice. In standard conditions, my extension leads to an increase of around 30 percent in sampling efficiency.

Presentation Slides:

[slideshare id=51095167&doc=bayesian-search-needle-haystack-slides-150730103703-lva1-app6891]

This is work in progress and there is no telling whether the rule works better in all situations!

If you’re interested in using BMA in practice, you can fork the software on my GitHub (working knowledge of Python required!).

Variance: regression, clustering, residual and variance – Liyun Chen ’11

Liyun Chen ’11 (Economics) is Senior Analyst for Data Science at eBay. She recently moved from the company’s offices in Shanghai, China to its headquarters in San Jose, California. The following post originally appeared on her economics blog in English and in Chinese. Follow her on Twitter @cloudlychen.


Variance is an interesting word. When we use it in statistics, it is defined as the “deviation from the center”, which corresponds to the formula $\sum_i (x_i - \bar{x})^2 / (n-1)$, or in matrix form $\mathrm{Var}(X) = E(X^2) - (E(X))^2 = X'X/N - (X'\mathbf{1}/N)^2$, where $\mathbf{1}$ is an $N \times 1$ column vector of ones. By definition it is the second (order) central moment, i.e. the sum of squared distances to the center. It measures how much the distribution deviates from its center: the larger, the sparser; the smaller, the denser. This is how it works in the one-dimensional world, and many of you will already be familiar with it.
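As a quick numerical check, here is a minimal sketch in Python (an illustration added for this digest, not from the original post) comparing the two formulas above. Note that the first divides by n - 1 (sample variance) while the moment form divides by N, so they differ by a factor of (n - 1)/n.

```python
import numpy as np

x = np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=1000)
n = len(x)

# Sample variance: sum of squared deviations from the mean, divided by n - 1
sample_var = np.sum((x - x.mean()) ** 2) / (n - 1)

# Moment / "matrix" form: E(X^2) - (E(X))^2 = X'X/N - (X'1/N)^2, which divides by N
moment_var = x @ x / n - (x.sum() / n) ** 2

print(sample_var)                 # matches np.var(x, ddof=1)
print(moment_var)                 # matches np.var(x)
print(moment_var * n / (n - 1))   # rescaling by n/(n-1) recovers the sample variance
```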

Variance has a close relative called the standard deviation, which is essentially the square root of the variance, denoted by $\sigma$. There is also the well-known six-sigma idea, which comes from the 6-sigma coverage of a normal distribution.


Okay, enough on the single-dimension case. Let’s look at two dimensions. Usually we can visualize the two-dimensional world with a scatter plot. Here is a famous one: Old Faithful.

Old Faithful is a “cone geyser located in Wyoming, in Yellowstone National Park in the United States (wiki)… It is one of the most predictable geographical features on Earth, erupting almost every 91 minutes.” There are about two hundred points in this plot. It is a very interesting graph that can tell you a lot about variance.

Here is the intuition. Try to describe this chart in natural language (rather than in statistical or mathematical terms), for example when you take your six-year-old kid to Yellowstone and he is waiting for the next eruption. What would you tell him if you had this data set? Perhaps “I bet the longer you wait, the longer the next eruption lasts. Let’s count the time!” Then the kid glances at your chart and says, “No. It tells us that if we wait for more than one hour (70 minutes), then the next eruption will be a long one (4-5 minutes).” Which way is more accurate?

Okay… enough playing with kids. Let’s consider the scientific way. Put plainly: which model will give us a smaller variance after processing?

Well, regression first, as always. Such a strong positive relationship, right? (No causality… just correlation.)


Now we obtain a significantly positive line, though the R-squared from the linear model is only 81% (could it be fitted better?). Let’s look at the residuals.
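For readers who want to reproduce this fit, here is a minimal sketch in Python (the original post used R; the snippet below is an illustration added for this digest and pulls the same faithful data set from the Rdatasets repository):

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Old Faithful data: eruption duration and waiting time, both in minutes
faithful = sm.datasets.get_rdataset("faithful", "datasets").data

# Regress waiting time on eruption duration
ols_fit = smf.ols("waiting ~ eruptions", data=faithful).fit()
print(ols_fit.params)      # intercept and slope (the slope is roughly 10)
print(ols_fit.rsquared)    # roughly 0.81, the 81% quoted above
residuals = ols_fit.resid  # the residuals inspected in the next step
```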

It looks like the residuals are sparsely distributed… (the ideal residual is white noise, which carries no information). In this residual chart we can roughly identify two clusters, so why don’t we try clustering?

Before running any program, let’s quickly review the foundations of the K-means algorithm. In a 2-D world, we define the center as $(\bar{x}, \bar{y})$, and the 2-D variance is then the sum of squared distances of each point to that center.
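Spelled out (in my notation, not the author’s), that 2-D sum of squares around the center $(\bar{x}, \bar{y})$ is:

$$\mathrm{SS} = \sum_{i=1}^{n}\left[(x_i - \bar{x})^2 + (y_i - \bar{y})^2\right]$$

K-means minimizes the within-cluster version of this quantity, summed over the clusters.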

The blue point is the center. No need to worry too much about outliers’ impact on the mean… it looks fine for now. Wait… doesn’t it feel like the starry sky at night? Just a quick detour; I promise I will get back to the key point.

 


For a linear regression model, we look at the sum of squared residuals: the smaller it is, the better the fit. For clustering methods we can use a similar measure: the sum of squared distances to the center within each cluster. K-means is computed by numerical iteration, and its goal is to minimize exactly this second central moment (that is its loss function). Let’s try clustering these stars into two galaxies.
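A minimal sketch of this step in Python (again an illustration, not the original code), running K-means with two clusters on the raw coordinates and reporting the within-cluster sum of squares that the algorithm minimizes:

```python
import statsmodels.api as sm
from sklearn.cluster import KMeans

faithful = sm.datasets.get_rdataset("faithful", "datasets").data
X = faithful[["eruptions", "waiting"]].to_numpy()

# Two clusters, matching the two groups visible in the scatter and residual plots;
# no scaling is applied here, so the waiting dimension dominates the distances
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(km.cluster_centers_)  # the two "galaxy" centers
print(km.inertia_)          # within-cluster sum of squared distances: the K-means loss
```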

After clustering, we can calculate the residuals similarly: the distance to the center (which represents each cluster’s position). Then we plot the residuals.

 

The red points come from K-means, while the blue ones come from the previous regression. They look similar, right?… So, back to the conversation with the kid: both of you are right, with about 80% accuracy.

Shall we do the regression again for each cluster?

Not much improvement. After clustering + regression, the R-squared increases to 84% (+3 points). This is because within each cluster it is hard to find any linear pattern in the residuals: the regression slope drops from 10 to 6 and 4 respectively, and each sub-regression delivers an R-squared of less than 10%… so there is not much information left after clustering. Still, it is certainly better than a single regression. (The reason we use K-means rather than a simple rule like x > 3.5 is that K-means gives the optimal clustering with respect to its loss function.)
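The cluster-then-regress step can be sketched like this (illustrative Python, not the author’s code); it fits a separate line inside each K-means cluster so you can compare the within-cluster slopes and R-squared values with the numbers above:

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.cluster import KMeans

faithful = sm.datasets.get_rdataset("faithful", "datasets").data
faithful["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    faithful[["eruptions", "waiting"]]
)

# One regression per cluster: slope and R-squared drop sharply within clusters
for label, group in faithful.groupby("cluster"):
    fit = smf.ols("waiting ~ eruptions", data=group).fit()
    print(label, fit.params["eruptions"], fit.rsquared)
```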

Here is another question: why don’t we cluster into 3 or 5 groups? That is mostly a matter of overfitting… there are only about 200 points here. With a larger sample we could try more clusters.

Fair enough. Of course statisticians won’t be satisfied with these findings. The residual chart reveals an important piece of information: the distribution of the residuals is not standard normal (not white noise). They call this heteroscedasticity. It comes in many forms; the simplest is that the residual variance increases as x increases. Other cases are shown in the following figure.

The existence of heteroscedasticity makes our model (fitted on the training data set) less efficient. I would say that statistical modelling is the process of fighting with the residuals’ distribution: if we can diagnose any pattern in the residuals, there is a way to improve the model. Econometricians like to call the residuals the “rubbish bin”, yet in some sense it is also a gold mine. Data is a limited resource… wasting it is a luxury.
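One standard way to diagnose such a pattern formally is a heteroscedasticity test such as Breusch-Pagan, which regresses the squared residuals on the covariates. A minimal sketch with simulated data (an illustration added for this digest):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Simulated data where the noise scale grows with x (the "simplest" case above)
rng = np.random.default_rng(0)
x = rng.uniform(1, 10, size=500)
y = 2 + 3 * x + rng.normal(scale=x)

X = sm.add_constant(x)
fit = sm.OLS(y, X).fit()

# A small p-value signals heteroscedasticity
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print(lm_pvalue)
```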

Some additional notes…

Residuals and the model: as long as the model is predictive, residuals exist, regardless of the model’s type, whether a tree, a linear model or anything else. A residual is simply the true Y minus the predicted Y (on the training data set).

Residuals and the loss function: for ordinary least squares, if you solve it numerically, the iterations minimize the SSR (sum of squared residuals) loss function, which is proportional to the variance of the residuals. In fact many machine learning algorithms rely on a similar loss-function setup, based on first- or higher-order moments of the residuals. From this perspective, statistical modelling is always fighting with residuals. This differs from what econometricians do, hence the long debate about the trade-off between consistency and efficiency: fundamentally different beliefs about modelling.
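To make the “iterating on an SSR loss” point concrete, here is a tiny illustrative sketch that recovers the OLS coefficients by numerically minimizing the sum of squared residuals and checks them against the closed-form normal-equations solution:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.5]) + rng.normal(size=200)

# SSR loss: the quantity OLS minimizes
def ssr(beta):
    resid = y - X @ beta
    return resid @ resid

numerical = minimize(ssr, x0=np.zeros(2)).x       # iterative solution
closed_form = np.linalg.solve(X.T @ X, X.T @ y)   # normal equations
print(numerical, closed_form)                     # essentially identical
```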

Residuals, Frequentists and Bayesians: in the paragraphs above I mainly followed the Frequentist’s language; there was nothing about posteriors. To my understanding, many items here have mathematically equivalent counterparts in the Bayesian framework, so it should not matter much. I will mention some Bayesian ideas in the following bullets, so read on as you wish.

Residuals, heteroscedasticity and robust standard errors: we love and hate heteroscedasticity at the same time. It tells us that our model is not perfect, and also that there is a chance to improve it. Last century, people tried to offset the impact of heteroscedasticity by introducing robust standard errors, i.e. heteroscedasticity-consistent standard errors such as Eicker-Huber-White. Eicker-Huber-White modifies the usual sandwich matrix (bread and meat) used for significance tests (you can play with it using the sandwich package in R). Although Eicker-Huber-White corrects the variance estimate by re-weighting with the estimated residuals, it does not try to identify any pattern in the residuals. Hence there are methods like generalized least squares (GLS) and feasible generalized least squares (FGLS) that try to exploit a linear pattern in order to reduce the variance. Another interesting idea is the cluster-robust standard error, which allows heterogeneity across clusters but assumes constant variance within each cluster. This approach only works asymptotically, as the number of groups goes to infinity (otherwise you will get silly numbers, like I did!).
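These corrections are available outside R as well; the sketch below (illustrative only, with invented variable names) compares classical, heteroscedasticity-consistent and cluster-robust standard errors in statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: 50 groups and noise whose scale depends on x
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "x": rng.normal(size=1000),
    "group": rng.integers(0, 50, size=1000),
})
df["y"] = 1 + 0.5 * df["x"] + rng.normal(scale=1 + df["x"].abs())

model = smf.ols("y ~ x", data=df)
plain = model.fit()                                      # classical standard errors
hc = model.fit(cov_type="HC1")                           # Eicker-Huber-White (sandwich)
clustered = model.fit(cov_type="cluster",
                      cov_kwds={"groups": df["group"]})  # cluster-robust

for res in (plain, hc, clustered):
    print(res.bse["x"])  # compare the standard error on the slope
```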

Residuals and dimension reduction: generally speaking, the more relevant covariates we introduce into the model, the less noise remains, but there is also a trade-off against overfitting. That is why we need to reduce the dimension (e.g. via regularization). Moreover, we do not always want to make a prediction; sometimes we only want to filter out the significant features, i.e. maximize the information we can extract from a model (e.g. via AIC or BIC, or by watching how quickly coefficients shrink as the regularization penalty increases). Also, regularization is not necessarily tied to the train/validation split; the goals are not the same.
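As an illustration of regularization-driven feature filtering (again a sketch with simulated data, not from the original post), a cross-validated Lasso pushes the coefficients of irrelevant covariates to exactly zero:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# 100 candidate covariates, only the first 5 actually matter
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 100))
beta = np.zeros(100)
beta[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]
y = X @ beta + rng.normal(size=300)

lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_)  # indices of features kept by the Lasso
print(selected)                         # mostly the first five; noise features dropped
```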

Residuals and the analysis of experimental data: heteroscedasticity does not affect the consistency of the Average Treatment Effect estimate in an experimental analysis; that consistency comes from randomization. However, people are eager to learn more than a simple test-versus-control comparison, especially when the treated individuals are very heterogeneous; they look for heterogeneous treatment effects. Quantile regression may help in some cases when a strong covariate is observed… but what can we do when there are thousands of dimensions? Reduce the dimension first?
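A quick sketch of the quantile-regression idea (illustrative, simulated data): when the treatment shifts the upper tail more than the median, the treatment coefficient varies across quantiles, which a single ATE would average away.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated experiment: the treatment mostly shifts the upper tail of the outcome
rng = np.random.default_rng(4)
n = 2000
treated = rng.integers(0, 2, size=n)
y = rng.normal(size=n) + treated * rng.exponential(scale=1.0, size=n)
df = pd.DataFrame({"y": y, "treated": treated})

for q in (0.25, 0.5, 0.9):
    fit = smf.quantreg("y ~ treated", df).fit(q=q)
    print(q, fit.params["treated"])  # the estimated effect grows with the quantile
```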

Well, the first reaction to “heterogeneous” should be variance… right? Otherwise, how could we quantify heterogeneity? There is also a bundle of papers trying to see whether we can extract more information about treatment effects than the simple ATE. This one, for instance:

Ding, P., Feller, A., and Miratrix, L. W. (2015+). Randomization Inference for Treatment Effect Variation. http://t.cn/RzTsAnl

View full code in the original post on Ms. Chen’s blog

Can big data be official? – Barcelona GSE Data Scientists

Originally posted by Stefano Costantini ’15 on the Barcelona GSE Data Scientists blog. Stefano is on Twitter @stefanoc.

At the Renyi Hour on November 13th 2014, Frederic Udina gave a talk on big data and official statistics. Apart from being a professor at UPF and BGSE, Frederic is Director of IDESCAT, the statistical institute of Catalonia.

Frederic Udina presenting to BGSE Data Science students

In his talk, Frederic compared “traditional” official statistics, which are slow to produce but have well-defined privacy limits and access rights, with “big data”, which is fast to produce, volatile, and has fuzzy privacy limits. Frederic highlighted the tension between these two worlds, focusing particularly on the need for official statistics to become easier to collect, organise and customise to the needs of the final user. In particular, Frederic identified an opportunity for IDESCAT (and other statistical institutes) to integrate officially collected information with alternative information sources, such as:

  • Administrative data
  • Data freely available from society
  • Data from private companies

Frederic outlined IDESCAT’s plan to move away from the current data generation system (the ‘stove pipe model’) which is slow, expensive and inefficient as it does not re-use information already collected, towards a fully integrated model (‘Plataforma Cerdà’) where any new information needs to be integrated with existing data.

The Renyi hour crowd

Frederic noted that data is becoming increasingly important in society, and this is beginning to be recognised by official statistical institutions. In particular, Frederic discussed the Royal Statistical Society’s Data Manifesto, where the RSS notes that data is:

  • A key tool for better, informed policy-making
  • A way to strengthen democracy and trust
  • A driver of prosperity.

The Royal Statistical Society Data Manifesto

Frederic also stressed the importance of confidentiality and privacy issues with regards to data availability. While it is desirable for some data to be freely available to the public, confidentiality and privacy should always be protected. However, it is important to strike the right balance between access and privacy, ensuring that while personal sensitive data is protected, important information is not prevented from being used in ways that may ultimately help the wider society. Personal health records are a classic example of this.

Frederic concluded his talk by providing some examples of national statistical authorities integrating official statistics with widely available information to carry out new and interesting analyses. Examples include:

  • Producing origin/destination matrices between territorial units (usually municipalities) for work or study commuting, using mobile phone trajectories (ISTAT, Statistics New Zealand)
  • Using Google Trends to estimate/predict labour market indicators: monthly forecasts, small-area estimation (ISTAT)
  • Measuring the use of ICT in firms, using web scraping and text mining techniques

Lunch with Frederic after his talk

Useful links: