Want To Get A Tip As An Uber Driver? Don’t Pick Up A Shared Ride.
June 6th, 2019
Long story short:
All of the other economics aside, if you’re a rideshare driver in Chicago and you want to get tips, don’t pick up riders who opt for shared rides.
Here are some of the factors (positive and negative) and their importance in determining whether or not a rideshare driver gets a tip:
Looking more closely at times when shared trips are authorized, we can see that riders who opt in for pooling are pretty stingy with their tips.
On average, about 17.3% of rideshare trips end up with the driver getting tipped. For trips where a shared trip was authorized, though, that number is halved to a measly 8.6%.
With that being said, there are all kinds of caveats to consider before we can definitively say that shared trip riders are tipping Scrooges. Details below.
Long Story Much Longer:
Last month, the City of Chicago became the first city in the country to publish “comprehensive data on Transportation Network Providers like Uber, Lyft, and Via on the City’s open data portal.” The data is a veritable treasure trove of insights into how we use rideshare services. Chicago publishes three anonymized datasets relevant to this analysis, all of which are worth exploring:
While all of the datasets are robust (featuring 5.18 million vehicles, 5.13 million drivers, and 45.3 million trips¹), the trips dataset is far-and-away the largest and comes with some of the most interesting data. One particularly interesting field is the Tip field, which raises the question: can we predict whether or not a rideshare trip will end with the driver receiving a tip?
Here’s what we’ll cover:
Exploratory analysis
Model selection
Model performance
Improvements
Exploratory Analysis
To begin, we need to take care of a small bit of data transformation. Our question, “can we predict whether or not a rideshare trip will end with the driver receiving a tip?”, is a classic classification problem. However, the Tip field is a continuous variable, which lends itself better to a regression question (such as “can we predict how much of a tip a driver will receive?”). Before anything else, we’ll transform Tip into a binary field called tip_given, labeled 0 if the Tip field is equal to $0.00 or left blank and 1 for any other value. From here on, we’ll frame every question in terms of whether or not a tip was given.
When performing an exploratory analysis on a problem that involves time, time is one of the first places I like to start. In this case, I’m curious: when did rideshare trips occur?
It’s interesting to note when exploring the graph that there are a few major peaks and valleys. Here’s what those peaks and valleys look like when labeled with dates:
Almost all of those dates are immediately meaningful:
November 22, 2018 was Thanksgiving
December 25, 2018 was Christmas
December 31, 2018 was New Year’s Eve
March 16, 2019 was St. Patrick’s Day²
The only date that is not immediately recognizable is January 30, 2019. A little bit of Googling tells us that a cold front came through Chicago on January 30, bringing the city’s coldest temperatures in 34 years. Just by looking at a time plot, we have two avenues to explore:
How do holidays affect tips?
How does weather affect tips?
We’ll explore the first question, but save an exploration of weather for further analyses. We can explore holidays by checking to see how the percentage of tips changes on days when there is a holiday vs. days when there is not a holiday. While we see peaks and valleys on the four days listed above, we’ll explore all major holidays that occur in the dataset. Here’s how that looks:
At first glance, it appears that there is barely an effect on tipping when it is a holiday. However, the story might change if we disaggregate the data and look at each holiday individually.
Now we see major differences appear. It appears that people are particularly generous tippers on New Year’s Eve and a bit more generous than usual on Christmas and New Year’s Day. However, on Valentine’s Day we tip a bit worse than usual. These are all variables that we should consider for modeling.
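As a sketch of how that disaggregation might look in pandas, something like the following works; the holiday list and column names are assumptions based on the dates discussed above.

```python
# Flag each trip with the holiday it fell on, if any, then compare tip rates
trips["trip_start_timestamp"] = pd.to_datetime(trips["trip_start_timestamp"])
holidays = {
    "2018-11-22": "Thanksgiving",
    "2018-12-25": "Christmas",
    "2018-12-31": "New Year's Eve",
    "2019-01-01": "New Year's Day",
    "2019-02-14": "Valentine's Day",
}
trips["holiday"] = (
    trips["trip_start_timestamp"].dt.strftime("%Y-%m-%d").map(holidays).fillna("None")
)
print(trips.groupby("holiday")["tip_given"].mean().sort_values(ascending=False))
```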
Considering time again, we should take a look at how the density of tippers changes by date. We can simultaneously consider how the time of day of a trip affects tips.
It appears that trips in the holiday season are only slightly more likely to tip. Trips during normal working hours (approximately between 9 am and 9 pm) also appear more likely to tip. These patterns hold true when looking at the date and time that trips end as well.
We can also explore how the day of the week of a trip affects tipping. My expectation is that certain days of the week — perhaps Friday and Saturday — would lend themselves to people being more likely to tip than not.
However, that does not appear to be the case. There is a slight uptick in the percentage of trips that lead to a tip on Saturdays and a slight downtick in the percentage of trips that lead to a tip on Sundays and Mondays, but visually it appears that the day of the week does not influence tipping patterns.
We can now continue to explore other variables. The dataset includes a trip_seconds field. Understanding the relationship between the length of a trip and whether or not a tip is given would be valuable. We’ll display the data in minutes to make it easier to read.
It appears that there is a clear difference here. A trip longer than 20 minutes is more likely to receive a tip than a trip shorter than 20 minutes.
A very similar pattern is apparent when exploring the trip_miles field.
It appears that trips longer than 10 miles are more likely to receive a tip than trips shorter than 10 miles. However, we might also notice an increase in the density of trips at approximately 16 to 19 miles. This is extremely curious and worth further exploration. First, though, we should understand how the relationship between distance and time interacts with whether or not a tip is given. We can do this by calculating the average_speed for each trip.
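A quick sketch of that derived field, assuming the trip_seconds and trip_miles columns as published:

```python
# Drop zero-length trips to avoid dividing by zero, then derive minutes and mph
trips = trips[trips["trip_seconds"] > 0].copy()
trips["trip_minutes"] = trips["trip_seconds"] / 60
trips["average_speed"] = trips["trip_miles"] / (trips["trip_seconds"] / 3600)
```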
The pattern that we observed for trip_seconds and trip_miles — a greater likelihood of a tip above a certain threshold — appears to hold true for average_speed as well. Above ~20 miles per hour, a driver is more likely to earn a tip than below that threshold. The pattern looks slightly different for trips below ~10 miles per hour, but we’ll ignore this for the sake of simplification.
Next, we should spend time diving deeper into the mysterious increase in trips between 16 and 19 miles. We can get a better understanding of this by counting all dropoff_centroid_location values for trips between 16 and 19 miles.
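Something along these lines is all it takes, assuming the dropoff_centroid_location column name from the portal:

```python
# Count drop-off centroids for the curious 16-19 mile trips
mask = trips["trip_miles"].between(16, 19)
print(trips.loc[mask, "dropoff_centroid_location"].value_counts(dropna=False).head(10))
```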
It is very clear that the location at POINT (-87.9030396611 41.9790708201) has an outsized number of drop-offs compared to other points. A little bit of investigation shows us that this coordinate corresponds to O’Hare International Airport.
The large number of NULL values is explained by how the data has been anonymized: pick-up and drop-off locations outside of Chicago are not provided. A bit of further investigation shows that POINT (-87.913624596 41.9802643146) is essentially also part of O’Hare. We can compare how trips that end at O’Hare tip versus all other trips.
Clearly, riders heading to the airport are much more likely to tip than those going to all other destinations. It is reasonable to assume that if drop-offs to O’Hare are more likely to tip, then there would likely be a difference for pick-ups from O’Hare as well.
This appears to be true as well. When modeling, we should build flag fields for whether or not a trip is beginning or ending at O’Hare. In further modeling, it would be worthwhile exploring other pick-up and drop-off locations (and groups of locations) to determine if they have an effect on tipping.
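Here’s a rough sketch of those flag fields; the two centroid strings are the O’Hare points identified above, and the exact WKT formatting of the column values is an assumption.

```python
# Centroids that we identified as O'Hare International Airport
ohare_points = {
    "POINT (-87.9030396611 41.9790708201)",
    "POINT (-87.913624596 41.9802643146)",
}
trips["airport_pickup"] = trips["pickup_centroid_location"].isin(ohare_points).astype(int)
trips["airport_dropoff"] = trips["dropoff_centroid_location"].isin(ohare_points).astype(int)
```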
There are a couple of last variables in the dataset to explore. The fare variable has an unusual density plot.
This unique shape is a product of how fare values are rounded in the dataset as part of anonymization efforts. The values are rounded to the nearest $2.50, which is exactly where each peak occurs. It also appears that as fares increase, the number of tips increases as well.
Finally, we can take a deeper dive into the shared_trip_authorized field — whether or not someone opted for a pooled ride.
It appears that when a shared trip is authorized, the percentage of riders who tip drops dramatically. In comparison, riders who do not authorize a shared trip tip at a marginally higher rate than customers overall.
Model Selection
While there is always more data exploration that can be done, we can begin to fit a model to the data. Based upon the exploratory analysis that we completed, here are the fields we’ll use as independent variables (a sketch of constructing them follows the list):
long_trip_time — whether or not a trip was greater than 20 minutes
long_trip_distance — whether or not a trip was greater than 10 miles
slow_trip — whether or not a trip had an average speed less than 20 mph
holiday — whether a trip happened on Christmas, New Year’s Eve, New Year’s Day, or Valentine’s Day (and then one-hot encoded)
airport_pickup — whether a trip started at O’Hare International Airport
airport_dropoff — whether a trip ended at O’Hare International Airport
shared_trip_authorized — whether or not a shared trip was authorized
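Here’s a minimal sketch of building those fields and the modeling matrix, continuing from the snippets above. The thresholds follow the exploratory analysis, and it assumes shared_trip_authorized is read in as a boolean.

```python
# Threshold-based flags from the exploratory analysis
trips["long_trip_time"] = (trips["trip_minutes"] > 20).astype(int)
trips["long_trip_distance"] = (trips["trip_miles"] > 10).astype(int)
trips["slow_trip"] = (trips["average_speed"] < 20).astype(int)
trips["shared_trip_authorized"] = trips["shared_trip_authorized"].astype(int)

# One-hot encode the holiday field; everything else is already binary
features = pd.get_dummies(
    trips[["long_trip_time", "long_trip_distance", "slow_trip", "holiday",
           "airport_pickup", "airport_dropoff", "shared_trip_authorized"]],
    columns=["holiday"],
)
X, y = features, trips["tip_given"]
```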
When picking a model to fit the data to there are several factors to consider:
What type of problem am I solving? Classification? Regression?
How much do I care about model transparency?
Does only predictive accuracy matter to me, or do I also want to understand how the variables affect the model?
For this problem, we’re predicting whether or not a tip will be given, which is a classic classification problem. I don’t care too much about model transparency, which means that some of the more complex models are on the table. I do want to know how the variables affect the model. Ideally, I’d want to tell an Uber driver what actions they can take to increase their likelihood of receiving a tip.
With all of that in mind, a great model to use is gradient boosting. Specifically, we’ll use the xgboost implementation of gradient boosting, which has become an extremely popular technique. It is a relatively black-box model, but it does give us the opportunity to understand variable importance.
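A minimal training sketch using the scikit-learn interface to xgboost; the hyperparameters shown are illustrative defaults, not the exact ones used for this model.

```python
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Hold out a test set so we can evaluate the classifier honestly
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
```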
Model Performance
After fitting the model, we can build a ROC curve to help diagnose how our classification model performed. The model gained some predictive accuracy, but it is not particularly robust overall. The AUC is 62.3%, which makes it a relatively weak classifier. For comparison, an AUC of 50% would mean the model classifies no better than a coin flip; with an AUC that low, you’re essentially better off not having a model.
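For reference, building the ROC curve and AUC with scikit-learn looks roughly like this (continuing from the training sketch above):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve

# Predicted probability that a tip is given, for each held-out trip
probs = model.predict_proba(X_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, probs)
print(f"AUC: {roc_auc_score(y_test, probs):.3f}")

plt.plot(fpr, tpr, label="xgboost")
plt.plot([0, 1], [0, 1], linestyle="--", label="coin flip")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```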
Even with a weak classifier, we can still examine which features were most important in building the model. The xgboost implementation of gradient boosting lets us determine feature importance relatively easily by comparing the Gain of each independent variable. The higher the percent gain for a feature, the more it contributed to the model.
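Pulling gain-based importance out of the fitted booster is straightforward (a sketch, continuing from the model above):

```python
# Gain per feature, normalized to percentages for readability
gain = model.get_booster().get_score(importance_type="gain")
total = sum(gain.values())
for feature, value in sorted(gain.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{feature}: {value / total:.1%}")
```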
Based upon this, we can see that the most important feature (by far) for the model is whether or not a ride share was authorized. Long trips, airport pick-ups, and airport drop-offs also all account for a significant proportion of gain.
Improvements
By no means is this a fully robust model that captures all of the nuance of rideshare tipping. Without any doubt, it would be inappropriate to draw any real conclusions about ridesharing from the model that we put together. There are many opportunities to build an improved model.
Here are some of the immediately available ways we could consider improving the model:
When exploring trip_start_timestamps, one of the first things we noted was a valley on January 30, 2019 that we could explain by extremely cold weather. For this round of modeling, we left that alone. However, it seems like a reasonable hypothesis that weather has an effect on tipping, so this should be one of the first places we explore.
While we zeroed in on O’Hare as a place with a higher likelihood of getting a tip (for both pick-ups and drop-offs), we didn’t spend time considering other locations. We could consider how other major destinations in Chicago affect rideshare tipping, or we could examine specific routes (based upon pick-up centroid location to drop-off centroid location).
We could look for more nuance in variables that we are currently using. For example, the shared_trip_authorized field was extremely valuable, but we did not consider the trips_pooled variable. Perhaps the tendency not to tip is even more pronounced when there are 2 people sharing a pool versus 3 or 4. Perhaps people who authorize a shared trip, but never have another person join them, are actually more likely to leave a tip. There is plenty of nuance to uncover in that field alone, never mind all of the others (a rough sketch of that breakdown follows this list).
To build this model, we used a ~1% sample of the full dataset. We should increase our sample size and consider more robust techniques to ensure that we are not overfitting the data.
…and some of the things we wish we could do, but that we will probably never be able to:
While the City of Chicago publishes data about rideshare drivers and rideshare vehicles, we have no way of linking all of these datasets together. It seems like a reasonable hypothesis that there are features in those datasets that would have an effect on tipping. For example, are drivers with more experience — both by length of time that they have been drivers and by number of trips — more likely to receive a tip?
One key datapoint that is not made available is which rideshare network a trip was part of. Another reasonable hypothesis to consider is that riders who use certain rideshare networks are more likely to tip than others. For example, Uber implemented tipping about 2 years ago, but other rideshare services have had the ability to tip for significantly longer.
¹ In fact, the dataset is so robust that we chose not to work with the entire dataset. We took a random sample of approximately 1% of trips for the purposes of simplifying the work. [back]
² So, this is a slight lie. St. Patrick’s Day is always on March 17th, but March 16th was the day that people actually celebrated St. Patrick’s Day. And to clarify, by celebrate we mean dye a river, have a parade, and drink “responsibly”.
CompassRed is a full-service data agency that specializes in providing data strategy for clients across multiple industries.