This is a follow-up to my last post, and the analysis takes a different direction in the next post, where I talk about some beta testers who are not Firefox users.
First, a short recap (a rough sketch of the pairing step follows the list):
- Extracted the BROWSER_STARTUP and BROWSER_SHUTDOWN events from the data set.
- Sorted them by user_id and then by timestamp.
- Preserved only alternating startup/shutdown events for each user.
- Discarded about 10% of the data in this step (578,496 entries remained).
- Ignoring the user, computed the distribution of session times and plotted it.
- Was surprised.
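Here is a minimal sketch of the pairing step, assuming the events are available as (user_id, timestamp, kind) tuples. The field layout and the keep-the-latest-startup convention are my assumptions, not necessarily what the actual analysis did:

```python
from collections import defaultdict

def pair_sessions(events):
    """Pair alternating BROWSER_STARTUP/BROWSER_SHUTDOWN events per user.

    events: iterable of (user_id, timestamp, kind) tuples.
    Returns {user_id: [(start_ts, stop_ts), ...]}.
    """
    by_user = defaultdict(list)
    for user_id, ts, kind in events:
        by_user[user_id].append((ts, kind))

    sessions = defaultdict(list)
    for user_id, evs in by_user.items():
        evs.sort()  # sort each user's events by timestamp
        start = None
        for ts, kind in evs:
            if kind == 'BROWSER_STARTUP':
                start = ts  # keep the latest unmatched startup
            elif kind == 'BROWSER_SHUTDOWN' and start is not None:
                sessions[user_id].append((start, ts))
                start = None
            # shutdowns without a preceding startup break the alternation
            # and are dropped; this is roughly where the discarded ~10%
            # of the data would go
    return sessions
```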
Unterminated sessions
One of the concerns was that the longer browser sessions might still have been 'on' when the study period ended. However, only about 10,000 browser sessions were still open at the end of the week, which is less than 2% of the total browser sessions in the data set. Hence, the long-lasting browser sessions would not have affected the end results much.
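Counting these is cheap once the events are sorted; a sketch using the same hypothetical tuple layout as above:

```python
def count_unterminated(events):
    """Count sessions still open when the event stream ends, i.e.
    startups that never received a matching shutdown."""
    open_starts = {}
    for user_id, ts, kind in sorted(events, key=lambda e: (e[0], e[1])):
        if kind == 'BROWSER_STARTUP':
            open_starts[user_id] = ts       # most recent unmatched startup
        elif kind == 'BROWSER_SHUTDOWN':
            open_starts.pop(user_id, None)  # that startup got matched
    return len(open_starts)                 # still open at the week's end
```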
User sessions
Also, it is clear (admittedly, only in hindsight) that users who open their browser only for short periods will open it more often in a given fixed period. This is a classical problem of Palm calculus: because we are looking at time-limited data (one week), the shorter browser sessions have a greater propensity to occur in the sample. However, this does not invalidate the previous results: from the browser's point of view, it will still be closed in under 15 minutes 50% of the time.
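To see the effect concretely, here is a small simulation with entirely made-up numbers (exponential session lengths, a 50/50 split between short-session and long-session users); these are illustrations of the sampling bias, not estimates from the data set. Counted per session, short sessions dominate; counted per user, the split stays even:

```python
import random

WEEK = 7 * 24 * 60           # observation window in minutes
SHORT, LONG = 10, 300        # hypothetical mean session lengths (minutes)

random.seed(0)
session_lengths = []         # one entry per session  (browser's view)
user_means = []              # one entry per user     (user's view)

for mean in [SHORT] * 500 + [LONG] * 500:
    t, lengths = 0.0, []
    while t < WEEK:
        s = random.expovariate(1.0 / mean)      # one session
        lengths.append(s)
        t += s + random.expovariate(1.0 / 60)   # plus idle time in between

    session_lengths.extend(lengths)
    user_means.append(sum(lengths) / len(lengths))

short_sessions = sum(l < 15 for l in session_lengths) / len(session_lengths)
short_users = sum(m < 15 for m in user_means) / len(user_means)
print(f'sessions under 15 min:        {short_sessions:.0%}')  # dominated by short users
print(f'users averaging under 15 min: {short_users:.0%}')     # stays near 50%
```

Short-session users simply fit many more sessions into the same week, so they are over-represented when we count sessions instead of users.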
Browser's point of view of session times
Or, when stated more aesthetically:
Firefox session time distribution
However, from the user's point of view, the scenario is a little different. Looking at the average length of browser sessions for each user (more than 25,000 users have at least one open/close event pair, and 95% have more than two such events), it clearly stands out that the number of people whose average session time falls between 15 seconds and 15 minutes is not very high:
Number of users who have the given average session time (log scale)
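Computing these per-user averages is straightforward given the paired sessions; a sketch reusing the hypothetical pair_sessions output from above:

```python
def user_average_session(sessions):
    """Map each user to their mean session length, given the
    {user_id: [(start_ts, stop_ts), ...]} dict from pair_sessions."""
    return {
        user_id: sum(stop - start for start, stop in pairs) / len(pairs)
        for user_id, pairs in sessions.items()
        if pairs  # skip users without a single complete session
    }
```

A histogram of these values (with a log-scaled count axis) gives the chart above.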
Difference
Hence, this visualization, which makes clear the difference between how many users experience a given average session length and how often the browser experiences a session of that length:
The distribution of users and Firefox sessions against their distribution times.
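The numbers behind such a chart are just the two distributions bucketed into the same bins. A rough sketch, reusing session_lengths and user_means from the simulation above; the bin edges here are my guess, not the ones from the actual chart:

```python
# Hypothetical bin edges, in minutes (0.25 min = 15 s).
BINS = [(0, 0.25), (0.25, 15), (15, 60), (60, float('inf'))]
LABELS = ['< 15 s', '15 s - 15 min', '15 min - 1 h', '> 1 h']

def bucket_shares(values):
    """Fraction of values falling into each bin."""
    counts = [sum(lo <= v < hi for v in values) for lo, hi in BINS]
    total = sum(counts)
    return [c / total for c in counts]

for label, f, u in zip(LABELS, bucket_shares(session_lengths),
                       bucket_shares(user_means)):
    print(f'{label:>14}: Firefox {f:5.1%} | users {u:5.1%}')
```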
Update:
I did not like the cramped feel of the objects on the graph, so I sacrificed some accuracy (the 5% and 3% bars are now the same length in pixels; on the other hand, they do not even have error bars).
Hence, I condensed the graphs, tweaked the text a little, and decided to go with this:
The data is the same, but the Firefox bar lengths and the user bar lengths are now comparable in size. Even though comparing them directly does not make sense, I think it is slightly better to have the percentage bars nearly equal in size.
Conclusion
So what can we take away from this? Which improvement should Firefox aim at next? Consider the following feature from two different points of view:
- If the average Firefox utilization is less than 15 minutes for only 10% of users, and Firefox takes 5 seconds less to start, would it make a difference?
- If 45% of the time Firefox is opened and closed within a span of 15 seconds to 15 minutes, would shaving 5 seconds off the startup time make a difference?
Should the priority be more satisfied users or better software?
Which features / improvements will appeal more to users, and which are minor updates?
Which ones should you advertise?
Which point of view should the development team take?
This is just one trade-off; there may be more trade-offs involved in serving long-term users better than short-term users. Knowing how the scenario looks from the user's and from the browser's point of view would certainly help in making these decisions, and in deciding when a feature is a killer one.
Update: The visualization, along with several other excellent entries, is featured here: https://testpilot.mozillalabs.com/testcases/datacompetition
Epilogue:
- Test Pilot visualization taken from here, designed by mart3ll
- Mozilla Logo from here.
- All graphics shared under Creative Commons Attribution-ShareAlike 3.0 license