Exaggerated Self-reporting of Mobile Phone Use

We asked some of our favorite bloggers to provide us a “lost gem” – a great article that deserved wider response than it received the first time it was published. This piece by Ashley Richards was originally published June 27 for Survey Post.


A recent article in the Journal of Computer-Mediated Communication examines the accuracy of self-reported mobile phone use. The study is similar to one I reported on previously that compared self-reported versus actual tweeting. In the present study, researchers compared self-reports to cell phone records and found that respondents over-reported use of their cell phones for outgoing calls and text messages. In fact, only 3% of respondents said they place outgoing calls less often than 1-2 times per week, but the records showed that 17% of respondents should have selected this response option. That is a big difference! In this post I’ll discuss possible explanations for the over-reporting and methods to improve the accuracy of these measurements, then end with a few lingering questions and ideas for future research.
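To make that gap concrete, here is a minimal sketch in Python of how self-reported frequency categories can be checked against record-derived counts. The response options, respondent data, and helper function are entirely hypothetical — the study's actual instrument and records are not reproduced here.

```python
# Hypothetical sketch: comparing self-reported call frequency against
# billing records. Categories and respondent data are invented.

# Response options for "How often do you place outgoing calls?",
# each mapped to a (lower, upper) range of weekly calls.
CATEGORIES = {
    "less than 1-2 times per week": (0, 2),
    "3-6 times per week": (3, 6),
    "daily or more": (7, float("inf")),
}

def category_from_records(calls_per_week):
    """Bin an actual weekly call count into the same response options."""
    for label, (lo, hi) in CATEGORIES.items():
        if lo <= calls_per_week <= hi:
            return label
    raise ValueError(calls_per_week)

# (self-reported category, weekly calls from records) -- invented respondents
respondents = [
    ("daily or more", 10),                # accurate
    ("daily or more", 4),                 # over-reports
    ("3-6 times per week", 1),            # over-reports
    ("less than 1-2 times per week", 2),  # accurate
]

# Count respondents whose chosen category sits above their actual use.
over_reported = sum(
    1 for said, actual in respondents
    if said != category_from_records(actual) and CATEGORIES[said][0] > actual
)
print(f"{over_reported / len(respondents):.0%} over-reported")  # -> 50%
```

The same binning logic would apply to the real records, with the study's actual response options substituted in.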

Why might phone use have been over-reported?

  • For most respondents, placing calls or sending text messages is not out of the ordinary, so these events are not memorable and are therefore more difficult to recall.
  • The questions were asked in a way that differs from how we normally think about our phone use. Because of the way billing is typically set up, I assume most users think in terms of total minutes per month rather than outgoing calls per day or week.
  • Cell service is not cheap. People are probably aware of how much they pay or what they pay for (e.g., 1,000 minutes per month), and then they use that price tag or target use to estimate how much they use their phone (or should use their phone, to get their money’s worth if under contract). Maybe people inadvertently over-report to try to justify to themselves why they pay so much.

How could measurement of phone use be improved?

  • Reframe the questions to ask about cell use in a way that people think about cell use. As I mentioned earlier, it may be easier for respondents to think about their minutes or texts per month. Unfortunately, respondents may lack awareness of their use if they have unlimited texting or a monthly minute allowance so high they don’t worry about surpassing it. Even people who track their minutes carefully may not have an accurate perception of their actual use because many plans offer free nights and weekends, free calling to others with the same provider, etc.
  • Ask respondents to consult their phone records. This is a great approach in theory, but in reality, it’s quite a bit of effort for respondents. They may not even have access to this information at the time they complete the survey, or they may be unwilling to take so much time to dig up the requested info. Even if they are willing, they may struggle to correctly interpret the records.
  • Bypass respondents and collect this information directly. This can be done by acquiring records (as was done in this study) or by tapping into their phones (e.g., using an app that monitors and transmits usage statistics). This option is by far the most accurate. However, it’s likely to be the most expensive and time-intensive. Plus, some respondents will inevitably refuse access to their information.

Like nearly every decision in survey design, this decision comes down to weighing the pros and cons of different approaches. And, as usual, there is no “right” answer, but rather a call for further research.

Next steps

I’d be curious to see how the accuracy of self-reports differs for minutes used per month versus number of calls per day or week. If the purpose of the question is to separate respondents into groups of heavy and light mobile phone users, then asking about minutes used per month might not only be sufficient but also more accurate.
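If the goal is simply a heavy/light split, a short sketch shows how record-derived minutes could do the grouping directly. The minutes-per-month figures and the median-split rule here are my own invention for illustration, not anything from the study.

```python
# Hypothetical sketch: splitting respondents into heavy vs. light users
# by monthly minutes taken from records. The median-split cutoff is an
# arbitrary illustrative choice, not a method from the study.
from statistics import median

minutes_per_month = {"r1": 120, "r2": 850, "r3": 300, "r4": 1400, "r5": 60}

cutoff = median(minutes_per_month.values())  # median of the five values: 300
groups = {
    rid: "heavy" if mins > cutoff else "light"
    for rid, mins in minutes_per_month.items()
}
print(groups)  # r2 and r4 land in "heavy"; the rest are "light"
```

Comparing a grouping like this against respondents' self-classifications would be one way to test how well the general question performs.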

The authors point out that respondents’ perceptions of whether they are heavy or light users likely shape their responses. I’d also be interested in knowing how accurately respondents could classify themselves into one of these groups when asked about phone use in the most general way possible.

One final thought

Last week there was a discussion on AAPORnet about how best to word a question about time spent at the airport. Someone pointed out that the approach will differ if you’re interested in perceived versus actual time. The same is true in this case. Perceived phone use would likely be of greater interest to a market researcher learning how much people are willing to pay for phone service than to an academic trying to discover how communication patterns are changing over time. Regardless of what you’re trying to measure – time at the airport, phone use, or something else – cognitive testing is a beneficial step in the process of developing a good survey. Cognitive testing can highlight issues with a survey (e.g., questions that are misunderstood or that ask for information respondents are unable to recall) before the survey is fielded, ultimately leading to more accurate data.

Ashley Richards is a survey methodologist at RTI International. Her interest in research methods and the social sciences led her to pursue a career in survey methodology. Her work at RTI has included questionnaire design and evaluation, text analysis, and cognitive interviewing.
