For this study, the research team evaluated 150,000 apps: the top 100,000 by number of downloads from the Google Play Store, the top 20,000 from an alternative market, and 30,000 pre-installed apps on Android smartphones.
Of these, 12,706 apps (~8.5%) contained hidden behaviors unknown to the user. The team also found that some applications could be unlocked remotely with master passwords that exposed the information inside them, and that others contained secret passwords that could trigger hidden options, including bypassing payments.
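A minimal sketch of the anti-pattern the researchers describe (the class name, method, and secret value here are hypothetical, not taken from any studied app): a master password hardcoded into the app's own input-validation logic. Because the secret ships inside the APK, anyone who decompiles the app can read it and trigger the hidden behavior.

```java
// Hypothetical illustration of a hardcoded "backdoor secret" in client-side
// input validation. Names and values are invented for this sketch.
public class LoginCheck {
    // The secret is embedded in the app binary, so reverse engineering exposes it.
    private static final String MASTER_PASSWORD = "s3cretAdmin";

    static String authenticate(String input, String storedPassword) {
        if (MASTER_PASSWORD.equals(input)) {
            return "ADMIN_MODE"; // hidden option unlocked by the embedded secret
        }
        return input.equals(storedPassword) ? "USER_MODE" : "DENIED";
    }
}
```

An attacker who extracts `MASTER_PASSWORD` from the decompiled app bypasses the normal password check entirely, which is the risk Lin describes.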
“Both users and developers are at risk if a bad guy has obtained these secrets,” Lin said, explaining that attackers can reverse engineer these apps to find them.
Research associate Qingchuan Zhao said developers often mistakenly believe that reverse engineering their applications is not a legitimate threat.
“A key reason why mobile apps contain these secrets is that developers have misplaced their trust,” Zhao said. To truly secure their applications, he said, developers need to perform security-relevant input validation on, and push their secrets to, backend servers.
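The fix Zhao describes can be sketched as follows (a hedged illustration, not code from the study; the endpoint and helper names are assumptions): the app keeps no secret locally and forwards the user's input to a backend, acting only on the server's verdict.

```java
import java.util.function.Function;

// Hypothetical sketch of server-side validation: no comparison logic or
// secret ships in the APK. The Function stands in for a network call
// such as an HTTPS POST to a backend verification endpoint.
public class RemoteCheck {
    static boolean isAuthorized(String input, Function<String, Boolean> backendVerify) {
        // The client only relays input and trusts the server's answer;
        // reverse engineering the app reveals nothing secret.
        return backendVerify.apply(input);
    }
}
```

In a real app the `backendVerify` stub would be replaced by an authenticated network request; the point of the sketch is only that the secret comparison happens on a machine the attacker cannot decompile.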
Another 4,028 applications (~2.7%) blocked content containing keywords associated with censorship, cyberbullying or discrimination. The surprise was how they did it: they performed the checks locally instead of remotely, Lin said. “On many platforms, user-generated content can be moderated or filtered before it is posted,” he said – social media sites, including Facebook, Instagram and Tumblr, limit the content users can post on those platforms.
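The local checking the study found can be sketched like this (a hypothetical example; the class name and placeholder words are invented): the blocked-keyword list is bundled inside the app itself, so anyone who extracts the APK can read the entire blacklist.

```java
import java.util.List;
import java.util.Locale;

// Hypothetical sketch of client-side keyword filtering. The blacklist
// entries are placeholders; in the apps studied, such lists related to
// censorship, cyberbullying or discrimination shipped inside the app.
public class LocalFilter {
    static final List<String> BLACKLIST = List.of("bannedword1", "bannedword2");

    static boolean isBlocked(String post) {
        String lower = post.toLowerCase(Locale.ROOT);
        // Checking locally means the full list is visible to reverse engineers,
        // unlike server-side moderation where the list never leaves the backend.
        return BLACKLIST.stream().anyMatch(lower::contains);
    }
}
```

Server-side moderation, by contrast, keeps the keyword list private, which is why the researchers found the local approach surprising.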
“Unfortunately, this can cause problems. For example, users may know that certain words are banned by a platform’s policy, but be unaware of which specific words count as banned, so their content can be blocked without their knowledge,” Zhao said. “Therefore, end users may wish to clarify vague content policies by seeing examples of banned words.” In addition, he said, researchers studying censorship may want to understand which terms are considered sensitive.
The team has developed an open-source tool called InputScope to help developers identify weaknesses in their applications and to demonstrate that the reverse-engineering process can be fully automated.
Ohio State researchers collaborated with colleagues at New York University and Germany’s CISPA Helmholtz Center for Information Security. The study was accepted at the IEEE Symposium on Security and Privacy in May, which was held as an online conference.