For the better part of the past 15 years, we as a society have placed our collective faith in smartphones and their apps. These convenient devices, with an application for every occasion, have simplified our lives to an incredible degree. The only trade-off? Users just have to hand over some personal information and consent to share their data.
It always seemed simple enough, and virtually no one had a problem clicking the “I Agree” button on those lengthy terms-of-service agreements that nobody bothered to read. In recent years, though, the personal and societal value of user data has become much more apparent, in large part due to the Cambridge Analytica scandal.
Now, researchers at Ohio State University are supplying the latest in a seemingly never-ending parade of revelations regarding how tech companies and app makers have abused the trust users so easily handed over for years. Hardcoded “backdoor secrets” have been discovered in thousands of mobile apps that potentially allow others to access sensitive private data or block certain content.
As if that wasn’t bad enough, the study’s authors say these “backdoors” are ripe for the picking when it comes to hackers.
These hidden behaviors and tendencies within apps are completely unannounced to users, according to senior study author Zhiqiang Lin, an associate professor of computer science and engineering at OSU.
A mobile application usually engages with users by responding to some type of input: the push of a button, the swipe of a finger, or the typing of certain words and phrases. The researchers discovered 12,706 apps containing what they call backdoor secrets: hidden app behaviors that can only be unlocked by specific input content. Some of the identified apps even included “master passwords” that would give anyone who knows the password full access to the app and all the private data within it. Others harbored secret access keys that, when triggered, unlocked hidden options such as bypassing the app’s payment requirements.
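To make that concrete, here is a minimal sketch of what a hardcoded master password can look like in code. Every name and the password string below are invented for illustration; the point is that the secret ships inside the app itself, which is exactly why it can be recovered.

```python
# Hypothetical sketch of a hardcoded "master password" backdoor. The names
# and the password string are invented; in a real app this secret lives in
# the shipped binary, where reverse-engineering tools can simply read it out.

MASTER_PASSWORD = "letmein-override"  # the flaw: a secret baked into the app

def check_login(username: str, password: str, accounts: dict) -> bool:
    # Hidden behavior: the master password unlocks ANY account,
    # regardless of which username is supplied.
    if password == MASTER_PASSWORD:
        return True  # backdoor path, invisible to ordinary users
    # Normal path: check the user's own stored credentials.
    return accounts.get(username) == password

accounts = {"alice": "correct-horse"}
print(check_login("alice", "correct-horse", accounts))   # True (normal login)
print(check_login("bob", "letmein-override", accounts))  # True (backdoor!)
print(check_login("bob", "random-guess", accounts))      # False
```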
In total, 150,000 apps were analyzed for this project: the top 100,000 most-downloaded apps on the Google Play Store, the top 20,000 from an alternative app store, and an additional 30,000 pre-installed apps from Android smartphones. Among that entire dataset, 8.5% (12,706) were found to contain backdoor secrets.
“Both users and developers are all at risk if a bad guy has obtained these ‘backdoor secrets,’” Lin explains in a press release. Lin went on to describe how an especially skilled hacker could even reverse-engineer a mobile app to discover these secret pathways.
According to lead study author Qingchuan Zhao, a graduate research assistant at Ohio State, most developers wrongly assume that reverse-engineering their apps is impossible.
“A key reason why mobile apps contain these ‘backdoor secrets’ is because developers misplaced the trust,” Zhao adds.
An app is only truly secure if its developer performs “security-relevant user-input validations” and keeps all user data on backend servers, rather than lurking behind hidden doors within the app itself, the researchers say.
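A rough sketch of that recommended split, with hypothetical names, might look like this: the app holds no secrets and makes no security decisions on its own, so there is nothing useful for a reverse-engineer to extract.

```python
# A minimal sketch, with invented names, of the pattern the researchers
# recommend: the app forwards user input to a backend server, which owns
# the credentials and performs the validation.

def backend_validate(username: str, password: str, server_db: dict) -> dict:
    # Runs on the server and is never shipped inside the app, so there is
    # nothing here to extract from the app binary.
    return {"authenticated": server_db.get(username) == password}

def app_login(username: str, password: str, server_db: dict) -> bool:
    # Runs in the app. A real app would send this over HTTPS; the direct
    # function call below stands in for that network request.
    response = backend_validate(username, password, server_db)
    return response["authenticated"]

server_db = {"alice": "correct-horse"}   # lives only on the server
print(app_login("alice", "correct-horse", server_db))  # True
print(app_login("alice", "any-guess", server_db))      # False
```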
Another 4,028 apps (2.7%) were found to block content containing specific keywords, flagged over censorship concerns, cyberbullying, or discrimination. While that in itself isn’t all that surprising, the research team was shocked to see how many apps were programmed to block such content locally and automatically, on the user’s own device.
Platforms like Facebook or Twitter routinely block offensive content, but those actions are performed remotely, not locally on users’ own devices. These observations suggest that many apps are censoring users’ words as soon as they’re typed. Beyond just the ethical questions this raises, the research team says such an approach to censorship is sure to confuse users as well.
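A hedged sketch of that on-device pattern, with placeholder terms standing in for any real blocklist, shows why it differs from remote moderation: the entire list ships inside the app, where it can be applied silently and also extracted by analysis.

```python
from typing import Optional

# Sketch of the local filtering pattern the study observed: a blocklist
# compiled into the app itself and applied on the device before any text
# reaches a server. The terms below are invented placeholders.

LOCAL_BLOCKLIST = {"blockedword1", "blockedword2"}

def filter_message(text: str) -> Optional[str]:
    # Returns the message unchanged, or None if it is silently blocked
    # on-device; the user never learns which word triggered the block.
    if any(word in LOCAL_BLOCKLIST for word in text.lower().split()):
        return None
    return text

print(filter_message("hello world"))            # hello world
print(filter_message("this has blockedword1"))  # None
```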
“Unfortunately, there might exist problems – for example, users know that certain words are forbidden from a platform’s policy, but they are unaware of examples of words that are considered as banned words and could result in content being blocked without users’ knowledge,” Lin explains. “Therefore, end users may wish to clarify vague platform content policies by seeing examples of banned words.”
Taking a proactive approach, the study’s authors have also created an open-source tool called InputScope that helps developers identify vulnerabilities within their apps.
The biggest takeaway from this study is to be more selective about the apps we download on our phones. Your data is a valuable commodity; don’t go giving it away to just any app.
The study is set to be published and presented at the 2020 IEEE Symposium on Security and Privacy in May.