I would like to start today’s post with a question to my fellow iPhone-owning students, faculty and staff. How many apps do you have installed on your device? For comparison’s sake, I’ll limit this example to just iPhones. At the time of this post I counted 47 installed on my own device. Now that you have that number for your own device, stop and ask yourself: how many of these apps do I know were built from trusted code and released by verified publishers? Would I have installed these apps if I knew they were untrusted and potentially malicious?
To thwart the publication of malicious apps, Apple, Inc. has developed stringent policies and review processes for application development on its OS X and iOS platforms. To complement these processes, developers are required to use Apple’s own software development package, Xcode.
Earlier this week, news broke that an unprecedented number of malicious apps had made it past Apple’s review process, been published to the App Store, and subsequently been downloaded. To put this in perspective, “Prior to this attack, a total of just 5 malicious apps had ever been found in the App Store, according to cyber security firm Palo Alto Networks, Inc.,” writes Jim Finkle of Reuters. To be more precise, Chinese security firm Qihoo360 Technology Co. has identified 344 apps potentially affected by this attack.
This raises the question: how could this have happened? How could an organization with such strict requirements on app development inadvertently release apps infected with malicious code? The answer lies with the software we discussed earlier. Essentially, a malicious copy of Xcode, known as XcodeGhost, was created and distributed. This counterfeit toolchain mimics Xcode almost identically, except that it silently injects malicious code into the apps it builds. While this may appear to be a rather simple concept, the underlying mechanics of XcodeGhost are far more complex, a discussion that deserves its own white paper.
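One practical defense against a counterfeit toolchain is to verify a downloaded installer against the checksum the vendor publishes before running it, since XcodeGhost spread through unofficial Xcode downloads. Below is a minimal sketch of that idea in Python; the function names are my own, and the checksum you compare against would come from the vendor’s official site:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks
    so even a multi-gigabyte installer fits in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_sha256: str) -> bool:
    """Return True only if the file matches the vendor-published checksum."""
    return sha256_of(path) == expected_sha256.lower()
```

A developer who had run a check like this against Apple’s official distribution, rather than trusting a faster local mirror, would have caught the tampered installer before it ever touched a build.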
The fundamental issue with XcodeGhost is that legitimate app developers could use it without ever knowing it was compromised. We can therefore assume that legitimate applications built with this malicious toolchain could be susceptible to SQL injection, XSS, or other client-side security flaws that could result in data leakage from a mobile device. Considering the kinds of apps available for banking, location tracking and social media, data leakage poses a very large and very real problem.
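To make the supply-chain risk concrete, the deliberately benign sketch below shows how code injected by a trojaned build tool can wrap one of the app’s own functions and siphon off its inputs while the app appears to behave normally. Every name here is illustrative; this does not reflect XcodeGhost’s actual implementation:

```python
captured = []  # stands in for an attacker-controlled collection point

def legitimate_login(username: str, password: str) -> str:
    """The app developer's own code: authenticate and return a session token."""
    return f"session-for-{username}"

def inject(fn):
    """What a compromised toolchain can do at build time: wrap a function
    so it leaks its arguments yet still returns the expected result."""
    def wrapped(*args, **kwargs):
        captured.append((args, kwargs))  # silent exfiltration stand-in
        return fn(*args, **kwargs)       # visible behavior is unchanged
    return wrapped

# The developer never writes this line; the tainted tool adds it during the build.
legitimate_login = inject(legitimate_login)
```

After a call like `legitimate_login("alice", "hunter2")`, the app still receives its normal session token, but `captured` now holds the credentials, which is why neither the developer nor the app reviewer sees anything amiss.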
With that in mind, I think back to the 47 apps installed on my iPhone and wonder which of them could be vulnerable. Should there be a stricter verification process for app development, one that evaluates even the underlying development software being used? Peer review and trusted certificates could help mitigate these situations, but we may still see these types of threats at high frequency in the very near future. What do you think? Discussion is welcome in the comments on how an organization might avoid these situations.