Monday, September 7, 2015

Fingerprinting Mobile Devices Using Personalized Configurations

Recently, Apple removed access to various device hardware identifiers that were frequently misused by iOS third-party apps to track users. Therefore, within our latest research project, we are now studying the extent to which users of smartphones can still be uniquely identified simply through their personalized device configurations.

Using Apple’s iOS as an example, we show how a device fingerprint can be computed using 29 different configuration features that can be queried by arbitrary third-party apps via the official SDK. These include, for example, the device name, language settings, the list of installed apps, and the most-played songs.

For this purpose, we created our own App Store app “Unique” that collected these features and, if the user gave permission, sent the data to our server for evaluation. Any personally identifiable data was anonymized before transmission using hashing. During our 140-day study period, we collected almost 13,000 data records from 8,000 different real-world devices. All the fingerprints we discovered were unique. Although the fingerprints are clearly distinguishable in theory, it is hard to use them for long-term tracking of users as, in practice, individual fingerprint features change over time when the device is used. This aspect was also investigated, as almost 57% of the data records transmitted to us came from recurring devices.
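To illustrate the idea of hash-based anonymization before transmission (a minimal sketch; the function and feature names are illustrative, not the actual implementation used in “Unique”):

```python
import hashlib

def anonymize(value: str) -> str:
    """Replace a personally identifiable value with a one-way hash, so only
    equality between values (not the values themselves) survives transmission."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# Hypothetical fingerprint features before transmission
features = {"deviceName": "Alice's iPhone", "topSong": "Yellow Submarine"}
anonymized = {k: anonymize(v) for k, v in features.items()}
```

Note that plain hashing only pseudonymizes the data: identical values still produce identical hashes across submissions, which is exactly the property needed to compare fingerprints on the server.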

We then propose a robust solution for measuring the general similarity between any pair of fingerprints, independent of their size and structure. In doing so, we determine an optimal similarity threshold using a supervised learning approach that considers the chronological order in which fingerprints would be received in a real-world scenario.
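The exact metric is defined in the paper; as a rough illustration of the idea, a set-based similarity such as the Jaccard index, averaged over the features present in both fingerprints and compared against a learned threshold, captures the structure-independent comparison (threshold and sample data below are made up):

```python
def jaccard(a: set, b: set) -> float:
    """Set overlap: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 1.0

def fingerprint_similarity(fp_a: dict, fp_b: dict) -> float:
    """Average set similarity over the features present in both fingerprints,
    so features that were removed or added over time don't break the comparison."""
    shared = fp_a.keys() & fp_b.keys()
    if not shared:
        return 0.0
    return sum(jaccard(set(fp_a[f]), set(fp_b[f])) for f in shared) / len(shared)

THRESHOLD = 0.5  # illustrative; the paper learns this value from labeled data

fp1 = {"apps": ["mail", "maps", "chess"], "songs": ["a", "b"]}
fp2 = {"apps": ["mail", "maps", "chess", "news"], "songs": ["a", "b"]}
same_device = fingerprint_similarity(fp1, fp2) >= THRESHOLD
```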

This approach enables us to uniquely identify devices with a total accuracy of 93.76% when all 29 features are included. We then evaluate the collected data from various perspectives and gradually reduce the feature space to determine features or feature combinations that would lead to an accuracy increase. Some test cases, for example, use only the list of installed apps or the top 50 most-played songs. Both pieces of information are freely available to third-party apps, and querying them is very unobtrusive. Our approach proved capable of uniquely identifying devices purely on the basis of the apps installed, with an overall accuracy of more than 97%. Moreover, identifying devices based solely on the user’s music taste succeeded with a total accuracy of 94.20%.

With regard to user privacy, the main issue with our new approach is that, in most cases, users would be unaware of the data collection taking place and could not prevent it. We also demonstrate that our approach works even with modified configurations, i.e. when individual features are removed by iOS updates or change over time. It should be noted that our method also functions if devices are restored. Whereas Apple’s Advertising Identifiers change after a restore or device replacement, most of the features we use for fingerprinting are restored from backups during the restore process. In this sense, our configuration-based identifier is even stronger than any previous hardware identifiers. As long as a user’s personal profile does not change significantly, he or she can continue to be identified for an indeterminate amount of time.

As a last point, we discuss countermeasures and demonstrate how identification accuracy could be drastically decreased if Apple further tightened the app sandbox to prevent unrestricted access to only a few strong distinguishing features. Some of the countermeasures will already be in place within the upcoming iOS 9.

Details of our study will be presented at the Privacy Enhancing Technologies Symposium 2016, which will be held July 19–22, 2016 in Darmstadt, Germany. The full paper will be published in the corresponding PoPETs journal 2016 issue 1.

Full Paper: Fingerprinting Mobile Devices Using Personalized Configurations.
Andreas Kurtz, Hugo Gascon, Tobias Becker, Konrad Rieck and Felix Freiling.
Proceedings on Privacy Enhancing Technologies (PoPETs), 2016 (1), 4–19, to appear 2016. (PDF)

Source Code: The source code of our "Unique" app that was placed in the App Store for fingerprint collection is available at

Contact: For general questions on this research project, please contact us at

Friday, October 24, 2014

iOS 8 Touch ID Authentication API:
or the False Sense of Security of Dropbox's Passcode Protection

Since the release of iOS 8, the Touch ID fingerprint sensor can also be used in third-party apps. The Local Authentication framework provides an API via which users can conveniently use their fingerprint to authenticate themselves in both App Store apps and enterprise apps. In the medium term, we anticipate that more and more apps will switch to fingerprint-based user authentication. After all, it’s much easier to place your finger on a sensor than it is to enter a 10-digit alphanumeric password, and the additional biometric verification gives users a very good sense of security. The problem is that this feeling of security is extremely deceptive.

Local Authentication Background

Merely glancing at the sample code provided by Apple makes it immediately apparent that Local Authentication, as the name states, is a purely client-side security measure. And, as we have come to realise, client-side measures are recommended only with caveats. Ultimately, when Local Authentication is used, the method evaluatePolicy:localizedReason:reply: simply reports back to the app, in the reply block, whether or not the fingerprint authentication was successful. Depending on the value of this Boolean variable, developers can then display an error message or, on success, e.g., hide the frontmost login screen.

Doubtful Purpose

Since I first heard about this new framework, I have questioned its usefulness. Ultimately, Local Authentication only becomes important when an unlocked device falls into the wrong hands. Local Authentication is then designed to ensure that, when an app is launched, an additional login screen is displayed to prevent access to the app’s contents. However, at this very moment it is still possible, e.g., to directly access all app data via the USB interface (for example, by creating a backup). After all, the device is already unlocked (otherwise you wouldn’t need Local Authentication).

In order to prevent the latter scenario, sensitive information is usually encrypted using an additional layer of app-level encryption. The key required to decrypt the information is then derived from the password entered when the app is launched. With Touch ID, however, no password is entered; the Secure Enclave coprocessor returns only a thumbs-up or thumbs-down. There is, therefore, no password from which to derive a key. As authentication without additional encryption makes little sense when storing sensitive data locally, the Local Authentication concept should be called into question. It may, for example, be sufficient to prevent children playing with the device from promptly accessing sensitive app data, but the Local Authentication usage shown in Apple’s sample code provides no actual security benefit.
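The difference is easy to see in code. From a password, an encryption key can be derived (sketched below using PBKDF2; the parameters are illustrative); from Touch ID, the app only ever receives a Boolean, so there is no secret material to feed into a key derivation function:

```python
import hashlib
import os

def key_from_password(password: str, salt: bytes) -> bytes:
    """Derive a 256-bit encryption key from the user's password via PBKDF2.
    Without the password, the key (and hence the data) is unrecoverable."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

salt = os.urandom(16)
key = key_from_password("correct horse battery staple", salt)

# Touch ID, by contrast, yields only a success flag:
touch_id_result = True  # nothing here to derive a key from
```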

I also found it somewhat surprising that, in WWDC 2014 Session 711 (Keychain and Authentication with Touch ID), Apple specifies possible uses for Local Authentication that involve Touch ID potentially replacing passwords or PINs, or even Touch ID being used as a second factor. As I aim to show in the following example, I find this questionable. Enterprise apps, in particular, would do better to avoid this form of authentication.

A Real-Life Example: Dropbox App

When searching the App Store for apps that already provide Local Authentication, I came across the Dropbox App. Just a few days ago, the integration of Touch ID into the Dropbox App was highly praised in the media. CNET quotes include: “Dropbox users with sensitive documents stored on their iPhone or iPad can now protect them with a simple fingerprint” and “the measure throws in an extra layer of app-specific protection to your files”.

In order to use Touch ID in the Dropbox App, users must first set their own passcode in the app settings. The app then displays a login screen each time it launches. At first glance, this seems nice and secure.

It is notable that the passcode comprises just four digits. This alone makes it vulnerable to brute-force attacks. Later, however, it turned out that this doesn’t actually matter, as the passcode is not used to encrypt the locally stored Dropbox files. Instead, the passcode is simply stored in the keychain. The next time the app is launched and the PIN is entered, the app checks whether or not the value entered matches the value stored in the keychain. If it does, the method passcodeViewControllerDidReceiveCorrectPasscode from the class DBLoggedInPasscodeState is used to dismiss the passcode view. The same method is called if, following authentication using Touch ID, the Local Authentication framework returns success in the reply block.
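The structural weakness here is independent of iOS: whenever the “correct passcode” path merely toggles UI state instead of deriving a decryption key, an attacker who can invoke that path directly bypasses the check entirely. A language-neutral sketch (the class and method names mirror, but are not, the actual Dropbox classes):

```python
class PasscodeGate:
    """Sketch of a purely client-side passcode check: the entered value is
    compared against a stored one, and on a match a 'success' method is
    called. Nothing is ever decrypted, so the check can be skipped."""

    def __init__(self, stored_passcode: str):
        self.stored = stored_passcode
        self.unlocked = False

    def enter(self, passcode: str) -> None:
        if passcode == self.stored:
            self.did_receive_correct_passcode()

    def did_receive_correct_passcode(self) -> None:
        # Nothing cryptographic happens here; it just flips a flag.
        self.unlocked = True

gate = PasscodeGate("1234")
gate.did_receive_correct_passcode()  # attacker invokes the success path directly
```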

Users who want to try hiding the login screen without knowing the passcode can simply use the following Cycript Code:

// Dropbox for iPad
var PasscodeViewController = [UIApplication sharedApplication].keyWindow.rootViewController.presentedViewController->_passcodeController;

// Dropbox for iPhone
var PasscodeViewController = [UIApplication sharedApplication].keyWindow.rootViewController.presentedViewController;

var DBLoggedInPasscodeState = [DBStateManager sharedInstance]->_dbState;
[DBLoggedInPasscodeState passcodeViewControllerDidReceiveCorrectPasscode:PasscodeViewController];

This hides the login screen and permits access to all Dropbox files. To avoid confusion, I’d like to emphasise that proving this concept using Cycript requires a jailbroken device! This attack cannot be carried out as described on non-jailbroken devices. Instead of changes being made using Cycript at runtime, the app binary would, for example, need to be modified and the passcode check directly deactivated within it. This would also be possible on non-jailbroken devices. Moreover, I'd like to emphasise that I'm not criticizing Touch ID in general, but its use in third-party apps.

To return to the topic at hand, however, it is important to realise that the authentication in Dropbox, and Local Authentication in general, lulls users into a false sense of security. As the Dropbox files are not protected by app-level encryption in addition to Apple’s file Data Protection, the files could, for example, still be read via the USB interface. This requires that the device in question be unlocked — but, as mentioned earlier, Local Authentication becomes relevant only in such scenarios. The bottom line is that Local Authentication can be used if your threat model includes your kids or spouse, but not to protect yourself against any serious adversary.

When it comes to Dropbox, I would like to see the following improvements:

  • The option to set a complex alphanumeric password instead of just a 4-digit passcode
  • Additional, app-level encryption of files based on the user passcode entered
  • Encryption of downloaded files using the NSFileProtectionComplete class key (instead of the iOS default NSFileProtectionCompleteUntilFirstUserAuthentication)

If Touch ID authentication is to be continued, perhaps it would be worth taking a look at keychain access control lists (ACLs), a new concept introduced in iOS 8. The attribute kSecAttrAccessControl can be used to define that keychain entries can be decrypted only if the user has again been authenticated using the device passcode or Touch ID (kSecAccessControlUserPresence). In such cases, the Touch ID login would be more than just a worthless view. Instead, it could actually grant access to cryptographic keys. As, however, the keys would still be stored on the device (albeit in the keychain), this is merely a compromise, though one which could actually provide added value (ACL-protected items are not backed up). From a security perspective, however, entering a password is still recommended.

Thursday, September 18, 2014

Malicious iOS Apps

A comparison before and after iOS 8 was released

As part of one of our recent research projects, we evaluated how malicious third-party apps could affect user privacy, despite the various security controls and the solid security architecture of the iOS platform. To this end, we reviewed the iOS app sandbox model for weaknesses – and indeed found some. Some of these defects, which Markus Troßbach and I disclosed to Apple a while back, have been addressed with yesterday’s release of iOS 8 (CVE-2014-4361, CVE-2014-4362).

Update (April 9th, 2015): Yesterday's release of iOS 8.3 seems to fix some more of the below mentioned sandbox defects (CVE-2015-1113, CVE-2015-1115).


With iOS 8 the following deficiencies have been remedied:
  • Third-party apps can no longer read out a user’s Apple ID.
  • Third-party apps can no longer determine contact details of recently contacted persons.
  • Third-party apps can no longer monitor a user’s texting or messaging behavior, including details of persons contacted.
  • Third-party apps are no longer able to observe a user’s general app usage behavior, including the times and duration of both third-party and iOS-bundled system app usage.
  • Third-party apps can no longer secretly take pictures and transfer them off the device without user’s consent or knowledge.
However, the following issues are still present in iOS 8:
  • Third-party apps can still obtain a comprehensive list of the installed apps and the respective version numbers.
  • Third-party apps can still permanently monitor the iOS pasteboard for changes and read out any sensitive data that happens to be copied and pasted.
  • Third-party apps can still observe phone call metadata, such as precise call durations and the call recipients’ phone numbers (fixed in iOS 8.3, CVE-2015-1115).

Each of these points is explained in full detail below, followed by a conclusion at the end. Please note that all of the following defects could be exploited by third-party apps running on non-jailbroken iOS devices.

Unrestricted File System Access

To determine the effectiveness of the iOS sandbox mechanism with regard to file system access, we first extracted all file-read permissions from the container sandbox profile, which is applied to all third-party apps by default. While reviewing these sandbox rules, we noticed that several files within the general system preferences folder /private/var/mobile/Library/Preferences/ had been explicitly whitelisted, which makes their contents accessible to any third-party app. Within this folder, the following files in particular turned out to disclose personal data:
Within each of these files, we found the value of the key _im_ab_cache, reflecting contact details of recently contacted persons. Moreover, the file contained the SuspendedGroupID key, which discloses the contact details of the current receiving party within the iOS Messages app. This means that any app can learn about a user’s texting or messaging behavior, including details of persons contacted, simply by periodically querying this file.

Moreover, we found a single file system access sufficient to determine a user’s personal Apple ID. In more detail, we noticed that a user’s Apple ID could be retrieved from the file, even if the Home Sharing feature (which allows users to stream music and videos from their iTunes media library to other devices in their households) was not enabled at all. It turns out that the Apple ID is automatically stored to the Home Sharing configuration file when a user is successfully authenticated by any Apple service for the first time (when setting up iCloud, for example, or downloading apps from the App Store). Presumably, this was done to prepopulate the Apple ID value within the Home Sharing section of the iOS preferences app.

The privacy implications of this issue are numerous. Up to iOS 7, several hardware identifiers were available to allow apps to uniquely identify any iOS device. For instance, both the unique device identifier (UDID) and the hardware address of the WiFi module (WiFi MAC address) were frequently used by advertising or tracking networks to relate personal data or usage patterns to specific users. As this posed a major privacy threat, Apple removed access to those identifiers in iOS 7. Access to a user’s Apple ID, however, can be considered a much stronger identifier, as it allows not only the identification of a device, but also the identification of its owner. The advertising industry might, therefore, also have been interested in this method of reliably identifying users across apps.

A user’s Apple ID could also be used in phishing attacks from within third-party apps, in which criminals impersonate the official iTunes store authentication dialog that is displayed whenever users buy apps from the App Store or set up access to any Apple service. As the official iTunes sign-in dialog carries no special label to prove its authenticity, it can easily be imitated using default iOS alert view mechanisms (see Figure 1). It should be noted that the only identifying mark in the official iTunes dialog is the preset Apple ID value. This requirement could, however, easily be met using the Apple ID value from the Home Sharing configuration file. Using this personal value could increase users’ trust in these fake dialogs and could, in turn, increase the risk of users falling for phishing scams.

Figure 1: Impersonated iTunes Store Phishing Dialog leveraging a user's Apple ID.

Within iOS 8, the sandbox rules have been updated to disable access to the general system preferences folder for third-party apps.

Monitoring of Phone Connections

On iPhone devices, the Core Telephony framework can be used to obtain information about a user’s cellular service provider and the current cellular call status. Apps can, therefore, register with an event-driven interface provided by the CTCallCenter class to access information about state changes for calls. To enable this, apps must register a callback by assigning a handler block to the callEventHandler property. This handler is called by the iOS system whenever a call event takes place and is provided with a CTCall object. This object can then be used to determine a current call’s state as described in the following listing.

self.callCenter = [[CTCallCenter alloc] init];
self.callCenter.callEventHandler = ^(CTCall* call) {
    if (call.callState == CTCallStateConnected) {
        // Call is connected
    } else if (call.callState == CTCallStateDisconnected) {
        // Call is disconnected
    } else if (call.callState == CTCallStateIncoming) {
        // Call is incoming and has not yet been answered 
    } else if (call.callState == CTCallStateDialing) {
        // User is dialing a number
    }
};

It should be noted that an app is supposed to receive these call state events only when it is in an active state and executing code. While this constraint would, at first glance, seem to lower the actual impact, the apps are not actually required to run in the foreground in order to receive information about call state changes. By abusing the iOS background execution and multitasking capabilities, apps can continue running in the background for an indefinite period of time (e.g., by playing a silent audio file in an endless loop). In consequence, this would allow any app that makes use of iOS’s multitasking features to monitor a user’s regular calling habits.

Further analysis revealed that apps may determine not only the times calls were initiated or received and their precise duration, but also the phone number of the called party. In order to do this, an app may leverage the private Telephony Utilities framework. The TUCallCenter class within this framework contains a reference to an instance of the TUTelephonyCall class (currentCalls selector). This, in turn, contains information on the current call, including its status and duration. Although invoking most of the methods provided by TUCallCenter would require certain entitlements, it turns out that the description selector, which returns a textual description of an object’s contents — and, in this particular case, the phone number of the current call — can be invoked without any special grants (the description selector is provided by the root class NSObject, a class from which the vast majority of all Objective-C classes inherit). As this instance demonstrates, iOS’s entitlement-based compartmentalization mechanism obviously fails to restrict access to protected resources in certain cases, particularly when interacting with parts of the Objective-C foundation.

Class TUCallCenter_class = objc_getClass("TUCallCenter");
NSObject* callCenter = [TUCallCenter_class performSelector:@selector(sharedInstance)];
NSArray* calls = [callCenter performSelector:@selector(currentCalls)];
for (NSObject* call in calls) {
    NSLog(@"call: %@", [call description]);
}

When used together, this combination of techniques allows any app to observe metadata on a user’s phone calls, including the call durations and the recipients’ phone numbers. The only requirement for this is an app running in the background, accessing functionality provided by a private API. From our and other researchers’ experiences, both requirements can easily be met, even for apps from the App Store.

These issues have not been fixed. Even in iOS 8, third-party apps can observe a user’s phone call behavior, including the precise call durations as well as the callers’ and recipients’ phone numbers.

General App Usage Behavior

When searching for further techniques to actively monitor users, we came across a method that allows any third-party app to observe the timing and duration of app usage. For this, we leveraged the SpringBoardServices framework to query information on the current frontmost running app. In general, this private framework allows apps to invoke functions in the iOS SpringBoard via Mach messages. To make all the functions of the SpringBoardServices framework available, the dynamic linker is first instructed to load the corresponding dynamic library and to return the addresses of the required symbols (see listing below, based on a Stack Overflow post). After querying the SpringBoard server’s Mach port, the SBFrontmostApplicationDisplayIdentifier method is invoked to query the bundle identifier of the current frontmost running app.

void *lib = dlopen("/System/Library/PrivateFrameworks/SpringBoardServices.framework/SpringBoardServices", RTLD_LAZY);
mach_port_t *(*SBSSpringBoardServerPort)() = dlsym(lib, "SBSSpringBoardServerPort");
mach_port_t *port = (mach_port_t *)SBSSpringBoardServerPort();
void *(*SBFrontmostApplicationDisplayIdentifier)(mach_port_t *port, char *result) = dlsym(lib, "SBFrontmostApplicationDisplayIdentifier");
char appIdentifier[256];
memset(appIdentifier, 0, sizeof(appIdentifier));
SBFrontmostApplicationDisplayIdentifier(port, appIdentifier);
NSLog(@"frontmost app: %s", appIdentifier);

Through practical experiments, we verified that the SpringBoardServices can also be queried from apps running in the background. By repeatedly querying this method from a background app, we were able to determine the usage frequency and duration of both third-party and iOS-bundled system apps. This would allow any third-party app to gain detailed insights into a user’s general app usage.
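Turning such periodic samples into usage statistics is straightforward bookkeeping; a sketch of the aggregation (the sample data and function name are made up for illustration):

```python
def usage_from_samples(samples):
    """Collapse periodic (timestamp, bundle_id) samples of the frontmost app
    into per-app usage sessions of the form (bundle_id, start, duration)."""
    sessions = []
    for t, app in samples:
        if sessions and sessions[-1][0] == app:
            # Same app still frontmost: extend the current session.
            sessions[-1] = (app, sessions[-1][1], t - sessions[-1][1])
        else:
            # A different app came to the foreground: start a new session.
            sessions.append((app, t, 0))
    return sessions

# Samples taken every 5 seconds by a hypothetical background app
samples = [(0, "com.apple.mobilesafari"), (5, "com.apple.mobilesafari"),
           (10, "com.example.game"), (15, "com.example.game")]
sessions = usage_from_samples(samples)
```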

Within iOS 8 this issue has been fixed. Third-party apps are therefore no longer able to observe a user’s general app usage habits.

Unrestricted Pasteboard Access

To allow users to exchange data within or between apps, iOS provides a “copy and paste” mechanism. A tap-and-hold Copy gesture advises the iOS system to write the selected items to a shared memory region known as the General pasteboard. Users can then choose to copy the cached pasteboard data (whether into the same app or into a different one), by using the same tap-and-hold gesture and choosing the Paste menu command.

From a privacy perspective, we found the following two problems with this pasteboard concept:
  1. The general pasteboard contents can be accessed both via these gestures and menu commands, and programmatically from any app. The latter can be done without the user’s permission or knowledge.
  2. The likelihood of such pasteboard compromises in iOS is increased by several factors, including the fact that even apps running in the background may periodically query the pasteboard to observe any content changes.
Although this is nothing new in general (particularly the first issue), such pasteboard-stealing attacks might have been underestimated in the past, as users increasingly use the pasteboard to transfer even sensitive data between apps.

For instance, during setup, the HotSpot Login App provided by Deutsche Telekom AG sends the user’s login data via text message. To make the overall setup process as convenient as possible, it instructs users to copy and paste the entire text message, including the username and password, to the app’s login fields. As the General pasteboard’s content is shared freely between apps, any rogue app can read the user’s access data from it.

Another instance in which sensitive data may end up on the iOS general pasteboard involves interaction with so-called “password manager” apps. These apps claim to secretly store confidential data such as passwords, PINs, bank and credit card information in an encrypted file, which is accessible only through a master password entered during app start. Once the secret data has been made available, however, it can simply be copied and pasted to another app.

Furthermore, apps like ScanPass Pro allow users to scan their (encrypted) passwords from printed QR codes. This should enable users to use complex passwords without worrying about mistyping them. A major drawback of this method, however, is that, after scanning the QR code, the app automatically places the (decrypted) password data, again, onto the general pasteboard for use in other apps.

Some password managers, such as 1Password, already try to limit exposure of sensitive data by clearing the pasteboard content after a certain period of time (enterprise apps, too, often clear the pasteboard contents when the app is suspended or closed). These measures should prevent other apps that are actively used in the meantime from reading out the cached pasteboard contents. However, such measures are completely ineffective against pasteboard-stealing apps that are permanently running in the background.

Consequently, users should be aware that, in iOS 8, third-party apps may still access the pasteboard without the user’s consent, and even apps running in the background are able to monitor the pasteboard for any content changes. Wouldn’t it be great if iOS adopted known pasteboard concepts from the browser world and allowed access to the pasteboard only if the user has granted permission, either implicitly by using the copy/paste gesture commands or explicitly by answering a permission dialog when an app tries to access the pasteboard programmatically? In addition, pasteboard access should generally be disabled for apps running in the background.

However, as long as this is not the case, it is good advice to be careful when using the iOS pasteboard mechanism, particularly when transferring any sensitive data.

Installed Apps

A private method from the MobileCoreServices framework allows any app to query information on all the other apps available on a device. To obtain a comprehensive list of all the installed app names and the respective version numbers, an app must simply invoke the allApplications selector from the LSApplicationWorkspace class. It should be noted that the sample code described below will commonly not pass the App Store review process, as it directly invokes private API methods. As, however, we confirmed this part of the review process to be flawed in several aspects, a slight modification to this code would allow even App Store apps to invoke this private method (e.g., by dynamically instrumenting the Objective-C runtime).

Class LSApplicationWorkspace_class = objc_getClass("LSApplicationWorkspace");
NSObject* workspace = [LSApplicationWorkspace_class performSelector:@selector(defaultWorkspace)];
NSLog(@"apps: %@", [workspace performSelector:@selector(allApplications)]);

There are numerous possible uses for this information, many of which are, for obvious reasons, highly privacy-relevant. In keeping with the motto “Show me your apps and I’ll tell you who you are”, industries such as advertising might be extremely interested in this information. A list of installed apps would, for example, allow certain inferences to be drawn about users’ personal preferences. If a user installed a “baby monitor” app, for instance, plus an app to track a baby’s height, weight, sleep, etc., it would appear likely that the user was a parent and therefore particularly receptive to family-oriented content and advertising. And why not deliver such tailored advertisements to users, since we already know their Apple IDs?

One minor drawback of the aforementioned method is the invocation of private APIs. We therefore searched for alternative methods that rely solely on public APIs. We found out that the current sandbox rules allow read access to the folder /private/var/mobile/Library/Caches/ in which icons for all the installed apps are cached (see output below), presumably to be displayed on the iOS SpringBoard.

// System app icons (cached icon files; filenames omitted)
// Third-party app icons (cached icon files; filenames omitted)

It turns out that the naming scheme for all files within this icon cache folder is based on an app’s bundle identifier, followed by the static string _CFBundleIcon*. This allows any app to retrieve a list of installed apps by enumerating all the files in this folder and extracting the respective bundle identifier values.
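Assuming filenames of the form <bundle-id>_CFBundleIcon… as described above (the sample filenames below are made up), the extraction amounts to simple string handling per file:

```python
def bundle_ids(filenames):
    """Recover installed-app bundle identifiers from icon cache filenames
    of the form '<bundle-id>_CFBundleIcon*'."""
    marker = "_CFBundleIcon"
    return sorted({name.split(marker)[0] for name in filenames if marker in name})

# Hypothetical contents of the icon cache folder
cached = ["com.apple.Maps_CFBundleIcon57x57.png",
          "com.example.app_CFBundleIcon60x60.png",
          "com.example.app_CFBundleIcon76x76.png"]
ids = bundle_ids(cached)
```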

These issues have not been fixed. Even in iOS 8, third-party apps can make use of both techniques to determine a list of installed apps.

Unrestricted Access to Camera Hardware

Finally, one of the most beneficial and long-overdue privacy features in iOS 8 is probably the restriction of access to the camera hardware. While iOS has long requested user permission for apps to use location services or the microphone, access to the camera hardware was still unrestricted in most countries (so far, iOS required users’ consent to access the camera only on devices sold in China). This allowed any app, up to and including iOS 7, to use the front and/or rear-facing cameras to secretly take pictures at any time and to transfer them off the device without the user’s consent or knowledge. All methods required to access the camera hardware were provided by public methods from the AV Foundation framework and allowed any app to secretly take pictures programmatically, meaning that no preview window was displayed on the screen (as is the case when UIImagePickerController is used) and no user consent was required (e.g., users do not need to tap a camera trigger button to take a picture).

This issue has been addressed in iOS 8 by introducing a new camera privacy control. When an app now tries to access the camera for the first time, the user is presented with a camera permission dialog.


While the iOS 8 sandbox has been revised to limit the ways in which third-party apps could surveil users, such as monitoring their texting or app usage behavior, some of the issues we reported are still present (e.g., determining installed apps, permanently monitoring pasteboard content from within background apps, observing phone call metadata).

However, as practical exploitation of some of these issues requires repeated invocation of certain APIs, which in turn requires an app to run in the background, it remains to be seen how effective iOS's new "battery usage" feature is. This feature promises to display battery usage by app in the iOS settings and to automatically shut down apps that are draining too much battery power.

Now, notwithstanding any local sandbox defects, one could say that the App Store vetting process mainly serves as a second line of defense and would therefore never permit such malicious surveillance apps in the App Store… well, more on this later ;)

Mittwoch, 23. April 2014

What Apple Missed to Fix in iOS 7.1.1

A few weeks ago, I noticed that email attachments within iOS 7 are not protected by Apple's data protection mechanisms. Clearly, this is contrary to Apple's claims that data protection "provides an additional layer of protection for (..) email messages attachments".

I verified this issue by restoring an iPhone 4 (GSM) device to the most recent iOS versions (7.1 and 7.1.1) and setting up an IMAP email account1, which provided me with some test emails and attachments. Afterwards, I shut down the device and accessed the file system using well-known techniques (DFU mode, custom ramdisk, SSH over usbmux). Finally, I mounted the iOS data partition and navigated to the actual email folder. Within this folder, I found all attachments accessible without any encryption/restriction:

# mount_hfs /dev/disk0s1s2 /mnt2
# cd /mnt2/mobile/Library/Mail/

# xxd IMAP-MY_MAILADDRESS/INBOX.imapmbox/Attachments/4/2/my_file.pdf 
0000000: 2550 4446 2d31 2e34 0a25 81e2 81e3 81cf  %PDF-1.4.%......
0000010: 81d3 5c72 0a31 2030 206f 626a 0a3c 3c0a  ..\r.1 0 obj.<<.
0000020: 2f43 7265 6174 696f 6e44 6174 6520 2844  /CreationDate (D
0000030: 3a32 3031 3330 3830 3532 3034 3830 3329  :20130805204803)
0000040: 0a2f 4d6f 6444 6174 6520 2844 3a32 3031  ./ModDate (D:201
0000050: 3330 3830 3532 3034 3830 3329 0a2f 5469  30805204803)./Ti
0000060: 746c 6520 2852 2047 7261 7068 6963 7320  tle (R Graphics 
0000070: 4f75 7470 7574 290a 2f50 726f 6475 6365  Output)./Produce
0000080: 7220 2852 2033 2e30 2e31 290a 2f43 7265  r (R 3.0.1)./Cre
0000090: 6174 6f72 2028 5229 0a3e 3e0a 656e 646f  ator (R).>>.endo

To verify that data protection was actually enabled, I also tried to access the Protected Index file (email message database). As expected, access to that file was not permitted.

# xxd Protected\ Index
xxd: Protected Index: Operation not permitted

Note: I was also able to reproduce this issue on an iPhone 5s and an iPad 2 running iOS 7.0.4.

I reported these findings to Apple. They responded that they were aware of this issue, but did not state a date when a fix could be expected. Considering how long iOS 7 has been available by now and the sensitivity of the email attachments many enterprises share on their devices (fundamentally relying on data protection), I expected a near-term patch. Unfortunately, even today's iOS 7.1.1 does not remedy the issue, leaving users at risk of data theft. As a workaround, concerned users may disable mail synchronization (at least on devices where the bootrom is exploitable).

1 It turned out that POP or ActiveSync email accounts behave the same way.

Freitag, 24. Januar 2014

The Effects of Overhyped Usability

When Apps Get Out of (Privacy) Control 

Slowly but steadily, the everlasting trade-off between usability and security appears to be reaching a peak within the mobile app ecosystem. Since "ease of use" has been one of the key drivers of mobile app design in the recent past, it's about time to pause for a moment and rethink whether our strong expectations of app usability may have gone too far. To demonstrate how these expectations are intensifying the mobile privacy crisis, this blog entry describes one of my latest cases, in which I noticed an app that automagically retrieves a user's login credentials from its backend. For convenience.

The Case of the Deutsche Telekom HotSpot Login App

According to the official App Store description, the HotSpot Login App by Deutsche Telekom assists users in connecting to one of the public Telekom hotspots. It says: "Telekom mobile customers can set up their credentials automatically with the app."

Figure 1: Automatic retrieval of login credentials by pushing the "Automatic setup" button.
The username is based on a user's phone number.

In practice, users just have to push the "Automatic setup" button within the account settings dialog. A few seconds later, the login form is magically filled with the corresponding hotspot credentials (see Figure 1). Notably, the username is based on the user's phone number, which actually shouldn't be accessible from an app at all due to iOS sandbox restrictions. So how did Telekom manage this? Using private API? If so, how was it possible to bypass Apple's vetting process? Special treatment for mobile service providers? Far from it, as the following analysis will show.

By intercepting the cellular network traffic, it could easily be determined that, whenever the automatic setup button is pushed, the following HTTP request is issued to the system:

GET /getCredentials?x-Hash=ad5af2d1ef8aead398cd132aa4d1479e07f43ac60cbeea3f73e45c9f96650f4e HTTP/1.1
Proxy-Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: */*
Accept-Language: de-de
Connection: keep-alive
x-Hash: ad5af2d1ef8aead398cd132aa4d1479e07f43ac60cbeea3f73e45c9f96650f4e
User-Agent: HotSpot%20Login/2.4.0 CFNetwork/672.0.8 Darwin/14.0.0

The only remarkable part of that request is an ominous hash value, which is placed in both its parameter and header values. Obviously, that hash is meant to prevent fraudulent use of this web service.

This request resulted in the following server response:

HTTP/1.1 200 OK
x-Username: <PHONE_NUMBER>
Content-Length: 0
Content-Type: text/plain; charset=ISO-8859-1

It turned out that a user's credentials are disclosed within the two HTTP headers x-Password and x-Username, which, in turn, are used to fill the app's login form. This means that the initial assumption of private API usage to determine a user's phone number proved to be false. Instead, a web service provided by Deutsche Telekom supplies all the relevant data. In fact, this is not surprising, as it should be easy for a mobile carrier to match a requester's IP address to their account information.

So far, the one and only requirement to query that web service is a single hash value, which is calculated inside the app. In more detail, the hash is calculated within the method getStringMax provided by the UserCredentialManager class. This method takes the device's current public IP address and appends a static "shared secret" value (ae2454ca2df8c8c3) to it. Finally, a SHA-256 hash of that assembled string (IP + shared secret) is computed to legitimize the web service request.
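For illustration, the same computation can be reproduced in a few lines of Python (the example IP address is made up and the function name is mine):

```python
import hashlib

# Static "shared secret" as extracted from the app binary
SHARED_SECRET = "ae2454ca2df8c8c3"

def calculate_hash(public_ip):
    """SHA-256 over the requester's public IP address + shared secret."""
    return hashlib.sha256((public_ip + SHARED_SECRET).encode("ascii")).hexdigest()

# The resulting value is placed both in the query string and in the x-Hash header:
h = calculate_hash("")  # made-up example address
print("GET /getCredentials?x-Hash=%s HTTP/1.1" % h)
print("x-Hash: %s" % h)
```

Since both inputs are known to any app running on the device, the "secret" provides no real protection.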

Exploitation Scenarios

So, what does it all mean? When talking about risks, the good news is that the wlanauthenticate system is only accessible from the Deutsche Telekom cellular network. Moreover, the requester's IP address is used along with the hash value to authorize a request. Therefore, crawling user hotspot credentials on a large scale is not an option. However, the bad news is that any app can make use of that service to query the phone number of its users or, even worse, their hotspot password (limited to customers of Deutsche Telekom, of course). For this, an app only needs to issue a single HTTP request as demonstrated above, using the following Objective-C function to calculate the required hash value. In no time, any requesting app will be provided with the user's phone number and hotspot password.

// requires: #import <CommonCrypto/CommonDigest.h>
- (NSString *)calculateHash:(NSString *)ip {
    // Static "shared secret" extracted from the app binary
    NSString *sharedSecret = @"ae2454ca2df8c8c3";
    ip = [ip stringByAppendingString:sharedSecret];
    NSData *dataIn = [ip dataUsingEncoding:NSASCIIStringEncoding];
    uint8_t dataOut[CC_SHA256_DIGEST_LENGTH] = {0};
    CC_SHA256(dataIn.bytes, (CC_LONG)dataIn.length, dataOut);
    NSData *out = [NSData dataWithBytes:dataOut length:CC_SHA256_DIGEST_LENGTH];
    // Turn the NSData description (e.g. "<ad5af2d1 ...>") into a plain hex string
    NSString *hash = [out description];
    hash = [hash stringByReplacingOccurrencesOfString:@" " withString:@""];
    hash = [hash stringByReplacingOccurrencesOfString:@"<" withString:@""];
    hash = [hash stringByReplacingOccurrencesOfString:@">" withString:@""];
    return hash;
}

This seems to be a great opportunity for the advertising industry, as it allows not only reliable tracking of users based on their phone numbers, but also extending spam activities to other channels like text messaging services. This could be the beginning of "Customers Who Frequently Used This App Also Used..." messages flooding mobile messenger networks. Or the other way round: apps might harvest hotspot credentials to sell them on the black market. I'm not quite sure which of these scenarios is worse...

Notwithstanding the above, such a precarious practice is questionable insofar as Apple has significantly ramped up its efforts in recent years to restrict apps' capabilities to harm users' privacy: removal of the Unique Device ID (UDID) and the WiFi MAC address, or the introduction of loads of new entitlements to restrict access to private API within iOS 7, just to name a few. However, these efforts seem wasted when a mobile service provider circumvents the restrictions by exploiting its exposed network position.

Reactions from the Deutsche Telekom CERT

For this reason, I contacted the Deutsche Telekom CERT at the beginning of November 2013 and responsibly disclosed my findings, also sketching all the related privacy implications. They informed me at the end of November that the issues were still under investigation and provided a final reply at the end of December. In that e-mail they stated that they had weighed the potential risk of abuse of the web service against its enormous usability boost and decided to keep the function up and running; otherwise, the overall app usability would suffer significantly. They also pointed out that users would be at risk only when installing "malicious" apps from third-party marketplaces, and that it would be the user's fault for not sticking to official App Store apps. On a side note, I was wondering since when Apple evaluates individual web service requests within their vetting process. Anyhow, they finally stated that "exploitation of this vulnerability would (at least) require special expertise and criminal energy". Duh!

PS: With the latest update of the HotSpot Login App, the hash calculation method was renamed from "getSecureHashForIP" to the more meaningless "getStringMax". Well enough.


At the very beginning of this analysis, it turned out that the automatic setup procedure requires a cellular data connection (to enable the backend to match a requester's cellular IP address to their account information). This is ensured by the app class ReachabilityARC, which displays an error message whenever the setup is invoked from within a WiFi network. Clearly, this renders all well-known WiFi-based network capture approaches useless. Thus, in order to inspect the app's traffic, the cellular data connection has to be intercepted. This can easily be accomplished within iOS by setting up a specific APN (Access Point Name) payload that defines a reverse proxy (see Figure 2). Although those settings are not directly accessible from the iOS user interface, the Apple Configurator application allows the creation of so-called mobile configuration profiles, which in turn provide access to hidden settings like APN proxy configurations. It should be noted that, in practice, it is recommended to redirect the intercepted proxy traffic back to the iOS device in order to deliver it to the backend systems. This ensures that all requests are in fact issued via the cellular network, which might prevent app malfunctions. For more information on how to easily proxy back to the device, please refer to my recent blog post "The Proxy Fight".

Figure 2: Intercepting cellular network connections using an iOS mobile configuration profile

Montag, 29. Juli 2013

How to Easily Spot Broken Cryptography in iOS Applications

Behind the Scenes of iPIN Lite – A Secure PIN & Password Safe

Within one of my recent research projects on mobile application security, I reviewed some password managers for iOS devices from the Apple App Store. The primary goal of this study was to demonstrate the diverse possibilities of iOS runtime injection and how our new tool Snoop-it eases security assessments of iOS applications.

Note: Snoop-it is a tool that assists dynamic analysis and blackbox security assessments of iOS applications by retrofitting existing apps with debugging and runtime tracing capabilities. It was introduced at the DeepSec security conference and is publicly available via our Cydia repository.

Previous studies have indicated that many of the available secure password managers aren't as secure as intended. In their study "Secure Password Managers", Andrey Belenko and Dmitry Sklyarov have shown that many mobile password managers fail to provide the claimed level of data protection.

One quite popular app that was not included in their study is "iPIN Lite - Secure PIN & Password Safe" by IBILITIES, INC. This app caught my attention, not least because it advertises an "innovative sensor keyboard" and "state-of-the-art encryption" using the "Advanced Encryption Standard and a key length of 256 bit" – so what could possibly go wrong?

Dynamic Analysis 

The typical approach to analyzing an iOS application dynamically is to examine the app on a jailbroken device. This removes the limitations imposed by Apple, provides root access to the iOS operating system, and enables access to the Objective-C runtime.

Thus, after installing iPIN Lite (version 2.27) from the Apple App Store on my testing device, I set up Snoop-it. During initialization of iPIN Lite, Snoop-it is transparently integrated using library injection techniques. At the same time, a webserver is started inside the app in order to make all debugging and runtime tracing capabilities of Snoop-it accessible via an easy-to-use graphical web interface.

After iPIN Lite had finished launching and the sensor keyboard (a special login view, more on this later) was displayed, I pointed my browser to the Snoop-it web interface.

One feature of Snoop-it is monitoring an app's file system accesses. During initialization of iPIN Lite, several ViewControllers and resource files are loaded, obviously to present the login view. Less obvious, but even more interesting, was one access to a file named iPinSecurityModel.dat, which resides in the /Library/ipin_data/ folder of the application sandbox (see Figure 1).

Figure 1: Files accessed by iPIN Lite at startup

Although this file probably serves as the basis for the security model of iPIN Lite, it was not protected by Apple's file data protection mechanisms (protection class NSFileProtectionNone). Consequently, one of the next steps was to look at the contents of this file (with Snoop-it, this is as easy as double-clicking the specific entry to download the file). Unfortunately, the contents of the security model file appeared to be in a binary format, probably some kind of encoding or encryption. Worse luck! So what next?

Luckily, the characteristics of the Objective-C runtime enable comprehensive dynamic analysis of running apps. One of the most important functions of the Objective-C runtime is objc_msgSend. This function serves as a central dispatcher and routes messages between existing objects. Accordingly, every method invocation in Objective-C results in one or more messages to that dispatcher. If we could intercept all messages to this dispatcher, we would get a very clear picture of the actual control flow and of what is going on inside the app.

One solution would be to monitor all calls to objc_msgSend at the debugger level using gdb. However, this approach makes an awful lot of noise, as all the background activities of the runtime show up as well. In consequence, it's really hard to figure out app-specific calls.

A better approach is to intercept messages to objc_msgSend within the runtime itself. On the runtime level, filters can be applied to focus on app-specific classes and method invocations inside the actual app. Inspired by Aspective-C and Subjective-C, we extended those existing solutions to consider penetration testing needs and integrated a powerful method tracing feature into Snoop-it.

Thus, in order to evaluate the encryption scheme, I switched over to the method tracing tab and examined the methods that were invoked during initialization of iPIN Lite. I was especially interested in the processing of the security model file, which has shown up in the file system access list earlier (see access to the file iPinSecurityModel.dat in Figure 1).

Indeed, as the tracing output reveals (see Listing 1), the security model file was accessed at the very beginning. In fact, the file was protected using a hard-coded cryptographic key that resides inside the application binary.

+ [iPinModel(0x90f68) initFromFile]
+ [iPinModel(0x90f68) securityModelFilePath]
+ [iPinModel(0x90f68) securityModelFilePath]
+ [PBKDF2(0x9124c) getKeyForPassphrase:], args: <__NSCFConstantString 0x92160: [initForWritingWithMutableData]>
+ [iPinModel(0x90f68) initSharedModelWithUnarchiver:withObjectKey:], args: <0x2002aef0>, <__NSCFConstantString 0x92150: iPINModel>
+ [iPinModel(0x90f68) sharedModel]
- [iPinModel(0x200e2130) initWithCoder:], args: <0x2002aef0>
- [iPinModel(0x200e2130) setSensorHash:], args: <__NSCFString 0x2002a630: 8CF37F50FB1A7943FBA8EAA20FFF1E56>
- [iPinModel(0x200e2130) setEncryptedSensorCode:], args: <__NSCFData 0x2002a540, length 16 bytes>
- [iPinModel(0x200e2130) setPasswordHash:], args: <__NSCFString 0x2002a470: 098F6BCD4621D373CADE4E832627B4F6>
- [iPinModel(0x200e2130) setEncryptedPassword:], args: <__NSCFData 0x2002a2d0, length 16 bytes>
- [iPinModel(0x200e2130) setFailedAttemptsCounter:], args: 0
Listing 1: Method tracing output of iPIN Lite – Part 1

According to the tracing output shown in Listing 1, the security model file contains hashes of a sensor-code (sensorHash) and a password (passwordHash) as well as an encrypted password string (encryptedPassword). These values are transferred into an instance of the iPinModel class. Presumably, these hashes are used later during authentication to verify the sensor-code or password entered by the user. It's quite questionable whether a key derivation function applied to a static string really makes sense :-)

Anyway, let’s take a quick look at this sensor-code: iPIN Lite provides an “innovative sensor keyboard” which consists of 9 touch-sensitive sensors (see Figure 2). This keyboard is supposed to provide “quick access to all your PINs - without any annoying and time-killing passwords.” The authentication is based on a geometrical shape or any individual sensor combination. Therefore, the “individual sensor code is calculated by the order in which (..) these sensors have been activated and deactivated”. By now, this looks like another showcase of the everlasting conflict between usability and security. Let’s see.

Figure 2: Sensor Keyboard of iPin Lite

As soon as a sensor is touched, its color changes to blazing blue. In the background, the touch events are registered by the corresponding ViewControllers.

The following method trace (see Listing 2) shows that the sensors are numbered consecutively from 10 to 90. A touch on the upper middle sensor corresponds to a value of 20. In the end, the values of all touched sensors are joined into one common sensor-code. On every touch, an MD5 hash is calculated from the current sensor-code and compared to the sensorHash value (which was derived from the security model file). Consequently, the overall security of iPIN Lite solely depends on the strength of these sensor-codes, whose search space is in fact very limited. If we could guess the sensor-code, the security model of iPIN Lite would be completely broken.

- [UISensorKeyboardImageView(0x200dc000) touched]
- [UISensorKeyboardImageView(0x200dc000) touch]
- [UISensorKeyboardImageView(0x200dc000) setTouched:], args: 1
- [UISensorKeyboardImageView(0x200dc000) numberOfTouches]
- [UISensorKeyboardImageView(0x200dc000) setNumberOfTouches:], args: 1
- [SensorKeyboardViewController(0x200c9a40) tock]
+ [iPinModel(0x90f68) sharedModel]
- [iPinModel(0x200e2130) sensorSoundTurnedOff]
- [SensorKeyboardViewController(0x200c9a40) input]
- [UISensorKeyboardImageView(0x200dc000) value]
- [SensorKeyboardViewController(0x200c9a40) setInput:], args: <__NSCFString 0x200dec80: 20>
- [LoginViewController(0x20093ce0) valueChanged:], args: <__NSCFString 0x200dec80: 20>
+ [CryptoUtils(0x90f54) md5:], args: <__NSCFString 0x200dec80: 20>
+ [iPinModel(0x90f68) sharedModel]
- [iPinModel(0x200e2130) sensorHash]
Listing 2: Method tracing output of iPIN Lite – Part 2

Attacking the Encryption Scheme 

Snoop-it provides a feature to invoke arbitrary methods at runtime. For this, Snoop-it queries the Objective-C runtime for all available app classes and methods during startup. In addition, Snoop-it monitors the initialization of each class and keeps track of all available instances in memory in order to invoke their instance methods later on. Thus, to determine the current sensorHash, I used that feature and invoked the corresponding getter method of the iPinModel class. This returned the hash value 8CF37F50FB1A7943FBA8EAA20FFF1E56 (see Figure 3).

Figure 3: Determining the actual sensorHash from an instance of the iPinModel class

In order to attack the encryption scheme, I wrote a Python script to brute-force all possible sensor codes and compare them against this sensorHash. After a few seconds, the script provided the correct sensor code sequence 10 20 30 60 90 (see Figure 4).

Output of the python script:
$ python -s 8CF37F50FB1A7943FBA8EAA20FFF1E56 
Sensor Hash: 8cf37f50fb1a7943fba8eaa20fff1e56 
Sensor Code: 1020306090 

Figure 4: Sensor Code Accepted
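My original script is not reproduced here, but the brute-force idea can be sketched as follows (assuming, as the cracked code suggests, that each sensor appears at most once per code):

```python
import hashlib
from itertools import permutations

# Sensors are numbered consecutively from 10 to 90 (see Listing 2)
SENSOR_VALUES = [str(v) for v in range(10, 100, 10)]

def crack_sensor_code(sensor_hash, max_len=5):
    """Enumerate sensor activation sequences and compare their MD5
    against the target sensorHash value."""
    target = sensor_hash.lower()
    for length in range(1, max_len + 1):
        for combo in permutations(SENSOR_VALUES, length):
            code = "".join(combo)
            if hashlib.md5(code.encode()).hexdigest() == target:
                return code
    return None
```

With at most nine distinct sensors, the entire search space is tiny; applied to the sensorHash determined above, such a search recovers the sequence 10 20 30 60 90 within seconds, matching the output in Figure 4.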

Finally, let’s see how the actual app data is decrypted. According to the output shown in Listing 3, the sensor-code is used to derive a key and to decrypt the encryptedPassword value which is stored in the security model file.

- [iPinModel(0x2007ee20) decryptPasswordWithKey:], args: <__NSCFString 0x2009d570: 1020306090>
- [iPinModel(0x2007ee20) encryptedPassword]
+ [PBKDF2(0xf824c) getKeyForPassphrase:], args: <__NSCFString 0x2009d570: 1020306090>
- [iPinModel(0x2007ee20) setPassword:], args: <__NSCFString 0x200f9d50: secretPassword>
- [iPinModel(0x2007ee20) calculatePasswordHash]
- [iPinModel(0x2007ee20) password]
Listing 3: Method tracing output of iPIN Lite – Part 3

Afterwards, the decrypted password (“secretPassword”) is used to derive another key, which is then used to decrypt the actual iPIN data (see Listing 4).

- [iPinModel(0x2007ee20) password]
+ [PBKDF2(0xf824c) getKeyForPassphrase:], args: <__NSCFString 0x200f9d50: secretPassword >
- [Pin(0x2002a7c0) initWithCoder:], args: <0x1f5a4320>
- [Pin(0x2002a7c0) setPinValue:], args: <__NSCFString 0x2002e3f0: 4711>
- [Pin(0x2002a7c0) setNote:], args: <__NSCFString 0x2002a760: Sample PIN Note>
- [Pin(0x1f5d1890) initWithCoder:], args: <0x1f5a4320>
- [Pin(0x1f5d1890) setPinValue:], args: <__NSCFString 0x200a5830: 1337>
- [Pin(0x1f5d1890) setNote:], args: <NULL>
- [iPINDataModel(0x2002ce80) setPinList:], args: <__NSArrayM 0x1f5c5330, size: 2>
- [iPinNavigationController(0x200317f0) init]
Listing 4: Method tracing output of iPIN Lite – Part 4

Lessons Learned

Once again, this case has demonstrated that the security of an app is only as strong as its weakest link. Even if an app claims to protect your data using acknowledged encryption standards, it's always worth looking behind the scenes. While this was quite time-consuming in the past, our new tool Snoop-it allows thorough analyses and on-the-fly manipulation of arbitrary iOS apps via an easy-to-use graphical user interface. Thus, reverse engineering apps, bypassing client-side restrictions, or unlocking additional features and premium content becomes child's play. Using Snoop-it, the attack surface of any iOS app can be explored more efficiently, and even time-consuming steps, like evaluating encryption schemes, suddenly become possible in the twinkling of an eye.

Note: IBILITIES, INC. was informed about these findings a few months ago. In the meantime, an updated version of iPIN has been released.

The following video outlines the steps described above:


Thanks to Markus Troßbach for his close collaboration on developing Snoop-it!

Dienstag, 23. Juli 2013

The Proxy Fight, or How to Pentest an iOS App's Backend through an Existing VPN Connection

Have you ever wondered how to pentest a mobile app backend that is only available through an existing VPN connection? This is often the case when it comes to assessing the security of in-house developed enterprise apps. Usually, company-owned devices first need to establish a VPN connection to the company's intranet in order to access data from internal backend systems. While this is a good design decision from a security perspective, it makes a pentester's life a misery: as soon as a VPN connection is established, local LAN access is restricted. As a consequence, it is not as trivial as just configuring an HTTP proxy in your WiFi settings to man-in-the-middle between your app and the target web service.

To avoid wasting too much time travelling to assess those web backends on site, the following steps provide a quick and comfortable way to pentest an iOS app's web services from remote locations, even though an existing VPN connection is required.

Step 1 (on your host machine): Start your favorite intercepting proxy like Watobo, Burp, ZAP or the like (Port: 8080).

Step 2 (on your host machine): Configure your intercepting proxy to forward outgoing requests to an upstream HTTP proxy server using the following settings: Server: Port: 3128. In Burp, e.g., these settings are defined under Options -> Connections -> Upstream Proxy Servers.

Step 3 (on your iDevice): Go to Cydia and install the package 3proxy.

Step 4 (on your iDevice): SSH into your iDevice and prepare a 3proxy configuration file:
iDevice:~ root# cat /var/root/3proxy.cfg
log /var/root/3proxy.log D
logformat "%d-%m-%Y %H:%M:%S %U %C:%c %R:%r %O %I %T"
proxy -p3128 -n
Step 5 (on your iDevice): Run 3proxy on your iDevice:
iDevice:~ root# 3proxy /var/root/3proxy.cfg &
[1] 11755
Step 6 (on your iDevice): Select your VPN configuration profile from the iOS Settings App (General -> VPN), scroll down to the Proxy settings and press "Manual". Here we need to fill in the following configuration: Server: Port: 8080 (this is the port on your host machine where your intercepting proxy is listening, see Step 1).

Step 7 (on your host machine): Now comes the most critical stage. As access to the local network is restricted whenever a VPN connection is established, we need to SSH into the iDevice over USB using usbmuxd. For this, get the usbmuxd source package, unpack it, and run:
$ chmod +x ./usbmuxd-1.0.8/python-client/
$ ./usbmuxd-1.0.8/python-client/ -t 22:2222

Finally, run the following command to establish an SSH connection to your iDevice over USB and to set up all required SSH port forwardings:
$ ssh -p 2222 -L 3128: -R 8080: root@

Figure 1: Overview of the proxy chaining setup

Using this setup, every HTTP request originating from your iDevice is first sent to the configured VPN proxy server. That proxy server is nothing else than the intercepting proxy running on your host machine, made accessible via the SSH tunnel over USB. After intercepting the HTTP requests, your intercepting proxy forwards them to the actual backend via the VPN connection, using the 3proxy service running on your iDevice. From this point on, you can proceed with your basic pentesting procedures and behave as if no VPN were present at all.

Please note that you might not be able to modify the VPN proxy settings on your device if the VPN profile was issued by a Mobile Device Management (MDM) solution. In this case, you need to adjust the VPN proxy configuration via the MDM interface. But beware: some MDM solutions won't accept "localhost" (or as a valid proxy server setting.

Figure 2: Some MDM solutions like MobileIron are more restrictive on proxy server settings than iOS itself

Figure 2 shows the related error message when localhost is used as a VPN proxy setting within MobileIron. This restriction can easily be bypassed by setting up an alias for localhost in your iDevice's /etc/hosts and pointing your MDM to that alias.