How I cross-site scripted Twitter in 15 minutes, and why you shouldn't store important data on 37signals' applications

Posted on Thu, 3 Sep 2009
Today the Ruby on Rails security team released a patch for a cross-site scripting issue which affected multiple high-profile applications, including Twitter and Basecamp. If you're concerned about the issue and would like to see the patch, please read the advisory from the Rails security team. In this post, I discuss the overall process of finding the issue, and the reason why I'd suggest that no important information be stored on the 37signals applications (Basecamp, Highrise, Backpack, and Campfire).

Finding software security issues is a creative art. With multiple layers of stacks and protocols combining to form a single application, spotting an issue requires seeing which layers and components can be mixed together to create behavior not originally intended by the program's designer. Sometimes, inspiration strikes and the pieces fall into place quickly. After seeing a bug in Unicode handling in an unrelated program a few weeks ago, I suddenly had an idea: "I wonder if there are any web applications with Unicode handling problems that might be security issues?"

My attention quickly turned to Twitter, the only web application I had open at that moment. A few minutes later, I had JavaScript from a URL query parameter falling through the escaping routines and running in the main body of twitter.com. Bingo! Cross-site scripting, the stuff that Twitter worms are made of. But was this a Twitter-specific issue, or did it affect other sites too?
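
To give a flavor of the bug class without reproducing the actual payload (the sketch below is my own illustration, not the Rails code or the sequence I used), consider an escaping routine that only knows how to handle well-formed UTF-8 and quietly gives up on anything else. It will hand attacker-controlled bytes straight to the page:

```python
# Hypothetical sketch of the bug class, NOT the actual Rails escaping code.
# This helper escapes text it can decode as UTF-8, but silently passes
# malformed byte sequences through untouched.
ESCAPES = {"&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;"}

def naive_escape(raw: bytes) -> bytes:
    try:
        text = raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw  # BUG: attacker-controlled bytes reach the page unescaped
    return "".join(ESCAPES.get(ch, ch) for ch in text).encode("utf-8")

print(naive_escape(b"<script>alert(1)</script>"))      # neutralized
print(naive_escape(b"\xff<script>alert(1)</script>"))  # slips straight through
```

The real issue was in the same spirit: a malformed sequence that made the escaping step ineffective while the browser still happily parsed the markup that followed it.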

I quickly pulled up a test Basecamp account I had made a while ago. Basecamp, like Twitter, uses Ruby on Rails for its web frontend, so this would be a good way to determine whether the issue was in Rails or if it was specific to Twitter. Sure enough, the same malformed UTF-8 sequence that worked on Twitter also worked on Basecamp.

At this point I was nearly convinced that the issue affected all Ruby on Rails applications, so I pulled up a demo Rails application I had lying around in my sandbox and constructed a malicious URL for that application. Lo and behold, it did not work! Nor did several other Ruby on Rails sites, including the official Rails site.

Now thoroughly confused, I decided to send the issue to Twitter and 37signals, the company that makes Basecamp, to see if they could determine where the issue was actually coming from. Not being familiar with the Rails framework myself, I figured it would be more efficient to enlist the developers of the applications I knew to be vulnerable to help root-cause the issue - especially if it was ultimately caused by a different library or component that both sites happened to share.

That's where the fun began. Most traditional application vendors - especially vendors of large, high-profile applications - have dedicated security contact information which comes up within the first few hits when you Google the vendor's name along with "security". This information usually explains the policies that the vendor follows for working with researchers and gives an email address and PGP public key that can be used to contact the security research team directly. Apple, Microsoft, and Google each have good examples of this type of page. There are two very good reasons why these companies all post detailed, easy-to-find contact instructions for security researchers that bypass the usual customer support queues:

  1. The information that a security researcher has to share is likely to be important, affecting many if not all of the users of the company's products, and
  2. Responsible disclosure is a public service; it pays to make it as easy as possible for researchers to inform you of an issue before going public.

It took a fair amount of effort to find Twitter's page, not helped by the overwhelming number of results for any Google search involving "Twitter". (The GPG public key listed on that contact page was added after I reported the issue.)

37signals was a different matter. They had a security page all right, but it didn't have any contact information for researchers. (That information has since been added to the page.) They helpfully suggested that if I had any questions about how their wonderful security allowed them to make claims like "Your data won't be compromised", I could file a support ticket - into the same queue where all customer support requests go. They promised to handle all requests within a few hours, so I filed one anyway.

After a few days of not receiving a response from either vendor, I decided to ping both of them for an update. I pinged a security researcher who I knew worked at Twitter, and after a little back and forth, things were quickly resolved and Twitter was patched. Once that was done, I replied to an email from Jay Edwards and suggested making the security contact page easier to find and including a GPG key; the latter has now been done. Overall the process of working with Twitter was smooth, and the issue was fixed quickly.

37signals was a different matter. I asked them if they had a dedicated security contact address, and was told to use the support form. I replied that I had, and once again asked for a direct contact; this time I was told to check my spambox. A quick grep of my Postfix logs showed that there had been no contact attempts from 37signals' mail servers since I submitted the issue, so I was now a little bit peeved. This netted a brief response: "I've resent my email to you." Sure enough, two emails then arrived in my inbox, the first line of each indicating it was being resent, but with no information about when the originals were sent. I replied and asked for that information to determine whether my mail server was dropping mail without my knowledge, but I haven't heard anything since.

What 37signals did say in their email was that the issue was indeed in Ruby on Rails, and that I should contact the Rails security team to get it resolved. This is what I was looking for from the beginning, so I sent a mail off to the Rails team with as much information as I could. Even though I couldn't provide a complete reduction of the issue, the Rails team was able to root-cause it and come up with a patch. Michael Koziarski was my main contact and did an excellent job of keeping me informed of the progress of the issue and the patch. As a researcher, I found the Rails security process a pleasure to work with, and I'd like to thank Michael and everyone involved in putting together the patch.

One surprise I discovered during the process was that IE8 includes a cross-site scripting filter which effectively blocked this attack. I'm very impressed with the effort Microsoft has taken to mitigate one of the most common web application security issues. Every other browser vendor needs to add this functionality yesterday.
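
As an aside, sites can interact with the filter themselves via the X-XSS-Protection response header. Here's a minimal WSGI sketch (my own example, not anything from the advisory) that asks filters supporting the "mode=block" option to refuse to render the page at all when a reflection is detected:

```python
from wsgiref.simple_server import make_server

# Sketch: opt a response into stricter handling by IE8-style XSS filters.
# "1; mode=block" asks the browser to block the page entirely rather than
# try to rewrite the suspicious part; "0" would disable the filter.
def app(environ, start_response):
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("X-XSS-Protection", "1; mode=block"),
    ]
    start_response("200 OK", headers)
    return [b"<html><body>Hello</body></html>"]

if __name__ == "__main__":
    make_server("localhost", 8000, app).serve_forever()
```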

Web application security is still an immature field, and many of the layers are sufficiently poorly designed that issues like this will pop up for a good long while. Just as buffer overflows have been a weak spot for C security for as long as the Internet has been around, escaping issues will continue to be a weak spot for web security for as long as we're afflicted with this particular architecture. Web application vendors can do more to protect their users. There's no reason why the same filtering technology used in IE8 on the client side can't be used on the server side to protect against these simple type-1 (reflected) XSS attacks, and even more complex reflected attacks could be caught by a signature-based database. At least one vendor claims to have a firewall that will filter out XSS attacks; I wouldn't be surprised to see more entrants in this space soon.
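
To make the server-side idea a little more concrete, here's a rough sketch of the kind of check I have in mind (the names and patterns are mine, and a real filter would need to be far more careful about encodings and evasion tricks): flag requests whose query parameters contain obvious script payloads before the application ever echoes them back.

```python
import re
from urllib.parse import parse_qs

# Naive signature-based check for reflected (type-1) XSS payloads in a
# request's query string. Illustrative only; trivial to evade as written.
SUSPICIOUS = re.compile(r"<\s*script|javascript:|on\w+\s*=", re.IGNORECASE)

def looks_like_reflected_xss(query_string: str) -> bool:
    params = parse_qs(query_string, keep_blank_values=True)
    return any(SUSPICIOUS.search(value)
               for values in params.values()
               for value in values)

# e.g. a request for /search?q=<script>alert(1)</script>
print(looks_like_reflected_xss("q=%3Cscript%3Ealert(1)%3C%2Fscript%3E"))  # True
print(looks_like_reflected_xss("q=rails+security"))                       # False
```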

Winners: Ruby on Rails, Microsoft, Twitter
Loser: 37signals

As I mentioned in the intro, I don't think it's wise to store important information on the 37signals suite of web applications. My experience working with them on this issue was so thoroughly poor that I am convinced that they can't be trusted with any data of mine. The grandiose claims made on their security page are simply factually incorrect. Your data can be compromised, and you can lose important data, and there might not be anything that 37signals can do about it if their upstream vendor's software is vulnerable or your browser is vulnerable. 37signals' confidence in their security was so complete that they didn't even bother to list a dedicated security contact. I'm not the only researcher to have issues with the 37signals security contact process recently either. They've since rectified this issue, but simply adding a contact address in response to a few researchers complaining won't change the attitudinal problem. 37signals seems to believe that their security is beyond reproach, and until they publicly commit to a robust security process and drop the self-congratulatory statements on security, I won't believe that they're following a process which will protect my data at least as well as I would protect it myself.
