If at first you don't succeed, call it version 1.0
Monday, August 18, 2008

Do you think that just following security best practices will keep you and your users safe? Think again.

Recently, I've found two examples where following security best practices can actually expose you to security vulnerabilities if you don't put your mind to it.

Example no. 1 - NoScript

Almost everyone who uses Firefox and is concerned about their own security and privacy uses NoScript. Unfortunately, for the employees of PhishMe.com's customers, using NoScript will actually expose their private login credentials.

According to an eWeek article: "PhishMe, a new security SAAS offering from the Intrepidus Group, enables companies to launch mock phishing attacks against their own employees in the name of improving e-mail security...PhishMe does not collect sensitive information...JavaScript on the Web site overrides anything users actually input into fields during tests."

So, basically, NoScript disables JavaScript in the user's browser, the masking script never runs, and the user's sensitive information gets sent over the network after all.

Now, both of the teams here play fair in this game. Intrepidus Group follows a privacy best practice by using JavaScript to alter the HTML form so that the user's private information is never sent over the network, and NoScript follows its own security best practice by disabling JavaScript on an unknown website.
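Based only on the eWeek description above, here is a minimal sketch of how such client-side masking might work (field names and the "<redacted>" marker are my own illustration, not PhishMe's actual code). The point is that the masking exists only as JavaScript, so when NoScript blocks the script, the plain HTML form submits the real values:

```javascript
// Hypothetical sketch of PhishMe-style client-side masking -- illustrative
// only, not the service's real code. With JavaScript enabled, the handler
// replaces whatever the user typed before the form is sent; with NoScript
// blocking the script, the handler never runs and the real values go out.
function buildPayload(fields, javascriptEnabled) {
  if (!javascriptEnabled) {
    // NoScript case: the plain HTML form submits the user's real input.
    return fields;
  }
  const masked = {};
  for (const name of Object.keys(fields)) {
    // Record only *whether* a field was filled, never its contents.
    masked[name] = fields[name] === '' ? '' : '<redacted>';
  }
  return masked;
}
```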

But combine the two (don't you love those blended threats?) and it breaks down: the PhishMe.com service tries to phish users' credentials using pages that are not in NoScript's trusted domains, NoScript then disables JavaScript on the mock phishing page, and the targets of the mock phishing attack end up exposing their real credentials.

 

Example no. 2 - Plain Text Emails

From "forgot my password" to "Johnny Depp wants to be added to your friends list", many services today send notification emails to their users. Security best practices wave a big "no-no" at HTML email and suggest that you read your messages in plain text. Some services already do the job for you and send their messages in plain text.

Unfortunately, what most of those services forget is that in a plain text email, any text beginning with a URL scheme (e.g. http://, https://) or with "www." is automatically turned into a clickable link by most, if not all, mail clients.
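The exact auto-linking rules vary from client to client, but a simplified version of the heuristic looks something like this (the regex is illustrative, not any particular client's implementation):

```javascript
// Simplified auto-linking heuristic, similar to what many mail clients apply
// to plain text. Matches text starting with http://, https://, or "www.".
const AUTOLINK = /\b(?:https?:\/\/|www\.)[^\s]+/g;

function autolink(text) {
  return text.replace(AUTOLINK, (url) => {
    // "www." matches get an implicit http:// scheme.
    const href = url.startsWith('www.') ? 'http://' + url : url;
    return '<a href="' + href + '">' + url + '</a>';
  });
}
```

Anything user-controlled that ends up in the message body gets the same treatment, which is exactly the problem described below.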

This becomes a big issue when the plain text message contains user-generated content. The exact problem is described in my advisory on the TwitPwn website.

Twitter sends its users a notification every time another user starts following them on Twitter. This email uses the following template:

Hi, *Your full name*.

*Follower's full name* (*Follower's username*) is now following your updates on Twitter.

Check out *Follower's username*'s profile here:

http://twitter.com/*Follower's username*

You may follow *Follower's username* as well by clicking on the "follow" button.

Best,

Twitter

 

Now, both the follower's username and full name can be altered by the attacker, as they are saved in his own profile. The username was restricted to alphanumeric characters and therefore could not be used for the attack. But the full name was restricted only by size, around 25 characters, which is enough room for the attacker's malicious http://www.evil.com link. All the attacker had to do was run a bot that automatically follows people, and then wait for the victims to click the links in the emails sent by Twitter.

This vulnerability was fixed by Twitter, and the dot character can no longer be used in the full name.
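A sketch of the kind of server-side check Twitter's fix implies (the function name and the roughly-25-character limit mentioned above are my own illustration, not Twitter's actual code). Without a dot, strings like "www.evil.com" can no longer auto-link in plain text mail:

```javascript
// Illustrative validation in the spirit of Twitter's fix: cap the length
// and reject any dot, so URL-shaped strings cannot survive in a full name.
function isSafeFullName(name) {
  return name.length <= 25 && !name.includes('.');
}
```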

 

Conclusion

This post was not intended to get people to stop following security "best" practices. On the contrary, I encourage you all to follow them. All I'm saying is that following these and other security "best" practices will not make you and your users bulletproof. You still need to be careful and think about other attack vectors too...


Monday, August 18, 2008 9:19:57 PM UTC | Comments [2] | Security#
Wednesday, August 20, 2008 3:07:23 PM UTC
If the PhishMe.com pages don't use a fallback mechanism to avoid phishing users who keep JavaScript disabled (ever heard of the <NOSCRIPT> element?), I guess there's good ground for suing them over unauthorized data collection: they just cannot assume the users they're probing have JavaScript enabled.

They would do better to avoid submitting input fields at all (masked or not), but they probably want to know which fields have been filled by the user. This could easily be achieved in a "progressive enhancement" way, by keeping those input controls outside the form element (so that they cannot be accidentally submitted) and adding the statistical info to a hidden field at submission time, using JavaScript if available.

If they really didn't implement a similar mechanism with due diligence, as you imply, they entirely deserve to be sued.

P.S.: I'm NoScript's author: the guy who pointed me to this article in a comment on another blog post said I'm dishonest because I didn't declare it ;)
Saturday, August 23, 2008 4:33:23 AM UTC
Blacklists are no security measure:

http://1089054563/
G. E. Rode
Disclaimer
The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.