When a security hole exists in a system, and a random person discovers this exploit, they have a few choices: (1) publish the exploit widely and loudly, (2) keep it secret in the hope that no one else discovers it, or (3) use the exploit themselves.
For most people, the third option is ruled out pretty quickly, but number 2 holds a lot of appeal (particularly in the military). Most of the security people I talk to believe that number 2 gives only a weak feeling of security; the usual damning phrase for it is (said with a sneer) "security through obscurity". Many people insist on number 1: publishing the exploit widely and loudly to make sure it gets fixed fast, or else.
Poonawalla mostly explains it well, except that he misses a subtle point. Many of the most responsible security experts, the people who routinely discover holes in protocols or cryptographic algorithms, feel that the most responsible path is to give the information about the hole first to the people who can fix it. That way, the hole *may* get fixed before a potential hacker even discovers it. However, to pressure companies into actually fixing their holes, the security expert will also publish the exploit on the Internet after a week or a month or two.
This stuff gets discussed frequently here at the IETF, obviously in the security area. I've seen representatives of large software companies plead with independent security experts to help keep security holes secret, at least for a short while. My opinion is that it takes somebody with a certain amount of resentment against these large companies, and a certain willingness to make trouble and cause chaos, to refuse to keep a secret even for a short while.
Coincidentally, Bruce Schneier just discussed this tonight at the IESG plenary at the IETF. (It's the last day of the 55th IETF conference here in Atlanta, and I've been extremely busy, but it's been good.)