Kerckhoffs' Principle
In his 1883 book La Cryptographie Militaire [2], Auguste Kerckhoffs [1] stated six axioms of cryptography. Some are no longer relevant given the ability of computers to perform complex encryption, but his second axiom, now known as Kerckhoffs' Principle, remains critically important:
“Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi.”

“The method must not need to be kept secret, and having it fall into the enemy's hands should not cause problems.”
The same principle is also known as Shannon's Maxim, after Claude Shannon, who formulated it as "The enemy knows the system."
That is, the security should depend only on the secrecy of the key, not on the secrecy of the system. Keeping keys secret, and changing them from time to time, are reasonable propositions. Keeping your methods — the design of your cryptographic system — secret is more difficult, perhaps impossible in the long term against a determined enemy. Changing a deployed system can also be quite difficult. The solution is to design your system so that it remains secure even if the enemy knows how it works; then all you need to manage is keeping the keys secret.
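As a concrete sketch of that division of labor, here is a minimal Python example (illustrative names, standard library only) that authenticates messages with HMAC-SHA256. Every detail of the algorithm is public; the 32-byte key is the only secret.

```python
import hashlib
import hmac
import secrets

def generate_key() -> bytes:
    """Return a fresh 256-bit key; this is the only secret in the system."""
    return secrets.token_bytes(32)

def tag_message(key: bytes, message: bytes) -> bytes:
    # The computation is completely public (HMAC-SHA256); knowing it
    # does not help an attacker who lacks the key.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify_message(key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(tag_message(key, message), tag)

key = generate_key()
tag = tag_message(key, b"attack at dawn")
assert verify_message(key, b"attack at dawn", tag)
assert not verify_message(key, b"attack at dusk", tag)
```

An attacker who reads this code, and even the published specification of HMAC itself, gains no advantage without the key; that is precisely the property Kerckhoffs' Principle demands.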
Another English formulation is: "If the method of encipherment becomes known to one's adversary, this should not prevent one from continuing to use the cipher." [3]
Implications for analysis
Is your system secure when the enemy knows everything except the key? If not, then it is certain to become worthless at some point, because sooner or later the other details will leak or be discovered. Since a security analyst cannot know when that point might come, the analysis can be simplified to: the system is insecure if it cannot withstand an attacker who knows all its internal details.
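A deliberately weak toy example (hypothetical, for illustration only) shows what that analysis looks like in miniature: a "cipher" with a 16-bit key whose algorithm the attacker knows in full. With the system known, nothing stands between the attacker and the plaintext except the size of the keyspace, which here is trivially searchable.

```python
import hashlib

def toy_encrypt(key: int, plaintext: bytes) -> bytes:
    # Toy cipher, NOT a real design: a keystream is derived from the
    # 16-bit key, and XOR makes encryption and decryption identical.
    stream = hashlib.sha256(key.to_bytes(2, "big")).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

ciphertext = toy_encrypt(0xBEEF, b"attack at dawn")

# The enemy knows the system, so a known-plaintext attack reduces to
# exhaustive search over all 2**16 keys.
for guess in range(2**16):
    if toy_encrypt(guess, ciphertext) == b"attack at dawn":
        print(f"key recovered: {guess:#06x}")
        break
```

The same search against a 128-bit or 256-bit key is computationally infeasible; whether a system retains that property when every other detail is known is exactly what the analyst is checking.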
Any serious enemy — one with strong motives and plentiful resources — will learn all the other details. In war, the enemy will capture some of your equipment and some of your people, and will use spies. If your method involves software, enemies will do memory dumps, run it under the control of a debugger, and so on. If it is hardware, they will buy or steal some and build whatever programs or gadgets they need to test them, or dismantle them and look at chip details with microscopes. In any of these cases, they may also bribe, blackmail or threaten your staff or your customers. One way or another, sooner or later they will know exactly how it all works.
From the defender's point of view, using secure cryptography is supposed to replace a difficult problem — keeping messages secure — with a much more manageable one — keeping relatively small keys secure. A system that requires long-term secrecy for something large and complex — the whole design of a cryptographic system — obviously cannot achieve that goal. It only replaces one hard problem with another.
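A short sketch (hypothetical class, for illustration) of what the manageable problem looks like: the entire secret is 32 bytes, small enough to store in a hardware token and cheap to replace, while the system itself can be published and never needs to change.

```python
import secrets

class SecureChannel:
    """The design of this class can be published; only the key is secret."""

    def __init__(self) -> None:
        self.key = secrets.token_bytes(32)  # the entire secret: 32 bytes

    def rotate_key(self) -> None:
        # A suspected compromise is handled by replacing the key,
        # not by redesigning or re-hiding the system.
        self.key = secrets.token_bytes(32)
```

Contrast a system whose security rests on the secrecy of its design: once that design leaks, the only remedy is a redesign and redeployment.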
Because of this, any competent person asked to analyze a system will first ask for all the internal details. An enemy will have them, so the analyst must have them as well if the analysis is to make sense.
Cryptographers will generally dismiss out of hand any security claims made for a system whose internal details are kept secret. Without analysis, no system should be trusted, and without details, it cannot be properly analyzed. If you want your system trusted, or even just taken seriously, the first step is to publish all the internal details. Of course, there are some exceptions: if a major national intelligence agency claims that one of its secret systems is secure, the claim will be taken seriously, because it has its own cipher-cracking experts. However, no one else making such a claim is likely to be believed.
Security through obscurity
It is moderately common for companies — and sometimes even standards bodies as in the case of the CSS encryption on DVDs — to keep the inner workings of a system secret. Some even claim this security by obscurity makes the product safer. Such claims are utterly bogus; of course keeping the innards secret may improve security in the short term, but in the long run only systems which have been published and analyzed can be trusted.
Steve Bellovin commented:
The subject of security through obscurity comes up frequently. I think a lot of the debate happens because people misunderstand the issue.

It helps, I think, to go back to Kerckhoffs' second principle, translated as "The system must not require secrecy and can be stolen by the enemy without causing trouble" (per http://petitcolas.net/fabien/kerckhoffs/). Kerckhoffs said neither "publish everything" nor "keep everything secret"; rather, he said that the system should still be secure *even if the enemy has a copy*.

In other words -- design your system assuming that your opponents know it in detail. (A former official at NSA's National Computer Security Center told me that the standard assumption there was that serial number 1 of any new device was delivered to the Kremlin.) After that, though, there's nothing wrong with trying to keep it secret -- it's another hurdle factor the enemy has to overcome. (One obstacle the British ran into when attacking the German Enigma system was simple: they didn't know the unkeyed mapping between keyboard keys and the input to the rotor array.) But -- *don't rely on secrecy*. [4]
That is, "security through obscurity" does not work. Anyone who claims something is secure (except perhaps in the very short term) because its internals are secret is either clueless or lying, perhaps both. Such claims are one of the common indicators of cryptographic snake oil.
References
1. Kahn, David (1996). The Codebreakers: The Story of Secret Writing, second edition, Scribner, p. 235.
2. Petitcolas, Fabien. La cryptographie militaire, http://petitcolas.net/fabien/kerckhoffs/.
3. Savard, John J. G. The Ideal Cipher, in A Cryptographic Compendium.
4. Bellovin, Steve (June 2009). Security through obscurity, Risks Digest.