Kerckhoffs' Principle
In 1883 Auguste Kerckhoffs [1] published two journal articles under the title La Cryptographie Militaire [2], in which he stated six axioms of cryptography. Some are no longer relevant given the ability of computers to perform complex encryption, but his second axiom, now known as Kerckhoffs' Principle, remains critically important:
"Il faut qu'il n'exige pas le secret, et qu'il puisse sans inconvénient tomber entre les mains de l'ennemi."

"The method must not need to be kept secret, and having it fall into the enemy's hands should not cause problems."
The same principle is also known as Shannon's Maxim, after Claude Shannon, who formulated it as "The enemy knows the system."
That is, the security should depend only on the secrecy of the key, not on the secrecy of the methods employed. Keeping keys secret, and changing them from time to time, are reasonable propositions. Keeping your methods — the design of your cryptographic system — secret is more difficult, perhaps impossible in the long term against a determined enemy. Changing a deployed system can also be quite difficult. The solution is to design the system assuming the enemy will know how it works, aiming at something that is secure even when the enemy knows everything except the key. If you achieve this, then all you need to manage is keeping the keys secret.
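To make the principle concrete, here is a minimal sketch in Python. The library choice (pyca/cryptography) and the sample message are illustrative assumptions, not anything from Kerckhoffs' text. The algorithm, AES-256-GCM, is fully published; the only secret in the whole system is the key.

```python
# Minimal sketch of Kerckhoffs' Principle: a fully published cipher
# (AES-256-GCM) whose security rests only on the secrecy of the key.
# The library choice (pyca/cryptography) is an assumption for illustration.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the ONLY secret in the system
nonce = os.urandom(12)                      # public, but must never repeat per key

ciphertext = AESGCM(key).encrypt(nonce, b"attack at dawn", None)

# The enemy may know the algorithm, the nonce, and this source code;
# without the key, recovering the plaintext is computationally infeasible.
assert AESGCM(key).decrypt(nonce, ciphertext, None) == b"attack at dawn"
```

Note the management payoff: if the key is ever compromised, generating a fresh one is trivial, and nothing about the deployed system has to change.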
Another English formulation is: "If the method of encipherment becomes known to one's adversary, this should not prevent one from continuing to use the cipher." [3]
Security through obscurity
It is moderately common for companies — and sometimes even standards bodies, as in the case of the CSS encryption on DVDs — to keep the inner workings of a system secret. Some even claim that this security through obscurity makes the product safer. Such claims are bogus: keeping the innards secret may improve security in the short term, but in the long run only systems which have been published and analyzed should be trusted.
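As a hypothetical illustration of why such claims fail, suppose a vendor's "proprietary" cipher is nothing more than XOR with a fixed pad baked into the product (the scheme and all names below are invented for this sketch). Once the method leaks, a single known plaintext/ciphertext pair exposes every message ever sent:

```python
# Hypothetical "proprietary" cipher: XOR with a pad hidden in the design.
# The vendor's entire security rests on nobody learning the method.
SECRET_PAD = b"obscure!"  # the hidden "secret", baked into every copy shipped

def proprietary_encrypt(msg: bytes) -> bytes:
    return bytes(m ^ SECRET_PAD[i % len(SECRET_PAD)] for i, m in enumerate(msg))

known_plain = b"HELLO WORLD"
known_cipher = proprietary_encrypt(known_plain)

# An attacker who learns the *method* (reverse engineering, a leak, a
# disgruntled employee) recovers the pad from one known pair:
recovered = bytes(p ^ c for p, c in zip(known_plain, known_cipher))
assert recovered == (SECRET_PAD * 2)[:len(known_plain)]
# From here on, every message ever protected by this scheme is readable.
```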
Steve Bellovin commented:
The subject of security through obscurity comes up frequently. I think a lot of the debate happens because people misunderstand the issue.

It helps, I think, to go back to Kerckhoffs' second principle, translated as "The system must not require secrecy and can be stolen by the enemy without causing trouble" (per http://petitcolas.net/fabien/kerckhoffs/). Kerckhoffs said neither "publish everything" nor "keep everything secret"; rather, he said that the system should still be secure *even if the enemy has a copy*.

In other words -- design your system assuming that your opponents know it in detail. (A former official at NSA's National Computer Security Center told me that the standard assumption there was that serial number 1 of any new device was delivered to the Kremlin.) After that, though, there's nothing wrong with trying to keep it secret -- it's another hurdle factor the enemy has to overcome. (One obstacle the British ran into when attacking the German Enigma system was simple: they didn't know the unkeyed mapping between keyboard keys and the input to the rotor array.) But -- *don't rely on secrecy*. [4]
That is, it is an error to rely on the secrecy of a system. Anyone who claims something is secure (except perhaps in the very short term) because its internals are secret is either clueless or lying, perhaps both. Such claims are one of the common indicators of cryptographic snake oil.
Is your system secure when the enemy knows everything except the key? If not, then at some point it is certain to become worthless. Since a security analyst cannot know when that point might come, the analysis can be simplified to: "The system is insecure if it cannot withstand an attacker that knows all its internal details."
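A toy sketch of that simplification: hand the attacker the complete design of a deliberately weak cipher (invented here purely for illustration) and the analysis collapses to counting keys. With only 20 bits of key, and a known plaintext, exhaustive search succeeds in a few seconds:

```python
# Toy cipher, fully public by assumption: the keystream is just the
# SHA-256 digest of a 20-bit key. The design hides nothing; the only
# question an analyst needs to ask is "how big is the key space?"
import hashlib

def toy_encrypt(key_int: int, msg: bytes) -> bytes:
    stream = hashlib.sha256(key_int.to_bytes(3, "big")).digest()
    return bytes(m ^ stream[i % 32] for i, m in enumerate(msg))

secret_key = 0x8F3A1                         # only 20 bits of secrecy
ct = toy_encrypt(secret_key, b"attack at dawn")

# The attacker knows the whole design, plus one known plaintext
# (a common situation in practice), so the attack is pure key search:
for guess in range(2**20):
    if toy_encrypt(guess, ct) == b"attack at dawn":  # XOR: decrypt == encrypt
        print(f"key recovered: {guess:#x}")
        break
```

Under the Kerckhoffs assumption, the system's strength is exactly the cost of this search, which is why key length, not design secrecy, is the figure analysts quote.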
Any serious enemy — one with strong motives and plentiful resources — will learn all the other details. In war, the enemy will capture some of your equipment and some of your people, and will use spies. If your method involves software, enemies will do memory dumps, run it under the control of a debugger, and so on. If it is hardware, they will buy or steal some and build whatever programs or gadgets they need to test them, or dismantle them and look at chip details with microscopes. Or in any of these cases, they may bribe, blackmail or threaten your staff or your customers. One way or another, sooner or later they will know exactly how it all works.
From the defender's point of view, using secure cryptography is supposed to replace a difficult problem — keeping messages secure — with a much more manageable one — keeping relatively small keys secure. A system that requires long-term secrecy for something large and complex — the whole design of a cryptographic system — obviously cannot achieve that goal. It only replaces one hard problem with another.
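A small sketch of that trade, again assuming the pyca/cryptography library as above: however much data the system protects, the secret that must be kept and managed stays at 32 bytes.

```python
# The secret stays tiny and fixed-size no matter how much it protects.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # 32 secret bytes, total
bulk = os.urandom(10 * 1024 * 1024)          # 10 MB standing in for traffic
nonce = os.urandom(12)
ct = AESGCM(key).encrypt(nonce, bulk, None)

print(f"data protected: {len(bulk):,} bytes; secret to manage: {len(key)} bytes")
```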
Because of this, any competent person asked to analyse a system will first ask for all the internal details. An enemy will eventually have them, so the analyst must have them too if the analysis is to make sense.
References
- ↑ Kahn, David (1996), The Codebreakers: The Story of Secret Writing, second edition, Scribner, p. 235
- ↑ Petitcolas, Fabien, electronic version and English translation of "La cryptographie militaire"
- ↑ Savard, John J. G., "The Ideal Cipher", A Cryptographic Compendium
- ↑ Bellovin, Steve (June 2009), "Security through obscurity", Risks Digest