Security through Obscurity vs. Public Algorithms

Much like the historical periods discussed in previous articles, which did not distinguish between concealing a message and encoding it (i.e. rendering it unintelligible), people long worked on the assumption that for a message to be legible only to its designated recipients, it was enough that no one else knew the encoding method. This was common with simple substitution ciphers, in which certain letters, words, or pre-arranged meanings were replaced by other characters, producing either a seemingly legible (though incomprehensible) text or one written in special characters familiar only to insiders.
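By way of illustration, here is a minimal sketch of such a monoalphabetic substitution in Python. The mapping table is an arbitrary example chosen for demonstration, not a historical cipher; the point is that the entire secret is the table itself, i.e. the encoding method.

    import string

    # A fixed, secret mapping: security rests entirely on no one else
    # knowing this table -- the "encoding method" is itself the secret.
    PLAIN = string.ascii_uppercase
    CIPHER = "QWERTYUIOPASDFGHJKLZXCVBNM"  # arbitrary illustrative permutation

    ENCODE = str.maketrans(PLAIN, CIPHER)
    DECODE = str.maketrans(CIPHER, PLAIN)

    def encode(message: str) -> str:
        return message.upper().translate(ENCODE)

    def decode(message: str) -> str:
        return message.translate(DECODE)

    print(encode("MEET AT DAWN"))  # -> DTTZ QZ RQVF
    print(decode("DTTZ QZ RQVF"))  # -> MEET AT DAWN

Anyone who learns the table can read every message ever sent with it, which is precisely the weakness discussed below.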

Only much later, largely because of the extensive undercover operations and classified communication of World War II, did it become evident that for some ciphers, knowledge of the encoding method alone was not sufficient; one also had to know the key that was used. A good example is the Enigma cipher machine mentioned earlier: the machine itself was not enough, and it was also vital to obtain the top-secret encoding manual (the key).

Once computer technology entered the scene, both cryptography and cryptology became exact scientific disciplines, with everything that entails. For reasons of both prestige and economics, individual research groups began publishing their results, and the idea of concealing the encoding method (not least because information spreads freely over the internet) became unsustainable. At the same time, new scientific findings led to encryption algorithms whose knowledge was useless to an attacker who did not also possess the keys.

As a result, contemporary cryptography works almost entirely with so-called public algorithms, in which the encryption method used to produce an encrypted message from the original text and a key is fully known to the public (and subject to expert scrutiny). The method's mathematical properties, however, make it extremely difficult, or rather prohibitively time-consuming, to reconstruct the original text from the encrypted message without knowledge of the key that was used. The secrecy therefore lies in concealing the key and in its (one-time) secure exchange.
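As a toy illustration of this principle, consider the one-time pad: the algorithm, a simple XOR of the message with a random key, is completely public, yet the ciphertext reveals nothing without the key. The sketch below (with an illustrative message, not a real protocol) assumes the key is truly random, as long as the message, exchanged securely, and never reused.

    import secrets

    # The algorithm (XOR with a random key) is fully public; all secrecy
    # rests in the key, which here is generated fresh for a single use.
    def xor_bytes(data: bytes, key: bytes) -> bytes:
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"ATTACK AT DAWN"                # illustrative plaintext
    key = secrets.token_bytes(len(message))    # the (one-time) secret key

    ciphertext = xor_bytes(message, key)       # encryption
    recovered = xor_bytes(ciphertext, key)     # decryption: same operation

    assert recovered == message
    print(ciphertext.hex())

An attacker who knows every line of this code but not the key learns nothing from the ciphertext; an attacker who obtains the key reads the message instantly, regardless of how well the code is hidden.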

We conclude with a rather unusual, though commonly used, term: “security through obscurity”, which today describes secrecy that rests on keeping secret the process that turns the original text into the encoded one. Contemporary cryptologists generally frown upon it and consider it outdated. Nevertheless, it can serve well in simple applications used by a small and less important circle of users. The real problem is that such a scheme, typically developed by a small, closed circle of cryptographers, can be broken in no time should it become the target of a dedicated group of attackers. Moreover, restoring the security of a broken application takes more than a mere change of keys: the source code of the entire application must be changed as well.