Jun 17

SafeDisc is a CD/DVD copy prevention program for Windows applications and games, developed by Macrovision Corporation. It aims to prevent software copying and to resist home media duplication devices, professional duplicators, and reverse engineering attempts. There have been several editions of SafeDisc over the years, each making discs harder to copy than the last. The current revision is marketed as SafeDisc Advanced.

Though SafeDisc protection effectively prevents regular home users from creating functional copies of CDs or DVDs, it is quite easy for skilled software crackers to bypass. The early versions of SafeDisc did not make discs very difficult to copy. Versions 2.9 and later can produce discs that are difficult to copy or reverse engineer, requiring specific burners capable of writing the “weak sectors” and odd data formats that are characteristic of SafeDisc.

Previous versions of SafeDisc were overcome by disc image emulator software such as Daemon Tools and Alcohol 120%. SafeDisc currently blacklists such software, meaning that those who want to use this method must install additional software to cloak the mounter. Examples include CureRom.

Another potential attack on SafeDisc is to pull the encrypted application out of the archive that contains it. All SafeDisc encrypted discs contain an ICD file, an encrypted format used by SafeDisc to ensure that the original CD is loaded. UnSafeDisc circumvents and decrypts SafeDisc encrypted files by opening the ICD file, decrypting it, and converting it to an EXE file. However, each program requires a specific patch to enable full functionality.

Operation

SafeDisc adds a unique digital signature to the optical media at the time of replication. Each time a SafeDisc-protected program runs, the SafeDisc authenticator performs various security checks and verifies the SafeDisc signature on the optical media. The authentication process takes about 10 to 20 seconds. Once verification has been established, the sequence is complete and the program will start normally. The SafeDisc signature is designed to be difficult to copy or transfer from the original media. Certain multimedia programs are designed to run from the PC hard drive without accessing files from the program disc after the initial installation. SafeDisc will permit this as long as the consumer retains the original CD or DVD disc, which is required for authentication each time the program is launched. Failure to place the original disc in the drive when loading the program will prevent validation of the SafeDisc signature.

SafeDisc (V1)

SafeDisc V1 protected CDs can be recognized by several files on the CD:

  • 00000001.TMP
  • CLCD16.DLL
  • CLCD32.DLL
  • CLOKSPL.EXE
  • DPLAYERX.DLL

They can also be recognized by the existence of two files, GAME.EXE and GAME.ICD (where GAME is replaced with the actual game’s name).

The EXE executable is only a loader which decrypts and loads the protected game executable in the encrypted ICD file.

The initial version of SafeDisc was easy for home users and professional duplicators alike to defeat, because the ICD file can be decrypted and converted into an EXE file.

SafeDisc (V2)

The following files should exist on every original CD:

  • 00000001.TMP
  • 00000002.TMP (not always present)

The loader file (.EXE) is now integrated into the main executable, making the .ICD file obsolete. Also the CLOKSPL.EXE file, which was present in SafeDisc v1, no longer exists.

The SD2 version can be found inside the .EXE file through its string: “BoG_ *90.0&!! Yy>”, followed by three unsigned longs; these are the version, subversion, and revision numbers (in hex). When making a backup, read errors will be encountered between sectors 822 and 10255.
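
As a sketch, the marker and version fields could be located with a short Python scan. This assumes, as described above, that the three little-endian unsigned longs directly follow the marker string; the exact layout may vary between SafeDisc builds.

import struct

SD2_MARKER = b"BoG_ *90.0&!! Yy>"

def safedisc_version(exe_path):
    # Search the raw executable image for the SD2 marker string.
    data = open(exe_path, "rb").read()
    pos = data.find(SD2_MARKER)
    if pos < 0:
        return None  # marker absent: not SafeDisc v2+
    # Read the three unsigned longs that follow:
    # version, subversion and revision (in hex).
    return struct.unpack_from("<III", data, pos + len(SD2_MARKER))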

The protection also has “weak” sectors, introduced with this version, which cause synchronization problems with certain CD writers. Digital signatures are still present in this version, but they have no effect on disc images mounted in Daemon Tools or similar programs. In addition, SafeDisc version 2.50 added ATIP detection, making it impossible to use a copy in a burner unless software that masks this is used (CloneCD can do this). SafeDisc versions 2.90 and above make burning copies more difficult, requiring burners capable of writing the “weak sectors”; such drives are uncommon.

SafeDisc (V3)

SafeDisc v3 uses a key to encrypt the main executable (EXE or DLL) and creates a corresponding digital signature which is added to the CD-ROM/DVD-ROM when it is replicated. The size of the digital signature varies from 3 to 20 MB depending on how strong the encryption must be. The authentication process takes about 10 to 20 seconds.

SafeDisc v3 is capable of encrypting multiple executables over one or more CD/DVD discs, as long as the executables are encrypted with the same key and the digital signature is added to each disc. SafeDisc v3 supports virtual drives as long as the original CD/DVD is available; once the disc has been authenticated, the game should continue to run from the virtual drive (as long as the virtual drive software has not been blacklisted…).

With the introduction of SafeDisc Version 3, which included support for CD/DVD media as well as virtual drives, the difficulty of burning a copy increased even further.

SafeDisc (V4)

The current SafeDisc version in use is Version 4, which has protected over 40% of games released since August 2004, including Quake 4.

SafeDisc driver vulnerability

SafeDisc installs its own Windows device driver to the user’s computer, named secdrv.sys. In addition to enabling the copy protection, it grants ring 0 access to the running application. This is a potential security risk, since trojans and other malware could use the driver to obtain administrator access to the machine, even if the programs are running under a limited account.

Even worse, apart from the default configuration on Windows XP, most installers do not set the driver’s security settings appropriately, allowing any user to point the driver configuration at an arbitrarily chosen executable, which is then started with administrator privileges at the next reboot.

On November 7, 2007, Microsoft stated that there is a vulnerability in the Macrovision SECDRV.SYS driver on Windows that could allow elevation of privilege. The vulnerability does not affect Windows Vista. The driver, secdrv.sys, is used by games which use Macrovision SafeDisc; without it, SafeDisc-protected games will not run on Windows.

written by admin

Jun 17

SecuROM is a CD/DVD copy protection product, most often used for computer games, developed by Sony DADC. SecuROM aims to resist home media duplication devices, professional duplicators, and reverse engineering attempts. The newest versions (v4 and up) prevent 1:1 CD-R copies from being made. Certain programs can circumvent its protection, but can’t duplicate it. The use of SecuROM is somewhat controversial. It installs a shell extension that prevents Windows Explorer from deleting 16-bit executables.

SecuROM v1.x–v3.x

One of the following files should exist in the installed directory (depending on the operating system) or in the root of the original CD:

  • CMS16.DLL
  • CMS_95.DLL
  • CMS_NT.DLL

The protection can also be recognized by the DADC mark on the inner ring of the CD. DADC is a CD manufacturing plant; more recent SecuROM-protected games are also pressed at other plants. Open the main executable using a hex editor and search for the following ASCII text (it should appear twice): CMS

SecuROM v4.6

The protection modifies a CD-ROM’s Q-channel in order to make a protected original distinguishable from a copy.

A set of nine locations where the Q-channel is purposely destroyed is computed by the following function (shown here as Python code), using a vendor-specific key.

BadSQ = 0x0
VendorKey = [0, 0, 0, 0, 0, 0, 0, 0, 0]
Seed = [0, 0, 0, 0, 0, 0, 0, 0, 0]
BadSQTable = [0, 0, 0, 0, 0, 0, 0, 0, 0]

round = 0
for a in range(0, 256):
    # Advance the running sector number by an amount derived from the
    # low five bits of one vendor-key byte.
    BadSQ = BadSQ + (VendorKey[a % 9] & 0x1F) + 0x20
    for b in range(0, 9):
        if Seed[b] == a:
            # A seed match records the current sector as one of the
            # nine bad-SQ locations.
            BadSQTable[round] = BadSQ
            round += 1

VendorKey[], Seed[] and BadSQ are initialized to secret values. Possible optimizations were omitted to reflect the original implementation.

The function calculates nine sector numbers; if the corresponding Q-channel is not readable at these locations, the CD is considered original. Note that the key is always the same for all titles issued by a specific vendor, resulting in identical Q-channel patterns. Also note that every key has 134,217,727 “twins” that produce an identical BadSQTable: only the low five bits of each of the nine key bytes enter the computation, so the remaining 27 bits are free, giving 2^27 - 1 = 134,217,727 other keys with the same table.

SecuROM v4.7 and above

After development on SecuROM had apparently stalled, SecuROM v4.7 was the first updated version in months, evidently serving as a “public” beta. The new SecuROM brought several major changes in how the protection works and how it is integrated into the target program.

Unlike SecuROM v4.6, which relied on illegal SubQ information, the new scheme utilises “data density measurement” (not to be confused with the “data position measurement” used by other protections). While the data density on normal CD/DVD-ROMs degrades steadily from the innermost to the outermost sector, the data density on SecuROM v4.7 (and up) protected CD/DVD-ROMs varies in a certain vendor-specific pattern. This pattern can be reconstructed by high-precision time measurement during software <-> CD/DVD drive interaction, and it reflects the vendor key mentioned above.

To do so, the protection defines a set of locations spread over the disc and issues two SCSI read commands per location. As the disc spins, the time it takes for the second command to return depends on the time the disc needs for a full revolution, and thus on the local data density. To achieve the required timing precision, the RDTSC instruction is used, which has a resolution of about 0.28 microseconds on x86 CPUs.

The pattern is made up of 72 locations, each with either normal or higher-than-normal density, and thus encodes a 72-bit binary pattern which assembles into the vendor-specific key mentioned above.
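
A rough sketch of the measurement idea in Python follows; the read_sector() callback and the threshold are hypothetical stand-ins, and a real implementation would issue raw SCSI commands and read RDTSC rather than use a high-level timer.

import time

def density_bit(read_sector, location, threshold_ns):
    # The first read positions the head at the location of interest.
    read_sector(location)
    start = time.perf_counter_ns()
    # The latency of the second read reflects the local data density.
    read_sector(location + 1)
    elapsed = time.perf_counter_ns() - start
    return 1 if elapsed < threshold_ns else 0

def read_vendor_pattern(read_sector, locations, threshold_ns):
    # 72 probed locations yield 72 bits, which assemble into the vendor key.
    return [density_bit(read_sector, loc, threshold_ns) for loc in locations]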

SecuROM v4.84 and beyond include “Trigger Functions”, which allow the developer to program multiple, fully customizable authentication checks throughout the entire application. As the protection places itself between the application’s code and the OS, it can alter the behaviour of selected system functions.

Consider the following example (pseudocode):

if (GetCurrentDate() == '13-32-2999') then
    WorkCorrectly()
else
    ScrewItUpSomehow()
end if

Obviously, a “normal” GetCurrentDate() function will never return '13-32-2999'. However, as SecuROM can modify the function’s result, the application can check for the protection’s presence during runtime; if the protection has been removed, the function will return some other, valid value, giving the application the opportunity to display an error message or render itself unusable (e.g. provoking a crash to desktop, or making enemies invincible).

There are many different ways in which “triggers” can be integrated into a program, making it much more complicated to universally circumvent the protection.

SecuROM v7.x

The latest SecuROM versions are all 7.x releases, which are updated continuously. SecuROM 7.x, if run under a non-admin user account, installs its own service called UAService7.exe, which runs in ring 3 (user mode) of the computer’s operating system.
SecuROM has said: “it has been developed to enable users without Windows™ administrator rights the ability to access all SecuROM™ features”. This has been called malware, and users must resort to third-party tools to remove the ‘protection’ after uninstalling the product.

written by admin

Jun 17

In cryptography, MD5 (Message-Digest algorithm 5) is a widely used cryptographic hash function with a 128-bit hash value. As an Internet standard (RFC 1321), MD5 has been employed in a wide variety of security applications, and is also commonly used to check the integrity of files. An MD5 hash is typically a 32-character hexadecimal number.

MD5 was designed by Ronald Rivest in 1991 to replace an earlier hash function, MD4. In 1996, a flaw was found in the design of MD5; while it was not a clearly fatal weakness, cryptographers began to recommend other algorithms, such as SHA-1. In 2004, more serious flaws were discovered, making further use of the algorithm for security purposes questionable.

Vulnerability

Because MD5 makes only one pass over the data, if two prefixes with the same hash can be constructed, a common suffix can be appended to both and the resulting messages will still collide; this makes it easier to turn a raw collision into a pair of plausible-looking files.

All that is required to generate two colliding files is a template file with a 128-byte block of data, aligned on a 64-byte boundary, that can be changed freely by the collision-finding algorithm.

Recently, a number of projects have created MD5 “rainbow tables” which are easily accessible online, and can be used to reverse many MD5 hashes into strings that collide with the original input, usually for the purposes of password cracking.

Applications

MD5 digests have been widely used in the software world to provide some assurance that a transferred file has arrived intact. For example, file servers often provide a pre-computed MD5 checksum for their files, so that a user can compare the checksum of the downloaded file against it. Unix-based operating systems include MD5 sum utilities in their distribution packages, whereas Windows users typically use third-party applications.

However, now that it is easy to generate MD5 collisions, it is possible for the person who created the file to create a second file with the same checksum, so this technique cannot protect against some forms of malicious tampering. Also, in some cases the checksum cannot be trusted (for example, if it was obtained over the same channel as the downloaded file), in which case MD5 can only provide error-checking functionality: it will recognize a corrupt or incomplete download, which becomes more likely when downloading larger files.

MD5 is widely used to store passwords. A number of MD5 reverse-lookup databases exist, which make it easy to recover passwords hashed with plain MD5. To prevent such attacks you can add a salt to your passwords before hashing them. It is also a good idea to apply the hash function (MD5 in this case) more than once (see key strengthening); this increases the time needed to hash a password and discourages dictionary attacks.
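
A minimal sketch of salting plus repeated hashing (the function names and iteration count are illustrative; a modern system would prefer a dedicated scheme such as PBKDF2 or bcrypt):

import hashlib
import os

def hash_password(password, salt=None, iterations=10000):
    # A random per-user salt defeats precomputed (rainbow table) lookups.
    if salt is None:
        salt = os.urandom(16)
    digest = salt + password.encode("utf-8")
    # Iterating the hash raises the cost of every dictionary guess.
    for _ in range(iterations):
        digest = hashlib.md5(digest).digest()
    return salt, digest

def verify_password(password, salt, expected, iterations=10000):
    return hash_password(password, salt, iterations)[1] == expected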

Algorithm

MD5 processes a variable-length message into a fixed-length output of 128 bits. The input message is broken up into 512-bit blocks; the message is first padded so that its length is divisible by 512. The padding works as follows: a single 1 bit is appended to the end of the message, followed by as many zeros as are required to bring the length of the message up to 64 bits fewer than a multiple of 512. The remaining 64 bits are filled with a 64-bit integer representing the length of the original message in bits.
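
The padding step is easy to express in Python, byte-oriented as real implementations are (the function name is illustrative):

import struct

def md5_pad(message):
    # Append the '1' bit (0x80 as a byte), then zeros until the length
    # is 8 bytes short of a multiple of 64, then the original length in
    # bits as a 64-bit little-endian integer.
    bit_length = len(message) * 8
    padded = message + b"\x80"
    padded += b"\x00" * ((56 - len(padded) % 64) % 64)
    padded += struct.pack("<Q", bit_length)
    assert len(padded) % 64 == 0
    return padded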

The main MD5 algorithm operates on a 128-bit state, divided into four 32-bit words, denoted A, B, C and D. These are initialized to certain fixed constants. The main algorithm then operates on each 512-bit message block in turn, each block modifying the state. The processing of a message block consists of four similar stages, termed rounds; each round is composed of 16 similar operations based on a non-linear function F, modular addition, and left rotation.

Pseudocode

//Note: All variables are unsigned 32 bits and wrap modulo 2^32 when calculating

var int[64] r, k    //r specifies the per-round shift amounts
r[ 0..15] := {7, 12, 17, 22,  7, 12, 17, 22,  7, 12, 17, 22,  7, 12, 17, 22}
r[16..31] := {5,  9, 14, 20,  5,  9, 14, 20,  5,  9, 14, 20,  5,  9, 14, 20}
r[32..47] := {4, 11, 16, 23,  4, 11, 16, 23,  4, 11, 16, 23,  4, 11, 16, 23}
r[48..63] := {6, 10, 15, 21,  6, 10, 15, 21,  6, 10, 15, 21,  6, 10, 15, 21}

//Use binary integer part of the sines of integers as constants:

for i from 0 to 63
    k[i] := floor(abs(sin(i + 1)) × (2 pow 32))

//Initialize variables:

var int h0 := 0x67452301
var int h1 := 0xEFCDAB89
var int h2 := 0x98BADCFE
var int h3 := 0x10325476

//Pre-processing:

append "1" bit to message
append "0" bits until message length in bits ≡ 448 (mod 512)
append bit length (bit, not byte) of unpadded message as 64-bit little-endian integer to message
//Process the message in successive 512-bit chunks:

for each 512-bit chunk of message
    break chunk into sixteen 32-bit little-endian words w[i], 0 ≤ i ≤ 15

//Initialize hash value for this chunk:

    var int a := h0
    var int b := h1
    var int c := h2
    var int d := h3

//Main loop:

    for i from 0 to 63
        if 0 ≤ i ≤ 15 then
            f := (b and c) or ((not b) and d)
            g := i
        else if 16 ≤ i ≤ 31
            f := (d and b) or ((not d) and c)
            g := (5×i + 1) mod 16
        else if 32 ≤ i ≤ 47
            f := b xor c xor d
            g := (3×i + 5) mod 16
        else if 48 ≤ i ≤ 63
            f := c xor (b or (not d))
            g := (7×i) mod 16
        temp := d
        d := c
        c := b
        b := b + leftrotate((a + f + k[i] + w[g]) , r[i])
        a := temp

//Add this chunk’s hash to result so far:

    h0 := h0 + a
    h1 := h1 + b
    h2 := h2 + c
    h3 := h3 + d
var int digest := h0 append h1 append h2 append h3
  //(expressed as little-endian)
  //leftrotate function definition

  leftrotate (x, c)
      return (x << c) or (x >> (32-c));
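
Any reimplementation of the pseudocode above can be sanity-checked against Python's built-in MD5 using the test vectors from RFC 1321:

import hashlib

# Test vectors from RFC 1321
assert hashlib.md5(b"").hexdigest() == "d41d8cd98f00b204e9800998ecf8427e"
assert hashlib.md5(b"abc").hexdigest() == "900150983cd24fb0d6963f7d28e17f72"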

written by admin

Jun 17

When you introduce the network client, however, this becomes an issue, and sending users and passwords in the clear is just not acceptable. So this post tells you how to stop that from happening.

Java DB allows you to configure how credentials are sent to the server using four different modes, documented here and here.

  • CLEAR_TEXT_PASSWORD_SECURITY (0x03) – User and password are sent in the clear. This is the default.
  • USER_ONLY_SECURITY (0x04) – Only the user is sent. This doesn’t work if authentication is enabled on the server.
  • STRONG_PASSWORD_SUBSTITUTE_SECURITY (0x08) – “When using this mechanism, a strong password substitute is generated and used to authenticate the user with the network server. The original password is never sent in any form across the network.” (from the docs)
  • ENCRYPTED_USER_AND_PASSWORD_SECURITY (0x09) – The user and password are encrypted using a pluggable encryption mechanism.

Now, why on earth am I including the hex value for the security mechanism here? Because (and I kid you not), when you specify the security mechanism you want to use on your URL, you have to pass the numeric value for the mechanism, not the name.

Needless to say, when I discovered this, I immediately logged a bug. It didn’t help that nowhere do the docs actually tell you you need to do this. I had to figure this out through a series of experiments and failures.

And get this: you have to convert the hex value to a decimal integer; if you pass in 0x08 you get a NumberFormatException. Sweet.

Anyway, I’m here to help. If you want security, you probably want to choose STRONG_PASSWORD_SUBSTITUTE or ENCRYPTED_USER_AND_PASSWORD. I personally don’t know the difference in terms of strength/breakability, but I do know that if you choose ENCRYPTED_USER_AND_PASSWORD, then you also need to install IBM JCE and configure the JRE to use an appropriate security provider. You have to do this for the JRE used by both the client and the server. If you are deploying your client to a wide swath of machines, this particular approach is not feasible, as far as I can tell. I am pinging the Derby user group about this, more info will be posted here if I get it.

So, personally, I’m opting for STRONG_PASSWORD_SUBSTITUTE (or 8 if you’re thinking in integers). Here’s how you do it:

First, if you want to enforce that only this security mechanism be used, you can configure the network server by putting this line in derby.properties:

derby.drda.securityMechanism=STRONG_PASSWORD_SUBSTITUTE_SECURITY

Next, when you connect, include the securityMechanism property in your URL, e.g.:

jdbc:derby://localhost:1527/mydb;create=true;securityMechanism=8

Now your password is no longer flying over the wire in the clear.

First of all, the reason you can’t use Sun’s Java runtime to do password encryption with Java DB is that Java DB network communication uses DRDA, a standard network protocol (used primarily by IBM). DRDA’s encryption protocol uses a 256-bit key, which is considered too short (and thus too weak) by Sun’s encryption engine. IBM JCE does support a 256-bit key, so that’s why you would need to install it. I have asked if there are other encryption engines that support a 256-bit key.

The STRONG_PASSWORD_SUBSTITUTE mechanism I describe above was implemented as an alternative because standard encryption has these issues with support and configuration overhead.

There was also general agreement on the list that what you really want is encryption using industry-standard SSL (or its new incarnation, TLS). Java DB 10.3, which is getting ready to be released, will include support for SSL/TLS. Using this mechanism is recommended unless you are concerned about the performance impact of encrypting all communications and you are fine with just protecting the password.

written by admin

Jun 17

Short for Secure Sockets Layer, a protocol developed by Netscape for transmitting private documents via the Internet. SSL uses a cryptographic system that uses two keys to encrypt data: a public key known to everyone and a private or secret key known only to the recipient of the message. Both Netscape Navigator and Internet Explorer support SSL, and many Web sites use the protocol to obtain confidential user information, such as credit card numbers. By convention, URLs that require an SSL connection start with https:// instead of http://.

Transport Layer Security (TLS) and its predecessor, SSL, are cryptographic protocols which provide secure communications on the Internet for such things as web browsing, e-mail, Internet faxing, instant messaging and other data transfers. There are slight differences between SSL 3.0 and TLS 1.0, but the protocol remains substantially the same. The term “TLS” as used here applies to both protocols unless clarified by context.

The TLS protocol(s) allow applications to communicate across a network in a way designed to prevent eavesdropping, tampering, and message forgery. TLS provides endpoint authentication and communications privacy over the Internet using cryptography. Typically, only the server is authenticated (i.e., its identity is ensured) while the client remains unauthenticated; this means that the end user (whether an individual or an application, such as a Web browser) can be sure with whom they are communicating. The next level of security—in which both ends of the “conversation” are sure with whom they are communicating—is known as mutual authentication. Mutual authentication requires public key infrastructure (PKI) deployment to clients.
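
Python's standard ssl module illustrates this typical one-sided arrangement: the client verifies the server's certificate, while the client itself presents none and stays unauthenticated (the host name is just an example):

import socket
import ssl

context = ssl.create_default_context()  # verifies the server's certificate chain

with socket.create_connection(("example.com", 443)) as sock:
    # The handshake authenticates the server to the client; the client
    # sends no certificate and so remains unauthenticated.
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())        # negotiated protocol, e.g. 'TLSv1.2'
        print(tls.getpeercert()["subject"])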

written by admin

Jun 17

Shannon

The era of modern cryptography really begins with Claude Shannon, arguably the father of mathematical cryptography, with the work he did during WWII on communications security. In 1949 he published the paper Communication Theory of Secrecy Systems in the Bell System Technical Journal, and a little later the book The Mathematical Theory of Communication with Warren Weaver; both included results from his WWII work. These, in addition to his other works on information and communication theory, established a solid theoretical basis for cryptography and for cryptanalysis. And with that, cryptography more or less disappeared into secret government communications organizations such as the NSA, GCHQ, and equivalents elsewhere. Very little work was again made public until the mid ’70s, when everything changed.

An encryption standard

The mid-1970s saw two major public (i.e., non-secret) advances. First was the publication of the draft Data Encryption Standard in the U.S. Federal Register on 17 March 1975. The proposed DES was submitted by IBM, at the invitation of the National Bureau of Standards (now NIST), in an effort to develop secure electronic communication facilities for businesses such as banks and other large financial organizations. After ‘advice’ and modification by the NSA, it was adopted and published as a Federal Information Processing Standard Publication in 1977 (currently at FIPS 46-3). DES was the first publicly accessible cipher to be ‘blessed’ by a national agency such as NSA. The release of its specification by NBS stimulated an explosion of public and academic interest in cryptography.

The aging DES was officially replaced by the Advanced Encryption Standard (AES) in 2001 when NIST announced FIPS 197. After an open competition, NIST selected Rijndael, submitted by two Belgian cryptographers, to be the AES. DES, and more secure variants of it (such as Triple DES; see FIPS 46-3), are still used today, having been incorporated into many national and organizational standards. However, its 56-bit key size has been shown to be insufficient to guard against brute force attacks (one such attack, undertaken by the cyber civil-rights group Electronic Frontier Foundation in 1998, succeeded in 56 hours; the story is told in Cracking DES, published by O’Reilly and Associates). As a result, straight DES encryption is now without doubt insecure for new cryptosystem designs, and messages protected by older cryptosystems using DES, and indeed all messages sent since 1976 using DES, are also at risk. Regardless of its inherent quality, the DES key size (56 bits) was thought to be too small by some even in 1976, perhaps most publicly by Whitfield Diffie. There was suspicion that government organizations even then had sufficient computing power to break DES messages; clearly others have since achieved this capability.

Public key

The second development, in 1976, was perhaps even more important, for it fundamentally changed the way cryptosystems might work. This was the publication of the paper New Directions in Cryptography by Whitfield Diffie and Martin Hellman. It introduced a radically new method of distributing cryptographic keys, which went far toward solving one of the fundamental problems of cryptography, key distribution, and has become known as Diffie-Hellman key exchange. The article also stimulated the almost immediate public development of a new class of enciphering algorithms, the asymmetric key algorithms.

Prior to that time, all useful modern encryption algorithms had been symmetric key algorithms, in which the same cryptographic key is used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. All of the electromechanical machines used in WWII were of this logical class, as were the Caesar and Atbash ciphers and essentially all cipher and code systems throughout history. The ‘key’ for a code is, of course, the codebook, which must likewise be distributed and kept secret.

Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system (the term usually used is ‘via a secure channel’) such as a trustworthy courier with a briefcase handcuffed to a wrist, or face-to-face contact, or a loyal carrier pigeon. This requirement is never trivial and rapidly becomes unmanageable as the number of participants increases, or when secure channels aren’t available for key exchange, or when, as is sensible cryptographic practice, keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users. A system of this kind is known as a secret key, or symmetric key cryptosystem. D-H key exchange (and succeeding improvements and variants) made operation of these systems much easier, and more secure, than had ever been possible before.

In contrast, asymmetric key encryption uses a pair of mathematically related keys, each of which decrypts the encryption performed using the other. Some, but not all, of these algorithms have the additional property that one of the paired keys cannot be deduced from the other by any known method other than trial and error. An algorithm of this kind is known as a public key or asymmetric key system. Using such an algorithm, only one key pair is needed per user. By designating one key of the pair as private (always secret), and the other as public (often visible), no secure channel is needed for key exchange. So long as the private key stays secret, the public key can be widely known for a very long time without compromising security, making it safe to reuse the same key pair indefinitely.

For two users of an asymmetric key algorithm to communicate securely over an insecure channel, each user will need to know their own public and private keys as well as the other user’s public key. Take this basic scenario: Alice and Bob each have a pair of keys they’ve been using for years with many other users. At the start of their message, they exchange public keys, unencrypted over an insecure line. Alice then encrypts a message using her private key, and then re-encrypts that result using Bob’s public key. The double-encrypted message is then sent as digital data over a wire from Alice to Bob. Bob receives the bit stream and decrypts it using his own private key, and then decrypts that bit stream using Alice’s public key. If the final result is recognizable as a message, Bob can be confident that the message actually came from someone who knows Alice’s private key, and that anyone eavesdropping on the channel will need both Alice’s and Bob’s private keys in order to understand the message.
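
A toy walk-through of that exchange, using textbook RSA with tiny primes (the numbers are purely illustrative and far too small to be secure; note that Bob's modulus must exceed Alice's so the signed value survives the second encryption):

# Alice's keypair: n = 11*19 = 209, phi = 180, e = 7, d = 103
alice_n, alice_e, alice_d = 209, 7, 103
# Bob's keypair: n = 61*53 = 3233, phi = 3120, e = 17, d = 2753
bob_n, bob_e, bob_d = 3233, 17, 2753

message = 42                      # must be smaller than alice_n

# Alice: encrypt with her private key, then with Bob's public key.
signed = pow(message, alice_d, alice_n)
ciphertext = pow(signed, bob_e, bob_n)

# Bob: decrypt with his private key, then with Alice's public key.
inner = pow(ciphertext, bob_d, bob_n)
recovered = pow(inner, alice_e, alice_n)

assert recovered == message       # only Alice's private key could produce this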

Asymmetric algorithms rely for their effectiveness on a class of problems in mathematics called one-way functions, which require relatively little computational power to execute, but vast amounts of power to reverse. A classic example of a one-way function is multiplication of large prime numbers: it is fairly quick to multiply two large primes, but very difficult to factor the product of two large primes. Because of the mathematics of one-way functions, most possible keys are bad choices as cryptographic keys; only a small fraction of the possible keys of a given length are suitable, so asymmetric algorithms require very long keys to reach the same level of security provided by relatively shorter symmetric keys. The need both to generate the key pairs and to perform the encryption/decryption operations makes asymmetric algorithms computationally expensive, compared to most symmetric algorithms. Since symmetric algorithms can often use any sequence of (random, or at least unpredictable) bits as a key, a disposable session key can be quickly generated for short-term use. Consequently, it is common practice to use a long asymmetric key to exchange a disposable, much shorter (but just as strong) symmetric key. The slower asymmetric algorithm securely sends a symmetric session key, and the faster symmetric algorithm takes over for the remainder of the message.
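
A sketch of that hybrid pattern, reusing the toy RSA keys above with an XOR keystream standing in for the symmetric cipher (illustrative only, not secure):

import os

bob_n, bob_e, bob_d = 3233, 17, 2753  # toy keys from the example above

def xor_cipher(data, key):
    # Stand-in symmetric cipher: XOR with a repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Sender: generate a disposable session key and wrap it with the slow
# asymmetric step; the bulk data then uses the fast symmetric step.
session_key = os.urandom(1)           # toy-sized so its value fits below bob_n
wrapped = pow(int.from_bytes(session_key, "big"), bob_e, bob_n)
body = xor_cipher(b"the actual message", session_key)

# Receiver: unwrap the session key, then decrypt the bulk data.
key = pow(wrapped, bob_d, bob_n).to_bytes(1, "big")
assert xor_cipher(body, key) == b"the actual message"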

Asymmetric key cryptography, Diffie-Hellman key exchange, and the best known of the public key / private key algorithms (i.e., what is usually called the RSA algorithm), all seem to have been independently developed at a UK intelligence agency before the public announcement by Diffie and Hellman in ‘76. GCHQ has released documents claiming that they had developed public key cryptography before the publication of Diffie and Hellman’s paper. Various classified papers were written at GCHQ during the 1960s and 1970s which eventually led to schemes essentially identical to RSA encryption and to Diffie-Hellman key exchange in 1973 and 1974. Some of these have now been published, and the inventors (James H. Ellis, Clifford Cocks, and Malcolm Williamson) have made public (some of) their work.

Cryptography politics

This in turn broke the near monopoly on high quality cryptography held by government organizations (see S Levy’s Crypto for a journalistic account of some of the policy controversy in the US). For the first time ever, those outside government organizations had access to cryptography not readily breakable by anyone (including governments). Considerable controversy, and conflict, both public and private, began more or less immediately. It has not yet subsided. In many countries, for example, export of cryptography is subject to restrictions. Until 1996 export from the U.S. of cryptography using keys longer than 40 bits was sharply limited. As recently as 2004, former FBI Director Louis Freeh, testifying before the 9/11 Commission, called for new laws against public use of encryption.

One of the most important people favoring strong encryption for public use was Phil Zimmermann. He wrote, and then in 1991 released, PGP (Pretty Good Privacy), a very high quality crypto system. He distributed a freeware version of PGP when he felt threatened by legislation then under consideration by the US Government that would require back doors to be included in all cryptographic solutions developed within the US. His efforts in releasing PGP worldwide earned him a long battle with the Justice Department for the alleged violation of export restrictions. The Justice Department eventually dropped its case against Zimmermann, and the freeware distribution of PGP made its way around the world and eventually became an open standard (RFC 2440, or OpenPGP).

written by admin

Jun 17

By World War II, mechanical and electromechanical cipher machines were in wide use, although — where such machines were impractical — manual systems continued in use. Great advances were made in cipher-breaking, all in secrecy. Information about this period has begun to be declassified as the official British 50-year secrecy period has come to an end, as U.S. archives have slowly opened, and as assorted memoirs and articles have appeared.

The Germans made heavy use, in several variants, of an electromechanical rotor machine known as Enigma. Mathematician Marian Rejewski, at Poland’s Cipher Bureau, in December 1932 reconstructed the German Army Enigma, using mathematics and limited documentation supplied by Captain Gustave Bertrand of French military intelligence. This was the greatest breakthrough in cryptanalysis in a thousand years and more. Rejewski and his mathematical Cipher Bureau colleagues, Jerzy Różycki and Henryk Zygalski, continued reading Enigma and keeping pace with the evolution of the machine’s components and encipherment procedures. As the Poles’ resources became strained by the changes being introduced by the Germans, and as war loomed, the Cipher Bureau, on the Polish General Staff’s instructions, on July 25, 1939, at Warsaw, initiated French and British intelligence representatives into the secrets of Enigma decryption.

Soon after World War II broke out on September 1, 1939, key Cipher Bureau personnel were evacuated southeastward; on September 17, as the Soviet Union entered eastern Poland, they crossed into Romania. From there they reached Paris, France; at PC Bruno, near Paris, they continued breaking Enigma, collaborating with British cryptologists at Bletchley Park as the British got up to speed. In due course, the British cryptologists, whose ranks included many chess masters and mathematics dons such as Gordon Welchman, Max Newman, and Alan Turing (the conceptual founder of modern computing), substantially advanced the scale and technology of Enigma decryption.

At the end of the War, on 19 April 1945 Britain’s top military officers were told that they could never reveal that the German Enigma code had been broken because it would give the defeated enemy the chance to say they “were not well and fairly beaten”. [1]

US Navy cryptographers (with cooperation from British and Dutch cryptographers after 1940) broke into several Japanese Navy crypto systems. The break into one of them, JN-25, famously led to the US victory in the Battle of Midway. A US Army group, the SIS, managed to break the highest security Japanese diplomatic cipher system (an electromechanical ‘stepping switch’ machine called Purple by the Americans) even before WWII began. The Americans referred to the intelligence resulting from cryptanalysis, perhaps especially that from the Purple machine, as ‘Magic’. The British eventually settled on ‘Ultra’ for intelligence resulting from cryptanalysis, particularly that from message traffic enciphered by the various Enigmas. An earlier British term for Ultra had been ‘Boniface’.

The German military also deployed several mechanical attempts at a one-time pad. Bletchley Park called them the Fish ciphers, and Max Newman and colleagues designed and deployed the world’s first programmable digital electronic computer, the Colossus, to help with their cryptanalysis. The German Foreign Office began to use the one-time pad in 1919; some of this traffic was read in WWII partly as the result of recovery of some key material in South America that was insufficiently carefully discarded by a German courier.

The Japanese Foreign Office used a locally developed electrical stepping-switch system (called Purple by the US), and also used several similar machines for attachés in some Japanese embassies. One of these was called the ‘M-machine’ by the US; another was referred to as ‘Red’. All were broken, to one degree or another, by the Allies.

Allied cipher machines used in WWII included the British TypeX and the American SIGABA; both were electromechanical rotor designs similar in spirit to the Enigma, though with major improvements. Neither is known to have been broken by anyone during the war. The Poles used the Lacida machine, but its security was found to be less than intended, and its use was discontinued. US troops in the field used the M-209 and less secure M-94 family. British SOE agents initially used ‘poem ciphers’ (memorized poems were the encryption/decryption keys), but later in the war, they began to switch to one-time pads.

written by admin

Jun 17

Although cryptography has a long and complex history, it wasn’t until the 19th century that it developed anything more than ad hoc approaches to either encryption or cryptanalysis (the science of finding weaknesses in crypto systems). Examples of the latter include Charles Babbage’s Crimean War era work on mathematical cryptanalysis of polyalphabetic ciphers, rediscovered and published somewhat later by the Prussian Friedrich Kasiski. Understanding of cryptography at this time typically consisted of hard-won rules of thumb; see, for example, Auguste Kerckhoffs’ cryptographic writings in the latter 19th century. Edgar Allan Poe used systematic methods to solve ciphers in the 1840s. In particular he placed a notice of his abilities in the Philadelphia paper Alexander’s Weekly (Express) Messenger, inviting submissions of ciphers, of which he proceeded to solve almost all. His success created a public stir for some months. He later wrote an essay on methods of cryptography which proved useful as an introduction for the novice British cryptanalysts of Room 40 attempting to break German codes and ciphers during World War I.

In 1917, Gilbert Vernam proposed a teletype cipher in which a previously prepared key, kept on paper tape, is combined character by character with the plaintext message to produce the ciphertext. This led to the development of the one-time pad and to the use of electromechanical devices as cipher machines.

Mathematical methods proliferated in the period prior to World War II, notably in William F. Friedman’s application of statistical techniques to cryptanalysis and cipher development, and in Marian Rejewski’s initial break into the German Army’s version of the Enigma system in 1932. Both cryptography and cryptanalysis have become far more mathematical since WWII. Even so, it has taken the wide availability of computers, and the Internet as a communications medium, to bring effective cryptography into common use by anyone other than national governments or similarly large enterprises.

written by admin

Jun 17

It was probably religiously motivated textual analysis of the Qur’an which led to the invention of the frequency analysis technique for breaking monoalphabetic substitution ciphers, sometime around 1000 CE (Ibrahim Al-Kadi, 1992). It was the most fundamental cryptanalytic advance until WWII. Essentially all ciphers remained vulnerable to this technique until the invention of the polyalphabetic cipher by Alberti (ca. 1465), and many remained so thereafter.

Although Alberti is usually considered the father of the polyalphabetic cipher, Prof. Al-Kadi’s 1990 paper (ref. 3), reviewing Arabic contributions to cryptography, reported knowledge of polyalphabetic ciphers 500 years before Alberti, based on a recently discovered manuscript.

It appears that Abu Yusuf Yaqub ibn Is-haq ibn as Sabbah ibn ‘omran ibn Ismail Al-Kindi, who wrote a book on cryptography called “Risalah fi Istikhraj al-Mu’amma” (Manuscript for the Deciphering of Cryptographic Messages) circa 750 CE, may have described cryptanalysis techniques (including some for polyalphabetic ciphers), cipher classification, Arabic phonetics and syntax, and, most importantly, the use of several statistical techniques for cryptanalysis. [This book appears to be the first post-classical-era reference by about 300 years.] It also contains probability and statistical work some 800 years before Pascal and Fermat.

Cryptography became (secretly) still more important as a consequence of political competition and religious revolution. For instance, in Europe during and after the Renaissance, citizens of the various Italian states — the Papal States and the Roman Catholic Church included — were responsible for rapid proliferation of cryptographic techniques, few of which reflect understanding (or even knowledge) of Alberti’s polyalphabetic advance. ‘Advanced ciphers’, even after Alberti, weren’t as advanced as their inventors / developers / users claimed (and probably even themselves believed). They were regularly broken. This over-optimism may be inherent in cryptography for it was then, and remains today, fundamentally difficult to really know how vulnerable your system actually is. In the absence of knowledge, guesses and hopes, as may be expected, are common.

Cryptography, cryptanalysis, and secret agent/courier betrayal featured in the Babington plot during the reign of Queen Elizabeth I which led to the execution of Mary, Queen of Scots. An encrypted message from the time of the Man in the Iron Mask (decrypted just prior to 1900 by Étienne Bazeries) has shed some, regrettably non-definitive, light on the identity of that real, if legendary and unfortunate, prisoner. Cryptography, and its misuse, were involved in the plotting which led to the execution of Mata Hari and in the conniving which led to the travesty of Dreyfus’ conviction and imprisonment, both in the early 20th century. Fortunately, cryptographers were also involved in exposing the machinations which had led to Dreyfus’ problems; Mata Hari, in contrast, was shot.

Outside of Europe, after the end of the Muslim Golden Age at the hand of the Mongols, cryptography remained comparatively undeveloped. Cryptography in Japan seems not to have been used until about 1510, and advanced techniques were not known until after the opening of the country to the West beginning in the 1860s.

written by admin

Jun 16

The earliest known use of cryptography is found in non-standard hieroglyphs carved into monuments from Egypt’s Old Kingdom (ca. 4500+ years ago). These are not thought to be serious attempts at secret communications, however, but rather attempts at mystery, intrigue, or even amusement for literate onlookers; they are examples of still other uses of cryptography, or of something that looks (impressively if misleadingly) like it. Some clay tablets from Mesopotamia, somewhat later, are clearly meant to protect information: they encrypt recipes, presumably commercially valuable. Later still, Hebrew scholars made use of simple monoalphabetic substitution ciphers (such as the Atbash cipher), beginning perhaps around 500 to 600 BCE.

Cryptography has a long tradition in religious writing likely to offend the dominant culture or political authorities. Perhaps the most famous example is the ‘Number of the Beast’ from the Book of Revelation in the Christian New Testament. ‘666’ might be a cryptographic (i.e., encrypted) way of concealing a dangerous reference; many scholars believe it is a concealed reference to the Roman Empire, or more likely to the Emperor Nero himself (and so to Roman persecution policies), that would have been understood by the initiated (who ‘had the key to understanding’), and yet be safe, or at least somewhat deniable (and so ‘less’ dangerous), if it came to the attention of the authorities. At least for orthodox Christian writing, most of the need for such concealment ended with Constantine’s conversion and the adoption of Christianity as the official religion of the Empire.

The Greeks of Classical times are said to have known of ciphers (e.g., the scytale transposition cipher claimed to have been used by the Spartan military). Herodotus tells us of secret messages physically concealed beneath wax on wooden tablets or as a tattoo on a slave’s head concealed by regrown hair, though these are not properly examples of cryptography per se as the message, once known, is directly readable; this is known as steganography. The Romans certainly did know something of cryptography (e.g., the Caesar cipher and its variations). There is ancient mention of a book about Roman military cryptography (especially Julius Caesar’s); it has been, unfortunately, lost.

In India, cryptography was also well known. It is recommended in the Kama Sutra as a technique by which lovers can communicate without being discovered.

written by admin