The certificate_authorities extension – Digital Certificates and Certification Authorities

10.5.12 The certificate_authorities extension

Finally, we turn to certification authorities within TLS. TLS itself places hardly any requirements on CAs, except that they should be able to issue X.509 certificates. However, Alice and Bob can use the certificate_authorities extension to indicate which certification authorities they support; the receiving TLS party can then use this information to select appropriate certificates.

Client Bob sends the certificate_authorities extension in his ClientHello message. Server Alice sends the certificate_authorities extension in her CertificateRequest message. certificate_authorities contains the CertificateAuthoritiesExtension data structure, as shown in Listing 10.8.

Listing 10.8: CertificateAuthoritiesExtension data structure

opaque DistinguishedName<1..2^16-1>;

struct {
   DistinguishedName authorities<3..2^16-1>;
} CertificateAuthoritiesExtension;

In CertificateAuthoritiesExtension, the authorities field holds a list of the distinguished names of acceptable CAs, represented in DER-encoded format. The authorities list also contains the name of the trust anchor or the subordinate CA. As a result, server Alice and client Bob can use the certificate_authorities extension to advertise their known trust anchors as well as their preferred authorization space.
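To make the wire format concrete, the following Python sketch (an illustration, not code from the TLS specification; the byte strings stand in for real DER-encoded names) encodes and decodes the extension body as a 2-byte length-prefixed list of 2-byte length-prefixed names:

```python
import struct

def encode_certificate_authorities(der_names: list[bytes]) -> bytes:
    # CertificateAuthoritiesExtension body: 2-byte total length,
    # then each DistinguishedName with its own 2-byte length prefix
    entries = b"".join(struct.pack("!H", len(n)) + n for n in der_names)
    return struct.pack("!H", len(entries)) + entries

def decode_certificate_authorities(body: bytes) -> list[bytes]:
    (total,) = struct.unpack_from("!H", body, 0)
    names, offset, end = [], 2, 2 + total
    while offset < end:
        (length,) = struct.unpack_from("!H", body, offset)
        offset += 2
        names.append(body[offset:offset + length])
        offset += length
    return names

# Placeholder byte strings; real values are DER-encoded distinguished names
cas = [b"der-name-alpha", b"der-name-beta"]
assert decode_certificate_authorities(encode_certificate_authorities(cas)) == cas
```

A real implementation would obtain the DER-encoded names from an X.509 library rather than hand-crafted byte strings.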

10.6 Summary

In this chapter, we have discussed digital certificates as a means to provide authenticity for public keys, and the bodies that issue certificates, Certification Authorities (CAs). In particular, we looked at the minimum set of data that needs to be presented within a certificate, and the optional parts of a certificate.

Regarding CAs, we discussed their tasks and the processes for obtaining and validating certificates. We have also seen how CAs fit into the larger structure needed to manage public keys, the Public-Key Infrastructure (PKI).

After these more general considerations, we looked in detail at how digital certificates are handled within the TLS 1.3 handshake protocol.

The next chapter will be more technical again, as it discusses hash functions and message authentication codes. Apart from digital signatures (which also use hash functions), they are the main cryptographic mechanisms for providing authenticity to handshake messages.

11.1 The need for authenticity and integrity

Imagine Alice being a control computer in a train control system and Bob being an onboard computer installed in a train. To make the scenario more realistic, let's assume the train control system is a positive train control system. This means that the train is only allowed to move if it receives an explicit move message from the train control. Otherwise, the train does not move.

Further, assume that there are two different move messages that onboard computer Bob can receive from control computer Alice:

  • Message m_s instructing the train to move slowly, for example, before entering a train station
  • Message m_f instructing the train to move fast

In addition, to secure the train control against cyberattacks, the communication channel between Alice and Bob is protected using a cryptographic mechanism that provides confidentiality only. That is, Alice and Bob share a secret key k and can compute an encryption function e_k to make their communication unintelligible to the attacker, Mallory. However, they have no means to detect manipulation of the encrypted messages. The setup is illustrated in Figure 11.1.

Now, while Mallory cannot read the clear text communication between Alice and Bob, she can record and manipulate the encrypted messages. So, when Alice sends Bob the message e_k(m_f), Mallory simply changes it to the message e_k(m_s). Upon receiving the manipulated message, Bob decrypts it and obtains m_s. Because m_s is a legitimate message in the train control system, the onboard computer Bob accepts it and makes the train go slower than it is supposed to go, resulting in a delay at the next train station.

In cryptographic terms, the above attack works because Bob cannot verify the integrity of the message he received. After decrypting the message, Bob can determine whether it is in principle a valid train control message, but he has no way to check if it was manipulated en route.

Figure 11.1: Example setting ensuring confidentiality, but not integrity and authenticity

Next, imagine a scenario where the train is halted for safety reasons, say, waiting for another train coming from the opposite direction to pass. Since our example is a positive train control, no messages are sent by the control computer Alice to the onboard computer Bob and, as a result, the train remains halted. What happens if Mallory sends the message e_k(m_s) to Bob?

Upon receiving e_k(m_s), Bob decrypts it and obtains the clear text message m_s telling the train to move slowly. Again, m_s is a valid message in the train control system and so is processed by onboard computer Bob. The train is set in motion and, as a result, causes an accident if there is no human operator in the loop to react in a timely way.

From the cryptographic perspective, the above attack is possible because Bob cannot verify the authenticity of the received message e_k(m_s). While he can check that the plain text message m_s is a valid message, Bob cannot determine whether it was actually sent by Alice or by someone else. In other words, Bob is not able to verify the origin of the message. Moreover, there is no way for Bob to verify the freshness of the message, which opens up further attack possibilities for Mallory (this was already discussed earlier, in Section 2.5, Authentication in Chapter 2, Secure Channel and the CIA Triad).

11.2 What cryptographic guarantees does encryption provide?

On a more fundamental level, the attacks described in the above examples work because Alice and Bob, as illustrated in Figure 11.1, can only use encryption e_k.

Intuitively, it might seem as if encryption protects Alice’s and Bob’s messages against manipulation by Mallory because the ciphertext hides the plaintext message and Mallory cannot know how to manipulate the encrypted message in a meaningful way. But this is completely wrong! Encryption provides no guarantees for message integrity or authenticity.

We can convince ourselves that this is indeed the case by taking a closer look at the one-time pad encryption scheme from Chapter 4, Encryption and Decryption.

Recall that the one-time pad encrypts a message m under the key k as:

c = e_k(m) = m ⊕ k

where ⊕ denotes a bit-wise exclusive OR (XOR) operation. If you take two bits b_0, b_1 and apply the XOR operation to them, b_0 ⊕ b_1 will yield zero whenever both bits have the same value (that is, both are zero or both are one) and one whenever the bits have a different value.

To decrypt a message encrypted under the one-time pad scheme, the receiver computes:

m = d_k(c) = c ⊕ k

If a one-time pad is used, it is very easy for Mallory to manipulate the ciphertext c, because every bit flip in the ciphertext leads to the same bit being flipped in the decrypted message.

In other words, if Mallory has a ciphertext c that encrypts Bob's message m, she can easily generate a manipulated ciphertext c′ that decrypts to m with one or more bits of Mallory's choice flipped.
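The following Python sketch illustrates this malleability, assuming Mallory knows (or correctly guesses) the genuine plaintext and the set of valid messages; the fixed-width 16-byte message formats are made up for this example:

```python
import os

def otp(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each data byte with the corresponding key byte.
    # The same function encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, key))

key = os.urandom(16)                  # known only to Alice and Bob
m_fast = b"move fast       "          # hypothetical 16-byte message format
m_slow = b"move slowly     "

c = otp(m_fast, key)                  # Alice sends e_k(m_f)

# Mallory needs only the XOR difference of the two valid plaintexts,
# never the key: flipping those bits in the ciphertext flips exactly
# the same bits in the decrypted plaintext.
delta = bytes(a ^ b for a, b in zip(m_fast, m_slow))
c_forged = bytes(x ^ d for x, d in zip(c, delta))

assert otp(c_forged, key) == m_slow   # Bob now decrypts "move slowly"
```

Note that perfect secrecy is not violated here: Mallory never learns the key, yet she controls the decrypted message.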

Thus, even a perfectly secret – that is, information-theoretically secure – encryption scheme does not provide message integrity. The same is true for stream ciphers because, as we have seen in Chapter 4, Encryption and Decryption, their encryption operation is identical to encryption using the one-time pad: the plaintext is simply XORed with the key stream.

Intuitively, one might think that block ciphers – a class of much stronger encryption algorithms we will learn about in Chapter 14, Block Ciphers and Their Modes of Operation – are resilient against the above manipulations and would offer some degree of integrity. After all, modern block ciphers are pseudorandom permutations where a one-bit change in the plaintext results in the change of roughly half of the bits in the ciphertext.

As a result, if Mallory changes only a single bit in ciphertext c to obtain a manipulated ciphertext c′, the decryption result p′ = dk(c′) will be totally different from the genuine decryption result p = dk(c). It turns out, however, that even this property does not help with message integrity and authenticity!

As an example, if a block cipher is used in the so-called electronic codebook (ECB) mode (more on this in Chapter 14, Block Ciphers and their Modes of Operation) and Mallory flips a bit in the i-th block of the ciphertext, only the i-th block of the plaintext will change when Bob decrypts the manipulated ciphertext.

Alternatively, Mallory can manipulate the order of blocks in the original ciphertext c to obtain a manipulated ciphertext c′. When Bob decrypts c′, he will obtain a manipulated plaintext p′ where the individual blocks are identical to those of the genuine plaintext p, but their order is manipulated (their order is the same as in c′).
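A toy experiment makes this concrete. The following Python sketch uses a deliberately insecure 8-byte Feistel "cipher" (an assumption of this example, standing in for a real block cipher) in ECB mode and shows that swapping ciphertext blocks swaps the corresponding plaintext blocks:

```python
import hashlib

def _round(half: bytes, key: bytes, i: int) -> bytes:
    # Round function built from SHA-256; 4-byte output
    return hashlib.sha256(key + bytes([i]) + half).digest()[:4]

def encrypt_block(block: bytes, key: bytes) -> bytes:
    # Toy 4-round Feistel cipher on 8-byte blocks (illustration only)
    left, right = block[:4], block[4:]
    for i in range(4):
        left, right = right, bytes(a ^ b for a, b in zip(left, _round(right, key, i)))
    return left + right

def decrypt_block(block: bytes, key: bytes) -> bytes:
    left, right = block[:4], block[4:]
    for i in reversed(range(4)):
        left, right = bytes(a ^ b for a, b in zip(right, _round(left, key, i))), left
    return left + right

def ecb(data: bytes, key: bytes, fn) -> bytes:
    # ECB mode: every 8-byte block is processed independently
    return b"".join(fn(data[i:i + 8], key) for i in range(0, len(data), 8))

key = b"secret-k"
p = b"BLOCK_1_BLOCK_2_"               # two 8-byte plaintext blocks
c = ecb(p, key, encrypt_block)

c_swapped = c[8:] + c[:8]             # Mallory reorders the ciphertext blocks
assert ecb(c_swapped, key, decrypt_block) == b"BLOCK_2_BLOCK_1_"
```

Bob decrypts successfully and obtains syntactically valid plaintext, with no indication that the block order was manipulated.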

If only encryption is used, Bob is not able to detect these manipulations. Moreover, this is independent of the type of symmetric key encryption algorithm used by Bob. We can therefore conclude that encryption alone provides only confidentiality, but not message integrity or authenticity.

11.3 One-way functions

In Chapter 4, Encryption and Decryption, we learned that the notion of computational security is built on the concept of pseudorandomness, the idea that bit strings can look completely random even though they are not. In fact, pseudorandom generators, functions, and permutations form the basis of modern symmetric key cryptography. As being one-way is also one of the defining properties of a cryptographic hash function, we chose to include a more formal discussion of this property in this section, even though it is fundamental for the whole of cryptography.

This is because mathematicians have proved that pseudorandom generators, functions, and permutations can be constructed from one-way functions.

As a result, the existence of one-way functions is equivalent to the existence of any non-trivial symmetric-key cryptography [97]. This means, if we can find functions that we can prove to be one-way, we can use them to construct symmetric-key cryptographic schemes, for example, symmetric-key encryption algorithms or keyed hash functions, that are provably computationally secure.

The good news is that there are a number of functions that mathematicians have studied for decades and that exhibit one-way properties. We will cover some of the most prominent examples of such candidate one-way functions in a minute.

The bad news is that the currently known candidate one-way functions are much less efficient than constructions actually used in practice, say a modern block cipher. Bridging this gap between theory and practice is one of the most important open research problems in modern cryptography, as it would allow us to build provably secure pseudorandom generators, functions, and permutations.

11.3.1 Mathematical properties

A function f : X → Y is called a one-way function if f(x) is easy to compute for all x ∈ X, but for essentially all elements y ∈ Y it is computationally infeasible to find any x ∈ X such that f(x) = y [117]. Such an x is called a preimage of y.

Here, easy to compute simply means that for any given x ∈ X, Alice and Bob can compute f(x) in polynomial time.

The requirement that it is computationally infeasible for Eve to find any x ∈ X such that f(x) = y means that a one-way function must be hard to invert. Modern cryptography is about specifying cryptographic algorithms that are secure against a probabilistic polynomial-time attacker – more precisely, algorithms that such an attacker can break only with negligible probability. Accordingly, a function f is considered hard to invert if no probabilistic polynomial-time algorithm is capable of finding a preimage x ∈ X for a given element y ∈ Y, except with negligible probability.

The definition of what it means for a function to be hard to invert might seem complicated at first. But it actually only excludes two extremes:

  • It is always possible to guess a preimage x. Since we can choose the size of the range of f, that is, the set {y = f(x) ∈ Y | x ∈ X}, when designing a cryptographic algorithm, we can make the likelihood of a successful guess arbitrarily small, but never zero. As a result, we need to account for the fact that Eve could find a preimage x for a randomly chosen y with negligible probability.
  • It is always possible to find a preimage of y by brute-force searching the domain of f, that is, by simply trying all values of x until one produces the correct y. Such a brute-force search requires exponential time. As a result, we exclude this extreme by requiring f to be computationally infeasible to invert only for probabilistic polynomial-time algorithms.

Summing up, a function y = f(x) is said to be one-way, if it is hard to invert for all values of y. If there is a probabilistic polynomial-time algorithm that can invert f for some values of y with a non-negligible probability—even if this probability is very small—then f is not a one-way function.

Because the brute-force search runs in exponential time and always succeeds, the existence of one-way functions is an assumption about computational complexity and computational hardness [97]. In other words, it is an assumption about the existence of mathematical problems that can be solved in principle, but cannot be solved efficiently.

11.3.2 Candidate one-way functions

With the current knowledge in complexity theory, mathematicians do not know how to unconditionally prove the existence of one-way functions. As a result, their existence can only be assumed. There are, however, good reasons to make this assumption: there exists a number of very natural computational problems that were the subject of intense mathematical research for decades, sometimes even centuries, yet no one was able to come up with a polynomial-time algorithm that can solve these problems.

According to the fundamental theorem of arithmetic, every positive integer can be expressed as a product of prime numbers, the numbers being referred to as prime factors of the original number [74]. One of the best known computational problems that is believed to be a one-way function is prime factorization: given a large integer, find its prime factors.
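The asymmetry is easy to demonstrate: multiplying two primes is a single fast operation, while the obvious inversion, trial division, has to search up to √n candidate divisors. A minimal Python sketch:

```python
def prime_factors(n: int) -> list[int]:
    # Factor n by trial division -- the conjectured "hard" direction.
    # For an n with two prime factors of roughly equal size, this
    # searches on the order of sqrt(n) candidates.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# The easy direction: multiplying the factors back is one operation.
p, q = 999983, 1000003          # two known primes around 10^6
n = p * q
assert prime_factors(n) == [p, q]
```

For the modulus sizes used in practice (2,048 bits and more), trial division and every other known classical algorithm become hopeless, which is exactly the one-way conjecture.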

The first table of prime factors of integers was created by the Greek mathematician Eratosthenes more than 2,200 years ago. The last factor table, for numbers up to 10,017,000, was published by the American mathematician Derrick Norman Lehmer in 1909 [178].

Trial division was first described in 1202 by Leonardo of Pisa, also known as Fibonacci, in his manuscript on arithmetic Liber Abaci. French mathematician Pierre de Fermat proposed a method based on a difference of squares, today known as Fermat’s factorization method, in the 17th century. A similar integer factorization method was described by the 14th century Indian mathematician Narayana Pandita. The 18th century Swiss mathematician Leonhard Euler proposed a method for factoring integers by writing them as a sum of two square numbers in two different ways.

Starting from the 1970s, a number of so-called algebraic-group factorization algorithms were introduced that work in an algebraic group. As an example, the British mathematician John Pollard introduced two new factoring algorithms called the p − 1 algorithm and the rho algorithm in 1974 and 1975, respectively. The Canadian mathematician Hugh Williams proposed the Williams' p + 1 algorithm in 1982.

In 1985, the Dutch mathematician Hendrik Lenstra published the elliptic curve method, currently the best known integer factorization method among the algorithms whose complexity depends on the size of the factor rather than the size of the number to be factored [204].

Moreover, a number of so-called general-purpose integer factorization methods whose running time depends on the size of the number to be factored have been proposed over time. Examples of such algorithms are the Quadratic Sieve introduced in 1981 by the American mathematician Carl Pomerance, which was the most effective general-purpose algorithm in the 1980s and early 1990s, and the General Number Field Sieve method, the fastest currently known algorithm for factoring large integers [39].

Yet, except for the quantum computer algorithm proposed in 1994 by the American mathematician Peter Shor, none of the above algorithms can factor integers in polynomial time. As a result, mathematicians assume that prime factorization is indeed a one-way function, at least for classical computers. Shor’s algorithm, on the other hand, requires a sufficiently large quantum computer with advanced error correction capabilities to keep the qubits stable during the entire computation, a technical challenge for which no solutions are known as of today.

Another example of a function believed to be one-way is a family of permutations based on the discrete logarithm problem. Recall that the discrete logarithm problem involves determining the integer exponent x given a group element of the form g^x, where g is a generator of a cyclic group. The problem is believed to be computationally intractable, that is, no algorithms are known that can solve it in polynomial time.

The conjectured one-way family of permutations consists of the following:

  • An algorithm to generate an n-bit prime p and an element g ∈ {2, …, p − 1}
  • An algorithm to generate a random integer x in the range {1, …, p − 1}
  • The function f_{p,g}(x) = g^x mod p

Function f_{p,g}(x) is easy to compute, for example, using the square-and-multiply algorithm. Inverting f_{p,g}(x), on the other hand, is believed to be computationally hard because inverting modular exponentiation is equivalent to solving the discrete logarithm problem, for which no polynomial-time algorithms are known to date, as discussed in Chapter 7, Public-Key Cryptography.
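A toy instance in Python may help; the parameters p = 101 and g = 2 are assumptions chosen small enough for the brute-force inversion to finish, whereas real instances use primes of 2,048 bits and more:

```python
# Toy instance of the conjectured one-way family: p = 101, g = 2
# (2 generates the full group Z_101^*; real instances use n-bit primes)
p, g = 101, 2

def f(x: int) -> int:
    # Easy direction: modular exponentiation via square-and-multiply,
    # which is what Python's three-argument pow() implements
    return pow(g, x, p)

def dlog_brute_force(y: int) -> int:
    # Hard direction: for general groups, nothing fundamentally better
    # than searching the exponent space is known -- exponential in the
    # bit length of p
    for x in range(1, p):
        if f(x) == y:
            return x
    raise ValueError("no preimage found")

x = 37
assert dlog_brute_force(f(x)) == x
```

For a 2,048-bit prime, f is still computed in a few thousand modular multiplications, while the search space for the inversion has roughly 2^2048 elements.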

11.4 Hash functions

A hash function is a function hash that maps an arbitrarily long input string onto an output string of fixed length n. More formally, we have hash : {0,1}* → {0,1}^n.

A simplistic example of a hash function would be a function that always outputs the last n bits of an arbitrary input string m. Or, if n = 1, one could use the bitwise XOR of all input bits as the hash value.

However, these simple hash functions do not possess any of the properties required from a cryptographically secure hash function. We will now first discuss these properties, and afterward look at how secure hash functions are actually constructed.
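For illustration, here are the two simplistic hash functions just mentioned, sketched in Python; the collisions asserted at the end show why they are cryptographically useless:

```python
def last_n_bits_hash(m: bytes, n: int = 8) -> int:
    # "Hash" = the last n bits of the input
    return int.from_bytes(m, "big") & ((1 << n) - 1)

def parity_hash(m: bytes) -> int:
    # The n = 1 case: bitwise XOR of all input bits
    bit = 0
    for byte in m:
        bit ^= bin(byte).count("1") & 1   # parity of this byte
    return bit

# Collisions are trivial: any change outside the last n bits is invisible
assert last_n_bits_hash(b"\x00\x41") == last_n_bits_hash(b"\xff\x41")
# Flipping any two bits preserves the overall parity
assert parity_hash(b"\x03") == parity_hash(b"\x00")
```

Both functions ignore almost all of the structure of their input, so an attacker can construct arbitrarily many colliding messages at will.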

11.4.1 Collision resistance

Cryptographic hash functions are hard to construct because they have to fulfill stringent requirements, which are motivated by their use within Message Authentication Codes (MACs) (see Section 11.5, Message Authentication Codes) and Digital Signatures (see Chapter 9, Digital Signatures).

Recall, for example, that in the RSA cryptosystem, Alice computes a digital signature over some message m as

sig_Alice(m) = hash(m)^d mod n

where d is Alice's private key and n is the public modulus. Alice then sends the pair (m, sig_Alice(m)) to Bob.

If Eve observes this pair, she can compute hash(m) using Alice's public key PK_Alice = (e, n) via

hash(m) = (sig_Alice(m))^e mod n

Eve now knows hash(m) and the corresponding preimage m. If she manages to find another message m′ with the same hash value (a so-called second preimage), m and m′ will have the same signature. Effectively, Eve has signed m′ in Alice's name without knowing her private key. This is the most severe kind of attack on a cryptographic hash function.

Therefore, given m and hash(m), it must be computationally hard for Eve to find a second preimage m′ ≠ m such that hash(m′) = hash(m). This property of a cryptographic hash function is called second preimage resistance or weak collision resistance.

Note that when trying out different input messages for the hash function, collisions must occur at some point, because hash maps arbitrarily long messages onto shorter, fixed-length outputs. In particular, if the given hash value hash(m) is n bits long, a second preimage m′ should be found after O(2^n) trials. Therefore, a second preimage attack is considered successful only if it has a significantly smaller complexity than O(2^n).

A weaker form of attack occurs if Eve manages to find any collision, that is, any two messages m_1, m_2 with

hash(m_1) = hash(m_2)

without reference to some given hash value. If it is computationally hard for Eve to construct any collisions, the hash is called strongly collision resistant.

Again, when trying out many different candidate messages, collisions will naturally occur at some point, this time after about 2^{n/2} trials. This smaller number is a consequence of a phenomenon commonly known as the Birthday Paradox, which we will discuss in detail in Section 19.7, Attacks on hash functions in Chapter 19, Attacks on Cryptography.

Consequently, an attack on strong collision resistance is considered successful only if it has a significantly smaller complexity than O(2^{n/2}). This also shows that in general, that is, assuming there are no cryptographic weaknesses, hash functions with longer hash values can be considered to be more secure than hash functions with shorter hash values.

Note that strong collision resistance of a hash function implies weak collision resistance. Hash functions that are both strongly and weakly collision resistant are called collision resistant hash functions (CRHF).
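The birthday effect is easy to reproduce. The following Python sketch truncates SHA-256 to n = 24 bits (a toy parameter chosen so the search finishes quickly) and finds a collision after roughly 2^12 trials rather than 2^24:

```python
import hashlib

def truncated_hash(m: bytes, n_bits: int = 24) -> int:
    # SHA-256 truncated to its first n bits: a toy hash with a small range
    digest = hashlib.sha256(m).digest()
    return int.from_bytes(digest, "big") >> (256 - n_bits)

def find_collision(n_bits: int = 24):
    # Birthday search: remember every hash value seen so far; a repeat
    # is expected after about 2^(n/2) trials
    seen = {}
    i = 0
    while True:
        m = i.to_bytes(8, "big")
        h = truncated_hash(m, n_bits)
        if h in seen:
            return seen[h], m, i + 1      # colliding pair and trial count
        seen[h] = m
        i += 1

m1, m2, trials = find_collision(24)
assert m1 != m2
assert truncated_hash(m1) == truncated_hash(m2)
# trials is typically on the order of 2**12, far below 2**24
```

The same search against the full 256-bit output would need about 2^128 trials, which is why finding collisions for unbroken modern hash functions is considered infeasible.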

11.4.2 One-way property

In Chapter 5, Entity Authentication, we showed how passwords can be stored in a secure way on a server using hash functions. More specifically, each password is hashed together with some random value (the salt) and the hash value is stored together with the corresponding user ID. This system can only be secure if it is computationally difficult to invert the hash function, that is, to find a matching input for a known output. The same requirement emerges if the hash function is used in a key-dependent way in order to form a MAC (see Section 11.5, Message authentication codes).

In order to put this requirement in a more precise way, we only need to apply our earlier definition of a one-way function from Section 11.3, One-way functions, to hash functions:

A hash function hash is said to be one-way or preimage resistant, if it is computationally infeasible to find an input m for a given output y so that y = hash(m).

As is the case for second preimages, preimages for a given n-bit output will occur automatically after O(2^n) trial inputs. Hash functions that are preimage resistant and second preimage resistant are called one-way hash functions (OWHF).

11.4.3 Merkle-Damgård construction

Our previous discussion of requirements on a secure hash function shows that in order to achieve collision resistance, it is important that all input bits have an influence on the hash value. Otherwise, it would be very easy to construct collisions by varying the input bits that do not influence the outcome.

How can we accommodate this requirement when dealing with inputs m of indeterminate length? We divide m into pieces (or blocks) of a fixed size and then deal with the blocks one after the other. In one construction option, each block is compressed, that is, mapped onto a smaller bit string, which is then processed together with the next block.

The Merkle-Damgård scheme has been the main construction principle for cryptographic hash functions in the past. Most importantly, the MD5 (128-bit hash), SHA-1 (160-bit hash), and SHA-2 (256-bit hash) hash functions are built according to this scheme. Later in this chapter, in Section 11.7, Hash functions in TLS, we will look at the SHA family of hash functions in detail, as these functions play an important role within TLS.

For now, we'll concentrate on the details of the Merkle-Damgård scheme. In order to compute the hash value of an input message m of arbitrary length, we proceed according to the following steps:

  • Separate message m into k blocks m_1, …, m_k of length r, using padding if necessary. In the SHA-1 hash function, input messages are always padded by a 1 followed by the necessary number of 0-bits. The block length of SHA-1 is r = 512.
  • Concatenate the first block m_1 with an initialization vector IV of length n.
  • Apply a compression function comp : {0,1}^(n+r) → {0,1}^n on the result, to get

h_1 = comp(IV ∥ m_1)

  • Process the remaining blocks by computing

h_i = comp(h_{i−1} ∥ m_i), for i = 2, …, k

Note that each h_i has length n.

  • Set

hash(m) = h_k
Note that finding a collision in comp implies a collision in hash. More precisely, if we can find two different bit strings y_1, y_2 of length r so that comp(x ∥ y_1) = comp(x ∥ y_2) for some given n-bit string x, then we can construct two different messages m, m′ with the same hash value.

The article [8] lists a number of generic attacks on hash functions based on the Merkle-Damgård scheme. Although in most cases these attacks are far from being practical, they are still reason for concern about the general security of the scheme.
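The scheme can be sketched in a few lines of Python. The compression function below is built from SHA-256 purely for illustration, and the padding omits SHA-1's length field for brevity:

```python
import hashlib

R = 64          # block length r in bytes (assumption for this sketch)
N = 32          # chaining value / hash length n in bytes
IV = bytes(N)   # fixed all-zero initialization vector

def comp(chaining: bytes, block: bytes) -> bytes:
    # Compression function {0,1}^(n+r) -> {0,1}^n, built here from
    # SHA-256 purely for illustration
    return hashlib.sha256(chaining + block).digest()

def md_hash(m: bytes) -> bytes:
    # Pad with a 1-bit (0x80) followed by zeros to a multiple of R
    m = m + b"\x80"
    m += bytes(-len(m) % R)
    h = IV
    for i in range(0, len(m), R):
        h = comp(h, m[i:i + R])   # h_i = comp(h_{i-1} || m_i)
    return h                       # hash(m) = h_k

assert len(md_hash(b"abc")) == N
assert md_hash(b"abc") != md_hash(b"abd")
```

Real Merkle-Damgård designs additionally encode the message length in the final block (Merkle-Damgård strengthening), which rules out some trivial collisions this sketch would allow.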

11.4.4 Sponge construction

Sponge construction is used in the formulation of the SHA-3 standard hash algorithm Keccak [26]. It works by first absorbing the input message into some state vector S (the sponge). After each block has been absorbed, the state vector is permuted to achieve a good mixing of the input bits. After all input blocks have been processed, the n bits of the hash value are squeezed out of the sponge.

The detailed construction is as follows:

  1. Separate message m into k blocks m_1, …, m_k of length r.
  2. Form the first state vector S_0 = 0^b, that is, a string consisting of b zeros, where b = 25 × 2^ℓ and b > r.
  3. Absorb: For each message block m_i, modify state vector S_{i−1} by the message block and permute the result via some bijective round function f : {0,1}^b → {0,1}^b:

S_i = f(S_{i−1} ⊕ (m_i ∥ 0^{b−r}))

The final result is a b-bit vector S_k, into which the message blocks have been absorbed.

4. Squeeze: We are now squeezing n bits out of the state vector S_k.

If n < r, we simply take the first n bits of S_k:

hash(m) = first n bits of S_k

Otherwise, we form the following string Z of length (12 + 2ℓ + 1) × r by repeatedly applying the round function f on S_k:

Z = first r bits of S_k ∥ first r bits of f(S_k) ∥ first r bits of f(f(S_k)) ∥ …

Afterward, we pick the first n bits again:

hash(m) = first n bits of Z
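The absorb and squeeze phases can be sketched in Python as follows. Note that SHA-256 stands in for the bijective round function f purely to illustrate the data flow (SHA-256 is not a permutation), and the state and rate sizes are toy assumptions:

```python
import hashlib

B = 32   # state size b in bytes (toy value; Keccak uses b = 25 * 2^l bits)
R = 16   # rate r in bytes, with r < b

def f(state: bytes) -> bytes:
    # Stand-in for the bijective round function; illustrative only
    return hashlib.sha256(state).digest()

def sponge_hash(m: bytes, n: int = 16) -> bytes:
    # Pad to a whole number of R-byte blocks
    m = m + b"\x80"
    m += bytes(-len(m) % R)
    state = bytes(B)                              # S_0 = 0^b
    for i in range(0, len(m), R):                 # absorb phase
        block = m[i:i + R] + bytes(B - R)         # m_i || 0^(b-r)
        state = f(bytes(a ^ b for a, b in zip(state, block)))
    out = b""
    while len(out) < n:                           # squeeze phase
        out += state[:R]                          # take first r bytes
        state = f(state)                          # permute again if needed
    return out[:n]

assert len(sponge_hash(b"abc")) == 16
assert sponge_hash(b"abc") != sponge_hash(b"abd")
```

The capacity b − r, the part of the state never directly touched by input or output, is what gives the real sponge construction its security margin.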

We will now see how hash functions are used to form Message Authentication Codes (MACs).

11.5 Message authentication codes

If Alice wants to securely transmit a message m to Bob, she must use a so-called Message Authentication Code (MAC) to protect that message against undetected tampering. More precisely, a MAC prevents Mallory from doing the following:

  • Modifying m without Bob noticing it
  • Presenting Bob a message m′ generated by Mallory, m′≠m, without Bob noticing that m′ was not sent by Alice

Therefore, a MAC helps us to achieve the two security objectives integrity protection and message authentication (see Chapter 2, Secure Channel and the CIA Triad and Chapter 5, Entity Authentication). Note that a MAC cannot prevent the tampering itself, nor can it prevent message replay. The active attacker Mallory can always manipulate the genuine message m, or present Bob with the message m′ and pretend that it was sent by Alice. A MAC only gives Bob the ability to detect that something went wrong during the transmission of the message he received. Bob cannot reconstruct the genuine message m from a MAC. In fact, he cannot even determine whether the wrong MAC results from an attack by Mallory or from an innocuous bit flip caused by a transmission error. Later in this chapter, we will see that this property has fundamental implications on the use of MACs in safety-critical systems.

If Alice and Bob want to secure their messages with MACs, they need to share a secret k in advance. Once the shared secret is established, Alice and Bob can use MACs as illustrated in Figure 11.2. The sender Alice computes the MAC t as a function of her message m and the secret key k she shares with Bob. She then appends t to message m—denoted by m∥t—and sends the result to Bob. Upon receiving the data, Bob uses the message m, the MAC t, and the shared secret k to verify that t is a valid MAC on message m.
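With Python's standard hmac module, this working principle looks as follows (the key and message are, of course, placeholders):

```python
import hmac
import hashlib

key = b"shared-secret-k"     # secret k, established in advance
m = b"move slowly"

# Alice: compute t = MAC(k, m) and transmit m || t
t = hmac.new(key, m, hashlib.sha256).digest()

# Bob: recompute the MAC over the received message and compare in
# constant time; any change to m or t makes verification fail
def verify(key: bytes, m: bytes, t: bytes) -> bool:
    expected = hmac.new(key, m, hashlib.sha256).digest()
    return hmac.compare_digest(expected, t)

assert verify(key, m, t)
assert not verify(key, b"move fast", t)     # manipulated message is detected
```

The constant-time comparison via hmac.compare_digest matters in practice: a naive byte-by-byte comparison can leak, through timing, how many leading bytes of a forged tag were correct.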

Figure 11.2: Working principle of MACs

So how are MACs actually computed?

11.5.1 How to compute a MAC

Basically, there are two options to form a MAC. The first option closely follows the approach we adopted to compute digital signatures in Chapter 9, Digital Signatures. Back then, we hashed the message m first and encrypted the hash value with the signer's private key:

sig(m) = e_PrivateKey(hash(m))

Analogously, using their shared secret k, Alice and Bob could compute

t = e_k(hash(m))

as MAC. Here, encryption is done via some symmetric encryption function, for example, a block cipher (see Chapter 14, Block Ciphers and Their Modes of Operation). Note that if Alice sends m ∥ t to Bob and Eve manages to find another message m′ so that hash(m′) = hash(m), then Eve can replace m with m′ without being noticed. This motivates the collision resistance requirement on hash functions described in Section 11.4, Hash functions.

However, even if we are using a collision-resistant hash function, in a symmetric setting where Alice and Bob both use the same key k, one might ask whether it is really necessary to agree on and deploy two different kinds of algorithms for computing a MAC. Moreover, hash functions are built for speed and generally have a much better performance than block ciphers.

The second option for computing a MAC therefore only uses hash functions as building blocks. Here, the secret k is used to modify the message m in a certain way and the hash function is applied to the result:

t = hash(m, k)

This option is called a key-dependent hash value. In which way k should influence the message m depends on how the hash function is constructed. In any case, if Eve is able to reconstruct the input data from the output value hash(m, k), she might be able to recover part of or even the complete secret key k. This motivates the one-way property requirement on hash functions described in Section 11.4, Hash functions. A well-proven way to construct a key-dependent hash, called HMAC, is defined in [103].

11.5.2 HMAC construction

The HMAC construction is a generic template for constructing a MAC via a key-dependent hash function. In this construction, the underlying hash function hash is treated as a black box that can be easily replaced by some other hash function if necessary. This construction also makes it easy to use existing implementations of hash functions. It is used within TLS as part of the key derivation function HKDF (see Section 12.3, Key derivation functions in TLS within Chapter 12, Key Exchange).

When looking at the way hash functions are built, using either the Merkle-Damgard or the Sponge Construction, it quickly becomes clear that input bits from the first message blocks are well diffused over the final output hash value. Input bits in the last message blocks, on the other hand, are only processed at the very end and the compression or the round function, respectively, is only applied a few times on these bits. It is therefore a good idea to always append the message to the key in key-dependent hash functions. The simple construction

however, suffers from so-called Hash Length Extension Attacks if the hash function is constructed according to the Merkle-Damgård scheme. Here, an attacker knowing a valid pair (m, MACk(m)) can append another message mA to the original message m and compute the corresponding MAC without knowing the secret key k. This is because

MACk(m||mA) = hash(k||m||mA) = comp(hash(k||m), mA) = comp(MACk(m), mA)

where comp is the compression function used for building the hash function (for simplicity, the padding of the final message block is ignored here).
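The attack can be demonstrated with a toy Merkle-Damgård hash. The compression function, block size, key, and messages below are all made up for illustration, and final-block padding is omitted; the principle carries over to real Merkle-Damgård hashes such as SHA-256 once the padding is taken into account.

```python
import struct

BLOCK = 4  # toy block size in bytes

def comp(state, block):
    # Toy compression function (illustrative only, not secure):
    # mixes the 32-bit chaining state with one 4-byte message block.
    (w,) = struct.unpack(">I", block)
    state = ((state ^ w) * 0x9E3779B1) & 0xFFFFFFFF
    return ((state << 25) | (state >> 7)) & 0xFFFFFFFF

def md_hash(data, iv=0x12345678):
    # Minimal Merkle-Damgard iteration; padding is omitted,
    # so the input length must be a multiple of BLOCK bytes.
    state = iv
    for i in range(0, len(data), BLOCK):
        state = comp(state, data[i:i + BLOCK])
    return state

key = b"k3y0"      # secret key (one block), known only to Alice and Bob
m   = b"pay_load"  # original message (two blocks)
m_a = b"evil"      # attacker's extension (one block)

mac = md_hash(key + m)   # MACk(m) = hash(k||m)

# The attacker forges the MAC of m||mA from (m, MACk(m)) alone - no key needed:
forged = comp(mac, m_a)
assert forged == md_hash(key + m + m_a)
```

The final assertion holds because iterating the compression function over k||m||mA passes through exactly the same intermediate state as MACk(m).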

In the HMAC construction, the input message m is therefore appended twice to the keying material, but the second time in a hashed form that cannot be forged by an attacker. More specifically, for an input message m and a symmetric key k, we have

HMACk(m) = hash((k′ ⊕ opad) || hash((k′ ⊕ ipad) || m))

where:

  • hash : {0,1}∗ → {0,1}n is some collision-resistant OWHF, which processes its input in blocks of size r.
  • k is the symmetric key. It is recommended that the key size should be ≥ n. If k has more than r bits, one should use hash(k) instead of k.
  • k′ is the key padded with zeros so that the result has r bits.
  • opad and ipad are fixed bitstrings of length r: opad = 01011100 repeated r∕8 times, and ipad = 00110110 repeated r∕8 times. Both opad and ipad, when added via ⊕, flip half of the key bits.

In this construction, the hash length extension attack will not work, because in order to forge MACk(m||mA), an attacker would need to construct hash(k′⊕ ipad||m||mA). This is impossible, however, as the attacker does not know hash(k′⊕ ipad||m).
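As a sanity check, the construction above can be implemented directly in Python and compared against the standard library's hmac module. SHA-256 is used as the underlying hash here, so r = 64 bytes; the function name is an assumption made for this sketch.

```python
import hashlib
import hmac

def hmac_sha256(key, msg):
    r = 64  # block size of SHA-256 in bytes
    if len(key) > r:
        key = hashlib.sha256(key).digest()   # use hash(k) if k is longer than r bits
    k_prime = key.ljust(r, b"\x00")          # k' : key padded with zeros to r bytes
    ipad = bytes(b ^ 0x36 for b in k_prime)  # 0x36 = 00110110
    opad = bytes(b ^ 0x5C for b in k_prime)  # 0x5c = 01011100
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

key, msg = b"secret key", b"a message"
assert hmac_sha256(key, msg) == hmac.new(key, msg, hashlib.sha256).digest()
```

Note how the inner hash over (k′ ⊕ ipad) || m is fed into a second, outer hash invocation, which is exactly what defeats the length extension attack.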

More generally, the HMAC construction does not rely on the collision-resistance of the underlying hash function, because a collision in the hash function does not imply the construction of a colliding HMAC.

MAC versus CRC

So, to encode a two-byte message 0x0102, Bob would interpret it as the polynomial m(x) = x^8 + x, divide it by x^2 + x + 1 using polynomial division, and get the remainder polynomial r(x) = 1. In hexadecimal notation, the remainder has the value 0x01. He would then append the remainder value as the CRC check value and transmit the message 0x010201 to Alice.

Upon receiving the message, Alice would perform the same computation and check whether the received CRC value 0x01 is equal to the computed CRC value. Let’s assume there was an error during transmission – an accidental bit flip – so that Alice received the message 0x010101. In that case, the CRC value computed by Alice would be 0x02 and Alice would detect the transmission error.
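Bob's and Alice's computations can be reproduced with a few lines of Python performing the polynomial division over GF(2). The function name and the representation of polynomials as integer bit vectors are just one possible way to write this.

```python
def poly_mod(m, g):
    # Remainder of m(x) divided by g(x) over GF(2);
    # integers are interpreted as coefficient bit vectors.
    deg_g = g.bit_length() - 1
    while m.bit_length() - 1 >= deg_g:
        # XOR-subtract g(x), shifted to align with m's leading term
        m ^= g << (m.bit_length() - 1 - deg_g)
    return m

print(hex(poly_mod(0x0102, 0b111)))  # Bob's CRC: 0x01
print(hex(poly_mod(0x0101, 0b111)))  # Alice's CRC for the corrupted message: 0x02
```

Here 0b111 is the generator x^2 + x + 1 from the example; the loop terminates once the remainder's degree is smaller than that of the generator.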

At first glance, this looks very similar to a MAC and, especially in systems that already support CRCs, it might be tempting to use CRC as a replacement for a MAC. Don’t! Recall that MACs are built on top of cryptographic hash functions, and cryptographic hash functions are collision-resistant. CRCs, on the other hand, are not collision resistant.

As an example, Listing 11.1 shows the Python code for computing CRC-8. This CRC uses the generator polynomial x^8 + x^2 + x + 1 and outputs an 8-bit CRC value. The poly argument 0x07 encodes only the low-order terms x^2 + x + 1; the leading x^8 term is added inside the function.

Listing 11.1: Python code for computing CRC-8 using generator polynomial x^8 + x^2 + x + 1

def crc8(data, n, poly, crc=0):
    g = (1 << n) | poly  # generator polynomial with the leading x^n term made explicit
    for d in data:
        crc ^= d << (n - 8)  # align the next input byte with the top of the CRC register
        for _ in range(8):   # shift bit by bit, reducing modulo g whenever bit n is set
            crc <<= 1
            if crc & (1 << n):
                crc ^= g
    return crc

Now, if you compute CRC-8 checksum values for different 2-byte messages using the code shown in Listing 11.2, you can quickly verify for yourself that the messages 0x020B, 0x030C, 0x0419, and many others all have the same CRC value of 0x1B.

Listing 11.2: Python code to compute CRC-8 for different 2-byte messages

for i in range(0, 256):
    for j in range(0, 256):
        if crc8([i, j], 8, 0x07) == 0x1b:
            print(f"Message {hex(i)}, {hex(j)} has CRC 0x1b")

Consequently, if Alice and Bob were to use CRCs to protect their message integrity against a malicious attacker Mallory rather than against accidental transmission errors, it would be very easy for Mallory to find messages that have an identical CRC check value. That, in turn, would allow Mallory to replace a message that Bob sent to Alice without her noticing it (and vice versa). And that is exactly the reason why a MAC needs to be collision-resistant. Moreover, and maybe even more importantly, even if Mallory cannot be bothered to find collisions for the CRC value already in place, he can simply compute the matching CRC value for a message of his choice and replace both the message and the CRC. This is possible because no secret information goes into the CRC. To summarize, a CRC will only protect you against accidental, random transmission errors, but not against an intelligent attacker.