Today we covered public-key encryption schemes secure against chosen-ciphertext attacks as well as digital signature schemes. This concludes our discussion of the low-level details of cryptography, though we will continue to see applications of cryptography throughout the rest of the course.

At the end of today’s lecture I mentioned some pitfalls that can arise when using cryptography. In the next one or two lectures we will see case studies of things that can go wrong. Please read the essays/papers assigned for today’s lecture and next Monday’s lecture in time for Monday’s class. I will also post papers for next Wednesday’s lecture (one is already there). These papers are required reading for the midterm exam.

Questions/comments?


This entry was posted on September 30, 2009 at 9:25 pm and is filed under lectures.

October 5, 2009 at 9:15 am |

I have read the three essays by Schneier and the implementation-pitfalls paper by Kohno (assigned for lecture 9). Schneier gives a general overview of the common problems in designing, using, and applying cryptography. Kohno’s paper gives a more detailed overview focused on the common mistakes made when implementing cryptography, and he gives concrete examples for each of the problems he describes.

One important issue in Kohno’s paper is how random number generation is realized in software. For those of you who did not read the paper, here is a summary:

In the C programming language, the rand() function is used to generate pseudorandom numbers. The user supplies a 32-bit “seed” to initialize the generator. The problem with this method is that the seed is only 32 bits long, so it is entirely feasible to perform an exhaustive search over all 2^32 seeds and predict the random output (note that this output may be used as a key or IV in a cipher mode).

The second example is the random number generation in Netscape. In Netscape, random numbers are generated using SHA-1: the seed is a 160-bit value (the input to SHA-1), and the output is a 160-bit random-looking value (the SHA-1 output). Even though an exhaustive search over this seed space is infeasible, the weakness lies in how the seed is generated: the designers of Netscape used the process ID together with the current time in seconds and milliseconds as the seed. An adversary observing the system could therefore predict the seed, and with it the random output.

Question:

—————

It seems that the Netscape approach to random generation is better than using the built-in generator rand() (at least intuitively). However, the Netscape approach is not secure because of how the seed is generated. This looks like a chicken-and-egg problem: to generate a good random number, a good random seed must be selected. But since we cannot choose a random seed without already having a random generator, the seeds end up somewhat predictable, and therefore so do the random outputs, from the adversary’s point of view.

Since almost every implementation of cryptography requires random numbers, I think this is a very important question: how should we implement random number generation in software? (I think generators that use, e.g., radio noise, processor load, or keyboard strokes as their source of randomness may be secure, but such sources are not always available on the implementation platform.)

Any comment on this?

(It might even be worth looking at how random numbers are generated when you attack the implementations in the second part of homework 2. ;))

October 5, 2009 at 12:58 pm |

There are two issues here: (1) the security of the PRNG itself, assuming a good seed is available; and (2) how to find a good seed in the first place. The problem with rand() is the first (the seed space is too small, even if the seed is perfectly random); the problem with Netscape was the second.

Your question is about the second. Indeed, generating a good random seed is difficult. I suggest you look here, and in particular at the references at the bottom of page 3 of that paper.