Lecture 10

In today’s lecture we continued to cover “crypto pitfalls”, and also began discussing some case studies from the paper “Why Cryptosystems Fail” (linked from the course website).

Several students raised some interesting points in class, and I hope those students will provide pointers to the examples they mentioned.

16 Responses to “Lecture 10”

  1. Josh Wright Says:

    Here’s a link to an article (from 2007) discussing Bank of America’s SiteKey system, which is what several people brought up in class. According to the article, SiteKey was defeated at some point. The article doesn’t mention what kind of updates Bank of America has put in place (if any), but it’s still an interesting read, with video of the attack in action!

    http://news.cnet.com/8301-10784_3-9776757-7.html

  2. Anonymous Says:

    Thanks Josh for the link. Very interesting article. I have some comments on it.

    1-) The following paragraph is copied from the article:
    “Some banks choose to issue their customers a cryptographic hardware token (a keychain with a digital display that spits out a new random number every 60 seconds). Others, especially those banks with less profitable customers, have opted to instead adopt software solutions. The advantage of this, of course, being that they don’t have to spend any money to send widgets out to their customers.”

    I think using a hardware token to generate random numbers is not the only cost-effective option. A bank could instead generate a list of “good” random numbers and send it to the customer, e.g., by post. To log in, the customer then inputs the number from the list whose index the bank selects at random. As long as the letter is not compromised on the way to the customer, this is, I think, as good as using a token to generate random numbers; maybe a hardware token is just more convenient.
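
    A rough Python sketch of how the bank-side check for such a mailed list might work (the function names, code length, and list size are my own illustrative assumptions, not how any real bank does it):

      import secrets

      # Bank issues one list of random one-time codes per customer.
      def make_code_list(n=50):
          return [secrets.token_hex(4) for _ in range(n)]   # e.g. 8 hex characters each

      # For a login, the bank challenges with an index that has not been used yet...
      def pick_challenge(unused_indices):
          return secrets.choice(sorted(unused_indices))

      # ...and checks the code the customer typed in, burning it on success.
      def verify(code_list, unused_indices, index, submitted_code):
          if index not in unused_indices:
              return False                          # that code was already used
          if code_list[index] != submitted_code:    # a constant-time compare in practice
              return False
          unused_indices.discard(index)             # each code works only once
          return True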

    2-) Regarding phishing attacks combined with a man-in-the-middle attack, I think the main problem is that the authentication is usually one-way: the bank is identified using a certificate, but the user is authenticated only by a password plus some additional input (such as the random numbers generated by tokens), without any certificate.
    So what would happen if users were also authenticated by certificates, so that the bank could be sure it is communicating with the intended user and the communication is encrypted end-to-end? The second phase of authentication, by password, would start only if the user’s certificate is valid.
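
    As a very rough sketch of this two-way idea (using Python’s ssl module, with placeholder file names; this is not how any particular bank implements it), the bank-side socket could refuse to even start the password phase unless the client presents a valid certificate:

      import socket
      import ssl

      # Bank-side TLS context: present the bank's certificate AND demand one
      # from the client, signed by a CA the bank trusts for its customers.
      ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
      ctx.load_cert_chain("bank_cert.pem", "bank_key.pem")   # bank's own identity
      ctx.verify_mode = ssl.CERT_REQUIRED                     # client cert is mandatory
      ctx.load_verify_locations("customer_ca.pem")            # CA that signs user certs

      listener = socket.create_server(("0.0.0.0", 8443))
      with ctx.wrap_socket(listener, server_side=True) as srv:
          conn, addr = srv.accept()   # handshake fails without a valid client cert;
          # only after it succeeds would the password/OTP phase even begin.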

    Am I missing something here?

    • Josh Wright Says:

      Hi Anonymous,

      The setup you propose is similar to hardware tokens, but it also has a problem.

      The bank has to choose between generating a separate list of random numbers for each customer or a single master list for all customers. The master list isn’t a good idea: one bad customer could leak it so that everyone knows it, and then the bank would have to make a new one and send it to all customers. So individual lists are the only feasible way to do it. But this requires every user to carry around (and keep safe) the letter containing the numbers. That’s probably not much more of an inconvenience than a hardware keychain, but the hardware keychain never needs to be refreshed with new random numbers; it generates them on its own (or maybe receives them through a broadcast).

      Also, when the bank issues a new list to a customer, there is a delay between the time the list is generated and the time the customer receives it. During that window the old list must remain valid, otherwise the customer couldn’t log in to their account during the switch-over. So an attacker who compromises the old list can still use it while the new one is in transit.

      The second scheme you propose sounds like SSL/TLS, which supports both server and client authentication using certificates. There is at least one way to partially attack it, but that stems from implementation problems, not from a problem with SSL itself. Your scheme would prevent the attacker from impersonating the user to the bank, so long as the user’s certificate is kept safe. But there are still ways to impersonate the bank and obtain the user’s password. For example, at DefCon 17 there was a presentation about null-prefix SSL attacks: attackers can obtain certificates that appear to be valid (e.g., for PayPal) but are not. The details can be found at

      Click to access null-prefix-attacks.pdf

      This attack currently affects only the Microsoft CryptoAPI; most other major libraries have been patched. In fact, just yesterday a null-prefix certificate for http://www.paypal.com was posted to a full-disclosure list. Again, this attack wouldn’t divulge the user’s certificate, but it definitely weakens the system (if you’re using the MS CryptoAPI).
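
      To make the null-prefix point concrete, here is a toy illustration (plain Python, not real CryptoAPI code) of why a C-style string comparison is fooled: the certificate’s common name really is the whole string, but code that stops reading at the first NUL byte only “sees” the prefix.

        # The attacker obtains a certificate whose common name contains a NUL byte.
        cn_in_certificate = "www.paypal.com\x00.attacker.example"

        def c_style_view(name):
            # A C string ends at the first NUL, so strcmp-like code sees only this part.
            return name.split("\x00", 1)[0]

        print(c_style_view(cn_in_certificate) == "www.paypal.com")  # True: looks valid
        print(cn_in_certificate == "www.paypal.com")                # False: it is not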

  3. Anonymous Says:

    Hi Josh,

    “Anonymous” sounds not so good; from now on, I am osmugus 🙂

    For 1)
    ——–
    Yes, as you noted, I meant individual lists for the users. As for the hardware token, I am not an expert, but I think what it generates is the previous element of a hash chain that is initialized for each individual user, so every time the token is pressed it reveals one more element of the user’s hash chain (see the sketch below).
    If the token generated its values truly at random, there would be a synchronization problem with the bank. Or do you know more about this?
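
    A minimal sketch of the hash-chain idea (an S/Key-style scheme in Python; I am only guessing that this is roughly how such a token could work, not that SecurID actually does this):

      import hashlib

      def H(x):
          return hashlib.sha256(x).digest()

      # Initialize one chain per user; the bank stores only the last element.
      def make_chain(seed, n=50):
          chain = [seed]
          for _ in range(n):
              chain.append(H(chain[-1]))
          return chain   # the token keeps the whole chain, the bank keeps chain[n]

      # The token reveals chain[n-1], then chain[n-2], ...; the bank checks that
      # hashing the submitted element gives the last value it accepted.
      def verify(submitted, last_accepted):
          return H(submitted) == last_accepted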

    Regarding possible delays in the case of a list of precomputed random numbers: the bank can easily keep a counter of the numbers already used by a user and send a new list in advance.
    I think compromising the list is almost the same as compromising the hardware token; the user has to keep it secret!

    For 2)
    ——–
    First of all, thank you for the paper link. As you said, the problem is not the technology itself but its realization in the real world. This shows again that the essays and papers Prof. Katz has asked us to read are not stories for the security-paranoid but reality.

    I think the problem with using certificates for client authentication is that it is not very practical. Imagine that you have a single computer at home and both you and your girlfriend want to log in to the bank. Configuring the browser would probably not be so easy.

  4. osmugus Says:

    “I think the problem with using certificates for client authentication is that it is not very practical. Imagine that you have a single computer at home and both you and your girlfriend want to log in to the bank. Configuring the browser would probably not be so easy.”

    Or you want to log in to the bank on your friend’s computer…

  5. jonkatz Says:

    Some comments on the above:

    1) I have seen banks in Europe use a version of the S/Key system (which we will cover in a later class) that requires users to carry around a small sheet of paper with, say, 50 random numbers on it. But it can only be used for 50 logins, at which point it must be refreshed. You also have to remember to carry the piece of paper in your wallet in order to log in.

    I don’t think there is any way to improve on this without using a hardware token.

    2) I assume the hardware token you are talking about is something like the RSA SecurID.

    3) It is not currently practical, in general, for human users to obtain and use certificates. I guess in principle it could change, but there would be several difficulties to overcome. There are other problems with the way PKI is currently implemented (even for servers), anyway.

  6. osmugus Says:

    “2) I assume the hardware token you are talking about is something like the RSA SecurID.”

    I read the wiki article.
    It seems that the hardware token is a one-way keyed function f_k that generates a one-time user password (OTP) from a secret seed and a time stamp derived from the token’s clock.
    Namely,
    OTP = f_k(secret_seed, time_stamp).

    Observation:
    —————
    In the wiki, it is stated that
    “While the RSA SecurID system adds a strong layer of security to a network, difficulty can occur if the authentication server’s clock becomes out of sync with the clock built in to the authentication tokens. However, typically the RSA Authentication Manager automatically corrects for this without affecting the user.”

    Probably, what the automatic correction means is that if OTP != f_k(secret_seed, time_stamp),

    the server computes

    OTP_i = f_k(secret_seed, time_stamp_i) for a few small values of i, where time_stamp_i ranges over time_stamp +- delta,
    and the OTP is accepted if OTP = OTP_i for some i.
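
    Here is a small Python sketch of that guessed correction logic (f_k is modelled with HMAC-SHA1 truncated to 6 digits purely for illustration; the real SecurID function is proprietary):

      import hashlib
      import hmac
      import time

      STEP = 60   # seconds per displayed code

      def f_k(secret_seed, time_step):
          mac = hmac.new(secret_seed, time_step.to_bytes(8, "big"), hashlib.sha1)
          return int.from_bytes(mac.digest()[:4], "big") % 10**6

      # The server recomputes the OTP for a few neighbouring time steps and
      # accepts if any of them matches what the user typed in.
      def server_accepts(secret_seed, submitted_otp, delta=1):
          now = int(time.time()) // STEP
          return any(f_k(secret_seed, now + i) == submitted_otp
                     for i in range(-delta, delta + 1))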

    Question:
    —————
    Since bit-accurate time synchronization is not very realistic, the automatic correction is probably applied at every login.
    Could this be a security weakness in the system? Or should one not be so paranoid 🙂

    • Josh Wright Says:

      I think in theory it is a security weakness, since at any one moment in time there are multiple valid OTPs, but in practice I would imagine this isn’t too big of an issue. When a user authenticates, the server would compute not just OTP_i but maybe also OTP_{i-1} and OTP_{i+1}, like you said. I would guess it only computes +/- 1 on either side, because the displayed OTP is usually good for 30 to 60 seconds; that’s plenty of room to account for clock skew.

      The pictures on Wikipedia show a 6-digit number, so there are 10^6 possible OTPs. The difference between 1/10^6 and 3/10^6 isn’t a huge advantage in my mind, and many SecurID authentication servers take proactive steps such as allowing only 1 invalid OTP per minute and requiring a password/PIN in addition to the OTP. So a brute-force attack on the OTP space takes a little less than 2 years to complete (assuming 1 attempt/min). I’m not sure if the birthday principle could apply to this situation or not, but if it does then you might only require 1 year. But even if you got lucky and guessed a correct OTP, you’d still have to know the user name and password or PIN of the user as well.
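
      For what it’s worth, here is a back-of-the-envelope check of those numbers in Python, under the same assumptions (6-digit OTP, one guess per minute, one vs. three codes valid at any moment):

        space = 10**6                          # possible 6-digit OTPs
        minutes_per_year = 60 * 24 * 365

        years_one_valid = space / 1 / minutes_per_year     # ~1.9 years to sweep the whole space
        years_three_valid = space / 3 / minutes_per_year   # ~0.63 years if 3 codes are valid at once
        print(round(years_one_valid, 2), round(years_three_valid, 2))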

  7. osmugus Says:

    I found an interesting article about “Man-in-the-Browser” attacks.
    This attack basically circumvents all conventional authentication mechanisms such as SSL, PKI, two-factor authentication, etc.

    Details may be found at:

    Click to access SecureClient.pdf

  8. osmugus Says:

    Hi Josh,

    Yes, you are right. I think the one-try-per-minute restriction is a very important point.
    By the way, for i = 3 it would take about 2/3 of a year, since each try has 3 chances of being accepted in our setting. But that is still not a big advantage.

  9. osmugus Says:

    “I’m not sure if the birthday principle could apply to this situation or not, but if it does then you might only require 1 year.”

    Josh, I think the birthday attack applies if f is a compressing function.

  10. jonkatz Says:

    I lost track of exactly what you are discussing, but I don’t think a birthday attack applies here. It does not help the attacker to find a collision in the OTP; instead, the adversary is trying to guess the next value of the OTP.

    • Josh Wright Says:

      But wouldn’t a collision in the OTP yield a valid OTP? Several of the articles linked from Wikipedia describe the output of f_k as a hash function consisting of an expansion round, 4 rounds of a block-cipher-like function that modifies the key every round, a final permutation, and then a conversion to decimal. Couldn’t you view the SecurID server computing the next valid value(s) as your randomly chosen x_1 and f_k(x_1), and yourself as picking a random x_2 and computing f_k(x_2)? If the server accepts the login, you found a collision; if not, pick another random x_2 and the server will pick another random x_1.

      • osmugus Says:

        Josh,
        I guess you are talking about this paper:

        Click to access 162.pdf

        If f is a non-bijective function, as stated in the paper, I agree with you.
        Assume the server is expecting
        OTP_s = f(x_s)
        at a given time.
        Then you theoretically have a chance to guess either x_s or a collision x’ with OTP_s = f(x’).

        But, as you noted above, this is a very small probability.

      • osmugus Says:

        And one more comment (sorry, I forgot to put it in my post above).

        I think what Prof. Katz means is that the user sends an OTP to the server, not the input to f. Therefore, you either send a correct OTP or you don’t.

        And I guess what you mean is that, although the user sends only the OTP, the possibility of a collision doubles the chance of being accepted. I actually agree with you there.

        P.S.:
        ——
        I hope I am not spamming by sending so many posts.
        Prof. Katz, would it be possible to give us the right to modify our own posts? That would reduce the number of posts.

  11. jonkatz Says:

    Let OTP_i be the value displayed at time i.

    If OTP_{i+1} = F(k, OTP_i) for some arbitrary function F and secret key k, then a collision in the OTPs will cause the subsequent values to cycle. So, for example, if the adversary knows OTP_i and OTP_{i+1}, and it then turns out that OTP_j = OTP_i, then the adversary can predict OTP_{j+1} = OTP_{i+1}.

    However, that is not how SecurID computes the OTP. Instead, it computes the value as OTP_i = F(k, i), where F is a block cipher. So even if OTP_i = OTP_j, this will not allow the adversary to predict OTP_{j+1}.
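
    A toy Python comparison of the two constructions (SHA-256 truncated to 2 bytes stands in for the block cipher F, purely so that a cycle shows up quickly; this is not the real SecurID function):

      import hashlib

      k = b"secret key"

      def F(key, x):
          return hashlib.sha256(key + x).digest()[:2]   # tiny 2-byte range, for illustration

      # Chained construction: OTP_{i+1} = F(k, OTP_i). Once any output repeats,
      # the whole sequence cycles, so old values let you predict future ones.
      otp, seen = b"\x00\x00", {}
      for i in range(70000):               # a repeat must occur within 2^16 + 1 steps
          if otp in seen:
              print("chained scheme cycles: OTP_%d = OTP_%d" % (i, seen[otp]))
              break
          seen[otp] = i
          otp = F(k, otp)

      # Counter-based construction: OTP_i = F(k, i). Even if two outputs collide,
      # the next input is the counter i+1, not the previous output, so nothing is predicted.
      def counter_otp(i):
          return F(k, i.to_bytes(8, "big"))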
