Jay Allen is a hero

I’ve just installed MT-Blacklist, which makes it possible to kill comment spam. After being plagued by a persistent IP-swapping blog spammer I decided to do something serious. It was a breeze to install, and I’m hoping it’s half as successful at stopping spam as it was easy to set up and configure.
What’s more, Jay provides a bunch of RSS feeds to keep up to date with the Blacklist. I wish I were that organised for BouncyCastle.
Oh, and if you’re accidentally blocked and your message barfs, send me an email so I can do something smart about reducing the aggressiveness of the filtering if necessary.

Ian Thorpe should not be scared

My rehabilitation for my elbow injury has started and I’ve joined the local pool. This is the first time I’ve done lap swimming in 10 years. I’m reasonably fit, but I don’t think my swimming is going to strike fear into the heart of the Australian Olympic Squad.
What I do notice is that swimming is great for my elbow and shoulder, but boy howdy, is it sore now. Stretch those muscles, ligaments and all the other squishy bits. Ow, ow, ow.

ECC and RSA speed comparison

There has been a lot of discussion in the crypto community, especially among those interested in the mobile space, about implementing ECC, due to its smaller key sizes and the perceived speed increase.
I’ve never been that sure, and an early report from a BouncyCastle user indicated that the real world was somewhat different. So I decided to write some code to perform comparative testing of the key generation and key exchange speeds of ECC and RSA.
I wanted to test real-world scenarios of public key usage, where public keys are created only rarely compared with the key exchanges that actually occur using those keys. So, in a way, I’m comparing the effort used in an operational scenario that I have specified, rather than a discrete individual operation.
The playing field is as follows:

  • BouncyCastle Java APIs, version 1.23
  • RSA key sizes from 1024 to 2048 bits, in steps of 256 bits
  • ECC key sizes of 192, 239 and 256 bits, using the X9.62 named curves prime192v1, prime239v1 and prime256v1
  • Variable number of key exchanges per key creation
  • ECC uses ECDH for key exchange
  • RSA uses a public key encrypt and a private key decrypt for key exchange
  • These scenarios do not perform signing or verification (which could be added)
  • Dell Latitude D600 with a 1.4 GHz Centrino and 512 MB of RAM

I also took into account the following recommended key strength comparisons:

RSA (bits)    ECC (bits)
1024          160
2048          282
4096          409

The key sizes compared below are a rough interpolation of these figures, chosen to give comparable key strengths.
The following tables show the results. These operational scenarios are based on a single key creation followed by multiple key exchanges, where the multiple is 100–1000 key exchanges per key creation. As this multiple turns out to be the major determinant of the difference between the ECC and RSA scenario speeds, this parameter should reflect your operational scenario as closely as possible.
The numbers in the tables show a delta between start of scenario and end of scenario in milliseconds, so the smaller the number the better.
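To give a feel for the shape of the test, here is a minimal sketch of what one such scenario could look like against the JCA APIs with the BC provider. This is an illustrative reconstruction rather than the actual test source (which, as noted below, is available by email); the class name, curve/key-size choices and variable names are mine.

    import java.security.*;
    import java.security.spec.ECGenParameterSpec;
    import javax.crypto.Cipher;
    import javax.crypto.KeyAgreement;
    import org.bouncycastle.jce.provider.BouncyCastleProvider;

    // Illustrative sketch only: one key creation followed by N key
    // exchanges, timed as a single scenario.
    public class ScenarioSketch {
        public static void main(String[] args) throws Exception {
            Security.addProvider(new BouncyCastleProvider());
            int exchanges = 100; // key exchanges per key creation

            // ECC scenario: generate once, then N ECDH agreements.
            long start = System.currentTimeMillis();
            KeyPairGenerator ecGen = KeyPairGenerator.getInstance("EC", "BC");
            ecGen.initialize(new ECGenParameterSpec("prime239v1"));
            KeyPair ecPair = ecGen.generateKeyPair();
            for (int i = 0; i < exchanges; i++) {
                KeyAgreement ka = KeyAgreement.getInstance("ECDH", "BC");
                ka.init(ecPair.getPrivate());
                ka.doPhase(ecPair.getPublic(), true); // agree against our own key, for simplicity
                ka.generateSecret();
            }
            System.out.println("ECC delta: " + (System.currentTimeMillis() - start) + "ms");

            // RSA scenario: generate once, then N public-encrypt/private-decrypt exchanges.
            start = System.currentTimeMillis();
            KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA", "BC");
            rsaGen.initialize(1536);
            KeyPair rsaPair = rsaGen.generateKeyPair();
            byte[] sessionKey = new byte[16]; // stand-in for a session key
            new SecureRandom().nextBytes(sessionKey);
            for (int i = 0; i < exchanges; i++) {
                Cipher enc = Cipher.getInstance("RSA/NONE/PKCS1Padding", "BC");
                enc.init(Cipher.ENCRYPT_MODE, rsaPair.getPublic());
                byte[] wrapped = enc.doFinal(sessionKey);
                Cipher dec = Cipher.getInstance("RSA/NONE/PKCS1Padding", "BC");
                dec.init(Cipher.DECRYPT_MODE, rsaPair.getPrivate());
                dec.doFinal(wrapped);
            }
            System.out.println("RSA delta: " + (System.currentTimeMillis() - start) + "ms");
        }
    }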

Algorithm (key sizes)     Key Exchanges   Delta (ms, per key size)
ECC (192, 239, 256)       1               96 / 162 / 182
RSA (1280, 1536, 1792)    1               995 / 2269 / 5713
ECC (192, 239, 256)       100             6329 / 10457 / 11652
RSA (1280, 1536, 1792)    100             5014 / 7300 / 12465
ECC (192, 239, 256)       500             30422 / 52527 / 57584
RSA (1280, 1536, 1792)    500             18423 / 30079 / 46924
ECC (192, 239, 256)       2500            153245 / 259025 / 288532
RSA (1280, 1536, 1792)    2500            86191 / 143894 / 219485

Note: As with all things, the usual caveats apply that I might have stuffed it all up and these are nothing more than random numbers, so do your own testing. If people want the source for these tests, then drop me an email.
What does this all mean? Well, I’m not sure what it means to you, but what it means to me is that ECC is not a panacea: at comparable key strengths, RSA performs better in these scenarios than its ECC equivalents. There are different considerations for each application, such as the size of the encrypted data and the memory use of the algorithms, and those are best left as an exercise for the reader.
Edit: 31st October 2006
Code available here

Robocop – no more

After the last couple of months it’s nice to have something go super smoothly.
This morning I had day surgery and the surgeon removed the external fixator. My wife (Sue) is a bit sad because after a general anaesthetic I didn’t have any reactions (no nausea, nothing). But she’s been fantastic looking after me. I look forward to wearing a long-sleeved shirt. Sad how it’s the little things that you miss the most.

BouncyCastle release 1.23 now available.

This release contains further improvements to the OpenPGP package, broadening the range of other implementations we are compatible with, and a heuristic decoder has been added which “guesses” whether it is dealing with an armored input stream or a PGP binary stream. The OpenPGP examples have been rewritten to take advantage of new API features, and an unpacking problem which occasionally caused the OpenPGP API to fail doing El Gamal decryption has been fixed. RipeMD160 and RipeMD320 have been added, and support for RipeMD has been added to the CMS/SMIME handlers. A number of other bugs and issues have also been addressed.
Go find it at http://www.bouncycastle.org/
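As a quick illustration of the heuristic decoder, assuming it is the getDecoderStream() call on PGPUtil in the OpenPGP package, usage looks something like this sketch (the class and file-handling around it are mine):

    import java.io.FileInputStream;
    import java.io.InputStream;
    import org.bouncycastle.openpgp.PGPUtil;

    public class DecoderSketch {
        public static void main(String[] args) throws Exception {
            // getDecoderStream() peeks at the input and wraps it in an
            // armored decoder only if it looks like ASCII-armored text;
            // otherwise the binary stream is passed through as-is.
            InputStream in = PGPUtil.getDecoderStream(new FileInputStream(args[0]));
            // ... hand 'in' to the rest of the OpenPGP API as usual ...
            in.close();
        }
    }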

What’s wrong with Java is all the choices…

Admittedly the title is somewhat tongue in cheek, but after a recent blog entry (I’m sure you can find it) and a look around the blogosphere for responses (HTTP referrer logging is very useful), it is clear to me that Java developers are different in some ways, and that much of this is due to the design of the Java APIs. I’ll use the stereotype “Java developers” for convenience, and because that’s how many respondents viewed the group, but I believe Java developers aren’t a single stereotype any more than “Perl Hackers” or “VB weenies” are.
In the Java world there is a lot of infrastructure (that pesky “morass of factories”) that allows alternative implementations to be substituted. This lets people write code that conforms to a single API and change only the behaviour, based on the implementation. It gives great runtime flexibility, and for much of the code that I write (crypto) it allows flexibility in the legal arena, where different strengths of crypto are allowed in different countries. Also, there are countries that have patents on particular algorithms that don’t exist in other countries. One response on use.perl.org, which took my comments out of context, obviously didn’t understand this, or didn’t even try to understand that different world view. Then again, why should they? They’ve probably never had to build something like that in their life, so it’s completely foreign, strange and, in their view, wrong.
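To illustrate what that infrastructure buys you, here’s a minimal sketch (mine, invented for this post, not anyone’s production code) of substituting implementations through the JCA factory methods:

    import java.security.MessageDigest;
    import java.security.Security;
    import org.bouncycastle.jce.provider.BouncyCastleProvider;

    public class ProviderSwap {
        public static void main(String[] args) throws Exception {
            Security.addProvider(new BouncyCastleProvider());

            // The same factory call, two different implementations resolved at runtime.
            MessageDigest dflt = MessageDigest.getInstance("SHA-1");     // default provider
            MessageDigest bc = MessageDigest.getInstance("SHA-1", "BC"); // BouncyCastle

            // Client code written against MessageDigest never needs to change.
            System.out.println(dflt.getProvider().getName());
            System.out.println(bc.getProvider().getName());
        }
    }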
Now, as many Java developers have been users of the Java APIs (well, duh), it is natural that their implementations have also been influenced by their experiences. In some cases this is good, and in some cases this is bad. It’s good when it’s clear and obvious (using a different JDBC driver, a different JCE provider, a different JDO implementation) and it’s bad when it’s weird and confusing (no example off the top of my head, but I’m sure there are lots).
What the Java development world is doing is trading compile-time decision making for runtime flexibility.
I’m now going to talk about Perl, something I’ve had a lot of experience with (which is why I’ve been confused that the Perl people have been slagging off at me; I never said anything bad about you guys, back off), but I’m sure this applies equally to other languages, implementations and developers. When writing a Perl program I have to choose which implementation of a library I want to use: will it be Cache::Cache, Cache::FastMemoryCache or Cache::Mmap? (Please, I’m not picking on these implementations; I just did a quick browse of CPAN looking for alternative implementations that didn’t share a common interface.) As a developer, I need to know how my software is going to be used, the environment, and the potential pitfalls of each. What’s more, it makes it harder to mock out specific implementations for testing when the code relies on a concrete implementation.
Is this bad? In general, no. Is this different to how I think? In general, yes.
I’m not smart enough to guess how my code will be used by somebody else, so I like to reduce coupling and allow different implementations to be substituted at runtime. Much of this is due to my experience with Java, but it’s also based on bitter real-world experience: more often than not, I’ve needed this flexibility. In a scripting language (Perl, for example) much of this can be resolved by having different, specific implementations in each environment. I’m sure this is fine for many people, but when I’m shipping class files to a customer, they don’t have the ability to change the code for their environment, so I have to allow for it beforehand.
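As a toy sketch of what I mean (all the names here are invented for illustration): code against an interface, and let the concrete class be picked at runtime, from configuration say:

    import java.util.HashMap;
    import java.util.Map;

    // Callers depend only on this interface; a different implementation
    // (or a mock, in tests) can be dropped in without touching their code.
    interface Cache {
        void put(String key, Object value);
        Object get(String key);
    }

    class MemoryCache implements Cache {
        private final Map<String, Object> map = new HashMap<String, Object>();
        public void put(String key, Object value) { map.put(key, value); }
        public Object get(String key) { return map.get(key); }
    }

    public class CacheDemo {
        public static void main(String[] args) throws Exception {
            // The class name could just as easily come from a properties file.
            Cache cache = (Cache) Class.forName("MemoryCache").newInstance();
            cache.put("greeting", "hello");
            System.out.println(cache.get("greeting"));
        }
    }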
So, the next time you want to have a rant about how hard something is to use, or how over-engineered it seems, take a long deep breath and imagine that you’re not always right: sometimes it’s done out of necessity rather than some perverse need for complexity, and maybe, just maybe, the people who wrote that code actually knew what they were doing.