I got rejected by Google – woe is me

I’ve just had the standard rejection letter from Google about how I’m not a good fit for their position “$p”. Awww. I was kinda enjoying the interviews in a masochistic way. This was at the end of interview #2.
I thought I’d share what went on, my thoughts on the process, and what sort of person Google is likely to get out of it. Also, I suspect Google is on a hiring rampage right now, with HR people scouring the web for people with some degree of fit.
Before the Google fan-bois jump out of the woodwork and exclaim that this is just sour grapes, I’m going to clear up a few things.
#1. I never applied. Google contacted me, and asked me to interview. I agreed, only after explaining that I was very happy where I was and that I was very unlikely to move.
#2. At each step of the process I expressed my concern about the process, and what that meant to hiring people.
Of course, Google being Google meant that was kinda irrelevant, because “Everybody wants to work there”.
First of all, the job I was asked to apply for was in C/C++, in a testing department, and had pretty much nothing to do with my current skillset. W..T…F..
OK, so I had a chat with the recruitment person about that. “Oh, we’ll just go ahead anyway as I’m sure we can find another job somewhere with your skills” says she.
Turns out Google don’t interview for a specific position. They must have a checklist of questions they’re going to ask, and if you can’t answer those, then you’re not a good fit for Google.
In the first interview, you’re asked to rate yourself on a wide variety of technical topics. I appraised myself as fairly low to medium for anything not in my current domain of expertise (JavaEE) and expert in that field (which seems reasonable, given I’ve written a couple of books).
In my second interview, with an engineer, I was delighted to be asked questions about physical networking devices, detailed questions about specific operating system filesystems, and some algorithmic questions that were theoretical at best but could be used to gauge people’s thinking skills.
Excellent. After I answered them honestly (I could take a guess at “a” and “b”, but I’d prefer to look things like that up) and went through what was required for “c”, the engineer said the interview was over. I was like “huh”. Don’t you want to ask me about Java, Agile development, teamwork, managing distributed development ? Any of the 10 things I am actually expert in doing ?
The answer was (predictably) “no”. These questions (the kind that only people who have worked in small companies doing the networking, user and operating system administration will answer without digging through old textbooks) are what Google considers a good indication of whether people are a fit to work at Google.
I was amused by the process, but felt it was fairly “typical” of an American company’s interview process. After having gone through a similar (but much more sophisticated) version for ThoughtWorks, I’m starting to believe that these companies are so entrenched in their ideals of what makes a good fit that they are creating a mono-culture.
I’d say, from experiencing the process, the people that Google will find “a good fit” are:
1. Bright, and capable of solving relatively theoretical algorithmic problems
2. 22-28 years of age, working in (or having just recently worked in) a very small environment where they have had to do all the networking and operating system management as well as small amounts of programming.
Ability to write code and work in teams appears to be a secondary concern, something I find fascinating from a sociological point of view, but given what I’ve read about the environment, possibly not all that surprising.
I wish Google well, I wish people who interview with Google well, and I hope that people who read this get some degree of insight into, and amusement from, the process. I’ve deliberately kept the contents of the interview process vague, as they asked me to do. I don’t believe for a moment that is legally binding, but it seems like the polite thing to do.

Hudson – CI tool written in Java

https://hudson.dev.java.net/
Very nice. Slick and easy to set up. I’ve not got a fully running application yet (stupid corporate network) but I can’t imagine it’s going to take very long.
Using Java WebStart as the download container was such a good idea.
One of the really compelling components of Hudson is that it has a master/slave build system built into the core. A very nice feature for large projects.

Debugging with Tomcat and Eclipse using jpda

I’ve not had much need for debugging my servlets, but now that I’m working on a legacy codebase, it’s a great means of stepping through what is really happening.
I’m not a fan of the “run Tomcat in Eclipse” approach, partly because it feels ooky, partly because I’d rather run the two VMs separately so I can bounce them independently, and partly because I write such crap code that I’d rather not crash both of them at the same time.
The information on how to make Tomcat and Eclipse play nicely together is a bit sparse, so I’ve provided a nice little summary here. This is working with Tomcat 5.0.X, Eclipse 3.2, JDK 1.4.2_*
Step 1. Configure Tomcat
(This is for the Win32 build with the spiffy UI for starting/stopping)
a) Open up the configuration GUI (“Configure Tomcat”)
b) Select the Java tab
c) In the Java Options box, add the following (substituting the correct locations):
-Xdebug
-Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=n
-Dcatalina.home=c:\tomcat
-Djava.endorsed.dirs=c:\tomcat\common\endorsed
-Djava.io.tmpdir=c:\tomcat

NB: These are all on separate lines, with a <CR> at each EOL
d) Select the Startup tab
e) In the Arguments section, add:
jpda
start

NB: These are all on separate lines, with a <CR> at each EOL
f) Stop and start Tomcat completely so the new options take effect
Step 2. Configure Eclipse (well, not much really to configure)
a) While in the Java perspective select Run/Debug…
b) Choose “Remote Java Application” from the tree (right click/New)
c) The defaults are all that is required.
d) Click “Debug” in the bottom corner to start it now, or Close for later
Step 3. Debugging the Application
a) Select the servlet/code that requires examination
b) Create a breakpoint in the code (an example servlet to try this on is sketched after these steps)
c) Click on “Debug” (if not already debugging) (*)
d) Click on the Debug Perspective (optional)
— this should show Eclipse connected to Tomcat, there should be a huge list of Threads, and the title of the Debug view should be something like:
NameOfApplication [Remote Java Application]
Java HotSpot(TM) Server VM[localhost:8000]

e) Now just run the application as normal (via a browser or whatever)
f) Watch in amazement as Eclipse debugs the application at the breakpoint.
(*) If you get an error such as “Failed to connect to remote VM. Connection refused”, this normally means that Tomcat isn’t started, _or_ there is already a debugging session attached via jpda. Check that Tomcat is running, and check the Debug perspective to make sure another session isn’t already connected.
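To see it work end to end, any servlet in your deployed webapp will do. Here’s a minimal sketch (the class name and URL mapping are made up for illustration) with an obvious line to drop a breakpoint on:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet, mapped to (say) /debugme in web.xml.
public class DebugMeServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String name = request.getParameter("name");   // set the breakpoint on this line
        PrintWriter out = response.getWriter();
        out.println("Hello " + (name == null ? "world" : name));
    }
}

Hit the servlet’s URL in a browser and Eclipse should suspend on that line, with the full call stack available in the Debug perspective.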

So cool

After those extended late night hacking sessions and far too much World of Warcraft I’ve been finding my poor Dell D600 lappy getting a bit hot under the collar.
Some might say that’s a good time to take a break. Feh! I say. Clearly there must be a technical solution to this problem.
And there is !
Laptop Cooling Options.
I bought the Vantec Lapcool 3, and for the princely sum of $25 Australian (about 10c US) I’ve now got a nice cool lappy to use.
Works great, powered by USB, adds about 1cm to the height of the lappy (which makes very little difference to my ergonomics). Super quiet, and my laptop never got above “tepid” over the weekend. It had been getting to “magma” and causing the CPU to go into “slowdown” mode.

Clean firetrucks and clean code

Seth Godin wrote an article about clean firetrucks that got a few responses from fireys (I love you guys!) and general commenters.
The premise of the article was “why do they stand around and clean stuff when they should be out preventing fires”. Now, it was Seth’s goal to paint a vivid picture about why organisations should be pro-active in getting business, rather than just standing around.
However, interestingly enough, I had a different picture in my mind after I read the title, but before I read the blog entry.
Cleaning firetrucks is possibly the most important activity that can be performed in a fire station, for the same reason as cleaning your own car (shame on me) and keeping your code clean.
You are actively looking at the item in question, and it’s more than just “a bit of spit and polish”: you’re taking time to make sure the hose-reels are nicely wound, that the tyres are pumped, that the spare tyre is ready to go, and that there is no clutter in the driving cabin.
Why is that ? Well, when the shit hits the fan, and the rubber has to hit the road, then the truck’s gotta be in tip-top shape and operating at peak effectiveness.
So, the perception that “washing the truck” is just “busy work” is so pervasive that even clever and articulate commentators choose to reflect on these activities in this manner.
What chance do we have as software developers of getting people to understand that refactoring code, and general code and build housekeeping, provide those same values ?
When the bug comes in, or the change request comes in, I wanna be in a position to slide down the pole, put on my red shiny hat, big ass pants and wellies and write code to save the day. (How’s that for a visual ?)
I don’t want to find out that the hose has kinks in it, or that the cabin is filled with Maccas rubbish, at the worst possible time. The orphanage is on fire and nuns and children need to be rescued now !
Cleanliness may not be next to godliness, but I’m sure there is a high correlation between cleanliness and the level of maintenance of the items in hand, whether that be a 10-tonne fire truck (or whatever they weigh) or the software that I write.
Now, if you’ll excuse me, I need to hose down some code I wrote yesterday.

The grass is always brown

During lunch today I managed to catch up with some ex-colleagues and have a good gasbag about the work we’re all doing. I was bemoaning the general level of Java experience in the industry, and was waxing lyrical about how few people understood what an “interface” (the Java keyword) is used for.
Quick as you like, Marty replied with:
“That’s where you declare your constants”
Needless to say, that brought a lot of laughing, and then we all collectively sighed as we realised that’s what we deal with on a daily basis.
Note for the sarcasm-impaired: Marty doesn’t believe that. It’s humour, laugh.
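For anybody genuinely unsure what the fuss is about, here’s a tiny sketch (names invented) of what the keyword is actually for, versus the “constant interface” antipattern Marty was lampooning:

// What interface is actually for: declaring a contract that classes implement.
interface PaymentGateway {
    void charge(String accountId, long amountInCents);
}

// The joke: an interface used purely as a dumping ground for constants
// (every field in an interface is implicitly public static final).
interface Constants {
    int MAX_RETRIES = 3;
    String DEFAULT_CURRENCY = "AUD";
}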

The time has come, the walrus said..

‘To talk of many things:
Of shoes – and ships – and sealing-wax –
Of cabbages – and kings –
And why the sea is boiling hot –
And whether pigs have wings.’
As of Friday, I’m no longer working at ThoughtWorks. After 2 and 1/2 years of working in consulting, I’ve decided that it’s not a career I want to continue with.
ThoughtWorks is a great place, with great people, but basically consulting sucks.
I wish everybody there the best.

The need for speed

For the past 6 weeks I’ve been working on developing a prototype that acts as a front end for a query service. The queries come in quite a few different shapes and sizes and the prototype only implements a vertical slice through a few of the queries.
The architecture is quite simple: there is a web front end, some middleware, a proxy/gateway and a legacy data source. All of it is written in Java, with a J2EE server talking to the proxy (a plain Java application). The interesting part of the prototype was having a pluggable middleware layer so we could test a few different technologies (raw sockets, JMS and JavaSpaces) for performance.
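The “pluggable” part boiled down to something along these lines. This is a rough sketch with invented names, showing the shape of the thing rather than the actual code: each transport (sockets, JMS, JavaSpaces) sits behind a common contract, so the web tier never knows which one it’s talking to.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Common contract for the middleware layer (hypothetical names).
interface QueryTransport {
    String execute(String query) throws IOException;
}

// Example: a raw-socket transport that talks directly to the proxy/gateway.
// (In practice the interface and each implementation live in separate files.)
class SocketQueryTransport implements QueryTransport {
    private final String host;
    private final int port;

    SocketQueryTransport(String host, int port) {
        this.host = host;
        this.port = port;
    }

    public String execute(String query) throws IOException {
        Socket socket = new Socket(host, port);
        try {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println(query);
            return in.readLine();   // single-line response, purely for the sketch
        } finally {
            socket.close();
        }
    }
}

Swapping the JMS or JavaSpaces implementation in is then just a matter of configuration, which is what let us compare the three fairly.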
One of the interesting parts of this project is that the legacy datasource is fast, very, very fast. A query returning a single result across a very, very large dataset returns in about 15-20ms. A larger result set would take closer to 60ms. Now, I’ll pose a question here (which is answered later, so no peeking): given the architecture, and given the technologies involved, what do you think the slowest point would be ?
The implementation used Tomcat as the J2EE container, with JSP producing the output (both an HTML view and a data-only view). Initial testing was done on my desktop P4 3.0GHz, and later testing on some whizzy Sun hardware.
We had simple monitoring of the transactions at the browser (using JMeter), at the servlet and at the proxy/gateway; this consisted of keeping the average transaction time at each point. JMeter provided a couple of other bits of information, but we only compared the averages.
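The servlet-side measurement was nothing fancier than wrapping each request and keeping a running average, conceptually along the lines of the simplified sketch below (an invented class name, not the real code), registered via a filter mapping in web.xml:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Simplified sketch of per-request timing at the servlet layer.
public class TimingFilter implements Filter {
    private long totalMillis = 0;
    private long requestCount = 0;

    public void init(FilterConfig config) {
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        try {
            chain.doFilter(request, response);
        } finally {
            record(System.currentTimeMillis() - start);
        }
    }

    private synchronized void record(long elapsedMillis) {
        totalMillis += elapsedMillis;
        requestCount++;
        if (requestCount % 100 == 0) {
            // average transaction time at this point in the architecture
            System.out.println("Average servlet time: " + (totalMillis / requestCount)
                    + "ms over " + requestCount + " requests");
        }
    }

    public void destroy() {
    }
}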
We tested everything from a “no-load” scenario (4 transactions/sec) up to “extreme load” (600 transactions/sec). No attempt was made during development to optimise for performance; we were only really interested in the comparative performance of the middleware options, though total performance would have become an issue if the average transaction time had approached 500ms.
Thankfully, that was never an issue. The final results indicated that the vast majority of the time was spent in the legacy system (remember, only 20ms), with the web and middleware layers adding only 10-15ms in total on the way back to the browser. This was fairly consistent even under extreme load, a result that surprised us, and the performance testing experts who came in to independently measure the results.
How many people would have guessed that final result ? Certainly not me. I’ve never thought that J2EE was slow, but to actually participate in the development of a system and produce real metrics was very interesting.
So, it’s completely possible to produce well performing J2EE systems, and if your system doesn’t perform, don’t blame J2EE, blame your code.