Tracing, without tracking

There’s been a lot of discussion about the COVID-19 tracking application being developed by the Australian Government.  I’m a very strong critic, partly because of general government IT incompetence (and these things are hard), and partly because the base implementation relies on the protocols of TraceTogether (the Singapore Government application).

The TraceTogether application has a centralised data store, which is used, in conjunction with stored personal information, to contact the people who have been “near” an infected person.  This has a whole bunch of significant privacy concerns, especially as there is absolutely no need to build the application in this manner.  Not only is this an issue if the store is breached (you lose all your private information), but all those folks who have been near you can be identified.  This is not good.

This might seem confusing to people, but you can actually build systems without needing to know the identity of the people involved, and without the information needing to be securely stored, because it’s public information anyway.

How?  Well, let me tell you.

First of all, I’m going to describe the conditions required for this to work.  They’re not onerous, and they’re quite sensible.

  1. The application users are trying to help, and will respond to application alerts.  This is no different to other applications, but if people download the application and have no intention of using it effectively, then it’s exactly the same as them not downloading it in the first place.  So it’s not a negative, it doesn’t break anything, it’s just “people aren’t notified”.
  2. There is somewhere all application users can contact for updates. There needs to be a way to announce new information that the applications can make sense of.  This might be an alert, or it might be that the applications can ask “what’s new?”

Um, that’s it really.

Now, for this next bit I’m going to use a metaphor.  It’s not completely true, but it gives the right kind of story about how this can work, and how it can work without you needing to give up any information ahead of time to anybody, unless you either test positive or are at risk.  Sounds good?  Ok, here we go.

Alice has downloaded the app COVID-SONG-STAR (CSS) to her phone.  She starts the application, and then goes about her daily business.  All the while, CSS is broadcasting songs to everybody within 1.5m and adding the songs to a daily playlist (DPL).  These songs are chosen randomly from all the songs on SpotifyUniversePlus, so there are a lot of songs, and nobody else in the world will ever play the same songs at the same time.

At the same time, Bob has also downloaded CSS and is rocking out with his songs.  Same with Carol, Dave, Eve, Frank and Grace.  CSS is a pretty cool app, and is able to hear what other songs are being sung within 1.5m.  Every time you hear a song, you add it to your daily heard list (DHL).

The daily playlists are held for 21 days (DPL1 -> DPL21) and the daily heard lists (DHL1 -> DHL21) the same.  These are held on the phones, and if any of our cast of characters decide to no longer participate, they can just delete everything.  Nothing leaves the phone without their consent.  No cloud, no internet, just your phone.  There’s no private data anywhere, just the songs (DPL and DHL) you’ve played, and heard.

Now what?  Well, Alice has got a bit of a sore throat, can’t smell anything and thinks oops, maybe I should be tested.  So she rocks off to her trusted and caring medical professional and tests positive for COVID-19.  Fuck.  Poor Alice.  (Don’t worry everybody, Alice turns out fine, this story has a very happy ending).

We need to find out who Alice has been near, and guess what, we have all the information we need.  We just publish all the songs on Alice’s playlists (DPL1 -> DPL21).  Anybody who’s heard these songs is at risk, but how will they know?  Well, the CSS can get a daily update of all the Infected People Play Lists (IPPL), or a server can send alerts to the applications each time a new IPPL is published.

So, Alice publishes her DPL (with help from her medical professional, because we only want IPPL, not just any DPL) and Bob, Carol, Dave, Eve, Frank and Grace all look at the songs, and see if any of them are in DHL1 to DHL21.  If they are, then that means they were close enough to Alice to be at risk of contracting COVID-19, so they need to go to their medical professional and say “I heard this song, and I hear the person playing it got sick, so I think you need to stick a probe uncomfortably up my nose, and then I’ll go and hide in my house for 14 days”.
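
For the technically minded, here’s a minimal sketch (in Java, with entirely hypothetical names, not any real application’s code) of the phone-side logic: generate random “songs”, remember what you’ve heard, and check the published infected playlists locally.

```java
import java.security.SecureRandom;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch only: random "songs" are broadcast and heard locally,
// and published infected playlists (IPPL) are checked on the phone.
public class SongTracer {
    private final SecureRandom random = new SecureRandom();
    private final Set<String> dailyPlaylist = new HashSet<>();  // DPL - songs we've sung
    private final Set<String> dailyHeardList = new HashSet<>(); // DHL - songs we've heard

    // A new random "song" to broadcast; 128 random bits means nobody else
    // will ever sing the same song.
    public String nextSong() {
        byte[] bytes = new byte[16];
        random.nextBytes(bytes);
        StringBuilder song = new StringBuilder();
        for (byte b : bytes) {
            song.append(String.format("%02x", b));
        }
        dailyPlaylist.add(song.toString());
        return song.toString();
    }

    // Called when we hear somebody else's song within 1.5m.
    public void heard(String song) {
        dailyHeardList.add(song);
    }

    // Check a published infected playlist against what we've heard.
    // All of this happens on the phone; nothing is uploaded anywhere.
    public boolean atRisk(List<String> infectedPlaylist) {
        for (String song : infectedPlaylist) {
            if (dailyHeardList.contains(song)) {
                return true; // we were near an infected person - time to get tested
            }
        }
        return false;
    }
}
```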

What do we notice?  At no point do I ever need to store private information about our cast of characters.  Alice, Bob et al are all anonymous, in control of their own data up until the point where they are at risk.  There is no way to “forward trace” by a centralised system, because while Alice has played all these songs, there’s no way to know if anybody, or everybody heard them because it’s all stored locally on the phones.  Only Eve knows the songs Eve has heard.

What I’ve described is a story in which the participants cooperate, and the CSS has been built with this system in mind.  It’s very easy to imagine a scenario where the CSS is built in such a way that it uploads everything, including all your phone data, but that’s why many people are strongly advocating that the app (and the protocols and design of the system) is independently reviewed, so we can be sure that it’s operating as expected.

At this point, based on the track record of our government’s competence, and the privacy and security implications of the information the government wants to collect, I cannot recommend downloading and installing the COVID-19 app being developed.

Thanks for reading.


Consequences

After our last federal election, I felt that Australia, and the electorate, should take more responsibility for the choices people were making.  I didn’t want to reward the electorates that would vote for politicians who were not in alignment with my principles. I spend money on all sorts of things, holidays (hence the name of the application) but also electronics, clothing, etc.

I mean politicians who were abusing the electoral system for personal gain, or who were claiming to support areas of electorates and ideologies while actively working against those constituents.

So, I wrote some software to analyse our electoral data and voting patterns. I also implemented a very simple weighting model which allows the input of any weighting against votes for a party (if you like the Greens, give them a weighting of 20, if you don’t like the ALP, give them a weighting of zero).

The software allows me to generate a weight-sorted (prioritised) list of electorates.  I’m just an individual, but I am choosing to not support electorates that vote in ways I don’t find compatible with my principles.  This allows me to support the electorates that are in alignment by spending my holiday dollars in those places, in priority over the electorates that are not.
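
For the curious, the weighting model really is that simple.  Here’s a hypothetical sketch (the data shapes and names are my assumptions, not the actual application code): multiply each electorate’s votes per party by your weighting for that party, sum them, and sort.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the weighting model described above.
public class ElectorateRanker {

    // votesByElectorate: electorate name -> (party name -> vote count)
    // weightings: party name -> weighting (e.g. Greens = 20, ALP = 0)
    public List<String> rank(Map<String, Map<String, Long>> votesByElectorate,
                             Map<String, Double> weightings) {
        List<String> electorates = new ArrayList<>(votesByElectorate.keySet());
        // Highest weighted score first (negate the score for descending order).
        electorates.sort(Comparator.comparingDouble(
                (String name) -> -score(votesByElectorate.get(name), weightings)));
        return electorates;
    }

    private double score(Map<String, Long> votesByParty, Map<String, Double> weightings) {
        double total = 0.0;
        for (Map.Entry<String, Long> entry : votesByParty.entrySet()) {
            Double weight = weightings.get(entry.getKey());
            total += (weight == null ? 1.0 : weight) * entry.getValue();
        }
        return total;
    }
}
```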

If nothing else, it gave me a much greater ability to examine our electoral information, and get closer to understanding the patterns of voting.  Some of the patterns surprised me, and saddened me.  I can probably build versions for most electoral systems, as the way the system is built it should fit.  If somebody has electoral data for other geographies, or wants to point me at it, I’d be willing to give it a go.

You can find a running version at http://holidays.aws.eaves.org after I build a semi-functional web interface (the original, and core is all CLI based).

Edit: I took the site down.  It was fun to play with, but didn’t feel like paying the $5 to AWS each month.  Maybe as we get closer to the next election I’ll revisit.  (April 24th 2020)

Now, Turning to Reason, & Its Just Sweetness (the design)

This is the first part in a 3 part blog series on how to write code so you can build testable systems with external dependencies.  Link to part 2 is at the end of the post.

There has been some discussion over the last year or so about testing. Some of this discussion has been naive but well meaning. The intent is good, we all want to build reliable and robust systems, but it’s been naive because the chosen path leads to the opposite.

The general thrust of this discussion is about “mocking out external interfaces”. Which, on the surface seems like a sensible thing to do. After all, we want to test our software and we don’t want to have these slow external interfaces (like a database, a message queue, a file system) impact our development.  So clearly, the approach is to mock/stub out the calls to AWS/MQ/database in our code and “pow” – tested, reliable software.

Well, no.

What we have now is coupled, fragile software and those tests you spent all that time writing, they’re basically testing “does String.equals() work?”

It’s clear that there’s a gulf in both education and experience in this space, so, prompted by the recent discussions on Slack, I’ll present to you a narrative on the design and development of a multi-component software system.

First, we have some context so people can do some reading.

  1. The assertion that we need “AWS stubs” to test systems that are deployed to AWS
  2. A wonderful video by Gary Bernhardt about testing which covers a lot of what I’m going to go through: Boundaries — Destroy All Software Talks
  3. A wonderful blog post by Ken Scambler about why mocks and stubs should be avoided: To Kill a Mockingtest | realestate.com.au Tech Blog

Second, I’m going to talk a little bit about testing before launching into some solution design so that people can understand the why part.

Testing is hard. Much harder than people give credit for, and what’s worse, most people think that testing is easier than it is, leading to lots and lots of terrible tests. Mocks and stubs are an indication of a design smell. Ken’s comments in the Slack conversation and his post provide a more concrete description of why.

Along with testing being hard, there are different concerns with testing, and this is fundamentally where the big issue occurs. Not all testing is symmetrical, and not all techniques are sensible or desirable.

Within a software system we have 3 main categories of tests.

  1. Functionality tests – these are tests that ensure that the software WE are writing behaves as WE expect it to. The most notable type of test encountered in this space is the Unit test.
  2. Contract verification – these are fixtures that VALIDATE that the components and interfaces our software was validated against with unit tests continue to work. Think of these as a sort of pre-condition. They’re not so much tests as they are contract verification. It just so happens that a lot of the testing frameworks we have in the software ecosystem are very good for building contract verification suites.
  3. Smoke tests – these are fixtures that VALIDATE that the components deployed into an environment are correctly configured, the interfaces are available as expected and they all operate together correctly. This can be a single verification, it can be a sub-set of the contract verification tests, it can be a synthetic transaction end-to-end through the system. So many choices, so many options.

For the purposes of this narrative, I’m going to be only interested in the first 2, as they are the general consideration for component design and development. This doesn’t, and shouldn’t, mean that for an operational system the 3rd category isn’t equally, or even more, important – it’s just that I’m going to deliberately put it out of scope for now.

Ok, context done. Let’s do some design. Step one is to have a look at the problem we want to solve, and fortunately for us we have a spare one.

The system receives an SQS event which indicates which files in S3 to load and process; the files contain some numbers on which we “do math”, and the result is written out to S3.

Many people would at this point launch into TDD, and while that might seem sensible, I’d always advocate that it’s worth spending some time thinking about the problem, along with some analysis and preliminary high-level design.

30 minutes later, add a small amount of Ken Scambler for sanity and we have the following initial thoughts about how the system design will proceed.  Note, this isn’t locked in stone, but when doing TDD, it’s not some random wandering in the dark about where your design will end up – you should be doing science here. Have a hypothesis, and let the code help you work that all out.

[Image: initial design sketch of the main components and flows]

We can see the main components, and have identified the basic flows. Nothing too exciting, probably 30mins worth of chatting with Ken. For those interested, he did a “functional style” analysis and having us both work on the design we ended up with substantially the same system components and interaction design. Was a lot of fun. Recommended. Would pair with Ken again.

Now we want to think about one of the most interesting parts of the implementation: how will the use of the data store work with the event queue?  Part of the requirements says that the events are only to be removed when the data store items have been successfully processed – so we need some form of signalling between the two. We could couple the two together with some horrible “if” code, and expose the innards of the event queue. Guaranteed this will be hard to test, so we’ll just dependency inject a processor into the event queue – seems like the best approach. Writing code will test it out, but if you don’t know the direction you’re heading in – you’ll just wander all over the map.

[Image: design sketch of the event queue and data store interaction]

(Note: You’ll see that I’ve put some form of “attach()” method in the interface/contract. This gives me some way of doing “authentication” / “connection” to the external systems. Probably not going to implement in the initial phases, but just a reminder that it’s probably going to be important at some point)

The important part of this is the process(Processor p):boolean method. This enables us to “tell, don’t ask” when processing things on the event queue. For now, we’re only going to get one type of thing on that queue so this is probably the simplest implementation and all that is needed. If there was a bunch of different things on the event queue, I’d probably construct the event queue with some form of Factory that would allow each of the events to be processed, but no need for that now.
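
To make the shape of that contract concrete, here’s a minimal sketch of how the interfaces might look.  EventQueue, Processor, attach() and process(Processor p):boolean come from the design notes above; the Event type and its payload() method are my own assumptions for illustration.

```java
// Sketch only - the Event shape is assumed, the rest follows the design notes.
interface Event {
    String payload();
}

interface Processor {
    // Returns true when the event was handled successfully.
    boolean process(Event event);
}

interface EventQueue {
    // "Authentication" / "connection" to the external system.
    void attach();

    // Tell, don't ask: the queue drives the processor, and only removes
    // events that were processed successfully.
    boolean process(Processor processor);
}
```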

The last little bits are pretty similar, and don’t really require any major thought – just simple data sources and sinks.

[Images: design sketches of the data source and sink components]

As stated above, names-not-final. There’s nothing about what I’ve scrawled here that is “forcing” me to do it in this way, and the code may well change my thoughts as I get into it. However, spending the (about) 60 minutes to draw these 4 pictures and talk with Ken gives me confidence I have a robust solution that’s going to fulfil the solution requirements as well as have the right sorts of extension points. The discussion and some thought experiments means that I’m pretty sure I can implement this solution using any underlying implementation technologies. Files, databases, queues, sockets etc. This is the most important thing when designing something – it’s not about “can I build this using technology <X>”, it’s “can I build this in ANY technology”.

Finally, if we look at this now slightly differently, we have the classic “boundaries” model where our business logic (the calculation) is all in the “middle” with our external interfaces providing the interfaces to the horrible messiness of the outside world. Functional core, imperative shell. This is another good indication that our design proposal has merit.

[Image: the “boundaries” view of the design – functional core, imperative shell]

This also helps us understand where our testing styles should be going. We should have our unit/functional tests for the “functional core”, and contract/verification tests for our “imperative shell”. Our code is the core – this is really, really important and is the key point that needs to be made from this entire narrative. Our job is not (NOT!) to test the AWS libraries, the DB connection libraries, the SNS/SQS libraries – these can be verified at run-time using smoke tests, or at various points in the development cycle using contract/validation tests.

For people who worry about the protocols – that’s not a testing job, that’s a business logic task. If the payload in the event queue is “different” – then the system should just fail (gracefully or otherwise). The contract is broken, you no longer have to continue to behave rationally and can make sensible decisions about your own reactions. Under no circumstances should you attempt to “massage/hide” the broken contract. This leads to hard to detect errors and is a significant source of production failures. Just fail early – and in close proximity to the broken contract. This is a fundamental of good software implementations.
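
As a rough illustration of that “fail early” idea (not code from the actual project), a payload parser might look something like this; the “bucket/key” payload shape is assumed purely for the example.

```java
// Hypothetical sketch: if the payload isn't what the contract promised,
// stop immediately rather than trying to massage it into shape.
final class FileEvent {
    private final String bucket;
    private final String key;

    private FileEvent(String bucket, String key) {
        this.bucket = bucket;
        this.key = key;
    }

    // Assumed payload shape for illustration: "bucket/key"
    static FileEvent parse(String payload) {
        if (payload == null) {
            throw new IllegalArgumentException("Broken contract: payload was null");
        }
        String[] parts = payload.split("/", 2);
        if (parts.length != 2 || parts[0].isEmpty() || parts[1].isEmpty()) {
            throw new IllegalArgumentException(
                    "Broken contract: expected 'bucket/key', got '" + payload + "'");
        }
        return new FileEvent(parts[0], parts[1]);
    }

    String bucket() { return bucket; }
    String key()    { return key; }
}
```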

Now, Turning to Reason, & Its Just Sweetness (the aftermath)

This is the last part in a 3 part blog series on how to write code so you can build testable systems with external dependencies.  The first post can be found here

Thanks to Ken Scambler, Milly Rowett and Alyssa Biasi who all made contributions to the posts.

Things that happened about the design.

The initial design is pretty compact, and the components are well factored in terms of their responsibility, but I noticed a bit of a smell in the implementation where there was a bit of twisty-turny logic to deal with the receipt of the message on a queue, and then waiting around to see if the file existed.

The implementation problem the design creates (which I ended up ignoring for this example) is that if the file _never_ appears, you end up with all sorts of knots, and I don’t really want that sort of logic to be part of my “process the file” code.

Going through the design a second time with Alyssa Biasi we came up with separating the logic for the “got the message, wait for the file” and “process the file”. We can decouple these 2 problems and the responsibility for each part makes it a lot cleaner. Then we can tune the “retry logic” in the “wait for the file” code, and the “process the file” logic never needs to know. A simple IPC mechanism (another queue message suffices) and responsibility separation seems to be a lot cleaner. The nice thing is that it’s just another form of QueueProcessor, so we can re-use most of the code framework, with just some changed wiring. Winning.

In the implementation of S3DataStore(), there’s the opportunity to decouple the Authentication method from the implementation. For the purpose of the example I didn’t bother adding this complexity in, but my original notes in the design highlighted how it might happen. The Java AWS libraries actually make the implementation of such a design very straightforward. (Currently the implementation assumes Anonymous “world read” access.)

Things that happened about the code.

It was fun writing code again. There are parts of the code that I’d refactor to make neater – a protocol implementation decision about how to pass information in the request queue, and some of my test code that is kinda bleh – and I’d like to fix those in the future.

My choice of code structure (and lack of new Java 1.8 features) is based on history and a lack of writing much code. I didn’t particularly want to use this as a mechanism for learning new coding techniques, and to be frank, there’s nothing complicated enough in here that warrants it.  There’s definitely areas that could be cleaned up, generally where I create local variables for return values. Most of those can be inlined (the compiler is doing this anyway), but it’s handier for me to examine the variables in a debugger when I’m tracing things.

I was also working with Milly Rowett for a significant part of the development. When mentoring, I prefer to keep things obvious, even if it means a little more typing is involved. Milly might be able to provide feedback on how valuable it was – it certainly was easier for me to explain as I typed.

The code structure completely changed when I decided to use maven (to get the AWS dependencies managed) which was pretty painless, but annoying. Not sure I would have done it differently, because I didn’t need maven until half way through. The final structure is fine, but created a change-set which was nothing but structural changes.

The code isn’t really meant to be a demonstration of “great art in writing code” or “faultless”. It was done as an implementation to show how to design code so you can test external dependencies (such as AWS) without relying on mocking libraries or having the code be solely dependent on that implementation.

If you want to download and play – and finish it – git@bitbucket.org:jon_eaves/aws-testing.git

Now, Turning to Reason, & Its Just Sweetness (the code)

This is the second part in a 3 part blog series on how to write code so you can build testable systems with external dependencies.  Link to part 3 is at the end of the post. The first post can be found here

Author’s comment: This post was written over a period of a couple of weeks, and developed alongside the code. There is inconsistency between what I say at the start and what ends up in the code. This is to be expected, and probably worth examining to see how things changed – as much to see the thought process, and that while the design tends to remain fairly static, the implementation changes as new information is gathered. The total time spent on the code is about 10 hours. Of that time, about 4-5 hours was code developed while mentoring a (very smart) graduate. The other consideration is my unfamiliarity these days with code authoring (sadly). I imagine if I was to start again, I’d probably be able to do it all in about 4 hours (if that). The code is pretty trivial (like most code).

—- Actual post follows —

[Image: project structure, including the “contracts” section]

First, simple structure, as would be expected from most projects. The “contracts” section contains the verification tests for our external interfaces. This is the sort of testing that is “pact-like”, in that if our code breaks for unexpected or unexplained reasons, we can run our contract tests and see if the remote systems that we have no control over have changed. These are our sanity check. They’re not there to “prove” anything – other than to give us confidence that, at the time the code was written, it worked against those contracts. It’s a good idea, you should think about it.

[Image: package layout of the code, matching the design components]

Now, our code layout looks like our design, even after working through it TDD-style.  That’s pretty handy. There are 2 useful observations and questions at this point. The first is “how do we calculate the math part?”. The answer to that is “we implement a Processor” for it. The second is “where is the application?”. The answer to that is “there isn’t one yet”, but it’s basically a simple wrapper around the EventProcessor. We can see how it will look if we examine one of the EventProcessor tests.

[Image: one of the EventProcessor tests]

We can see our alternative implementations of the interfaces for DataStore, Processor, EventQueue and OutputDevice for our testing. I’m not completely happy with the EventProcessor and needing to set the delay and retry. It seems to be the right thing internally, it just looks ugly. Maybe something better will emerge further down the line and EventProcessor will become an interface, with some form of concrete implementation. For now it’s concrete, and it’s the only real co-ordination point within the code.


The interesting things in the code occur in the EventQueue. The queue has the defined responsibility for processing the events. This is by design, and we can see that the SuccessRemovalEventQueue in the example above has injected a List of Events (the queue) and the Processor, which in this case is the KeyExistsInStoreProcessor. These particular implementations were chosen because I want to start modelling and investigating parts of the real solution that’s going to be needed. The concrete implementations here for EventQueue and OutputDevice are used as a “dipstick” (Testing with mock objects – JUnit in Action – page 106).
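
To make the “dipstick” idea concrete, here’s a hypothetical sketch of what such a test implementation might look like.  The OutputDevice name comes from the design; the method and class names here are my own assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical "dipstick": an OutputDevice that just records what it was
// asked to write, so a test can dip in afterwards and check the observable
// result. No mocking library required.
interface OutputDevice {
    void write(String result);
}

class RecordingOutputDevice implements OutputDevice {
    private final List<String> written = new ArrayList<>();

    @Override
    public void write(String result) {
        written.add(result);
    }

    // The test inspects this to verify what the system actually produced.
    List<String> written() {
        return written;
    }
}
```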

This form of development makes it trivial to then compose up the application. Turns out – it’s just a setup, then construction of the dependencies. I wanted to print out when things were processed in the first version (let’s call it the MVP), so to do that we just created ConsoleOutput to implement the output device. Total time to create the application – 2 minutes.

[Image: the first version of the application]

You’ll notice that it’s one of the test cases that’s been modified. That’s ok – it’s a great way to start – the actual application doesn’t need to know, and we can focus on implementing each of the features one at a time. We’ll start with the DataStore, because that’s probably the easiest part to implement first.

In the blink of an eye, the BatchFileProcessor has changed to the following. And with great confidence that, as the injected strategies have the same contracts as the tested ones, our imperative shell works as advertised.

[Image: the updated BatchFileProcessor]

Now, build out DoSomeMathProcessor() with TDD.

[Images: building out DoSomeMathProcessor with TDD]

Essentially we test this by having a known set of values passed into our data store. This is made easier by having an “object mother” (ObjectMother) for the creation of the test data. Notice how we’re testing results – not trying to delve into the class to see if “math works”, but checking that we’re getting the right results.
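
As an illustration of that style (not the actual code from the repository), such a test might look something like this, with an ObjectMother handing out known data and the assertion checking only the result.

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

// Hypothetical sketch: test the "do math" behaviour by checking results,
// not internals. The ObjectMother and the summing logic are illustrative
// stand-ins for the real classes.
public class DoSomeMathProcessorTest {

    // "Object mother" that hands out known test data.
    static class ObjectMother {
        static List<Integer> knownNumbers() {
            return Arrays.asList(1, 2, 3, 4, 5);
        }
    }

    // Stand-in for the real processor: the only detail we care about is the result.
    static long doSomeMath(List<Integer> numbers) {
        long total = 0;
        for (int n : numbers) {
            total += n;
        }
        return total;
    }

    @Test
    public void producesTheExpectedResultForKnownInput() {
        assertEquals(15L, doSomeMath(ObjectMother.knownNumbers()));
    }
}
```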

We’re at the point of our code where we now have great confidence if we have a DataStore that returns us the valid data – we can add it up correctly. So, do that next.

And the contract test for S3DataStore(). Note here that I’m not attempting to mock, or stub, or do anything to test the implementation of S3DataStore() with the BatchFileProcessor. This is creating the contract for the use of the S3DataStore(). If I can use it against S3, then it works. That’s it!

[Image: the S3DataStore contract test]

There’s a little bit of context worth discussing here. For this particular bit of code to work, the S3DataStore() class needs to be implemented and working. This was done in a few stages (over about 20-40 minutes as I looked up various bits of the S3 API). I started with the ds.exists() test, because that also allowed me to see if the Authentication parts were going to work.

For this test to ever work, we need to set up the world. That’s ok – we know that, we’re not trying to fake out the world here, this is a contract test to verify that our code works against the real world. This could also form part of a smoke test. I manually set up an S3 bucket in the Sydney region, and manually copied the “testdata.txt” file into it. I could use a shell script to do this, I could use a bit of Java code to do this and clean up after. That’s all completely valid, but doesn’t really help in understanding “how to test AWS code” (really, it’s the implementations of the imperative shell we’re testing, AWS is just a particular implementation).
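
To give a flavour of what such a contract test might look like, here’s a sketch using the AWS SDK for Java (v1).  The bucket name and the shape of the S3DataStore class here are hypothetical, not the repository’s actual code.

```java
import static org.junit.Assert.assertTrue;

import org.junit.Test;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.AnonymousAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Hypothetical contract test: it only passes when the bucket and test file
// really exist in S3 - which is exactly the point.
public class S3DataStoreContractTest {

    // Minimal stand-in for the real S3DataStore, assuming anonymous world-read access.
    static class S3DataStore {
        private final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("ap-southeast-2") // Sydney
                .withCredentials(new AWSStaticCredentialsProvider(new AnonymousAWSCredentials()))
                .build();
        private final String bucket;

        S3DataStore(String bucket) {
            this.bucket = bucket;
        }

        boolean exists(String key) {
            return s3.doesObjectExist(bucket, key);
        }
    }

    @Test
    public void testDataFileExistsInTheRealBucket() {
        S3DataStore ds = new S3DataStore("my-test-bucket"); // hypothetical bucket, set up by hand
        assertTrue(ds.exists("testdata.txt"));
    }
}
```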

[Image: the S3DataStore implementation]

The implementation is pretty simple. Current implementation is naive and will not fulfil the “requirements” if there are errors, but the interesting part is it’s trivial to add all the business logic and test it. If we need to check for malformed files, we can do that – and have the load(String path) method do sensible things. We can trivially create test cases for this to make sure the processor acts correctly.

At this point – the code is now “working” – and we can run our implementation. We would then choose the next part of the project to implement. If I was going to continue, I’d probably do the output notification – mostly because that requires the least amount of setup.

“Sense Amid Madness, Wit Amidst Folly” (Surface Detail)

After 6 years (and change) of working with REA Group I have decided to resign. It’s mixed sadness and joy, as the place I joined all those years ago, is not the place I’m leaving. That’s good, and that’s also one of the reasons.

I’m moving to a medical genetic interpretation service called myDNA (http://www.mydna.life).  In short, people don’t always respond in the same way to medicines, and some of this is because of our genes.  In some cases we can look at our genetic structure and give additional advice to doctors about prescribing certain medicine.

I think this is pretty cool, and I’m actually going to be working on things that will make life better for people every day.  I’ll also be spending heaps of time working with software development teams directly, pairing with developers and improving their skills. At this stage it looks like I’ll also get to learn C# – which I’m pretty excited about.

One of the things that was important to me was being able to have a balance, and I’ve negotiated that with my new employer.  I’ll be working 4 days a week, giving me a day to pursue my own objectives.

I will be:

  1. Focusing on providing mentoring/coaching for Women in Technology. From juniors who want to learn to code, to seniors who want assistance with CVs, with conference talk preparation or just general chatting about the state of the industry. I want to help. I want more women in the industry and I want it to thrive. I have 3 amazing women that I’m mentoring now who are happy to provide feedback should anybody be interested.
  2. Cycling.  Yeah.  A spare day a week to go cycling. Living the dream.  I suspect some of this will be consumed in housework.  I’ve made a deal with Mike Rowe that I’m going to ride the 3 peaks with him next year – so I’m going to need that extra time.
  3. Consulting.  If you’re a corporate, with a lot of money that wants somebody with 30 years of software development experience, team leading experience, AWS experience, privacy/security/cryptography and just general knowledge of the industry to come and give guidance, feel free to contact me and see if we can work out a deal.

From what I can tell, the office is small, is open plan (but team-sized, not cattle-yard), and I get my own desk – one that I can leave stuff on overnight, and have my monitors and keyboard and chair all set up how I like them.  I can have photos of Jo and George stuck there, and gaze at them when I’m thinking.

I’m unreasonably happy about these little things, but it shows how much those sorts of things count as part of being a human.

The journey starts again.


On the nature of solitude

I don’t like being by myself as much as I am these days.  It’s something I struggle with quite a bit. I’ve had quite a while to reflect on this topic, and there’s a couple of words that are often used to describe the situation, but they mean quite different things to me.

The first is “alone”. To me, being alone is to not have physical or mental proximity to other humans. This is relatively rare, and for me is a choice that I make if I decide to isolate myself. I like to be alone at times, and to ride, and to run. They are my favourite things to do alone.

The second is “lonely”. This is a feeling of a lack of connectedness with other humans, and in my case I feel this more strongly without a partner. I’m certainly least lonely at this stage when I have my close buddies over, chilling and talking shit. I feel less lonely when I have George with me.

Those who know me IRL would probably consider me fairly extroverted, and that’s true to some extent. I do enjoy being in groups of humans, and it’s something I’ve become comfortable with. It’s not natural for me by any means; I was taught this by my parents, and it’s something that I’ve worked on in my career. What many people might not understand is that I do like to be alone at times, and to contemplate the vastness during that time. I’ve never really been able to describe it well, but I like to ride (and now run) to the limits of my physical capabilities – and use that so my thoughts become focussed on “the now”.

I’ve done this for years, and really didn’t notice what I was doing until I had a conversation with a Twitter friend about what she gets out of yoga and the mindfulness aspects of it. I like this kind of alone; it’s active, voluntary alone-ness.

Then I read this:  http://www.brainpickings.org/2014/09/03/how-to-be-alone-school-of-life/

I was impressed by how accurately it seemed to describe my feelings on the matter. It laid bare the distinct difference between alone-ness and loneliness, which had been confusing to me (I like to be alone, I don’t like to be lonely), and how the world reacts to alone-ness. I must say I’ve not really felt any great societal pressure about my need to be alone. Probably because it’s hidden behind my physical activities, which are considered normal to perform solo.

Turning the gaze to loneliness, it’s a bit harder to reconcile my thoughts and feelings on the emotion. While I’d like to “not care”, I find it very hard – and it’s almost something that I find defining as a human. I’m unsure what other people in similar situations to myself do, or if they feel the same way. I suspect that at this point some form of substitution of life occurs, where distraction, numbing or soothing becomes commonplace.

Working long hours? Drinking? Drugs? Religion?

Who knows?

Excuse me while I go for a ride and think about it a bit more…

“Men will always be mad, and those who think they can cure them are the maddest of all.”

Is software an asset or an expense?

I had a brief conversation on Twitter with Camille Fournier about costing for projects and development, and this is something that I’d been thinking about (but not writing about) since I was at ANZ bank around 2007 or 2008. Most of it was trying to explain/understand expectations about software development and delivery and “how much effort to put into projects”.

Even though I actually have a commerce degree, I don’t intend this observation to be a strict treatise on the accounting terms “asset costing” and “expense costing” but more about the general expectations set by considering the constructed software as an asset, or an expense.

The basic premise is that in general teams of software developers, in the absence of specific direction or rules will assume that the software being delivered will have the properties of an asset. The software will last for an extended period of time, it will be modified and updated and will not generally have a short defined end of life date.

However, in many cases the teams of people asking for software to be built will not always have the same view, and they may well be asking for systems that are developed in shorter time frames, and have a short, defined end-of-life date. This is generally where the difference in expectation on project cost (and time) comes from. We see this a lot in start-ups, where iterating and responding to new ideas trumps all possible long-term benefits. With nothing right now, there is no future for the start-up.

Coupled with this particular problem, there is generally a mismatch between how much effort the demand team expects a solution will take and how much effort the supply team actually needs to construct it.

So, what should we do about it?

At REA, I’m starting to help our teams with this, and the first step is to get some greater definition about the expectation of not only what problem the system should help solve (“functional requirements”) but also the scope of what that system should participate in (“non-functional requirements”).

Generally these would be called something like SLAs (service level agreements). How many users will it support? What is the uptime? What are the response rates? What are the transaction rates? These are pretty standard non-functional requirements that you’d see in systems descriptions. However, one key part that I’m trying to encourage people to think about is “how long should this last?”.

I think the first part of the problem is trying to get an understanding from the customers about what their goals are. Do they want to create some “disposable software” to solve this problem? Do they think that the solution to this problem should last for a long time and be enhanced over that time? Do they even understand there is a difference to the engineering required to do this?

Now, if we add to this the general trend towards microservices (smaller units of functionality that can be more easily replaced) maybe we are looking at a general shift in the way we write code for systems and how we might wish to think about, and set expectations for system development. Can we really think of software components as “write them to throw away”?

Certainly if we’re looking at treating the components or code like an expense, I think there’s a better chance. I also think that using expense based thinking for “research and discovery” might lead to more opportunities for faster iterating through ideas, knowing the expected use and lifespan of the work performed.

I’d also like to see more input from people who have tried similar ideas. It’s a slice of the “software life-cycle management” topic that is more deserving of thought than most teams give it.

2014 – Random reflections

I get to the end of the year and wonder what happened and then I feel like I didn’t actually do anything much over the year. This time I thought I’d put some effort in to reflect on what I inflicted on the universe for the year.

Family

Starting with the most important part of my life. Life with George was again fantastic. He’s growing up into a wonderful human being. He’s so polite, so kind, so generous and just delightful to be with. I completely miss him when he’s not around. The best thing that happened in 2014 was a re-negotiation of our custody arrangements which gives me 6/14 (6 days out of 14) during the school year, and 7/14 during holidays. A great step forward, maybe 50/50 in the near future.

We have a good relationship. We’re both honest with each other about how we feel, and how we want to be treated. This leads to a few tough conversations at times, but they get easier, and pretty much all disagreements end very fast, and normally with lots of cuddles and “sorry for how I behaved” – on both sides of the fence. He’s not the only one that has bad days, and it’s important to me that he knows that’s just a part of life, and that dealing with it as a family is crucial.

We have a house that we both love, his school is nice and close and we spend many, many hours a week playing cricket when we’re together. If we’re not off finding some nets, we’re playing keeping in the back yard, or watching it on TV. George loves cricket, and that’s been reflected by his achievements in playing with his new club. He plays in the under-10s, manages an average of about 20, top score of 40 (off 4 overs) and best bowling of 2/1 (off 2 overs). He’s only been dismissed once, and that was an overenthusiastic pull shot that ended up destroying the stumps. Oops. Pretty handy in the field and likes to keep as well. It’s great to see him doing well at something that he enjoys.

He’s going great at school, he works hard and enjoys turning up and doing different things with his friends. He’s a good little man.

Health

Nothing really to report here. Both of us have avoided most of the really terrible coughs and colds, and despite both of our best efforts neither have managed to end up hospitalised for our recreational (mis)adventures. I’m fit and healthy, and after spending 5+ years being completely obsessed about riding bikes I’ve started to broaden my horizons to other activities.

I was finding it harder and harder to get consistently onto the bike while also looking after George. I’d need to spend a good 3-4 hours in the hills to get a solid workout, and that just wasn’t possible for much of the time. So after Amy’s Ride this year I decided to take a break from riding for a while (to and from work doesn’t count), and in late October/early November started to look at running. Now, I’d not done any serious running for about 20 years, back when I used to run in 10km fun runs. The good news is that my cardio fitness base is solid, the bad news is that I’m missing a lot of muscle development for running.

At this stage, I’m pleased with my progress, getting to 4:30 min pace for 5km and 5:30 pace for 10km. Only time will tell how the body will handle it, as I’m already noticing a few niggles. Hopefully just related to lack of muscle development in those areas.

From a mental health perspective, I’m probably in as good a shape as I’m likely to ever get. Most of the anxiety and the pretty severe dent my self-worth took during the later parts of the marriage have been repaired. I’m still pretty nervous about what relationships might mean in the future – but I’ll cross that bridge when it happens. Soon, I hope.

Friends

This was a really big year for friends. Some moved within Melbourne, some moved to another country (I miss you Rup!) and some had some bad news. The best thing for me was meeting up with friends I’ve known for close to 10 years. I was able to travel to Blizzcon in November and got to hang out for a week with the most awesome group of people from all walks of life. It would be pretty safe to say that I really didn’t want that week to end, with geeking out about computers, gaming and drinking far too many beers.

Work

There were 2 great things about work this year. The first was that REA started their graduate recruitment program, and I got to play a significant part in forming it, and getting the graduates on board and working with them. The second was that we finally managed to fill all the open roles in the Group Architecture team, and I can spend more time working with a team rather than trying to create it.

It’s fascinating working in the role that I have at REA, and it’s always challenging – most of these are people challenges, not technical ones – and I’m constantly left open mouthed at how some people react to change. I’ve blogged about my work a few times this year, I hope to do it a bit more next year.

Personal Achievements

It was a pretty good year on this front. I’ve been working on an open source project for a very long time, it’s almost part of the furniture in my life and I don’t give it much thought. It then pops up at unlikely times to make me re-evaluate what reach my software has had. The software in question is BouncyCastle. A Java cryptographic library.

  • It’s been shipped in over a billion devices as part of the Android operating system in 2014 alone (3bn total)
  • It’s being used by 12m people creating virtual environments in Minecraft
  • It seems that a large book selling and cloud computing company may also be using it for various things internally (unconfirmed)

So, at this stage there are few electronically connected people who haven’t been directly or indirectly using software that I’ve written. That’s kinda cool and makes me feel pretty good.

I also managed to get back and do some conference speaking. Something I enjoyed doing years ago (pre-George) and thanks to Evan, Beth and the YOW! crew it was a great experience to do it again.

So?

2014 was a good year. Probably one of the best I’ve had in recent memory. I’m feeling more balanced as a person and more comfortable in my role as a parent. I’d like to spend a bit more time on my personal projects as I feel my software skills are deteriorating below where I’d like.

Life is good. I’m very lucky.

You are not your ideas – a strategy to lessen the blow of rejection

Inspired by @dys_morphia on Twitter, I’ve decided to document my strategy for dealing with rejection of ideas. This particular approach came from a discussion with James Ross and Simon Harris many years ago, when we were working together on a consulting project.

James, Simon and I were discussing a bunch of ideas about design and implementation. We were thrashing through them thick and fast, and each of us was proposing particular solutions which would then be unceremoniously torn apart by the others. To people outside our little gathering it really looked like we were intent on destruction.  Nothing could be further from the truth; even though the other 2 are mostly wrong about everything and can’t see the genius of my ideas, the respect for our work and our worth is paramount in these discussions. Few ideas survived the withering attacks, yet none of us felt harm, hurt or a lack of respect from the participants.

After we’d been doing this for a while, we started to reflect on why this is such an “easy” task for the 3 of us to perform, yet it appears to be very stressful for others. We talked a lot about rejection and about how people feel very close affinity to their ideas and proposals, and that rejection (or criticism) of them is like a personal attack.

James made this very clear explanation about how he thinks about ideas, and why Simon and I probably feel the same way – yet others struggle.

He said(*), “Many people hold their ideas close to themselves, their ideas are hugged, like a teddy to the chest, so any attack on the idea is in very close proximity to themselves and attacks hit not only the idea, but the person behind the idea. The idea is precious, there’s not many of them, and each one is special and nurtured and getting new ideas is a hard thing to do”.

This was compared to what we do, “We feel our ideas are like balls. We generate them, we toss them into the ring for people to observe and comment on. They’re cheap and cheerful and colourful and we know there is a bucket of them we can just keep getting new ones from. Sure, some are special and different in their own way, but the ideas are tossed away from our selves, and criticism of the size and colour of the balls are clearly not directed at the person”

I don’t want people to think that James, Simon and I are reckless, or foolhardy, or don’t care about our ideas. There’s often very heated debate about our thoughts, our dreams, our visions (and our fears) when we engage in these conversations. It’s just that we realise that our ideas have a life of their own, and it’s our job to bring them to life – we’re the parent of those ideas. We’re not part of the ideas.

If you’re an aspiring artist, a software designer, a poet, an author – or even just somebody trying to work out where to go for lunch – then consider setting your ideas free: toss them away and give them a life of their own. You’ve already done the important work in the communication. You can’t be held responsible for how others react to your ideas, any more than you can be held responsible for other people liking your choice in bikes (even though there is a clear right answer here). More importantly, by giving life and freedom to the ideas, you’re making one very important fact clear: you are not your ideas.

(*) I can’t remember exactly what was said, so I’m going to make up the story to convey the intent.