Now, Turning to Reason, & Its Just Sweetness (the design)

This is the first part in a 3 part blog series on how to write code so you can build testable systems with external dependencies.  Link to part 2 is at the end of the post.

There has been some discussion over the last year or so about testing. Some of this discussion has been naive but well-meaning. The intent is good; we all want to build reliable and robust systems. But it’s been naive because the chosen path leads to the opposite.

The general thrust of this discussion is about “mocking out external interfaces”, which, on the surface, seems like a sensible thing to do. After all, we want to test our software and we don’t want these slow external interfaces (like a database, a message queue, a file system) impacting our development. So clearly, the approach is to mock/stub out the calls to AWS/MQ/database in our code and “pow” – tested, reliable software.

Well, no.

What we have now is coupled, fragile software, and those tests you spent all that time writing? They’re basically testing “does String.equals() work?”

It’s clear that there’s a gulf in both education and experience in this space, so, prompted by the recent discussions on Slack, I’ll present a narrative on the design and development of a multi-component software system.

First, we have some context so people can do some reading.

  1. The assertion that we need “AWS stubs” to test systems that are deployed to AWS
  2. A wonderful video by Gary Bernhardt about testing which covers a lot of what I’m going to go through Boundaries — Destroy All Software Talks
  3. A wonderful blog post by Ken Scambler about why mocks and stubs should be avoided. To Kill a Mockingtest | Tech Blog

Second, I’m going to talk a little bit about testing before launching into some solution design so that people can understand the why part.

Testing is hard. Much harder than people give credit for, and what’s worse, most people think that testing is easier than it is, leading to lots and lots of terrible tests. Mocks and stubs are an indication of a design smell. Ken’s comments in the Slack conversation and his post provide a more concrete description of why.

Along with testing being hard, there are different concerns with testing, and this is fundamentally where the big issue occurs. Not all testing is symmetrical, and not all techniques are sensible or desirable.

Within a software system we have 3 main categories of tests.

  1. Functionality tests – these are tests that ensure that the software WE are writing behaves as WE expect it to. The most notable type of test encountered in this space is the unit test.
  2. Contract verification – these are fixtures that VALIDATE that the components and interfaces relied on by the unit-tested software will continue to work. Think of these as a sort of pre-condition. They’re not so much tests as they are contract verification. It just so happens that a lot of the testing frameworks in the software ecosystem are very well suited to building contract verification suites.
  3. Smoke tests – these are fixtures that VALIDATE that the components deployed into an environment are correctly configured, that the interfaces are available as expected and that they all operate together correctly. This can be a single verification, it can be a sub-set of the contract verification tests, it can be a synthetic transaction end-to-end through the system. So many choices, so many options.

For the purposes of this narrative, I’m going to be interested in only the first 2, as they are the general considerations for component design and development. This doesn’t, and shouldn’t, mean that for an operational system the 3rd category isn’t equally, or even more, important – it’s just that I’m going to deliberately put them out of scope for now.

Ok, context done. Let’s do some design. Step one is to have a look at the problem we want to solve, and fortunately for us we have a spare one.

The system receives an SQS event which indicates which files in S3 to load and process. The files contain some numbers on which we “do math”, and the result is written out to S3.

Many people would at this point launch into TDD, and while that might seem sensible, I’d always advocate spending some time thinking about the problem, along with some analysis and preliminary high-level design.

30 minutes later, add a small amount of Ken Scambler for sanity, and we have the following initial thoughts about how the system design will proceed. Note, this isn’t set in stone, but when doing TDD, it’s not some random wandering in the dark about where your design will end up – you should be doing science here. Have a hypothesis, and let the code help you work that all out.


We can see the main components, and have identified the basic flows. Nothing too exciting, probably 30 minutes’ worth of chatting with Ken. For those interested, he did a “functional style” analysis, and with both of us working on the design we ended up with substantially the same system components and interaction design. Was a lot of fun. Recommended. Would pair with Ken again.

Now we want to think about one of the most interesting parts of the implementation: how will the use of the data store work with the event queue? Part of the requirements says that the events are only to be removed when the data store items have been successfully processed – so we need some form of signalling between the two. We could couple the two together with some horrible “if” code, and expose the innards of the event queue. Guaranteed this will be hard to test, so we’ll just dependency-inject a processor into the event queue – seems like the best approach. Writing code will test it out, but if you don’t know the direction you’re heading in – you’ll just wander all over the map.


(Note: You’ll see that I’ve put some form of “attach()” method in the interface/contract. This gives me some way of doing “authentication” / “connection” to the external systems. I’m probably not going to implement it in the initial phases, but it’s a reminder that it’s probably going to be important at some point.)

The important part of this is the process(Processor p):boolean method. This enables us to “tell, don’t ask” when processing things on the event queue. For now, we’re only going to get one type of thing on that queue, so this is probably the simplest implementation and all that is needed. If there were a bunch of different things on the event queue, I’d probably construct the event queue with some form of Factory that would allow each of the events to be processed, but there’s no need for that now.
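
As a sketch, the contracts might look something like the following. Only the process(Processor p):boolean signature comes from the design notes; the Event shape and the other details are my assumptions.

```java
// Sketch of the design contracts. Only process(Processor p):boolean is
// fixed by the design notes; Event and the rest are assumptions.
class Event {
    private final String key;

    Event(String key) { this.key = key; }

    String key() { return key; }
}

interface Processor {
    // Return true when the event has been successfully handled.
    boolean process(Event event);
}

interface EventQueue {
    // Placeholder for "authentication"/"connection" to the external system.
    void attach();

    // "Tell, don't ask": the queue drives processing via the supplied
    // Processor, and only removes events that were processed successfully.
    boolean process(Processor p);
}
```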

The last little bits are pretty similar, and don’t really require any major thought – just simple data sources and sinks.



As stated above, names-not-final. There’s nothing about what I’ve scrawled here that is “forcing” me to do it this way, and the code may well change my thoughts as I get into it. However, spending the (roughly) 60 minutes to draw these 4 pictures and talk with Ken gives me confidence that I have a robust solution that’s going to fulfil the solution requirements as well as have the right sorts of extension points. The discussion and some thought experiments mean that I’m pretty sure I can implement this solution using any underlying implementation technology. Files, databases, queues, sockets etc. This is the most important thing when designing something – it’s not about “can I build this using technology <X>”, it’s “can I build this in ANY technology”.

Finally, if we look at this slightly differently, we have the classic “boundaries” model, where our business logic (the calculation) is all in the “middle”, with our external interfaces providing access to the horrible messiness of the outside world. Functional core, imperative shell. This is another good indication that our design proposal has merit.


This also helps us understand where our testing styles should be going. We should have our unit/functional tests for the “functional core”, and contract verification tests for our “imperative shell”. Our code is the core – this is really, really important and is the key point that needs to be made from this entire narrative. Our job is not (NOT!) to test the AWS libraries, the DB connection libraries, the SNS/SQS libraries – these can be verified at run-time using smoke tests, or at various points in the development cycle using contract verification tests.

For people who worry about the protocols – that’s not a testing job, that’s a business logic task. If the payload in the event queue is “different”, then the system should just fail (gracefully or otherwise). The contract is broken; you no longer have to continue to behave rationally and can make sensible decisions about your own reactions. Under no circumstances should you attempt to “massage/hide” the broken contract. That leads to hard-to-detect errors and is a significant source of production failures. Just fail early – and in close proximity to the broken contract. This is fundamental to good software implementation.


Now, Turning to Reason, & Its Just Sweetness (the aftermath)

This is the last part in a 3 part blog series on how to write code so you can build testable systems with external dependencies. The first post can be found here.

Thanks to Ken Scambler, Milly Rowett and Alyssa Biasi who all made contributions to the posts.

Things that happened about the design.

The initial design is pretty compact, and the components are well factored in terms of their responsibility, but I noticed a bit of a smell in the implementation: some twisty-turny logic to deal with the receipt of the message on a queue, and then waiting around to see if the file existed.

The implementation problem the design creates (which I ended up ignoring for this example) is that if the file _never_ appears, you end up with all sorts of knots, and I don’t really want that sort of logic to be part of my “process the file” code.

Going through the design a second time with Alyssa Biasi, we came up with separating the logic for “got the message, wait for the file” from “process the file”. We can decouple these 2 problems, and the separated responsibility for each part makes it a lot cleaner. Then we can tune the “retry logic” in the “wait for the file” code, and the “process the file” logic never needs to know. A simple IPC mechanism (another queue message suffices) and responsibility separation seems a lot cleaner. The nice thing is that it’s just another form of QueueProcessor, so we can re-use most of the code framework, with just some changed wiring. Winning.
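
A rough sketch of that separation, with hypothetical names (the add() method on the queue, and the Event and DataStore shapes, are assumptions):

```java
// Hypothetical sketch of the split. The "wait for the file" side owns all
// of the retry logic; the "process the file" side never sees any of it.
class WaitForFileProcessor implements Processor {
    private final DataStore store;
    private final EventQueue fileReadyQueue; // simple IPC: another queue message

    WaitForFileProcessor(DataStore store, EventQueue fileReadyQueue) {
        this.store = store;
        this.fileReadyQueue = fileReadyQueue;
    }

    @Override
    public boolean process(Event event) {
        if (!store.exists(event.key())) {
            // Not there yet. Returning false leaves the event on the queue,
            // so all the retry/delay tuning lives on this side.
            return false;
        }
        // Signal the "process the file" queue and get out of the way.
        fileReadyQueue.add(new Event(event.key()));
        return true;
    }
}
```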

In the implementation of S3DataStore(), there’s the opportunity to decouple the authentication method from the implementation. For the purposes of the example I didn’t bother adding this complexity, but my original notes in the design highlighted how it might happen. The Java AWS libraries actually make the implementation of such a design very straightforward. (Currently the implementation assumes anonymous “world read” access.)
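
For illustration, the credentials strategy can be injected when the S3 client is built, making authentication a construction-time decision rather than a code change. A minimal sketch using the AWS SDK for Java v1 client builder (the constructor shape here is my assumption, not the actual code from the example):

```java
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.AnonymousAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

// Plays the DataStore role from the design.
public class S3DataStore {
    private final AmazonS3 s3;

    // The credentials strategy is injected, so swapping anonymous access
    // for real credentials never touches the data store logic.
    public S3DataStore(AWSCredentialsProvider credentials, String region) {
        this.s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(credentials)
                .withRegion(region)
                .build();
    }

    // ... exists()/load() as per the DataStore contract ...
}

// Anonymous "world read" access, as the current implementation assumes:
// new S3DataStore(
//         new AWSStaticCredentialsProvider(new AnonymousAWSCredentials()),
//         "ap-southeast-2");
```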

Things that happened about the code.

It was fun writing code again. There are parts of the code that I’d refactor to make neater – a protocol implementation decision about how to pass information in the request queue, and some of my test code that is kinda bleh – and I’d like to fix those in the future.

My choice of code structure (and avoidance of the new Java 1.8 features) is based on history and not having written much code lately. I didn’t particularly want to use this as a mechanism for learning new coding techniques, and to be frank, there’s nothing complicated enough in here that warrants it. There are definitely areas that could be cleaned up, generally where I create local variables for return values. Most of those can be inlined (the compiler is doing this anyway), but it’s handier for me to examine the variables in a debugger when I’m tracing things.

I was also working with Milly Rowett for a significant part of the development. When mentoring, I prefer to keep things obvious, even if it means a little more typing is involved. Milly might be able to provide feedback on how valuable it was – it certainly was easier for me to explain as I typed.

The code structure completely changed when I decided to use maven (to get the AWS dependencies managed), which was pretty painless, but annoying. Not sure I would have done it differently, because I didn’t need maven until half way through. The final structure is fine, but it created a change-set which was nothing but structural changes.

The code isn’t really meant to be a demonstration of “great art in writing code” or “faultless”. It was done as an implementation to show how to design code so you can test external dependencies (such as AWS) without relying on mocking libraries or having the code be solely dependent on that implementation.

If you want to download and play – and finish it –

Now, Turning to Reason, & Its Just Sweetness (the code)

This is the second part in a 3 part blog series on how to write code so you can build testable systems with external dependencies. Link to part 3 is at the end of the post. The first post can be found here.

Author’s comment: This post was written over a period of a couple of weeks, and developed alongside the code. There is inconsistency between what I say at the start and what ends up in the code. This is to be expected, and probably worth examining to see how things changed – as much to see the thought process, and that while the design tends to remain fairly static, the implementation changes as new information is gathered. The total time spent on the code is about 10 hours. Of that time, about 4-5 hours was code developed while mentoring a (very smart) graduate. Another consideration is my unfamiliarity these days with code authoring (sadly). I imagine if I was to start again, I’d probably be able to do it all in about 4 hours (if that). The code is pretty trivial (like most code).

---- Actual post follows ----


First, a simple structure, as would be expected from most projects. The “contracts” section contains the verification tests for our external interfaces. This is the sort of testing that is “pact-like”, in that if our code breaks for unexpected or unexplained reasons, we can run our contract tests and see if the remote systems that we have no control over have changed. These are our sanity check. They’re not there to “prove” anything – other than to give us confidence that, at the time the code was written, it worked against those contracts. It’s a good idea, you should think about it.



Now, our code layout looks like our design, even after working through it TDD-style. That’s pretty handy. There are 2 useful observations and questions at this point. The first is “how do we calculate the math part?”. The answer to that is “we implement a Processor for it”. The second is “where is the application?”. The answer to that is “there isn’t one yet”, but it’s basically a simple wrapper around the EventProcessor. We can see how it will look if we examine one of the EventProcessor tests.
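
The original test isn’t reproduced here, so the following is a sketch in its spirit. InMemoryDataStore, RecordingOutputDevice and the exact constructor/setter names are my assumptions; SuccessRemovalEventQueue and KeyExistsInStoreProcessor are discussed below.

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class EventProcessorTest {

    @Test
    public void processesEverythingOnTheQueueWithLocalImplementations() {
        // Alternative, in-memory implementations of the same contracts the
        // production code uses. No AWS anywhere in sight.
        List<Event> events = new ArrayList<Event>(
                Arrays.asList(new Event("file-1"), new Event("file-2")));
        DataStore store = new InMemoryDataStore("file-1", "file-2");
        SuccessRemovalEventQueue queue = new SuccessRemovalEventQueue(
                events, new KeyExistsInStoreProcessor(store));
        OutputDevice output = new RecordingOutputDevice();

        EventProcessor processor = new EventProcessor(queue, output);
        processor.setDelay(10); // the delay/retry wart mentioned below
        processor.setRetry(3);

        processor.run();

        // Successfully processed events are removed from the queue.
        assertEquals(0, events.size());
    }
}
```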


We can see our alternative implementations of the interfaces for DataStore, Processor, EventQueue and OutputDevice for our testing. I’m not completely happy with the EventProcessor and needing to set the delay and retry. It seems to be the right thing internally, it just looks ugly. Maybe something better will emerge further down the line and EventProcessor will become an interface, with some form of concrete implementation. For now it’s concrete, and it’s the only real co-ordination point within the code.


The interesting things in the code occur in the EventQueue. The queue has the defined responsibility for processing the events. This is by design, and we can see that the SuccessRemovalEventQueue in the example above has injected into it a List of Events (the queue) and the Processor, which in this case is the KeyExistsInStoreProcessor. These particular implementations were chosen because I want to start modelling and investigating parts of the real solution that’s going to be needed. The concrete implementations here for EventQueue and OutputDevice are used as a “dipstick” (Testing with mock objects – JUnit in Action, page 106).
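
As a sketch, that responsibility might look like this (constructor injection as described above; the loop itself is my assumption):

```java
import java.util.Iterator;
import java.util.List;

// Sketch: the queue owns the "only remove on success" rule. The backing
// List of Events and the Processor are both injected.
class SuccessRemovalEventQueue {
    private final List<Event> events;  // the queue
    private final Processor processor; // e.g. KeyExistsInStoreProcessor

    SuccessRemovalEventQueue(List<Event> events, Processor processor) {
        this.events = events;
        this.processor = processor;
    }

    // Tell, don't ask: drive each event through the Processor, and only
    // remove the ones it reports as successfully processed.
    boolean process() {
        Iterator<Event> it = events.iterator();
        while (it.hasNext()) {
            if (processor.process(it.next())) {
                it.remove();
            }
        }
        return events.isEmpty();
    }
}
```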

This form of development makes it trivial to then compose up the application. Turns out it’s just setup, then construction of the dependencies. I wanted to print out when things were processed in the first version (let’s call it the MVP), so to do that we just implemented ConsoleOutput as the output device. Total time to create the application – 2 minutes.
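
A sketch of what that composition might look like (ConsoleOutput is from the narrative; the rest of the wiring mirrors the test sketch above):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the MVP application: nothing but setup and construction of
// the dependencies, then go.
public class Application {

    public static void main(String[] args) {
        List<Event> events = new ArrayList<Event>(
                Arrays.asList(new Event("file-1")));
        DataStore store = new InMemoryDataStore("file-1");
        SuccessRemovalEventQueue queue = new SuccessRemovalEventQueue(
                events, new KeyExistsInStoreProcessor(store));
        OutputDevice output = new ConsoleOutput(); // print as things process

        new EventProcessor(queue, output).run();
    }
}
```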


You’ll notice that it’s one of the test cases that’s been modified. That’s ok – it’s a great way to start – the actual application doesn’t need to know, and we can focus on implementing each of the features one at a time. We’ll start with the DataStore, because that’s probably the easiest part to implement first.

In the blink of an eye, the BatchFileProcessor has changed to the following. And we can have great confidence that, as the injected strategies have the same contracts as the tested ones, our imperative shell works as advertised.


Now, build out DoSomeMathProcessor() with TDD.





Essentially we test this by having a known set of values passed into our data store. This is made easier by having an “object mother” (ObjectMother) for the creation of the test data. Notice how we’re not trying to delve into the class to see if “math works” – we’re testing that we get the right results.
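
A sketch of that style of test, with a hypothetical ObjectMother method and recording output device (both names are illustrative):

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class DoSomeMathProcessorTest {

    @Test
    public void sumsTheValuesSuppliedByTheDataStore() {
        // The object mother hands us a DataStore pre-loaded with known values.
        DataStore store = ObjectMother.dataStoreWith("testdata.txt", 1, 2, 3, 4);
        RecordingOutputDevice output = new RecordingOutputDevice();

        Processor processor = new DoSomeMathProcessor(store, output);

        // Test the results, not the internals: we don't care how the math
        // is done, only that the right answer comes out.
        assertTrue(processor.process(new Event("testdata.txt")));
        assertEquals("10", output.lastWritten());
    }
}
```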

We’re at the point of our code where we now have great confidence if we have a DataStore that returns us the valid data – we can add it up correctly. So, do that next.

And now the contract test for S3DataStore(). Note here that I’m not attempting to mock, or stub, or do anything to test the implementation of S3DataStore() with the BatchFileProcessor. This is creating the contract for the use of the S3DataStore(). If I can use it against S3, then it works. That’s it!
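
A sketch of what such a contract test might look like (the bucket name, the simple anonymous-access constructor and the load() return type are illustrative assumptions; the manual setup is described below):

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

// Contract test: no mocks, no stubs. If S3DataStore works against the
// real S3, it fulfils the DataStore contract. That's it.
public class S3DataStoreContractTest {

    private static final String BUCKET = "some-test-bucket"; // created by hand, Sydney region
    private static final String KEY = "testdata.txt";        // copied in by hand

    @Test
    public void existsFindsTheManuallyUploadedFile() {
        // The first test written: it also proves the authentication path.
        S3DataStore ds = new S3DataStore(BUCKET);
        assertTrue(ds.exists(KEY));
    }

    @Test
    public void loadReturnsTheFileContents() {
        S3DataStore ds = new S3DataStore(BUCKET);
        assertFalse(ds.load(KEY).isEmpty());
    }
}
```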


There’s a little bit of context worth discussing here. For this particular bit of code to work, the S3DataStore() class needs to be implemented and working. This was done in a few stages (over about 20-40 minutes as I looked up various bits of the S3 API). I started with the ds.exists() test, because that also allowed me to see if the authentication parts were going to work.
For this test to ever work, we need to set up the world. That’s ok – we know that; we’re not trying to fake out the world here, this is a contract test to verify that our code works against the real world. This could also form part of a smoke test. I manually set up an S3 bucket in the Sydney region, and manually copied the “testdata.txt” file into it. I could use a shell script to do this, or I could use a bit of Java code to do this and clean up after. That’s all completely valid, but doesn’t really help in understanding “how to test AWS code” (really, it’s the implementations of the imperative shell we’re testing; AWS is just a particular implementation).

The implementation is pretty simple. The current implementation is naive and will not fulfil the “requirements” if there are errors, but the interesting part is that it’s trivial to add all the business logic and test it. If we need to check for malformed files, we can do that – and have the load(String path) method do sensible things. We can trivially create test cases for this to make sure the processor acts correctly.
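
A naive load() in that spirit might look like the following sketch (the s3 and bucket fields and the List<Long> return type are assumptions). The only “error handling” is failing fast, close to the broken contract, as argued in the first post:

```java
// Inside S3DataStore; assumes fields: private final AmazonS3 s3; and
// private final String bucket;. Needs java.io.*, java.util.* and
// com.amazonaws.services.s3.model.S3Object.
public List<Long> load(String path) {
    List<Long> values = new ArrayList<Long>();
    try (S3Object object = s3.getObject(bucket, path);
         BufferedReader reader = new BufferedReader(
                 new InputStreamReader(object.getObjectContent()))) {
        String line;
        while ((line = reader.readLine()) != null) {
            // A malformed line fails fast with a NumberFormatException,
            // rather than being massaged into something "valid".
            values.add(Long.parseLong(line.trim()));
        }
    } catch (IOException e) {
        throw new RuntimeException("Unable to load " + path, e);
    }
    return values;
}
```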

At this point – the code is now “working” – and we can run our implementation. We would then choose the next part of the project to implement. If I was going to continue, I’d probably do the output notification – mostly because that requires the least amount of setup.

“Sense Amid Madness, Wit Amidst Folly” (Surface Detail)

After 6 years (and change) of working with REA Group I have decided to resign. It’s mixed sadness and joy, as the place I joined all those years ago is not the place I’m leaving. That’s good, and that’s also one of the reasons.

I’m moving to a medical genetic interpretation service called myDNA. In short, people don’t always respond in the same way to medicines, and some of this is because of our genes. In some cases we can look at our genetic structure and give additional advice to doctors about prescribing certain medicines.

I think this is pretty cool, and I’m actually going to be working on things that will make life better for people every day.  I’ll also be spending heaps of time working with software development teams directly, pairing with developers and improving their skills. At this stage it looks like I’ll also get to learn C# – which I’m pretty excited about.

One of the things that was important to me was being able to have a balance, and I’ve negotiated that with my new employer.  I’ll be working 4 days a week, giving me a day to pursue my own objectives.

I will be:

  1. Focusing on providing mentoring/coaching for Women in Technology. From juniors who want to learn to code, to seniors who want assistance with CVs, with conference talk preparation, or just general chatting about the state of the industry. I want to help. I want more women in the industry and I want it to thrive. I have 3 amazing women that I’m mentoring now who are happy to provide feedback should anybody be interested.
  2. Cycling. Yeah. A spare day a week to go cycling. Living the dream. I suspect some of this will be consumed by housework. I’ve made a deal with Mike Rowe that I’m going to ride the 3 Peaks with him next year – so I’m going to need that extra time.
  3. Consulting. If you’re a corporate with a lot of money that wants somebody with 30 years of software development experience, team-leading experience, AWS experience, privacy/security/cryptography and just general knowledge of the industry to come and give guidance, feel free to contact me and see if we can work out a deal.

From what I can tell, the office is small and open plan (but team-sized, not cattle-yard), and I get my own desk – one I can leave stuff on overnight, with my monitors and keyboard and chair all set up how I like them. I can have photos of Jo and George stuck there, and gaze at them when I’m thinking.

I’m unreasonably happy about these little things, but it shows how much those sorts of things count as part of being a human.

The journey starts again.




2014 – Random reflections

I get to the end of the year and wonder what happened and then I feel like I didn’t actually do anything much over the year. This time I thought I’d put some effort in to reflect on what I inflicted on the universe for the year.

George
Starting with the most important part of my life. Life with George was again fantastic. He’s growing up into a wonderful human being. He’s so polite, so kind, so generous and just delightful to be with. I completely miss him when he’s not around. The best thing that happened in 2014 was a re-negotiation of our custody arrangements which gives me 6/14 (6 days out of 14) during the school year, and 7/14 during holidays. A great step forward, maybe 50/50 in the near future.

We have a good relationship. We’re both honest with each other about how we feel, and how we want to be treated. This leads to a few tough conversations at times, but they get easier, and pretty much all disagreements end very fast, and normally with lots of cuddles and “sorry how I behaved” – on both sides of the fence. He’s not the only one that has bad days, and it’s important to me that he knows that just a part of life and dealing with it as a family is crucial.

We have a house that we both love, his school is nice and close, and we spend many, many hours a week playing cricket when we’re together. If we’re not off finding some nets, we’re playing keeping in the back yard, or watching it on TV. George loves cricket, and that’s been reflected by his achievements playing with his new club. He plays in the under-10s, manages an average of about 20, a top score of 40 (off 4 overs) and best bowling of 2/1 (off 2 overs). He’s only been dismissed once, and that was an overenthusiastic pull shot that ended up destroying the stumps. Oops. He’s pretty handy in the field and likes to keep as well. It’s great to see him doing well at something that he enjoys.

He’s going great at school, he works hard and enjoys turning up and doing different things with his friends. He’s a good little man.

Health
Nothing really to report here. Both of us have avoided most of the really terrible coughs and colds, and despite our best efforts neither of us has managed to end up hospitalised from our recreational (mis)adventures. I’m fit and healthy, and after spending 5+ years being completely obsessed with riding bikes I’ve started to broaden my horizons to other activities.

I was finding it harder and harder to get consistently onto the bike while also looking after George. I’d need to spend a good 3-4 hours in the hills to get a solid workout, and that just wasn’t possible much of the time. So after Amy’s Ride this year I decided to take a break from riding for a while (to and from work doesn’t count), and in late October/early November I started to look at running. Now, I’d not done any serious running for about 20 years, back when I used to run in 10km fun runs. The good news is that my cardio fitness base is solid; the bad news is that I’m missing a lot of muscle development for running.

At this stage, I’m pleased with my progress, getting to 4:30 min/km pace for 5km and 5:30 min/km pace for 10km. Only time will tell how the body will handle it, as I’m already noticing a few niggles. Hopefully they’re just related to lack of muscle development in those areas.

From a mental health perspective, I’m probably in as good a shape as I’m likely to ever get. Most of the anxiety and the pretty severe dent my self-worth took during the later parts of the marriage have been repaired. I’m still pretty nervous about what relationships might mean in the future – but I’ll cross that bridge when it happens. Soon, I hope.

Friends
This was a really big year for friends. Some moved within Melbourne, some moved to another country (I miss you Rup!) and some had some bad news. The best thing for me was meeting up with friends I’ve known for close to 10 years. I was able to travel to Blizzcon in November and got to hang out for a week with the most awesome group of people from all walks of life. It would be pretty safe to say that I really didn’t want that week to end, with geeking out about computers, gaming and drinking far too many beers.

Work
There were 2 great things about work this year. The first was that REA started their graduate recruitment program, and I got to play a significant part in forming it, and getting the graduates on board and working with them. The second was that we finally managed to fill all the open roles in the Group Architecture team, and I can spend more time working with a team rather than trying to create it.

It’s fascinating working in the role that I have at REA, and it’s always challenging – most of these are people challenges, not technical ones – and I’m constantly left open mouthed at how some people react to change. I’ve blogged about my work a few times this year, I hope to do it a bit more next year.

Personal Achievements

It was a pretty good year on this front. I’ve been working on an open source project for a very long time; it’s almost part of the furniture in my life and I don’t give it much thought. It then pops up at unlikely times to make me re-evaluate what reach my software has had. The software in question is BouncyCastle, a Java cryptographic library.

  • It’s been shipped in over a billion devices as part of the Android operating system in 2014 alone (3bn total)
  • It’s being used by 12m people creating virtual environments in Minecraft
  • It seems that a large book selling and cloud computing company may also be using it for various things internally (unconfirmed)

So, at this stage there are few electronically connected people that haven’t been directly or indirectly using software that I’ve written. That’s kinda cool and makes me feel pretty good.

I also managed to get back and do some conference speaking. Something I enjoyed doing years ago (pre-George) and thanks to Evan, Beth and the YOW! crew it was a great experience to do it again.


2014 was a good year. Probably one of the best I’ve had in recent memory. I’m feeling more balanced as a person and more comfortable in my role as a parent. I’d like to spend a bit more time on my personal projects as I feel my software skills are deteriorating below where I’d like.

Life is good. I’m very lucky.

You are not your ideas – a strategy to lessen the blow of rejection

Inspired by @dys_morphia on Twitter, I’ve decided to document my strategy for dealing with rejection of ideas. This particular approach came from a discussion with James Ross and Simon Harris many years ago while working together on a consulting project.

James, Simon and I were discussing a bunch of ideas about design and implementation. We were thrashing through them thick and fast, and each of us was proposing particular solutions which would then be unceremoniously torn apart by the others. To people outside our little gathering it really looked like we were intent on destruction. Nothing could be further from the truth; even though the other 2 are mostly wrong about everything and can’t see the genius of my ideas, the respect for our work and our worth is paramount in these discussions. Few ideas survived the withering attacks, yet none of us felt harmed, hurt or disrespected by the participants.

After we’d been doing this for a while, we started to reflect on why this is such an “easy” task for the 3 of us to perform, yet it appears to be very stressful for others. We talked a lot about rejection and about how people feel very close affinity to their ideas and proposals, and that rejection (or criticism) of them is like a personal attack.

James made this very clear explanation about how he thinks about ideas, and why Simon and I probably feel the same way – yet others struggle.

He said(*): “Many people hold their ideas close to themselves; their ideas are hugged, like a teddy to the chest, so any attack on the idea is in very close proximity to the person, and hits not only the idea, but the person behind the idea. The idea is precious, there aren’t many of them, and each one is special and nurtured, and getting new ideas is a hard thing to do.”

This was compared to what we do: “We feel our ideas are like balls. We generate them, we toss them into the ring for people to observe and comment on. They’re cheap and cheerful and colourful, and we know there is a bucket of them we can just keep getting new ones from. Sure, some are special and different in their own way, but the ideas are tossed away from our selves, and criticism of the size and colour of the balls is clearly not directed at the person.”

I don’t want people to think that James, Simon and I are reckless, or foolhardy, or don’t care about our ideas. There’s often very heated debate about our thoughts, our dreams, our visions (and our fears) when we engage in these conversations. It’s just that we realise that our ideas have a life of their own, and it’s our job to bring them to life – we’re the parents of those ideas. We’re not part of the ideas.

If you’re an aspiring artist, a software designer, a poet, an author – or even just somebody trying to work out where to go for lunch – then consider setting your ideas free. Toss them away and give them a life of their own. You’ve already done the important work in the communication. You can’t be held responsible for how others react to your ideas, any more than you can be held responsible for other people liking your choice in bikes (even though there is a clear right answer here). More importantly, by giving life and freedom to the ideas, you’re making one very important fact clear: you are not your ideas.

(*) I can’t remember exactly what was said, so I’m going to make up the story to convey the intent.

Micro services, what even are they?

This blog post was inspired by an exchange with Jonathan Ferguson (@jonoabroad on Twitter).

All the Twitters

@jonoabroad “Does anyone have an agreed term of what micro services is?”
@joneaves “Does it need one?”
@jonoabroad “yes. How is it any different to SOA?”

At this point, 140 characters was just going to make things harder so I suggested I’d respond with a blog post. So here it is.

Firstly, I’m going to start by saying I’ve probably got no right to be leading the charge for a definition of micro services, but I do have a lot of skin in this game, as it’s the direction that I’ve been pushing REA development for the past 2-3 years. Much of this is my personal perspective, but I do think it’s broadly applicable and provides what I consider an alternate viewpoint on the existing visions for micro services.

To answer Jonathan’s second question, “How is it any different to SOA?”, my immediate response is “the intent is different”. With SOA, the intent is a layered architecture of co-operating services, with the focus on describing the organisation and co-ordination of those services. With micro services, the intent is to describe the nature of the services themselves, and not so much their organisation and co-ordination.

While SOA is used as a comparison, SOA itself has no “one true definition”, but is merely a collection of patterns/principles and attributes regarding the organisation and co-ordination between services. I should point out that I see micro services and SOA working well together, with micro services describing attributes of the services themselves and SOA providing useful guidance on how to arrange them.

So, why do I think this way?

I’m a software developer, designer and architect. I like to think a lot about the human factors of software development and how I can put systems in place to encourage development teams to “do the right thing” when building software. There’s far too much shit software out there, and I’d like to have teams not contribute to that. With that in mind, why did I think micro services were a “good approach”? My definition is meant to be used to guide _development_. The benefits that we get operationally are wonderful – but that’s not the primary reason. It’s to get developers to stop building Borgified software with unclear responsibilities and brittle coupling.

First it’s probably worth providing my definition of what a micro service is, so that there’s at least some context around the discussions that may, or may not, ensue. After defining the attributes, I’ll expand on why I consider them important.

The desirable attributes of a micro service are:

  1. The responsibility is narrow. The service does one thing, and one thing well.
  2. The code base is small. The service can be rewritten and redeployed in 2 weeks.
  3. There is no 3.

I tried to think of more, but most of them were derived from these. A valuable attribute is the ease of upgrade and redeployment, which is directly related to #1. Another valuable attribute is the ease of change; both #1 and #2 provide support here. There is also the ability for services to be re-used effectively, which is related to #1. A person much smarter than I am once said, “The unit of reuse is the unit of release”.

There’s possibly some rambly hipster crap about “REST services” and “HATEOAS”, but really, that’s such flavour of the month and not really something that I think is that important. Certainly no more interesting than JSON vs XML vs ASN.1. All of these things can be done well, or badly – but they don’t provide a defining point on whether an implementation has desirable attributes.

The responsibility is narrow

This key point relates to design and the fundamental architectural principles. If the responsibility is narrow, then hopefully it follows that the codebase will be small. If the responsibility is narrow, then the understanding of where to make changes is clearer and design intent can be carried forward. If the responsibility is narrow, then understanding how the service fits in the broader network of services, or how the service can be reused is much clearer.

The second important part here is the ability to release the services often, cheaply and without needing a deep graph of dependencies. Having a narrow responsibility means that any systems that want to use the service are only coupled to that service for that responsibility. There’s no undesirable coupling.

Like object oriented software, services are best with high cohesion and low coupling. Creating services as micro services helps in this regard.

The code base is small

When I first started proposing micro services I wanted to appeal to the developers, so I said that services could be written in any language they chose. The only caveats were that the component had to conform to our monitoring and logging interfaces (to aid with deployment and operations) and that it could be re-written in 2 weeks.

This created significant consternation, not among developers, but among management. They were concerned about the “explosion of software that nobody could understand”. I did laugh while explaining my reasoning. I laughed mostly because the basis of their concern was that “it would take too long”. Sadly this shows the lack of understanding about software that pervades our industry.

Most developers are perfectly capable of picking up new syntax, and generally do so in a relatively short period of time. What takes much, much longer is understanding twisted and tortured domain logic, scattered across 6 packages and libraries all bundled together in one monolithic application.

My rationale is that if software is written according to the simple rules (narrow responsibility and small codebase) then the actual language choice is, for the most part, irrelevant in terms of defect fixing and extension. Sadly, I don’t have a lot of data points in this regard, as developers seem to want to choose the path of least resistance (which is normally to keep writing the same old shit in the same way), but I do have a great example written by my team.

We had the need to write a service, and one of the team wrote it in Go. It was working well and performed as expected, but when it came to adding some additional monitoring we hit a snag, because the Go runtime wasn’t supported by NewRelic. The developer who wrote it had sadly departed the team (I still miss you Eric!), so another team member re-wrote the service and had it redeployed in 2 weeks, written in Java, using Dropwizard. It was a perfect example of exactly what I was proposing.

There are some really useful patterns that we developed while creating the service, not really suitable for addition here, but if there is enough interest I can expand on them in another post. However, the way we thought about building the initial service, and more importantly the automated testing around that service, made re-development trivial and incredibly safe.