Cultural Fit Doesn’t Exist … hiring for long-term success over ‘fit’

Hiring is a critical part of a leader’s role, and a significant part of recruiting the right person for any role is working out whether the candidate is a “Cultural Fit”. In my experience that is an impossible question to answer. Having interviewed hundreds of people across many disciplines, I have come to recognise that there is no such thing as an individual fitting the culture. Either an individual will add value to the organisation and its teams, and both will therefore thrive and be successful, or they will have a negative impact. There exists no neutral state of “cultural fit”.

This recognition leads to a much more valuable and measurable hiring bar:

Will this candidate be successful in our environment?

When coupled with its “competence in discipline” partner question, we have a strong hiring bar by which to make a rational and objective decision:

  1. Is this candidate technically competent to do the role we would ask them to do?
  2. Will this candidate be successful in our environment?

For any Hire decision, there should be objective evidence that amounts to a YES to both of these questions.

What is Culture and why do we evaluate candidates for ‘fit’?

There are many definitions of “Company Culture” but my preferred definition is:

Culture is the set of values and behaviours that anchor all interactions within the team or organisation. They guide the way the team interacts with each other and the way the team conducts itself in the wider marketplace, and they indirectly influence the value the team delivers to its users or customers.

A company’s culture is generally agreed to be born through its early stages – beginning with the team of co-founders and their first employees. It is almost certainly defined within the first 25–50 hires. It may never be written down accurately or completely, but it exists in every interaction of the team – in which behaviours are accepted and which receive an antibody response and are rejected. It is both fragile and anti-fragile.

If you add a single person into a team of 4, the dynamics of the team will change. At the very least the communication effort increases: the number of communication paths between team members grows from 6 to 10. If that person has a positive impact – embracing the team’s values and behaviours while proactively challenging the group to be better – then it most likely produces a successful outcome over time. However, if they have a negative impact, the team’s productivity and the impact they have will deteriorate almost immediately, and likely indefinitely. While the strongest of teams can absorb this for a longer period of time, it will eventually cause irreparable damage. This will then require costly intervention.

As we focus more on smaller empowered teams this effect is amplified, and the impact of a single bad hire is more noticeable. The 800th hire requires the same investment and consideration as the 8th.

It is also important to avoid the trap of considering success for one team rather than the larger group or whole organisation. There will always be cross-team collaboration, but it is also likely that people will move around, particularly as the organisation grows and projects change. Therefore we are not looking at just the immediate team, but at whether they will be successful over the long term and across multiple teams.

If we let this standard slip, the ripple effect of a bad hire is significant enough to have an impact across the organisation. This is what we are trying to address in assessing cultural fit.

Setting up for success

The challenge we face as hiring managers and interviewers is determining what impact the candidate will have on both the immediate team and the wider communities within the organisation. This impact will determine the, hopefully increased, success of the team and, just as importantly, the success of the individual within the organisation. When interviewing for “Cultural fit”, we are looking for the signals that tell us it is likely the candidate will have a positive impact on the teams:

  • Will this candidate add value to our teams and increase its level of success, now and in the long term?
  • Will this candidate find value from working within our team and be themselves engaged and successful?

This is summarised into the hiring bar question:

Will this candidate be successful in our environment?

It is therefore essential that we identify and document the core set of behaviours – both accurate and desired – that people need to be successful in our environment. These may be aspirational, but should be clearly evident in how the team interacts to deliver value.

Assessing the likelihood of future success through past performance

The likelihood of a candidate’s success in our environment is most accurately predicted by identifying their past performance in situations that are commonplace in our teams. Therefore targeted behavioural and situational interviewing is the single most effective technique for assessing this. By talking about the candidate’s experiences, performance and results in very specific situations, we can tease out their competencies, strengths and weaknesses in an objective manner.

This behavioural interviewing is one of the most valuable tools in our armoury and yet is often overlooked for more superficial (“Describe your current role?”) or seemingly cool (e.g. brainteasers such as “How many people are currently in the air onboard a plane, worldwide?”) interview techniques. Behavioural interviews allow us to evaluate a candidate against the pre-determined set of behaviours or competencies by focusing on explicit evidence and examples. At the same time, they also make the candidate feel at ease and in control, as they are talking about their own concrete experiences. They do not specifically address details you read in the candidate’s CV, but get to the layers of detail below that, where reality exists. In fact we do not need to read a candidate’s CV at all to interview using this technique.

There are several mechanisms we can use, but ultimately I favour an approach based upon the concepts identified in the STAR technique. By leveraging this technique as a base we can cover one or multiple sets of the behaviours and performance (with results) that we expect from a member of the team in a single starter question in the form:

“Tell me about a time when you…?” or “Give me an example of a situation where you…?”

The STAR technique leads the candidate into describing their experiences in a structured way:

  • Situation – the context of the experience.
  • Task – the tasks that were needed.
  • Action – the actions the individual performed.
  • Results – the desired outcomes and those achieved.

Using this approach, a single question can allow us, as interviewers, to get an insight into the concrete values, behaviours and actions the candidate has demonstrated, and map them against the pre-determined behaviours for success in our environment. I consider this more like an inter-related tree than a linear script – one situation leads to multiple tasks, which lead to multiple actions. The conversation allows me to traverse the specific branches that cover the behaviours I am looking for evidence of. A good interview is one where I have likely asked no more than one or two of these starter questions but managed, through conversation, to gather multiple examples. Over time and much practice this has become second nature, but I actively encourage those just starting out in assessing cultural fit to use this technique as a guide and plan for these drill-down questions in advance.
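As a purely illustrative sketch – the prompts, structure and behaviour labels below are invented for the example and are not part of the STAR technique itself – such a pre-planned question tree might look like this:

```javascript
// Hypothetical example: a starter question with drill-down branches,
// each tagged with the behaviour it is meant to gather evidence for.
const starter = {
  prompt: "Tell me about a time when you disagreed with your team's approach.",
  followUps: [
    {
      prompt: "What exactly did you do next?", // Action
      evidenceFor: "constructive challenge",
      followUps: [
        {
          prompt: "What was the outcome, and how did you measure it?", // Result
          evidenceFor: "results focus",
        },
      ],
    },
    {
      prompt: "How did the rest of the team respond?", // Situation / Task context
      evidenceFor: "collaboration",
    },
  ],
};

// Walk only the branches covering behaviours we still need evidence for.
function planQuestions(question, needed, depth = 0) {
  console.log("  ".repeat(depth) + question.prompt);
  for (const next of question.followUps ?? []) {
    if (needed.has(next.evidenceFor)) planQuestions(next, needed, depth + 1);
  }
}

planQuestions(starter, new Set(["constructive challenge", "results focus"]));
```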

The outcome of the behavioural interview should be clear evidence of how the candidate has demonstrated, or not, the behaviours we are assessing for. This can form the basis of a prediction of the impact they will have in our environment.

Keep it Objective

The most critical factor in this approach, and any interview, is objectivity. How many of us have seen this type of feedback on a candidate?

I like this candidate, we had a great conversation and got on really well. We should hire them.

That type of statement does not make clear any evidence that supports such a decision. It is critical that we don’t slip into a state of focussing on our subjective assessments of the candidate. We must actively focus on leveraging the candidate’s experiences to gather the evidence that forms an effective and objective summary of the interview.

Deep-Rooted Beliefs vs Situation-Driven Behaviours

It is important that we get below the surface of the candidate’s experiences. One of the nuances of behavioural interviewing is determining which behaviours are ingrained and belief-led, and which are caused by the environments the candidate has experienced to date. For example, if an environment is driven by command-and-control type behaviours, a candidate will likely have been forced to behave with similar characteristics to be successful there. Those same behaviours may be red flags for our environment. Getting to the deep root of why someone behaved in such a way is an important part of discovering this.

It is likely that specific situation-led behaviours can be coached out, and the candidate can then be considered likely to be successful in your environment. Candidates may demonstrate this through talking about their motivations for a change, or through the behaviours they dislike seeing in given situations.

Deep-rooted beliefs that don’t align with the specific values and behaviours that anchor your team are unlikely to be coachable. This is likely a candidate who will not be successful in your environment, as the drag on the organisation will be greater than the reward.

To get to this level of detail, we should look for multiple data points to determine whether there are environmental factors that distort the evidence as presented. Focussing on the candidate’s actual actions and approaches in the face of the environment can also give a real insight into this.

Beware the Bias

A critical part of objectivity in the interview process as a whole is recognising the set of biases and pressures that exist throughout our hiring decisions, and which we have to be conscious of:

  • Hiring Manager Bias – There are significant pressures on the hiring manager to fill the role. Will they take someone who can do the job but will be less than successful in the long term?
  • HiPPO [aka Seniority] Bias – The most senior person’s recommendation becomes the result for the candidate, whether it is subjective or not. Particularly where there is a mixed group of interviewers, this is one where objectivity and evidence should change the conversation.
  • CV Bias – Given the candidate’s current title, their extensive history – we begin to make assumptions and set expectations for the candidate. This can be dangerous as it puts a different set of expectations on different candidates. These assumptions lead to subjective decisions.
  • Unconscious Bias – Is this person wearing red or blue? Did they shake your hand with overconfidence? How does this factor into your hiring decision? Unconscious bias is a strong factor, but hiring decisions should be made rationally. By focusing on the evidence we can manage our biases and ensure we remain objective.[1]
  • Decision / Recommendation Bias – This manifests itself in the presentation of feedback following the interview. If I have subjectively decided we want to hire this person, I will unconsciously favour the evidence that supports that over the other evidence available. It’s important to document your evidence and then make a decision.

This is particularly applicable when it comes to assessing the likelihood of a candidate being successful in our environment.

Every team should crave diversity. Different experiences and different expertise can create dynamic and effective teams – teams that support and challenge each other and become greater than the sum of their parts. However, “cultural fit” can also be interpreted as “same as us”. If this happens, you create the conditions to reduce diversity, not increase it. In identifying the common values and behaviours that make people successful, we are identifying exactly what will allow our diverse workforce to gel and function well as a team. Our role as interviewers is to focus on all of these, not on superficial similarity.

“Will this candidate be successful in our environment?”

As a leader in a large Product Engineering team, I continue to invest more than a day a week in ensuring that we recruit the best possible people to work with. I do at least 4 interviews (and subsequent decision-review ’wash-ups’) almost every working week across all functions and disciplines. Much of that time is spent on addressing the question of whether candidates will be successful in our environment – particularly if I’m interviewing cross-discipline, e.g. for Finance, Talent or Legal teams, where I am not qualified to assess the candidate’s competence. I have also defined the role of the Bar Raiser programme at Skyscanner – the leading interviewers that facilitate an objective Hiring Decision for the candidate and organisation.

Through this experience I have determined that Cultural Fit does not exist in a way that we can objectively evaluate, and that there is no such thing as a culturally neutral hire. The aim of our assessment process is to predict, as accurately as possible, whether a candidate will be successful in our environment. An individual will either add value to the organisation or have a negative impact. They will be successful or they will not.

Therefore we have refined our hiring bar to be explicitly focussed on this and its discipline-competence partner:

  1. Is this candidate technically competent to do the role we would ask them to do?
  2. Will this candidate be successful in our environment?

The best way to determine this is through the candidate’s explicit experience, performance and results in dealing with the types of situations that our teams face on a very regular basis, and to determine whether they match the expectations of our team in the same scenarios.

Ultimately a hire/no-hire decision is a question of risk and risk management, weighing up the evidence presented. Making an objective, evidence-led hiring decision is the most important part of the overall interview process. It should never be neglected. The impact of a false positive (a bad hire) is a significant cost and drag on an organisation, while the impact of a false negative (a rejected candidate who goes on to greater things) is only ever a disappointment. Whatever the outcome, failing to address the question of the impact the candidate will have on the team is a significant gap in the process.


  1. Unconscious Bias is an important topic in its own right – here are some highly valuable materials from Google 1; 2 and Facebook I’d recommend to start exploring the topic. ↩︎

What Am I?

A publication of a recent internal communication. A light-hearted, Friday afternoon view on a modern website’s complexities.

What am I?

I am accessed through an app on your computer – the one in your pocket and the one on your desk. I am available on almost anything with an internet connection. It’ll be your phone, your phablet, your tablet but that’s not all – it’ll still be the work laptop with its 2 huge screens, the PC at home or the TV and fridge. I am Mobile First because that is what you need me to be. More and more you use me across all of the devices you own and, on a frequent basis, are starting a task on one and completing it on another.

You might think I should be broken up and split along an artificial classification of device. This has been done many times over and caused more confusion for you than not. The line is so variable – what would you do? Have me for the tablet and create a brother for mobile? What about your smart watch, phablet, TV, fridge et al.? Where would you stop? With the right structure I can do it all.

I work across 4 browsers and 2 Operating Systems. If only it were 2005 and that was all I had to care about! Those were the easy years and that was hard enough – thanks mainly to IE6. Things are evolving faster than ever. There are thousands of devices, hundreds of OSes, tens of browsers and plenty of non-browser-based windows. A Facebook App’s webview, anyone? At a minimum, I am usable on as many combinations as you tell me to be and need to be awesome on most. It’s certainly more than those simple 8 – I’m lucky if it’s <800.

I used to be just 960px wide and then I got wider. I still can be but I have more fun than that. I have to be usable on so many screen sizes – I am responsive to both orientation and size. Sometimes there’s just so much really, really important stuff that folks want me to show that I simply run out of space. Sometimes I hide it and hope you won’t notice. My bad. That screen real estate problem is a big one for sure but there are many other factors and capabilities that matter. I must adapt to many but definitely not all.

At my core I am HTML. I am a Document with a Model of page Objects to give meaning and structure. But I am so much more than that. HTML alone offers such limited control. You tell me HTML5 is the answer – if only that were true. I need scripts and shims to control implementations that vary amid light-speed evolution. I do enough to keep up but it’s no magic bullet.
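To make that concrete in code – a minimal, purely illustrative sketch of the “scripts and shims” idea; the shim path is hypothetical and the checks are examples, not a description of any particular site:

```javascript
// Illustrative feature detection: check for a capability before relying on it,
// and load a shim only in the browsers that lack it.
if (!('classList' in document.documentElement)) {
  var shim = document.createElement('script');
  shim.src = '/shims/classlist-shim.js'; // hypothetical path, for illustration only
  document.head.appendChild(shim);
}

// Prefer native APIs when present; degrade gracefully when they misbehave.
var canStore = (function () {
  try {
    localStorage.setItem('__probe__', '1');
    localStorage.removeItem('__probe__');
    return true;
  } catch (e) {
    return false; // e.g. private browsing modes or very old browsers
  }
})();

if (!canStore) {
  // Fall back to an in-memory store rather than assuming localStorage works.
}
```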

Most of what you see is prettified with CSS – a language for style that changes as quickly as HTML5. So much evolves with little or mixed support. WebKit, oh WebKit – a rendering engine among many. It may be the best but the others matter too. I do what I can to keep it right, but demands for pixel perfection are not the least bit helpful.

A lot of our interaction is managed by JavaScript. That’s my code executing right there in your window. But is it consistent in every damned window? Sometimes I run into so many errors, I just want to give up and pack it in.

My power is controlled by the strength of your connection. I break the rules to be as quick as I can, but your faked 3G connection is slower than dial-up. That is my problem, something I can’t just ignore. I am slow, wearisome and need to do more.

I have both human and technical masters and I must serve both. You find me in Google. It’s the way to get found. Well, but that’s just for you – others go to Yandex, Baidu, Naver and more. My public face needs to support those automated bots, including the impact of Google’s Knowledge-based search. Just as I got to grips with that, Facebook evolved Social search and I am already moving on to that next challenge. But you, the human, need to be first master, not last. The rest simply follows as you “like” me.

If only life was simple and I could be one-size-fits-all. Everyone uses me to varying degrees but there are worlds that need me to conform to some unique perceptions. You like me clean and simple with lots of white space. Your friend likes me to be compact and busy with a clutter of stuff. I must adapt to fit those mental models and cultural nuances. I need to be one-size-fits-all while concurrently adapting to your personal context.

“You are obsolete” some of you say – ready for retirement. Yes, I now have a family – a close product family – where I support my sisters, the native Apps. In turn, they support me back. We are different but equal, like 2 sides of an equation. Obsolete you say? Maybe only to a blinkered minority.

With all of these challenges why not hang up my boots and call it a day? That time may come but today, I am more active than ever!

What am I?

I am a website in ~~2012~~ 2014 and I am proud to be adaptive and highly complex!

Those who design and build me are the best of the best!

Defining Done in a DevOps World

InfoQ recently published an article on QA in Scrum. It included a really simple definition-of-done list.

In a counterpoint post, Matt Davey added Acceptance Testing to the list, bringing Acceptance Tests in as part of the definition.

That’s great but I feel they have both missed a critical point.

Your feature is worth nothing if it has not been given over to users or if it is of such low quality that those who are using it stop.

As software delivery specialists (Developers, QAs, Project Managers, Product Managers, SysAdmins etc.) we strive to make useful software products, and they are of little value to us or the business if no one is using them. We use “Done” to define when a unit of work (Feature, Use Case, Story, Task etc.) is complete. Rarely does that definition take into account the true realisation of the value locked in it.

Done
“Done” is probably one of the most variable terms in Software Engineering methodologies. To be fair – it’s succinct and to the point and really should mean what it says. However, it is just far too ambiguous.

We have a myriad of different definitions of what it means to be “Done”. All development practices focus on what those are, and Scrum teams have their “Definition of Done” checklists to tell them when a unit of work is complete. Every team I have worked with has had widely different definitions. To get around this we have all heard awful phrases such as “Done, Done” or “Ready, Ready” when we mean more than just completing part of that unit of work. I am as guilty as anyone of using these to try and cut through that ambiguity – which in turn leads to greater ambiguity.

Almost none of the definitions I have seen across various teams completely match the DevOps culture we should be trying to instil. The idea that the software has to be used for it to have value is not included. The delivery team focuses on build but not operations. In a world of DevOps and Continuous Delivery, the lack of the last mile in such a key delivery metric has become a stumbling block. When velocity is measured against a definition of Done that allows a huge batch to build up towards release, we are generally asking for failure. With that failure comes the invalidation of the declaration of success that the project tracking has given the team.

A DevOps definition of “Done”

“Released with a high enough level of confidence in its quality.”

This, for my team, embodies the necessary premises of DevOps – collaborating to fully deliver the product to our end users, from code to production. There is still a checklist of what that means, but now, when we say Done, the team knows how far we expect them to have taken it. They understand that it isn’t enough to have finished the development tasks plus the relevant QA: it has to be released, and they understand that they need to prepare for this early. Thinking about it up front at the start of the project means that the first thing a team does is ensure that there is a solid and repeatable pipeline to production. That generally means development collaborating with our operational counterparts right from the start. Doing it early in the project gives us the opportunity to leverage that specialism from the start, and it immediately starts to break down the synthetic silos between development and operations.
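As an illustration only – the checklist items below are invented for the example, not my team’s actual list – such a DevOps-flavoured Definition of Done can even be written down as a simple executable checklist:

```javascript
// Hypothetical example: "Done" means released with confidence,
// not merely development-complete.
const definitionOfDone = [
  { item: 'Development tasks complete', met: true },
  { item: 'Acceptance tests written and passing', met: true },
  { item: 'Released to production via the repeatable pipeline', met: true },
  { item: 'Monitoring and alerting confirmed healthy', met: false },
];

const unmet = definitionOfDone.filter(check => !check.met);
console.log(
  unmet.length === 0
    ? 'Done: released with confidence in its quality.'
    : 'Not Done yet: ' + unmet.map(check => check.item).join('; ')
);
```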

There’s also little excuse for anyone to deliver without confidence in its quality. Through a shared responsibility we should understand that there is enough test coverage, including those acceptance tests – enough to know it won’t break under the majority of user circumstances, though perhaps without full test coverage. You can never truly have 100% confidence – bugs escaping is a fact of life for software engineers. But we can minimise the impact of that through the right amount of testing and assurance as the path proceeds to production.

The definition also does not shy away from the fact that a unit of work cannot count towards our velocity until it has been released. Project progress is measured on whether it is delivering actual value to the user and the business. This is a bit of a leap of faith, but think about it: is it right to report success without running that last mile?

Nothing New
Nothing I am saying here is new or ground-breaking. It seems common sense to me, but DevOps is a cultural shift for many engineers and sysadmins. Defining “Done” to be something that ensures the start of that collaboration is one way of starting to instil the culture and values. Seeing the value in the engagement early leads to more of a willingness to collaborate further. I’ve had some success with it in my own teams. While it takes time to fully embed the practice, if your team is willing then this is definitely worth a go.

Burn Down Charts Suck

Burn Down Charts suck. For me, anyway. The frustration is that they abstract away the real picture of progress into a single line.

A Burn Down Chart is “a graphical representation of work left to do versus time”*. Let’s take a look at one.

[Figure: Burn Down chart]

What is happening here? It looks right: the trend is downward, which is good, and progress seems to have been good over the last few days. On Day 1 there are 100 units of work to do and by Day 14 we are down to 58.

However, what happened on Day 3? Why did it go up when it’s a Burn Down chart? Well, that’s obvious: the scope increased for some reason. OK, so the scope increased to 112 units of work. Well, actually it didn’t. The Burn Down chart is hiding the fact that there was a scope increase of 20%, to 120, and that 8 units of work were completed. At a glance you cannot see that detail, your team cannot see it and your stakeholders cannot see it.

Day 7 is another good example. That’s a great day: 15 units of work completed, right? Well, actually, no. The Burn Down chart again fails to highlight that there was a fluctuation in the scope – this time a reduction from the removal of 10 unnecessary units of work. Add that to the 5 units of work completed and it’s a pretty big drop.

My preference is for the Burn Up chart. This is the equivalent of the Burn Down above. The scope is red and the cumulative work completed is purple.

[Figure: Burn Up chart]
It still performs the same task – providing a graphical representation of work left against time. Rather than chasing 0 you are ticking off the work done and rising to meet the target.

It also provides a lot more detail at a glance. The fluctuations in scope are by far and away the most obvious win here. It is clear to see that the target has moved and it becomes an easy talking point – particularly on Day 7, which could easily be missed in the Burn Down chart. It also provides a graphical representation of the actual momentum/velocity of the team. In this example it is very steady progress: the team hit their stride and were ticking off units of work at a reasonable velocity. Scope changes could easily mask this in the Burn Down chart, so it is much clearer here.

There is an argument that scope should never fluctuate and that everything should be so well defined that we know where we will be. That might be the case in a lab or training environment, but in reality scope changes, particularly in an environment that is fast paced and changing. And we should embrace that change.

Realistically though, both charts have their place. And producing the two from the same data is really easy (see the sketch after the data below). However, if you have space for one chart at your stand-up or in your stakeholder sprint review slide deck, make sure it’s the one that gives the full picture, not part of the story – the Burn Up chart.

Chart Data:

Day               1    2    3    4    5    6    7    8    9   10   11   12   13   14
Total Scope     100  100  120  120  120  120  110  110  110  110  110  110  110  110
Done              0    0    8    5    2    2    5    8    5    2    2    5    5    3
Cumulative Done   0    0    8   13   15   17   22   30   35   37   39   44   49   52
Work Left       100  100  112  107  105  103   88   80   75   73   71   66   61   58
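As a minimal sketch of just how mechanical this is – plain JavaScript, with the actual chart rendering omitted – both series fall straight out of the scope and per-day done figures above:

```javascript
// Derive both chart series from the same raw data.
// totalScope[i] is the total scope on day i+1; done[i] is the work completed that day.
const totalScope = [100, 100, 120, 120, 120, 120, 110, 110, 110, 110, 110, 110, 110, 110];
const done       = [  0,   0,   8,   5,   2,   2,   5,   8,   5,   2,   2,   5,   5,   3];

let cumulative = 0;
const burnUp = [];   // cumulative done, plotted rising towards the scope line
const burnDown = []; // work left = scope minus cumulative done

for (let i = 0; i < totalScope.length; i++) {
  cumulative += done[i];
  burnUp.push(cumulative);
  burnDown.push(totalScope[i] - cumulative);
}

console.log(burnUp);   // [0, 0, 8, 13, 15, 17, 22, 30, 35, 37, 39, 44, 49, 52]
console.log(burnDown); // [100, 100, 112, 107, 105, 103, 88, 80, 75, 73, 71, 66, 61, 58]
```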