How We Hire Tech Folks

Ann Lewis
Apr 19, 2019 · 12 min read
We’re hiring! Work from wherever you want. Use your tech skills to make the world a better place.

Hello world, I’m the CTO of MoveOn, and I’m hiring a senior software engineer. This article is about MoveOn’s hiring process for this role. To candidates reading this: my goal is to make our process as clear as possible. To hiring managers reading this: my goal is to open source our hiring processes. We want to make hiring as equitable, inclusive, competitive, and successful as possible. Feel free to reuse anything here that is useful to you.

I’ve interviewed ~500 candidates and reviewed ~5000 applications over the last 15 years at a variety of companies, including startups, Amazon.com, and Rosetta Stone. As CTO of MoveOn, I hired a new tech team in 2015 and have run 8 hiring processes since then. Despite this tech hiring experience (or perhaps because of it), as a fallible human, I consider myself vulnerable to the cognitive biases that impact every hiring process. Working with Emeliem Ogbolu, MoveOn’s Talent Acquisition and OrgDev Manager, we have developed strategies for testing our theories about hiring, making the process more fair, and creating real accountability across a hiring team. Putting ideas about hiring and fairness into practice requires careful independent oversight.

Here’s a summary of what we’ve found works and doesn’t work, and how we try to be more rigorous and fair when hiring. We’ll cover some key technical interviewing problems that make the process harder than it should be, and our strategies for solving these problems, specifically: writing a good job description, advertising the job properly, running a fair interview process, having clear goals for the hiring process, and holding ourselves accountable.

Technical Interviewing Problems

Over the last 15 years, I’ve noticed several widespread problems with technical interviewing. These problems make hiring harder, create implicit bias, yield less competitive hiring pools, and inadvertently discourage top candidates from applying.

  • Many hiring managers assume that the default applicant pool for their job is perfectly representative of the potentially accessible pool of applicants. This is usually wrong. Getting a representative pool takes work, including careful job description design and broad advertisement of the job posting. If you don’t do this work, the default pool will be strongly skewed.
  • High-ego candidates disproportionately mass-apply to jobs. Low-ego candidates apply selectively, based on where the job has been advertised and the wording of the job description. Almost all hiring managers want to hire low-ego problem solvers, but spend lots of time filtering out high-ego and less qualified candidates.
  • Most technical interviews contain too many gatekeeping questions. Gatekeeping questions test whether the candidate has had a specific prior cultural experience, instead of measuring technical problem solving or likelihood of success in the job. Gatekeeping questions filter the applicant pool to match the interviewer’s personal experience, and are not useful predictors of long-term success in a particular software engineering job. An example: “When can a social network be considered a planar graph?” As a computer science major and math minor at Carnegie Mellon University, I took (and loved) several graph theory classes, and would be able to answer this question. But the vast majority of candidates would not, including computer science majors at universities with more applied programs, bootcamp grads, and the majority of seasoned professional software engineers who haven’t taken a math class in over a decade. This is a gatekeeping question because it tests whether the candidate has a similar background to me, not whether the candidate would be able to succeed at this job.
  • Gatekeeping behavior of any kind disproportionately filters out people with identities who don’t fit the current media-sponsored definition of what a “tech person” should look like. Any kind of identity-based filtering makes the candidate pool significantly less competitive.

Note to hiring managers: in this article I talk about “representativeness” of a hiring pool instead of using “diversity” as a blanket term. Many technical hiring managers pay lip service to diversity without analyzing the underlying structural problems of bias in their hiring process, company, and industry. “Trying to get some diversity” without putting work into it is patronizing, and signals, especially to top candidates with identities different from the dominant culture of tech, that “diversity” means doing the poor unfortunate people who don’t look like you a favor. If you’re in an industry that serves a particular group of people, you should hire staff who are representative of that group. Lack of diversity in your hiring process is a strong indicator of lack of competitiveness, which means less opportunity for qualified candidates and a lower likelihood that your organization will find the best candidate for the job. Acknowledging problems and fixing your hiring process is advantageous for everyone.

Solving These Problems

Here are our strategies for making our hiring process more competitive and more fair, while also making the best use of our limited time and financial resources.

  1. We carefully wordsmith our job description, and test it for “red flag” phrases before publishing.
  2. We broadly and deeply advertise the job, periodically audit the candidate pool, and only proceed to the first interview stage when we believe the pool is representative.
  3. We created a technical interview process using questions that come up every day for the tech team, and we avoid using gatekeeping questions.
  4. We hold ourselves accountable to our stated standards of fairness.

The Job Description

Words matter. Many job descriptions contain “red flag” or “trigger” words that discourage a significant percentage of interested candidates from applying. According to Project Include, “including ‘salary negotiable’ in a job description reduces the gender wage gap by 45 percent.” According to Glassdoor, male-oriented tech job titles like “superhero” and “ninja” drive away female-identifying applicants, but using neutral, descriptive titles like “engineer,” “project manager,” or “developer” removes this selection bias. In addition to generalized bias patterns, everyone individually has unconscious bias based on personality and experience, and we tend to prefer candidates who remind us of ourselves. For all these reasons, it’s necessary to test a job description for bias before publishing.

We write an initial draft of the job description, and the hiring team collaboratively edits it. We then run it through an automated bias tester like Textio or Gender Decoder. Finally, we reach out to talented tech folks broadly representative of the demographic pool we’re interested in attracting (people we would want to hire but who aren’t actively looking) and ask them for quick gut-level feedback: would they apply to this job, and if not, why? We then remove all the flags this test group identifies.
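To make the automated-check step concrete, here’s a minimal sketch of the kind of wording scan an automated pass performs. The phrase list below is a small hypothetical sample I picked for illustration, not Textio’s or Gender Decoder’s actual word lists, which are far larger and research-backed.

```javascript
// Minimal sketch of an automated "red flag" wording check.
// The flaggedPhrases list is a hypothetical sample for illustration;
// tools like Textio and Gender Decoder use much larger, research-backed lists.
const flaggedPhrases = [
  "ninja",
  "rockstar",
  "superhero",
  "high-energy environment",
  "work hard, play hard",
];

function findRedFlags(jobDescription) {
  const text = jobDescription.toLowerCase();
  // Return every flagged phrase that appears anywhere in the draft.
  return flaggedPhrases.filter((phrase) => text.includes(phrase));
}

// Example: check a draft before publishing.
const draft = "Seeking a coding ninja to thrive in our high-energy environment.";
console.log(findRedFlags(draft)); // [ 'ninja', 'high-energy environment' ]
```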

This process has helped us remove a variety of trigger words and phrases that would have discouraged top candidates from applying. Some examples:

  • “High-energy environment” can signal “young folks only” to more experienced engineers
  • Laundry lists of required years of experience in particular languages and frameworks discourage low-ego high achievers from applying if they don’t exactly match the list
  • Defining titles like “senior” in terms of total years of experience (as opposed to types of experiences) discourages high-potential, low-ego younger candidates from applying

This process has led to the wording in these two software engineer job descriptions:

The result of testing our job description this way is that many more candidates apply to our job listings (on average, we get 100–250 applications per hiring process), our job description is more representative of the actual job, and the process is more in line with our values.

Advertising The Job

Hiring managers: if you take one thing away from this article, I hope it will be to always publicly advertise your jobs. Seriously, folks. So many smaller organizations skip this step and just hire friends of friends, to the detriment of both their organization and the candidate pool. Hiring friends of friends who “seem technical” skews the candidate pool more white, more male, and more junior. Qualified candidates really want to work with your organization, but how will they find you if you don’t post your job publicly? How will you find the best person for your job if you don’t tell the world you’re looking, and allow some sort of competition in your process?

Advertise both broadly and deeply.

Everyone at MoveOn works remotely, so first we advertise jobs broadly through remote job boards like We Work Remotely, Stack Overflow, LinkedIn, and Glassdoor. Then we advertise broadly through values-aligned job boards like Idealist.

Then we advertise deeply through identity-based tech networks, like People Of Color In Tech, Lesbians Who Tech, Techqueria, and the Black Beltway Listserv. And we advertise deeply through values-aligned job boards, like the Progressive Coders Network, the “Engineering Progress” Facebook group, and the NOI / Wellstone Listserv.

After the job posting has been open for a few weeks, we audit the candidate pool and determine whether it’s representative of the progressive tech world. If we detect underrepresentation, we do more work on advertising through underrepresented networks. Many hiring managers look at a candidate pool with few women or few people of color and conclude that perhaps there just aren’t any qualified candidates of the underrepresented demographic. This is a logical fallacy, and means that not only are you a bad scientist for thinking this, but also that you need to work harder on your hiring process. It takes work to attract top candidates from underrepresented demographics, but it’s work you should do if you want to hire the best people for the job.
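To make this audit step concrete, here’s a minimal sketch of the kind of comparison involved. The demographic fields, baseline shares, and numbers below are hypothetical placeholders for illustration, not our actual benchmarks, categories, or data.

```javascript
// Minimal sketch of a candidate-pool audit. The baseline shares and the
// self-reported demographic fields are hypothetical placeholders, not
// MoveOn's actual benchmarks or data.
const baselineShares = {
  women: 0.4,
  peopleOfColor: 0.35,
};

function underrepresentedGroups(applicants) {
  const total = applicants.length;
  if (total === 0) return Object.keys(baselineShares);

  const poolShares = {
    women: applicants.filter((a) => a.gender === "woman").length / total,
    peopleOfColor: applicants.filter((a) => a.personOfColor).length / total,
  };

  // Any group falling below its baseline share is a signal to do more
  // advertising through identity-based networks before interviewing.
  return Object.keys(baselineShares).filter(
    (group) => poolShares[group] < baselineShares[group]
  );
}

// Example usage with a tiny made-up pool:
console.log(
  underrepresentedGroups([
    { gender: "woman", personOfColor: false },
    { gender: "man", personOfColor: true },
    { gender: "man", personOfColor: false },
  ])
); // [ 'women', 'peopleOfColor' ]
```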

The Interview Process

Our interview process has the following stages:

  • Application review: we consider resume, cover letter, and other application materials
  • Phone screen: we ask about candidate experience and technical problem solving
  • Automated technical skills test: using the automated test framework TestDome, we ask a handful of questions about web development and web architecture, and then a few questions that require writing 10–20 lines of JavaScript
  • Writing Assignment: we ask the candidate to write about technical decision making
  • Pair programming interview: with several members of the tech team, collaboratively solve a programming challenge over video
  • Team growth interview: meet tech-adjacent members of the MoveOn team, discuss recent technical collaborations

The automated technical skills test

This test exists because we get several hundred applications for every open position and have a small team that can’t interview everyone, so we use the test to fairly assess baseline skills at a very high level. The questions are on topics that come up at least a few times a week for the MoveOn tech team. The current test contains a series of multiple-choice questions about web development and web architecture, and a few questions that require writing 10–20 lines of JavaScript. While prior experience in JavaScript is not required for this position, we believe most engineers working on web frameworks of any kind will have encountered JavaScript, and that scrappy engineers from any background should be able to puzzle together solutions by carefully reading the problem statements and carefully testing answers. It is both OK and encouraged to use web search to help you answer these questions. We’re constantly learning and acquiring new expertise as we work, and believe new team members should too.
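For a sense of scale, here’s a made-up exercise in the spirit of those questions. It is not an actual question from our test, just an illustration of the size and flavor of problem we mean, solvable in roughly 10–20 lines of JavaScript with the help of web search.

```javascript
// Hypothetical example, roughly the flavor and size of the test's coding
// questions: parse a URL's query string into an object of key/value pairs.
function parseQueryString(url) {
  const queryStart = url.indexOf("?");
  if (queryStart === -1) return {};

  const params = {};
  for (const pair of url.slice(queryStart + 1).split("&")) {
    if (!pair) continue;
    const [key, value = ""] = pair.split("=");
    params[decodeURIComponent(key)] = decodeURIComponent(value);
  }
  return params;
}

console.log(parseQueryString("https://example.org/sign?tag=climate&page=2"));
// { tag: 'climate', page: '2' }
```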

The writing assignment

We all work remotely, and to make our team processes clear and effective, we rely heavily on written artifacts of communication: team conversation happens mostly in Slack, all meetings have notes so we can record decisions and share them transparently with other teams, and detailed, clear code review / pull request feedback helps everyone on the team grow their technical skills. To be effective in our particular remote environment, candidates need to be strong written communicators. We evaluate the writing assignment anonymously.

The pair programming interview

Our remote tech environment is very collaborative, and we spend much of the week in video conferences, screensharing with campaigners and pair programming with each other. The rapid-response nature of our work requires constant collaboration and communication with nontechnical stakeholders, and as a team we prefer to stay connected to each other and spend significant time problem solving together. In the pair programming interview, we’ll present the candidate with a coding problem drawn from the context of our work, challenging enough to take close to an hour to complete. The candidate will solve this coding challenge in the language and framework of their choice while screensharing with the interviewer. This test is meant to be a more in-depth technical assessment than the automated coding test. The interviewer will assess problem-solving aptitude, collaboration style, and ability to get work done.

Why the team growth interview?

We used to call this the “team fit” interview, but realized that we’re not looking for a candidate who exactly matches our current team, but rather a person who helps our team grow to be a better team, and brings skills and experiences not already represented on our team. The team growth interview will focus on cross team collaboration, and understanding the impact of technical problem solving on the rest of our organization.

Goals of the Hiring Process

We’re looking for software engineers who are hands on, willing to dive into any system and language and problem. We’re a small team who work on tight timelines, and everyone’s work is critical — we don’t have room for some people to be peripheral. We all pitch in when there is urgent, impactful work to do.

We’re looking for people who are flexible and pragmatic builders, who believe in finding the simplest solution that will get the job done, and who look for the right tool for the particular task rather than staying in their comfort zone. We’re a scrappy nonprofit, and don’t have room for language zealots or architecture astronauts. Our work is bound to tight political timelines: we can’t move the date of an election, so we sometimes need to work under pressure against nonnegotiable deadlines. Sometimes we have time to implement the best possible solution, and sometimes we have to do what it takes to ship code as quickly as possible.

We’re looking for people who are passionate advocates of other engineers — people who not only get the job done, but build up and support others along the way. We’re looking for people who invest in mentoring, and who care about not just making the world a better place with tech, but building a better tech workplace environment and tech industry.

Accountability

We create accountability in our process in several ways: we test everything we can meaningfully test, we audit the candidate pool for representativeness, and we track and assess the effectiveness of interview questions against the success of the new team member.

We test the job description wording and continuously evolve it, we take all our own interview tests, and additionally test new technical interview questions on peer engineers we respect before rolling these questions out to the candidate pool.

After the job posting has been open for a few weeks, we audit the candidate pool and determine whether it’s representative of the progressive tech world. We do not proceed to the interview stage until we approve the representativeness of the pool.

HR tracks staff tenure, and before new hiring processes, reviews whether existing interview questions and application evaluation materials were predictors of past candidates’ success in the role. We’ve learned surprising things from this audit step. For example: adding additional questions to the application to encourage candidates to “prove” how interested they were skews a candidate pool more female-identifying, but also disproportionately white, and does not predict the long-term success of a candidate in a new staff role. We consider interview questions and application evaluation steps that don’t predict long-term staff success to be cultural gatekeeping, and remove these steps from the process.

Summary

We should all strive to make our technical hiring processes more competitive and more fair. Doing so involves acknowledging existing problems with technical interviewing processes, acknowledging the bias we all have as individual humans and systems of humans, and doing work to account for this bias and mitigate these problems. The payoff for doing this work is huge: top candidates are better matched to jobs, and hiring processes become both more inclusive and competitive. The key to making all this work is accountability.
