Hiring Is Broken — And I Have the Screening Data to Prove It
When I joined Jadwaa to build the tech team from scratch, I expected the hiring process to take a few weeks. Instead, I screened more than 300 candidates across four months to hire 5 senior engineers. The data from that process changed how I think about technical hiring forever.
The Funnel (And Where It Leaks)
Here's what our funnel looked like:
- 300 applications received
- 180 filtered out by CV/resume review (60%)
- 120 sent a take-home assessment
- 45 completed it (37.5% completion rate)
- 20 invited to technical interview
- 8 passed to final round
- 5 received offers
- 5 accepted
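To see where the leaks are, it helps to look at stage-to-stage conversion rather than raw counts. Here's a quick sketch in Python that recomputes the conversion rates from the numbers above (the stage labels are mine, shortened for printing):

```python
# Funnel counts from the post; each stage paired with its label.
funnel = [
    ("applications", 300),
    ("passed CV review", 120),
    ("completed take-home", 45),
    ("technical interview", 20),
    ("final round", 8),
    ("offers", 5),
    ("accepted", 5),
]

# Print the conversion rate between each consecutive pair of stages.
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    rate = n / prev_n * 100
    print(f"{prev_name} -> {name}: {n}/{prev_n} ({rate:.1f}%)")
```

Running this makes the worst stage obvious: the take-home completion step has the lowest conversion rate of any stage in the funnel.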
The biggest leak? Not the technical interview. It was the take-home assessment completion rate: 62.5% of candidates who received the assessment (75 of 120) simply never submitted it.
What We Learned from the Take-Home Data
We designed a practical assessment: build a simple REST API with authentication, CRUD operations, and error handling. We gave candidates 48 hours.
The submissions fell into clear tiers:
Tier 1 (top 10%): Clean architecture, proper error handling, comprehensive tests, clear README. These candidates went on to pass every subsequent round.
Tier 2 (middle 30%): Functional but inconsistent — maybe missing tests on error paths, or using a global error handler that swallows important errors. Most became solid contributors after onboarding.
Tier 3 (bottom 60%): Copy-pasted from tutorials, no error handling, hardcoded credentials, or didn't run at all.
The uncomfortable finding: CV prestige and Tier 1 performance had almost zero correlation. Candidates from top-10 universities were just as likely to submit Tier 3 work as self-taught developers were to submit Tier 1 work.
The Interview Questions That Actually Predict Performance
After analyzing which of our 5 hires performed best over their first 6 months, the strongest predictors were:
- "Tell me about a production incident you caused" — Candidates who openly discussed their mistakes and what they learned outperformed those who only shared success stories.
- "How would you handle this impossible deadline?" — We didn't look for "I'll work harder." We looked for candidates who would negotiate scope, communicate risks early, and prioritize ruthlessly.
- System design with a twist — We asked candidates to design a system, then changed the requirements mid-design. The best performers adapted in real-time instead of starting over or getting frustrated.
What I'd Change Next Time
- Skip the CV review entirely — Use a short (2-hour max) automated challenge as the first filter. Anyone who can write working code gets to talk to a human.
- Pair programming over whiteboard coding — Watching someone think out loud while coding tells you more than any take-home.
- Reference checks before final round, not after — We wasted time on candidates whose references revealed fundamental issues.
- Pay candidates for their time — If you're asking for 10 hours of work, pay for 10 hours. It changes who applies and who completes the assessment.
The Diversity Data I Wasn't Expecting
Our most diverse hiring round (by gender, nationality, and educational background) also produced our best performers. Not because diverse teams are inherently better (though they often are), but because we had to design a more rigorous process to reduce our biases. The rigor helped everyone.
Hiring is a system, and systems can be improved. But you need data, not gut feelings.