When algorithms read your CV first
How AI is reshaping recruitment
For years, artificial intelligence has promised to transform recruitment: fewer biases, faster decisions, better matches. In 2025, that transformation is finally here.
AI-driven recruitment software now screens millions of CVs, ranks candidates, and drafts personalised messages that feel eerily human. LinkedIn says its AI hiring tools boost response rates by 40–44% compared with traditional outreach — a sign of how quickly AI is changing the way employers source and communicate with candidates.
Meanwhile, Indeed’s new AI agents — “Career Scout” for jobseekers and “Talent Scout” for employers — aim to automate early-stage matching and halve time-to-hire. And OpenAI has announced its own AI jobs platform, promising to match skills to roles algorithmically.
For hiring teams, this is automation nirvana: a faster funnel and a more personalised candidate experience. But as AI takes over key parts of recruitment, it also raises harder questions about transparency, bias, data privacy and control.
Europe’s new rules on AI in recruitment
Under the EU AI Act, which came into force on 1 August 2024, AI systems used in hiring are classified as “high-risk”. That means providers and employers must meet strict obligations on data governance, bias testing, and human oversight. The obligations phase in from 2025, with the requirements for high-risk systems such as hiring tools applying from 2026, and Brussels has confirmed there will be no delay.
Europe’s stance contrasts with the largely self-regulated U.S. market. But even there, momentum is shifting. New York City’s Local Law 144 mandates independent bias audits for automated hiring tools, while the U.S. Equal Employment Opportunity Commission has made AI discrimination a priority through 2028. In the UK, the Information Commissioner’s Office (ICO) has published guidance requiring employers to explain when AI influences decisions and how applicants can challenge them.
These moves point to one reality: AI in recruitment is now regulated technology, not a shiny HR add-on.
What AI in hiring gets right
AI’s greatest contribution may be its ability to support the shift toward skills-based hiring — evaluating what people can do rather than where they went to school. Algorithms can extract and compare skills from job histories or portfolios at a scale that human recruiters can’t match.
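How that comparison works varies by vendor, but the underlying idea can be sketched very simply. The following is a minimal illustration, using invented skill lists and a plain overlap score rather than any real product’s model:

```python
# Minimal sketch of skills-based matching: score candidates by how many of a
# role's required skills appear in the skills extracted from their CV.
# Skill lists and the scoring rule are illustrative, not any vendor's method.

def skill_match_score(required: set[str], candidate: set[str]) -> float:
    """Fraction of the required skills the candidate's profile covers."""
    if not required:
        return 0.0
    return len(required & candidate) / len(required)

role = {"python", "sql", "data modelling", "stakeholder communication"}

candidates = {
    "A": {"python", "sql", "excel"},
    "B": {"python", "sql", "data modelling", "dbt", "stakeholder communication"},
}

for name, skills in candidates.items():
    print(name, round(skill_match_score(role, skills), 2))
# Candidate B scores higher because more of the role's required skills are
# covered, regardless of where either candidate studied.
```

Real systems use far richer representations, but the principle is the same: the role is described as a set of skills, and candidates are ranked by how well their demonstrated skills cover it.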
Generative AI also personalises outreach: instead of generic “We loved your profile!” emails, systems trained on public data reference a candidate’s actual work. LinkedIn credits this kind of contextual personalisation for its rising engagement metrics.
Automating administrative tasks — scheduling, document handling, FAQs — removes friction that often frustrates candidates. For small businesses, AI recruitment tools can act like an HR assistant: checking job descriptions for inclusive language or flagging unrealistic requirements. Indeed’s AI “Talent Scout” positions itself squarely in that market.
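A toy version of that job-description check might look like the sketch below. The flagged phrases and advisory notes are assumptions made for illustration, not Indeed’s or any other vendor’s actual rules:

```python
# Toy job-description checker: flags phrases often cited as non-inclusive or
# as unrealistic requirements. The word list is an illustrative assumption.

FLAGS = {
    "rockstar": "consider a neutral title such as 'experienced developer'",
    "ninja": "jargon that can discourage some applicants",
    "young and energetic": "age-related wording",
    "10+ years": "check whether this requirement is realistic for the role",
}

def review_job_description(text: str) -> list[str]:
    """Return advisory notes for any flagged phrase found in the text."""
    lowered = text.lower()
    return [f"'{phrase}': {note}" for phrase, note in FLAGS.items() if phrase in lowered]

print(review_job_description("Seeking a rockstar engineer with 10+ years of Kubernetes."))
```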
Where AI recruitment systems go wrong
Yet the same technology that promises fairness can easily encode bias. AI bias in hiring usually stems from training data: if a company has historically favoured certain backgrounds, the system learns to replicate that pattern.
The danger is subtle — what some describe as “automated nostalgia.” Algorithms reward the profiles that look most like past success, not necessarily those with true potential.
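The bias audits mandated by rules such as New York’s Local Law 144 are largely an attempt to make that pattern visible, and they typically come down to comparing selection rates across groups. A stripped-down sketch of that calculation, using invented counts purely for illustration:

```python
# Stripped-down bias check: compare the rate at which an automated screen
# advances candidates from each group. Counts are invented for illustration.

screened = {"group_a": 400, "group_b": 400}   # candidates scored by the tool
advanced = {"group_a": 120, "group_b": 60}    # candidates it passed through

rates = {g: advanced[g] / screened[g] for g in screened}
best = max(rates.values())

for group, rate in rates.items():
    # Impact ratio: each group's selection rate relative to the highest rate.
    # Ratios well below 1.0 suggest the tool is replicating a historical skew.
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```

An audit like this cannot say why the gap exists, only that it does, which is why the regulations pair it with documentation and human review.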
Data transparency is another problem. Applicants rarely know which data shaped their ranking, or that behavioural traces such as test-completion speed or the time of day they applied were even collected. Under the EU AI Act, employers will have to keep clear documentation and allow for human review of AI-made decisions.
The third concern is standardisation. When hiring tools rely on the same language models and templates, they can flatten company culture. Everyone starts to sound — and hire — the same.
Platform power and data control
The business model behind AI recruitment may matter as much as the algorithms. As the market consolidates, control over the hiring funnel is shifting to a few major platforms.
Indeed and Glassdoor’s parent company recently announced layoffs and a deeper integration under an “AI-centred strategy.” OpenAI’s jobs platform adds another heavyweight entrant. When these firms own both the candidate data and the matching models, employers lose visibility into how rankings are made — and candidates lose leverage over how their data trains future systems.
Europe’s regulators have noticed. The European Commission has reaffirmed it will stick to the AI Act timeline and is exploring interoperability standards to prevent companies from being locked into opaque ecosystems.
The human factor: trust and transparency
For all its mathematical sophistication, AI in recruitment ultimately runs on trust. Candidates want speed and clarity — but not opacity. Employers want efficiency — but not reputational risk.
When a system screens a CV in milliseconds, it must still respect what the human eye might catch: a career break explained in a cover letter, a creative leap between industries, the spark of potential that no dataset can quantify.
There is a fairer version of AI hiring — one where automation handles routine screening while humans make the final call, one where transparency and accountability are built in rather than bolted on. The technology already exists. What’s missing is a shared commitment to use it well.
Until then, jobseekers will keep wondering what the algorithm saw — and what it didn’t.