Research, Pass rate by month and region

UK driving test pass rate by month and region, where seasonality is real and where it isn’t

The UK headline pattern is clean: the national pass rate peaks at 49.2% in December and dips to 47.9% in May, a 1.3pp seasonal swing across the calendar year. Split that national series by region and the picture loses some of its tidiness. London's best month is January, Wales's best is October, the South East's is August. Of the 10 regions with enough volume to be confident, only 3 agree on which calendar month has the highest pass rate. The headline national figure is cited to the DVSA DRT122A release; per-region figures are volume-weighted from the same source for 2022-23 to 2024-25.

By Vikas
National best month: 49.2% (December, 2022-23 to 2024-25)
National worst month: 47.9% (May, 2022-23 to 2024-25)
National spread: 1.3pp (best minus worst month)
Biggest regional swing: 3.1pp (Yorkshire and the Humber)
Smallest regional swing: 1.8pp (South West)
Regions agreeing on best month: 3/10 (December is the most-common best month)

Section 1, The national pattern, revisited

The national series for 2022-23 to 2024-25 sits between 47.9% in May and 49.2% in December, a 1.3-percentage-point swing across the calendar year: real and reliable, but small compared with the 35-plus-percentage-point spread between UK centres or the eight-point spread between UK regions. The chart below shows the same data as the original seasonality research page, for cross-reference. The question this page asks is whether the December peak holds up when you split the national figure regionally.

The national series has a December high, a January-to-March plateau slightly above the annual average, an April-to-June dip that bottoms out in May, and a July-to-November recovery. That national shape is the figure most learners already know. What the per-region split below shows is that this national shape is essentially the volume-weighted average of eleven regional shapes that mostly do not look the same as the national one.
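
The best-month/worst-month/spread computation used throughout this page reduces to a few lines. A minimal sketch, assuming a mapping of calendar month to volume-weighted pass rate; the function name is ours, and only the two published national extremes are shown:

```python
# Minimal sketch: find the best month, worst month, and seasonal spread
# from a {month: volume-weighted pass rate %} mapping.
def seasonal_spread(monthly_rates):
    """Return (best_month, worst_month, spread in percentage points)."""
    best = max(monthly_rates, key=monthly_rates.get)
    worst = min(monthly_rates, key=monthly_rates.get)
    return best, worst, round(monthly_rates[best] - monthly_rates[worst], 1)

# The two published national extremes; the other ten months are omitted here.
print(seasonal_spread({"May": 47.9, "December": 49.2}))
# -> ('December', 'May', 1.3)
```

Run over a full 12-month series, the same function produces every "spread" figure quoted on this page.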

Section 2, Three GB countries, three different shapes

The most natural first cut is by country. England, Scotland and Wales each have a recognisably different month-to-month pattern, even though all three share the same DVSA examiner pipeline and the same DT1 marking sheet. Northern Ireland is missing because DRT122A does not carry NI centre data; the DVA equivalent is published separately and we do not currently ingest it.

England tracks the national pattern almost exactly, because England is the bulk of the national sample: the April-June dip and the December-ish winter high are both visible, though England's single best month is August at 49.2%. Worst month June at 47.7%, spread 1.5pp.

Scotland runs lower overall (47.2% volume-weighted across 140,976 tests in the window) and its calendar-month shape is messier. July is the high month at 49.0%; the dip sits in May at 46.2%. The Scottish series is sensitive to small island and Highland centres where a handful of extra passes in any one month can move the figure by several percentage points. The headline spread of 2.8pp is more than England’s, but the wider confidence bands mean the practical seasonal swing for a Scotland-based candidate is probably smaller than the chart suggests.

Wales is the highest-passing of the three GB countries by some margin (54.1% volume-weighted, against England’s 48.5% and Scotland’s 47.2%), and its calendar peak sits in October rather than December. The dip lands in January. The Welsh seasonal swing is 2.3pp. Welsh test centres skew more rural and coastal than English ones, and the October peak is loosely consistent with summer tourist traffic complicating routes in July and August before things quieten down for autumn. We would not over-claim that story; the Welsh sample is the smallest of the three and the shape could move with the next DVSA release.

Section 3, Where seasonality bites most, and least

Aggregating up to the eleven UK regions used by our wait-time and density research pages gives a finer-grained picture. The chart below shows each region’s best-month-minus-worst-month spread in percentage points. Larger bar means more seasonal, shorter bar means flatter year-round.

Yorkshire and the Humber tops the chart at 3.1pp between its best month (February) and its worst (September). That is more than double the national headline spread of 1.3pp. A learner whose local centre sits in Yorkshire and the Humber can rationally treat the month they book as a real lever, though even at the top of this table we are talking about three percentage points of expected probability, not thirty.

The South West sits at the bottom of the rankable list at 1.8pp. The seasonal swing exists but is small enough that month shopping is essentially a tie-breaker. The South West's average pass rate (49.6%) is roughly stable across the calendar year, which is the pattern a candidate should expect for most large English regions outside the spring booking surge.

The middle of the league is occupied by London (2.6pp), the West Midlands (2.6pp), and the East Midlands (2.6pp). All three sit close to the national-spread band of one to two and a half points. The pattern of which calendar month carries the peak is more varied than the swing magnitudes suggest.

Section 4, Regions where seasonality is reliable vs regions where it is noise

Not every regional row in the table above is equally trustworthy. A region needs enough monthly volume across the three-year window to make its month-to-month figures stable. Our threshold is 5,000 tests per month in the region’s quietest calendar month. Below that, a few hundred extra passes in any one month can move the headline by a full percentage point, and the seasonal pattern starts to dissolve into sampling noise.

Significant seasonality
  • Yorkshire and the Humber: 3.1pp, best February (49.2%), worst September (46.1%)
  • Scotland: 2.8pp, best July (49.0%), worst May (46.2%)
  • East Midlands: 2.6pp, best February (49.4%), worst April (46.8%)
  • West Midlands: 2.6pp, best December (47.2%), worst June (44.6%)
  • London: 2.6pp, best January (49.5%), worst June (46.9%)
  • East of England: 2.4pp, best December (49.9%), worst May (47.5%)
  • South East: 2.4pp, best August (51.9%), worst February (49.5%)
  • Wales: 2.3pp, best October (55.1%), worst January (52.8%)
  • North West: 2.1pp, best December (49.4%), worst June (47.3%)
  • South West: 1.8pp, best September (50.5%), worst January (48.7%)
Thin sample, treat with caution
  • North East: spread 2.9pp but at least one month falls below the 5,000-test threshold; treat the calendar-month figures as directional rather than precise.

The North East is the standout caveat. Its 2.9pp spread looks meaningful on paper, but several of its calendar months sit below 5,000 tests across the three-year window. A candidate in Tyne and Wear or Northumberland is probably better off taking the national pattern as their anchor and treating the regional caveat as a reminder that the per-month figures wobble.

Section 5, Which calendar months do regions actually agree on?

The national chart says December is the best month and May is the worst. Reading that as a booking heuristic assumes the regions agree. They mostly don’t.

Among the 10 regions that clear our volume threshold, December is the most-common best month, claimed by 3 of them. That is a plurality, not a majority. The remaining 7 regions identify another calendar month as their peak, which means a learner planning around "December is the best month" is right less than half the time at the regional level.
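The plurality-versus-majority point is easy to verify from the quick-reference table at the end of this page. A sketch using the per-region best months as published there:

```python
from collections import Counter

# Best month per significant region, taken from the quick-reference
# table at the end of this page (North East excluded as thin-sample).
best_months = {
    "Yorkshire and the Humber": "February",
    "Scotland": "July",
    "East Midlands": "February",
    "West Midlands": "December",
    "London": "January",
    "East of England": "December",
    "South East": "August",
    "Wales": "October",
    "North West": "December",
    "South West": "September",
}

counts = Counter(best_months.values())
month, n = counts.most_common(1)[0]
print(month, n)                  # December 3
print(n > len(best_months) / 2)  # False: a plurality, not a majority
```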

The worst-month picture is slightly cleaner. The most-common worst month is June, claimed by 3 of the same significant regions. That is consistent with the national pattern’s May dip and aligns with the spring-booking-surge hypothesis: when test volume surges in late spring, the marginal new candidate is on average less prepared, and the volume-weighted pass rate drifts down. The pattern repeats across most regions because the booking surge is national.

The honest takeaway: the "best" month is regionally unstable. The "worst" month is regionally consistent. A candidate avoiding the late-spring dip is on safer ground than a candidate trying to time the December peak.

Section 6, Why regions show different seasonal shapes

We can describe the per-region variation. Explaining it is harder because DVSA does not publish the per-region operational data that would let us pin down the mechanism. Four plausible drivers, each labelled honestly.

Driver 1, weather sensitivity (probable, partially supported)

Rural-route centres in Scotland, Wales and the South West see more weather-sensitive driving conditions than central London ones. Single-track A-roads, livestock crossings, exposed coastal sections: these are environments where a damp October morning materially changes the driving task. The seasonal swing in rural-leaning regions could plausibly track local weather patterns more closely than in the urban regions. Wales peaking in October and dipping in mid-summer is consistent with this; the West Midlands dipping in June is less so.

Driver 2, school-holiday cycles (probable, sample-limited)

UK school summer holidays run roughly July through August in most of England and Wales but late June through mid-August in Scotland. Candidates aged 17 to 18 are a meaningful slice of new test bookings, and they cluster their lesson volume in holiday weeks. The Scottish peak in July, English peaks scattered between January and August, and Wales’s October peak are loosely consistent with the regions’ different summer-holiday windows shifting the candidate-pool mix. We cannot prove it from DRT122A because the release does not split pass rates by candidate age.

Driver 3, examiner staffing (speculative)

DVSA examiner staffing varies by centre and by month. Annual leave clusters around school holidays in centres serving commuter towns; recruitment cycles affect different regions differently. Newly-deployed examiners on average mark slightly stricter while they settle into a centre, which could shift the local pass rate by a fraction of a percentage point. We have no way to confirm or deny this from the public dataset. A Freedom of Information request to DVSA could clarify per-region examiner-mix data, which would be a useful direction for a follow-up post.

Driver 4, route-mix changes (weak)

DVSA periodically updates the published route list at each centre. A new route introduced at a centre can shift the average difficulty of that centre’s test by a small margin, and if the update lands in a particular month the volume-weighted pass rate for that calendar month at that centre can drift. Aggregated up to the regional level, this is almost certainly too small an effect to drive the patterns we see, but it is the kind of thing that would need a per-centre per-month panel to rule out cleanly.

Section 7, What this means for a learner choosing a booking month

The seasonal effect is real and the effect is small. Three things worth keeping clear before reading any month-shopping strategy.

The regional best month matters more than the national one. The national figure is the volume-weighted average of eleven regional shapes that mostly do not peak in the same month. A learner in London should treat London’s peak (January at 49.5%) as more relevant than the national December figure. A learner in Wales should look at Wales’s October peak. The per-region table in section 4 above is the practical lookup for this.

The late-spring dip is the more reliable signal. 3 of the 10 significant regions identify June as their worst month, and April or May claims three more. If the choice is between "book in May" and "book later in the summer", the data is reasonably confident that later in the summer is better, almost everywhere.

Centre and preparation dwarf everything else. Two candidates booking in the same month at different centres can face a 35-plus percentage-point gap in expected outcome. The seasonal effect at any single centre is one to three points. Use the easiest centres / hardest centres rankings to decide where to book. Use this page to decide when, treating it as a tiebreaker rather than a strategy.

Section 8, Methodology and limitations

Data source. DVSA DRT122A, three financial years pooled (2022-23 to 2024-25). Each row in the source gives a centre-month-gender combination with test count and pass count. We aggregate to the all-gender total per centre per month, then bucket centres into UK regions using the same heuristic as our wait-time research page, and finally sum across centres within each region to produce volume-weighted per-month figures. Licensed under the Open Government Licence v3.0.
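The aggregation steps above can be sketched in a few lines. The row shape and the centre-to-region lookup are our assumptions about the ingest, not DVSA's schema, and the example numbers are illustrative rather than DRT122A figures:

```python
from collections import defaultdict

# Sketch of the aggregation described above: centre-month-gender rows are
# summed to all-gender totals, bucketed into regions, and volume-weighted.
def region_month_rates(rows, region_of):
    """rows: (centre, month, gender, tests, passes) tuples (assumed shape)."""
    totals = defaultdict(lambda: [0, 0])  # (region, month) -> [passes, tests]
    for centre, month, _gender, tests, passes in rows:
        key = (region_of[centre], month)
        totals[key][0] += passes          # sum passes across centres and genders
        totals[key][1] += tests           # sum tests the same way
    # Volume-weighted rate: total passes / total tests, as a percentage.
    return {k: round(100 * p / t, 1) for k, (p, t) in totals.items()}

# Illustrative rows only, not real DRT122A counts.
rows = [
    ("Centre A", "May", "M", 600, 250),
    ("Centre A", "May", "F", 400, 180),
    ("Centre B", "May", "M", 500, 240),
]
rates = region_month_rates(rows, {"Centre A": "London", "Centre B": "London"})
print(rates[("London", "May")])  # 44.7
```

Note that this weights by volume automatically: a large centre moves its region's monthly figure more than a small one, which is the convention described below.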

Region bucketing. Each centre is assigned to one of eleven regions (London, South East, East of England, West Midlands, East Midlands, North West, North East, Yorkshire and the Humber, South West, Wales, Scotland) based on its OSM geoDisplayName. The bucket set is identical to the one used on /research/wait-time-by-region so a reader cross-referencing can use the same set. Northern Ireland centres are excluded because DRT122A does not cover them.

Significance threshold. A region is labelled "significant" when every calendar month in its three-year series carries at least 5,000 tests and its best-to-worst spread is at least 1.0 percentage point. 5,000 tests per month is the floor below which a single percentage point of seasonal swing is hard to distinguish from sampling noise; the 1.0pp swing floor keeps regions that are essentially flat year-round (where any calendar pattern is indistinguishable from random month-to-month variation) out of the headline comparison. Of 11 regions included, 10 clear both thresholds.
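The two-part rule reduces to a short predicate. A sketch with names of our own choosing, fed toy two-month inputs rather than real regional series:

```python
# Sketch of the significance rule above: at least 5,000 tests in EVERY
# calendar month AND a best-to-worst spread of at least 1.0pp.
MIN_MONTHLY_TESTS = 5_000
MIN_SPREAD_PP = 1.0

def is_significant(monthly_tests, monthly_rates):
    if min(monthly_tests.values()) < MIN_MONTHLY_TESTS:
        return False  # thin sample: flagged, not ranked (e.g. the North East)
    spread = max(monthly_rates.values()) - min(monthly_rates.values())
    return spread >= MIN_SPREAD_PP

# Illustrative two-month toy inputs, not real regional series.
print(is_significant({"May": 6000, "Dec": 7000}, {"May": 47.5, "Dec": 49.9}))  # True
print(is_significant({"May": 3000, "Dec": 7000}, {"May": 45.6, "Dec": 48.5}))  # False
```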

Volume-weighted aggregation. The per-region per-month pass rate is the sum of passes across all centres in the region in that calendar month, divided by the sum of tests. This is the same convention DVSA uses for its own quarterly press-release headlines, and the same convention used throughout PassRates.uk. Documented on the methodology page.

Redacted rows. DVSA redacts centre-month cells with fewer than five tests, publishing ".." in place of the counts under its disclosure-control policy. We skip these rows in the aggregation because they carry no usable count to weight by. Across the three-year window, redacted rows account for under 0.1% of test volume, and they concentrate at small island and Highland centres, which is one reason Scotland's monthly shape should be read with wider confidence bands than England's.
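The skip step is a simple filter. A sketch with field names of our own choosing and illustrative rows (the small-centre example is representative, the counts are made up):

```python
# Sketch: drop disclosure-controlled cells before aggregation. DVSA marks
# redacted counts as "..", so the row carries nothing usable to weight by.
def usable_rows(rows):
    return [r for r in rows if r["tests"] != ".." and r["passes"] != ".."]

# Illustrative rows only; the counts are invented.
rows = [
    {"centre": "Small island centre", "month": "Jan", "tests": "..", "passes": ".."},
    {"centre": "Glasgow", "month": "Jan", "tests": 900, "passes": 410},
]
print(len(usable_rows(rows)))  # 1
```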

What this analysis does not do. We do not adjust for COVID disruption (the three-year window starts in 2022-23, after the immediate post-lockdown booking surge, but the lingering effect on candidate mix is non-zero). We do not split by candidate age, by first attempt vs retake, or by examiner. We do not include Northern Ireland. We do not cover motorcycle or heavy-vehicle tests, which DVSA publishes separately. Each of these is a future research direction. Email hello@passrates.uk if you spot a mistake.

Cite this page: passrates.uk research/pass-rate-by-month-and-region v1.0 (2026). Data: DVSA DRT122A "Driving test and theory test data, cars" for 2022-23 to 2024-25, OGL v3.0. Aggregation: 287 UK car centres, 1,774,270 tests, volume-weighted within 11 regions.

Quick-reference: best and worst month by region

| Region | Best month | Best % | Worst month | Worst % | Spread | Tests |
|---|---|---|---|---|---|---|
| Yorkshire and the Humber | February | 49.2 | September | 46.1 | 3.1pp | 168,377 |
| North East (thin sample) | June | 48.5 | November | 45.6 | 2.9pp | 56,392 |
| Scotland | July | 49.0 | May | 46.2 | 2.8pp | 140,976 |
| East Midlands | February | 49.4 | April | 46.8 | 2.6pp | 143,788 |
| West Midlands | December | 47.2 | June | 44.6 | 2.6pp | 208,312 |
| London | January | 49.5 | June | 46.9 | 2.6pp | 237,370 |
| East of England | December | 49.9 | May | 47.5 | 2.4pp | 199,736 |
| South East | August | 51.9 | February | 49.5 | 2.4pp | 192,226 |
| Wales | October | 55.1 | January | 52.8 | 2.3pp | 88,850 |
| North West | December | 49.4 | June | 47.3 | 2.1pp | 189,547 |
| South West | September | 50.5 | January | 48.7 | 1.8pp | 148,696 |

Frequently asked questions

Does the UK driving test pass rate change by month?

Yes, slightly. Across the three financial years from 2022-23 to 2024-25 the volume-weighted UK pass rate moves between 47.9% in May (the worst month) and 49.2% in December (the best month). The headline spread is 1.3 percentage points. That is real and reliable, computed from 1,774,270 tests across 287 car test centres in DVSA's DRT122A release, but it is also a small effect. The same dataset shows a per-centre spread of over 35 percentage points and a regional spread of around 8 percentage points. Which centre you book matters more than which month.

Is December really the best month to take a driving test in the UK?

December is the best month nationally and the best month in 3 of the 10 regions with enough volume to draw a confident conclusion. Per the per-region cross-cut, London's best month is January, the South East's is August, Wales's is October, and Scotland's is July. The national "December peak" is a real pattern in the aggregate but the calendar month that matters most for any one learner depends on where they book.

Why is June often the worst month for UK driving tests?

Across the regions that clear our per-month volume threshold, 3 of them show June as the worst-pass-rate month. The leading hypothesis is the spring booking surge: candidates who book during April / May / June are on average slightly less prepared than the steady-state caseload, because the marginal warm-weather candidate has been holding off through winter and is closer to lesson minimum than to lesson optimum. That pattern fits the test-volume series, which peaks in late spring nationally. A secondary hypothesis is examiner staffing during the summer-leave cycle, where examiners with less experience at the centre may pick up overflow shifts. Neither hypothesis can be proved from DRT122A alone, but the June dip is consistent with both.

Does seasonality affect every UK region the same way?

No, and that is the interesting part of this analysis. Yorkshire and the Humber shows the largest seasonal swing at 3.1pp between its best (February) and worst (September) months. South West sits at the other end with only 1.8pp between its best (September) and worst (January). London's swing is 2.6pp. The Welsh swing is 2.3pp. Same dataset, same examiner pipeline, same DT1 marking sheet, materially different month-to-month behaviour. The likely driver is a mix of weather sensitivity on rural routes, local school-holiday cycles, and per-region examiner-staffing patterns that DVSA does not publish at this granularity.

Should I time my UK driving test booking around seasonality?

Only at the margin. The expected pass-rate uplift from picking the best month over the worst month in your region is one to three percentage points, depending on where you are. For a candidate whose true probability of passing sits at 50%, that nudges to roughly 51 to 53%. Detectable in the aggregate, swamped by individual variation in any one candidate. The decisions that swamp seasonality, in order: choose the right test centre (35pp spread across UK centres), arrive prepared (DVSA suggests around 45 hours of lessons plus 22 hours of private practice as a typical figure), and treat seasonality as a tiebreaker rather than a strategy. The on-site guide on the easiest and hardest test centres covers the bigger levers in detail.

Why does Wales peak in October rather than December?

Wales's best month is October at 55.1% pass rate, against a Welsh annual average of 54.1%. The Welsh sample skews more toward coastal and rural centres than England's does, and several Welsh centres run noticeably busier in summer (when caravan and tourist traffic complicates route exposure for examiners) and quieter in October once half-term passes. That mix can push the per-month volume-weighted figure toward October even though the underlying calendar effect is similar to England's. The honest caveat: Wales's total test volume is smaller than England's, so its month-to-month figures carry wider confidence bands. We would not bet on the October-peak shape persisting in every future DVSA release.

Is the December peak driven by examiners passing more candidates before Christmas?

There is no evidence in DVSA's published audit data that examiners mark differently in December. The DT1 marking manual is the same year-round, and DVSA's internal quality-assurance observations are spaced across the calendar. The more credible explanation for the December high is survivorship bias: tests cancelled outright for ice, fog, or snow are not in the dataset, only tests that actually went ahead. The candidates who turn up in December are the ones whose test went ahead because the weather on that particular morning was tolerable, which is a self-selecting cohort. The data shows December's pass rate clearly; the data does not show what would have happened to the cancelled tests.

How accurate is the per-region monthly pass rate analysis?

Accurate at the volume-weighted national and regional level for the 2022-23 to 2024-25 window. The headline UK figures are computed from 1,774,270 tests across 287 car test centres in DVSA's DRT122A release, which is the authoritative source. Per-region figures use the same volume-weighted aggregation. Two honest caveats. First, Northern Ireland is excluded because DRT122A does not cover NI test centres (the equivalent DVA release is in a different format we do not currently ingest). Second, regions with thin monthly samples (the North East, with monthly volume below our 5,000-test threshold in several months) are flagged and shown with caveats; their per-month figures are directionally useful but should not be read as precise to the decimal.
