
Finding the Evidence for the
Capability Rating Scales

"Yes, you do have to write something in those boxes. No, 'I'm quite good at this' is not acceptable." — Your helpful guide to doing it properly.

👩‍⚕️ For Trainees, Trainers & TPDs 💎 Knowledge Not Found Elsewhere ⚡ High-Impact Learning in Minutes
📅 Last updated: April 2026 🩺 13 Professional Capabilities covered 📋 RCGP-aligned
⚠️ August 2025 Curriculum Update: The RCGP's updated GP Curriculum went live on 1 August 2025. There are minor wording changes to some capabilities, but no changes to required WPBA numbers or assessment formats. All evidence previously linked to capabilities remains valid. This page reflects the current (2025) capability names and framework.

Handouts, Checklists & Teaching Resources

Everything you need — whether you're prepping for your ESR, helping a trainee, or building a tutorial. Download and use freely.

📂 Intro to Educational Supervision

📂 For Trainees

Includes the ES Mapping Workbook (use this every ESR period), the Form R quick guide, and the ePortfolio ARCP-readiness guide.

📂 ES Checklists

The Bradford ES Checklist, capability rating scales help sheet, and ST3 ARCP final checklist. These are the most heavily used downloads in this whole section of the site.

Curated Links — Official & Informal

A hand-picked mix of official guidance and real-world GP training resources, because sometimes the best pearls are found outside the official documents.

RCGP Official WPBA Capabilities Framework

The RCGP's definitive page on all 13 capabilities with progression descriptors.

RCGP Official CSR, iESR and ESR Guidance

How the Clinical Supervisor's Report and Educational Supervisor's Report work.

RCGP Official WPBA Overview

The full WPBA framework including CATs, MiniCEX, COT, MSF and PSQ.

RCGP Official How to Assess WPBAs — Guidance for Assessors

What grades mean, who can assess, and how to use them properly.

Bradford VTS Which WPBA at Which Stage?

A clear breakdown of how many assessments you need at each ST year.

Bradford VTS Formulating Action Points for Each Capability

How to write SMART, meaningful action points — not vague waffle.

Bradford VTS Ram's Easy Way to Write Reflective Log Entries

The simple framework for writing good learning log entries quickly.

Bradford VTS Learning Log & Reflection Pages

Everything about learning logs — structure, examples, and reflection levels.

Bradford VTS ES for Trainees

The full trainee guide to educational supervision and ESR preparation.

Bradford VTS ARCP Pages

What happens at ARCP, what the panel looks for, and how to prepare.

Bradford VTS FourteenFish ePortfolio Technical Help

Step-by-step technical guides for using your FourteenFish ePortfolio.

Bradford VTS PDP Pages

How to write a Personal Development Plan that actually works.

Trainee Resource WellMedic — Capability Portfolio Examples

Practical examples of how trainees write capability evidence with FourteenFish screenshots.

⚡ If You Only Read One Section — Read This

The full picture in 90 seconds. Everything else on this page is just the detail behind these points.

13 Professional Capabilities you must evidence
×3 Minimum evidence links per capability per ESR review
4+1 Evidence types: quantitative WPBA + qualitative narrative
2hrs Minimum prep time needed before your ES meeting
  • The Capability Rating Scales live in your FourteenFish ePortfolio, in the ESR Preparation section. You fill them in before your ES meeting — not during it.
  • For each capability: write both quantitative evidence (WPBA grades) and qualitative evidence (narrative themes from CSR, MSF, logs).
  • The purpose is to show yourself off — you're presenting your case to ARCP panel members who don't know you. Underselling yourself is not modesty; it's a missed opportunity.
  • Items from structured WPBA tools (CbD, COT, MiniCEX) carry the most weight. Log entries and CSR themes support and contextualise them.
  • Action points must be SMART and specific. "Continue to improve" is not an action point. "Attend a clinical supervision skills day by December and write a reflection" is.
  • The ES Mapping Workbook (download above) helps you track coverage across all 13 capabilities and all Clinical Experience Groups. Use it from Day 1 of each new year.
  • Even ST3 trainees completing to CCT must write action points — because they become your PDP for your very first GP appraisal.
  • Educational supervision is your responsibility to initiate; it is not your ES's job to chase you.

💡 Why the Capability Rating Scales Matter

📋 They Drive Your ARCP

The ARCP panel reviews your ESR — and the Capability Rating Scales are a core part of it. This is how the panel decides whether you progress to the next ST year. A vague or empty rating scale is a red flag.

🔍 They Identify Gaps Early

When you can't find evidence for a particular capability, that's useful information. It means you haven't been generating that evidence — or haven't been reflecting on it. Identifying this in ST1 is much better than discovering it in ST3.

🎯 They Are Your Professional CV

The people reading your capability write-ups don't know you. They are making a professional judgement about you based on what you've written. Think of it as a job interview in written form.

🌱 They Shape Your Learning

Reviewing your capabilities regularly helps you notice which areas you're not developing. It shifts the focus from "doing WPBAs" to "using WPBAs to demonstrate growth." That's a much more powerful mindset.

"The numbers on their own don't tell you the whole story — you need the narrative as well." — Dr Ramesh Mehay, Former Programme Director, Bradford GP Training Scheme

🗺 Understanding the Framework

Two things you need to understand before you start: how the 13 capabilities are grouped (the RDMp framework), and how evidence flows through the system to reach your ARCP.

The RDMp Framework — 4 Groups, 13 Capabilities

The 13 Professional Capabilities are grouped into 4 themes, known as RDMp. Understanding these groups helps you ensure your evidence covers all areas — not just the ones you happen to enjoy.

R — Relationships: how you engage with patients and colleagues (FtP, EA, CC)

D — Diagnostics & Decisions: clinical reasoning and data use (DG, CEPS, DD)

M — Management & Complexity: breadth of care and real-world management (CM, MC, TW)

p — Professionalism & Organisation: being a safe, organised, reflective doctor (PLT, OML, HPHS, CHES)

How Evidence Flows to Your ARCP

Understanding this flow helps you see why each piece of evidence matters — and why gaps at the bottom affect everything above.

🩺 Clinical Work & WPBAs → 📓 Learning Logs & Reflections → 📊 Capability Rating Scales → 📋 ESR (by your ES) → 🏁 ARCP & Progression

WPBA Tools at a Glance

Tool | What It Assesses | Where Used
CbD (Case-based Discussion) | Professional judgement, clinical reasoning, capability-mapped discussion | ST1, ST2, ST3 (primary and secondary care)
COT (Consultation Observation Tool) | Consultation skills, clinical reasoning, management — all GP capabilities | GP posts only (ST3 mainly; some ST1/2 GP)
AudioCOT | As per COT but using an audio-recorded consultation | GP posts
MiniCEX | Clinical examination, communication, judgement | Non-primary care posts (hospital)
MSF (Multi-Source Feedback) | Professional behaviour and clinical performance as seen by colleagues | At least once per training year
PSQ (Patient Satisfaction Questionnaire) | Communication and holistic care as experienced by patients | GP posts
CEPS (Clinical Examination & Procedural Skills) | 5 mandatory clinical examinations observed by a qualified assessor | Any post (must complete all 5 during training)
CSR (Clinical Supervisor's Report) | Overall narrative assessment of trainee performance in that post | End of every post
AKT | Applied clinical knowledge — pass/fail exam | Usually during ST3 (or earlier)
SCA | Simulated Consultation Assessment — pass/fail exam (replaced the RCA in 2023) | Usually during ST3

📖 How to Use Ram's Capability Evidence Table

1️⃣
Open Your FourteenFish ePortfolio

Navigate to the ESR Preparation section and open the Capability Rating Scales for your upcoming review period.

2️⃣
Open the Capability Below

Click on the relevant capability accordion below to expand it. Read the template — it shows you exactly what sources to reference and what to write.

3️⃣
Copy the Structure, Fill the Blanks

The prompts in italics show you what to fill in. Replace each prompt with your actual evidence. Be specific — use numbers, grades and named themes.

4️⃣
★ = Strongest Evidence First

Items marked ★ provide the most robust evidence. Always include these if you have them. Qualitative sources support and contextualise the quantitative data.

5️⃣
Tag Your Log Entries

In FourteenFish, tag 2–3 of your strongest log entries to each capability. These become your "Tagged Evidence" — the specific entries you're pointing the panel to.

6️⃣
Write Your Action Points

Every capability needs at least one SMART action point — what's the next concrete step? See the Action Points section for guidance.

💡

Insider Tip: Train the Trainee Early

Educational Supervisors: spend time in your very first meeting thoroughly going through (a) what the capabilities mean, (b) what evidence looks like, and (c) how to write up the rating scales. If you get the trainee well set up from the beginning, the next three years run so much more smoothly. The Bradford ES Workbook in the downloads section is your best friend here.

🎯 All 13 Professional Capabilities

Click on any capability to expand it. Each one shows you exactly what to write in the Capability Rating Scales section of your FourteenFish ePortfolio — both quantitative WPBA grades and qualitative narrative evidence. Items marked ★ provide the strongest evidence and should always be included if available.

💡

How to Read the Templates

The tool names (e.g. CbD, COT) tell you the source. The ✏️ prompts in grey show you exactly what to fill in. Replace each prompt with your own specific data — grades, numbers, themes. Be concise and specific. Remember: the goal is to show the ARCP panel the full picture of your competence in that area.

Filter by RDMp Group: R — Relationships (FtP, EA, CC) D — Diagnostics (DG, CEPS, DD) M — Management (CM, MC, TW) p — Professionalism (PLT, OML, HPHS, CHES)
FtP Fitness to Practise Relationships
📌
Reminder: The doctor's awareness of when their own performance, conduct or health — or that of others — might put patients at risk. What actions did they take to protect patients? This capability is rarely directly witnessed but can be explored through CbD or reflected upon in Learning Log Entries.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • CbD — item (1) Fitness to Practise
    ✏️ item (1) Fitness to practise:

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • CSR — under 'Professionalism'
    ✏️ with respect to the trainee's conduct, performance and health, the CS says…
  • MSF — themes around conduct, performance and health are…
  • Log Entries — select entries that show STRONG evidence
    ✏️ select ones that show STRONG evidence.
EA An Ethical Approach Relationships
📌
Reminder: To practise ethically is to practise with integrity and show respect for diversity. Use recognised ethical frameworks to help patients. Themes include honesty, integrity, trust and respect. Also includes autonomy, beneficence, non-maleficence, justice, morality, utility, rationing and human rights.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • CbD — item (2) Maintaining an ethical approach
    ✏️ item (2) Maintaining an ethical approach:

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • CSR — under 'Professionalism'
    ✏️ under 'Professionalism' – ethical themes are…
  • MSF — under 'Professional Behaviour'
    ✏️ under 'professional behaviour', ethical themes are…
  • Log Entries — select entries with STRONG evidence
    ✏️ select ones that show STRONG evidence. In particular: Clinical Case Reviews, LEAs/SEAs, Reflection on feedback (esp complaints), QIA, Leadership/Management/Professionalism
CC Communicating and Consulting Relationships
📌
Reminder: Specific consultation and communication techniques the trainee uses — and why. Includes understanding and using consultation frameworks effectively. Covers verbal communication, written communication, and digital/remote consultation.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • COT — Overall Assessment of Performance
    ✏️ item 'Overall Assessment of Performance':
  • AudioCOT — Overall Assessment of Performance
    ✏️ item 'Overall Assessment of Performance':
  • MiniCEX — item (2) Communication skills
    ✏️ item 2 Communication skills:
  • SCA — passed/not yet passed — Interpersonal Skills (IPS) domain
    ✏️ this trainee has passed, which is a good indicator that their communication skills were, on the whole, good enough. The score for IPS was x out of y

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • PSQ — communication skills themes are…
  • CSR — under 'Communication'
    ✏️ communication skills themes are…
  • MSF — following communication skills themes emerge…
  • Log Entries — select entries with STRONG evidence
    ✏️ select ones that show STRONG evidence. In particular, look at Clinical Case Reviews.
💡 Note on COT: The majority of COT items relate to communication — especially items (1) patient contribution, (2) cues, (3) psychosocial context, (4) ICE, (8) explanations, (10) patient involvement and (11) shared understanding. Rather than writing about each item separately, write in terms of the Overall Assessment of Performance. Note on AudioCOT: Similarly, items (1)–(6), (9), (10), (11) and (12) are all primarily about communication. Summarise holistically.
DG Data Gathering and Interpretation Diagnostics
📌
Reminder: The gathering and use of data for clinical judgement, the choice of physical examination and investigations, and their interpretation. This is primarily about clinical data — salient features in the history and examination. Outline the key features the trainee identifies and uses.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • CbD — item (4) Data Gathering & Interpretation
    ✏️ item (4) data gathering & interpretation:
  • COT — item (5) Includes/excludes significant conditions; item (6) Appropriate examination
    ✏️ item (5) Includes/excludes significant condition:; item (6) Examination:
  • AudioCOT — item (3) Identifies the reason for the call; item (7) History Taking
    ✏️ item (3) Identifies the reason for the call:; item (7) History Taking:
  • MiniCEX — item (3) Clinical Assessment (History and Examination)
    ✏️ item (3) Clinical Assessment (Hx & Ex)
  • SCA — Data Gathering (DG) domain
    ✏️ this trainee has passed, which is a good indicator that their Data Gathering skills were, on the whole, good enough. The score for DG was x out of y

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • CSR — under 'Clinical Assessment'
    ✏️ under 'Clinical Assessment' the CS says… The 'Level of Supervision' required is reported as…
  • MSF — under Clinical Performance
    ✏️ under Clinical Performance, comments around Data Gathering are:
  • Log Entries — select entries with STRONG evidence
    ✏️ select ones that show STRONG evidence. In particular, look at: Clinical Case Reviews, LEAS/SEAs & CEPs.
CEPS Clinical Examination and Procedural Skills Diagnostics
📌
Reminder: Any reference to the doctor's ability to perform clinical examinations, clinical procedures and investigations — and to interpret their findings appropriately.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • CEPS records — mandatory clinical examinations completed
    ✏️ This trainee has done XXX out of the 5 mandatory examinations.
  • COT — item (6) Appropriate physical or mental examination
    ✏️ item (6) appropriate physical or mental examination:
  • MiniCEX — item (3) Physical examination skills
    ✏️ item (3) physical examination skills:

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • CSR — under 'Clinical Assessment'
    ✏️ under 'Clinical Assessment' – the CS says the following about examination skills…
  • MSF — the following comments are made about examination skills…
  • Log Entries — select entries with STRONG evidence
    ✏️ select ones that show STRONG evidence. In particular, look at Clinical Case Reviews & CEPS.
DD Decision-Making and Diagnosis Diagnostics
📌
Reminder: Not just about reaching a diagnosis — it's about any decision-making process. Write about the conscious, structured approach to decision-making used to protect patients. Describe the thinking behind the decision: what pointed towards it, what pointed away, and how the final decision was reached.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • CbD — item (6) Making Diagnoses/Decisions
    ✏️ item (6) Making diagnosis/decisions:
  • COT — item (6) Appropriate examination; item (7) Appropriate working diagnosis
    ✏️ items (6) Appropriate examination:; item (7) Appropriate working diagnosis:
  • AudioCOT — item (8) Appropriate working diagnosis; item (9) Creates an appropriate treatment plan; item (11) Safety netting/follow-up
    ✏️ item (8) appropriate working diagnosis:; item (9) creates an appropriate, effective treatment plan:; item (11) safety netting/FU:
  • MiniCEX — item (7) Clinical judgement
    ✏️ item (7) clinical judgement:
  • SCA — overall clinical decision-making reflected in CM and DG domains
    ✏️ this trainee has/has not passed the SCA, which provides evidence of good decision-making skills.

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • CSR — under 'Clinical Assessment', 'Management of Patients' and 'Context of Care'
    ✏️ under 'Clinical Assessment', 'Management of Patients', 'Context of Care' the CSR says the following about Decision & Diagnosis skills… The 'Level of Supervision' required is reported as…
  • MSF — under Clinical Performance
    ✏️ under Clinical Performance, themes around Diagnoses/Decisions are…
  • Log Entries — select entries with STRONG evidence
    ✏️ select ones that show STRONG evidence. In particular, look at: Clinical Case Reviews, LEAs/SEAs, Leadership/Management/Professionalism & Prescribing.
CM Clinical Management Management
📌
Reminder: The recognition and management of common medical conditions in primary care. Includes formulating appropriate management plans, safety-netting, applying evidence-based medicine, and shared decision-making.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • CbD — item (7) Clinical Management
    ✏️ item (7) Clinical management:
  • MiniCEX — item (4) Overall clinical care
    ✏️ item (4) overall clinical care:
  • COT — item (10) Appropriate management plan and follow-up
    ✏️ item (10) Appropriate management plan & FU:
  • AudioCOT — item (9) Creates an appropriate, effective, mutually acceptable treatment plan
    ✏️ item (9) creates an appropriate, effective mutually acceptable treatment plan:
  • AKT — passed/not yet passed — clinical questions domain
    ✏️ this trainee has passed, which is a good indicator that their clinical management skills are, on the whole, good enough. The score for clinical questions was x out of y
  • SCA — Clinical Management (CM) domain
    ✏️ this trainee has passed, which is a good indicator that their clinical management skills are, on the whole, good enough. The score for CM was x out of y

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • CSR — under 'Clinical Assessment', 'Management of Patients' and 'Context of Care'
    ✏️ Under 'Clinical Assessment', 'Management of Patients', 'Context of Care' the following Clinical Management themes emerge… The 'Level of Supervision' is reported as…
  • MSF — under Clinical Performance
    ✏️ Under Clinical Performance, the following 'clinical management' themes emerge…
  • Log Entries — select entries with STRONG evidence
    ✏️ select ones that show STRONG evidence. Nearly all types of Learning Log entry touch on the management of a condition in primary care, so there is plenty to choose from.
MC Medical Complexity Management
📌
Reminder: Aspects of care beyond managing straightforward problems. It's about the approach to health rather than just illness. Includes: managing co-morbidity, managing uncertainty, explaining risk, coordinating across specialties, recognising safeguarding concerns, and dealing with the 'messy' reality of complex patients.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • CbD — item (08) Managing Medical Complexity
    ✏️ item (08) Managing medical complexity:

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • CSR — under 'Management of Patients' and 'Context of Care'
    ✏️ Under 'Management of Patients' & 'Context of Care' the CS says…
  • Log Entries — select entries with STRONG evidence
    ✏️ select the ones that show STRONG evidence. In particular, look at: Clinical Case Reviews, LEAs/SEAs, Leadership/Management/Professionalism & QIA.
💡 Tip: Trainees often underestimate this capability. It's not just "the patient has multiple conditions." Look for entries where the trainee explicitly reflects on managing uncertainty (e.g. safety-netting when unsure), explaining risk (e.g. using QRISK or fracture risk tools), coordinating care across teams (e.g. who is the lead clinician?), or noticing a safeguarding concern within a complex presentation.
TW Team Working Management
📌
Reminder: Working effectively with other professionals to ensure good, seamless patient care. Includes the sharing of information with colleagues, seeking help and advice appropriately, providing help and advice for others, safe delegation, and working constructively within the MDT. The performance of this capability is best reported by colleagues themselves — hence the importance of MSF.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • CbD — item (09) Working with Colleagues in Teams
    ✏️ item (09) Working with colleagues in teams:

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • MSF — under 'Professional Behaviour'
    ✏️ Under Professional Behaviour, 'working with colleagues' themes are…
  • Log Entries — select entries with STRONG evidence
    ✏️ select ones that show STRONG evidence. In particular, look at: Clinical Case Reviews, Reflection on Feedback, Leadership/Management/Professionalism & QIA.
  • CSR — under 'Working with Colleagues and in Teams'
    ✏️ under 'Working with colleagues and in teams', the CS says…
PLT Performance, Learning and Teaching Professionalism
📌
Reminder: Maintaining the performance and effective continuing professional development of oneself and others. It's about how the trainee goes about meeting their own learning needs AND how they contribute to the teaching and development of others.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • No specific quantitative WPBA items — qualitative evidence predominates for this capability
    ✏️ No quantitative WPBA items — qualitative evidence predominates for this capability.

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • CSR — under 'Professionalism'
    ✏️ under 'Professionalism' the CS says the following things about this trainee as a learner…
  • MSF — under 'Professional Behaviour' and/or 'Clinical Performance'
    ✏️ under 'professional behaviour' and/or 'clinical performance', the following themes emerge about this trainee as a learner…
  • Learning Logs (LEAs & SEAs) — reflections on learning from patient encounters
    ✏️ Select log entries which show how you went about educating yourself after seeing particular patients: LEAs & SEAs arising from patient encounters, and reflections on feedback from others (e.g. on particular patients, or from the trainer after CbDs, COTs etc.)
  • Other Log Entries — courses/CPD, Academic Activity, Audit, QIA, Prescribing, Leadership
    ✏️ Link log entries about – attending courses/CPD, Academic Activity, Audit, PDSAs, Projects & QIA projects. Anything you have done in terms of Prescribing? Or Leadership?
OML Organisation, Management and Leadership Professionalism
📌
Reminder: Taking responsibility for organising yourself and developing systems to manage your workload. Includes taking initiative (leadership) in managing others and making systems easier and safer for everyone. For example: improving IT systems, leading a QI project, redesigning a care pathway, or taking on a named responsibility within the practice.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • CbD — item (11) Organisation, Management and Leadership
    ✏️ item (11) Organisation, Management, Leadership:

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • Log Entries — select entries with STRONG evidence
    ✏️ select ones that show STRONG evidence. In particular, look at: Clinical Case Reviews, LEAs/SEAs, Leadership/Management/Professionalism & QIA.
HPHS Holistic Practice, Health Promotion and Safeguarding Professionalism
📌
Reminder: The ability to operate in physical, psychological, socioeconomic and cultural dimensions simultaneously. It involves taking into account feelings as well as thoughts — both the patient's and the doctor's. Includes opportunistic health promotion, supporting self-management, and maintaining a constant awareness of safeguarding.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • CbD — item (12) Practising Holistically
    ✏️ item (12) Practising holistically:
  • COT — item (3) Psychosocial context; item (5) Patient's health understanding
    ✏️ item (3) psychosocial context:; item (5) Pt's health understanding:
  • AudioCOT — item (5) Places complaint in psychosocial context; item (6) Explores ICE
    ✏️ item (5) Places complaint in psycho-social contexts:; item (6): Explores ICE:
  • PSQ — item (4) Interested in you as a whole person; item (5) Fully understanding your concerns
    ✏️ item (4) Interested in you as a whole person:; item (5) Fully understanding your concerns:
  • SCA — Interpersonal Skills (IPS) domain
    ✏️ this trainee has passed, which is a good indicator that their 'practising holistically' skills are, on the whole, good enough. The score for IPS was x out of y

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • CSR — under 'Context of Care'
    ✏️ under 'Context of Care' the CS says the following about this trainee's "practising holistically" skills…
  • Learning Logs — select entries with STRONG evidence
    ✏️ select ones that show STRONG evidence. In particular, look at: Clinical Case Reviews, LEAs, SEAs, Safeguarding entries.
CHES Community Health and Environmental Sustainability Professionalism
📌
Reminder: Management of the health and social care of the practice population and local community — thinking beyond the individual patient to the health of groups and populations. Also includes environmental sustainability: awareness of the NHS net zero agenda and making prescribing or care decisions that are environmentally responsible. Community orientation comes to life through audits, QI projects and population-based reflections.

📊 Quantitative Evidence

Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.

  • CbD — item (13) Community Orientation / Population Health
    ✏️ item (13) Community orientation:

📝 Qualitative Evidence

Narrative evidence from reports, feedback and learning log entries.

  • Learning Logs — use entries where you talk about particular patient groups
    ✏️ Use learning logs where you talk about particular patient groups. Or where an encounter with a specific patient stimulates you into thinking about a patient group with the same characteristic e.g. diabetics, Bengali-speaking patients, new mums, teenagers etc.
  • Other Learning Logs — audits, QI projects, leadership activities
    ✏️ Think about other types of logs where you are looking at patient groups rather than individual patients. For example, in audits and projects. Leadership work and activities.
  • Environmental Sustainability Logs — net zero and greener practice reflections
    ✏️ reflections on greener prescribing choices (e.g. MDI vs DPI inhalers), reducing unnecessary investigations, or engagement with practice sustainability initiatives
  • CSR — under 'Context of Care'
    ✏️ under 'Context of Care' the following themes around Community Orientation emerge…
💡 ⚠️ August 2025 update: The RCGP updated this capability to include environmental sustainability alongside community health. Evidence of engagement with net zero NHS principles — such as reflecting on environmentally responsible prescribing — is now explicitly relevant here.

📊 Which WPBA Tool Maps to Which Capability?

A quick reference matrix. ● = strong / direct evidence for that capability; ○ = supporting evidence that contributes when triangulated; — = this tool does not directly assess this capability.

Capability rows (grouped by RDMp) vs tool columns: MSF · PSQ · COT/AudioCOT · CbD · MiniCEX · CSR · SCA · Logs

R — Relationships: Fitness to Practise (FtP) · An Ethical Approach (EA) · Communicating & Consulting (CC)
D — Diagnostics & Decisions: Data Gathering & Interpretation (DG) · Clinical Exam & Procedural Skills (CEPS) · Decision-Making & Diagnosis (DD)
M — Management & Complexity: Clinical Management (CM) · Medical Complexity (MC) · Team Working (TW)
p — Professionalism & Organisation: Performance, Learning & Teaching (PLT) · Organisation, Management & Leadership (OML) · Holistic Practice, Health Promo & Safeguarding (HPHS) · Community Health & Environmental Sustainability (CHES)

[The per-cell ● / ○ / — markings from the original interactive matrix are not reproduced in this text version.]

✍️ What Good Looks Like — Real Examples

Here are two Professional Capability write-ups from one of our trainees' ePortfolios which I was incredibly impressed with. Have a read and compare them with what you currently do. Is there anything you can learn to make yours even better?

Remember: when you provide evidence for the Capability Rating Scales, you are meant to be "showing yourself off" to panel members who do not know you. It's like a job interview, where you sell and prove your worth. So why rush it and undersell yourself? Spend time and write it up with care, thought and consideration. It will be read by many people, some of whom you will never meet, and they will decide whether you move up an ST grade in your GP training.

Communicating and Consulting (CC)

Real trainee write-up — rated excellent by the ARCP panel

Evidence

  • COTs — 8 out of 9 COTs have been marked as "Competent for Licensing."
  • PSQ was very good in all areas. Patients felt I allowed them to speak, tried to really understand their problem, and left satisfied. Mean score 5 for most items; median score 6.
  • CSR — under "Communication" says I regularly explore a patient's ideas, concerns and expectations. Also comments: "excellent at recognising the impact of the problem on a patient's life" and "generally makes good management plans in partnership with the patient."
  • MSF themes: "good communicator," "explains things well," "good at communicating the treatment plan."

Tagged Evidence

  • 06/05/2019 Learning log: Taught medical students consultation models
  • 30/01/2019 Learning log: Antibiotics wrongly prescribed by another GP
  • 10/05/2019 Learning log: Gentleman with unexplained back pain and lots of worries

Suggested Action Before Next Review

Now move on to difficult consultations and advanced consultation techniques. Start by finding a course on handling difficult consultations — then put that learning into practice.

Decision-Making and Diagnosis (DD)

Real trainee write-up — rated excellent by the ARCP panel

Evidence

  • CbDs: for item Making Diagnosis/Decisions = 9/11 C, 2/11 NFD, 1/11 IE
  • COTs (9 done in total): item 6, appropriate examination = 6/9 C, 2/9 E, 1/9 NFD; item 7, appropriate working diagnosis = 9/9 C
  • Previous CSR suggested working more independently and balancing when to seek reassurance. Current CSR: meets expectations for diagnostics, appropriate differential diagnosis, refers appropriately — with comments "clinically very good," "does not over or under investigate," and "more self-confident and less reliant on others."
  • MSF themes: good clinical knowledge, explores differential diagnosis very well, good at knowing when to ask for help, thinks laterally when needed

Tagged Evidence

  • 30/04/2019 Learning log: The girl who was taking the pill incorrectly
  • 20/03/2019 Learning log: Home visit man with haematuria — what next?
  • 25/04/2019 Learning log: Training in telephone triage

Suggested Action Before Next Review

I have made incredible progress in coping with uncertainty over the last 12 months. I would like to build on this further — perhaps read Tim Crossley's book 'I Don't Know What It Is But I Don't Think It's Serious' (confidence and decisiveness in primary care).

🔍 Can you see that…

  • These write-ups are concise and specific. As a result, they are not particularly long. Long write-ups usually indicate vagueness or waffle!
  • There is both quantitative and qualitative evidence within each write-up.
  • At the end, the trainee is quite specific about how they plan to build on each competency in the near future.

Can you do something similar? I'm sure you can. Speak to your Educational Supervisor or GP Trainer if there is anything here that confuses you or if there is something you want more help with.

🎯 Writing Good Action Points

Action points are where most trainees — and many supervisors — fall flat. Here's how to get them right.

❌ Woolly — Don't Write This

  • "Continue to build on my consultation skills."
  • "Try to improve in this area."
  • "Read more about ethical frameworks."
  • "Continue to reflect on my learning."

✅ SMART — Write This Instead

  • "Book and attend the 'Difficult Consultations' half-day course at Bradford by January 2026. Write a reflection afterwards and tag to CC."
  • "Complete at least one QI project this year and write a reflection on population-level impact. Tag to CHES."
  • "Ask trainer to select 2 ethically complex CbD cases in the next 3 months. Write structured reflection using the 4-principles framework."
  • "Complete Tim Crossley's book by end of rotation. Write 3 log entries linking key learning to clinical cases."
💡 The Easiest Way to Write an Action Point

Ask yourself: "What is the single next practical step that would move me forward in this capability?" Then write that step — with a specific what, how, and when. That's your action point.

⚠️ Common Mistakes — Don't Be That Trainee

These are the things trainees consistently get wrong. Every one of these has been seen repeatedly in real portfolios. Read them once — then don't do them.

🚫 Leaving It to the Last Minute

You cannot fill in 13 capability rating scales the night before your ES meeting without it showing. The prep work takes around 2 hours. Block the time in your diary at the start of each rotation.

🚫 Only Numbers, No Narrative

Writing "8/9 COTs competent" and nothing else is not evidence. Numbers need narrative — what themes emerged? What do they mean? What does the CS actually say? The numbers are the skeleton; the narrative is the flesh.

🚫 Vague Action Points

"Continue to build on this skill" is not an action point. It's a polite way of saying you haven't thought about it. Action points must be SMART: Specific, Measurable, Achievable, Relevant, Time-bound.

🚫 Skipping Rare Capabilities

FtP, EA, MC, OML and CHES often get minimal evidence because they're less obvious in daily work. But they still need evidencing. You may need to specifically seek out cases or reflections that cover them.

🚫 Misunderstanding Medical Complexity

MC is not just "the patient had many conditions." It's about managing uncertainty, explaining risk, coordinating care across teams, and recognising safeguarding within complexity. See the MC accordion above.

🚫 Forgetting CHES (Community Health)

Previously called Community Orientation — now includes environmental sustainability. Trainees either forget this entirely or write a very thin entry. Audit work and QI projects are your best friends here.

🚫 Not Tagging Log Entries

In FourteenFish, you must specifically tag your log entries to capabilities. It's not enough to have written a great log entry — the panel can only see what you've pointed them to. Tag as you write.

🚫 Not Reflecting Progress

Good write-ups show change over time. If your current review looks identical to the last one — same grades, same themes, same action points — that's a concern. Show the panel how you've grown.

🚫 Forgetting ST3 Action Points

Even if you're completing to CCT, you must still write action points for every capability. These become the basis of your Personal Development Plan for your very first GP appraisal as a qualified doctor.

🎙 From the Trainee Community — What Actually Helps

Distilled from UK GP training networks, deanery resources, trainee blogs, peer-reviewed research on WPBA, and the collective wisdom of GP educators across the country. Everything here aligns with official RCGP guidance — these are the practical insights that official documents don't always shout about.

📊 Which Capabilities Are Trainees Typically Under-Evidencing?

This chart reflects patterns consistently observed across UK training schemes. Every capability needs evidencing — the red and amber ones are where trainees most often fall short. Not because they lack the evidence — but because they haven't realised it existed.

🟢 Typically well evidenced by trainees:
  • CC — Communicating & Consulting: well covered — COT, PSQ and MSF all contribute
  • DG — Data Gathering & Interpretation: usually well covered through CbDs and COTs
  • DD — Decision-Making & Diagnosis: clinical cases generate natural evidence
  • CM — Clinical Management: almost every clinical log touches this
  • HPHS — Holistic Practice, Health & Safeguarding: PSQ and COT items cover this well

🟠 Variable — needs active seeking:
  • CEPS — Clinical Exam & Procedural Skills: 5 mandatory recordings needed — plan ahead
  • PLT — Performance, Learning & Teaching: needs diverse log types (courses, teaching, CPD)
  • TW — Team Working: often only via MSF — rich log potential missed

🔴 Frequently underweighted or forgotten:
  • MC — Medical Complexity: narrowly interpreted as "multimorbidity" only
  • EA — An Ethical Approach: rarely reflected upon unless prompted
  • FtP — Fitness to Practise: the most commonly thin or missing capability
  • OML — Organisation, Management & Leadership: seen as someone else's job — it isn't
  • CHES — Community Health & Environmental Sustainability: most frequently forgotten until ESR prep panic
💡 The 4-Underdog Rule

At every portfolio review, actively check your evidence for FtP, EA, OML and CHES. Trainees consistently neglect these four — not from incompetence, but from not spotting the opportunities. Most clinical situations contain evidence for at least two of them if you look carefully enough.

📐 The Ideal Log Entry — How to Proportion Your Writing

This is one of the clearest signals of reflection quality that supervisors use. Assessors across the country are looking for the same structural balance. The blocks below are proportional — aim for yours to look something like this.

  • ~15% — What happened
  • ~8% — What followed
  • ~47% — What I learned (use "I" — assessors notice)
  • ~30% — What I will do differently

Tip from UK deanery assessors: a quick way to judge reflection quality is to look for the word "I" in the "what I learned" section. Entries written in the passive voice or without personal ownership are a sign of surface-level reflection. Make it personal — it makes it real.

🔵 FourteenFish Capability Circles — What the Dashboard Is Telling You

The capability progress dashboard in FourteenFish uses coloured circles to show your evidence coverage at a glance. Here's what each state means — and what you should do when you see them.

0 — Empty (no evidence yet)

No log entries or assessments have been linked to this capability yet. If you're early in training, some of these are expected. If you're approaching an ESR, address these urgently.

1–2 — Grey (some evidence)

One or two pieces of evidence linked and validated. This shows activity but is typically insufficient for an ESR. Aim for at least 3 for each review period.

3+ — Green (sufficient evidence)

Three or more validated links — the minimum the RCGP expects per capability per review period. The circle turns green. But remember: 3 excellent entries always beats 10 weak ones.

🐟 FourteenFish Navigation Tips — From Trainee Experience

  • Link capabilities at the time of writing each log entry — not just at ESR prep. Saves hours later and keeps your circles updated in real time.
  • Check the capability progress screen weekly. It's the fastest way to spot gaps before they become a problem at review.
  • Use the FourteenFish mobile app to write log entries between consultations or on the commute. It works offline and syncs when you reconnect. Seriously, use it.
  • Log in 3–4 times per week during your first 2 weeks just to get familiar with the layout — before you have the pressure of adding content. This simple habit saves a lot of confusion later.
  • Check the Educator Notes section regularly. Your trainer and ES leave important feedback there, including concerns. Trainees who miss these look disengaged.
  • In FourteenFish, your ES must set up the review period before you can add self-rating scales. If this hasn't happened at the start of a new post — chase it immediately. Don't wait.

📅 The Portfolio Journey — Key Milestones Across 3 Years

A simplified timeline of the key portfolio events across GP training. The amber dots are the non-negotiable milestones — miss these and you risk an ARCP issue.

ST1
  • Start: set up FourteenFish; ES opens the review period; start logging
  • Monthly: 3 Clinical Case Reviews per month (36 per year)
  • 6 months: interim ESR (iESR) — self-rating plus ES review
  • Throughout: 4 CbDs, 4 COTs/MiniCEX, MSF, PSQ, CEPS
  • 12 months: full ESR; ARCP panel reviews evidence

ST2
  • New year: new ES Workbook; review period reset
  • 6 + 12 months: iESR, then full ESR + ARCP

ST3
  • Final year: 5 CATs, 7 COTs; AKT + SCA; OOH passport
  • Final ESR: 3 pieces of ST3 evidence per capability; PDP for your first appraisal
  • ARCP ✅: final outcome → CCT application

⚠️ Important: The ARCP Evidence Review Day happens one week BEFORE the panel. If evidence is missing on that day, you may receive an Outcome 5 — even if you add it before the actual panel date. Don't cut it close.

💡 Log Entry Habits That Actually Work

  • Write log entries within 24 hours of the clinical encounter. Your memory fades fast and the reflection quality drops with it.
  • Start with bullet points if the blank page intimidates you. Expand from there. The structure matters more than the prose style.
  • One good case can cover 3–4 capabilities if you reflect broadly enough. You don't always need a new case — you need a new lens on a case you already wrote about.
  • Routine cases are completely valid. An unremarkable consultation that you handled well is evidence of competence. Not every log entry needs to be an unusual or dramatic case.
  • "Little and often" beats "lots at the end" — every time, without exception. The evidence of sustainability impresses reviewers. A sudden burst of 12 entries in the last week of a post does not.
  • After a CbD or COT session, use the discussion (not just the formal assessment) to spark a log entry. The conversation with your trainer often reveals the deeper learning.

🏥 Hospital Post Portfolio Tips

  • Hospital consultants may never have used FourteenFish before. Don't assume they know what to do. Proactively email them a link to the assessor registration page and briefly explain the grading system — before the assessment, not after.
  • Don't forget you're training to be a GP even when working in a hospital. Every ward round, every complex patient, every MDT is GP-relevant. Reflect on it through a GP lens.
  • Hospital posts are excellent for evidencing Medical Complexity (MC), Team Working (TW), and Organisation, Management and Leadership (OML). These capabilities come alive in secondary care — use the opportunity.
  • MiniCEX in hospital posts directly generates evidence for DG, DD, CEPS and CM — the same capabilities you'd use COTs for in GP. Make the most of them.
  • If you're in a speciality where CEPS recordings are available (cardiology, respiratory, musculoskeletal), get them done. The 5 mandatory CEPS recordings must be completed by the end of training — start early.

🌍 IMG-Specific Insights

  • UK GP training places significant weight on reflective writing — a format many international training systems don't emphasise. This is a transferable skill, not an innate one. It genuinely improves with practice.
  • If you find writing in English harder than practising in it, try drafting your reflection in your first language and then translating. The reflection quality matters more than the language you first thought it in.
  • Research across UK deaneries found that IMGs are, on average, more positive about WPBA than UK graduates. The framework tends to work well for those who engage with it fully.
  • The ePortfolio can sometimes struggle to capture the true capabilities of a trainee when supervisor dynamics are variable. If you feel your portfolio doesn't reflect your abilities, speak to your TPD — not just your ES.
  • Some capabilities (especially OML and CHES) require understanding of how UK GP practices are organised and how local health systems work. Ask your trainer about the local context — then write about it.

⚠️ Hidden Time Bombs — Things That Catch Trainees Out Late

  • Safeguarding Level 3: Your 3-yearly certificate alone is NOT enough. You need an annual knowledge update AND a reflective log entry — every 12 months. This catches trainees out repeatedly.
  • Form R: Must be completed via TIS Self Service online — not the Word document version. The Word version is no longer accepted. Uploading the wrong version before your ARCP will cause problems.
  • ARCP Evidence Review Day: This happens one week before the panel. If evidence is missing at that point, you may receive Outcome 5 even if you add it the day before the actual panel. Panels do not reopen their review for late additions.
  • ST3 Capability Evidence: In your final ESR, the RCGP requires 3 pieces of evidence from the current ST3 review period for each capability. Evidence from a previous review period cannot be reused as primary evidence (though you can mention it in the free text).
  • OOH Passport: Some deaneries require a specific OOH passport document uploaded to Supporting Documentation — not just log entries. Check your local deanery requirements early in ST3.

🔬 What the Research Says — WPBA From the Inside

IMGs More Positive

Research found IMGs were significantly more positive towards WPBA than UK graduates — and showed no difference in WPBA attainment by ethnicity.

📊 Subjectivity Is Real

Some trainees feel their portfolio doesn't fully reflect their abilities because supervisor attitudes and rating styles vary between practices and posts.

✍️ Reflection Is a Skill

Studies confirm reflective writing is a learnable skill — not something you either "have" or "don't have." Trainees who struggle early usually improve significantly with feedback and practice.

🤝 Relationship Matters

The trainee–trainer relationship has a direct impact on portfolio quality. Trainees who have good educational dialogue with their ES consistently produce richer, more evidenced portfolios.

🎓 What Educational Supervisors Consistently Notice — and Rarely Say Out Loud

  • Trainees who check their Educator Notes regularly signal engagement and professional maturity. Trainees who never check them — and respond to concerns months late — signal the opposite.
  • A large number of thin, vague log entries is actually worse than a smaller number of rich, specific ones. Volume without quality creates extra work for the ES and communicates that you're treating the portfolio as a tick-box exercise.
  • The self-rating scales filled in quickly with one-line descriptions frustrate ARCP panels. The panels are reviewing up to 15 portfolios in a single sitting — they notice, and they remember, the ones that made the effort.
  • ESRs of trainees with consistent, spread-out portfolio activity write themselves. ESRs of trainees with a chaotic, end-of-rotation burst of activity are the most time-consuming — and the most concerning — for supervisors.
  • If you're struggling with the portfolio, your ES wants to know early — not at the ESR meeting. Raising concerns early allows them to help you. Raising them in week 11 of a 12-week post helps no-one.

💬 From the Training Community — Educator & Trainee Wisdom

Insights gathered from UK GP training educators, official training sessions, peer-reviewed research, trainee blogs, and GP training support platforms. All content aligns with RCGP guidance. Where insights come from community experience rather than official sources, this is clearly indicated.

🔺 Miller's Pyramid — Where WPBA Fits in the MRCGP

A widely-used framework by UK GP training educators — including the RCGP's own WPBA Clinical Lead — to explain what each component of the MRCGP tests. The capability rating scales sit firmly at the top of this pyramid.

  • DOES — WPBA → FourteenFish ePortfolio
  • SHOWS HOW — SCA (Simulated Consultation Assessment)
  • KNOWS HOW — AKT applied reasoning
  • KNOWS — AKT factual recall
🔺 Why This Matters for the Portfolio

Miller's Pyramid makes clear why WPBA is the hardest component to fake. The AKT tests what you know from a book. The SCA tests what you can do in a 10-minute simulation. But WPBA tests what you actually do — day after day, with real patients, under real pressure. That's why the evidence must be rich, varied, and built up consistently over time. As one RCGP educator puts it: "It is YOUR portfolio. Use it to demonstrate progression towards competence, learning and reflection. It is a professional document."

👋 Trainee insight: Many trainees assume passing the AKT and SCA means WPBA is just a formality. This is wrong. WPBA is an equal component of the MRCGP. Trainees who treat the portfolio as a box-ticking exercise have been asked to repeat posts or received developmental ARCP outcomes, even after passing both written exams.

🏁 ARCP Outcomes — What They Mean

The ARCP panel issues one of these six outcomes after reviewing your portfolio. Understanding each one removes the anxiety. Outcomes 1 and 6 are what you're aiming for. Outcome 5 is the most common avoidable one — it usually means missing evidence, not poor clinical performance.

Outcome 1 — Satisfactory Progress ✅
Making progress at the expected rate. No concerns. This is your target for every annual review until completion.

Outcome 2 — Developmental, No Extra Time Needed
Specific capabilities need more development, but no additional training time is required. A focused learning plan will be agreed.

Outcome 3 — Inadequate Progress ⚠️
Progress is below the expected rate AND additional training time is required. A serious but recoverable outcome with the right support.

Outcome 4 — Released From Programme
Released from the training programme, with or without specified capabilities met. Very rare — reserved for serious or persistent concerns.

Outcome 5 — Incomplete Evidence 🔴
The panel cannot assess progress because evidence is incomplete. The most common avoidable outcome — and entirely preventable with good portfolio habits.

Outcome 6 — Completion of Training 🎓
All required competences met; recommendation for CCT. Issued at your final ARCP once you have completed all three years.
⚠️ How Most Outcome 5s Are Caused

The vast majority of Outcome 5s are not caused by poor clinical performance — they are caused by missing administrative evidence: no ESR submitted within 2 weeks of the ARCP, missing Form R, incomplete mandatory requirements, or evidence added after the Evidence Review Day. Every one of these is preventable. Use the Bradford ES Checklist and the RCGP Mandatory Evidence Summary Sheet from Day 1 of training.

🎓 What ARCP Panels Look For — From Official Training Guidance

Compiled from official RCGP training sessions for ST1 trainees, delivered by senior ARCP assessors and WPBA clinical leads. This is what those panels are actually checking — and how to give them exactly what they need.

📋 What They Actually Review

  • All mandatory assessments completed at the correct stage
  • ESR submitted and released (not just drafted)
  • Learning logs entered regularly throughout the post — not in a last-minute burst
  • Evidence of capability coverage across all 13 — with gaps explained
  • PDP present, reviewed and updated in each review period
  • Form R completed correctly via TIS Self Service
  • CPR/AED and safeguarding certificates current with annual updates evidenced

📌 What Gets You a Good Outcome 1

  • Evidence of progression between reviews — not just presence of evidence
  • Log entries that are succinct, meaningful, and justify capability links specifically
  • Assessments spread evenly throughout the post — not clustered at the end
  • PDP objectives that are SMART and have been actively reviewed
  • Range of Clinical Experience Groups covered — not just the common ones
  • Meetings booked early and supervisors engaged proactively

🚩 What Raises Concern at ARCP

  • All assessments clustered in the last 2 weeks of a post
  • Log entries that are descriptive rather than reflective (a sign of minimal engagement)
  • Repeated NFD-Below Expectations grades without an explanatory narrative
  • Missing or thin entries for FtP, EA, OML or CHES
  • No response or acknowledgement of Educator Notes left by supervisors
  • PDP objectives identical across review periods — no evolution
  • Portfolio suddenly looking "complete" just before ARCP after months of inactivity

📊 The WPBA Grading System — Rethinking NFD

One of the most common sources of unnecessary anxiety for new GP trainees is misunderstanding what "NFD" means. The grading system is fundamentally different from anything you encountered in medical school or hospital training.

E — Excellent
Performance significantly above the expected standard for a GP at this stage. Rarely given — and genuinely exceptional when it is. In early training, receiving many Excellents can look implausible to ARCP panels.

C — Competent
Performance meets the expected standard. This is a positive grade — it means the trainee is on track. In later training, consistent Competent grades are what's needed for a satisfactory ARCP.

NFD — Needs Further Development
This is the EXPECTED grade in early training. It does NOT mean you have failed. It means you are developing — which is exactly what a trainee on a training programme should be doing. In ST1, most grades should be NFD. The trajectory towards Competent is what matters.

IE — Insufficient Evidence
Not enough evidence in the case to make a judgement. This is about the case choice or write-up — not necessarily poor performance. Reframe it as a prompt to choose a richer case or document more fully next time.
💡 The NFD Trap

Trainees who came from competitive medical school environments often feel devastated by their first NFD grade. Bradford VTS trainers, deanery educators, and experienced GP trainers all make the same point: an ST1 portfolio full of "Competent" grades is actually suspicious. It means either your cases were too easy, your assessor was being too generous, or the assessment didn't genuinely challenge you. NFD in ST1 is a sign that you're being assessed rigorously and developing appropriately — embrace it.

✍️ Smart Log Entry Strategies — From GP Training Educators & Trainee Networks

Compiled from UK GP training educators, trainee support platforms, and deanery guidance. Every point below aligns with RCGP and GMC guidance on reflective practice.

1. Keep Description Minimal — don't waffle in the "what happened" section. Assessors read this to understand context; they don't need a full clinical summary. Two to three sentences is usually enough.
2. Justify Links Specifically — capability linking is like a driving test: make it obvious. Don't just name the capability; quote the specific descriptor and explain precisely how your actions demonstrate it. Vague links get unlinked by supervisors.
3. Focus Reflection on 1–2 Points — trying to cover everything in one entry produces scattered, shallow reflection. Pick one or two genuine learning points and go deep. Focused entries are more valuable and more readable.
4. Think Beyond the Case — after reflecting on the specific case, ask: how does this apply to my wider practice? What patient group or population does this represent? This broader thinking strengthens entries and makes them evidence for more capabilities.
5. Include Successes Too — trainees often only write about difficult or unusual cases. Supervisors across the UK consistently note that good portfolios also include things that went well, with specific reflection on why they went well, so they can be replicated.
6. Use Ordinary Cases — you do not need dramatic or unusual cases. Routine, well-managed consultations are genuine evidence of competence. Trainees who only write about interesting cases end up with thin coverage of common presentations. Use what you see every day.
7. Pre-Write Justifications — at the time of writing the log entry, also write a brief justification for each capability link. This takes 2 minutes now and saves an hour at ESR prep time when the context has faded.
8. Colleague-Neutral Writing — do not write entries that are critical of named or identifiable colleagues. Frame learning from others' errors as systemic insights, not personal criticisms. Your portfolio is a professional document — write accordingly.

⭐ Hidden Quality Signals — What Separates Good Portfolios from Excellent Ones

These are the less-obvious quality markers that UK GP training educators consistently highlight. None of them require extra time — they just require a slightly different mindset.

⭐ Visible Professional Growth

  • Earlier reviews show developing understanding; later reviews show confident application — this trajectory is what CCT readiness looks like
  • Action points from previous ESRs are demonstrably acted on — not just carried forward unchanged
  • Self-rating narrative shows increasing insight and honest self-awareness over time
  • Capability write-ups become more nuanced and specific with each review period

💎 Small Details That Signal Professionalism

  • Log entry titles are clear and descriptive — not just "CBD reflection" or "clinical case". A title like "Challenging consultation — patient refusing cancer treatment" tells the panel everything at a glance
  • Entries are spell-checked. Portfolios with repeated spelling errors signal carelessness, even if the content is good
  • Educator Notes are acknowledged and responded to in subsequent log entries
  • Placement Planning Meeting log entry at the start of each new post (required but often missed)
  • Entries are shared promptly after writing — not left as drafts for weeks

🔗 Evidence That "Closes the Loop"

  • A learning need identified in one entry is followed by a later entry showing how it was addressed — this is the "closed learning loop" that supervisors and panels actively look for
  • PDP objectives link directly to specific log entries that demonstrate they were achieved
  • Leadership activity leads to a reflective log entry that links to OML capability evidence
  • MSF feedback themes are explicitly addressed in subsequent log entries and PDP updates

🧠 The "Intelligence Behind the Doing"

  • Don't just describe what you did — explain why you made that decision and what you were weighing up
  • For management decisions: include why you chose this option over others, especially if it was not standard first-line
  • For DD (Decision-Making): include the diagnostic uncertainty you navigated, not just the conclusion you reached
  • For TW (Team Working): describe how you communicated, what information you shared and why — not just that you referred to someone
  • This "reasoning narrative" is the thing most commonly missing from otherwise adequate portfolios
⚖️ Important: Your ePortfolio is a Professional Document

Concerns have circulated in trainee networks about the safety of reflective writing following high-profile cases. The RCGP and GMC have both emphasised that honest, reflective log entries — used within the ePortfolio system as intended — are an important and protected part of professional development. However, the key principle remains: your portfolio is a professional document, visible to supervisors and ARCP panels. Write with care and professionalism. Reflect genuinely, but do so in language you would be comfortable standing behind. The FourteenFish system also has built-in AI that scans for sensitive patient data before you share entries — use this as a safety prompt.

🎯 Per-Capability Community Tips — What Trainees Find Hardest

These insights come from trainee support platforms, training scheme resources, and UK GP educator guidance. They address the specific capabilities trainees most commonly ask about or get wrong.

FtP — Fitness to Practise: How to Actually Evidence It

FtP is the capability most consistently missing from trainee portfolios. The reason is straightforward: trainees assume it only applies to serious professional conduct matters. It doesn't. FtP evidence shows up in everyday moments — and once you know this, you'll find opportunities everywhere.

❌ What trainees write (wrong)

"I did not observe any fitness to practise issues during this post."

This is almost always wrong — and it means you've missed the evidence.

✅ What actually counts as FtP evidence
  • Recognising you were tired or stressed and adjusting your practice accordingly
  • Noticing a colleague struggling and taking appropriate action
  • Seeking supervision when unsure rather than proceeding alone
  • Reflecting on a case where your knowledge gap nearly led to a mistake
  • Any situation where you proactively protected patient safety from a performance risk

OML — Organisation, Management & Leadership: It's NOT Just About Big Projects

The most common OML mistake is waiting for a big QI project or audit before writing an OML entry. Meanwhile, daily clinical work is full of OML evidence that trainees simply don't recognise. Bradford VTS's capability cheat sheet is particularly clear on this.

Organisation Evidence (everyday)
  • How you manage your inbox/blood results/letters
  • Methods to keep on top of admin during a busy GP day
  • Prioritising when multiple patients need attention simultaneously
  • Efficient handover approaches you've developed
Management Evidence
  • Coordinating a complex patient's care across specialties
  • Improving a system or process within the practice
  • Taking responsibility for a task that wasn't strictly yours
  • Delegating appropriately and safely
Leadership Evidence
  • Stepping in to coordinate a complex patient's care when no one else was doing so
  • Identifying a safety issue and taking steps to address it
  • Contributing to a QI project or audit
  • Teaching or supporting a colleague or student
CHES Community Health & Environmental Sustainability — Evidencing the "New" Part

This capability was updated in August 2025 to explicitly include environmental sustainability. Many trainees know Community Health well but are unsure how to evidence the sustainability element. The good news: it doesn't require grand gestures.

Community Health Evidence
  • Any encounter that makes you think about a patient population, not just one patient
  • Audit that looks at a specific patient group
  • Reflection on health inequalities in your local area
  • QI project with population-level impact
Environmental Sustainability Evidence
  • Choosing a dry powder inhaler (DPI) over a metered-dose inhaler and reflecting on why
  • Avoiding an unnecessary investigation — and reflecting on its environmental cost
  • Engagement with practice net zero or sustainability initiatives
  • Reflection on any prescribing or investigation decision through a sustainability lens
Key insight: A single well-crafted log entry reflecting on the population-level impact of a clinical decision, and its environmental implications, can cover both aspects of CHES. You don't need two separate entries — you need one broad enough to cover both dimensions.
DD Decision-Making & Diagnosis — Show the Thinking, Not Just the Conclusion

This is the capability most commonly written up incorrectly. Trainees describe their diagnosis and management — but omit the decision-making process. Bradford VTS and multiple deanery guides are explicit: the panel wants the "intelligence behind the diagnosis," not just the diagnosis itself.

What to include in a DD entry (beyond the diagnosis):
  • Differential diagnosis: what else were you considering, and what moved you towards or away from each option?
  • Uncertainty management: if you were not certain, how did you use safety-netting, time, or investigation to manage that uncertainty safely?
  • Cognitive process: was this pattern recognition (System 1 thinking) or deliberate analytical reasoning (System 2)? What prompted you to switch?
  • Risk: were there any red flags you actively considered and excluded?
  • The moment of decision: what was the final factor that settled your working diagnosis?

Useful resources: the Bradford VTS Professional Capabilities Cheat Sheet includes specific guidance on what to write for DD, including Dual Process Theory, cognitive biases, and using time as a diagnostic tool. Well worth reading before your next CbD.

🎓 For Educational Supervisors & Trainers

Hard-won insights for supervisors conducting ES reviews and capability rating scale sign-offs.

📚 Train the Trainee from Day One

In your first meeting, go through: what the 13 capabilities mean; what evidence looks like; how to fill in the rating scales. If you do this well at the start, the next three years become significantly easier for both of you.

⚡ Use the Bradford ES Workbook

Always ask trainees to complete the Bradford ES Workbook before the meeting. It saves hours, focuses the conversation, and ensures nothing is missed. Don't do an ES meeting without it.

🎯 Quality of Evidence Over Quantity

The capability assessment is qualitative, not quantitative. Three excellent, specific, well-linked log entries are worth more than twelve vague ones. Teach trainees to write better — not just more.

🔴 3 or More NFD-Below → Refer to Panel

If you rate a trainee as NFD-Below Expectations in 3 or more capabilities, you must sign off the report as "unsatisfactory progress" and notify the TPD and scheme administrator. Do not soften this — it exists to protect both the trainee and patients.

🔍 Validate Meaningfully — But Selectively

When validating log entries against capabilities, only validate when there is a clear, strong indicator. If you award validation loosely, the rating scale data becomes unreliable — for both trainee and ARCP panel.

💬 The Narrative Matters as Much as the Grade

When writing up capabilities, resist the temptation to write only grades and totals. ARCP panels need to see the narrative — what themes emerged, what level of supervision was needed, what progress has been made. The numbers alone don't tell the story.

❓ Common Questions

How much do I need to write for each capability?
Aim for concise and specific. A well-written capability entry might be 3–5 sentences of quantitative evidence plus 3–5 sentences of qualitative narrative, followed by a specific action point. Long entries usually indicate vagueness or waffle. Look at the exemplars above — they are not long, but every word counts.
Do I need evidence for all 13 capabilities every review?
Yes — all 13 need some evidence in every ESR period. The RCGP expects at least 3 evidence links per capability per review (visible in FourteenFish as circles). Some capabilities are harder to evidence in certain posts (e.g. CHES in a hospital post) — discuss with your ES how to approach these, and use learning logs creatively.
What if I haven't passed my SCA or AKT yet?
Simply write "not yet sat" or "not yet passed" where relevant. Exam results are not mandatory for the capability rating scales — they are one helpful source of evidence among many. Focus on building the qualitative evidence through log entries and your WPBA grades instead.
I'm in a hospital post — how do I evidence GP-specific capabilities?
Use learning logs creatively. Any patient encounter in any setting can be reflected upon in a way that evidences GP capabilities. A complex patient in a hospital ward can generate evidence for MC (Medical Complexity). A ward round where you coordinated care evidences TW (Team Working). The capability is about the learning, not the setting.
My ES hasn't replied to my emails. What do I do?
Educational supervision is a mandatory part of GP training — your ES is obliged to meet with you. If there is persistent non-engagement, contact your TPD (Training Programme Director) or Scheme Administrator and document the attempts you've made in your ePortfolio. Do not suffer in silence or assume it's your fault.
I'm an ST3 completing to CCT. Do I still need action points?
Yes, absolutely. Even if you're completing training, you must write action points for every capability. The reason is practical: these action points will form the basis of your Personal Development Plan (PDP) for your very first GP appraisal as a qualified, independent GP. Think of the 'suggested action before next review' box as 'suggested action before first appraisal.'
What's the difference between NFD-Above Expectations and Excellent?
Needs Further Development — Above Expectations (NFD-AE) means the trainee is performing at a higher level than their current ST stage requires. Excellent means performance is at or above the standard expected for a qualified, independent GP. NFD-AE is actually a positive grade in early training — it means you're ahead of where you're expected to be, not behind.
What changed with the August 2025 curriculum update?
The RCGP updated the GP Curriculum with minor wording changes to some capabilities. No changes were made to the required numbers of WPBA assessments, and all evidence previously linked to capabilities remains valid. The CHES capability now more explicitly includes environmental sustainability. This page has been updated to reflect the current capability names and descriptions.

🏁 The Bits to Remember Tomorrow

1. Evidence for the Capability Rating Scales comes in two forms: quantitative (WPBA grades and scores) and qualitative (narrative themes from CSR, MSF, PSQ and log entries). You need both for every capability.

2. Items marked ★ in the templates above provide the strongest evidence. Always prioritise these — they carry the most weight with ARCP panels.

3. The Capability Rating Scales are your professional CV. You're showing yourself to people who don't know you. Specific, detailed write-ups with real examples make a strong impression. Generic summaries do not.

4. SMART action points are non-negotiable. Every capability needs a concrete next step — not a vague aspiration. If you're stuck, ask: "What is the single thing I could do in the next 3 months that would move me forward here?"

5. Don't forget CHES (Community Health and Environmental Sustainability) and MC (Medical Complexity). These are frequently underevidenced — but they matter. Audit work, QI projects, and case reviews involving uncertainty or population thinking will help you here.

6. Educational supervision is trainee-led. It's your responsibility to arrange meetings, prepare the portfolio, and complete the rating scales before the meeting. If you fail to prepare, the only person who loses out is you.

7. The ES Mapping Workbook (in the downloads above) is your most useful tracking tool. Use it from Day 1. It shows you at a glance which capabilities are well evidenced and which need more attention.

8. Even ST3 trainees completing to CCT must write action points for every capability — because these become your PDP for your first GP appraisal. The finish line for GP training is not the finish line for professional development.

Bradford VTS — A free educational resource created by Dr Ramesh Mehay | Updated April 2026

For educational use only. Always verify clinical information against current RCGP, GMC and NICE guidance.
