Finding the Evidence for the
Capability Rating Scales
"Yes, you do have to write something in those boxes. No, 'I'm quite good at this' is not acceptable." — Your helpful guide to doing it properly.
Handouts, Checklists & Teaching Resources
Everything you need — whether you're prepping for your ESR, helping a trainee, or building a tutorial. Download and use freely.
📂 Intro to Educational Supervision
📂 For Trainees
Includes the ES Mapping Workbook (use this every ESR period), the Form R quick guide, and the ePortfolio ARCP-readiness guide.
📂 ES Checklists
The Bradford ES Checklist, capability rating scales help sheet, and ST3 ARCP final checklist. These are the most heavily-used downloads on this whole section of the site.
Curated Links — Official & Informal
A hand-picked mix of official guidance and real-world GP training resources. Because sometimes the best pearls are not hiding in the official documents.
The RCGP's definitive page on all 13 capabilities with progression descriptors.
How the Clinical Supervisor's Report and Educational Supervisor's Report work.
The full WPBA framework including CATs, MiniCEX, COT, MSF and PSQ.
What grades mean, who can assess, and how to use them properly.
A clear breakdown of how many assessments you need at each ST year.
How to write SMART, meaningful action points — not vague waffle.
The simple framework for writing good learning log entries quickly.
Everything about learning logs — structure, examples, and reflection levels.
The full trainee guide to educational supervision and ESR preparation.
What happens at ARCP, what the panel looks for, and how to prepare.
Step-by-step technical guides for using your FourteenFish ePortfolio.
Practical examples of how trainees write capability evidence with FourteenFish screenshots.
⚡ If You Only Read One Section — Read This
The full picture in 90 seconds. Everything else on this page is just the detail behind these points.
- The Capability Rating Scales live in your FourteenFish ePortfolio, in the ESR Preparation section. You fill them in before your ES meeting — not during it.
- For each capability: write both quantitative evidence (WPBA grades) and qualitative evidence (narrative themes from CSR, MSF, logs).
- The purpose is to show yourself off — you're presenting your case to ARCP panel members who don't know you. Underselling yourself is not modesty; it's a missed opportunity.
- Items from structured WPBA tools (CbD, COT, MiniCEX) carry the most weight. Log entries and CSR themes support and contextualise them.
- Action points must be SMART and specific. "Continue to improve" is not an action point. "Attend a clinical supervision skills day by December and write a reflection" is.
- The ES Mapping Workbook (download above) helps you track coverage across all 13 capabilities and all Clinical Experience Groups. Use it from Day 1 of each new year.
- Even ST3 trainees completing to CCT must write action points — because they become your PDP for your very first GP appraisal.
- Educational supervision is your responsibility to initiate, not your ES's responsibility to chase you for.
💡 Why the Capability Rating Scales Matter
📋 They Drive Your ARCP
The ARCP panel reviews your ESR — and the Capability Rating Scales are a core part of it. This is how the panel decides whether you progress to the next ST year. A vague or empty rating scale is a red flag.
🔍 They Identify Gaps Early
When you can't find evidence for a particular capability, that's useful information. It means you haven't been generating that evidence — or haven't been reflecting on it. Identifying this in ST1 is much better than discovering it in ST3.
🎯 They Are Your Professional CV
The people reading your capability write-ups don't know you. They are making a professional judgement about you based on what you've written. Think of it as a job interview in written form.
🌱 They Shape Your Learning
Reviewing your capabilities regularly helps you notice which areas you're not developing. It shifts the focus from "doing WPBAs" to "using WPBAs to demonstrate growth." That's a much more powerful mindset.
🗺 Understanding the Framework
Two things you need to understand before you start: how the 13 capabilities are grouped (the RDMp framework), and how evidence flows through the system to reach your ARCP.
The RDMp Framework — 4 Groups, 13 Capabilities
The 13 Professional Capabilities are grouped into 4 themes, known as RDMp. Understanding these groups helps you ensure your evidence covers all areas — not just the ones you happen to enjoy.
- R — Relationships: how you engage with patients and colleagues
- D — Diagnostics: clinical reasoning and data use
- M — Management: breadth of care and real-world management
- p — professionalism: being a safe, organised, reflective doctor
How Evidence Flows to Your ARCP
Understanding this flow helps you see why each piece of evidence matters — and why gaps at the bottom affect everything above.
WPBA Tools at a Glance
| Tool | What It Assesses | Where Used |
|---|---|---|
| CbD (Case-based Discussion; the CAT, Care Assessment Tool, is the ST3 version) | Professional judgement, clinical reasoning, capability-mapped discussion | ST1, ST2, ST3 (primary and secondary care) |
| COT (Consultation Observation Tool) | Consultation skills, clinical reasoning, management — all GP capabilities | GP posts only (ST3 mainly; some ST1/2 GP) |
| AudioCOT | As per COT but using an audio-recorded consultation | GP posts |
| MiniCEX | Clinical examination, communication, judgement | Non-primary care posts (hospital) |
| MSF (Multi-Source Feedback) | Professional behaviour and clinical performance as seen by colleagues | At least once per training year |
| PSQ (Patient Satisfaction Questionnaire) | Communication and holistic care as experienced by patients | GP posts |
| CEPS (Clinical Examination & Procedural Skills) | 5 mandatory clinical examinations observed by a qualified assessor | Any post (must complete all 5 during training) |
| CSR (Clinical Supervisor's Report) | Overall narrative assessment of trainee performance in that post | End of every post |
| AKT | Applied clinical knowledge — pass/fail exam | Usually during ST3 (or earlier) |
| SCA | Simulated Consultation Assessment — pass/fail exam (replaced the RCA in 2023) | Usually during ST3 |
📖 How to Use the Ram's Capability Evidence Table
Navigate to the ESR Preparation section and open the Capability Rating Scales for your upcoming review period.
Click on the relevant capability accordion below to expand it. Read the template — it shows you exactly what sources to reference and what to write.
The prompts in italics show you what to fill in. Replace each prompt with your actual evidence. Be specific — use numbers, grades and named themes.
Items marked ★ provide the most robust evidence. Always include these if you have them. Qualitative sources support and contextualise the quantitative data.
In FourteenFish, tag 2–3 of your strongest log entries to each capability. These become your "Tagged Evidence" — the specific entries you're pointing the panel to.
Every capability needs at least one SMART action point — what's the next concrete step? See the Action Points section for guidance.
Insider Tip: Train the Trainee Early
Educational Supervisors: spend time in your very first meeting thoroughly going through (a) what the capabilities mean, (b) what evidence looks like, and (c) how to write up the rating scales. If you get the trainee well set-up from the beginning, the next three years run so much more smoothly. The Bradford ES Workbook in the downloads section is your best friend here.
🎯 All 13 Professional Capabilities
Click on any capability to expand it. Each one shows you exactly what to write in the Capability Rating Scales section of your FourteenFish ePortfolio — both quantitative WPBA grades and qualitative narrative evidence. Items marked ★ provide the strongest evidence and should always be included if available.
How to Read the Templates
The tool names (e.g. CbD, COT) tell you the source. The ✏️ prompts in grey show you exactly what to fill in. Replace each prompt with your own specific data — grades, numbers, themes. Be concise and specific. Remember: the goal is to show the ARCP panel the full picture of your competence in that area.
FtP Fitness to Practise Relationships ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- ★ CbD — item (1) Fitness to Practise
✏️ item (1) Fitness to practise:
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- CSR — under 'Professionalism'
✏️ with respect to the trainee's conduct, performance and health, the CS says…
- MSF — themes around conduct, performance and health are…
- Log Entries — select entries that show STRONG evidence
✏️ select ones that show STRONG evidence.
EA An Ethical Approach Relationships ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- ★ CbD — item (2) Maintaining an ethical approach
✏️ item (2) Maintaining an ethical approach:
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- CSR — under 'Professionalism'
✏️ under 'Professionalism' – ethical themes are…
- MSF — under 'Professional Behaviour'
✏️ under 'professional behaviour', ethical themes are…
- Log Entries — select entries with STRONG evidence
✏️ select ones that show STRONG evidence. In particular: Clinical Case Reviews, LEAs/SEAs, Reflection on feedback (esp complaints), QIA, Leadership/Management/Professionalism
CC Communicating and Consulting Relationships ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- ★ COT — Overall Assessment of Performance
✏️ item 'Overall Assessment of Performance':
- ★ AudioCOT — Overall Assessment of Performance
✏️ item 'Overall Assessment of Performance':
- ★ MiniCEX — item (2) Communication skills
✏️ item (2) Communication skills:
- ★ SCA — passed/not yet passed — Interpersonal Skills (IPS) domain
✏️ this trainee has passed, which is a good indicator that their communication skills were, on the whole, good enough. The score for IPS was x out of y
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- PSQ — communication skills themes are…
- CSR — under 'Communication'
✏️ communication skills themes are…
- MSF — the following communication skills themes emerge…
- Log Entries — select entries with STRONG evidence
✏️ select ones that show STRONG evidence. In particular, look at Clinical Case Reviews.
DG Data Gathering and Interpretation Diagnostics ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- ★ CbD — item (4) Data Gathering & Interpretation
✏️ item (4) Data Gathering & Interpretation:
- ★ COT — item (5) Includes/excludes significant conditions; item (6) Appropriate examination
✏️ item (5) Includes/excludes significant condition:; item (6) Examination:
- ★ AudioCOT — item (3) Identifies the reason for the call; item (7) History Taking
✏️ item (3) Identifies the reason for the call:; item (7) History Taking:
- ★ MiniCEX — item (3) Clinical Assessment (History and Examination)
✏️ item (3) Clinical Assessment (Hx & Ex):
- ★ SCA — Data Gathering (DG) domain
✏️ this trainee has passed, which is a good indicator that their Data Gathering skills were, on the whole, good enough. The score for DG was x out of y
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- CSR — under 'Clinical Assessment'
✏️ under 'Clinical Assessment' the CS says… The 'Level of Supervision' required is reported as…
- MSF — under 'Clinical Performance'
✏️ under 'Clinical Performance', comments around Data Gathering are:
- Log Entries — select entries with STRONG evidence
✏️ select ones that show STRONG evidence. In particular, look at: Clinical Case Reviews, LEAs/SEAs & CEPS.
CEPS Clinical Examination and Procedural Skills Diagnostics ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- ★ CEPS records — mandatory clinical examinations completed
✏️ This trainee has done XXX out of the 5 mandatory examinations.
- ★ COT — item (6) Appropriate physical or mental examination
✏️ item (6) appropriate physical or mental examination:
- ★ MiniCEX — item (3) Physical examination skills
✏️ item (3) physical examination skills:
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- CSR — under 'Clinical Assessment'
✏️ under 'Clinical Assessment' – the CS says the following about examination skills…
- MSF — the following comments are made about examination skills…
- Log Entries — select entries with STRONG evidence
✏️ select ones that show STRONG evidence. In particular, look at Clinical Case Reviews & CEPS.
DD Decision-Making and Diagnosis Diagnostics ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- ★ CbD — item (6) Making Diagnoses/Decisions
✏️ item (6) Making diagnoses/decisions:
- ★ COT — item (6) Appropriate examination; item (7) Appropriate working diagnosis
✏️ item (6) Appropriate examination:; item (7) Appropriate working diagnosis:
- ★ AudioCOT — item (8) Appropriate working diagnosis; item (9) Creates an appropriate treatment plan; item (11) Safety netting/follow-up
✏️ item (8) appropriate working diagnosis:; item (9) creates an appropriate, effective treatment plan:; item (11) safety netting/FU:
- ★ MiniCEX — item (7) Clinical judgement
✏️ item (7) clinical judgement:
- ★ SCA — overall clinical decision-making reflected in CM and DG domains
✏️ This trainee has/has not passed the SCA, which provides evidence of good decision-making skills.
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- CSR — under 'Clinical Assessment', 'Management of Patients' and 'Context of Care'
✏️ under 'Clinical Assessment', 'Management of Patients' and 'Context of Care', the CSR says the following about Decision & Diagnosis skills… The 'Level of Supervision' required is reported as…
- MSF — under 'Clinical Performance'
✏️ under 'Clinical Performance', themes around Diagnoses/Decisions are…
- Log Entries — select entries with STRONG evidence
✏️ select ones that show STRONG evidence. In particular, look at: Clinical Case Reviews, LEAs/SEAs, Leadership/Management/Professionalism & Prescribing.
CM Clinical Management Management ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- ★ CbD — item (7) Clinical Management
✏️ item (7) Clinical management:
- ★ MiniCEX — item (4) Overall clinical care
✏️ item (4) overall clinical care:
- ★ COT — item (10) Appropriate management plan and follow-up
✏️ item (10) Appropriate management plan & FU:
- ★ AudioCOT — item (9) Creates an appropriate, effective, mutually acceptable treatment plan
✏️ item (9) creates an appropriate, effective, mutually acceptable treatment plan:
- ★ AKT — passed/not yet passed — clinical questions domain
✏️ this trainee has passed, which is a good indicator that their clinical management skills are, on the whole, good enough. The score for clinical questions was x out of y
- ★ SCA — Clinical Management (CM) domain
✏️ this trainee has passed, which is a good indicator that their clinical management skills are, on the whole, good enough. The score for CM was x out of y
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- CSR — under 'Clinical Assessment', 'Management of Patients' and 'Context of Care'
✏️ Under 'Clinical Assessment', 'Management of Patients' and 'Context of Care', the following Clinical Management themes emerge… The 'Level of Supervision' is reported as…
- MSF — under 'Clinical Performance'
✏️ Under 'Clinical Performance', the following 'clinical management' themes emerge…
- Log Entries — select entries with STRONG evidence
✏️ select ones that show STRONG evidence. Nearly any type of Learning Log can work here, since most entries touch on the management of a condition in primary care.
MC Medical Complexity Management ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- ★ CbD — item (08) Managing Medical Complexity
✏️ item (08) Managing medical complexity:
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- CSR — under 'Management of Patients' and 'Context of Care'
✏️ Under 'Management of Patients' & 'Context of Care' the CS says…
- Log Entries — select entries with STRONG evidence
✏️ select the ones that show STRONG evidence. In particular, look at: Clinical Case Reviews, LEAs/SEAs, Leadership/Management/Professionalism & QIA.
TW Team Working Management ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- ★ CbD — item (09) Working with Colleagues in Teams
✏️ item (09) Working with colleagues in teams:
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- MSF — under 'Professional Behaviour'
✏️ Under 'Professional Behaviour', 'working with colleagues' themes are…
- Log Entries — select entries with STRONG evidence
✏️ select ones that show STRONG evidence. In particular, look at: Clinical Case Reviews, Reflection on Feedback, Leadership/Management/Professionalism & QIA.
- CSR — under 'Working with Colleagues and in Teams'
✏️ under 'Working with colleagues and in teams', the CS says…
PLT Performance, Learning and Teaching Professionalism ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- No specific quantitative WPBA items — qualitative evidence predominates for this capability
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- CSR — under 'Professionalism'
✏️ under 'Professionalism' the CS says the following things about this trainee as a learner…
- MSF — under 'Professional Behaviour' and/or 'Clinical Performance'
✏️ under 'professional behaviour' and/or 'clinical performance', the following themes emerge about this trainee as a learner…
- Learning Logs (LEAs & SEAs) — reflections on learning from patient encounters
✏️ Select log entries which show how you went about educating yourself after seeing particular patients: LEAs & SEAs written as a result of seeing patients, and reflections on feedback from others, e.g. on particular patients or from the trainer after CbDs, COTs etc.
- Other Log Entries — courses/CPD, Academic Activity, Audit, QIA, Prescribing, Leadership
✏️ Link log entries about – attending courses/CPD, Academic Activity, Audit, PDSAs, Projects & QIA projects. Anything you have done in terms of Prescribing? Or Leadership?
OML Organisation, Management and Leadership Professionalism ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- ★ CbD — item (11) Organisation, Management and Leadership
✏️ item (11) Organisation, Management, Leadership:
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- Log Entries — select entries with STRONG evidence
✏️ select ones that show STRONG evidence. In particular, look at: Clinical Case Reviews, LEAs/SEAs, Leadership/Management/Professionalism & QIA.
HPHS Holistic Practice, Health Promotion and Safeguarding Professionalism ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- ★ CbD — item (12) Practising Holistically
✏️ item (12) Practising holistically:
- ★ COT — item (3) Psychosocial context; item (5) Patient's health understanding
✏️ item (3) psychosocial context:; item (5) Pt's health understanding:
- ★ AudioCOT — item (5) Places complaint in psychosocial context; item (6) Explores ICE
✏️ item (5) Places complaint in psycho-social context:; item (6) Explores ICE:
- ★ PSQ — item (4) Interested in you as a whole person; item (5) Fully understanding your concerns
✏️ item (4) Interested in you as a whole person:; item (5) Fully understanding your concerns:
- ★ SCA — Interpersonal Skills (IPS) domain
✏️ this trainee has passed, which is a good indicator that their 'practising holistically' skills are, on the whole, good enough. The score for IPS was x out of y
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- CSR — under 'Context of Care'
✏️ under 'Context of Care' the CS says the following about this trainee's "practising holistically" skills…
- Learning Logs — select entries with STRONG evidence
✏️ select ones that show STRONG evidence. In particular, look at: Clinical Case Reviews, LEAs, SEAs, Safeguarding entries.
CHES Community Health and Environmental Sustainability Professionalism ▼
📊 Quantitative Evidence
Structured assessments that generate grades or scores. Items marked ★ provide the strongest evidence.
- ★ CbD — item (13) Community Orientation / Population Health
✏️ item (13) Community orientation:
📝 Qualitative Evidence
Narrative evidence from reports, feedback and learning log entries.
- Learning Logs — use entries where you talk about particular patient groups
✏️ Use learning logs where you talk about particular patient groups, or where an encounter with a specific patient stimulates you into thinking about a patient group with the same characteristic, e.g. diabetics, Bengali-speaking patients, new mums, teenagers etc.
- Other Learning Logs — audits, QI projects, leadership activities
✏️ Think about other types of logs where you are looking at patient groups rather than individual patients: for example, audits, projects, and leadership work and activities.
- Environmental Sustainability Logs — net zero and greener practice reflections
✏️ reflections on greener prescribing choices (e.g. MDI vs DPI inhalers), reducing unnecessary investigations, or engagement with practice sustainability initiatives
- CSR — under 'Context of Care'
✏️ under 'Context of Care' the following themes around Community Orientation emerge…
📊 Which WPBA Tool Maps to Which Capability?
A quick reference matrix. ● = strong / direct evidence for that capability; ○ = supporting evidence that contributes when triangulated; — = this tool does not directly assess this capability.
| Capability | MSF | PSQ | COT / AudioCOT | CbD | MiniCEX | CSR | SCA | Logs |
|---|---|---|---|---|---|---|---|---|
| R — Relationships | | | | | | | | |
| Fitness to Practise (FtP) | ● | — | — | ● | — | ● | — | ○ |
| An Ethical Approach (EA) | ● | — | — | ● | — | ● | — | ○ |
| Communicating & Consulting (CC) | ● | ● | ● | ○ | ● | ● | ● | ○ |
| D — Diagnostics & Decisions | | | | | | | | |
| Data Gathering & Interpretation (DG) | ○ | — | ● | ● | ● | ● | ● | ○ |
| Clinical Exam & Procedural Skills (CEPS) | ○ | — | ● | — | ● | ● | — | ○ |
| Decision-Making & Diagnosis (DD) | ○ | — | ● | ● | ● | ● | ● | ○ |
| M — Management & Complexity | | | | | | | | |
| Clinical Management (CM) | ○ | — | ● | ● | ● | ● | ● | ● |
| Medical Complexity (MC) | — | — | — | ● | ○ | ● | — | ● |
| Team Working (TW) | ● | — | — | ● | — | ● | — | ● |
| p — Professionalism & Organisation | | | | | | | | |
| Performance, Learning & Teaching (PLT) | ● | — | — | — | ○ | ● | — | ● |
| Organisation, Management & Leadership (OML) | — | — | — | ● | — | — | — | ● |
| Holistic Practice, Health Promo & Safeguarding (HPHS) | — | ● | ● | ● | — | ● | ● | ● |
| Community Health & Environmental Sustainability (CHES) | — | — | — | ● | — | ○ | — | ● |
✍️ What Good Looks Like — Real Examples
Here are two Professional Capability write-ups from one of our trainees' ePortfolios that impressed me enormously. Have a read and compare them with what you currently do. Is there anything you can learn to make yours even better?
Remember: when you provide evidence for the Capability Rating Scales, you are meant to be "showing yourself off" to panel members who do not know you. It is a bit like a job interview, where you are selling and proving your worth. So why rush it and undersell yourself? Spend time and write it up with care, thought and consideration. Impress the many people who will read it, some of whom you will never meet but who will be deciding whether you move up an ST grade in your GP training.
Communicating and Consulting (CC)
Real trainee write-up — rated excellent by the ARCP panel
Evidence
- COTs — 8 out of 9 COTs have been marked as "Competent for Licensing."
- PSQ was very good in all areas. Patients felt I allowed them to speak, tried to really understand their problem, and left satisfied. Mean score 5 for most items; median score 6.
- CSR — under "Communication" says I regularly explore a patient's ideas, concerns and expectations. Also comments: "excellent at recognising the impact of the problem on a patient's life" and "generally makes good management plans in partnership with the patient."
- MSF themes: "good communicator," "explains things well," "good at communicating the treatment plan."
Tagged Evidence
- 06/05/2019 Learning log: Taught medical students consultation models
- 30/01/2019 Learning log: Antibiotics wrongly prescribed by another GP
- 10/05/2019 Learning log: Gentleman with unexplained back pain and lots of worries
Suggested Action Before Next Review
Decision-Making and Diagnosis (DD)
Real trainee write-up — rated excellent by the ARCP panel
Evidence
- CbDs: for item Making Diagnosis/Decisions = 9/11 C, 2/11 NFD, 1/11 IE
- COTs (9 done in total): item 6 appropriate examination = 6/9 C, 2E, 1 NFD; item 7 Appropriate working diagnosis = 9/9 C
- Previous CSR suggested working more independently and balancing when to seek reassurance. Current CSR: meets expectations for diagnostics, appropriate differential diagnosis, refers appropriately — with comments "clinically very good," "does not over or under investigate," and "more self-confident and less reliant on others."
- MSF themes: good clinical knowledge, explores differential diagnosis very well, good at knowing when to ask for help, thinks laterally when needed
Tagged Evidence
- 30/04/2019 Learning log: The girl who was taking the pill incorrectly
- 20/03/2019 Learning log: Home visit man with haematuria — what next?
- 25/04/2019 Learning log: Training in telephone triage
Suggested Action Before Next Review
🔍 Can you see that…
- These write-ups are concise and specific. As a result, they are not particularly long. Long write-ups usually indicate vagueness or waffle!
- There is both quantitative and qualitative evidence within each write-up.
- At the end, the trainee is quite specific about how they plan to build on each competency in the near future.
Can you do something similar? I'm sure you can. Speak to your Educational Supervisor or GP Trainer if there is anything here that confuses you or if there is something you want more help with.
🎯 Writing Good Action Points
Action points are where most trainees — and many supervisors — fall flat. Here's how to get them right.
❌ Woolly — Don't Write This
- "Continue to build on my consultation skills."
- "Try to improve in this area."
- "Read more about ethical frameworks."
- "Continue to reflect on my learning."
✅ SMART — Write This Instead
- "Book and attend the 'Difficult Consultations' half-day course at Bradford by January 2026. Write a reflection afterwards and tag to CC."
- "Complete at least one QI project this year and write a reflection on population-level impact. Tag to CHES."
- "Ask trainer to select 2 ethically complex CbD cases in the next 3 months. Write structured reflection using the 4-principles framework."
- "Complete Tim Crossley's book by end of rotation. Write 3 log entries linking key learning to clinical cases."
The Easiest Way to Write an Action Point
Ask yourself: "What is the single next practical step that would move me forward in this capability?" Then write that step — with a specific what, how, and when. That's your action point.
⚠️ Common Mistakes — Don't Be That Trainee
These are the things trainees consistently get wrong. Every one of these has been seen repeatedly in real portfolios. Read them once — then don't do them.
🚫 Leaving It to the Last Minute
You cannot fill in 13 capability rating scales the night before your ES meeting without it showing. The prep work takes around 2 hours. Block the time in your diary at the start of each rotation.
🚫 Only Numbers, No Narrative
Writing "8/9 COTs competent" and nothing else is not evidence. Numbers need narrative — what themes emerged? What do they mean? What does the CS actually say? The numbers are the skeleton; the narrative is the flesh.
🚫 Vague Action Points
"Continue to build on this skill" is not an action point. It's a polite way of saying you haven't thought about it. Action points must be SMART: Specific, Measurable, Achievable, Relevant, Time-bound.
🚫 Skipping Rare Capabilities
FtP, EA, MC, OML and CHES often get minimal evidence because they're less obvious in daily work. But they still need evidencing. You may need to specifically seek out cases or reflections that cover them.
🚫 Misunderstanding Medical Complexity
MC is not just "the patient had many conditions." It's about managing uncertainty, explaining risk, coordinating care across teams, and recognising safeguarding within complexity. See the MC accordion above.
🚫 Forgetting CHES (Community Health)
Previously called Community Orientation — now includes environmental sustainability. Trainees either forget this entirely or write a very thin entry. Audit work and QI projects are your best friends here.
🚫 Not Tagging Log Entries
In FourteenFish, you must specifically tag your log entries to capabilities. It's not enough to have written a great log entry — the panel can only see what you've pointed them to. Tag as you write.
🚫 Not Reflecting Progress
Good write-ups show change over time. If your current review looks identical to the last one — same grades, same themes, same action points — that's a concern. Show the panel how you've grown.
🚫 Forgetting ST3 Action Points
Even if you're completing to CCT, you must still write action points for every capability. These become the basis of your Personal Development Plan for your very first GP appraisal as a qualified doctor.
🎙 From the Trainee Community — What Actually Helps
Distilled from UK GP training networks, deanery resources, trainee blogs, peer-reviewed research on WPBA, and the collective wisdom of GP educators across the country. Everything here aligns with official RCGP guidance — these are the practical insights that official documents don't always shout about.
📊 Which Capabilities Are Trainees Typically Under-Evidencing?
This chart reflects patterns consistently observed across UK training schemes. Every capability needs evidencing — the red and amber ones are where trainees most often fall short. Not because they lack the evidence — but because they haven't realised it existed.
The 4-Underdog Rule
At every portfolio review, actively check your evidence for FtP, EA, OML and CHES. Trainees consistently neglect these four — not from incompetence, but from not spotting the opportunities. Most clinical situations contain evidence for at least two of them if you look carefully enough.
📐 The Ideal Log Entry — How to Proportion Your Writing
This is one of the clearest signals of reflection quality that supervisors use. Assessors across the country are looking for the same structural balance. The blocks below are proportional — aim for yours to look something like this.
[Proportional block chart — section labels partially lost in extraction: "…Happened", "…Followed", "What I learned" (use "I" — assessors notice), "…Differently"]
Tip from UK deanery assessors: a quick way to judge reflection quality is to look for the word "I" in the "what I learned" section. Entries written in the passive voice or without personal ownership are a sign of surface-level reflection. Make it personal — it makes it real.
🔵 FourteenFish Capability Circles — What the Dashboard Is Telling You
The capability progress dashboard in FourteenFish uses coloured circles to show your evidence coverage at a glance. Here's what each state means — and what you should do when you see them.
No log entries or assessments have been linked to this capability yet. If you're early in training, some of these are expected. If you're approaching an ESR, address these urgently.
One or two pieces of evidence linked and validated. This shows activity but is typically insufficient for an ESR. Aim for at least 3 for each review period.
Three or more validated links — the minimum the RCGP expects per capability per review period. The circle turns green. But remember: 3 excellent entries always beat 10 weak ones.
🐟 FourteenFish Navigation Tips — From Trainee Experience
- Link capabilities at the time of writing each log entry — not just at ESR prep. Saves hours later and keeps your circles updated in real time.
- Check the capability progress screen weekly. It's the fastest way to spot gaps before they become a problem at review.
- Use the FourteenFish mobile app to write log entries between consultations or on the commute. It works offline and syncs when you reconnect. Seriously, use it.
- Log in 3–4 times per week during your first 2 weeks just to get familiar with the layout — before you have the pressure of adding content. This simple habit saves a lot of confusion later.
- Check the Educator Notes section regularly. Your trainer and ES leave important feedback there, including concerns. Trainees who miss these look disengaged.
- In FourteenFish, your ES must set up the review period before you can add self-rating scales. If this hasn't happened at the start of a new post — chase it immediately. Don't wait.
📅 The Portfolio Journey — Key Milestones Across 3 Years
A simplified timeline of the key portfolio events across GP training. The amber dots are the non-negotiable milestones — miss these and you risk an ARCP issue.
⚠️ Important: The ARCP Evidence Review Day happens one week BEFORE the panel. If evidence is missing on that day, you may receive an Outcome 5 — even if you add it before the actual panel date. Don't cut it close.
💡 Log Entry Habits That Actually Work
- Write log entries within 24 hours of the clinical encounter. Your memory fades fast and the reflection quality drops with it.
- Start with bullet points if the blank page intimidates you. Expand from there. The structure matters more than the prose style.
- One good case can cover 3–4 capabilities if you reflect broadly enough. You don't always need a new case — you need a new lens on a case you already wrote about.
- Routine cases are completely valid. An unremarkable consultation that you handled well is evidence of competence. Not every log entry needs to be an unusual or dramatic case.
- "Little and often" beats "lots at the end" — every time, without exception. Evidence of sustained, regular engagement impresses reviewers. A sudden burst of 12 entries in the last week of a post does not.
- After a CbD or COT session, use the discussion (not just the formal assessment) to spark a log entry. The conversation with your trainer often reveals the deeper learning.
🏥 Hospital Post Portfolio Tips
- Hospital consultants may never have used FourteenFish before. Don't assume they know what to do. Proactively email them a link to the assessor registration page and briefly explain the grading system — before the assessment, not after.
- Don't forget you're training to be a GP even when working in a hospital. Every ward round, every complex patient, every MDT is GP-relevant. Reflect on it through a GP lens.
- Hospital posts are excellent for evidencing Medical Complexity (MC), Team Working (TW), and Organisation, Management and Leadership (OML). These capabilities come alive in secondary care — use the opportunity.
- MiniCEX in hospital posts directly generates evidence for DG, DD, CEPS and CM — the same capabilities you'd use COTs for in GP. Make the most of them.
- If you're in a speciality where CEPS recordings are available (cardiology, respiratory, musculoskeletal), get them done. The 5 mandatory CEPS recordings must be completed by the end of training — start early.
🌍 IMG-Specific Insights
- UK GP training places significant weight on reflective writing — a format many international training systems don't emphasise. This is a transferable skill, not an innate one. It genuinely improves with practice.
- If you find writing in English harder than practising in it, try drafting your reflection in your first language and then translating. The reflection quality matters more than the language you first thought it in.
- Research across UK deaneries found that IMGs are, on average, more positive about WPBA than UK graduates. The framework tends to work well for those who engage with it fully.
- The ePortfolio can sometimes struggle to capture the true capabilities of a trainee when supervisor dynamics are variable. If you feel your portfolio doesn't reflect your abilities, speak to your TPD — not just your ES.
- Some capabilities (especially OML and CHES) require understanding of how UK GP practices are organised and how local health systems work. Ask your trainer about the local context — then write about it.
⚠️ Hidden Time Bombs — Things That Catch Trainees Out Late
- Safeguarding Level 3: Your 3-yearly certificate alone is NOT enough. You need an annual knowledge update AND a reflective log entry — every 12 months. This catches trainees out repeatedly.
- Form R: Must be completed via TIS Self Service online — not the Word document version. The Word version is no longer accepted. Uploading the wrong version before your ARCP will cause problems.
- ARCP Evidence Review Day: This happens one week before the panel. If evidence is missing at that point, you may receive Outcome 5 even if you add it the day before the actual panel. Panels do not reopen their review for late additions.
- ST3 Capability Evidence: In your final ESR, the RCGP requires 3 pieces of evidence from the current ST3 review period for each capability. Evidence from a previous review period cannot be reused as primary evidence (though you can mention it in the free text).
- OOH Passport: Some deaneries require a specific OOH passport document uploaded to Supporting Documentation — not just log entries. Check your local deanery requirements early in ST3.
🔬 What the Research Says — WPBA From the Inside
IMGs More Positive
Research found IMGs were significantly more positive towards WPBA than UK graduates — and showed no difference in WPBA attainment by ethnicity.
Subjectivity Is Real
Some trainees feel their portfolio doesn't fully reflect their abilities because supervisor attitudes and rating styles vary between practices and posts.
Reflection Is a Skill
Studies confirm reflective writing is a learnable skill — not something you either "have" or "don't have." Trainees who struggle early usually improve significantly with feedback and practice.
Relationship Matters
The trainee–trainer relationship has a direct impact on portfolio quality. Trainees who have good educational dialogue with their ES consistently produce richer, more evidenced portfolios.
🎓 What Educational Supervisors Consistently Notice — and Rarely Say Out Loud
- Trainees who check their Educator Notes regularly signal engagement and professional maturity. Trainees who never check them — and respond to concerns months late — signal the opposite.
- A large number of thin, vague log entries is actually worse than a smaller number of rich, specific ones. Volume without quality creates extra work for the ES and communicates that you're treating the portfolio as a tick-box exercise.
- Self-rating scales filled in hastily, with one-line descriptions, frustrate ARCP panels. Panels review up to 15 portfolios in a single sitting; they notice, and remember, the ones that made the effort.
- ESRs of trainees with consistent, spread-out portfolio activity write themselves. ESRs of trainees with a chaotic, end-of-rotation burst of activity are the most time-consuming — and the most concerning — for supervisors.
- If you're struggling with the portfolio, your ES wants to know early — not at the ESR meeting. Raising concerns early allows them to help you. Raising them in week 11 of a 12-week post helps no-one.
💬 From the Training Community — Educator & Trainee Wisdom
Insights gathered from UK GP training educators, official training sessions, peer-reviewed research, trainee blogs, and GP training support platforms. All content aligns with RCGP guidance. Where insights come from community experience rather than official sources, this is clearly indicated.
🔺 Miller's Pyramid — Where WPBA Fits in the MRCGP
A widely-used framework by UK GP training educators — including the RCGP's own WPBA Clinical Lead — to explain what each component of the MRCGP tests. The capability rating scales sit firmly at the top of this pyramid.
Why This Matters for the Portfolio
Miller's Pyramid makes clear why WPBA is the hardest component to fake. The AKT tests what you know from a book. The SCA tests what you can do in a 10-minute simulation. But WPBA tests what you actually do — day after day, with real patients, under real pressure. That's why the evidence must be rich, varied, and built up consistently over time. As one RCGP educator puts it: "It is YOUR portfolio. Use it to demonstrate progression towards competence, learning and reflection. It is a professional document."
👋 Trainee insight: Many trainees assume passing the AKT and SCA means WPBA is just a formality. This is wrong. WPBA is an equal component of the MRCGP. Trainees who treat the portfolio as a box-ticking exercise have been asked to repeat posts or received developmental ARCP outcomes, even after passing both written exams.
🏁 ARCP Outcomes — What They Mean
The ARCP panel issues one of these six outcomes after reviewing your portfolio. Understanding each one removes the anxiety. Outcomes 1 and 6 are what you're aiming for. Outcome 5 is the most common avoidable one — it usually means missing evidence, not poor clinical performance.
How Most Outcome 5s Are Caused
The vast majority of Outcome 5s are not caused by poor clinical performance — they are caused by missing administrative evidence: no ESR submitted within 2 weeks of the ARCP, missing Form R, incomplete mandatory requirements, or evidence added after the Evidence Review Day. Every one of these is preventable. Use the Bradford ES Checklist and the RCGP Mandatory Evidence Summary Sheet from Day 1 of training.
🎓 What ARCP Panels Look For — From Official Training Guidance
Compiled from official RCGP training sessions for ST1 trainees, delivered by senior ARCP assessors and WPBA clinical leads. This is what those panels are actually checking — and how to give them exactly what they need.
📋 What They Actually Review
- All mandatory assessments completed at the correct stage
- ESR submitted and released (not just drafted)
- Learning logs entered regularly throughout the post — not in a last-minute burst
- Evidence of capability coverage across all 13 — with gaps explained
- PDP present, reviewed and updated in each review period
- Form R completed correctly via TIS Self Service
- CPR/AED and safeguarding certificates current with annual updates evidenced
📌 What Gets You a Good Outcome 1
- Evidence of progression between reviews — not just presence of evidence
- Log entries that are succinct, meaningful, and justify capability links specifically
- Assessments spread evenly throughout the post — not clustered at the end
- PDP objectives that are SMART and have been actively reviewed
- Range of Clinical Experience Groups covered — not just the common ones
- Meetings booked early and supervisors engaged proactively
🚩 What Raises Concern at ARCP
- All assessments clustered in the last 2 weeks of a post
- Log entries that are descriptive rather than reflective (a sign of minimal engagement)
- Repeated NFD-Below Expectations grades without an explanatory narrative
- Missing or thin entries for FtP, EA, OML or CHES
- No response or acknowledgement of Educator Notes left by supervisors
- PDP objectives identical across review periods — no evolution
- Portfolio suddenly looking "complete" just before ARCP after months of inactivity
📊 The WPBA Grading System — Rethinking NFD
One of the most common sources of unnecessary anxiety for new GP trainees is misunderstanding what "NFD" means. The grading system is fundamentally different from anything you encountered in medical school or hospital training.
The NFD Trap
Trainees who came from competitive medical school environments often feel devastated by their first NFD grade. Bradford VTS trainers, deanery educators, and experienced GP trainers all make the same point: an ST1 portfolio full of "Competent" grades is actually suspicious. It means either your cases were too easy, your assessor was being too generous, or the assessment didn't genuinely challenge you. NFD in ST1 is a sign that you're being assessed rigorously and developing appropriately — embrace it.
✍️ Smart Log Entry Strategies — From GP Training Educators & Trainee Networks
Compiled from UK GP training educators, trainee support platforms, and deanery guidance. Every point below aligns with RCGP and GMC guidance on reflective practice.
⭐ Hidden Quality Signals — What Separates Good Portfolios from Excellent Ones
These are the less-obvious quality markers that UK GP training educators consistently highlight. None of them require extra time — they just require a slightly different mindset.
⭐ Visible Professional Growth
- Earlier reviews show developing understanding; later reviews show confident application — this trajectory is what CCT readiness looks like
- Action points from previous ESRs are demonstrably acted on — not just carried forward unchanged
- Self-rating narrative shows increasing insight and honest self-awareness over time
- Capability write-ups become more nuanced and specific with each review period
💎 Small Details That Signal Professionalism
- Log entry titles are clear and descriptive — not just "CbD reflection" or "clinical case". A title like "Challenging consultation — patient refusing cancer treatment" tells the panel everything at a glance
- Entries are spell-checked. Portfolios with repeated spelling errors signal carelessness, even if the content is good
- Educator Notes are acknowledged and responded to in subsequent log entries
- Placement Planning Meeting log entry at the start of each new post (required but often missed)
- Entries are shared promptly after writing — not left as drafts for weeks
🔗 Evidence That "Closes the Loop"
- A learning need identified in one entry is followed by a later entry showing how it was addressed — this is the "closed learning loop" that supervisors and panels actively look for
- PDP objectives link directly to specific log entries that demonstrate they were achieved
- Leadership activity leads to a reflective log entry that links to OML capability evidence
- MSF feedback themes are explicitly addressed in subsequent log entries and PDP updates
🧠 The "Intelligence Behind the Doing"
- Don't just describe what you did — explain why you made that decision and what you were weighing up
- For management decisions: include why you chose this option over others, especially if it was not standard first-line
- For DD (Decision-Making): include the diagnostic uncertainty you navigated, not just the conclusion you reached
- For TW (Team Working): describe how you communicated, what information you shared and why — not just that you referred to someone
- This "reasoning narrative" is the thing most commonly missing from otherwise adequate portfolios
Important: Your ePortfolio is a Professional Document
Concerns have circulated in trainee networks about the safety of reflective writing following high-profile cases. The RCGP and GMC have both emphasised that honest, reflective log entries — used within the ePortfolio system as intended — are an important and protected part of professional development. However, the key principle remains: your portfolio is a professional document, visible to supervisors and ARCP panels. Write with care and professionalism. Reflect genuinely, but do so in language you would be comfortable standing behind. The FourteenFish system also has built-in AI that scans for sensitive patient data before you share entries — use this as a safety prompt.
🎯 Per-Capability Community Tips — What Trainees Find Hardest
These insights come from trainee support platforms, training scheme resources, and UK GP educator guidance. They address the specific capabilities trainees most commonly ask about or get wrong.
FtP Fitness to Practise — How to Actually Evidence It ▼
FtP is the capability most consistently missing from trainee portfolios. The reason is straightforward: trainees assume it only applies to serious professional conduct matters. It doesn't. FtP evidence shows up in everyday moments — and once you know this, you'll find opportunities everywhere.
"I did not observe any fitness to practise issues during this post."
This is almost always wrong — and it means you've missed the evidence.
- Recognising you were tired or stressed and adjusting your practice accordingly
- Noticing a colleague struggling and taking appropriate action
- Seeking supervision when unsure rather than proceeding alone
- Reflecting on a case where your knowledge gap nearly led to a mistake
- Any situation where you proactively protected patient safety from a performance risk
OML Organisation, Management & Leadership — It's NOT Just About Big Projects ▼
The most common OML mistake is waiting for a big QI project or audit before writing an OML entry. Meanwhile, daily clinical work is full of OML evidence that trainees simply don't recognise. Bradford VTS's capability cheat sheet is particularly clear on this.
- How you manage your inbox/blood results/letters
- Methods to keep on top of admin during a busy GP day
- Prioritising when multiple patients need attention simultaneously
- Efficient handover approaches you've developed
- Coordinating a complex patient's care across specialties
- Improving a system or process within the practice
- Taking responsibility for a task that wasn't strictly yours
- Delegating appropriately and safely
- Stepping in to coordinate a complex patient's care when no one else was
- Identifying a safety issue and taking steps to address it
- Contributing to a QI project or audit
- Teaching or supporting a colleague or student
CHES Community Health & Environmental Sustainability — Evidencing the "New" Part ▼
This capability was updated in August 2025 to explicitly include environmental sustainability. Many trainees know Community Health well but are unsure how to evidence the sustainability element. The good news: it doesn't require grand gestures.
- Any encounter that makes you think about a patient population, not just one patient
- Audit that looks at a specific patient group
- Reflection on health inequalities in your local area
- QI project with population-level impact
- Choosing a dry powder inhaler (DPI) over a metered-dose inhaler and reflecting on why
- Avoiding an unnecessary investigation — and reflecting on its environmental cost
- Engagement with practice net zero or sustainability initiatives
- Reflection on any prescribing or investigation decision through a sustainability lens
DD Decision-Making & Diagnosis — Show the Thinking, Not Just the Conclusion ▼
This is the capability most commonly written up incorrectly. Trainees describe their diagnosis and management — but omit the decision-making process. Bradford VTS and multiple deanery guides are explicit: the panel wants the "intelligence behind the diagnosis," not just the diagnosis itself.
- Differential diagnosis: what else were you considering, and what moved you towards or away from each option?
- Uncertainty management: if you were not certain, how did you use safety-netting, time, or investigation to manage that uncertainty safely?
- Cognitive process: was this pattern recognition (System 1 thinking) or deliberate analytical reasoning (System 2)? What prompted you to switch?
- Risk: were there any red flags you actively considered and excluded?
- The moment of decision: what was the final factor that settled your working diagnosis?
Useful resources: the Bradford VTS Professional Capabilities Cheat Sheet includes specific guidance on what to write for DD, including Dual Process Theory, cognitive biases, and using time as a diagnostic tool. Well worth reading before your next CbD.
🎓 For Educational Supervisors & Trainers
Hard-won insights for supervisors conducting ES reviews and capability rating scale sign-offs.
📚 Train the Trainee from Day One
In your first meeting, go through: what the 13 capabilities mean; what evidence looks like; how to fill in the rating scales. If you do this well at the start, the next three years become significantly easier for both of you.
⚡ Use the Bradford ES Workbook
Always ask trainees to complete the Bradford ES Workbook before the meeting. It saves hours, focuses the conversation, and ensures nothing is missed. Don't do an ES meeting without it.
🎯 Quality of Evidence Over Quantity
The capability assessment is qualitative, not quantitative. Three excellent, specific, well-linked log entries are worth more than twelve vague ones. Teach trainees to write better — not just more.
🔴 3 or More NFD-Below → Refer to Panel
If you rate a trainee as NFD-Below Expectations in 3 or more capabilities, you must sign off the report as "unsatisfactory progress" and notify the TPD and scheme administrator. Do not soften this — it exists to protect both the trainee and patients.
🔍 Validate Meaningfully — But Selectively
When validating log entries against capabilities, only validate when there is a clear, strong indicator. If you award validation loosely, the rating scale data becomes unreliable — for both trainee and ARCP panel.
💬 The Narrative Matters as Much as the Grade
When writing up capabilities, resist the temptation to write only grades and totals. ARCP panels need to see the narrative — what themes emerged, what level of supervision was needed, what progress has been made. The numbers alone don't tell the story.
❓ Common Questions
How much do I need to write for each capability?
Do I need evidence for all 13 capabilities every review?
What if I haven't passed my SCA or AKT yet?
I'm in a hospital post — how do I evidence GP-specific capabilities?
My ES hasn't replied to my emails. What do I do?
I'm an ST3 completing to CCT. Do I still need action points?
What's the difference between NFD-Above Expectations and Excellent?
What changed with the August 2025 curriculum update?
🏁 The Bits to Remember Tomorrow
Evidence for the Capability Rating Scales comes in two forms: quantitative (WPBA grades and scores) and qualitative (narrative themes from CSR, MSF, PSQ and log entries). You need both for every capability.
Items marked ★ in the templates above provide the strongest evidence. Always prioritise these — they carry the most weight with ARCP panels.
The Capability Rating Scales are your professional CV. You're showing yourself to people who don't know you. Specific, detailed write-ups with real examples make a strong impression. Generic summaries do not.
SMART action points are non-negotiable. Every capability needs a concrete next step — not a vague aspiration. If you're stuck, ask: "What is the single thing I could do in the next 3 months that would move me forward here?"
Don't forget CHES (Community Health and Environmental Sustainability) and MC (Medical Complexity). These are frequently under-evidenced — but they matter. Audit work, QI projects, and case reviews involving uncertainty or population thinking will help you here.
Educational supervision is trainee-led. It's your responsibility to arrange meetings, prepare the portfolio, and complete the rating scales before the meeting. If you fail to prepare, the only person who loses out is you.
The ES Mapping Workbook (in the downloads above) is your most useful tracking tool. Use it from Day 1. It shows you at a glance which capabilities are well evidenced and which need more attention.
Even ST3 trainees completing to CCT must write action points for every capability — because these become your PDP for your first GP appraisal. The finish line for GP training is not the finish line for professional development.
Bradford VTS — A free educational resource created by Dr Ramesh Mehay | Updated April 2026
For educational use only. Always verify clinical information against current RCGP, GMC and NICE guidance.