Questionnaire Studies β€” Bradford VTS Quality Improvement

Questionnaire Studies
in GP Quality Improvement

Because asking the right questions is the first step to getting useful answers. (And no β€” Google Forms is not automatically a valid research methodology. But it can be, with a little thought.)

🎯 High-yield tips for GP training
πŸ‘₯ For Trainees, Trainers & TPDs
πŸ’Ž Knowledge not found elsewhere
πŸ“… Last updated: April 2026 πŸ“‹ Quality Improvement Activity (QIA) πŸ”¬ RCGP Curriculum Aligned
Questionnaire studies are one of the most accessible Quality Improvement Activities available to GP trainees. Done well, they reveal what patients or staff actually think β€” not what we assume they think. Done poorly, they produce numbers that feel meaningful but tell you nothing. This page shows you how to do them well β€” and what the trainee community has learned from doing them badly first.

πŸ“₯ Downloads

Sample questionnaires, markers, and step-by-step how-to guides β€” ready when you are.


Curated links

πŸ”— Web Resources

A hand-picked mix of official guidance and real-world GP training resources.

πŸ› Official guidance β€” QI in GP training
πŸ›  Questionnaire methodology
πŸ“ˆ GP quality improvement context
πŸ”§ Free digital tools for questionnaires

One-minute recall

⚑ Quick Summary

If you only read one section before your tutorial, make it this one.

πŸ“‹ Questionnaire Studies at a Glance

  • A questionnaire study collects data from a defined group using written questions to find areas for improvement.
  • It counts as a valid QIA β€” and can also form the backbone of a full QIP.
  • Good questionnaires have a clear aim, a defined sample, and appropriate question types. They are always piloted first.
  • Validity = does it measure what you think? Reliability = does it give consistent results?
  • A Likert scale (1–5 or 1–7) is the standard tool for measuring opinions. Report as frequencies, not just averages.
  • Always pilot on 3–5 people first. You will always find something to fix.
  • Aim for 20–30 responses minimum for a practice QIA.
  • The questionnaire is NOT the end. What you do with the results β€” the change you make β€” is what makes it a QIA.
  • The biggest mistake trainees make: collecting data but never closing the loop. Before AND after. Always.
  • πŸ“Š 20–30 β€” typical minimum responses for a practice QIA
  • πŸ” 2 rounds β€” baseline survey β†’ change β†’ repeat survey
  • ⏱️ ≀10 min β€” ideal completion time for patients
  • βœ… Pilot first β€” always test on 3–5 people before distributing

GP relevance

🎯 Why Questionnaire Studies Matter in GP

This is not just a portfolio exercise. Understanding how to design and use questionnaires is a real GP skill.

πŸ“Œ In real GP practice

As a GP, you will regularly need to understand what patients or staff actually experience. Questionnaires let you collect that information at scale β€” efficiently, and without interviewing everyone individually.

They appear in: patient experience surveys, service development, QOF reviews, and PCN-level improvement work. Learning to design one well now means you will be able to read, interpret, and critique them for the rest of your career.

πŸŽ“ In GP training (RCGP requirements)

Questionnaire studies count as a valid Quality Improvement Activity (QIA) in your RCGP trainee portfolio. They can also form the measurement tool inside a full QIP.

The RCGP curriculum (Evidence in Practice domain) explicitly lists questionnaire design as a skill trainees should develop. Your assessors want to see that you understand why you chose this method β€” not just that you did it.

πŸ“–
RCGP Curriculum β€” Evidence in Practice

GPs should understand "techniques such as pilot studies, questionnaire design, focus groups and consensus methods." This page maps directly to that requirement.


Core knowledge

πŸ“‹ What Is a Questionnaire Study?

A clear definition and how it fits into the world of quality improvement.

Definition

A questionnaire study is a structured data collection method that uses a set of written questions, given to a defined group of people, to gather information about their experiences, opinions, knowledge, or attitudes. In the GP quality improvement context, it is used to: identify problems, measure a starting point, understand the impact of a change, or hear what patients and staff actually think.

Questionnaire study vs other QIA methods

Method | Best used when | Main limit
Questionnaire study | You want opinions, attitudes, or knowledge levels at scale | Can't capture deep nuance or emotion
Clinical audit | You want to measure adherence to a defined clinical standard | Needs a standard to measure against
Notes review | You want to check documentation quality | Only captures what was recorded
PDSA cycle | You want to test and refine a change rapidly | Needs multiple quick cycles
Interview / focus group | You need deep qualitative understanding | Time-intensive; hard to scale
βœ…
When a questionnaire study works well
  • You want to understand patient or staff experience of a service
  • You are measuring knowledge levels in a team
  • You want to compare attitudes before and after a change
  • Your sample is too large for individual interviews
  • You need numbers to present at a practice meeting
⚠️
When a questionnaire study is the wrong tool
  • You want deep qualitative insight into complex feelings
  • Your sample is fewer than 5–10 people (use interviews)
  • The topic is highly sensitive (questionnaires may not feel safe enough)
  • You already know what patients think (you'd be confirming, not discovering)

Visual overview

πŸ“Š The Full Picture β€” At a Glance

A visual summary of how a questionnaire study fits into the QI cycle, what question types exist, and what makes data usable.

πŸ”„ The Questionnaire QI Cycle

1. Identify the problem β€” "Something isn't working well here…"
2. Design & distribute the questionnaire β€” baseline data
3. Analyse & present β€” share with the team. What does it mean?
4. Implement a change β€” specific, named. Small is fine!
5. Re-measure & reflect β€” did it improve? This closes the loop.
The loop β€” repeat as needed.

The QI cycle. The questionnaire is your measurement tool at steps 2 and 5. Without step 5, you have a survey β€” not a QIA.

❓ The 5 Question Types β€” When to Use Each One

  • Closed / Yes-No β€” βœ… Best for: facts, demographics, screening questions. πŸ“Š Analysis: frequencies. ⚠️ Watch out: captures extremes only. Example: "Have you had a flu jab this year?" (Y/N)
  • Likert scale β€” βœ… Best for: opinions, attitudes, satisfaction ratings. πŸ“Š Analysis: % scoring 4–5 out of 5. ⚠️ Watch out: don't just report means. Example: "My GP listened: 1 (poor) – 5 (great)"
  • Open-ended β€” βœ… Best for: the "why" behind numbers, new themes. πŸ“Š Analysis: thematic analysis. ⚠️ Watch out: max 1–2 per survey. Example: "What one thing could we improve?"
  • Multiple choice β€” βœ… Best for: categories, behaviours, preferences. πŸ“Š Analysis: % per option. ⚠️ Watch out: always add "Other". Example: "How did you book? Phone/Online/Walk-in"
  • Ranking β€” βœ… Best for: priorities, "what matters most?". πŸ“Š Analysis: average rank per item. ⚠️ Watch out: max 5 items to rank. Example: "Rank your top 3 service priorities"

The five question types you have to choose from. Most questionnaires use 2–3 types β€” not all five.

🎯 Validity vs Reliability β€” The Visual

  • Reliable but not valid β€” consistent results, but measuring the wrong thing
  • Neither reliable nor valid β€” scattered results: no consistency, no accuracy
  • Valid AND reliable βœ“ β€” this is what you are aiming for

You want your questionnaire to be both: measuring the right thing (valid) AND giving consistent results (reliable).

πŸ“¬ What Affects Your Response Rate? β€” A Visual Guide

Strategy | Impact on response rate
Keep questionnaire short (under 10 min) | Very High ↑↑
Send reminder after 10–14 days | High ↑↑
Use SMS/QR code link (vs paper) | High ↑↑
Distribute in waiting room (captive audience) | Good ↑
Guarantee anonymity explicitly | Moderate ↑
Explain why the feedback matters | Moderate ↑
Use plain, accessible language | Helpful ↑

The strategies that have the biggest effect on whether people actually complete your questionnaire.


Practical guide

πŸͺœ How to Run a Questionnaire Study: Step by Step

Follow this and your questionnaire study will be methodologically solid β€” and genuinely useful.

1

Define your aim β€” precisely

One clear sentence. What are you measuring, in whom, and why? "Improve patient satisfaction" is too vague. "Assess patient awareness of online booking among adults aged 18–65 at our practice over 2 weeks" is a real aim. No aim = no questionnaire.

2

Check if a validated tool already exists

Do not reinvent the wheel. The GP Patient Survey, Patient Satisfaction Questionnaire (PSQ), and EUROPEP instrument all contain validated questions you can adapt. Using or adapting a validated tool strengthens your methodology and saves you from designing bad questions by accident.

3

Draft your questions

Choose the right question types. Keep the questionnaire short β€” 8–15 questions is plenty for a practice QIA. Fewer questions = higher response rate. Avoid leading, double-barrelled, ambiguous, and unanswerable questions (see the writing rules section).

4

Write a SMART aim and consider a driver diagram

Before distributing, write your SMART aim: Specific, Measurable, Achievable, Realistic, Time-bound. If your project is a full QIP, a driver diagram helps map which factors your questionnaire can actually address β€” and shows your assessor that you have thought systematically.

5

Pilot the questionnaire β€” this is non-negotiable

Ask 3–5 people to complete the draft and ask: "Was anything confusing? Was any question hard to answer?" You will almost always find at least one problem. Fix it before distributing. Piloting is what separates a professional questionnaire from a first draft. It takes 20 minutes. Do not skip it.

6

Define your sample and plan distribution

Who will complete it? How will you reach them? Paper in waiting room, SMS link, QR code, email, direct distribution to staff. Decide on a minimum target β€” 20–30 responses is typically sufficient for a practice QIA. Talk to your practice manager early β€” they can often help enormously.

7

Consider consent, anonymity, and GDPR

For most small-scale practice QIAs, formal ethics approval is not needed. But always: ensure participation is voluntary, explain the purpose, confirm anonymity where appropriate, and follow your practice's data handling policy. Include a short information paragraph at the top of the questionnaire.

8

Distribute, collect, and send a reminder

Set a clear deadline (2–4 weeks). Record how many questionnaires you distributed so you can calculate your response rate. Send a reminder at 10–14 days β€” most non-responses are simply people who forgot.
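The response-rate arithmetic is trivial, but it belongs in your write-up, so make it explicit. A minimal sketch (the function name is illustrative, not from any named tool):

```python
def response_rate(distributed: int, completed: int) -> float:
    """Response rate as a percentage of questionnaires handed out."""
    if distributed <= 0:
        raise ValueError("distributed must be a positive count")
    return 100 * completed / distributed

# e.g. 40 handed out over 2 weeks, 28 returned by the deadline
print(f"Response rate: {response_rate(40, 28):.0f}%")  # prints "Response rate: 70%"
```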

9

Analyse your results and share with the team

For Likert and closed questions: use frequencies and percentages. For open questions: group responses into themes. Present findings at a team meeting. Use a simple chart β€” even a hand-drawn one is fine. The point is to share, discuss, and decide what to change.
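Grouping open responses into themes is a by-hand job, but the counting afterwards can be automated. A sketch with invented responses that have already been coded into themes:

```python
from collections import Counter

# Hypothetical open-question responses, each already coded into a theme by hand
themes = ["phone access", "phone access", "car parking", "phone access",
          "appointment length", "phone access", "car parking"]

counts = Counter(themes)
for theme, n in counts.most_common():  # most frequent theme first
    print(f"{theme}: {n} mention(s)")
```

The most frequent theme is usually the one worth acting on first.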

10

Implement a specific change

This is the quality improvement step. Agree a specific action based on your findings. It does not need to be a huge change β€” a poster, an updated template, a 10-minute team briefing, or an SMS to patients all count. The change must be real, named, and documented.

11

Re-measure β€” close the loop

Repeat the questionnaire β€” even with a smaller sample β€” after your intervention. Before AND after = quality improvement. Even a modest second survey (10–15 responses) transforms a simple survey into a proper QIA. This is the step most trainees skip. Do not skip it.

πŸ”„ Summary: Identify β†’ Design β†’ Pilot β†’ Distribute β†’ Analyse β†’ Act β†’ Re-measure β†’ Reflect. In that order. Every time.

Design toolkit

πŸ“Š Likert Scales β€” Everything You Need to Know

The single most used question format in healthcare questionnaires. Used well, they are powerful. Used badly, they produce numbers that look scientific but mean nothing.

The 5-Point Likert Scale β€” How It Looks

"My GP explained my treatment options clearly." 1 Strongly Disagree 2 Disagree 3 Neither / Neutral 4 Agree 5 Strongly Agree

The standard 5-point Likert scale. Always label ALL 5 points β€” never just the ends. Always include a neutral midpoint (3).

βœ… Key rules for Likert scales

  • Always include a neutral midpoint (3 on a 5-point scale)
  • Label ALL points clearly β€” not just the extremes
  • Keep the same scale direction throughout β€” never flip it mid-questionnaire
  • Use the same scale for related questions so you can compare them
  • 5 points is standard for GP QIAs; 7 points adds granularity for formal research
  • Frame items as positive statements: "My GP listened" not "My GP did not listen"
⚠️
The most common Likert mistake β€” reporting means

Reporting that "average satisfaction = 3.76 out of 5" sounds scientific. But Likert scales produce ordinal data β€” the gap between 1 and 2 is not necessarily equal to the gap between 4 and 5. Report as frequencies and percentages instead:

βœ… Better: "78% of respondents rated the service as 4 or 5 out of 5."

❌ Avoid: "Mean satisfaction score = 4.12."

πŸ’‘
How to report Likert results well
  • Report % scoring 4 or 5 (top-box scoring) as your headline figure
  • Show a breakdown of all 5 response options as a bar chart
  • Compare before vs after your intervention using the same metric
  • Acknowledge that a small sample limits generalisability β€” this is honest, not weak
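The reporting rules above reduce to a few lines of code. A sketch with invented responses, assuming one 5-point Likert item:

```python
from collections import Counter

# Hypothetical responses to one 5-point Likert item (1 = strongly disagree ... 5 = strongly agree)
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 4, 5, 4, 3]
n = len(responses)

freq = Counter(responses)
percentages = {score: round(100 * freq[score] / n) for score in range(1, 6)}
top_box = 100 * sum(1 for r in responses if r >= 4) / n  # % scoring 4 or 5

print(percentages)                                       # breakdown for the bar chart
print(f"{top_box:.0f}% rated the item 4 or 5 out of 5")  # headline figure
```

The `percentages` breakdown feeds the bar chart; `top_box` is the single headline number to compare before and after your intervention.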

πŸ“Š Example: Before vs After β€” How to Present Likert Results

% scoring 4 or 5 out of 5, before vs after:
  • GP explained clearly: 32% β†’ 74%
  • Online booking awareness: 28% β†’ 61%
  • Satisfied with appointment: 55% β†’ 78%

Report Likert results as % scoring 4 or 5 (top-box scoring). Side-by-side before/after bars show improvement clearly and work well in your portfolio write-up.


Methodology

βœ… Validity and Reliability

Two words your Educational Supervisor will ask about. Here is what they actually mean β€” and what you need to say.

🎯

Validity β€” "Does it measure what I think?"

A valid questionnaire accurately captures the thing you are trying to measure. Three types matter for a GP QIA:

Type | What it means | How to achieve it
Face validity | Questions look relevant to the topic | Ask 2–3 colleagues to review the draft
Content validity | Questions cover all key aspects of the topic | Review similar published questionnaires
Construct validity | Questions map correctly onto the concept | Use validated instruments where available
πŸ”

Reliability β€” "Would I get similar results again?"

A reliable questionnaire gives consistent results over time and across similar respondents. For a GP QIA, focus on:

Type | What it means | Practical approach
Test-retest | Same results if repeated in similar conditions | Re-test on a small sample 2 weeks later
Internal consistency | Related questions give correlated answers | Use Cronbach's alpha β€” only needed for formal research
πŸ“Œ

For a GP practice QIA, you do not need to calculate formal statistics. Showing that you understand the concepts β€” and that you piloted, used clear language, and pre-defined your response options β€” is entirely sufficient.
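As the note says, you do not need formal statistics for a practice QIA. For the curious, though, Cronbach's alpha is straightforward to compute from raw scores. A sketch using only the standard library (population variances; all data invented):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).

    `items` holds one list per question; each list contains the same
    respondents' scores in the same order.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score
    item_var = sum(pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Three related 5-point items answered by five respondents (invented data)
q1 = [4, 5, 3, 4, 2]
q2 = [4, 4, 3, 5, 2]
q3 = [5, 4, 2, 4, 3]
print(round(cronbach_alpha([q1, q2, q3]), 2))  # alpha >= 0.7 is conventionally "acceptable"
```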


Writing good questions

✏️ Writing Good Questions β€” The Rules

Small changes in wording make enormous differences to question quality. These rules separate useful data from useless noise.

Problem type | ❌ Weak | βœ… Better | Why it matters
Leading question | "Would you agree that our new system is better?" | "How would you rate the new system compared to the old one?" | Leading questions push respondents toward a particular answer
Double-barrelled | "Was the receptionist polite and helpful?" | Two separate questions: "Polite?" and "Helpful?" | Two questions in one = impossible to answer if the answers differ
Ambiguous | "Do you visit the surgery regularly?" | "How many times have you attended in the last 3 months?" | "Regularly" means different things to different people
Unanswerable | "Does the surgery follow NICE guidelines?" | "Was your treatment explained clearly to you?" | Patients cannot assess clinical guideline adherence
Negative phrasing | "My GP does not listen to me." (Agree/Disagree) | "My GP listens to my concerns." (Agree/Disagree) | Negatives confuse respondents β€” always use positive statements
Medical jargon | "Are you aware of the practice's QOF targets?" | "Has anyone discussed health screening checks with you?" | Patients do not know NHS acronyms β€” write in plain language
🩺
The "Would my patient understand this?" test

Before including any question, read it out as if explaining it to a patient with low health literacy. If you cannot explain the question simply, rewrite it. If it uses clinical terms, NHS processes, or acronyms β€” rewrite it. A confusing questionnaire produces garbage data, no matter how good the methodology.


Common mistakes

⚠️ Common Pitfalls β€” What Trainees Get Wrong

Every one of these has been seen in real trainee portfolios. Read them once and you will not need to learn them the hard way.

πŸ† The Pitfall Hierarchy β€” From Most Common to Least

1. No action taken (CRITICAL) β€” collecting data but making no change. This is the #1 ARCP rejection reason.
2. No pilot study (HIGH) β€” distributing without testing first. Always leads to at least one problem.
3. No re-measurement (HIGH) β€” change made but the loop never closed. Before AND after = QIA; before only = survey.
4. Too many questions / wrong section / thin reflection (MODERATE) β€” 40-question surveys, QIA in the Learning Log, "It went well." All common β€” all avoidable.

The higher up the pyramid, the more likely it is to result in your QIA being rejected or marked down at ARCP.

⚠️
Pitfall 1: No pilot study

Distributing without testing is the single most common damaging error. You will almost always find confusing questions in your first draft β€” but only if you actually ask someone to complete it. Give it to 3 people. It takes 20 minutes. It saves you weeks of bad data.

⚠️
Pitfall 2: No change after data collection

A survey without action is not a QIA. You must implement something β€” even small. A poster, a template update, a team briefing. Then re-measure. That is the quality improvement part.

⚠️
Pitfall 3: Too many questions

A 40-question form will get low response rates and careless answers by question 15. Keep it to 8–15 questions for a practice QIA. If you have 40 questions, your aim is not precise enough yet.

⚠️
Pitfall 4: Not reporting the response rate

Always record: how many distributed, how many completed, and therefore your response rate. Without this, your data cannot be interpreted. A low response rate is not automatically a problem β€” it just needs to be acknowledged honestly.

⚠️
Pitfall 5: Reporting Likert averages as precise data

Report "78% scored 4 or 5" not "average = 3.76." Likert data is ordinal β€” means can be misleading. Frequencies are honest and easy to understand.

⚠️
Pitfall 6: Forgetting ethics and consent basics

Even for a simple practice QIA: ensure participation is voluntary, explain the purpose, confirm anonymity, and follow GDPR. Include a short information paragraph at the top of the questionnaire itself.

🎯
What Trainers & TPDs Love to Hear
  • "I piloted the questionnaire with five people and revised three questions as a result."
  • "After collecting baseline data, we implemented [specific change] and repeated the questionnaire to measure impact."
  • "I acknowledged response bias as a limitation and explained why it may have affected my results."
  • "I used a 5-point Likert scale because I needed to measure degrees of satisfaction, not just a yes or no."
  • "Our response rate was 34%, which is in line with typical NHS patient survey benchmarks."

GP context

πŸ’‘ Real GP Examples of Questionnaire-Based QIAs

Concrete examples from GP training contexts. Use them as inspiration β€” not as templates to copy.

πŸ“‹ Example 1: Patient satisfaction with online appointment booking

Aim: Assess patient satisfaction with the new online booking system among adults aged 18–65 over 2 weeks.

Questions: Awareness of system (yes/no); ease of use (5-point Likert); preference vs. telephone (multiple choice); open question: "What could be improved?"

Key findings: 68% unaware of online booking; 82% of users rated it 4 or 5; most common open theme: "too many steps to log in."

Action: Receptionists created a simple "how to book online" leaflet; displayed on waiting room screens; SMS sent to all registered patients.

Re-measurement: 4 weeks later β€” online booking awareness rose from 32% to 60%.

βœ…

Strong QIA: clear aim, mixed question types, specific actionable finding, intervention, re-measurement.

🩺 Example 2: Staff knowledge of 2-week wait referral criteria

Aim: Assess knowledge of 2WW cancer referral criteria among all administrative and reception staff.

Questions: Knowledge questions (correct/incorrect/unsure) about referral criteria for colorectal, lung, and skin cancer.

Key findings: Significant gaps around skin cancer and 9/12 staff unsure when to escalate a patient call about potential cancer symptoms.

Action: 20-minute educational session at team meeting; laminated quick-reference guide at each reception desk.

Re-measurement: Repeat questionnaire 6 weeks later showed improvement across all three domains.

πŸ’‘

Staff surveys are underused. They have a 100% response rate, gold-standard data, and no waiting room required.

πŸ’Š Example 3: Patient understanding of their medications

Aim: Assess whether patients on 3+ regular medications understood the purpose of each medication.

Questions: Whether purpose of each had been explained (yes/no/partly); confidence managing medications (Likert 1–5); open question about concerns.

Key findings: 64% uncertain about the purpose of at least one medication; most common open theme: "I'm worried about side effects but haven't wanted to ask."

Action: Structured "medication explanation" prompt added to the medication review template.

βœ…

This example links to patient safety, shared decision-making, and the Holistic Practice capability β€” excellent portfolio diversity.

🧠 Example 4: Awareness of local mental health self-referral services

Aim: Assess patient awareness of self-referral Talking Therapies (IAPT) in a high-deprivation practice area.

Questions: Awareness of self-referral option (yes/no); whether GP had mentioned it (yes/no); comfort with self-referring (Likert 1–5); barriers (multiple choice + open).

Key findings: Only 26% aware they could self-refer; most common barrier: "I didn't know it was available to me."

Action: Information displayed in waiting room; reception team trained to mention Talking Therapies when patients called about low mood or anxiety.

βœ…

Covers community health, health inequalities, and mental health β€” three RCGP curriculum areas in one QIA.


Portfolio guidance

πŸ“‚ How to Write Up Your QIA for the 14Fish Portfolio

Structure your write-up to show you understand the methodology β€” not just that you sent out some forms.

Section | What to include
Background / Rationale | Why this topic? What gap or problem prompted it?
Aim | One SMART sentence: what, in whom, and why
Method | Question types, sample, distribution, time frame
Piloting | Who you tested on, what you changed
Results | Frequencies, %, themes; include your response rate
Interpretation | What do results mean? What surprised you?
Action taken | Specific named change β€” and when
Re-measurement | Results of follow-up survey post-intervention
Reflection | What worked? What would you do differently?
Limitations | Response bias, sample size, generalisability

❌ Weak portfolio entry

"I distributed a questionnaire to 25 patients. Results showed most patients were satisfied. I shared the results at a practice meeting."

Problems: No SMART aim. No methodology described. No specific action taken. No re-measurement. No reflection. This is a report, not a QIA.

βœ… Strong portfolio entry

"I designed a 10-question questionnaire to assess patient awareness of online booking, using closed and Likert questions. I piloted it with 5 staff members and revised 2 questions. I distributed 40 questionnaires over 2 weeks (response rate 70%). 68% were unaware of online booking. Following a team meeting, we created an information leaflet and added a prompt to our post-appointment text. I repeated the questionnaire 4 weeks later β€” awareness rose from 32% to 60%. I acknowledged response bias as a limitation and noted I would use SMS distribution next time."

πŸ’Ž
The phrase assessors look for

"As a result of this, we changed [X]." Everything else is context. This phrase β€” plus evidence of re-measurement β€” is what turns a report into a QIA. Write it explicitly. Do not make your assessor find it.

🚨
Always use the correct section on 14Fish ePortfolio

ARCP panels check the dedicated QIA learning log section. A QIA filed as a general Learning Log entry may not be found or counted β€” regardless of how good the content is. Always use the right section. And make sure your supervisor has assessed and signed off the entry before your ARCP deadline.


Governance

βš–οΈ Ethics and Governance β€” Do I Need Approval?

When formal ethics approval is NOT required

For most GP trainee questionnaire QIAs, formal NHS ethics approval is not needed when:

  • The activity is service evaluation or quality improvement β€” not original research
  • You are not collecting sensitive personal data beyond routine information
  • Participation is voluntary and anonymous
  • The purpose is clearly to improve care in your practice
πŸ“Œ

Key distinction: QIA = improving your service. Research = generating new generalisable knowledge. If your aim is to improve care in your practice, this is quality improvement. Use the HRA decision tool if you are unsure.

🚨 Always discuss with your supervisor first if:

  • Questionnaires involve children or vulnerable adults
  • Questions cover sensitive topics (mental health, self-harm, domestic abuse)
  • You plan to present or publish the data outside the practice
  • The questionnaire collects identifiable patient data
  • You are unsure whether it constitutes research

βœ… Minimum good practice β€” always include

  • Brief information paragraph at the top explaining the purpose
  • Statement that participation is voluntary and anonymous
  • Confirmation results are only used for practice quality improvement
  • Contact details if respondents have concerns
  • A note that not completing will not affect their care

Trainee community wisdom

πŸ’¬ What the Trainee Community Says

Patterns drawn from UK GP training communities, deanery forums, and trainee peer groups β€” every point aligns with RCGP and GMC guidance.

ℹ️
About this section

The insights below come from recurring themes in UK GP training communities β€” deanery forums, trainee peer groups, and training scheme discussions. Everything here has been checked against RCGP and GMC guidance. Only insights that do not conflict with official guidance are included.

πŸ”΄ Things trainees wish they had known earlier

πŸ’¬
From trainee experience β€” on choosing a topic

"The best questionnaire topics are ones where you genuinely don't know what the answer is going to be. If you already know what patients think, you're just confirming your own view β€” not doing quality improvement."

πŸ’¬
From trainee experience β€” on timing

"Don't leave it to the last rotation. I tried to do mine in the last 6 weeks of my post. There wasn't enough time to collect responses, implement a change, and then re-measure. Start early, even if you only have an outline idea."

πŸ’¬
From trainee experience β€” on scope

"I tried to survey the whole practice population about everything at once. I got 12 responses to a 30-question form. A colleague did 8 focused questions in the waiting room one afternoon and got 35. Smaller and focused wins every time."

πŸ’¬
From trainee experience β€” on staff surveys

"Everyone does patient surveys. I surveyed the reception team about which referral pathways they found confusing. The response rate was 100%, the data was gold, and my supervisor said it was the best QIA she'd reviewed that year. Don't ignore staff."

🟒 Tips that genuinely worked

πŸ’¬
From trainee experience β€” on piloting

"I handed my draft to three patients in the waiting room and asked them to fill it in while I watched. Two of them asked 'what does this mean?' about the same question. That told me everything I needed to know. Piloting takes 20 minutes. It saves you weeks."

πŸ’¬
From trainee experience β€” on the open question

"My Likert scales were all fine β€” 4s and 5s, nothing to do. But the final open question ('what could we do better?') gave me 12 responses all saying the same thing β€” patients couldn't reach us by phone in the morning. That one question generated my entire QIA."

πŸ’¬
From trainee experience β€” on the practice manager

"I asked the practice manager for help early on. She distributed the questionnaire for me, chased responses, and printed summaries. I got 48 completed forms without having to stand in the waiting room myself. The practice manager is your secret weapon."

πŸ’¬
From trainee experience β€” on re-measurement

"I was worried about doing a second round β€” what if nothing had improved? But my supervisor said the second survey doesn't have to show dramatic change. It just has to show you tried, measured again, and reflected honestly. That's the QIA complete."

🎬 Insights from UK GP Educators

The following teaching points are drawn from UK GP training educational content and video guidance by UK GP trainers. All points are consistent with RCGP guidance.

🎬
UK GP educator insight β€” on SMART aims

One of the most repeated pieces of advice from GP trainer educators is to make your aim SMART before you design a single question: Specific, Measurable, Achievable, Realistic, and Time-bound. Without a SMART aim, you cannot know what data to collect β€” and you cannot prove you achieved anything.

🎬
UK GP educator insight β€” on driver diagrams

Before designing your questionnaire, consider drawing a driver diagram β€” a visual map of the factors that contribute to your problem. This helps you see which factors you can actually measure with a questionnaire vs. which need a different method. It also massively improves your write-up depth.

🎬
UK GP educator insight β€” on involving the team

GP trainer educators consistently emphasise: share your aim and draft questionnaire with the team before distributing. A receptionist may spot that question 5 is unanswerable for most patients. A nurse may suggest a group you've missed entirely. Good QI is never done in isolation β€” and your assessors expect to see team engagement in your write-up.

🎬
UK GP educator insight β€” on write-up quality

A recurring pattern in GP educator feedback: "QIPs and QIAs are regularly marked down not because the project was bad β€” but because the reflection was thin." Write your reflection as you go. Don't wait until the end and try to reconstruct the whole journey from memory. The reflection is as important as the data.

πŸ› What ARCP Panels Actually See

These patterns come from deanery and ARCP panel feedback shared across training communities. They are consistent with RCGP guidance and represent what panels repeatedly flag as problems.

Common ARCP issue | What it means in practice | How to avoid it
QIA filed in wrong section | QIA filed as a Learning Log entry instead of the QIA section on 14Fish ePortfolio | Always use the dedicated QIA section β€” ARCP panels check here, not the Learning Log
No measurable data | Reflective piece written but no actual numbers or survey results | Every QIA needs data β€” even a small set. "I noticed…" is not evidence. Numbers are.
No action taken | Data collected, findings noted, nothing changed as a result | The action is what makes it a QIA. Even a tiny change counts β€” but you must have one.
No personal connection | Trainee reviewed a national audit they had no direct involvement in | RCGP requires "a personal connection to a registrar's work" β€” you must be directly involved
SEA/LEA counted as QIA | Trainee believed their Learning Event Analysis fulfilled the QIA requirement | SEA and LEA are separate mandatory requirements β€” they do NOT count as QIAs
QIP not assessed by supervisor | Trainee uploaded a full QIP but supervisor never completed the assessment form | Chase your supervisor before the ARCP deadline. An unassessed QIP = an incomplete QIP.
🚨
The number one ARCP rejection reason for QIAs

Collecting data, writing it up, and presenting the findings — but making no change and doing no re-measurement. This is the pattern that panels flag most often. A survey with no action is not a QIA. A survey that leads to change and re-measurement always is.


Real-world wisdom

💎 Insider Pearls — What Nobody Tells You First

💡
Start smaller than you think you need to

Many trainees try to survey 100 patients across 25 questions. The result is 20 incomplete responses collected over 3 months. A focused 8-question survey of 30 patients in two weeks produces better data, better analysis, and a better portfolio entry. Smaller scope = better execution.

💡
The change does not have to be radical

A poster in the waiting room. An update to a SystmOne/EMIS template. A 10-minute briefing at a team meeting. A text message to patients. These are all real actions — all count — all satisfy the QIA requirement if re-measured. The change does not need to be a policy overhaul.

💡
Use existing validated tools

The GP Patient Survey, PSQ, and EUROPEP all contain pre-validated questions. Adapting an existing validated question is faster, more defensible, and often clearer than writing one from scratch. Always acknowledge the source.

💡
The open question is your most valuable data

Likert scales tell you how people rate something. The open question tells you why. The most memorable and actionable insights almost always come from the open question at the end. Read every response carefully. Two or three powerful themes often tell the whole story.

💡
The practice manager is your secret weapon

Tell them your aim early. They can help distribute, chase responses, pull data searches, and book the team meeting where you present. The best trainee QIAs almost always involve an engaged practice manager. A good relationship with the PM makes the whole project significantly easier.

🔥
What actually gets you good marks
  • Showing you understand why you chose a questionnaire over other QIA methods
  • Demonstrating that you piloted and made revisions as a result
  • Reporting your response rate honestly, including limitations
  • Linking your findings to a specific, named action
  • Closing the loop with follow-up measurement
  • Reflecting genuinely — what worked, what you'd do differently
😌
When not to panic about your response rate

If your response rate is 25–30%, this does not automatically invalidate your QIA. What matters is that you acknowledge it honestly, consider whether it affects how representative your sample is, and still act on the findings. An honest 25% response rate with good reflection is far better than an inflated but uncritical one. The GMC GP national training survey has a response rate of 38% — and that is considered excellent. You are in good company.


For educators

🎓 Trainer & TPD Pearls — How to Teach This Topic

🟣
Common trainee blind spots
  • Confusing a survey with a QIA — they think collecting data is sufficient
  • Not piloting — often because they don't realise how much difference it makes
  • No re-measurement — the loop stays permanently open
  • Reporting Likert means instead of frequencies and percentages
  • Filing the QIA in the wrong ePortfolio section on 14Fish
  • Believing SEA/LEA counts as a QIA — it never does
🟣
Useful tutorial questions
  • "Why did you choose a questionnaire over a clinical audit for this topic?"
  • "Walk me through question 3. Did you consider any alternatives?"
  • "You piloted it with 5 people — what did you change as a result?"
  • "Your response rate was 28%. What does that mean for your results?"
  • "What specifically changed in the practice after your findings?"
  • "If you did this again, what would you do differently?"
🟣
Signs of a strong QIA
  • Clear, focused SMART aim — not "improve patient care generally"
  • Appropriate question types for the aim
  • Evidence of piloting and revision
  • Response rate reported and discussed honestly
  • Specific named action, taken and dated
  • Re-measurement completed and reflected upon
🟣
Signs of a weak QIA — and how to help
  • "Most patients were satisfied" — push for what changed as a result
  • No pilot described — ask what would have changed if they had piloted it
  • No re-measurement — encourage even 10 follow-up responses
  • Overly positive findings with no critique — ask what the dissatisfied patients said
  • QIA not yet assessed on 14Fish — remind the trainee this must happen before ARCP

Memory aids

🧠 Memory Aids — The Bits That Stick

🔀 The DAPPLE Framework — A Good Questionnaire Study

D
Define your aim clearly — one SMART sentence. What are you measuring, in whom, and why?
A
Avoid bad questions. No leading, double-barrelled, ambiguous, or unanswerable questions.
P
Pilot it first — always. Test on 3–5 people. You will always find at least one problem.
P
Present results and act. Share findings with the team. Agree a specific change. This is the QI part.
L
Loop back — re-measure. Repeat the questionnaire after the change. Before AND after = quality improvement.
E
Evidence your reflection. What did you find? What did you do? What would you do differently next time?

📋 Question types at a glance

Type | Best for | Analysis
Closed/Yes-No | Facts, screening | Frequencies
Likert (1–5) | Opinions, satisfaction | % scoring 4–5
Multiple choice | Categories, behaviour | % per option
Ranking | Priorities | Average rank
Open-ended | The "why" | Thematic analysis
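The Likert row above ("% scoring 4–5") is straightforward to compute once responses are in. Here is a minimal Python sketch of that analysis step, using made-up response data (the scores and the number of questionnaires distributed are purely illustrative, not from a real survey):

```python
from collections import Counter

# Hypothetical 1-5 Likert responses to a single question
# (illustrative data only; not from a real survey).
responses = [5, 4, 4, 3, 5, 2, 4, 1, 5, 4, 3, 4, 5, 2, 4]
distributed = 30  # questionnaires handed out (assumed)

counts = Counter(responses)

# Report the full frequency distribution, not just a mean.
for score in range(1, 6):
    n = counts[score]
    print(f"Score {score}: {n}/{len(responses)} ({n / len(responses):.0%})")

# Headline figure: the proportion scoring 4 or 5.
top_box = sum(1 for r in responses if r >= 4) / len(responses)
print(f"Scoring 4-5: {top_box:.0%}")

# Always quote the response rate alongside the results.
rate = len(responses) / distributed
print(f"Response rate: {len(responses)}/{distributed} ({rate:.0%})")
```

The same pattern works for any closed question: count each option, divide by the number of responses received, and quote the response rate next to the headline figure.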

🎯 Validity vs Reliability — one-liners

🎯

Validity: "Am I measuring what I think I'm measuring?"
A weighing scale that consistently reads 2 kg heavy is reliable — but not valid.

🔁

Reliability: "Would I get similar results if I used this again?"
A thermometer that reads differently every time is not reliable.

A questionnaire can be reliable but not valid. You want both.


Quick answers

🙋 FAQ — Questions Trainees Actually Ask

Does a questionnaire study count as a QIA?
Yes — provided it has a clear aim, leads to a specific action, and includes some re-measurement or reflection. A questionnaire alone is a survey, not a QIA.
How many responses do I need?
No fixed minimum in RCGP guidance. For a practice QIA, 20–30 responses is generally sufficient to find patterns. Quality of analysis matters more than volume. Always report and discuss your response rate.
Do I need ethics approval?
Usually not, for a small-scale practice QIA that is service evaluation rather than research. Use the HRA decision tool if unsure. Always discuss with your supervisor before proceeding with anything that involves sensitive topics or vulnerable groups.
Can I use Google Forms or Microsoft Forms?
Yes. Microsoft Forms is particularly easy with an NHS email account. Be mindful of data protection — avoid collecting identifiable patient data unless necessary, and check your practice's data handling policy.
What is the difference between a QIA questionnaire study and the PSQ?
The PSQ (Patient Satisfaction Questionnaire) is a formal WPBA tool conducted via the 14Fish ePortfolio — it assesses how patients experience their consultations with you specifically. A QIA questionnaire is something you design yourself to measure a service-level issue. They are different — neither counts as the other.
Can I survey staff rather than patients?
Yes — and it is underused. Staff surveys are highly practical (a 100% response rate is achievable), generate rich data, and often reveal exactly the knowledge gaps or process failures that patient surveys miss.
My results were mostly positive. Can I still use this as a QIA?
Yes — but look more carefully. Even broadly positive results usually contain specific negative themes in the open questions, or subgroups who responded differently. The question is always: what did you find and what did you change as a result?
Does SEA or LEA count as a QIA?
No. Learning Event Analysis and Significant Event Analysis are separate mandatory requirements. They never count towards your annual QIA. This is one of the most common misconceptions — and one of the most unpleasant ARCP surprises. You need both.

✅ Take-Home Points — The Bits to Remember Tomorrow

  • A questionnaire study is a valid QIA — but only if it leads to action and re-measurement. A survey alone is not quality improvement.
  • Define your aim precisely before you write a single question. Vague aims produce useless data.
  • Always pilot on 3–5 people before distributing. You will almost always find something to change.
  • Use a mix of question types. Closed questions for facts, Likert for opinions, open for the "why."
  • Report Likert results as % scoring 4 or 5 — not as averages.
  • Report your response rate honestly. Acknowledging response bias shows methodological maturity, not weakness.
  • The open question is often your richest data source. Read every response. The best insights hide in there.
  • Implement a specific named change based on your findings. Small changes count — but they must be real.
  • Close the loop. Repeat the questionnaire after your change. Before AND after = quality improvement.
  • File the QIA in the correct section on 14Fish. Make sure your supervisor completes the assessment before your ARCP deadline.
"Asking good questions — in clinic, in tutorials, and in questionnaires — is a fundamental clinical skill. The doctor who designs a well-crafted questionnaire has thought carefully about what they need to know and why. That kind of disciplined curiosity is exactly what good general practice is built on."
— Dr Ramesh Mehay, Bradford VTS

Bradford VTS — The universal GP training website for everyone, not just Bradford. Created by Dr Ramesh Mehay.

Quality Improvement Hub  |  RCGP QIA Guidance  |  Disclaimer

Content updated April 2026. Always verify requirements against current RCGP and GMC guidance.
