Questionnaire Studies
in GP Quality Improvement
Because asking the right questions is the first step to getting useful answers. (And no, Google Forms is not automatically a valid research methodology. But it can be, with a little thought.)
Downloads
Sample questionnaires, markers, and step-by-step how-to guides, ready when you are.
Web Resources
A hand-picked mix of official guidance and real-world GP training resources.
Quick Summary
If you only read one section before your tutorial, make it this one.
Questionnaire Studies at a Glance
- A questionnaire study collects data from a defined group using written questions to find areas for improvement.
- It counts as a valid QIA, and can also form the backbone of a full QIP.
- Good questionnaires have a clear aim, a defined sample, and appropriate question types. They are always piloted first.
- Validity = does it measure what you think? Reliability = does it give consistent results?
- A Likert scale (1–5 or 1–7) is the standard tool for measuring opinions. Report as frequencies, not just averages.
- Always pilot on 3–5 people first. You will always find something to fix.
- Aim for 20–30 responses minimum for a practice QIA.
- The questionnaire is NOT the end. What you do with the results, the change you make, is what makes it a QIA.
- The biggest mistake trainees make: collecting data but never closing the loop. Before AND after. Always.
Why Questionnaire Studies Matter in GP
This is not just a portfolio exercise. Understanding how to design and use questionnaires is a real GP skill.
In real GP practice
As a GP, you will regularly need to understand what patients or staff actually experience. Questionnaires let you collect that information at scale, efficiently and without interviewing everyone individually.
They appear in: patient experience surveys, service development, QOF reviews, and PCN-level improvement work. Learning to design one well now means you will be able to read, interpret, and critique them for the rest of your career.
In GP training (RCGP requirements)
Questionnaire studies count as a valid Quality Improvement Activity (QIA) in your RCGP trainee portfolio. They can also form the measurement tool inside a full QIP.
The RCGP curriculum (Evidence in Practice domain) explicitly lists questionnaire design as a skill trainees should develop. Your assessors want to see that you understand why you chose this method, not just that you did it.
GPs should understand "techniques such as pilot studies, questionnaire design, focus groups and consensus methods." This page maps directly to that requirement.
What Is a Questionnaire Study?
A clear definition and how it fits into the world of quality improvement.
Definition
A questionnaire study is a structured data collection method that uses a set of written questions, given to a defined group of people, to gather information about their experiences, opinions, knowledge, or attitudes. In the GP quality improvement context, it is used to: identify problems, measure a starting point, understand the impact of a change, or hear what patients and staff actually think.
Questionnaire study vs other QIA methods
| Method | Best used when | Main limit |
|---|---|---|
| Questionnaire study | You want opinions, attitudes, or knowledge levels at scale | Can't capture deep nuance or emotion |
| Clinical audit | You want to measure adherence to a defined clinical standard | Needs a standard to measure against |
| Notes review | You want to check documentation quality | Only captures what was recorded |
| PDSA cycle | You want to test and refine a change rapidly | Needs multiple quick cycles |
| Interview / focus group | You need deep qualitative understanding | Time-intensive; hard to scale |
Use a questionnaire when:
- You want to understand patient or staff experience of a service
- You are measuring knowledge levels in a team
- You want to compare attitudes before and after a change
- Your sample is too large for individual interviews
- You need numbers to present at a practice meeting
Avoid a questionnaire when:
- You want deep qualitative insight into complex feelings
- Your sample is fewer than 5–10 people (use interviews)
- The topic is highly sensitive (questionnaires may not feel safe enough)
- You already know what patients think (you'd be confirming, not discovering)
The Full Picture at a Glance
A visual summary of how a questionnaire study fits into the QI cycle, what question types exist, and what makes data usable.
The Questionnaire QI Cycle
The QI cycle. The questionnaire is your measurement tool at steps 2 and 5. Without step 5, you have a survey, not a QIA.
The 5 Question Types: When to Use Each One
The five question types you have to choose from. Most questionnaires use 2–3 types, not all five.
Validity vs Reliability: The Visual
You want your questionnaire to be both: measuring the right thing (valid) AND giving consistent results (reliable). Both dartboards aim for this.
What Affects Your Response Rate? A Visual Guide
The strategies that have the biggest effect on whether people actually complete your questionnaire.
How to Run a Questionnaire Study: Step by Step
Follow this and your questionnaire study will be methodologically solid, and genuinely useful.
Define your aim precisely
One clear sentence. What are you measuring, in whom, and why? "Improve patient satisfaction" is too vague. "Assess patient awareness of online booking among adults aged 18–65 at our practice over 2 weeks" is a real aim. No aim = no questionnaire.
Check if a validated tool already exists
Do not reinvent the wheel. The GP Patient Survey, Patient Satisfaction Questionnaire (PSQ), and EUROPEP instrument all contain validated questions you can adapt. Using or adapting a validated tool strengthens your methodology and saves you from designing bad questions by accident.
Draft your questions
Choose the right question types. Keep the questionnaire short: 8–15 questions is plenty for a practice QIA. Fewer questions = higher response rate. Avoid leading, double-barrelled, ambiguous, and unanswerable questions (see the writing rules section).
Write a SMART aim and consider a driver diagram
Before distributing, write your SMART aim: Specific, Measurable, Achievable, Realistic, Time-bound. If your project is a full QIP, a driver diagram helps map which factors your questionnaire can actually address, and shows your assessor that you have thought systematically.
Pilot the questionnaire: this is non-negotiable
Ask 3–5 people to complete the draft and ask: "Was anything confusing? Was any question hard to answer?" You will almost always find at least one problem. Fix it before distributing. Piloting is what separates a professional questionnaire from a first draft. It takes 20 minutes. Do not skip it.
Define your sample and plan distribution
Who will complete it? How will you reach them? Paper in the waiting room, SMS link, QR code, email, direct distribution to staff. Decide on a minimum target: 20–30 responses is typically sufficient for a practice QIA. Talk to your practice manager early; they can often help enormously.
Consider consent, anonymity, and GDPR
For most small-scale practice QIAs, formal ethics approval is not needed. But always: ensure participation is voluntary, explain the purpose, confirm anonymity where appropriate, and follow your practice's data handling policy. Include a short information paragraph at the top of the questionnaire.
Distribute, collect, and send a reminder
Set a clear deadline (2–4 weeks). Record how many questionnaires you distributed so you can calculate your response rate. Send a reminder at 10–14 days; most non-responses are simply people who forgot.
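The response-rate arithmetic is trivial, but it is worth keeping the counts from day one. A minimal Python sketch (the counts here are invented for illustration):

```python
def response_rate(distributed: int, returned: int) -> float:
    """Response rate as a percentage of questionnaires handed out."""
    if distributed <= 0:
        raise ValueError("distributed must be a positive count")
    return round(100 * returned / distributed, 1)

# Illustrative counts: 40 questionnaires distributed, 28 completed.
rate = response_rate(40, 28)  # 70.0
```

Whatever tool you use, the point is the same: you cannot report a response rate if you never recorded how many forms went out.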
Analyse your results and share with the team
For Likert and closed questions: use frequencies and percentages. For open questions: group responses into themes. Present findings at a team meeting. Use a simple chart; even a hand-drawn one is fine. The point is to share, discuss, and decide what to change.
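If your open-text answers are collected electronically, a small script can tally how often each theme appears. The themes and keywords below are hypothetical: real thematic grouping is done by reading every response first, and a script like this only counts what you have already judged.

```python
from collections import Counter

# Hypothetical theme -> keyword buckets, agreed AFTER reading the responses.
THEMES = {
    "phone access": ["phone", "call", "hold"],
    "online login": ["log in", "login", "password", "steps"],
}

def tally_themes(open_responses):
    """Count how many responses mention each pre-agreed theme."""
    counts = Counter()
    for text in open_responses:
        lowered = text.lower()
        for theme, keywords in THEMES.items():
            if any(keyword in lowered for keyword in keywords):
                counts[theme] += 1
    return counts

replies = ["Too many steps to log in", "Couldn't get through on the phone"]
themes = tally_themes(replies)
```

A keyword tally will miss nuance, so treat it as a counting aid, never a substitute for reading the responses yourself.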
Implement a specific change
This is the quality improvement step. Agree a specific action based on your findings. It does not need to be a huge change: a poster, an updated template, a 10-minute team briefing, or an SMS to patients all count. The change must be real, named, and documented.
Re-measure: close the loop
Repeat the questionnaire, even with a smaller sample, after your intervention. Before AND after = quality improvement. Even a modest second survey (10–15 responses) transforms a simple survey into a proper QIA. This is the step most trainees skip. Do not skip it.
Likert Scales: Everything You Need to Know
The single most used question format in healthcare questionnaires. Used well, they are powerful. Used badly, they produce numbers that look scientific but mean nothing.
The 5-Point Likert Scale: How It Looks
The standard 5-point Likert scale. Always label ALL 5 points, never just the ends. Always include a neutral midpoint (3).
Key rules for Likert scales
- Always include a neutral midpoint (3 on a 5-point scale)
- Label ALL points clearly, not just the extremes
- Keep the same scale direction throughout; never flip it mid-questionnaire
- Use the same scale for related questions so you can compare them
- 5 points is standard for GP QIAs; 7 points adds granularity for formal research
- Frame items as positive statements: "My GP listened" not "My GP did not listen"
Reporting that "average satisfaction = 3.76 out of 5" sounds scientific. But Likert scales produce ordinal data: the gap between 1 and 2 is not necessarily equal to the gap between 4 and 5. Report as frequencies and percentages instead:
Better: "78% of respondents rated the service as 4 or 5 out of 5."
Avoid: "Mean satisfaction score = 4.12."
- Report % scoring 4 or 5 (top-box scoring) as your headline figure
- Show a breakdown of all 5 response options as a bar chart
- Compare before vs after your intervention using the same metric
- Acknowledge that a small sample limits generalisability; this is honest, not weak
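The reporting rules above can be put into practice in a few lines. A sketch, with invented scores for illustration:

```python
from collections import Counter

def likert_summary(responses):
    """Frequency of each option plus the top-box (4 or 5) percentage."""
    freq = Counter(responses)
    top_box = 100 * (freq[4] + freq[5]) / len(responses)
    return freq, round(top_box, 1)

# Invented example: ten 1-5 Likert responses to one question.
scores = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]
freq, top = likert_summary(scores)  # top == 70.0 -> "70% rated 4 or 5"
```

Note that the summary never computes a mean: it returns the full frequency breakdown (for your bar chart) and the top-box percentage (for your headline figure).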
Example: How to Present Before-and-After Likert Results
Report Likert results as % scoring 4 or 5 (top-box scoring). Side-by-side before/after bars show improvement clearly and work well in your portfolio write-up.
Validity and Reliability
Two words your Educational Supervisor will ask about. Here is what they actually mean, and what you need to say.
Validity: "Does it measure what I think?"
A valid questionnaire accurately captures the thing you are trying to measure. Three types matter for GP QIA:
| Type | What it means | How to achieve it |
|---|---|---|
| Face validity | Questions look relevant to the topic | Ask 2–3 colleagues to review the draft |
| Content validity | Questions cover all key aspects of the topic | Review similar published questionnaires |
| Construct validity | Questions map correctly onto the concept | Use validated instruments where available |
Reliability: "Would I get similar results again?"
A reliable questionnaire gives consistent results over time and across similar respondents. For GP QIA, focus on:
| Type | What it means | Practical approach |
|---|---|---|
| Test-retest | Same results if repeated in similar conditions | Re-test on a small sample 2 weeks later |
| Internal consistency | Related questions give correlated answers | Use Cronbach's alpha; only needed for formal research |
For a GP practice QIA, you do not need to calculate formal statistics. Showing that you understand the concepts, and that you piloted, used clear language, and pre-defined your response options, is entirely sufficient.
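For the curious, Cronbach's alpha is simple to compute if a formal write-up ever calls for it. A sketch of the standard formula, alpha = k/(k-1) x (1 - sum of item variances / variance of totals); the data shape here (one list of scores per question, respondents in the same order) is my assumption, not a prescribed format:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: one list per question, holding that question's scores
    across all respondents (respondents in the same order)."""
    k = len(items)
    sum_item_var = sum(pvariance(question) for question in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency; for a practice QIA this remains entirely optional.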
Writing Good Questions: The Rules
Small changes in wording make enormous differences to question quality. These rules separate useful data from useless noise.
| Problem type | Weak | Better | Why it matters |
|---|---|---|---|
| Leading question | "Would you agree that our new system is better?" | "How would you rate the new system compared to the old one?" | Leading questions push respondents toward a particular answer |
| Double-barrelled | "Was the receptionist polite and helpful?" | Two separate questions: "Polite?" and "Helpful?" | Two questions in one cannot be answered if the two answers differ |
| Ambiguous | "Do you visit the surgery regularly?" | "How many times have you attended in the last 3 months?" | "Regularly" means different things to different people |
| Unanswerable | "Does the surgery follow NICE guidelines?" | "Was your treatment explained clearly to you?" | Patients cannot assess clinical guideline adherence |
| Negative phrasing | "My GP does not listen to me." (Agree/Disagree) | "My GP listens to my concerns." (Agree/Disagree) | Negatives confuse respondents; always use positive statements |
| Medical jargon | "Are you aware of the practice's QOF targets?" | "Has anyone discussed health screening checks with you?" | Patients do not know NHS acronyms; write in plain language |
Before including any question, read it out as if explaining it to a patient with low health literacy. If you cannot explain the question simply, rewrite it. If it uses clinical terms, NHS processes, or acronyms, rewrite it. A confusing questionnaire produces garbage data, no matter how good the methodology.
Common Pitfalls: What Trainees Get Wrong
Every one of these has been seen in real trainee portfolios. Read them once and you will not need to learn them the hard way.
The Pitfall Hierarchy: From Most Common to Least
The higher up the pyramid, the more likely it is to result in your QIA being rejected or marked down at ARCP.
No pilot. Distributing without testing is the single most common damaging error. You will almost always find confusing questions in your first draft, but only if you actually ask someone to complete it. Give it to 3 people. It takes 20 minutes. It saves you weeks of bad data.
No action. A survey without action is not a QIA. You must implement something, however small. A poster, a template update, a team briefing. Then re-measure. That is the quality improvement part.
Too many questions. A 40-question form will get low response rates and careless answers by question 15. Keep it to 8–15 questions for a practice QIA. If you have 40 questions, your aim is not precise enough yet.
No response rate. Always record: how many distributed, how many completed, and therefore your response rate. Without this, your data cannot be interpreted. A low response rate is not automatically a problem; it just needs to be acknowledged honestly.
Misreported Likert data. Report "78% scored 4 or 5" not "average = 3.76." Likert data is ordinal; means can be misleading. Frequencies are honest and easy to understand.
Consent overlooked. Even for a simple practice QIA: ensure participation is voluntary, explain the purpose, confirm anonymity, and follow GDPR. Include a short information paragraph at the top of the questionnaire itself.
Phrases that signal a strong write-up:
- "I piloted the questionnaire with five people and revised three questions as a result."
- "After collecting baseline data, we implemented [specific change] and repeated the questionnaire to measure impact."
- "I acknowledged response bias as a limitation and explained why it may have affected my results."
- "I used a 5-point Likert scale because I needed to measure degrees of satisfaction, not just a yes or no."
- "Our response rate was 34%, which is in line with typical NHS patient survey benchmarks."
Real GP Examples of Questionnaire-Based QIAs
Concrete examples from GP training contexts. Use them as inspiration, not as templates to copy.
Example 1: Patient awareness of online booking
Aim: Assess patient satisfaction with the new online booking system among adults aged 18–65 over 2 weeks.
Questions: Awareness of system (yes/no); ease of use (5-point Likert); preference vs. telephone (multiple choice); open question: "What could be improved?"
Key findings: 68% unaware of online booking; 82% of users rated it 4 or 5; most common open theme: "too many steps to log in."
Action: Receptionists created a simple "how to book online" leaflet; displayed on waiting room screens; SMS sent to all registered patients.
Re-measurement: 4 weeks later, online booking awareness rose from 32% to 60%.
Strong QIA: clear aim, mixed question types, specific actionable finding, intervention, re-measurement.
Example 2: Staff knowledge of urgent cancer referrals
Aim: Assess knowledge of 2WW cancer referral criteria among all administrative and reception staff.
Questions: Knowledge questions (correct/incorrect/unsure) about referral criteria for colorectal, lung, and skin cancer.
Key findings: Significant gaps around skin cancer and 9/12 staff unsure when to escalate a patient call about potential cancer symptoms.
Action: 20-minute educational session at team meeting; laminated quick-reference guide at each reception desk.
Re-measurement: Repeat questionnaire 6 weeks later showed improvement across all three domains.
Staff surveys are underused. They can achieve a near-100% response rate and gold-standard data, with no waiting room required.
Example 3: Medication understanding in polypharmacy patients
Aim: Assess whether patients on 3+ regular medications understood the purpose of each medication.
Questions: Whether the purpose of each had been explained (yes/no/partly); confidence managing medications (Likert 1–5); open question about concerns.
Key findings: 64% uncertain about the purpose of at least one medication; most common open theme: "I'm worried about side effects but haven't wanted to ask."
Action: Structured "medication explanation" prompt added to the medication review template.
This example links to patient safety, shared decision-making, and the Holistic Practice capability β excellent portfolio diversity.
Example 4: Awareness of self-referral Talking Therapies
Aim: Assess patient awareness of self-referral Talking Therapies (IAPT) in a high-deprivation practice area.
Questions: Awareness of self-referral option (yes/no); whether GP had mentioned it (yes/no); comfort with self-referring (Likert 1–5); barriers (multiple choice + open).
Key findings: Only 26% aware they could self-refer; most common barrier: "I didn't know it was available to me."
Action: Information displayed in waiting room; reception team trained to mention Talking Therapies when patients called about low mood or anxiety.
Covers community health, health inequalities, and mental health β three RCGP curriculum areas in one QIA.
How to Write Up Your QIA for the 14Fish Portfolio
Structure your write-up to show you understand the methodology, not just that you sent out some forms.
| Section | What to include |
|---|---|
| Background / Rationale | Why this topic? What gap or problem prompted it? |
| Aim | One SMART sentence: what, in whom, and why |
| Method | Question types, sample, distribution, time frame |
| Piloting | Who you tested on, what you changed |
| Results | Frequencies, %, themes; include your response rate |
| Interpretation | What do results mean? What surprised you? |
| Action taken | Specific named change, and when |
| Re-measurement | Results of follow-up survey post-intervention |
| Reflection | What worked? What would you do differently? |
| Limitations | Response bias, sample size, generalisability |
Weak portfolio entry
"I distributed a questionnaire to 25 patients. Results showed most patients were satisfied. I shared the results at a practice meeting."
Problems: No SMART aim. No methodology described. No specific action taken. No re-measurement. No reflection. This is a report, not a QIA.
Strong portfolio entry
"I designed a 10-question questionnaire to assess patient awareness of online booking, using closed and Likert questions. I piloted it with 5 staff members and revised 2 questions. I distributed 40 questionnaires over 2 weeks (response rate 70%). 68% were unaware of online booking. Following a team meeting, we created an information leaflet and added a prompt to our post-appointment text. I repeated the questionnaire 4 weeks later; awareness rose from 32% to 60%. I acknowledged response bias as a limitation and noted I would use SMS distribution next time."
"As a result of this, we changed [X]." Everything else is context. This phrase, plus evidence of re-measurement, is what turns a report into a QIA. Write it explicitly. Do not make your assessor find it.
ARCP panels check the dedicated QIA learning log section. A QIA filed as a general Learning Log entry may not be found or counted, regardless of how good the content is. Always use the right section. And make sure your supervisor has assessed and signed off the entry before your ARCP deadline.
Ethics and Governance: Do I Need Approval?
When formal ethics approval is NOT required
For most GP trainee questionnaire QIAs, formal NHS ethics approval is not needed when:
- The activity is service evaluation or quality improvement, not original research
- You are not collecting sensitive personal data beyond routine information
- Participation is voluntary and anonymous
- The purpose is clearly to improve care in your practice
Key distinction: QIA = improving your service. Research = generating new generalisable knowledge. If your aim is to improve care in your practice, this is quality improvement. Use the HRA decision tool if you are unsure.
Always discuss with your supervisor first if:
- Questionnaires involve children or vulnerable adults
- Questions cover sensitive topics (mental health, self-harm, domestic abuse)
- You plan to present or publish the data outside the practice
- The questionnaire collects identifiable patient data
- You are unsure whether it constitutes research
Minimum good practice: always include
- Brief information paragraph at the top explaining the purpose
- Statement that participation is voluntary and anonymous
- Confirmation results are only used for practice quality improvement
- Contact details if respondents have concerns
- A note that not completing will not affect their care
What the Trainee Community Says
The insights below come from recurring themes in UK GP training communities: deanery forums, trainee peer groups, and training scheme discussions. Everything has been checked against RCGP and GMC guidance, and only insights that do not conflict with official guidance are included.
Things trainees wish they had known earlier
"The best questionnaire topics are ones where you genuinely don't know what the answer is going to be. If you already know what patients think, you're just confirming your own view β not doing quality improvement."
"Don't leave it to the last rotation. I tried to do mine in the last 6 weeks of my post. There wasn't enough time to collect responses, implement a change, and then re-measure. Start early, even if you only have an outline idea."
"I tried to survey the whole practice population about everything at once. I got 12 responses to a 30-question form. A colleague did 8 focused questions in the waiting room one afternoon and got 35. Smaller and focused wins every time."
"Everyone does patient surveys. I surveyed the reception team about which referral pathways they found confusing. The response rate was 100%, the data was gold, and my supervisor said it was the best QIA she'd reviewed that year. Don't ignore staff."
Tips that genuinely worked
"I handed my draft to three patients in the waiting room and asked them to fill it in while I watched. Two of them asked 'what does this mean?' about the same question. That told me everything I needed to know. Piloting takes 20 minutes. It saves you weeks."
"My Likert scales were all fine: 4s and 5s, nothing to do. But the final open question ('what could we do better?') gave me 12 responses all saying the same thing: patients couldn't reach us by phone in the morning. That one question generated my entire QIA."
"I asked the practice manager for help early on. She distributed the questionnaire for me, chased responses, and printed summaries. I got 48 completed forms without having to stand in the waiting room myself. The practice manager is your secret weapon."
"I was worried about doing a second round: what if nothing had improved? But my supervisor said the second survey doesn't have to show dramatic change. It just has to show you tried, measured again, and reflected honestly. That's the QIA complete."
Insights from UK GP Educators
The following teaching points are drawn from UK GP training educational content and video guidance by UK GP trainers. All points are consistent with RCGP guidance.
One of the most repeated pieces of advice from GP trainer educators is to make your aim SMART before you design a single question: Specific, Measurable, Achievable, Realistic, and Time-bound. Without a SMART aim, you cannot know what data to collect, and you cannot prove you achieved anything.
Before designing your questionnaire, consider drawing a driver diagram: a visual map of the factors that contribute to your problem. This helps you see which factors you can actually measure with a questionnaire vs. which need a different method. It also massively improves your write-up depth.
GP trainer educators consistently emphasise: share your aim and draft questionnaire with the team before distributing. A receptionist may spot that question 5 is unanswerable for most patients. A nurse may suggest a group you've missed entirely. Good QI is never done in isolation, and your assessors expect to see team engagement in your write-up.
A recurring pattern in GP educator feedback: "QIPs and QIAs are regularly marked down not because the project was bad, but because the reflection was thin." Write your reflection as you go. Don't wait until the end and try to reconstruct the whole journey from memory. The reflection is as important as the data.
What ARCP Panels Actually See
These patterns come from deanery and ARCP panel feedback shared across training communities. They are consistent with RCGP guidance and represent what panels repeatedly flag as problems.
| Common ARCP issue | What it means in practice | How to avoid it |
|---|---|---|
| QIA filed in wrong section | QIA filed as a Learning Log entry instead of the QIA section on 14Fish ePortfolio | Always use the dedicated QIA section β ARCP panels check here, not the Learning Log |
| No measurable data | Reflective piece written but no actual numbers or survey results | Every QIA needs data, even a small set. "I noticed…" is not evidence. Numbers are. |
| No action taken | Data collected, findings noted, nothing changed as a result | The action is what makes it a QIA. Even a tiny change counts, but you must have one. |
| No personal connection | Trainee reviewed a national audit they had no direct involvement in | RCGP requires "a personal connection to a registrar's work"; you must be directly involved |
| SEA/LEA counted as QIA | Trainee believed their Learning Event Analysis fulfilled the QIA requirement | SEA and LEA are separate mandatory requirements; they do NOT count as QIAs |
| QIP not assessed by supervisor | Trainee uploaded a full QIP but supervisor never completed the assessment form | Chase your supervisor before the ARCP deadline. An unassessed QIP = an incomplete QIP. |
Collecting data, writing it up, and presenting the findings, but making no change and doing no re-measurement. This is the pattern that panels flag most often. A survey with no action is not a QIA. A survey that leads to change and re-measurement always is.
Insider Pearls: What Nobody Tells You First
Many trainees try to survey 100 patients across 25 questions. The result is 20 incomplete responses collected over 3 months. A focused 8-question survey of 30 patients in two weeks produces better data, better analysis, and a better portfolio entry. Smaller scope = better execution.
A poster in the waiting room. An update to a SystmOne/EMIS template. A 10-minute briefing at a team meeting. A text message to patients. These are all real actions β all count β all satisfy the QIA requirement if re-measured. The change does not need to be a policy overhaul.
The GP Patient Survey, PSQ, and EUROPEP all contain pre-validated questions. Adapting an existing validated question is faster, more defensible, and often clearer than writing one from scratch. Always acknowledge the source.
Likert scales tell you how people rate something. The open question tells you why. The most memorable and actionable insights almost always come from the open question at the end. Read every response carefully. Two or three powerful themes often tell the whole story.
Tell your practice manager your aim early. They can help distribute, chase responses, pull data searches, and book the team meeting where you present. The best trainee QIAs almost always involve an engaged practice manager. A good relationship with the PM makes the whole project significantly easier.
What assessors are actually looking for:
- Showing you understand why you chose a questionnaire over other QIA methods
- Demonstrating that you piloted and made revisions as a result
- Reporting your response rate honestly, including limitations
- Linking your findings to a specific, named action
- Closing the loop with follow-up measurement
- Reflecting genuinely β what worked, what you'd do differently
If your response rate is 25–30%, this does not automatically invalidate your QIA. What matters is that you acknowledge it honestly, consider whether it affects how representative your sample is, and still act on the findings. An honest 25% response rate with good reflection is far better than an inflated but uncritical one. The GMC GP national training survey has a response rate of 38%, and that is considered excellent. You are in good company.
Trainer & TPD Pearls: How to Teach This Topic
Common trainee errors to look for:
- Confusing a survey with a QIA: they think collecting data is sufficient
- Not piloting, often because they don't realise how much difference it makes
- No re-measurement, so the loop stays permanently open
- Reporting Likert means instead of frequencies and percentages
- Filing the QIA in the wrong ePortfolio section on 14Fish
- Believing SEA/LEA counts as a QIA; it never does
Questions to ask in a tutorial:
- "Why did you choose a questionnaire over a clinical audit for this topic?"
- "Walk me through question 3. Did you consider any alternatives?"
- "You piloted it with 5 people: what did you change as a result?"
- "Your response rate was 28%. What does that mean for your results?"
- "What specifically changed in the practice after your findings?"
- "If you did this again, what would you do differently?"
Markers of a strong write-up:
- Clear, focused SMART aim, not "improve patient care generally"
- Appropriate question types for the aim
- Evidence of piloting and revision
- Response rate reported and discussed honestly
- Specific named action, taken and dated
- Re-measurement completed and reflected upon
Red flags, and how to respond:
- "Most patients were satisfied": push for what changed as a result
- No pilot described: ask what would have changed if they had piloted it
- No re-measurement: encourage even 10 follow-up responses
- Overly positive findings with no critique: ask what the dissatisfied patients said
- QIA not yet assessed on 14Fish: remind the trainee this must happen before ARCP
Memory Aids: The Bits That Stick
The DAPPLE Framework: A Good Questionnaire Study
Question types at a glance
| Type | Best for | Analysis |
|---|---|---|
| Closed/Yes-No | Facts, screening | Frequencies |
| Likert (1–5) | Opinions, satisfaction | % scoring 4–5 |
| Multiple choice | Categories, behaviour | % per option |
| Ranking | Priorities | Average rank |
| Open-ended | The "why" | Thematic analysis |
Validity vs Reliability: one-liners
Validity: "Am I measuring what I think I'm measuring?"
A weighing scale that measures your height is reliable, but not valid.
Reliability: "Would I get similar results if I used this again?"
A thermometer that reads differently every time is not reliable.
A questionnaire can be reliable but not valid. You want both.
Take-Home Points: The Bits to Remember Tomorrow
- A questionnaire study is a valid QIA, but only if it leads to action and re-measurement. A survey alone is not quality improvement.
- Define your aim precisely before you write a single question. Vague aims produce useless data.
- Always pilot on 3–5 people before distributing. You will almost always find something to change.
- Use a mix of question types. Closed questions for facts, Likert for opinions, open for the "why."
- Report Likert results as % scoring 4 or 5, not as averages.
- Report your response rate honestly. Acknowledging response bias shows methodological maturity, not weakness.
- The open question is often your richest data source. Read every response. The best insights hide in there.
- Implement a specific named change based on your findings. Small changes count, but they must be real.
- Close the loop. Repeat the questionnaire after your change. Before AND after = quality improvement.
- File the QIA in the correct section on 14Fish. Make sure your supervisor completes the assessment before your ARCP deadline.
Bradford VTS: The universal GP training website for everyone, not just Bradford. Created by Dr Ramesh Mehay.
Quality Improvement Hub | RCGP QIA Guidance | Disclaimer
Content updated April 2026. Always verify requirements against current RCGP and GMC guidance.