10 Research Question Examples to Guide your Research Project

Published on October 30, 2022 by Shona McCombes. Revised on October 19, 2023.

The research question is one of the most important parts of your research paper, thesis, or dissertation. It’s important to spend some time assessing and refining your question before you get started.

The exact form of your question will depend on a few things, such as the length of your project, the type of research you’re conducting, the topic, and the research problem. However, all research questions should be focused, specific, and relevant to a timely social or scholarly issue.

Once you’ve read our guide on how to write a research question, you can use these examples to craft your own.

Research question Explanation
The first question is not specific enough. The second question is more focused, using clearly defined concepts.
Starting with “why” often means that your question is not focused enough: there are too many possible answers. By targeting just one aspect of the problem, the second question offers a clear path for research.
The first question is too broad and subjective: there are no clear criteria for what counts as “better.” The second question is much more researchable. It uses clearly defined terms and narrows its focus to a specific population.
It is generally not feasible for academic research to answer broad normative questions. The second question is more specific, aiming to gain an understanding of possible solutions in order to make informed recommendations.
The first question is too simple: it can be answered with a simple yes or no. The second question is more complex, requiring in-depth investigation and the development of an original argument.
The first question is too broad and not very original. The second question identifies an underexplored aspect of the topic that requires investigation of various sources to answer.
The first question is not focused enough: it tries to address two different problems (the quality of sexual health services and LGBT support services). Even though the two issues are related, it’s not clear how the research will bring them together. The second integrates the two problems into one focused, specific question.
The first question is too simple, asking for a straightforward fact that can be easily found online. The second is a more complex question that requires analysis and detailed discussion to answer.
The first question is not original: it would be very difficult to contribute anything new. The second question takes a specific angle to make an original argument, and has more relevance to current social concerns and debates.
The first question asks for a ready-made solution, and is not focused or researchable. The second question is a clearer comparative question, but note that it may not be practically feasible. For a smaller research project or thesis, it could be narrowed down further to focus on the effectiveness of drunk driving laws in just one or two countries.

Note that the design of your research question can depend on what method you are pursuing. Here are a few options for qualitative, quantitative, and statistical research questions.

Type of research Example question
Qualitative research question
Quantitative research question
Statistical research question

Cite this Scribbr article


McCombes, S. (2023, October 19). 10 Research Question Examples to Guide your Research Project. Scribbr. Retrieved June 9, 2024, from https://www.scribbr.com/research-process/research-question-examples/


Grad Coach

Research Question Examples 🧑🏻‍🏫

25+ Practical Examples & Ideas To Help You Get Started 

By: Derek Jansen (MBA) | October 2023

A well-crafted research question (or set of questions) sets the stage for a robust study and meaningful insights.  But, if you’re new to research, it’s not always clear what exactly constitutes a good research question. In this post, we’ll provide you with clear examples of quality research questions across various disciplines, so that you can approach your research project with confidence!

Research Question Examples

  • Psychology research questions
  • Business research questions
  • Education research questions
  • Healthcare research questions
  • Computer science research questions

Examples: Psychology

Let’s start by looking at some examples of research questions that you might encounter within the discipline of psychology.

How does sleep quality affect academic performance in university students?

This question is specific to a population (university students) and looks at a direct relationship between sleep and academic performance, both of which are quantifiable and measurable variables.

What factors contribute to the onset of anxiety disorders in adolescents?

The question narrows down the age group and focuses on identifying multiple contributing factors. There are various ways in which it could be approached from a methodological standpoint, including both qualitatively and quantitatively.

Do mindfulness techniques improve emotional well-being?

This is a focused research question aiming to evaluate the effectiveness of a specific intervention.

How does early childhood trauma impact adult relationships?

This research question targets a clear cause-and-effect relationship over a long timescale, making it focused but comprehensive.

Is there a correlation between screen time and depression in teenagers?

This research question focuses on a current, widely discussed issue and a specific demographic, allowing for a focused investigation. The key variables are clearly stated within the question and can be measured and analysed (i.e., high feasibility).


Examples: Business/Management

Next, let’s look at some examples of well-articulated research questions within the business and management realm.

How do leadership styles impact employee retention?

This is an example of a strong research question because it directly looks at the effect of one variable (leadership styles) on another (employee retention), allowing for a strongly aligned methodological approach.

What role does corporate social responsibility play in consumer choice?

Current and precise, this research question can reveal how social concerns are influencing buying behaviour by way of a qualitative exploration.

Does remote work increase or decrease productivity in tech companies?

Focused on a particular industry and a hot topic, this research question could yield timely, actionable insights that would have high practical value in the real world.

How do economic downturns affect small businesses in the homebuilding industry?

Vital for policy-making, this highly specific research question aims to uncover the challenges faced by small businesses within a certain industry.

Which employee benefits have the greatest impact on job satisfaction?

By being straightforward and specific, answering this research question could provide tangible insights to employers.

Examples: Education

Next, let’s look at some potential research questions within the education, training and development domain.

How does class size affect students’ academic performance in primary schools?

This example research question targets two clearly defined variables, which can be measured and analysed relatively easily.

Do online courses result in better retention of material than traditional courses?

Timely, specific and focused, answering this research question can help inform educational policy and personal choices about learning formats.

What impact do US public school lunches have on student health?

Targeting a specific, well-defined context, the research could lead to direct changes in public health policies.

To what degree does parental involvement improve academic outcomes in secondary education in the Midwest?

This research question focuses on a specific context (secondary education in the Midwest) and has clearly defined constructs.

What are the negative effects of standardised tests on student learning within Oklahoma primary schools?

This research question has a clear focus (negative outcomes) and is narrowed into a very specific context.


Examples: Healthcare

Shifting to a different field, let’s look at some examples of research questions within the healthcare space.

What are the most effective treatments for chronic back pain amongst UK senior males?

Specific and solution-oriented, this research question focuses on clear variables and a well-defined context (senior males within the UK).

How do different healthcare policies affect patient satisfaction in public hospitals in South Africa?

This question has clearly defined variables and is narrowly focused in terms of context.

Which factors contribute to obesity rates in urban areas within California?

This question is focused yet broad, aiming to reveal several contributing factors for targeted interventions.

Does telemedicine provide the same perceived quality of care as in-person visits for diabetes patients?

Ideal for a qualitative study, this research question explores a single construct (perceived quality of care) within a well-defined sample (diabetes patients).

Which lifestyle factors have the greatest effect on the risk of heart disease?

This research question aims to uncover modifiable factors, offering preventive health recommendations.


Examples: Computer Science

Last but certainly not least, let’s look at a few examples of research questions within the computer science world.

What are the perceived risks of cloud-based storage systems?

Highly relevant in our digital age, this research question would align well with a qualitative interview approach to better understand what users feel the key risks of cloud storage are.

Which factors affect the energy efficiency of data centres in Ohio?

With a clear focus, this research question lays a firm foundation for a quantitative study.

How do TikTok algorithms impact user behaviour amongst new graduates?

While this research question is more open-ended, it could form the basis for a qualitative investigation.

What are the perceived risks and benefits of open-source software within the web design industry?

Practical and straightforward, the results could guide both developers and end-users in their choices.

Remember, these are just examples…

In this post, we’ve tried to provide a wide range of research question examples to help you get a feel for what research questions look like in practice. That said, it’s important to remember that these are just examples and don’t necessarily equate to good research topics. If you’re still trying to find a topic, check out our topic megalist for inspiration.

Harvard Catalyst

Creating a Good Research Question


Successful translation of research begins with a strong question. How do you get started? How do good research questions evolve? And where do you find inspiration to generate good questions in the first place?  It’s helpful to understand existing frameworks, guidelines, and standards, as well as hear from researchers who utilize these strategies in their own work.

In the fall and winter of 2020, Naomi Fisher, MD, conducted 10 interviews with clinical and translational researchers at Harvard University and affiliated academic healthcare centers, with the purpose of capturing their experiences developing good research questions. The researchers featured in this project represent various specialties, drawn from every stage of their careers. Below you will find clips from their interviews and additional resources that highlight how to get started, as well as helpful frameworks and factors to consider. Additionally, visit the Advice & Growth section to hear candid advice and explore the Process in Practice section to hear how researchers have applied these recommendations to their published research.

  • Naomi Fisher, MD, is associate professor of medicine at Harvard Medical School (HMS) and clinical staff at Brigham and Women’s Hospital (BWH). Fisher is founder and director of Hypertension Services and the Hypertension Specialty Clinic at BWH, where she is a renowned endocrinologist. She serves as a faculty director for communication-related Boundary-Crossing Skills for Research Careers webinar sessions and the Writing and Communication Center.
  • Christopher Gibbons, MD, is associate professor of neurology at HMS and clinical staff at Beth Israel Deaconess Medical Center (BIDMC) and Joslin Diabetes Center. Gibbons’ research focus is on peripheral and autonomic neuropathies.
  • Clare Tempany-Afdhal, MD, is professor of radiology at HMS and the Ferenc Jolesz Chair of Research, Radiology at BWH. Her major areas of research are MR imaging of the pelvis and image-guided therapy.
  • David Sykes, MD, PhD, is assistant professor of medicine at Massachusetts General Hospital (MGH) and principal investigator at the Sykes Lab at MGH. His special interest area is rare hematologic conditions.
  • Elliot Israel, MD, is professor of medicine at HMS, director of the Respiratory Therapy Department, director of clinical research in the Pulmonary and Critical Care Medical Division, and associate physician at BWH. Israel’s research interests include therapeutic interventions to alter asthmatic airway hyperactivity and the role of arachidonic acid metabolites in airway narrowing.
  • Jonathan Williams, MD, MMSc, is assistant professor of medicine at HMS and associate physician at BWH. He focuses on endocrinology, specifically unravelling the intricate relationship between genetics and environment with respect to susceptibility to cardiometabolic disease.
  • Junichi Tokuda, PhD, is associate professor of radiology at HMS and a research scientist in the Department of Radiology at BWH. Tokuda is particularly interested in technologies to support image-guided “closed-loop” interventions. He also serves as a principal investigator leading several projects funded by the National Institutes of Health and industry.
  • Osama Rahma, MD, is assistant professor of medicine at HMS and a clinical staff member in medical oncology at Dana-Farber Cancer Institute (DFCI). Rahma is currently a principal investigator at the Center for Immuno-Oncology and Gastroenterology Cancer Center at DFCI. His research focus is on drug development of combinational immune therapeutics.
  • Sharmila Dorbala, MD, MPH, is professor of radiology at HMS and clinical staff at BWH in cardiovascular medicine and radiology. She is also the president of the American Society of Nuclear Medicine. Dorbala’s specialty is using nuclear medicine for cardiovascular discoveries.
  • Subha Ramani, PhD, MBBS, MMed, is associate professor of medicine at HMS, as well as associate physician in the Division of General Internal Medicine and Primary Care at BWH. Ramani’s scholarly interests focus on innovative approaches to teaching, learning, and assessment of clinical trainees; faculty development in teaching; and qualitative research methods in medical education.
  • Ursula Kaiser, MD, is professor at HMS, chief of the Division of Endocrinology, Diabetes and Hypertension, and senior physician at BWH. Kaiser’s research focuses on understanding the molecular mechanisms by which pulsatile gonadotropin-releasing hormone regulates the expression of luteinizing hormone and follicle-stimulating hormone genes.

Insights on Creating a Good Research Question

Junichi Tokuda, PhD

Play Junichi Tokuda video

Ursula Kaiser, MD

Play Ursula Kaiser video

Start Successfully: Build the Foundation of a Good Research Question

Jonathan Williams, MD, MMSc

Start Successfully Resources

Ideation in Device Development: Finding Clinical Need Josh Tolkoff, MS A lecture explaining the critical importance of identifying a compelling clinical need before embarking on a research project. Play Ideation in Device Development video .

Radical Innovation Jeff Karp, PhD This ThinkResearch podcast episode focuses on one researcher’s approach using radical simplicity to break down big problems and questions. Play Radical Innovation .

Using Healthcare Data: How can Researchers Come up with Interesting Questions? Anupam Jena, MD, PhD Another ThinkResearch podcast episode addresses how to discover good research questions by using a backward design approach which involves analyzing big data and allowing the research question to unfold from findings. Play Using Healthcare Data .

Important Factors: Consider Feasibility and Novelty

Sharmila Dorbala, MD, MPH

Refining Your Research Question 

Play video of Clare Tempany-Afdhal

Elliot Israel, MD

Play Elliott Israel video

Frameworks and Structure: Evaluate Research Questions Using Tools and Techniques

Frameworks and Structure Resources

Designing Clinical Research Hulley et al. A comprehensive and practical guide to clinical research, including the FINER framework for evaluating research questions. Learn more about the book .

Translational Medicine Library Guide Queens University Library An introduction to popular frameworks for research questions, including FINER and PICO. Review translational medicine guide .

Asking a Good T3/T4 Question  Niteesh K. Choudhry, MD, PhD This video explains the PICO framework in practice as participants in a workshop propose research questions that compare interventions. Play Asking a Good T3/T4 Question video

Introduction to Designing & Conducting Mixed Methods Research An online course that provides a deeper dive into mixed methods’ research questions and methodologies. Learn more about the course

Network and Support: Find the Collaborators and Stakeholders to Help Evaluate Research Questions

Christopher Gibbons, MD

Network & Support Resource

Bench-to-bedside, Bedside-to-bench Christopher Gibbons, MD In this lecture, Gibbons shares his experience of bringing research from bench to bedside, and from bedside to bench. His talk highlights the formation and evolution of research questions based on clinical need. Play Bench-to-bedside. 

Writing Strong Research Questions | Criteria & Examples

Published on 30 October 2022 by Shona McCombes. Revised on 12 December 2023.

A research question pinpoints exactly what you want to find out in your work. A good research question is essential to guide your research paper, dissertation, or thesis.

All research questions should be:

  • Focused on a single problem or issue
  • Researchable using primary and/or secondary sources
  • Feasible to answer within the timeframe and practical constraints
  • Specific enough to answer thoroughly
  • Complex enough to develop the answer over the space of a paper or thesis
  • Relevant to your field of study and/or society more broadly



You can follow these steps to develop a strong research question:

  • Choose your topic
  • Do some preliminary reading about the current state of the field
  • Narrow your focus to a specific niche
  • Identify the research problem that you will address

The way you frame your question depends on what your research aims to achieve. The table below shows some examples of how you might formulate questions for different purposes.

Research question formulations
  • Describing and exploring
  • Explaining and testing
  • Evaluating and acting

Using your research problem to develop your research question

Example research problem Example research question(s)
Teachers at the school do not have the skills to recognize or properly guide gifted children in the classroom. What practical techniques can teachers use to better identify and guide gifted children?
Young people increasingly engage in the ‘gig economy’, rather than traditional full-time employment. However, it is unclear why they choose to do so. What are the main factors influencing young people’s decisions to engage in the gig economy?

Note that while most research questions can be answered with various types of research , the way you frame your question should help determine your choices.


Research questions anchor your whole project, so it’s important to spend some time refining them. The criteria below can help you evaluate the strength of your research question.

Focused and researchable

Criteria Explanation
Focused on a single topic Your central research question should work together with your research problem to keep your work focused. If you have multiple questions, they should all clearly tie back to your central aim.
Answerable using credible sources Your question must be answerable using quantitative and/or qualitative data, or by reading scholarly sources on the topic to develop your argument. If such data is impossible to access, you likely need to rethink your question.
Not based on value judgements Avoid subjective words like good, bad, better, and worse. These do not give clear criteria for answering the question.

Feasible and specific

Criteria Explanation
Answerable within practical constraints Make sure you have enough time and resources to do all research required to answer your question. If it seems you will not be able to gain access to the data you need, consider narrowing down your question to be more specific.
Uses specific, well-defined concepts All the terms you use in the research question should have clear meanings. Avoid vague language, jargon, and too-broad ideas.

Does not demand a conclusive solution, policy, or course of action Research is about informing, not instructing. Even if your project is focused on a practical problem, it should aim to improve understanding rather than demand a ready-made solution.

Complex and arguable

Criteria Explanation
Cannot be answered with yes or no Closed-ended, yes/no questions are too simple to work as good research questions—they don’t provide enough scope for robust investigation and discussion.

Cannot be answered with easily-found facts If you can answer the question through a single Google search, book, or article, it is probably not complex enough. A good research question requires original data, synthesis of multiple sources, and original interpretation and argumentation prior to providing an answer.

Relevant and original

Criteria Explanation
Addresses a relevant problem Your research question should be developed based on initial reading around your topic. It should focus on addressing a problem or gap in the existing knowledge in your field or discipline.
Contributes to a timely social or academic debate The question should aim to contribute to an existing and current debate in your field or in society at large. It should produce knowledge that future researchers or practitioners can later build on.
Has not already been answered You don’t have to ask something that nobody has ever thought of before, but your question should have some aspect of originality. For example, you can focus on a specific location, or explore a new angle.

The way you present your research problem in your introduction varies depending on the nature of your research paper. A research paper that presents a sustained argument will usually encapsulate this argument in a thesis statement.

A research paper designed to present the results of empirical research tends to present a research question that it seeks to answer. It may also include a hypothesis – a prediction that will be confirmed or disproved by your research.

As you cannot possibly read every source related to your topic, it’s important to evaluate sources to assess their relevance. Use preliminary evaluation to determine whether a source is worth examining in more depth.

This involves:

  • Reading abstracts, prefaces, introductions, and conclusions
  • Looking at the table of contents to determine the scope of the work
  • Consulting the index for key terms or the names of important scholars

An essay isn’t just a loose collection of facts and ideas. Instead, it should be centered on an overarching argument (summarised in your thesis statement) that every part of the essay relates to.

The way you structure your essay is crucial to presenting your argument coherently. A well-structured essay helps your reader follow the logic of your ideas and understand your overall point.

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘ x affects y because …’).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study , the statistical hypotheses correspond logically to the research hypothesis.
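The null/alternative pairing described above can be made concrete with a small simulation. The sketch below is illustrative only: the data are invented, and the permutation test shown is just one of many ways to test a null hypothesis of "no difference between groups".

```python
import random

random.seed(0)

# Invented example data: scores for a treated and a control group.
# Research hypothesis: the treatment raises scores.
# Statistical hypotheses:
#   H0 (null):        mean(treated) - mean(control) = 0
#   H1 (alternative): mean(treated) - mean(control) > 0
treated = [7.1, 6.8, 7.9, 8.2, 7.5, 8.0]
control = [6.2, 6.9, 6.4, 7.0, 6.5, 6.1]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treated) - mean(control)

# Permutation test: under H0 the group labels are interchangeable, so we
# shuffle the pooled scores many times and count how often a difference
# at least as large as the observed one arises by chance alone.
pooled = treated + control
n = len(treated)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
        extreme += 1

p_value = extreme / trials
print(f"observed difference: {observed:.2f}")
print(f"one-sided p-value: {p_value:.4f}")
# A small p-value means the observed data would be surprising if H0 were
# true, which counts as evidence in favour of H1.
```

Note how the statistical hypotheses correspond logically to the research hypothesis: rejecting H0 supports the researcher's proposed answer, while failing to reject it does not.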

Cite this Scribbr article


McCombes, S. (2023, December 12). Writing Strong Research Questions | Criteria & Examples. Scribbr. Retrieved 9 June 2024, from https://www.scribbr.co.uk/the-research-process/research-question/


SciSpace Resources

How To Write a Research Question

Deeptanshu D

Academic writing and research require a distinct focus and direction. A well-designed research question gives purpose and clarity to your research. In addition, it helps your readers understand the issue you are trying to address and explore.

Every time you want to know more about a subject, you will pose a question. The same idea is used in research as well. You must pose a question in order to effectively address a research problem. That's why the research question is an integral part of the research process. Additionally, it offers the author writing and reading guidelines, be it qualitative research or quantitative research.

In your research paper, you must single out just one issue or problem. The specific issue or claim you wish to address should be included in your thesis statement in order to clarify your main argument.

A good research question must have the following characteristics.


  • Should address only one problem
  • Should be answerable using primary and/or secondary data sources
  • Should be possible to resolve within the given time and other constraints
  • Should allow for detailed and in-depth results
  • Should be relevant and realistic
  • Should relate to your chosen area of research
While a larger project, like a thesis, might have several research questions to address, each one should be directed at your main area of study. Of course, you can use different research designs and research methods (qualitative research or quantitative research) to address various research questions. However, they must all be pertinent to the study's objectives.

What is a Research Question?


A research question is an inquiry that the research attempts to answer. It is the heart of the systematic investigation, and formulating it is the most important step in any research project: in essence, it initiates the project and sets its pace. A research question is:

  • Clear: It provides enough detail that the audience understands its purpose without any additional explanation.
  • Focused: It is specific enough to be addressed within the constraints of the writing task.
  • Succinct: It is expressed in as few words as possible.
  • Complex: It cannot be answered with a simple "yes" or "no", but requires analysis and synthesis of ideas before a solution can be proposed.
  • Arguable: Its potential answers are open to debate rather than accepted facts.

A good research question usually focuses on the research and determines the research design, methodology, and hypothesis. It guides all phases of inquiry, data collection, analysis, and reporting. You should gather valuable information by asking the right questions.

Why are Research Questions so important?

Regardless of whether it is a qualitative research or quantitative research project, research questions provide writers and their audience with a way to navigate the writing and research process. Writers can avoid "all-about" papers by asking straightforward and specific research questions that help them focus on their research and support a specific thesis.

Types of Research Questions


There are two types of research: Qualitative research and Quantitative research . There must be research questions for every type of research. Your research question will be based on the type of research you want to conduct and the type of data collection.

The first step in designing research involves identifying a gap and creating a focused research question.

Below is a list of common research questions that can be used in a dissertation. Keep in mind that these are merely illustrations of typical research question types; real research questions are often more complex.

  • Descriptive: What are the properties of A?
  • Comparative: What are the similarities and differences between A and B?
  • Correlational: What is the relationship between variables A and B?
  • Exploratory: What factors affect the rate of C's growth? Do A and B also influence C?
  • Explanatory: What are the causes of C? What effect does A have on B? What is causing D?
  • Evaluation: What is the impact of C? What role does B play? What are the benefits and drawbacks of A?
  • Action-Based: What can be done to improve X?

Example Research Questions

examples-of-research-question

The following examples of research problems and their research questions show how a research question can be derived from a particular research problem.

Problem 1: Due to poor revenue collection, a small company ('A') in the UK cannot allocate a marketing budget for next year.

Research question: What practical steps can the company take to increase its revenue?

Problem 2: Many graduates with degrees from well-respected academic institutions are now working as freelancers, but it is unclear why these young people choose this kind of work.

Research questions: Why do fresh graduates choose to freelance rather than work full-time? What are the benefits and drawbacks of the gig economy? What do age, gender, and academic qualifications have to do with people's perceptions of freelancing?

Steps to Write Research Questions


Research questions give you direction: they focus your work on the issue or research gap you are attempting to address.

If you're unsure how to go about writing a good research question, these are the steps to follow in the process:

  • Select an interesting topic: Always choose a topic that interests you; if a subject doesn't arouse your curiosity, you'll have a hard time conducting research on it. Also, it's better to pick something that's neither too narrow nor too broad.
  • Do preliminary research on the topic: Search for relevant literature to gauge which problems scholars have already tackled. You can do that conveniently through repositories like SciSpace, where you'll find millions of papers in one place. Once you find the papers you're looking for, try the reading assistant, SciSpace Copilot, to get simple explanations of each paper. You'll be able to quickly understand the abstract and find the key takeaways and main arguments presented in the paper. This will give you a more contextual understanding of your subject, and you'll have an easier time identifying knowledge gaps in your discipline.


  • Consider your audience: It is essential to understand your audience to develop focused research questions for essays or dissertations. When narrowing down your topic, you can identify aspects that might interest your audience.
  • Ask questions: Asking questions will give you a deeper understanding of the topic. Evaluate your question using "What," "Why," "When," "How," and other open-ended prompts.
  • Assess your question: Once you have created a research question, assess its effectiveness to determine whether it serves its purpose. Refine and revise the dissertation research question as many times as needed.

Additionally, use this list of questions as a guide when formulating your research question.

  • Can your research question be answered? After identifying a gap in the research, formulate a question that allows your research to solve part of the problem.
  • Is your research question clear and centered on the main topic? Your research question should be specific and related to your central goal.
  • Is your research question sufficiently complex? It should not be answerable with a simple yes or no; the problem should require in-depth analysis, and such questions often start with "How" or "Why."

Start your research: Once you have finalized your dissertation research questions, it is time to review the literature on similar topics to discover different perspectives.

Strong Research Question Samples

Uncertain: How should social networking sites address the hatred that flows through their platforms?

Certain: What should social media sites like Twitter or Facebook do to address the harm they are causing?

The uncertain question does not specify which social networking sites are meant or what harm they might be causing, and it assumes that this "harm" is proven and/or accepted. The certain version is more specific: it identifies the sites (Twitter, Facebook), asks about the type of harm (e.g., privacy concerns), and implies who might be suffering from it (users). Effective research questions should not be ambiguous or open to interpretation.

Unfocused: What are the effects of global warming on the environment?

Focused: What are the most important effects of glacial melting in Antarctica on penguins' lives?

The broad research question cannot be addressed in a book, let alone a college-level paper. The focused version targets a specific effect of global warming (glacial melting), a specific area (Antarctica), and a specific animal (penguins). The writer must also decide which effect will have the greatest impact on the animals concerned. When in doubt, make your research question as narrow and specific as possible.

Too Simple: What are U.S. doctors doing to treat diabetes?

Appropriately complex: Which factors, if any, are most likely to predict a person's risk of developing diabetes?

The answer to the simple version can be found online with a few facts. The more complex version is divided into two parts; it is thought-provoking and requires extensive investigation and evaluation by the author. Make sure a quick Google search cannot answer your research question.

How to write a strong Research Question?


The foundation of all research is the research question. You should therefore spend as much time as necessary to refine your research question based on various data.

You can conduct your research more efficiently and analyze your results better if you have great research questions for your dissertation, research paper , or essay .

The following criteria can help you evaluate the strength and importance of your research question:

  • Researchable
    • It should cover only one issue.
    • It should not include a subjective judgment.
    • It can be answered through data analysis and research.
  • Specific and Practical
    • It should not contain a plan of action, policy, or solution.
    • It should be clearly defined.
    • It should be answerable within the limits of your research.
  • Complex and Arguable
    • It should not be easy to answer.
    • Answering it should require in-depth knowledge.
    • It should allow for discussion and deliberation.
  • Original and Relevant
    • It should be in your area of study.
    • Its results should be measurable.
    • It should be original.

Conclusion - How to write Research Questions?

Research questions provide a clear guideline for research. A larger project, such as a dissertation, may include several research questions; however, each question should focus on only one topic.

Research questions must be answerable, practical, specific, and applicable to your field. The type of research you conduct will shape your research questions. Start by selecting an interesting topic and doing preliminary research; then begin asking questions, evaluating those questions, and starting your research.

Now it's easier than ever to streamline your research workflow with SciSpace ResearchGPT . Its integrated, comprehensive end-to-end platform for research allows scholars to easily discover, read, write and publish their research and fosters collaboration.



How to Write a Research Question: Types and Examples 


The first step in any research project is framing the research question. It can be considered the core of any systematic investigation as the research outcomes are tied to asking the right questions. Thus, this primary interrogation point sets the pace for your research as it helps collect relevant and insightful information that ultimately influences your work.   

Typically, the research question guides the stages of inquiry, analysis, and reporting. Depending on whether they involve quantifiable (numerical) or non-quantifiable data, research questions are broadly categorized as quantitative or qualitative. Both types can be used independently or together, depending on the overall focus and objectives of your research.

What is a research question?

A research question is a clear, focused, concise, and arguable question on which your research and writing are centered. 1 It states various aspects of the study, including the population and variables to be studied and the problem the study addresses. These questions also set the boundaries of the study, ensuring cohesion. 

Designing the research question is a dynamic process where the researcher can change or refine the research question as they review related literature and develop a framework for the study. Depending on the scale of your research, the study can include single or multiple research questions. 

A good research question has the following features: 

  • It is relevant to the chosen field of study. 
  • The question posed is arguable and open for debate, requiring synthesizing and analysis of ideas. 
  • It is focused and concisely framed. 
  • A feasible solution is possible within the given practical constraints and timeframe.

A poorly formulated research question poses several risks. 1   

  • Researchers can adopt an erroneous design. 
  • It can create confusion and hinder the thought process, including developing a clear protocol.  
  • It can jeopardize publication efforts.  
  • It causes difficulty in determining the relevance of the study findings.  
  • It causes difficulty in determining whether the study fulfils the inclusion criteria for systematic review and meta-analysis, creating challenges in determining whether additional studies or data collection are needed to answer the question.
  • Readers may fail to understand the objective of the study. This reduces the likelihood of the study being cited by others. 

Now that you know “What is a research question?”, let’s look at the different types of research questions. 

Types of research questions

Depending on the type of research to be done, research questions can be classified broadly into quantitative, qualitative, or mixed-methods studies. Knowing the type of research helps determine the best type of research question that reflects the direction and epistemological underpinnings of your research. 

The structure and wording of quantitative 2 and qualitative research 3 questions differ significantly. The quantitative study looks at causal relationships, whereas the qualitative study aims at exploring a phenomenon. 

  • Quantitative research questions:
    • Seek to investigate social, familial, or educational experiences or processes in a particular context and/or location.
    • Answer 'how,' 'what,' or 'why' questions.
    • Investigate connections, relations, or comparisons between independent and dependent variables.

Quantitative research questions can be further categorized into descriptive, comparative, and relationship, as explained in the Table below. 

 
  • Descriptive research questions: These measure the responses of a study's population to a particular question or variable. They commonly begin with "How much?", "How regularly?", "What percentage?", "What time?", or "What is?" Research question example: How often do you buy mobile apps for learning purposes?
  • Comparative research questions: These investigate differences between two or more groups on an outcome variable. For instance, the researcher may compare groups with and without a certain variable. Research question example: What are the differences in attitudes towards online learning between visual and kinaesthetic learners?
  • Relationship research questions: These explore and define trends and interactions between two or more variables, investigating relationships between dependent and independent variables using words such as "association" or "trends." Research question example: What is the relationship between disposable income and job satisfaction amongst US residents?
  • Qualitative research questions  

Qualitative research questions are adaptable, non-directional, and more flexible. They concern broad areas of research or more specific areas of study and aim to discover, explain, or explore a phenomenon. They are further classified as follows:

   
  • Exploratory questions: These seek to understand something without influencing the results. The aim is to learn more about a topic without introducing bias or preconceived notions. Research question example: What are people's thoughts on the new government?
  • Experiential questions: These focus on understanding individuals' experiences, perspectives, and subjective meanings related to a particular phenomenon, aiming to capture personal experiences and emotions. Research question example: What challenges do students face during their transition from school to college?
  • Interpretive questions: These investigate people in their natural settings to help understand how a group makes sense of shared experiences of a phenomenon. Research question example: How do you feel about ChatGPT assisting student learning?
  • Mixed-methods studies  

Mixed-methods studies use both quantitative and qualitative research questions to answer your research question. Mixed methods provide a more complete picture than standalone quantitative or qualitative research because they integrate the benefits of both approaches. Mixed-methods research is often used in multidisciplinary settings and in complex situational or societal research, especially in the behavioral, health, and social sciences.

What makes a good research question

A good research question should be clear and focused to guide your research. It should synthesize multiple sources to present your unique argument, and should ideally be something that you are interested in. But avoid questions that can be answered in a few factual statements. The following are the main attributes of a good research question. 

  • Specific: The research question should not be a fishing expedition performed in the hope that some useful new information will turn up. The central research question should work with your research problem to keep your work focused, and if you use multiple questions, they should all tie back to the central aim.
  • Measurable: The research question must be answerable using quantitative and/or qualitative data or scholarly sources. If such data is impossible to access, it is better to rethink your question.
  • Attainable: Ensure you have enough time and resources to do all the research required to answer your question. If it seems you will not be able to access the data you need, consider narrowing your question to be more specific. Check that:
    • You have the expertise.
    • You have the equipment and resources.
  • Realistic: Your research question should be based on initial reading about your topic and should address a problem or gap in the existing knowledge in your field or discipline. It should be:
    • Grounded in sound reasoning.
    • Achievable in a reasonable time frame.
  • Timely: The research question should contribute to an existing and current debate in your field or in society at large and produce knowledge that future researchers or practitioners can build on. It should be:
    • Novel.
    • Based on current technologies.
    • Important for answering current problems or concerns.
    • Capable of leading to new directions.
  • Important: Your question should have some aspect of originality; incremental research is as important as exploring disruptive technologies. For example, you can focus on a specific location or explore a new angle.
  • Meaningful whether the answer is "Yes" or "No": Closed-ended, yes/no questions are too simple to work as good research questions; they do not provide enough scope for robust investigation and discussion. A good research question requires original data, synthesis of multiple sources, and original interpretation and argumentation before an answer can be given.

Steps for developing a good research question

The importance of research questions cannot be overstated. When drafting a research question, use the following frameworks to guide its components and ease the process. 4

  • Determine the requirements: Before constructing a good research question, set your research requirements. What is the purpose? Is it descriptive, comparative, or explorative research? Determining the research aim will help you choose the most appropriate topic and word your question appropriately. 
  • Select a broad research topic: Identify a broader subject area of interest that requires investigation. Techniques such as brainstorming or concept mapping can help identify relevant connections and themes within a broad research topic (for example, how to learn and how to help students learn).
  • Perform preliminary investigation: Preliminary research is needed to obtain up-to-date and relevant knowledge on your topic. It also helps identify issues currently being discussed from which information gaps can be identified. 
  • Narrow your focus: Narrow the scope and focus of your research to a specific niche. This involves focusing on gaps in existing knowledge or recent literature, or extending or complementing the findings of existing literature. Another approach is to construct research questions that challenge your views or knowledge of the area of study (example: Is learning consistent with existing learning theory and research?).
  • Identify the research problem: Once the research question has been framed, evaluate it to confirm its importance and decide whether it needs further revision (example: How do your beliefs about learning theory and research impact your instructional practices?).

How to write a research question

If you are struggling to understand how to write a research question, these simple steps can help you through the process:

  • Topic selection: Choose a broad topic, such as "learner support" or "social media influence," for your study. Select a topic of interest to make the research more enjoyable and to stay motivated.
  • Preliminary research: The goal is to refine and focus your research question. The following strategies can help: skim various scholarly articles; list subtopics under the main topic; list possible research questions for each subtopic; consider the scope of research for each question; and select research questions that are answerable within a specific time and with the available resources. If the scope is too large, repeat the process with sub-subtopics.
  • Audience: When choosing what to base your research on, consider your readers. For college papers, the audience is academic. Ask yourself whether your audience would be interested in the topic you are considering. Determining your audience can also help refine the importance of your research question and focus it on items relevant to your defined group.
  • Generate potential questions: Ask open-ended "how?" and "why?" questions to find a more specific research question. You can generate questions through gap-spotting to identify research limitations, problematization to challenge assumptions made by others, or drawing on personal experiences to identify issues in your industry.
  • Review brainstormed questions: Evaluate each question to check its effectiveness. Use the FINER model to see whether the question meets all the research question criteria.
  • Construct the research question: Multiple frameworks, such as PICOT and PEO, are available to help structure your research question. The frameworks listed below can supply the information needed to generate your research question.
  • FINER: Feasible, Interesting, Novel, Ethical, Relevant
  • PICOT: Population or problem; Intervention or indicator being studied; Comparison group; Outcome of interest; Time frame of the study
  • PEO: Population being studied; Exposure to preexisting conditions; Outcome of interest

Sample Research Questions

The following are some bad and good research question examples:

  • Example 1 
Unclear: How does social media affect student growth? 
Clear: What effect does the daily use of Twitter and Facebook have on the career development goals of students? 
Explanation: The first research question is unclear because of the vagueness of “social media” as a concept and the lack of specificity. The second question is specific and focused, and its answer can be discovered through data collection and analysis.  
  • Example 2 
Simple: Has there been an increase in the number of gifted children identified? 
Complex: What practical techniques can teachers use to identify and guide gifted children better? 
Explanation: A simple “yes” or “no” statement easily answers the first research question. The second research question is more complicated and requires the researcher to collect data, perform in-depth data analysis, and form an argument that leads to further discussion. 

References:  

  • Thabane, L., Thomas, T., Ye, C., & Paul, J. (2009). Posing the research question: not so simple.  Canadian Journal of Anesthesia/Journal canadien d’anesthésie ,  56 (1), 71-79. 
  • Rutberg, S., & Bouikidis, C. D. (2018). Focusing on the fundamentals: A simplistic differentiation between qualitative and quantitative research.  Nephrology Nursing Journal ,  45 (2), 209-213. 
  • Kyngäs, H. (2020). Qualitative research and content analysis.  The application of content analysis in nursing science research , 3-11. 
  • Mattick, K., Johnston, J., & de la Croix, A. (2018). How to… write a good research question.  The clinical teacher ,  15 (2), 104-108. 
  • Fandino, W. (2019). Formulating a good research question: Pearls and pitfalls.  Indian Journal of Anaesthesia ,  63 (8), 611. 
  • Richardson, W. S., Wilson, M. C., Nishikawa, J., & Hayward, R. S. (1995). The well-built clinical question: a key to evidence-based decisions.  ACP journal club ,  123 (3), A12-A13 



Writing Studio

Formulating Your Research Question (RQ)


In a research paper, the emphasis is on generating a unique question and then synthesizing diverse sources into a coherent essay that supports your argument about the topic. In other words, you integrate information from publications with your own thoughts in order to formulate an argument. Your topic is your starting place: from here, you will develop an engaging research question. Merely presenting a topic in the form of a question does not transform it into a good research question.

Research Topic Versus Research Question Examples

1. Broad Topic Versus Narrow Question

1a. Broad Topic

“What forces affect race relations in America?”

1b. Narrower Question

“How do corporate hiring practices affect race relations in Nashville?”

The question “What is the percentage of racial minorities holding management positions in corporate offices in Nashville?” is much too specific and would yield, at best, a statistic that could become part of a larger argument.

2. Neutral Topic Versus Argumentative Question

2a. Neutral Topic

“How does KFC market its low-fat food offerings?”

2b. Argumentative Question

“Does KFC put more money into marketing its high-fat food offerings than its lower-fat ones?”

The latter question is somewhat better, since it may lead you to take a stance or formulate an argument about consumer awareness or benefit.

3. Objective Topic Versus Subjective Question

Objective subjects are factual and do not have sides to be argued. Subjective subjects are those about which you can take a side.

3a. Objective Topic

“How much time do youth between the ages of 10 and 15 spend playing video games?”

3b. Subjective Question

“What are the effects of video-gaming on the attention spans of youth between the ages of 10 and 15?”

The first question is likely to lead to some data, though not necessarily to an argument or issue. The second question is somewhat better, since it might lead you to formulate an argument for or against time spent playing video games.

4. Open-Ended Topic Versus Direct Question

4a. Open-Ended Topic

“Does the author of this text use allusion?”

4b. Direct Question (Gives Direction to Research)

“Does the ironic use of allusion in this text reveal anything about the author’s unwillingness to divulge his political commitments?”

The second question gives focus by putting the use of allusion into the specific context of a question about the author’s political commitments and perhaps also about the circumstances under which the text was produced.

Research Question (RQ) Checklist

  • Is my RQ something that I am curious about and that others might care about? Does it present an issue on which I can take a stand?
  • Does my RQ put a new spin on an old issue, or does it try to solve a problem?
  • Is my RQ too broad, too narrow, or OK? In particular, can it be answered:
    • within the time frame of the assignment?
    • given the resources available at my location?
  • Is my RQ measurable? What type of information do I need? Can I find actual data to support or contradict a position?
  • What sources will have the type of information that I need to answer my RQ (journals, books, internet resources, government documents, interviews with people)?

Final Thoughts

The answer to a good research question will often be the THESIS of your research paper! And the results of your research may not always be what you expected them to be. Not only is this OK, it can be an indication that you are doing careful work!

Adapted from an online tutorial at Empire State College: http://www.esc.edu/htmlpages/writerold/menus.htm#develop (broken link)

Last revised: November 2022 | Adapted for web delivery: November 2022


How to Write a Good Research Question (w/ Examples)


What is a Research Question?

A research question is the main question that your study sought or is seeking to answer. A clear research question guides your research paper or thesis and states exactly what you want to find out, giving your work focus and an objective. Learning how to write a hypothesis or research question is the starting point for composing any thesis, dissertation, or research paper. It is also one of the most important sections of a research proposal.

A good research question not only clarifies the writing in your study; it also gives your readers a clear focus, facilitates their understanding of your research topic, and outlines your study's objectives. Before drafting the paper and receiving research paper editing (and usually before performing your study), you should write a concise statement of what the study intends to accomplish or reveal.

Research Question Writing Tips

Listed below are the important characteristics of a good research question. It should:

  • Be clear and provide specific information so readers can easily understand its purpose.
  • Be focused in scope and narrow enough to be addressed within the space allowed by your paper.
  • Be relevant and concise, expressing your main ideas in as few words as possible, like a hypothesis.
  • Be precise and complex enough that it cannot be answered with a simple "yes" or "no," instead requiring analysis of arguments and the literature.
  • Be arguable or testable, so that answers to the research question are open to scrutiny, specific questions, and counterarguments.

Some of these characteristics might be difficult to understand in the form of a list. Let’s go into more detail about what a research question must do and look at some examples of research questions.

The research question should be specific and focused 

Research questions that are too broad are not suitable to be addressed in a single study. One sign of an overly broad question is that there are too many factors or variables to consider. Similarly, a sample data set that is too large or an experimental timeline that is too long may suggest that the research question is not focused enough.

A specific research question means that the collective data and observations come together to either confirm or deny the chosen hypothesis in a clear manner. If a research question is too vague, then the data might end up creating an alternate research problem or hypothesis that you haven’t addressed in your Introduction section .

Too broad: What is the importance of genetic research in the medical field?
Improved: How might the discovery of a genetic basis for alcoholism impact triage processes in medical facilities?

The research question should be based on the literature 

An effective research question should be answerable and verifiable based on prior research because an effective scientific study must be placed in the context of a wider academic consensus. This means that conspiracy or fringe theories are not good research paper topics.

Instead, a good research question must extend, examine, or verify the existing context of your research field. It should fit naturally within the literature and be discoverable by other researchers.

References to the literature can be in different citation styles and must be properly formatted according to the guidelines set forth by the publishing journal, university, or academic institution. This includes in-text citations as well as the Reference section . 

The research question should be realistic in time, scope, and budget

There are two main constraints to the research process: timeframe and budget.

A proper research question will involve study or experimental procedures that can be executed within a feasible time frame, typically by a doctoral or master's student or a lab technician. Research that requires future technology, prohibitively expensive resources, or extensive follow-up procedures is problematic.

A researcher’s budget is also a major constraint to performing timely research. Research at many large universities or institutions is publicly funded and is thus accountable to funding restrictions. 

The research question should be in-depth

Research papers, dissertations and theses , and academic journal articles are usually dozens if not hundreds of pages in length.

A good research question or thesis statement must be sufficiently complex to warrant such a length, as it must stand up to the scrutiny of peer review and be reproducible by other scientists and researchers.

Research Question Types

Qualitative and quantitative research are the two major types of research, and it is essential to develop research questions for each type of study. 

Quantitative Research Questions

Quantitative research questions are specific. A typical research question involves the population to be studied, dependent and independent variables, and the research design.

Quantitative research questions also connect the research question to the research design, and they cannot be answered definitively with a simple "yes" or "no" response. For example, scientific fields such as biology, physics, and chemistry often deal with "states," in which different quantities, amounts, or velocities drastically alter the relevance of the research.

As a consequence, quantitative research questions avoid categorical, yes/no constructions built around words such as "is," "are," "does," or "does not."

Categories of quantitative research questions

  • Descriptive questions attempt to describe the behavior of a population with regard to one or more variables, or to describe characteristics of those variables that will be measured. These are usually "What?" questions.
  • Comparative questions seek to discover differences between groups within the context of an outcome variable. These questions can also be causal: researchers may compare groups in which certain variables are present with groups in which they are not.
  • Correlational (relationship-based) questions are designed to elucidate and describe trends and interactions among variables. These questions include the dependent and independent variables and use words such as "association" or "trends."

Qualitative Research Questions

Qualitative research questions can relate to broad research areas as well as more specific areas of study. They are less directional, more flexible, and more adaptable than their quantitative counterparts. Thus, studies based on these questions tend to focus on "discovering," "explaining," "elucidating," and "exploring."

Categories of qualitative research questions

  • Attempt to identify and describe existing conditions.
  • Attempt to describe a phenomenon.
  • Assess the effectiveness of existing methods, protocols, theories, or procedures.
  • Examine a phenomenon or analyze the reasons for, or relationships between, subjects or phenomena.
  • Focus on the unknown aspects of a particular topic.

Quantitative and Qualitative Research Question Examples

Descriptive research question
Comparative research question
Correlational research question
Exploratory research question
Explanatory research question
Evaluation research question


Good and Bad Research Question Examples

Below are some good (and not-so-good) examples of research questions that researchers can use to guide them in crafting their own research questions.

Research Question Example 1

The first research question is too vague in both its independent and dependent variables. First, there is no specific information on what "exposure" means. Does this refer to comments, likes, engagement, or just the amount of time spent on the social media platform?

Second, there is no useful information on what exactly “affected” means. Does the subject’s behavior change in some measurable way? Or does this term refer to another factor such as the user’s emotions?

Research Question Example 2

Here, the first example is too simple and not sufficiently complex, making it difficult to assess whether the study answered the question. The author could really only answer this question with a simple "yes" or "no." Further, the presence of data would not help answer this question more deeply, which is a sure sign of a poorly constructed research question.

The second research question is specific, complex, and empirically verifiable. One can measure program effectiveness based on metrics such as attendance or grades. Further, “bullying” is made into an empirical, quantitative measurement in the form of recorded disciplinary actions.

Steps for Writing a Research Question

Good research questions are relevant, focused, and meaningful. It can be difficult to come up with a good research question, but there are a few steps you can follow to make it a bit easier.

1. Start with an interesting and relevant topic

Choose a research topic that is interesting to you but also relevant to and aligned with your own country's context or your university's capabilities. Popular academic topics include healthcare and medical research. However, if you are attending an engineering school or humanities program, choose a research question that pertains to your specific field and major.

In terms of the most popular research fields based on publication output by region, healthcare and the basic sciences receive the most funding and earn the highest number of publications.

2. Do preliminary research  

You can begin doing preliminary research once you have chosen a research topic. Two objectives should be accomplished during this first phase of research. First, you should undertake a preliminary review of related literature to discover issues that scholars and peers are currently discussing. With this method, you show that you are informed about the latest developments in the field.

Secondly, identify knowledge gaps or limitations in your topic by conducting a preliminary literature review . It is possible to later use these gaps to focus your research question after a certain amount of fine-tuning.

3. Narrow your research to determine specific research questions

You can focus on a more specific area of study once you have a good handle on the topic you want to explore. Focusing on recent literature or knowledge gaps is one good option. 

By identifying study limitations in the literature and overlooked areas of study, an author can carve out a good research question. The same is true for choosing research questions that extend or complement existing literature.

4. Evaluate your research question

Make sure you evaluate the research question by asking the following questions:

Is my research question clear?

The data and observations that your study produces should be clear. For quantitative studies, data must be empirical and measurable. For qualitative studies, the observations should be clearly delineable across categories.

Is my research question focused and specific?

A strong research question should be specific enough that your methodology or testing procedure produces an objective result, not one left to subjective interpretation. Open-ended research questions or those relating to general topics can create ambiguous connections between the results and the aims of the study. 

Is my research question sufficiently complex?

The result of your research should be consequential and substantial (and fall sufficiently within the context of your field) to warrant an academic study. Simply reinforcing or supporting a scientific consensus is superfluous and will likely not be well received by most journal editors.  


Editing Your Research Question

Your research question should be fully formulated well before you begin drafting your research paper. However, you can receive English paper editing and proofreading services at any point in the drafting process. Language editors with expertise in your academic field can assist you with the content and language in your Introduction section or other manuscript sections. If you need further assistance or information on composing your paper, check out our academic resources, which provide dozens of articles and videos on a variety of academic writing and publication topics.

Enago Academy

How to Develop a Good Research Question? — Types & Examples


Cecilia is living through a tough situation in her research life. Figuring out where to begin, how to start her research study, and how to pose the right question for her research quest is driving her insane. Questions, if not asked correctly, have a tendency to send us into a spiral!

Image Source: https://phdcomics.com/

Questions lead everyone to answers, and research is a quest to find answers. Not the vague questions Cecilia means to answer, but more focused questions that define your research. Therefore, asking the appropriate question becomes an important matter of discussion.

A well-begun research process requires a strong research question. It directs the research investigation and provides a clear goal to focus on. Understanding the characteristics of a good research question will generate new ideas and help you discover new methods in research.

In this article, we aim to help researchers understand what a research question is and how to write one, with examples.


What Is a Research Question?

A good research question defines your study and helps you seek an answer to your research. Moreover, a clear research question guides the research paper or thesis in defining exactly what you want to find out, giving your work its objective. Learning to write a research question is the beginning of any thesis, dissertation, or research paper. Furthermore, the question addresses issues or problems that are answered through the analysis and interpretation of data.

Why Is a Research Question Important?

A strong research question guides the design of a study. It helps determine the type of research and identify specific objectives. Research questions state the specific issue you are addressing and focus on the outcomes of the research. A strong question also helps break the study into manageable steps, so you can complete the objectives and answer the initial question.

Types of Research Questions

Research questions can be categorized into different types, depending on the type of research you want to undertake. Knowing the type of research will help you determine the best type of research question to use.

1. Qualitative Research Question

Qualitative questions concern broad areas or more specific areas of research. However, unlike quantitative questions, qualitative research questions are adaptable, non-directional, and more flexible. Qualitative research questions focus on discovering, explaining, elucidating, and exploring.

i. Exploratory Questions

This form of question looks to understand something without influencing the results. The objective of exploratory questions is to learn more about a topic without attributing bias or preconceived notions to it.

Research Question Example: Asking how a chemical is used or perceptions around a certain topic.

ii. Predictive Questions

Predictive research questions are defined as survey questions that automatically predict the best possible response options based on the text of the question. Moreover, these questions seek to understand the intent or future outcome surrounding a topic.

Research Question Example: Asking why a consumer behaves in a certain way or chooses a certain option over others.

iii. Interpretive Questions

This type of research question allows the study of people in their natural setting. The questions help us understand how a group makes sense of shared experiences with regard to various phenomena. These studies gather feedback on a group's behavior without affecting the outcome.

Research Question Example: How do you feel about AI assisting the publishing process in your research?

2. Quantitative Research Question

Quantitative questions prove or disprove a researcher’s hypothesis through descriptions, comparisons, and relationships. These questions are beneficial when choosing a research topic or when posing follow-up questions that garner more information.

i. Descriptive Questions

It is the most basic type of quantitative research question, seeking to explain when, where, why, or how something occurred. Descriptive questions use data and statistics to describe an event or phenomenon.

Research Question Example: How many generations of genes influence a future generation?

ii. Comparative Questions

Sometimes it’s beneficial to compare one occurrence with another. Therefore, comparative questions are helpful when studying groups with dependent variables.

Example: Do men and women have comparable metabolisms?

iii. Relationship-Based Questions

This type of research question examines the influence of one variable on another. Therefore, experimental studies mainly use this type of research question.

Example: How do drought conditions affect a region's probability of wildfires?

How to Write a Good Research Question?


1. Select a Topic

The first step towards writing a good research question is to choose a broad topic of research. Choose a research topic that interests you, because the complete research will progress from the research question. Choosing a topic you are passionate about makes the research study more enjoyable.

2. Conduct Preliminary Research

After finalizing the topic, read about the research studies that have been conducted in the field so far. This will help you find articles that discuss topics yet to be explored, and you can then target topics that earlier research has not studied.

3. Consider Your Audience

The most important aspect of writing a good research question is to find out whether there is an audience interested in knowing the answer to the question you are proposing. Determining your audience will also help you refine your research question and focus on aspects that relate to defined groups.

4. Generate Potential Questions

The best way to generate potential questions is to ask open-ended questions. Questioning broader topics will allow you to narrow down to specific questions. Identifying gaps in the literature can also give you topics on which to base your research question. You could also challenge existing assumptions or use personal experiences to redefine issues in research.

5. Review Your Questions

Once you have listed a few of your questions, evaluate them to find out whether they are effective research questions. While reviewing, go through the finer details of each question and its probable outcome, and determine whether the question meets the research question criteria.

6. Construct Your Research Question

There are two frameworks for constructing your research question. The first is the PICOT framework, which stands for:

  • Population or problem
  • Intervention or indicator being studied
  • Comparison group
  • Outcome of interest
  • Time frame of the study.

The second framework is PEO, which stands for:

  • Population being studied
  • Exposure to preexisting conditions
  • Outcome of interest.
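These frameworks are checklists rather than formulas, but it can help to picture the components as slots in a question template. The following Python sketch is purely illustrative; the class name, template wording, and the diabetes example values are hypothetical choices of ours, not part of the PICOT framework itself:

```python
from dataclasses import dataclass

@dataclass
class PICOTQuestion:
    """Slots of a research question under the PICOT framework."""
    population: str    # Population or problem
    intervention: str  # Intervention or indicator being studied
    comparison: str    # Comparison group
    outcome: str       # Outcome of interest
    timeframe: str     # Time frame of the study

    def compose(self) -> str:
        # Assemble the slots into one common PICOT question template.
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparison}, affect {self.outcome} "
                f"over {self.timeframe}?")

# Hypothetical example: a clinical research question
q = PICOTQuestion(
    population="adults with type 2 diabetes",
    intervention="a low-carbohydrate diet",
    comparison="a standard low-fat diet",
    outcome="HbA1c levels",
    timeframe="12 months",
)
print(q.compose())
```

Filling every slot forces the question to be specific, comparative, and time-bound; if a slot is hard to fill, that is often a sign the question is still too broad.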

Research Question Examples

  • How might the discovery of a genetic basis for alcoholism impact triage processes in medical facilities?
  • How do ecological systems respond to chronic anthropogenic disturbance?
  • What are the demographic consequences of ecological interactions?
  • What roles do fungi play in wildfire recovery?
  • How do feedbacks reinforce patterns of genetic divergence on the landscape?
  • What educational strategies help encourage safe driving in young adults?
  • What makes a grocery store easy for shoppers to navigate?
  • What genetic factors predict if someone will develop hypothyroidism?
  • Does contemporary evolution along gradients of global change alter ecosystem function?

How did you write your first research question? What steps did you follow to create a strong research question? Write to us or comment below.

Frequently Asked Questions

What are the common types of research questions?

Research questions guide the focus and direction of a research study. The two major types are:

1. Qualitative research questions: These concern broad or more specific areas of research and, unlike quantitative questions, are adaptable, non-directional, and more flexible. Subtypes include exploratory, predictive, and interpretive questions.
2. Quantitative research questions: These prove or disprove a researcher's hypothesis through descriptions, comparisons, and relationships, and are useful when choosing a research topic or posing follow-up questions that garner more information. Subtypes include descriptive, comparative, and relationship-based questions.

How do you write qualitative research questions?

Qualitative research questions aim to explore the richness and depth of participants' experiences and perspectives. They should guide your research and allow for in-depth exploration of the phenomenon under investigation. After identifying the research topic and the purpose of your research:

  • Begin with broad inquiry: Start with a general research question that captures the main focus of your study. This question should be open-ended and allow for exploration.
  • Break down the main question: Identify specific aspects or dimensions related to the main research question that you want to investigate.
  • Formulate sub-questions: Create sub-questions that delve deeper into each specific aspect or dimension identified in the previous step.
  • Ensure open-endedness: Make sure your research questions are open-ended and allow for varied responses and perspectives. Avoid questions that can be answered with a simple "yes" or "no." Encourage participants to share their experiences, opinions, and perceptions in their own words.
  • Refine and review: Review your research questions to ensure they align with your research purpose, topic, and objectives. Seek feedback from your research advisor or peers to refine and improve them.

How do you develop a research question?

Developing research questions requires careful consideration of the research topic, the objectives, and the type of study you intend to conduct. The steps are:

1. Select a topic.
2. Conduct preliminary research.
3. Consider your audience.
4. Generate potential questions.
5. Review your questions.
6. Construct your research question based on the PICOT or PEO framework.

Which frameworks can help you construct a research question?

There are two frameworks for constructing your research question. The first is the PICOT framework, which stands for: Population or problem; Intervention or indicator being studied; Comparison group; Outcome of interest; and Time frame of the study. The second is the PEO framework, which stands for: Population being studied; Exposure to preexisting conditions; and Outcome of interest.



Methods for Testing and Evaluating Survey Questions

Stanley Presser, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, Jennifer M. Rothgeb, Eleanor Singer, Methods for Testing and Evaluating Survey Questions, Public Opinion Quarterly, Volume 68, Issue 1, March 2004, Pages 109–130, https://doi.org/10.1093/poq/nfh008


An examination of survey pretesting reveals a paradox. On the one hand, pretesting is the only way to evaluate in advance whether a questionnaire causes problems for interviewers or respondents. Consequently, both elementary textbooks and experienced researchers declare pretesting indispensable. On the other hand, most textbooks offer minimal, if any, guidance about pretesting methods, and published survey reports usually provide no information about whether questionnaires were pretested and, if so, how, and with what results. Moreover, until recently there was relatively little methodological research on pretesting. Thus pretesting’s universally acknowledged importance has been honored more in the breach than in the practice, and not a great deal is known about many aspects of pretesting, including the extent to which pretests serve their intended purpose and lead to improved questionnaires.

Pretesting dates to the founding of the modern sample survey in the mid-1930s or shortly thereafter. The earliest references in scholarly journals are from 1940, by which time pretests apparently were well established. In that year Katz reported, “The American Institute of Public Opinion [i.e., Gallup] and Fortune [i.e., Roper] pretest their questions to avoid phrasings which will be unintelligible to the public and to avoid issues unknown to the man on the street” (Katz 1940, p. 279).

Although the absence of documentation means we cannot be sure, our impression is that for much of survey research’s history, there has been one conventional form of pretest. Conventional pretesting is essentially a dress rehearsal, in which interviewers receive training like that for the main survey and administer the questionnaire as they would during the survey proper. After each interviewer completes a handful of interviews, response distributions may be tallied, and there is a debriefing in which the interviewers relate their experiences with the questionnaire and offer their views about the questionnaire’s problems.

Survey researchers have shown remarkable confidence in this approach. According to one leading expert, “It usually takes no more than 12–25 cases to reveal the major difficulties and weaknesses in a pretest questionnaire” (Sheatsley 1983, p. 226). This judgment is similar to that of another prominent methodologist, who maintained that “20–50 cases is usually sufficient to discover the major flaws in a questionnaire” (Sudman 1983, p. 181).

This faith in conventional pretesting is probably based on the common experience that a small number of conventional interviews often reveal numerous problems, such as questions that contain unwarranted suppositions, awkward wordings, or missing response categories. However, there is no scientific evidence to justify the confidence that this kind of pretesting identifies the major problems in a questionnaire.

Conventional pretests are based on the assumption that questionnaire problems will be signaled either by the answers that the questions elicit (e.g., “don’t knows” or refusals), which will show up in response tallies, or by some other visible consequence of asking the questions (e.g., hesitation or discomfort in responding), which interviewers can describe during debriefing. However, as Cannell and Kahn (1953, p. 353) noted, “There are no exact tests for these characteristics.” They go on to say, “The help of experienced interviewers is most useful at this point in obtaining subjective evaluations of the questionnaire.” Similarly, Moser and Kalton (1971, p. 50) judged, “Almost the most useful evidence of all on the adequacy of a questionnaire is the individual fieldworker’s [i.e., interviewer’s] report on how the interviews went, what difficulties were encountered, what alterations should be made, and so forth.” This emphasis on interviewer perceptions is nicely illustrated in Sudman and Bradburn’s (1982, p. 49) advice for detecting unexpected word meanings: “A careful pilot test conducted by sensitive interviewers is the most direct way of discovering these problem words” (emphasis added).

Yet even if interviewers were extensively trained in recognizing problems with questions (as compared with receiving no special training at all, which is typical), conventional pretesting would still be ill suited to uncovering many questionnaire problems. Certain kinds of problems will not be apparent from observing respondent behavior, and the respondents themselves may be unaware of the problems. For instance, respondents can misunderstand a closed question’s intent without providing any indication of having done so. Moreover, because conventional pretests are almost always “undeclared” to the respondent, as opposed to “participating” (in which respondents are informed of the pretest’s purpose; see Converse and Presser 1986), respondents are usually not asked directly about their interpretations or other problems the questions may have caused. As a result, undeclared conventional pretesting seems better designed to identify problems the questionnaire poses for interviewers, who know the purpose of the testing, than for respondents, who do not.

Furthermore, when conventional pretest interviewers do describe respondent problems, there are no rules for assessing their descriptions or for determining which problems that are identified ought to be addressed. Researchers typically rely on intuition and experience in judging the seriousness of problems and deciding how to revise questions that are thought to have flaws.

In recent decades a growing awareness of conventional pretesting’s drawbacks has led to two interrelated changes. First, there has been a subtle shift in the goals of testing, from an exclusive focus on identifying and fixing overt problems experienced by interviewers and respondents to a broader concern for improving data quality so that measurements meet a survey’s objectives. Second, new testing methods have been developed or adapted from other uses. These methods include cognitive interviews, behavior coding, response latency, vignette analysis, formal respondent debriefings, experiments, and statistical modeling. The development of these methods raises issues of how they might best be used in combination, as well as whether they in fact lead to improvements in survey measurement. In addition, the adoption of computerized modes of administration poses special challenges for pretesting, as do surveys of special populations, such as children, establishments, and those requiring questionnaires in more than one language—all of which have greatly increased in recent years. We review these developments, drawing on the latest research presented in the first volume devoted exclusively to testing and evaluating questionnaires (Presser et al. 2004).

Ordinary interviews focus on producing codable responses to the questions. Cognitive interviews, by contrast, focus on providing a view of the processes elicited by the questions. Concurrent or retrospective “think-alouds” and/or probes are used to produce reports of the thoughts respondents have either as they answer the survey questions or immediately after. The objective is to reveal the thought processes involved in interpreting a question and arriving at an answer. These thoughts are then analyzed to diagnose problems with the question.

Although he is not commonly associated with cognitive interviewing, William Belson (1981) pioneered a version of this approach. In the mid-1960s Belson designed “intensive” interviews to explore seven questions respondents had been asked the preceding day during a regular interview administered by a separate interviewer. Respondents were first reminded of the exact question and the answer they had given to it. The interviewer then inquired, “When you were asked that question yesterday, exactly what did you think the question meant?” After nondirectively probing to clarify what the question meant to the respondent, interviewers asked, “Now tell me exactly how you worked out your answer from that question. Think it out for me just as you did yesterday . . . only this time say it aloud for me.” Then, after nondirectively probing to illuminate how the answer was worked out, interviewers posed scripted probes about various aspects of the question. These probes differed across the seven questions and were devised to test hypotheses about problems particular to each of the questions. Finally, after listening to the focal question once more, respondents were requested to say how they would now answer it. If their answer differed from the one they had given the preceding day, they were asked to explain why (Appendix, pp. 194–97). Six interviewers, who received two weeks of training, conducted 265 audiotaped, intensive interviews with a cross-section sample of London, England residents. Four analysts listened to the tapes and coded the incidence of various problems.

These intensive interviews differed in a critical way from today’s cognitive interviews, which integrate the original and follow-up interviews in a single administration with one interviewer. Belson assumed that respondents could accurately reconstruct their thoughts from an interview conducted the previous day, which is inconsistent with what we now know about the validity of self-reported cognitive processes. However, in many respects, Belson moved considerably beyond earlier work, such as Cantril and Fried (1944) , which used just one or two scripted probes to assess respondent interpretations of survey questions. Thus, it is ironic that Belson’s approach had little impact on pretesting practices, an outcome possibly due to its being so labor-intensive.

The pivotal development leading to a role for cognitive interviews in pretesting did not come until two decades later with the Cognitive Aspects of Survey Methodology (CASM) conference (Jabine et al. 1984) . Particularly influential was Loftus’s ( 1984 ) postconference analysis of how respondents answered survey questions about past events, in which she drew on the think-aloud technique used by Herbert Simon and his colleagues to study problem solving ( Ericsson and Simon 1980 ). Subsequently, a grant from Murray Aborn’s program at the National Science Foundation to Monroe Sirken supported both research on the technique’s utility for understanding responses to survey questions ( Lessler, Tourangeau, and Salter 1989 ) and the creation at the National Center for Health Statistics (NCHS) in 1985 of the first “cognitive laboratory,” where the technique could routinely be used to pretest questionnaires (e.g., Royston and Bercini 1987 ).

Similar cognitive laboratories were soon established by other U.S. statistical agencies and survey organizations. 2 The labs’ principal, but not exclusive, activity involved cognitive interviewing to pretest questionnaires. Facilitated by special exemptions from Office of Management and Budget survey clearance requirements, pretesting for U.S. government surveys increased dramatically through the 1990s ( Martin, Schechter, and Tucker 1999 ). At the same time, the labs took tentative steps toward standardizing and codifying their practices in training manuals (e.g., Willis 1994 ) or protocols for pretesting (e.g., DeMaio et al. 1993 ).

Although there is now general agreement about the value of cognitive interviewing, no consensus has emerged about best practices, such as whether (or when) to use think-alouds versus probes, whether to employ concurrent or retrospective reporting, and how to analyze and evaluate results. In part this is due to the paucity of methodological research examining these issues, but it is also due to a lack of attention to the theoretical foundation for applying cognitive interviews to survey pretesting.

As Willis (2004) notes, Ericsson and Simon (1980) argued that verbal reports are more likely to be veridical if they involve information a person has available in short-term (as opposed to long-term) memory, and if the verbalization itself does not fundamentally alter thought processes (e.g., does not involve further explanation). Thus some survey tasks (for instance, nontrivial forms of information retrieval) may be well suited to elucidation in a think-aloud interview. However, the general use of verbal report methods to target cognitive processes involved in answering survey questions is difficult to justify, especially for tasks (such as term comprehension) that do not satisfy the conditions for valid verbal reports. Willis also notes that the social interaction involved in interviewer-administered cognitive interviews may violate a key assumption posited by Ericsson and Simon for use of the method.

Research has demonstrated various problems with the methods typically used to conduct cognitive interview pretests. Beatty (2004) , for example, found that certain kinds of probes produce difficulties that respondents would not otherwise experience. His analysis of a set of cognitive interviews indicated that respondents who received re-orienting probes (asking for an answer) had little difficulty choosing an answer, whereas those who received elaborating probes (asking for further information) had considerable difficulty. Beatty also found that, aside from reading the questions, cognitive probes (those traditionally associated with cognitive interviews, such as “What were you thinking?” “How did you come up with that?” or “What does [term] mean to you?”) accounted for less than one-tenth of all interviewer utterances. Over nine-tenths consisted of confirmatory probes (repeating something the respondent said, in a request for confirmation), expansive probes (requests for elaboration, such as “Tell me more about that”), functional remarks (repetition or clarification of the question, including re-orienting probes), and feedback (e.g., “thanks; that’s what I want to know” or “I know what you mean”). Thus cognitive interview results appear to be importantly shaped by the interviewers’ contributions, which may not be well focused in ways that support the inquiry. As one way to deal with this problem, Beatty recommended that cognitive interviewers be trained to recognize distinctions among probes and the situations in which each ought to be employed.

Conrad and Blair ( 2004 ) argue that verbal report quality should be assessed in terms of problem detection and problem repair, which are the central goals of cognitive interviewing. They designed an experimental comparison of two different cognitive interviewing approaches: one, uncontrolled, using the unstandardized practices of four experienced cognitive interviewers; the other, more controlled, using four less experienced interviewers trained to probe only when there were explicit indications the respondent was experiencing a problem. The conventional cognitive interviews identified many more problems than did the conditional probe interviews.

As in Beatty ( 2004 ), however, more problems did not mean higher-quality results. Conrad and Blair assessed the reliability of problem identification in two ways: by inter-rater agreement among a set of trained coders who reviewed transcriptions of the taped interviews, and by agreement between coders and interviewers. Overall, agreement was quite low, consistent with the finding of some other researchers about the reliability of cognitive interview data ( Presser and Blair 1994 ). But reliability was higher for the conditional probe interviews than for the conventional ones. (This may be partly due to the conditional probe interviewers having received training in what should be considered a “problem,” compared to the conventional interviewers who were provided no definition of what constituted a “problem.”) Furthermore, as expected, conditional interviewers probed much less often than conventional interviewers, but more of their probes were in cases associated with the identification of a problem. Thus we need to rethink what interviewers do in cognitive interviews.

The importance of this rethinking is underscored by DeMaio and Landreth ( 2004 ), who conducted an experiment in which three different organizations were commissioned to have two interviewers each conduct five cognitive interviews of the same questionnaire using whatever methods were typical for the organization, and then deliver a report identifying problems in the questionnaire as well as a revised questionnaire addressing the problems. In addition, expert reviews of the original questionnaire were obtained from three individuals who were not involved in the cognitive interviews. Finally, another set of cognitive interviews was conducted by a fourth organization to test the revised questionnaires.

The three organizations reported considerable diversity on many aspects of the interviews, including location (respondent’s home versus research lab), interviewer characteristics (field interviewer versus research staff), question strategy (think-aloud versus probes), and data source (review of audiotapes versus interviewer notes and recollections). This heterogeneity is consistent with the findings of Blair and Presser ( 1993 ), but it is even more striking given the many intervening years in which some uniformity of practice might have emerged. It does, however, mean that differences in the results across the organizations cannot be attributed to any one factor.

There was variation across the organizations in both the number of questions identified as having problems and the total number of problems identified. Moreover, there was only modest overlap across the organizations in the particular problems diagnosed. Likewise, the cognitive interviews and the expert reviews overlapped much more in identifying which questions had problems than in identifying what the problems were. The organization that identified the fewest problems also showed the lowest agreement with the expert panel. This organization was the only one that did not review the audiotapes in evaluating the results, which suggests that relying solely on interviewer notes and memory leads to error. 3 However, the findings from the tests of the revised questionnaires did not identify one organization as consistently better or worse than the others.

In sum, research on cognitive interviews has begun to reveal how the methods used to conduct the interviews shape the data produced. Yet much more work is needed to provide a foundation for optimal cognitive interviewing.

Unlike cognitive interviews, which are completely distinct from conventional pretests, other testing methods that have been developed may be implemented as add-ons to conventional pretests (or as additions to a survey proper). These include behavior coding, response latency, formal respondent debriefings, and vignettes.

Behavior coding was developed in the 1960s by Charles Cannell and his colleagues at the University of Michigan Survey Research Center, and it can be used to evaluate both interviewers and questions. Its early applications were almost entirely focused on interviewers, so it had no immediate impact on pretesting practices. In the late 1970s and early 1980s a few European researchers adopted behavior coding to study questions, but it was not applied to pretesting in the United States until the late 1980s ( Oksenberg, Cannell, and Kalton’s 1991 article describes behavior coding as one of two “new strategies for pretesting questions”).

Behavior coding involves monitoring interviews or reviewing taped interviews (or transcripts) for a subset of the interviewer’s and respondent’s verbal behavior in the question asking and answering interaction. Questions marked by high frequencies of certain behaviors (e.g., the interviewer did not read the question verbatim or the respondent requested clarification) are seen as needing repair.

Van der Zouwen and Smit ( 2004 ) describe an extension of behavior coding that draws on the sequence of interviewer and respondent behaviors, not just the frequency of the individual behaviors. Based on the sequence of a question’s behavior codes, an interaction is coded as either paradigmatic (the interviewer read the question correctly, the respondent chose one of the offered alternatives, and the interviewer coded the answer correctly), problematic (the sequence was nonparadigmatic, but the problem was solved; e.g., the respondent asked for clarification and then chose one of the offered alternatives), or inadequate (the sequence was nonparadigmatic, and the problem was not solved). Questions with a high proportion of nonparadigmatic sequences are identified as needing revision.
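The classification logic of this sequence-based approach can be sketched in a few lines. The behavior-code labels ("Q_VERBATIM", "ADEQUATE_ANSWER", and so on) and the flagging threshold below are hypothetical illustrations, not Van der Zouwen and Smit's actual coding scheme.

```python
# Hypothetical behavior codes for a paradigmatic exchange: the question
# is read verbatim, an adequate answer is given, and it is entered correctly.
PARADIGMATIC = ("Q_VERBATIM", "ADEQUATE_ANSWER", "CORRECT_ENTRY")

def classify_sequence(codes):
    """Classify one interaction's ordered behavior codes."""
    if tuple(codes) == PARADIGMATIC:
        return "paradigmatic"
    # Nonparadigmatic, but ending in an adequate, correctly recorded
    # answer: the problem arose and was solved.
    if codes[-2:] == ["ADEQUATE_ANSWER", "CORRECT_ENTRY"]:
        return "problematic"
    # Nonparadigmatic and never resolved.
    return "inadequate"

def flag_questions(interactions, threshold=0.3):
    """Flag questions whose share of nonparadigmatic sequences is high.

    interactions: question -> list of behavior-code sequences.
    """
    flagged = {}
    for question, sequences in interactions.items():
        labels = [classify_sequence(s) for s in sequences]
        nonpara = sum(l != "paradigmatic" for l in labels) / len(labels)
        if nonpara >= threshold:
            flagged[question] = round(nonpara, 2)
    return flagged
```

A question for which, say, half of all interactions depart from the paradigmatic sequence would be flagged for revision, mirroring the authors' use of the proportion of nonparadigmatic sequences as the revision criterion.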

Van der Zouwen and Smit compared the findings from this approach in a survey of the elderly with the findings from basic behavior coding and from four “ex ante” methods—that is, methods not entailing data collection: a review by five methodology experts; reviews by the authors guided by two different questionnaire appraisal coding schemes; and the “quality predictor” developed by Saris and his colleagues, which we describe in the “statistical modeling” section below. The two methods based on behavior codes produced very similar results, as did three of the four ex ante methods—but the two sets of methods identified very different problems. As Van der Zouwen and Smit observe, the ex ante methods point out what could go wrong with the questionnaire, whereas the behavior codes and sequence analyses reveal what actually did go wrong.

Another testing method based on observing behavior involves the measurement of “response latency,” the time it takes a respondent to answer a question. Since most questions are answered rapidly, latency measurement requires the kind of precision (to fractions of a second) that is almost impossible without computers. Thus it was not until after the widespread diffusion of computer-assisted survey administration in the 1990s that the measurement of response latency was introduced as a testing tool ( Bassili and Scott 1996 ).
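To see why computerization made latency measurement practical, note that a computer-assisted instrument need only read a high-resolution monotonic clock before and after each exchange. The function names and the flagging threshold in this sketch are invented for illustration and do not come from any production CATI/CAPI system.

```python
import time

def ask_with_latency(question, get_answer):
    """Present a question, collect an answer, and time the exchange."""
    start = time.monotonic()
    answer = get_answer(question)       # keyboard or touch entry in a real system
    latency = time.monotonic() - start  # seconds, sub-millisecond resolution
    return {"question": question, "answer": answer, "latency_s": latency}

def flag_slow(records, threshold_s=5.0):
    """Flag items whose long latencies may signal respondent uncertainty."""
    return [r["question"] for r in records if r["latency_s"] > threshold_s]
```

Because the timing happens as a side effect of administration, such records can be collected for every item in every interview at essentially no added cost.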

Draisma and Dijkstra ( 2004 ) used response latency to evaluate the accuracy of respondents’ answers and, therefore, indirectly to evaluate the questions themselves. The authors reasoned that longer delays signal respondent uncertainty, and they tested this idea by comparing the latency of accurate and inaccurate answers (with accuracy determined by information from another source). In addition, they compared the performance of response latency to that of several other indicators of uncertainty.

In a multivariate analysis, both longer response latencies and the respondents’ expressions of greater uncertainty about their answers were associated with inaccurate responses. Other research ( Martin 2004 ; Schaeffer and Dykema 2004 ) reports no relationship (or even, in some studies, an inverse relationship) between respondents’ confidence or certainty and the accuracy of their answers. Thus future work needs to develop a more precise specification of the conditions in which different measures of respondent uncertainty are useful in predicting response error.

Although the interpretation of response latency is less straightforward than that of other measures of question problems (lengthy times may indicate careful processing rather than difficulty), the method appears sufficiently promising to encourage its further use. This is especially so because the ease of collecting latency information means it could be routinely included in computer-assisted surveys at very low cost. The resulting collection of data across many different surveys would facilitate improved understanding of the meaning and consequences of response latency and of how it might best be combined with other testing methods, such as behavior coding, to enhance the diagnosis of questionnaire problems.

Unlike behavior coding and response latency, which are “undeclared” testing methods, respondent debriefings are a “participating” method, which informs the respondent about the purpose of the inquiry. Such debriefings have long been recommended as a supplement to conventional pretest interviews ( Kornhauser 1951 , p. 430), although they most commonly have been conducted as unstructured inquiries improvised by interviewers. Martin ( 2004 ) shows how implementing debriefings in a standardized manner can reveal both the meanings of questions and the reactions respondents have to the questions. In addition, she demonstrates how debriefings can be used to measure the extent to which questions lead to missed or misreported information.

Martin (2004) also discusses vignettes—hypothetical scenarios that respondents evaluate—which may be incorporated in either undeclared or participating pretests. Vignette analysis appears well suited to (1) explore how people think about concepts; (2) test whether respondents’ interpretations of concepts are consistent with those that are intended; (3) analyze the dimensionality of concepts; and (4) diagnose other question wording problems. Martin offers evidence of vignette analysis’s validity by drawing on evaluations of questionnaire changes made on the basis of the method.

The research we have reviewed suggests that the various supplements to conventional pretests differ in the kinds of problems they are suited to identify, their potential for diagnosing the nature of a problem and thereby for fashioning appropriate revisions, the reliability of their results, and the resources needed to conduct them. It appears, for instance, that formal respondent debriefings and vignette analysis are more apt than behavior coding and response latency to identify certain types of comprehension problems. Yet we do not have good estimates of many of the ways the methods differ. The implication is not only that we need research explicitly designed to make such comparisons, but also that multiple testing methods are probably required in many cases to ensure that respondents understand the concepts underlying questions and are able and willing to answer them accurately (for good examples of multimethod applications, see Kaplowitz, Lupi, and Hoehn [ 2004 ] and Schaeffer and Dykema [ 2004 ]).

Both supplemental methods to conventional pretests and cognitive interviews identify questionnaire problems and lead to revisions designed to address the problems. To determine whether the revisions are improvements, however, there is no substitute for experimental comparisons of the original and revised items. Such experiments are of two kinds. First, the original and revised items can be compared using the testing method(s) that identified the problem(s). Thus, if cognitive interviews showed respondents had difficulty with an item, the item and its revision can be tested in another round of cognitive interviews in order to confirm that the revision shows fewer such problems than the original. The interpretation of results from this kind of experiment is usually straightforward, though there is no assurance that observed differences will have any effect on survey estimates.

Second, original and revised items can be tested to examine what, if any, difference they make for a survey’s estimates. The interpretation of results from this kind of experiment is sometimes less straightforward, but such split-sample experiments have a long history in pretesting. Indeed, they were the subject of one of the earliest articles devoted to pretesting ( Sletto 1950 ), although the experiments it described dealt with how administrative matters, such as questionnaire length, the nature of the cover letter’s appeal, the use of follow-up postcards, and questionnaire layout, affected cooperation with mail surveys. None of the examples concerned question wording.

Fowler ( 2004 ) describes three ways to evaluate the results of experiments that compare question wordings: differences in response distributions, validation against a standard, and usability, as measured, for instance, by behavior coding. He illustrates how cognitive interviews and experiments are complementary: the former identify potential problems and propose solutions, and the latter test the impact of the solutions. As he argues, experimental evidence is essential in estimating whether different question wordings affect survey results, and if so, by how much.

Fowler focuses on comparisons of single items that vary in only one way. Experiments can also be employed to test versions of entire questionnaires that vary in multiple, complex ways, as described by Moore et al. ( 2004 ). These researchers revised the Survey of Income and Program Participation (SIPP) questionnaire to meet three major objectives: to minimize response burden and thereby decrease both unit and item nonresponse; to reduce “seam bias” reporting errors; and to introduce questions about new topics. Then, to assess the effects of the revisions before switching to the new questionnaire, an experiment was conducted in which respondents were randomly assigned to either the new or old version.

Both item nonresponse and seam bias were lower with the new questionnaire, and, with one exception, the overall estimates of income and assets (key measures in the survey) did not differ between versions. On the other hand, unit nonresponse reductions were not obtained (in fact, in initial waves, nonresponse was higher for the revised version), and the new questionnaire took longer to administer. Moore et al. note that these results may have been caused by two complicating features of the experimental design. First, experienced SIPP interviewers were used for both the old and new instruments. The interviewers’ greater comfort level with the old questionnaire (some reported being able to “administer it in their sleep”) may have contributed to their administering it more quickly than the new questionnaire and persuading more respondents to cooperate with it. Second, the addition of new content to the revised instrument may have more than offset the changes that were introduced to shorten the interview.

Tourangeau ( 2004 ) argues that the practical consideration that leads many experimental designs to compare packages of variables, as in the SIPP case, hampers the science of questionnaire design. Because the SIPP research experimented with a package of variables, it could estimate the overall effect of the redesign, which is vital to the SIPP sponsors, but not estimate the effects of individual changes, which is vital to an understanding of the effects of questionnaire features (and therefore to sponsors of other surveys making design changes). Relative to designs comparing packages of variables, factorial designs allow inference not only about the effects of particular variables, but about the effects of interactions between variables as well. Greater use of factorial designs (as well as more extensive use of laboratory experiments, for which Tourangeau also argues because they are usually much cheaper than field experiments) is therefore needed.

Questionnaire design and statistical modeling are usually thought of as worlds apart. Researchers who specialize in questionnaires tend to have rudimentary statistical understanding, and those who specialize in statistical modeling generally have little appreciation for question wording. This is unfortunate, as the two should work in tandem for survey research to progress. Moreover, the “two worlds” problem is not inevitable. In the early days of survey research, Paul Lazarsfeld, Samuel Stouffer, and their colleagues made fundamental contributions to both questionnaire design and statistical analysis (e.g., Stouffer et al. 1950 ). Thus it is fitting that one recent development to evaluate questionnaires draws on a technique, “latent class analysis” (LCA), rooted in Lazarsfeld’s work.

Paul Biemer ( 2004 ) shows how LCA may be used to estimate the error associated with questions when the questions have been asked of the same respondents two or more times. Yet, as Biemer notes, LCA depends heavily on an assumed model, and there is usually no direct way to evaluate the model assumptions. He recommends that rather than relying on a single statistical method for evaluating questions, multiple methods ought to be employed.

Whereas research like Biemer’s focuses on individual survey questions, psychometricians have long focused on the properties of scales composed of many items. Traditionally, applications of classical test theory have provided little information about the performance of the separate questions. Reeve and Mâsse ( 2004 ) describe how item response theory (IRT) models can assess the degree to which different items discriminate among respondents who have the same value on a trait. The power of IRT to identify the discriminating properties of specific items allows researchers to design shorter scales that do a better job of measuring constructs. Even greater efficiency can be achieved by using IRT methods to develop computer adaptive tests (CAT). With CAT, a respondent is presented a question near the middle of the scale range, and an estimate of the respondent’s total score is constructed from the response. Another item is then selected based on that estimate, and the process is repeated. At each step, the precision of the estimated total score is computed, and when the desired precision is reached, no more items are presented.
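The adaptive loop just described can be sketched with a simple Rasch model: administer the remaining item that is most informative at the current ability estimate, re-estimate ability, and stop when the standard error is small enough or the item bank is exhausted. The item bank, the coarse grid estimator, and the stopping values below are illustrative simplifications, not a production CAT engine.

```python
import math

def rasch_p(theta, b):
    """Probability of endorsing an item of difficulty b at ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def run_cat(bank, answer, se_target=0.45, max_items=10):
    """Administer items adaptively from bank (item -> difficulty b).

    answer(item) returns the respondent's 0/1 response. After each
    response, theta is re-estimated by a coarse grid MLE and its
    standard error computed from test information; administration
    stops once the SE target is met or items run out.
    """
    asked, responses = [], []
    theta = 0.0
    remaining = dict(bank)
    while remaining and len(asked) < max_items:
        # Pick the unused item most informative at the current theta
        # (for the Rasch model, the one with difficulty nearest theta).
        item = min(remaining, key=lambda i: abs(remaining[i] - theta))
        responses.append((remaining.pop(item), answer(item)))
        asked.append(item)
        # Grid maximum-likelihood estimate of theta on [-4, 4]
        grid = [g / 10 for g in range(-40, 41)]
        theta = max(grid, key=lambda t: sum(
            math.log(rasch_p(t, b)) if y else math.log(1 - rasch_p(t, b))
            for b, y in responses))
        # Test information determines the precision of the estimate
        info = sum(rasch_p(theta, b) * (1 - rasch_p(theta, b))
                   for b, _ in responses)
        if info > 0 and 1 / math.sqrt(info) < se_target:
            break
    return theta, asked
```

Note that the design choice driving CAT's efficiency is visible in the item-selection line: each item is chosen for its informativeness at the respondent's provisional score, so items far from that score are simply never asked.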

Both latent class analysis and item response theory models require large numbers of cases and thus are relatively expensive to conduct. By contrast, no new data collection is required to make use of a statistical modeling approach first proposed by Frank Andrews. Andrews ( 1984 ) applied the multitrait, multimethod (MTMM) measurement strategy ( Campbell and Fiske 1959 ) to estimate the reliability and validity of a sample of questionnaire items, and he suggested the results could be used to characterize the reliability and validity of question types. Following his suggestion, Saris, Van der Veld, and Gallhofer ( 2004 ) created a database of MTMM studies that provides estimates of reliability and validity for 1,067 questionnaire items. They then developed a coding system to characterize the items according to the nature of their content, complexity, type of response scale, position in the questionnaire, data collection mode, sample type, and the like. Two large regression models, in which these characteristics were the independent variables and the MTMM reliability or validity estimates were the dependent variables, provide estimates of the effect of the question characteristics on reliability or validity. New items can be coded (aided by the authors’ software) and the prediction equation (also automated) used to estimate their quality. Although more MTMM data are needed to improve the models, and, even more importantly, the model predictions need to be tested in validation studies, such additional work promises a significant payoff for evaluating questions.
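The prediction step can be caricatured in a few lines: regress reliability estimates from an MTMM "database" on coded item characteristics, then score a new item with the fitted equation. The two characteristics and the five-item toy database below are invented for illustration; the actual system codes far more features across its 1,067 items.

```python
def ols(X, y):
    """Ordinary least squares via normal equations with Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]  # X'X
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]            # X'y
    for col in range(k):                       # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                           # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Toy "MTMM database": rows are [intercept, scale_points / 10, is_attitude_item],
# with made-up reliability estimates as the dependent variable.
X = [[1, 0.5, 1], [1, 0.7, 1], [1, 1.1, 0], [1, 0.4, 1], [1, 1.0, 0]]
y = [0.62, 0.68, 0.81, 0.60, 0.78]
beta = ols(X, y)

def predict_reliability(scale_points, is_attitude):
    """Score a newly coded item with the fitted prediction equation."""
    return beta[0] + beta[1] * scale_points / 10 + beta[2] * is_attitude
```

The appeal of the approach is evident even in this toy form: once the equation is fitted, evaluating a candidate question costs nothing beyond coding its characteristics.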

The introduction of computer technology has changed many aspects of questionnaires. On the one hand, the variety of new modes—beginning with computer-assisted telephone interviewing (CATI), but soon expanding to computer-assisted personal interviewing (CAPI) and computer-assisted self-interviewing (CASI)—has expanded our ability to measure a range of phenomena more efficiently and with improved data quality ( Couper et al. 1998 ). On the other hand, the continuing technical innovations—including audio-CASI, interactive voice response, and the Internet—present many challenges for questionnaire design.

The proliferation of data collection modes has at least three implications for the evaluation and testing of survey instruments. One implication is the mounting recognition that answers to survey questions may be affected by the mode in which the questions are asked. Thus, testing methods must take into consideration the delivery mode. A related implication is that survey instruments consist of much more than words, e.g., their layout and design, logical structure and architecture, and the technical aspects of the hardware and software used to deliver them. All of these elements need to be tested, and their possible effects on measurement error explored. A third implication is that survey instruments are ever more complex and demand ever-expanding resources for testing. The older methods that relied on visual inspection to test flow and routing are no longer sufficient. Newer methods must be found to facilitate the testing of instrument logic, quite aside from the wording of individual questions. In sum, the task of testing questionnaires has greatly expanded.

With the growing complexity of computer-assisted survey instruments and the expanding range of design features available, checking for programming errors has become an increasingly costly and time-consuming part of the testing process, often with no guarantee of complete success. Much of this testing can be done effectively and efficiently only by machine, but existing software is often not up to the task ( Cork et al. 2003 ; Tarnai and Moore 2004 ).

The visual presentation of information to the interviewer, as well as the design of auxiliary functions used by the interviewer in computer-assisted interviewing, are critical to creating effective instruments. Thus testing for usability can be as important as testing for programming errors. As Hansen and Couper ( 2004 ) argue, computerized questionnaires require interviewers to manage two interactions, one with the computer and another with the respondent, and the goal of good design must therefore be to help interviewers manage both interactions to optimize data quality. Hansen and Couper provide illustrations of the ways in which usability testing assists in achieving this end.

A focus on question wording is insufficient even in the technologically simple paper-and-pencil mode. Dillman and Redline ( 2004 ) demonstrate how cognitive interviews may be adapted to explore the various aspects of visual language in self-administered questionnaires. They also show how the results of cognitive interviews can aid in the interpretation of split-sample field experiments.

Web surveys require testing of aspects unique to that mode, such as respondents’ monitor display properties, the presence of browser plug-ins, and features of the hosting platform that define the survey organization’s server. In addition to testing methods used in other modes, Baker, Crawford, and Swinehart ( 2004 ) recommend evaluations based on process data that are easily collected during Web administration (e.g., response latencies, backups, entry errors, and breakoffs). Like Tarnai and Moore ( 2004 ), Baker, Crawford, and Swinehart underscore the importance of automated testing tools, and, consistent with Dillman and Redline ( 2004 ) and Hansen and Couper ( 2004 ), they emphasize that the testing of Web questionnaires must focus on their visual aspects.

Surveys of children, establishments, and populations that require questionnaires in multiple languages pose special design problems. Thus, pretesting is still more vital in these cases than it is for surveys of adults interviewed with questionnaires in a single language. Remarkably, however, pretesting has been even further neglected for such surveys than for “ordinary” ones. As a result, the methodological literature on pretesting is even sparser for these cases than for monolingual surveys of adults.

Willimack et al. (2004) describe distinctive characteristics of establishment surveys that have made questionnaire pretesting uncommon. Establishment surveys tend to be mandatory, to rely on records, and to target populations of a few very large organizations, which are included with certainty, and many smaller ones, which are surveyed less often. These features seem to have militated against adding to the already high respondent burden by conducting pretests. In addition, because establishment surveys are disproportionately designed to measure change over time, questionnaire changes are rare. Finally, establishment surveys tend to rely on post-collection editing to correct data.

Willimack et al. outline various ways to improve the design and testing of establishment questionnaires. In addition to greater use of conventional methods, they recommend strategies like focus groups, site visits, record-keeping studies, and consultation with subject area specialists and other stakeholders. They also suggest making better use of ongoing quality evaluations and reinterviews, as well as more routine documentation of respondents’ feedback, to provide diagnoses of questionnaire problems. Finally, they recommend that tests be embedded in existing surveys so that proposed improvements can be evaluated without increasing the burden.

In “Pretesting Questionnaires for Children and Adolescents,” De Leeuw, Borgers, and Smits (2004) review studies of children’s cognitive development for guidance about the kinds of questions and cognitive tasks that can be asked of children of different ages. The evidence suggests that 7 years old is about the earliest age at which children can be interviewed with structured questionnaires, although the ability to handle certain kinds of questions (e.g., hypothetical ones) is acquired only later. The authors discuss how various pretesting methods, including focus groups, cognitive interviews, observation, and debriefing, can be adapted to accommodate children of different ages, and they provide examples of pretests that used these methods with children.

Questionnaire translation has always been basic to cross-national surveys, and recently it has become increasingly important for national surveys as well. Some countries (e.g., Canada, Switzerland, and Belgium) must administer surveys in multiple languages by law. Other nations are translating questionnaires as a result of growing numbers of immigrants. In the United States, for instance, the population 18 years and older that speaks a language at home other than English increased from 13.8 percent in 1990 to 17.8 percent in 2000. Moreover, by 2000, 4.4 percent of U.S. adults lived in “linguistically isolated” households, those in which all the adults spoke a language other than English, and none spoke English “very well” (U.S. Census Bureau 2003).

Despite its importance, Smith (2004) reports that “no aspect of cross-national survey research has been less subjected to systematic, empirical investigation than translation.” He describes sources of non-equivalence in translated questions and discusses the problems involved in translating response scales or categories so they are equivalent. He then outlines several strategies to address problems arising from noncomparability across languages: asking multiple questions about a concept (e.g., well-being) with different terms in each question (e.g., satisfaction versus happiness), so that translation problems with a single term do not result in measurement error for all the items; using questions that are equivalent across cultures and languages as well as those that are culture-specific; and conducting special studies to calibrate scale terms.

Harkness, Pennell, and Schoua-Glusberg ( 2004 ) offer guidance on procedures and protocols for translation and assessment. They envision a more rigorous process of “translatology” than the ad hoc practices common to most projects. They emphasize the need for appraisals of the translated text (and hence do not believe back-translation is adequate), and they argue that the quality of translations, as well as the performance of the translated questions as survey questions, must be assessed. Finally, they recommend team approaches that bring different types of expertise to bear on the translation, and they suggest ways to organize the effort of translation, assessment, and documentation (the last of which is particularly important for interpreting results after a survey is completed).

Does pretesting lead to better measurement? We know of only one study that unambiguously addresses this question. Forsyth, Rothgeb, and Willis (2004) assessed whether pretesting (a) predicts data collection problems and (b) improves survey outcomes. The authors used three methods—informal expert review, appraisal coding, and cognitive interviews—to identify potential problems in a pretest of a questionnaire consisting of 83 items. The 12 questions diagnosed most consistently by the three methods as having problems were then revised to address the problems. Finally, a split-sample field experiment was conducted to compare the original and revised items. The split-sample interviews were behavior coded, and the interviewers were asked to evaluate the questionnaires after completing the interviews.

The versions of the original questions identified in the pretest as particularly likely to pose problems for interviewers were more likely to show behavior-coded interviewing problems in the field and to be identified by interviewers as having posed problems for them. Similarly, the questions identified by the pretest as posing problems for respondents resulted in more respondent problems, according to both the behavior coding and the interviewer ratings. Item nonresponse was also higher for questions identified by the pretest as presenting either recall or sensitivity problems than for questions not identified as having those problems. Thus the combination of pretesting methods was a good predictor of the problems the items would produce in the field.

However, the revised questions generally did not appear to outperform the original versions. The item revisions had no effect on the frequency of behavior-coded interviewer and respondent problems. And while interviewers did rate the revisions as posing fewer respondent problems, they rated them as posing more interviewer problems. The authors suggest various possible explanations for this outcome, including their selection of only questions diagnosed as most clearly problematic, which often involved multiple problems that required complex revisions to address. In addition, the revised questions were not subjected to another round of testing using the three methods that originally identified the problems to confirm that the revisions were appropriate. Nonetheless, the results are chastening, as they suggest that we have much better tools for diagnosing questionnaire problems than for fixing them.

Different pretesting methods, and different ways of carrying out the same method, influence the numbers and types of problems identified. Consistency among methods is often low, and the reasons for this need more investigation. One possibility is that, in their present form, some of the methods are unreliable. But two other possibilities are also worth exploring. First, lack of consistency may occur because the methods are suited for identifying different problem types. For example, comprehension problems that occur with no disruption in the question asking and answering process are unlikely to be picked up by behavior coding. Thus, we should probably expect only partial overlap in the problems identified by different pretesting methods. Second, inconsistencies may reflect a lack of consensus among researchers, cognitive interviewers, or coders about what is regarded as a problem. For example, is it a problem if a question is awkward to ask but obtains accurate responses, or is it only a problem if the question obtains erroneous answers? The kinds and severity of problems that a questionnaire pretest (or methodological evaluation) aims to identify are not always clear, and this lack of specification may contribute to the inconsistencies that have been observed.
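One way to quantify such consistency, offered here as a hypothetical illustration rather than a method described in the article, is to treat each method's verdict on each item as a binary "problem" flag and compute chance-corrected agreement, for example Cohen's kappa. All flag values below are invented.

```python
# Illustrative sketch (invented data): consistency between two pretesting
# methods that each flag questionnaire items as problematic (1) or not (0).
# Cohen's kappa corrects the raw agreement rate for agreement expected by chance.

def cohens_kappa(flags_a, flags_b):
    n = len(flags_a)
    observed = sum(a == b for a, b in zip(flags_a, flags_b)) / n
    p_a = sum(flags_a) / n                            # share flagged by method A
    p_b = sum(flags_b) / n                            # share flagged by method B
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)      # chance agreement (2 categories)
    return (observed - expected) / (1 - expected)

# Hypothetical verdicts on ten items from two methods
cognitive  = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
behavioral = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]

print(round(cohens_kappa(cognitive, behavioral), 2))
# → 0.58, despite 80% raw agreement
```

The gap between raw agreement and kappa in this toy example shows why simple overlap counts can overstate how consistently two methods identify the same problems.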

In exploring such inconsistencies, the cross-organization approach used by DeMaio and Landreth (2004; see also Martin, Schechter, and Tucker 1999) holds promise not only of leading to greater standardization, and therefore higher reliability, but also of enhancing our understanding of which methods are appropriate in different circumstances and for different purposes.

It is also clear that problem identification does not necessarily point to problem solution in any obvious or direct way. For instance, Forsyth, Rothgeb, and Willis (2004) and Schaeffer and Dykema (2004) used pretesting to identify problems that were then addressed by revisions, only to find in subsequent field studies that the revisions either did not result in improvements or created new problems. The fact that we are better able to identify problems than to formulate solutions underscores the desirability of additional testing after questionnaires have been revised.

Four general recommendations seem particularly important to us for advancing questionnaire testing and evaluation. These involve

the connection between problem identification and measurement error;

the impact of testing methods on survey costs;

the role of basic research and theory in guiding the repair of question flaws; and

the development of a database to facilitate cumulative knowledge.

First, we need studies that examine the connection between problem diagnosis and measurement error. A major objective of testing is to reduce measurement error, yet we know little about the degree to which error is predicted by the various problem indicators at the heart of the different testing methods. Draisma and Dijkstra (2004) and Schaeffer and Dykema (2004) are unusual in making use of external validation in this way. Other research has taken an indirect approach, by examining the link between problem diagnosis and specific response patterns (for example, missing data, or “seam bias”), on the assumption that higher or lower levels are more accurate. But inferences based on indirect approaches must be more tentative than those based on direct validation (e.g., record-check studies). With appropriately designed validation studies, we might be better able to choose among techniques for implementing particular methods, evaluate the usefulness of different methods for diagnosing different kinds of problems, and understand how much pretesting is “enough.” We acknowledge, however, that validation data are rarely available and are themselves subject to error. Thus another challenge for future research is to develop further indicators of measurement error that can be used to assess testing methods.
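As a schematic of what such direct validation might look like, with entirely invented data, one could compare the rate of disagreement with external records for items the pretest flagged as problematic against items it did not flag:

```python
# Hypothetical record-check validation sketch: does a pretest diagnosis of
# "problematic" predict higher measurement error against external records?
# All values are invented for illustration.

def error_rate(pairs):
    """Share of (reported, record) pairs in which the report disagrees with the record."""
    mismatches = sum(reported != record for reported, record in pairs)
    return mismatches / len(pairs)

# (reported value, validating record value) for two groups of items
flagged   = [(3, 3), (2, 4), (5, 5), (1, 2), (4, 4), (2, 3)]   # diagnosed as problematic
unflagged = [(3, 3), (4, 4), (5, 5), (2, 2), (1, 1), (4, 5)]   # not diagnosed

print(error_rate(flagged))    # → 0.5
print(error_rate(unflagged))  # → 0.16666666666666666
```

In a real validation study the comparison would of course control for item content and respondent characteristics; the point of the sketch is only that the problem indicators at the heart of each testing method can, in principle, be scored against externally measured error.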

Second, we need information about the impact of different testing methods on survey costs. The cost of testing may be somewhat offset, completely offset, or even more than offset (and therefore reduce the total survey budget), depending on whether the testing results lead to the identification (and correction) of problems that affect those survey features—e.g., interview length, interviewer training, and post-survey data processing—that have implications for cost. Although we know something about the direct costs of various testing methods, we know almost nothing about how the methods differ in their impact on overall costs. Thus a key issue for future research is to estimate how different testing methods perform in identifying the kinds of problems that increase survey costs.

Third, since improved methods for diagnosing problems are mainly useful to the extent that we can repair the problems, we need more guidance in making repairs. As a result, advances in pretesting depend partly on advances in the science of asking questions (Schaeffer and Presser 2003). Such a science involves basic research into the question and answer process that is theoretically motivated (Krosnick and Fabrigar forthcoming; Sudman, Bradburn, and Schwarz 1996; Tourangeau, Rips, and Rasinski 2000). But this is a two-way street. On the one hand, pretesting should be guided by theoretically motivated research into the question and answer process. On the other hand, basic research and theories of the question and answer process should be shaped by both the results of pretesting and developments in the testing methods themselves, e.g., the question taxonomies, or classification typologies, used in questionnaire appraisal systems (Lessler and Forsyth 1996), and the kind of statistical modeling described by Saris, Van der Veld, and Gallhofer (2004). In particular, pretesting’s focus on aspects of the response tasks that can make it difficult for respondents to answer accurately ought to inform theories of the connection between response error and the question and answer process.

Finally, we need improved ways to accumulate knowledge across pretests. This will require greater attention to documenting what is learned from pretests of individual questionnaires. One of the working groups at the Second Advanced Seminar on the Cognitive Aspects of Survey Methodology (Sirken et al. 1999, p. 56) suggested that survey organizations archive, in a central repository, the cognitive interviews they conduct, including the items tested, the methods used, and the findings produced. As that group suggested, this would “facilitate systematic research into issues such as: What characteristics of questions are identified by cognitive interviewing as engendering particular problems? What testing features are associated with discovering different problem types? What sorts of solutions are adopted in response to various classes of problems?” We believe this recommendation should apply to all methods of pretesting. Establishing a pretesting archive on the Web would not only facilitate research on questionnaire evaluation; it would also serve as an invaluable resource for researchers developing questionnaires for new surveys.

This is a revised version of chapter 1 from Presser et al., 2004 .

All the methods discussed in this article involve data collection to test a questionnaire. We do not treat focus groups (Bischoping and Dykema 1999) or ethnographic interviews (Gerber 1999), which are most commonly used at an early stage, before there is an instrument to be tested. Nor do we review evaluations by experts (Presser and Blair 1994), artificial intelligence (Graesser et al. 2000), or coders applying formal appraisal systems (Lessler and Forsyth 1996), none of which involve data collection from respondents.

Laboratory research to evaluate self-administered questionnaires was already underway at the Census Bureau before the 1980 census (Rothwell 1983, 1985). Although inspired by marketing research rather than cognitive psychology, this work, in which observers encouraged respondents to talk aloud as they filled out questionnaires, foreshadowed cognitive interviewing. See also Hunt, Sparkman, and Wilcox 1982.

Bolton and Bronkhorst (1996) describe a computerized approach to evaluating cognitive interview results, which should reduce error even further.

Many Census Bureau pretest reports are available online at www.census.gov/srd/www/byyear.html, and many other pretest reports may be found in the Proceedings of the American Statistical Association Survey Research Methods Section and the American Association for Public Opinion Research, available at www.amstat.org/sections/srms/proceedings. But neither site is easily searchable, and the reports often contain incomplete information about the procedures used.

Andrews, Frank. 1984. “Construct Validity and Error Components of Survey Measures.” Public Opinion Quarterly 48:409–42.

Baker, Reginald, Scott Crawford, and Janice Swinehart. 2004. “Development and Testing of Web Questionnaires.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Bassili, John, and Stacey Scott. 1996. “Response Latency as a Signal to Question Problems in Survey Research.” Public Opinion Quarterly 60:390–99.

Beatty, Paul. 2004. “The Dynamics of Cognitive Interviewing.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Belson, William. 1981. The Design and Understanding of Survey Questions. London: Gower.

Biemer, Paul. 2004. “Modeling Measurement Error to Identify Flawed Questions.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Bischoping, Katherine, and Jennifer Dykema. 1999. “Towards a Social Psychological Program for Improving Focus Group Methods of Developing Questionnaires.” Journal of Official Statistics 15:495–516.

Blair, Johny, and Stanley Presser. 1993. “Survey Procedures for Conducting Cognitive Interviews to Pretest Questionnaires: A Review of Theory and Practice.” Proceedings of the Section on Survey Research Methods of the American Statistical Association 370–75.

Bolton, Ruth, and Tina Bronkhorst. 1996. “Questionnaire Pretesting: Computer-Assisted Coding of Concurrent Protocols.” In Answering Questions: Methodology for Determining Cognitive and Communicative Processes in Survey Research, ed. Norbert Schwarz and Seymour Sudman, pp. 37–64. San Francisco: Jossey-Bass.

Campbell, Donald, and Donald Fiske. 1959. “Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix.” Psychological Bulletin 56:81–105.

Cannell, Charles, and Robert Kahn. 1953. “The Collection of Data by Interviewing.” In Research Methods in the Behavioral Sciences, ed. Leon Festinger and Daniel Katz, pp. 327–80. New York: Dryden.

Cantril, Hadley, and Edrita Fried. 1944. “The Meaning of Questions.” In Gauging Public Opinion, ed. Hadley Cantril, pp. 3–22. Princeton, NJ: Princeton University Press.

Conrad, Fred, and Johny Blair. 2004. “Data Quality in Cognitive Interviews.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Converse, Jean, and Stanley Presser. 1986. Survey Questions: Handcrafting the Standardized Questionnaire. Newbury Park, CA: Sage.

Cork, Daniel, Michael Cohen, Robert Groves, and William Kalsbeek, eds. 2003. Survey Automation: Report and Workshop Proceedings. Washington, DC: National Academies Press.

Couper, Mick, Reginald Baker, Jelke Bethlehem, Cynthia Clark, Jean Martin, William Nicholls II, and James O’Reilly. 1998. Computer-Assisted Survey Information Collection. New York: Wiley.

De Leeuw, Edith, Natacha Borgers, and Astrid Smits. 2004. “Pretesting Questionnaires for Children and Adolescents.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

DeMaio, Theresa, and Ashley Landreth. 2004. “Do Different Cognitive Interview Techniques Produce Different Results?” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

DeMaio, Theresa, Nancy Mathiowetz, Jennifer Rothgeb, Mary Ellen Beach, and Sharon Durant. 1993. “Protocol for Pretesting Demographic Surveys at the Census Bureau.” Washington, DC: U.S. Bureau of the Census.

Dillman, Don, and Cleo Redline. 2004. “Concepts and Procedures for Testing Paper Self-Administered Questionnaires: Cognitive Interview and Field Test Questions.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Draisma, Stasja, and Wil Dijkstra. 2004. “Response Latency and (Para)Linguistic Expressions as Indicators of Response Error.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Ericsson, K. Anders, and Herbert Simon. 1980. “Verbal Reports as Data.” Psychological Review 87:215–51.

Forsyth, Barbara, Jennifer Rothgeb, and Gordon Willis. 2004. “Does Pretesting Make a Difference?” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Fowler, Floyd. 2004. “The Case for More Split-Sample Experiments in Developing Survey Instruments.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Gerber, Eleanor. 1999. “The View from Anthropology: Ethnography and the Cognitive Interview.” In Cognition and Survey Research, ed. Monroe Sirken, Douglas Hermann, Susan Schechter, Norbert Schwarz, Judith Tanur, and Roger Tourangeau, pp. 217–34. New York: Wiley.

Graesser, Art, Katja Wiemer-Hastings, Peter Wiemer-Hastings, and Roger Kreuz. 2000. “The Gold Standard of Question Quality on Surveys: Experts, Computer Tools, versus Statistical Indices.” Proceedings of the Section on Survey Research Methods of the American Statistical Association 459–64.

Hansen, Sue Ellen, and Mick P. Couper. 2004. “Usability Testing to Evaluate Computer-Assisted Instruments.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Harkness, Janet, Beth-Ellen Pennell, and Alisú Schoua-Glusberg. 2004. “Survey Questionnaire Translation and Assessment.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Hunt, Shelby, Richard Sparkman, Jr., and James Wilcox. 1982. “The Pretest in Survey Research: Issues and Preliminary Findings.” Journal of Marketing Research 19:269–73.

Jabine, Thomas, Miron Straf, Judith Tanur, and Roger Tourangeau. 1984. Cognitive Aspects of Survey Methodology: Building a Bridge between Disciplines. Washington, DC: National Academy Press.

Kaplowitz, Michael, Frank Lupi, and John P. Hoehn. 2004. “Multiple Methods for Developing and Evaluating a Stated-Choice Questionnaire to Value Wetlands.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Katz, Daniel. 1940. “Three Criteria: Knowledge, Conviction, and Significance.” Public Opinion Quarterly 4:277–84.

Kornhauser, Arthur. 1951. “Constructing Questionnaires and Interview Schedules.” In Research Methods in Social Relations: Part Two, ed. Marie Jahoda, Morton Deutsch, and Stuart Cook, pp. 423–62. New York: Dryden.

Krosnick, Jon, and Leandre Fabrigar. Forthcoming. Designing Questionnaires to Measure Attitudes. New York: Oxford University Press.

Lessler, Judith, and Barbara Forsyth. 1996. “A Coding System for Appraising Questionnaires.” In Answering Questions: Methodology for Determining Cognitive and Communicative Processes in Survey Research, ed. Norbert Schwarz and Seymour Sudman, pp. 259–92. San Francisco: Jossey-Bass.

Lessler, Judith, Roger Tourangeau, and William Salter. 1989. “Questionnaire Design Research in the Cognitive Research Laboratory.” Vital and Health Statistics (Series 6, No. 1; DHHS Publication No. PHS-89-1076). Washington, DC: Government Printing Office.

Loftus, Elizabeth. 1984. “Protocol Analysis of Responses to Survey Recall Questions.” In Cognitive Aspects of Survey Methodology: Building a Bridge between Disciplines, ed. Thomas Jabine, Miron Straf, Judith Tanur, and Roger Tourangeau, pp. 61–64. Washington, DC: National Academy Press.

Martin, Elizabeth. 2004. “Vignettes and Respondent Debriefing for Questionnaire Design and Evaluation.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Martin, Elizabeth, Susan Schechter, and Clyde Tucker. 1999. “Interagency Collaboration among the Cognitive Laboratories: Past Efforts and Future Opportunities.” In Statistical Policy Working Paper 28: 1998 Seminar on Interagency Coordination and Cooperation, pp. 359–87. Washington, DC: Federal Committee on Statistical Methodology.

Moore, Jeffrey, Joanne Pascale, Pat Doyle, Anna Chan, and Julia Klein Griffiths. 2004. “Using Field Experiments to Improve Instrument Design.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Moser, Claus, and Graham Kalton. 1971. Survey Methods in Social Investigation. London: Heinemann.

Oksenberg, Lois, Charles Cannell, and Graham Kalton. 1991. “New Strategies for Pretesting Survey Questions.” Journal of Official Statistics 7:349–56.

Presser, Stanley, and Johny Blair. 1994. “Survey Pretesting: Do Different Methods Produce Different Results?” Sociological Methodology 24:73–104.

Presser, Stanley, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer, eds. 2004. Methods for Testing and Evaluating Survey Questionnaires. New York: Wiley.

Reeve, Bryce, and Louise Mâsse. 2004. “Item Response Theory (IRT) Modeling for Questionnaire Evaluation.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Rothwell, Naomi. 1983. “New Ways of Learning How to Improve Self-Enumerative Questionnaires: A Demonstration Project.” Unpublished manuscript, U.S. Bureau of the Census.

———. 1985. “Laboratory and Field Response Research Studies for the 1980 Census of Population in the United States.” Journal of Official Statistics 1:137–57.

Royston, Patricia, and Deborah Bercini. 1987. “Questionnaire Design Research in a Laboratory Setting: Results of Testing Cancer Risk Factor Questions.” Proceedings of the Section on Survey Research Methods of the American Statistical Association 829–33.

Saris, Willem, William van der Veld, and Irmtraud Gallhofer. 2004. “Development and Improvement of Questionnaires Using Predictions of Reliability and Validity.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Schaeffer, Nora Cate, and Jennifer Dykema. 2004. “A Multiple-Method Approach to Improving the Clarity of Closely Related Concepts.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Schaeffer, Nora Cate, and Stanley Presser. 2003. “The Science of Asking Questions.” Annual Review of Sociology 29:65–88.

Sheatsley, Paul. 1983. “Questionnaire Construction and Item Writing.” In Handbook of Survey Research, ed. Peter Rossi, James Wright, and Andy Anderson, pp. 195–230. New York: Academic Press.

Sirken, Monroe, Thomas Jabine, Gordon Willis, Elizabeth Martin, and Clyde Tucker, eds. 1999. A New Agenda for Interdisciplinary Research: Proceedings of the CASM II Seminar. Hyattsville, MD: National Center for Health Statistics.

Sletto, Raymond. 1940. “Pretesting of Questionnaires.” American Sociological Review 5:193–200.

Smith, Tom. 2004. “Developing and Evaluating Cross-National Survey Instruments.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Stouffer, Samuel, Louis Guttman, Edward Suchman, Paul Lazarsfeld, Shirley Star, and John Clausen. 1950. Measurement and Prediction. Princeton, NJ: Princeton University Press.

Sudman, Seymour. 1983. “Applied Sampling.” In Handbook of Survey Research, ed. Peter Rossi, James Wright, and Andy Anderson, pp. 145–94. New York: Academic Press.

Sudman, Seymour, and Norman Bradburn. 1982. Asking Questions: A Practical Guide to Questionnaire Design. San Francisco: Jossey-Bass.

Sudman, Seymour, Norman Bradburn, and Norbert Schwarz. 1996. Thinking about Answers: The Application of Cognitive Processes to Survey Methodology. San Francisco: Jossey-Bass.

Tarnai, John, and Danna Moore. 2004. “Methods for Testing and Evaluating Computer-Assisted Questionnaires.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Tourangeau, Roger. 2004. “Experimental Design Considerations for Testing and Evaluating Questionnaires.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Tourangeau, Roger, Lance Rips, and Kenneth Rasinski. 2000. The Psychology of Survey Response. Cambridge: Cambridge University Press.

U.S. Census Bureau. 2003. Census 2000, Summary File 3, Tables P19, PCT13, and PCT14. Summary Tables on Language Use and English Ability: 2000 (PHC-T-20).

Van der Zouwen, Johannes, and Johannes Smit. 2004. “Evaluating Survey Questions by Analyzing Patterns of Behavior Codes and Question-Answer Sequences.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Willimack, Diane, Lars Lyberg, Jean Martin, Lilli Japec, and Patricia Whitridge. 2004. “Evolution and Adaptation of Questionnaire Development, Evaluation and Testing for Establishment Surveys.” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

Willis, Gordon. 1994. Cognitive Interviewing and Questionnaire Design: A Training Manual. Hyattsville, MD: National Center for Health Statistics.

———. 2004. “Cognitive Interviewing Revisited: A Useful Technique, in Theory?” In Methods for Testing and Evaluating Survey Questionnaires, ed. Stanley Presser, Jennifer M. Rothgeb, Mick P. Couper, Judith T. Lessler, Elizabeth Martin, Jean Martin, and Eleanor Singer. New York: Wiley.

  • Online ISSN 1537-5331
  • Copyright © 2024 American Association for Public Opinion Research
  • About Oxford Academic
  • Publish journals with us
  • University press partners
  • What we publish
  • New features  
  • Open access
  • Institutional account management
  • Rights and permissions
  • Get help with access
  • Accessibility
  • Advertising
  • Media enquiries
  • Oxford University Press
  • Oxford Languages
  • University of Oxford

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide

  • Copyright © 2024 Oxford University Press
  • Cookie settings
  • Cookie policy
  • Privacy policy
  • Legal notice

This Feature Is Available To Subscribers Only

Sign In or Create an Account

This PDF is available to Subscribers Only

For full access to this pdf, sign in to an existing account, or purchase an annual subscription.

  • How it works

researchprospect post subheader

How to Write the Research Questions – Tips & Examples

Published by Owen Ingram on August 13th, 2021, Revised on October 3, 2023

Conducting research and writing an academic paper requires a clear direction and focus.

A good research question provides purpose to your research and clarifies the direction. It further helps your readers to understand what issue your research aims to explore and address.

If you are unsure about how to write research questions, here is a list of the attributes of a good research question:

  • It should contain only a single problem
  • It should be answerable using primary and secondary data sources
  • It should be addressable within the time limit and other constraints
  • It should yield in-depth and detailed results
  • It should be relevant and applicable
  • It should relate to your chosen field of research

Whenever you want to discover something new about a topic, you will ask a question about it. The research question is therefore central to the overall research process and provides the author with reading and writing guidelines.

In a research paper or an essay, you will need to create a single research question that highlights just one problem or issue. The thesis statement should include the specific problem you aim to investigate to establish your argument’s central position or claim.

A larger project such as a  dissertation or thesis , on the other hand, can have multiple research questions, but every question should focus on your main  research problem .  Different types of research will help you answer different research questions, but they should all be relevant to the research scope.

How to Write a Research Question

Steps to Develop Your Research Question

  • Choose a topic  with a wide range of published literature
  • Read and skim relevant articles to find out different problems and issues
  • Specify a theoretical or practical  research problem  that your research question will address
  • Narrow down the focus of your selected core niche


Example Research Question(s)

Here are examples of research problems and research questions to help you understand how to create a research question for a given research problem.

Example Research Problem | Example Research Question(s)
A small-scale company, 'A', in the UK cannot allocate a marketing budget for next year due to poor revenue collection in the current year. | What practical steps can the company take to improve its revenue?
Many fresh graduates in the UK are working as freelancers despite having attained degrees from well-known academic institutions. | What is causing fresh graduates to engage in freelance work rather than full-time employment? What are the advantages and disadvantages of the gig economy for young people? How do age, gender, and academic qualification relate to people's perceptions of freelancing?

Types of Research Questions

There are two main types of research: quantitative and qualitative. Both require research questions. Which research question you will answer depends on the type of research you wish to employ.

The first part of  designing research  is to find a gap and create a fully focused research question.

The following table shows common research questions for a dissertation project. However, it is important to note that these examples of dissertation research questions are straightforward, and the actual research questions may be more complicated than these examples.

Research question type | Formulation
Descriptive approach | What are the properties of A?
Comparative approach | What are the similarities and differences between A and B?
Correlational approach | What is the relationship between variables A and B?
Exploratory approach | What factors affect the rate of C? Do A and B also influence C?
Explanatory approach | What are the causes of C? How does B impact A? What is causing D?
Evaluation approach | How useful and influential is C? What role does B play? What are the advantages and disadvantages of A?
Action research | How can you improve X with different interventions?


Steps to Write Research Questions

The research question provides you with a path and focuses on the real problem and the research gap you aim to fill. These are steps you need to take if you are unsure about how to write a research question:

Choose an Interesting Topic

Choose a topic  of research according to your interest. The selected topic should be neither too broad nor too narrow.

Do Preliminary Research on the Topic

Find articles, books, journals, and theses relevant to your chosen topic. Understand what research problem each scholar addressed as part of their research project.

Consider your Audience

It is necessary to know your audience to develop focused research questions for your essay or dissertation. When narrowing your topic, identify the aspects that could be most interesting to your audience.

Start Asking Questions

What, why, when, how, and other open-ended questions will provide in-depth knowledge about the topic.

Evaluate your Question

After formulating a research question, evaluate it to check its effectiveness and how well it serves its purpose. Revise and refine the dissertation research question as needed.

  • Do you have a clear research question? 

Form the research question after identifying a research gap. This approach will enable your research to address part of the problem.

  • Do you have a focused research question?

The research question must be specific and relate to the central aim of your research.

  • Do you have a complex research question? 

The research question cannot be answered with a simple yes or no; it requires in-depth analysis. It often begins with "How" or "Why."

Begin your Research

After you have prepared dissertation research questions, you should research the existing literature on similar topics to find various perspectives.

Also See: Formulation of Research Question

If you have been struggling to devise research questions for your dissertation, or are unsure which topic would suit your needs, you might be interested in our dissertation topic and outline service, which includes several topic ideas in your preferred area of study and a 500/1000-word plan on your chosen topic. Our topic and outline service will help you jump-start your dissertation project.

Find out How Our Topics & Outline Service Can Help You!

Tips on How to Write a Strong Research Question

A research question is the foundation of the entire research project. Therefore, you should spend as much time as required to refine it.

If you have good research questions for your dissertation, research paper, or essay, you can perform the research and analyse your results more effectively. You can evaluate the strength of a research question against the following criteria. Your research question should be:

Intensive and Researchable

  • It should cover a single issue
  • It shouldn't include a subjective judgment
  • It should be answerable through research and data analysis

Practical and Specific

  • It should not presuppose a course of action, policy, or solution
  • It should be well defined
  • It should be answerable within the research limits

Complicated and Arguable

  • It should not be simple to answer
  • It should require in-depth knowledge to answer
  • It should provide scope for debate and deliberation

Unique and Relevant

  • It should lie within your field of study
  • Its results should contribute to the field
  • It should be unique

Conclusion – How to Write Research Questions

A research question provides a clear direction for research work. A bigger project, such as a dissertation, may have more than one research question, but every question should focus on one issue only.

Your research questions should be researchable, feasible to answer, specific to find results, complex (for Masters and PhD projects), and relevant to your field of study. Dissertation research questions depend upon the research type you are basing your paper on.

Start creating a research question by choosing an interesting topic, doing some preliminary research, considering your audience, asking questions, evaluating your question, and then beginning your research.

At ResearchProspect, we have dissertation experts for all academic subjects. Whether you need help with individual chapters or the whole dissertation paper, you can be confident that your paper will be completed to the highest academic standard. There is a reason why our clients keep returning to us over and over. You can also look at our essay services if you are struggling to draft a first-class academic paper.



Frequently Asked Questions

How are research questions written?

Research questions are written by:

  • Identifying your topic.
  • Considering what you want to explore.
  • Making questions clear and concise.
  • Ensuring they’re researchable.
  • Avoiding bias or leading language.
  • Focusing on one main idea per question.

What are examples of research questions?

  • Does regular exercise improve mental well-being in adults over 50?
  • How do online courses impact student engagement compared to traditional classes?
  • What are the economic effects of prolonged pandemic lockdowns?
  • How does early childhood nutrition influence academic performance in later life?
  • Does urban green space reduce stress levels?

How to write a research question?

  • Identify a specific topic or issue of interest.
  • Conduct preliminary research to understand existing knowledge.
  • Narrow the focus to address gaps or unresolved issues.
  • Phrase the question to be clear, concise, and researchable.
  • Ensure it is specific enough for systematic investigation.

How to formulate my research questions for my geography dissertation?

  • Identify a geographical topic or phenomenon of interest.
  • Review existing literature to find gaps.
  • Consider spatial, temporal, environmental, or societal aspects.
  • Ensure questions are specific, feasible, and significant.
  • Frame questions to guide methodology: quantitative, qualitative, or mixed.
  • Seek feedback from peers/advisors.



Chapter 4. Finding a Research Question and Approaches to Qualitative Research

We’ve discussed the research design process in general and ways of knowing favored by qualitative researchers.  In chapter 2, I asked you to think about what interests you in terms of a focus of study, including your motivations and research purpose.  It might be helpful to start this chapter with those short paragraphs you wrote about motivations and purpose in front of you.  We are now going to try to develop those interests into actual research questions (first part of this chapter) and then choose among various “traditions of inquiry” that will be best suited to answering those questions.  You’ve already been introduced to some of this (in chapter 1), but we will go further here.


Developing a Research Question

Research questions are different from general questions people have about the social world.  They are narrowly tailored to fit a very specific issue, complete with context and time boundaries.  Because we are engaged in empirical science and thus use “data” to answer our questions, the questions we ask must be answerable by data.  A question is not the same as stating a problem.  The point of the entire research project is to answer a particular question or set of questions.  The question(s) should be interesting, relevant, practical, and ethical.  Let’s say I am generally interested in the problem of student loan debt.  That’s a good place to start, but we can’t simply ask,

General question: Is student loan debt really a problem today?

How could we possibly answer that question? What data could we use? Isn’t this really an axiological (values-based) question? There are no clues in the question as to what data would be appropriate here to help us get started. Students often begin with these large unanswerable questions. They are not research questions. Instead, we could ask,

Poor research question: How many people have debt?

This is still not a very good research question. Why not? It is answerable, although we would probably want to clarify the context. We could add some context to improve it so that the question now reads,

Mediocre research question: How many people in the US have debt today? And does this amount vary by age and location?

Now we have added some context, so we have a better idea of where to look and who to look at. But this is still a pretty poor or mediocre research question. Why is that? Let’s say we did answer it. What would we really know? Maybe we would find out that student loan debt has increased over time and that young people today have more of it. We probably already know this. We don’t really want to go through a lot of trouble answering a question whose answer we already have. In fact, part of the reason we are even asking this question is that we know (or think) it is a problem. Instead of asking what you already know, ask a question to which you really do not know the answer. I can’t stress this enough, so I will say it again: Ask a question to which you do not already know the answer . The point of research is not to prove or make a point but to find out something unknown. What about student loan debt is still a mystery to you? Reviewing the literature could help (see chapter 9). By reviewing the literature, you can get a good sense of what is still mysterious or unknown about student loan debt, and you won’t be reinventing the wheel when you conduct your research. Let’s say you review the literature, and you are struck by the fact that we still don’t understand the true impact of debt on how people are living their lives. A possible research question might be,

Fair research question: What impact does student debt have on the lives of debtors?

Good start, but we still need some context to help guide the project. It is not nearly specific enough.

Better research question: What impact does student debt have on young adults (ages twenty-five to thirty-five) living in the US today?

Now we’ve added context, but we can still do a little bit better in narrowing our research question so that it is both clear and doable; in other words, we want to frame it in a way that provides a very clear research program:

Optimal research question: How do young adults (ages twenty-five to thirty-five) living in the US today who have taken on $30,000 or more in student debt describe the impact of their debt on their lives in terms of finding/choosing a job, buying a house, getting married, and other major life events?

Now you have a research question that can be answered and a clear plan of how to answer it. You will talk to young adults living in the US today who have high debt loads and ask them to describe the impacts of debt on their lives. That is all now in the research question. Note how different this very specific question is from where we started with the “problem” of student debt.

Take some time practicing turning the following general questions into research questions:

  • What can be done about the excessive use of force by police officers?
  • Why haven’t societies taken firmer steps to address climate change?
  • How do communities react to / deal with the opioid epidemic?
  • Who has been the most adversely affected by COVID?
  • When did political polarization get so bad?

Hint: Step back from each of the questions and try to articulate a possible underlying motivation, then formulate a research question that is specific and answerable.

It is important to take the time to come up with a research question, even if this research question changes a bit as you conduct your research (yes, research questions can change!). If you don’t have a clear question to start your research, you are likely to get very confused when designing your study because you will not be able to make coherent decisions about things like samples, sites, methods of data collection, and so on. Your research question is your anchor: “If we don’t have a question, we risk the possibility of going out into the field thinking we know what we’ll find and looking only for proof of what we expect to be there. That’s not empirical research (it’s not systematic)” ( Rubin 2021:37 ).

Researcher Note

How do you come up with ideas for what to study?

I study what surprises me. Usually, I come across a statistic that suggests something is common that I thought was rare. I tend to think it’s rare because the theories I read suggest it should be, and there’s not a lot of work in that area that helps me understand how the statistic came to be. So, for example, I learned that it’s common for Americans to marry partners who grew up in a different class than them and that about half of White kids born into the upper-middle class are downwardly mobile. I was so shocked by these facts that they naturally led to research questions. How do people come to marry someone who grew up in a different class? How do White kids born near the top of the class structure fall?

—Jessi Streib, author of The Power of the Past and Privilege Lost

What if you have literally no idea what the research question should be? How do you find a research question? Even if you have an interest in a topic before you get started, you see the problem now: topics and issues are not research questions! A research question doesn’t easily emerge; it takes a lot of time to hone one, as the practice above should demonstrate. In some research designs, the research question doesn’t even get clearly articulated until the end of data collection . More on that later. But you must start somewhere, of course. Start with your chosen discipline. This might seem obvious, but it is often overlooked. There is a reason it is called a discipline. We tend to think of “sociology,” “public health,” and “physics” as so many clusters of courses that are linked together by subject matter, but they are also disciplines in the sense that the study of each focuses the mind in a particular way and for particular ends. For example, in my own field, sociology, there is a loosely shared commitment to social justice and a general “sociological imagination” that enables its practitioners to connect personal experiences to society at large and to historical forces. It is helpful to think of issues and questions that are germane to your discipline. Within that overall field, there may be a particular course or unit of study you found most interesting. Within that course or unit of study, there may be an issue that intrigued you. And finally, within that issue, there may be an aspect or topic that you want to know more about.

When I was pursuing my dissertation research, I was asked often, “Why did you choose to study intimate partner violence among Native American women?” This question is necessary, and each time I answered, it helped shape me into a better researcher. I was interested in intimate partner violence because I am a survivor. I didn’t have intentions to work with a particular population or demographic—that came from my own deep introspection on my role as a researcher. I always questioned my positionality: What privileges do I hold as an academic? How has public health extracted information from institutionally marginalized populations? How can I build bridges between communities using my position, knowledge, and power? Public health as a field would not exist without the contributions of Indigenous people. So I started hanging out with them at community events, making friends, and engaging in self-education. Through these organic relationships built with Native women in the community, I saw that intimate partner violence was a huge issue. This led me to partner with Indigenous organizations to pursue a better understanding of how Native survivors of intimate partner violence seek support.

—Susanna Y. Park, PhD, mixed-methods researcher in public health and author of “How Native Women Seek Support as Survivors of Intimate Partner Violence: A Mixed-Methods Study”

One of the most exciting and satisfying things about doing academic research is that whatever you end up researching can become part of the body of knowledge that we have collectively created. Don’t make the mistake of thinking that you are doing this all on your own from scratch. Without even being aware of it, no matter if you are a first-year undergraduate student or a fourth-year graduate student, you have been trained to think certain questions are interesting. The very fact that you are majoring in a particular field or have signed up for years of graduate study in a program testifies to some level of commitment to a discipline. What we are looking for, ideally, is that your research builds on in some way (as extension, as critique, as lateral move) previous research and so adds to what we, collectively, understand about the social world. It is helpful to keep this in mind, as it may inspire you and also help guide you through the process. The point is, you are not meant to be doing something no one has ever thought of before, even if you are trying to find something that does not exactly duplicate previous research: “You may be trying to be too clever—aiming to come up with a topic unique in the history of the universe, something that will have people swooning with admiration at your originality and intellectual precociousness. Don’t do it. It’s safer…to settle on an ordinary, middle-of-the-road topic that will lend itself to a nicely organized process of project management. That’s the clever way of proceeding.… You can always let your cleverness shine through during the stages of design, analysis, and write-up. Don’t make things more difficult for yourself than you need to do” ( Davies 2007:20 ).

Rubin ( 2021 ) suggests four possible ways to develop a research question (there are many more, of course, but this can get you started). One way is to start with a theory that interests you and then select a topic where you can apply that theory. For example, you took a class on gender and society and learned about the “glass ceiling.” You could develop a study that tests that theory in a setting that has not yet been explored—maybe leadership at the Oregon Country Fair. The second way is to start with a topic that interests you and then go back to the books to find a theory that might explain it. This is arguably more difficult but often much more satisfying. Ask your professors for help—they might have ideas of theories or concepts that could be relevant or at least give you an idea of what books to read. The third way is to be very clever and select a question that already combines the topic and the theory. Rubin gives as one example sentencing disparities in criminology—this is both a topic and a theory or set of theories. You then just have to figure out particulars like setting and sample. I don’t know if I find this third way terribly helpful, but it might help you think through the possibilities. The fourth way involves identifying a puzzle or a problem, which can be either theoretical (something in the literature just doesn’t seem to make sense and you want to tackle addressing it) or empirical (something happened or is happening, and no one really understands why—think, for example, of mass school shootings).

Once you think you have an issue or topic that is worth exploring, you will need to (eventually) turn that into a good research question. A good research question is specific, clear, and feasible .

Specific . How specific a research question needs to be is somewhat related to the disciplinary conventions and whether the study is conceived inductively or deductively. In deductive research, one begins with a specific research question developed from the literature. You then collect data to test the theory or hypotheses accompanying your research question. In inductive research, however, one begins with data collection and analysis and builds theory from there. So naturally, the research question is a bit vaguer. In general, the more closely aligned to the natural sciences (and thus the deductive approach), the more a very tight and specific research question (along with specific, focused hypotheses) is required. This includes disciplines like psychology, geography, public health, environmental science, and marine resources management. The more one moves toward the humanities pole (and the inductive approach), the more looseness is permitted, as there is a general belief that we go into the field to find what is there, not necessarily what we imagine we are looking for (see figure 4.2). Disciplines such as sociology, anthropology, and gender and sexuality studies and some subdisciplines of public policy/public administration are closer to the humanities pole in this sense.

Figure 4.2. The natural sciences are more likely to use the scientific method and fall on the quantitative side of the continuum; the humanities are more likely to use interpretive methods and fall on the qualitative side.

Regardless of discipline and approach, however, it is a good idea for beginning researchers to create a research question as specific as possible, as this will serve as your guide throughout the process. You can tweak it later if needed, but start with something specific enough that you know what it is you are doing and why. It is more difficult to deal with ambiguity when you are starting out than later in your career, when you have a better handle on what you are doing. Being under a time constraint means the more specific the question, the better. Questions should always specify contexts, geographical locations, and time frames. Go back to your practice research questions and make sure that these are included.

Clear . A clear research question doesn’t only need to be intelligible to any reader (which, of course, it should); it needs to clarify any meanings of particular words or concepts (e.g., What is excessive force?). Check all your concepts to see if there are ways you can clarify them further—for example, note that we shifted from impact of debt to impact of high debt load and specified this as beginning at $30,000. Ideally, we would use the literature to help us clarify what a high debt load is or how to define “excessive” force.

Feasible . In order to know if your question is feasible, you are going to have to think a little bit about your entire research design. For example, a question that asks about the real-time impact of COVID restrictions on learning outcomes would require a time machine. You could tweak the question to ask instead about the long-term impacts of COVID restrictions, as measured two years after their end. Or let’s say you are interested in assessing the damage of opioid abuse on small-town communities across the United States. Is it feasible to cover the entire US? You might need a team of researchers to do this if you are planning on on-the-ground observations. Perhaps a case study of one particular community might be best. Then your research question needs to be changed accordingly.

Here are some things to consider in terms of feasibility:

  • Is the question too general for what you actually intend to do or examine? (Are you specifying the world when you only have time to explore a sliver of that world?)
  • Is the question suitable for the time you have available? (You will need different research questions for a study that can be completed in a term than one where you have one to two years, as in a master’s program, or even three to eight years, as in a doctoral program.)
  • Is the focus specific enough that you know where and how to begin?
  • What are the costs involved in doing this study, including time? Will you need to travel somewhere, and if so, how will you pay for it?
  • Will there be problems with “access”? (More on this in later chapters, but for now, consider how you might actually find people to interview or places to observe and whether gatekeepers exist who might keep you out.)
  • Will you need to submit an application proposal for your university’s IRB (institutional review board)? If you are doing any research with live human subjects, you probably need to factor in the time and potential hassle of an IRB review (see chapter 8). If you are under severe time constraints, you might need to consider developing a research question that can be addressed with secondary sources, online content, or historical archives (see chapters 16 and 17).

In addition to these practicalities, you will also want to consider the research question in terms of what is best for you now. Are you engaged in research because you are required to be—jumping a hurdle for a course or for your degree? If so, you really do want to think about your project as training and develop a question that will allow you to practice whatever data collection and analysis techniques you want to develop. For example, if you are a grad student in a public health program who is interested in eventually doing work that requires conducting interviews with patients, develop a research question and research design that is interview based. Focus on the practicality (and practice) of the study more than the theoretical impact or academic contribution, in other words. On the other hand, if you are a PhD candidate who is seeking an academic position in the future, your research question should be pitched in a way to build theoretical knowledge as well (the phrasing is typically “original contribution to scholarship”).

The more time you have to devote to the study and the larger the project, the more important it is to reflect on your own motivations and goals when crafting a research question (remember chapter 2?). By “your own motivations and goals,” I mean what interests you about the social world and what impact you want your research to have, both academically and practically speaking. Many students have secret (or not-so-secret) plans to make the world a better place by helping address climate change, pointing out pressure points to fight inequities, or bringing awareness to an overlooked area of concern. My own work in graduate school was motivated by the last of these three—the not-so-secret goal of my research was to raise awareness about obstacles to success for first-generation and working-class college students. This underlying goal motivated me to complete my dissertation in a timely manner and then to further continue work in this area and see my research get published. I cared enough about the topic that I was not ready to put it away. I am still not ready to put it away. I encourage you to find topics that you can’t put away, ever. That will keep you going whenever things get difficult in the research process, as they inevitably will.

On the other hand, if you are an undergraduate and you really have very little time, some of the best advice I have heard is to find a study you really like and adapt it to a new context. Perhaps you read a study about how students select majors and how this differs by class (Hurst 2019). You can try to replicate the study on a small scale among your classmates. Use the same research question, but revise for your context. You can probably even find the exact questions I used and ask them in the new sample. Then when you get to the analysis and write-up, you have a comparison study to guide you, and you can say interesting things about the new context and whether the original findings were confirmed (similar) or not. You can even propose reasons why you might have found differences between one and the other.

Another way of thinking about research questions is to tie them explicitly to the purpose of your study. Of course, this means being very clear about what your ultimate purpose is! Marshall and Rossman (2016) break down the purpose of a study into four categories: exploratory, explanatory, descriptive, and emancipatory (78). Exploratory purposes include investigating little-understood phenomena, identifying or discovering important new categories of meaning, or generating hypotheses for further research. For these, research questions might be fairly loose: What is going on here? How are people interacting on this site? What do people talk about when you ask them about the state of the world? You are almost (but never entirely) starting from scratch. Be careful, though—just because a topic is new to you does not mean it is really new. Someone else (or many other someones) may already have done this exploratory research. Part of your job is to find this out (more on this in “What Is a ‘Literature Review’?” in chapter 9). Descriptive purposes (documenting and describing a phenomenon) are similar to exploratory purposes but with a much clearer goal (description). A good research question for a descriptive study would specify the actions, events, beliefs, attitudes, structures, and/or processes that will be described.

Most researchers find that their topic has already been explored and described, so they move to trying to explain a relationship or phenomenon. For these, you will want research questions that capture the relationships of interest. For example, how does gender influence one’s understanding of police brutality (because we already know from the literature that it does, so now we are interested in understanding how and why)? Or what is the relationship between education and climate change denialism? If you find that prior research has already provided a lot of evidence about those relationships as well as explanations for how they work, and you want to move the needle past explanation into action, you might find yourself trying to conduct an emancipatory study. You want to be even more clear in acknowledging past research if you find yourself here. Then create a research question that will allow you to “create opportunities and the will to engage in social action” (Marshall and Rossman 2016:78). Research questions might ask, “How do participants problematize their circumstances and take positive social action?” If we know that some students have come together to fight against student debt, how are they doing this, and with what success? Your purpose would be to help evaluate possibilities for social change and to use your research to make recommendations for more successful emancipatory actions.

Recap: Be specific. Be clear. Be practical. And do what you love.

Choosing an Approach or Tradition

Qualitative researchers may be defined as those who are working with data that is not in numerical form, but there are actually multiple traditions or approaches that fall under this broad category. I find it useful to know a little bit about the history and development of qualitative research to better understand the differences in these approaches. The following chart provides an overview of the six phases of development identified by Denzin and Lincoln (2005):

Table 4.1. Six Phases of Development

Year/Period | Phase | Focus
Pre-1945 | Traditional | Influence of positivism; anthropologists and ethnographers strive for objectivity when reporting observations in the field
1945–1970 | Modernist | Emphasis on methodological rigor and procedural formalism as a way of gaining acceptance
1970–1986 | Blurred genres | A large number of alternative approaches emerge, all competing with and contesting positivist and formalist approaches (e.g., structuralism, symbolic interactionism, ethnomethodology, constructionism)
1980s–1990s | Crisis of representation | Attention turns to issues of power and privilege and the necessity of reflexivity around race, class, and gender positions and identities; traditional notions of validity and neutrality are undermined
1990s–2000 | Triple crisis | Moving beyond issues of representation, questions are raised about the evaluation of qualitative research and its writing/presentation; more political and participatory forms emerge; qualitative research to advance social justice is advocated
2000s– | Postexperimental | Boundaries expand to include creative nonfiction, autobiographical ethnography, poetic representation, and other creative approaches

There are other ways one could present the history as well. Feminist theory and methodologies came to the fore in the 1970s and 1980s and had a lot to do with the internal critique of more positivist approaches. Feminists were quite aware that standpoint matters—that the identity of the researcher plays a role in the research—and they were ardent supporters of dismantling unjust power systems and using qualitative methods to help advance this mission. You might note, too, that many of the internal disputes were basically epistemological disputes about how we know what we know and whether one’s social location/position delimits that knowledge. Today, we are in a bountiful world of qualitative research, one that embraces multiple forms of knowing and knowledge. This is good, but it means that you, the student, have more choice when it comes to situating your study and framing your research question, and some will expect you to signal the choices you have made in any research protocols you write, as well as in publications and presentations.

Creswell’s (1998) definition of qualitative research includes the notion of distinct traditions of inquiry: “Qualitative research is an inquiry process of understanding based on distinct methodological traditions of inquiry that explore a social or human problem. The research builds complex, holistic pictures, analyzes words, reports detailed views of informants, and conducts the study in a natural setting” (15; emphases added). I usually caution my students against taking shelter under one of these approaches, as, practically speaking, there is a lot of mixing of traditions among researchers. And yet it is useful to know something about the various histories and approaches, particularly as you are first starting out. Each tradition tends to favor a particular epistemological perspective (see chapter 3), a way of reasoning (see “Advanced: Inductive versus Deductive Reasoning”), and a data-collection technique.

There are anywhere from ten to twenty “traditions of inquiry,” depending on how one draws the boundaries. In my accounting, there are twelve, but three approaches tend to dominate the field.

Ethnography

Ethnography developed out of the discipline of anthropology as the study of (other) cultures. During the colonial era, it took a relatively positivist/objective approach of writing down the “truth” of what was observed—a “truth” then often used to help colonial administrators maintain order, exploit people, and extract resources more effectively. Ethnography was later adopted by all kinds of social science researchers to gain a better understanding of how groups of people (various subcultures and cultures) live their lives. Today, ethnographers are more likely to be seeking to dismantle power relations than to support them. They often study groups of people who are overlooked and marginalized, and sometimes they do the obverse by demonstrating how truly strange the familiar practices of the dominant group are. Ethnography is also central to organizational studies (e.g., How does this institution actually work?) and studies of education (e.g., What is it like to be a student during the COVID era?).

Ethnographers use methods of participant observation and intensive fieldwork in their studies, often living or working among the group under study for months at a time (and, in some cases, years). I’ve called this “deep ethnography,” and it is the subject of chapter 14. The data ethnographers analyze are copious “field notes” written while in the field, often supplemented by in-depth interviews and many more casual conversations. The final product of ethnographers is a “thick” description of the culture. This makes reading ethnographies enjoyable, as the goal is to write in such a way that the reader feels immersed in the culture.

There are variations on the ethnography, such as the autoethnography , where the researcher uses a systematic and rigorous study of themselves to better understand the culture in which they find themselves. Autoethnography is a relatively new approach, even though it is derived from one of the oldest approaches. One can say that it takes to heart the feminist directive to “make the personal political,” to underscore the connections between personal experiences and larger social and political structures. Introspection becomes the primary data source.

Grounded Theory

Grounded Theory holds a special place in qualitative research for a few reasons, not least of which is that nonqualitative researchers often mistakenly believe that Grounded Theory is the only qualitative research methodology. Sometimes, it is easier for students to explain what they are doing as “Grounded Theory” because it sounds “more scientific” than the alternative descriptions of qualitative research. This is definitely part of its appeal. Grounded Theory is the name given to the systematic inductive approach first developed by Glaser and Strauss in their 1967 book, The Discovery of Grounded Theory: Strategies for Qualitative Research. Too few people actually read Glaser and Strauss’s book. It is both groundbreaking and fairly unremarkable at the same time. As a historical intervention into research methods generally, it is both a sharp critique of positivist methods in the social sciences (theory testing) and a rejection of purely descriptive, account-building qualitative research. Glaser and Strauss argued for an approach whose goal was to construct (middle-level) theories from recursive data analysis of nonnumerical data (interviews and observations). They advocated a “constant comparative method” in which coding and analysis take place simultaneously and recursively. The demands are fairly strenuous. If done correctly, the result is the development of a new theory about the social world.

So why do I call this “fairly unremarkable”? To some extent, all qualitative research already does what Glaser and Strauss (1967) recommend, albeit without denoting the processes quite so specifically. As will be seen throughout the rest of this textbook, all qualitative research employs some “constant comparisons” through recursive data analyses. Where Grounded Theory sets itself apart from a significant number of qualitative research projects, however, is in its dedication to inductively building theory. Personally, I think it is important to understand that Glaser and Strauss were rejecting deductive theory testing in sociology when they first wrote their book. They were part of a rising cohort who rejected the positivist mathematical approaches that were taking over sociology journals in the 1950s and 1960s. Here are some of the comments and points they make against this kind of work:

Accurate description and verification are not so crucial when one’s purpose is to generate theory. (28; further arguing that sampling strategies are different when one is not trying to test a theory or generalize results)

Illuminating perspectives are too often suppressed when the main emphasis is verifying theory. (40)

Testing for statistical significance can obscure theoretical relevance. (201)

Instead, they argued, sociologists should be building theories about the social world. They are not physicists who spend time testing and refining theories. And they are not journalists who report descriptions. What makes sociologists better than journalists and other professionals is that they develop theory from their work: “In their driving efforts to get the facts [research sociologists] tend to forget that the distinctive offering of sociology to our society is sociological theory, not research description” (30–31).

Grounded Theory’s inductive approach can be off-putting to students who have a general research question in mind and a working hypothesis. The true Grounded Theory approach is often used in exploratory studies where there are no extant theories. After all, the promise of this approach is theory generation, not theory testing. Flying totally free at the start can be terrifying. It can also be a little disingenuous, as there are very few things under the sun that have not been considered before. Barbour (2008:197) laments that this approach is sometimes used because the researcher is too lazy to read the relevant literature.

To summarize, Glaser and Strauss justified the qualitative research project in a way that gave it standing among the social sciences, especially vis-à-vis quantitative researchers. By distinguishing the constant comparative method from journalism, Glaser and Strauss enabled qualitative research to gain legitimacy.

So what is it exactly, and how does one do it? The following stages provide a succinct and basic overview, differentiating the portions that are similar to qualitative research methods generally from those that are distinctive to the Grounded Theory approach:

Step 1. Select a case, sample, and setting (similar—unless you begin with a theory to test!).

Step 2. Begin data collection (similar).

Step 3. Engage in data analysis (similar in general, but the specific details are somewhat unique to Grounded Theory): (1) emergent coding (initial followed by focused), (2) axial (a priori) coding, (3) theoretical coding, (4) creation of theoretical categories; analysis ends when “theoretical saturation” has been achieved.

Grounded Theory’s prescriptive framework (i.e., it has a set of rules) can appeal to beginning students, but it is unnecessary to adopt the entire approach in order to make use of some of its suggestions. And if you do not exactly follow the Grounded Theory rulebook, calling what you are doing Grounded Theory can mislead others:

Grounded theory continues to be a misunderstood method, although many researchers purport to use it. Qualitative researchers often claim to conduct grounded theory studies without fully understanding or adopting its distinctive guidelines. They may employ one or two of the strategies or mistake qualitative analysis for grounded theory. Conversely, other researchers employ grounded theory methods in reductionist, mechanistic ways. Neither approach embodies the flexible yet systematic mode of inquiry, directed but open-ended analysis, and imaginative theorizing from empirical data that grounded theory methods can foster. Subsequently, the potential of grounded theory methods for generating middle-range theory has not been fully realized (Charmaz 2014).

Phenomenology

Where Grounded Theory sets itself apart for its inductive systematic approach to data analysis, phenomenologies are distinct for their focus on what is studied—in this case, the meanings of “lived experiences” of a group of persons sharing a particular event or circumstance. There are phenomenologies of being working class (Charlesworth 2000), of the tourist experience (Cohen 1979), of Whiteness (Ahmed 2007). The phenomenon of interest may also be an emotion or circumstance. One can study the phenomenon of “White rage,” for example, or the phenomenon of arranged marriage.

The roots of phenomenology lie in philosophy (Husserl, Heidegger, Merleau-Ponty, Sartre), but the approach has been adapted by sociologists in particular. Phenomenologists explore “how human beings make sense of experience and transform experience into consciousness, both individually and as shared meaning” (Patton 2002:104).

One of the most important aspects of conducting a good phenomenological study is getting the sample exactly right so that each person can speak to the phenomenon in question. Because the researcher is interested in the meanings of an experience, in-depth interviews are the preferred method of data collection. Observations are not nearly as helpful here because people may do a great number of things without meaning to or without being conscious of their implications. This is important to note because phenomenologists are studying not “the reality” of what happens at all but an articulated understanding of a lived experience. When reading a phenomenological study, it is important to keep this straight—too often I have heard students critique a study because the interviewer didn’t actually see how people’s behavior might conflict with what they say (which is, at heart, an epistemological issue!).

In addition to the “big three,” there are many other approaches; some are variations, and some are distinct approaches in their own right. Case studies focus explicitly on context and dynamic interactions over time and can be accomplished with quantitative or qualitative methods or a mixture of both (for this reason, I am not considering it as one of the big three qualitative methods, even though it is a very common approach). Whatever methods are used, a contextualized deep understanding of the case (or cases) is central.

Critical inquiry is a loose collection of techniques held together by a core argument that understanding issues of power should be the focus of much social science research or, to put this another way, that it is impossible to understand society (its people and institutions) without paying attention to the ways that power relations and power dynamics inform and deform those people and institutions. This attention to power dynamics includes how research is conducted too. All research fundamentally involves issues of power. For this reason, many critical inquiry traditions include a place for collaboration between researcher and researched. Examples include (1) critical narrative analysis, which seeks to describe the meaning of experience for marginalized or oppressed persons or groups through storytelling; (2) participatory action research, which requires collaboration between the researcher and the research subjects or community of interest; and (3) critical race analysis, a methodological application of Critical Race Theory (CRT), which posits that racial oppression is endemic (if not always throughout time and place, at least now and here).

Do you follow a particular tradition of inquiry? Why?

Shawn Wilson’s book, Research Is Ceremony: Indigenous Research Methods , is my holy grail. It really flipped my understanding of research and relationships. Rather than thinking linearly and approaching research in a more canonical sense, Wilson shook my world view by drawing me into a pattern of inquiry that emphasized transparency and relational accountability. The Indigenous research paradigm is applicable in all research settings, and I follow it because it pushes me to constantly evaluate my position as a knowledge seeker and knowledge sharer.

Autoethnography takes the researcher as the subject. This is one approach that is difficult to explain to more quantitatively minded researchers, as it seems to violate many of the norms of “scientific research” as understood by them. First, the sample size is quite small—the n is 1, the researcher. Second, the researcher is not a neutral observer—indeed, the subjectivity of the researcher is the main strength of this approach. Autoethnographies can be extremely powerful for their depth of understanding and reflexivity, but they need to be conducted with their own version of rigor to stand up to scrutiny by skeptics. If you are skeptical, read one of the excellent published examples out there—I bet you will be impressed with what you take away. As they say, the proof is in the pudding on this approach.

Advanced: Inductive versus Deductive Reasoning

There has been a great deal of ink spilled in the discussion of inductive versus deductive approaches, not all of it very instructive. Although there is a huge conceptual difference between them, in practical terms, most researchers cycle between the two, even within the same research project. The simplest way to explain the difference between the two is that we are using deductive reasoning when we test an existing theory (move from general to particular), and we are using inductive reasoning when we are generating theory (move from particular to general). Figure 4.2 provides a schematic of the deductive approach. From the literature, we select a theory about the impact of student loan debt: student loan debt will delay homeownership among young adults. We then formulate a hypothesis based on this theory: adults in their thirties with high debt loads will be less likely to own homes than their peers who do not have high debt loads. We then collect data to test the hypothesis and analyze the results. We find that homeownership is substantially lower among persons of color and those who were the first in their families to graduate from college. Notably, high debt loads did not affect homeownership among White adults whose parents held college degrees. We thus refine the theory to match the new findings: student debt loads delay homeownership among some young adults, thereby increasing inequalities in this generation. We have now contributed new knowledge to our collective corpus.

[Figure 4.2: The deductive approach]

The inductive approach is contrasted in figure 4.3. Here, we did not begin with a preexisting theory or previous literature but instead began with an observation. Perhaps we were conducting interviews with young adults who held high amounts of debt and stumbled across this observation, struck by how many were renting apartments or small houses. We then noted a pattern—not all the young adults we were talking to were renting; race and class seemed to play a role here. We would then probably expand our study in a way to be able to further test this developing theory, ensuring that we were not seeing anomalous patterns. Once we were confident about our observations and analyses, we would then develop a theory, coming to the same place as our deductive approach, but in reverse.

[Figure 4.3: The inductive approach]

A third form of reasoning, abductive reasoning (sometimes referred to as probabilistic reasoning), was developed in the late nineteenth century by the American philosopher Charles Sanders Peirce. I have included some articles for further reading for those interested.

Among social scientists, the deductive approach is often relaxed so that a research question is set based on the existing literature rather than creating a hypothesis or set of hypotheses to test. Some journals still require researchers to articulate hypotheses, however. If you have in mind a publication, it is probably a good idea to take a look at how most articles are organized and whether specific hypotheses statements are included.

Table 4.2. Twelve Approaches. Adapted from Patton 2002:132-133.

Approach | Home discipline | Data collection techniques
Ethnography | Anthropology | Fieldwork/observations + supplemental interviews
Grounded theory | Sociology | Fieldwork/observations + interviews
Phenomenology | Philosophy | In-depth interviews
Constructivism | Sociology | Focus groups; interviews
Heuristic inquiry | Psychology | Self-reflections and field notes + interviews
Ethnomethodology | Sociology | In-depth interviews + fieldwork, including social experiments
Symbolic interaction | Social psychology | Focus groups + interviews
Semiotics | Linguistics | Textual analyses + interviews/focus groups
Hermeneutics | Theology | Textual analyses
Narrative analysis | Literary criticism | Interviews, oral histories, textual analyses, historical artifacts, content analyses
Ecological psychology | Ecology | Observation
Orientational/standpoint approaches (critical theory, feminist theory) | Law; sociology | PAR, interviews, focus groups

Further Readings

The following readings are examples of various approaches or traditions of inquiry:

Ahmed, Sara. 2007. “A Phenomenology of Whiteness.” Feminist Theory 8(2):149–168.

Charlesworth, Simon. 2000. A Phenomenology of Working-Class Experience . Cambridge: Cambridge University Press.

Clandinin, D. Jean, and F. Michael Connelly. 2000. Narrative Inquiry: Experience and Story in Qualitative Research . San Francisco: Jossey-Bass.

Cohen, E. 1979. “A Phenomenology of Tourist Experiences.” Sociology 13(2):179–201.

Cooke, Bill, and Uma Kothari, eds. 2001. Participation: The New Tyranny? London: Zed Books. A critique of participatory action.

Corbin, Juliet, and Anselm Strauss. 2008. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory . 3rd ed. Thousand Oaks, CA: SAGE.

Crabtree, B. F., and W. L. Miller, eds. 1999. Doing Qualitative Research: Multiple Strategies . Thousand Oaks, CA: SAGE.

Creswell, John W. 1997. Qualitative Inquiry and Research Design: Choosing among Five Approaches. Thousand Oaks, CA: SAGE.

Glaser, Barney G., and Anselm Strauss. 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research . New York: Aldine.

Gobo, Giampetro, and Andrea Molle. 2008. Doing Ethnography . Thousand Oaks, CA: SAGE.

Hancock, Dawson B., and Bob Algozzine. 2016. Doing Case Study Research: A Practical Guide for Beginning Research . 3rd ed. New York: Teachers College Press.

Harding, Sandra. 1987. Feminism and Methodology . Bloomington: Indiana University Press.

Husserl, Edmund. (1913) 2017. Ideas: Introduction to Pure Phenomenology . Eastford, CT: Martino Fine Books.

Rose, Gillian. 2012. Visual Methodologies . 3rd ed. London: SAGE.

Van der Riet, M. 2009. “Participatory Research and the Philosophy of Social Science: Beyond the Moral Imperative.” Qualitative Inquiry 14(4):546–565.

Van Manen, Max. 1990. Researching Lived Experience: Human Science for an Action Sensitive Pedagogy . Albany: State University of New York.

Wortham, Stanton. 2001. Narratives in Action: A Strategy for Research and Analysis . New York: Teachers College Press.

Inductive, Deductive, and Abductive Reasoning and Nomothetic Science in General

Aliseda, Atocha. 2003. “Mathematical Reasoning vs. Abductive Reasoning: A Structural Approach.” Synthese 134(1/2):25–44.

Bonk, Thomas. 1997. “Newtonian Gravity, Quantum Discontinuity and the Determination of Theory by Evidence.” Synthese 112(1):53–73. A (natural) scientific discussion of inductive reasoning.

Bonnell, Victoria E. 1980. “The Uses of Theory, Concepts and Comparison in Historical Sociology.” Comparative Studies in Society and History 22(2):156–173.

Crane, Mark, and Michael C. Newman. 1996. “Scientific Method in Environmental Toxicology.” Environmental Reviews 4(2):112–122.

Huang, Philip C. C., and Yuan Gao. 2015. “Should Social Science and Jurisprudence Imitate Natural Science?” Modern China 41(2):131–167.

Mingers, J. 2012. “Abduction: The Missing Link between Deduction and Induction. A Comment on Ormerod’s ‘Rational Inference: Deductive, Inductive and Probabilistic Thinking.’” Journal of the Operational Research Society 63(6):860–861.

Ormerod, Richard J. 2010. “Rational Inference: Deductive, Inductive and Probabilistic Thinking.” Journal of the Operational Research Society 61(8):1207–1223.

Perry, Charner P. 1927. “Inductive vs. Deductive Method in Social Science Research.” Southwestern Political and Social Science Quarterly 8(1):66–74.

Plutynski, Anya. 2011. “Four Problems of Abduction: A Brief History.” HOPOS: The Journal of the International Society for the History of Philosophy of Science 1(2):227–248.

Thompson, Bruce, and Gloria M. Borrello. 1992. “Different Views of Love: Deductive and Inductive Lines of Inquiry.” Current Directions in Psychological Science 1(5):154–156.

Tracy, Sarah J. 2012. “The Toxic and Mythical Combination of a Deductive Writing Logic for Inductive Qualitative Research.” Qualitative Communication Research 1(1):109–141.

A place or collection containing records, documents, or other materials of historical interest; most universities have an archive of material related to the university’s history, as well as other “special collections” that may be of interest to members of the community.

A person who introduces the researcher to a field site’s culture and population.  Also referred to as guides.  Used in ethnography.

Autoethnography: A form of research and a methodological tradition of inquiry in which the researcher uses self-reflection and writing to explore personal experiences and connect this autobiographical story to wider cultural, political, and social meanings and understandings. “Autoethnography is a research method that uses a researcher's personal experience to describe and critique cultural beliefs, practices, and experiences” ( Adams, Jones, and Ellis 2015 ).

Methodology: The philosophical framework in which research is conducted; the approach to “research” (what practices this entails, etc.). Inevitably, one’s epistemological perspective will also guide one’s methodological choices, as in the case of a constructivist who employs a Grounded Theory approach to observations and interviews, or an objectivist who surveys key figures in an organization to find out how that organization is run. One of the key methodological distinctions in social science research is that between quantitative and qualitative research.

Coding: The process of labeling and organizing qualitative data to identify different themes and the relationships between them; a way of simplifying data to allow better management and retrieval of key themes and illustrative passages. See coding frame and codebook.

Axial coding: A later-stage coding process used in Grounded Theory in which data is reassembled around a category, or axis.

Selective coding: A later-stage coding process used in Grounded Theory in which key words or key phrases capture the emergent theory.

Saturation: The point at which you can conclude data collection because every person you are interviewing, interaction you are observing, or content you are analyzing merely confirms what you have already noted. Achieving saturation is often used as the justification for the final sample size.

Phenomenology: A methodological tradition of inquiry that focuses on the meanings held by individuals and/or groups about a particular phenomenon (e.g., a “phenomenology of whiteness” or a “phenomenology of first-generation college students”). Sometimes this is referred to as understanding “the lived experience” of a particular group or culture. Interviews form the primary tool of data collection for phenomenological studies. Derived from the German philosophy of phenomenology (Husserl 1913; 2017).

Sample size: The number of individuals (or units) included in your sample.

Deductive reasoning: A form of reasoning that employs a “top-down” approach to drawing conclusions: it begins with a premise or hypothesis and seeks to verify it (or disconfirm it) with newly collected data. Inferences are made based on widely accepted facts or premises. Deduction is idea-first, followed by observations and a conclusion. This form of reasoning is often used in quantitative research and less often in qualitative research. Compare to inductive reasoning. See also abductive reasoning.

Inductive reasoning: A form of reasoning that employs a “bottom-up” approach to drawing conclusions: it begins with the collection of data relevant to a particular question and then seeks to build an argument or theory based on an analysis of that data. Induction is observation first, followed by an idea that could explain what has been observed. This form of reasoning is often used in qualitative research and seldom used in quantitative research. Compare to deductive reasoning. See also abductive reasoning.

Abductive reasoning: An “interpretivist” form of reasoning in which “most likely” conclusions are drawn, based on inference. This approach is often used by qualitative researchers who stress the recursive nature of qualitative data analysis. Compare with deductive reasoning and inductive reasoning.

Nomothetic research: A form of social science research that generally follows the scientific method as established in the natural sciences. In contrast to idiographic research, the nomothetic researcher looks for general patterns and “laws” of human behavior and social relationships. Once discovered, these patterns and laws are expected to be widely applicable. Quantitative social science research is nomothetic because it seeks to generalize findings from samples to larger populations. Most qualitative social science research is also nomothetic, although generalizability is here understood to be theoretical in nature rather than statistical. Some qualitative researchers, however, espouse the idiographic research paradigm instead.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Evaluate Your Own Research Question

Evaluate the quality of your research question and the ease with which you should be able to answer it.

Ask yourself:

  • Does the question deal with a topic or issue that interests me enough to spark my own thoughts and opinions?
  • Is the question easily and fully researchable? For example, a question about the effects of a policy change on airline safety might require:
      • Statistics on airline crashes before and after
      • Statistics on other safety problems before and after
      • Information about maintenance practices before and after
      • Information about government safety requirements before and after
  • Is the scope of this information reasonable (e.g., can I really research 30 online writing programs developed over a span of 10 years?)
  • Given the type and scope of the information that I need, is my question too broad, too narrow, or okay?
  • What sources will be able to provide the information I need to answer my research question (journals, books, Internet, government documents, people)?
  • Can I access these sources?
  • Given my answers to the above questions, do I have a good-quality research question that I actually will be able to answer by doing research?

Contact your course mentor if you're not sure whether your research question fulfills the assignment.


Canadian Journal of Surgery, 53(4), August 2010

Research questions, hypotheses and objectives

Patricia Farrugia,* Bradley A. Petrisor,† Forough Farrokhyar,‡§ Mohit Bhandari

*Michael G. DeGroote School of Medicine, †Division of Orthopaedic Surgery, ‡Department of Surgery and §Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ont.

There is an increasing familiarity with the principles of evidence-based medicine in the surgical community. As surgeons become more aware of the hierarchy of evidence, grades of recommendations and the principles of critical appraisal, they develop an increasing familiarity with research design. Surgeons and clinicians are looking more and more to the literature and clinical trials to guide their practice; as such, it is becoming a responsibility of the clinical research community to attempt to answer questions that are not only well thought out but also clinically relevant. The development of the research question, including a supportive hypothesis and objectives, is a necessary key step in producing clinically relevant results to be used in evidence-based practice. A well-defined and specific research question is more likely to help guide us in making decisions about study design and population and subsequently what data will be collected and analyzed. 1

Objectives of this article

In this article, we discuss important considerations in the development of a research question and hypothesis and in defining objectives for research. By the end of this article, the reader will be able to appreciate the significance of constructing a good research question and developing hypotheses and research objectives for the successful design of a research study. The following article is divided into 3 sections: research question, research hypothesis and research objectives.

Research question

Interest in a particular topic usually begins the research process, but it is the familiarity with the subject that helps define an appropriate research question for a study. 1 Questions then arise out of a perceived knowledge deficit within a subject area or field of study. 2 Indeed, Haynes suggests that it is important to know “where the boundary between current knowledge and ignorance lies.” 1 The challenge in developing an appropriate research question is in determining which clinical uncertainties could or should be studied and also rationalizing the need for their investigation.

Increasing one’s knowledge about the subject of interest can be accomplished in many ways. Appropriate methods include systematically searching the literature, in-depth interviews and focus groups with patients (and proxies) and interviews with experts in the field. In addition, awareness of current trends and technological advances can assist with the development of research questions. 2 It is imperative to understand what has been studied about a topic to date in order to build on the knowledge that has already been gathered. Indeed, some granting institutions (e.g., the Canadian Institutes of Health Research) encourage applicants to conduct a systematic review of the available evidence if a recent review does not already exist, and preferably a pilot or feasibility study, before applying for a grant for a full trial.

In-depth knowledge about a subject may generate a number of questions. It then becomes necessary to ask whether these questions can be answered through one study or if more than one study is needed. 1 Additional research questions can be developed, but several basic principles should be taken into consideration. 1 All questions, primary and secondary, should be developed at the beginning and planning stages of a study. Any additional questions should never compromise the primary question because it is the primary research question that forms the basis of the hypothesis and study objectives. It must be kept in mind that within the scope of one study, the presence of a number of research questions will affect and potentially increase the complexity of both the study design and subsequent statistical analyses, not to mention the actual feasibility of answering every question. 1 A sensible strategy is to establish a single primary research question around which to focus the study plan. 3 In a study, the primary research question should be clearly stated at the end of the introduction of the grant proposal, and it usually specifies the population to be studied, the intervention to be implemented and other circumstantial factors. 4

Hulley and colleagues 2 have suggested the use of the FINER criteria in the development of a good research question ( Box 1 ). The FINER criteria highlight useful points that may increase the chances of developing a successful research project. A good research question should specify the population of interest, be of interest to the scientific community and potentially to the public, have clinical relevance and further current knowledge in the field (and of course be compliant with the standards of ethical boards and national research standards).

FINER criteria for a good research question

Feasible
Interesting
Novel
Ethical
Relevant

Adapted with permission from Wolters Kluwer Health. 2

Whereas the FINER criteria outline the important aspects of the question in general, a useful format to use in the development of a specific research question is the PICO format — consider the population (P) of interest, the intervention (I) being studied, the comparison (C) group (or to what is the intervention being compared) and the outcome of interest (O). 3 , 5 , 6 Often timing (T) is added to PICO ( Box 2 ) — that is, “Over what time frame will the study take place?” 1 The PICOT approach helps generate a question that aids in constructing the framework of the study and subsequently in protocol development by alluding to the inclusion and exclusion criteria and identifying the groups of patients to be included. Knowing the specific population of interest, intervention (and comparator) and outcome of interest may also help the researcher identify an appropriate outcome measurement tool. 7 The more defined the population of interest, and thus the more stringent the inclusion and exclusion criteria, the greater the effect on the interpretation and subsequent applicability and generalizability of the research findings. 1 , 2 A restricted study population (and exclusion criteria) may limit bias and increase the internal validity of the study; however, this approach will limit external validity of the study and, thus, the generalizability of the findings to the practical clinical setting. Conversely, a broadly defined study population and inclusion criteria may be representative of practical clinical practice but may increase bias and reduce the internal validity of the study.

PICOT criteria 1

Population (patients)
Intervention (for intervention studies only)
Comparison group
Outcome of interest
Time
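As a purely illustrative sketch, a PICOT question can be assembled mechanically from its components. The field values below are assumptions loosely modeled on the hip-arthroplasty example discussed in this article; the "Time" value in particular is invented:

```python
# Illustrative only: decomposing a research question into PICOT fields
# and assembling them into a single sentence. All values are assumptions.
picot = {
    "Population": "patients undergoing total hip arthroplasty",
    "Intervention": "computer-assisted acetabular component insertion",
    "Comparison": "free-hand acetabular component placement",
    "Outcome": "functional outcome",
    "Time": "1 year after surgery",  # invented time frame for illustration
}

question = (
    f"In {picot['Population']}, does {picot['Intervention']}, "
    f"compared with {picot['Comparison']}, improve {picot['Outcome']} "
    f"at {picot['Time']}?"
)
print(question)
```

Writing the question this way makes any missing component (often the comparison group or the time frame) immediately visible.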

A poorly devised research question may affect the choice of study design, potentially lead to futile situations and, thus, hamper the chance of determining anything of clinical significance, which will then affect the potential for publication. Without devoting appropriate resources to developing the research question, the quality of the study and subsequent results may be compromised. During the initial stages of any research study, it is therefore imperative to formulate a research question that is both clinically relevant and answerable.

Research hypothesis

The primary research question should be driven by the hypothesis rather than the data. 1 , 2 That is, the research question and hypothesis should be developed before the start of the study. This sounds intuitive; however, if we take, for example, a database of information, it is potentially possible to perform multiple statistical comparisons of groups within the database to find a statistically significant association. This could then lead one to work backward from the data and develop the “question.” This is counterintuitive to the process because the question is asked specifically to then find the answer, thus collecting data along the way (i.e., in a prospective manner). Multiple statistical testing of associations from data previously collected could potentially lead to spuriously positive findings of association through chance alone. 2 Therefore, a good hypothesis must be based on a good research question at the start of a trial and, indeed, drive data collection for the study.
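The danger of working backward from a database can be made concrete with a small simulation. This is a sketch under stated assumptions (20 hypothetical outcome variables, 30 patients per group, both groups drawn from the same distribution), so every "significant" result it finds is spurious by construction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Simulate a database with NO true group differences: 20 unrelated
# outcome variables measured in two groups of 30 patients each.
n_tests, n_per_group = 20, 30
false_positives = 0
for _ in range(n_tests):
    group_a = rng.normal(loc=50, scale=10, size=n_per_group)
    group_b = rng.normal(loc=50, scale=10, size=n_per_group)  # same distribution
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1

# With 20 comparisons at alpha = 0.05, the chance of at least one
# spurious "significant" association is 1 - 0.95**20, about 64%.
print(false_positives)
```

Running this repeatedly with different seeds will, more often than not, turn up at least one "finding" despite there being nothing to find, which is exactly the point the paragraph above makes.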

The research or clinical hypothesis is developed from the research question and then the main elements of the study — sampling strategy, intervention (if applicable), comparison and outcome variables — are summarized in a form that establishes the basis for testing, statistical and ultimately clinical significance. 3 For example, in a research study comparing computer-assisted acetabular component insertion versus free-hand acetabular component placement in patients in need of total hip arthroplasty, the experimental group would be computer-assisted insertion and the control/conventional group would be free-hand placement. The investigative team would first state a research hypothesis. This could be expressed as a single outcome (e.g., computer-assisted acetabular component placement leads to improved functional outcome) or potentially as a complex/composite outcome; that is, more than one outcome (e.g., computer-assisted acetabular component placement leads to both improved radiographic cup placement and improved functional outcome).

However, when formally testing statistical significance, the hypothesis should be stated as a “null” hypothesis. 2 The purpose of hypothesis testing is to make an inference about the population of interest on the basis of a random sample taken from that population. The null hypothesis for the preceding research hypothesis then would be that there is no difference in mean functional outcome between the computer-assisted insertion and free-hand placement techniques. After forming the null hypothesis, the researchers would form an alternate hypothesis stating the nature of the difference, if it should appear. The alternate hypothesis would be that there is a difference in mean functional outcome between these techniques. At the end of the study, the null hypothesis is then tested statistically. If the findings of the study are not statistically significant (i.e., there is no difference in functional outcome between the groups in a statistical sense), we cannot reject the null hypothesis, whereas if the findings were significant, we can reject the null hypothesis and accept the alternate hypothesis (i.e., there is a difference in mean functional outcome between the study groups), errors in testing notwithstanding. In other words, hypothesis testing confirms or refutes the statement that the observed findings did not occur by chance alone but rather occurred because there was a true difference in outcomes between these surgical procedures. The concept of statistical hypothesis testing is complex, and the details are beyond the scope of this article.

Another important concept inherent in hypothesis testing is whether the hypotheses will be 1-sided or 2-sided. A 2-sided hypothesis states that there is a difference between the experimental group and the control group, but it does not specify in advance the expected direction of the difference. For example, we asked whether there is an improvement in outcomes with computer-assisted surgery or whether the outcomes are worse with computer-assisted surgery. We presented a 2-sided test in the above example because we did not specify the direction of the difference. A 1-sided hypothesis states a specific direction (e.g., there is an improvement in outcomes with computer-assisted surgery). A 2-sided hypothesis should be used unless there is a good justification for using a 1-sided hypothesis. As Bland and Altman 8 stated, “One-sided hypothesis testing should never be used as a device to make a conventionally nonsignificant difference significant.”
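The null-hypothesis test in the worked hip-arthroplasty example, and the difference between a 2-sided and a 1-sided alternative, can be sketched with SciPy. The outcome scores below are invented for illustration, and the `alternative` keyword of `scipy.stats.ttest_ind` is assumed available (SciPy 1.6 or later):

```python
from scipy import stats

# Invented functional-outcome scores (0-100), for illustration only.
computer_assisted = [88, 91, 85, 90, 93, 87, 92, 89, 94, 86]
free_hand = [84, 80, 86, 82, 79, 85, 83, 81, 87, 78]

# H0: no difference in mean functional outcome between techniques.
# 2-sided H1: the means differ (direction unspecified in advance).
t_stat, p_two_sided = stats.ttest_ind(computer_assisted, free_hand,
                                      alternative="two-sided")

# 1-sided H1: computer-assisted placement gives higher outcomes.
_, p_one_sided = stats.ttest_ind(computer_assisted, free_hand,
                                 alternative="greater")

# When the observed difference already points in the hypothesized
# direction, the 1-sided p-value is half the 2-sided one, which is
# why a 1-sided test must be justified in advance, never afterwards.
print(p_two_sided, p_one_sided)
```

With these made-up data the 2-sided p-value falls below 0.05, so the null hypothesis of no difference in means would be rejected; the halved 1-sided p-value shows numerically why choosing sidedness after seeing the data is improper.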

The research hypothesis should be stated at the beginning of the study to guide the objectives for research. Whereas the investigators may state the hypothesis as being 1-sided (there is an improvement with treatment), the study and investigators must adhere to the concept of clinical equipoise. According to this principle, a clinical (or surgical) trial is ethical only if the expert community is uncertain about the relative therapeutic merits of the experimental and control groups being evaluated. 9 It means there must exist an honest and professional disagreement among expert clinicians about the preferred treatment. 9

Designing a research hypothesis is supported by a good research question and will influence the type of research design for the study. Acting on the principles of appropriate hypothesis development, the study can then confidently proceed to the development of the research objective.

Research objective

The primary objective should be coupled with the hypothesis of the study. Study objectives define the specific aims of the study and should be clearly stated in the introduction of the research protocol. 7 From our previous example and using the investigative hypothesis that there is a difference in functional outcomes between computer-assisted acetabular component placement and free-hand placement, the primary objective can be stated as follows: this study will compare the functional outcomes of computer-assisted acetabular component insertion versus free-hand placement in patients undergoing total hip arthroplasty. Note that the study objective is an active statement about how the study is going to answer the specific research question. Objectives can (and often do) state exactly which outcome measures are going to be used within their statements. They are important because they not only help guide the development of the protocol and design of study but also play a role in sample size calculations and determining the power of the study. 7 These concepts will be discussed in other articles in this series.
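The link between objectives and sample size mentioned above can be illustrated with the standard normal-approximation formula for comparing two means. This is a textbook sketch, not the method of any particular trial, and the chosen effect size, alpha and power are assumptions:

```python
import math
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate patients needed per group for a 2-sided, two-sample
    comparison of means: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardized effect size (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" standardized difference (d = 0.5) at alpha = 0.05 and
# 80% power needs roughly 63 patients per group by this approximation.
print(n_per_group(0.5))
```

Note how the required sample size grows rapidly as the expected effect shrinks: halving the effect size quadruples the number of patients needed, which is why a precisely stated primary objective matters before recruitment begins.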

From the surgeon’s point of view, it is important for the study objectives to be focused on outcomes that are important to patients and clinically relevant. For example, the most methodologically sound randomized controlled trial comparing 2 techniques of distal radial fixation would have little or no clinical impact if the primary objective was to determine the effect of treatment A as compared to treatment B on intraoperative fluoroscopy time. However, if the objective was to determine the effect of treatment A as compared to treatment B on patient functional outcome at 1 year, this would have a much more significant impact on clinical decision-making. Second, more meaningful surgeon–patient discussions could ensue, incorporating patient values and preferences with the results from this study. 6 , 7 It is the precise objective and what the investigator is trying to measure that is of clinical relevance in the practical setting.

The following is an example from the literature about the relation between the research question, hypothesis and study objectives:

Study: Warden SJ, Metcalf BR, Kiss ZS, et al. Low-intensity pulsed ultrasound for chronic patellar tendinopathy: a randomized, double-blind, placebo-controlled trial. Rheumatology 2008;47:467–71.

Research question: How does low-intensity pulsed ultrasound (LIPUS) compare with a placebo device in managing the symptoms of skeletally mature patients with patellar tendinopathy?

Research hypothesis: Pain levels are reduced in patients who receive daily active-LIPUS (treatment) for 12 weeks compared with individuals who receive inactive-LIPUS (placebo).

Objective: To investigate the clinical efficacy of LIPUS in the management of patellar tendinopathy symptoms.

The development of the research question is the most important aspect of a research project. A research project can fail if the objectives and hypothesis are poorly focused and underdeveloped. Useful tips for surgical researchers are provided in Box 3 . Designing and developing an appropriate and relevant research question, hypothesis and objectives can be a difficult task. The critical appraisal of the research question used in a study is vital to the application of the findings to clinical practice. Focusing resources, time and dedication to these 3 very important tasks will help to guide a successful research project, influence interpretation of the results and affect future publication efforts.

Tips for developing research questions, hypotheses and objectives for research studies

  • Perform a systematic literature review (if one has not been done) to increase knowledge and familiarity with the topic and to assist with research development.
  • Learn about current trends and technological advances on the topic.
  • Seek careful input from experts, mentors, colleagues and collaborators to refine your research question, as this will help guide the research study.
  • Use the FINER criteria in the development of the research question.
  • Ensure that the research question follows PICOT format.
  • Develop a research hypothesis from the research question.
  • Develop clear and well-defined primary and secondary (if needed) objectives.
  • Ensure that the research question and objectives are answerable, feasible and clinically relevant.

FINER = feasible, interesting, novel, ethical, relevant; PICOT = population (patients), intervention (for intervention studies only), comparison group, outcome of interest, time.

Competing interests: No funding was received in preparation of this paper. Dr. Bhandari was funded, in part, by a Canada Research Chair, McMaster University.


Academic test - sample test questions

IELTS Academic is your key to studying where you want to go.


Preparing for your IELTS Academic test

We want you to do well in your test. To help you get ready, here are some sample tests for the Academic test.

Listening and Speaking tests are the same for IELTS Academic and IELTS General Training, but the Reading and Writing tests are different.

With these official practice materials you can:

  • get used to the test format
  • experience the types of tasks involved
  • test yourself under timed conditions
  • review your answers and compare them with model answers.

IELTS Academic - paper sample tests

Listening (30 minutes)

The Listening test is the same for both IELTS Academic and IELTS General Training and consists of four recorded monologues and conversations. The following IELTS Listening sample tasks are to be used with the Answer Sheet and MP3 audio files and/or transcripts. Each answer sheet indicates which recording to listen to, or if a transcript is provided.

Listening sample tasks

  • Listening sample tasks (PDF 777 KB)  
  • Listening answer sheet (PDF 1 MB)  

Find out more about the Listening test

Academic Reading (60 minutes)

Texts for the Academic Reading test are taken from books, journals, magazines and newspapers.

A variety of tasks is used, including: multiple-choice questions, identifying information, identifying writer’s views/claims, matching information, matching headings, matching features, matching sentence endings, sentence completion, summary completion, note completion, table completion, flow-chart completion, diagram label completion, short-answer questions.

Academic Reading sample tasks

  • Academic Reading sample tasks (PDF 934 KB)
  • Academic Reading answer sheet (PDF 490 KB)

Find out more about the Academic Reading test

Academic Writing (60 minutes) 

The Academic Writing test consists of two writing tasks of 150 words and 250 words. In Task 1, you are asked to describe some visual information (graph/table/chart/diagram). You need to write 150 words in about 20 minutes. In Task 2 you are presented with a point of view or argument or problem. You need to write your response in 250 words in about 40 minutes.

Academic Writing sample tasks

  • Academic Writing sample tasks (PDF 1 MB)

Find out more about the Academic Writing test

Speaking (11–14 minutes)

In the Speaking test, you have a discussion with a certified examiner. It is interactive and as close to a real-life situation as a test can get. There are three parts to the test and each part fulfils a specific function in terms of interaction pattern, task input and test taker output. In Part 1, you answer questions about yourself and your family. In Part 2, you speak about a topic. In Part 3, you have a longer discussion on the topic. The Speaking test is the same for both IELTS Academic and IELTS General Training. Each of the three parts is designed to test a different aspect of your communication ability.

Speaking sample tasks

  • Speaking sample tasks (PDF 403 KB)  

Find out more about the Speaking test 

IELTS Academic - computer sample tests

Listening (30 minutes)

The Listening question types for IELTS on computer are the same as in the IELTS on paper test. 

A variety of tasks is used including: multiple choice, matching, plan/map/diagram labelling, form completion, note completion, table completion, flow-chart completion, summary completion, sentence completion, short-answer questions.

Listening Sample task Multiple Choice (one answer)

You will hear an extract from a Part 3 recording in which a student called Judy is discussing her research with her tutor and fellow students.

For each question, click on the correct answer.

  • Listening Sample task Multiple Choice (one answer)
  • Listening Sample task Multiple Choice (one answer) Answer Key (PDF 24 KB)
  • Listening Sample task Multiple Choice (one answer) Recording Transcript (PDF 84 KB)

Listening Sample task Multiple Choice (more than one answer)

You will hear an extract from a Part 1 recording in which two people are discussing a guide to a library.

For each question, click on the correct answers.

  • Listening Sample task Multiple Choice (more than one answer)
  • Listening Sample task Multiple Choice (more than one answer) Answer Key (PDF 23 KB) 
  • Listening Sample task Multiple Choice (more than one answer) Recording Transcript (PDF 78 KB)

Listening Sample task Plan/Map/Diagram Labelling (Type A)

You will hear an extract from Part 2 of the test in which a tour guide describes different places in a US town.

For each question, click on the correct space in the table.

  • Listening Sample task Plan/Map/Diagram Labelling
  • Listening Sample task Plan/Map/Diagram Labelling Answer Key (PDF 21 KB)
  • Listening Sample task Plan/Map/Diagram Recording Transcript (PDF 75 KB)

Listening Sample task Note Completion

You will hear an extract from a Part 1 recording in which two people are discussing second-hand furniture.

For each question, write your answer in the gap.

  • Listening Sample task Note Completion
  • Listening Sample task Note Completion Answer Key (PDF 22 KB)
  • Listening Sample task Note Completion Recording Transcript (PDF 92 KB)

Listening Sample task Table Completion

You will hear an extract from a Part 4 recording in which a university lecturer is giving a talk about research into ‘learner persistence’.

  • Listening Sample task Table Completion
  • Listening Sample task Table Completion Answer Key (PDF 26 KB)
  • Listening Sample task Table Completion Recording Transcript (PDF 31 KB)

Listening Sample task Flow-chart Completion (selecting from a list of words or phrases)

You will hear an extract from a Part 3 recording in which two biology students are comparing their research on evidence of life on Earth and other planets.

For each question, click on the correct answer and move it into the gap.

  • Listening Sample task Flow-chart Completion
  • Listening Sample task Flow-chart Completion Answer Key (PDF 92 KB)
  • Listening Sample task Flow-chart Completion Recording Transcript (PDF 40 KB)

Listening Sample task Sentence Completion

You will hear an extract from a Part 3 recording in which two friends are discussing studying with the Open University.

  • Listening Sample task Sentence Completion
  • Listening Sample task Sentence Completion Answer Key (PDF 30 KB)
  • Listening Sample task Sentence Completion Recording Transcript (PDF 37 KB)

Listening Sample task Short Answer Questions

You will hear an extract from Part 2 of the test in which a representative from a clothing company is giving a talk to high school students.

  • Listening Sample task Short Answer Questions Answer Key (PDF 22 KB)
  • Listening Sample task Short Answer Questions Recording Transcript (PDF 78 KB)

The Academic Reading question types in IELTS on computer are the same as in the IELTS on paper test. A variety of tasks is used, including: multiple choice, identifying information (True/False/Not Given), identifying a writer’s views/claims (Yes/No/Not Given), matching information, matching headings, matching features, matching sentence endings, summary completion, note completion, table completion, flow-chart completion, diagram label completion, and short-answer questions.

  • Academic Reading Sample task Multiple Choice (one answer)

You will read an extract from a Part 1 text about older people in the workforce.

Click on the correct answer.

  • Academic Reading Sample task Multiple Choice (one answer) Answer Key (PDF 21 KB)
  • Academic Reading Sample task Multiple Choice (more than one answer)

Click on the correct answers.

  • Academic Reading Sample task Multiple Choice (more than one answer) Answer Key (PDF 21 KB)
  • Academic Reading Sample task Identifying Information (True/False/Not Given)

You will read an extract from a Part 1 text about the scientist Marie Curie.

  • Academic Reading Sample task Identifying Information (True/False/Not Given) Answer Key (PDF 17 KB)
  • Academic Reading Sample task Note Completion
  • Academic Reading Sample task Note Completion Answer Key (PDF 21 KB)
  • Academic Reading Sample task Table Completion

You will read an extract from a Part 1 text about dung beetles.

  • Academic Reading Sample task Table Completion Answer Key (PDF 17 KB)
  • Academic Reading Sample task Matching Features

You will read an extract from a Part 1 text about the development of rockets.

  • Academic Reading Sample task Matching Features Answer Key (PDF 107 KB)

Academic Reading Sample task Summary Completion (selecting words from the text)

You will read an extract from a Part 3 text about the ‘Plain English’ movement, which promotes the use of clear English.

  • Academic Reading Sample task Summary Completion (selecting words from the text)
  • Academic Reading Sample task Summary Completion (selecting words from the text) Answer Key (PDF 17 KB)

Academic Reading Sample task Summary Completion (selecting from a list of words or phrases)

You will read an extract from a Part 3 text about language.

  • Academic Reading Sample task Summary Completion (selecting from a list of words or phrases)
  • Academic Reading Sample task Summary Completion (selecting from a list of words or phrases) Answer Key (PDF 18 KB)
  • Academic Reading Sample task Sentence Completion

You will read a Part 2 text which discusses whether birds evolved from dinosaurs.

  • Academic Reading Sample task Sentence Completion Answer Key (PDF 17 KB)
  • Academic Reading Sample task Matching Sentence Endings

You will read an extract from a Part 3 text about the scientific community in London in the 1700s.

  • Academic Reading Sample task Matching Sentence Endings Answer Key (PDF 17 KB)

Academic Writing (60 minutes)

The Academic Writing question types in IELTS on computer are the same as in the IELTS on paper test.

In Part 1, you are presented with a graph, table, chart or diagram and are asked to describe, summarise or explain the information in your own words. You may be asked to describe and explain data, describe the stages of a process, how something works or describe an object or event. In Part 2, you are asked to write an essay in response to a point of view, argument or problem.

Academic Writing Sample tasks

  • Academic Writing Sample task Part 1
  • Academic Writing Sample task Part 2
  • Responses to Sample Part 2 with band scores and examiner comments (PDF 492 KB)

IELTS on computer practice experience

The practice experience will show you how everything will look on your computer ahead of the test day. As these are practice tests, they are not timed. There are some variations from the live tests: the timer, highlighting, and notes functions perform differently.

In this Listening test sample, you will hear four different recordings.

You will hear each recording ONCE only.

The test is in four parts, with 40 questions in total.

This Academic Reading sample will show you three texts to read.

The test is in three parts, with 40 questions in total.

This Academic Writing sample consists of two writing tasks.

2024. IELTS is jointly owned by the British Council; IDP IELTS; and Cambridge University Press & Assessment

Prepare for the MCAT® Exam


Preparing for the MCAT® exam takes time and dedication. Balancing your preparation with an already busy schedule can be a challenge. The AAMC has resources and practice products to help you no matter where you are in the preparation process.

  • MCAT Official Prep Hub
  • Register for the MCAT Exam
  • Get Your Test Scores

Learn more about the new MCAT® Prep enhancements for the upcoming testing year. 


Discover the complete list of foundational concepts, content categories, skills, and disciplines you will need to know for test day. We also offer the outline as a course in the MCAT Official Prep Hub, with links to free open-access resources covering the same content.


Learn about the free AAMC MCAT® Official Prep resources that the AAMC offers to help you study.


Get answers to your questions about MCAT® registration, scores, and more.

The AAMC Fee Assistance Program assists those who, without financial assistance, would be unable to take the MCAT exam or apply to medical schools that use the AMCAS. The benefits include discounted fees, free MCAT Official Prep products, and more.

Get a comprehensive overview of all MCAT Official Prep products, including pricing information and key features.

Learn about available resources to help you as you advise your students.

The AAMC offers bulk order purchasing for quantities of 10 or more MCAT Official Prep products.


Usability testing: your 101 introduction

A multi-chapter look at website usability testing, its benefits and methods, and how to get started with it.

Take your first usability testing step today.

Sign up for a free Hotjar account and make sure your site behaves as you intend it to.

Usability testing is all about getting real people to interact with a website, app, or other product you've built and observing their behavior and reactions to it. Whether you start small by watching session recordings or go all out and rent a lab with eye-tracking equipment, usability testing is a necessary step to make sure you build an effective, efficient, and enjoyable experience for your users.

We start this guide with an introduction to:

What usability testing is

Why usability testing matters

The benefits of usability testing

What usability testing is not

The following chapters cover different testing methods, the usability questions they can help you answer, how to run a usability testing session, and how to analyze and evaluate your testing results. Finally, we wrap up with 12 checklists and templates to help you run efficient usability sessions, plus the best usability testing tools.

What is usability testing?

Usability testing is a method of testing the functionality of a website, app, or other digital product by observing real users as they attempt to complete tasks on it. The users are usually observed by researchers working for a business during either an in-person or, more commonly, a remote usability testing session.

The goal of usability testing is to reveal areas of confusion and uncover pain points in the customer journey to highlight opportunities to improve the overall user experience. Usability evaluation seeks to gauge the practical functionality of the product, specifically how efficiently a user completes a pre-defined goal.

(Note: if all testing activities take place on a website, the terms 'usability testing' and 'website usability testing' can be used interchangeably—which is what we're going to do throughout the rest of this page.)

💡Did you know there are different types of usability tests?

Moderated usability testing: a facilitator introduces the test to participants, answers their queries, and asks follow-up questions

Unmoderated usability testing: the participants conduct the test without direct supervision, usually with a script

Remote usability testing: the test participants (and the researcher, in the case of moderated usability testing) conduct the test online or, more rarely, over the phone

In-person usability testing: the test participants and the researcher(s) are in the same location

Hotjar Engage lets you conduct remote, moderated usability testing with your own users or testers from our pool of 175,000+ participants.


What is the difference between usability testing and user testing?

While the terms are often used interchangeably, usability testing and user testing differ in scope. 

They are both, however, a part of UX testing—a more comprehensive approach aiming to analyze the user experience at every touchpoint, including users’ perception of a digital product or service’s performance, emotional response, perceived value, and satisfaction with UX design, as well as their overall impression of the company and brand.


User testing is a research method that uses real people to evaluate a product or service by observing their interactions and gathering feedback. 

By comparison with usability testing, user testing insights reveal:

What users think about when using your product or service

How they perceive your product or service

What their user needs are

Usability testing, on the other hand, takes a more focused approach, seeking to answer questions like:

Are there bugs or other errors impacting user flow?

Can users complete their task efficiently?

Do they understand how to navigate the site?

Why is usability testing important?

Usability testing is done by real-life users who are likely to reveal issues that people familiar with a website can no longer identify—very often, in-depth knowledge makes it easy for designers, marketers, and product owners to miss a website's usability issues.

Bringing in new users to test your site and/or observing how real people are already using it are effective ways to determine whether your visitors:

Understand how your site works and don't get 'lost' or confused

Can complete the main actions they need to

Don't encounter usability issues or bugs 

Have a functional and efficient experience

Don't run into any other usability problems

This type of user research is exceptionally important with new products or new design updates: without it, you may be stuck with a UX design process that your team members understand, but your target audience will not.

I employ usability testing when I’m looking to gut-check myself as a designer. Sometimes I run designs by my cross-functional squad or the design team and we all have conflicting feedback. The catch is, we’re not always our user, so it’s hard to sift through and agree on the best way forward.

Usability testing cuts through the noise and reveals if the usability of a proposed design meets basic expectations. It’s a great way to quickly de-risk engineering investment. 

I also like to iterate on designs as we receive more and more information, so usability testing is a great way to move fast and not break too many things in the process.

Top 8 benefits of website usability testing

Your website can benefit from usability testing no matter where it is in the development process, from prototyping all the way to the finished product. You can also continue to test the user experience as you iterate and improve your product over time.

Employing tests with real users helps you:

Validate your prototype . Bring in users in the early stages of the development process, and test whether they’re experiencing any issues before locking down a final product. Do they encounter any bugs ? Does your site or product behave as expected when users interact with it? Testing on a prototype first can validate your concept and help you make plans for future functionality before you spend a lot of money to build out a complete website.

Confirm your product meets expectations.  Once your product is completed, test usability again to make sure everything works the way it was intended. How's the ease of use? Is something still missing in the interface?

Identify issues with complex flows . If there are functions on your site that need users to follow multiple steps (for example an ecommerce checkout process ), run usability testing to make sure these processes are as straightforward and intuitive as possible.

Complement and illuminate other data points . Usability testing can often provide the why behind data points accumulated from other methods: your funnel analysis might show you that visitors drop off your site, and conducting usability testing can highlight the underlying issues on pages with a high churn rate.

Catch minor errors . In addition to large-scale usability issues, usability testing can help identify smaller errors. A new set of eyes is more likely to pick up on broken links, site errors, and grammatical issues that have been inadvertently glossed over. Usability testing can also validate fixes made after identifying those errors.

💡Pro tip: enable console tracking in Hotjar and filter session recordings by ‘Error’ to watch sessions of users who ran into a JavaScript error.

Open the console from the recording player to understand where the issue comes from, fix the issue, and run a usability test to validate the fix.

Develop empathy.  It's not unusual for the people working on a project to develop tunnel vision around their product and forget they have access to knowledge that their typical website visitor may not have. Usability testing is a good way to develop some empathy for the real people who are using and will be using your site, and look at things from their perspective.

Get buy-in for change.  It's one thing to know about a website issue; it's another to see users actually struggle with it. When it's evident that something is being misunderstood by users, it's natural to want to make it right. Watching short clips of key usability testing findings can be a very persuasive way to lobby for change within your organization.

Ultimately provide a better user experience.  Great customer experience  is essential for a successful product. Usability testing can help you identify issues that wouldn't be uncovered otherwise and create the most user-friendly product possible.

What usability testing is not

There are several UX tools and user testing tools that help improve the customer experience , but don't really qualify as 'usability testing tools' because they don't explicitly evaluate the functionality of a product: 

A/B testing: A/B testing is a way to experiment with multiple versions of a web page to see which is most effective. While it can be used to test changes based on user testing, it is not a usability testing tool.

Focus groups: focus groups are a type of user testing in which researchers gather a group of people together to discuss a specific topic. Usually, the goal is to learn people's opinions about a product or service, not to test how they use it.

Surveys: surveys gauge user experience, but because they do not allow you to actually observe visitors on the site in action, they are not considered usability testing—though they may be used in conjunction with it via a website usability survey.

Heatmaps: heatmaps offer a visual representation of how users interact with a page by showing its hottest (most engaged with) and coolest (least engaged with) parts. Click, scroll, and move maps let you see how users in aggregate engage with a website, but they are still technically not usability testing.

User acceptance testing: this is often the last phase of the software testing process, in which testers go through a calibrated set of steps to ensure the software works correctly. It is a technical QA (quality assurance) test, not a way to evaluate whether the product is user-friendly and efficient.

In-house testing: people in your company probably test software all the time, but this is not usability testing. Employees are inherently biased, which prevents them from giving the kind of honest results that real users can.

How to get started

Your website's user interface should be straightforward and easy to use, and usability testing is an essential step in getting there. But to get the most actionable results, testing must be done correctly—you will need to reproduce normal-use conditions as closely as possible.

One of the easiest ways to get started with usability testing is through  session recordings . Observing how visitors navigate your website can help you create the best user experience possible. 

Frequently asked questions about usability testing

What is website usability testing?

Website usability testing is the practice of evaluating the functionality of your website by observing visitors’ actions and behavior as they complete specific tasks.  Website usability testing lets you experience your site from the visitors’ perspective so you can identify opportunities to improve the user experience.

What is the purpose of usability testing?

Your in-depth knowledge of, and familiarity with, your website might prevent you from seeing its design or usability issues. When you run a website usability test, users can identify issues with your site that you may have otherwise missed—for example, website bugs, missing or broken elements, or an ineffective call to action (CTA).

What are some types of website usability tests?

The type of website usability test you need will be based on your available resources, target audience, and goals. The main types of usability tests are:

Remote or in-person

Moderated or unmoderated

Scripted or unscripted

For more detailed information about the types of usability tests and to determine which one you should try on your site, visit the  usability testing methods  chapter of this guide.

How do you run a usability test on a website?

Your goals and objectives will determine both the steps you’ll need to take to run a test on your website and the  usability testing questions  you’ll ask.

Having a plan before you start will help you organize the data and results you collect in an understandable way so you can improve the user experience. These  12 usability testing checklists and templates  are a good place to start.

A 5-step process for moderated usability testing could be:

Plan the session: decide on the nature of the study and logistical details like the number of participants and moderators, as well as the recording setup

Recruit participants: from your user base or via a tester recruitment tool

Design the task

Run the session: don’t forget to record it and take notes

Analyze the insights

Tip: if you want to get started with website usability testing right now, with minimal set-up, we recommend giving Hotjar Engage a try:

Bring your own users into the platform or recruit from our pool of 175,000+ participants

Involve more stakeholders by adding up to 4 moderators and 10 spectators from your team during the session

Focus on gathering insights from user feedback while the platform automatically records and transcribes the session

11 Effective Tricks to Lower Your Blood Pressure Instantly


The best ways to lower blood pressure rely on long-term lifestyle changes, like adopting a healthy diet and quitting smoking, supported by blood pressure medication. This can help you to achieve a healthy blood pressure, which for most adults is 120/80 millimeters of mercury (mmHg) or lower.

A high blood pressure diagnosis begins with a systolic (first number) reading of 130 or more, or a diastolic (second number) reading of 80 or more. Unfortunately, there is no quick way to lower blood pressure without medical intervention and careful monitoring.

This article presents 11 tricks for lowering blood pressure and the long-term decisions you can make to integrate these steps into your lifestyle. They include tips on reducing sodium intake, losing weight, reducing stress, and other ways to help you reach your blood pressure goals.


Change Your Diet

People with high blood pressure are often told to eat less salt. Reducing the sodium in your diet can be difficult because many foods that you don't think of as salty actually contain a lot of sodium. You'll have to adjust your diet and monitor food labels; a dietitian can help with this.

Follow a Heart-Healthy Diet

According to one systematic review, the Dietary Approaches to Stop Hypertension (DASH) diet is the most effective dietary approach to lowering blood pressure. This diet was created and funded by the National Institute of Health’s National Heart, Lung, and Blood Institute (NHLBI), and involves limiting sodium to 2,300 milligrams a day; limiting fried, sugary, fatty, and processed foods; and eating more foods that are rich in magnesium, calcium, and potassium.

Calcium-Rich Foods

Foods high in calcium include:

  • Dairy products (milk, cheese, yogurt)
  • Leafy greens (kale, spinach)
  • Nuts and seeds
  • Canned salmon

Potassium-Rich Foods

Potassium is a key nutrient, with food sources that include:

  • Fruits (bananas, oranges, cantaloupe)
  • Vegetables (acorn squash, sweet potato, avocado)
  • Legumes (peas and beans)
  • Dairy (yogurt, milk)

Magnesium-Rich Foods

You can add magnesium to the diet by eating:

  • Peanut butter and nuts (almonds, cashews)
  • Meats (chicken, ground beef)
  • Vegetables (avocado, broccoli, carrots, potato)
  • Legumes and whole grains (rice, black beans)

There's some evidence that foods high in flavonols , including berries and apples, also can help to lower your blood pressure. Other studies support the role of dietary fiber in reducing blood pressure, with food sources including:

  • Fruits (cherries, pectin-rich citrus foods, and berries)
  • Whole grains (barley, wheat, oats)
  • Resistant starches (rice, beans and other legumes, potatoes)

If you are overweight and have high blood pressure, losing weight could help normalize your blood pressure. According to the CDC, this is because with less body fat, your heart will undergo less stress pumping blood throughout the body.

There is strong evidence to support regular exercise and physical activity as a way to lower blood pressure.

The American Heart Association (AHA) recommends adults get 150 minutes of moderate-intensity aerobic physical activity or 75 minutes of vigorous aerobic physical activity weekly. Two days of muscle-strengthening exercise per week are also recommended.

Some ways to get this exercise include:

  • Water aerobics
  • Walking or hiking
  • Barre, Pilates, or yoga classes
  • Weight lifting
  • Resistance band exercises

Smoking increases your risk of high blood pressure, heart attack , and stroke. Quitting smoking could make a big difference in your blood pressure.

Even switching to a less harmful alternative, like e-cigarettes, could benefit blood pressure. One study found that smokers who reduced or quit smoking by switching to e-cigarettes effectively lowered their blood pressure long-term.

Alcohol can raise your blood pressure. Try to reduce your alcohol consumption, especially if you're already at risk or have high blood pressure. The CDC recommends that men drink no more than two alcoholic drinks daily, and women no more than one.

Unlike alcohol, which can raise your blood pressure long-term, caffeine increases your blood pressure temporarily. Your blood pressure can be elevated for up to three hours after drinking coffee.

In order to get the most accurate blood pressure reading, avoid drinking coffee (or any caffeinated beverages) three hours before measurement. The good news is you don't have to cut out coffee entirely.

A Word From Verywell

The lifestyle modifications listed in this article—especially a healthy diet, weight loss, regular exercise, not smoking, and limiting alcohol—will not only help to manage your blood pressure but can also help to prevent type 2 diabetes, a host of cardiovascular disorders, and cancer.

Dark chocolate may help to lower blood pressure because of its flavonol content, which can relax blood vessels through vasodilation and improve blood flow.

There is still limited research evidence of dark chocolate's benefits in reducing blood pressure in humans, as the effect may not have clinical significance. However, a 2022 review of 31 studies found dark chocolate consumption may be better than cocoa drinks in delivering the amount of flavonol (notably epicatechins) that may reduce blood pressure.

Certain dietary supplements may help to lower blood pressure, though both the American Heart Association (AHA) and American College of Cardiology (ACC) stress the importance of lasting diet and nutrition changes rather than supplements in treating hypertension.

Studies have demonstrated small improvements in blood pressure that occur in people taking calcium , potassium , and magnesium supplements, but research results can be mixed. One review of studies on vitamin D, for example, found evidence that people with adequate vitamin D intake (deficiencies are common) had a lower risk of hypertension, but it, too, called for more study.

Other supplements and alternative medicine options that may lower blood pressure include:

  • Vitamin C (which may also reduce the risk of stroke)
  • Coenzyme Q10
  • Omega-3 fatty acids
  • Folic acid (vitamin B9)

Keep in mind that any supplements you take can have side effects or contribute to drug interactions, so discuss their use with a healthcare provider.

Getting poor-quality sleep can increase your risk of high blood pressure. The American Heart Association (AHA) recommends six to eight hours of sleep per night to avoid cardiovascular (heart) issues.

If you have trouble getting enough quality sleep, consider talking to your healthcare provider. They may recommend a sleep study to see if there are other underlying causes, like sleep apnea , or lifestyle changes to encourage better sleep habits.

A stressful situation can raise your blood pressure temporarily, and chronic stress can raise your blood pressure long term. This is why stress management is one of the best ways to naturally lower blood pressure. However, "stressing less" is easier said than done.

The following stress-reduction techniques may help you lower your blood pressure:

  • Deep breathing techniques
  • Mindfulness meditation and other mindfulness-based techniques
  • Using a planner, calendar, or to-do list for better time management
  • Therapy or counseling
  • Crafting, walking, or another hobby or meaningful occupation

Take Medication 

Lifestyle steps typically are supported by medication to reduce blood pressure. If you have persistent high blood pressure, your healthcare provider may recommend a medication to lower your blood pressure.

The five types of medication used to lower blood pressure include:

  • Thiazide diuretics
  • Calcium channel blockers
  • Angiotensin-converting enzyme (ACE) inhibitors
  • Angiotensin receptor blockers (ARBs)
  • Beta-blockers

It is essential to follow your healthcare provider's instructions with any medication. Often, you have to take blood pressure medication at the same time each day for the best results.

Can Drinking Water Lower Blood Pressure?

Drinking enough water provides us with the optimal amount of fluids for our heart to pump without stressing it. Too little water can cause your blood pressure to drop or to rise.

Being dehydrated can cause low blood pressure or orthostatic hypotension (a drop in blood pressure when changing position) due to low blood volume. Chronic dehydration, on the other hand, can lead to high blood pressure because your body reacts by constricting vessels.

Drinking a glass of water likely isn't going to immediately affect your blood pressure. However, maintaining optimal hydration can help manage your blood pressure in the long term.

Additional Drinks That Lower Blood Pressure Quickly

Some beverages also can help to lower blood pressure because they contain lycopene, potassium, or other elements associated with a health benefit for hypertension. These drinks include:

  • Tomato juice
  • Grapefruit juice
  • Green tea (its catechins may help lower blood pressure)

Keep in mind that you should not use grapefruit juice with a number of other medications , including calcium channel blockers used to treat blood pressure and statin drugs to lower cholesterol.

When to Seek Medical Help for a Hypertensive Crisis

If a person is in hypertensive crisis (a dangerous, sudden spike in blood pressure in which the systolic pressure is 180 mmHg or higher, and/or the diastolic pressure is 120 mmHg or higher), they require immediate medical attention.

Medical treatment may involve delivering hypertensive drugs intravenously (into the vein).
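Taken together, the numeric thresholds quoted in this article (a healthy target of 120/80 mmHg or lower, a high blood pressure diagnosis from 130 systolic or 80 diastolic, and a hypertensive crisis from 180 and/or 120) can be collected into one small sketch. This is purely illustrative, not a diagnostic tool; the function name and category labels are our own, and the boundary reading of exactly 120/80 is treated as healthy, following the article's wording.

```python
def bp_category(systolic: int, diastolic: int) -> str:
    """Classify a blood pressure reading (mmHg) using the thresholds
    quoted in this article. Illustrative only; not medical advice."""
    if systolic >= 180 or diastolic >= 120:
        return "hypertensive crisis"  # seek immediate medical attention
    if systolic <= 120 and diastolic <= 80:
        return "normal"               # "120/80 mmHg or lower" per the article
    if systolic >= 130 or diastolic >= 80:
        return "high"                 # meets the diagnosis threshold
    return "elevated"                 # between the two thresholds

print(bp_category(118, 76))   # normal
print(bp_category(135, 85))   # high
print(bp_category(185, 95))   # hypertensive crisis
```

Note the check order: the crisis test runs first because its thresholds overlap the "high" range, and the "or lower" healthy target is given precedence at the 80 mmHg diastolic boundary.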

There are many ways to lower blood pressure, but none is quick. Changes in diet, weight loss, exercise, medication, stress reduction, quitting smoking and alcohol, and improving sleep quality can all lower your blood pressure long term.

In the short term, be sure you are measuring your blood pressure correctly. Incorrect positioning, a full bladder, or physical activity immediately before a measurement can give an artificially high reading.

Frequently Asked Questions

There is no way to safely and rapidly lower your blood pressure on your own. In an emergency medical setting, hypertensive drugs can be delivered intravenously to reduce blood pressure quickly. If you get a high blood pressure reading at home, try retaking it after resting for five minutes and ensure you are using the correct positioning.

Blood pressure is highest in the morning because it follows your body's circadian rhythm. Your blood pressure should be lower in the afternoon and evening.

Some ways to lower blood pressure without medication include mindful meditation, time and stress management techniques, eating less salt, regular exercise, the DASH diet, losing excess weight, quitting smoking, and reducing alcohol consumption. It is also important to ensure you are measuring your blood pressure correctly to avoid artificially high readings.


By Sarah Bence, OTR/L Bence is an occupational therapist with a range of work experience in mental healthcare settings. She is living with celiac disease and endometriosis.

Generative AI is here: How tools like ChatGPT could change your business

In the spirit of this article’s topic, we asked ChatGPT—a generative AI language model that can create original content in response to a user prompt—how it would set the context. Our prompt to the system:

Write the opening paragraph for an article about how transformative generative AI will be for business, in the style of McKinsey & Company.

ChatGPT’s response:

The rise of generative AI has the potential to be a major game-changer for businesses. This technology, which allows for the creation of original content by learning from existing data, has the power to revolutionize industries and transform the way companies operate. By enabling the automation of many tasks that were previously done by humans, generative AI has the potential to increase efficiency and productivity, reduce costs, and open up new opportunities for growth. As such, businesses that are able to effectively leverage the technology are likely to gain a significant competitive advantage.

Not perfect but overwhelmingly impressive.

For us and many executives we’ve spoken to recently, entering one prompt into ChatGPT, developed by OpenAI, was all it took to see the power of generative AI. In the first five days of its release, more than a million users logged into the platform to experience it for themselves. OpenAI’s servers can barely keep up with demand, regularly flashing a message that users need to return later when server capacity frees up.

Products like ChatGPT and GitHub Copilot, as well as the underlying AI models that power such systems (Stable Diffusion, DALL·E 2, GPT-3, to name a few), are taking technology into realms once thought to be reserved for humans. With generative AI, computers can now arguably exhibit creativity. They can produce original content in response to queries, drawing from data they’ve ingested and interactions with users. They can develop blogs, sketch package designs, write computer code , or even theorize on the reason for a production error.

This latest class of generative AI systems has emerged from foundation models—large-scale, deep learning models trained on massive, broad, unstructured data sets (such as text and images) that cover many topics. Developers can adapt the models for a wide range of use cases, with little fine-tuning required for each task. For example, GPT-3.5, the foundation model underlying ChatGPT, has also been used to translate text, and scientists used an earlier version of GPT to create novel protein sequences. In this way, the power of these capabilities is accessible to all, including developers who lack specialized machine learning skills and, in some cases, people with no technical background. Using foundation models can also reduce the time for developing new AI applications to a level rarely possible before.
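One common way developers adapt a foundation model "with little fine-tuning" is in-context (few-shot) prompting: instead of retraining weights, task examples are placed directly in the prompt. A minimal sketch of building such a prompt; the sentiment task and example texts are invented for illustration:

```python
# Few-shot adaptation: prepend labeled examples to the prompt so the model
# can infer the task pattern, rather than fine-tuning model weights.
def build_few_shot_prompt(examples, query):
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")  # model completes this line
    return "\n\n".join(lines)

examples = [
    ("The product arrived quickly and works well.", "positive"),
    ("Broke after two days, very disappointed.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Decent value for the price.")
print(prompt)
```

The assembled string would then be sent to a text-generation model, which continues from the trailing "Sentiment:" marker.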

Generative AI promises to make 2023 one of the most exciting years yet for AI. But as with every new technology, business leaders must proceed with eyes wide open, because the technology today presents many ethical and practical challenges.


Pushing further into human realms

More than a decade ago, we wrote an article in which we sorted economic activity into three buckets—production, transactions, and interactions—and examined the extent to which technology had made inroads into each. Machines and factory technologies transformed production by augmenting and automating human labor during the Industrial Revolution more than 100 years ago, and AI has further amped up efficiencies on the manufacturing floor. Transactions have undergone many technological iterations over approximately the same time frame, including most recently digitization and, frequently, automation.

Until recently, interaction labor, such as customer service, has experienced the least mature technological interventions. Generative AI is set to change that by undertaking interaction labor in a way that approximates human behavior closely and, in some cases, imperceptibly. That’s not to say these tools are intended to work without human input and intervention. In many cases, they are most powerful in combination with humans, augmenting their capabilities and enabling them to get work done faster and better.

Generative AI is also pushing technology into a realm thought to be unique to the human mind: creativity. The technology leverages its inputs (the data it has ingested and a user prompt) and experiences (interactions with users that help it “learn” new information and what’s correct/incorrect) to generate entirely new content. While dinner table debates will rage for the foreseeable future on whether this truly equates to creativity, most would likely agree that these tools stand to unleash more creativity into the world by prompting humans with starter ideas.

Business uses abound

These models are in the early days of scaling, but we’ve started seeing the first batch of applications across functions, including the following (exhibit):

  • Marketing and sales —crafting personalized marketing, social media, and technical sales content (including text, images, and video); creating assistants aligned to specific businesses, such as retail
  • Operations —generating task lists for efficient execution of a given activity
  • IT/engineering —writing, documenting, and reviewing code
  • Risk and legal —answering complex questions, pulling from vast amounts of legal documentation, and drafting and reviewing annual reports
  • R&D —accelerating drug discovery through better understanding of diseases and discovery of chemical structures

Excitement is warranted, but caution is required

The awe-inspiring results of generative AI might make it seem like a ready-set-go technology, but that’s not the case. Its nascency requires executives to proceed with an abundance of caution. Technologists are still working out the kinks, and plenty of practical and ethical issues remain open. Here are just a few:

  • Like humans, generative AI can be wrong. ChatGPT, for example, sometimes “hallucinates,” meaning it confidently generates entirely inaccurate information in response to a user question and has no built-in mechanism to signal this to the user or challenge the result. For example, we have observed instances when the tool was asked to create a short bio and it generated several incorrect facts for the person, such as listing the wrong educational institution.
  • Filters are not yet effective enough to catch inappropriate content. Users of an image-generating application that can create avatars from a person’s photo received avatar options from the system that portrayed them nude, even though they had input appropriate photos of themselves.
  • Systemic biases still need to be addressed. These systems draw from massive amounts of data that might include unwanted biases .
  • Individual company norms and values aren’t reflected. Companies will need to adapt the technology to incorporate their culture and values, an exercise that requires technical expertise and computing power beyond what some companies may have ready access to.
  • Intellectual-property questions are up for debate. When a generative AI model brings forward a new product design or idea based on a user prompt, who can lay claim to it? What happens when it plagiarizes a source based on its training data?


Initial steps for executives

In companies considering generative AI, executives will want to quickly identify the parts of their business where the technology could have the most immediate impact and implement a mechanism to monitor it, given that it is expected to evolve quickly. A no-regrets move is to assemble a cross-functional team, including data science practitioners, legal experts, and functional business leaders, to think through basic questions, such as these:

  • Where might the technology aid or disrupt our industry and/or our business’s value chain?
  • What are our policies and posture? For example, are we watchfully waiting to see how the technology evolves, investing in pilots, or looking to build a new business? Should the posture vary across areas of the business?
  • Given the limitations of the models, what are our criteria for selecting use cases to target?
  • How do we pursue building an effective ecosystem of partners, communities, and platforms?
  • What legal and community standards should these models adhere to so we can maintain trust with our stakeholders?

Meanwhile, it’s essential to encourage thoughtful innovation across the organization, standing up guardrails along with sandboxed environments for experimentation, many of which are readily available via the cloud, with more likely on the horizon.

The innovations that generative AI could ignite for businesses of all sizes and levels of technological proficiency are truly exciting. However, executives will want to remain acutely aware of the risks that exist at this early stage of the technology’s development.

Michael Chui is a partner at the McKinsey Global Institute and a partner in McKinsey’s Bay Area office, where Roger Roberts is a partner and Lareina Yee is a senior partner.


Use mail merge to send bulk email messages

To be able to send bulk email via mail merge, you must already have installed a MAPI-compatible email program such as Outlook or Gmail. 

The following process assumes that you already have the message you intend to send created and open in Microsoft Word. 

Prepare your main document

Go to Mailings > Start Mail Merge > E-mail Messages .


Set up your mailing list

The mailing list is your data source. For more information, see Data sources you can use for a mail merge .

If you don’t have a mailing list, you will be able to create one during mail merge. 

If you are using an Excel spreadsheet as your data source, format the ZIP/postal codes as text to avoid auto-deletion of any leading zeroes. For more information, see Format mail merge numbers, dates, and other values in Excel . 
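The leading-zero problem this note warns about is easy to reproduce outside of Excel: any tool that interprets a ZIP code as a number discards its leading zero. A minimal sketch; the sample ZIP code is illustrative:

```python
# Parsing a ZIP code as a number silently drops the leading zero,
# which is why the column should be formatted as text.
raw_zip = "02134"

zip_as_number = int(raw_zip)   # numeric interpretation, as a spreadsheet might apply
zip_as_text = raw_zip          # text formatting preserves the code as typed

print(zip_as_number)  # 2134 -- leading zero lost
print(zip_as_text)    # 02134
```

The same caution applies to any identifier that merely looks numeric, such as account numbers or phone extensions.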

If you want to use your Outlook contacts as a list source, make sure Outlook is your default email program and is the same version as Word.

Link your mailing list to your email message

Make sure your data source has a column for email addresses and that there is an email address for every intended recipient.

Go to Mailings > Select Recipients .

Choose a data source. For more information, see Data sources you can use for a mail merge .

Choose File > Save .

To learn more about editing, sorting, or filtering your mailing list, see Mail merge: Edit recipients .

Add personalized content to the email message

Go to Mailings > Greeting Line .

Choose a format. 

Choose OK to insert the merge field.

You can add other fields from your data source to your email message. For more information about this, see Insert mail merge fields .

Note:  After inserting fields, you will need to format your email manually. 

To learn how to fix any missing part of your addresses or other fields, see Mail merge: Match Fields .

To change the font, size, or spacing of the merged content, select the merge field name and make the needed changes.
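Conceptually, each merge field is a placeholder that is filled from the matching column of your data source, once per recipient. A rough sketch of that substitution in Python; the field names and recipient records are invented for illustration:

```python
# Merge-field substitution in miniature: one message template, one
# personalized message per record in the data source.
from string import Template

template = Template("Dear $first_name,\nYour order ships to $city.")

recipients = [
    {"first_name": "Ada", "city": "London"},
    {"first_name": "Grace", "city": "Arlington"},
]

for record in recipients:
    message = template.substitute(record)  # fill placeholders from this record
    print(message)
    print("---")
```

Word does the equivalent at merge time, drawing field values from the recipient list you linked earlier.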

Preview and finish


Choose Finish & Merge > Send E-mail Messages .


In the To box, choose the email address column or field from your mailing list.

Note:  Word sends an individual message to each email address. You cannot CC or BCC other recipients. You cannot add attachments, but you can include links.

In the Subject line box, type a subject line for the message.

In the Mail format box, choose HTML (the default setting) or Plain text to send the document as the body of the email message.

Under Send records , select one of the following:

  • All records (default).
  • Current record to send only the record viewable on your screen.
  • From and To to send only a range of records.

Choose OK to run mail merge.

Save the personalized message (optional)

Go to File > Save . When you save the main document, you also save its connection to the data source. To reuse, open the document and answer Yes when prompted to keep the connection to the data source.


Create a main document in Word

Go to Mailings > Start Mail Merge > Email Messages .


In Word, create the email message that you intend to send.

The mailing list is your data source. For more info, see Data sources you can use for a mail merge .

If you don’t have a mailing list, you can create one during mail merge.

If you're using an Excel spreadsheet, format the ZIP/postal codes column as text to avoid auto deletion of leading zeroes. For more information about this, see Format mail merge numbers, dates, and other values in Excel .

If you want to use your Outlook contacts, make sure Outlook is your default email program and is the same version as Word.

Make sure your data source has a column for email addresses and that there's an email address for each intended recipient. 

Choose a data source. For more info, see Data sources you can use for a mail merge .

Learn how to edit, sort, or filter your mailing list here:  Mail merge: Edit recipients .

Add and format merge fields

Go to Mailings > Insert Merge Field , and then choose the fields to add.


In your document, select Drag fields into this box or type text, and select the text to remove it.

Add and format the fields you want to be included in the email message, then select OK .

Preview and send

Go to Mailings > Preview Results to see how the email messages look.


To scroll through each email message, use the left and right arrow buttons on the Mailings tab. 

Select Preview Results again to add or remove merge fields.

When ready, go to Mailings > Finish & Merge > Merge to E-Mail .

Merge to E-Mail is unavailable if you have not selected your default email program.

Choose the To merge field, the subject, and whether to send as text, HTML, or as an attachment. When you send as an attachment, the email has no body text; instead, the message is sent as an attached document. 

Select Mail Merge To Outbox.

