
Overview of the Problem-Solving Mental Process

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Rachel Goldman, PhD, FTOS, is a licensed psychologist, clinical assistant professor, speaker, and wellness expert specializing in eating behaviors, stress management, and health behavior change.

  • Identify the Problem
  • Define the Problem
  • Form a Strategy
  • Organize Information
  • Allocate Resources
  • Monitor Progress
  • Evaluate the Results

Problem-solving is a mental process that involves discovering, analyzing, and solving problems. The ultimate goal of problem-solving is to overcome obstacles and find a solution that best resolves the issue.

The best strategy for solving a problem depends largely on the unique situation. In some cases, people are better off learning everything they can about the issue and then using factual knowledge to come up with a solution. In other instances, creativity and insight are the best options.

It is not necessary to follow problem-solving steps sequentially; it is common to skip steps or even go back through them multiple times until the desired solution is reached.

In order to correctly solve a problem, it is often important to follow a series of steps. Researchers sometimes refer to this as the problem-solving cycle. While this cycle is portrayed sequentially, people rarely follow a rigid series of steps to find a solution.

The steps below, which include developing strategies and organizing knowledge, represent one version of this cycle.

1. Identifying the Problem

While it may seem like an obvious step, identifying the problem is not always as simple as it sounds. In some cases, people might mistakenly identify the wrong source of a problem, which will make attempts to solve it inefficient or even useless.

Some strategies that you might use to figure out the source of a problem include:

  • Asking questions about the problem
  • Breaking the problem down into smaller pieces
  • Looking at the problem from different perspectives
  • Conducting research to figure out what relationships exist between different variables

2. Defining the Problem

After the problem has been identified, it is important to fully define the problem so that it can be solved. You can define a problem by operationally defining each aspect of it and setting goals for which aspects of the problem you will address.

At this point, you should focus on figuring out which aspects of the problem are facts and which are opinions. State the problem clearly and identify the scope of the solution.

3. Forming a Strategy

After the problem has been defined, it is time to start brainstorming potential solutions. This step usually involves generating as many ideas as possible without judging their quality. Once several possibilities have been generated, they can be evaluated and narrowed down.

The next step is to develop a strategy to solve the problem. The approach used will vary depending upon the situation and the individual's unique preferences. Common problem-solving strategies include heuristics and algorithms.

  • Heuristics are mental shortcuts that are often based on solutions that have worked in the past. They can work well if the problem is similar to something you have encountered before and are often the best choice if you need a fast solution.
  • Algorithms are step-by-step strategies that are guaranteed to produce a correct result. While this approach is great for accuracy, it can also consume time and resources.

Heuristics are often best used when time is of the essence, while algorithms are a better choice when a decision needs to be as accurate as possible.
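
The trade-off can be sketched in code. In this hypothetical example (the household and the "usual places" are invented purely for illustration), an exhaustive search plays the role of the algorithm, while checking habitual spots first plays the role of the heuristic:

```python
def algorithmic_search(spots):
    """Algorithm: check every spot in a fixed order. Guaranteed to
    find the keys if they are anywhere, at the cost of time."""
    for place, contents in spots.items():
        if "keys" in contents:
            return place
    return None

def heuristic_search(spots, usual_places):
    """Heuristic: check the places that have worked before. Fast when
    the hunch is right; fall back to the exhaustive search otherwise."""
    for place in usual_places:
        if "keys" in spots.get(place, []):
            return place
    return algorithmic_search(spots)

# Hypothetical household: where are the keys?
house = {
    "couch": ["remote"],
    "desk": ["pens", "charger"],
    "coat pocket": ["keys", "receipt"],
    "car": [],
}
```

When the hunch is right, the heuristic finds the keys in one step; when it is wrong, the fallback to the exhaustive search preserves the algorithm's guarantee of a correct result.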

4. Organizing Information

Before coming up with a solution, you need to first organize the available information. What do you know about the problem? What do you not know? The more information that is available, the better prepared you will be to come up with an accurate solution.

When approaching a problem, it is important to make sure that you have all the data you need. Making a decision without adequate information can lead to biased or inaccurate results.

5. Allocating Resources

Of course, we don't always have unlimited money, time, and other resources to solve a problem. Before you begin to solve a problem, you need to determine how high priority it is.

If it is an important problem, it is probably worth allocating more resources to solving it. If, however, it is a fairly unimportant problem, then you do not want to spend too much of your available resources on coming up with a solution.

At this stage, it is important to consider all of the factors that might affect the problem at hand. This includes looking at the available resources, deadlines that need to be met, and any possible risks involved in each solution. After careful evaluation, a decision can be made about which solution to pursue.

6. Monitoring Progress

After selecting a problem-solving strategy, it is time to put the plan into action and see if it works. This step might involve trying out different solutions to see which one is the most effective.

It is also important to monitor the situation after implementing a solution to ensure that the problem has been solved and that no new problems have arisen as a result of the proposed solution.

Effective problem-solvers tend to monitor their progress as they work towards a solution. If they are not making good progress toward reaching their goal, they will reevaluate their approach or look for new strategies.

7. Evaluating the Results

After a solution has been reached, it is important to evaluate the results to determine if it is the best possible solution to the problem. This evaluation might be immediate, such as checking the results of a math problem to ensure the answer is correct, or it can be delayed, such as evaluating the success of a therapy program after several months of treatment.

Once a problem has been solved, it is important to take some time to reflect on the process that was used and evaluate the results. This will help you to improve your problem-solving skills and become more efficient at solving future problems.

A Word From Verywell

It is important to remember that there are many different problem-solving processes with different steps, and this is just one example. Problem-solving in real-world situations requires a great deal of resourcefulness, flexibility, resilience, and continuous interaction with the environment.

You can become a better problem solver by:

  • Practicing brainstorming and coming up with multiple potential solutions to problems
  • Being open-minded and considering all possible options before making a decision
  • Breaking down problems into smaller, more manageable pieces
  • Asking for help when needed
  • Researching different problem-solving techniques and trying out new ones
  • Learning from mistakes and using them as opportunities to grow

It's important to communicate openly and honestly with your partner about what's going on. Try to see things from their perspective as well as your own. Work together to find a resolution that works for both of you. Be willing to compromise and accept that there may not be a perfect solution.

Take breaks if things are getting too heated, and come back to the problem when you feel calm and collected. Don't try to fix every problem on your own—consider asking a therapist or counselor for help and insight.

If you've tried everything and there doesn't seem to be a way to fix the problem, you may have to learn to accept it. This can be difficult, but try to focus on the positive aspects of your life and remember that every situation is temporary. Don't dwell on what's going wrong—instead, think about what's going right. Find support by talking to friends or family. Seek professional help if you're having trouble coping.

By Kendra Cherry, MSEd

What is Problem Solving? (Steps, Techniques, Examples)

By Status.net Editorial Team on May 7, 2023 — 5 minutes to read

What Is Problem Solving?

Definition and Importance

Problem solving is the process of finding solutions to obstacles or challenges you encounter in your life or work. It is a crucial skill that allows you to tackle complex situations, adapt to changes, and overcome difficulties with ease. Mastering this ability will contribute to both your personal and professional growth, leading to more successful outcomes and better decision-making.

Problem-Solving Steps

The problem-solving process typically includes the following steps:

  • Identify the issue: Recognize the problem that needs to be solved.
  • Analyze the situation: Examine the issue in depth, gather all relevant information, and consider any limitations or constraints that may be present.
  • Generate potential solutions: Brainstorm a list of possible solutions to the issue, without immediately judging or evaluating them.
  • Evaluate options: Weigh the pros and cons of each potential solution, considering factors such as feasibility, effectiveness, and potential risks.
  • Select the best solution: Choose the option that best addresses the problem and aligns with your objectives.
  • Implement the solution: Put the selected solution into action and monitor the results to ensure it resolves the issue.
  • Review and learn: Reflect on the problem-solving process, identify any improvements or adjustments that can be made, and apply these learnings to future situations.

Defining the Problem

To start tackling a problem, first, identify and understand it. Analyzing the issue thoroughly helps to clarify its scope and nature. Ask questions to gather information and consider the problem from various angles. Some strategies to define the problem include:

  • Brainstorming with others
  • Asking the 5 Ws and 1 H (Who, What, When, Where, Why, and How)
  • Analyzing cause and effect
  • Creating a problem statement

Generating Solutions

Once the problem is clearly understood, brainstorm possible solutions. Think creatively and keep an open mind, as well as considering lessons from past experiences. Consider:

  • Creating a list of potential ideas to solve the problem
  • Grouping and categorizing similar solutions
  • Prioritizing potential solutions based on feasibility, cost, and resources required
  • Involving others to share diverse opinions and inputs

Evaluating and Selecting Solutions

Evaluate each potential solution, weighing its pros and cons. To facilitate decision-making, use techniques such as:

  • SWOT analysis (Strengths, Weaknesses, Opportunities, Threats)
  • Decision-making matrices
  • Pros and cons lists
  • Risk assessments

After evaluating, choose the most suitable solution based on effectiveness, cost, and time constraints.
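
A decision-making matrix like the one mentioned above can be reduced to a weighted sum. The following sketch uses invented solutions, criteria, and weights purely for illustration:

```python
def best_solution(scores, weights):
    """Return (winner, totals): the solution with the highest weighted
    total plus the full score table. `scores` maps each solution to a
    dict of {criterion: score}; higher scores are better."""
    totals = {
        name: sum(weights[c] * s for c, s in crit_scores.items())
        for name, crit_scores in scores.items()
    }
    return max(totals, key=totals.get), totals

# Hypothetical options for closing a skills gap on a team,
# each scored 1-5 against three weighted criteria.
weights = {"effectiveness": 3, "cost": 2, "time": 1}
scores = {
    "hire contractor": {"effectiveness": 5, "cost": 2, "time": 4},
    "train in-house":  {"effectiveness": 4, "cost": 4, "time": 2},
    "do nothing":      {"effectiveness": 1, "cost": 5, "time": 5},
}
winner, totals = best_solution(scores, weights)
```

Adjusting the weights shifts the outcome, which makes explicit how much the final choice depends on which criteria you decide matter most.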

Implementing and Monitoring the Solution

Implement the chosen solution and monitor its progress. Key actions include:

  • Communicating the solution to relevant parties
  • Setting timelines and milestones
  • Assigning tasks and responsibilities
  • Monitoring the solution and making adjustments as necessary
  • Evaluating the effectiveness of the solution after implementation

Utilize feedback from stakeholders and consider potential improvements. Remember that problem-solving is an ongoing process that can always be refined and enhanced.

Problem-Solving Techniques

During each step, you may find it helpful to utilize various problem-solving techniques, such as:

  • Brainstorming: A free-flowing, open-minded session where ideas are generated and listed without judgment, to encourage creativity and innovative thinking.
  • Root cause analysis: A method that explores the underlying causes of a problem to find the most effective solution rather than addressing superficial symptoms.
  • SWOT analysis: A tool used to evaluate the strengths, weaknesses, opportunities, and threats related to a problem or decision, providing a comprehensive view of the situation.
  • Mind mapping: A visual technique that uses diagrams to organize and connect ideas, helping to identify patterns, relationships, and possible solutions.

Brainstorming

When facing a problem, start by conducting a brainstorming session. Gather your team and encourage an open discussion where everyone contributes ideas, no matter how outlandish they may seem. This helps you:

  • Generate a diverse range of solutions
  • Encourage all team members to participate
  • Foster creative thinking

When brainstorming, remember to:

  • Reserve judgment until the session is over
  • Encourage wild ideas
  • Combine and improve upon ideas

Root Cause Analysis

For effective problem-solving, identifying the root cause of the issue at hand is crucial. Try these methods:

  • 5 Whys: Ask “why” five times to get to the underlying cause.
  • Fishbone Diagram: Create a diagram representing the problem and break it down into categories of potential causes.
  • Pareto Analysis: Determine the few most significant causes underlying the majority of problems.
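
As an illustration of Pareto analysis, the following sketch ranks causes by frequency and keeps only the "vital few" that account for the bulk (here, 80%) of incidents. The defect counts are invented:

```python
def pareto_causes(counts, threshold=0.8):
    """Return causes, most frequent first, up to the point where their
    cumulative share of all incidents first reaches `threshold`."""
    total = sum(counts.values())
    vital_few, cumulative = [], 0
    for cause, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        vital_few.append(cause)
        cumulative += n
        if cumulative / total >= threshold:
            break
    return vital_few

# Invented complaint counts for a hypothetical fulfillment team.
defects = {"late delivery": 55, "wrong item": 25, "damaged item": 12,
           "billing error": 5, "other": 3}
```

Here the two most frequent causes cover 80% of all complaints, so problem-solving effort is best focused on them first.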

SWOT Analysis

SWOT analysis helps you examine the Strengths, Weaknesses, Opportunities, and Threats related to your problem. To perform a SWOT analysis:

  • List your problem’s strengths, such as relevant resources or strong partnerships.
  • Identify its weaknesses, such as knowledge gaps or limited resources.
  • Explore opportunities, like trends or new technologies, that could help solve the problem.
  • Recognize potential threats, like competition or regulatory barriers.

SWOT analysis aids in understanding the internal and external factors affecting the problem, which can help guide your solution.

Mind Mapping

A mind map is a visual representation of your problem and potential solutions. It enables you to organize information in a structured and intuitive manner. To create a mind map:

  • Write the problem in the center of a blank page.
  • Draw branches from the central problem to related sub-problems or contributing factors.
  • Add more branches to represent potential solutions or further ideas.

Mind mapping allows you to visually see connections between ideas and promotes creativity in problem-solving.
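
The three steps above can also be sketched as a nested dictionary, with the central problem at the root and each branch as a child. The example map is invented for illustration:

```python
def map_lines(node, depth=0):
    """Return one indented line per branch, with children listed
    under their parent branch."""
    lines = []
    for branch, children in node.items():
        lines.append("  " * depth + "- " + branch)
        lines.extend(map_lines(children, depth + 1))
    return lines

# Central problem at the root; causes and ideas as branches.
mind_map = {
    "Website is slow": {
        "Large images": {"Compress images": {}},
        "No caching": {"Add a CDN": {}, "Cache pages": {}},
    }
}
for line in map_lines(mind_map):
    print(line)
```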

Examples of Problem Solving in Various Contexts

In the business world, you might encounter problems related to finances, operations, or communication. Applying problem-solving skills in these situations could look like:

  • Identifying areas of improvement in your company’s financial performance and implementing cost-saving measures
  • Resolving internal conflicts among team members by listening and understanding different perspectives, then proposing and negotiating solutions
  • Streamlining a process for better productivity by removing redundancies, automating tasks, or re-allocating resources

In educational contexts, problem-solving can be seen in various aspects, such as:

  • Addressing a gap in students’ understanding by employing diverse teaching methods to cater to different learning styles
  • Developing a strategy for successful time management to balance academic responsibilities and extracurricular activities
  • Seeking resources and support to provide equal opportunities for learners with special needs or disabilities

Everyday life is full of challenges that require problem-solving skills. Some examples include:

  • Overcoming a personal obstacle, such as improving your fitness level, by establishing achievable goals, measuring progress, and adjusting your approach accordingly
  • Navigating a new environment or city by researching your surroundings, asking for directions, or using technology like GPS to guide you
  • Dealing with a sudden change, like a change in your work schedule, by assessing the situation, identifying potential impacts, and adapting your plans to accommodate the change.

General Problem-solving Process

Introduction

The following is a general problem-solving process that characterizes the steps that can be followed by any discipline when approaching and rationally solving a problem. When used in conjunction with reasoning and decision-making skills, the process works well for one or more participants. Its main purpose is to guide participants through a procedure for solving many types of problems that have a varying level of complexity.

More importantly, the process is both descriptive and prescriptive. This means it can be used to look at past, present, and potential future problems and their solutions in a clear, systematic way that is consistent and able to be generalized. At each step along the way to a solution, various types of research must be conducted to successfully accomplish the steps of the process and thus arrive at an effective solution that is viable. A description of research follows the problem-solving process. In both the problem-solving and research processes, good decision-making, critical thinking, and self-assessment are vital to a high-quality result. At each step in the process, the problem-solver may need to go back to earlier steps and reexamine decisions made. It is this revisiting of earlier choices that makes the process iterative and allows for improvement of the final outcomes.

Steps in the General Problem-solving Process

  • Become aware of the problem
  • Define the problem
  • Choose the particular problem to be solved
  • Identify potential solutions
  • Evaluate the valid potential solutions to select the best one
  • Develop an action plan to implement the best solution

Become Aware of the Problem

The first step of any problem-solving process is becoming aware. This awareness can be generated from inside or outside the individual. Many times the awareness is part of a stated task or assignment given to the individual by someone else. In other cases, a person can observe a specific problem or a clear gap in knowledge that they feel must be addressed. In the end, as long as a problem is perceived by oneself or others, awareness of this problem is achieved. However, the level of awareness and the research associated with it are vital to the initiation of the problem-solving process.

Define the Problem

After the problem is recognized, research is conducted. Initially, research must be done to help define the problem as well as identify the assumptions being made and determine the parameters of the situation.

In the end, the main purpose of this step is to evaluate the constraints on the problem and the problem solver to better understand the goals that are trying to be reached. Once these goals are identified, the objectives that must be attained in order to reach the goals can be specified and utilized to help narrow the scope of the problem. Once the goals and objectives are clearly understood, the problem to be solved can be selected. An easy way to think of goals and objectives is that goals are what you hope to achieve while objectives are how you will go about accomplishing the goal.

Just as research might have been the impetus for engaging in the problem-solving process—it made the problem-solver aware—research is vital to the specification of parameters and assumptions. The heart of this step is the series of decisions the problem-solver makes to narrow the scope of the problem. Parameters are those factual boundaries and constraints set by the problem statement or discovered through research. Assumptions, by contrast, are those constraints that the problem-solver sets without having incontrovertible factual backing for those decisions. A clear understanding of the assumptions being made when engaging in the process is important. If an unsatisfactory outcome is reached, it may be necessary to adjust these assumptions. Even if the final solution is arrived at, knowing one’s assumptions assists the problem-solver in explaining and defending their conclusions.

Choose Which Problem to Be Solved

Once a goal and set of objectives has been specified and the parameters and assumptions have been identified, it is necessary to choose a particular problem to solve. Any large problem can be broken into smaller problems that are in turn broken into even smaller problems to be addressed. Each problem is an achievable goal that consists of objectives. Each of these objectives is a sub-problem that must be solved first in order to solve the larger overarching problem.

There are many different reasons to choose a particular problem to solve. It is important to do a risk assessment on the problems involved and examine why the problem is being solved. For example, the problem might be the most important, most immediate, most far-reaching, or most politically important at the moment. Whatever the choice, the individual or group must have clear reasons why they chose the problem to be solved.

Once the aspects of the problem are known, the problem must be phrased as a question that each solution can answer affirmatively. An example of a problem statement might be "How might I increase the use of problem solving techniques by college graduates of four year universities in America today?" This specific type of question has four separate parts: question statement, active verb, object, and parameters and assumptions.

The first part is the question statement, which transforms the problem into a question to be answered. It takes the form "How might I" or "In what ways might I." If the process is being undertaken by a group, it should be phrased as "we" instead of "I." At times, an individual or a group may examine an issue concerning a third party. For example, students may work on problems facing their institution or that must be handled by the government. In this case, the question might become "How might our school" or "In what ways might the United States government." In all of these cases, the object is to create a question that must be answered as well as to specify the group designated to answer it. Each solution must then apply to that group and be able to be accomplished by them as well.

Next is the active verb, or the action used to solve the problem. Some of the most useful active verbs are the ones that describe change without specifying an absolute end or any one action, for example: accelerate, alleviate, broaden, increase, minimize, reduce, and stabilize. It is important to realize that the stronger the verb, the more difficult it might be to accomplish workable solutions. For example, it is easier to reduce crime than to eliminate it. Keep this in mind when choosing verbs because verb choice is vital to good solution finding. If necessary, two or more verbs can be used, separated by the conjunctions "and," "or," or "and/or." To assist in the verb choice process, some active verbs are listed below:

Active Verbs

Figure 2 is a list of action verbs that can be used when formulating a problem statement.

The third part of the problem statement is the object of the sentence that relates to the problem being solved. The object states what is being acted upon by the verb to help solve the problem. Each solution must directly or indirectly affect this object. In our earlier statement, "How might I increase the use of problem solving techniques by college graduates of four year universities in America today?" the object is "use of problem solving."

Finally, the parameters and assumptions that are bounding the solution are listed. These help to focus the solutions that are generated. Though parameters are not necessary, they are often useful to help limit and focus the scope of the process. Be careful not to make the problem too broad. Broad problems lead to a wide number of solutions that can be difficult to choose between and implement, with weak or ineffectual results. At the same time, an overly narrow problem statement can lead to a small number of solutions that provide little usable result. In our example, "college graduates of four year universities in America today" are the parameters. This is identified with the conjunction ‘by’ and is used to mark who should have the use of problem solving increased.

Once the problem statement is phrased properly, solutions can be generated. However, it is important to note that this statement might have to be modified as more research becomes available or as the remainder of the process is worked through. As the process is iterated, small modifications to the problem statement can be made and refinements in the scope and specificity accomplished through changes in the verb, object, and parameters.

Identify Potential Solutions

Once the problem statement has been chosen, it is necessary to generate potential solutions. This is the most creative portion of the process. Even so, conducting research into existing solutions to the problem or similar problems is helpful for generating workable solutions. The main criterion for judging solutions in this step is simply whether or not they answer the problem statement with a ‘yes.’ At this point, it may also be possible to eliminate some solutions because they do not agree with commonly held moral and ethical guidelines. Even though not stated specifically, these guidelines are understood and assumed to be upheld when reviewing solutions. For example, a solution to global pollution might be to kill every human. This is obviously not a good solution even though it would give a ‘yes’ answer to the question of "How might we minimize global air pollution caused by humans?"

When working in groups, it is important to work together to generate solutions. Also, it should be realized that the solution process takes time depending upon the problem complexity. At this point, do not judge solutions for more than their ability to answer the stated problem questions with a "yes" because they will be evaluated more closely in the next step. Many times it is possible to use discarded solutions to develop new ideas for solutions. However, it is important to be able to distinguish between similar solutions. Saying the same thing in ten different ways may not be ten different solutions. Try to group similar solutions together. If all the solutions fall into one group, then perhaps the best solution is to implement that group with different variations for different cases of the problems. Just as there are many unique problems, the solutions to these problems are all unique and need to be adapted to the particular situations being discussed. This will be addressed in the last section of the problem solving process.

Evaluate the Valid Potential Solutions to Select a Best Solution

Once a list of potential solutions has been generated, the evaluation process can begin. First, a list of criteria for judging all solutions equally must be chosen. It is vital to eliminate personal bias towards particular solutions as well as to utilize a consistent set of criteria to evaluate all solutions fairly. For example: most cost effective, most socially acceptable, most easily implemented, most directly solves the problem, most far-reaching effects, most lasting effects, least government intervention required, least limiting to development, or quickest to implement. It is important to have research and logical reasons for the criteria chosen as well as factual support for the rankings given to a particular solution for each criterion.

Once the criteria are chosen, they should be given a weighting. In most cases, all the criteria have the same weight. However, it is possible to give other weightings to criteria so that a particular factor is seen as more important. Many times, the cost, time to complete, or political nature of a project is more important than other factors, and so that criterion may be weighted more heavily than the others used to judge.

Once the criteria are chosen and weighted, all qualified solutions must then be ranked. Two types of procedures for ranking exist. If the number of solutions is large, usually greater than ten, an independent ranking must be conducted to narrow the number of choices. Each solution is listed along one side of a grid and then given a score for each criterion from 1 to 5, where 5 is the highest (other ranges can be used). The rankings for the various criteria are then totaled and a score for each solution is reported. These scores are compared to create a subset of solutions that have the highest scores.

If the number of solutions is initially small or the independent ranking has been conducted, the remaining solutions are placed into a grid with the criteria for a comparative analysis. Though all the solutions may be seen as good, the comparative analysis identifies the best solution. The total number of solutions gives the range of numbers for each criterion. For example, if there are six solutions, then the rankings will go from 1 to 6, with 6 being the highest. Each solution is ranked for each criterion in comparison to the other solutions for that criterion. Within a criterion, however, no two solutions can have the same number; if two are equal, the adjacent numbers should be added and then divided by 2, and the result placed in the space for each solution. See the charts below for an example, based on the question "How might we control development in order to preserve the integrity and character of the town of Bedminster?"

Sample Table of Potential Solutions

Figure 3 is a list of the potential solutions to be evaluated.

Sample Table of Evaluation Criteria

Figure 4 is a list of the criteria to be used to evaluate the potential solutions.

Sample Table of a Comparative Analysis

Figure 5 is a comparative analysis of the solutions from the table in figure 3, based upon the criteria listed in figure 4, for the problem stated earlier. The values used for scoring range from 6 (most satisfies the criterion) to 1 (least satisfies the criterion).

Once all the solutions are ranked for all criteria and the weighting is applied appropriately, the scores for each solution are totaled. The highest score is then the best solution. If two solutions are close in score, then they may be equally good but differ in their strong points.
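
The comparative ranking procedure described above (rank each solution against the others per criterion, average the adjacent rank numbers for ties, then total the weighted ranks) can be sketched as follows. The solutions, raw scores, and weights are invented for illustration:

```python
def comparative_ranks(raw):
    """raw maps criterion -> {solution: raw score}. Returns
    criterion -> {solution: rank}, where n is best and 1 is worst;
    tied solutions share the average of the adjacent rank numbers."""
    ranks = {}
    for criterion, scores in raw.items():
        ordered = sorted(scores, key=scores.get)  # worst score first
        n, r, i = len(ordered), {}, 0
        while i < n:
            j = i  # extend j over solutions tied with solution i
            while j + 1 < n and scores[ordered[j + 1]] == scores[ordered[i]]:
                j += 1
            avg = ((i + 1) + (j + 1)) / 2  # average the adjacent ranks
            for k in range(i, j + 1):
                r[ordered[k]] = avg
            i = j + 1
        ranks[criterion] = r
    return ranks

def pick_best(raw, weights):
    """Total the weighted ranks and return (winner, totals)."""
    totals = {}
    for criterion, r in comparative_ranks(raw).items():
        for solution, rank in r.items():
            totals[solution] = totals.get(solution, 0) + weights[criterion] * rank
    return max(totals, key=totals.get), totals

# Invented raw scores for the Bedminster development question.
raw = {
    "cost":       {"zoning": 3, "land trust": 1, "tax incentive": 2},
    "acceptance": {"zoning": 2, "land trust": 3, "tax incentive": 2},
}
weights = {"cost": 1, "acceptance": 1}
winner, totals = pick_best(raw, weights)
```

Note how the tie on acceptance between "zoning" and "tax incentive" is resolved exactly as the text prescribes: the adjacent ranks 1 and 2 are added and divided by 2, giving each solution 1.5.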

It is important to remember that the criteria used to judge the solutions are reflective of the choices being made. Each criterion is a ruler or a gauge by which to measure an outcome. Different rulers will yield different results, so be sure to choose the proper rulers as well as use them properly. In order to choose the correct ruler and interpret it in the correct way, it is necessary to understand many different disciplines and the tools they use. In the end, however, each individual must have good decision-making skills to choose and use criteria.

Develop an Action Plan to Implement the Solution

After selecting the best solution, it is necessary to give some thought to the way in which it might be implemented. Giving insight into funding, potential problems with implementing the solution, and the time frame of the solution is necessary for any workable solution to a problem. Not all solutions can be implemented. Unforeseen problems may arise as solutions are tested and put to work. Many times, unexpected resistance to solutions can be encountered. Other times, unacceptable results can require that another solution be used.

In some circumstances the problem may have been originally selected incorrectly, have been misunderstood, or have changed as a result of research or altered circumstances. In the end, mistakes happen and the action plan helps the problem solver be prepared for such eventualities. In any event, the action plan can be used to make others aware of potential problems that might be faced while putting the selected solution into effect. Even when solving a current problem, this process will automatically assist the problem solver in thinking of potential problems and thus assist in avoiding unwanted outcomes. Whatever the outcome, it is vital to understand that the choices made during this entire process rely upon research.


Problem Solving Skills for the Digital Age

Lucid Content

Reading time: about 6 min

Let’s face it: Things don’t always go according to plan. Systems fail, wires get crossed, projects fall apart.

Problems are an inevitable part of life and work. They’re also an opportunity to think critically and find solutions. But knowing how to get to the root of unexpected situations or challenges can mean the difference between moving forward and spinning your wheels.

Here, we’ll break down the key elements of problem solving, some effective problem solving approaches, and a few effective tools to help you arrive at solutions more quickly.

So, what is problem solving?

Broadly defined, problem solving is the process of finding solutions to difficult or complex issues. But you already knew that. Understanding problem solving frameworks, however, requires a deeper dive.

Think about a recent problem you faced. Maybe it was an interpersonal issue. Or it could have been a major creative challenge you needed to solve for a client at work. How did you feel as you approached the issue? Stressed? Confused? Optimistic? Most importantly, which problem solving techniques did you use to tackle the situation head-on? How did you organize thoughts to arrive at the best possible solution?

Solve your problem-solving problem  

Here’s the good news: Good problem solving skills can be learned. By its nature, problem solving doesn’t adhere to a clear set of do’s and don’ts—it requires flexibility, communication, and adaptation. However, most problems you face, at work or in life, can be tackled using four basic steps.

First, you must define the problem. This step sounds obvious, but often you can notice that something is amiss in a project or process without really knowing where the core problem lies. The most challenging part of the problem solving process is uncovering where the problem originated.

Second, you work to generate alternatives to address the problem directly. This should be a collaborative process to ensure you’re considering every angle of the issue.

Third, you evaluate and test potential solutions to your problem. This step helps you fully understand the complexity of the issue and arrive at the best possible solution.

Fourth and finally, you select and implement the solution that best addresses the problem.

Following this basic four-step process will help you approach every problem you encounter with the same rigorous critical and strategic thinking process, recognize commonalities in new problems, and avoid repeating past mistakes.

In addition to these basic problem solving skills, there are several best practices that you should incorporate. These problem solving approaches can help you think more critically and creatively about any problem:

You may not feel like you have the right expertise to resolve a specific problem. Don't let that stop you from tackling it. The best problem solvers become students of the problem at hand. Even if you don't have particular expertise on a topic, your unique experience and perspective can lend themselves to creative solutions.

Challenge the status quo

Standard problem solving methodologies and frameworks are a good starting point, but don't be afraid to challenge assumptions and push boundaries. Good problem solvers find ways to adapt existing best practices into innovative approaches.

Think broadly about and visualize the issue

Sometimes it’s hard to see a problem, even if it’s right in front of you. Clear answers could be buried in rows of spreadsheet data or lost in miscommunication. Use visualization as a problem solving tool to break down problems to their core elements. Visuals can help you see bottlenecks in the context of the whole process and more clearly organize your thoughts as you define the problem.  

Hypothesize, test, and try again

It might be cliché, but there's truth in the old adage that genius is 1% inspiration and 99% perspiration. The best problem solvers ask why, test, fail, and ask why again. Whether it takes one iteration or 1,000 to solve a problem, the important part, and the part that everyone remembers, is the solution.

Consider other viewpoints

Today’s problems are more complex, more difficult to solve, and they often involve multiple disciplines. They require group expertise and knowledge. Being open to others’ expertise increases your ability to be a great problem solver. Great solutions come from integrating your ideas with those of others to find a better solution. Excellent problem solvers build networks and know how to collaborate with other people and teams. They are skilled in bringing people together and sharing knowledge and information.

4 effective problem solving tools

As you work through the problem solving steps, try these tools to better define the issue and find the appropriate solution.

Root cause analysis

Similar to pulling weeds from your garden, if you don't get to the root of the problem, it's bound to come back. A root cause analysis helps you figure out the root cause behind any disruption or problem, so you can take steps to prevent it from recurring. The root cause analysis process involves defining the problem, collecting data, and identifying causal factors to pinpoint root causes and arrive at a solution.


5 Whys

Less structured than other, more traditional problem solving methods, the 5 Whys is simply what it sounds like: asking why over and over to get to the root of an obstacle or setback. This technique encourages an open dialogue that can trigger new ideas about a problem, whether done individually or with a group. Each why piggybacks off the answer to the previous why. Get started with the template below—both flowcharts and fishbone diagrams can also help you track your answers to the 5 Whys.
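Because each answer becomes the subject of the next "why," a 5 Whys session is easy to capture as a simple chain. The sketch below is illustrative only; the problem and answers are invented for the example, and the last answer is treated as the candidate root cause.

```python
def five_whys(problem, answers):
    """Record a 5 Whys session: the chain starts at the problem
    statement, each answer feeds the next 'why?', and the final
    answer is taken as the candidate root cause."""
    chain = [problem] + list(answers)
    # Pair each statement with the answer to "but why?" for the write-up.
    steps = [(chain[i], chain[i + 1]) for i in range(len(chain) - 1)]
    return {"chain": chain, "steps": steps, "root_cause": chain[-1]}

# Hypothetical session (all answers invented for illustration).
session = five_whys(
    "The website went down during the product launch",
    [
        "The server ran out of memory",
        "A deployment script leaked database connections",
        "The script was never load-tested",
        "No load-testing step exists in the release checklist",
    ],
)
```

Stopping at a process gap (the missing checklist step) rather than a symptom (the crash) is the point of the exercise: the root cause is something you can actually fix.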

5 Whys analysis

Brainstorming

A meeting of the minds, a brain dump, a mind meld, a jam session. Whatever you call it, collaborative brainstorming can help surface previously unseen issues, root causes, and alternative solutions. Create and share a mind map with your team members to fuel your brainstorming session.

Gap analysis

Sometimes you don’t know where the problem is until you determine where it isn’t. Gap analysis helps you identify the inadequacies that are preventing you from reaching an optimized state or end goal. For example, a content gap analysis can help a content marketer determine where holes exist in messaging or the customer experience. Gap analysis is especially helpful when it comes to problem solving because it requires you to find workable solutions. A SWOT analysis chart that looks at a problem through the lens of strengths, weaknesses, opportunities, and threats can be a helpful problem solving framework as you start your analysis.

SWOT analysis

A better way to problem solve

Beyond these practical tips and tools, there are myriad methodical and creative approaches to move a project forward or resolve a conflict. The right approach will depend on the scope of the issue and your desired outcome.

Depending on the problem, Lucidchart offers several templates and diagrams that could help you identify the cause of the issue and map out a plan to resolve it.  Learn more about how Lucidchart can help you take control of your problem solving process .

About Lucidchart

Lucidchart, a cloud-based intelligent diagramming application, is a core component of Lucid Software's Visual Collaboration Suite. This intuitive, cloud-based solution empowers teams to collaborate in real-time to build flowcharts, mockups, UML diagrams, customer journey maps, and more. Lucidchart propels teams forward to build the future faster. Lucid is proud to serve top businesses around the world, including customers such as Google, GE, and NBC Universal, and 99% of the Fortune 500. Lucid partners with industry leaders, including Google, Atlassian, and Microsoft. Since its founding, Lucid has received numerous awards for its products, business, and workplace culture. For more information, visit lucidchart.com.




Section 3. Defining and Analyzing the Problem


The nature of problems

Clarifying the problem, deciding to solve the problem, analyzing the problem.

We've all had our share of problems - more than enough, if you come right down to it. So it's easy to think that this section, on defining and analyzing the problem, is unnecessary. "I know what the problem is," you think. "I just don't know what to do about it."

Not so fast! A poorly defined problem - or a problem whose nuances you don't completely understand - is much more difficult to solve than a problem you have clearly defined and analyzed. The way a problem is worded and understood has a huge impact on the number, quality, and type of proposed solutions.

In this section, we'll begin with the basics, focusing primarily on four things. First, we'll consider the nature of problems in general. Second, we'll look more specifically at clarifying and defining the problem you are working on. Third, we'll talk about whether you really want to solve the problem, or whether you are better off leaving it alone. Finally, we'll talk about how to do an in-depth analysis of the problem.

So, what is a problem? It can be a lot of things. We know in our gut when there is a problem, whether or not we can easily put it into words. Maybe you feel uncomfortable in a given place, but you're not sure why. A problem might be just the feeling that something is wrong and should be corrected. You might feel some sense of distress, or of injustice.

Stated most simply, a problem is the difference between what is , and what might or should be . "No child should go to bed hungry, but one-quarter of all children do in this country," is a clear, potent problem statement. Another example might be, "Communication in our office is not very clear." In this instance, the explanation of "what might or should be" is simply alluded to.

As these problems illustrate, some problems are more serious than others; the problem of child hunger is a much more severe problem than the fact that the new youth center has no exercise equipment, although both are problems that can and should be addressed. Generally, problems that affect groups of people - children, teenage mothers, the mentally ill, the poor - can at least be addressed and in many cases lessened using the process outlined in this Chapter.

Although your organization may have chosen to tackle a seemingly insurmountable problem, the process you will use to solve it is not complex. It does, however, take time, both to formulate and to fully analyze the problem. Most people underestimate the work they need to do here and the time they'll need to spend. But this is the legwork, the foundation on which you'll lay effective solutions. This isn't the time to take shortcuts.

Three basic concepts make up the core of this chapter: clarifying, deciding, and analyzing. Let's look at each in turn.

If you are having a problem-solving meeting, then you already understand that something isn't quite right - or maybe it's bigger than that; you understand that something is very, very wrong. This is your beginning, and of course, it makes most sense to...

  • Start with what you know . When group members walk through the door at the beginning of the meeting, what do they think about the situation? There are a variety of different ways to garner this information. People can be asked in advance to write down what they know about the problem. Or the facilitator can lead a brainstorming session to try to bring out the greatest number of ideas. Remember that a good facilitator will draw out everyone's opinions, not only those of the more vocal participants.
  • Decide what information is missing . Information is the key to effective decision making. If you are fighting child hunger, do you know which children are hungry? When are they hungry - all the time, or especially at the end of the month, when the money has run out? If that's the case, your problem statement might be, "Children in our community are often hungry at the end of the month because their parents' paychecks are used up too early."
Compare this problem statement on child hunger to the one given in "The nature of problems" above. How might solutions for the two problems be different?
The information you gather will generally fall into one of four categories:
  • Facts (15% of the children in our community don't get enough to eat.)
  • Inference (A significant percentage of children in our community are probably malnourished/significantly underweight.)
  • Speculation (Many of the hungry children probably live in the poorer neighborhoods in town.)
  • Opinion (I think the reason children go hungry is because their parents spend all of their money on cigarettes.)

When you are gathering information, you will probably hear all four types of information, and all can be important. Speculation and opinion can be especially important in gauging public opinion. If public opinion on your issue is based on faulty assumptions, part of your solution strategy will probably include some sort of informational campaign.

For example, perhaps your coalition is campaigning against the death penalty, and you find that most people incorrectly believe that the death penalty deters violent crime. As part of your campaign, therefore, you will probably want to make it clear to the public that it simply isn't true.

Where and how do you find this information? It depends on what you want to know. You might gather it through surveys or interviews, or by doing research at the library and on the internet.

  • Define the problem in terms of needs, and not solutions. If you define the problem in terms of possible solutions, you're closing the door to other, possibly more effective solutions. "Violent crime in our neighborhood is unacceptably high," offers space for many more possible solutions than, "We need more police patrols," or, "More citizens should have guns to protect themselves."
  • Define the problem as one everyone shares; avoid assigning blame for the problem. This is particularly important if different people (or groups) with a history of bad relations need to be working together to solve the problem. Teachers may be frustrated with high truancy rates, but blaming students uniquely for problems at school is sure to alienate students from helping to solve the problem.

You can define the problem in several ways. The facilitator can write a problem statement on the board, and everyone can give feedback on it until the statement has developed into something everyone is pleased with. Alternatively, you can accept someone else's definition of the problem, or use it as a starting point, modifying it to fit your needs.

After you have defined the problem, ask if everyone understands the terminology being used. Define the key terms of your problem statement, even if you think everyone understands them.

The Hispanic Health Coalition has come up with the problem statement "Teen pregnancy is a problem in our community." That seems pretty clear, doesn't it? But let's examine the word "community" for a moment. You may have one person who defines community as "the city you live in," a second who defines it as "this neighborhood," and a third who considers "our community" to mean Hispanics.

At this point, you have already spent a fair amount of time on the problem at hand, and naturally, you want to see it taken care of. Before you go any further, however, it's important to look critically at the problem and decide if you really want to focus your efforts on it. You might decide that right now isn't the best time to try to fix it. Maybe your coalition has been weakened by bad press, and chance of success right now is slim. Or perhaps solving the problem right now would force you to neglect another important agency goal. Or perhaps this problem would be more appropriately handled by another existing agency or organization.

You and your group need to make a conscious choice that you really do want to attack the problem. Many different factors should be a part of your decision. These include:

Importance . In judging the importance of the issue, keep in mind its feasibility: even if you have decided that the problem really is important, and worth solving, will you be able to solve it, or at least significantly improve the situation? The bottom line: decide whether the good you can do will be worth the effort it takes. Are you the best people to solve the problem? Is someone else better suited to the task?

For example, perhaps your organization is interested in youth issues, and you have recently come to understand that teens aren't participating in community events mostly because they don't know about them. A monthly newsletter, given out at the high schools, could take care of this fairly easily. Unfortunately, you don't have much publishing equipment. You do have an old computer and a desktop printer, and you could type something up, but it's really not your forte. A better solution might be to work to find writing, design and/or printing professionals who would donate their time and/or equipment to create a newsletter that is more exciting, and that students would be more likely to want to read.

Negative impacts . If you do succeed in bringing about the solution you are working on, what are the possible consequences? If you succeed in having safety measures implemented at a local factory, how much will it cost? Where will the factory get that money? Will they cut salaries, or lay off some of their workers?

Even if there are some unwanted results, you may well decide that the benefits outweigh the negatives. As when you're taking medication, you'll put up with the side effects to cure the disease. But be sure you go into the process with your eyes open to the real costs of solving the problem at hand.

Choosing among problems

You might have many obstacles you'd like to see removed. In fact, it's probably a pretty rare community group that doesn't have a laundry list of problems they would like to resolve, given enough time and resources. So how do you decide which to start with?

A simple suggestion might be to list all of the problems you are facing and whether or not they meet the criteria listed above (importance, feasibility, et cetera). It's hard to assign numerical values in a case like this, because for each situation one of the criteria may strongly outweigh the others. However, just having all of the information in front of the group can make the actual decision a much easier task.

Now that the group has defined the problem and agreed that they want to work towards a solution, it's time to thoroughly analyze the problem. You started to do this when you gathered information to define the problem, but now, it's time to pay more attention to details and make sure everyone fully understands the problem.

Answer all of the question words.

The facilitator can take group members through a process of understanding every aspect of the problem by answering the "question words" - what, why, who, when, and how much. This process might include the following types of questions:

What is the problem? You already have your problem statement, so this part is more or less done. But it's important to review your work at this point.

Why does the problem exist? There should be agreement among meeting participants as to why the problem exists to begin with. If there isn't, consider trying one of the following techniques.

  • The "but why" technique. This simple exercise can be done easily with a large group, or even on your own. Write the problem statement, and ask participants, "Why does this problem exist?" Write down the answer given, and ask, "But why does (the answer) occur?"
"Children often fall asleep in class." But why? "Because they have no energy." But why? "Because they don't eat breakfast." But why?

Continue down the line until participants can comfortably agree on the root cause of the problem . Agreement is essential here; if people don't even agree about the source of the problem, an effective solution may well be out of reach.

  • The force field analysis. Another approach is to map the forces keeping the situation the same against the forces pushing it to change:
  • Start with the definition you penned above.
  • Draw a line down the center of the paper. Or, if you are working with a large group of people who cannot easily see what you are writing, use two pieces.
  • On the top of one sheet/side, write "Restraining Forces."
  • On the other sheet/side, write, "Driving Forces."
  • Under "Restraining Forces," list all of the reasons you can think of that keep the situation the same; why the status quo is the way it is. As with all brainstorming sessions, this should be a "free for all;" no idea is too "far out" to be suggested and written down.
  • In the same manner, under "Driving Forces," list all of the forces that are pushing the situation to change.
  • When all of the ideas have been written down, group members can edit them as they see fit and compile a list of the important factors that are causing the situation.

Clearly, these two exercises are meant for different times. The "but why" technique is most effective when the facilitator (or the group as a whole) decides that the problem hasn't been looked at deeply enough and that the group's understanding is somewhat superficial. The force field analysis, on the other hand, can be used when people are worried that important elements of the problem haven't been noticed -- that you're not looking at the whole picture.

Who is causing the problem, and who is affected by it? A simple brainstorming session is an excellent way to determine this.

When did the problem first occur, or when did it become significant? Is this a new problem or an old one? Knowing this can give you added understanding of why the problem is occurring now. Also, the longer a problem has existed, the more entrenched it has become, and the more difficult it will be to solve. People often get used to things the way they are and resist change, even when it's a change for the better.

How much , or to what extent, is this problem occurring? How many people are affected by the problem? How significant is it? Here, you should revisit the questions on importance you looked at when you were defining the problem. This serves as a brief refresher and gives you a complete analysis from which you can work.

If time permits, you might want to summarize your analysis on a single sheet of paper for participants before moving on to generating solutions, the next step in the process. That way, members will have something to refer back to during later stages in the work.

Also, after you have finished this analysis, the facilitator should ask for agreement from the group. Have people's perceptions of the problem changed significantly? At this point, check back and make sure that everyone still wants to work together to solve the problem.

The first step in any effective problem-solving process may be the most important. Take your time to develop a critical definition, and let this definition, and the analysis that follows, guide you through the process. You're now ready to go on to generating and choosing solutions, which are the next steps in the problem-solving process, and the focus of the following section.



Guide: Problem Solving


Daniel Croft

Daniel Croft is an experienced continuous improvement manager with a Lean Six Sigma Black Belt and a Bachelor's degree in Business Management. With more than ten years of experience applying his skills across various industries, Daniel specializes in optimizing processes and improving efficiency. His approach combines practical experience with a deep understanding of business fundamentals to drive meaningful change.

  • Last Updated: January 7, 2024

Problem-solving stands as a fundamental skill, crucial in navigating the complexities of both everyday life and professional environments. Far from merely providing quick fixes, it entails a comprehensive process involving the identification, analysis, and resolution of issues.

This multifaceted approach requires an understanding of the problem’s nature, the exploration of its various components, and the development of effective solutions. At its core, problem-solving serves as a bridge from the current situation to a desired outcome, requiring not only the recognition of an existing gap but also the precise definition and thorough analysis of the problem to find viable solutions.


What is problem solving?


At its core, problem-solving is about bridging the gap between the current situation and the desired outcome. It starts with recognizing that a discrepancy exists, which requires intervention to correct or improve. The ability to identify a problem is the first step, but it’s equally crucial to define it accurately. A well-defined problem is half-solved, as the saying goes.

Analyzing the problem is the next critical step. This analysis involves breaking down the problem into smaller parts to understand its intricacies. It requires looking at the problem from various angles and considering all relevant factors – be they environmental, social, technical, or economic. This comprehensive analysis aids in developing a deeper understanding of the problem’s root causes, rather than just its symptoms.


Finally, effective problem-solving involves the implementation of the chosen solution and its subsequent evaluation. This stage tests the practicality of the solution and its effectiveness in the real world. It’s a critical phase where theoretical solutions meet practical application.

The Nature of Problems

The nature of the problem significantly influences the approach to solving it. Problems vary greatly in their complexity and structure, and understanding this is crucial for effective problem-solving.

Simple vs. Complex Problems : Simple problems are straightforward, often with clear solutions. They usually have a limited number of variables and predictable outcomes. On the other hand, complex problems are multi-faceted. They involve multiple variables, stakeholders, and potential outcomes, often requiring a more sophisticated analysis and a multi-pronged approach to solving.

Structured vs. Unstructured Problems : Structured problems are well-defined. They follow a specific pattern or set of rules, making their outcomes more predictable. These problems often have established methodologies for solving. For example, mathematical problems usually fall into this category. Unstructured problems, in contrast, are more ambiguous. They lack a clear pattern or set of rules, making their outcomes uncertain. These problems require a more exploratory approach, often involving trial and error, to identify potential solutions.

Understanding the type of problem at hand is essential, as it dictates the approach. For instance, a simple problem might require a straightforward solution, while a complex problem might need a more comprehensive, step-by-step approach. Similarly, structured problems might benefit from established methodologies, whereas unstructured problems might need more innovative and creative problem-solving techniques.

The Problem-Solving Process

The process of problem-solving is a methodical approach that involves several distinct stages. Each stage plays a crucial role in navigating from the initial recognition of a problem to its final resolution. Let’s explore each of these stages in detail.

Step 1: Identifying the Problem


Step 2: Defining the Problem

Once the problem is identified, the next step is to define it clearly and precisely. This is a critical phase because a well-defined problem often suggests its solution. Defining the problem involves breaking it down into smaller, more manageable parts. It also includes understanding the scope and impact of the problem. A clear definition helps in focusing efforts and resources efficiently and serves as a guide to stay on track during the problem-solving process.

Step 3: Analyzing the Problem

With the problem defined, the next step is to analyse it in depth. This means gathering and examining relevant data to understand the problem's root causes, looking for patterns, and distinguishing underlying causes from surface symptoms. A thorough analysis prevents effort being wasted on treating symptoms alone.

Step 4: Generating Solutions

Once the problem is well understood, the goal is to generate a broad range of potential solutions. Techniques such as brainstorming encourage the free flow of ideas without immediate criticism, so that creative and unconventional options are not ruled out too early. At this stage, quantity matters more than quality; evaluation comes later.

Step 5: Evaluating and Selecting Solutions

After generating a list of possible solutions, the next step is to evaluate each one critically. This evaluation includes considering the feasibility, costs, benefits, and potential impact of each solution. Techniques like cost-benefit analysis, risk assessment, and scenario planning can be useful here. The aim is to select the solution that best addresses the problem in the most efficient and effective way, considering the available resources and constraints.
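One common way to make this evaluation concrete is a weighted scoring (decision) matrix. The sketch below is a minimal illustration of the idea; the criteria, weights, and scores are hypothetical, not taken from this article.

```python
# Hypothetical weighted scoring matrix for comparing candidate solutions.
# Criteria, weights, and scores are illustrative, not from the article.

def score_solutions(solutions, weights):
    """Rank solutions by weighted score (higher is better)."""
    ranked = []
    for name, scores in solutions.items():
        total = sum(weights[c] * scores[c] for c in weights)
        ranked.append((name, total))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Scores on a 1-10 scale; weights sum to 1.
weights = {"feasibility": 0.4, "cost": 0.3, "impact": 0.3}
solutions = {
    "A": {"feasibility": 8, "cost": 6, "impact": 7},
    "B": {"feasibility": 5, "cost": 9, "impact": 7},
}
print(score_solutions(solutions, weights))  # "A" ranks first
```

A matrix like this does not replace judgement; it simply makes the trade-offs between feasibility, cost, and impact explicit and auditable.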

Step 6: Implementing the Solution

The selected solution is then put into practice. Effective implementation involves planning the required actions, allocating resources, communicating with those affected, and monitoring progress so that adjustments can be made as the rollout unfolds.

Step 7: Reviewing and Reflecting

The final stage in the problem-solving process is to review the implemented solution and reflect on its effectiveness and the process as a whole. This involves assessing whether the solution met its intended goals and what could have been done differently. Reflection is a critical part of learning and improvement. It helps in understanding what worked well and what didn’t, providing valuable insights for future problem-solving efforts.

Tools and Techniques for Effective Problem Solving

Problem-solving is a multifaceted endeavor that requires a variety of tools and techniques to navigate effectively. Different stages of the problem-solving process can benefit from specific strategies, enhancing the efficiency and effectiveness of the solutions developed. Here’s a detailed look at some key tools and techniques:

Brainstorming

A group technique for the solution-generation phase: participants are encouraged to voice as many ideas as possible without immediate criticism, with evaluation deferred until afterwards. The emphasis on quantity fosters creative thinking and surfaces unconventional options.

Brainwriting

A quieter variant of brainstorming in which participants write their ideas down individually, often passing them on for others to build upon. This reduces the influence of dominant voices and gives more reflective participants room to contribute.

SWOT Analysis (Strengths, Weaknesses, Opportunities, Threats)

A strategic planning tool that examines the Strengths, Weaknesses, Opportunities, and Threats relevant to a situation. In problem-solving, it provides a structured view of the internal and external factors that bear on the problem and on candidate solutions, helping to formulate strategies that leverage strengths and opportunities while mitigating weaknesses and threats.

Root Cause Analysis

This is a method used to identify the underlying causes of a problem, rather than just addressing its symptoms. One popular technique within root cause analysis is the “5 Whys” method. This involves asking “why” multiple times (traditionally five) until the fundamental cause of the problem is uncovered. This technique encourages deeper thinking and can reveal connections that aren’t immediately obvious. By addressing the root cause, solutions are more likely to be effective and long-lasting.
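The chain of “whys” can be pictured as walking down a recorded cause map. The sketch below uses the classic machine-breakdown example often quoted for this method; the cause map and function name are illustrative assumptions, not part of the article.

```python
# A minimal "5 Whys" helper: walk a hypothetical chain of causes,
# asking "why" up to five times until no deeper cause is recorded.

def five_whys(problem, causes, max_depth=5):
    """Follow the cause chain from `problem`; return (root cause, chain)."""
    current = problem
    chain = []
    for _ in range(max_depth):
        deeper = causes.get(current)
        if deeper is None:
            break  # no deeper cause recorded: stop early
        chain.append(deeper)
        current = deeper
    return current, chain

# Illustrative cause map (the classic stopped-machine example):
causes = {
    "machine stopped": "circuit overloaded",
    "circuit overloaded": "bearing not lubricated",
    "bearing not lubricated": "lubrication pump not working",
    "lubrication pump not working": "pump filter clogged",
    "pump filter clogged": "no maintenance schedule",
}
root, chain = five_whys("machine stopped", causes)
print(root)  # the fundamental cause after five "whys"
```

In practice the value of the technique lies in the questioning conversation itself; the code only illustrates the structure of the chain it produces.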


Mind Mapping

A visual technique in which the central problem is placed in the middle of a diagram and related ideas radiate outwards as branches and sub-branches. Mind maps help to organise information, reveal relationships between aspects of a problem, and stimulate further ideas.

Each of these tools and techniques can be adapted to different types of problems and situations. Effective problem solvers often use a combination of these methods, depending on the nature of the problem and the context in which it exists. By leveraging these tools, one can enhance their ability to dissect complex problems, generate creative solutions, and implement effective strategies to address challenges.

Developing Problem-Solving Skills

Developing problem-solving skills is a dynamic process that hinges on both practice and introspection. Engaging with a diverse array of problems enhances one’s ability to adapt and apply different strategies. This exposure is crucial as it allows individuals to encounter various scenarios, ranging from straightforward to complex, each requiring a unique approach. Collaborating with others in teams is especially beneficial. It broadens one’s perspective, offering insights into different ways of thinking and approaching problems. Such collaboration fosters a deeper understanding of how diverse viewpoints can contribute to more robust solutions.

Reflection is equally important in the development of problem-solving skills. Reflecting on both successes and failures provides valuable lessons. Successes reinforce effective strategies and boost confidence, while failures are rich learning opportunities that highlight areas for improvement. This reflective practice enables one to understand what worked, what didn’t, and why.

Critical thinking is a foundational skill in problem-solving. It involves analyzing information, evaluating different perspectives, and making reasoned judgments. Creativity is another vital component. It pushes the boundaries of conventional thinking and leads to innovative solutions. Effective communication also plays a crucial role, as it ensures that ideas are clearly understood and collaboratively refined.

In conclusion, problem-solving is an indispensable skill set that blends analytical thinking, creativity, and practical implementation. It’s a journey from understanding the problem to applying a solution and learning from the outcome.

Whether dealing with simple or complex issues, or structured or unstructured challenges, the essence of problem-solving lies in a methodical approach and the effective use of various tools and techniques. It’s a skill that is honed over time, through experience, reflection, and the continuous development of critical thinking, creativity, and communication abilities. In mastering problem-solving, one not only addresses immediate issues but also builds a foundation for future challenges, leading to more innovative and effective outcomes.


Q: What are the key steps in the problem-solving process?

A: The problem-solving process involves several key steps: identifying the problem, defining it clearly, analyzing it to understand its root causes, generating a range of potential solutions, evaluating and selecting the most viable solution, implementing the chosen solution, and finally, reviewing and reflecting on the effectiveness of the solution and the process used to arrive at it.
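The seven steps in this answer can be sketched as a simple pipeline. The step names and no-op handlers below are placeholders for illustration, not an established API.

```python
# The seven steps of the problem-solving cycle as a skeleton pipeline.
# Step names and the no-op handlers are hypothetical stand-ins.

STEPS = ["identify", "define", "analyse", "generate_solutions",
         "evaluate_and_select", "implement", "review_and_reflect"]

def run_cycle(problem, handlers):
    """Apply each step's handler in order, threading the state through."""
    state = {"problem": problem, "log": []}
    for step in STEPS:
        state = handlers[step](state)
        state["log"].append(step)  # record which steps were executed
    return state

handlers = {step: (lambda state: state) for step in STEPS}  # no-op stand-ins
result = run_cycle("late deliveries", handlers)
print(result["log"])  # the steps executed, in order
```

Real processes, as noted elsewhere in this document, loop back and skip steps; a strictly sequential pipeline like this is only the idealised skeleton.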

Q: How can brainstorming be effectively used in problem-solving?

A: Brainstorming is effective in the solution generation phase of problem-solving. It involves gathering a group and encouraging the free flow of ideas without immediate criticism. The goal is to produce a large quantity of ideas, fostering creative thinking. This technique helps in uncovering unique and innovative solutions that might not surface in a more structured setting.

Q: What is SWOT Analysis and how does it aid in problem-solving?

A: SWOT Analysis is a strategic planning tool used to evaluate the Strengths, Weaknesses, Opportunities, and Threats involved in a situation. In problem-solving, it aids by providing a clear understanding of the internal and external factors that could impact the problem and potential solutions. This analysis helps in formulating strategies that leverage strengths and opportunities while mitigating weaknesses and threats.

Q: Why is it important to understand the nature of a problem before solving it?

A: Understanding the nature of a problem is crucial as it dictates the approach for solving it. Problems can be simple or complex, structured or unstructured, and each type requires a different strategy. A clear understanding of the problem’s nature helps in applying the appropriate methods and tools for effective resolution.

Q: How does reflection contribute to developing problem-solving skills?

A: Reflection is a critical component in developing problem-solving skills. It involves looking back at the problem-solving process and the implemented solution to assess what worked well and what didn’t. Reflecting on both successes and failures provides valuable insights and lessons, helping to refine and improve problem-solving strategies for future challenges. This reflective practice enhances one’s ability to approach problems more effectively over time.


Daniel Croft is a seasoned continuous improvement manager with a Black Belt in Lean Six Sigma. With over 10 years of real-world application experience across diverse sectors, Daniel has a passion for optimizing processes and fostering a culture of efficiency. He's not just a practitioner but also an avid learner, constantly seeking to expand his knowledge. Outside of his professional life, Daniel has a keen interest in investing, statistics, and knowledge-sharing, which led him to create the website learnleansigma.com, a platform dedicated to Lean Six Sigma and process improvement insights.


A descriptive phase model of problem-solving processes

  • Original Paper
  • Open access
  • Published: 09 March 2021
  • Volume 53 , pages 737–752, ( 2021 )


  • Benjamin Rott   ORCID: orcid.org/0000-0002-8113-1584 1 ,
  • Birte Specht 2 &
  • Christine Knipping 3  


Abstract

Complementary to existing normative models, in this paper we suggest a descriptive phase model of problem solving. Real, not ideal, problem-solving processes contain errors, detours, and cycles, and they do not follow a predetermined sequence, as is presumed in normative models. To represent and emphasize the non-linearity of empirical processes, a descriptive model seemed essential. The juxtaposition of models from the literature and our empirical analyses enabled us to generate such a descriptive model of problem-solving processes. For the generation of our model, we reflected on the following questions: (1) Which elements of existing models for problem-solving processes can be used for a descriptive model? (2) Can the model be used to describe and discriminate different types of processes? Our descriptive model allows one not only to capture the idiosyncratic sequencing of real problem-solving processes, but simultaneously to compare different processes, by means of accumulation. In particular, our model allows discrimination between problem-solving and routine processes. Also, successful and unsuccessful problem-solving processes as well as processes in paper-and-pencil versus dynamic-geometry environments can be characterised and compared with our model.



1 Introduction

Problem solving (PS)—in the sense of working on non-routine tasks for which the solver knows no previously learned scheme or algorithm designed to solve them (cf. Schoenfeld, 1985 , 1992b )—is an important aspect of doing mathematics (Halmos, 1980 ) as well as learning and teaching mathematics (Liljedahl et al. 2016 ). As one of several reasons, PS is used as a means to help students learn how to think mathematically (Schoenfeld, 1992b ). Hence, PS is part of mathematics curricula in almost all countries (e.g., KMK, 2004 ; NCTM, 2000 , 2014 ). Accordingly, PS has been a focus of interest of researchers for several decades, Pólya ( 1945 ) being one of the most prominent scholars interested in this activity.

Problem-solving processes (PS processes) can be characterised by their inner or their outer structure (Philipp, 2013 , pp. 39–40). The inner structure refers to (meta)cognitive processes such as heuristics, checks, or beliefs, whereas the outer structure refers to observable actions that can be characterised in phases like ‘understanding the problem’ or ‘devising a plan’, as well as the chronological sequence of such phases in a PS process. Our focus in this paper is on the outer structure, as it is directly accessible to teachers and researchers via observation.

In the research literature, there are various characterisations of PS processes. However, almost all of the existing models are normative , which means they represent idealised processes. They characterise PS processes according to distinct phases, in a predetermined sequence, which is why they are sometimes called ‘prescriptive’ instead of normative. These phases and their sequencing have been formulated as a norm for PS processes. Normative models are generally used as a pedagogical tool to guide students’ PS processes and to help them to become better problem solvers. The normative models in current research have mostly been derived from theoretical considerations. Nevertheless, real PS processes look different; they contain errors, detours, and cycles, and they do not follow a predetermined sequence. Actual processes like these are not considered in normative models. Accordingly, there are almost no models that guide teachers and researchers in observing, understanding, and analysing PS processes in their ‘non-smooth’ occurrences (cf. Fernandez et al. 1994 ; Rott, 2014 ). Our aim in this paper, therefore, is to address this research gap by suggesting a descriptive model.

A descriptive model enables not only the representation of real PS processes, but also reveals additional potential for analyses. Our model allows one systematically to compare several PS processes simultaneously by means of accumulation, which is an approach that to our knowledge has not been proposed before in the mathematics education community. In Sect.  6 , we show how this approach can be used to reveal ‘bumps and bruises’ of real students’ PS processes to illustrate the practical value of our descriptive model (Sect.  5.3 , Fig.  5 ). We show how our model allows one to discriminate problem-solving processes from routine processes when students work on tasks. We illustrate how differences between successful and unsuccessful processes can be identified using our model. We also reveal how students’ PS processes, working in a paper-and-pencil environment compared to working in a digital (dynamic geometry) environment , can be characterised and compared by means of our model.
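The accumulation idea — superimposing many coded processes so that they can be compared — can be sketched as counting phase-to-phase transitions across processes. The phase labels and toy sequences below are illustrative assumptions, not the authors' data or coding scheme.

```python
# Sketch of "accumulation": superimposing several coded PS processes by
# counting phase-to-phase transitions. Labels and sequences are toy data.

from collections import Counter

def accumulate_transitions(processes):
    """Count each (phase, next_phase) transition over all coded processes."""
    counts = Counter()
    for phases in processes:
        counts.update(zip(phases, phases[1:]))  # consecutive phase pairs
    return counts

processes = [
    ["analysis", "exploration", "implementation", "verification"],
    ["analysis", "exploration", "analysis", "exploration", "implementation"],
    ["analysis", "implementation", "verification"],
]
counts = accumulate_transitions(processes)
# Frequent backward transitions (e.g. exploration -> analysis) would
# indicate non-linear, real problem solving rather than routine work.
print(counts[("analysis", "exploration")])
```

Accumulated transition counts like these make it possible to compare groups of processes at a glance, rather than only describing single cases.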

Our descriptive model is based on intertwining theoretical considerations, in the form of a review of existing models, as well as on a video-study researching the processes shown by mathematics pre-service teachers working on geometrical problems.

2 Theoretical background

In this section, we first describe and compare aspects of existing models of PS processes (which are mostly normative) to characterize their potential and their limitations for analysing students’ PS processes (2.1). We then discuss why looking specifically at students’ PS processes in geometry and in dynamic geometry contexts is of particular value for developing a descriptive model of PS processes (2.2).

2.1 Models of problem-solving processes

Looking at models from mathematics, mathematics education, and psychology that describe the progression of PS processes, we find phase models, evolved by authors observing their own PS processes or those of people with whom the authors are familiar. So, the vast majority of existing PS process models are not based on ‘uninvolved’ empirical data (e.g., videotaped PS processes of students); they were actually not designed for the analysis of empirical data or to describe externally observed processes, which emphasises the need for a descriptive model.

2.1.1 Classic models of problem-solving processes

Two ‘basic types’ of phase models for PS processes have evolved in psychology and mathematics education. Any further models can be assigned to one or the other of these basic types: (1) the intuitive or creative type and (2) the logical type (Neuhaus, 2002 ).

Intuitive or creative models of PS processes originate in Poincaré’s ( 1908 ) introspective reflection on his own PS processes. Building on his thoughts, the mathematician Hadamard ( 1945 ) and the psychologist Wallas ( 1926 ) described PS processes with a particular focus on subconscious activities. Their ideas are most often summarised in a four-phase model: (i) After working on a difficult problem for some time and not finding a solution ( preparation ), (ii) the problem solver does and thinks of different things ( incubation ). (iii) After some more time—hours, or even weeks—suddenly, a genius idea appears ( illumination ), providing a solution or at least a significant step towards a solution of the problem; (iv) this idea has to be checked for correctness ( verification ).

So-called logical models of PS processes were introduced by Dewey ( 1910 ), describing five phases: (i) encountering a problem ( suggestions ), (ii) specifying the nature of the problem ( intellectualization ), (iii) approaching possible solutions ( the guiding idea and hypothesis ), (iv) developing logical consequences of the approach ( reasoning (in the narrower sense) ), and (v) accepting or rejecting the idea by experiments ( testing the hypothesis by action ). Unlike in Wallas’ model, there are no subconscious activities described in Dewey’s model. Pólya’s ( 1945 ) famous four-phase model—(i) understanding the problem , (ii) devising a plan , (iii) carrying out the plan , and (iv) looking back —manifests, according to Neuhaus ( 2002 ), references to Dewey’s work.

Research in mathematics education mainly focuses on logical models for describing PS processes, following Pólya or more recent variants of his model (see below). This is due to the fact that PS processes of the intuitive or creative type might take hours, days, or even weeks to allow for genuine incubation phases, and PS activities in the context of schooling and university teaching are mostly shorter and more contained. Therefore, we focus on logical models. In the following, we compare prominent PS process phase models that emerged in the last decades (see Fig.  1 ).

Figure 1: Different phase models of problem-solving processes

2.1.2 Recent models of problem-solving processes

In Fig.  1 , different models are presented (for more details see the appendix). These build on and alter distinct aspects of Pólya’s model, especially envisioned phases and possible transitions between these phases. They mark this distinction by using different terminology for these nuanced differences in the phases. The models by Mason et al. ( 1982 ), Schoenfeld ( 1985 , Chapter 4), and Wilson et al. ( 1993 ; Fernandez et al. 1994 ) are normative; they are mostly used for teaching purposes, that is, to instruct students in becoming better problem solvers. Compared to actual PS processes, these models comprise simplifications; looking at and analysing students’ PS processes requires models which are suited to portray these uneven and cragged processes.

In several studies, actual PS processes are analysed; however, only a few of these studies use any of these normative models that describe the outer structure of PS processes. Even fewer studies present a descriptive model as part of their results. Some of the rare studies that attempt to derive such a model are presented in more detail in the appendix; their essential ideas are presented below (Artzt & Armour-Thomas, 1992 ; Jacinto & Carreira, 2017 ; Yimer & Ellerton, 2010 ).

2.1.3 Comparing models of problem-solving processes

In this section, we compare the previously mentioned as well as additional phase models with foci on (a) the different types of phases and (b) linearity or non-linearity of the portrayed PS processes. Figure  1 illustrates similarities and differences in these models, starting with those of Dewey ( 1910 ) and Pólya ( 1945 ) as these authors were the first to suggest such models. Schoenfeld ( 1985 ) and Mason et al. ( 1982 ) introduced this discussion to the mathematics education community, referring back to ideas of Pólya. Then, we discuss those of Wilson et al. ( 1993 ), and Yimer and Ellerton ( 2010 ), as examples of more recent models in mathematics education.

Different types of phases

The presented models comprise three, four, or more different phases. However, we do not think that this number is important per se; instead, it is interesting to see which activities are encompassed in the phases of the different models, and the extent and manner in which they follow Pólya’s formulation, adopt it, or go beyond his ideas. In Fig. 1, we indicated Pólya’s phases with differently patterned layers in the background.

Dewey’s ( 1910 ) model starts with a phase (named “suggestions”) in which the problem solvers come into contact with a problem without already analysing or working on it. Such a phase is seldom found in phase models in the context of mathematics education. In mathematics itself, though, this initial phase is typical and important, as Dewey already pointed out. In the context of teaching, on the other hand, PS mostly starts with a task handed to the students by their teachers. Analysing and working on the problem is expected right from the beginning; this is part of the nature of the provided task. So, in educational research the phase of “suggestions” is rarely mentioned, as it normally does not occur in students’ PS processes.

“Understanding the problem”, Pólya’s ( 1945 ) first phase, is comparable to the second phase (“intellectualization”) of Dewey’s model. In this phase, problem solvers are meant to make sense of the given problem and its conditions. Such a phase is used in all models, though often labelled slightly differently (see Fig. 1 for a juxtaposition). Artzt and Armour-Thomas ( 1992 ), facing the empirical data of their study, differentiated this phase of “understanding the problem” into a first step, where students are meant to apprehend the task (“understanding”), and a second step, where students are actually expected to comprehend the problem (“analysing”); a similar differentiation is presented by Jacinto and Carreira ( 2017 ) into “grasping, noticing” and “interpreting” a problem.

The next two phases incorporate the actual work on the problem. Pólya describes these phases as “devising” and “carrying out a plan”. Especially the planning phase encompasses many different activities, such as looking for similar problems or generalizations. These two phases are also integral parts of the models by Wilson et al. ( 1993 ), and Yimer and Ellerton ( 2010 ) (see Fig.  1 ), or Jacinto and Carreira ( 2017 , there called “plan” and “create”). Mason et al. ( 1982 ) chose to combine both phases, calling this combined phase “attack”. According to their educational and research experience, they noted that both phases cannot be distinguished in most cases; therefore, a differentiation would not be helpful for learning PS and describing PS processes. Schoenfeld ( 1985 ), on the contrary, further differentiated those phases by splitting Pólya’s second phase into a structured “planning” (or “design”) phase and an unstructured “exploration” phase. When “planning”, one might adopt a known procedure or try a combination of known procedures in a new problem context. However, when known procedures do not help, working heuristically (e.g., looking at examples, counter-examples, or extreme cases) might be a way to approach the given problem in “exploration” (Schoenfeld, 1985 , p. 106). According to Schoenfeld, exploration is the “heuristic heart” of PS processes.

The last phase in Pólya’s model is “looking back”, the moment when a solution should be checked, other approaches should be explored, and methods used should be reflected upon. This phase is also present in other models (see Fig.  1 ). In their empirical approach, Yimer and Ellerton ( 2010 ), for example, differentiated this phase into two steps, namely, “evaluation” (i.e., checking the results), which refers to looking back on the recently solved problem, and “internalization” (i.e., reflecting the solution and the methods used), which focuses on what has been learnt by solving this problem and looks forward to using this recent experience for solving future problems. Jacinto and Carreira ( 2017 ) used the same “verifying” phase as Pólya, but added a “disseminating” phase for presenting solutions, as their final phase.

Other researchers (see the appendix) came to insights similar to those of these researchers, using slightly different terminologies when describing these phases or combinations of these phases.

Sequence of phases: linear or non-linear problem-solving processes

Other important aspects are transitions from one phase to another, and how such transitions occur. The graphical representations of different models in Fig.  1 not only indicate slightly different phases (and distinct labels for these phases), but also illustrate different understandings of how these phases are related and sequenced.

There are strictly linear models like Pólya’s ( 1945 ), which outline four phases that should be passed through when solving a problem, in the given order. Of course, Pólya as a mathematician knew that PS processes are not always linear; in his normative model, however, he proposed such a stepwise procedure, which has often been criticised (cf. Wilson et al. 1993 ). Mason et al. ( 1982 ) and Schoenfeld ( 1985 ) discarded this strict linearity, including forward and backward steps between analysing, planning, and exploring (or attacking, respectively) a problem. Thereafter, PS processes linearly proceed towards the looking back equivalents of their models. Wilson et al. ( 1993 ) presented a fully “dynamic, cyclic interpretation of Polya’s stages” (p. 60) and included forward and backward steps between all phases, even after “looking back”. The same is true for Yimer and Ellerton ( 2010 ), who included transitions between all phases in their model.
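A strictly linear model, in this reading, means that an observed phase sequence never steps backwards relative to the normative order. A minimal sketch of such a linearity check follows, with phase labels loosely paraphrasing Pólya's four phases (the labels and function are assumptions for illustration):

```python
# Checking whether an observed phase sequence follows a strictly linear
# normative model. Phase labels loosely paraphrase Pólya's four phases.

POLYA_ORDER = ["understanding", "planning", "carrying_out", "looking_back"]

def is_linear(sequence, order=POLYA_ORDER):
    """True if the phases occur in non-decreasing normative order."""
    ranks = {phase: i for i, phase in enumerate(order)}
    indices = [ranks[p] for p in sequence]
    return all(a <= b for a, b in zip(indices, indices[1:]))

print(is_linear(["understanding", "planning", "carrying_out", "looking_back"]))
print(is_linear(["understanding", "carrying_out", "planning"]))  # backward step
```

Under a fully dynamic model such as that of Wilson et al. ( 1993 ), every transition is admissible, so a check like this would classify almost any real process as "non-linear" — which is precisely the empirical point.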

As we illustrate later, transitions from one phase to another reflect also characteristic features of routine and non-routine processes in general, and can be also distinctive for students’ PS processes in traditional paper-and-pencil environments compared to Dynamic Geometry Software (DGS) contexts. Our descriptive model of PS processes, which we present in Sect.  5 , also evolved by comparative analyses of students’ PS processes in both learning contexts. Thus, we comment briefly in 2.2 on what existing research has found in this respect so far.

2.2 Problem solving in geometry and dynamic geometry software

Overall, geometry is especially suited for learning mathematical PS in general and PS strategies or heuristics in particular (see Schoenfeld, 1985 ). Notably, many geometric problems can be illustrated in models, sketches, and drawings, or can be solved looking at special cases or working backwards (ibid.). Additionally, the objects of action (at least in Euclidean geometry) and the permitted actions (e.g., constructions with compasses and ruler) are easy to understand. Therefore, in our empirical study (see Sect.  4 ), we opted for PS processes in geometry contexts, knowing that other contexts could be equally fruitful.

One particular tool to support learning and working in the context of geometry, since the 1980s, is DGS, which is characterised by three features, namely, dragmode, macro-constructions, and locus of points (Sträßer, 2002 ). With these features, DGS can be used not only for verification purposes, but also for guided discoveries as well as working heuristically (e.g., Jacinto & Carreira, 2017 ). However, as Gawlick ( 2002 ) pointed out, to profit from such an environment, students—especially low achievers—need some time to get accustomed to handling the software. Comparing DGS and paper-and-pencil environments, Koyuncu et al. ( 2015 ) observed that in a study with two pre-service teachers, “[b]oth participants had a tendency toward using algebraic solutions in the [paper-and-pencil based] environment, whereas they used geometric solutions in the [DGS based] environment.” (p. 857 f.). These potential differences between PS processes in paper-and-pencil versus DGS environments are interesting for research and practice. Therefore, we compared students’ PS processes in these two environments in our empirical study.

3 Research questions

With regard to research on PS processes, it is striking that there is only a small number of studies, often with a low number of participants, that present and apply a descriptive model of PS processes. Further, the identified models are not suited for comparing PS processes across groups of students, but can only describe cases. Last but not least, in most empirical studies, the selection of phases that are included, and the assumption of (non-)linearity, are not discussed and/or justified. In all these respects, we see a research gap. Contributing to filling this gap was one of the motivations for the study presented here. Based on the existing research literature, we formulated two main research questions:

What elements of the already discussed PS process models can be used for a descriptive model? In particular, what is necessary so that such a descriptive model enables

a recognition of types of phases and an identification of phases in actual PS processes as well as

an identification of the sequence (i.e., the order, linear or non-linear) of phases and transitions between phases?

Can the model be used to describe and discriminate among different types of PS processes, for example

routine and non-routine processes,

successful and not successful processes, or

paper-and-pencil vs. DGS processes?

These questions guided our study and the motivation for developing a descriptive model of PS processes. Next, we present the methodology, before we discuss results of our empirical study and present our model.

4 Methodology

In a previous empirical study, we looked at PS processes of pre-service teacher students in geometry contexts. The data in this study were enormously rich and challenged us in their analyses in many ways. Existing PS models did not allow us to fully harvest this rich data corpus, and we realised that with respect to our empirical data, we needed a descriptive model. So we formulated the research questions listed above in order to explore the potential and necessary extensions of the existing normative PS process models. We changed our perspective and focused on the development of an empirically grounded theoretical model. We required an approach that would allow us to mine the data of our empirical study and to provide a conceptualisation that could be helpful for further research on students’ problem-solving processes. The methodological approach we used is described in the following.

4.1 Our empirical study

About 250 pre-service teacher students attended a course on Elementary Geometry , which was conceived and conducted by the third author at a university in Northern Germany. The course lasted for one semester (14 weeks); each week, a two-hour lecture for all students as well as eight 2-h tutorials for up to 30 students each, supervised by tutors (advanced students), took place. Four tutorials (U1, Ulap2, U3, and Ulap4) were involved in this study: in U1 and U3 the students worked in a paper-and-pencil environment, in Ulap2 and Ulap4 the students used laptop computers to work in a DGS environment. (The abbreviations consist of U, the first letter of ‘Uebung’, German for tutorial, with an added ‘lap’ for groups which used laptop computers, as well as an individual number.) Students worked on weekly exercises, which were discussed in the tutorials. In addition, over the course of the semester, in groups of three or four, the students worked on five geometric problems in the tutorials (approx. 45 min for each problem), accompanied by as little tutor help as possible. In this paper, we focus on these five problems. See the appendix for additional information regarding the organisation of our study.

The five problems were chosen so that students had the opportunity to solve a variety of non-routine tasks, which at the same time did not require too much advanced knowledge that students might not have.

For each of the five problems, two groups from each of the four tutorials were observed. Each problem was therefore worked on by four groups with and four groups without DGS (minus some data loss because of students missing tutorials or technical difficulties). The collected data were videos of the groups working on these problems (processes), notes by the students (products), as well as observers’ notes. Overall, 33 processes (15 from paper-and-pencil and 18 from DGS groups) from all five problems, with a combined duration of 25 h, were analysed. For space reasons, we cannot discuss all five problems in detail here; instead, we present three of them, while the other two can be found in the appendix.

4.1.1 The problems

Regarding the ‘Shortest Detour’ problem (Fig. 2, top), as long as A and B are on different sides of the straight line g, the line segment from A to B is the shortest way. When A and B are on the same side of g, an easy (though not the only) way to solve this problem is to reflect one of the points, e.g. A, across g and then construct the line segment from the reflection of A to B, as reflections preserve lengths.

figure 2

Three of the five problems used in our study
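The reflection argument for the ‘Shortest Detour’ problem can be checked numerically. The following sketch uses illustrative coordinates of our own choosing (not data from the study), with g taken as the x-axis:

```python
import math

def detour_length(p, a, b):
    """Length of the path a -> p -> b."""
    return math.dist(a, p) + math.dist(p, b)

# Illustrative points, both on the same side of g (the x-axis).
A, B = (1.0, 2.0), (4.0, 3.0)
A_reflected = (A[0], -A[1])  # reflect A across g

# Optimal crossing point: where the segment from A_reflected to B meets g.
t = -A_reflected[1] / (B[1] - A_reflected[1])
P_opt = (A_reflected[0] + t * (B[0] - A_reflected[0]), 0.0)

# Since reflections preserve lengths, the shortest detour equals |A'B|.
best = detour_length(P_opt, A, B)
assert math.isclose(best, math.dist(A_reflected, B))

# Any other crossing point on g yields a longer (or equal) detour.
for x in range(-5, 11):
    assert detour_length((float(x), 0.0), A, B) >= best - 1e-12
```

The check confirms that the straight path from the reflected point determines the shortest detour for these sample coordinates.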

Part a) of the ‘Three Beaches’ problem (Fig. 2, bottom), finding the incircle of an equilateral triangle, should be a routine procedure, as this topic had been discussed in the lecture. Students working on part b) of this problem needed to realize that in an equilateral triangle, all points have the same sum of distances to the sides (Viviani’s theorem). This could be justified by showing that the three perpendiculars from a point to the sides of such a triangle add up to the height of the triangle, for example by geometrical addition or by calculating areas.
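The property underlying part b) can likewise be verified numerically; the triangle and the random interior points below are our own illustration, not study data:

```python
import math
import random

def dist_point_line(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    return abs((bx - ax) * (ay - py) - (ax - px) * (by - ay)) / math.dist(a, b)

s = 2.0                      # side length of the equilateral triangle
h = s * math.sqrt(3) / 2     # its height
A, B, C = (0.0, 0.0), (s, 0.0), (s / 2, h)

random.seed(0)
for _ in range(100):
    # Random interior point via barycentric coordinates.
    u, v = sorted(random.random() for _ in range(2))
    l1, l2, l3 = u, v - u, 1 - v
    P = (l1 * A[0] + l2 * B[0] + l3 * C[0],
         l1 * A[1] + l2 * B[1] + l3 * C[1])
    total = (dist_point_line(P, A, B) + dist_point_line(P, B, C)
             + dist_point_line(P, C, A))
    # The three distances always sum to the height of the triangle.
    assert math.isclose(total, h, rel_tol=1e-9)
```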

Like Problem (4), Problem (3) (Fig.  2 , middle) contained an a)-part which is a routine task—finding the circumcircle of a (non-regular) triangle—and a b)-part that constitutes a problem for the students.

These tasks were chosen because they actually represented problems for our students, and because the expected PS processes appeared neither too long nor too short, both for a reasonable student workload and for our analyses. Further, the problems covered the content of the accompanying lecture and could be solved both with and without DGS.

Differences between working with and without DGS: with DGS, many examples can be generated quickly, so that an overview of the situation and the solution can be obtained in a short time. For the justifications, however, students had to reflect, think, and reason to find appropriate arguments, both with and without DGS.

4.2 Framework for the analysis of the empirical data

For the analyses of our students’ PS processes, we used the protocol analysis framework by Schoenfeld ( 1985 , Chapter 9) with adaptations and operationalizations by Rott ( 2014 ), following two phases of coding.

Process coding: With his framework, Schoenfeld ( 1985 ) intended to “identify major turning points in a solution. This is done by parsing a [PS process] into macroscopic chunks called episodes” (p. 314). An episode is “a period of time during which an individual or a problem-solving group is engaged in one large task […] or a closely related body of tasks in the service of the same goal […]” (p. 292). Please note, the term “episode” refers to coded process data, whereas “phase” refers to parts of PS models. Schoenfeld (p. 296) continued: “Once a protocol has been parsed into episodes, each episode is characterized” using one of six categories (see also Schoenfeld, 1992a , p. 189):

Reading or rereading the problem.

Analysing the problem (in a coherent and structured way).

Exploring aspects of the problem (in a much less structured way than in Analysis).

Planning all or part of a solution.

Implementing a plan.

Verifying a solution.

According to Schoenfeld ( 1985 ), Planning-Implementation can be coded simultaneously.

The idea of episodes as macroscopic chunks implies a certain length, thus individual statements do not comprise an episode; for example, quickly checking an interim result is not coded as a verification episode. Also, PS processes are coded by watching videos, not by reading transcripts (Schoenfeld, 1992a ).

Schoenfeld’s framework was chosen to answer our first research question for two reasons: (i) the episode types he proposed cover much of the variability of phases that we also identified (see Sect. 2.1.3); (ii) coding episodes and coding episode types in independent steps offers the possibility of inductively adding new types of episodes.

After parsing a PS process into episodes, we coded the episodes with Schoenfeld’s categories (deductive categories), but also generated new episode types to characterize these episodes (inductive categories). While coding, we observed initial difficulties in coding the deductive episodes reliably; especially differentiating between Analysis and Exploration episode types was difficult (as predicted by Schoenfeld, 1992a , p. 194). We noticed that Schoenfeld’s ( 1985 , Chapter 9) empirical framework referred to his theoretical model of PS processes (ibid., Chapter 4) which was based on Pólya’s ( 1945 ) list of questions and guidelines. Recognizing an analogy between Schoenfeld’s framework and Pólya’s work (see Fig.  1 ), we were able to operationalize their descriptions in a coding manual (see Rott, 2014 ).

When the deductive episode types did not fit our observations, we inductively added a new episode type. This happened three times. Especially in the DGS environment, where students showed behaviour that was not directly related to solving the task, new types of activities occurred. For example, students talked about the software and how to use it. This kind of behaviour was coded by us as Organization . When it took students more than 30 s to write down their findings (without developing any new results or ideas), this episode was coded as Writing . Discussions about things which were not related to mathematics, but for example daily life, were coded as Digression . These codings were used only when activities did not align with numbers 2–6 of Schoenfeld’s list.

This coding of the videotapes was done independently by different research assistants and the first author. We then applied the “percentage of agreement” ( P A ) approach to compute the interrater-agreement as described in the TIMSS 1999 video study (Jacobs et al. 2003 , pp. 99–105), gaining more than P A  = 0.7 for parsing PS processes into episodes and more than P A  = 0.85 for characterizing the episode types. More importantly, every process was coded by at least two raters. Whenever those codes did not coincide, we attained agreement by recoding together (as in Schoenfeld’s study, 1992a , p. 194).

Product coding : To be able to compare successful and unsuccessful PS processes, students’ products produced in the 45-min sessions were rated. Because the focus was on processes, product rating finally was reduced to a dichotomous right/wrong coding without going into detail regarding students’ argumentations (these will be analysed and the results reported in forthcoming papers). Rating was done independently by a research assistant and the first author with an interrater-agreement of Cohen’s kappa > 0.9. Differing cases were discussed and recoded consensually.
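As a minimal sketch of the two agreement measures mentioned above, the following computes a simple percentage of agreement and Cohen’s kappa for a dichotomous right/wrong coding; the ratings shown are hypothetical, not the study’s data:

```python
from collections import Counter

def percentage_agreement(r1, r2):
    """Share of cases in which both raters assigned the same code."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Agreement corrected for chance, based on the raters' marginals."""
    n = len(r1)
    p_o = percentage_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: product of the raters' marginal proportions.
    p_e = sum(c1[cat] * c2[cat] for cat in set(r1) | set(r2)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical dichotomous product codings by two raters.
rater1 = ["right", "right", "wrong", "right", "wrong", "right"]
rater2 = ["right", "right", "wrong", "wrong", "wrong", "right"]

assert percentage_agreement(rater1, rater2) == 5 / 6
assert abs(cohens_kappa(rater1, rater2) - 2 / 3) < 1e-9
```

Kappa corrects the raw agreement for the agreement expected by chance, which is why it is the stricter of the two measures.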

5 Results of our empirical study and implications for our descriptive model

In this section, we briefly illustrate results of our data analyses, which underline the need to go beyond existing models. We summarize key findings of our empirical study and illustrate how these have contributed to the development of our descriptive model of PS processes. After this, we highlight how answering our research questions based on our theoretical and empirical analyses contributes to the development of our descriptive model. Finally, we present and describe our descriptive model.

5.1 Sample problem-solving processes and codings to illustrate the procedure of analysis

To illustrate our analyses and codings of students’ PS processes, we present three sample processes, the first two in detail and the third one only briefly. The first two were paper-and-pencil processes and stem from the same group of students, belonging to parts a) and b) of the ‘Three Beaches’ problem. The third process shows a group of students working on the ‘Shortest Detour’ problem with DGS. Our codings of the different episodes are highlighted in italics .

5.1.1 Group U1-C, Three Beaches (part a))

After reading the Three Beaches problem (00:25–01:30), the three students of group C from tutorial U1 try to understand it. They remember the Airport problem in which they had to find a point with the same distance to all three vertices of a triangle and they try to identify the differences between both problems. The students wonder whether they should again use the perpendicular bisectors of the sides of the triangle or the bisectors of the angles of the triangle ( Analysis , 01:30–05:05). They agree to use the bisectors of the angles and construct their solutions with compasses and ruler. One of the students claims that in the case of an equilateral triangle, perpendicular and angle bisectors would be identical and convinces the others by constructing a triangle and both bisectors with compasses and ruler ( Planning-Implementation , 05:05–06:05). Finally, the students verify their solution by discussing the meaning of the distance from a point to the sides of a triangle, as they initially were not sure how to measure this distance (06:05–07:40). Even though the Analysis episode was quite long (see Fig.  3 ), this part of the task was actually not a problem for the students as they remembered a way to solve it.

figure 3

Process codings of the group U1-C, working on the ‘Three Beaches’ problem

5.1.2 Group U1-C, three beaches (part b))

After reading part b) of the problem (07:45–07:55), the students discuss whether the requested point is the same as in a) ( Analysis , 07:55–10:25). They agree to try out and construct a triangle each, place points in it, draw perpendiculars to the sides, and measure the distances. One student asks whether it is allowed to place the point on a vertex and thus have two distances become zero ( Exploration , 10:25–15:30). After this, the students discuss the meaning of distance, particularly the meaning of a distance related to a side of a triangle. They agree that any point on a side, even the vertex, would satisfy the condition of the problem, thus being a suitable site for the ‘house’ ( Analysis , 15:30–16:40). The students wonder why the distance from one vertex to its opposing side (the height of the triangle) is as large as the sum of the distances from the centre of the incircle (from part a)). They remember that the angle bisectors intersect each other in a ratio of 1/3 to 2/3. Thereafter, they continue to place points in their triangles (not on sides) and measure their distances. They finally agree on the [wrong] hypothesis that any point on the angle bisectors is a point with a minimal sum of the distances to the sides; other points in the triangle would have a slightly larger sum [because of inaccuracies in their drawings]. They realize, however, that they cannot give any reasons for their solution ( Exploration , 16:40–32:30). The codings are represented in Fig.  3 (right).

5.1.3 Group Ulap2-TV, shortest detour

Ulap2-TV working on the shortest detour problem (Fig.  4 ) is an example of a process with more transitions. The students solve the first case of the problem ( A and B on different sides of g ) within 5 min ( Planning-Implementation , Verification ) and then explore the second case ( A and B on the same side of g ) for more than 17 min before solving the problem.

figure 4

Process coding of the group Ulap2-TV, working on the ‘Shortest Detour’ problem

We selected these three PS processes from our study as they are representative of our empirical data in several respects: they illustrate both learning environments (paper-and-pencil and DGS); they incorporate all types of episodes (except for Digression) and, therefore, all types of phases discussed in the PS research literature; and they include linear and cyclic progressions (see below). The routine process (Three Beaches, part a)) is rather atypical, as the students take a lot of time analysing the task before implementing routine techniques (Planning-Implementation). The two PS processes (Three Beaches, part b) and Shortest Detour) are typical of our students, who spend a lot of time in Exploration episodes. In the DGS environment, we see that the students take some time to handle the software (Organization). Compared to free-hand drawings in the paper-and-pencil environment, the students in the DGS environment need to think about constructions (Planning) before exploring the situation.

5.2 From theoretical models and empirical results to a descriptive model of problem-solving processes

In the following, the coded episodes from all 33 PS processes of our empirical study are used to answer the first research question: What parts or phases of the established models are suited to describing the analysed processes? Which transitions between phases can be observed? The systematic comparison of PS models from the literature (Sect. 2.1.3) is the theoretical underpinning for answering these questions. This process aims at generating a descriptive process model suitable for representing students’ actual PS processes.

5.2.1 Different types of episodes that are suited to describing empirical processes

Within the observed processes, all of Schoenfeld’s episode types could be identified with high interrater agreement. Thus, based on our data, we saw no need to merge phases like Understanding and Planning , even though some models suggest doing so.

More specifically, structured approaches of Planning could be differentiated from unstructured approaches which we call Explorations as suggested by Schoenfeld ( 1985 , Chapters 4 & 9) (in 6 out of 33 non-routine processes, both Exploration and Planning were coded).

Furthermore, in some processes, Planning and Implementation episodes can be differentiated from each other (as suggested by Pólya, 1945); there are, however, processes in which those two episode types cannot be distinguished, as the problem solvers often do not announce their plans (as predicted by Mason et al. 1982). In those PS processes, these two episode types are merged into Planning-Implementation (as Schoenfeld did as well).

Verification episodes are rare, but can be found in our data. As our students do not show signs of trying to reflect on their use of PS strategies, we decided not to distinguish this episode type into ‘checking’ and ‘reflection’.

Incubation and illumination could not be observed in our sample. This was expected as the students did not have the time to incubate.

Altogether, the following theoretically described phases could be identified in our empirical data and are part of our model: understanding (analysis), exploration, planning, implementation (sometimes as planning-implementation), and verification.

5.2.2 Transitions between phases: linearity and non-linearity of the processes

Apart from the phases that occur, the transitions between these phases are of interest. Transitions have been coded between nearly all possible ordered pairs of episode types. If the phases proceed according to Pólya’s or Schoenfeld’s model ( Analysis → Exploration → Planning → Implementation → Verification ), we consider this as a linear process. If phases are omitted within a process but this order is still intact we regard this process still as ‘linear’. In contrast, a process is considered by us as non-linear or cyclic, if this order is violated (e.g., Planning → Exploration ). We also checked whether non-linear processes are cyclic in the sense of Wilson et al. (backward steps are possible after all types of episodes), or whether they are cyclic in the sense of Schoenfeld and Mason et al. (backward steps only before Implementation ).

The first sample process (Three Beaches, part a) illustrates a strictly linear approach as in Pólya’s model, represented in the descending order of the time bars (Fig.  3 , left). The second example (Three Beaches, part b) shows a cyclic process as after the first Exploration , an Analysis was coded (Fig.  3 , right). The third example (Shortest Detour) starts in a linear way; then, after a first Verification , the students go back to Planning-Implementation and Exploration episodes. Thus, overall, their process is cyclic (and not in a way that would fit Schoenfeld’s model as the linear order is broken after a Verification ).

We checked all our process codings for their order of episodes (see Table 1 ). In our sample, a third of the processes are non-linear; thus, a strictly linear model is not suited to describing our students’ PS processes.

5.3 Deriving a model for describing problem-solving processes

Using the results of our empirical study as described in Sects.  5.1 and 5.2 , our findings result in a descriptive model of PS processes. We consider this model as an answer to our first research question. We identified phases from (mostly normative) models in our data, then empirically refined these phases, and took the relevance of their sequencing into account as illustrated in Fig.  5 .

figure 5

Descriptive model of problem-solving processes

In our descriptive model (see Fig.  5 ), we distinguish between structured ( Planning ) and unstructured ( Exploration ) approaches in accordance with the model of Schoenfeld ( 1985 ). It is also possible to differentiate between explicit planning ( Planning and Implementation coded separately) as well as implicit planning, which means (further) developing a plan while executing it ( Planning and Implementation coded jointly), as suggested by Mason et al. ( 1982 ). Our descriptive model combines ideas from different models in the literature. Furthermore, linear processes can be displayed (using only arrows that point downwards in the direction of the solution) as can non-linear processes (using at least one arrow that points upwards). Therefore, with this model, linear and non-linear PS processes can explicitly be distinguished from each other. Please note that we use ‘(verified) solution’ with a restriction in brackets, as not all processes lead to a verified or even correct solution. Our model is a model of the outer structure as it describes the observable sequence of the different phases.

In the following, we illustrate to what extent our descriptive model can also respond to our second research question. We use it to describe, as well as to distinguish, different types of PS processes.

6 Using our descriptive model to analyse problem-solving processes

Below, we illustrate how our descriptive model (Fig.  5 ) can be used to analyse and compare students’ PS processes. We first reconstruct different processes of student groups and then propose a new way to represent typical transitions in students’ PS processes.

6.1 Representing students’ problem-solving processes

In contrast to the process coding by Schoenfeld, which contains specific information about the duration of episodes, our analyses are more abstract. We focus on the empirically found types of episodes and transitions between these episodes. This is done following Schoenfeld ( 1985 ), who emphasised: “The juncture between episodes is, in most cases, where managerial decisions (or their absence) will make or break a solution” (p. 300). Focusing on the transitions between episodes is one important characteristic that distinguishes different types of PS processes. Using our descriptive model allows one to do this.

For each process, the transitions between episodes can be displayed with our model (Fig.  5 ). In the following, we consider only the five content-related episode types, but not Reading , Organization , Writing , and Digression, as activities of the latter types of episodes do not contribute to the solution and they are not ordered as in Pólya’s or Schoenfeld’s phases.

For example, the routine process of group U1-C (Three Beaches, part a), see Sect. 5.1) starts with an Analysis, followed by a merged Planning-Implementation and a Verification or, in short: [A,P-I,V]; thereafter, this process ends. This means that there are four different transitions in this process, indicated by arrows: Start → A, A → P-I, P-I → V, and V → End. Thus, in Fig. 6 (left), these transitions are illustrated with arrows. In this case, each transition occurs only once, which is indicated by a circled number 1.

figure 6

Translation from Schoenfeld codings to a representation using the descriptive model; the circled numbers indicate the number of times a transition occurs

The second example (U1-C, Three Beaches, part b)) consists of the following episodes: Analysis–Exploration–Analysis–Exploration [A,E,A,E]. This means that there are five transitions in this process: Start → A, A → E, E → A, A → E, and E → End (see Fig. 6, middle). Please note that the transition A → E is observed twice.

The final example shows group Ulap2-TV (Shortest Detour), which starts with a Planning-Implementation and proceeds through [P-I,V,P,E,P-I,V] with a total of seven transitions, two of which are P-I → V (ignoring Organization and Writing , Fig.  6 , right).

This reduction to transitions, neglecting the exact order and duration of episodes, enables a targeted comparison of processes and an accumulation of several PS processes (e.g., of all DGS processes; see Sect. 6.2). The focus is now on transitions and how often they occur, which indicates different types of PS processes, as shown below. This ‘translation’ from the Schoenfeld coding to the representation in our descriptive model was done for all 33 processes. The directions of the arrows indicate from which phase to which phase the transitions occur, e.g., from analysis to planning; the numbers on the arrows show how often these transitions were coded (they do not indicate an order).
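The ‘translation’ described above can be sketched as a small counting procedure; the episode abbreviations follow the text ([A,P-I,V] etc.), and Start/End stand for the given problem and the end of the process:

```python
from collections import Counter

def transitions(episodes):
    """Count transitions in an episode sequence, including Start and End."""
    path = ["Start"] + list(episodes) + ["End"]
    return Counter(zip(path, path[1:]))

# U1-C, Three Beaches, part a): four transitions, each occurring once.
assert sum(transitions(["A", "P-I", "V"]).values()) == 4

# U1-C, Three Beaches, part b): five transitions, A -> E occurring twice.
t = transitions(["A", "E", "A", "E"])
assert sum(t.values()) == 5 and t[("A", "E")] == 2

# Ulap2-TV, Shortest Detour: seven transitions, two of which are P-I -> V.
t = transitions(["P-I", "V", "P", "E", "P-I", "V"])
assert sum(t.values()) == 7 and t[("P-I", "V")] == 2
```

The three assertions reproduce the transition counts reported for the three sample processes.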

The three selected processes already show clearly different paths, for example, linear vs. cyclic (see Sect. 2.2.4).

6.2 Characterizing types of problem-solving processes by accumulation

Students’ PS processes can be successful or unsuccessful and conducted in paper-and-pencil or DGS contexts. Looking at different groups of students simultaneously can be fruitful, as such accumulations allow one to look for patterns in the transitions. Our descriptive model allows one to consider several processes at once, via accumulation.

Representations of single processes, as presented in Fig. 6 and in the boxes in Fig. 7, can be combined by adding up all coded transitions (which would be impossible with the time bars used by Schoenfeld). For such an accumulation, we count all transitions between types of episodes and display the counts next to the arrows representing those transitions. For example, six of the processes in the outer boxes start with a transition from the given problem to Planning, while one process begins with an Analysis. This is shown in the centre box by the numbers 6 and 1 on the arrows from the given problem to Planning and Analysis, respectively (see Fig. 7 for the combination of all processes regarding task 3a)). Arrows were drawn only where transitions actually occurred in this task. Looking at the arrows that start at the ‘given problem’ or lead to the ‘(verified) solution’, one can see how many processes were accumulated. Every episode type (small box) must have the same number of incoming and outgoing transitions.

figure 7

Centre rectangle: Accumulation of seven different group processes regarding task 3a)
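The accumulation step can be sketched in the same spirit; the episode sequences here are hypothetical stand-ins, not the actual codings of task 3a):

```python
from collections import Counter

def transitions(episodes):
    """Count transitions in an episode sequence, including Start and End."""
    path = ["Start"] + list(episodes) + ["End"]
    return Counter(zip(path, path[1:]))

# Hypothetical codings of several group processes (NOT the actual data).
processes = [
    ["P-I", "V"],
    ["P", "I"],
    ["A", "P-I"],
]

# Accumulate by simply adding the transition counts of all processes.
accumulated = Counter()
for episodes in processes:
    accumulated += transitions(episodes)

# As many processes start as end.
starts = sum(n for (src, _), n in accumulated.items() if src == "Start")
ends = sum(n for (_, dst), n in accumulated.items() if dst == "End")
assert starts == ends == len(processes)

# Each episode type has as many incoming as outgoing transitions.
for e in {e for p in processes for e in p}:
    incoming = sum(n for (_, dst), n in accumulated.items() if dst == e)
    outgoing = sum(n for (src, _), n in accumulated.items() if src == e)
    assert incoming == outgoing
```

The final loop checks the balance property stated in the text: every episode type is entered as often as it is left.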

To show the usability of our model, we distinguish between working on routine tasks and on problems in Sect.  6.2.1 ; thereafter, the routine processes are not further considered.

6.2.1 Routine vs. non-routine processes

In our study, two sub-tasks (3a) and 4a)) were routine tasks in which the students were asked to find special points in triangles. If we look at the accumulations of those processes in our model, clear patterns emerge: there are no Exploration episodes at all, neither in the seven processes of task 3a) (Fig. 8, left) nor in the eight processes of task 4a) (Fig. 8, middle). Instead, there are Planning and/or Implementation episodes in all 15 processes. In some of those processes, Planning and Implementation can clearly be coded as two separate episodes; in others, it is not possible to discriminate between these episode types as two distinct episodes in the empirical data (see Fig. 8).

figure 8

Accumulation of seven processes for the routine task 3a) (left) and eight processes for task 4a) (middle), 15 processes in total (right)

Most processes (12 out of 15) show no need for analysing the task but start directly with Planning and/or Implementation . Even though there are five Verification episodes, these verifications are often only short checking activities with no reflection in the sense of Pólya; however, the length and quality of an episode cannot be seen in the model. Additionally, all of these 15 processes are linear (as can be seen by the arrows, which point only downwards).

In contrast to these routine tasks, non-routine processes are often non-linear and contain at least one Exploration episode. In Fig. 9, in direct comparison to Fig. 8, the seven PS processes of problem 3b) (left), the eight PS processes of problem 4b) (middle), and an accumulation of all 15 PS processes (right) are shown. Overall, in these 15 processes, 17 Exploration episodes were coded, which can be seen in Fig. 9 (right): 4 processes start with an Exploration; 12 times there is an Exploration after an Analysis episode; and once there is one after a Planning-Implementation episode.

figure 9

Accumulation of seven processes for problem 3b) (left) and eight processes for problem 4b) (middle), 15 problem-solving processes in total (right)

In Fig. 10 (right), an accumulation of all 33 PS processes of all five problems is given. The differences between the routine and the PS processes (e.g., the latter containing Exploration episodes and being cyclic) can be seen by comparing Figs. 8 and 9.

figure 10

Accumulation of transitions in problem-solving processes, paper-and-pencil (left) vs. DGS (middle); all problem-solving processes (right)

6.2.2 Successful and unsuccessful problem-solving processes

One of Schoenfeld's ( 1985 ) major results was the importance of self-regulatory activities in PS processes. Schoenfeld was not able to characterize successful PS processes; however, he identified characteristics of processes that did not end in a verified solution. The unsuccessful problem solvers were most often those who missed out on self-regulatory activities (i.e., controlling interim results or planning next steps); they engaged in a behaviour that Schoenfeld called “wild goose chase” and that he described this way:

Approximately 60% of the protocols were of the type [...], where the students read the problem, picked a solution direction (often with little analysis or rationalization), and then pursued that approach until they ran out of time. In contrast, successful solution attempts came in a variety of shapes and sizes—but they consistently contained a significant amount of self-regulatory activity, which could clearly be seen as contributing to the problem solvers’ success. (Schoenfeld, 1992a , p. 195)

We made similar observations looking at the processes of our students; several of those who did not show any signs of structured actions or process evaluations were not able to solve the tasks. Thus, to test whether this observation was statistically significant, we had to operationalize the PS type “wild goose chase”, as Schoenfeld had provided no operational definition of this phenomenon. A process is considered by us to be a “wild goose chase” if it consists of only Exploration or Analysis & Exploration episodes, whereas processes that are not of this type contain planning and/or verifying activities (considering only content-related episode types). In our descriptive model, by definition, wild goose chase processes look like the process of group U1-C (Three Beaches, part b) (Fig. 6, middle).
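This operationalization can be written as a simple predicate (a sketch, using the episode abbreviations from the text):

```python
# Content-related episode types; Reading, Organization, Writing, and
# Digression are ignored, as in the text.
CONTENT_EPISODES = {"A", "E", "P", "I", "P-I", "V"}

def is_wild_goose_chase(episodes):
    """True if the content-related episodes consist only of Exploration,
    or of Analysis and Exploration."""
    content = [e for e in episodes if e in CONTENT_EPISODES]
    return "E" in content and set(content) <= {"A", "E"}

# U1-C, Three Beaches, part b): only Analysis and Exploration episodes.
assert is_wild_goose_chase(["A", "E", "A", "E"])
# Ulap2-TV contains Planning and Verification, hence not a wild goose chase.
assert not is_wild_goose_chase(["P-I", "V", "P", "E", "P-I", "V"])
```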

To check if the kind of behaviour in these processes is interrelated with success or failure of the related products (see Sect.  4.2 ), a chi-square test was used (because of the nominal character of the process categories, no Pearson or Spearman correlation could be calculated). The null hypothesis was ‘there is no correlation between the PS type wild goose chase and (no) success in the product’.

The entries in Table 2 are the observed numbers of process–product combinations; the expected numbers under statistical independence (calculated from the marginal totals) are added in parentheses. The entries on the main diagonal are clearly higher than the expected values. The test shows a significant correlation (p < 0.01) between the problem solvers’ behaviour and their success. Therefore, the null hypothesis can be rejected: there is a correlation between showing wild goose chase behaviour in PS processes and not being successful in solving the problem.
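The test itself follows the standard 2×2 chi-square procedure. In the sketch below, the observed counts are hypothetical placeholders (Table 2 is not reproduced here); only the procedure mirrors the text: expected values from the marginal totals, then the chi-square statistic:

```python
def chi_square_2x2(observed):
    """Chi-square statistic and expected counts for a 2x2 table."""
    (a, b), (c, d) = observed
    n = a + b + c + d
    row, col = [a + b, c + d], [a + c, b + d]
    expected = [[row[i] * col[j] / n for j in range(2)] for i in range(2)]
    stat = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))
    return stat, expected

# Rows: wild goose chase yes/no; columns: unsuccessful/successful.
observed = [[10, 1],   # hypothetical counts, NOT the entries of Table 2
            [4, 10]]
stat, expected = chi_square_2x2(observed)

# With df = 1, the critical value for p < 0.01 is about 6.63.
assert stat > 6.63
```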

6.2.3 Paper-and-pencil vs. DGS environment processes

Looking at the processes of the non-routine tasks indicates that the tasks were ‘problems’ for the students, as these processes showed no signs of routine behaviour (see Sect.  6.2.1 ). Instead, we see many transitions between different episodes and the typical cyclic structure of PS processes. Comparing accumulations of all 15 paper-and-pencil with all 18 DGS PS processes, we see some interesting differences, which our model helps to reveal (see Fig.  10 ). The time the students worked on the problem was set in the tutorials and, therefore, identical in both environments and in all processes. At the end of this paper, we discuss three aspects that our comparisons revealed; more detailed analyses are planned for forthcoming papers.

We coded more transitions in DGS than in paper-and-pencil processes (73 transitions in 18 DGS processes, in short 73/18, or on average 4 transitions per DGS process, compared to 52/15, or 3.5 transitions per paper-and-pencil process). If transitions are a sign of self-regulation (Schoenfeld, 1985; Wilson et al. 1993), our students in the DGS environment seem to regulate their processes better (please note that Organization episodes are not counted here; including them would add further transitions to DGS processes). However, there might simply be more transitions (and thus episodes) in DGS processes because generating examples and exploring situations takes less time there than in paper-and-pencil processes, leaving room for more episodes.

We see more Planning (and Implementation) episodes in DGS than in paper-and-pencil processes (9/18, or Planning in 50% of the DGS processes, compared to 2/15, or 13%, in paper-and-pencil processes). Using Schoenfeld’s conceptualization of Planning and Exploration episodes, the DGS processes seem to be more structured, especially since there are fewer Exploration episodes in DGS than in paper-and-pencil processes (17/18 compared to 21/15), even though there are more episodes overall in the DGS environment (see above). There seems to be a need for students in the DGS environment to plan their actions, especially for complex constructions that cannot be sketched freehand as in the paper-and-pencil environment. Considering the success of the students (6 solutions in the DGS environment compared to 3 in the paper-and-pencil environment), this hypothesis is supported. As existing research indicates, better-regulated PS processes should be more successful. Please note that successful solutions could not be obtained by stating only correct hypotheses, which would favour the DGS environment; solutions coded as ‘correct’ had to be argued for.

We double-checked our codings to make sure that this result was not an artefact of the coding and that the students actually planned their actions rather than merely operated the DGS (which was coded in Organization episodes). The result could also be due to our setting: our student peer groups had only one computer and thus needed to talk about their actions. Future studies should investigate whether this phenomenon can be replicated in environments in which each student has his or her own computer.

We also observed more Verification episodes in DGS than in paper-and-pencil processes (7/18 or 39% compared to 2/15 or 13%). There could be different reasons for this observation, e.g., students not trusting the technology, or simply the ease of using the drag mode to check results compared to making drawings in the paper-and-pencil environment.
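The episode frequencies compared in this section can be tabulated the same way (a short sketch; counts are those stated in the text, and the data structure is again purely illustrative):

```python
# Episode counts reported in the text as (DGS, paper-and-pencil),
# out of 18 DGS and 15 paper-and-pencil processes in total.
episode_counts = {
    "Planning": (9, 2),
    "Verification": (7, 2),
}
N_DGS, N_PP = 18, 15

for episode, (dgs, pp) in episode_counts.items():
    print(f"{episode}: {dgs / N_DGS:.0%} of DGS vs. "
          f"{pp / N_PP:.0%} of paper-and-pencil processes")
```

With these counts, Planning appears in 50% of DGS versus 13% of paper-and-pencil processes, and Verification in 39% versus 13%, matching the percentages reported above.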

The results of using our descriptive model for comparisons of PS processes appear to be insightful. The purpose of this section was to illustrate these insights and the use of our empirical model of PS processes. Accumulating PS processes of several groups is a key to enabling comparisons such as the ones presented.

7 Discussion

The goal of this paper was to present a descriptive model of PS processes, that is, a model suited to the description and analysis of empirically observed PS processes. So far, existing research has mainly discussed and applied normative models of PS processes, which are generally used to instruct people, particularly students, in ideal ways of approaching problems. A few well-accepted models of PS processes exist in mathematics education (Fig. 1); however, these models only partly allow representation of, and emphasis on, the non-linearity of real, empirical PS processes, and they do not have the potential to compare processes across groups of students. To generate our descriptive model of PS processes, following our first research question (1), we compared the existing models. It turned out that similarities as well as fine differences exist between the current normative models, especially regarding the phases of PS processes and their sequencing. We identified which elements of the existing models could be useful for the generation of a descriptive model, linking theoretical considerations from the research literature to our empirical data. Analysing PS processes of students working on geometric problems, we observed that distinctive episodes (especially the distinction between Planning and Exploration) and the transitions between episodes were essential. Classifying the episodes was mostly possible with the existing models, but characterising their transitions and sequencing required extending the existing models, which resulted in a juxtaposition of components for our descriptive phase model (e.g., allowing us to code Planning-Implementation separately or in combination, or to capture the (non-)linearity of processes).

Our descriptive model turned out not only to provide valuable insights into students’ problem-solving processes, but also, with respect to our second research question (2), to compare, contrast, and characterise the idiosyncratic characteristics of students’ PS processes (using Explorations or not, linear or cyclic processes, including Verification and Planning or not). Our descriptive model can be used to analyse the processes of several students ‘at once’, in accumulation, which allowed us to group and compare students’ processes in ways that were not possible with the existing models. As demonstrated in Sect. 6.2, our model further allows one to distinguish students’ PS processes while working on routine versus problem tasks. Applying our descriptive model to routine tasks, we detected linear processes, whereas cyclic processes were characteristic of problem tasks. Furthermore, in routine tasks, no Exploration episodes could be coded. Most of the students expressed no need for analysing the task but started directly with Planning and/or Implementation.

Our descriptive model also allows one to recognize a type of PS behaviour already described by Schoenfeld ( 1992a ) as “wild goose chases”. Our data illustrated that wild goose chase processes are statistically correlated with unsuccessful attempts at solving the given problems.

In addition, our descriptive model indicated differences between paper-and-pencil and DGS processes. In the latter context, students showed more transitions, more Planning (and Implementation), and more Verification episodes. This result revealed significantly different approaches that students took when working on problems in paper-and-pencil versus DGS environments. These findings might indicate that, in the DGS environment in our study, students regulated their processes better (cf. Schoenfeld, 1985, 1992b; Wilson et al., 1993), a hypothesis yet to be confirmed.

A limitation of our study might be the difficulty of the problems given to our students; only 9 of 33 processes ended with a correct solution. Future studies should use problems that better differentiate between successful and unsuccessful problem solvers. Also, our descriptive model has so far been grounded only in university students’ geometric PS processes. Even though geometry is particularly suited for learning mathematical PS in general and heuristics in particular (see Schoenfeld, 1985), other contexts and fields of mathematics might highlight other challenges students face. Further empirical evidence is needed to see how far our model is also useful and suitable for describing other contexts with respect to the specifics of their mathematical fields. Following some of our ideas and insights, Rott (2014) has already conducted such a study, with fifth graders working on problems from geometry, number theory, combinatorics, and arithmetic. Results similar to those of the study presented here were seen, indicating the value of our descriptive model. More research in this regard is a desideratum.

Regarding teaching, our model can be helpful for discussing with students, on a meta-level, the documented distinct phases of PS processes, the transitions between them, and the possibility of going back to each phase during a PS process. This might help students become aware of their processes and of different ways to reach a solution and its justification, and to be more flexible during PS processes. More reflection on this aspect is also a desideratum for future research.

Change history

12 March 2021

ESM was added.

Artzt, A., & Armour-Thomas, E. (1992). Development of a cognitive-metacognitive framework for protocol analysis of mathematical problem solving in small groups. Cognition and Instruction, 9 (2), 137–175.

Dewey, J. (1910). How we think . D C Heath.

Fernandez, M. L., Hadaway, N., & Wilson, J. W. (1994). Problem solving: Managing it all. The Mathematics Teacher, 87 (3), 195–199.

Gawlick, T. (2002). On Dynamic Geometry Software in the regular classroom. ZDM, 34 (3), 85–92.

Hadamard, J. (1945). The psychology of invention in the mathematical field . Princeton University Press.

Halmos, P. R. (1980). The heart of mathematics. The American Mathematical Monthly, 87 (7), 519–524.

Jacinto, H., & Carreira, S. (2017). Mathematical problem solving with technology: The techno-mathematical fluency of a student-with-GeoGebra. International Journal of Science and Mathematics Education, 15, 1115–1136.

Jacobs, J., Garnier, H., Gallimore, R., Hollingsworth, H., Givvin, K. B., Rust, K. et al. (2003). Third International Mathematics and Science Study 1999 Video Study Technical Report. Volume 1: Mathematics . National Center for Education Statistics. Institute of Education Statistics, U. S. Department of Education.

KMK. (2004). Beschlüsse der Kultusministerkonferenz. Bildungsstandards im Fach Mathematik für den Mittleren Schulabschluss [Educational standards in mathematics for secondary school leaving certificates] . Wolters Kluwer.

Koyuncu, I., Akyuz, D., & Cakiroglu, E. (2015). Investigating plane geometry problem-solving strategies of prospective mathematics teachers in technology and paper-and-pencil environments. International Journal of Science and Mathematics Education, 13, 837–862.

Liljedahl, P., Santos-Trigo, M., Malaspina, U., & Bruder, R. (2016). Problem solving in mathematics education . ICME Topical Surveys.

Mason, J., Burton, L., & Stacey, K. (1982). Thinking mathematically . Pearson.

NCTM. (2000). Principles and standards for school mathematics . National Council of Teachers of Mathematics.

NCTM. (2014). Principles to actions. Ensuring mathematical success for all . NCTM.

Neuhaus, K. (2002). Die Rolle des Kreativitätsproblems in der Mathematikdidaktik [The role of the creativity problem in mathematics education] . Dr. Köster.

Philipp, K. (2013). Experimentelles Denken. Theoretische und empirische Konkretisierung einer mathematischen Kompetenz [Experimental thinking] . Springer.

Poincaré, H. (1908). Science et méthode [Science and method] . Flammarion.

Pólya, G. (1945). How to solve it . Princeton University Press.

Rott, B. (2014). Mathematische Problembearbeitungsprozesse von Fünftklässlern—Entwicklung eines deskriptiven Phasenmodells [Problem-solving processes of fifth graders: Developing a descriptive phase model]. Journal für Mathematik-Didaktik, 35, 251–282.

Schoenfeld, A. H. (1985). Mathematical problem solving . Academic Press.

Schoenfeld, A. H. (1992a). On paradigms and methods: What do you do when the ones you know don’t do what you want them to? Issues in the analysis of data in the form of videotapes. The Journal of the Learning Sciences, 2 (2), 179–214.

Schoenfeld, A. H. (1992b). Learning to think mathematically: Problem solving, metacognition, and sensemaking in mathematics. In D. A. Grouws (Ed.), Handbook for research on mathematics teaching and learning (pp. 334–370). MacMillan.

Sträßer, R. (2002). Research on dynamic geometry software (DGS)—An introduction. ZDM, 34 (3), 65.

Wallas, G. (1926). The art of thought . C.A. Watts & Co.

Wilson, J. W., Fernandez, M. L., & Hadaway, N. (1993). Mathematical problem solving. In P. S. Wilson (Ed.), Research ideas for the classroom: High school mathematics (pp. 57–77). MacMillan.

Yimer, A., & Ellerton, N. F. (2010). A five-phase model for mathematical problem solving: Identifying synergies in pre-service-teachers’ metacognitive and cognitive actions. ZDM - The International Journal on Mathematics Education, 42, 245–261.

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and Affiliations

Universität zu Köln, Köln, Germany

Benjamin Rott

Universität Oldenburg, Oldenburg, Germany

Birte Specht

Universität Bremen, Bremen, Germany

Christine Knipping

Corresponding author

Correspondence to Benjamin Rott .

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (PDF 172 KB)

Rights and Permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Rott, B., Specht, B. & Knipping, C. A descriptive phase model of problem-solving processes. ZDM Mathematics Education 53 , 737–752 (2021). https://doi.org/10.1007/s11858-021-01244-3

Accepted : 18 February 2021

Published : 09 March 2021

Issue Date : August 2021

DOI : https://doi.org/10.1007/s11858-021-01244-3

  • Mathematical problem solving
  • Descriptive process model

35 problem-solving techniques and methods for solving complex problems

All teams and organizations encounter challenges as they grow. Problems might arise for teams around miscommunication or resolving business-critical issues. You may face challenges around growth, design, user engagement, and even team culture and happiness. In short, problem-solving techniques should be part of every team’s skillset.

Problem-solving methods are primarily designed to help a group or team through a process of first identifying problems and challenges, ideating possible solutions, and then evaluating the most suitable one.

Finding effective solutions to complex problems isn’t easy, but by using the right process and techniques, you can help your team be more efficient in the process.

So how do you develop strategies that are engaging and empower your team to solve problems effectively?

In this blog post, we share a series of problem-solving tools you can use in your next workshop or team meeting. You’ll also find some tips for facilitating the process and how to enable others to solve complex problems.

Let’s get started! 

How do you identify problems?

How do you identify the right solution?

  • Tips for more effective problem-solving

Complete problem-solving methods

  • Problem-solving techniques to identify and analyze problems
  • Problem-solving techniques for developing solutions

Problem-solving warm-up activities

Closing activities for a problem-solving process

Before you can move towards finding the right solution for a given problem, you first need to identify and define the problem you wish to solve. 

Here, you want to clearly articulate what the problem is and allow your group to do the same. Remember that everyone in a group is likely to have differing perspectives and alignment is necessary in order to help the group move forward. 

Identifying a problem accurately also requires that all members of a group are able to contribute their views in an open and safe manner. It can be scary for people to stand up and contribute, especially if the problems or challenges are emotive or personal in nature. Be sure to try and create a psychologically safe space for these kinds of discussions.

Remember that problem analysis and further discussion are also important. Not taking the time to fully analyze and discuss a challenge can result in the development of solutions that are not fit for purpose or do not address the underlying issue.

Successfully identifying and then analyzing a problem means facilitating a group through activities designed to help them clearly and honestly articulate their thoughts and produce usable insight.

With this data, you might then produce a problem statement that clearly describes the problem you wish to be addressed and also state the goal of any process you undertake to tackle this issue.  

Finding solutions is the end goal of any process. Complex organizational challenges can only be solved with an appropriate solution, and discovering that solution requires using the right problem-solving tool.

After you’ve explored a problem and discussed ideas, you need to help a team discuss and choose the right solution. Consensus tools and methods such as those below help a group explore possible solutions before then voting for the best. They’re a great way to tap into the collective intelligence of the group for great results!

Remember that the process is often iterative. Great problem solvers often road-test a viable solution in a measured way to see what works, too. While you might not get the right solution on your first try, the methods below help teams land on the solution most likely to succeed while also holding space for improvement.

Every effective problem solving process begins with an agenda . A well-structured workshop is one of the best methods for successfully guiding a group from exploring a problem to implementing a solution.

In SessionLab, it’s easy to go from an idea to a complete agenda. Start by dragging and dropping your core problem-solving activities into place. Add timings, breaks and necessary materials before sharing your agenda with your colleagues.

The resulting agenda will be your guide to an effective and productive problem solving session that will also help you stay organized on the day!

Tips for more effective problem solving

Problem-solving activities are only one part of the puzzle. While a great method can help unlock your team’s ability to solve problems, without a thoughtful approach and strong facilitation the solutions may not be fit for purpose.

Let’s take a look at some problem-solving tips you can apply to any process to help it be a success!

Clearly define the problem

Jumping straight to solutions can be tempting, though without first clearly articulating a problem, the solution might not be the right one. Many of the problem-solving activities below include sections where the problem is explored and clearly defined before moving on.

This is a vital part of the problem-solving process and taking the time to fully define an issue can save time and effort later. A clear definition helps identify irrelevant information and it also ensures that your team sets off on the right track.

Don’t jump to conclusions

It’s easy for groups to exhibit cognitive bias or have preconceived ideas about both problems and potential solutions. Be sure to back up any problem statements or potential solutions with facts, research, and adequate forethought.

The best techniques ask participants to be methodical and challenge preconceived notions. Make sure you give the group enough time and space to collect relevant information and consider the problem in a new way. By approaching the process with a clear, rational mindset, you’ll often find that better solutions are more forthcoming.  

Try different approaches  

Problems come in all shapes and sizes and so too should the methods you use to solve them. If you find that one approach isn’t yielding results and your team isn’t finding different solutions, try mixing it up. You’ll be surprised at how using a new creative activity can unblock your team and generate great solutions.

Don’t take it personally 

Depending on the nature of your team or organizational problems, it’s easy for conversations to get heated. While it’s good for participants to be engaged in the discussions, ensure that emotions don’t run too high and that blame isn’t thrown around while finding solutions.

You’re all in it together, and even if your team or area is seeing problems, that isn’t necessarily a disparagement of you personally. Using facilitation skills to manage group dynamics is one effective method of helping conversations be more constructive.

Get the right people in the room

Your problem-solving method is often only as effective as the group using it. Getting the right people on the job and managing the number of people present is important too!

If the group is too small, you may not get enough different perspectives to effectively solve a problem. If the group is too large, you can go round and round during the ideation stages.

Creating the right group makeup is also important in ensuring you have the necessary expertise and skillset to both identify and follow up on potential solutions. Carefully consider who to include at each stage to help ensure your problem-solving method is followed and positioned for success.

Document everything

The best solutions can take refinement, iteration, and reflection to come out. Get into a habit of documenting your process in order to keep all the learnings from the session and to allow ideas to mature and develop. Many of the methods below involve the creation of documents or shared resources. Be sure to keep and share these so everyone can benefit from the work done!

Bring a facilitator 

Facilitation is all about making group processes easier. With a subject as potentially emotive and important as problem-solving, having an impartial third party in the form of a facilitator can make all the difference in finding great solutions and keeping the process moving. Consider bringing a facilitator to your problem-solving session to get better results and generate meaningful solutions!

Develop your problem-solving skills

It takes time and practice to be an effective problem solver. While some roles or participants might more naturally gravitate towards problem-solving, it can take development and planning to help everyone create better solutions.

You might develop a training program, run a problem-solving workshop or simply ask your team to practice using the techniques below. Check out our post on problem-solving skills to see how you and your group can develop the right mental process and be more resilient to issues too!

Design a great agenda

Workshops are a great format for solving problems. With the right approach, you can focus a group and help them find the solutions to their own problems. But designing a process can be time-consuming and finding the right activities can be difficult.

Check out our workshop planning guide to level-up your agenda design and start running more effective workshops. Need inspiration? Check out templates designed by expert facilitators to help you kickstart your process!

In this section, we’ll look at in-depth problem-solving methods that provide a complete end-to-end process for developing effective solutions. These will help guide your team from the discovery and definition of a problem through to delivering the right solution.

If you’re looking for an all-encompassing method or problem-solving model, these processes are a great place to start. They’ll ask your team to challenge preconceived ideas and adopt a mindset for solving problems more effectively.

  • Six Thinking Hats
  • Lightning Decision Jam
  • Problem Definition Process
  • Discovery & Action Dialogue
  • Design Sprint 2.0
  • Open Space Technology

1. Six Thinking Hats

Individual approaches to solving a problem can be very different based on what team or role an individual holds. It can be easy for existing biases or perspectives to find their way into the mix, or for internal politics to direct a conversation.

Six Thinking Hats is a classic method for identifying the problems that need to be solved and enables your team to consider them from different angles, whether that is by focusing on facts and data, creative solutions, or by considering why a particular solution might not work.

Like all problem-solving frameworks, Six Thinking Hats is effective at helping teams remove roadblocks from a conversation or discussion and come to terms with all the aspects necessary to solve complex problems.

2. Lightning Decision Jam

Featured courtesy of Jonathan Courtney of AJ&Smart Berlin, Lightning Decision Jam is one of those strategies that should be in every facilitation toolbox. Exploring problems and finding solutions is often creative in nature, though as with any creative process, there is the potential to lose focus and get lost.

Unstructured discussions might get you there in the end, but it’s much more effective to use a method that creates a clear process and team focus.

In Lightning Decision Jam, participants are invited to begin by writing challenges, concerns, or mistakes on post-its without discussing them before then being invited by the moderator to present them to the group.

From there, the team vote on which problems to solve and are guided through steps that will allow them to reframe those problems, create solutions and then decide what to execute on. 

By deciding the problems that need to be solved as a team before moving on, this group process is great for ensuring the whole team is aligned and can take ownership over the next stages. 

Lightning Decision Jam (LDJ)   #action   #decision making   #problem solving   #issue analysis   #innovation   #design   #remote-friendly   The problem with anything that requires creative thinking is that it’s easy to get lost—lose focus and fall into the trap of having useless, open-ended, unstructured discussions. Here’s the most effective solution I’ve found: Replace all open, unstructured discussion with a clear process. What to use this exercise for: Anything which requires a group of people to make decisions, solve problems or discuss challenges. It’s always good to frame an LDJ session with a broad topic, here are some examples: The conversion flow of our checkout Our internal design process How we organise events Keeping up with our competition Improving sales flow

3. Problem Definition Process

While problems can be complex, the problem-solving methods you use to identify and solve those problems can often be simple in design. 

By taking the time to truly identify and define a problem before asking the group to reframe the challenge as an opportunity, this method is a great way to enable change.

Begin by identifying a focus question and exploring the ways in which it manifests before splitting into five teams who will each consider the problem using a different method: escape, reversal, exaggeration, distortion or wishful. Teams develop a problem objective and create ideas in line with their method before then feeding them back to the group.

This method is great for enabling in-depth discussions while also creating space for finding creative solutions too!

Problem Definition   #problem solving   #idea generation   #creativity   #online   #remote-friendly   A problem solving technique to define a problem, challenge or opportunity and to generate ideas.

4. The 5 Whys 

Sometimes, a group needs to go further with their strategies and analyze the root cause at the heart of organizational issues. An RCA or root cause analysis is the process of identifying what is at the heart of business problems or recurring challenges. 

The 5 Whys is a simple and effective method of helping a group find the root cause of any problem or challenge and conduct an analysis that will deliver results.

By beginning with the creation of a problem statement and going through five stages to refine it, The 5 Whys provides everything you need to truly discover the cause of an issue.

The 5 Whys   #hyperisland   #innovation   This simple and powerful method is useful for getting to the core of a problem or challenge. As the title suggests, the group defines a problem, then asks the question “why” five times, often using the resulting explanation as a starting point for creative problem solving.

5. World Cafe

World Cafe is a simple but powerful facilitation technique to help bigger groups to focus their energy and attention on solving complex problems.

World Cafe enables this approach by creating a relaxed atmosphere where participants are able to self-organize and explore topics relevant and important to them, themed around a central problem-solving purpose. Create the right atmosphere by modeling your space after a cafe; after guiding the group through the method, let them take the lead!

Making problem-solving a part of your organization’s culture in the long term can be a difficult undertaking. More approachable formats like World Cafe can be especially effective in bringing people unfamiliar with workshops into the fold. 

World Cafe   #hyperisland   #innovation   #issue analysis   World Café is a simple yet powerful method, originated by Juanita Brown, for enabling meaningful conversations driven completely by participants and the topics that are relevant and important to them. Facilitators create a cafe-style space and provide simple guidelines. Participants then self-organize and explore a set of relevant topics or questions for conversation.

6. Discovery & Action Dialogue (DAD)

One of the best approaches is to create a safe space for a group to share and discover practices and behaviors that can help them find their own solutions.

With DAD, you can help a group choose which problems they wish to solve and which approaches they will take to do so. It’s great at helping remove resistance to change and can help get buy-in at every level too!

This process of enabling frontline ownership is great in ensuring follow-through and is one of the methods you will want in your toolbox as a facilitator.

Discovery & Action Dialogue (DAD)   #idea generation   #liberating structures   #action   #issue analysis   #remote-friendly   DADs make it easy for a group or community to discover practices and behaviors that enable some individuals (without access to special resources and facing the same constraints) to find better solutions than their peers to common problems. These are called positive deviant (PD) behaviors and practices. DADs make it possible for people in the group, unit, or community to discover by themselves these PD practices. DADs also create favorable conditions for stimulating participants’ creativity in spaces where they can feel safe to invent new and more effective practices. Resistance to change evaporates as participants are unleashed to choose freely which practices they will adopt or try and which problems they will tackle. DADs make it possible to achieve frontline ownership of solutions.

7. Design Sprint 2.0

Want to see how a team can solve big problems and move forward with prototyping and testing solutions in a few days? The Design Sprint 2.0 template from Jake Knapp, author of Sprint, is a complete agenda with proven results.

Developing the right agenda can involve difficult but necessary planning. Ensuring all the correct steps are followed can also be stressful or time-consuming depending on your level of experience.

Use this complete 4-day workshop template if you are finding there is no obvious solution to your challenge and want to focus your team around a specific problem that might require a shortcut to launching a minimum viable product or waiting for the organization-wide implementation of a solution.

8. Open space technology

Open Space Technology, developed by Harrison Owen, creates a space where large groups are invited to take ownership of their problem solving and lead individual sessions. Open Space Technology is a great format when you have a great deal of expertise and insight in the room and want to allow for different takes and approaches on a particular theme or problem that needs to be solved.

Start by bringing your participants together to align around a central theme and focus their efforts. Explain the ground rules to help guide the problem-solving process and then invite members to identify any issue connecting to the central theme that they are interested in and are prepared to take responsibility for.

Once participants have decided on their approach to the core theme, they write their issue on a piece of paper, announce it to the group, pick a session time and place, and post the paper on the wall. As the wall fills up with sessions, the group is then invited to join the sessions that interest them the most and which they can contribute to, then you’re ready to begin!

Everyone joins the problem-solving group they’ve signed up to and records the discussion; if appropriate, findings can then be shared with the rest of the group afterward.

Open Space Technology   #action plan   #idea generation   #problem solving   #issue analysis   #large group   #online   #remote-friendly   Open Space is a methodology for large groups to create their agenda discerning important topics for discussion, suitable for conferences, community gatherings and whole system facilitation

Techniques to identify and analyze problems

Using a problem-solving method to help a team identify and analyze a problem can be a quick and effective addition to any workshop or meeting.

While further actions are always necessary, you can generate momentum and alignment easily, and these activities are a great place to get started.

We’ve put together this list of techniques to help you and your team with problem identification, analysis, and discussion that sets the foundation for developing effective solutions.

Let’s take a look!

  • Flip It!
  • The Creativity Dice
  • Fishbone Analysis
  • Problem Tree
  • SWOT Analysis
  • Agreement-Certainty Matrix
  • SQUID
  • Speed Boat
  • The Journalistic Six
  • LEGO Challenge
  • What, So What, Now What?
  • Journalists

9. Flip It!

Individual and group perspectives are incredibly important, but what happens if people are set in their ways and need a change of perspective in order to approach a problem more effectively?

Flip It is a method we love because it is both simple to understand and run, and allows groups to understand how their perspectives and biases are formed. 

Participants in Flip It are first invited to consider concerns, issues, or problems from a perspective of fear and write them on a flip chart. Then, the group is asked to consider those same issues from a perspective of hope and flip their understanding.  

No problem or solution is free from existing bias, and by changing perspectives with Flip It, you can develop a problem-solving model quickly and effectively.

Flip It!   #gamestorming   #problem solving   #action   Often, a change in a problem or situation comes simply from a change in our perspectives. Flip It! is a quick game designed to show players that perspectives are made, not born.

10. The Creativity Dice

One of the most useful problem-solving skills you can teach your team is that of approaching challenges with creativity, flexibility, and openness. Games like The Creativity Dice allow teams to overcome the potential hurdle of too much linear thinking and approach the process with a sense of fun and speed. 

In The Creativity Dice, participants are organized around a topic and roll a die to determine what they will work on for a period of 3 minutes at a time. They might roll a 3 and work on investigating factual information on the chosen topic. They might roll a 1 and work on identifying the specific goals, standards, or criteria for the session.

Encouraging rapid work and iteration while asking participants to be flexible are great skills to cultivate. Having a stage for idea incubation in this game is also important. Moments of pause can help ensure the ideas that are put forward are the most suitable. 

The Creativity Dice   #creativity   #problem solving   #thiagi   #issue analysis   Too much linear thinking is hazardous to creative problem solving. To be creative, you should approach the problem (or the opportunity) from different points of view. You should leave a thought hanging in mid-air and move to another. This skipping around prevents premature closure and lets your brain incubate one line of thought while you consciously pursue another.

11. Fishbone Analysis

Organizational or team challenges are rarely simple, and it’s important to remember that one problem can be an indication of something that goes deeper and may require further consideration to be solved.

Fishbone Analysis helps groups to dig deeper and understand the origins of a problem. It’s a great example of a root cause analysis method that is simple for everyone on a team to get their head around. 

Participants in this activity are asked to annotate a diagram of a fish, first adding the problem or issue to be worked on at the head of a fish before then brainstorming the root causes of the problem and adding them as bones on the fish. 

Using abstractions such as a diagram of a fish can really help a team break out of their regular thinking and develop a creative approach.

Fishbone Analysis   #problem solving   #root cause analysis   #decision making   #online facilitation   A process to help identify and understand the origins of problems, issues or observations.

12. Problem Tree 

Encouraging visual thinking can be an essential part of many strategies. By simply reframing and clarifying problems, a group can move towards developing a problem solving model that works for them. 

In Problem Tree, groups are asked to first brainstorm a list of problems – these can be design problems, team problems or larger business problems – and then organize them into a hierarchy. The hierarchy could be from most important to least important or abstract to practical, though the key thing with problem solving games that involve this aspect is that your group has some way of managing and sorting all the issues that are raised.

Once you have a list of problems that need to be solved and have organized them accordingly, you’re then well-positioned for the next problem solving steps.

Problem tree   #define intentions   #create   #design   #issue analysis   A problem tree is a tool to clarify the hierarchy of problems addressed by the team within a design project; it represents high level problems or related sublevel problems.

13. SWOT Analysis

Chances are you’ve heard of the SWOT Analysis before. This problem-solving method, which focuses on identifying strengths, weaknesses, opportunities, and threats, is a tried and tested technique for both individuals and teams.

Start by creating a desired end state or outcome and bear this in mind throughout: any problem-solving model is made more effective by knowing what you are moving towards. Create a quadrant made up of the four categories of a SWOT analysis and ask participants to generate ideas based on each of those quadrants.

Once you have those ideas assembled in their quadrants, cluster them together based on their affinity with other ideas. These clusters are then used to facilitate group conversations and move things forward. 

SWOT analysis   #gamestorming   #problem solving   #action   #meeting facilitation   The SWOT Analysis is a long-standing technique of looking at what we have, with respect to the desired end state, as well as what we could improve on. It gives us an opportunity to gauge approaching opportunities and dangers, and assess the seriousness of the conditions that affect our future. When we understand those conditions, we can influence what comes next.

14. Agreement-Certainty Matrix

Not every problem-solving approach is right for every challenge, and deciding on the right method for the challenge at hand is a key part of being an effective team.

The Agreement Certainty matrix helps teams align on the nature of the challenges facing them. By sorting problems from simple to chaotic, your team can understand what methods are suitable for each problem and what they can do to ensure effective results. 

If you are already using Liberating Structures techniques as part of your problem-solving strategy, the Agreement-Certainty Matrix can be an invaluable addition to your process. We’ve found it particularly useful if you are having issues with recurring problems in your organization and want to go deeper in understanding the root cause. 

Agreement-Certainty Matrix   #issue analysis   #liberating structures   #problem solving   You can help individuals or groups avoid the frequent mistake of trying to solve a problem with methods that are not adapted to the nature of their challenge. The combination of two questions makes it possible to easily sort challenges into four categories: simple, complicated, complex, and chaotic. A problem is simple when it can be solved reliably with practices that are easy to duplicate. It is complicated when experts are required to devise a sophisticated solution that will yield the desired results predictably. A problem is complex when there are several valid ways to proceed but outcomes are not predictable in detail. Chaotic is when the context is too turbulent to identify a path forward. A loose analogy may be used to describe these differences: simple is like following a recipe, complicated like sending a rocket to the moon, complex like raising a child, and chaotic is like the game “Pin the Tail on the Donkey.” The Liberating Structures Matching Matrix in Chapter 5 can be used as the first step to clarify the nature of a challenge and avoid the mismatches between problems and solutions that are frequently at the root of chronic, recurring problems.

15. SQUID

Organizing and charting a team’s progress can be important in ensuring its success. SQUID (Sequential Question and Insight Diagram) is a great model that allows a team to effectively switch between asking questions and developing answers, and to build the skills they need to stay on track throughout the process. 

Begin with two different colored sticky notes – one for questions and one for answers – and with your central topic (the head of the squid) on the board. Ask the group to first come up with a series of questions connected to their best guess of how to approach the topic. Ask the group to come up with answers to those questions, fix them to the board and connect them with a line. After some discussion, go back to question mode by responding to the generated answers or other points on the board.

It’s rewarding to see a diagram grow throughout the exercise, and a completed SQUID can provide a visual resource for future effort and as an example for other teams.

SQUID   #gamestorming   #project planning   #issue analysis   #problem solving   When exploring an information space, it’s important for a group to know where they are at any given time. By using SQUID, a group charts out the territory as they go and can navigate accordingly. SQUID stands for Sequential Question and Insight Diagram.

16. Speed Boat

To continue with our nautical theme, Speed Boat is a short and sweet activity that can help a team quickly identify what employees, clients or service users might have a problem with and analyze what might be standing in the way of achieving a solution.

Methods that allow for a group to make observations, have insights and obtain those eureka moments quickly are invaluable when trying to solve complex problems.

In Speed Boat, the approach is to first consider what anchors and challenges might be holding an organization (or boat) back. Bonus points if you are able to identify any sharks in the water and develop ideas that can also deal with competitors!   

Speed Boat   #gamestorming   #problem solving   #action   Speedboat is a short and sweet way to identify what your employees or clients don’t like about your product/service or what’s standing in the way of a desired goal.

17. The Journalistic Six

One of the most effective ways of solving problems is to encourage teams to be more inclusive and diverse in their thinking.

Based on the six key questions journalism students are taught to answer in articles and news stories, The Journalistic Six helps teams see the whole picture. By using who, what, when, where, why, and how to facilitate the conversation and encourage creative thinking, your team can make sure that the problem identification and problem analysis stages of the process are covered exhaustively and thoughtfully. Reporter’s notebook and dictaphone optional.

The Journalistic Six – Who What When Where Why How   #idea generation   #issue analysis   #problem solving   #online   #creative thinking   #remote-friendly   A questioning method for generating, explaining, investigating ideas.

18. LEGO Challenge

Now for an activity that is a little out of the (toy) box. LEGO Serious Play is a facilitation methodology that can be used to improve creative thinking and problem-solving skills. 

The LEGO Challenge includes giving each member of the team an assignment that is hidden from the rest of the group while they create a structure without speaking.

What the LEGO Challenge brings to the table is a fun, hands-on example of solving problems with stakeholders who might not be on the same page. Also, it’s LEGO! Who doesn’t love LEGO? 

LEGO Challenge   #hyperisland   #team   A team-building activity in which groups must work together to build a structure out of LEGO, but each individual has a secret “assignment” which makes the collaborative process more challenging. It emphasizes group communication, leadership dynamics, conflict, cooperation, patience and problem solving strategy.

19. What, So What, Now What?

If not carefully managed, the problem identification and problem analysis stages of the problem-solving process can actually create more problems and misunderstandings.

The What, So What, Now What? problem-solving activity is designed to help collect insights and move forward while also reducing the potential for disagreement when it comes to identifying, clarifying, and analyzing organizational or work problems. 

Facilitation is all about bringing groups together so that they might work on a shared goal, and the best problem-solving strategies ensure that teams are aligned in purpose, if not initially in opinion or insight.

Throughout the three steps of this game, you give everyone on a team the chance to reflect on a problem by asking what happened, why it is important, and what actions should then be taken. 

This can be a great activity for bringing out individual perceptions of a problem or challenge and contextualizing them in a larger group setting. This is one of the most important problem-solving skills you can bring to your organization.

W³ – What, So What, Now What?   #issue analysis   #innovation   #liberating structures   You can help groups reflect on a shared experience in a way that builds understanding and spurs coordinated action while avoiding unproductive conflict. It is possible for every voice to be heard while simultaneously sifting for insights and shaping new direction. Progressing in stages makes this practical: from collecting facts about What Happened to making sense of these facts with So What and finally to what actions logically follow with Now What. The shared progression eliminates most of the misunderstandings that otherwise fuel disagreements about what to do. Voila!

20. Journalists  

Problem analysis can be one of the most important and decisive stages of the problem-solving process. Sometimes, a team can become bogged down in the details and be unable to move forward.

Journalists is an activity that can prevent a group from getting stuck in the problem identification or problem analysis stages of the process.

In Journalists, the group is invited to draft the front page of a fictional newspaper and figure out what stories deserve to be on the cover and what headlines those stories will have. By reframing how your problems and challenges are approached, you can help a team move productively through the process and be better prepared for the steps to follow.

Journalists   #vision   #big picture   #issue analysis   #remote-friendly   This is an exercise to use when the group gets stuck in details and struggles to see the big picture. Also good for defining a vision.

Problem-solving techniques for developing solutions 

The success of any problem-solving process can be measured by the solutions it produces. After you’ve defined the issue, explored existing ideas, and ideated, it’s time to narrow down to the correct solution.

Use these problem-solving techniques when you want to help your team find consensus, compare possible solutions, and move towards taking action on a particular problem.

  • Mindspin
  • Improved Solutions
  • Four-Step Sketch
  • 15% Solutions
  • How-Now-Wow Matrix
  • Impact and Effort Matrix
  • Dotmocracy

21. Mindspin  

Brainstorming is part of the bread and butter of the problem-solving process and all problem-solving strategies benefit from getting ideas out and challenging a team to generate solutions quickly. 

With Mindspin, participants are encouraged not only to generate ideas but to do so under time constraints and by slamming down cards and passing them on. By doing multiple rounds, your team can begin with a free generation of possible solutions before moving on to developing those solutions and encouraging further ideation. 

This is one of our favorite problem-solving activities and can be great for keeping the energy up throughout the workshop. Remember the importance of helping people become engaged in the process – energizing problem-solving techniques like Mindspin can help ensure your team stays engaged and happy, even when the problems they’re coming together to solve are complex. 

MindSpin   #teampedia   #idea generation   #problem solving   #action   A fast and loud method to enhance brainstorming within a team. Since this activity has more than one round, repetitive ideas can be ruled out, leaving more creative and innovative answers to the challenge.

22. Improved Solutions

After a team has successfully identified a problem and come up with a few solutions, it can be tempting to call the work of the problem-solving process complete. That said, the first solution is not necessarily the best, and by including a further review and reflection activity into your problem-solving model, you can ensure your group reaches the best possible result. 

One of a number of problem-solving games from Thiagi Group, Improved Solutions helps you go the extra mile and develop suggested solutions with close consideration and peer review. By supporting the discussion of several problems at once and by shifting team roles throughout, this problem-solving technique is a dynamic way of finding the best solution. 

Improved Solutions   #creativity   #thiagi   #problem solving   #action   #team   You can improve any solution by objectively reviewing its strengths and weaknesses and making suitable adjustments. In this creativity framegame, you improve the solutions to several problems. To maintain objective detachment, you deal with a different problem during each of six rounds and assume different roles (problem owner, consultant, basher, booster, enhancer, and evaluator) during each round. At the conclusion of the activity, each player ends up with two solutions to her problem.

23. Four Step Sketch

Creative thinking and visual ideation do not need to be confined to the opening stages of your problem-solving strategies. Exercises that include sketching and prototyping on paper can be effective at the solution finding and development stage of the process, and can be great for keeping a team engaged. 

By going from simple notes to a Crazy 8s round that involves rapidly sketching eight variations of an idea before producing a final solution sketch, the group is able to iterate quickly and visually. Problem-solving techniques like Four-Step Sketch are great if you have a group of different thinkers and want to change things up from a more textual or discussion-based approach.

Four-Step Sketch   #design sprint   #innovation   #idea generation   #remote-friendly   The four-step sketch is an exercise that helps people to create well-formed concepts through a structured process that includes: reviewing key information, starting design work on paper, considering multiple variations, and creating a detailed solution. This exercise is preceded by a set of other activities allowing the group to clarify the challenge they want to solve. See how the Four-Step Sketch exercise fits into a Design Sprint.

24. 15% Solutions

Some problems are simpler than others and with the right problem-solving activities, you can empower people to take immediate actions that can help create organizational change. 

Part of the liberating structures toolkit, 15% solutions is a problem-solving technique that focuses on finding and implementing solutions quickly. A process of iterating and making small changes quickly can help generate momentum and an appetite for solving complex problems.

Problem-solving strategies can live and die on whether people are onboard. Getting some quick wins is a great way of getting people behind the process.   

It can be extremely empowering for a team to realize that problem-solving techniques can be deployed quickly and easily, and to delineate between the things they can positively impact and those they cannot change. 

15% Solutions   #action   #liberating structures   #remote-friendly   You can reveal the actions, however small, that everyone can do immediately. At a minimum, these will create momentum, and that may make a BIG difference.  15% Solutions show that there is no reason to wait around, feel powerless, or fearful. They help people pick it up a level. They get individuals and the group to focus on what is within their discretion instead of what they cannot change.  With a very simple question, you can flip the conversation to what can be done and find solutions to big problems that are often distributed widely in places not known in advance. Shifting a few grains of sand may trigger a landslide and change the whole landscape.

25. How-Now-Wow Matrix

The problem-solving process is often creative, as complex problems usually require a change of thinking and creative response in order to find the best solutions. While it’s common for the first stages to encourage creative thinking, groups can often gravitate to familiar solutions when it comes to the end of the process. 

When selecting solutions, you don’t want to lose your creative energy! The How-Now-Wow Matrix from Gamestorming is a great problem-solving activity that enables a group to stay creative and think out of the box when it comes to selecting the right solution for a given problem.

Problem-solving techniques that encourage creative thinking and the ideation and selection of new solutions can be the most effective in organizational change. Give the How-Now-Wow Matrix a go, and not just for how pleasant it is to say out loud. 

How-Now-Wow Matrix   #gamestorming   #idea generation   #remote-friendly   When people want to develop new ideas, they most often think out of the box in the brainstorming or divergent phase. However, when it comes to convergence, people often end up picking ideas that are most familiar to them. This is called a ‘creative paradox’ or a ‘creadox’. The How-Now-Wow matrix is an idea selection tool that breaks the creadox by forcing people to weigh each idea on 2 parameters.

26. Impact and Effort Matrix

All problem-solving techniques hope to not only find solutions to a given problem or challenge but to find the best solution. When it comes to finding a solution, groups are invited to put on their decision-making hats and really think about how a proposed idea would work in practice. 

The Impact and Effort Matrix is one of the problem-solving techniques that fall into this camp, empowering participants to first generate ideas and then categorize them into a 2×2 matrix based on impact and effort.

Activities that invite critical thinking while remaining simple are invaluable. Use the Impact and Effort Matrix to move from ideation and towards evaluating potential solutions before then committing to them. 

Impact and Effort Matrix   #gamestorming   #decision making   #action   #remote-friendly   In this decision-making exercise, possible actions are mapped based on two factors: effort required to implement and potential impact. Categorizing ideas along these lines is a useful technique in decision making, as it obliges contributors to balance and evaluate suggested actions before committing to them.

27. Dotmocracy

If you’ve followed each of the problem-solving steps with your group successfully, you should move towards the end of your process with heaps of possible solutions developed with a specific problem in mind. But how do you help a group go from ideation to putting a solution into action? 

Dotmocracy, or Dot Voting, is a tried and tested method of helping a team in the problem-solving process make decisions and put actions in place with a degree of oversight and consensus. 

One of the problem-solving techniques that should be in every facilitator’s toolbox, Dot Voting is fast and effective: it can help identify the most popular and best solutions and bring a group to a decision efficiently. 

Dotmocracy   #action   #decision making   #group prioritization   #hyperisland   #remote-friendly   Dotmocracy is a simple method for group prioritization or decision-making. It is not an activity on its own, but a method to use in processes where prioritization or decision-making is the aim. The method supports a group to quickly see which options are most popular or relevant. The options or ideas are written on post-its and stuck up on a wall for the whole group to see. Each person votes for the options they think are the strongest, and that information is used to inform a decision.

Problem-solving warm-up activities

All facilitators know that warm-ups and icebreakers are useful for any workshop or group process. Problem-solving workshops are no different.

Use these problem-solving techniques to warm up a group and prepare them for the rest of the process. Activating your group by tapping into some of the top problem-solving skills can be one of the best ways to see great outcomes from your session.

  • Check-in/Check-out
  • Doodling Together
  • Show and Tell
  • Constellations
  • Draw a Tree

28. Check-in / Check-out

Solid processes are planned from beginning to end, and the best facilitators know that setting the tone and establishing a safe, open environment can be integral to a successful problem-solving process.

Check-in / Check-out is a great way to begin and/or bookend a problem-solving workshop. Checking in to a session emphasizes that everyone will be seen, heard, and expected to contribute. 

If you are running a series of meetings, setting a consistent pattern of checking in and checking out can really help your team get into a groove. We recommend this opening-closing activity for small to medium-sized groups though it can work with large groups if they’re disciplined!

Check-in / Check-out   #team   #opening   #closing   #hyperisland   #remote-friendly   Either checking-in or checking-out is a simple way for a team to open or close a process, symbolically and in a collaborative way. Checking-in/out invites each member in a group to be present, seen and heard, and to express a reflection or a feeling. Checking-in emphasizes presence, focus and group commitment; checking-out emphasizes reflection and symbolic closure.

29. Doodling Together  

Thinking creatively and not being afraid to make suggestions are important problem-solving skills for any group or team, and warming up by encouraging these behaviors is a great way to start. 

Doodling Together is one of our favorite creative ice breaker games – it’s quick, effective, and fun and can make all following problem-solving steps easier by encouraging a group to collaborate visually. By passing cards and adding additional items as they go, the workshop group gets into a groove of co-creation and idea development that is crucial to finding solutions to problems. 

Doodling Together   #collaboration   #creativity   #teamwork   #fun   #team   #visual methods   #energiser   #icebreaker   #remote-friendly   Create wild, weird and often funny postcards together & establish a group’s creative confidence.

30. Show and Tell

You might remember some version of Show and Tell from being a kid in school, and it’s a great problem-solving activity to kick off a session.

Asking participants to prepare a little something before a workshop by bringing an object for show and tell can help them warm up before the session has even begun! Games that include a physical object can also help encourage early engagement before moving onto more big-picture thinking.

By asking your participants to tell stories about why they chose to bring a particular item to the group, you can help teams see things from new perspectives and see both differences and similarities in the way they approach a topic. Great groundwork for approaching a problem-solving process as a team! 

Show and Tell   #gamestorming   #action   #opening   #meeting facilitation   Show and Tell taps into the power of metaphors to reveal players’ underlying assumptions and associations around a topic. The aim of the game is to get a deeper understanding of stakeholders’ perspectives on anything: a new project, an organizational restructuring, a shift in the company’s vision or team dynamic.

31. Constellations

Who doesn’t love stars? Constellations is a great warm-up activity for any workshop as it gets people up off their feet, energized, and ready to engage in new ways with established topics. It’s also great for showing existing beliefs, biases, and patterns that can come into play as part of your session.

Using warm-up games that help build trust and connection while also allowing for non-verbal responses can be great for easing people into the problem-solving process and encouraging engagement from everyone in the group. Constellations is great in large spaces that allow for movement and is definitely a practical exercise to allow the group to see patterns that are otherwise invisible. 

Constellations   #trust   #connection   #opening   #coaching   #patterns   #system   Individuals express their response to a statement or idea by standing closer or further from a central object. Used with teams to reveal systems, hidden patterns, and perspectives.

32. Draw a Tree

Problem-solving games that help raise group awareness through a central, unifying metaphor can be effective ways to warm-up a group in any problem-solving model.

Draw a Tree is a simple warm-up activity you can use in any group and which can provide a quick jolt of energy. Start by asking your participants to draw a tree in just 45 seconds – they can choose whether it will be abstract or realistic. 

Once the timer is up, ask the group how many people included the roots of the tree and use this as a means to discuss how we can ignore important parts of any system simply because they are not visible.

All problem-solving strategies are made more effective by thinking of problems critically and by exposing things that may not normally come to light. Warm-up games like Draw a Tree are great in that they quickly demonstrate some key problem-solving skills in an accessible and effective way.

Draw a Tree   #thiagi   #opening   #perspectives   #remote-friendly   With this game you can raise awareness about being more mindful and aware of the environment we live in.

Closing activities for a problem-solving process

Each step of the problem-solving workshop benefits from an intelligent deployment of activities, games, and techniques. Bringing your session to an effective close helps ensure that solutions are followed through on and that you also celebrate what has been achieved.

Here are some problem-solving activities you can use to effectively close a workshop or meeting and ensure the great work you’ve done can continue afterward.

  • One Breath Feedback
  • Who What When Matrix
  • Response Cards

How do I conclude a problem-solving process?

All good things must come to an end. With the bulk of the work done, it can be tempting to conclude your workshop swiftly and without a moment to debrief and align. This can be problematic in that it doesn’t allow your team to fully process the results or reflect on the process.

At the end of an effective session, your team will have gone through a process that, while productive, can be exhausting. It’s important to give your group a moment to take a breath, ensure that they are clear on future actions, and provide short feedback before leaving the space. 

The primary purpose of any problem-solving method is to generate solutions and then implement them. Be sure to take the opportunity to ensure everyone is aligned and ready to effectively implement the solutions you produced in the workshop.

Remember that every process can be improved. By taking a short moment to collect feedback in the session, you can further refine your problem-solving methods and see even more success in the future.

33. One Breath Feedback

Maintaining attention and focus during the closing stages of a problem-solving workshop can be tricky and so being concise when giving feedback can be important. It’s easy to incur “death by feedback” should some team members go on for too long sharing their perspectives in a quick feedback round. 

One Breath Feedback is a great closing activity for workshops. You give everyone an opportunity to provide feedback on what they’ve done but only in the space of a single breath. This keeps feedback short and to the point and means that everyone is encouraged to provide the most important piece of feedback to them. 

One breath feedback   #closing   #feedback   #action   This is a feedback round in just one breath that excels in maintaining attention: each participant is able to speak during just one breath … for most people that’s around 20 to 25 seconds … unless of course you’ve been a deep sea diver in which case you’ll be able to do it for longer.
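Because the method caps each speaker at roughly one breath, the length of a full feedback round is easy to estimate up front when planning your closing. A minimal back-of-envelope sketch in Python; the 25 seconds per breath comes from the method card above, while the 5-second handover between speakers is an assumed figure:

```python
# Back-of-envelope timing for a one-breath feedback round. The ~25 seconds
# per breath comes from the method description; the 5-second handover
# between speakers is an assumed figure, not part of the method.
def round_length(participants, seconds_per_breath=25, handover=5):
    """Total seconds for every participant to give one-breath feedback."""
    return participants * (seconds_per_breath + handover)

print(round_length(12) // 60)  # whole minutes to close with a 12-person team
```

Even a twelve-person team can close out in a handful of minutes, which is what makes this format practical at the end of a long session.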

34. Who What When Matrix 

Matrices feature as part of many effective problem-solving strategies and with good reason. They are easily recognizable, simple to use, and generate results.

The Who What When Matrix is a great tool to use when closing your problem-solving session by attributing a who, what and when to the actions and solutions you have decided upon. The resulting matrix is a simple, easy-to-follow way of ensuring your team can move forward. 

Great solutions can’t be enacted without action and ownership. Your problem-solving process should include a stage for allocating tasks to individuals or teams and creating a realistic timeframe for those solutions to be implemented or checked out. Use this method to keep the solution implementation process clear and simple for all involved. 

Who/What/When Matrix   #gamestorming   #action   #project planning   With Who/What/When matrix, you can connect people with clear actions they have defined and have committed to.
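The matrix itself is just three columns per agreed action, which makes it easy to capture digitally after the session. A rough sketch of such an action register; the owners, tasks, and dates below are invented examples, not part of the method:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ActionItem:
    who: str    # the owner who committed to the task
    what: str   # the concrete action agreed in the session
    when: date  # a realistic deadline for follow-up

def format_matrix(items):
    """Render the Who/What/When matrix as a simple text table."""
    header = f"{'Who':<8} | {'What':<32} | When"
    rows = [f"{i.who:<8} | {i.what:<32} | {i.when.isoformat()}" for i in items]
    return "\n".join([header] + rows)

# Invented example entries for illustration.
items = [
    ActionItem("Dana", "Draft the pilot survey", date(2024, 7, 1)),
    ActionItem("Luis", "Book the follow-up review meeting", date(2024, 7, 8)),
]
print(format_matrix(items))
```

Keeping one row per action, each with a single named owner and a date, is what gives the matrix its follow-through value.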

35. Response cards

Group discussion can comprise the bulk of most problem-solving activities and by the end of the process, you might find that your team is talked out! 

Providing a means for your team to give feedback with short written notes can ensure everyone is heard and can contribute without the need to stand up and talk. Depending on the needs of the group, giving an alternative can help ensure everyone can contribute to your problem-solving model in the way that makes the most sense for them.

Response Cards is a great way to close a workshop if you are looking for a gentle warm-down and want to get some swift discussion around some of the feedback that is raised. 

Response Cards   #debriefing   #closing   #structured sharing   #questions and answers   #thiagi   #action   It can be hard to involve everyone during the closing of a session. Some might stay in the background or go unheard because of louder participants. However, with the use of Response Cards, everyone will be involved in providing feedback or clarifying questions at the end of a session.

Save time and effort discovering the right solutions

A structured problem solving process is a surefire way of solving tough problems, discovering creative solutions and driving organizational change. But how can you design for successful outcomes?

With SessionLab, it’s easy to design engaging workshops that deliver results. Drag, drop and reorder blocks to build your agenda. When you make changes or update your agenda, your session timing adjusts automatically, saving you time on manual adjustments.

Collaborating with stakeholders or clients? Share your agenda with a single click and collaborate in real-time. No more sending documents back and forth over email.

Explore how to use SessionLab to design effective problem solving workshops or watch this five minute video to see the planner in action!


Over to you

The problem-solving process can often be as complicated and multifaceted as the problems it is set up to solve. With the right problem-solving techniques and a mix of creative exercises designed to guide discussion and generate purposeful ideas, we hope we’ve given you the tools to find the best solutions as simply and easily as possible.

Is there a problem-solving technique that you are missing here? Do you have a favorite activity or method you use when facilitating? Let us know in the comments below, we’d love to hear from you! 


thank you very much for these excellent techniques


Certainly wonderful article, very detailed. Shared!


Your list of techniques for problem solving can be helpfully extended by adding TRIZ. TRIZ has 40 problem solving techniques derived from the methods inventors and patent holders used to get new patents. About 10-12 are general approaches. Many organizations sponsor classes in TRIZ that are used to solve business problems or general organizational problems. You can take a look at TRIZ and download a free internet booklet to see if you feel it should be included per your selection process.




The Peak Performance Center

The Pursuit of Performance Excellence

Problem Solving

Problem solving is the process of identifying problems and their causes, developing and evaluating possible solutions, and implementing an action or strategy based upon the analysis in order to achieve a desired goal or outcome.

The ultimate goal of problem solving is to eliminate a problem.  Thus, the problem solving process involves identifying a problem, gathering information, generating and evaluating options and implementing solutions.

The Problem Solving Process

The best way to attack a problem is to view it as an opportunity for improvement.  It is therefore helpful to have a structure to follow to make sure that nothing is overlooked.

The problem solving process consists of a sequence of steps that, when correctly followed, most often leads to a successful solution.  The first steps involve defining and analyzing the problem to be solved.  The best way to define the problem is to write down a concise statement that summarizes it, and then write down where you want to be after the problem has been resolved.

The next steps involve generating and analyzing potential solutions for the problem.  The final steps involve selecting and implementing the best course of action.  It is always recommended to evaluate the solution after it is implemented.

Steps in Problem Solving

  • Identify and Define the Problem
  • Analyze the Problem
  • Generate Potential Solutions
  • Analyze the Solutions
  • Select the Best Solution(s)
  • Implement the Solution
  • Evaluate the Solution
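The steps above can be sketched as a simple driver loop. This is only an illustrative skeleton in Python; the callbacks (generate, score, apply_fix, resolved) are invented placeholders standing in for the real work of each step:

```python
# Hypothetical driver loop for the steps above; the callbacks are
# placeholders, not a real problem-solving implementation.
def solve(problem, generate, score, apply_fix, resolved, max_rounds=3):
    """Cycle through generate -> select -> implement -> evaluate."""
    for _ in range(max_rounds):
        options = generate(problem)         # Generate Potential Solutions
        best = max(options, key=score)      # Analyze and Select the Best Solution
        problem = apply_fix(problem, best)  # Implement the Solution
        if resolved(problem):               # Evaluate the Solution
            return best
    return None                             # no solution within the allotted rounds

# Toy usage: close a numeric "gap" of 10 using candidate step sizes.
best = solve(
    10,
    generate=lambda p: [1, 2, 5],
    score=lambda s: s,
    apply_fix=lambda p, s: p - s,
    resolved=lambda p: p <= 0,
)
print(best)
```

Note that the loop mirrors the recommendation in the text: evaluation happens after implementation, and the cycle repeats if the problem is not yet resolved.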

Development Series

Our Development Series provides strategies, techniques, and tools to help individuals and teams effectively solve problems.  In this series, you will learn techniques for extracting good information from a large amount of data, as well as for breaking large, seemingly unmanageable problems down into achievable parts.

Tips and Strategies including how to:

  • List the attributes of the problem.
  • Identify possible causes of the problem.
  • Look at the problem from more than one perspective.
  • Extract maximum information from facts.
  • Expand the boundaries of the problem.
  • Think outside the box for solutions.

Through our Tool Box, we provide tools and techniques to solve problems, including a SWOT analysis, risk analysis, and a decision-making matrix.  The tools in this section help you break down and understand complicated problems that might otherwise seem overwhelming and very complex.

Tools for Problem Solving

  • Cause & Effect Diagrams can be useful for making sure that all factors relating to a problem have been considered. 
  • SWOT Analysis helps to work out a successful strategy in a competitive environment. 
  • Risk Analysis provides a formal framework for identifying the potential risks, and helps to work out a strategy for controlling them.

By using these tools and techniques you can ensure that you carry out the best analysis possible.  These tools give you a starting point and a road map in the problem solving process.
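Risk analysis tools commonly score each identified risk as likelihood times impact, then rank the results so the controlling strategy tackles the biggest risks first. A minimal sketch; the risks and the 1-5 scales below are invented for illustration:

```python
# Illustrative risk-analysis scoring; the risks and 1-5 scales are invented.
def risk_score(likelihood, impact):
    """Classic likelihood x impact score (here on 1-5 scales, max 25)."""
    return likelihood * impact

risks = {
    "supplier delay": (4, 3),
    "budget overrun": (2, 5),
    "staff turnover": (3, 2),
}

# Rank risks so a controlling strategy tackles the biggest ones first.
ranked = sorted(risks, key=lambda name: risk_score(*risks[name]), reverse=True)
for name in ranked:
    print(name, risk_score(*risks[name]))
```

The point of the scoring is not precision but triage: it gives the team a shared, defensible order in which to work through controls.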




National Academies Press: OpenBook

Taking Science to School: Learning and Teaching Science in Grades K-8 (2007)

Chapter 5: Generating and Evaluating Scientific Evidence and Explanations

Major Findings in the Chapter:

Children are far more competent in their scientific reasoning than first suspected and adults are less so. Furthermore, there is great variation in the sophistication of reasoning strategies across individuals of the same age.

In general, children are less sophisticated than adults in their scientific reasoning. However, experience plays a critical role in facilitating the development of many aspects of reasoning, often trumping age.

Scientific reasoning is intimately intertwined with conceptual knowledge of the natural phenomena under investigation. This conceptual knowledge sometimes acts as an obstacle to reasoning, but often facilitates it.

Many aspects of scientific reasoning require experience and instruction to develop. For example, distinguishing between theory and evidence and many aspects of modeling do not emerge without explicit instruction and opportunities for practice.

In this chapter, we discuss the various lines of research related to Strand 2—generate and evaluate evidence and explanations. The ways in which scientists generate and evaluate scientific evidence and explanations have long been the focus of study in philosophy, history, anthropology, and sociology. More recently, psychologists and learning scientists have begun to study the cognitive and social processes involved in building scientific knowledge. For our discussion, we draw primarily from the past 20 years of research in developmental and cognitive psychology that investigates how children’s scientific thinking develops across the K-8 years.

We begin by developing a broad sketch of how key aspects of scientific thinking develop across the K-8 years, contrasting children’s abilities with those of adults. This contrast allows us to illustrate both how children’s knowledge and skill can develop over time and situations in which adults’ and children’s scientific thinking are similar. Where age differences exist, we comment on what underlying mechanisms might be responsible for them. In this research literature, two broad themes emerge, which we take up in detail in subsequent sections of the chapter. The first is the role of prior knowledge in scientific thinking at all ages. The second is the importance of experience and instruction.

Scientific investigation, broadly defined, includes numerous procedural and conceptual activities, such as asking questions, hypothesizing, designing experiments, making predictions, using apparatus, observing, measuring, being concerned with accuracy, precision, and error, recording and interpreting data, consulting data records, evaluating evidence, verification, reacting to contradictions or anomalous data, presenting and assessing arguments, constructing explanations (to oneself and others), constructing various representations of the data (graphs, maps, three-dimensional models), coordinating theory and evidence, performing statistical calculations, making inferences, and formulating and revising theories or models (e.g., Carey et al., 1989; Chi et al., 1994; Chinn and Malhotra, 2001; Keys, 1994; McNay and Melville, 1993; Schauble et al., 1995; Slowiaczek et al., 1992; Zachos et al., 2000). As noted in Chapter 2 , over the past 20 to 30 years, the image of “doing science” emerging from across multiple lines of research has shifted from depictions of lone scientists conducting experiments in isolated laboratories to the image of science as both an individual and a deeply social enterprise that involves problem solving and the building and testing of models and theories.

Across this same period, the psychological study of science has evolved from a focus on scientific reasoning as a highly developed form of logical thinking that cuts across scientific domains to the study of scientific thinking as the interplay of general reasoning strategies, knowledge of the natural phenomena being studied, and a sense of how scientific evidence and explanations are generated. Much early research on scientific thinking and inquiry tended to focus primarily either on conceptual development or on the development of reasoning strategies and processes, often using very simplified reasoning tasks. In contrast, many recent studies have attempted to describe a larger number of the complex processes that are deployed in the context of scientific inquiry and to describe their coordination. These studies often engage children in firsthand investigations in which they actively explore multivariable systems. In such tasks, participants initiate all phases of scientific discovery with varying amounts of guidance provided by the researcher. These studies have revealed that, in the context of inquiry, reasoning processes and conceptual knowledge are interdependent and in fact facilitate each other (Schauble, 1996; Lehrer et al., 2001).

It is important to note that, across the studies reviewed in this chapter, researchers have made different assumptions about what scientific reasoning entails and which aspects of scientific practice are most important to study. For example, some emphasize the design of well-controlled experiments, while others emphasize building and critiquing models of natural phenomena. In addition, some researchers study scientific reasoning in stripped down, laboratory-based tasks, while others examine how children approach complex inquiry tasks in the context of the classroom. As a result, the research base is difficult to integrate and does not offer a complete picture of students’ skills and knowledge related to generating and evaluating evidence and explanations. Nor does the underlying view of scientific practice guiding much of the research fully reflect the image of science and scientific understanding we developed in Chapter 2 .

TRENDS ACROSS THE K-8 YEARS

Generating Evidence

The evidence-gathering phase of inquiry includes designing the investigation as well as carrying out the steps required to collect the data. Generating evidence entails asking questions, deciding what to measure, developing measures, collecting data from the measures, structuring the data, systematically documenting outcomes of the investigations, interpreting and evaluating the data, and using the empirical results to develop and refine arguments, models, and theories.

Asking Questions and Formulating Hypotheses

Asking questions and formulating hypotheses is often seen as the first step in the scientific method; however, it can better be viewed as one of several phases in an iterative cycle of investigation. In an exploratory study, for example, work might start with structured observation of the natural world, which would lead to formulation of specific questions and hypotheses. Further data might then be collected, which lead to new questions, revised hypotheses, and yet another round of data collection. The phase of asking questions also includes formulating the goals of the activity and generating hypotheses and predictions (Kuhn, 2002).

Children differ from adults in their strategies for formulating hypotheses and in the appropriateness of the hypotheses they generate. Children often propose different hypotheses from adults (Klahr, 2000), and younger children (age 10) often conduct experiments without explicit hypotheses, unlike 12- to 14-year-olds (Penner and Klahr, 1996a). In self-directed experimental tasks, children tend to focus on plausible hypotheses and often get stuck focusing on a single hypothesis (e.g., Klahr, Fay, and Dunbar, 1993). Adults are more likely to consider multiple hypotheses (e.g., Dunbar and Klahr, 1989; Klahr, Fay, and Dunbar, 1993). For both children and adults, the ability to consider many alternative hypotheses is a factor contributing to success.

At all ages, prior knowledge of the domain under investigation plays an important role in the formulation of questions and hypotheses (Echevarria, 2003; Klahr, Fay, and Dunbar, 1993; Penner and Klahr, 1996b; Schauble, 1990, 1996; Zimmerman, Raghavan, and Sartoris, 2003). For example, both children and adults are more likely to focus initially on variables they believe to be causal (Kanari and Millar, 2004; Schauble, 1990, 1996). Hypotheses that predict expected results are proposed more frequently than hypotheses that predict unexpected results (Echevarria, 2003). The role of prior knowledge in hypothesis formulation is discussed in greater detail later in the chapter.

Designing Experiments

The design of experiments has received extensive attention in the research literature, with an emphasis on developmental changes in children’s ability to build experiments that allow them to identify causal variables. Experimentation can serve to generate observations in order to induce a hypothesis to account for the pattern of data produced (discovery context) or to test the tenability of an existing hypothesis under consideration (confirmation/verification context) (Klahr and Dunbar, 1988). At a minimum, one must recognize that the process of experimentation involves generating observations that will serve as evidence that will be related to hypotheses.

Ideally, experimentation should produce evidence or observations that are interpretable in order to make the process of evidence evaluation uncomplicated. One aspect of experimentation skill is to isolate variables in such a way as to rule out competing hypotheses. The control of variables is a basic strategy that allows valid inferences and narrows the number of possible experiments to consider (Klahr, 2000). Confounded experiments, those in which variables have not been isolated correctly, yield indeterminate evidence, thereby making valid inferences and subsequent knowledge gain difficult, if not impossible.

Early approaches to examining experimentation skills involved minimizing the role of prior knowledge in order to focus on the strategies that participants used. That is, the goal was to examine the domain-general strategies that apply regardless of the content to which they are applied. For example, building on the research tradition of Piaget (e.g., Inhelder and Piaget, 1958), Siegler and Liebert (1975) examined the acquisition of experimental design skills by fifth and eighth graders. The problem involved determining how to make an electric train run. The train was connected to a set of four switches, and the children needed to determine the particular on/off configuration required. The train was in reality controlled by a secret switch, so that the discovery of the correct solution was postponed until all 16 combinations were generated. In this task, there was no principled reason why any one of the combinations would be more or less likely, and success was achieved by systematically testing all combinations of a set of four switches. Thus the task involved no domain-specific knowledge that would constrain the hypotheses about which configuration was most likely. A similarly knowledge-lean task was used by Kuhn and Phelps (1982), similar to a task originally used by Inhelder and Piaget (1958), involving identifying reaction properties of a set of colorless fluids. Success on the task was dependent on the ability to isolate and control variables in the set of all possible fluid combinations in order to determine which was causally related to the outcome. The study extended over several weeks with variations in the fluids used and the difficulty of the problem.

In both studies, the importance of practice and instructional support was apparent. Siegler and Liebert’s study included two experimental groups of children who received different kinds of instructional support. Both groups were taught about factors, levels, and tree diagrams. One group received additional, more elaborate support that included practice and help representing all possible solutions with a tree diagram. For fifth graders, the more elaborate instructional support improved their performance compared with a control group that did not receive any support. For eighth graders, both kinds of instructional support led to improved performance. In the Kuhn and Phelps task, some students improved over the course of the study, although an abrupt change from invalid to valid strategies was not common. Instead, the more typical pattern was one in which valid and invalid strategies coexisted both within and across sessions, with a pattern of gradual attainment of stable valid strategies by some students (the stabilization point varied but was typically around weeks 5-7).
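The combinatorics of the Siegler and Liebert train task are easy to make concrete: four binary switches yield 2**4 = 16 configurations, and systematic testing simply enumerates them all. A small sketch in Python; the `secret` configuration below is invented for illustration (in the actual study the train was rigged so that no configuration succeeded until all 16 had been tried):

```python
from itertools import product

# The train task: four on/off switches give 2**4 = 16 configurations,
# and systematic testing enumerates them all.
configs = list(product([0, 1], repeat=4))  # 0 = off, 1 = on

# A hypothetical "secret" configuration standing in for the rigged switch.
secret = (1, 0, 1, 1)
tested = 0
for config in configs:
    tested += 1
    if config == secret:
        break
print(f"found after {tested} of {len(configs)} trials")
```

The instructional supports in the study (factors, levels, tree diagrams) amount to teaching children to generate exactly this kind of exhaustive, non-repeating enumeration.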

Since this early work, researchers have tended to investigate children’s and adults’ performance on experimental design tasks that are more knowledge rich and less constrained. Results from these studies indicate that, in general, adults are more proficient than children at designing informative experiments. In a study comparing adults with third and sixth graders, adults were more likely to focus on experiments that would be informative (Klahr, Fay, and Dunbar, 1993). Similarly, Schauble (1996) found that during the initial 3 weeks of exploring a domain, children and adults considered about the same number of possible experiments. However, when they began experimentation of another domain in the second 3 weeks of the study, adults considered a greater range of possible experiments. Over the full 6 weeks, children and adults conducted approximately the same number of experiments. Thus, children were more likely to conduct unintended duplicate or triplicate experiments, making their experimentation efforts less informative relative to the adults, who were selecting a broader range of experiments. Similarly, children are more likely to devote multiple experimental trials to variables that were already well understood, whereas adults move on to exploring variables they did not understand as well (Klahr, Fay, and Dunbar, 1993; Schauble, 1996). Evidence also indicates, however, that dimensions of the task often have a greater influence on performance than age (Linn, 1978, 1980; Linn, Chen, and Their, 1977; Linn and Levine, 1978).

With respect to attending to one feature at a time, children are less likely to control one variable at a time than adults. For example, Schauble (1996) found that across two task domains, children used controlled comparisons about a third of the time. In contrast, adults improved from 50 percent usage on the first task to 63 percent on the second task. Children usually begin by designing confounded experiments (often as a means to produce a desired outcome), but with repeated practice begin to use a strategy of changing one variable at a time (e.g., Kuhn, Schauble, and Garcia-Mila, 1992; Kuhn et al., 1995; Schauble, 1990).
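The control-of-variables strategy has a crisp operational definition: two experiments form a valid controlled comparison only if they differ in exactly one variable. A small illustrative check; the object attributes below are invented, loosely echoing the sinking-objects materials discussed later in the chapter:

```python
# Sketch of the control-of-variables strategy: two experiments form a valid
# controlled comparison only if they differ in exactly one variable.
# The object attributes below are invented for illustration.
def differs_in_one(a, b):
    return sum(x != y for x, y in zip(a, b)) == 1

order = ["shape", "material", "size"]
as_tuple = lambda d: tuple(d[k] for k in order)

base       = {"shape": "cube",   "material": "steel", "size": "large"}
confounded = {"shape": "sphere", "material": "wood",  "size": "large"}  # two changes
controlled = {"shape": "cube",   "material": "wood",  "size": "large"}  # one change

print(differs_in_one(as_tuple(base), as_tuple(confounded)))  # False: outcome is ambiguous
print(differs_in_one(as_tuple(base), as_tuple(controlled)))  # True: isolates "material"
```

The confounded pair changes shape and material at once, so any difference in outcome cannot be attributed to either variable; the controlled pair isolates material, which is exactly the inference structure the research tracks children acquiring with practice.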

Reminiscent of the results of the earlier study by Kuhn and Phelps, both children and adults display intraindividual variability in strategy usage. That is, multiple strategy usage is not unique to childhood or periods of developmental transition (Kuhn et al., 1995). A robust finding is the coexistence of valid and invalid strategies (e.g., Kuhn, Schauble, and Garcia-Mila, 1992; Garcia-Mila and Andersen, 2005; Gleason and Schauble, 2000; Schauble, 1990; Siegler and Crowley, 1991; Siegler and Shipley, 1995). That is, participants may progress to the use of a valid strategy, but then return to an inefficient or invalid strategy. Similar use of multiple strategies has been found in research on the development of other academic skills, such as mathematics (e.g., Bisanz and LeFevre, 1990; Siegler and Crowley, 1991), reading (e.g., Perfetti, 1992), and spelling (e.g., Varnhagen, 1995). With respect to experimentation strategies, an individual may begin with an invalid strategy, but once the usefulness of changing one variable at a time is discovered, it is not immediately used exclusively. The newly discovered, effective strategy is only slowly incorporated into an individual’s set of strategies.

An individual’s perception of the goals of an investigation also has an important effect on the hypotheses they generate and their approach to experimentation. Individuals tend to differ in whether they see the overarching goal of an inquiry task as seeking to identify which factors make a difference (scientific) or seeking to produce a desired effect (engineering). It is a question for further research whether these different approaches characterize an individual or are invoked by task demands or implicit assumptions.

In a direct exploration of the effect of adopting scientific versus engineering goals, Schauble, Klopfer, and Raghavan (1991) provided fifth and sixth graders with an “engineering context” and a “science context.” When the children were working as scientists, their goal was to determine which factors made a difference and which ones did not. When the children were working as engineers, their goal was optimization, that is, to produce a desired effect (i.e., the fastest boat in the canal task). When working in the science context, the children worked more systematically, by establishing the effect of each variable, alone and in combination. There was an effort to make inclusion inferences (i.e., an inference that a factor is causal) and exclusion inferences (i.e., an inference that a factor is not causal). In the engineering context, children selected highly contrastive combinations and focused on factors believed to be causal while overlooking factors believed or demonstrated to be noncausal. Typically, children took a “try-and-see” approach to experimentation while acting as engineers, but they took a theory-driven approach to experimentation when acting as scientists. Schauble et al. (1991) found that children who received the engineering instructions first, followed by the scientist instructions, made the greatest improvements. Similarly, Sneider et al. (1984) found that students’ ability to plan and critique experiments improved when they first engaged in an engineering task of designing rockets.

Another pair of contrasting approaches to scientific investigation is the theorist versus the experimentalist (Klahr and Dunbar, 1998; Schauble, 1990). Similar variation in strategies for problem solving has been observed for chess, puzzles, physics problems, science reasoning, and even elementary arithmetic (Chase and Simon, 1973; Klahr and Robinson, 1981; Klayman and Ha, 1989; Kuhn et al., 1995; Larkin et al., 1980; Lovett and Anderson, 1995, 1996; Simon, 1975; Siegler, 1987; Siegler and Jenkins, 1989). Individuals who take a theory-driven approach tend to generate hypotheses and then test the predictions of the hypotheses. Experimenters tend to make data-driven discoveries, by generating data and finding the hypothesis that best summarizes or explains that data. For example, Penner and Klahr (1996a) asked 10- to 14-year-olds to conduct experiments to determine how the shape, size, material, and weight of an object influence sinking times. Students’ approaches to the task could be classified as either “prediction oriented” (i.e., a theorist: “I believe that weight makes a difference”) or “hypothesis oriented” (i.e., an experimenter: “I wonder if …”). The 10-year-olds were more likely to take a prediction (or demonstration) approach, whereas the 14-year-olds were more likely to explicitly test a hypothesis about an attribute without a strong belief or need to demonstrate that belief. Although these patterns may characterize approaches to any given task, it has yet to be determined if such styles are idiosyncratic to the individual and likely to remain stable across varying tasks, or if different styles might emerge for the same person depending on task demands or the domain under investigation.

Observing and Recording

Record keeping is an important component of scientific investigation in general, and of self-directed experimental tasks especially, because access to and consulting of cumulative records are often important in interpreting evidence. Early studies of experimentation demonstrated that children are often not aware of their own memory limitations, and this plays a role in whether they document their work during an investigation (e.g., Siegler and Liebert, 1975). Recent studies corroborate the importance of an awareness of one’s own memory limitations while engaged in scientific inquiry tasks, regardless of age. Spontaneous note-taking or other documentation of experimental designs and results may be a factor contributing to the observed developmental differences in performance on both experimental design tasks and in evaluation of evidence. Carey et al. (1989) reported that, prior to instruction, seventh graders did not spontaneously keep records when trying to determine and keep track of which substance was responsible for producing a bubbling reaction in a mixture of yeast, flour, sugar, salt, and warm water. Nevertheless, even though preschoolers are likely to produce inadequate and uninformative notations, they can distinguish between the two when asked to choose between them (Triona and Klahr, in press). Dunbar and Klahr (1988) also noted that children (grades 3-6) were unlikely to check if a current hypothesis was or was not consistent with previous experimental results. In a study by Trafton and Trickett (2001), undergraduates solving scientific reasoning problems in a computer environment were more likely to achieve correct performance when using the notebook function (78 percent) than were nonusers (49 percent), showing that this issue is not unique to childhood.

In a study of fourth graders’ and adults’ spontaneous use of notebooks during a 10-week investigation of multivariable systems, all but one of the adults took notes, whereas only half of the children took notes. Moreover, despite variability in the amount of notebook usage in both groups, on average adults made three times more notebook entries than children did. Adults’ note-taking remained stable across the 10 weeks, but children’s frequency of use decreased over time, dropping to about half of their initial usage. Children rarely reviewed their notes, which typically consisted of conclusions, but not the variables used or the outcomes of the experimental tests (i.e., the evidence for the conclusion was not recorded) (Garcia-Mila and Andersen, 2005).

Children may differentially record the results of experiments, depending on familiarity or strength of prior theories. For example, 10- to 14-year-olds recorded more data points when experimenting with the force needed to pull boxes of differing weight and surface area than when they were experimenting with pendulums (Kanari and Millar, 2004). Overall, it is a fairly robust finding that children are less likely than adults to record experimental designs and outcomes or to review what notes they do keep, despite task demands that clearly necessitate a reliance on external memory aids.

Given the increasing attention to the importance of metacognition for proficient performance on such tasks (e.g., Kuhn and Pearsall, 1998, 2000), it is important to determine at what point children and early adolescents recognize their own memory limitations as they navigate through a complex task. Some studies show that children’s understanding of how their own memories work continues to develop across the elementary and middle school grades (Siegler and Alibali, 2005). The implication is that there is no particular age or grade level when memory and limited understanding of one’s own memory are no longer a consideration. As such, knowledge of how one’s own memory works may represent an important moderating variable in understanding the development of scientific reasoning (Kuhn, 2001). For example, if a student is aware that it will be difficult for her to remember the results of multiple trials, she may be more likely to carefully record each outcome. However, it may also be the case that children, like adult scientists, need to be inducted into the practice of record keeping and the use of records. They are likely to need support to understand the important role of records in generating scientific evidence and supporting scientific arguments.

Evaluating Evidence

The important role of evidence evaluation in the process of scientific activity has long been recognized. Kuhn (1989), for example, has argued that the defining feature of scientific thinking is the set of skills involved in differentiating and coordinating theory and evidence. Various strands of research provide insight on how children learn to engage in this phase of scientific inquiry. There is an extensive literature on the evaluation of evidence, beginning with early research on identifying patterns of covariation and cause that used highly structured experimental tasks. More recently researchers have studied how children evaluate evidence in the context of self-directed experimental tasks. In real-world contexts (in contrast to highly controlled laboratory tasks) the process of evidence evaluation is very messy and requires an understanding of error and variation. As was the case for hypothesis generation and the design of experiments, the role of prior knowledge and beliefs has emerged as an important influence on how individuals evaluate evidence.

Covariation Evidence

A number of early studies on the development of evidence evaluation skills used knowledge-lean tasks that asked participants to evaluate existing data. These data were typically in the form of covariation evidence—that is, the frequency with which two events do or do not occur together. Evaluation of covariation evidence is potentially important in regard to scientific thinking because covariation is one potential cue that two events are causally related. Deanna Kuhn and her colleagues carried out pioneering work on children’s and adults’ evaluation of covariation evidence, with a focus on how participants coordinate their prior beliefs about the phenomenon with the data presented to them (see Box 5-1).
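
The covariation evidence at the center of these tasks can be made concrete with a simple contingency calculation. The sketch below is illustrative only (it is not drawn from Kuhn's materials); it computes the ΔP index, one common measure of covariation between a candidate cause and an effect in a 2 × 2 contingency table:

```python
def delta_p(cause_effect, cause_no_effect, no_cause_effect, no_cause_no_effect):
    """Compute the delta-P covariation index from 2x2 contingency counts.

    delta-P = P(effect | cause) - P(effect | no cause).
    Values near +1 suggest strong positive covariation between the
    candidate cause and the effect; values near 0 suggest none.
    """
    p_effect_given_cause = cause_effect / (cause_effect + cause_no_effect)
    p_effect_given_no_cause = no_cause_effect / (no_cause_effect + no_cause_no_effect)
    return p_effect_given_cause - p_effect_given_no_cause

# Hypothetical counts: colds observed with and without a particular food.
print(delta_p(8, 2, 2, 8))  # 0.8 - 0.2: strong positive covariation
print(delta_p(5, 5, 5, 5))  # no covariation
```

A nonzero ΔP is only a cue that two events may be causally related; as the studies reviewed here show, interpreting that cue still requires coordinating it with prior beliefs and background theory.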

Results across a series of studies revealed continuous improvement of the skills involved in differentiating and coordinating theory and evidence, as well as bracketing prior belief while evaluating evidence, from middle childhood (grades 3 and 6) to adolescence (grade 9) to adulthood (Kuhn, Amsel, and O’Loughlin, 1988). These skills, however, did not appear to develop to an optimal level even among adults, who still tended to meld theory and evidence into a single mental representation of “the way things are.”

Participants had a variety of strategies for keeping theory and evidence in alignment with one another when they were in fact discrepant. One tendency was to ignore, distort, or selectively attend to evidence that was inconsistent with a favored theory. For example, the protocol from one ninth grader demonstrated that upon repeated instances of covariation between type of breakfast roll and catching colds, he would not acknowledge this relationship: “They just taste different … the breakfast roll to me don’t cause so much colds because they have pretty much the same thing inside” (Kuhn, Amsel, and O’Loughlin, 1988, p. 73).

Another tendency was to adjust a theory to fit the evidence, a process that was most often outside an individual’s conscious awareness and control. For example, when asked to recall their original beliefs, participants would often report a theory consistent with the evidence that was presented, and not the theory as originally stated. Take the case of one ninth grader who did not believe that type of condiment (mustard versus ketchup) was causally related to catching colds. With each presentation of an instance of covariation evidence, he acknowledged the evidence and elaborated a theory based on the amount of ingredients or vitamins and the temperature of the food the condiment was served with to make sense of the data (Kuhn, Amsel, and O’Loughlin, 1988, p. 83). Kuhn argued that this tendency suggests that the student’s theory does not exist as an object of cognition. That is, a theory and the evidence for that theory are undifferentiated—they do not exist as separate cognitive entities. If they do not exist as separate entities, it is not possible to flexibly and consciously reflect on the relation of one to the other.

A number of researchers have criticized Kuhn’s findings on both methodological and theoretical grounds. Sodian, Zaitchik, and Carey (1991), for example, questioned the finding that third and sixth grade children cannot distinguish between their beliefs and the evidence, pointing to the complexity of the tasks Kuhn used as problematic. They chose to employ simpler tasks that involved story problems about phenomena for which children did not hold strong beliefs. Children’s performance on these tasks demonstrated that even first and second graders could differentiate a hypothesis from the evidence. Likewise, Ruffman et al. (1993) used a simplified task and showed that 6-year-olds were able to form a causal hypothesis based on a pattern of covariation evidence. A study of children and adults (Amsel and Brock, 1996) indicated an important role of prior beliefs, especially for children. When presented with evidence that disconfirmed prior beliefs, children from both grade levels tended to make causal judgments consistent with their prior beliefs. When confronted with confirming evidence, however, both groups of children and adults made similar judgments. Looking across these studies provides insight into the conditions under which children are more or less proficient at coordinating theory and evidence. In some situations, children are better at distinguishing prior beliefs from evidence than the results of Kuhn et al. suggest.

Koslowski (1996) criticized Kuhn et al.’s work on more theoretical grounds. She argued that reliance on knowledge-lean tasks in which participants are asked to suppress their prior knowledge may lead to an incomplete or distorted picture of the reasoning abilities of children and adults. Instead, Koslowski suggested that using prior knowledge when gathering and evaluating evidence is a valid strategy. She developed a series of experiments to support her thesis and to explore the ways in which prior knowledge might play a role in evaluating evidence. The results of these investigations are described in detail in the later section of this chapter on the role of prior knowledge.

Evidence in the Context of Investigations

Researchers have also looked at reasoning about cause in the context of full investigations of causal systems. Two main types of multivariable systems are used in these studies. In the first type of system, participants are involved in a hands-on manipulation of a physical system, such as a ramp (e.g., Chen and Klahr, 1999; Masnick and Klahr, 2003) or a canal (e.g., Gleason and Schauble, 2000; Kuhn, Schauble, and Garcia-Mila, 1992). The second type of system is a computer simulation, such as the Daytona microworld in which participants discover the factors affecting the speed of race cars (Schauble, 1990). A variety of virtual environments have been created in domains such as electric circuits (Schauble et al., 1992), genetics (Echevarria, 2003), earthquake risk, and flooding risk (e.g., Keselman, 2003).

The inferences that are made based on self-generated experimental evidence are typically classified as causal (or inclusion), noncausal (or exclusion), or indeterminate, and each type can be further classified as valid or invalid. Invalid (or false) inclusion is of particular interest because in self-directed experimental contexts, both children and adults often infer based on prior beliefs that a variable is causal, when in reality it is not.
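
The distinction between valid and invalid inferences can be sketched computationally. The toy classifier below is an illustration under simplifying assumptions, not the coding scheme used in these studies: a causal or noncausal inference about a focal variable counts as valid only when the comparison that produced it is unconfounded, that is, when the two trials differ on the focal variable alone.

```python
def classify_inference(trial_a, trial_b, focal, outcomes_differ):
    """Classify the inference licensed by comparing two experimental trials.

    trial_a, trial_b: dicts mapping variable names to their settings.
    focal: the variable the inference is about.
    outcomes_differ: whether the two trials produced different outcomes.
    """
    differing = sorted(v for v in trial_a if trial_a[v] != trial_b[v])
    if differing != [focal]:
        # Confounded (or focal variable not varied): no valid causal or
        # noncausal conclusion about the focal variable can be drawn.
        return "indeterminate"
    return "causal" if outcomes_differ else "noncausal"

# Unconfounded comparison: only steepness differs, and outcomes differ.
ramp_a = {"steepness": "high", "surface": "smooth", "length": "long"}
ramp_b = {"steepness": "low", "surface": "smooth", "length": "long"}
print(classify_inference(ramp_a, ramp_b, "steepness", outcomes_differ=True))
# -> causal

# Confounded comparison: steepness and surface both differ.
ramp_c = {"steepness": "low", "surface": "rough", "length": "long"}
print(classify_inference(ramp_a, ramp_c, "steepness", outcomes_differ=True))
# -> indeterminate
```

An invalid (false) inclusion, in these terms, is concluding "causal" from a comparison that the classifier would mark "indeterminate".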

Children tend to focus on making causal inferences during their initial explorations of a causal system. In a study in which children worked to discover the causal structure of a computerized microworld, fifth and sixth graders began by producing confounded experiments and relied on prior knowledge or expectations (Schauble, 1990). As a result, in their early explorations of the causal system, they were more likely to make incorrect causal inferences. In a direct comparison of adults and children (Schauble, 1996), adults also focused on making causal inferences, but they made more valid inferences because their experimentation was more often done using a control-of-variables strategy. Overall, children’s inferences were valid 44 percent of the time, compared with 72 percent for adults. The fifth and sixth graders improved over the course of six sessions, starting at 25 percent but improving to almost 60 percent valid inferences (Schauble, 1996). Adults were more likely than children to make inferences about which variables were noncausal or inferences of indeterminacy (80 and 30 percent, respectively) (Schauble, 1996).

Children’s difficulty with inferences of noncausality also emerged in a study of 10- to 14-year-olds who explored factors influencing the swing of a pendulum or the force needed to pull a box along a level surface (Kanari and Millar, 2004). Only half of the students were able to draw correct conclusions about factors that did not covary with the outcome. Students were likely to selectively record data, selectively attend to data, distort or reinterpret the data, or state that noncovariation experimental trials were “inconclusive.” Such tendencies are reminiscent of other findings that some individuals selectively attend to or distort data in order to preserve a prior theory or belief (Kuhn, Amsel, and O’Loughlin, 1988; Zimmerman, Raghavan, and Sartoris, 2003).

Some researchers suggest children’s difficulty with noncausal or indeterminate inferences may be due both to experience and to the inherent complexity of the problem. In terms of experience, in the science classroom it is typical to focus on variables that “make a difference,” and therefore students struggle when testing variables that do not covary with the outcome (e.g., the weight of a pendulum does not affect the time of swing, and the vertical height of a weight does not affect balance) (Kanari and Millar, 2004). Also, valid exclusion and indeterminacy inferences may be conceptually more complex, because they require one to consider a pattern of evidence produced from several experimental trials (Kuhn et al., 1995; Schauble, 1996). Looking across several trials may require one to review cumulative records of previous outcomes. As has been suggested previously, children often do not have the memory skills to record information, to record sufficient information, or to consult such information when it has been recorded.

The importance of experience is highlighted by the results of studies conducted over several weeks with fifth and sixth graders. After several weeks with a task, children started making more exclusion inferences (that factors are not causal) and indeterminacy inferences (that one cannot make a conclusive judgment about a confounded comparison) and did not focus solely on causal inferences (e.g., Keselman, 2003; Schauble, 1996). They also began to distinguish between an informative and an uninformative experiment by attending to or controlling other factors, leading to an improved ability to make valid inferences. Through repeated exposure, invalid inferences, such as invalid inclusions, dropped in frequency. The tendency to begin to make inferences of indeterminacy suggests that students developed more awareness of the adequacy or inadequacy of their experimentation strategies for generating sufficient and interpretable evidence.

Children and adults also differ in generating sufficient evidence to support inferences. In contexts in which it is possible, children often terminate their search early, believing that they have determined a solution to the problem (e.g., Dunbar and Klahr, 1989). In studies over several weeks in which children must continue their investigation (e.g., Schauble et al., 1991), this is less likely because of the task requirements. Children are also more likely to refer to the most recently generated evidence. They may jump to a conclusion after a single experiment, whereas adults typically need to see the results of several experiments (e.g., Gleason and Schauble, 2000).

As was found with experimentation, children and adults display intraindividual variability in strategy usage with respect to inference types. Likewise, the existence of multiple inference strategies is not unique to childhood (Kuhn et al., 1995). In general, early in an investigation, individuals focus primarily on identifying factors that are causal and are less likely to consider definitely ruling out factors that are not causal. However, valid and invalid inference strategies co-occur during the course of exploring a causal system. As with experimentation, the addition of a valid inference strategy to an individual’s repertoire does not mean that they immediately give up the others. Early in investigations, there is a focus on causal hypotheses and inferences, whether they are warranted or not. Only with additional exposure do children start to make inferences of noncausality and indeterminacy. Knowledge change and experience—gaining a better understanding of the causal system via experimentation—were associated with the use of valid experimentation and inference strategies.

THE ROLE OF PRIOR KNOWLEDGE

In the previous section we reviewed evidence on developmental differences in using scientific strategies. Across multiple studies, prior knowledge

emerged as an important influence on several parts of the process of generating and evaluating evidence. In this section we look more closely at the specific ways that prior knowledge may shape part of the process. Prior knowledge includes conceptual knowledge, that is, knowledge of the natural world and specifically of the domain under investigation, as well as prior knowledge and beliefs about the purpose of an investigation and the goals of science more generally. This latter kind of prior knowledge is touched on here and discussed in greater detail in the next chapter.

Beliefs About Causal Mechanism and Plausibility

In response to research on evaluation of covariation evidence that used knowledge-lean tasks or even required participants to suppress prior knowledge, Koslowski (1996) argued that it is legitimate and even helpful to consider prior knowledge when gathering and evaluating evidence. The world is full of correlations, and consideration of plausibility, causal mechanism, and alternative causes can help to determine which correlations between events should be taken seriously and which should be viewed as spurious. For example, the identification of the E. coli bacterium provides a mechanism that supports a causal relationship between hamburger consumption and certain types of illness or mortality. Because of the absence of a causal mechanism, one does not seriously consider the correlation between ice cream consumption and violent crime rate as causal, but instead looks for other covarying quantities (such as high temperatures) that may be causal for both behaviors and thus explain the correlation.
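
The ice cream example can be illustrated with a small simulation (the numbers below are invented for illustration): a common cause, temperature, drives two variables that have no direct causal link, yet the two covary strongly.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

random.seed(42)
temperature = [random.uniform(0, 35) for _ in range(500)]
# Each variable depends on temperature plus independent noise;
# neither depends on the other.
ice_cream_sales = [2.0 * t + random.gauss(0, 10) for t in temperature]
crime_rate = [1.5 * t + random.gauss(0, 10) for t in temperature]

print(round(pearson(ice_cream_sales, crime_rate), 2))  # strongly positive
```

The spurious correlation would vanish if temperature were held constant, which is what attending to mechanism and alternative causes accomplishes informally.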

Koslowski (1996) presented a series of experiments that demonstrate the interdependence of theory and evidence in legitimate scientific reasoning (see Box 5-2 for an example). In most of these studies, all participants (sixth graders, ninth graders, and adults) did take mechanism into consideration when evaluating evidence in relation to a hypothesis about a causal relationship. Even sixth graders considered more than patterns of covariation when making causal judgments (Koslowski and Okagaki, 1986; Koslowski et al., 1989). In fact, as discussed in the previous chapter, results of studies by Koslowski (1996) and others (Ahn et al., 1995) indicate that children and adults have naïve theories about the world that incorporate information about both covariation and causal mechanism.

The plausibility of a mechanism also plays a role in reasoning about cause. In some situations, scientific progress occurs by taking seemingly implausible correlations seriously (Wolpert, 1993). Similarly, Koslowski argued that if people rely on covariation and mechanism information in an interdependent and judicious manner, then they should pay attention to implausible correlations (i.e., those with no apparent mechanism) when the implausible correlation occurs repeatedly. For example, discovering the cause of Kawasaki’s syndrome depended on taking seriously the implausible correlation between the illness and having recently cleaned carpets. Similarly, Thagard (1998a, 1998b) describes the case of researchers Warren and Marshall, who proposed that peptic ulcers could be caused by a bacterium, and their efforts to have their theory accepted by the medical community. The bacterial theory of ulcers was initially rejected as implausible, given the assumption that the stomach is too acidic to allow bacteria to survive.

Studies with both children and adults reveal links between reasoning about mechanism and the plausibility of that mechanism (Koslowski, 1996). When presented with an implausible covariation (e.g., improved gas mileage and color of car), participants rated the causal status of the implausible cause (color) before and after learning about a possible way that the cause could bring about the effect (improved gas mileage). In this example, participants learned that the color of the car affects the driver’s alertness (which affects driving quality, which in turn affects gas mileage). At all ages, participants increased their causal ratings after learning about a possible mediating mechanism. The presence of a possible mechanism in addition to a large number of covariations (four or more) was taken to indicate the possibility of a causal relationship for both plausible and implausible covariations. When either generating or assessing mechanisms for plausible covariations, all age groups (sixth and ninth graders and adults) were comparable. When the covariation was implausible, sixth graders were more likely to generate dubious mechanisms to account for the correlation.

The role of prior knowledge, especially beliefs about causal mechanism and plausibility, is also evident in hypothesis formation and the design of investigations. Individuals’ prior beliefs influence the choice of which hypotheses to test, including which hypotheses are tested first, repeatedly, or receive the most time and attention (e.g., Echevarria, 2003; Klahr, Fay, and Dunbar, 1993; Penner and Klahr, 1996b; Schauble, 1990, 1996; Zimmerman, Raghavan, and Sartoris, 2003). For example, children’s favored theories sometimes result in the selection of invalid experimentation and evidence evaluation heuristics (e.g., Dunbar and Klahr, 1989; Schauble, 1990). Plausibility of a hypothesis may serve as a guide for which experiments to pursue. Klahr, Fay, and Dunbar (1993) provided third and sixth grade children and adults with hypotheses to test that were incorrect but either plausible or implausible. For plausible hypotheses, children and adults tended to go about demonstrating the correctness of the hypothesis rather than setting up experiments to decide between rival hypotheses. For implausible hypotheses, adults and some sixth graders proposed a plausible rival hypothesis and set up an experiment that would discriminate between the two. Third graders tended to propose a plausible hypothesis but then ignore or forget the initial implausible hypothesis, getting sidetracked in an attempt to demonstrate that the plausible hypothesis was correct.

Recognizing the interdependence of theory and data in the evaluation of evidence and explanations, Chinn and Brewer (2001) proposed that people evaluate evidence by building a mental model of the interrelationships between theories and data. These models integrate patterns of data, procedural details, and the theoretical explanation of the observed findings (which may include unobservable mechanisms, such as molecules, electrons, enzymes, or intentions and desires). The information and events can be linked by different kinds of connections, including causal, contrastive, analogical, and inductive links. The mental model may then be evaluated by considering the plausibility of these links. In addition to considering the links between, for example, data and theory, the model might also be evaluated by appealing to alternate causal mechanisms or alternate explanations. Essentially, an individual seeks to “undermine one or more of the links in the model” (p. 337). If no reasons to be critical can be identified, the individual may accept the new evidence or theoretical interpretation.

Some studies suggest that the strength of prior beliefs, as well as the personal relevance of those beliefs, may influence the evaluation of the mental model (Chinn and Malhotra, 2002; Klaczynski, 2000; Klaczynski and Narasimham, 1998). For example, when individuals have reason to disbelieve evidence (e.g., because it is inconsistent with prior belief), they will search harder for flaws in the data (Kunda, 1990). As a result, individuals may not find the evidence compelling enough to reassess their cognitive model. In contrast, beliefs about simple empirical regularities may not be held with such conviction (e.g., the falling speed of heavy versus light objects), making it easier to change a belief in response to evidence.

Evaluating Evidence That Contradicts Prior Beliefs

Anomalous data or evidence refers to results that do not fit with one’s current beliefs. Anomalous data are considered very important by scientists because of their role in theory change, and they have been used by science educators to promote conceptual change. The idea that anomalous evidence promotes conceptual change (in the scientist or the student) rests on a number of assumptions, including that individuals have beliefs or theories about natural or social phenomena, that they are capable of noticing that some evidence is inconsistent with those theories, that such evidence calls into question those theories, and, in some cases, that a belief or theory will be altered or changed in response to the new (anomalous) evidence (Chinn and Brewer, 1998). Chinn and Brewer propose that there are eight possible responses to anomalous data. Individuals can (1) ignore the data; (2) reject the data (e.g., because of methodological error, measurement error, bias); (3) acknowledge uncertainty about the validity of the data; (4) exclude the data as being irrelevant to the current theory; (5) hold the data in abeyance (i.e., withhold a judgment about the relation of the data to the initial theory); (6) reinterpret the data as consistent with the initial theory; (7) accept the data and make peripheral change or minor modification to the theory; or (8) accept the data and change the theory. Examples of all of these responses were found in undergraduates’ responses to data that contradicted theories to explain the mass extinction of dinosaurs and theories about whether dinosaurs were warm-blooded or cold-blooded.

In a series of studies, Chinn and Malhotra (2002) examined how fourth, fifth, and sixth graders responded to experimental data that were inconsistent with their existing beliefs. Experiments from physical science domains were selected in which the outcomes produced either ambiguous or unambiguous data, and for which the findings were counterintuitive for most children. For example, most children assume that a heavy object falls faster than a light object. When the two objects are dropped simultaneously, there is some ambiguity because it is difficult to observe both objects. An example of a topic that is counterintuitive but results in unambiguous evidence is the reaction temperature of baking soda added to vinegar. Children believe that either no change in temperature will occur, or that the fizzing causes an increase in temperature. Thermometers unambiguously show a temperature drop of about 4 degrees centigrade.

When examining the anomalous evidence produced by these experiments, children’s difficulties seemed to occur in one of four cognitive processes: observation, interpretation, generalization, or retention (Chinn and Malhotra, 2002). For example, prior belief may influence what is “observed,” especially in the case of data that are ambiguous, and children may not perceive the two objects as landing simultaneously. Inferences based on this faulty observation will then be incorrect. At the level of interpretation, even if individuals accurately observed the outcome, they might not shift their theory to align with the evidence. They can fail to do so in many ways, such as ignoring or distorting the data or discounting the data because they are considered flawed. At the level of generalization, an individual may accept, for example, that these particular heavy and light objects fell at the same rate but insist that the same rule may not hold for other situations or objects. Finally, even when children appeared to change their beliefs about an observed phenomenon in the immediate context of the experiment, their prior beliefs reemerged later, indicating a lack of long-term retention of the change.

Penner and Klahr (1996a) investigated the extent to which children’s prior beliefs affect their ability to design and interpret experiments. They used a domain in which most children hold a strong belief that heavier objects sink in fluid faster than light objects, and they examined children’s ability to design unconfounded experiments to test that belief. In this study, for objects of a given composition and shape, sink times for heavy and light objects are nearly indistinguishable to an observer. For example, the sink times for the stainless steel spheres weighing 65 gm and 19 gm were .58 sec and .62 sec, respectively. Only one of the eight children (out of 30) who chose to directly contrast these two objects continued to explore the reason for the unexpected finding that the large and small spheres had equivalent sink times. The process of knowledge change was not straightforward. For example, some children suggested that the size of the smaller steel ball offset the fact that it weighed less because it was able to move through the water as fast as the larger, heavier steel ball. Others concluded that both weight and shape make a difference. That is, there was an attempt to reconcile the evidence with prior knowledge and expectations by appealing to causal mechanisms, alternate causes, or enabling conditions.

What is also important to note about the children in the Penner and Klahr study is that they did in fact notice the surprising finding, rather than ignore or misrepresent the data. They tried to make sense of the outcome by acting as a theorist who conjectures about the causal mechanisms, boundary conditions, or other ad hoc explanations (e.g., shape) to account for the results of an experiment. In Chinn and Malhotra’s (2002) study of students’ evaluation of observed evidence (e.g., watching two objects fall simultaneously), the process of noticing was found to be an important mediator of conceptual change.

Echevarria (2003) examined seventh graders’ reactions to anomalous data in the domain of genetics and whether such data served as a catalyst for knowledge construction during the course of self-directed experimentation. Students in the study completed a 3-week unit on genetics that involved genetics simulation software and observing plant growth. In both the software and the plants, students investigated or observed the transmission of one trait. Anomalies in the data were defined as outcomes that were not readily explainable on the basis of the appearance of the parents.

In general, the number of hypotheses generated, the number of tests conducted, and the number of explanations generated were a function of students’ ability to encounter, notice, and take seriously an anomalous finding. The majority of students (80 percent) developed some explanation for the pattern of anomalous data. For those who were unable to generate an explanation, it was suggested that their initial knowledge was insufficient and therefore could not undergo change as a result of the encounter with “anomalous” evidence. Analogous to case studies in the history of science (e.g., Simon, 2001), these students’ ability to notice and explore anomalies was related to their level of domain-specific knowledge (as suggested by Pasteur’s oft-quoted maxim that chance favors the prepared mind). Surprising findings were associated with an increase in hypotheses and experiments to test these potential explanations, but without the domain knowledge to “notice,” anomalies could not be exploited.

There is some evidence that, with instruction, students’ ability to evaluate anomalous data improves (Chinn and Malhotra, 2002). In a study of fourth, fifth, and sixth graders, one group of students was instructed to predict the outcomes of three experiments that produce counterintuitive but unambiguous data (e.g., reaction temperature). A second group answered questions that were designed to promote unbiased observations and interpretations by reflecting on the data. A third group was provided with an explanation of what scientists expected to find and why. All students reported their prediction of the outcome, what they observed, and their interpretation of the experiment. They were then tested for generalizations, and a retention test followed 9-10 days later. Fifth and sixth graders performed better than did fourth graders. Students who heard an explanation of what scientists expected to find and why did best. Further analyses suggest that the explanation-based intervention worked by influencing students’ initial predictions. This correct prediction then influenced what was observed. A correct observation then led to correct interpretations and generalizations, which resulted in conceptual change that was retained. A similar pattern of results was found using interventions employing either full or reduced explanations prior to the evaluation of evidence.

Thus, it appears that children were able to change their beliefs on the basis of anomalous or unexpected evidence, but only when they were capable of making the correct observations. Difficulty in making observations was found to be the main cognitive process responsible for impeding conceptual change (i.e., rather than interpretation, generalization, or retention). Certain interventions, in particular those involving an explanation of what scientists expected to happen and why, were very effective in mediating conceptual change when encountering counterintuitive evidence. With particular scaffolds, children made observations independent of theory, and they changed their beliefs based on observed evidence.

THE IMPORTANCE OF EXPERIENCE AND INSTRUCTION

There is increasing evidence that, as in the case of intellectual skills in general, the development of the component skills of scientific reasoning “cannot be counted on to routinely develop” (Kuhn and Franklin, 2006, p. 47). That is, young children have many requisite skills needed to engage in scientific thinking, but there are also ways in which even adults do not show full proficiency in investigative and inference tasks. Recent research efforts have therefore been focused on how such skills can be promoted by determining which types of educational interventions (e.g., amount of structure, amount of support, emphasis on strategic or metastrategic skills) will contribute most to learning, retention, and transfer, and which types of interventions are best suited to different students. There is a developing picture of what children are capable of with minimal support, and research is moving in the direction of ascertaining what children are capable of, and when, under conditions of practice, instruction, and scaffolding. It may one day be possible to tailor educational opportunities that neither under- nor overestimate children’s ability to extract meaningful experiences from inquiry-based science classes.

Very few of the early studies focusing on the development of experimentation and evidence evaluation skills explicitly addressed issues of instruction and experience. Those that did, however, indicated an important role of experience and instruction in supporting scientific thinking. For example, Siegler and Liebert (1975) incorporated instructional manipulations aimed at teaching children about variables and variable levels with or without practice on analogous tasks. In the absence of both instruction and extended practice, no fifth graders and a small minority of eighth graders were successful. Kuhn and Phelps (1982) reported that, in the absence of explicit instruction, extended practice over several weeks was sufficient for the development and modification of experimentation and inference strategies. Later studies of self-directed experimentation also indicate that frequent engagement with the inquiry environment alone can lead to the development and modification of cognitive strategies (e.g., Kuhn, Schauble, and Garcia-Mila, 1992; Schauble et al., 1991).

Some researchers have suggested that even simple prompts, which are often used in studies of students’ investigation skills, may provide a subtle form of instructional intervention (Klahr and Carver, 1995). Such prompts may cue the strategic requirements of the task, or they may promote explanation or the type of reflection that could induce a metacognitive or metastrategic awareness of task demands. Because prompts play a role in so many studies of students’ thinking, it may be very difficult to tease apart the relative contributions of practice from the scaffolding provided by researcher prompts.

In the absence of instruction or prompts, students may not routinely ask questions of themselves, such as “What are you going to do next?” “What outcome do you predict?” “What did you learn?” and “How do you know?” Questions such as these may promote self-explanation, which has been shown to enhance understanding in part because it facilitates the integration of newly learned material with existing knowledge (Chi et al., 1994). Questions such as the prompts used by researchers may serve to promote such integration. Chinn and Malhotra (2002) incorporated different kinds of interventions, aimed at promoting conceptual change in response to anomalous experimental evidence. Interventions included practice at making predictions, reflecting on data, and explanation. The explanation-based interventions were most successful at promoting conceptual change, retention, and generalization. The prompts used in some studies of self-directed experimentation are very likely to serve the same function as the prompts used by Chi et al. (1994). Incorporating such prompts in classroom-based inquiry activities could serve as a powerful teaching tool, given that the use of self-explanation in tutoring systems (human and computer interface) has been shown to be quite effective (e.g., Chi, 1996; Hausmann and Chi, 2002).

Studies that compare the effects of different kinds of instruction and practice opportunities have been conducted in the laboratory, with some translation to the classroom. For example, Chen and Klahr (1999) examined the effects of direct and indirect instruction of the control of variables strategy on students’ (grades 2-4) experimentation and knowledge acquisition. The instructional intervention involved didactic teaching of the control-of-variables strategy, along with examples and probes. Indirect (or implicit) training involved the use of systematic probes during the course of children’s experimentation. A control group did not receive instruction or probes. No group received instruction on domain knowledge for any task used (springs, ramps, sinking objects). For the students who received instruction, use of the control-of-variables strategy increased from 34 percent prior to instruction to 65 percent after, with 61-64 percent usage maintained on transfer tasks that followed after 1 day and again after 7 months, respectively. No such gains were evident for the implicit training or control groups.

Instruction about control of variables improved children’s ability to design informative experiments, which in turn facilitated conceptual change in a number of domains. They were able to design unconfounded experiments, which facilitated valid causal and noncausal inferences, resulting in a change in knowledge about how various multivariable causal systems worked. Significant gains in domain knowledge were evident only for the instruction group. Fourth graders showed better skill retention at long-term assessment than second or third graders.
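
The logic of the control-of-variables strategy taught in these studies can be stated compactly: a comparison between two experimental setups is an informative test of a variable only if that variable alone differs between them. The following sketch is illustrative only; the factor names are invented, not taken from the studies above.

```python
def is_controlled_comparison(setup_a, setup_b, target):
    """Return True if the two setups (dicts with the same variables)
    differ only in the target variable, i.e., they form an
    unconfounded test of that variable."""
    differing = {k for k in setup_a if setup_a[k] != setup_b[k]}
    return differing == {target}

# Hypothetical ramp experiment: only the surface differs, so the
# comparison is an unconfounded test of surface.
a = {"surface": "smooth", "steepness": "high", "ball": "rubber"}
b = {"surface": "rough",  "steepness": "high", "ball": "rubber"}
print(is_controlled_comparison(a, b, "surface"))  # True

# Confounded design: surface and steepness both change at once,
# so neither variable can be credited with the outcome.
c = {"surface": "rough", "steepness": "low", "ball": "rubber"}
print(is_controlled_comparison(a, c, "surface"))  # False
```

The confounded case is exactly the error untrained students make: varying several things at once and then drawing a causal inference about one of them.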

The positive impact of instruction on control of variables also appears to translate to the classroom (Toth, Klahr, and Chen, 2000; Klahr, Chen, and Toth, 2001). Fourth graders who received instruction in the control-of-variables strategy in their classroom increased their use of the strategy, and their domain knowledge improved. The percentage of students who were able to correctly evaluate others’ research increased from 28 to 76 percent.

Instruction also appears to promote longer term use of the control-of-variables strategy and transfer of the strategy to a new task (Klahr and Nigam, 2004). Third and fourth graders who received instruction were more likely to master the control-of-variables strategy than students who explored a multivariable system on their own. Interestingly, although the group that received instruction performed better overall, a quarter of the students who explored the system on their own also mastered the strategy. These results raise questions about the kinds of individual differences that may allow for some students to benefit from the discovery context, but not others. That is, which learner traits are associated with the success of different learning experiences?

Similar effects of experience and instruction have been demonstrated for improving students’ ability to use evidence from multiple records and make correct inferences from noncausal variables (Keselman, 2003). In many cases, students show some improvement when they are given the opportunity for practice, but greater improvement when they receive instruction (Kuhn and Dean, 2005).

Long-term studies of students’ learning in the classroom with instructional support and structured experiences over months and years reveal children’s potential to engage in sophisticated investigations given the appropriate experiences (Metz, 2004; Lehrer and Schauble, 2005). For example, in one classroom-based study, second, fourth, and fifth graders took part in a curriculum unit on animal behavior that emphasized domain knowledge, whole-class collaboration, scaffolded instruction, and discussions about the kinds of questions that can and cannot be answered by observational records (Metz, 2004). Pairs or triads of students then developed a research question, designed an experiment, collected and analyzed data, and presented their findings on a research poster. Such studies have demonstrated that, with appropriate support, students in grades K-8 and students from a variety of socioeconomic, cultural, and linguistic backgrounds can be successful in generating and evaluating scientific evidence and explanations (Kuhn and Dean, 2005; Lehrer and Schauble, 2005; Metz, 2004; Warren, Rosebery, and Conant, 1994).

KNOWLEDGE AND SKILL IN MODELING

The picture that emerges from developmental and cognitive research on scientific thinking is one of a complex intertwining of knowledge of the natural world, general reasoning processes, and an understanding of how scientific knowledge is generated and evaluated. Science and scientific thinking are not only about logical thinking or conducting carefully controlled experiments. Instead, building knowledge in science is a complex process of building and testing models and theories, in which knowledge of the natural world and strategies for generating and evaluating evidence are closely intertwined. Working from this image of science, a few researchers have begun to investigate the development of children’s knowledge and skills in modeling.

The kinds of models that scientists construct vary widely, both within and across disciplines. Nevertheless, the rhetoric and practice of science are governed by efforts to invent, revise, and contest models. By modeling, we refer to the construction and test of representations that serve as analogues to systems in the real world (Lehrer and Schauble, 2006). These representations can be of many forms, including physical models, computer programs, mathematical equations, or propositions. Objects and relations in the model are interpreted as representing theoretically important objects and relations in the represented world. Models are useful in summarizing known features and predicting outcomes—that is, they can become elements of or representations of theories. A key hurdle for students is to understand that models are not copies; they are deliberate simplifications. Error is a component of all models, and the precision required of a model depends on the purpose for its current use.

The forms of thinking required for modeling do not progress very far without explicit instruction and fostering (Lehrer and Schauble, 2000). For this reason, studies of modeling have most often taken place in classrooms over sustained periods of time, often years. These studies provide a provocative picture of the sophisticated scientific thinking that can be supported in classrooms if students are provided with the right kinds of experiences over extended periods of time. The instructional approaches used in studies of students’ modeling, as well as the approach to curriculum that may be required to support the development of modeling skills over multiple years of schooling, are discussed in the chapters in Part III.

Lehrer and Schauble (2000, 2003, 2006) reported observing characteristic shifts in the understanding of modeling over the span of the elementary school grades, from an early emphasis on literal depictional forms, to representations that are progressively more symbolic and mathematically powerful. Diversity in representational and mathematical resources both accompanied and produced conceptual change. As children developed and used new mathematical means for characterizing growth, they understood biological change in increasingly dynamic ways. For example, once students understood the mathematics of ratio and changing ratios, they began to conceive of growth not as simple linear increase, but as a patterned rate of change. These transitions in conception and representation appeared to support each other, and they opened up new lines of inquiry. Children wondered whether plant growth was like animal growth, and whether the growth of yeast and bacteria on a Petri dish would show a pattern like the growth of a single plant. These forms of conceptual development required a context in which teachers systematically supported a restricted set of central ideas, building successively on earlier concepts over the grades of schooling.

Representational Systems That Support Modeling

The development of specific representational forms and notations, such as graphs, tables, computer programs, and mathematical expressions, is a critical part of engaging in mature forms of modeling. Mathematics, data and scale models, diagrams, and maps are particularly important for supporting science learning in grades K-8.

Mathematics

Mathematics and science are, of course, separate disciplines. Nevertheless, for the past 200 years, the steady press in science has been toward increasing quantification, visualization, and precision (Kline, 1980). Mathematics in all its forms is a symbol system that is fundamental to both expressing and understanding science. Often, expressing an idea mathematically results in noticing new patterns or relationships that otherwise would not be grasped. For example, elementary students studying the growth of organisms (plants, tobacco hornworms, populations of bacteria) noted that when they graphed changes in heights over the life span, all the organisms studied produced an emergent S-shaped curve. However, such seeing depended on developing a “disciplined perception” (Stevens and Hall, 1998), a firm grounding in a Cartesian system. Moreover, the shape of the curve was determined in light of variation, accounted for by selecting and connecting midpoints of intervals that defined piece-wise linear segments. This way of representing typical growth was contentious, because some midpoints did not correspond to any particular case value. This debate was therefore a pathway toward the idealization and imagined qualities of the world necessary for adopting a modeling stance. The form of the growth curve was eventually tested in other systems, and its replications inspired new questions. For example, why would bacteria populations and plants be describable by the same growth curve? In this case and in others, explanatory models and data models mutually bootstrapped conceptual development (Lehrer and Schauble, 2002).
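
The S-shaped pattern the students graphed is the familiar logistic curve, and the midpoint-connecting move they debated amounts to summarizing noisy measurements with piecewise linear segments. The sketch below simulates this; the parameter values and the noise range are invented for illustration, not taken from the classroom data.

```python
import math
import random

def logistic(t, K=100.0, r=0.35, t0=12.0):
    """Logistic growth: slow start, rapid middle, plateau near the cap K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

random.seed(0)
days = list(range(0, 25, 2))
# Simulated noisy height measurements scattered around the true curve.
heights = [logistic(t) + random.uniform(-3, 3) for t in days]

# The students' contested move: connect midpoints of successive
# intervals, defining piecewise linear segments. Some midpoints match
# no actual measurement, which is what made the idealization debatable.
midpoints = [((days[i] + days[i + 1]) / 2, (heights[i] + heights[i + 1]) / 2)
             for i in range(len(days) - 1)]
for t, h in midpoints:
    print(f"day {t:4.1f}: typical height {h:5.1f}")
```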

It is not feasible in this report to summarize the extensive body of research in mathematics education, but one point is especially critical for science education: the need to expand elementary school mathematics beyond arithmetic to include space and geometry, measurement, and data/uncertainty. The National Council of Teachers of Mathematics (2000) has strongly supported this extension of early mathematics in its standards, based on the judgment that arithmetic alone does not constitute a sufficient mathematics education. Moreover, if mathematics is to be used as a resource for science, the resource base widens considerably with a broader mathematical base, affording students a greater repertoire for making sense of the natural world.

For example, consider the role of geometry and visualization in comparing crystalline structures or evaluating the relationship between the body weights and body structures of different animals. Measurement is a ubiquitous part of the scientific enterprise, although its subtleties are almost always overlooked. Students are usually taught procedures for measuring but are rarely taught a theory of measure. Educators often overestimate children’s understanding of measurement because measuring tools—like rulers or scales—resolve many of the conceptual challenges of measurement for children, so that they may fail to grasp the idea that measurement entails the iteration of constant units, and that these units can be partitioned. It is reasonably common, for example, for even upper elementary students who seem proficient at measuring lengths with rulers to tacitly hold the theory that measuring merely entails the counting of units between boundaries. If these students are given unconnected units (say, tiles of a constant length) and asked to demonstrate how to measure a length, some of them almost always place the units against the object being measured in such a way that the first and last tile are lined up flush with the end of the object measured. This arrangement often requires leaving spaces between units. Diagnostically, these spaces do not trouble a student who holds this “boundary-filling” conception of measurement (Lehrer, 2003; McClain et al., 1999).
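
The contrast between the two conceptions can be stated precisely: the correct theory counts iterations of a constant unit along the object, while the boundary-filling theory counts tiles lined up flush with the two ends, gaps and all. A minimal sketch, with an invented 8-unit object and 5 tiles:

```python
def iterate_units(length, unit=1.0):
    """Correct conception: a measure is the number of times a constant
    unit can be iterated (laid end to end) along the object."""
    return int(length // unit)

def boundary_filling(length, n_tiles, unit=1.0):
    """Misconception: line the first and last tile flush with the ends,
    spread the rest out, and report the tile count. Returns the reported
    measure and the gap left between adjacent tiles -- gaps that do not
    trouble a child holding this conception."""
    gap = (length - n_tiles * unit) / (n_tiles - 1)
    return n_tiles, gap

print(iterate_units(8.0))             # 8: eight unit-iterations, no gaps
count, gap = boundary_filling(8.0, 5)
print(count, round(gap, 2))           # reports 5, with 0.75-unit gaps
```

The second function returns a "measure" that no longer depends on the unit actually covering the length, which is exactly the diagnostic error described above.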

Researchers agree that scientific thinking entails the coordination of theory with evidence (Klahr and Dunbar, 1988; Kuhn, Amsel, and O’Loughlin, 1988), but there are many ways in which evidence may vary in both form and complexity. Achieving this coordination therefore requires tools for structuring and interpreting data and error. Otherwise, students’ interpretation of evidence cannot be held accountable. There have been many studies of students’ reasoning about data, variation, and uncertainty, conducted both by psychologists (Kahneman, Slovic, and Tversky, 1982; Konold, 1989; Nisbett et al., 1983) and by educators (Mokros and Russell, 1995; Pollatsek, Lima, and Well, 1981; Strauss and Bichler, 1988). Particularly pertinent here are studies that focus on data modeling (Lehrer and Romberg, 1996), that is, how reasoning with data is recruited as a way of investigating genuine questions about the world.

Data modeling is, in fact, what professionals do when they reason with data and statistics. It is central to a variety of enterprises, including engineering, medicine, and natural science. Scientific models are generated with acute awareness of their entailments for data, and data are recorded and structured as a way of making progress in articulating a scientific model or adjudicating among rival models. The tight relationship between model and data holds generally in domains in which inquiry is conducted by inscribing, representing, and mathematizing key aspects of the world (Goodwin, 2000; Kline, 1980; Latour, 1990).

Understanding the qualities and meaning of data may be enhanced if students spend as much attention on their generation as on their analysis. First and foremost, students need to grasp the notion that data are constructed to answer questions (Lehrer, Giles, and Schauble, 2002). The National Council of Teachers of Mathematics (2000) emphasizes that the study of data should be firmly anchored in students’ inquiry, so that they “address what is involved in gathering and using the data wisely” (p. 48). Questions motivate the collection of certain types of information and not others, and many aspects of data coding and structuring also depend on the question that motivated their collection. Defining the variables involved in addressing a research question, considering the methods and timing to collect data, and finding efficient ways to record them are all involved in the initial phases of data modeling. Debates about the meaning of an attribute often provoke questions that are more precise.

For example, a group of first graders who wanted to learn which student’s pumpkin was the largest eventually understood that they needed to agree on whether they were interested in the heights of the pumpkins, their circumferences, or their weights (Lehrer et al., 2001). Deciding what to measure is bound up with deciding how to measure. As the students went on to count the seeds in their pumpkins (they were pursuing a question about whether there might be a relationship between pumpkin size and number of seeds), they had to make decisions about whether they would include seeds that were not full grown and what criteria would be used to decide whether any particular seed should be considered mature.

Data are inherently a form of abstraction: an event is replaced by a video recording, a sensation of heat is replaced by a pointer reading on a thermometer, and so on. Here again, the tacit complexity of tools may need to be explained. Students often have a fragile grasp of the relationship between the event of interest and the operation (hence, the output) of a tool, whether that tool is a microscope, a pan balance, or a “simple” ruler. Some students, for example, do not initially consider measurement to be a form of comparison and may find a balance a very confusing tool. In their minds, the number displayed on a scale is the weight of the object. If no number is displayed, weight cannot be found.

Once the data are recorded, making sense of them requires that they be structured. At this point, students sometimes discover that their data require further abstraction. For example, as they categorized features of self-portraits drawn by other students, a group of fourth graders realized that it would not be wise to follow their original plan of creating 23 categories of “eye type” for the 25 portraits that they wished to categorize (DiPerna, 2002). Data do not come with an inherent structure; rather, structure must be imposed (Lehrer, Giles, and Schauble, 2002). The only structure for a set of data comes from the inquirers’ prior and developing understanding of the phenomenon under investigation. They impose structure by selecting categories around which to describe and organize the data.

Students also need to mentally back away from the objects or events under study to attend to the data as objects in their own right, by counting them, manipulating them to discover relationships, and asking new questions of already collected data. Students often believe that new questions can be addressed only with new data; they rarely think of querying existing data sets to explore questions that were not initially conceived when the data were collected (Lehrer and Romberg, 1996).

Finally, data are represented in various ways in order to see or understand general trends. Different kinds of displays highlight certain aspects of the data and hide others. An important educational agenda for students, one that extends over several years, is to come to understand the conventions and properties of different kinds of data displays. We do not review here the extensive literature on students’ understanding of different kinds of representational displays (tables, graphs of various kinds, distributions), but, for purposes of science, students should not only understand the procedures for generating and reading displays, but they should also be able to critique them and to grasp the communicative advantages and disadvantages of alternative forms for a given purpose (diSessa, 2004; Greeno and Hall, 1997). The structure of the data will affect the interpretation. Data interpretation often entails seeking and confirming relationships in the data, which may be at varying levels of complexity. For example, simple linear relationships are easier to spot than inverse relationships or interactions (Schauble, 1990), and students often fail to entertain the possibility that more than one relationship may be operating.
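
Why inverse relationships are harder to spot can be shown numerically: over a small range, an inverse relationship still correlates strongly with a straight line, so the curvature is easy to miss until the variable is re-expressed. The values below are invented for illustration.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
inverse = [6.0 / x for x in xs]   # y = 6/x, an inverse relationship

# Against x, the points still correlate strongly, so the pattern is
# easy to misread as a (negative) linear trend.
print(round(pearson(xs, inverse), 2))
# Re-expressing the variable as 1/x reveals a perfect linear fit.
print(round(pearson([1 / x for x in xs], inverse), 2))
```

The same data thus look merely "downhill" in one representation and exactly linear in another, which is one concrete sense in which the structure of a display shapes the interpretation.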

The desire to interpret data may further inspire the creation of statistics, such as measures of center and spread. These measures are a further step of abstraction beyond the objects and events originally observed. Even primary grade students can learn to consider the overall shape of data displays to make interpretations based on the “clumps” and “holes” in the data. Students often employ multiple criteria when trying to identify a “typical value” for a set of data. Many young students tend to favor the mode and justify their choice on the basis of repetition—if more than one student obtained this value, perhaps it is to be trusted. However, students tend to be less satisfied with modes if they do not appear near the center of the data, and they also shy away from measures of center that do not have several other values clustered near them (“part of a clump”). Understanding the mean requires an understanding of ratio, and if students are merely taught to “average” data in a procedural way without having a well-developed sense of ratio, their performance notoriously tends to degrade into “average stew”—eccentric procedures for adding and dividing things that make no sense (Strauss and Bichler, 1988). With good instruction, middle and upper elementary students can simultaneously consider the center and the spread of the data. Students can also generate various forms of mathematical descriptions of error, especially in contexts of measurement, where they can readily grasp the relationships between their own participation in the act of measuring and the resulting variation in measures (Petrosino, Lehrer, and Schauble, 2003).
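
The progression sketched above, from the mode as a "repeated and clumped" value to a mean that presupposes ratio, can be made concrete on a small data set. The heights below are invented for illustration; the computations use Python's standard `statistics` module.

```python
from statistics import mean, median, mode, stdev

# Hypothetical plant heights (cm) recorded by a class; note the
# repeated value 15 near the clump and the single outlier 29.
heights = [12, 14, 14, 15, 15, 15, 16, 17, 29]

print("mode:  ", mode(heights))              # 15 -- repeated, in the clump
print("median:", median(heights))            # 15
print("mean:  ", round(mean(heights), 1))    # 16.3 -- pulled up by 29
print("spread:", round(stdev(heights), 1))   # sample standard deviation
```

A student reasoning by repetition and clumps lands on 15; the mean, a ratio of sum to count, is dragged toward the outlier, which is exactly why it is opaque to students without a well-developed sense of ratio.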

Scale Models, Diagrams, and Maps

Although data representations are central to science, they are not, of course, the only representations students need to use and understand. Perhaps the most easily interpretable form of representation widely used in science is scale models. Physical models of this kind are used in science education to make it possible for students to visualize objects or processes that are at a scale that makes their direct perception impossible or, alternatively, that permits them to directly manipulate something that otherwise they could not handle. The ease or difficulty with which students understand these models depends on the complexity of the relationships being communicated. Even preschoolers can understand scale models used to depict location in a room (DeLoache, 2004). Primary grade students can pretty readily overcome the influence of the appearance of the model to focus on and investigate the way it functions (Penner et al., 1997), but middle school students (and some adults) struggle to work out the positional relationships of the earth, the sun, and the moon, which involves not only reconciling different perspectives with respect to perspective and frame (what one sees standing on the earth, what one would see from a hypothetical point in space), but also visualizing how these perspectives would change over days and months (see, for example, the detailed curricular suggestions at the web site http://www.wcer.wisc.edu/ncisla/muse/).

Frequently, students are expected to read or produce diagrams, often integrating the information from the diagram with information from accompanying text (Hegarty and Just, 1993; Mayer, 1993). The comprehensibility of diagrams seems to be governed less by domain-general principles than by the specifics of the diagram and its viewer. Comprehensibility seems to vary with the complexity of what is portrayed, the particular diagrammatic details and features, and the prior knowledge of the user.

Diagrams can be difficult to understand for a host of reasons. Sometimes the desired information is missing in the first place; sometimes, features of the diagram unwittingly play into an incorrect preconception. For example, it has been suggested that the common student misconception that the earth is closer to the sun in the summer than in the winter may be due in part to the fact that two-dimensional representations of the three-dimensional orbit make it appear as if the foreshortened orbit is indeed closer to the sun at some points than at others.

Mayer (1993) proposes three common reasons why diagrams miscommunicate: some do not include explanatory information (they are illustrative or decorative rather than explanatory), some lack a causal chain, and some fail to map the explanation to a familiar or recognizable context. It is not clear that school students misperceive diagrams in ways that are fundamentally different from the perceptions of adults. There may be some diagrammatic conventions that are less familiar to children, and children may well have less knowledge about the phenomena being portrayed, but there is no reason to expect that adult novices would respond in fundamentally different ways. Although they have been studied for a much briefer period of time, the same is probably true of complex computer displays.

Finally, there is a growing developmental literature on students’ understanding of maps. Maps can be particularly confusing because they preserve some analog qualities of the space being represented (e.g., relative position and distance) but also omit or alter features of the landscape in ways that require understanding of mapping conventions. Young children often initially confuse maps of the landscape with pictures of objects in the landscape. It is much easier for youngsters to represent objects than to represent large-scale space (which is the absence of or frame for objects). Students also may struggle with orientation, perspective (the traditional bird’s eye view), and mathematical descriptions of space, such as polar coordinate representations (Lehrer and Pritchard, 2002; Liben and Downs, 1993).

CONCLUSIONS

There is a common thread throughout the observations of this chapter that has deep implications for what one expects from children in grades K-8 and how their science learning should be structured. In almost all cases, the studies converge to the position that the skills under study develop with age, but also that this development is significantly enhanced by prior knowledge, experience, and instruction.

One of the continuing themes evident from studies on the development of scientific thinking is that children are far more competent than first suspected, and likewise that adults are less so. Young children experiment, but their experimentation is generally not systematic, and their observations as well as their inferences may be flawed. The progression of ability is seen with age, but it is not uniform, either across individuals or for a given individual. There is variation across individuals at the same age, as well as variation within single individuals in the strategies they use. Any given individual uses a collection of strategies, some more valid than others. Discovering a valid strategy does not mean that an individual, whether a child or an adult, will use the strategy consistently across all contexts. As Schauble (1996, p. 118) noted:

The complex and multifaceted nature of the skills involved in solving these problems, and the variability in performance, even among the adults, suggest that the developmental trajectory of the strategies and processes associated with scientific reasoning is likely to be a very long one, perhaps even lifelong. Previous research has established the existence of both early precursors and competencies … and errors and biases that persist regardless of maturation, training, and expertise.

One aspect of cognition that appears to be particularly important for supporting scientific thinking is awareness of one’s own thinking. Children may be less aware of their own memory limitations and therefore may be unsystematic in recording plans, designs, and outcomes, and they may fail to consult such records. Self-awareness of the cognitive strategies available is also important in order to determine when and why to employ various strategies. Finally, awareness of the status of one’s own knowledge, such as recognizing the distinctions between theory and evidence, is important for reasoning in the context of scientific investigations. This last aspect of cognition is discussed in detail in the next chapter.

Prior knowledge, particularly beliefs about causality and plausibility, shapes the approach to investigations in multiple ways. These beliefs influence which hypotheses are tested, how experiments are designed, and how evidence is evaluated. Characteristics of prior knowledge, such as its type, strength, and relevance, are potential determinants of how new evidence is evaluated and whether anomalies are noticed. Knowledge change occurs as a result of the encounter between prior knowledge and new evidence.

Finally, we conclude that experience and instruction are crucial mediators of the development of a broad range of scientific skills and of the degree of sophistication that children exhibit in applying these skills in new contexts. This means that time spent doing science in appropriately structured instructional frames is a crucial part of science education. It affects not only the level of skills that children develop, but also their ability to think about the quality of evidence and to interpret evidence presented to them. Students need instructional support and practice in order to become better at coordinating their prior theories and the evidence generated in investigations. Instructional support is also critical for developing skills for experimental design, record keeping during investigations, dealing with anomalous data, and modeling.

References

Ahn, W., Kalish, C.W., Medin, D.L., and Gelman, S.A. (1995). The role of covariation versus mechanism information in causal attribution. Cognition, 54, 299-352.

Amsel, E., and Brock, S. (1996). The development of evidence evaluation skills. Cognitive Development, 11, 523-550.

Bisanz, J., and LeFevre, J. (1990). Strategic and nonstrategic processing in the development of mathematical cognition. In D. Bjorklund (Ed.), Children’s strategies: Contemporary views of cognitive development (pp. 213-243). Hillsdale, NJ: Lawrence Erlbaum Associates.

Carey, S., Evans, R., Honda, M., Jay, E., and Unger, C. (1989). An experiment is when you try it and see if it works: A study of grade 7 students’ understanding of the construction of scientific knowledge. International Journal of Science Education, 11, 514-529.

Chase, W.G., and Simon, H.A. (1973). The mind’s eye in chess. In W.G. Chase (Ed.), Visual information processing. New York: Academic Press.

Chen, Z., and Klahr, D. (1999). All other things being equal: Children’s acquisition of the control of variables strategy. Child Development, 70, 1098-1120.

Chi, M.T.H. (1996). Constructing self-explanations and scaffolded explanations in tutoring. Applied Cognitive Psychology, 10, 33-49.

Chi, M.T.H., de Leeuw, N., Chiu, M., and Lavancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439-477.

Chinn, C.A., and Brewer, W.F. (1998). An empirical test of a taxonomy of responses to anomalous data in science. Journal of Research in Science Teaching, 35, 623-654.

Chinn, C.A., and Brewer, W.F. (2001). Models of data: A theory of how people evaluate data. Cognition and Instruction, 19 (3), 323-343.

Chinn, C.A., and Malhotra, B.A. (2001). Epistemologically authentic scientific reasoning. In K. Crowley, C.D. Schunn, and T. Okada (Eds.), Designing for science: Implications from everyday, classroom, and professional settings (pp. 351-392). Mahwah, NJ: Lawrence Erlbaum Associates.

Chinn, C.A., and Malhotra, B.A. (2002). Children’s responses to anomalous scientific data: How is conceptual change impeded? Journal of Educational Psychology, 94, 327-343.

DeLoache, J.S. (2004). Becoming symbol-minded. Trends in Cognitive Sciences, 8, 66-70.

DiPerna, E. (2002). Data models of ourselves: Body self-portrait project. In R. Lehrer and L. Schauble (Eds.), Investigating real data in the classroom: Expanding children’s understanding of math and science. Ways of knowing in science and mathematics series. New York: Teachers College Press.

diSessa, A.A. (2004). Metarepresentation: Native competence and targets for instruction. Cognition and Instruction, 22 (3), 293-331.

Dunbar, K., and Klahr, D. (1989). Developmental differences in scientific discovery strategies. In D. Klahr and K. Kotovsky (Eds.), Complex information processing: The impact of Herbert A. Simon (pp. 109-143). Hillsdale, NJ: Lawrence Erlbaum Associates.

Echevarria, M. (2003). Anomalies as a catalyst for middle school students’ knowledge construction and scientific reasoning during science inquiry. Journal of Educational Psychology, 95, 357-374 .

Garcia-Mila, M., and Andersen, C. (2005). Developmental change in notetaking during scientific inquiry. Manuscript submitted for publication.

Gleason, M.E., and Schauble, L. (2000). Parents’ assistance of their children’s scientific reasoning. Cognition and Instruction, 17 (4), 343-378.

Goodwin, C. (2000). Introduction: Vision and inscription in practice. Mind, Culture, and Activity, 7, 1-3.

Greeno, J., and Hall, R. (1997). Practicing representation: Learning with and about representational forms. Phi Delta Kappan, January, 361-367.

Hausmann, R., and Chi, M. (2002). Can a computer interface support self-explaining? The International Journal of Cognitive Technology, 7 (1).

Hegarty, M., and Just, M.A. (1993). Constructing mental models of machines from text and diagrams. Journal of Memory and Language, 32, 717-742.

Inhelder, B., and Piaget, J. (1958). The growth of logical thinking from childhood to adolescence. New York: Basic Books.

Kahneman, D., Slovic, P., and Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.

Kanari, Z., and Millar, R. (2004). Reasoning from data: How students collect and interpret data in science investigations. Journal of Research in Science Teaching, 41, 17.

Keselman, A. (2003). Supporting inquiry learning by promoting normative understanding of multivariable causality. Journal of Research in Science Teaching, 40, 898-921.

Keys, C.W. (1994). The development of scientific reasoning skills in conjunction with collaborative writing assignments: An interpretive study of six ninth-grade students. Journal of Research in Science Teaching, 31, 1003-1022.

Klaczynski, P.A. (2000). Motivated scientific reasoning biases, epistemological beliefs, and theory polarization: A two-process approach to adolescent cognition. Child Development , 71 (5), 1347-1366.

Klaczynski, P.A., and Narasimham, G. (1998). Development of scientific reasoning biases: Cognitive versus ego-protective explanations. Developmental Psychology, 34 (1), 175-187.

Klahr, D. (2000). Exploring science: The cognition and development of discovery processes. Cambridge, MA: MIT Press.

Klahr, D., and Carver, S.M. (1995). Scientific thinking about scientific thinking. Monographs of the Society for Research in Child Development, 60, 137-151.

Klahr, D., Chen, Z., and Toth, E.E. (2001). From cognition to instruction to cognition: A case study in elementary school science instruction. In K. Crowley, C.D. Schunn, and T. Okada (Eds.), Designing for science: Implications from everyday, classroom, and professional settings (pp. 209-250). Mahwah, NJ: Lawrence Erlbaum Associates.

Klahr, D., and Dunbar, K. (1988). Dual search space during scientific reasoning. Cognitive Science, 12, 1-48.

Klahr, D., Fay, A., and Dunbar, K. (1993). Heuristics for scientific experimentation: A developmental study. Cognitive Psychology, 25, 111-146.

Klahr, D., and Nigam, M. (2004). The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning. Psychological Science, 15 (10), 661-667.

Klahr, D., and Robinson, M. (1981). Formal assessment of problem solving and planning processes in preschool children. Cognitive Psychology, 13, 113-148.

Klayman, J., and Ha, Y. (1989). Hypothesis testing in rule discovery: Strategy, structure, and content. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15 (4), 596-604.

Kline, M. (1980). Mathematics: The loss of certainty. New York: Oxford University Press.

Konold, C. (1989). Informal conceptions of probability. Cognition and Instruction, 6, 59-98.

Koslowski, B. (1996). Theory and evidence: The development of scientific reasoning. Cambridge, MA: MIT Press.

Koslowski, B., and Okagaki, L. (1986). Non-human indices of causation in problem-solving situations: Causal mechanisms, analogous effects, and the status of rival alternative accounts. Child Development, 57, 1100-1108.

Koslowski, B., Okagaki, L., Lorenz, C., and Umbach, D. (1989). When covariation is not enough: The role of causal mechanism, sampling method, and sample size in causal reasoning. Child Development, 60, 1316-1327.

Kuhn, D. (1989). Children and adults as intuitive scientists. Psychological Review, 96, 674-689.

Kuhn, D. (2001). How do people know? Psychological Science, 12, 1-8.

Kuhn, D. (2002). What is scientific thinking and how does it develop? In U. Goswami (Ed.), Blackwell handbook of childhood cognitive development (pp. 371-393). Oxford, England: Blackwell.

Kuhn, D., Amsel, E., and O’Loughlin, M. (1988). The development of scientific thinking skills. Orlando, FL: Academic Press.

Kuhn, D., and Dean, D. (2005). Is developing scientific thinking all about learning to control variables? Psychological Science, 16 (11), 866-870.

Kuhn, D., and Franklin, S. (2006). The second decade: What develops (and how)? In W. Damon, R.M. Lerner, D. Kuhn, and R.S. Siegler (Eds.), Handbook of child psychology, volume 2, cognition, perception, and language, 6th edition (pp. 954-994). Hoboken, NJ: Wiley.

Kuhn, D., Garcia-Mila, M., Zohar, A., and Andersen, C. (1995). Strategies of knowledge acquisition. Monographs of the Society for Research in Child Development, 60 (4, Serial No. 245).

Kuhn, D., and Pearsall, S. (1998). Relations between metastrategic knowledge and strategic performance. Cognitive Development, 13, 227-247.

Kuhn, D., and Pearsall, S. (2000). Developmental origins of scientific thinking. Journal of Cognition and Development, 1, 113-129.

Kuhn, D., and Phelps, E. (1982). The development of problem-solving strategies. In H. Reese (Ed.), Advances in child development and behavior (vol. 17, pp. 1-44). New York: Academic Press.

Kuhn, D., Schauble, L., and Garcia-Mila, M. (1992). Cross-domain development of scientific reasoning. Cognition and Instruction, 9, 285-327.

Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480-498.

Larkin, J.H., McDermott, J., Simon, D.P., and Simon, H.A. (1980). Expert and novice performance in solving physics problems. Science, 208, 1335-1342.

Latour, B. (1990). Drawing things together. In M. Lynch and S. Woolgar (Eds.), Representation in scientific practice (pp. 19-68). Cambridge, MA: MIT Press.

Lehrer, R. (2003). Developing understanding of measurement. In J. Kilpatrick, W.G. Martin, and D.E. Schifter (Eds.), A research companion to principles and standards for school mathematics (pp. 179-192). Reston, VA: National Council of Teachers of Mathematics.

Lehrer, R., Giles, N., and Schauble, L. (2002). Data modeling. In R. Lehrer and L. Schauble (Eds.), Investigating real data in the classroom: Expanding children’s understanding of math and science (pp. 1-26). New York: Teachers College Press.

Lehrer, R., and Pritchard, C. (2002). Symbolizing space into being. In K. Gravemeijer, R. Lehrer, B. van Oers, and L. Verschaffel (Eds.), Symbolization, modeling and tool use in mathematics education (pp. 59-86). Dordrecht, The Netherlands: Kluwer Academic.

Lehrer, R., and Romberg, T. (1996). Exploring children’s data modeling. Cognition and Instruction, 14, 69-108.

Lehrer, R., and Schauble, L. (2000). The development of model-based reasoning. Journal of Applied Developmental Psychology, 21 (1), 39-48.

Lehrer, R., and Schauble, L. (2002). Symbolic communication in mathematics and science: Co-constituting inscription and thought. In E.D. Amsel and J. Byrnes (Eds.), Language, literacy, and cognitive development: The development and consequences of symbolic communication (pp. 167-192). Mahwah, NJ: Lawrence Erlbaum Associates.

Lehrer, R., and Schauble, L. (2003). Origins and evolution of model-based reasoning in mathematics and science. In R. Lesh and H.M. Doerr (Eds.), Beyond constructivism: A models and modeling perspective on mathematics problem-solving, learning, and teaching (pp. 59-70). Mahwah, NJ: Lawrence Erlbaum Associates.

Lehrer, R., and Schauble, L. (2005). Developing modeling and argument in the elementary grades. In T.A. Romberg, T.P. Carpenter, and F. Dremock (Eds.), Understanding mathematics and science matters (Part II: Learning with understanding). Mahwah, NJ: Lawrence Erlbaum Associates.

Lehrer, R., and Schauble, L. (2006). Scientific thinking and science literacy. In W. Damon, R. Lerner, K.A. Renninger, and I.E. Sigel (Eds.), Handbook of child psychology, 6th edition (vol. 4). Hoboken, NJ: Wiley.

Lehrer, R., Schauble, L., Strom, D., and Pligge, M. (2001). Similarity of form and substance: Modeling material kind. In D. Klahr and S. Carver (Eds.), Cognition and instruction: 25 years of progress (pp. 39-74). Mahwah, NJ: Lawrence Erlbaum Associates.

Liben, L.S., and Downs, R.M. (1993). Understanding person-space-map relations: Cartographic and developmental perspectives. Developmental Psychology, 29, 739-752.

Linn, M.C. (1978). Influence of cognitive style and training on tasks requiring the separation of variables schema. Child Development, 49, 874-877.

Linn, M.C. (1980). Teaching students to control variables: Some investigations using free choice experiences. In S. Modgil and C. Modgil (Eds.), Toward a theory of psychological development within the Piagetian framework. Windsor, Berkshire, England: National Foundation for Educational Research.

Linn, M.C., Chen, B., and Thier, H.S. (1977). Teaching children to control variables: Investigations of a free choice environment. Journal of Research in Science Teaching, 14, 249-255.

Linn, M.C., and Levine, D.I. (1978). Adolescent reasoning: Influence of question format and type of variables on ability to control variables. Science Education, 62 (3), 377-388.

Lovett, M.C., and Anderson, J.R. (1995). Making heads or tails out of selecting problem-solving strategies. In J.D. Moore and J.F. Lehman (Eds.), Proceedings of the seventeenth annual conference of the Cognitive Science Society (pp. 265-270). Hillsdale, NJ: Lawrence Erlbaum Associates.

Lovett, M.C., and Anderson, J.R. (1996). History of success and current context in problem solving. Cognitive Psychology, 31 (2), 168-217.

Masnick, A.M., and Klahr, D. (2003). Error matters: An initial exploration of elementary school children’s understanding of experimental error. Journal of Cognition and Development, 4, 67-98.

Mayer, R. (1993). Illustrations that instruct. In R. Glaser (Ed.), Advances in instructional psychology (vol. 4, pp. 253-284). Hillsdale, NJ: Lawrence Erlbaum Associates.

McClain, K., Cobb, P., Gravemeijer, K., and Estes, B. (1999). Developing mathematical reasoning within the context of measurement. In L. Stiff (Ed.), Developing mathematical reasoning, K-12 (pp. 93-106). Reston, VA: National Council of Teachers of Mathematics.

McNay, M., and Melville, K.W. (1993). Children’s skill in making predictions and their understanding of what predicting means: A developmental study. Journal of Research in Science Teaching, 30, 561-577.

Metz, K.E. (2004). Children’s understanding of scientific inquiry: Their conceptualization of uncertainty in investigations of their own design. Cognition and Instruction, 22 (2), 219-290.

Mokros, J., and Russell, S. (1995). Children’s concepts of average and representativeness. Journal for Research in Mathematics Education, 26 (1), 20-39.

National Council of Teachers of Mathematics. (2000). Principles and standards for school mathematics. Reston, VA: Author.

Nisbett, R.E., Krantz, D.H., Jepson, C., and Kunda, Z. (1983). The use of statistical heuristics in everyday inductive reasoning. Psychological Review, 90, 339-363.

Penner, D., Giles, N.D., Lehrer, R., and Schauble, L. (1997). Building functional models: Designing an elbow. Journal of Research in Science Teaching, 34 (2), 125-143.

Penner, D.E., and Klahr, D. (1996a). The interaction of domain-specific knowledge and domain-general discovery strategies: A study with sinking objects. Child Development, 67, 2709-2727.

Penner, D.E., and Klahr, D. (1996b). When to trust the data: Further investigations of system error in a scientific reasoning task. Memory and Cognition, 24, 655-668.

Perfetti, C.A. (1992). The representation problem in reading acquisition. In P.B. Gough, L.C. Ehri, and R. Treiman (Eds.), Reading acquisition (pp. 145-174). Hillsdale, NJ: Lawrence Erlbaum Associates.

Petrosino, A., Lehrer, R., and Schauble, L. (2003). Structuring error and experimental variation as distribution in the fourth grade. Mathematical Thinking and Learning, 5 (2-3), 131-156.

Pollatsek, A., Lima, S., and Well, A.D. (1981). Concept or computation: Students’ misconceptions of the mean. Educational Studies in Mathematics, 12, 191-204.

Ruffman, T., Perner, J., Olson, D.R., and Doherty, M. (1993). Reflecting on scientific thinking: Children’s understanding of the hypothesis-evidence relation. Child Development, 64 (6), 1617-1636.

Schauble, L. (1990). Belief revision in children: The role of prior knowledge and strategies for generating evidence. Journal of Experimental Child Psychology, 49 (1), 31-57.

Schauble, L. (1996). The development of scientific reasoning in knowledge-rich contexts. Developmental Psychology, 32 (1), 102-119.

Schauble, L., Glaser, R., Duschl, R., Schulze, S., and John, J. (1995). Students’ understanding of the objectives and procedures of experimentation in the science classroom. Journal of the Learning Sciences, 4 (2), 131-166.

Schauble, L., Glaser, R., Raghavan, K., and Reiner, M. (1991). Causal models and experimentation strategies in scientific reasoning. Journal of the Learning Sciences, 1 (2), 201-238.

Schauble, L., Glaser, R., Raghavan, K., and Reiner, M. (1992). The integration of knowledge and experimentation strategies in understanding a physical system. Applied Cognitive Psychology, 6, 321-343.

Schauble, L., Klopfer, L.E., and Raghavan, K. (1991). Students’ transition from an engineering model to a science model of experimentation. Journal of Research in Science Teaching, 28 (9), 859-882.

Siegler, R.S. (1987). The perils of averaging data over strategies: An example from children’s addition. Journal of Experimental Psychology: General, 116, 250-264.

Siegler, R.S., and Alibali, M.W. (2005). Children’s thinking (4th ed.). Upper Saddle River, NJ: Prentice Hall.

Siegler, R.S., and Crowley, K. (1991). The microgenetic method: A direct means for studying cognitive development. American Psychologist, 46, 606-620.

Siegler, R.S., and Jenkins, E. (1989). How children discover new strategies. Hillsdale, NJ: Lawrence Erlbaum Associates.

Siegler, R.S., and Liebert, R.M. (1975). Acquisition of formal experiment. Developmental Psychology, 11, 401-412.

Siegler, R.S., and Shipley, C. (1995). Variation, selection, and cognitive change. In T. Simon and G. Halford (Eds.), Developing cognitive competence: New approaches to process modeling (pp. 31-76). Hillsdale, NJ: Lawrence Erlbaum Associates.

Simon, H.A. (1975). The functional equivalence of problem solving skills. Cognitive Psychology, 7, 268-288.

Simon, H.A. (2001). Learning to research about learning. In S.M. Carver and D. Klahr (Eds.), Cognition and instruction: Twenty-five years of progress (pp. 205-226). Mahwah, NJ: Lawrence Erlbaum Associates.

Slowiaczek, L.M., Klayman, J., Sherman, S.J., and Skov, R.B. (1992). Information selection and use in hypothesis testing: What is a good question, and what is a good answer? Memory and Cognition, 20 (4), 392-405.

Sneider, C., Kurlich, K., Pulos, S., and Friedman, A. (1984). Learning to control variables with model rockets: A neo-Piagetian study of learning in field settings. Science Education, 68 (4), 463-484.

Sodian, B., Zaitchik, D., and Carey, S. (1991). Young children’s differentiation of hypothetical beliefs from evidence. Child Development, 62 (4), 753-766.

Stevens, R., and Hall, R. (1998). Disciplined perception: Learning to see in technoscience. In M. Lampert and M.L. Blunk (Eds.), Talking mathematics in school: Studies of teaching and learning (pp. 107-149). Cambridge, MA: Cambridge University Press.

Strauss, S., and Bichler, E. (1988). The development of children’s concepts of the arithmetic average. Journal for Research in Mathematics Education, 19 (1), 64-80.

Thagard, P. (1998a). Ulcers and bacteria I: Discovery and acceptance. Studies in History and Philosophy of Science. Part C: Studies in History and Philosophy of Biology and Biomedical Sciences, 29, 107-136.

Thagard, P. (1998b). Ulcers and bacteria II: Instruments, experiments, and social interactions. Studies in History and Philosophy of Science. Part C: Studies in History and Philosophy of Biology and Biomedical Sciences, 29 (2), 317-342.

Toth, E.E., Klahr, D., and Chen, Z. (2000). Bridging research and practice: A cognitively-based classroom intervention for teaching experimentation skills to elementary school children. Cognition and Instruction, 18 (4), 423-459.

Trafton, J.G., and Trickett, S.B. (2001). Note-taking for self-explanation and problem solving. Human-Computer Interaction, 16, 1-38.

Triona, L., and Klahr, D. (in press). The development of children’s abilities to produce external representations. In E. Teubal, J. Dockrell, and L. Tolchinsky (Eds.), Notational knowledge: Developmental and historical perspectives. Rotterdam, The Netherlands: Sense.

Varnhagen, C. (1995). Children’s spelling strategies. In V. Berninger (Ed.), The varieties of orthographic knowledge: Relationships to phonology, reading and writing (vol. 2, pp. 251-290). Dordrecht, The Netherlands: Kluwer Academic.

Warren, B., Rosebery, A., and Conant, F. (1994). Discourse and social practice: Learning science in language minority classrooms. In D. Spencer (Ed.), Adult biliteracy in the United States (pp. 191-210). McHenry, IL: Delta Systems.

Wolpert, L. (1993). The unnatural nature of science. London, England: Faber and Faber.

Zachos, P., Hick, T.L., Doane, W.E.I., and Sargent, C. (2000). Setting theoretical and empirical foundations for assessing scientific inquiry and discovery in educational programs. Journal of Research in Science Teaching, 37 (9), 938-962.

Zimmerman, C., Raghavan, K., and Sartoris, M.L. (2003). The impact of the MARS curriculum on students’ ability to coordinate theory and evidence. International Journal of Science Education, 25, 1247-1271.

What is science for a child? How do children learn about science and how to do science? Drawing on a vast array of work from neuroscience to classroom observation, Taking Science to School provides a comprehensive picture of what we know about teaching and learning science from kindergarten through eighth grade. By looking at a broad range of questions, this book provides a basic foundation for guiding science teaching and supporting students in their learning. Taking Science to School answers such questions as:

  • When do children begin to learn about science? Are there critical stages in a child's development of such scientific concepts as mass or animate objects?
  • What role does nonschool learning play in children's knowledge of science?
  • How can science education capitalize on children's natural curiosity?
  • What are the best tasks for books, lectures, and hands-on learning?
  • How can teachers be taught to teach science?

The book also provides a detailed examination of how we know what we know about children's learning of science—about the role of research and evidence. This book will be an essential resource for everyone involved in K-8 science education—teachers, principals, boards of education, teacher education providers and accreditors, education researchers, federal education agencies, and state and federal policy makers. It will also be a useful guide for parents and others interested in how children learn.



STEM Problem Solving: Inquiry, Concepts, and Reasoning

Aik-Ling Tan

Natural Sciences and Science Education, meriSTEM@NIE, National Institute of Education, Nanyang Technological University, Singapore, Singapore

Yann Shiou Ong

Yong Sim Ng

Jared Hong Jie Tan

Associated Data

The datasets used and analysed during the current study are available from the corresponding author on reasonable request.

Balancing disciplinary knowledge and practical reasoning in problem solving is needed for meaningful learning. In STEM problem solving, science subject matter with associated practices often appears distant to learners due to its abstract nature. Consequently, learners experience difficulties making meaningful connections between science and their daily experiences. Applying Dewey’s idea of practical and science inquiry and Bereiter’s idea of referent-centred and problem-centred knowledge, we examine how integrated STEM problem solving offers opportunities for learners to shuttle between practical and science inquiry and the kinds of knowledge that result from each form of inquiry. We hypothesize that connecting science inquiry with practical inquiry narrows the gap between science and everyday experiences to overcome isolation and fragmentation of science learning. In this study, we examine classroom talk as students engage in problem solving to increase crop yield. Qualitative content analysis was conducted on 3 hours of video recordings of the utterances of six classes of 113 eighth graders and their teachers. Analysis showed an almost equal amount of science and practical inquiry talk. Teachers and students applied their everyday experiences to generate solutions. Science talk was at the basic level of facts and was used to explain reasons for specific design considerations. There was little evidence of higher-level scientific conceptual knowledge being applied. Our observations suggest opportunities for more intentional connections of science to practical problem solving if we intend to apply higher-order scientific knowledge in problem solving. Deliberate application of, and reference to, scientific knowledge could improve the quality of solutions generated.

Introduction

As we enter the second quarter of the twenty-first century, it is timely to take stock of both the changes and demands that continue to weigh on our education system. A recent report by the World Economic Forum highlighted the need to continuously re-position and re-invent education to meet the challenges presented by the disruptions of the fourth industrial revolution (World Economic Forum, 2020). There is increasing pressure for education to equip children with the necessary, relevant, and meaningful knowledge, skills, and attitudes to create a “more inclusive, cohesive and productive world” (World Economic Forum, 2020, p. 4). Further, the shift in emphasis towards twenty-first century competencies over mere acquisition of disciplinary content knowledge is more urgent since we are preparing students for jobs that do not yet exist, technology that has not yet been invented, and problems that have not yet been anticipated (OECD, 2018, p. 2). Tan (2020) concurred with the urgent need to extend the focus of education, particularly in science education, such that learners can learn to think differently about possibilities in this world. Amidst this rhetoric for change, the questions that remain to be answered include: how can science education transform itself to be more relevant; what role does science education play in integrated STEM learning; how can scientific knowledge, skills, and epistemic practices of science be infused in integrated STEM learning; what kinds of STEM problems should we expose students to for them to learn disciplinary knowledge and skills; and what is the relationship between learning disciplinary content knowledge and problem-solving skills?

In seeking to understand the extent of science learning that took place within integrated STEM learning, we dissected the STEM problems that were presented to students and examined in detail the sense-making processes that students utilized when they worked on the problems. We adopted Dewey’s (1938) theoretical idea of scientific and practical/common-sense inquiry and Bereiter’s ideas of the referent-centred and problem-centred knowledge-building process to interpret teacher-student interactions during problem solving. There are two primary reasons for choosing these two theoretical frameworks. Firstly, Dewey’s ideas about the relationship between science inquiry and everyday practical problem solving are important in helping us understand the role of science subject matter knowledge and science inquiry in solving the practical real-world problems that are commonly used in STEM learning. Secondly, Bereiter’s ideas of referent-centred and problem-centred knowledge augment our understanding of the types of knowledge that students can learn when they engage in solving practical real-world problems.

Taken together, Dewey’s and Bereiter’s ideas enable us to better understand the types of problems used in STEM learning and the corresponding knowledge that is privileged during the problem-solving process. As such, the two theoretical lenses offer an alternative and convincing way to understand the actual types of knowledge that are used within the context of integrated STEM. They also help to move our understanding of STEM learning beyond the current focus on examining how engineering can be used as an integrative mechanism (Bryan et al., 2016), arguing for the strengths of trans-, multi-, or inter-disciplinary activities (Bybee, 2013; Park et al., 2020), or mapping problems by content and context as pure STEM problems, STEM-related problems, or non-STEM problems (Pleasants, 2020). Further, existing research (for example, Gale et al., 2000) around STEM education has focussed largely on describing students’ learning experiences, with insufficient attention given to the connections between disciplinary conceptual knowledge and the inquiry processes that students use to arrive at solutions to problems. Clarity in the role of disciplinary knowledge and the related inquiry will allow for more intentional design of STEM problems for students to learn higher-order knowledge. Applying Dewey’s idea of practical and scientific inquiry and Bereiter’s ideas of referent-centred and problem-centred knowledge, we analysed six lessons in which students engaged with integrated STEM problem solving to answer the following research questions: (1) What is the extent of practical and scientific inquiry in integrated STEM problem solving? (2) What conceptual knowledge and problem-solving skills are learnt through practical and science inquiry during integrated STEM problem solving?

Inquiry in Problem Solving

Inquiry, according to Dewey ( 1938 ), involves the direct control of unknown situations to change them into coherent and unified ones. Inquiry usually encompasses two interrelated activities: (1) thinking about ideas related to conceptual subject matter and (2) engaging in activities involving our senses or using specific observational techniques. The National Science Education Standards released by the National Research Council in the US in 1996 defined inquiry as "…a multifaceted activity that involves making observations; posing questions; examining books and other sources of information to see what is already known; planning investigations; reviewing what is already known in light of experimental evidence; using tools to gather, analyze, and interpret data; proposing answers, explanations, and predictions; and communicating the results. Inquiry requires identification of assumptions, use of critical and logical thinking, and consideration of alternative explanations" (p. 23). Planning investigations; collecting empirical evidence; using tools to gather, analyse and interpret data; and reasoning are processes shared by the fields of science and engineering and hence are highly relevant to integrated STEM education.

In STEM education, establishing the connection between general inquiry and its application helps to link disciplinary understanding to epistemic knowledge. For instance, methods of science inquiry are popular in STEM education due to the familiarity that teachers have with scientific methods. Science inquiry, a specific form of inquiry, has appeared in many science curricula (e.g. NRC, 2000 ) since Dewey proposed in 1910 that science should be perceived as both subject matter and a method of learning (Dewey, 1910a , 1910b ). Science inquiry, which involves ways of doing science, should also encompass the ways in which students learn the scientific knowledge and investigative methods that enable scientific knowledge to be constructed. Asking scientifically orientated questions, collecting empirical evidence, crafting explanations, proposing models and reasoning based on available evidence are affordances of scientific inquiry. As such, science should be pursued as a way of knowing rather than merely as the acquisition of scientific knowledge.

Building on these affordances of science inquiry, Duschl and Bybee ( 2014 ) advocated the 5D model, which focuses on the practice of planning and carrying out investigations in science and engineering, representing two of the four disciplines in STEM. The 5D model includes science inquiry aspects such as (1) deciding on what and how to measure, observe and sample; (2) developing and selecting appropriate tools to measure and collect data; (3) recording the results and observations in a systematic manner; (4) creating ways to represent the data and patterns that are observed; and (5) determining the validity and representativeness of the data collected. The focus on planning and carrying out investigations in the 5D model is intended to help teachers bridge the gap between the practices of building and refining models and explanations in science and engineering. Indeed, a common approach to incorporating science inquiry in an integrated STEM curriculum involves students planning and carrying out scientific investigations and making sense of the data collected to inform an engineering design solution (Cunningham & Lachapelle, 2016 ; Roehrig et al., 2021 ). Duschl and Bybee ( 2014 ) argued that it is necessary to design experiences for learners to appreciate that struggles are part of problem solving in science and engineering. They argued that "when the struggles of doing science is eliminated or simplified, learners get the wrong perceptions of what is involved when obtaining scientific knowledge and evidence" (Duschl & Bybee, 2014 , p. 2). While we concur with Duschl and Bybee about the need for struggles, in STEM learning these struggles must be purposeful and grade-appropriate so that students will also be able to experience success amidst failure.

The peculiar nature of science inquiry was scrutinized by Dewey ( 1938 ) when he examined the relationship between science inquiry and other forms of inquiry, particularly common-sense inquiry. He positioned science inquiry along a continuum with general or common-sense inquiry, which he termed "logic". Dewey argued that common-sense inquiry serves a practical purpose and exhibits features of science inquiry, such as asking questions and a reliance on evidence, although the focus of common-sense inquiry tends to be different. Common-sense inquiry deals with issues or problems in the immediate environment where people live, whereas the objects of science inquiry are more likely to be distant (e.g. spintronics) from familiar experiences in people's daily lives. While we acknowledge the fundamental differences (such as novel discovery compared with re-discovering science, 'messy' science compared with 'sanitised' science) between school science and science as practiced by scientists, the subject of interest in science (understanding the world around us) remains the same.

Learners' unfamiliarity with the functionality and purpose of science inquiry in improving their daily lives does little to motivate them to learn science (Aikenhead, 2006 ; Lee & Luykx, 2006 ), since learners may not appreciate the connections of science inquiry to their day-to-day needs and wants. Bereiter ( 1992 ) also distinguished two forms of knowledge—referent-centred and problem-centred. Referent-centred knowledge refers to subject matter that is organised around topics, such as that in textbooks. Problem-centred knowledge is knowledge that is organised around problems, whether they are transient problems, practical problems or problems of explanation. Bereiter argued that the referent-centred knowledge that is commonly taught in schools is limited in its application and meaningfulness to the lives of students. This lack of familiarity and affinity to referent-centred knowledge is akin to the science subject-matter knowledge that was mentioned by Dewey. Rather, it is problem-centred knowledge that would be useful when students encounter problems. Learning problem-centred knowledge allows learners to readily harness the relevant knowledge base that is useful for understanding and solving specific problems. This suggests a need to help learners make meaningful connections between science and their daily lives.

Further, Dewey opined that while the contexts in which scientific knowledge arises could be different from our daily common-sense world, careful consideration of scientific activities and application of the resultant knowledge to daily situations for use and enjoyment is possible. Similarly, in arguing for problem-centred knowledge, Bereiter ( 1992 ) questioned the value of inert knowledge that plays no role in helping us understand or deal with the world around us. Referent-centred knowledge has a higher tendency to be inert due to the way that the knowledge is organised and the way that it is encountered by learners. For instance, learning the equation and conditions for photosynthesis is not going to help learners appreciate how plants are adapted for photosynthesis, how these adaptations can allow plants to survive changes in climate, or how farmers can grow plants better by creating the best growing conditions. Rather, students could be exposed to problems of explanation where they are asked to unravel the possible reasons for low crop yield and suggest possible ways to overcome the problem. Hence, we argue here that the value of referent-centred knowledge is that it forms the basis and foundation for students to be able to discuss or suggest ways to overcome real-life problems. Referent-centred knowledge serves as part of the relevant knowledge base that can be harnessed to solve specific problems, or as foundational knowledge that students need in order to progress to higher-order conceptual knowledge that typically forms the foundations or pillars within a discipline. This notion of referent-centred knowledge serving as foundational knowledge that can and should be activated for application in problem-solving situations is shown by Delahunty et al. ( 2020 ), who found that students rely heavily on memory when conceptualising convergent problem-solving tasks.

While Bereiter argued for problem-centred knowledge, he cautioned that engagement should be with problems of explanation rather than transient or practical problems. He opined that if learners engage in transient or practical problems alone, they will only learn basic-category types of knowledge and fail to understand higher-order conceptual knowledge. For example, for photosynthesis, basic-level types of knowledge include facts about the conditions required for photosynthesis, listing the products formed from the process of photosynthesis and knowing that green leaves reflect green light. This basic-level knowledge should intentionally help learners learn higher-level conceptual knowledge, such as being able to draw on the conditions for photosynthesis when they encounter a plant that is not growing well or is exhibiting discoloration of its leaves.

Transient problems disappear once a solution becomes available, and there is a high likelihood that we will not remember the problem afterwards. Practical problems, according to Bereiter, are "stuck-door" problems that can be solved with or without basic-level knowledge and often have solutions that lack precise definition. There are usually a handful of practical strategies, such as pulling or pushing the door harder, kicking the door, etc., that will work for such problems. All these solutions lack a well-defined approach related to general scientific principles that is reproducible. Problems of explanation are the most desirable types of problems for learners since these are problems that persist and recur such that they can become organising points for knowledge. Problems of explanation consist of the conceptual representations of (1) a text base that serves to represent the text content and (2) a situation model that shows the portion of the world in which the text is relevant. The idea of a text base to represent text content in solving problems of explanation is similar to the idea of domain knowledge and structural knowledge (knowledge of how concepts within a domain are connected) proposed by Jonassen ( 2000 ). He argued that both types of knowledge are required to solve a range of problems, from well-structured problems, to ill-structured problems with a simulated context, to simple ill-structured problems and to complex ill-structured problems.

Jonassen indicated that complex ill-structured problems are typically design problems and are likely to be the most useful forms of problems for engaging learners in inquiry. Complex ill-structured design problems are the "wicked" problems that Buchanan ( 1992 ) discussed. Buchanan's idea is that design aims to incorporate knowledge from different fields of specialised inquiry to become whole. Complex or wicked problems are akin to the work of scientists, who navigate multiple factors and evidence to offer models that are typically oversimplified, but who apply these models to propose first-approximation explanations or solutions and iteratively relax constraints or assumptions to refine them. The connections between the subject matter of science and the design process to engineer a solution are delicate. While it is important to ensure that practical concerns and questions are taken into consideration in designing solutions (particularly a material artefact) to a practical problem, the challenge lies in ensuring that creativity in design is encouraged even if students initially lack or neglect the scientific conceptual understanding to explain or justify their design. In his articulation of wicked problems and the role of design thinking, Buchanan ( 1992 ) highlighted the need to pay attention to categories and placements. Categories "have fixed meanings that are accepted within the framework of a theory or a philosophy and serve as the basis for analyzing what already exist" (Buchanan, 1992 , p. 12). Placements, on the other hand, "have boundaries to shape and constrain meaning, but are not rigidly fixed and determinate" (p. 12).

The difference between the ideas presented by Dewey and Bereiter lies in problem design. For Dewey, scientific knowledge could be learnt from inquiring into practical problems that learners are familiar with. After all, Dewey viewed "modern science as continuous with, and to some degree an outgrowth and refinement of, practical or 'common-sense' inquiry" (Brown, 2012 ). Bereiter acknowledged the importance of familiar experiences, but instead of using them as starting points for learning science, he argued that practical problems are limited in helping learners acquire higher-order knowledge. Instead, he advocated for learners to organize their knowledge around problems that are complex, persistent and extended and that require explanations to be understood. Learners need to have a sense of the kinds of problems to which a specific concept is relevant before they can be said to have grasped the concept in a functionally useful way.

To connect problem solving, scientific knowledge and everyday experiences, we need to examine ways to re-negotiate the disciplinary boundaries (such as epistemic understanding, object of inquiry, degree of precision) of science and make relevant connections to common-sense inquiry and to the problem at hand. Integrated STEM appears to be one way in which the disciplinary boundaries of science can be re-negotiated to include practices from the fields of technology, engineering and mathematics. In integrated STEM learning, inquiry is seen more holistically as a fluid process in which the outcomes are not absolute but tentative. The fluidity of the inquiry process is reflected in its non-deterministic approach: students can use science inquiry, engineering design, the design process or any other inquiry approach that fits to arrive at the solution. This hybridity of inquiry between science, common sense and problems allows some familiar aspects of the science inquiry process to be applied to understand and generate solutions to familiar everyday problems. In attempting to infuse elements of common-sense inquiry with science inquiry in problem solving, logic plays an important role in helping learners make connections. Hypothetically, we argue that with increasing exposure to less familiar ways of thinking, such as those associated with science inquiry, students' familiarity with scientific reasoning increases, and hence such ways of thinking gradually become part of their common sense, which students can employ to solve future relevant problems. The theoretical ideas related to the complexities of problems, the different forms of inquiry afforded by different problems and the arguments for engaging in problem solving motivated us to examine empirically how learners engage with ill-structured problems to generate problem-centred knowledge. Of particular interest to us is how learners and teachers weave between practical and scientific reasoning as they inquire to integrate the components in the original problem into a unified whole.

The integrated STEM activity in our study was planned using the S-T-E-M quartet instructional framework (Tan et al., 2019 ). The S-T-E-M quartet instructional framework positions complex, persistent and extended problems at its core and focusses on the vertical disciplinary knowledge and understanding of the horizontal connections between the disciplines that could be gained by learners through solving the problem (Tan et al., 2019 ). Figure  1 depicts the disciplinary aspects of the problem that was presented to the students. The activity has science and engineering as the two lead disciplines. It spanned three 1-h lessons and required students to both learn and apply relevant scientific conceptual knowledge to solve a complex, real-world problem through processes that resemble the engineering design process (Wheeler et al., 2019 ).

Fig. 1 Connections across disciplines in the integrated STEM activity

In the first session (1 h), students were introduced to the problem and its context. The problem pertains to the issue of limited farmland in a land-scarce country that imports 90% of its food (Singapore Food Agency [SFA], 2020 ). The students were required to devise a solution by applying knowledge of the conditions required for photosynthesis and plant growth to design and build a vertical farming system to help farmers increase crop yield with limited farmland. This context was motivated by the government's effort to generate interest and knowledge in farming to achieve the 30 by 30 goal of supplying 30% of the country's nutritional needs by 2030. The scenario was a fictitious one in which students were asked to produce 120 tonnes of Kailan (a type of leafy vegetable) with two hectares of land instead of the usual six hectares over a specific period. In addition to the abovementioned constraints, the teacher also discussed relevant success criteria for evaluating the solution with the students. Students then researched existing urban farming approaches. They were given reading materials pertaining to urban farming to help them understand the affordances and constraints of existing solutions. In the second session (6 h), students engaged in ideation to generate potential solutions. They then designed, built and tested their solution and had opportunities to iteratively refine it. Students were given a list of materials (e.g. mounting board, straws, ice-cream sticks, glue, etc.) that they could use to build their solutions. In the final session (1 h), students presented their solution and reflected on how well it met the success criteria. The prior scientific conceptual knowledge that students required to make sense of the problem included knowledge related to plant nutrition, namely, the conditions for photosynthesis, the nutritional requirements of Kailan and the growth cycle of Kailan. The problem resembles a real-world problem that requires students to engage in some level of explanation of their design solution.

A total of 113 eighth graders (62 boys and 51 girls), all 14-year-olds, from six classes, and their teachers participated in the study. The students and their teachers were recruited as part of a larger study that examined the learning experiences of students working on integrated STEM activities that either begin with a problem, begin with a solution or are focused on content. Invitations were sent to schools across the country and interested schools opted in to the study. For the study reported here, all students and teachers were from six classes within one school. The teachers had all undergone 3 h of professional development with one of the authors on ways of implementing the integrated STEM activity used in this study. During the professional development session, the teachers learnt about the rationale of the activity, familiarized themselves with the materials and clarified the intentions and goals of the activity. The students mostly worked in groups of three, although a handful of students chose to work independently. The group size was not critical for the analysis of talk in this study as the analytic focus was on the kinds of knowledge applied rather than on collaboration or group thinking. We assumed that the types of inquiry adopted by teachers and students were largely dependent on the nature of the problem. Eighth graders were chosen for this study since lower secondary science offered at this grade level is thematic and integrated across biology, chemistry and physics. Furthermore, the topic of photosynthesis is taught under the theme of Interactions at eighth grade (CPDD, 2021 ). This thematic and integrated nature of science at eighth grade offered an ideal context and platform for integrated STEM activities to be trialled.

The final lesson in the series of three lessons in each of the six classes was analysed and reported in this study. Lessons where students worked on their solutions were not analysed because the recordings had poor audibility due to masking and physical distancing requirements under COVID-19 regulations. At the start of the analysed lesson, the instructions given by the teacher were:

You are going to present your models. Remember the scenario that you were given at the beginning that you were tasked to solve using your model. …. In your presentation, you have to present your prototype and its features, what is so good about your prototype, how it addresses the problem and how it saves costs and space. So, this is what you can talk about during your presentation. ….. pay attention to the presentation and write down questions you like to ask the groups after the presentation… you can also critique their model, you can evaluate, critique and ask questions…. Some examples of questions you can ask the groups are? Do you think your prototype can achieve optimal plant growth? You can also ask questions specific to their models.

Data collection

Parental consent was sought a month before the start of data collection. The informed consent adhered to the confidentiality and ethics guidelines described by the Institutional Review Board. Data collection took place over a period of one month with weekly video recording. Two video cameras, one at the front and one at the back of the science laboratory, were set up. The front camera captured the students seated at the front, while the back camera recorded the teacher as well as the groups of students at the back of the laboratory. The video recordings were synchronized so that the events captured by each camera could be interpreted from different angles. After transcription of the raw video files, the identities of students were substituted with pseudonyms.

Data analysis

The video recordings were analysed using the qualitative content analysis approach. Qualitative content analysis allows for patterns or themes and meanings to emerge from the process of systematic classification (Hsieh & Shannon, 2005 ). Qualitative content analysis is an appropriate analytic method for this study as it allows us to systematically identify episodes of practical inquiry and science inquiry to map them to the purposes and outcomes of these episodes as each lesson unfolds.

In total, six hours of video recordings, in which students presented their ideas while the teachers served as facilitators and mentors, were analysed. The video recordings were transcribed, and the transcripts were analysed using the NVivo software. Our unit of analysis is a single turn of talk (one utterance). We chose to use utterances as proxy indicators of reasoning practices based on the assumption that an utterance relates to both grammar and context. An utterance is a speech act that reveals both the meaning and the intentions of the speaker within a specific context (Li, 2008 ).

Our research analytical lens is also interpretative in nature, and the validity of our interpretation was established through inter-rater discussion and agreement. Each utterance at the speaker level in the transcripts was examined and coded as relevant either to practical reasoning or to scientific reasoning based on its content. The utterances could be a comment by the teacher, a question by a student or a response by another student. Deductive coding was deployed with the two codes, practical reasoning and scientific reasoning, derived from the theoretical ideas of Dewey and Bereiter as described earlier. Practical reasoning refers to utterances that reflect commonsensical knowledge or the application of everyday understanding. Scientific reasoning refers to utterances that consist of scientifically oriented questions, scientific terms, or the use of empirical evidence to explain. Examples of each type of reasoning are highlighted in the following section. Each coded utterance was then reviewed for a detailed description of the events that led to that specific utterance. The description of the context leading to the utterance is considered an episode. The episodes and codes were discussed and agreed upon by two of the authors. The two coders simultaneously watched the videos to identify and code the episodes. The coders interpreted the content of each utterance, examined the context in which the utterance was made and deduced the purpose of the utterance. Once each coder had established the sense-making aspect of the utterance in relation to the context, a code of either practical reasoning or scientific reasoning was assigned. The two coders then compared their coding for similarities and differences and discussed the differences until agreement was reached. Through this process, an agreement of 85% was reached between the coders. Where disagreement persisted, the codes of the more experienced coder were adopted.
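The agreement figure reported here is simple percentage agreement between the two coders. As a minimal sketch of that computation (assuming plain percentage agreement rather than a chance-corrected statistic such as Cohen's kappa; the utterance codes below are invented for illustration and are not the study's data):

```python
# Hypothetical sketch: percentage agreement between two coders who each
# assigned "practical" or "scientific" to the same sequence of utterances.

def percent_agreement(codes_a, codes_b):
    """Share of utterances to which both coders assigned the same code."""
    assert len(codes_a) == len(codes_b), "coders must rate the same utterances"
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Invented example codes for four utterances (not from the study)
coder_1 = ["practical", "practical", "scientific", "practical"]
coder_2 = ["practical", "scientific", "scientific", "practical"]

print(percent_agreement(coder_1, coder_2))  # 0.75, i.e. 75% agreement
```

A chance-corrected measure such as Cohen's kappa would discount agreement expected by chance; the study reports raw percentage agreement only.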

Results and Discussion

The specific STEM lessons analysed were those in which students presented the models of their solutions to the class for peer evaluation. Each group of students stood in front of the class and placed their model on the bench as they presented. There was also a board where they could sketch or write their explanations should they want to. The instructions given by the teacher were for students to explain their models and state the reasons for their design.

Prevalence of Reasoning

The 6 h of video recordings consist of 1422 turns of talk. Three hundred and four turns of talk (21%) were identified as talk related to reasoning, either practical reasoning or scientific reasoning. Practical reasoning made up 62% of the reasoning turns while 38% were scientific reasoning (Fig. 2).
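As a quick arithmetic check, the reported percentages follow directly from the raw turn counts (the absolute split of 188 versus 116 reasoning turns is inferred here from the rounded 62%/38% figures, not stated in the source):

```python
# Reproduce the reported proportions from the raw turn counts.
total_turns = 1422       # all turns of talk across the 6 h of video
reasoning_turns = 304    # turns coded as practical or scientific reasoning

share_reasoning = round(100 * reasoning_turns / total_turns)
print(share_reasoning)   # 21 (percent of all turns)

# Split of the 304 reasoning turns: 62% practical vs. 38% scientific
practical_turns = round(0.62 * reasoning_turns)       # inferred: 188
scientific_turns = reasoning_turns - practical_turns  # inferred: 116
print(practical_turns, scientific_turns)              # 188 116
```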

Fig. 2 Frequency of different types of reasoning

The two types of reasoning differ in the justifications that are used to substantiate the claims or decisions made. Table 1 describes the differences between the two categories of reasoning.

Types of reasoning used in the integrated STEM activity

Applications of Scientific Reasoning

Instances of engagement with scientific reasoning (for instance, using scientific concepts to justify, raising scientifically oriented questions, or providing scientific explanations) revolved around the conditions for photosynthesis and the concept of energy conversion when students were presenting their ideas or when they were questioned by their peers. For example, in explaining the reason for including fish in their plant system, one group of students made a connection to cyclical energy transfer: "…so as the roots of the plants submerged in the water, faeces from the fish will be used as fertilizers so that the plant can grow". The students considered how organic matter that is still trapped within waste materials can be released and taken up by plants to enhance growth. The application of scientific reasoning made their design one that the teacher evaluated as innovative and sustainable. Some students attempted more ecofriendly designs by considering energy efficiencies through incorporating water turbines in their farming systems. They applied the concept of different forms of energy and energy conversion when their peers inquired about their design. The same scientific concepts were explained at different levels of detail by different students. At one level, students explained in a purely descriptive manner what happens to the different entities in their prototypes, with implied changes to the forms of energy: "…spins then generates electricity. So right, when the water falls down, then it will spin. The water will fall on the fan blade thing, then it will spin and then it generates electricity. So, it saves electricity, and also saves water". At another level, students defended their design through an explanation of energy conversion: "…because when the water flows right, it will convert gravitational potential energy so, when it reaches the bottom, there is not really much gravitational potential energy". While these instances of applying scientific reasoning indicated that students had knowledge about the scientific phenomena and could apply it to assist in the problem-solving process, we were not able to establish whether students understood the science behind how the dynamo works to generate electricity. Students in eighth grade only need to know how a generator works at a descriptive level, and a specialized understanding of how a dynamo works is beyond the intended learning outcomes at this grade level.

The application of scientific concepts for justification was not always accurate. For instance, the naïve conception that plants only respire at night and not in the day surfaced when one group of students tried to justify the growth rates of Kailan: "…I mean, they cannot be making food 24/7 and growing 24/7. They have nighttime for a reason. They need to respire". These students did not appreciate that plants respire in the day as well, and hence that respiration occurs 24/7. This naïve conception that plants only respire at night is common among learners of biology (e.g. Svandova, 2014 ), since students learn that plants give off oxygen in the day and take in oxygen at night. The hasty conclusion from that observation is that plants carry out photosynthesis in the day and respire at night. The relative rates of photosynthesis and respiration were not considered by many students.

Besides naïve conceptions, engagement with scientific ideas to solve a practical problem offers opportunities for unusual and alternative ideas about science to surface. For instance, another group of students explained that they lined up their plants so that "they can take turns to absorb sunlight for photosynthesis". These students appeared to be explaining that the sun moves and that, depending on its position, some plants may be under shade; hence rates of photosynthesis depend on the position of the sun. However, this idea could also be interpreted as showing that (1) the students failed to appreciate that sunlight is everywhere and (2) plants, unlike animals, particularly humans, do not have the concept of turn-taking. These diverse ideas surfaced when students were given opportunities to apply their knowledge of photosynthesis to solve a problem.

Applications of Practical Reasoning

Teachers and students used more practical reasoning than scientific reasoning during this integrated STEM activity requiring both science and engineering practices, as seen from the 62% occurrence of practical reasoning compared with 38% for scientific reasoning. The intention of the activity, to integrate students' scientific knowledge related to plant nutrition with the engineering practice of building a model of a vertical farming system, could be the reason for the prevalence of practical reasoning. The practical reasoning used related to structural design considerations of the farming system, such as how watering, lighting and harvesting can be carried out in the most efficient manner. Students defended the strengths of their designs using logic based on their everyday experiences. In the excerpt below (transcribed verbatim), we see students applying their everyday experience: when something is "thinner" (likely to mean narrower), logically it would save space; further, to reach a higher level, you use a machine to climb up.

Excerpt 1. “Thinner, more space” Because it is more thinner, so like in terms of space, it’s very convenient. So right, because there is – because it rotates right, so there is this button where you can stop it. Then I also installed steps, so that – because there are certain places you can’t reach even if you stop the – if you stop the machine, so when you stop it and you climb up, and then you see the condition of the plants, even though it costs a lot of labour, there is a need to have an experienced person who can grow plants. Then also, when like – when water reach the plants, cos the plants I want to use is soil-based, so as the water reach the soil, the soil will xxx, so like the water will be used, and then we got like – and then there’s like this filter that will filter like the dirt.

In the examples of practical reasoning, we were not able to identify instances where students and teachers engaged in discussion around trade-offs and optimisation. Understanding constraints, trade-offs and optimisation are important ideas in the informed design matrix for engineering suggested by Crismond and Adams (2012). For instance, utterances such as “everything will be reused”, “we will be saving space”, “it looks very flimsy” or “so that it can contains [sic] the plants” were used. These utterances were made both by students justifying their own prototypes and by peers challenging the designs of others. Longer responses involving practical reasoning were based on common-sense, everyday logic: “…the product does not require much manpower, so other than one or two supervisors like I said just now, to harvest the Kailan, hence, not too many people need to be used, need to be hired to help supervise the equipment and to supervise the growth”. We infer that the higher incidence of utterances related to practical reasoning could be due to the presence of concrete artefacts on display, which focused students and teachers on questioning the structure at hand. This inference was made because the instructions given by the teacher at the start of the students’ presentations focused largely on the model rather than the scientific concepts or reasoning behind it.

Intersection Between Scientific and Practical Reasoning

Comparing science subject matter knowledge and problem-solving to the ideas of categories and placements (Buchanan, 1992), subject matter is analogous to categories, where meanings are fixed with well-established epistemic practices and norms. The problem-solving process and the design of solutions are likened to placements, where boundaries are less rigid, opening opportunities for students’ personal experiences and ideas to be presented. Placements allow students to apply knowledge from daily experiences and common-sense logic to justify decisions. Common-sense knowledge and logic are more accessible, and hence we observe a higher frequency of usage. Comparatively, while science subject matter (categories) is also used, it is observed less frequently. This could be due either to less familiarity with the subject matter or to a lack of appropriate opportunities to apply it in practical problem solving. The challenge for teachers implementing a STEM problem-solving activity therefore lies in balancing the application of scientific and practical reasoning to deepen understanding of disciplinary knowledge in the context of solving a problem in a meaningful manner.

Our observations suggest that engaging students with practical inquiry tasks with some engineering demands, such as the design of modern farm systems, offers opportunities for them to convert their personal lived experiences into feasible concrete ideas that they can share in a public space for critique. The peer critique following the sharing of their practical ideas allows both practical and scientific questions to be asked and gives students the chance to defend their ideas. For instance, after one group of students presented a prototype with silvered surfaces, a student asked: “what is the function of the silver panels?”, to which his peers replied: “Makes the light bounce. Bounce the sunlight away and then to other parts of the tray.” This question indicated that students applied their knowledge that shiny silvered surfaces reflect light, using it to disperse light to the other trays where crops were growing. An example of a practical question was “what is the purpose of the ladder?”, to which the students replied: “To take the plants – to refill the plants, the workers must climb up”. While the process of presentation and peer critique mimics peer review in the science inquiry process, the conceptual knowledge of science may not always be evident, as students paid more attention to the design constraints, such as lighting, watering and space, that were set in the activity. Given the context of growing plants, the science behind the nutritional requirements of plants, the process of photosynthesis, and the adaptations of plants could be more deliberately explored.

The goal of our work lies in applying the theoretical ideas of Dewey and Bereiter to better understand reasoning practices in integrated STEM problem solving. We argue that this is a worthy pursuit, as it helps us better understand the roles of scientific reasoning in practical problem solving. One of the goals of integrated STEM education in schools is to enculturate students into the practices of science, engineering and mathematics, which include disciplinary conceptual knowledge, epistemic practices, and social norms (Kelly & Licona, 2018). In the integrated form, the boundaries and approaches to STEM learning are more diverse than in monodisciplinary ways of problem solving. For instance, in integrated STEM problem solving, besides scientific investigations and explanations, students are also required to understand constraints, design optimal solutions within specific parameters and even construct prototypes. Students could benefit from these experiences as they learn the ways of speaking, doing and being that come with participating meaningfully in integrated STEM problem solving in schools.

With reference to the first research question, What is the extent of practical and scientific reasoning in integrated STEM problem solving?, our analysis suggests that there are fewer instances of scientific reasoning than practical reasoning. Considering the intention of integrated STEM learning, and adopting Bereiter’s idea that students should learn higher-order conceptual knowledge through engagement with problem solving, we argue that scientific reasoning needs to feature more strongly in integrated STEM lessons so that students can gain higher-order scientific conceptual knowledge. While the lessons observed were strong in design and building, what was missing in generating solutions was engagement in investigations, where learners collect or are presented with data and make decisions about those data to assess how viable the solutions are. Integrated STEM problems can be designed so that science inquiry is infused, such as by carrying out investigations to figure out relationships between variables. Duschl and Bybee (2014) have argued for the need to engage students in problematising science inquiry and making choices about what works and what does not.

With reference to the second research question, What is achieved through practical and scientific reasoning during integrated STEM problem solving?, our analyses suggest that utterances of practical reasoning are typically used to justify the physical design of the prototype. These utterances rely largely on what is observable and are associated with basic-level knowledge and experiences. The higher frequency of utterances related to practical reasoning, and the nature of those utterances, suggest that engagement with practical reasoning is more accessible, since it relates more closely to students’ lived experiences and common sense. Bereiter (1992) has urged educators to engage learners in learning that goes beyond basic-level knowledge, since accumulation of basic-level knowledge does not lead to higher-level conceptual learning. Students should also be encouraged to use scientific knowledge to justify their prototype designs and to apply scientific evidence and logic to support their ideas. Engagement with scientific reasoning is preferred because the conceptual knowledge, epistemic practices and social norms of science are more widely recognised than those of practical reasoning, which are likely to be more varied since they rely on personal experiences and common sense. This leads us to assert that both context and content are important in integrated STEM learning. Understanding the context or the solution without understanding the scientific principles that make it work renders the learning less meaningful, since we “…cannot strip learning of its context, nor study it in a ‘neutral’ context. It is always situated, always related to some ongoing enterprise” (Bruner, 2004, p. 20).

To further this discussion on how integrated STEM learning experiences can harness the ideas of practical and scientific reasoning to move learners from basic-level knowledge to higher-order conceptual knowledge, we propose further studies that involve working with teachers to identify and create relevant problems-of-explanation that focus on feasible, worthy inquiry ideas, such as those related to specific aspects of transportation, alternative energy sources and clean water that have impact on the local community. These problems can be designed to incorporate opportunities for systematic scientific investigation, and scaffolded so that there are opportunities to engage in the epistemic practices of the constituent disciplines of STEM. Researchers could then examine the impact of problems-of-explanation on students’ learning of higher-order scientific concepts. During the problem-solving process, more attention can be given to eliciting students’ initial and unfolding (practical) ideas and using them as a basis for starting the science inquiry process. Researchers can examine how to encourage discussions that focus on making meaning of the scientific phenomena embedded within specific problems. This will help students appreciate how data can be used as evidence to support scientific explanations as well as justifications for solutions to problems. With evidence, learners can be guided to reason about the phenomena with explanatory models. These aspects should move engagement in integrated STEM problem solving from being purely practical to being explanatory.

Limitations

There are four key limitations to our study. Firstly, the degree to which our observations generalise is limited. This study set out to illustrate how Dewey’s and Bereiter’s ideas can be used as lenses to examine the knowledge used in problem-solving; as such, the findings we report here are limited in their generalisability across different contexts and problems. Secondly, the lessons analysed came from teacher-frontal teaching and group presentations of solutions, and excluded students’ group discussions. We acknowledge that talk within group work could also involve practical and scientific reasoning. There were two practical considerations for choosing to analyse the first and presentation segments of the suite of lessons. First, these two lessons involved participation from everyone in the class, and we wanted to survey the use of practical and scientific reasoning by the students as a class. Second, methodologically, clarity of utterances is important for accurate analysis; as students were wearing face masks during data collection, their utterances during group discussions lacked the clarity needed for accurate transcription and analysis. Thirdly, the insights from this study were gleaned from a small sample of six classes of students. Further work could involve more classes, although that would require more resources devoted to analysis of the videos. Finally, the number of students varied across groups, which could potentially have affected the reasoning practices during discussions.

Acknowledgements

The authors would like to acknowledge the contributions of the other members of the research team, who gave their comments and feedback at the conceptualization stage.

Authors’ Contribution

The first author conceptualized, researched, read, analysed and wrote the article.

The second author worked on compiling the essential features and the variations tables.

The third and fourth authors worked with the first author on the ideas and refinements of the idea.

This study is funded by Office of Education Research grant OER 24/19 TAL.

Data Availability

Declarations.

The authors declare that they have no competing interests.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Aik-Ling Tan, Email: [email protected] .

Yann Shiou Ong, Email: [email protected] .

Yong Sim Ng, Email: [email protected] .

Jared Hong Jie Tan, Email: tanhongjiejared@gmail.com.

  • Aikenhead, G. S. (2006). Science education for everyday life: Evidence-based practice. Teachers College Press.
  • Bereiter, C. (1992). Referent-centred and problem-centred knowledge: Elements of an educational epistemology. Interchange, 23(4), 337–361. doi:10.1007/BF01447280
  • Breiner, J. M., Johnson, C. C., Harkness, S. S., & Koehler, C. M. (2012). What is STEM? A discussion about conceptions of STEM in education and partnership. School Science and Mathematics, 112(1), 3–11. doi:10.1111/j.1949-8594.2011.00109.x
  • Brown, M. J. (2012). John Dewey’s logic of science. HOPOS: The Journal of the International Society for the History of Philosophy of Science, 2(2), 258–306.
  • Bruner, J. (2004). The psychology of learning: A short history. Daedalus, Winter 2004, 13–20.
  • Bryan, L. A., Moore, T. J., Johnson, C. C., & Roehrig, G. H. (2016). Integrated STEM education. In C. C. Johnson, E. E. Peters-Burton, & T. J. Moore (Eds.), STEM road map: A framework for integrated STEM education (pp. 23–37). Routledge.
  • Buchanan, R. (1992). Wicked problems in design thinking. Design Issues, 8(2), 5–21. doi:10.2307/1511637
  • Bybee, R. W. (2013). The case for STEM education: Challenges and opportunities. NSTA Press.
  • Crismond, D. P., & Adams, R. S. (2012). The informed design teaching and learning matrix. Journal of Engineering Education, 101(4), 738–797. doi:10.1002/j.2168-9830.2012.tb01127.x
  • Cunningham, C. M., & Lachapelle, P. (2016). Experiences to engage all students. Educational Designer, 3(9), 1–26. https://www.educationaldesigner.org/ed/volume3/issue9/article31/
  • Curriculum Planning and Development Division [CPDD] (2021). 2021 lower secondary science express/normal (academic) teaching and learning syllabus. Singapore: Ministry of Education.
  • Delahunty, T., Seery, N., & Lynch, R. (2020). Exploring problem conceptualization and performance in STEM problem solving contexts. Instructional Science, 48, 395–425. doi:10.1007/s11251-020-09515-4
  • Dewey, J. (1938). Logic: The theory of inquiry. Henry Holt and Company.
  • Dewey, J. (1910). Science as subject-matter and as method. Science, 31(787), 121–127. doi:10.1126/science.31.787.121
  • Dewey, J. (1910). How we think. D.C. Heath & Co.
  • Duschl, R. A., & Bybee, R. W. (2014). Planning and carrying out investigations: An entry to learning and to teacher professional development around NGSS science and engineering practices. International Journal of STEM Education, 1(12). doi:10.1186/s40594-014-0012-6
  • Gale, J., Alemder, M., Lingle, J., & Newton, S. (2020). Exploring critical components of an integrated STEM curriculum: An application of the innovation implementation framework. International Journal of STEM Education, 7(5). doi:10.1186/s40594-020-0204-1
  • Hsieh, H.-F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. doi:10.1177/1049732305276687
  • Jonassen, D. H. (2000). Toward a design theory of problem solving. ETR&D, 48(4), 63–85. doi:10.1007/BF02300500
  • Kelly, G., & Licona, P. (2018). Epistemic practices and science education. In M. R. Matthews (Ed.), History, philosophy and science teaching: New perspectives (pp. 139–165). Cham, Switzerland: Springer. doi:10.1007/978-3-319-62616-1
  • Lee, O., & Luykx, A. (2006). Science education and student diversity: Synthesis and research agenda. Cambridge University Press.
  • Li, D. (2008). The pragmatic construction of word meaning in utterances. Journal of Chinese Language and Computing, 18(3), 121–137.
  • National Research Council (1996). The National Science Education Standards. National Academy Press.
  • National Research Council (2000). Inquiry and the National Science Education Standards: A guide for teaching and learning. Washington, DC: The National Academies Press. doi:10.17226/9596
  • OECD (2018). The future of education and skills: Education 2030. Downloaded on October 3, 2020 from https://www.oecd.org/education/2030/E2030%20Position%20Paper%20(05.04.2018).pdf
  • Park, W., Wu, J.-Y., & Erduran, S. (2020). The nature of STEM disciplines in science education standards documents from the USA, Korea and Taiwan: Focusing on disciplinary aims, values and practices. Science & Education, 29, 899–927.
  • Pleasants, J. (2020). Inquiring into the nature of STEM problems: Implications for pre-college education. Science & Education, 29, 831–855. doi:10.1007/s11191-020-00135-5
  • Roehrig, G. H., Dare, E. A., Ring-Whalen, E., & Wieselmann, J. R. (2021). Understanding coherence and integration in integrated STEM curriculum. International Journal of STEM Education, 8(2). doi:10.1186/s40594-020-00259-8
  • SFA (2020). The food we eat. Downloaded on May 5, 2021 from https://www.sfa.gov.sg/food-farming/singapore-food-supply/the-food-we-eat
  • Svandova, K. (2014). Secondary school students’ misconceptions about photosynthesis and plant respiration: Preliminary results. Eurasia Journal of Mathematics, Science, & Technology Education, 10(1), 59–67. doi:10.12973/eurasia.2014.1018a
  • Tan, M. (2020). Context matters in science education. Cultural Studies of Science Education. doi:10.1007/s11422-020-09971-x
  • Tan, A.-L., Teo, T. W., Choy, B. H., & Ong, Y. S. (2019). The S-T-E-M Quartet. Innovation and Education, 1(1), 3. doi:10.1186/s42862-019-0005-x
  • Wheeler, L. B., Navy, S. L., Maeng, J. L., & Whitworth, B. A. (2019). Development and validation of the Classroom Observation Protocol for Engineering Design (COPED). Journal of Research in Science Teaching, 56(9), 1285–1305. doi:10.1002/tea.21557
  • World Economic Forum (2020). Schools of the future: Defining new models of education for the fourth industrial revolution. Retrieved on Jan 18, 2020 from https://www.weforum.org/reports/schools-of-the-future-defining-new-models-of-education-for-the-fourth-industrial-revolution/

The Problem-Solving Process

Looking at the basic problem-solving process to help keep you on the right track.

By the Mind Tools Content Team

Problem-solving is an important part of planning and decision-making. The process has much in common with the decision-making process, and in the case of complex decisions, can form part of the process itself.

We face and solve problems every day, in a variety of guises and of differing complexity. Some, such as the resolution of a serious complaint, require a significant amount of time, thought and investigation. Others, such as a printer running out of paper, are so quickly resolved they barely register as a problem at all.

Despite the everyday occurrence of problems, many people lack confidence when it comes to solving them, and as a result may choose to stay with the status quo rather than tackle the issue. Broken down into steps, however, the problem-solving process is very simple. While there are many tools and techniques available to help us solve problems, the outline process remains the same.

The main stages of problem-solving are outlined below, though not all are required for every problem that needs to be solved.

1. Define the Problem

Clarify the problem before trying to solve it. A common mistake with problem-solving is to react to what the problem appears to be, rather than what it actually is. Write down a simple statement of the problem, and then underline the key words. Be certain there are no hidden assumptions in the key words you have underlined. One way of doing this is to use a synonym to replace the key words. For example, ‘We need to encourage higher productivity’ might become ‘We need to promote superior output’, which has a different meaning.

2. Analyze the Problem

Ask yourself, and others, the following questions.

  • Where is the problem occurring?
  • When is it occurring?
  • Why is it happening?

Be careful not to jump to ‘who is causing the problem?’. When stressed and faced with a problem, it is all too easy to assign blame. This, however, can cause negative feelings and does not help to solve the problem. As an example, if an employee is underperforming, the root of the problem might lie in a number of areas, such as lack of training, workplace bullying or management style. Assigning immediate blame to the employee would not, therefore, resolve the underlying issue.

Once the answers to the where, when and why have been determined, the following questions should also be asked:

  • Where can further information be found?
  • Is this information correct, up-to-date and unbiased?
  • What does this information mean in terms of the available options?

3. Generate Potential Solutions

When generating potential solutions it can be a good idea to have a mixture of ‘right brain’ and ‘left brain’ thinkers: in other words, some people who think laterally and some who think logically. This provides a balance in terms of generating the widest possible variety of solutions while also being realistic about what can be achieved. There are many tools and techniques which can help produce solutions, including thinking about the problem from a number of different perspectives, and brainstorming, where a team or individual writes down as many possibilities as they can think of to encourage lateral thinking and generate a broad range of potential solutions.

4. Select Best Solution

When selecting the best solution, consider:

  • Is this a long-term solution, or a ‘quick fix’?
  • Is the solution achievable in terms of available resources and time?
  • Are there any risks associated with the chosen solution?
  • Could the solution, in itself, lead to other problems?

This stage in particular demonstrates why problem-solving and decision-making are so closely related.

5. Take Action

In order to implement the chosen solution effectively, consider the following:

  • What will the situation look like when the problem is resolved?
  • What needs to be done to implement the solution? Are there systems or processes that need to be adjusted?
  • What will be the success indicators?
  • What are the timescales for the implementation? Does the scale of the problem/implementation require a project plan?
  • Who is responsible?

Once the answers to all the above questions are written down, they can form the basis of an action plan.
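
As an illustrative sketch only (the field names below are our own invention, not part of any standard or of this article), the answers to the questions above map naturally onto a small record type:

```python
from dataclasses import dataclass


@dataclass
class ActionPlan:
    """Minimal action-plan record built from the questions above.

    Field names are hypothetical, chosen to mirror each question.
    """
    desired_outcome: str            # what the situation looks like when resolved
    tasks: list[str]                # what needs to be done to implement the solution
    success_indicators: list[str]   # how progress will be measured
    deadline: str                   # timescale for the implementation
    owner: str                      # who is responsible


plan = ActionPlan(
    desired_outcome="Printer paper is restocked before it runs out",
    tasks=["Set a minimum stock level", "Assign weekly stock checks"],
    success_indicators=["No out-of-paper incidents in a month"],
    deadline="End of quarter",
    owner="Office manager",
)
print(plan.owner)  # -> Office manager
```

Writing the plan down in one structure like this makes it easy to check that no question has been left unanswered before work begins.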

6. Monitor and Review

One of the most important factors in successful problem-solving is continual observation and feedback. Use the success indicators in the action plan to monitor progress on a regular basis. Is everything as expected? Is everything on schedule? Keep an eye on priorities and timelines to prevent them from slipping.

If the indicators are not being met, or if timescales are slipping, consider what can be done. Was the plan realistic? If so, are sufficient resources being made available? Are these resources targeting the correct part of the plan? Or does the plan need to be amended? Regular review and discussion of the action plan is important so small adjustments can be made on a regular basis to help keep everything on track.

Once all the indicators have been met and the problem has been resolved, consider what steps can now be taken to prevent this type of problem from recurring. It may be that the chosen solution already prevents a recurrence; however, if an interim or partial solution has been chosen, it is important not to lose momentum.

Problems, by their very nature, will not always fit neatly into a structured problem-solving process. This process, therefore, is designed as a framework which can be adapted to individual needs and the nature of the problem.
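
Read as a cycle, the six stages above can be sketched in a few lines of code. This is a hypothetical skeleton, not an implementation from any library: each stage is supplied as a caller-defined function, and the monitor-and-review stage decides whether the cycle repeats.

```python
def solve(problem, analyze, generate, select, act, review, max_cycles=5):
    """Run the problem-solving cycle until review reports success.

    Each argument after `problem` is a caller-supplied function mirroring
    stages 2-6 above (stage 1, defining the problem, is `problem` itself).
    """
    for _ in range(max_cycles):
        facts = analyze(problem)      # 2. analyze: where/when/why
        options = generate(facts)     # 3. generate potential solutions
        solution = select(options)    # 4. select the best solution
        outcome = act(solution)       # 5. take action
        if review(outcome):           # 6. monitor and review
            return solution           # indicators met: problem resolved
    return None                       # cycle budget exhausted: re-plan


# Toy usage: the "problem" is a target number; solutions are candidate guesses.
result = solve(
    problem=7,
    analyze=lambda p: p,
    generate=lambda facts: range(10),
    select=lambda opts: max(o for o in opts if o <= 7),
    act=lambda s: s,
    review=lambda outcome: outcome == 7,
)
print(result)  # -> 7
```

The loop makes the article's closing point concrete: the stages form a framework, and when review finds the indicators unmet, the cycle simply runs again with amended inputs.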


Business Insights

Harvard Business School Online's Business Insights Blog provides the career insights you need to achieve your goals and gain confidence in your business skills.


What Is Creative Problem-Solving & Why Is It Important?

01 Feb 2022

One of the biggest hindrances to innovation is complacency—it can be more comfortable to do what you know than venture into the unknown. Business leaders can overcome this barrier by mobilizing creative team members and providing space to innovate.

There are several tools you can use to encourage creativity in the workplace. Creative problem-solving is one of them, which facilitates the development of innovative solutions to difficult problems.

Here’s an overview of creative problem-solving and why it’s important in business.

What Is Creative Problem-Solving?

Research is necessary when solving a problem. But there are situations where a problem’s specific cause is difficult to pinpoint. This can occur when there’s not enough time to narrow down the problem’s source or there are differing opinions about its root cause.

In such cases, you can use creative problem-solving, which allows you to explore potential solutions regardless of whether a problem has been defined.

Creative problem-solving is less structured than other innovation processes and encourages exploring open-ended solutions. It also focuses on developing new perspectives and fostering creativity in the workplace. Its benefits include:

  • Finding creative solutions to complex problems: User research may not fully illustrate a situation’s complexity. While other innovation processes rely on this information, creative problem-solving can yield solutions without it.
  • Adapting to change: Business is constantly changing, and business leaders need to adapt. Creative problem-solving helps overcome unforeseen challenges and find solutions to unconventional problems.
  • Fueling innovation and growth: In addition to solutions, creative problem-solving can spark innovative ideas that drive company growth. These ideas can lead to new product lines, services, or a modified operations structure that improves efficiency.

Creative problem-solving is traditionally based on the following key principles:

1. Balance Divergent and Convergent Thinking

Creative problem-solving uses two primary tools to find solutions: divergence and convergence. Divergence generates ideas in response to a problem, while convergence narrows them down to a shortlist. It balances these two practices and turns ideas into concrete solutions.
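
A minimal sketch can make this balance concrete. The function names and scoring rule below are invented for illustration: `diverge` collects every idea without judgment, then `converge` ranks the ideas and keeps a shortlist.

```python
def diverge(prompt, idea_sources):
    """Divergent phase: collect every idea from every source, no filtering."""
    ideas = []
    for source in idea_sources:
        ideas.extend(source(prompt))
    return ideas


def converge(ideas, score, shortlist_size=3):
    """Convergent phase: rank ideas by a scoring function and keep a shortlist."""
    return sorted(ideas, key=score, reverse=True)[:shortlist_size]


# Toy example: two "thinkers" propose ideas; the (arbitrary) score favours
# shorter, more focused ideas.
lateral = lambda p: [f"reframe {p}", f"reverse {p}", f"combine {p} with play"]
logical = lambda p: [f"measure {p}", f"automate {p}"]

ideas = diverge("onboarding", [lateral, logical])
shortlist = converge(ideas, score=lambda idea: -len(idea))
print(len(ideas), len(shortlist))  # -> 5 3
```

Keeping the two phases as separate functions mirrors the principle of deferring judgment: nothing is rejected while ideas are still being generated.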

2. Reframe Problems as Questions

By framing problems as questions, you shift from focusing on obstacles to solutions. This provides the freedom to brainstorm potential ideas.

3. Defer Judgment of Ideas

When brainstorming, it can be natural to reject or accept ideas right away. Yet, immediate judgments interfere with the idea generation process. Even ideas that seem implausible can turn into outstanding innovations upon further exploration and development.

4. Focus on "Yes, And" Instead of "No, But"

Using negative words like "no" discourages creative thinking. Instead, use positive language to build and maintain an environment that fosters the development of creative and innovative ideas.

Creative Problem-Solving and Design Thinking

Whereas creative problem-solving facilitates developing innovative ideas through a less structured workflow, design thinking takes a far more organized approach.

Design thinking is a human-centered, solutions-based process that fosters the ideation and development of solutions. In the online course Design Thinking and Innovation, Harvard Business School Dean Srikant Datar leverages a four-phase framework to explain design thinking.

The four stages are:


  • Clarify: The clarification stage allows you to empathize with the user and identify problems. Observations and insights are informed by thorough research. Findings are then reframed as problem statements or questions.
  • Ideate: Ideation is the process of coming up with innovative ideas. This stage draws heavily on the divergent thinking at the heart of creative problem-solving.
  • Develop: In the development stage, ideas evolve into experiments and tests. Ideas converge and are explored through prototyping and open critique.
  • Implement: Implementation involves continuing to test and experiment to refine the solution and encourage its adoption.

Creative problem-solving primarily operates in the ideate phase of design thinking but can be applied to the other phases as well, because design thinking is an iterative process that moves between stages as ideas are generated and pursued. Revisiting stages is normal and encouraged, since innovation requires exploring multiple ideas.

Creative Problem-Solving Tools

While there are many useful tools in the creative problem-solving process, here are three you should know:

Creating a Problem Story

One way to innovate is by creating a story about a problem to understand how it affects users and what solutions best fit their needs. Here are the steps you need to take to use this tool properly.

1. Identify a UDP

Create a problem story to identify the undesired phenomenon (UDP). For example, consider a company that produces printers that overheat. In this case, the UDP is "our printers overheat."

2. Move Forward in Time

To move forward in time, ask: “Why is this a problem?” For example, minor damage could be one result of the machines overheating. In more extreme cases, printers may catch fire. Don't be afraid to create multiple problem stories if you think of more than one UDP.

3. Move Backward in Time

To move backward in time, ask: “What caused this UDP?” If you can't identify the root problem, think about what typically causes the UDP to occur. For the overheating printers, overuse could be a cause.

Following the three steps above yields a clear problem story:

  • The printer is overused.
  • The printer overheats.
  • The printer breaks down.

You can extend the problem story in either direction if you think of additional cause-and-effect relationships.

4. Break the Chains

By this point, you’ll have multiple UDP storylines. Take two that are similar and focus on breaking the chains connecting them. This can be accomplished through inversion or neutralization.

  • Inversion: Inversion changes the relationship between two UDPs so the cause is the same but the effect is the opposite. For example, if the UDP is "the more X happens, the more likely Y is to happen," inversion changes the equation to "the more X happens, the less likely Y is to happen." Using the printer example, inversion would consider: "What if the more a printer is used, the less likely it’s going to overheat?" Innovation requires an open mind. Just because a solution initially seems unlikely doesn't mean it can't be pursued further or spark additional ideas.
  • Neutralization: Neutralization completely eliminates the cause-and-effect relationship between X and Y. This changes the above equation to "the more or less X happens has no effect on Y." In the case of the printers, neutralization would rephrase the relationship to "the more or less a printer is used has no effect on whether it overheats."

Even if creating a problem story doesn't provide a solution, it can offer useful context to users’ problems and additional ideas to be explored. Given that divergence is one of the fundamental practices of creative problem-solving, it’s a good idea to incorporate it into each tool you use.
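
The problem-story tool above can be sketched as a small data structure. This is purely illustrative, not a standard API: the UDP chain, the move-forward/move-backward questions, and the two chain-breaking operations are modeled explicitly, using the printer example from the article.

```python
# Illustrative sketch (not a standard API): the problem-story tool as a
# small data structure. The UDP chain, the move-forward/backward
# questions, and the two chain-breaking operations are modeled explicitly.

from dataclasses import dataclass, field

@dataclass
class ProblemStory:
    """A cause-and-effect chain of undesired phenomena (UDPs)."""
    chain: list = field(default_factory=list)  # ordered: cause -> effect

    def move_backward(self, cause):
        """Step 3: ask 'What caused this UDP?' and prepend the answer."""
        self.chain.insert(0, cause)

    def move_forward(self, effect):
        """Step 2: ask 'Why is this a problem?' and append the answer."""
        self.chain.append(effect)

    def break_link(self, i, mode):
        """Step 4: reframe the link between chain[i] and chain[i + 1]."""
        cause, effect = self.chain[i], self.chain[i + 1]
        if mode == "inversion":
            return f"What if the more '{cause}', the less likely '{effect}'?"
        if mode == "neutralization":
            return f"What if '{cause}' had no effect on '{effect}'?"
        raise ValueError(f"unknown mode: {mode}")

# The printer story from the article:
story = ProblemStory(["printer overheats"])      # Step 1: identify the UDP
story.move_backward("printer is overused")
story.move_forward("printer breaks down")

print(story.chain)
print(story.break_link(0, "inversion"))
```

Extending the story in either direction is just another `move_forward` or `move_backward` call, which mirrors how the tool grows as you think of additional cause-and-effect relationships.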

Brainstorming

Brainstorming is a tool that can be highly effective when guided by the iterative qualities of the design thinking process. It involves openly discussing and debating ideas and topics in a group setting. This facilitates idea generation and exploration as different team members consider the same concept from multiple perspectives.

Hosting brainstorming sessions can introduce problems such as groupthink or social loafing. To combat these, leverage a three-step brainstorming method involving divergence and convergence:

  • Have each group member come up with as many ideas as possible and write them down to ensure the brainstorming session is productive.
  • Continue the divergence of ideas by collectively sharing and exploring each idea as a group. The goal is to create a setting where new ideas are inspired by open discussion.
  • Begin the convergence of ideas by narrowing them down to a few explorable options. There’s no "right number of ideas." Don't be afraid to consider exploring all of them, as long as you have the resources to do so.
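
The three steps above can be sketched as a simple diverge-then-converge pipeline. All names here are illustrative: the scoring function is a stand-in for whatever criteria a team actually uses to converge on a shortlist.

```python
# Minimal sketch of the three-step brainstorm: individual divergence,
# group divergence (pooling ideas), then convergence to a shortlist.
# The scoring function is a stand-in for a team's real criteria.

def brainstorm(individual_ideas, score, shortlist_size=3):
    # Step 1: each member writes down as many ideas as possible.
    # Step 2: pool them into one shared list, dropping duplicates while
    # preserving order (open discussion would expand this pool further).
    pooled = []
    for ideas in individual_ideas.values():
        for idea in ideas:
            if idea not in pooled:
                pooled.append(idea)
    # Step 3: converge by ranking the pool and keeping a shortlist.
    return sorted(pooled, key=score, reverse=True)[:shortlist_size]

ideas = {
    "ana": ["add a cooling fan", "reduce print speed"],
    "ben": ["reduce print speed", "alert users on overheat"],
}
# Illustrative score: prefer ideas raised independently by more members.
popularity = lambda idea: sum(idea in v for v in ideas.values())
print(brainstorm(ideas, popularity, shortlist_size=2))
```

Note that there is no "right" shortlist size; passing a larger `shortlist_size` keeps more ideas in play when the team has the resources to explore them all.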

Alternate Worlds

The alternate worlds tool is an empathetic approach to creative problem-solving. It encourages you to consider how someone in another world would approach your situation.

For example, if you’re concerned that the printers you produce overheat and catch fire, consider how a different industry would approach the problem. How would an automotive expert solve it? How would a firefighter?

Be creative as you consider and research alternate worlds. The purpose is not to nail down a solution right away but to continue the ideation process through diverging and exploring ideas.


Continue Developing Your Skills

Whether you’re an entrepreneur, marketer, or business leader, learning the ropes of design thinking can be an effective way to build your skills and foster creativity and innovation in any setting.

If you're ready to develop your design thinking and creative problem-solving skills, explore Design Thinking and Innovation, one of our online entrepreneurship and innovation courses. If you aren't sure which course is the right fit, download our free course flowchart to determine which best aligns with your goals.
