MIT xPRO

WATCH THE PROGRAM DEMO VIDEO

By submitting your information, you are agreeing to receive periodic information about online programs from MIT related to the content of this course.

Machine Learning, Modeling, and Simulation: Engineering Problem-Solving in the Age of AI

Demystify machine learning through computational engineering principles and applications in this two-course program from MIT

Submit your information to discover what makes this machine learning program different and how you'll learn with MIT xPRO.

A HANDS-ON APPROACH TO ENGINEERING PROBLEM-SOLVING

The advent of big data, cloud computing, and machine learning is revolutionizing how many professionals approach their work. These technologies offer exciting new ways for engineers to tackle real-world challenges. But engineers with little exposure to data science or modern computational methods may feel left behind.

This two-course online certificate program brings a hands-on approach to understanding the computational tools used in modern engineering problem-solving. 

Leveraging the rich experience of the faculty at the MIT Center for Computational Science and Engineering (CCSE), this program connects your science and engineering skills to the principles of machine learning and data science. With an emphasis on the application of these methods, you will put these new skills into practice in real time.


5 weeks per course

ENROLL NOW

AFTER THIS PROGRAM, YOU WILL:

Learn how to simulate complex physical processes in your work using discretization methods and numerical algorithms.

Assess and respond to cost-accuracy tradeoffs in simulation and optimization, and make decisions about how to deploy computational resources.

Understand optimization techniques and their fundamental role in machine learning.

Practice real-world forecasting and risk assessment using probabilistic methods.

Recognize the limitations of machine learning and what MIT researchers are doing to resolve them.

Learn about current research in machine learning at the MIT CCSE and how it might impact your work in the future.

COURSES IN THIS PROGRAM


Machine Learning, Modeling, and Simulation Principles

Course 1 of 2 in the Machine Learning, Modeling, and Simulation online program

View Weekly Schedule


Applying Machine Learning to Engineering and Science

Course 2 of 2 in the Machine Learning, Modeling, and Simulation online program

View Weekly Schedule

The MIT xPRO Learning Experience

We bring together an innovative pedagogy paired with world-class faculty.


LEARN BY DOING

Practice processes and methods through simulations, assessments, case studies, and tools.


LEARN FROM OTHERS

Connect with an international community of professionals interested in solving complex problems.


LEARN ON DEMAND

Access all of the content online and watch videos on the go.


REFLECT AND APPLY

Bring your new skills to your organization through examples from technical work environments and ample prompts for reflection.


DEMONSTRATE YOUR SUCCESS

Earn a Professional Certificate and Continuing Education Units (CEUs) from MIT.


LEARN FROM THE BEST

Access cutting-edge, research-based multimedia content developed by MIT professors and industry experts.

THIS PROGRAM IS FOR YOU IF...

You have a bachelor's degree in engineering (e.g., mechanical, civil, aerospace, chemical, materials, nuclear, biological, electrical, etc.) or the physical sciences.

You have proficient knowledge of college-level mathematics including differential calculus, linear algebra, and statistics.

You have some experience with MATLAB®. Programming experience is not necessary, but knowledge of MATLAB® is very useful.

Your industry is or will be impacted by machine learning.

Prepare for this program with free online resources.

JUSTIFY YOUR PROFESSIONAL DEVELOPMENT

Many companies offer professional development benefits to their employees, but sometimes starting the conversation is the hardest part of the process.

Use these talking points, statistics, and email template to advocate for your professional development through MIT xPRO's online professional certificate program, Machine Learning, Modeling, and Simulation: Engineering Problem-Solving in the Age of AI.

GET MY GUIDE

What Learners Are Saying

MIT xPRO learners are not only scientists, engineers, technicians, managers, and consultants – they are change agents. They take the initiative, push boundaries, and define the future.


Vivian D'Souza, Model Based Systems Analysis Engineer at Dana Incorporated

"The course was a fantastic blend of concepts and practical applications. Professor Youssef's content is unmatched to other similar courses that I've tried, and not to mention his enthusiasm for the topic is contagious."

Rachael Naoum

Rachael Naoum, Product Definition Engineer at Dassault Systèmes

"I loved this course. At first, I was a bit intimidated, it's been a while since I've done any hardcore math. However, the layout of this course made it super easy to follow along with all the concepts and I felt very well-guided throughout the graded assignments. I like how the lectures are broken into short videos for each topic, making it easy to replay and digest. Very well-thought-out course."

Tolga Kaya

Tolga Kaya, Professor of Electrical and Computer Engineering at Sacred Heart University

"This course allowed me to dig deeper [into] the foundations of machine learning and the underlying mechanism of the main algorithms that are used. As a MATLAB user, I particularly appreciated the utilization of MATLAB instead of straight black box python libraries."


Bill Bear, Agile Transformation Coach

"Great course for learning the concepts and methods behind machine learning! The course was prepared and delivered in a thoughtful way that provided good challenges and plenty of helpful information. This is just the kind of result that I have come to expect with MIT xPRO."


Juharasha Shaik, Senior Staff Software Engineer at Visa

"This course has helped me gain more understanding of the various algorithms that can be applied to the problems that we face during data analysis and modeling. I definitely recommend this course [for a] great understanding of the available tools/algorithms/methods to analyze various use cases and help model the solution."

MIT FACULTY


Youssef M. Marzouk

Faculty Co-Director of the MIT Center for Computational Science and Engineering (CCSE); Professor of Aeronautics and Astronautics and Director of the Aerospace Computational Design Laboratory, MIT


George Barbastathis

Professor of Mechanical Engineering, MIT


Heather Kulik

Associate Professor of Chemical Engineering, MIT


John Williams

Professor of Civil & Environmental Engineering, MIT


Themistoklis Sapsis

Associate Professor of Mechanical & Ocean Engineering, MIT


Markus Buehler

McAfee Professor of Engineering & Head, Department of Civil & Environmental Engineering, MIT


Richard Braatz

Edwin R. Gilliland Professor of Chemical Engineering, MIT


Justin Solomon

Associate Professor of Electrical Engineering and Computer Science, MIT


Laurent Demanet

Professor of Applied Mathematics & Director of MIT's Earth Resources Laboratory

NOT READY TO ENROLL?

Watch a free demo video.

Machine learning offers important new capabilities for solving today’s complex problems, but it’s not a panacea. To get beyond the hype, engineers and scientists must discern how and where machine learning tools are the best option — and where they are not.

Submit your information in the form above and watch a short demo video on the online program — what makes it different from other machine learning courses, what you'll learn, and how you will learn it.

Propel Your Career On Your Terms

Technology is accelerating at an unprecedented pace, causing disruption across all levels of business. Tomorrow’s leaders must demonstrate technical expertise as well as leadership acumen in order to maintain a technical edge over the competition while driving innovation in an ever-changing environment.

MIT uniquely understands this challenge and how to solve it with decades of experience developing technical professionals. MIT xPRO’s online learning programs leverage vetted content from world-renowned experts to make learning accessible anytime, anywhere. Designed using cutting-edge research in the neuroscience of learning, MIT xPRO programs are application focused, helping professionals build their skills on the job.

Embrace change. Enhance your skill set. Keep learning. MIT xPRO is with you each step of the way.

Have questions about the program?

MIT Press

Introduction to Computation and Programming Using Python, third edition

With application to computational modeling and understanding data.

by John V. Guttag

ISBN: 9780262542364

Pub date: January 5, 2021

Publisher: The MIT Press

664 pp., 7 x 9 in, 140

ISBN: 9780262363433

Pub date: March 2, 2021


The new edition of an introduction to the art of computational problem solving using Python.

This book introduces students with little or no prior programming experience to the art of computational problem solving using Python and various Python libraries, including numpy, matplotlib, random, pandas, and sklearn. It provides students with skills that will enable them to make productive use of computational techniques, including some of the tools and techniques of data science for using computation to model and interpret data as well as substantial material on machine learning.

The book is based on an MIT course and was developed for use not only in a conventional classroom but in a massive open online course (MOOC). It contains material suitable for a two-semester introductory computer science sequence.

This third edition has expanded the initial explanatory material, making it a gentler introduction to programming for the beginner, with more programming examples and many more “finger exercises.” A new chapter shows how to use the Pandas package for analyzing time series data. All the code has been rewritten to make it stylistically consistent with the PEP 8 standards. Although it covers such traditional topics as computational complexity and simple algorithms, the book focuses on a wide range of topics not found in most introductory texts, including information visualization, simulations to model randomness, computational techniques to understand data, and statistical techniques that inform (and misinform) as well as two related but relatively advanced topics: optimization problems and dynamic programming. The book also includes a Python 3 quick reference guide.
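One of the topics listed above, simulations to model randomness, can be sketched in a few lines of Python using the standard `random` library. This is a generic illustration of the idea, not an excerpt from the book:

```python
import random

def random_walk_distance(n_steps, rng):
    """Final distance from the origin after n_steps coin-flip steps."""
    position = 0
    for _ in range(n_steps):
        position += rng.choice((-1, 1))  # each step is +1 or -1, equally likely
    return abs(position)

# Repeat the simulation many times and average: for an n-step walk,
# the mean final distance grows roughly like sqrt(n).
rng = random.Random(0)  # seeded for reproducibility
trials = [random_walk_distance(100, rng) for _ in range(10_000)]
print(sum(trials) / len(trials))  # close to sqrt(2 * 100 / pi), about 8
```

Running a simulation like this many times and summarizing the results is the basic pattern behind the book's treatment of using computation to understand randomness and data.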

All of the code in the book and an errata sheet are available on the book's web page on the MIT Press website.

John V. Guttag is the Dugald C. Jackson Professor of Computer Science and Electrical Engineering at MIT.



What is Computational Thinking?


Computational thinking is an interrelated set of skills and practices for solving complex problems, a way to learn topics in many disciplines, and a necessity for fully participating in a computational world.

Many different terms are used when talking about computing, computer science, computational thinking, and programming. Computing encompasses the skills and practices in both computer science and computational thinking. While computer science is an individual academic discipline, computational thinking is a problem-solving approach that integrates across activities, and programming is the practice of developing a set of instructions that a computer can understand and execute, as well as debugging, organizing, and applying that code to appropriate problem-solving contexts. The skills and practices requiring computational thinking are broader, leveraging concepts and skills from computer science and applying them to other contexts, such as core academic disciplines (e.g. arts, English language arts, math, science, social studies) and everyday problem solving. For educators integrating computational thinking into their classrooms, we believe computational thinking is best understood as a series of interrelated skills and competencies.


Figure 1. The relationship between computer science (CS), computational thinking (CT), programming and computing.

In order to integrate computational thinking into K-12 teaching and learning, educators must define what students need to know and be able to do to be successful computational thinkers. Our recommended framework has three concentric circles.

  • Computational thinking skills, in the outermost circle, are the cognitive processes necessary to engage with computational tools to solve problems. These skills are the foundation for engaging in any computational problem solving and should be integrated into early learning opportunities in K-3.
  • Computational thinking practices, in the middle circle, combine multiple computational skills to solve an applied problem. Students in the older grades (4-12) may use these practices to develop artifacts such as a computer program, data visualization, or computational model.
  • Inclusive pedagogies, in the innermost circle, are strategies for engaging all learners in computing, connecting applications to students’ interests and experiences, and providing opportunities to acknowledge and combat biases and stereotypes within the computing field.


Figure 2. A framework for computational thinking integration.

What does inclusive computational thinking look like in a classroom? In the image below, we provide examples of inclusive computing pedagogies in the classroom, divided into three categories to emphasize different pedagogical approaches to inclusivity. Designing Accessible Instruction refers to strategies teachers should use to engage all learners in computing. Connecting to Students’ Interests, Homes, and Communities refers to drawing on students’ homes, communities, interests, and experiences to design learning experiences that highlight the relevance of computing in their lives. Acknowledging and Combating Inequity refers to a teacher supporting students to recognize and take a stand against the oppression of marginalized groups, in society broadly and in computing specifically. Together these pedagogical approaches promote a more inclusive computational thinking classroom environment, life-relevant learning, and opportunities to critique and counter inequities. Educators should attend to all three approaches as they plan and teach lessons, especially those related to computing.


Figure 3. Examples of inclusive pedagogies for teaching computing in the classroom, adapted from Israel et al., 2017; Kapor Center, 2021; Madkins et al., 2020; National Center for Women & Information Technology, 2021b; Paris & Alim, 2017; Ryoo, 2019; CSTeachingTips, 2021.

Micro-credentials for computational thinking

A micro-credential is a digital certificate that verifies an individual’s competence in a specific skill or set of skills. To earn a micro-credential, teachers submit evidence of student work from classroom activities, as well as documentation of lesson planning and reflection.

Because the integration of computational thinking is new to most teachers, micro-credentials can be a useful tool for professional learning and/or credentialing pathways. Digital Promise has created micro-credentials for Computational Thinking Practices. These micro-credentials are framed around practices because the degree to which students have built foundational skills cannot be assessed until they are manifested through the applied practices.

Visit Digital Promise’s micro-credential platform to find out more and start earning micro-credentials today!


Competency framework

Conceptual framework of the PILA Computational Problem Solving module

What is computational problem solving?

‘Computational problem solving’ is the iterative process of developing computational solutions to problems. Computational solutions are expressed as logical sequences of steps (i.e. algorithms), where each step is precisely defined so that it can be expressed in a form that can be executed by a computer. Much of the process of computational problem solving is thus oriented towards finding ways to use the power of computers to design new solutions or execute existing solutions more efficiently.

Using computation to solve problems requires the ability to think in a certain way, which is often referred to as ‘computational thinking’. The term originally referred to the capacity to formulate problems as a defined set of inputs (or rules) producing a defined set of outputs. Today, computational thinking has been expanded to include thinking with many levels of abstractions (e.g. reducing complexity by removing unnecessary information), simplifying problems by decomposing them into parts and identifying repeated patterns, and examining how well a solution scales across problems.
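A classic, minimal illustration of such a formulation (our example, not taken from the PILA materials) is Euclid's algorithm: a precisely defined sequence of steps that turns two input integers into one output, their greatest common divisor:

```python
def gcd(a, b):
    """Euclid's algorithm: defined inputs (two integers) produce a
    defined output (their greatest common divisor). Each step is
    precise enough for a machine to execute."""
    while b != 0:
        a, b = b, a % b  # replace the pair by the smaller equivalent problem
    return a

print(gcd(48, 36))  # 12
```

The loop also shows abstraction at work: the same repeated rule handles every input pair, regardless of size.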

Why is computational problem solving important and useful?

Computers and the technologies they enable play an increasingly central role in jobs and everyday life. Being able to use computers to solve problems is thus an important competence for students to develop in order to thrive in today’s digital world. Even people who do not plan a career in computing can benefit from developing computational problem solving skills because these skills enhance how people understand and solve a wide range of problems beyond computer science.

This skillset can be connected to multiple domains of education, and particularly to subjects like science, technology, engineering or mathematics (STEM) and the social sciences. Computing has revolutionised the practices of science, and the ability to use computational tools to carry out scientific inquiry is quickly becoming a required skillset in the modern scientific landscape. As a consequence, teachers who are tasked with preparing students for careers in these fields must understand how this competence develops and can be nurtured. At school, developing computational problem solving skills should be an interdisciplinary activity that involves creating media and other digital artefacts to design, execute, and communicate solutions, as well as to learn about the social and natural world through the exploration, development and use of computational models.

Is computational problem solving the same as knowing a programming language?

A programming language is an artificial language used to write instructions (i.e. code) that can be executed by a computer. However, writing computer code requires many skills beyond knowing the syntax of a specific programming language. Effective programmers must be able to apply the general practices and concepts involved in computational thinking and problem solving. For example, programmers have to understand the problem at hand, explore how it can be simplified, and identify how it relates to other problems they have already solved. Thus, computational problem solving is a skillset that can be employed in different human endeavours, including programming. When employed in the context of programming, computational problem solving ensures that programmers can use their knowledge of a programming language to solve problems effectively and efficiently. 

Students can develop computational problem solving skills without the use of a technical programming language (e.g. JavaScript, Python). In the PILA module, the focus is not on whether students can read or use a certain programming language, but rather on how well students can use computational problem solving skills and practices to solve problems (i.e. to “think” like a computer scientist).

How is computational problem solving assessed in PILA?

Computational problem solving is assessed in PILA by asking students to work through dynamic problems in open-ended digital environments where they have to interpret, design, or debug computer programs (i.e. sequences of code in a visual format). PILA provides ‘learning assessments’, which are assessment experiences that include resources and structured support (i.e. scaffolds) for learning. During these experiences, students iteratively develop programs using various forms of support, such as tutorials, automated feedback, hints and worked examples. The assessments are cumulative, asking students to use what they practiced in earlier tasks when completing successive, more complex tasks.

To ensure that the PILA module focuses on foundational computational problem solving skills and that the material is accessible to all secondary school students no matter their knowledge of programming languages, the module includes an assessment application, ‘Karel World’, that employs an accessible block-based visual programming language. Block-based environments prevent syntax errors while still retaining the concepts and practices that are foundational to programming. These environments work well to introduce novices to programming and help develop their computational problem solving skills, and can be used to generate a wide spectrum of problems from very easy to very hard.

What is assessed in the PILA module on computational problem solving?

Computational problem solving skills.

The module assesses the following set of complementary problem solving skills, which are distinct yet are often used together in order to create effective and efficient solutions to complex problems:

• Decompose problems

Decomposition is the act of breaking down a problem goal into a set of smaller, more manageable sub-goals that can be addressed individually. The sub-goals can be further broken down into more fine-grained sub-goals to reach the granularity necessary for solving the entire problem.
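As a small sketch of this idea (the task and function names here are invented for illustration, not PILA material), decomposing a word-counting problem into sub-goals might look like this in Python:

```python
def clean(text):
    """Sub-goal 1: normalise the raw text."""
    return text.lower().strip()

def tokenise(text):
    """Sub-goal 2: split the text into words."""
    return text.split()

def count_words(tokens):
    """Sub-goal 3: tally each word."""
    counts = {}
    for token in tokens:
        counts[token] = counts.get(token, 0) + 1
    return counts

def word_frequencies(text):
    """The full problem, solved by composing the three sub-goals."""
    return count_words(tokenise(clean(text)))

print(word_frequencies("To be or not to be"))
```

Each sub-goal can be written, understood, and fixed on its own, which is exactly what makes the whole problem manageable.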

• Recognise and address patterns

Pattern recognition refers to the ability to identify elements that repeat within a problem and can thus be solved through the same operations. Addressing repeating patterns means instructing a computer to iterate given operations until the desired result is achieved. This requires identifying the repeating instructions and defining the conditions governing the duration of the repetition.
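A minimal Python sketch of this idea (an invented example, not PILA material): the repeating instruction appears once inside a loop, and a condition governs how long the repetition continues:

```python
def sum_below(limit):
    """Add up the integers 1, 2, 3, ... below limit."""
    total, n = 0, 1
    while n < limit:   # condition governing the duration of the repetition
        total += n     # the repeating instruction, written only once
        n += 1
    return total

print(sum_below(5))  # 1 + 2 + 3 + 4 = 10
```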

• Generalise solutions

Generalisation is the thinking process that results in identifying similarities or common differences across problems to define problem categories. Generalising solutions results in producing programs that work across similar problems through the use of ‘abstractions’, such as blocks of organised, reusable sequences of instructions.
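In code, generalisation often surfaces as a single parameterised abstraction that solves a whole category of similar problems at once. A small Python sketch (names invented for illustration):

```python
# Doubling, squaring, and negating every element of a list look like three
# different problems, but they share one structure. Generalising that
# structure as a reusable abstraction solves the whole category.
def apply_to_all(operation, values):
    return [operation(v) for v in values]

print(apply_to_all(lambda v: v * 2, [1, 2, 3]))  # [2, 4, 6]
print(apply_to_all(lambda v: v * v, [1, 2, 3]))  # [1, 4, 9]
print(apply_to_all(lambda v: -v, [1, 2, 3]))     # [-1, -2, -3]
```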

• Systematically test and debug

Solving a complex computational problem is an adaptive process that follows iterative cycles of ideation, testing, debugging, and further development. Computational problem solving involves systematically evaluating the state of one’s own work, identifying when and how a given operation requires fixing, and implementing the needed corrections.
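A small Python sketch of systematic testing (an invented example, not part of PILA): each assertion checks one case, so a failure pinpoints which behaviour needs debugging before development continues:

```python
def median(values):
    """Return the middle value of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Systematic tests: one assertion per case, so a failure identifies
# exactly which operation requires fixing.
assert median([3, 1, 2]) == 2        # odd length
assert median([4, 1, 3, 2]) == 2.5   # even length: average the middle pair
assert median([5]) == 5              # single element
print("all tests passed")
```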

Programming concepts

In order to apply these skills to the programming tasks presented in the module, students have to master the below set of programming concepts. These concepts can be isolated but are more often used in concert to solve computational problems:

• Sequences

Sequences are lists of step-by-step instructions that are carried out consecutively and specify the behaviour or action that should be produced. In Karel World, for example, students learn to build a sequence of block commands to instruct a turtle to move around the world, avoiding barriers (e.g. walls) and performing certain actions (e.g. picking up or placing stones).

• Conditionals

Conditional statements allow a specific set of commands to be carried out only if certain criteria are met. For example, in Karel World, the turtle can be instructed to pick up stones ‘if stones are present’.

• Loops

To create more concise and efficient instructions, loops communicate an action or set of actions that are repeated under a certain condition. The repeat command indicates that a given action (e.g. place stone) should be repeated a given number of times (e.g. 9 times). A loop could also include a set of commands that repeat as long as a Boolean condition is true, such as ‘while stones are present’.

• Functions

Creating a function helps organise a program by abstracting longer, more complex pieces of code into one single step. By removing repetitive areas of code and assigning higher-level steps, functions make it easier to understand and reason about the various steps of the program, as well as facilitate its use by others. A simple example in Karel World is the function that instructs the turtle to ‘turn around’, which consists of turning left twice.
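These concepts can be seen working together in a toy, text-based Python sketch of a Karel-style world. Everything here (the Turtle class, the command names, the world layout) is invented for illustration; it is not PILA's actual Karel World interface:

```python
class Turtle:
    """A turtle on a line of cells, some of which hold stones."""
    def __init__(self, stones):
        self.position = 0
        self.stones = list(stones)   # stones[i] = number of stones at cell i

    def move(self):                  # one step in a sequence
        self.position += 1

    def stone_present(self):         # a condition the program can test
        return self.stones[self.position] > 0

    def pick_up_stone(self):
        self.stones[self.position] -= 1

def sweep(turtle, steps):
    """Function: a named abstraction combining a sequence, a loop, and a
    conditional. Move `steps` times, picking up a stone when one is present."""
    for _ in range(steps):           # loop: repeat a fixed number of times
        turtle.move()                # sequence: steps carried out in order
        if turtle.stone_present():   # conditional: act only if criteria are met
            turtle.pick_up_stone()

turtle = Turtle(stones=[0, 1, 0, 2, 1])
sweep(turtle, steps=4)
print(turtle.stones)  # [0, 0, 0, 1, 0]
```

Block-based environments express the same four ideas visually, which is why they can pose problems spanning the full range from very easy to very hard without syntax getting in the way.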

How is student performance evaluated in the PILA module?

Student performance in the module is evaluated through rubrics. The rubrics are structured in levels that succinctly describe how students progress in their mastery of the computational problem solving skills and associated concepts. The levels in the rubric (see Table 1) are defined by the complexity of the problems presented to the students (simple, relatively complex, or complex) and by the behaviours students are expected to exhibit while solving the problem (e.g. using functions, conducting tests). Each problem in the module is mapped to one or more skills (the rows in the rubric) and classified according to its complexity (the columns in the rubric). Solving a problem in the module and performing a set of expected programming operations thus provides evidence that supports the claims about the student presented in the rubric. The more problems at a given cell of the rubric the student solves, the more conclusive the evidence that the student has reached the level corresponding to that cell.

Please note: the rubric is updated as feedback is received from teachers on the clarity and usefulness of the descriptions.


Table 1. Rubric for computational problem solving skills

Learning management skills

The performance of students on the PILA module depends not just on their mastery of computational problem solving skills and concepts, but also on their capacity to effectively manage their work in the digital learning environment. The complex tasks included in the module invite students to monitor, adapt and reflect on their understanding and progress. The assessment will capture data on students’ ability to regulate these aspects of their own work and will communicate to teachers the extent to which their students can:

• Use resources

PILA tasks provide resources such as worked examples that students can refer to as they build their own solution. Students use resources effectively when they recognise that they have a knowledge gap or need help after repeated failures, and proceed to access a learning resource.

• Adapt to feedback

As students work through a PILA assessment, they receive different types of automated feedback (e.g. ‘not there yet’, ‘error: front is blocked’, ‘try using fewer blocks’). Students who can successfully adapt are able to perform actions that are consistent with the feedback, for example inserting a repetition block in their program after the feedback ‘try using fewer blocks’.

• Evaluate own performance

In the assessment experiences designed by experts in PILA, the final task is a complex, open challenge. Upon completion of this task, students are asked to evaluate their own performance and this self-assessment is compared with their actual performance on the task.

• Stay engaged

The assessment will also collect information on the extent to which students are engaged throughout the assessment experience. Evidence on engagement is collected through questions that are included in a survey at the end of the assessment, and through information on students’ use of time and number of attempts.  

Learn about computational problem solving-related learning trajectories:

  • Rich, K. M., Strickland, C., Binkowski, T. A., Moran, C., & Franklin, D. (2017). K-8 Learning Trajectories Derived from Research Literature: Sequence, Repetition, Conditionals. Proceedings of the 2017 ACM Conference on International Computing Education Research, 182–190.
  • Rich, K. M., Strickland, C., Binkowski, T. A., & Franklin, D. (2019). A K-8 Debugging Learning Trajectory Derived from Research Literature. Proceedings of the 50th ACM Technical Symposium on Computer Science Education, 745–751. https://doi.org/10.1145/3287324.3287396
  • Rich, K. M., Binkowski, T. A., Strickland, C., & Franklin, D. (2018). Decomposition: A K-8 Computational Thinking Learning Trajectory. Proceedings of the 2018 ACM Conference on International Computing Education Research  - ICER ’18, 124–132. https://doi.org/10.1145/3230977.3230979

Learn about the connection between computational thinking and STEM education:

  • Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2015). Defining Computational Thinking for Mathematics and Science Classrooms. Journal of Science Education and Technology, 25(1), 127–147. doi:10.1007/s10956-015-9581-5

Learn how students apply computational problem solving to Scratch:

  • Brennan, K., & Resnick, M. (2012). Using artifact-based interviews to study the development of computational thinking in interactive media design. Paper presented at annual American Educational Research Association meeting, Vancouver, BC, Canada.


© Organisation for Economic Co-operation and Development

Computational Mathematics Bachelor of Science Degree



RIT’s computational mathematics major emphasizes problem-solving using mathematical models to identify solutions in business, science, engineering, and more.

Accelerated Bachelor’s/ Master’s Available

Co-op/Internship Encouraged

STEM-OPT Visa Eligible


Overview for Computational Mathematics BS

Why Major in Computational Mathematics at RIT?

  • Learn by Doing: Gain experience through an experiential learning component of the program approved by the School of Mathematical Sciences.
  • Real-World Experience: With RIT’s cooperative education and internship program you'll earn more than a degree. You’ll gain practical hands-on experience that sets you apart.
  • Strong Career Paths: Recent computational mathematics graduates are employed at Carbon Black, iCitizen, Amazon, National Security Agency, KJT Group, Department of Defense, and Hewlett Packard.

What is Computational Mathematics?

Computational mathematics, or computational and applied mathematics, focuses on using numerical methods and algorithms to solve mathematical problems and perform mathematical computations with the aid of computers. It bridges the gap between theoretical mathematics and practical applications in various fields, including science, engineering, finance, and more.
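To make this concrete, here is a small, self-contained sketch of one of the field's workhorse techniques, Newton's method, used to approximate a square root. It is illustrative only; the function name `newton_sqrt` is our own and is not tied to any particular course or library.

```python
def newton_sqrt(a, tol=1e-12, max_iter=50):
    """Approximate sqrt(a) for a > 0 by applying Newton's method to f(x) = x**2 - a."""
    x = a if a > 1 else 1.0  # any positive starting guess converges here
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)  # Newton update x - f(x)/f'(x), simplified
        if abs(x_next - x) < tol:   # stop once successive iterates agree
            return x_next
        x = x_next
    return x

print(newton_sqrt(2.0))  # ≈ 1.4142135623730951
```

Each iteration roughly doubles the number of correct digits, which is why iterative numerical methods like this, rather than symbolic manipulation, underpin most scientific computing.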

RIT’s Computational Mathematics Major

The computational mathematics bachelor's degree combines the beauty and logic of mathematics with the application of today’s fastest and most powerful computers. At RIT, you get the solid foundation in both mathematics and computational methods that you need to be successful in the field or in graduate school.

RIT’s computational mathematics major uses computers as problem-solving tools to come up with mathematical solutions to real-world problems in engineering, operations research, economics, business, and other areas of science.

Computational Mathematics Degree Curriculum 

The skills you learn in the computational mathematics degree can be applied to everyday life, from computing security and telecommunication networking to routes for school buses and delivery companies. The degree provides computational mathematics courses such as:

  • Differential equations
  • Graph theory
  • Abstract and linear algebra
  • Mathematical modeling
  • Numerical analysis
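As a brief illustration of how two of these topics meet (a generic textbook example, not RIT course material), the forward Euler method from numerical analysis approximates the solution of a differential equation step by step:

```python
def euler(f, t0, y0, t_end, steps):
    """Forward Euler for the initial value problem y'(t) = f(t, y), y(t0) = y0."""
    h = (t_end - t0) / steps  # fixed step size
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)      # advance along the tangent line at (t, y)
        t += h
    return y

# y' = y with y(0) = 1 has the exact solution e^t.
approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000)
print(approx)  # about 2.7169, close to e ≈ 2.71828
```

Halving the step size roughly halves the error, the kind of cost-accuracy tradeoff studied in numerical analysis courses.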

Students are required to complete an experiential learning component of the program, as approved by the School of Mathematical Sciences. Students are encouraged to participate in research opportunities or cooperative education experiences. You will gain extensive computing skills through a number of high-level programming, system design, and other computer science courses.

Furthering Your Education in Computational Mathematics

Combined Accelerated Bachelor’s/Master’s Degrees

Today’s careers require advanced degrees grounded in real-world experience. RIT’s Combined Accelerated Bachelor’s/Master’s Degrees enable you to earn both a bachelor’s and a master’s degree in as little as five years of study, all while gaining the valuable hands-on experience that comes from co-ops, internships, research, study abroad, and more.

+1 MBA: Students who enroll in a qualifying undergraduate degree have the opportunity to add an MBA to their bachelor’s degree after their first year of study, depending on their program. Learn how the +1 MBA can accelerate your learning and position you for success.


Next Steps to Enroll

From deposit to housing, view the first five steps for accepted first-year students.


Careers and Experiential Learning

Typical Job Titles and Cooperative Education

What’s different about an RIT education? It’s the career experience you gain by completing cooperative education and internships with top companies in every single industry. You’ll earn more than a degree. You’ll gain real-world career experience that sets you apart. It’s exposure, early and often, to a variety of professional work environments, career paths, and industries.

Co-ops and internships take your knowledge and turn it into know-how. Science co-ops include a range of hands-on experiences, from work in labs to undergraduate research and clinical experience in health care settings. These opportunities enable you to apply your scientific, math, and health care knowledge in professional settings while you make valuable connections between classwork and real-world applications.

Although cooperative education is optional for computational mathematics students, it may be used to fulfill the experiential learning component of the program. Students have worked in a variety of settings on problem-solving teams with engineers, biologists, computer scientists, physicists, and marketing specialists.

National Labs Career Events and Recruiting

The Office of Career Services and Cooperative Education offers National Labs and federally funded Research Centers from all research areas and sponsoring agencies a variety of options to connect with and recruit students. Students connect with employer partners to gather information on their laboratories and explore co-op, internship, research, and full-time opportunities. These national labs focus on scientific discovery, clean energy development, national security, technology advancements, and more. Recruiting events include our university-wide Fall Career Fair, on-campus and virtual interviews, information sessions, 1:1 networking with lab representatives, and a National Labs Resume Book available to all labs.

Featured Profiles

Nidhi Baindur - RIT’s second recipient of the Newman Civic Fellowship.

A Beacon of Public Leadership at RIT Wins Newman Civic Fellowship

Student Nidhi Baindur was awarded a Newman Civic Fellowship for her role as a change-maker and public problem-solver at RIT.


Artificial Intelligence, Mathematics, and Designing Mini Protein Drugs

David Longo ’10 (computational math)

David Longo ’10, CEO of Ordaōs, shares his experience at RIT and explains how computational mathematics allowed him to think outside the box to come up with advanced solutions.


Computational Mathematics and a Future in Cryptography

Keegan Kresge ’22 (computational mathematics)

Keegan Kresge loves math and programming, making him the perfect fit for cryptography. After completing his degree in computational mathematics, he plans to work at the Department of Defense.

Curriculum for 2023-2024 for Computational Mathematics BS

Current Students: See Curriculum Requirements

Computational Mathematics, BS degree, typical course sequence

Please see General Education Curriculum (GE) for more information.

(WI) Refers to a writing intensive course within the major.

Please see Wellness Education Requirement for more information. Students completing bachelor's degrees are required to complete two different Wellness courses.

† Three of the program electives must be MATH or STAT courses with course numbers of at least 250, and either Graph Theory (MATH-351) or Numerical Linear Algebra (MATH-412) must be one of the three courses. Three of the program elective courses must be chosen from SWEN-261, MATH-305, ISTE-470, CMPE-570, EEEE-346, EEEE-547, (ISEE-301 or MATH-301), BIOL-235, BIOL-470, PHYS-377, ENGL-581, IGME-386, and CSCI courses numbered at least 250.

‡ Students will satisfy this requirement by taking either University Physics I (PHYS-211) and University Physics II (PHYS-212) or General & Analytical Chemistry I and Lab (CHMG-141/145) and General & Analytical Chemistry II and Lab (CHMG-142/146) or General Biology I and Lab (BIOL-101/103) and General Biology II and Lab (BIOL-102/104).

§ Students are required to complete an experiential learning component of the program: MATH-501 Experiential Learning Requirement in Mathematics, as approved by the School of Mathematics and Statistics. Students are urged to fulfill this requirement by participating in research opportunities or co-op experiences; students can also fulfill this requirement by taking MATH-500 Senior Capstone in Mathematics as a program elective. 

Combined Accelerated Bachelor's/Master's Degrees

The curriculum below outlines the typical course sequence(s) for combined accelerated degrees available with this bachelor's degree.

Computational Mathematics, BS degree/Applied and Computational Mathematics (thesis option), MS degree, typical course sequence

Computational Mathematics, BS degree/Applied and Computational Mathematics (project option), MS degree, typical course sequence

Computational Mathematics, BS degree/Computer Science, MS degree, typical course sequence

Admissions and Financial Aid

This program is STEM designated when studying on campus and full time.

First-Year Admission

A strong performance in a college preparatory program is expected. This includes:

  • 4 years of English
  • 3 years of social studies and/or history
  • 4 years of mathematics is required and must include algebra, geometry, algebra 2/trigonometry, and pre-calculus. Calculus is preferred.
  • 2-3 years of science is required and must include chemistry or physics; both are recommended.

Transfer Admission

Transfer course recommendations (without associate degree): Courses in liberal arts, physics, math, and chemistry

Appropriate associate degree programs for transfer: AS degree in liberal arts with math/science option

Learn How to Apply

Financial Aid and Scholarships

100% of all incoming first-year and transfer students receive aid.

RIT’s personalized and comprehensive financial aid program includes scholarships, grants, loans, and campus employment programs. When all these are put to work, your actual cost may be much lower than the published estimated cost of attendance. Learn more about financial aid and scholarships.

Nathan Cahill

Nathaniel Barlow

Kara Maki

Undergraduate Research Opportunities

Many students join research teams and engage in research projects starting as early as their first year. Participation in undergraduate research leads to the development of real-world skills, enhanced problem-solving techniques, and broader career opportunities. Our students have opportunities to travel to national conferences for presentations and also become contributing authors on peer-reviewed manuscripts. Explore the variety of mathematics and statistics undergraduate research projects happening across the university.

Latest News

July 20, 2023


AI has secured a footing in drug discovery. Where does it go from here?   

PharmaVoice talks to David Longo ’10 (computational mathematics), CEO of drug design company Ordaos, about artificial intelligence in drug development.

June 20, 2023


Sign-Speak joins AWS Impact Accelerator   

The Rochester Beacon features Nikolas Kelly '20 (supply chain management), co-founder and chief product officer of Sign-Speak, and Nicholas Wilkins '19 (computational mathematics), '19 MS (computer science).

May 17, 2023


RIT students awarded international fellowships and scholarships

Several RIT students from a variety of colleges and academic disciplines have been awarded prestigious international fellowships and scholarships.

Modeling a Problem-Solving Approach Through Computational Thinking for Teaching Programming


  • Open access
  • Published: 03 August 2020

Fostering computational thinking through educational robotics: a model for creative computational problem solving

  • Morgane Chevalier (ORCID: orcid.org/0000-0002-9115-1992),
  • Christian Giang,
  • Alberto Piatti &
  • Francesco Mondada

International Journal of STEM Education, volume 7, Article number: 39 (2020)


Educational robotics (ER) is increasingly used in classrooms to implement activities aimed at fostering the development of students’ computational thinking (CT) skills. Though previous works have proposed different models and frameworks to describe the underlying concepts of CT, very few have discussed how ER activities should be implemented in classrooms to effectively foster CT skill development. Particularly, there is a lack of operational frameworks, supporting teachers in the design, implementation, and assessment of ER activities aimed at CT skill development. The current study therefore presents a model that allows teachers to identify relevant CT concepts for different phases of ER activities and aims at helping them to appropriately plan instructional interventions. As an experimental validation, the proposed model was used to design and analyze an ER activity aimed at overcoming a problem that is often observed in classrooms: the trial-and-error loop, i.e., an over-investment in programming with respect to other tasks related to problem-solving.

Two groups of primary school students participated in an ER activity using the educational robot Thymio. While one group completed the task without any imposed constraints, the other was subjected to an instructional intervention developed based on the proposed model. The results suggest that (i) a non-instructional approach for educational robotics activities (i.e., unlimited access to the programming interface) promotes a trial-and-error behavior; (ii) a scheduled blocking of the programming interface fosters cognitive processes related to problem understanding, idea generation, and solution formulation; (iii) progressively adjusting the blocking of the programming interface can help students in building a well-settled strategy to approach educational robotics problems and may represent an effective way to provide scaffolding.

Conclusions

The findings of this study provide initial evidence on the need for specific instructional interventions on ER activities, illustrating how teachers could use the proposed model to design ER activities aimed at CT skill development. However, future work should investigate whether teachers can effectively take advantage of the model for their teaching activities. Moreover, other intervention hypotheses have to be explored and tested in order to demonstrate a broader validity of the model.

Introduction

Educational robotics (ER) activities are becoming increasingly popular in classrooms. Among others, ER activities have been praised for the development of important twenty-first century skills such as creativity (Eguchi, 2014 ; Negrini & Giang, 2019 ; Romero, Lepage, & Lille, 2017 ) and collaboration (Denis & Hubert, 2001 ; Giang et al., 2019 ). Due to its increasing popularity, ER is also often used to implement activities aimed at fostering CT skills of students. Such activities usually require students to practice their abilities in problem decomposition, abstraction, algorithm design, debugging, iteration, and generalization, representing six main facets of CT (Shute, Sun, & Asbell-Clarke, 2017 ). Indeed, previous works have argued that ER can be considered an appropriate tool for the development of CT skills (Bers, Flannery, Kazakoff, & Sullivan, 2014 ; Bottino & Chioccariello, 2014 ; Catlin & Woollard, 2014 ; Chalmers, 2018 ; Eguchi, 2016 ; Leonard et al., 2016 ; Miller & Nourbakhsh, 2016 ).

Nevertheless, studies discussing how to implement ER activities for CT skills development in classrooms, still appear to be scarce. The latest meta-analyses carried out on ER and CT (Hsu, Chang, & Hung, 2018 ; Jung & Won, 2018 ; Shute et al., 2017 ) have mentioned only four works between 2006 and 2018 that elaborated how ER activities should be implemented in order to foster CT skills in K-5 education (particularly for grades 3 and 4, i.e., for students of age between 8 and 10 years old). Another recent work (Ioannou & Makridou, 2018 ) has shown that there are currently only nine empirical investigations at the intersection of ER and CT in K-12. Among the recommendations for researchers that were presented in this work, the authors stated that it is important to “work on a practical framework for the development of CT through robotics.” A different study (Atmatzidou & Demetriadis, 2016 ) has pointed out that there is a lack of “explicit teacher guidance on how to organize a well-guided ER activity to promote students’ CT skills.”

In the meta-analysis of Shute et al. ( 2017 ), the authors reviewed the state of the art of existing CT models and concluded that the existing models were inconsistent, causing “problems in designing interventions to support CT learning.” They therefore synthesized the information and proposed a new CT model represented by the six main facets mentioned above (Shute et al., 2017 ). Though the authors suggested that this model may provide a framework to guide assessment and support of CT skills, the question remains whether teachers can take advantage of such models and put them into practice. In order to support teachers in the design, implementation, and assessment of activities addressing these CT components, it can be presumed that more operational frameworks are needed. Particularly, such frameworks should provide ways to identify specific levers that teachers can adjust to promote the development of CT skills of students in ER activities.

To address this issue, the present work aims at providing an operational framework for ER activities that takes into consideration two main aspects of CT, computation and creativity, embedded in the context of problem-solving situations. The objective of the present study is to obtain a framework that allows teachers to effectively design ER activities for CT development, anticipate what could occur in class during the activities, and accordingly plan specific interventions. Moreover, such a framework could potentially allow teachers to assess the activities in terms of the CT competencies developed by the students.

To verify the usefulness of the proposed framework, it has been used to design and analyze an ER activity aimed at overcoming a situation that is often observed in classrooms: the trial-and-error loop. It represents an over-investment in programming with respect to other problem-solving tasks during ER activities. In the current study, an ER activity has been developed and proposed to two groups of primary school pupils: a test group and a control group, each performing the same task under different experimental conditions. The students were recorded during the activity and the videos have been analyzed by two independent evaluators to study the effectiveness of instructional interventions designed according to the proposed framework and implemented in the experimental condition for the test groups.

In the following section, past works at the intersection of ER and CT are summarized. This is followed by the presentation of the creative computational problem-solving (CCPS) model for ER activities aimed at CT skills development. Subsequently, the research questions addressed in this study are described, as well as the methods for the experimental validation of this study. This is followed by the presentation of the experimental results and a discussion on these findings. The paper finally concludes with a summary on the possible implications and the limitations of the study.

What three meta-analyses at the crossroads of ER and CT have shown

The idea of using robots in classrooms has a long history—indeed, much time has passed since the idea was first promoted by Papert in the late 1970s (Papert, 1980 ). On the other hand, the use of ER to foster CT skills development appears to be more recent—it was in 2006 that Jeannette Wing introduced the expression “computational thinking” to the educational context (Wing, 2006 ). It is therefore not surprising that only three meta-analyses have examined the studies conducted on this topic between 2006 and 2017 (Hsu et al., 2018 ; Jung & Won, 2018 ; Shute et al., 2017 ).

In the first meta-analysis (Jung & Won, 2018), Jung and Won describe a systematic and thematic review of existing ER literature (n = 47) for K-5 education. However, only four out of the 47 analyzed articles related ER to CT, and only two of them (Bers et al., 2014; Sullivan, Bers, & Mihm, 2017) conveyed specific information about how to teach and learn CT, yet limited to K-2 education (with the Kibo robot and the TangibleK platform).

In a second meta-analysis (Hsu et al., 2018 ), Hsu et al. conducted a meta-review on CT in academic journals ( n = 120). As a main result, the authors concluded that “CT has mainly been applied to activities of program design and computer science.” This focus on programming seems to be common and has been questioned before—among others, it has been argued that CT competencies consist of more than just skills related to programming.

A similar conclusion was found in the third meta-analysis (Shute et al., 2017 ). Shute et al. conducted a review among literature on CT in K-16 education ( n = 45) and stated that “considering CT as knowing how to program may be too limiting.” Nevertheless, the relation between programming and CT seems to be interlinked. The authors suggested that “CT skills are not the same as programming skills (Ioannidou, Bennett, Repenning, Koh, & Basawapatna, 2011 ), but being able to program is one benefit of being able to think computationally (Israel, Pearson, Tapia, Wherfel, & Reese, 2015 ).”

According to the findings of these three recent meta-analyses, there is still a lack of studies focusing on how ER can be used for CT skills development. Except for two studies that specifically describe how to teach and learn CT with ER in K-2 education, operational frameworks guiding the implementation of ER activities for students, especially those aged between 8 and 10 years old (i.e., grades 3 and 4), are still scarce. It also emerges that, in the past, activities aimed at CT development have focused too heavily on programming. However, CT competences go beyond pure coding skills, and ER activities should therefore be designed accordingly.

CT development with ER is more than just programming a robot

Because robots can be programmed, ER has often been considered a relevant medium for CT skill development. However, many researchers have also argued that CT is not only programming. As illustrated by Li et al. (2020), CT should be considered “as a model of thinking that is more about thinking than computing” (p. 4). In the work of Bottino and Chioccariello (2014), the authors recall what Papert claimed about the use of robots (Papert, 1980): programming concrete objects such as robots supports students’ active learning, as robots can “provide immediate feedback and concept reification.” The programming activity is thus not the only one that is important for CT skills development. Instead, evaluating (i.e., testing and observing) can be considered equally important. In the 5c21 framework for CT competency of Romero et al. (2017), both activities therefore represent separate components: create a computer program (COMP5) and evaluation and iterative improvement (COMP6).

While the evaluation of a solution after programming appears to be natural for most ER activities, activities prior to programming often receive far less attention. Indeed, it is also relevant to explore what activities are required before programming, that is to say, before translating an algorithm into a programming language for execution by a robot. Several efforts have shown that different activities can be carried out before the programming activity (Giannakoulas & Xinogalos, 2018; Kazimoglu, Kiernan, Bacon, & MacKinnon, 2011, 2012). For instance, puzzle games such as Lightbot (Yaroslavski, 2014) can be used to convey basic concepts needed before programming (Giannakoulas & Xinogalos, 2018). In another work (Kazimoglu et al., 2012), a notable effort was made to align the task to be done by the student (with a virtual bot) with the cognitive activity it implies; in this way, the authors supported students’ CT skills before programming. The integration of such instructional initiatives prior to programming is usually aimed at introducing fundamental concepts necessary for the programming activities. Indeed, code literacy (COMP3) and technological system literacy (COMP4) have been described as two other components in the framework of Romero et al. (2017), and they are considered important prerequisites for the use of programmable objects.

But even if students meet these prerequisites, there are other important processes that they should go through prior to the creation of executable code. The two following components in the framework of Romero et al. are related to these processes: problem identification (COMP1) and organize and model the situation (COMP2). However, it appears that in the design of ER activities, these aspects are often not given enough attention. In a classroom environment, robots and computers often attract students’ attention to such an extent that the students tend to dive into a simple trial-and-error approach instead of developing proper solution strategies. Due to the prompt feedback of the machine, students receive an immediate validation of their strategy, reinforcing their perception of controllability (Viau, 2009), but this also causes them to easily enter a trial-and-error loop (Shute et al., 2017). In many different learning situations, however, researchers have shown that a pure trial-and-error approach may limit skill development (Antle, 2013; Sadik, Leftwich, & Nadiruzzaman, 2017; Tsai, Hsu, & Tsai, 2012). In the context of inquiry-based science learning, Bumbacher et al. have shown that students who were instructed to follow a Predict-Observe-Explain (POE) strategy, forcing them to take breaks between actions, gained better conceptual understanding than students who used the same manipulative environment without any instructions (Bumbacher, Salehi, Wieman, & Blikstein, 2018). The strategic use of pauses has also been investigated by Perez et al. (2017) in the context of students working with a virtual lab representing a DC circuit construction kit. The authors argued that strategic pauses can represent opportunities for reflection and planning and are highly associated with productive learning. A similar approach has been discussed by Dillenbourg (2013), who introduced a paper token to a tangible platform to prevent students from running simulations without reflection. Only when students gave a satisfactory answer to the teacher about the predicted behavior of the platform were they given the paper token to execute the simulations.
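To make the idea of a scheduled interface lock concrete, here is a hypothetical sketch (our own illustration, not the implementation used in this study or in Dillenbourg's platform) of a programming interface that refuses to re-run code until a reflection period has elapsed:

```python
import time

class GatedProgrammingInterface:
    """Toy model of a 'blocked' programming interface: after each run,
    execution is locked for a fixed reflection period."""

    def __init__(self, lock_seconds):
        self.lock_seconds = lock_seconds
        self._last_run = None  # timestamp of the last execution

    def can_run(self, now=None):
        now = time.monotonic() if now is None else now
        return self._last_run is None or (now - self._last_run) >= self.lock_seconds

    def run(self, program, now=None):
        now = time.monotonic() if now is None else now
        if not self.can_run(now):
            raise RuntimeError("Interface locked: reflect on your prediction first.")
        self._last_run = now
        return program()  # in a real ERS this would send code to the robot
```

A first call to `run` succeeds immediately, but a second call within the lock period raises an error, nudging students out of the trial-and-error loop. The `now` parameter exists only to make the sketch testable without waiting on the clock.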

However, to this day such instructional interventions have not been applied to activities involving ER. As a matter of fact, many ER activities are conducted without any specific instructional guidance.

As elaborated before, the development of CT skills with ER should involve students in different phases that occur prior to as well as after the creation of programming code. While the evaluation of a solution after programming usually comes naturally, the phases required prior to programming tend to be less emphasized. These preceding phases, however, incorporate processes related to many important facets of CT and should therefore be equally addressed. The following section therefore introduces a model for ER activities that allows teachers to identify all relevant phases related to different CT skills. Based on this model, teachers may accordingly plan instructional interventions to foster the development of such CT skills.

The CCPS model

Educational Robotic Systems for the Development of CT Skills

Educational robotics activities are typically based on three main components: one or more educational robots, an interaction interface allowing the user to communicate with the robot, and one or more tasks to be solved on a playground (Fig. 1).

Figure 1. Example of an educational robotics (ER) activity. The figure exemplifies a typical situation encountered in ER activities: one or more problem solvers work on a playground and confront a problem situation involving an Educational Robotics System (ERS), consisting of one or more robots, an interaction interface, and one or more tasks to be solved.

This set of components is fundamental to any kind of ER activity and has previously been referred to as an Educational Robotics System (ERS) by Giang, Piatti, and Mondada (2019). When an ERS is used for the development of CT skills, the given tasks are often formulated as open-ended problems that need to be solved. These problems are usually statements requiring the modification of a given perceptual reality (virtual or concrete) through a creative act in order to satisfy a set of conditions. In most cases, a playground relates to the environment (offering the range of possibilities) in which the problem is embedded. The modification can consist of the creation of a new entity, the realization of an event inside the playground, or the acquisition of new knowledge about the playground itself. A modification that satisfies the conditions of the problem is called a solution. The problem solver is a human (or a group of humans) who is able to understand and interpret the given problem, to create ideas for its resolution, to use the interaction interface to transform these ideas into a behavior executed by the robot, and to evaluate the solution represented by the behavior of the robot. The language of the problem solver and the language of the robot are usually different. While the (human) problem solver’s language consists of natural languages (both oral and written), graphical or iconic representations, and other perceptual semiotic registers, the (artificial) language of the robot consists of formal languages (i.e., machine languages, binary code). Consequently, the problem should be stated in the problem solver’s language, while the solution has to be implemented in the robot’s language. For the problem solver, the robot’s language is a sort of foreign language that he/she should know in order to communicate with the robot.
On the other hand, the robot usually does not communicate directly with the problem solver but generates a modification of the playground that the problem solver can perceive through his/her senses. To facilitate the interaction, the robot embeds a sort of translator between the robot’s language and the problem solver’s language. Indeed, graphical or text programming languages allow part of the language of the robot to be shown and written in iconic representations that can be directly perceived by the problem solver.

Combining creative problem solving and computational problem solving

It has often been claimed that CT is a competence related to the process of problem solving contextualized in “computational” situations (Barr & Stephenson, 2011; Dierbach, 2012; Haseski, Ilic, & Tugtekin, 2018; Perkovic, Settle, Hwang, & Jones, 2010; Weintrop et al., 2016). These processes involve, in particular, the understanding of a given problem situation, the design of a solution, and its implementation in executable code. At the same time, some researchers have pointed out that the development of CT competencies also involves a certain creative act (Brennan & Resnick, 2012; DeSchryver & Yadav, 2015; Repenning et al., 2015; Romero et al., 2017; Shute et al., 2017). This perspective refers to creative problem solving, which involves “a series of distinct mental operations such as collecting information, defining problems, generating ideas, developing solutions, and taking action” (Puccio, 1999). In a different context, creative problem solving has been described as a cooperative iterative process involving different persons with different mindsets and thinking modes and consisting of five phases: problem definition (detective and explorer), idea generation (artist), idea synthesis (engineer), idea evaluation (judge), and solution implementation (producer) (Lumsdaine & Lumsdaine, 1994).

The creative computational problem solving (CCPS) model presented in the current study represents a hybrid model combining these two perspectives and adapting them to the context of ERS. Similar to the model of Lumsdaine and Lumsdaine (1994), the proposed model involves the definition of different phases and iterations. However, while Lumsdaine and Lumsdaine’s model describes the interactions between different human actors, each taking a specific role in the problem-solving process, the CCPS model considers the fact that different human actors interact with one or more artificial actors, i.e., the robot(s), to implement the solution. The CCPS model is a structure of five phases, in which transitions are possible at any moment from any phase to any other (Fig. 2).

figure 2

Phases and transitions of the CCPS model. The graph illustrates the six different phases (green dots) that students pass through when working on ER activities and all the possible transitions between them (gray arrows). The theoretically most efficient problem-solving cycle is highlighted in black. The cycle usually starts with a given problem situation that first needs to be understood by the problem solver (green arrow)

The first three phases of the model can be related to the initial phases of the creative problem-solving model presented in the work of Puccio (1999): understanding the problem, generating ideas, and planning for action (i.e., solution finding, acceptance-finding). While the first two phases (understanding the problem and generating ideas) are very similar to Puccio’s model, the third phase in this model (formulating the robot’s behavior) is influenced by the fact that the action has to be performed by an artificial agent (i.e., a robot). The last two phases of this model, on the other hand, can be related to computational problem solving: the fourth phase (programming the behavior) describes the creation of executable code for the robot, and the fifth phase (evaluating the solution) consists of the evaluation of the execution of the code (i.e., the robot’s behavior).

The phases of the CCPS model

Based on the conceptual framework of ERS (Giang, Piatti, & Mondada, 2019), the CCPS model describes the different phases that students should go through when an ERS is used for CT skills development (Fig. 2).

It is a structure of five main phases that, in the theoretically most effective case, are completed one after the other and then repeated iteratively.

Understanding the problem (USTD)

In this phase, the problem solver identifies the given problem (see COMP1 in the 5c21 framework of Romero et al. (2017)) through abstraction and decomposition (Shute et al., 2017) in order to identify the desired modification of the playground. Here, abstraction is considered as the process of “identifying and extracting relevant information to define main ideas” (Hsu et al., 2018). This phase takes as input the given problem situation, usually expressed in the language of the problem solver (e.g., natural language, graphical representations). The completion of this phase is considered successful if the problem solver identifies an unambiguous transformation of the playground that has to be performed by the robot. The output of the phase is the description of the required transformation of the playground.

Generating ideas (IDEA)

The problem solver sketches one or more behavior ideas for the robot that could satisfy the conditions given in the problem, i.e., modify the playground in the desired way. This phase requires a creative act, i.e., “going from intention to realization” (Duchamp, 1967). The input to this phase is the description of the transformation of the playground that has to be performed by the robot. The phase is completed successfully when one or more behaviors are sketched that have the potential of inducing the desired transformation of the playground. The sketches of the different behaviors are the output of this phase.

Formulating the behavior (FORM)

A behavior idea is transformed into a formulation of the robot’s behavior while considering the physical constraints of the playground and by mobilizing knowledge about the characteristics of the robot (see COMP4 in Romero et al. (2017)). To do so, the problem solver has to organize and model the situation efficiently (as in COMP2 in Romero et al. (2017)). The input to this phase is the sketch of a behavior, selected among those produced in the preceding phase. The phase is performed successfully when the behavior sketch is transformed into a complete formulation of the robot’s behavior. The behavior formulation is expressed as algorithms in the problem solver’s language, describing “logical and ordered instructions for rendering a solution to the problem” (Shute et al., 2017). This formulation is the output of the phase.

Programming the behavior (PROG)

In this phase, the problem solver creates a program (see COMP5 in Romero et al. (2017)) to transform the behavior formulation into a behavior expressed by the robot. Prerequisites for succeeding in this phase are the necessary computer science literacy and knowledge of the specific programming language of the robot or of its interface (see COMP3 in Romero et al. (2017)). Moreover, this phase allows for debugging (Shute et al., 2017), enabling the problem solver to revise previous implementations. The input to this phase is the robot’s behavior expressed in the problem solver’s language. The phase is performed successfully when the formulated behavior is completely expressed in the robot’s language and executed. The output of this phase is the programmed behavior in the robot’s language and its execution, so that, once the robot is introduced into the playground, it results in a transformation of the playground.

Evaluating the solution (EVAL)

While the robot modifies the playground according to the programmed behavior, the problem solver observes the realized modification and evaluates its correspondence to the conditions of the problem and its adequacy in general. As described in Lumsdaine and Lumsdaine (1994), the problem solver acts as a “judge” in this phase. The input to this phase is the transformation of the playground observed by the problem solver. The observed transformation is compared with the conditions expressed in the given problem. Then, the problem solver has to decide whether the programmed behavior can be considered an appropriate solution to the problem, or whether it has to be refined, corrected, or completely redefined. This phase is therefore crucial for identifying the next step of the iteration (see Shute et al. (2017) and COMP2 in Romero et al. (2017)). As a result, the problem-solving process can either be terminated or continued through a feedback transition to one of the other phases.

Finally, an additional sixth phase, called off-task behavior (OFFT), was included to account for situations where the problem solver is not involved in the problem-solving process. This phase was not considered a priori; however, the experiments with students showed that off-task behavior is part of the reality in classrooms and should therefore be included in the model. Moreover, in reality, transitions between phases do not necessarily occur in the order presented. Therefore, the model also accounts for transitions between non-adjacent phases as well as for transitions into the off-task behavior phase (light arrows in Fig. 2). In order to facilitate the presentation of these transitions, the matrix representation of the model is introduced hereafter (Fig. 3).

figure 3

Matrix representation of the CCPS model. The figure depicts all phases of the CCPS model and the transitions between them using a matrix representation. The rows i of the matrix describe the phases from which a transition is outgoing, while the columns j describe the phases towards which the transition is made (e.g., ff23 describes the transition from the phase USTD towards the phase IDEA). In this representation, feedforward transitions (i.e., transitions from a phase to one of the subsequent ones) are in the upper triangular matrix (green). Feedback transitions (i.e., transitions from a phase to one of the preceding ones) are in the lower triangular matrix (red). Self-transitions (i.e., remaining in a phase) are not considered in this representation (dashes). The theoretically most efficient problem-solving cycle is highlighted in yellow (ff23–ff34–ff45–ff56–fb62)

In this representation, a feedforward (ff) is a transition from a phase to any of the subsequent ones and a feedback (fb) is a transition from a phase to any of the preceding ones. Consequently, ffij denotes the feedforward from phase i to phase j, where i < j, and fbij denotes the feedback from phase i to phase j, where j < i. With six states, there are in theory 15 possible feedforward transitions (upper triangular matrix) and 15 possible feedback transitions (lower triangular matrix), as represented in Fig. 3. Although some of the transitions seem meaningless, they might nevertheless be observed in reality and are therefore kept in the model. For instance, the transition ff36 seems impossible, since a solution can only be evaluated once it has been programmed. However, especially in the context of group work, a generated idea might be immediately followed by an evaluation phase, because another student carried out the programming phase in the meantime. Finally, it can be assumed that feedback transitions usually respond to instabilities in previous phases. In this model, special emphasis is therefore placed on the cycle ff23–ff34–ff45–ff56–fb62 (highlighted in yellow in Fig. 3), which, as presented before, corresponds to the theoretically most efficient cycle within the CCPS model.
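The indexing convention can be made concrete with a short sketch (the phase-to-index mapping follows Fig. 3: OFFT = 1, USTD = 2, IDEA = 3, FORM = 4, PROG = 5, EVAL = 6; the helper names are illustrative, not from the study):

```python
from itertools import combinations

# Phase indices as used in the matrix representation (Fig. 3)
PHASES = {1: "OFFT", 2: "USTD", 3: "IDEA", 4: "FORM", 5: "PROG", 6: "EVAL"}

def classify(i, j):
    """Label a transition from phase i to phase j as feedforward (i < j)
    or feedback (j < i); self-transitions are not part of the model."""
    if i == j:
        raise ValueError("self-transitions are not considered")
    return f"ff{i}{j}" if i < j else f"fb{i}{j}"

# With six phases there are C(6, 2) = 15 feedforward and 15 feedback transitions.
ffs = [classify(i, j) for i, j in combinations(sorted(PHASES), 2)]
fbs = [classify(j, i) for i, j in combinations(sorted(PHASES), 2)]
assert len(ffs) == len(fbs) == 15

# The theoretically most efficient cycle: USTD -> IDEA -> FORM -> PROG -> EVAL -> USTD
cycle = [classify(i, j) for i, j in [(2, 3), (3, 4), (4, 5), (5, 6), (6, 2)]]
print(cycle)  # ['ff23', 'ff34', 'ff45', 'ff56', 'fb62']
```

Note that the closing transition of the cycle, from EVAL (6) back to USTD (2), comes out as a feedback (fb62), consistent with the definition above.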

Research question

As presented before, one common situation encountered in ER activities is the trial-and-error loop, which in the CCPS model corresponds to an over-emphasis on feedforward and feedback transitions between the PROG and EVAL phases. However, this condition might not be the most favorable for CT skill development, since the remaining phases (USTD, IDEA, and FORM) of the CCPS model are neglected. Consequently, this leads to the following research question: In a problem-solving activity involving educational robotics, how can the activation of all the CT processes related to the CCPS model be encouraged? In order to foster transitions towards all phases, the current study suggests expressly generating a USTD-IDEA-FORM loop upstream, so that students do not enter the PROG-EVAL loop without being able to leave it. To do so, a temporary blocking of the PROG phase (i.e., blocking access to the programming interface) is proposed as an instructional intervention. Based on the findings of similar approaches implemented for inquiry-based learning (Bumbacher et al., 2018; Dillenbourg, 2013; Perez et al., 2017), the main idea is to introduce strategic pauses for the students to reinforce the three phases preceding the PROG phase. However, creating one loop to replace another is not a sustainable solution. With time, it is therefore important to relax the instructional intervention into a “partial blocking,” so that students can progressively advance in the problem-solving process. At a later stage, students should therefore be allowed to use the programming interface (i.e., enter the PROG phase); however, they should not be allowed to run their code on the robot, to prevent them from entering the trial-and-error loop between PROG and EVAL. Based on these instructional interventions, the current study aims to address the following research sub-questions:

Does a non-instructional approach for ER activities (i.e., unlimited access to the programming interface) promote a trial-and-error approach?

Does a blocking of the programming interface foster cognitive processes related to problem understanding, idea generation, and solution formulation?

Does a partial blocking (i.e., the possibility to use the programming interface without executing the code on the robot) help students to gradually advance in the problem-solving process?

The resulting operational hypotheses are as follows:

Compared to the control group, the students subject to the blocking of the programming interface (total then partial) will activate all the CT processes of the CCPS model.

Compared to the test group, the students not subject to the blocking of the programming interface will mostly activate the PROG and EVAL phases of the CCPS model.

To test these hypotheses, an experiment using a test group and a control group was set up, with test groups that were subject to blocking of the programming interface and control groups that had free access to it.

The proposed CCPS model was evaluated in a research study with 29 primary school students (for details see the “Participants” subsection). In groups of 2–3 students, the participants were asked to solve the robot lawnmower mission with the Thymio robot. In this activity, the robot has to be programmed in such a way that it drives autonomously around a lawn area, covering as much of the area as possible. Based on the CCPS model and the presented instructional interventions, two different experimental conditions were implemented for the activity, and the students were randomly and equally assigned to each condition. The activities of all groups were recorded on video, and the recordings were subsequently analyzed by two independent evaluators.

The robot lawnmower mission

The playground of the robot lawnmower mission consists of a fenced lawn area of 45 cm × 45 cm (Fig. 4).

figure 4

Playground of the robot lawnmower mission with Thymio. The playground consists of a wooden fence surrounding an area (45 × 45 cm) representing the lawn. One of the nine squares represents the garage, the starting position of the mission (left). The task is to program the Thymio robot so that it passes over all eight lawn squares while avoiding any collisions with the fence (right)

The fence is constructed from wood, and the lawn area is represented by eight squares of equal size with an imprinted lawn pattern. A ninth square is imprinted with a brick pattern and placed at the bottom right corner of the area, representing a garage, i.e., the starting point of the Thymio lawnmower robot. In this activity, the students have to program a lawnmower behavior, which autonomously drives the robot out of its garage and, in the best case, makes it pass over all eight lawn squares while avoiding any collision with the fence. The interest of using the Thymio robot for this mission is twofold: on the one hand, this robot has many sensors and actuators (Mondada et al., 2017; Riedo, Chevalier, Magnenat, & Mondada, 2013; Riedo, Rétornaz, Bergeron, Nyffeler, & Mondada, 2012). On the other hand, among the different programming languages that can be used with Thymio is the graphical language VPL (Shin, Siegwart, & Magnenat, 2014). The VPL platform (Fig. 5) represents parts of the robot’s language by graphical icons that can be directly interpreted by human problem solvers, particularly facilitating transitions from the FORM to the PROG phase. Students can implement their solutions by simple drag-and-drop actions, without having to learn complex syntax beforehand. However, in contrast to sequential programming languages, the robot cannot simply be instructed to move a certain distance in a given direction. Instead, in the event-based programming language VPL, students have to reflect on how to use the robot’s sensors and actuators to generate a desired behavior. The openness and uncertainty of the task thus require the students to leverage many competences related to computational thinking.

figure 5

Illustration of proximity between the VPL programming interface and the Thymio robot. The figure illustrates the iconic representation of programming commands in the VPL programming platform (left). The icons were designed to be as close as possible to the characteristics of the Thymio robot (right)

Participants

A total of 29 primary school students (13 girls and 16 boys between 9 and 10 years old) participated in an experimental study with the purpose of evaluating the proposed CCPS model. Prior to the study, all students had been introduced to the Thymio robot and the VPL programming interface through several school lessons (1 h per week for 12 weeks). The participation of the students in this study was approved by their guardians (parents) as well as by class and school leaders (teachers and principal). A statement on ethics approval and consent was issued by the Coordination Committee for Educational Research in the Canton of Vaud (Switzerland).

Experimental procedure

At the beginning of the experimental session, all students were randomly assigned to groups of two or three. Each group of students was then randomly assigned to one of the two experimental conditions (test or control). The experimental procedures for the groups in each condition were different:

Control groups

The activity for the control groups started with a short introduction, where the goal and the rules of the mission were explained by one of the experimenters. The students were then given 40 min to implement their lawnmower robot. During the whole time period, they were allowed to use everything that was provided to them: the playground, the Thymio robot, and the VPL programming interface. No additional constraints were imposed.

Test groups

The experimental procedure for the test groups differed in the structure of the activity. Following the introductory speech, the activity started with 10 min of blocking of the programming interface. The students were given access to the playground and the Thymio robot, but they were not allowed to use the VPL programming platform. After this phase, the blocking was released, and the students were allowed to use everything for 10 min. This was followed by a partial blocking phase of 10 min, where the students had access to everything including the VPL platform, but they were not allowed to execute any code on the robot. For the last 10 min, the blocking was released again, and the students were allowed to use everything that was provided to them.

The study was conducted in two consecutive sessions of 45 min, one for each experimental condition. The test group (7 girls and 8 boys) started the mission first, while the control group (6 girls and 8 boys) attended a guided museum exhibition. After the completion of the first session, both groups switched. During each session, the five groups of the same experimental condition worked on the Thymio lawnmower mission simultaneously. Each group was provided with a playground, a Thymio robot, and a computer with the VPL platform installed. The sessions were supervised by two experimenters who provided technical support and addressed the students’ questions regarding the task assignment. However, the experimenters did not provide any support regarding the solution of the lawnmower mission. Each group, as well as their interactions with the VPL platform and the playground, was recorded on video for later analysis.

Video analysis

Based on a socio-constructivist approach, this study relied upon in situ observations to capture the interactions of the students with each other as well as with the different cognitive artifacts (the robot, the interface, and the playground). The videos recorded from the experimental sessions were analyzed in several steps. Prior to the individual analyses, the two evaluators met to discuss and agree on appropriate observables (visual and verbal) indicating transitions towards the different phases of the CCPS model. To this end, both evaluators first analyzed various prerecorded ER activities together. These videos came from different kinds of ER activities and allowed the evaluators to establish criterion standards (Sharpe & Koperwas, 2003) that are not limited to one specific ER activity. The whole procedure was aimed at streamlining the way both evaluators would perform their individual analyses.

Subsequently, both evaluators performed the behavioral analysis independently, sequentially mapping the behaviors of the students during the robot lawnmower activity to the different phases of the CCPS model. The mappings were made under the assumption that a student can only be in one of the six phases of the CCPS model at a time. Each evaluator performed the mapping based on their interpretation of the behavior of the students, while considering the criterion standards established beforehand. Transitions to the first three phases of the CCPS model were mainly mapped based on the students’ verbalizations, such as “How can we do that?” (USTD), “Ah, I have an idea!” (IDEA), or “If this sensor detects the wall, the robot turns left” (FORM). In contrast, transitions to the last two phases were mostly identified through visual observations (e.g., a student starting to use the computer (PROG) or a student watching the Thymio robot after executing the program (EVAL)). Students who were clearly not involved in the activity were mapped to the off-task behavior phase (OFFT).
Two state graphs were created for each student (one by each evaluator) using software dedicated to the creation of activity chronicles, such as Actograph (SymAlgo Technologies, Paris, France), and a numerical computing tool, such as Matlab (MathWorks, Natick, Massachusetts, USA). Following this step, both evaluators compared their state graphs against each other and discussed any major discrepancies between their evaluations. Major discrepancies were defined as segments in the state graphs in which the two evaluators did not agree on the same behavior for more than 1 min. The corresponding video scene was reviewed by both evaluators together to reach a mutual decision, and the state graphs were modified accordingly. Subsequently, the continuous state graphs of both evaluators were discretized into equally spaced time segments of one second. Finally, Cohen’s Kappa was computed for the discretized pair of state graphs of each student in order to validate the inter-rater reliability of the performed video analyses. To this end, confusion matrices were created for the observations made by both evaluators. Agreement was quantified by the number of times both evaluators mapped the same phase of the CCPS model to a student’s behavior. The Kappa values were then calculated for the observations made for each student, using the formula presented in Bakeman and Gottman (1997) and taking into account the proportion of agreement observed and the proportion expected by chance. The range of the values for Cohen’s Kappa was 0.59 < k < 0.84 (Fig. 6), which according to the literature (Landis & Koch, 1977) can be interpreted as substantial agreement between the two evaluators. Finally, the state graphs created by the first evaluator were used for the further analyses.
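A minimal sketch of this agreement computation, assuming the two discretized state graphs are given as equally long lists of per-second phase labels (function and variable names are illustrative, not taken from the study, which used Actograph and Matlab):

```python
from collections import Counter

def cohens_kappa(seq_a, seq_b):
    """Cohen's Kappa for two equally long sequences of phase labels,
    one label per one-second time segment."""
    assert len(seq_a) == len(seq_b) and seq_a
    n = len(seq_a)
    # Proportion of observed agreement: segments mapped to the same phase.
    p_o = sum(a == b for a, b in zip(seq_a, seq_b)) / n
    # Proportion of agreement expected by chance, from the marginal
    # phase frequencies of each evaluator.
    count_a, count_b = Counter(seq_a), Counter(seq_b)
    p_e = sum(count_a[p] * count_b[p] for p in count_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical six-second excerpt from two evaluators' codings
k = cohens_kappa(
    ["USTD", "USTD", "IDEA", "PROG", "PROG", "EVAL"],
    ["USTD", "IDEA", "IDEA", "PROG", "EVAL", "EVAL"],
)
print(round(k, 3))  # 0.571
```

The correction for chance agreement is why Kappa is preferred over raw percentage agreement when phases are unevenly distributed, as PROG and EVAL were here.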
Based on these state graphs, the time spent in each state of the CCPS model was computed for each student, as well as the total number of transitions made between different phases. Overall, 2162 phase transitions were mapped for the total of 400 min of recordings: 1072 transitions for the test groups and 1090 for the control groups.
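Under the same assumption of one phase label per one-second segment, these per-student summaries could be computed along the following lines (a sketch with illustrative names, not the study's actual tooling):

```python
from collections import Counter

def time_per_phase(seq):
    """Seconds spent in each phase (one label per one-second segment)."""
    return Counter(seq)

def count_transitions(seq):
    """Number of transitions between *different* phases;
    remaining in a phase (self-transition) is not counted."""
    return sum(a != b for a, b in zip(seq, seq[1:]))

# Hypothetical sequence: 30 s USTD, 20 s IDEA, 60 s PROG, 10 s EVAL, 40 s PROG
seq = ["USTD"] * 30 + ["IDEA"] * 20 + ["PROG"] * 60 + ["EVAL"] * 10 + ["PROG"] * 40
print(time_per_phase(seq)["PROG"])  # 100 (seconds)
print(count_transitions(seq))       # 4
```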

figure 6

Cohen’s Kappa values for two independent evaluators. Dots indicate Cohen’s Kappa values calculated for each student based on the independent observations of two evaluators. The dark horizontal line indicates the mean value, dark gray areas one standard deviation, and light gray areas the 95% confidence intervals

Transition matrices were created for each student, illustrating the changes from one state of the CCPS model to another during each quarter of the activity (first, second, third, and last 10 min). All transitions made by the students in each condition (test and control) were then summed up to analyze the overall dynamics of the two experimental groups (Fig. 7). Moreover, the total time spent in each phase was analyzed for both groups during each quarter of the activity.
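A sketch of how such quarter-wise count matrices could be derived from per-second state sequences (illustrative code, assuming the six-phase ordering of Fig. 3; the study itself used Actograph and Matlab):

```python
import numpy as np

PHASES = ["OFFT", "USTD", "IDEA", "FORM", "PROG", "EVAL"]
IDX = {p: k for k, p in enumerate(PHASES)}

def transition_matrix(seq):
    """6x6 count matrix T, where T[i, j] is the number of transitions
    from phase i to phase j (self-transitions excluded)."""
    T = np.zeros((6, 6), dtype=int)
    for a, b in zip(seq, seq[1:]):
        if a != b:
            T[IDX[a], IDX[b]] += 1
    return T

def quarter_matrices(seq):
    """One transition matrix per quarter of the activity."""
    q = len(seq) // 4
    return [transition_matrix(seq[k * q:(k + 1) * q]) for k in range(4)]
```

Summing the per-student matrices of a condition, quarter by quarter, would yield group-level matrices analogous to those shown in Fig. 7.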

figure 7

Transition matrices and total phase times. The top rows show the transition matrices for the test (first row) and control groups (second row) for the first, second, third, and last 10 min of the activity. The entries in the matrices denote the total number of transitions made between the different phases for each group. Transitions with higher occurrences are highlighted with darker colors. The last row shows the total time spent in each phase by both groups for the four quarters of the activity. Colored dots show data points for each student of the test (blue) and control (red) groups. Dark horizontal lines indicate the mean values, dark-colored areas one standard deviation, and light-colored areas the 95% confidence intervals

The transition matrices for the control groups showed similar dynamics for all quarters of the activity. Transitions were found on the upper and lower triangular part of the matrices, highlighting the occurrences of both feedforward and feedback transitions. Most occurrences were found for transitions between PROG and EVAL phases. In contrast, transitions from and towards USTD, IDEA, FORM, and OFFT phases appeared to be less frequent. When looking at the total time spent in each phase, a similar trend was observed: especially in the first three quarters of the activity, students of the control group predominantly spent their time in PROG and EVAL phases (on average 22 out of 30 min), while USTD, IDEA, FORM, and OFFT phases were observed less frequently (8 out of 30 min). In the last quarter of the activity, PROG and EVAL remained more prevalent compared to USTD, IDEA, and FORM phases; however, a similar amount of time was now also spent on off-task behavior (OFFT, 3 out of 10 min).

Also in the test groups, both feedforward and feedback transitions were observed; however, the dynamics varied during the different quarters of the activity. The behavior in the second and last quarter (no blocking conditions) was very similar to the behavior of the control groups: the great majority of transitions was observed between the PROG and EVAL phases, while transitions to and from other phases were comparatively rarer. However, the transition matrices for the first and third quarter of the activity revealed remarkable differences. Due to the blocking of the VPL programming interface in the first quarter, students were not able to enter the PROG or EVAL phases and were thus forced to shift their attention towards the remaining phases. For the third quarter, on the other hand, a more even distribution among all phases was found. Since the partial blocking condition allowed the students to work with the VPL platform (without the possibility to send the program to the robot), transitions to the PROG phase could be observed. Moreover, since the students had already programmed their robot during the previous quarter of the activity, they were also able to make transitions to the EVAL phase.

Interestingly, there was a high number of transitions between the PROG and EVAL phases, indicating a rigorous debugging of the students’ previous implementations, since new implementations could not be executed and tested (i.e., trial-and-error was not possible). The blocking conditions also influenced the total time the students spent in each of the phases. Compared to the control group, there was a more even distribution among the phases for the first three quarters of the activity: on average, students spent 13 out of 30 min in the PROG and EVAL phases and 12 out of 30 min in the USTD, IDEA, and FORM phases. During the first three quarters, the times spent on off-task behavior (OFFT) by the test groups were very similar to those of the control groups. However, in contrast to the control groups, off-task behavior remained at a similar level in the last quarter of the activity.

In order to further investigate the effect of the initial blocking condition, transition graphs were generated for the first 10 min of the activity (Fig. 8 ).

figure 8

Initial transition graphs for test and control groups. The figure shows the transition graphs for both groups for the first 10 min of the activity. The green arrows indicate the probabilities for the first transitions of the activity. The size of the dots, representing the six phases of the CCPS model, is proportional to the amount of time spent by the groups in each phase. The gray arrows represent the transition probabilities between phases. Higher transition probabilities are represented by thicker and darker lines. The value for the most probable outgoing transition of each phase is given next to the corresponding arrow

These graphs depict the transition probability from one phase to another for both groups as well as the total time spent in each phase. Moreover, the initial transition for each student was determined, i.e., the first phase they entered when the activity started. In the test groups, all fifteen students started the activity with the USTD phase, corresponding to the start of the theoretically most efficient cycle of the CCPS model (see Fig. 2 ). In the control groups, this behavior was not observed for all students. Although the majority started with the USTD phase, three of the fourteen students entered the activity by directly going to the PROG phase.

Moreover, when comparing the transition probabilities between the phases, remarkable differences were found between the two groups: the results showed that the transition graph for the test groups matched well with the first part of the theoretically most efficient cycle in the CCPS model. For these groups, the blocking condition hindered any transition from and towards the PROG and EVAL phases. Starting from the USTD phase, students would therefore most likely continue with the IDEA, then the FORM phase, and then eventually return to USTD for another iteration. Although other feedforward and feedback transitions were observed, they appeared to be less likely. If students showed off-task behavior, they would most likely return to the activity through the USTD phase.

For the test groups, the total time spent in each of the four accessible phases was evenly distributed. For the control groups, on the other hand, no blocking conditions were imposed. From the transition graph, it can be seen that the activities of the control groups were more centered around the PROG and EVAL phases. Once the students entered the PROG phase, the most likely transition was towards the EVAL phase and vice versa. Although transitions towards other phases were observed, the probability of leaving this PROG-EVAL loop was comparatively low. Moreover, the effect of this loop was reinforced by the fact that most transitions from the USTD, IDEA, FORM, and OFFT phases were directed towards PROG or EVAL, resulting in an uneven distribution of the time spent in each phase: during the first 10 min of the activity, the students of the control groups spent almost 7 min in the PROG and EVAL phases and less than 2 min in the USTD, IDEA, and FORM phases.
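One plausible way to obtain the arrow weights in such transition graphs is to row-normalize the transition count matrices; the scheme below is an assumption for illustration, as the text does not spell out the estimator:

```python
import numpy as np

def transition_probabilities(T):
    """Row-normalize a transition count matrix: P[i, j] estimates the
    probability that the next phase is j, given the current phase is i.
    Rows of phases that were never left are kept at zero."""
    T = np.asarray(T, dtype=float)
    row_sums = T.sum(axis=1, keepdims=True)
    return np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)

# Toy 3-phase example: each row of P sums to 1 (or 0 for an unvisited phase).
counts = np.array([[0, 3, 1],
                   [2, 0, 2],
                   [0, 0, 0]])
print(transition_probabilities(counts))
```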

In order to illustrate the dynamics at the individual level, the state graphs, transition matrices, and transition graphs for one exemplary student of each group are presented (Fig. 9).

Fig. 9 State graphs, transition matrices, and transition graphs for two example students. The figure shows the data for two students, each exemplifying what was observed in the test and control groups, respectively. The top rows show the complete state graphs for the student from the test group (first row) and the control group (second row), displaying in which phase each student was at each moment of the activity. Moreover, the students' interaction with the playground is highlighted (PLAY). The third row shows the transition matrices for both students for the whole 40 min of the activity. The entries in the matrices denote the total number of transitions made between the different phases for each student. The last row shows the transition graphs for both students for the whole 40 min of the activity

The data are shown for the whole 40 min of the activity. It can be observed that the student from the control group started the activity by immediately jumping into the PROG phase. Throughout the activity, this student spent most of the time in the PROG and EVAL phases, sporadically transitioning to one of the other phases before returning to the PROG-EVAL loop. The inclination towards these phases is highlighted in the corresponding transition graph, which clearly shows that this student strongly neglected the preceding USTD, IDEA, and FORM phases. The student from the test group, on the other hand, showed a more balanced distribution among the five phases of the CCPS model. Indeed, the transition matrix of this student showed a more even dispersion of transitions towards the different phases. Interestingly, a high number of transitions was found for the path USTD–IDEA–FORM–PROG–EVAL–USTD, indicating an inclination towards the theoretically most efficient cycle of the CCPS model. From the state graphs, it can also be observed that the student from the test group performed more playground interactions (11 times), i.e., interactions with the robot or the lawn area, than the student from the control group (2 times). A similar result was observed when analyzing the overall playground interaction data for each experimental group (in total, 93 interactions for the test groups and 59 for the control groups).
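A student's inclination towards the cycle USTD–IDEA–FORM–PROG–EVAL–USTD could, for instance, be quantified by counting contiguous occurrences of the cycle in the run-length-collapsed phase sequence. The sketch below is purely illustrative and is not the analysis method used by the authors:

```python
def count_cycle(sequence, cycle=("USTD", "IDEA", "FORM", "PROG", "EVAL")):
    """Count contiguous occurrences of `cycle` in a phase sequence,
    after collapsing consecutive repeats (USTD USTD IDEA -> USTD IDEA)."""
    collapsed = [p for i, p in enumerate(sequence)
                 if i == 0 or p != sequence[i - 1]]
    k = len(cycle)
    return sum(tuple(collapsed[i:i + k]) == cycle
               for i in range(len(collapsed) - k + 1))

# Hypothetical sequence containing two full iterations of the cycle
seq = ["USTD", "USTD", "IDEA", "FORM", "PROG", "PROG", "EVAL",
       "USTD", "IDEA", "FORM", "PROG", "EVAL"]
n_cycles = count_cycle(seq)
```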

Finally, the performance of each group's lawnmower was quantified by the highest number of lawn squares that the robot managed to cover without collision. Three groups (two test and one control) completed the task, covering all eight lawn squares with their lawnmower robot. Five groups (three test and two control) covered six squares, and two groups (both control) covered only four squares. Squares were only counted for trajectories that started from the garage and were not random.

The effect of non-instructional approaches for ER activities

In educational robotics activities, students usually work in pairs or groups to solve one or more problems. Especially when these activities aim at developing computational thinking skills, students are faced with open-ended problems that they have to solve collaboratively. By doing so, they benefit from the "dynamic feedback inherent in dialog and the creation of cognitive conflict" (Hoyles, 1985). In many cases, teachers let the students work on the project without any particular constraints (Buss & Gamboa, 2017; Sadik et al., 2017). In the present study, the control groups were left in this situation, which corresponds to a non-instructional approach for ER activities in classrooms. However, the results of this study showed that under these circumstances, students spent most of their time in phases related to programming and evaluating. It was observed that, once they entered this loop, students would hardly change their strategies and barely worked on any of the other phases of the CCPS model. This result suggests an answer to the first research question addressed in this study:

Indeed, the students from the control groups spent on average almost two-thirds of their time programming and evaluating, which does not leave much time to develop other skills (understanding the problem, generating ideas, formulating a behavior). This large share of time thus indicates that students were caught in a trial-and-error loop. Moreover, the results showed that the probability of leaving this PROG-EVAL loop was low, which points to a lack of strategic organization: students should move from trial-and-error to systematic testing that provides "evidence of problem decomposition" (Shute et al., 2017). In the current study, the population had the same background and knowledge: at the age of 9 to 10 years, such behavior is usual, as students are still in the process of building proper problem-solving strategies. Indeed, as soon as the students from the test groups had access to the computer, they also spent a lot of time in the PROG and EVAL phases. Based on these results, the main conclusion is that in non-instructional classroom approaches, where the teacher does not intervene in the instructional design of ER activities and does not put constraints on the students, the latter stay most of the time in a PROG-EVAL loop.

The effect of blocking and partially blocking the programming interface

In this study, the test groups underwent an instructional intervention while solving a problem involving programming: in order to prevent them from immediately entering the PROG-EVAL loop, the programming interface was blocked, requiring them to shift their attention to the other phases of the CCPS model. The second intervention hypothesis was also verified by this experimental study:

Indeed, in the test groups, students were forced to shift their attention towards other phases, since their possibilities to enter the PROG and EVAL phases were limited by the given constraints. It was observed that, given the same amount of time, the test groups distributed their cognitive efforts better overall (measured by a similar amount of time spent in USTD-IDEA-FORM compared to PROG-EVAL). Although no specific instructions were given to the students, their behavior tended to converge towards the theoretically most efficient cycle of the CCPS model. Students started the activity by trying to understand the problem, then generated ideas and subsequently suggested formulations of the robot's behavior. These iterations can be explained by the feedback inherent to dialog, which occurs during collaborative situations (Hoyles, 1985). While going directly to programming is not unusual behavior (Buss & Gamboa, 2017; Sadik et al., 2017), no prior results were found in the literature on behavior observed under a blocking condition of the programming interface. The results of this study therefore raise the question of how this blocking condition effectively influences learning outcomes. It is assumed that performing more transitions towards the USTD, IDEA, and FORM phases would help students develop and reflect on their problem-solving process. As shown in previous work on inquiry-based learning (Bumbacher et al., 2018; Dillenbourg, 2013; Perez et al., 2017), introducing these kinds of strategic pauses may help students reflect better before taking action. Applying this principle to ER activities could substantially enhance learning outcomes, especially with regard to the development of CT skills.
Indeed, in the present study, students from the test groups iterated the USTD-IDEA-FORM loop (performing both feedforward and feedback transitions between those phases) in the first 10 min of the activity, arguing about and anticipating what could happen afterwards. This cognitive state seems to allow students to distance themselves from the act of programming in order to better reflect on the "creative act" (Duchamp, 1967).

Another finding related to the effect of this instructional intervention is that the test groups seemed to interact more with the playground and the robot than the control groups. As the latter favored the PROG-EVAL loop, they were more likely to be immersed in the programming interface. In contrast, since the test groups did not have access to the computers at the beginning, they appeared more inclined to use the playground and the robot as means to express their thoughts and findings. This mediation is a key element on which it is then possible to intervene. In fact, this is what happened when the experimental condition was changed to a partial blocking at the beginning of the second half of the experiment. The findings from the study allowed verification of the third intervention hypothesis:

The transition from a full blocking of the programming interface to a partial blocking can be considered a way to provide scaffolding. Thanks to this scaffolding, during the first half of the activity the students were able to build a well-settled strategy to solve the problem, and they were therefore mostly able to iterate the theoretically most efficient cycle of the CCPS model (USTD–IDEA–FORM–PROG–EVAL–USTD). Moreover, during the partial blocking, the high number of transitions between the PROG and EVAL phases suggests a rigorous debugging of previous implementations, an element considered important for the development of CT competencies (Bers et al., 2014; Shute et al., 2017). The students iteratively worked on the commonly identified issues and still had to predict possible behaviors, because the partial blocking prevented them from executing the new program code. This condition could be considered beneficial for helping students develop skills related to CT. For instance, among the test groups, during the partial blocking, two students decided to set up a writing strategy for programming the robot. While they were told that they could use the programming interface without executing their program, they decided to keep their paper strategy, arguing that "it's the same." This example shows the effect of fading from full to partial blocking, and these experimental conditions can therefore be considered an interesting scaffolding tool for teachers.

The overall performances of the implemented lawnmower robots showed higher task completion for the test groups. All five test groups managed to cover at least six lawn squares, and two of them completed the mission, covering all eight squares. Among the control groups, by contrast, two groups did not manage to cover more than four squares and two others no more than six. Although one group managed to finish the mission using a pure trial-and-error approach, the effective CT skills development for this group might be questionable. Indeed, previous work has argued that trial-and-error strategies may not be considered optimal, since they may "support task completion but not skills development" (Antle, 2013).

Off-task behavior as part of the activity

While designing the CCPS model, it was initially not obvious whether to include off-task behavior as a separate phase. However, while observing the students during the experiment, it became apparent that off-task behavior (OFFT) was indeed part of classroom reality. Consequently, it was decided to include it as an additional phase of the model. Dropout from the activity was found in both the test and control groups; however, its distribution was not equivalent between the two types of groups. In contrast to the test groups, off-task behavior increased significantly during the last quarter of the activity (after 30 min) for the control groups. It seems that the working modalities in the test groups (i.e., blocking and partial blocking) may foster engagement in the task over a longer term, compared to the unconstrained modality of the control groups. As described, the scaffolding of access to programming allows students to progress over time. This may facilitate their immersion in the activity and thus result in more effective learning time. In this sense, it seems that implementing the blocking conditions can also minimize off-task behavior in classroom situations, which could lead to more efficient learning activities.

The findings reported in this article have provided empirical evidence that (i) a non-instructional approach for educational robotics activities (i.e., unlimited access to the programming interface) can promote trial-and-error behavior; (ii) a scheduled blocking of the programming interface can foster cognitive processes related to problem understanding, idea generation, and solution formulation; and (iii) a progressive adjustment of the blocking of the programming interface can help students build a well-settled strategy to approach educational robotics problems and may therefore represent an effective way to provide instructional scaffolding. Taking these findings into account, this study provides initial evidence of the need for specific instructional interventions in ER activities and illustrates how teachers could use the proposed model to design ER activities aimed at CT skill development. The findings of this study thus allow a transition from theoretical to more operational frameworks, as recommended by Ioannou and Makridou (2018). The CCPS model is indeed inspired by existing CT models (Romero et al., 2017; Shute et al., 2017), but it makes a distinct contribution regarding the transfer to the classroom by providing teachers with explicit guidance on implementation, as previously recommended by Atmatzidou and Demetriadis (2016). Indeed, this study offers teachers and researchers a conceptualization of five cognitive states (USTD–IDEA–FORM–PROG–EVAL) which is adapted to ER activities and K-5 students. In the present work, the main pedagogical lever manipulated was the blocking of the programming interface. This intervention proved to be an effective way to help students cover a more complete spectrum of CT competencies, in contrast to a non-instructional modality, in which they mainly focus on their programming skills.
As a matter of fact, this intervention can be easily implemented by teachers regardless of the type of robot used. Consequently, this study also addresses the lack of research on CT for K-5 classrooms, particularly grades 3 and 4, i.e., students aged 8 to 10 years. However, the presented findings are not limited to this age range and may possibly be extended to younger and older students. Finally, the establishment of this model, and especially of its mechanics, could represent a step forward in the implementation of CT in the classroom through ER activities.

Limitations and future work

Although the results of this study appear promising, further studies are needed to draw more substantial conclusions. Due to school regulations, access to classrooms for research purposes is limited; hence, the experiments in this study were conducted with a small sample size. Nevertheless, considering the 2162 mapped transitions, this size could be considered sufficient for the purpose of this research, namely, verifying the present model. However, as the main goal of the CCPS model is to support teachers in the design, implementation, and evaluation of ER activities, future work should investigate whether teachers really perceive an added value of the model for their teaching activities. Moreover, other intervention hypotheses could be explored and tested in order to more exhaustively demonstrate the validity of the model. Furthermore, in order to present evidence for its effectiveness as a reference model for ER activities, future longitudinal studies should investigate the effective learning gains evoked by the interventions proposed by the model. In this regard, the findings of the present study may provide a good starting point for the design and execution of such studies.

Availability of data and materials

The data sets generated and analyzed during the current study are not publicly available due to the sensitivity of the data of the under-age participants but are available from the corresponding author on reasonable request.

Antle, A. N. (2013). Exploring how children use their hands to think: An embodied interactional analysis. Behaviour & Information Technology , 32 (9), 938–954.


Atmatzidou, S., & Demetriadis, S. (2016). Advancing students’ computational thinking skills through educational robotics: A study on age and gender relevant differences. Robotics and Autonomous Systems , 75 , 661–670.

Bakeman, R., & Gottman, J. M. (1997). Observing interaction: An introduction to sequential analysis (2nd ed.). New York: Cambridge University Press.

Barr, V., & Stephenson, C. (2011). Bringing computational thinking to k-12: What is involved and what is the role of the computer science education community? Inroads , 2 (1), 48–54.

Bers, M. U., Flannery, L., Kazakoff, E. R., & Sullivan, A. (2014). Computational thinking and tinkering: Exploration of an early childhood robotics curriculum. Computers & Education , 72 , 145–157.

Bottino, R., & Chioccariello, A. (2014). Computational thinking: Videogames, educational robotics, and other powerful ideas to think with. In T. Brinda, N. Reynolds, R. Romeike, & A. Schwill (Eds.), Key Competencies in Informatics and ICT (KEYCIT), 7 , (pp. 301–309). Potsdam: Universitätsverlag Potsdam Available at http://nbn-resolving.de/urn:nbn:de:kobv:517-opus4-70325 . Accessed 6 June 2020.

Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development of computational thinking. In Paper presented at the annual meeting of the American Educational Research Association (AERA) . Vancouver: Available at https://web.media.mit.edu/~kbrennan/files/Brennan_Resnick_AERA2012_CT.pdf . Accessed 6 June 2020.

Bumbacher, E., Salehi, S., Wieman, C., & Blikstein, P. (2018). Tools for science inquiry learning: Tool affordances, experimentation strategies, and conceptual understanding. Journal of Science Education and Technology , 27 (3), 215–235.

Buss, A., & Gamboa, R. (2017). Teacher transformations in developing computational thinking: Gaming and robotics use in after-school settings. In P. Rich, & C. Hodges (Eds.), Emerging research, practice, and policy on computational thinking , (pp. 189–203). Cham: Springer. https://doi.org/10.1007/978-3-319-52691-1 .


Catlin, D., & Woollard, J. (2014). Educational robots and computational thinking. In M. Merdan, W. Lepuschitz, G. Koppensteiner, R. Balogh, & D. Obdrzalek (Eds.), Robotics in Education : current research and innovations , (pp. 144–151). Cham: Springer. https://doi.org/10.1007/978-3-030-26945-6 .

Chalmers, C. (2018). Robotics and computational thinking in primary school. International Journal of Child-Computer Interaction , 17 , 93–100.

Denis, B., & Hubert, S. (2001). Collaborative learning in an educational robotics environment. Computers in Human Behavior , 17 (5-6), 465–480.

DeSchryver, M. D., & Yadav, A. (2015). Creative and computational thinking in the context of new literacies: Working with teachers to scaffold complex technology-mediated approaches to teaching and learning. Journal of Technology and Teacher Education, 23(3), 411–431. Waynesville: Society for Information Technology & Teacher Education. Available at https://www.learntechlib.org/primary/p/151572/ . Accessed 6 June 2020.

Dierbach, C. (2012). Introduction to computer science using python: A computational problem-solving focus . Hoboken: Wiley Publishing.

Dillenbourg, P. (2013). Design for classroom orchestration. Computers & Education , 69 , 485–492.

Duchamp, M. (1967). The creative act [audio recording]. New York: Roaring Fork Press. Available at https://www.youtube.com/watch?v=lqLSZdX0IbQ (min. 4:20). Accessed 6 June 2020.

Eguchi, A. (2014). Robotics as a learning tool for educational transformation. In D. Alimisis, G. Granosik, & M. Moro (Eds.), 4th International Workshop Teaching Robotics, Teaching with Robotics & 5th International Conference Robotics in Education , (pp. 27–34). Padova: RIE ISBN 978-88-95872-06-3. Available at http://www.terecop.eu/TRTWR-RIE2014/files/00_WFr1/00_WFr1_04.pdf . Accessed 6 June 2020.

Eguchi, A. (2016). Computational thinking with educational robotics. In G. Chamblee, & L. Langub (Eds.), Proceedings of society for information technology & teacher education international conference , (pp. 79–84). Savannah: Association for the Advancement of Computing in Education (AACE) Available at https://www.learntechlib.org/p/172306 . Accessed 6 June 2020.

Giang, C., Chevalier, M., Negrini, L., Peleg, R., Bonnet, E., Piatti, A., & Mondada, F. (2019). Exploring escape games as a teaching tool in educational robotics. Educational Robotics in the Context of the Maker Movement , 946 , 95.

Giang, C., Piatti, A., & Mondada, F. (2019). Heuristics for the development and evaluation of educational robotics systems. IEEE Transactions on Education .

Giannakoulas, A., & Xinogalos, S. (2018). A pilot study on the effectiveness and acceptance of an educational game for teaching programming concepts to primary school students. Education and Information Technologies , 23 (5), 2029–2052.

Haseski, H. I., Ilic, U., & Tugtekin, U. (2018). Defining a new 21st century skill-computational thinking: Concepts and trends. International Education Studies , 11 (4), 29–42.

Hoyles, C. (1985). What is the point of group discussion in mathematics? Educational studies in mathematics , 16 (2), 205–214.

Hsu, T.-C., Chang, S.-C., & Hung, Y.-T. (2018). How to learn and how to teach computational thinking: Suggestions based on a review of the literature. Computers & Education , 126 , 296–310.

Ioannidou, A., Bennett, V., Repenning, A., Koh, K. H., & Basawapatna, A. (2011). Computational thinking patterns. In Paper presented at the annual meeting of the American Educational Research Association (AERA) . New Orleans: Available at https://files.eric.ed.gov/fulltext/ED520742.pdf . Accessed 6 June 2020.

Ioannou, A., & Makridou, E. (2018). Exploring the potentials of educational robotics in the development of computational thinking: A summary of current research and practical proposal for future work. Education and Information Technologies , 23 (6), 2531–2544.

Israel, M., Pearson, J. N., Tapia, T., Wherfel, Q. M., & Reese, G. (2015). Supporting all learners in school-wide computational thinking: A cross-case qualitative analysis. Computers & Education , 82 , 263–279.

Jung, S. E., & Won, E.-s. (2018). Systematic review of research trends in robotics education for young children. Sustainability , 10 (4), 905.

Kazimoglu, C., Kiernan, M., Bacon, L., & MacKinnon, L. (2011). Understanding computational thinking before programming: Developing guidelines for the design of games to learn introductory programming through game-play. International Journal of Game-Based Learning (IJGBL) , 1 (3), 30–52.

Kazimoglu, C., Kiernan, M., Bacon, L., & MacKinnon, L. (2012). Learning programming at the computational thinking level via digital game-play. Procedia Computer Science , 9 , 522–531.

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics , 33 , 159–174.

Leonard, J., Buss, A., Gamboa, R., Mitchell, M., Fashola, O. S., Hubert, T., & Almughyirah, S. (2016). Using robotics and game design to enhance children’s self-efficacy, stem attitudes, and computational thinking skills. Journal of Science Education and Technology , 25 (6), 860–876.

Li, Y., Schoenfeld, A. H., diSessa, A. A., Graesser, A. C., Benson, L. C., English, L. D., & Duschl, R. A. (2020). Computational thinking is more about thinking than computing. Journal for STEM Education Research , 3 , 1–18. https://doi.org/10.1007/s41979-020-00030-2 .


Lumsdaine, E., & Lumsdaine, M. (1994). Creative problem solving. IEEE Potentials , 13 (5), 4–9.

Miller, D. P., & Nourbakhsh, I. (2016). Robotics for education. In B. Siciliano, & O. Khatib (Eds.), Handbook of robotics , (2nd ed., pp. 2115–2134). Cham: Springer. https://doi.org/10.1007/978-3-319-32552-1_79 .

Mondada, F., Bonani, M., Riedo, F., Briod, M., Pereyre, L., Rétornaz, P., & Magnenat, S. (2017). Bringing robotics to formal education: The thymio open-source hardware robot. IEEE Robotics & Automation Magazine , 24 (1), 77–85.

Negrini, L., & Giang, C. (2019). How do pupils perceive educational robotics as a tool to improve their 21st century skills? Journal of e-Learning and Knowledge Society , 15 (2). https://doi.org/10.20368/1971-8829/1628 .

Papert, S. (1980). Mindstorms: Computers, children, and powerful ideas . New York: Basic Books.

Perez, S., Massey-Allard, J., Butler, D., Ives, J., Bonn, D., Yee, N., & Roll, I. (2017). Identifying productive inquiry in virtual labs using sequence mining. In E. André, R. Baker, X. Hu, M. Rodrigo, & B. Du Boulay (Eds.), International conference on artificial intelligence in education , (pp. 287–298). Wuhan: Springer. https://doi.org/10.1007/978-3-319-61425-0_24 .

Perkovic, L., Settle, A., Hwang, S., & Jones, J. (2010). A framework for computational thinking across the curriculum. In A. Clear, & L. Dag (Eds.), Proceedings of the 15 th annual conference on innovation and technology in computer science education (ITiCSE) , (pp. 123–127). New York: Association for Computing Machinery (ACM). https://doi.org/10.1145/1822090.1822126 .

Puccio, G. (1999). Creative problem solving preferences: Their identification and implications. Creativity and Innovation Management , 8 (3), 171–178.

Repenning, A., Webb, D., Koh, K., Nickerson, H., Miller, S., Brand, C., … Repenning, N. (2015). Scalable game design: A strategy to bring systemic computer science education to schools through game design and simulation creation. ACM Transactions on Computing Education (TOCE) , 15 (2), 11. https://doi.org/10.1145/2700517 .

Riedo, F., Chevalier, M., Magnenat, S., & Mondada, F. (2013). Thymio II, a robot that grows wiser with children. In Proceedings of the 2013 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO) , (pp. 187–193). Tokyo: IEEE. https://doi.org/10.1109/ARSO.2013.6705527 .

Riedo, F., Rétornaz, P., Bergeron, L., Nyffeler, N., & Mondada, F. (2012). A two years informal learning experience using the Thymio robot. In U. Rückert, S. Joaquin, & W. Felix (Eds.), Advances in Autonomous Mini Robots , (pp. 37–48). Berlin: Springer. https://doi.org/10.1007/978-3-642-27482-4_7 .

Romero, M., Lepage, A., & Lille, B. (2017). Computational thinking development through creative programming in higher education. International Journal of Educational Technology in Higher Education , 14 (1), 42.

Sadik, O., Leftwich, A.-O., & Nadiruzzaman, H. (2017). Computational thinking conceptions and misconceptions: Progression of preservice teacher thinking during computer science lesson planning. In P. Rich, & C. Hodges (Eds.), Emerging research, practice, and policy on computational thinking , (pp. 221–238). Cham: Springer. https://doi.org/10.1007/978-3-319-52691-1_14 .

Sharpe, T. L., & Koperwas, J. (2003). Behavior and sequential analyses: Principles and practice . Sage Publications, Inc. https://doi.org/10.4135/9781412983518 .

Shin, J., Siegwart, R., & Magnenat, S. (2014). Visual programming language for Thymio II robot. In Paper presented at the Conference on interaction design and children (idc’14) . Aarhus: Available at http://se.inf.ethz.ch/people/shin/publications/shin_idc14.pdf . Accessed 6 June 2020.

Shute, V. J., Sun, C., & Asbell-Clarke, J. (2017). Demystifying computational thinking. Educational Research Review , 22 , 142–158.

Sullivan, A., Bers, M., & Mihm, C. (2017). Imagining, playing, and coding with kibo: Using robotics to foster computational thinking in young children. In Proceedings of the International Conference on Computational Thinking Education . Wanchai: Available at https://ase.tufts.edu/devtech/publications/Sullivan_Bers_Mihm_KIBOHongKong%20.pdf . Accessed 6 June 2020.

Tsai, M.-J., Hsu, C.-Y., & Tsai, C.-C. (2012). Investigation of high school students’ online science information searching performance: The role of implicit and explicit strategies. Journal of Science Education and Technology , 21 (2), 246–254.

Viau, R. (2009). La motivation en contexte scolaire (2nd ed.). Bruxelles: De Boeck. ISBN 978-2-8041-1148-9.

Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016). Defining computational thinking for mathematics and science classrooms. Journal of Science Education and Technology , 25 (1), 127–147.

Wing, J. M. (2006). Computational thinking. Communications of the ACM , 49 (3), 33–35.

Yaroslavski, D. (2014). How does lightbot teach programming? Lightbot.com Available at https://lightbot.com/Lightbot_HowDoesLightbotTeachProgramming.pdf . Accessed 6 June 2020.


Acknowledgements

The authors would like to thank the Coordination Committee for Educational Research of the Canton of Vaud (Switzerland) as well as the school administration and all teachers, students, and their parents for the participation in the study. Moreover, the authors would like to thank Melissa Skweres for proofreading the manuscript.

This work was partially supported by the Swiss National Science Foundation NCCR Robotics.

Author information

Morgane Chevalier and Christian Giang contributed equally to this work.

Authors and Affiliations

Haute Ecole Pédagogique (HEP) du Canton de Vaud, Avenue de Cour, 33, 1014, Lausanne, Switzerland

Morgane Chevalier

Mobots Group of the Biorobotics Laboratory, Ecole Polytechnique Fédérale de Lausanne (EPFL), Route Cantonale, 1015, Lausanne, Switzerland

Morgane Chevalier, Christian Giang & Francesco Mondada

Department of Education and Learning (DFA), University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Piazza S. Francesco 19, 6600, Locarno, Switzerland

Christian Giang & Alberto Piatti


Contributions

M.C. designed the model, carried out experiments, analyzed data, and wrote the paper; C.G. designed the model, carried out experiments, analyzed data, and wrote the paper; A.P. designed the model, carried out experiments, and wrote the paper; F.M. designed the model and wrote the paper. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Morgane Chevalier .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Chevalier, M., Giang, C., Piatti, A. et al. Fostering computational thinking through educational robotics: a model for creative computational problem solving. IJ STEM Ed 7 , 39 (2020). https://doi.org/10.1186/s40594-020-00238-z

Received : 18 March 2020

Accepted : 06 July 2020

Published : 03 August 2020


Keywords

  • Computational thinking
  • Educational robotics
  • Instructional intervention
  • Problem solving
  • Trial-and-error

University of Notre Dame

Preconditioning Hybrid Discontinuous Galerkin Schemes for Incompressible Flow and Linear Elasticity Problems

  • Doctoral Dissertation (Doctor of Philosophy)
  • Program: Applied and Computational Mathematics and Statistics

ScienceDaily

New computer algorithm supercharges climate models and could lead to better predictions of future climate change

Earth System Models -- complex computer models which describe Earth processes and how they interact -- are critical for predicting future climate change. By simulating the response of our land, oceans and atmosphere to manmade greenhouse gas emissions, these models form the foundation for predictions of future extreme weather and climate event scenarios, including those issued by the UN Intergovernmental Panel on Climate Change (IPCC).

However, climate modellers have long faced a major problem. Because Earth System Models integrate many complicated processes, a simulation cannot simply be started from scratch: the model must first reach a stable equilibrium representative of real-world conditions before the industrial revolution. Without this initial settling period -- referred to as the "spin-up" phase -- the model can "drift," simulating changes that may be erroneously attributed to manmade factors.

Unfortunately, this process is extremely slow, as it requires running the model for many thousands of model years -- which, for IPCC simulations, can take as much as two years on some of the world's most powerful supercomputers.

However, a study in Science Advances by a University of Oxford scientist funded by the Agile Initiative describes a new computer algorithm which can be applied to Earth System Models to drastically reduce spin-up time. During tests on models used in IPCC simulations, the algorithm was on average 10 times faster at spinning up the model than currently-used approaches, reducing the time taken to achieve equilibrium from many months to under a week.

Study author Samar Khatiwala, Professor of Earth Sciences at the University of Oxford's Department of Earth Sciences, who devised the algorithm, said: 'Minimising model drift at a much lower cost in time and energy is obviously critical for climate change simulations, but perhaps the greatest value of this research may ultimately be to policy makers who need to know how reliable climate projections are.'

Currently, the lengthy spin-up time of many IPCC models prevents climate researchers from running their model at a higher resolution and defining uncertainty through carrying out repeat simulations. By drastically reducing the spin-up time, the new algorithm will enable researchers to investigate how subtle changes to the model parameters can alter the output -- which is critical for defining the uncertainty of future emission scenarios.

Professor Khatiwala's new algorithm employs a mathematical approach known as sequence acceleration, which has its roots in the work of the mathematician Euler. In the 1960s this idea was applied by D. G. Anderson to speed up the solution of Schrödinger's equation, which predicts how matter behaves at the microscopic level. So important is this problem that more than half the world's supercomputing power is currently devoted to solving it, and 'Anderson Acceleration', as it is now known, is one of the most commonly used algorithms for it.

Professor Khatiwala realised that Anderson Acceleration might also be able to reduce model spin-up time since both problems are of an iterative nature: an output is generated and then fed back into the model many times over. By retaining previous outputs and combining them into a single input using Anderson's scheme, the final solution is achieved much more quickly.
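The iterative feedback structure described above can be sketched in code. The following is a generic, textbook-style form of Anderson Acceleration applied to an abstract fixed-point problem x = g(x); it is written purely for illustration and is not the implementation from the Science Advances study (all function and variable names are our own):

```python
import numpy as np

def fixed_point_anderson(g, x0, m=5, tol=1e-10, max_iter=200):
    """Anderson acceleration for the fixed-point problem x = g(x).

    Retains the last few iterates and residuals, and combines them via a
    small least-squares solve into the next input -- the same reuse of
    previous outputs described in the article.
    """
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    f = gx - x                       # residual of the plain iteration
    G_hist, F_hist = [gx], [f]
    for k in range(max_iter):
        if np.linalg.norm(f) < tol:
            return gx, k
        if len(F_hist) > 1:
            # Differences of stored residuals / g-values over the history window
            dF = np.column_stack([F_hist[i + 1] - F_hist[i]
                                  for i in range(len(F_hist) - 1)])
            dG = np.column_stack([G_hist[i + 1] - G_hist[i]
                                  for i in range(len(G_hist) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma      # accelerated update built from the history
        else:
            x = gx                   # first step: plain fixed-point update
        gx = g(x)
        f = gx - x
        G_hist.append(gx)
        F_hist.append(f)
        if len(F_hist) > m + 1:      # keep at most m stored differences
            G_hist.pop(0)
            F_hist.pop(0)
    return gx, max_iter

# The plain iteration x <- cos(x) converges slowly; the accelerated
# version reaches the same fixed point in far fewer evaluations.
root, iters = fixed_point_anderson(lambda x: np.cos(x), np.array([1.0]))
```

The key design point, as in the article, is that each new input is a combination of several previous outputs rather than the latest output alone, which is what collapses the number of iterations.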

Not only does this make the spin-up process much faster and less computationally expensive, but the concept can be applied to the huge variety of different models that are used to investigate, and inform policy on, issues ranging from ocean acidification to biodiversity loss. With research groups around the world beginning to spin-up their models for the next IPCC report, due in 2029, Professor Khatiwala is working with a number of them, including the UK Met Office, to trial his approach and software in their models.

Professor Helene Hewitt OBE, Co-chair for the Coupled Model Intercomparison Project (CMIP) Panel, which will inform the next IPCC report, commented: 'Policymakers rely on climate projections to inform negotiations as the world tries to meet the Paris Agreement. This work is a step towards reducing the time it takes to produce those critical climate projections.'

Professor Colin Jones Head of the NERC/Met Office sponsored UK Earth system modelling, commented on the findings: 'Spin-up has always been prohibitively expensive in terms of computational cost and time. The new approaches developed by Professor Khatiwala have the promise to break this logjam and deliver a quantum leap in the efficiency of spinning up such complex models and, as a consequence, greatly increase our ability to deliver timely, robust estimates of global climate change.'

Story Source:

Materials provided by University of Oxford. Note: Content may be edited for style and length.

Journal Reference :

  • Samar Khatiwala. Efficient spin-up of Earth System Models using sequence acceleration . Science Advances , 2024; 10 (18) DOI: 10.1126/sciadv.adn2839


Computer Science > Computation and Language

Title: CoMM: Collaborative Multi-Agent, Multi-Reasoning-Path Prompting for Complex Problem Solving

Abstract: Large Language Models (LLMs) have shown great ability in solving traditional natural language tasks and elementary reasoning tasks with appropriate prompting techniques. However, their ability is still limited in solving complicated science problems. In this work, we aim to push the upper bound of the reasoning capability of LLMs by proposing a collaborative multi-agent, multi-reasoning-path (CoMM) prompting framework. Specifically, we prompt LLMs to play different roles in a problem-solving team, and encourage different role-play agents to collaboratively solve the target task. In particular, we discover that applying different reasoning paths for different roles is an effective strategy to implement few-shot prompting approaches in the multi-agent scenarios. Empirical results demonstrate the effectiveness of the proposed methods on two college-level science problems over competitive baselines. Our further analysis shows the necessity of prompting LLMs to play different roles or experts independently. We release the code at: this https URL
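The framework's core loop can be sketched as follows. This is an illustrative reconstruction from the abstract alone, not the paper's released code; `call_llm` is a hypothetical model interface, and the role and reasoning-path strings are placeholders:

```python
def comm_solve(problem, call_llm):
    """Sketch of CoMM-style prompting: distinct role-play agents, each
    given its own reasoning path, followed by an aggregation step."""
    roles = {
        "physics professor": "Reason step by step from first principles.",
        "mathematician": "Set up the governing equations and solve them explicitly.",
    }
    # Each agent answers independently using its own reasoning path
    proposals = {
        role: call_llm(f"You are a {role}. {path}\nProblem: {problem}")
        for role, path in roles.items()
    }
    # A final call reconciles the independent answers into one solution
    summary = "\n".join(f"{role}: {answer}" for role, answer in proposals.items())
    return call_llm("As the team leader, reconcile these solutions "
                    f"and give a final answer:\n{summary}")
```

In practice `call_llm` would wrap an actual model API; the abstract's finding is that giving each role its own reasoning path outperforms a single shared few-shot prompt.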


A framework for supporting systems thinking and computational thinking through constructing models

  • Open access
  • Published: 24 July 2022
  • Volume 50 , pages 933–960, ( 2022 )


  • Namsoo Shin (ORCID: orcid.org/0000-0002-5900-2073)
  • Jonathan Bowers
  • Steve Roderick
  • Cynthia McIntyre
  • A. Lynn Stephens
  • Emil Eidin
  • Joseph Krajcik
  • Daniel Damelin


We face complex global issues such as climate change that challenge our ability as humans to manage them. Models have been used as a pivotal science and engineering tool to investigate, represent, explain, and predict phenomena or solve problems that involve multi-faceted systems across many fields. To fully explain complex phenomena or solve problems using models requires both systems thinking (ST) and computational thinking (CT). This study proposes a theoretical framework that uses modeling as a way to integrate ST and CT. We developed a framework to guide the complex process of developing curriculum, learning tools, support strategies, and assessments for engaging learners in ST and CT in the context of modeling. The framework includes essential aspects of ST and CT based on selected literature, and illustrates how each modeling practice draws upon aspects of both ST and CT to support explaining phenomena and solving problems. We use computational models to show how these ST and CT aspects are manifested in modeling.


Introduction

The primary goals of science education for all learners are to explain natural phenomena, solve problems, and make informed decisions about actions and policies that may impact their lives, their local environments, and our global community. Models—representations that abstract and simplify a system by focusing on key features—have been used as a pivotal science and engineering tool to investigate, represent, explain, and predict phenomena across many fields (Harrison & Treagust, 2000 ; National Research Council, 2012 ; Schwarz et al., 2017 ). For example, En-ROADS, a computational model and simulator developed by Climate Interactive, an environmental think tank, has been used to educate members of the U.S. Congress, the U.S. State Department, the Chinese government, and the office of the UN Secretary-General about the impacts of proposed climate policies on global warming. The model was also a centerpiece of multiple presentations at the UN Climate Change Conference in Scotland in 2021 (Madubuegwn et al., 2021 ).

Modeling, which includes developing, testing, evaluating, revising, and using a model (National Research Council, 2012 ; Schwarz et al., 2017 ; Sengupta et al., 2013 ), necessitates many different thinking processes such as problem decomposition (Grover & Pea, 2018 ), causal reasoning (Levy & Wilensky, 2008 ), pattern recognition (Berland & Wilensky, 2015 ), algorithmic thinking (Lee & Malyn-Smith, 2020 ), and analysis and interpretation of data (Shute et al., 2017 ). These thinking processes often necessitate systems thinking and computational thinking, which are intrinsically linked to modeling (Richmond, 1994 ; Wing, 2017 ).

Richmond ( 1994 ) defines systems thinking (ST) as “the art and science of making reliable inferences about behavior by developing an increasingly deep understanding of underlying structure” (p. 139). To help manage the complexities of climate change, experts use a systems thinking approach to guide decision making and inform policy design (Holz et al., 2018 ; Sterman & Booth Sweeney, 2007). In the context of modeling, ST helps us to comprehend the causal connections between the components of a model––the elements of a problem or phenomenon that affect the system’s behavior (e.g., the relationship between the melting of polar ice caps and the increase in global temperatures)––and how changes in one component (e.g., the rate of melting of the polar ice caps) can cascade to other components and potentially affect the status of the entire system.

However, the multifaceted interactions within complex systems quickly outpace our ability to mentally simulate and predict system behavior, especially in systems that include time delays and feedback, common features of many complex phenomena (Cronin et al., 2009 ). For example, in the case of climate change, there is a delay between changing CO 2 emissions and the cumulative effect on CO 2 concentrations in the atmosphere (Sterman & Sweeney, 2002 ). Even highly educated people with strong mathematics backgrounds have trouble understanding and predicting the behavior of a system with these features (Cronin et al., 2009 ), and perform poorly when attempting to forecast the effect of any particular intervention (Sterman & Sweeney, 2002 ). Therefore, ST alone may not be sufficient for investigation of potential solutions to complex problems such as climate change.

To model successfully, we also need computational thinking (Wing, 2008 ). Computational thinking (CT) provides conceptual tools for finding answers to problems involving complex, multidimensional systems by applying logical and algorithmic thinking (Berland & Wilensky, 2015 ; Lee & Malyn-Smith, 2020 ). Wing ( 2006 ) views CT as thinking like a computer scientist, not like a computer, and as a competency appropriate and available to everyone, not only for computer scientists in science and engineering fields. In particular, CT helps us create algorithms by identifying patterns in phenomena to automate the transformation of data so that we can predict other phenomena in similar systems. Although CT does not require the use of a computer, a computer’s processing speed is helpful for testing solutions efficiently. For instance, scientists use computational models to test the effect of various policies on the amount of CO 2 accumulating in the atmosphere in order to reduce the trapping of solar energy in our environment. Thus, to fully explain complex phenomena or solve problems using models requires both ST and CT (National Research Council, 2012 ).
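As a concrete illustration of this kind of computational policy test, the toy model below treats atmospheric CO2 as a stock fed by an emissions flow, with part of each year's emissions absorbed by sinks. All numbers and parameter names are illustrative placeholders, not calibrated climate values:

```python
def co2_trajectory(annual_emissions, airborne_fraction=0.5, c0=280.0):
    """Toy stock-and-flow model: the concentration stock accumulates the
    airborne share of each year's emissions (illustrative units only)."""
    concentration = c0
    trajectory = [concentration]
    for emitted in annual_emissions:
        concentration += airborne_fraction * emitted
        trajectory.append(concentration)
    return trajectory

# Compare a constant-emissions scenario with a gradual-reduction policy
business_as_usual = co2_trajectory([2.5] * 30)
reduction_policy = co2_trajectory([max(2.5 - 0.08 * year, 0.0)
                                   for year in range(30)])
```

Even under the reduction policy the stock keeps rising as long as emissions stay positive, which is exactly the delay between changing emissions and their cumulative effect noted above.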

A key challenge for science, technology, engineering, and mathematics (STEM) educators and researchers is to develop learning environments that provide opportunities for learners to experience how scientists approach explaining complex phenomena and solving ill-structured problems (Krajcik & Shin, 2022 ; National Research Council, 2000 ). Because science and engineering practices (e.g., modeling, computational thinking) should be integrated with scientific ideas including disciplinary core ideas and crosscutting concepts (e.g., systems and system modeling) in meaningful contexts to promote deep learning (National Research Council, 2012 ; NGSS Lead States 2013 ), we argue that incorporating modeling, ST, and CT into existing STEM subjects supports K-12 learners in making sense of phenomena and solving problems.

We propose a theoretical framework that foregrounds modeling and highlights how both ST and CT are involved in the process of modeling phenomena or problems. Because modeling is a key science practice for learners across K-12 education (National Research Council, 2012 ; NGSS Lead States 2013 ), it would be beneficial to expand the opportunities for applying ST and CT in the context of modeling. To successfully support learners in modeling with ST and CT in various STEM disciplines in K-12 education, educators and researchers need to further develop and explore learning environments that incorporate ST and CT in the modeling practices.

We postulate that this framework can (a) guide curriculum developers and teachers in integrating ST and CT in the context of modeling in multiple STEM courses, (b) assist software developers and curriculum designers in developing effective learning tools and pedagogical supports that involve learners in modeling, ST, and CT, and (c) help researchers and teachers in measuring learners’ understanding and application of modeling, ST, and CT. In this paper, we present our framework, which is based on a literature review of ST and CT as well as examples of models to illustrate how ST and CT are integrated in and support the modeling practices. We start by defining ST, CT, and the modeling practices that form the foundation of our framework, briefly introduce our framework, then identify ST and CT aspects. Finally, we describe the framework and associated aspects of ST and CT in each modeling practice by illustrating how the ST and CT aspects support the modeling process.

Theoretical background

What is systems thinking?

Systems thinking has been emphasized in K-12 science standards in the U.S. for nearly three decades (NGSS Lead States, 2013 ; National Research Council, 2007 , 2012 ; American Association for the Advancement of Science, 1993 ). With the release of A Framework for K-12 Science Education (National Research Council, 2012 ), ST has been incorporated in the crosscutting concept of systems and system modeling . ST provides learners with a unique lens that, when combined with scientific ideas and practices, can enhance sense-making and problem-solving, and is particularly well suited for addressing the complexity found in many social and scientific phenomena—from human health and physiology to climate change. Although thinking in a systemic way about persistent problems has been around for centuries (Richardson, 1994), the term “systems thinking” (ST), as used in the literature across a wide variety of disciplines, has not been clearly defined.

From the world of business, Senge ( 1990 ) sees systems thinking as a paradigm shift toward consideration of the system as a whole, with a focus on interrelationships and change over time. Forrester ( 1961 ), Senge’s mentor and creator of the discipline known as system dynamics, speaks of “system awareness … a formal awareness of the interactions between the parts of a system” (p. 5). Meadows ( 2008 ) refers to a “system lens” and stresses that “seeing the relationship between structure and behavior we can begin to understand how a system works” (p. 1). Richmond ( 1994 ) and Sterman ( 1994 ) view ST and learning as being synergistically connected. Richmond views ST as both a paradigm and a learning method. He describes the ST paradigm as a lens through which one comes to view complex systems holistically, as well as a set of practical tools for developing and refining that lens. The “tools” that he describes closely match commonly defined modeling practices (National Research Council, 2012 ; Schwarz et al., 2009 ). More recently, systems thinking has been described as a set of skills that can be used as an aid to understanding complex systems and their behavior (Benson, 2007 ; Ben-Zvi Assaraf & Orion, 2005 ). In particular, Arnold and Wade ( 2015 ) define ST as “a set of synergistic analytic skills used to improve the capability of identifying and understanding systems” (p. 675).

For these authors ST represents a worldview, a way of thinking about the world that emerges as an individual grows in ability and willingness to see it holistically. Disciplined application of ST tools and skills supports and potentially alters one’s worldview, and one’s worldview conditions the choices one makes about the use of the tools. Building on the long tradition of ST applications in business together with more recent integration in K-12 education, ST can be defined operationally as the ability to understand a problem or phenomenon as a system of interacting elements that produces emergent behavior.

What is computational thinking?

Computational thinking is an important skill that is related to many disciplines (e.g., mathematics, biology, chemistry, design, economics, neuroscience, statistics), as well as numerous aspects of our daily life (e.g., optimizing everyday financial decisions or navigating daily commutes to minimize time spent in traffic). However, while computer science and CT have been key drivers of scientific development and innovation for several decades, only recently has CT been emphasized as a major academic learning goal in K-12 science education.

There is a wide range of perspectives on how to define CT—from a STEM-centered approach (Berland & Wilensky, 2015 ; National Research Council, 2012 ; Weintrop et al., 2016 ) to a more generic problem-solving approach (Barr & Stephenson, 2011 ; Grover & Pea, 2018 ; Lee & Malyn-Smith, 2020 ; Wing, 2006 ). A Framework for K-12 Science Education defines CT as utilizing computational tools (e.g., programming simulations and models) grounded in mathematics to collect, generate, and analyze large data sets, identify patterns and relationships, and model complex phenomena in ways that were previously impossible (National Research Council, 2012 ). Similar to the Framework , the STEM-centered approach describes CT as connected to mathematics for supporting data collection and analysis or testing hypotheses in a productive and efficient way, but also views CT as centering on sense-making processes (Psycharis & Kallia, 2017 ; Schwarz et al., 2017 ; Weintrop et al., 2016 ). In this view, although CT is intertwined with aspects of using specific rules (with quantitative data) to program computers to build models and simulations, it is more than an algorithmic approach to problem-solving (Brennan & Resnick, 2012 ; Shute et al., 2017 ). Instead, it is a more comprehensive, scientific way to foster sense-making that encourages learners to ask, test, and refine their understandings of how phenomena occur or how to solve problems.

Many researchers suggest that CT means “thinking like a computer scientist” when confronted with a problem (Grover & Pea, 2018 ; Nardelli, 2019 ; Shute et al., 2017 ; Wing, 2008 ). These scholars elaborate on the definition of CT, focusing on computational problem-solving processes, such as breaking a complex problem into smaller problems to trace and find solutions (Grover & Pea, 2018 ; Shute et al., 2017 ; Türker & Pala, 2020 ; Wing, 2006 ). Building on the ideas put forth by Wing ( 2006 ) and Grover and Pea (2018), as well as the view of the sense-making process from the STEM approach, we define CT operationally as a way of explaining phenomena or solving problems that utilizes an iterative and quantitative approach for exploring, unpacking, synthesizing, and predicting the behavior of phenomena using computational algorithmic methods.

What are modeling practices?

Modeling enables learners to investigate questions, make sense of phenomena, and explore solutions to problems by connecting and synthesizing their knowledge from a variety of sources into a coherent and scientific view of the world (National Research Council, 2011 , 2012 ; Schwarz et al., 2017 ). Modeling includes several important practices, including building, evaluating, revising, and using models (National Research Council, 2012 ; Schwarz et al., 2017 ). Research shows that learners can deepen their understanding of scientific ideas through the development, use, evaluation, and revision of models (Schwarz & White, 2005 ; Wen et al., 2018 ; Wilkerson et al., 2018 ). Although modeling can be conducted without using computational programs (e.g., paper-pencil modeling), we are specifically interested in supporting student engagement in modeling, ST, and CT, so we are narrowing our focus to a computational approach. Therefore, we propose a Framework for Computational Systems Modeling that elucidates the synergy between modeling, ST, and CT. Building on the descriptions of the modeling process in the literature (Halloun, 2007 ; Martinez-Moyano & Richardson, 2013 ; Metcalf-Jackson et al., 2000 ; National Research Council, 2012 ; Schwarz et al., 2009 ), our framework includes five modeling practices: M1) characterize problem or phenomenon to model, M2) define the boundaries of the system, M3) design and construct model structure, M4) test, evaluate, and debug model behavior, and M5) use model to explain and predict behavior of phenomenon or design solution to a problem (Fig.  1 ).

Fig. 1 Computational Systems Modeling Framework

In Fig.  1 , the center set of boxes describes five modeling practices and the cyclic nature of the modeling process (represented by the arrows). The process is highly iterative, with involvement in one practice influencing both future and previous practices, inviting reflection and model revision throughout. Engagement in each of these practices necessitates the employment of aspects of ST (left side of framework diagram) and CT (right side of framework diagram). To develop the Framework for Computational Systems Modeling, we conducted a literature review study to identify and define essential aspects of ST and CT. Specifically, we explored important aspects of these two types of thinking necessary for the modeling practices. Through this exploration we considered the implications of these aspects for developing curriculum, learning tools, pedagogical and scaffolding strategies for teaching and learning, and valid assessments for promoting and monitoring learner progress. We, therefore, investigated two guiding questions: (1) What are the key aspects of each type of thinking? and (2) How do aspects of ST and CT intersect with and support modeling practices? Below we explain our review process.

We employed the integrative review approach (Snyder, 2019 ) as we analyzed and synthesized literature, including experimental and non-experimental studies, as well as data from theoretical literature and opinion or position articles (e.g., books, book chapters, practitioner articles) on ST and CT. An integrative review method is appropriate for critically examining and analyzing secondary data about research topics for generating a new framework (Snyder, 2019 ; Whittemore & Knafl, 2005 ).

Literature search and selection strategies

Because our focus is on modeling as a process for making sense of phenomena, we took the view of Richmond ( 1994 ) as our starting point for ST and the view of Wing ( 2006 ) for CT, since both authors emphasize these thinking processes for learners to understand phenomena. We embarked on a literature review related to the two scholars' research, and then extended our search using the names of authors who published studies related to the definition of ST or CT using Google Scholar. Our inclusion criteria were: (1) literature written between 1994 and 2021 with the keyword "system [and systems] thinking" or between 2006 and 2021 with the keyword "computational thinking"; and (2) literature directly related to the definition of ST or CT. We excluded authors who used previously defined ideas related to ST or CT and did not uniquely contribute new ideas. From these search results, we collected 80 manuscripts and narrowed our search to 55 manuscripts by selecting one representative manuscript in cases when an author had published several manuscripts (e.g., Richmond 1994 from Richmond 1993 , 1994 , 1997 ). In this way, our analysis aims to avoid misleading results that might be influenced by including the same authors' ideas repeatedly. Our analysis included 27 of 45 manuscripts that defined ST and 28 of 35 manuscripts that defined CT.

Data analysis

We used the following filters sequentially, as we extracted a list of aspects of both ST and CT to create a usable framework: (a) the ST or CT aspect is described widely across the ST or CT literature in multiple scholars’ works, (b) the ST or CT aspect is not overly broad or generic; we excluded aspects that are ubiquitous across fields but not specific to ST or CT, and (c) the ST or CT aspect is operationalizable through observable behaviors (e.g., tangible artifacts or discussion).

We first created an initial list of aspects based on Richmond’s ( 1994 ) definition of ST and Wing’s ( 2006 ) definition of CT, respectively, to extract and sort information from each selected literature. For example, the aspects include causal reasoning, identifying interconnections, and predicting system behavior for ST (Fig.  1 left side), and problem decomposition, artifact creation, and debugging for CT (Fig.  1 right side). Second, each manuscript was reviewed by two of the authors independently, sorting texts based on which aspects were described. Third, the two authors confirmed their sorting of the texts by discussing each manuscript. Then, five of the authors reviewed the aggregated texts associated with each aspect presented from 27 ST and 28 CT manuscripts. As we reviewed the literature pertaining to definitions of ST and CT, we revised the aspects, expanding or dividing them as we gained new information and identified associated sub-aspects. All authors discussed the aspects and finalized them by resolving discrepancies and clarifying ambiguities. A shared spreadsheet was used to store and analyze all data.

Findings and discussion of literature review

Aspects and sub-aspects of systems thinking

From the reviews of ST literature in both business and education, and using our filters, we identified five aspects of systems thinking: (ST1) defining a system (boundaries and structure), (ST2) engaging in causal reasoning, (ST3) identifying interconnections and feedback, (ST4) framing problems or phenomena in terms of behavior over time, and (ST5) predicting system behavior based on system structure (Fig.  1 left side). Figure  2 shows the distribution of ST aspects that emerged from our review of 27 manuscripts.

Fig. 2 Distribution of systems thinking aspects. (Note: ST1. Defining a system [boundaries and structure]; ST2. Engaging in causal reasoning; ST3. Identifying interconnections and feedback; ST4. Framing problems or phenomena in terms of behavior over time; ST5. Predicting system behavior based on system structure)

ST1, defining a system (boundaries and structure) , requires an individual to clearly identify a system’s function and be as specific as possible about the phenomenon to be understood or the problem to be addressed. Table  1 presents a summary of the various ways 15 manuscripts have described defining a system .

Defining a system focuses attention on internal system structure (relevant elements and the interactions among them that produce system behaviors) and limits the tendency to link extraneous outside factors to behavior. This focus allows learners to more clearly define the spatial and temporal limits of the system they wish to explain or understand (Hopper & Stave, 2008 ). Meadows ( 2008 ) discusses the importance of "system boundaries" and the need to clearly define a goal, problem, or purpose when attempting to think systemically. Considering scale is critical when deciding what content is necessary to explain the system behavior of interest (Arnold & Wade, 2017 ; Yoon et al., 2018 ).

To operationalize defining a system, three sub-aspects are necessary: (ST1a) identifying relevant elements within the system's defined boundaries (Arnold & Wade, 2015), (ST1b) evaluating the appropriateness of elements to see if their elimination significantly impacts the overall behavior of the system in relation to the question being explored (Ben-Zvi Assaraf & Orion, 2005; Weintrop et al., 2016), and (ST1c) determining the inputs and outputs of the system (Yoon et al., 2018).

ST2, engaging in causal reasoning , includes examination of the relationships and interactions of system elements. Causal reasoning is described as a key aspect of ST by most researchers, appearing in 25 manuscripts (Fig.  2 ). Table  1 presents a summary of the descriptions of causal reasoning. In addition, two other manuscripts imply the importance of causal reasoning as they state that causal reasoning serves as a foundation for recognizing interconnections and feedback loops in a system (ST3). For example, identifying interconnections among elements in a system (e.g., events, entities, or processes) requires understanding of one-to-one causal relationships (Forrester, 1994 ; Ossimitz, 2000 ).

For deep understanding of phenomena, learners should be able to describe both direct (impact of one element upon another) and indirect (the effects of multiple causal connections acting together in extended chains) relationships among various elements in a system representing the phenomenon (Kim, 1999; Jacobson & Wilensky, 2006). Based on our review of causal reasoning, three sub-aspects are operationalizable: (ST2a) recognizing cause and effect relationships among elements (Arnold & Wade, 2015), (ST2b) quantitatively (or semi-quantitatively) defining proximal causal relationships between elements (Grotzer et al., 2017), and (ST2c) identifying (or predicting) the behavioral impacts of multiple causal relationships (Sterman, 2002; Levy & Wilensky, 2011).

ST3, identifying interconnections and feedback , involves analyzing causal chains that result in circular structural patterns (feedback structures) within a system. This aspect of ST was proposed by all 27 manuscripts (Fig.  2 ). Many studies in ST have focused on helping learners identify the relationships of interdependencies of system elements (Jacobson & Wilensky, 2006 ; Levy & Wilensky, 2011 ; Samon & Levy 2020 ; Yoon et al., 2018 ) (Table  1 ). Feedback structures are created when chains loop back upon themselves, creating closed loops of cause and effect (Jacobson et al., 2011 ; Pallant & Lee, 2017 ; Richmond, 1994 ). There are two basic types of feedback loops: balancing (or negative) feedback that tends to stabilize system behavior and reinforcing (or positive) feedback that causes behavior to diverge away from equilibrium (Booth-Sweeney & Sterman, 2000 ; Meadows, 2008 ).

This aspect provides learners an expansion of perspective, from one that focuses primarily on the analysis of individual elements and interactions to one that includes consideration of how the system and its constituent parts interact and relate to one another as a whole (Ben-Zvi Assaraf & Orion, 2005 ). Based on our literature review, it can be operationalized in two sub-aspects: (ST3a) identifying circular structures of causal relationships (Danish et al., 2017 ; Grotzer et al., 2017 ; Ossimitz, 2000 ) and (ST3b) recognizing balancing and reinforcing feedback structures and their relationship to the stability and growth within a system (Fisher, 2018 ; Meadows, 2008 ).
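The distinction between balancing and reinforcing feedback can be made concrete with a short simulation. The following sketch is our own illustration, not drawn from the reviewed manuscripts: a single-stock system in which growth feeding on itself produces reinforcing (diverging) behavior, while change driven by the gap to a goal value produces balancing (stabilizing) behavior.

```python
# Illustrative sketch (not from the reviewed literature) of the two
# basic feedback types: a reinforcing loop diverges from equilibrium,
# while a balancing loop converges toward a goal value.

def simulate(stock, rate, goal=None, steps=5):
    """Step a one-stock system forward in time.

    With no goal, the stock reinforces itself (positive feedback);
    with a goal, the gap to the goal drives change (negative feedback).
    """
    history = [stock]
    for _ in range(steps):
        if goal is None:
            stock += rate * stock            # reinforcing: growth feeds growth
        else:
            stock += rate * (goal - stock)   # balancing: gap shrinks each step
        history.append(round(stock, 2))
    return history

print(simulate(100, 0.1))            # reinforcing: values diverge upward
print(simulate(100, 0.1, goal=50))   # balancing: values settle toward 50
```

The same structural difference, a loop that amplifies change versus one that counteracts it, underlies the behavior of far more complex systems than this two-line update rule.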

ST4, framing problems or phenomena in terms of behavior over time , requires that learners distinguish between phenomena that are best described as evolving over time and those that are not. Twenty-seven manuscripts reported that framing problems in terms of behavior over time is an important aspect of ST (Fig.  2 ), as shown in Table  1 . Many advocates of ST refer to the recognition of the link between structure and behavior as “dynamic thinking” and acknowledge proficiency with it as difficult to obtain without the use of systems thinking tools (Booth-Sweeney & Sterman, 2000 ; Grotzer et al., 2017 ; Plate & Monroe, 2014 ).

This aspect is especially important when change over time is crucial for thoroughly understanding a system’s behavior. Some phenomena are best investigated without consideration of change over time, for instance, in open systems where change to a system input affects all of the internally connected system aspects. Other phenomena are better described using the cumulative patterns of change observed in a system’s state over time. Phenomena that evolve in this way are not significantly impacted by external factors but have an internal feedback structure that dictates how change will occur as time passes (Booth-Sweeney & Sterman, 2000 , 2007 ). This aspect is operationalizable in two sub-aspects: (ST4a) determining the time frame necessary to describe a problem or phenomenon (Sterman, 1994 , 2002 ) and (ST4b) recognizing time-related behavioral patterns that are common both within and across systems (Nguyen & Santagata, 2021 ; Pallant & Lee, 2017 ; Riess & Mischo, 2010 ; Tripto et al., 2018 ).

ST5, predicting system behavior based on system structure , necessitates an understanding of how both direct causal relationships and more comprehensive substructures (e.g., feedback loops, accumulating variables, and interactions among them) influence behaviors common to all systems (Forrester, 1994). This aspect was proposed in 18 of the 27 manuscripts we reviewed (Table 1). Although only a subset of manuscripts discussed this aspect explicitly, those that do not nevertheless imply that predicting system behavior based on system structure is important to systems thinking, for example, by describing learning activities such as predicting behaviors based on graphs or data sets (Hmelo-Silver et al., 2017).

This aspect helps learners characterize complex systems by identifying common structures, which allows one to generalize about the connection between system structure and behavior and to develop guidelines that can be applied to systems of different types (Laszlo, 1996). There are three operationalizable sub-aspects: (ST5a) identifying how individual cause and effect relationships impact the broader system behavior (Barth-Cohen, 2018), (ST5b) recognizing how the various substructures of a system (specific types and combinations of variables within systems) influence its behavior (Danish et al., 2017), and (ST5c) predicting how specific structural modifications will change the dynamics of a system (Richmond, 1994).

Through our analysis of the literature, we synthesized 13 sub-aspects associated with five ST aspects. Aspects that are vague or difficult to measure, such as using experiential evidence from the real world together with simulations to “challenge the boundaries of mental (and formal) models” (Booth-Sweeney & Sterman, 2000 , p. 250) were not included, nor were aspects that are not commonly included in the literature. In addition, aspects related to CT and modeling practices, such as Richmond’s ( 1994 ) “quantitative thinking,” Hopper and Stave’s (2008) “using conceptual models and creating simulation models,” and Arnold and Wade’s (2017) “reducing complexity by modeling systems conceptually” were not listed in ST, but are included in CT or modeling.

Aspects and sub-aspects of computational thinking

From the review of the literature, we identified five key CT aspects in the context of modeling using the aforementioned filters: (CT1) decomposing problems such that they are computationally solvable, (CT2) creating artifacts using algorithmic thinking, (CT3) generating, organizing, and interpreting data, (CT4) testing and debugging, and (CT5) making iterative refinements (Fig.  1 right side). Figure  3 shows the distribution of CT aspects represented in the 28 manuscripts.

figure 3

Distribution of computational thinking aspects. ( Note: CT1. Decomposing problems such that they are computationally solvable; CT2. Creating artifacts using algorithmic thinking; CT3. Generating, organizing, and interpreting data; CT4. Testing and debugging; CT5. Making iterative refinements)

CT1, decomposing problems such that they are computationally solvable , consists of identifying elements and relationships observable in problems or phenomena and characterizing them in a way that is quantifiable and thus calculable. Problem decomposition deconstructs a problem into its constituent parts to make it computationally solvable and more manageable (Grover & Pea, 2018). All 28 manuscripts we reviewed specify this aspect (Fig. 3) in various ways as an essential characteristic of CT for understanding and representing problems so they can be readily solved, as shown in Table 2. Based on the literature, this aspect is operationalizable in three sub-aspects: (CT1a) describing a clear goal or question that can be answered, as well as an approach to answering the question using computational tools (Irgens et al., 2020; Shute et al., 2017; Wang et al., 2021), (CT1b) identifying the essential elements of a phenomenon or problem (Anderson, 2016; Türker & Pala, 2020), and (CT1c) describing elements in such a way that they are calculable for use in a computational representation of the phenomenon or problem (Brennan & Resnick, 2012; Chen et al., 2017; Hutchins et al., 2020; Lee & Malyn-Smith, 2020).

CT2, creating artifacts using algorithmic thinking , refers to developing a computational representation so that the output can explain and predict real-world phenomena. This is the essential aspect of CT, as proposed in all manuscripts with extensive descriptions as shown in Table  2 . This is a unique aspect of CT in that algorithmic thinking provides precise step-by-step procedures to generate problem solutions and involves defining a set of operations for manipulating variables to produce an output as a result of those manipulations (Ogegbo & Ramnarain, 2021 ; Sengupta et al., 2013 ; Shute et al., 2017 ). Weintrop and colleagues ( 2016 ) described CT in the form of a taxonomy focusing on creating computational artifacts (e.g., programming, algorithm development, and creating computational abstractions). This aspect is operationalizable in two sub-aspects: (CT2a) parameterizing relevant elements and defining relationships among elements so that a machine or human can interpret them (Anderson, 2016 ; Chen et al., 2017 ; Nardelli, 2019 ; Yadav et al., 2014 ) and (CT2b) encoding elements and relationships into an algorithmic form that can be executed (Aho, 2012 ; Hadad et al., 2020 ; Ogegbo & Ramnarain, 2021 ; Sengupta et al., 2013 ; Shute et al., 2017 ).

CT3, generating, organizing, and interpreting data , involves identifying meaningful patterns from a rich set of data to answer a question (ISTE & CSTA, 2011; National Research Council, 2010 , 2012 ; Weintrop et al., 2016 ). This aspect has gained more attention as a unique characteristic of CT recently with the realization of the importance of data mining, data analytics, and machine learning (Lee & Malyn-Smith, 2020 ). All manuscripts described this aspect as pattern recognition using abstract thinking (Anderson, 2016 ; Shute et al., 2017 ), data practices (Türker & Pala, 2020 ), or data management (Lee & Malyn-Smith, 2020 ). Weintrop and colleagues ( 2016 ) considered CT in terms of data practices that involve mathematical reasoning skills such as collecting, creating, manipulating, analyzing, and visualizing data. During this process, it is critical to find distinctive patterns and correlations in data, make claims, and draw conclusions (Grover & Pea, 2018 ). This aspect is operationalizable in two sub-aspects based on our review: (CT3a) planning for, generating, and organizing data using visual representations (e.g., tables, graphs, or maps) (Basu et al., 2016 ; Hutchins et al., 2020 ) and (CT3b) analyzing and interpreting data to identify relationships and trends (Ogegbo & Ramnarain, 2021 ; Shute et al., 2017 ).

CT4, testing and debugging , refers to evaluating the appropriateness of a computational solution based on the goal as well as the available supporting evidence (Grover & Pea, 2018 ; ISTE & CSTA, 2011; Weintrop et al., 2016 ). All manuscripts described the importance of this aspect as an evaluation or verification of the solution (Anderson, 2016 ; Basu et al., 2016 ), or in terms of fixing behavior, troubleshooting, or systematic trial and error processes (Aho, 2012 ; Brennan & Resnick, 2012 ; Sullivan & Heffernan 2016 ) (Table  2 ). It involves comparing a solution with real-world data or expected outcomes to refine the solution and analyze whether the solution behaves as expected (Grover & Pea, 2013 ; Weintrop et al., 2016 ). Through analyzing the manuscripts, this aspect is operationalizable in three sub-aspects: (CT4a) detecting issues in an inappropriate solution (Basu et al., 2016 ; Sullivan & Heffernan, 2016 ), (CT4b) fixing issues based on the behavior of the artifact (Aho, 2012 ; Brennan & Resnick, 2012 ), and (CT4c) confirming the solution using a range of inputs (Kolikant, 2011 ; Sengupta et al., 2013 ).
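As a concrete illustration of these sub-aspects (our own example, not taken from the reviewed literature), consider debugging a trivial computation: a test case with a known answer exposes the issue (CT4a), the faulty divisor is corrected (CT4b), and the fix is confirmed across a range of inputs (CT4c).

```python
# Illustrative sketch (ours, not from the reviewed manuscripts) of
# CT4's sub-aspects applied to a simple computation: detect an issue,
# fix it based on observed behavior, then confirm over many inputs.

def average_buggy(values):
    return sum(values) / (len(values) - 1)   # bug: off-by-one divisor

def average(values):
    return sum(values) / len(values)         # CT4b: corrected divisor

# CT4a: a case with a known expected outcome exposes the issue.
assert average_buggy([2, 4, 6]) != 4         # buggy result is 6, not 4

# CT4c: confirming the fixed solution using a range of inputs.
for case in ([2, 4, 6], [10], [0, 0, 0, 8], [-3, 3]):
    assert average(case) == sum(case) / len(case)
```

The same loop of detection, repair, and confirmation applies when learners compare a computational model's simulated output against expected system behavior rather than against a hand-computed value.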

CT5, making iterative refinements , is a process of repeatedly making gradual modifications to account for new evidence and new insights collected through observations (of the phenomenon and the output of a computational artifact), readings, and discussions (Grover & Pea, 2018; ISTE & CSTA, 2011; Shute et al., 2017; Weintrop et al., 2016). Seventeen authors refer to this aspect in terms of iterative and incremental refinement or development (Brennan & Resnick, 2012; Hutchins et al., 2020; Ogegbo & Ramnarain, 2021; Shute et al., 2017; Tang et al., 2020) (Fig. 3; Table 2). The authors we reviewed imply that iterative revision or refinement processes are essential for CT. Those who do not specify this aspect seem to include it in the process of CT2, creating artifacts using algorithmic thinking, as they expect learners to revise their artifacts multiple times as they gain more knowledge about the phenomenon (Nardelli, 2019; Selby & Woollard, 2013). To refine a solution, learners articulate the differences between their solution and the underlying phenomenon and reflect on the limitations of their solution. Through the review of CT literature, this aspect is operationalizable in three sub-aspects: (CT5a) making changes based on new conceptual understandings (Barr & Stephenson, 2011), (CT5b) making changes based on a comparison between computational outputs and validating data sources (Chen et al., 2017), and (CT5c) making changes due to an unexpected algorithmic behavior (Brennan & Resnick, 2012; Sengupta et al., 2013; Shute et al., 2017).

The analysis of our CT literature review synthesizes 13 sub-aspects associated with the five CT aspects. We did not include (1) generic features (e.g., generation, creativity, collaboration, critical thinking) although multiple authors listed them as CT, (2) perception or disposition features (e.g., confidence in dealing with complexity, persistence in working with difficult problems, tolerance for ambiguity, or the ability to deal with open-ended problems) (ISTE & CSTA, 2011), and (3) overly broad features (e.g., abstraction, problem-solving processes), which were often contextualized into more specific aspects of CT. For example, some authors unpack “abstraction” into the selection of essential steps by reducing repeated steps (Grover & Pea, 2018 ), which is covered by CT2 , creating artifacts using algorithmic thinking , in our framework.

Integration of ST and CT in modeling

Integration of ST and CT in ST literature

Some researchers view ST as related to CT through quantitative thinking (Booth-Sweeney & Sterman, 2000; Richmond, 1994) and creating simulation models (Arnold & Wade, 2017; Barth-Cohen, 2018; Dickes et al., 2016; Forrester, 1971; Stave & Hopper, 2007). Forrester regarded systems thinking as "a method for analyzing complex systems that uses computer simulation models to reveal how known structures and policies often produce unexpected and troublesome behavior" (1971, p. 115). Because of the reference to computer simulation models, we interpret this description as combining ST and CT through modeling. Computational modeling thus provides new ways to explore, understand, and represent interconnections among system elements, as well as to observe the output of system behaviors (Wilkerson et al., 2018).

Integration of ST and CT in CT literature

Researchers in CT (Berland & Wilensky, 2015; Brennan & Resnick, 2012; Lee & Malyn-Smith, 2020; Sengupta et al., 2013; Shute et al., 2017; Weintrop et al., 2016; Wing, 2011, 2017) claim that CT and ST are intertwined and support each other in successfully managing and solving complex problems across STEM disciplines. Wing (2017) contends that CT is "using abstraction and decomposition when designing a large complex system" (p. 8). CT supports representing the interrelationships among sub-parts in a system that are computational in nature and which form larger complex systems (Berland & Wilensky, 2015; Brennan & Resnick, 2012; Lee & Malyn-Smith, 2020). For example, while ST supports learners in conceptualizing a problem as a system of interacting elements, CT helps them make the relationships tractable through algorithms. As a result, learners understand larger and more complex systems (using ST) and find solutions efficiently (using CT).

While CT overlaps with ST, Shute and colleagues ( 2017 ) distinguish CT from ST in that CT aims to design efficient solutions to problems through computation while ST focuses on constructing and analyzing various relationships among elements in a system for explaining and generalizing them to other similar systems. Although there are relationships between the two ways of thinking, we view CT and ST as co-equal in the context of modeling because of their unique characteristics. Our framework thus defines CT and ST as separate entities (Fig.  1 ).

Integration of ST and CT in computational modeling

Our review of ST and CT literature shows that computational modeling is a promising context for learners to engage in ST and CT (Arnold & Wade, 2017 ; Barr & Stephenson, 2011 ; Fisher, 2018 ; Hopper & Stave, 2008 ; Kolikant, 2011 ). Since ST and CT are intrinsically linked to computational modeling, they support learners’ modeling practices (Fisher, 2018 ). Computational models are non-static representations of phenomena that can be simulated by a computer or a human and differ from static model representations (e.g., paper-pencil models) because they produce output values.

Efforts to bridge CT and STEM in K-12 science have centered prominently on building and using computational models (Ogegbo & Ramnarain, 2021; Sengupta et al., 2013; Shute et al., 2017; Sullivan & Heffernan, 2016). Computational models provide useful teaching and learning tools for integrating CT into STEM to make scientific ideas accessible to learners and enhance student understanding of phenomena (Nguyen & Santagata, 2021). As learners begin to build models, they can define the components and structural features so that a computer can interpret model behavior. CT aids learners in modeling for investigating, representing, and understanding a phenomenon or a system (Irgens et al., 2020; Sullivan & Heffernan, 2016).

Scholars have also developed computational modeling tools to promote ST (Levy & Wilensky, 2011; Richmond, 1994; Samon & Levy, 2019; Wilensky & Resnick, 1999), and argue that the ability to effectively use computer simulations is an important aspect of ST (National Research Council, 2011). ST supports learners in modeling to define the boundaries of the system (Dickes et al., 2016) and reduce the complexity of a system conceptually (Arnold & Wade, 2017). This research shows that learners develop proficiency with scientific ideas and ST while building computational models (e.g., Stella [Richmond, 1994], NetLogo [Levy & Wilensky, 2011; Samon & Levy, 2019; Yoon et al., 2017]). Below is a description of our framework, which encapsulates how modeling can integrate ST and CT, and how ST and CT can support learners' modeling practices.

Framework for computational systems modeling

The framework illustrates how each modeling practice draws upon aspects of both ST and CT to support explaining phenomena and solving problems (Fig. 1). We use example models created using SageModeler to show how ST and CT aspects are manifested in modeling. SageModeler is a free, web-based, open-source computational modeling tool with several affordances ( https://sagemodeler.concord.org ). Learners have: (1) multiple ways of building models (system diagrams, static equilibrium models, and dynamic time-based models), (2) multiple forms (visual and textual) of representing variables and relationships that are customizable by the learner, (3) multiple ways of defining functional relationships between variables without having to write equations or computer code, and (4) multiple pathways for generating visualizations of model output. To better illustrate our approach to integrating ST and CT in the context of modeling, we describe how these features of SageModeler can be used to support ST and CT in each modeling practice.

M1. Characterize a problem or phenomenon to model

The ability to characterize a problem or phenomenon to model supports learners in gaining a firm conceptual understanding of the phenomenon and helps to facilitate the modeling process by narrowing the scope of the phenomenon and determining the best modeling approach to apply (Dickes et al., 2016 ; Hutchins et al., 2020 ). Models are often built as aids for understanding a problem or perplexing observation. When learners face a problem or encounter a phenomenon to be understood or explained, they need to clearly define the problem or ask a “central question” to be investigated (Meadows, 2008 ). This allows the learner to delineate model boundaries (see M2 below), choosing only those elements and connections that are deemed relevant to the question. At this stage, these elements reflect learners’ general observations of the phenomenon or problem and may initially lack the specificity needed to design a computational model. For example, when learners consider anthropogenic climate change, they are likely to first identify the elements as “carbon dioxide,” “ocean,” “ice caps,” “agriculture,” “human activity,” etc. This step helps foster learners’ initial sense-making.

To fully engage in this practice, learners need to use ST1, defining a system, ST4, framing problems or phenomena in terms of behavior over time when appropriate, and CT1, decomposing problems such that they are computationally solvable . ST1 and CT1 are important in this modeling practice because they help learners focus on the question that needs to be answered computationally (Brennan & Resnick, 2012; Lee & Malyn-Smith, 2020). Science educators propose that "an explicit model of a system under study" (National Research Council, 2012, p. 90) can be a potential learning tool for deep understanding. In SageModeler, learners can create a text box for writing their questions and select from a number of different modeling strategies, which supports ST1, ST4, and CT1 (Fig. 4).

figure 4

Dynamic time-based model. ( Note. Learners build a dynamic time-based model with drag-and-drop features and a research question by converting the elements into measurable variables and setting relationships between them.)

ST4, framing problems or phenomena in terms of behavior over time , can be helpful if the phenomenon being studied displays dynamic behavior. The understanding of how different aspects of the phenomenon evolve can aid in determining which elements of the system should be included in the model to be built (see M2 below). Such understanding also helps learners choose an approach for building a model (e.g., dynamic or static equilibrium). One key feature in SageModeler is that it allows for the development of dynamic models containing feedback structures that can more accurately portray the behaviors of real-world phenomena over time and directly support learners in ST4 (Fig.  4 ). However, this type of modeling may not be appropriate for all phenomena and should only be used when it is necessary to explain how the behavior of the system changes over time. When characterizing a problem or phenomenon to model, learners need to consider whether a static or dynamic structure will better suit their purpose. Although specifics of that structure will likely emerge as the model is being constructed, the type of model chosen and its purposes may influence the choice of system boundaries.

M2. Define the boundaries of the system and M3. Design and construct model structure

These two practices are often connected (hence a box surrounds them in the framework) and tend to occur in a synchronous fashion as learners build and revise models.

M2. Define the boundaries of the system

When learners define the boundaries of the system, they break down the system into specific elements that better suit the aims of their question and facilitate modeling. Within this practice it is essential to consider the size and scope of the question under study by reviewing the elements, selecting those essential to understanding the behavior of the system (Anderson, 2016 ; Türker & Pala, 2020 ), and ignoring irrelevant elements. ST1, defining a system, CT1, decomposing problems such that they are computationally solvable , and CT2, creating artifacts using algorithmic thinking , support learners as they define the boundaries of the system.

ST1, defining a system , guides learners to examine the system and consider what is included and what is excluded in the model, in other words, specifying the boundary of the system being modeled (Arnold & Wade, 2017 ). ST1 also encompasses considerations of how the model components are linked to each other to form model structures that will impact the emergent behavior of the system. CT1, decomposing problems such that they are computationally solvable , is vital to this modeling practice because it breaks down complex phenomena into logical sequences of cause and effect that can be described computationally (Aho, 2012 ; Basu et al., 2016 ; Berland & Wilensky, 2015 ; Sengupta et al., 2013 ; Shute et al., 2017 ; Türker & Pala, 2020 ). Learners also use CT2, creating artifacts using algorithmic thinking , at this stage to redefine and encode the elements as measurable variables (Cansu & Cansu, 2019 ; Kolikant, 2011 ). Learners must determine how the elements they have chosen are causally connected and transform their abstract conceptual understanding of the system into concrete language that can be encoded meaningfully and can be computed. For example, an element previously identified as “human activity” could be redefined as variables, such as “amount of fossil fuels burned in power plants,” “amount of greenhouse gases,” and “amount of forest fires” (Fig.  4 ).

M3. Design and construct model structure

After defining measurable variables, learners can begin to design and construct model structure. When designing and constructing model structure, learners are actively involved in defining relationships among variables within the model. In a model of climate change, for example, learners set a relationship between “temperature of the Earth” and the “# of ice caps melting” variables by defining functional relationships, as shown in Fig.  4 . This modeling practice encourages learners to carefully examine cause and effect relationships within the system in a model (Levy & Wilensky, 2008 , 2009 , 2011 ; Wilensky & Resnick, 1999 ). ST2, engaging in causal reasoning , supports learners to describe both direct and indirect relationships among various components of a system model (Grotzer, 2003 , 2017).

As learners continue to build and revise their models, the goal is to move their attention from simple relationships between two adjacent variables towards observing the cumulative behavior of longer causal chains (such as the relationship between the "amount of fossil fuels burned" and the "# of ice caps melting"), as well as broader structural patterns, such as feedback loops (Dickes et al., 2016; Fisher, 2018; Grotzer, 2003, 2017; Jacobson et al., 2011). Knowledge of the connections between model structure and behavior supports learners in designing and building the model appropriately (Meadows, 2008; Perkins & Grotzer, 2005). Learners engaging in this modeling practice have an opportunity to use ST3, identifying interconnections and feedback . In turn, familiarity with ST4, framing problems or phenomena in terms of behavior over time , helps learners, when appropriate, to determine which variables represent accumulations in a system and how other variables interact with those accumulations over time. ST4 is also important when considering the length of time over which a model is to be simulated. A time frame that is too short may not reveal important behaviors in the model, while one that is too long may hide important behavioral detail.

The aspect of CT2, creating artifacts using algorithmic thinking , is important in this modeling practice, particularly in building computational models, which encode variables and relationships such that a computer can utilize this encoding to run a simulation (Nardelli, 2019; Ogegbo & Ramnarain, 2021; Sengupta et al., 2013). SageModeler takes a semiquantitative approach to defining how one variable affects another, and how accumulations and flows change over time in dynamic models. Initial values of the variables are set using a slider that goes from "low" to "high" (Fig. 5), and learners use words with associated graphs to define the links between variables (Fig. 4). The links between variables also change visually to show how those relationships are defined. These features support learners in ST2, engaging in causal reasoning , ST3, identifying interconnections and feedback , and CT2, creating artifacts using algorithmic thinking .
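As a hypothetical sketch of how such a word-based, semiquantitative link might be encoded for execution: the variable names, relationship phrases, and function mapping below are our own illustration, not SageModeler's actual implementation.

```python
# Hypothetical encoding of a semiquantitative link between two model
# variables; the relationship phrases and their function mapping are
# illustrative assumptions, not SageModeler's actual implementation.

# Slider values are normalized to the range 0 ("low") to 1 ("high").
RELATIONSHIPS = {
    "increases about the same": lambda x: x,         # linear
    "increases more and more":  lambda x: x ** 2,    # accelerating
    "increases less and less":  lambda x: x ** 0.5,  # saturating
    "decreases about the same": lambda x: 1 - x,     # inverse linear
}

def run_link(source_value, relationship):
    """Compute a target variable's value from a source variable's
    slider setting and a word-based relationship choice."""
    return RELATIONSHIPS[relationship](source_value)

# E.g., a high "amount of greenhouse gases" driving "temperature of
# the Earth" upward under an accelerating relationship:
temperature = run_link(0.8, "increases more and more")
```

An encoding along these lines is what allows a computer to simulate the model even though the learner never writes an equation: the chosen words select the function, and the slider supplies its input.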

M4. Test, evaluate, and debug model behavior

As learners construct computational models, they are constantly revising those models based on new evidence, incorporating new variables to match their growing understanding of the system or removing irrelevant variables because they are outside the scope and scale of the question (Basu et al., 2016 ; Brennan & Resnick, 2012 ). During revisions learners consider relationships they have set among variables and whether or not they result in accurate or expected behaviors when the model is simulated (Hadad et al., 2020 ; Lee et al., 2020 ). This iterative testing and evaluation continues until the learner is satisfied that the created artifact sufficiently represents the phenomenon or system under consideration. This modeling practice combines ST5, predicting system behavior based on system structure, CT3, generating, organizing, and interpreting data, CT4, testing and debugging , and CT5, making iterative refinements .

A major advantage of computational models is the opportunity to run a simulation, allowing learners to generate output from the model and test whether the model matches their conceptual understanding of the phenomenon. Simulation encourages learners to use ST5, predicting system behavior based on system structure , as they anticipate the model's output based on the visual representation of the model's structure. If, when comparing their conceptual understanding with the model's output using CT3, generating, organizing, and interpreting data (Aho, 2012; Selby & Woollard, 2013; Türker & Pala, 2020), and CT4, testing and debugging (Barr & Stephenson, 2011; Sengupta et al., 2013; Sullivan & Heffernan, 2016; Yadav et al., 2014), learners find that their models do not behave as expected, they can use CT5, making iterative refinements , and re-engage in M2 and M3 by redefining the system under study and revising their models.

In addition to utilizing simulation outputs, learners also make use of data from real-world measurements or experiments to help evaluate and make iterative changes to their models. Such external validation helps learners recognize how their model structures do or do not reflect the system they are modeling and guides their subsequent model revisions. Generating model output supports learner involvement in CT3, generating, organizing, and interpreting data, and CT4, testing and debugging. Pattern recognition and identifying relationships in data are important in the creation of an abstract model because they support learners in evaluating the behaviors of a model (Lee & Malyn-Smith, 2020; Shute et al., 2017). The entire M4 practice is supported by CT5, making iterative refinements, which guides learners in revising their models systematically based on evidence.

Once variables are chosen and linked together by relationships that have been defined semiquantitatively, SageModeler can simulate the model and generate model output that can be compared with expected behavior and external validating data sources (Fig. 5). Additionally, learners can create multivariate graphs using simulation output or external data to show the effect of any variable on any other variable in the model and to validate model output. For example, in Fig. 5, learners test their models by running simulations and changing the starting values of input variables (e.g., “# of cars burning gasoline”) to explore how downstream variables (e.g., “global temperature”) change, thus examining whether the simulation outputs meet their expectations. Learners can also generate a graph between two key variables (“# of cars burning gasoline” and “global temperature”) to test the model. Research on learners who build, revise, and test computational models with SageModeler shows promising impacts on student learning as they engage in ST and CT (Eidin et al., 2020).

Figure 5

Testing, evaluating, and debugging a model. Note. Learners examine a model through simulation and data generation.
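The testing strategy described for Fig. 5 (sweeping an input variable and checking a downstream variable's response) can be illustrated with a toy model. The quadratic response below is an invented stand-in, not data or code from the study.

```python
# Illustrative input sweep: vary "# of cars" and record the downstream
# "global temperature" to check a qualitative expectation.

def global_temperature(cars: float) -> float:
    """Toy downstream response on a 0-100 scale."""
    co2 = cars                          # direct link
    return (co2 / 100) ** 2 * 100       # accelerating link

# Change the starting value of the input variable across a range,
# producing the (input, output) pairs a learner would graph.
sweep = [(cars, global_temperature(cars)) for cars in range(0, 101, 25)]

# Qualitative expectation: temperature rises monotonically with cars.
rising = all(b[1] >= a[1] for a, b in zip(sweep, sweep[1:]))
```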

M5. Use model to explain and predict behavior of phenomenon or design solution to a problem

Once a model reaches a level of functionality where it appropriately and consistently illustrates the behavior of the system under exploration, learners engage in the modeling practice of using the model to explain and predict the behavior of a phenomenon or design a solution to a problem. This requires that learners utilize ST2, engaging in causal reasoning, and ST5, predicting system behavior based on system structure. Learners must read and interpret the model as a series of interconnected relationships among variables and make sense of the model output before they can use their model to facilitate a verbal or written explanation of the phenomenon or anticipate the outcome of an internal or external intervention on system behavior. Because the model serves as a tool to explain or predict a phenomenon or solve a problem, examining the usability of the model is critical (Schwarz & White, 2005). Therefore, learners should be able to articulate the differences between their model and the underlying real-world phenomenon, reflecting on both the limitations and usability of their model.

Further, CT3, the practice of generating, organizing, and interpreting data, supports learners as they compare model output to data collected from the real-world phenomenon and assess the similarities and differences between them. By constructing a computational model that mimics reality and considering the limits of the model to produce accurate behavior, learners gain an understanding of the power of modeling to leverage learning and build intuition about complex systems.

Table  3 summarizes how learners are engaged in 5 ST aspects with 13 associated sub-aspects and 5 CT aspects with 13 associated sub-aspects through modeling.

Implications and future directions

This framework serves as a foundation for developing curriculum, teacher and learner supports, assessments, and research instruments to promote, monitor, and explore how learners engage in ST and CT through model building, testing, evaluating, and revising. Specific aspects of ST and CT can guide the design of supports to help learners participate in knowledge construction through modeling, and can help researchers and practitioners develop indicators (evidence) that can clearly describe measurable behaviors that show whether learners use the desired ST and CT aspects. This approach provides a direction for designing activities to produce specific learner-generated knowledge products that can support modeling practices and their corresponding ST and CT aspects in K-12 STEM curricula.

Further studies in the context of well-developed curricula aligned with the Framework for Computational Systems Modeling are required to (1) explore additional ST and CT aspects learners use within the context of modeling, (2) confirm that these aspects can be observed through associated sub-aspects in modeling contexts, and (3) describe how and when learners use the ST and CT aspects through the five modeling practices defined in the framework.

Limitations

As is typical of an integrative review approach (Snyder, 2019), our literature review might be biased by our conceptual understanding of modeling, ST, and CT because we limited it to work within our defined set of search criteria. Due to the broad conceptualizations of CT and ST and the wide range of fields where they are applicable, our literature collection may have missed relevant studies. Given the breadth of these fields, it is difficult to condense all of the literature into one coherent manuscript. As such, we emphasized aspects of modeling, ST, and CT that synergized with and supported each other.

Conclusions

Modeling, systems thinking, and computational thinking are important for an educated STEM workforce and the general public to explain and predict scientific phenomena and to solve pressing global and local problems (National Research Council, 2012 ). ST and CT in the context of modeling are critical for professionals in science and engineering to advance knowledge about the natural world and for civic engagement by the public to understand and evaluate proposed solutions to local and global problems. We suggest that schools provide learners with more opportunities to develop, test, and revise computational models and thus use aspects of both systems thinking and computational thinking.

Additional opportunities exist for learning scientists to carry out an integrated and comprehensive research and development program, across a range of learning contexts, exploring the relationship between modeling, ST, CT, and student learning. Such a program of research should aim to integrate ST and CT through modeling to create pedagogically appropriate teaching and learning materials, and to develop and collect evidence to confirm learner engagement in modeling, ST, and CT. Our efforts in developing a framework contribute to this mission: educating learners as science-literate citizens who are proficient in building and using models, who adopt a systems thinking perspective, and who can take advantage of the computational power of algorithms to explain and predict phenomena or to develop a range of potential solutions to problems that plague our society and world.

Availability of data and material

Not applicable. Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Code Availability

Not applicable.

A static equilibrium model consists of a set of variables linked by relationships that define how one variable influences another. Any change to an input variable is immediately reflected in new values calculated for each variable in the system. There is no time component to this type of system model. Any change to the input instantaneously results in a new model state.
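The distinction this note draws can be made concrete with a short sketch, under invented names and rates: a static equilibrium model recomputes its outputs instantly from its inputs, while a dynamic model's stocks accumulate flows across time steps.

```python
# Static vs. dynamic system models, with illustrative rates only.

def static_output(inflow_rate: float) -> float:
    # Static: output is a pure function of the input; no time component.
    # Any change to the input instantaneously yields a new model state.
    return inflow_rate * 2.0

def dynamic_stock(inflow_rate: float, steps: int) -> list:
    # Dynamic: a stock accumulates the flow step by step, so the state
    # depends on how much time has elapsed.
    stock, history = 0.0, []
    for _ in range(steps):
        stock += inflow_rate
        history.append(stock)
    return history
```

Here `static_output(5.0)` is always the same value regardless of "when" it is asked, whereas `dynamic_stock(5.0, 3)` returns a growing history of states.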

Aho, A. V. (2012). Computation and computational thinking. The Computer Journal , 55(7), 832–835. https://doi.org/10.1093/comjnl/bxs074


Anderson, N. D. (2016). A call for computational thinking in undergraduate psychology. Psychology Learning & Teaching , 15(3), 226–234

Arnold, R. D., & Wade, J. P. (2015). A definition of systems thinking: A systems approach. Procedia Computer Science , 44, 669–678

Arnold, R. D., & Wade, J. P. (2017). A complete set of systems thinking skills. Insight , 20(3), 9–17

Barth-Cohen, L. (2018). Threads of local continuity between centralized and decentralized causality: Transitional explanations for the behavior of a complex system. Instructional Science , 46(5), 681–705

Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved and what is the role of the computer science education community? Acm Inroads , 2(1), 48–54. https://doi.org/10.1145/1929887.1929905

Basu, S., Biswas, G., Sengupta, P., Dickes, A., Kinnebrew, J. S., & Clark, D. (2016). Identifying middle school students’ challenges in computational thinking-based science learning. Research and Practice in Technology-enhanced Learning , 11(1), 13. https://doi.org/10.1007/s11257-017-9187-0

Benson, T. A. (2007). Developing a systems thinking capacity in learners of all ages. Waters Center for Systems Thinking. WatersCenterST.org . Retrieved December 17, 2021, from https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.535.9175&rep=rep1&type=pdf

Ben-Zvi Assaraf, O., & Orion, N. (2005). Development of system thinking skills in the context of Earth system education. Journal of Research in Science Teaching , 42(5), 518–560

Berland, M., & Wilensky, U. (2015). Comparing virtual and physical robotics environments for supporting complex systems and computational thinking. Journal of Science Education and Technology , 24(5), 628–647

Booth-Sweeney, L. B., & Sterman, J. D. (2000). Bathtub dynamics: Initial results of a systems thinking inventory. System Dynamics Review: The Journal of the System Dynamics Society , 16(4), 249–286

Booth-Sweeney, L. B., & Sterman, J. D. (2007). Thinking about systems: Student and teacher conceptions of natural and social systems. System Dynamics Review: The Journal of the System Dynamics Society , 23(2–3), 285–311

Brennan, K., & Resnick, M. (2012, April). Using artifact-based interviews to study the development of computational thinking in interactive media design. In Annual American Educational Research Association Meeting, Vancouver, BC, Canada . Retrieved May 19, 2022, from https://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=8D8C7AFCB470A17FA08153DA29D22AF8?doi=10.1.1.296.6602&rep=rep1&type=pdf

Cansu, S. K., & Cansu, F. K. (2019). An overview of computational thinking. International Journal of Computer Science Education in Schools , 3(1). https://doi.org/10.21585/ijcses.v3i1.53

Chen, G., Shen, J., Barth-Cohen, L., Jiang, S., Huang, X., & Eltoukhy, M. (2017). Assessing elementary students’ computational thinking in everyday reasoning and robotics programming. Computers & Education , 109, 162–175

Cronin, M. A., Gonzalez, C., & Sterman, J. D. (2009). Why don’t well-educated adults understand accumulation? A challenge to researchers, educators, and citizens. Organizational Behavior and Human Decision Processes , 108(1), 116–130

Danish, J., Saleh, A., Andrade, A., & Bryan, B. (2017). Observing complex systems thinking in the zone of proximal development. Instructional Science , 45(1), 5–24

Dickes, A. C., Sengupta, P., Farris, A. V., & Basu, S. (2016). Development of mechanistic reasoning and multilevel explanations of ecology in third grade using agent-based models. Science Education , 100(4), 734–776

Draper, F. (1993). A proposed sequence for developing systems thinking in a grades 4–12 curriculum. System Dynamics Review , 9(2), 207–214

Eidin, E., Bielik, T., Touitou, I., Bowers, J., McIntyre, C., Damelin, D. (2020, June 21–23). Characterizing advantages and challenges for students engaging in computational thinking and systems thinking through model construction . The Interdisciplinarity of the Learning Sciences, 14th International Conference of the Learning Sciences, Volume 1 (pp. 183–190). Nashville, Tennessee: International Society of the Learning Sciences. https://repository.isls.org//handle/1/6460 (conference canceled, online)

Fisher, D. (2018). Reflections on teaching system dynamics to secondary school students for over 20 years. Systems , 6(20), 12

Forrester, J. W. (1961). Industrial dynamics . Productivity Press

Forrester, J. W. (1971). Counterintuitive behavior of social systems. Theory and Decision , 2(2), 109–140

Forrester, J. W. (1994). System dynamics, systems thinking, and soft OR. System Dynamics Review , 10(2-3), 245–256

Grotzer, T. A., & Basca, B. B. (2003). How does grasping the underlying causal structures of ecosystems impact students’ understanding? Journal of Biological Education , 38(1), 16–29

Grotzer, T. A., Solis, S. L., Tutwiler, M. S., & Cuzzolino, M. P. (2017). A study of students’ reasoning about probabilistic causality: Implications for understanding complex systems and for instructional design. Instructional Science , 45(1), 25–52

Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field. Educational Researcher , 42(1), 38–43. https://doi.org/10.3102/0013189X12463051

Grover, S., & Pea, R. (2018). Computational thinking: A competency whose time has come. In S. Sentence, E. Barendsen, & C. Schulte (Eds.), Computer science education: Perspectives on teaching and learning in school (pp. 19–38). Bloomsbury Academic

Hadad, R., Thomas, K., Kachovska, M., & Yin, Y. (2020). Practicing formative assessment for computational thinking in making environments. Journal of Science Education and Technology , 29(1), 162–173

Halloun, I. A. (2007). Modeling theory in science education (24 vol.). Springer Science & Business Media

Harrison, A. G., & Treagust, D. F. (2000). A typology of school science models. International Journal of Science Education , 22(9), 1011–1026

Hmelo-Silver, C. E., Jordan, R., Eberbach, C., & Sinha, S. (2017). Systems learning with a conceptual representation: A quasi-experimental study. Instructional Science , 45(1), 53–72. https://doi.org/10.1007/s11251-016-9392-y

Holz, C., Siegel, L. S., Johnston, E., Jones, A. P., & Sterman, J. (2018). Ratcheting ambition to limit warming to 1.5 C–trade-offs between emission reductions and carbon dioxide removal. Environmental Research Letters , 13(6), 064028. http://hdl.handle.net/1721.1/121076

Hopper, M., & Stave, K. A. (2008). Assessing the effectiveness of systems thinking interventions in the classroom. In The 26th International Conference of the System Dynamics Society (pp. 1–26). Athens, Greece

Hutchins, N. M., Biswas, G., Maróti, M., Lédeczi, Á., Grover, S., Wolf, R. … McElhaney, K. (2020). C2STEM: A system for synergistic learning of physics and computational thinking. Journal of Science Education and Technology , 29(1), 83–100. https://link.springer.com/article/10.1007/s10956-019-09804-9

ISTE (International Society for Technology in Education) & CSTA (Computer Science Teachers Association) (2011). Computational thinking teacher resources . Retrieved December 17, 2021, from https://cdn.iste.org/www-root/Computational_Thinking_Operational_Definition_ISTE.pdf

Irgens, G. A., Dabholkar, S., Bain, C., Woods, P., Hall, K., Swanson, H. … Wilensky, U. (2020). Modeling and measuring high school students’ computational thinking practices in science. Journal of Science Education and Technology , 29(1), 137–161. https://doi.org/10.1007/s10956-020-09811-1

Jacobson, M. J., & Wilensky, U. (2006). Complex systems in education: Scientific and educational importance and implications for the learning sciences. The Journal of the Learning Sciences , 15(1), 11–34

Jacobson, M. J., Kapur, M., So, H. J., & Lee, J. (2011). The ontologies of complexity and learning about complex systems. Instructional Science , 39(5), 763–783

Kim, D. H. (1999). Introduction to systems thinking (Vol. 16). Pegasus Communications

Kolikant, Y. B. D. (2011). Computer science education as a cultural encounter: a socio-cultural framework for articulating teaching difficulties. Instructional Science , 39(4), 543–559

Krajcik, J., & Shin, N. (2022). Project-based learning. In R. K. Sawyer (Ed.), Cambridge handbook of the learning sciences 3rd edition (pp. 72–92). Cambridge University Press

Laszlo, E. (1996). The systems view of the world: A holistic vision for our time . Hampton Press

Lee, I., Grover, S., Martin, F., Pillai, S., & Malyn-Smith, J. (2020). Computational thinking from a disciplinary perspective: Integrating computational thinking in K-12 science, technology, engineering, and mathematics education. Journal of Science Education and Technology , 29(1), 1–8. https://doi.org/10.1007/s10956-019-09803-w

Lee, I., & Malyn-Smith, J. (2020). Computational thinking integration patterns along the framework defining computational thinking from a disciplinary perspective. Journal of Science Education and Technology , 29(1), 9–18

Levy, S. T., & Wilensky, U. (2008). Inventing a “mid-level” to make ends meet: Reasoning between the levels of complexity. Cognition and Instruction , 26(1), 1–47

Levy, S. T., & Wilensky, U. (2009). Crossing levels and representations: The Connected Chemistry (CC1) curriculum. Journal of Science Education and Technology , 18(3), 224–242

Levy, S. T., & Wilensky, U. (2011). Mining students’ inquiry actions for understanding of complex systems. Computers & Education , 56(3), 556–573

Madubuegwn, C. E., Okechukwu, G. P., Dominic, O. E., Nwagbo, S., & Ibekaku, U. K. (2021). Climate change and challenges of global interventions: A critical analysis of Kyoto protocol and Paris agreement. Journal of Policy and Development Studies , 13(1), 01–10. https://www.researchgate.net/publication/354872613


Martinez-Moyano, I. J., & Richardson, G. P. (2013). Best practices in system dynamics modeling. System Dynamics Review , 29(2), 102–123

Meadows, D. (2008). Thinking in systems: A primer . Chelsea Green Publishing

Metcalf, J. S., Krajcik, J., Soloway, E. (2000). Model-It: A design retrospective. In Jacobson, M. J., Kozma, R. B. (Eds.), Innovations in science and mathematics education (pp. 77–115). Mahwah, NJ: Lawrence Erlbaum

Nardelli, E. (2019). Do we really need computational thinking? Communications of the ACM , 62(2), 32–35

National Research Council. (2000). How people learn: Brain, mind, experience, and school: Expanded edition . The National Academies Press

National Research Council. (2007). Taking science to school: Learning and teaching science in grades K-8 . National Academies Press

National Research Council. (2010). Report of a workshop on the scope and nature of computational thinking . National Academies Press

National Research Council. (2011). Key points expressed by presenters and discussants. Report of a workshop on the pedagogical aspects of computational thinking (pp. 6–35). The National Academies Press. https://doi.org/10.17226/13170

National Research Council. (2012). A framework for K-12 science education: Practices, crosscutting concepts, and core ideas . National Academies Press

NGSS Lead States. (2013). Next generation science standards: For states, by states . The National Academies Press

Nguyen, H., & Santagata, R. (2021). Impact of computer modeling on learning and teaching systems thinking. Journal of Research in Science Teaching , 58(5), 661–688

Ogegbo, A. A., & Ramnarain, U. (2021). A systematic review of computational thinking in science classrooms. Studies in Science Education , 1–28. https://doi.org/10.1080/03057267.2021.1963580

Ossimitz, G. (2000). Teaching system dynamics and systems thinking in Austria and Germany. In The 18th International Conference of the System Dynamics Society . Bergen, Norway

Pallant, A., & Lee, H. S. (2017). Teaching sustainability through system dynamics: Exploring stocks and flows embedded in dynamic computer models of an agricultural land management system. Journal of Geoscience Education , 65(2), 146–157

Perkins, D. N., & Grotzer, T. A. (2005). Dimensions of causal understanding: The role of complex causal models in students’ understanding of science. Studies in Science Education , 14(1), 117–166. https://doi.org/10.1080/03057260508560216

Plate, R., & Monroe, M. (2014). A structure for assessing systems thinking. The Creative Learning Exchange , 23(1), 1–3

Psycharis, S., & Kallia, M. (2017). The effects of computer programming on high school students’ reasoning skills and mathematical self-efficacy and problem solving. Instructional Science , 45(5), 583–602

American Association for the Advancement of Science (1993). Project 2061: Benchmarks for Science Literacy . New York: Oxford University Press.

Richmond, B. (1993). Systems thinking: Critical thinking skills for the 1990s and beyond. System Dynamics Review , 9(2), 113–133. https://doi.org/10.1002/sdr.4260090203

Richmond, B. (1994). System dynamics/systems thinking: Let’s just get on with it. System Dynamics Review , 10(2–3), 135–157

Richmond, B. (1997). The thinking in systems thinking: how can we make it easier to master? The Systems Thinker , 8(2), 1–5

Riess, W., & Mischo, C. (2010). Promoting systems thinking through biology lessons. International Journal of Science Education , 32(6), 705–725

Samon, S., & Levy, S. T. (2020). Interactions between reasoning about complex systems and conceptual understanding in learning chemistry. Journal of Research in Science Teaching , 57(1), 58–86. https://doi.org/10.1002/tea.21585

Schwarz, C. V., Passmore, C., & Reiser, B. J. (2017). Helping students make sense of the world using next generation science and engineering practices . NSTA Press

Schwarz, C. V., & White, B. Y. (2005). Metamodeling knowledge: Developing students’ understanding of scientific modeling. Cognition and Instruction , 23(2), 165–205

Schwarz, C. V., Reiser, B. J., Davis, E. A., Kenyon, L., Achér, A., Fortus, D. … Krajcik, J. (2009). Developing a learning progression for scientific modeling: Making scientific modeling accessible and meaningful for learners. Journal of Research in Science Teaching , 46(6), 632–654

Selby, C. C., & Woollard, J. (2013, March 5–8). Computational thinking: The developing definition. Special Interest Group on Computer Science Education , Atlanta, GA. Retrieved December 17, 2021, from https://core.ac.uk/download/pdf/17189251.pdf

Senge, P. M. (1990). The fifth discipline: The art and practice of the learning organization . Currency Doubleday

Sengupta, P., Kinnebrew, J. S., Basu, S., Biswas, G., & Clark, D. (2013). Integrating computational thinking with K-12 science education using agent-based computation: A theoretical framework. Education and Information Technologies , 18(2), 351–380

Shute, V. J., Sun, C., & Asbell-Clarke, J. (2017). Demystifying computational thinking. Educational Research Review , 22(1), 142–158

Snyder, H. (2019). Literature review as a research methodology: An overview and guidelines. Journal of Business Research , 104, 333–339. https://doi.org/10.1016/j.jbusres.2019.07.039

Stave, K., & Hopper, M. (2007). What constitutes systems thinking? A proposed taxonomy. In 25th International Conference of the System Dynamics Society . Boston, Massachusetts. Retrieved March 25, 2022, from https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.174.4065&rep=rep1&type=pdf

Sterman, J. D. (1994). Learning in and about complex systems. System Dynamics Review , 10(2–3), 291–330

Sterman, J. D. (2002, May 29–30). System dynamics: Systems thinking and modeling for a complex world. Massachusetts Institute of Technology Engineering Systems Division. MIT Sloan School of Management. http://hdl.handle.net/1721.1/102741

Sterman, J. D., & Sweeney, L. B. (2002). Cloudy skies: assessing public understanding of global warming. System Dynamics Review: The Journal of the System Dynamics Society , 18(2), 207–240. https://dspace.mit.edu/bitstream/handle/1721.1/102741/esd-wp-2003-01.13.pdf?sequence=1

Sullivan, F. R., & Heffernan, J. (2016). Robotic construction kits as computational manipulatives for learning in the STEM disciplines. Journal of Research on Technology in Education , 48(2), 105–128

Tang, X., Yin, Y., Lin, Q., Hadad, R., & Zhai, X. (2020). Assessing computational thinking: A systematic review of empirical studies. Computers & Education , 148, 103798. https://doi.org/10.1016/j.compedu.2019.103798

Tripto, J., Ben-Zvi Assaraf, O., & Amit, M. (2018). Recurring patterns in the development of high school biology students’ system thinking over time. Instructional Science , 46(5), 639–680

Türker, P. M., & Pala, F. K. (2020). The effect of algorithm education on students’ computer programming self-efficacy perceptions and computational thinking skills. International Journal of Computer Science Education in Schools , 3(3), 19–32. https://doi.org/10.21585/ijcses.v3i3.69

Wang, C., Shen, J., & Chao, J. (2021). Integrating computational thinking in STEM education: A literature review. International Journal of Science and Mathematics Education , 1–24

Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016). Defining computational thinking for mathematics and science classrooms. Journal of Science Education and Technology , 25(1), 127–147

Wen, C., Chang, C., Chang, M., Chiang, S. F., Liu, C., Hwang, F., & Tsai, C. (2018). The learning analytics of model-based learning facilitated by a problem-solving simulation game. Instructional Science , 46(6), 847–867

Whittemore, R., & Knafl, K. (2005). The integrative review: Updated methodology. Journal of Advanced Nursing , 52(5), 546–553

Wilensky, U., & Resnick, M. (1999). Thinking in levels: A dynamic systems approach to making sense of the world. Journal of Science Education and Technology , 8(1), 3–19. https://doi.org/10.1023/A:1009421303064

Wilkerson, M. H., Shareff, R., Laina, V., & Gravel, B. (2018). Epistemic gameplay and discovery in computational model-based inquiry activities. Instructional Science , 46(1), 35–60

Wing, J. M. (2006). Computational thinking. Communications of the ACM , 49(3), 33–35

Wing, J. M. (2008). Computational thinking and thinking about computing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences , 366(1881), 3717–3725

Wing, J. M. (2011). Computational thinking–What and why? News from the School of Computer Science , 6, 20–23

Wing, J. M. (2017). Computational thinking’s influence on research and education for all. Italian Journal of Educational Technology , 25(2), 7–14.  https://doi.org/10.17471/2499-4324/922

Yadav, A., Mayfield, C., Zhou, N., Hambrusch, S., & Korb, J. T. (2014). Computational thinking in elementary and secondary teacher education. ACM Transactions on Computing Education (TOCE) , 14(1), 1–16

Yoon, S. A., Anderson, E., Koehler-Yom, J., Evans, C., Park, M., Sheldon, J. … Klopfer, E. (2017). Teaching about complex systems is no simple matter: Building effective professional development for computer-supported complex systems instruction. Instructional Science , 45(1), 99–121

Yoon, S. A., Goh, S., & Park, M. (2018). Teaching and learning about complex systems in K–12 science education: A review of empirical studies 1995–2015. Review of Educational Research , 88(2), 285–325. https://doi.org/10.3102/0034654317746090


Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant Nos. DRL-1842035 and DRL-1842037. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Funding

National Science Foundation under Grant Nos. DRL-1842035 and DRL-1842037.

Author information

Authors and Affiliations

CREATE for STEM Institute, Michigan State University, 620 Farm Lane, Suite 115, 48824, East Lansing, MI, USA

Namsoo Shin, Jonathan Bowers, Emil Eidin & Joseph Krajcik

The Concord Consortium, 25 Love Lane, 01742, Concord, MA, USA

Steve Roderick, Cynthia McIntyre, A. Lynn Stephens & Daniel Damelin


Contributions

All authors contributed to the conceptualization of the framework. The first draft of the manuscript was written by Namsoo Shin, Jonathan Bowers, Steve Roderick, and Cynthia McIntyre and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. Namsoo Shin conducted literature reviews and led the direction of the manuscript. Jonathan Bowers co-wrote the manuscript throughout the entire writing process of this manuscript. Steve Roderick conducted initial literature reviews and wrote the first draft of the systems thinking section, and selected students’ models based on the framework as examples. Cynthia McIntyre co-wrote and reviewed the description of the framework. Lynn Stephens searched, conducted, and co-wrote extensive literature reviews of systems thinking. Emanuel Eidin provided necessary manuscripts and reviewed the analysis of the literature. Joseph Krajcik oversaw the direction of the manuscript, and iteratively reviewed and revised the manuscript. Daniel Damelin led the entire development of the framework.

Corresponding author

Correspondence to Namsoo Shin .

Ethics declarations

Conflict of interest.

The authors declare that they have no competing interests.

Ethics approval and consent to participate

We obtained ethics approval and consent to participate.

Consent for publication

All authors agreed with the content, all gave explicit consent to submit, and all obtained consent from the responsible authorities at the institute/organization where the work was carried out before the work was submitted.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Shin, N., Bowers, J., Roderick, S. et al. A framework for supporting systems thinking and computational thinking through constructing models. Instr Sci 50 , 933–960 (2022). https://doi.org/10.1007/s11251-022-09590-9

Download citation

Received : 01 February 2021

Revised : 26 March 2022

Accepted : 10 April 2022

Published : 24 July 2022

Issue Date : December 2022

DOI : https://doi.org/10.1007/s11251-022-09590-9


  • Systems thinking
  • Computational thinking
  • Computational modeling
  • Science education
  • High school


May 1, 2024


New computer algorithm supercharges climate models and could lead to better predictions of future climate change

by University of Oxford


Earth System Models—complex computer models that describe Earth processes and how they interact—are critical for predicting future climate change. By simulating the response of our land, oceans and atmosphere to manmade greenhouse gas emissions, these models form the foundation for predictions of future extreme weather and climate event scenarios, including those issued by the UN Intergovernmental Panel on Climate Change (IPCC).

However, climate modelers have long faced a major problem. Because Earth System Models integrate many complicated processes, a simulation cannot be run immediately: the model must first be brought to a stable equilibrium representative of real-world conditions before the industrial revolution. Without this initial settling period—referred to as the "spin-up" phase—the model can "drift," simulating changes that may be erroneously attributed to manmade factors.

Unfortunately, this process is extremely slow as it requires running the model for many thousands of model years which, for IPCC simulations, can take as much as two years on some of the world's most powerful supercomputers.

However, a study published in Science Advances by a University of Oxford scientist describes a new computer algorithm which can be applied to Earth System Models to drastically reduce spin-up time.

During tests on models used in IPCC simulations, the algorithm was on average 10 times faster at spinning up the model than currently used approaches, reducing the time taken to achieve equilibrium from many months to under a week.

Study author Samar Khatiwala, Professor of Earth Sciences at the University of Oxford's Department of Earth Sciences, who devised the algorithm, said, "Minimizing model drift at a much lower cost in time and energy is obviously critical for climate change simulations, but perhaps the greatest value of this research may ultimately be to policy makers who need to know how reliable climate projections are."

Currently, the lengthy spin-up time of many IPCC models prevents climate researchers from running their models at higher resolution and from defining uncertainty by carrying out repeat simulations.

By drastically reducing the spin-up time, the new algorithm will enable researchers to investigate how subtle changes to the model parameters can alter the output—which is critical for defining the uncertainty of future emission scenarios.

Professor Khatiwala's new algorithm employs a mathematical approach known as sequence acceleration, an idea whose roots go back to the mathematician Euler.

In the 1960s this idea was applied by D. G. Anderson to speed up the solution of Schrödinger's equation, which predicts how matter behaves at the microscopic level. So important is this problem that more than half the world's supercomputing power is currently devoted to solving it, and "Anderson Acceleration," as it is now known, is one of the most commonly used algorithms employed for it.

Professor Khatiwala realized that Anderson Acceleration might also be able to reduce model spin-up time since both problems are of an iterative nature: an output is generated and then fed back into the model many times over. By retaining previous outputs and combining them into a single input using Anderson's scheme, the final solution is achieved much more quickly.
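The retain-and-combine idea can be sketched in a few lines of Python. This is an illustrative toy only, not the study's actual implementation: a scalar contraction (`x -> cos(x)`) stands in for an Earth System Model, the function names `picard` and `anderson` are our own, and the history window `m` is an arbitrary choice.

```python
import numpy as np

def picard(g, x0, tol=1e-10, maxit=500):
    """Plain spin-up-style iteration: feed each output straight back in."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for k in range(1, maxit + 1):
        xn = g(x)
        if np.linalg.norm(xn - x) < tol:
            return xn, k
        x = xn
    return x, maxit

def anderson(g, x0, m=5, tol=1e-10, maxit=500):
    """Anderson acceleration: combine the last few outputs into the next input."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    gs, fs = [], []                       # recent outputs g(x_k) and residuals g(x_k) - x_k
    for k in range(1, maxit + 1):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return gx, k
        gs.append(gx); fs.append(f)
        if len(fs) > m + 1:               # keep only a short history window
            gs.pop(0); fs.pop(0)
        if len(fs) == 1:
            x = gx                        # not enough history yet: plain step
        else:
            dF = np.column_stack([fs[i + 1] - fs[i] for i in range(len(fs) - 1)])
            dG = np.column_stack([gs[i + 1] - gs[i] for i in range(len(gs) - 1)])
            # Least-squares mixing weights over the stored residual differences
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma
    return x, maxit

# Toy "model": the contraction x -> cos(x), whose equilibrium is x* ~ 0.739085
x_p, n_p = picard(np.cos, 0.0)
x_a, n_a = anderson(np.cos, 0.0)
# Both reach the same equilibrium; Anderson needs far fewer iterations
```

Both routines stop when successive iterates agree to within `tol`; the accelerated version reuses the stored history to extrapolate toward the equilibrium rather than crawling there one step at a time.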

Not only does this make the spin-up process much faster and less computationally expensive, but the concept can be applied to the huge variety of different models that are used to investigate, and inform policy on, issues ranging from ocean acidification to biodiversity loss.

With research groups around the world beginning to spin-up their models for the next IPCC report, due in 2029, Professor Khatiwala is working with a number of them, including the UK Met Office, to trial his approach and software in their models.

Professor Helene Hewitt OBE, Co-chair for the Coupled Model Intercomparison Project (CMIP) Panel, which will inform the next IPCC report, said, "Policymakers rely on climate projections to inform negotiations as the world tries to meet the Paris Agreement. This work is a step towards reducing the time it takes to produce those critical climate projections."

Professor Colin Jones, Head of the NERC/Met Office-sponsored UK Earth system modeling, said, "Spin-up has always been prohibitively expensive in terms of computational cost and time. The new approaches developed by Professor Khatiwala have the promise to break this logjam and deliver a quantum leap in the efficiency of spinning up such complex models and, as a consequence, greatly increase our ability to deliver timely, robust estimates of global climate change."

Journal information: Science Advances

Provided by University of Oxford


Gemini 1.5: Our next-generation model, now available for Private Preview in Google AI Studio

Last week, we released Gemini 1.0 Ultra in Gemini Advanced. You can try it out now by signing up for a Gemini Advanced subscription . The 1.0 Ultra model, accessible via the Gemini API, has seen a lot of interest and continues to roll out to select developers and partners in Google AI Studio .

Today, we’re also excited to introduce our next-generation Gemini 1.5 model, which uses a new Mixture-of-Experts (MoE) approach to improve efficiency. It routes your request to a group of smaller "expert" neural networks so responses are faster and higher quality.
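In spirit, a Mixture-of-Experts layer is a learned router plus several small sub-networks, of which only a few run per input. The NumPy sketch below illustrates that routing idea only; the random expert and router weights, the sizes, and the `moe_forward` function are stand-ins of ours, not anything from Gemini.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 8, 4, 2

# Stand-in weights: each "expert" is a tiny linear layer; the router scores experts per input.
experts = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_experts)]
router = rng.standard_normal((d, n_experts)) / np.sqrt(d)

def moe_forward(x):
    """Send input x to its top-k experts only, then mix their outputs."""
    scores = x @ router
    idx = np.argsort(scores)[-top_k:]            # indices of the k best-scoring experts
    w = np.exp(scores[idx] - scores[idx].max())
    w /= w.sum()                                 # softmax over just the chosen experts
    # Unchosen experts never run, so compute scales with top_k, not n_experts.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, idx))

y = moe_forward(rng.standard_normal(d))          # same output shape as a dense layer
```

The payoff is that the layer's total parameter count can grow with the number of experts while the per-request compute stays proportional to `top_k`.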

Developers can sign up for our Private Preview of Gemini 1.5 Pro, our mid-sized multimodal model optimized for scaling across a wide range of tasks. The model features a new, experimental 1 million token context window, and will be available to try out in Google AI Studio. Google AI Studio is the fastest way to build with Gemini models and enables developers to easily integrate the Gemini API in their applications. It’s available in 38 languages across 180+ countries and territories.

1,000,000 tokens: Unlocking new use cases for developers

Before today, the largest context window in the world for a publicly available large language model was 200,000 tokens. We’ve been able to significantly increase this — running up to 1 million tokens consistently, achieving the longest context window of any large-scale foundation model. Gemini 1.5 Pro will come with a 128,000 token context window by default, but today’s Private Preview will have access to the experimental 1 million token context window.

We’re excited about the new possibilities that larger context windows enable. You can directly upload large PDFs, code repositories, or even lengthy videos as prompts in Google AI Studio. Gemini 1.5 Pro will then reason across modalities and output text.

1) Upload multiple files and ask questions

We’ve added the ability for developers to upload multiple files, like PDFs, and ask questions in Google AI Studio. The larger context window allows the model to take in more information—making the output more consistent, relevant and useful. With this 1 million token context window, we’ve been able to load in over 700,000 words of text in one go.


2) Query an entire code repository

The large context window also enables deep analysis of an entire codebase, helping Gemini models grasp the complex relationships and patterns in the code. A developer could upload a new codebase directly from their computer or via Google Drive, and use the model to onboard quickly and gain an understanding of the code.

3) Add a full length video

Gemini 1.5 Pro can also reason across up to 1 hour of video. When you attach a video, Google AI Studio breaks it down into thousands of frames (without audio), and then you can perform highly sophisticated reasoning and problem-solving tasks since the Gemini models are multimodal.

More ways for developers to build with Gemini models

In addition to bringing you the latest model innovations, we’re also making it easier for you to build with Gemini:

  • Easy tuning. Provide a set of examples, and you can customize Gemini for your specific needs in minutes from inside Google AI Studio. This feature rolls out in the next few days. 
  • New developer surfaces. Integrate the Gemini API to build new AI-powered features today with new Firebase Extensions, across your development workspace in Project IDX, or with our newly released Google AI Dart SDK. 
  • Lower pricing for Gemini 1.0 Pro. We’re also updating the 1.0 Pro model, which offers a good balance of cost and performance for many AI tasks. Today’s stable version is priced 50% less for text inputs and 25% less for outputs than previously announced. Pay-as-you-go plans for AI Studio are coming soon.

Since December, developers of all sizes have been building with Gemini models, and we’re excited to turn cutting edge research into early developer products in Google AI Studio . Expect some latency in this preview version due to the experimental nature of the large context window feature, but we’re excited to start a phased rollout as we continue to fine-tune the model and get your feedback. We hope you enjoy experimenting with it early on, like we have.



COMMENTS

  1. Computational Thinking for Problem Solving

    Computational thinking is a problem-solving process in which the last step is expressing the solution so that it can be executed on a computer. However, before we are able to write a program to implement an algorithm, we must understand what the computer is capable of doing -- in particular, how it executes instructions and how it uses data. ...

  2. Mathematical modeling and problem solving: from fundamentals to

    The rapidly advancing fields of machine learning and mathematical modeling, greatly enhanced by the recent growth in artificial intelligence, are the focus of this special issue. This issue compiles extensively revised and improved versions of the top papers from the workshop on Mathematical Modeling and Problem Solving at PDPTA'23, the 29th International Conference on Parallel and Distributed ...

  3. Machine Learning, Modeling, and Simulation Principles

    Find out what MIT xPRO can do for your team. INQUIRE NOW. Enroll in MIT's Machine Learning, Modeling & Simulation Principles Online Course and learn from MIT faculty and industry experts. In this online course, you will explore the computational tools used in engineering problem-solving.

  4. Computational Thinking Defined

    Computational Thinking is a set of techniques for solving complex problems that can be classified into three steps: Problem Specification, Algorithmic Expression, and Solution Implementation & Evaluation. The principles involved in each step of the Computational Thinking approach are listed above and discussed in detail below.

  5. Machine Learning, Modeling, and Simulation: Engineering Problem-Solving

    Leveraging the rich experience of the faculty at the MIT Center for Computational Science and Engineering (CCSE), this program connects your science and engineering skills to the principles of machine learning and data science. With an emphasis on the application of these methods, you will put these new skills into practice in real time.

  6. Introduction to Computation and Programming Using Python

    The new edition of an introduction to the art of computational problem solving using Python. This book introduces students with little or no prior programming ... With Application to Computational Modeling and Understanding Data. by John V. Guttag. Paperback. $75.00. ISBN ...

  7. Demystifying computational thinking

    The main commonality between CT and mathematical thinking is problem solving processes (Wing, 2008). Fig. 1 shows the full set of shared concepts of computational and mathematical thinking: problem solving, modeling, data analysis and interpretation, and statistics and probability.

  8. Introduction to Elementary Computational Modeling

    With an emphasis on problem solving, this book introduces the basic principles and fundamental concepts of computational modeling. It emphasizes reasoning and conceptualizing problems, the elementary mathematical modeling, and the implementation using computing concepts and principles. Examples are included that demonstrate the computation and visualization of the implemented models. The ...

  9. Problem Solving and Mathematical Modeling

    The theory of constraints (ToC) plays a crucial role in engineering problem solving and modeling [4, 5]. The important and limiting factors which are crucial to consider in problem solving come from the ToC. ... In daily life, we come across different types of problems, like social problems, managerial problems, computational problems, etc ...

  10. Computational Modeling and Problem Solving in the Networked World

    These two articles present philosophical perspectives on computation, covering a variety of traditional and newer methods for modeling, solving, and explaining mathematical models. The next set includes articles that study machine learning and computational heuristics, and is followed by articles that address issues in performance testing of ...

  11. What is Computational Thinking?

    Computational thinking skills, in the outermost circle, are the cognitive processes necessary to engage with computational tools to solve problems. These skills are the foundation to engage in any computational problem solving and should be integrated into early learning opportunities in K-3. Computational thinking practices, in the middle ...

  12. Computational Problem Solving Conceptual Framework

    Solving a complex computational problem is an adaptive process that follows iterative cycles of ideation, testing, debugging, and further development. Computational problem solving involves systematically evaluating the state of one's own work, identifying when and how a given operation requires fixing, and implementing the needed corrections.

  13. Computational Mathematics BS

    This course serves as an introduction to computational thinking using a problem-centered approach. Specific topics covered include: expression of algorithms in pseudo code and a programming language; functional and imperative programming techniques; control structures; problem solving using recursion; basic searching and sorting; elementary data structures such as lists, trees, and graphs; and ...

  14. Computational Models of Problem Solving

    models of memory and problem solving include (but are not limited to) the following: 1. Analysis of human problem-solving protocols and the computational properties they suggest. 2. Methods for gaining access to past examples among a variety of instances in memory. 3. Mapping of similarities between cases and the instantiation of rules in new

  15. Modeling a Problem-Solving Approach Through Computational Thinking for

    Modeling a Problem-Solving Approach Through Computational Thinking for Teaching Programming Abstract: Contribution: A problem-solving approach (PSA) model derived from major computational thinking (CT) concepts. This model can be utilized to formulate solutions for different algorithmic problems and translate them into effective active learning ...

  16. Fostering computational thinking through educational robotics: a model

    The creative computational problem solving (CCPS) model presented in the current study represents a hybrid model combining these two perspectives and adapting them to the context of ERS. Similar to the model of Lumsdaine and Lumsdaine, the proposed model involves the definition of different phases and iterations. However, while Lumsdaine and ...

  17. Computationally inspired cognitive modeling

    This problem solving model is used in our daily lives not only in computer science, but also in language, history, science, mathematics and art. While there is such a thing as 'disconnected' computational thinking, modern computational thinking often includes a solution containing technology, such as a computer, to execute an algorithm.

  18. Developing computational skills through simulation based problem

    This setup provides a computational environment in which simulations can be started and analyzed in the same notebook. A key learning activity is a project in which students tackle a given task over a period of approximately 2 months in a small group. Our experience is that the self-paced problem-solving nature of the project work -- combined ...

  19. Solving Computational Problems

    Throughout these lessons we will advocate a five-step approach to solving computational problems. Below, we outline the approach and relate it to the problem at hand. ... Before we can employ a computer we must find a mathematical model for the problem. We will model the earth as a perfect sphere of radius 4000 miles and the earth's population ...

  20. Computational science

    Computational science, also known as scientific computing, technical computing or scientific computation (SC), is a division of science that uses advanced computing capabilities to understand and solve complex physical problems. This includes algorithms (numerical and non-numerical): mathematical models, computational models, and computer simulations developed to solve sciences (e.g., physical ...

  21. Defining Computational Thinking for Mathematics and Science ...

    A primary motivation for introducing computational thinking practices into science and mathematics classrooms is the rapidly changing nature of these disciplines as they are practiced in the professional world (Bailey and Borwein 2011; Foster 2006; Henderson et al. 2007). In the last 20 years, nearly every field related to science and mathematics has seen the growth of a computational counterpart.

  22. Creative Problem Solving in Large Language and Vision Models -- What

    In this paper, we discuss approaches for integrating Computational Creativity (CC) with research in large language and vision models (LLVMs) to address a key limitation of these models, i.e., creative problem solving. We present preliminary experiments showing how CC principles can be applied to address this limitation through augmented prompting. With this work, we hope to foster discussions ...

  23. A Nonlinear Programming Approach to Solving Interval-Valued ...

    Initially, fuzzy sets and intuitionistic fuzzy sets were used to address real-world problems with imprecise data. Eventually, the notion of the hesitant fuzzy set was formulated to handle decision makers' reluctance to accept asymmetric information. However, in certain scenarios, asymmetric information is gathered in terms of a possible range of acceptance and nonacceptance by players rather ...

  24. Examining primary students' mathematical problem-solving in a

    2.1 Computational thinking. Computational thinking (CT) is a term describing the thought processes involved in formulating a problem and expressing its solutions in such a way that a computer can effectively carry out the sequence (Wing 2006); hence, CT is a powerful cognitive tool for problem-solving across all spectra of human inquiry. As Papert argued, "computer presence could contribute ...

  25. Preconditioning Hybrid Discontinuous Galerkin Schemes for

    A significant part of the dissertation is devoted to overcoming the computational inefficiency in solving the condensed symmetric and indefinite global linear systems arising from these schemes. A uniform block-diagonal preconditioner is developed, demonstrating robustness with respect to mesh size and model parameters.

  26. New computer algorithm supercharges climate models and ...

    A study describes a new computer algorithm which can be applied to Earth System Models to drastically reduce the time needed to prepare these in order to make accurate predictions of future ...

  27. CoMM: Collaborative Multi-Agent, Multi-Reasoning-Path Prompting for

    Large Language Models (LLMs) have shown great ability in solving traditional natural language tasks and elementary reasoning tasks with appropriate prompting techniques. However, their ability is still limited in solving complicated science problems. In this work, we aim to push the upper bound of the reasoning capability of LLMs by proposing a collaborative multi-agent, multi-reasoning-path ...

  28. A framework for supporting systems thinking and computational thinking

    The framework includes essential aspects of ST and CT based on selected literature, and illustrates how each modeling practice draws upon aspects of both ST and CT to support explaining phenomena and solving problems. We use computational models to show how these ST and CT aspects are manifested in modeling.

  29. New computer algorithm supercharges climate models and could lead to

    However, climate modelers have long faced a major problem. Because Earth System Models integrate many complicated processes, they cannot immediately run a simulation; they must first ensure that ...

  30. Gemini 1.5: Our next-generation model, now available for Private

    Last week, we released Gemini 1.0 Ultra in Gemini Advanced. You can try it out now by signing up for a Gemini Advanced subscription. The 1.0 Ultra model, accessible via the Gemini API, has seen a lot of interest and continues to roll out to select developers and partners in Google AI Studio. Today, we're also excited to introduce our next-generation Gemini 1.5 model, which uses a new Mixture ...