What It Takes to Think Deeply About Complex Problems

  • Tony Schwartz

Three ways to embrace a more nuanced, spacious perspective.

The problems we’re facing often seem as intractable as they do complex. But as Albert Einstein famously observed, “We cannot solve our problems with the same level of thinking that created them.” So what does it take to increase the complexity of our thinking? To cultivate a more nuanced, spacious perspective, start by challenging your convictions. Ask yourself, “What am I not seeing here?” and “What else might be true?” Second, do your most challenging task first every day, when your mind is fresh and before distractions arise. And third, pay attention to how you’re feeling. Embracing complexity means learning to better manage tough emotions like fear and anger.

  • Tony Schwartz is the CEO of The Energy Project and the author of The Way We’re Working Isn’t Working. Become a fan of The Energy Project on Facebook.

Conceptual Analysis

Complex problem solving: what it is and what it is not.

Dietrich Dörner 1 and Joachim Funke 2

  • 1 Department of Psychology, University of Bamberg, Bamberg, Germany
  • 2 Department of Psychology, Heidelberg University, Heidelberg, Germany

Computer-simulated scenarios have been part of psychological research on problem solving for more than 40 years. The shift in emphasis from simple toy problems to complex, more real-life oriented problems has been accompanied by discussions about the best ways to assess the process of solving complex problems. Psychometric issues such as reliable assessments and addressing correlations with other instruments have been in the foreground of these discussions and have left the content validity of complex problem solving in the background. In this paper, we return the focus to content issues and address the important features that define complex problems.

Succeeding in the 21st century requires many competencies, including creativity, life-long learning, and collaboration skills (e.g., National Research Council, 2011 ; Griffin and Care, 2015 ), to name only a few. One competence that seems to be of central importance is the ability to solve complex problems ( Mainzer, 2009 ). Mainzer quotes the Nobel prize winner Simon (1957) who wrote as early as 1957:

The capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problem whose solution is required for objectively rational behavior in the real world or even for a reasonable approximation to such objective rationality. (p. 198)

The shift from well-defined to ill-defined problems came about as a result of a disillusion with the “general problem solver” ( Newell et al., 1959 ): The general problem solver was a computer program intended to solve all kinds of problems that can be expressed through well-formed formulas. However, it soon became clear that this procedure was in fact a “special problem solver” that could only solve well-defined problems in a closed space. But real-world problems feature open boundaries and have no well-determined solution. In fact, the world is full of wicked problems and clumsy solutions ( Verweij and Thompson, 2006 ). As a result, solving well-defined problems and solving ill-defined problems require different cognitive processes ( Schraw et al., 1995 ; but see Funke, 2010 ).

Well-defined problems have a clear set of means for reaching a precisely described goal state. For example, in a matchstick arithmetic problem, a person receives a false arithmetic expression constructed out of matchsticks (e.g., IV = III + III). According to the instructions, moving one of the matchsticks will make the equation true. Here, both the problem (find the appropriate stick to move) and the goal state (a true arithmetic expression; the solution is VI = III + III) are clearly defined.
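
A minimal sketch of what “well-defined” means here, written in Python purely as our own illustration (the hard-coded Roman-numeral values are an assumption of the sketch, not part of the original task description): the goal test can be stated completely in advance.

```python
# Goal test for the matchstick item: an expression counts as solved exactly when
# its Roman-numeral arithmetic holds. Values are hard-coded for brevity.
ROMAN = {"III": 3, "IV": 4, "VI": 6}

def equation_holds(lhs, a, b):
    """Is 'lhs = a + b' a true arithmetic expression?"""
    return ROMAN[lhs] == ROMAN[a] + ROMAN[b]

print(equation_holds("IV", "III", "III"))  # False: the faulty equation as given
print(equation_holds("VI", "III", "III"))  # True: after moving one matchstick
```

Because the operator (move exactly one stick) and this goal test are both fixed in advance, the entire search space of such a problem can in principle be enumerated.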

Ill-defined problems have no clear problem definition, their goal state is not clearly defined, and the means of moving towards the (diffusely described) goal state are not clear. For example, the goal state for solving the Near East conflict between Israel and Palestine is not clearly defined (living in peaceful harmony with each other?), and even if the conflicting parties agreed on a two-state solution, this goal would still leave many issues unresolved. This type of problem is called a “complex problem” and is of central importance to this paper. All psychological processes that occur within individual persons and deal with the handling of such ill-defined complex problems will be subsumed under the umbrella term “complex problem solving” (CPS).

Systematic research on CPS started in the 1970s with observations of the behavior of participants who were confronted with computer simulated microworlds. For example, in one of those microworlds participants assumed the role of executives who were tasked to manage a company over a certain period of time (see Brehmer and Dörner, 1993 , for a discussion of this methodology). Today, CPS is an established concept and has even influenced large-scale assessments such as PISA (“Programme for International Student Assessment”), organized by the Organization for Economic Cooperation and Development ( OECD, 2014 ). According to the World Economic Forum, CPS is one of the most important competencies required in the future ( World Economic Forum, 2015 ). Numerous articles on the subject have been published in recent years, documenting the increasing research activity relating to this field. In the following collection of papers we list only those published in 2010 and later: theoretical papers ( Blech and Funke, 2010 ; Funke, 2010 ; Knauff and Wolf, 2010 ; Leutner et al., 2012 ; Selten et al., 2012 ; Wüstenberg et al., 2012 ; Greiff et al., 2013b ; Fischer and Neubert, 2015 ; Schoppek and Fischer, 2015 ), papers about measurement issues ( Danner et al., 2011a ; Greiff et al., 2012 , 2015a ; Alison et al., 2013 ; Gobert et al., 2015 ; Greiff and Fischer, 2013 ; Herde et al., 2016 ; Stadler et al., 2016 ), papers about applications ( Fischer and Neubert, 2015 ; Ederer et al., 2016 ; Tremblay et al., 2017 ), papers about differential effects ( Barth and Funke, 2010 ; Danner et al., 2011b ; Beckmann and Goode, 2014 ; Greiff and Neubert, 2014 ; Scherer et al., 2015 ; Meißner et al., 2016 ; Wüstenberg et al., 2016 ), one paper about developmental effects ( Frischkorn et al., 2014 ), one paper with a neuroscience background ( Osman, 2012 ) 1 , papers about cultural differences ( Güss and Dörner, 2011 ; Sonnleitner et al., 2014 ; Güss et al., 2015 ), papers about validity issues ( Goode and Beckmann, 2010 ; Greiff et al., 2013c ; Schweizer et al., 2013 ; Mainert et al., 2015 ; Funke et al., 2017 ; Greiff et al., 2017 , 2015b ; Kretzschmar et al., 2016 ; Kretzschmar, 2017 ), review papers and meta-analyses ( Osman, 2010 ; Stadler et al., 2015 ), and finally books ( Qudrat-Ullah, 2015 ; Csapó and Funke, 2017b ) and book chapters ( Funke, 2012 ; Hotaling et al., 2015 ; Funke and Greiff, 2017 ; Greiff and Funke, 2017 ; Csapó and Funke, 2017a ; Fischer et al., 2017 ; Molnàr et al., 2017 ; Tobinski and Fritz, 2017 ; Viehrig et al., 2017 ). In addition, a new “Journal of Dynamic Decision Making” (JDDM) has been launched ( Fischer et al., 2015 , 2016 ) to give the field an open-access outlet for research and discussion.

This paper aims to clarify aspects of validity: what should be meant by the term CPS and what not? This clarification seems necessary because misunderstandings in recent publications provide – from our point of view – a potentially misleading picture of the construct. We start this article with a historical review before attempting to systematize different positions. We conclude with a working definition.

Historical Review

The concept behind CPS goes back to the German phrase “komplexes Problemlösen” (used as a book title by Funke, 1986 ). The concept was first introduced in Germany by Dörner and colleagues in the mid-1970s (see Dörner et al., 1975 ; Dörner, 1975 ). The German phrase was later translated to CPS in the titles of two edited volumes by Sternberg and Frensch (1991) and Frensch and Funke (1995a) that collected papers from different research traditions. Even though it looks as though the term was coined in the 1970s, Edwards (1962) had already used the term “dynamic decision making” to describe decisions that come in a sequence. He compared static with dynamic decision making, writing:

In dynamic situations, a new complication not found in the static situations arises. The environment in which the decision is set may be changing, either as a function of the sequence of decisions, or independently of them, or both. It is this possibility of an environment which changes while you collect information about it which makes the task of dynamic decision theory so difficult and so much fun. (p. 60)

The ability to solve complex problems is typically measured via dynamic systems that contain several interrelated variables that participants need to alter. Early work (see, e.g., Dörner, 1980 ) used a simulation scenario called “Lohhausen” that contained more than 2000 variables that represented the activities of a small town: Participants had to take over the role of a mayor for a simulated period of 10 years. The simulation condensed these ten years to ten hours in real time. Later, researchers used smaller dynamic systems as scenarios either based on linear equations (see, e.g., Funke, 1993 ) or on finite state automata (see, e.g., Buchner and Funke, 1993 ). In these contexts, CPS consisted of the identification and control of dynamic task environments that were previously unknown to the participants. Different task environments came along with different degrees of fidelity ( Gray, 2002 ).
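
To make the logic of such tasks concrete, here is a minimal Python sketch of a microworld based on linear equations, in the spirit of (but not identical to) the small systems mentioned above; the two variables, the coefficients, and the inputs are invented for illustration.

```python
import numpy as np

# Hidden system: the next state is a linear function of the current state and
# the participant's inputs, x_{t+1} = A x_t + B u_t.
A = np.array([[1.0, 0.2],      # off-diagonal entry = side effect between variables
              [0.0, 0.9]])     # diagonal entry below 1 = eigendynamics (decays on its own)
B = np.array([[0.5, 0.0],
              [0.0, 1.0]])     # each control input mainly drives one endogenous variable

def step(x, u):
    """Advance the hidden system by one discrete time step."""
    return A @ x + B @ u

x = np.array([10.0, 10.0])                      # starting state, unknown to the participant
for u in ([1.0, 0.0], [0.0, 1.0], [0.0, 0.0]):  # three rounds of intervention
    x = step(x, np.array(u))
    print(np.round(x, 2))
```

In this framing, identification means inferring the structure of A and B from such input-output observations, and control means choosing inputs that steer the state toward given target values.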

According to Funke (2012) , the typical attributes of complex systems are (a) complexity of the problem situation, which is usually represented by the sheer number of involved variables; (b) connectivity and mutual dependencies between the involved variables; (c) dynamics of the situation, which reflects the role of time and developments within a system; (d) intransparency (in part or in full) about the involved variables and their current values; and (e) polytely (from the Greek for “many goals”), representing goal conflicts on different levels of analysis. This mixture of features is similar to what is called VUCA (volatility, uncertainty, complexity, ambiguity) in modern approaches to management (e.g., Mack et al., 2016 ).

In his evaluation of the CPS movement, Sternberg (1995) compared (young) European approaches to CPS with (older) American research on expertise. His analysis of the differences between the European and American traditions shows advantages but also potential drawbacks for each side. He states (p. 301): “I believe that although there are problems with the European approach, it deals with some fundamental questions that American research scarcely addresses.” So, even though the European approach did not resonate strongly in the US at that time, it was valued by scholars like Sternberg and others. Before attending to validity issues, we will first present a short review of the different research streams.

Different Approaches to CPS

In the short history of CPS research, different approaches can be identified ( Buchner, 1995 ; Fischer et al., 2017 ). To systematize, we differentiate between the following five lines of research:

(a) The search for individual differences comprises studies identifying interindividual differences that affect the ability to solve complex problems. This line of research is reflected, for example, in the early work by Dörner et al. (1983) and their “Lohhausen” study. Here, naïve student participants took over the role of the mayor of a small simulated town named Lohhausen for a simulation period of ten years. According to the authors’ results, it is not intelligence (as measured by conventional IQ tests) that predicts performance, but rather the ability to stay calm in the face of a challenging situation and the ability to switch easily between an analytic mode of processing and a more holistic one.

(b) The search for cognitive processes deals with the processes behind understanding complex dynamic systems. Representative of this line of research is, for example, Berry and Broadbent’s (1984) work on implicit and explicit learning processes when people interact with a dynamic system called “Sugar Production”. They found that those who perform best in controlling a dynamic system can do so implicitly, without explicit knowledge of details regarding the system’s relations.

(c) The search for system factors seeks to identify the aspects of dynamic systems that determine the difficulty of complex problems and make some problems harder than others. Representative of this line of research is, for example, work by Funke (1985) , who systematically varied the number of causal effects within a dynamic system or the presence/absence of eigendynamics. He found, for example, that solution quality decreases as the number of system relations increases.

(d) The psychometric approach develops measurement instruments that can be used as an alternative to classical IQ tests, as something that goes “beyond IQ”. The MicroDYN approach ( Wüstenberg et al., 2012 ) is representative of this line of research, presenting an alternative to reasoning tests (like Raven matrices). These authors demonstrated that a small improvement in predicting school grade point average beyond reasoning is possible with MicroDYN tests.

(e) The experimental approach explores CPS under different experimental conditions. This approach uses CPS assessment instruments to test hypotheses derived from psychological theories and is sometimes used in research about cognitive processes (see above). Exemplary for this line of research is the work by Rohe et al. (2016) , who tested the usefulness of “motto goals” in the context of complex problems compared to more traditional learning and performance goals. Motto goals differ from pure performance goals by activating positive affect and should lead to better goal attainment, especially in complex situations (the cited study, however, found no such effect).

To be clear: these five approaches are not mutually exclusive and do overlap. But the differentiation helps to identify different research communities and different traditions. These communities had different opinions about scaling complexity.

The Race for Complexity: Use of More and More Complex Systems

In the early years of CPS research, microworlds started with systems containing about 20 variables (“Tailorshop”), soon reached 60 variables (“Moro”), and culminated in systems with about 2000 variables (“Lohhausen”). This race for complexity ended with the introduction of the concept of “minimal complex systems” (MCS; Greiff and Funke, 2009 ; Funke and Greiff, 2017 ), which ushered in a search for the lower bound of complexity instead of the upper bound, which could not be defined as easily. The idea behind this concept was that whereas the upper limits of complexity are unbounded, the lower limits might be identifiable. Imagine starting with a simple system containing two variables with a simple linear connection between them; then, step by step, increase the number of variables and/or the type of connections, as sketched below. One soon reaches a point where the system can no longer be considered simple and has become a “complex system”. This point represents a minimal complex system. Despite some research having been conducted in this direction, the point of transition from simple to complex has not yet been clearly identified.
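
The following Python sketch mirrors that thought experiment under our own arbitrary assumptions (variable counts, numbers of links, and the 0.3 coefficient are invented): it grows a linear system one connection at a time.

```python
import numpy as np

# Grow a linear system step by step: start from variables that merely persist,
# then add cross-connections one at a time and watch the structure densify.
def growing_system(n_vars, extra_links):
    A = np.eye(n_vars)                       # each variable persists over time
    off_diag = [(i, j) for i in range(n_vars) for j in range(n_vars) if i != j]
    for i, j in off_diag[:extra_links]:      # add one cross-connection per step
        A[i, j] = 0.3
    return A

for n, k in [(2, 1), (3, 3), (4, 6)]:        # progressively richer structure
    print(f"{n} variables, {k} links:\n{growing_system(n, k)}")
```

Somewhere along such a progression the system stops being “simple”; pinning down where exactly that happens is the open question mentioned above.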

Some years later, the original “minimal complex systems” approach ( Greiff and Funke, 2009 ) shifted to the “multiple complex systems” approach ( Greiff et al., 2013a ). This shift is more than a slight change in wording: it is important because it taps into the issue of validity directly. Minimal complex systems have been introduced in the context of challenges from large-scale assessments like PISA 2012 that measure new aspects of problem solving, namely interactive problems besides static problem solving ( Greiff and Funke, 2017 ). PISA 2012 required test developers to remain within testing time constraints (given by the school class schedule). Also, test developers needed a large item pool for the construction of a broad class of problem solving items. It was clear from the beginning that MCS deal with simple dynamic situations that require controlled interaction: the exploration and control of simple ticket machines, simple mobile phones, or simple MP3 players (all of these example domains were developed within PISA 2012) – rather than really complex situations like managerial or political decision making.

As a consequence of this subtle but important shift in interpreting the letters MCS, the definition of CPS became a subject of debate recently ( Funke, 2014a ; Greiff and Martin, 2014 ; Funke et al., 2017 ). In the words of Funke (2014b , p. 495):

It is funny that problems that nowadays come under the term ‘CPS’, are less complex (in terms of the previously described attributes of complex situations) than at the beginning of this new research tradition. The emphasis on psychometric qualities has led to a loss of variety. Systems thinking requires more than analyzing models with two or three linear equations – nonlinearity, cyclicity, rebound effects, etc. are inherent features of complex problems and should show up at least in some of the problems used for research and assessment purposes. Minimal complex systems run the danger of becoming minimal valid systems.

Searching for minimal complex systems is not the same as gaining insight into the way humans deal with complexity and uncertainty. For psychometric purposes, it is appropriate to reduce complexity to a minimum; for understanding problem solving under conditions of overload, intransparency, and dynamics, it is necessary to realize those attributes with reasonable strength. This aspect is illustrated in the next section.

Importance of the Validity Issue

The most important reason for discussing the question of what complex problem solving is and what it is not stems from its phenomenology: if we lose sight of our phenomena, we are no longer doing good psychology. The relevant phenomena in the context of complex problems encompass many important aspects. In this section, we discuss four phenomena that are specific to complex problems. We consider these phenomena critical for theory development and for the construction of assessment instruments (i.e., microworlds). These phenomena require theories to explain them and assessment instruments that elicit them in a reliable way.

The first phenomenon is the emergency reaction of the intellectual system ( Dörner, 1980 ): When dealing with complex systems, actors tend to (a) reduce their intellectual level by decreasing self-reflection, decreasing their intentions, stereotyping, and reducing the realization of their intentions; (b) show a tendency toward fast action, with increased readiness for risk, increased violations of rules, and an increased tendency to escape the situation; and (c) degenerate their hypothesis formation by constructing more global hypotheses and testing them less, by increasing entrenchment, and by decontextualizing their goals. This phenomenon illustrates the strong connection between cognition, emotion, and motivation that has been emphasized by Dörner (see, e.g., Dörner and Güss, 2013 ) from the beginning of his research tradition; the emergency reaction reveals a shift in the mode of information processing under the pressure of complexity.

The second phenomenon comprises cross-cultural differences with respect to strategy use ( Strohschneider and Güss, 1999 ; Güss and Wiley, 2007 ; Güss et al., 2015 ). Results from complex task environments illustrate the strong influence of context and background knowledge to an extent that cannot be found for knowledge-poor problems. For example, in a comparison between Brazilian and German participants, it turned out that Brazilians accept the given problem descriptions and are more optimistic about the results of their efforts, whereas Germans tend to inquire more about the background of the problems and take a more active approach but are less optimistic (according to Strohschneider and Güss, 1998 , p. 695).

The third phenomenon relates to failures that occur during the planning and acting stages ( Jansson, 1994 ; Ramnarayan et al., 1997 ), illustrating that rational procedures seem to be unlikely to be used in complex situations. The potential for failures ( Dörner, 1996 ) rises with the complexity of the problem. Jansson (1994) presents seven major areas for failures with complex situations: acting directly on current feedback; insufficient systematization; insufficient control of hypotheses and strategies; lack of self-reflection; selective information gathering; selective decision making; and thematic vagabonding.

The fourth phenomenon describes (a lack of) training and transfer effects ( Kretzschmar and Süß, 2015 ), which again illustrates the context dependency of strategies and knowledge (i.e., there is no strategy so universal that it can be used in many different problem situations). In their own experiment, the authors were able to show training effects only for knowledge acquisition, not for knowledge application. Only with specific feedback can performance in complex environments be increased ( Engelhart et al., 2017 ).

These four phenomena illustrate why the type of complexity (or degree of simplicity) used in research really matters. Furthermore, they demonstrate effects that are specific for complex problems, but not for toy problems. These phenomena direct the attention to the important question: does the stimulus material used (i.e., the computer-simulated microworld) tap and elicit the manifold of phenomena described above?

Dealing with partly unknown complex systems requires courage, wisdom, knowledge, grit, and creativity. In creativity research, “little c” and “BIG C” are used to differentiate between everyday creativity and eminent creativity ( Beghetto and Kaufman, 2007 ; Kaufman and Beghetto, 2009 ). Everyday creativity is important for solving everyday problems (e.g., finding a clever fix for a broken spoke on my bicycle), eminent creativity changes the world (e.g., inventing solar cells for energy production). Maybe problem solving research should use a similar differentiation between “little p” and “BIG P” to mark toy problems on the one side and big societal challenges on the other. The question then remains: what can we learn about BIG P by studying little p? What phenomena are present in both types, and what phenomena are unique to each of the two extremes?

Discussing research on CPS requires reflecting on the field’s research methods. Even if the experimental approach has been successful for testing hypotheses (for an overview of older work, see Funke, 1995 ), other methods might provide additional and novel insights. Complex phenomena require complex approaches to understand them. The complex nature of complex systems imposes limitations on psychological experiments: the more complex the environments, the more difficult it is to keep conditions under experimental control. And if experiments have to be run in labs, one should bring enough complexity into the lab to establish the phenomena mentioned, at least in part.

There are interesting options to be explored (again): think-aloud protocols , which have been discredited for many years ( Nisbett and Wilson, 1977 ) and yet are a valuable source for theory testing ( Ericsson and Simon, 1983 ); introspection ( Jäkel and Schreiber, 2013 ), which seems to be banned from psychological methods but nevertheless offers insights into thought processes; the use of life-streaming ( Wendt, 2017 ), a medium in which streamers generate a video stream of think-aloud data in computer-gaming; political decision-making ( Dhami et al., 2015 ) that demonstrates error-proneness in groups; historical case studies ( Dörner and Güss, 2011 ) that give insights into the thinking styles of political leaders; the use of the critical incident technique ( Reuschenbach, 2008 ) to construct complex scenarios; and simulations with different degrees of fidelity ( Gray, 2002 ).

The methods toolbox is full of instruments that have to be explored more carefully before any individual instrument receives a ban or research narrows its focus to only one paradigm for data collection. Brehmer and Dörner (1993) discussed the tensions between “research in the laboratory and research in the field”, optimistically concluding “that the new methodology of computer-simulated microworlds will provide us with the means to bridge the gap between the laboratory and the field” (p. 183). The idea behind this optimism was that computer-simulated scenarios would bring more complexity from the outside world into the controlled lab environment. But this is not true for all simulated scenarios. In his paper on simulated environments, Gray (2002) differentiated computer-simulated environments with respect to three dimensions: (1) tractability (“the more training subjects require before they can use a simulated task environment, the less tractable it is”, p. 211), (2) correspondence (“High correspondence simulated task environments simulate many aspects of one task environment. Low correspondence simulated task environments simulate one aspect of many task environments”, p. 214), and (3) engagement (“A simulated task environment is engaging to the degree to which it involves and occupies the participants; that is, the degree to which they agree to take it seriously”, p. 217). But the mere fact that a task is called a “computer-simulated task environment” does not mean anything specific in terms of these three dimensions. This is one of several reasons why we should differentiate between those studies that do not address the core features of CPS and those that do.

What is not CPS?

Even though a growing number of references claiming to deal with complex problems exist (e.g., Greiff and Wüstenberg, 2015 ; Greiff et al., 2016 ), it would be better to label the requirements within these tasks “dynamic problem solving,” as has been done adequately in earlier work ( Greiff et al., 2012 ). The dynamics behind on-off switches ( Thimbleby, 2007 ) are remarkable but not really complex. Small nonlinear systems that exhibit stunningly complex and unstable behavior do exist – but they are not used in psychometric assessments of so-called CPS. There are other small systems (like MicroDYN scenarios: Greiff and Wüstenberg, 2014 ) that exhibit simple forms of system behavior that are completely predictable and stable. This type of simple system is used frequently. It is even offered commercially as a complex problem-solving test called COMPRO ( Greiff and Wüstenberg, 2015 ) for business applications. But a closer look reveals that the label is not used correctly; within COMPRO, the linear equations used are far from complex, and the system can be handled properly by using only one strategy (see Funke et al., 2017 , for more details).

Why do simple linear systems not fall within CPS? At the surface, nonlinear and linear systems might appear similar because both only include 3–5 variables. But the difference is in terms of systems behavior as well as strategies and learning. If the behavior is simple (as in linear systems where more input is related to more output and vice versa), the system can be easily understood (participants in the MicroDYN world have 3 minutes to explore a complex system). If the behavior is complex (as in systems that contain strange attractors or negative feedback loops), things become more complicated and much more observation is needed to identify the hidden structure of the unknown system ( Berry and Broadbent, 1984 ; Hundertmark et al., 2015 ).
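
The contrast can be illustrated with two toy update rules of our own choosing (they are not taken from any of the cited tasks): a linear rule can be extrapolated after a few observations, whereas even a one-variable nonlinear rule such as the logistic map resists prediction by casual observation.

```python
# Linear versus nonlinear behavior in the smallest possible setting.
def linear_update(x):
    return 0.8 * x + 2.0            # "more input -> proportionally more output"

def logistic_update(x, r=3.9):
    return r * x * (1.0 - x)        # deterministic, yet chaotic for r near 4

x_lin, x_non = 5.0, 0.4
for t in range(1, 6):
    x_lin, x_non = linear_update(x_lin), logistic_update(x_non)
    print(t, round(x_lin, 2), round(x_non, 3))
```

After a handful of trials the linear trajectory is already obvious; the chaotic one still looks arbitrary, which is why far more observation is needed before the hidden structure of such a system can be identified.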

Another issue is learning. If tasks can be solved using a single (and not so complicated) strategy, steep learning curves are to be expected. The shift from problem solving to learned routine behavior occurs rapidly, as was demonstrated by Luchins (1942) . In his water jar experiments, participants quickly acquired a specific strategy (a mental set) for solving certain measurement problems that they later continued applying to problems that would have allowed for easier approaches. In the case of complex systems, learning can occur only on very general, abstract levels because it is difficult for human observers to make specific predictions. Routines dealing with complex systems are quite different from routines relating to linear systems.
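
The pull of such a mental set can be made concrete with a water-jar item of the kind Luchins used as a “critical” test (jar capacities 23, 49, and 3 with target 20; the numbers, code, and names here are our own sketch, not an implementation from his study): both the practiced recipe and a much shorter route reach the goal, yet the acquired set hides the shorter one.

```python
# Two routes to the same target: the practiced recipe from the training problems
# (B - A - 2C) and the shorter direct route (A - C).
def practiced_routine(a, b, c):
    """Fill B, pour off A once and C twice: B - A - 2C."""
    return b - a - 2 * c

def direct_route(a, c):
    """Fill A, pour off C once: A - C."""
    return a - c

a, b, c, target = 23, 49, 3, 20
print(practiced_routine(a, b, c) == target)  # True, via the longer pouring sequence
print(direct_route(a, c) == target)          # True, via the shorter one
```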

What should not be studied under the label of CPS are pure learning effects, multiple-cue probability learning, or tasks that can be solved using a single strategy. This last issue is a problem for MicroDYN tasks that rely strongly on the VOTAT strategy (“vary one thing at a time”; see Tschirgi, 1980 ). In real life, it is hard to imagine a business manager trying to solve her or his problems by means of VOTAT.

What is CPS?

In the early days of CPS research, planet Earth’s dynamics and complexities gained attention through such books as “The Limits to Growth” ( Meadows et al., 1972 ) and “Beyond the Limits” ( Meadows et al., 1992 ). In the current decade, for example, the World Economic Forum (2016) attempts to identify the complexities and risks of our modern world. In order to understand the meaning of complexity and uncertainty, taking a look at the world’s most pressing issues is helpful. Searching for strategies to cope with these problems is a difficult task: surely there is no place for the simple principle of “vary one thing at a time” (VOTAT) when it comes to global problems. The VOTAT strategy is helpful in the context of simple problems ( Wüstenberg et al., 2014 ); therefore, whether or not VOTAT is helpful in a given problem situation helps us distinguish simple from complex problems.
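
As a brief aside on why VOTAT succeeds only where the underlying structure is simple and additive, here is a Python sketch under our own assumptions (the three weights are invented): varying exactly one input per trial isolates that input’s effect, so the whole system is identified in as many trials as it has inputs.

```python
import numpy as np

hidden_weights = np.array([2.0, -1.0, 0.5])   # unknown structure of a toy additive task

def trial(u):
    """Run one intervention and observe the (noise-free) output."""
    return float(hidden_weights @ u)

estimates = []
for i in range(3):
    u = np.zeros(3)
    u[i] = 1.0                                # vary one thing at a time
    estimates.append(trial(u))                # the response directly reveals weight i

print(estimates)   # [2.0, -1.0, 0.5] -- the hidden structure is recovered exactly
```

For interconnected, knowledge-rich problems of the kind discussed in this section there is no analogous few-trial identification, which is precisely the contrast this criterion exploits.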

Because there exist no clear-cut strategies for complex problems, typical failures occur when dealing with uncertainty ( Dörner, 1996 ; Güss et al., 2015 ). Ramnarayan et al. (1997) put together a list of generic errors (e.g., not developing adequate action plans; lack of background control; learning from experience blocked by stereotype knowledge; reactive instead of proactive action) that are typical of knowledge-rich complex systems but cannot be found in simple problems.

Complex problem solving is not a one-dimensional, low-level construct. On the contrary, CPS is a multi-dimensional bundle of competencies existing at a high level of abstraction, similar to intelligence (but going beyond IQ). As Funke et al. (2018) state: “Assessment of transversal (in educational contexts: cross-curricular) competencies cannot be done with one or two types of assessment. The plurality of skills and competencies requires a plurality of assessment instruments.”

There are at least three aspects of complex systems that are central to our understanding of them: (1) a complex system can be described at different levels of abstraction; (2) a complex system develops over time, has a history, a current state, and a (potentially unpredictable) future; (3) a complex system is knowledge-rich and activates a large semantic network, together with a broad list of potential strategies (domain-specific as well as domain-general).

Complex problem solving is not only a cognitive process but is also an emotional one ( Spering et al., 2005 ; Barth and Funke, 2010 ) and strongly dependent on motivation (low-stakes versus high-stakes testing; see Hermes and Stelling, 2016 ).

Furthermore, CPS is a dynamic process unfolding over time, with different phases and with more differentiation than simply knowledge acquisition and knowledge application. Ideally, the process should entail identifying problems (see Dillon, 1982 ; Lee and Cho, 2007 ), even if, in experimental settings, problems are provided to participants a priori. The more complex and open a given situation, the more options can be generated (T. S. Schweizer et al., 2016 ). In closed problems, these processes do not occur in the same way.

In analogy to the difference between formative (process-oriented) and summative (result-oriented) assessment ( Wiliam and Black, 1996 ; Bennett, 2011 ), CPS should not be reduced to the mere outcome of a solution process. The process leading up to the solution, including detours and errors made along the way, might provide a more differentiated impression of a person’s problem-solving abilities and competencies than the final result of such a process. This is one of the reasons why CPS environments are not, in fact, complex intelligence tests: research on CPS is not only about the outcome of the decision process, but it is also about the problem-solving process itself.

Complex problem solving is part of our daily life: finding the right person to share one’s life with, choosing a career that not only makes money, but that also makes us happy. Of course, CPS is not restricted to personal problems – life on Earth gives us many hard nuts to crack: climate change, population growth, the threat of war, the use and distribution of natural resources. In sum, many societal challenges can be seen as complex problems. To reduce that complexity to a one-hour lab activity on a random Friday afternoon puts it out of context and does not address CPS issues.

Theories about CPS should specify which populations they apply to. Across populations, one thing to consider is prior knowledge. CPS research with experts (e.g., Dew et al., 2009 ) is quite different from problem solving research using tasks that intentionally do not require any specific prior knowledge (see, e.g., Beckmann and Goode, 2014 ).

More than 20 years ago, Frensch and Funke (1995b) defined CPS as follows:

CPS occurs to overcome barriers between a given state and a desired goal state by means of behavioral and/or cognitive, multi-step activities. The given state, goal state, and barriers between given state and goal state are complex, change dynamically during problem solving, and are intransparent. The exact properties of the given state, goal state, and barriers are unknown to the solver at the outset. CPS implies the efficient interaction between a solver and the situational requirements of the task, and involves a solver’s cognitive, emotional, personal, and social abilities and knowledge. (p. 18)

The above definition is rather formal and does not account for content or relations between the simulation and the real world. In a sense, we need a new definition of CPS that addresses these issues. Based on our previous arguments, we propose the following working definition:

Complex problem solving is a collection of self-regulated psychological processes and activities necessary in dynamic environments to achieve ill-defined goals that cannot be reached by routine actions. Creative combinations of knowledge and a broad set of strategies are needed. Solutions are often more bricolage than perfect or optimal. The problem-solving process combines cognitive, emotional, and motivational aspects, particularly in high-stakes situations. Complex problems usually involve knowledge-rich requirements and collaboration among different persons.

The main differences from the older definition lie in the emphasis on (a) the self-regulation of processes, (b) creativity (as opposed to routine behavior), (c) the bricolage type of solution, and (d) the role of high-stakes challenges. Our new definition incorporates some aspects that have been discussed in this review but were not reflected in the 1995 definition, which focused on attributes of complex problems like dynamics or intransparency.

This leads us to the final reflection about the role of CPS for dealing with uncertainty and complexity in real life. We will distinguish thinking from reasoning and introduce the sense of possibility as an important aspect of validity.

CPS as Combining Reasoning and Thinking in an Uncertain Reality

Leading up to the Battle of Borodino in Leo Tolstoy’s novel “War and Peace”, Prince Andrei Bolkonsky explains the concept of war to his friend Pierre. Pierre expects war to resemble a game of chess: You position the troops and attempt to defeat your opponent by moving them in different directions.

“Far from it!”, Andrei responds. “In chess, you know the knight and his moves, you know the pawn and his combat strength. While in war, a battalion is sometimes stronger than a division and sometimes weaker than a company; it all depends on circumstances that can never be known. In war, you do not know the position of your enemy; some things you might be able to observe, some things you have to divine (but that depends on your ability to do so!) and many things cannot even be guessed at. In chess, you can see all of your opponent’s possible moves. In war, that is impossible. If you decide to attack, you cannot know whether the necessary conditions are met for you to succeed. Many a time, you cannot even know whether your troops will follow your orders…”

In essence, war is characterized by a high degree of uncertainty. A good commander (or politician) can add to that what he or she sees, tentatively fill in the blanks – and not just by means of logical deduction but also by intelligently bridging missing links. A bad commander extrapolates from what he sees and thus arrives at improper conclusions.

Many languages differentiate between two modes of mentalizing; for instance, the English language distinguishes between ‘thinking’ and ‘reasoning’. Reasoning denotes acute and exact mentalizing involving logical deductions. Such deductions are usually based on evidence and counterevidence. Thinking, however, is what is required to write novels. It is the construction of an initially unknown reality. But it is not a pipe dream, an unfounded process of fabrication. Rather, thinking asks us to imagine reality (“Wirklichkeitsfantasie”). In other words, a novelist has to possess a “sense of possibility” (“Möglichkeitssinn”, Robert Musil; in German, sense of possibility is often used synonymously with imagination even though imagination is not the same as sense of possibility, for imagination also encapsulates the impossible). This sense of possibility entails knowing the whole (or several wholes) or being able to construe an unknown whole that could accommodate a known part. The whole has to align with sociological and geographical givens, with the mentality of certain peoples or groups, and with the laws of physics and chemistry. Otherwise, the entire venture is ill-founded. A sense of possibility does not aim for the moon but imagines something that might be possible but has not been considered possible or even potentially possible so far.

Thinking is a means to eliminate uncertainty. This process requires both of the modes of mentalizing we have discussed thus far. Economic, political, or ecological decisions require us to first consider the situation at hand. Certain situational aspects can be known, but many cannot. In fact, von Clausewitz (1832) posits that only about 25% of the necessary information is available when a military decision needs to be made. Even then, there is no way to guarantee that whatever information is available is also correct: even if a piece of information was completely accurate yesterday, it might no longer apply today.

Once our sense of possibility has helped us grasp a situation, problem solvers need to call on their reasoning skills. Not every situation requires the same action, and we may want to act one way or another to reach this or that goal. This appears logical, but it is a logic based on constantly shifting grounds: we cannot know whether necessary conditions are met, sometimes the assumptions we have made later turn out to be incorrect, and sometimes we have to revise our assumptions or make completely new ones. It is necessary to constantly switch between our sense of possibility and our sense of reality, that is, to switch between thinking and reasoning. It is an arduous process, and some people handle it well, while others do not.

If we are to believe Tuchman’s (1984) book, “The March of Folly”, most politicians and commanders are fools. According to Tuchman, not much has changed in the 3300 years that have elapsed since the misguided Trojans decided to welcome the left-behind wooden horse into their city that would end up dismantling Troy’s defensive walls. The Trojans, too, had been warned, but decided not to heed the warning. Although Laocoön had revealed the horse’s true nature to them by attacking it with a spear, making the weapons inside the horse ring, the Trojans refused to see the forest for the trees. They did not want to listen, they wanted the war to be over, and this desire ended up shaping their perception.

The objective of psychology is to predict and explain human actions and behavior as accurately as possible. However, thinking cannot be investigated by limiting its study to neatly confined fractions of reality such as the realms of propositional logic, chess, Go tasks, the Tower of Hanoi, and so forth. Within these systems, there is little need for a sense of possibility. But a sense of possibility – the ability to divine and construe an unknown reality – is at least as important as logical reasoning skills. Not researching the sense of possibility limits the validity of psychological research. All economic and political decision making draws upon this sense of possibility. By not exploring it, psychological research dedicated to the study of thinking cannot further the understanding of politicians’ competence and the reasons that underlie political mistakes. Christopher Clark identifies European diplomats’, politicians’, and commanders’ inability to form an accurate representation of reality as a reason for the outbreak of World War I. According to Clark’s (2012) book, “The Sleepwalkers”, the politicians of the time lived in their own make-believe world, wrongfully assuming that it was the same world everyone else inhabited. If CPS research wants to make significant contributions to the world, it has to acknowledge complexity and uncertainty as important aspects of it.

For more than 40 years, CPS has been a subject of psychological research. During this time period, the initial emphasis on analyzing how humans deal with complex, dynamic, and uncertain situations has been lost. What is subsumed under the heading of CPS in modern research has lost the original complexities of real-life problems. From our point of view, the challenges of the 21st century require a return to the origins of this research tradition. We would encourage researchers in the field of problem solving to come back to the original ideas. There is enough complexity and uncertainty in the world to be studied. Improving our understanding of how humans deal with these global and pressing problems would be a worthwhile enterprise.

Author Contributions

JF drafted a first version of the manuscript; DD added further text and commented on the draft. JF finalized the manuscript.

Authors Note

After more than 40 years of controversial discussions between both authors, this is the first joint paper. We are happy to have done this now! We have found common ground!

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors thank the Deutsche Forschungsgemeinschaft (DFG) for the continuous support of their research over many years. Thanks to Daniel Holt for his comments on validity issues, thanks to Julia Nolte who helped us by translating German text excerpts into readable English and helped us, together with Keri Hartman, to improve our style and grammar – thanks for that! We also thank the two reviewers for their helpful critical comments on earlier versions of this manuscript. Finally, we acknowledge financial support by Deutsche Forschungsgemeinschaft and Ruprecht-Karls-Universität Heidelberg within their funding programme Open Access Publishing .

  • ^ The fMRI-paper from Anderson (2012) uses the term “complex problem solving” for tasks that do not fall in our understanding of CPS and is therefore excluded from this list.

Alison, L., van den Heuvel, C., Waring, S., Power, N., Long, A., O’Hara, T., et al. (2013). Immersive simulated learning environments for researching critical incidents: a knowledge synthesis of the literature and experiences of studying high-risk strategic decision making. J. Cogn. Eng. Deci. Mak. 7, 255–272. doi: 10.1177/1555343412468113

Anderson, J. R. (2012). Tracking problem solving by multivariate pattern analysis and hidden Markov model algorithms. Neuropsychologia 50, 487–498. doi: 10.1016/j.neuropsychologia.2011.07.025

Barth, C. M., and Funke, J. (2010). Negative affective environments improve complex solving performance. Cogn. Emot. 24, 1259–1268. doi: 10.1080/02699930903223766

Beckmann, J. F., and Goode, N. (2014). The benefit of being naïve and knowing it: the unfavourable impact of perceived context familiarity on learning in complex problem solving tasks. Instruct. Sci. 42, 271–290. doi: 10.1007/s11251-013-9280-7

Beghetto, R. A., and Kaufman, J. C. (2007). Toward a broader conception of creativity: a case for “mini-c” creativity. Psychol. Aesthetics Creat. Arts 1, 73–79. doi: 10.1037/1931-3896.1.2.73

Bennett, R. E. (2011). Formative assessment: a critical review. Assess. Educ. Princ. Policy Pract. 18, 5–25. doi: 10.1080/0969594X.2010.513678

Berry, D. C., and Broadbent, D. E. (1984). On the relationship between task performance and associated verbalizable knowledge. Q. J. Exp. Psychol. 36, 209–231. doi: 10.1080/14640748408402156

Blech, C., and Funke, J. (2010). You cannot have your cake and eat it, too: how induced goal conflicts affect complex problem solving. Open Psychol. J. 3, 42–53. doi: 10.2174/1874350101003010042

Brehmer, B., and Dörner, D. (1993). Experiments with computer-simulated microworlds: escaping both the narrow straits of the laboratory and the deep blue sea of the field study. Comput. Hum. Behav. 9, 171–184. doi: 10.1016/0747-5632(93)90005-D

Buchner, A. (1995). “Basic topics and approaches to the study of complex problem solving,” in Complex Problem Solving: The European Perspective , eds P. A. Frensch and J. Funke (Hillsdale, NJ: Erlbaum), 27–63.

Buchner, A., and Funke, J. (1993). Finite state automata: dynamic task environments in problem solving research. Q. J. Exp. Psychol. 46A, 83–118. doi: 10.1080/14640749308401068

Clark, C. (2012). The Sleepwalkers: How Europe Went to War in 1914 . London: Allen Lane.

Csapó, B., and Funke, J. (2017a). “The development and assessment of problem solving in 21st-century schools,” in The Nature of Problem Solving: Using Research to Inspire 21st Century Learning , eds B. Csapó and J. Funke (Paris: OECD Publishing), 19–31.

Csapó, B., and Funke, J. (eds) (2017b). The Nature of Problem Solving. Using Research to Inspire 21st Century Learning. Paris: OECD Publishing.

Danner, D., Hagemann, D., Holt, D. V., Hager, M., Schankin, A., Wüstenberg, S., et al. (2011a). Measuring performance in dynamic decision making. Reliability and validity of the Tailorshop simulation. J. Ind. Differ. 32, 225–233. doi: 10.1027/1614-0001/a000055

Danner, D., Hagemann, D., Schankin, A., Hager, M., and Funke, J. (2011b). Beyond IQ: a latent state-trait analysis of general intelligence, dynamic decision making, and implicit learning. Intelligence 39, 323–334. doi: 10.1016/j.intell.2011.06.004

Dew, N., Read, S., Sarasvathy, S. D., and Wiltbank, R. (2009). Effectual versus predictive logics in entrepreneurial decision-making: differences between experts and novices. J. Bus. Ventur. 24, 287–309. doi: 10.1016/j.jbusvent.2008.02.002

Dhami, M. K., Mandel, D. R., Mellers, B. A., and Tetlock, P. E. (2015). Improving intelligence analysis with decision science. Perspect. Psychol. Sci. 10, 753–757. doi: 10.1177/1745691615598511

Dillon, J. T. (1982). Problem finding and solving. J. Creat. Behav. 16, 97–111. doi: 10.1002/j.2162-6057.1982.tb00326.x

Dörner, D. (1975). Wie Menschen eine Welt verbessern wollten [How people wanted to improve a world]. Bild Der Wissenschaft 12, 48–53.

Dörner, D. (1980). On the difficulties people have in dealing with complexity. Simulat. Gam. 11, 87–106. doi: 10.1177/104687818001100108

Dörner, D. (1996). The Logic of Failure: Recognizing and Avoiding Error in Complex Situations. New York, NY: Basic Books.

Dörner, D., Drewes, U., and Reither, F. (1975). “Über das Problemlösen in sehr komplexen Realitätsbereichen,” in Bericht über den 29. Kongreß der DGfPs in Salzburg 1974, Band 1 , ed. W. H. Tack (Göttingen: Hogrefe), 339–340.

Dörner, D., and Güss, C. D. (2011). A psychological analysis of Adolf Hitler’s decision making as commander in chief: summa confidentia et nimius metus. Rev. Gen. Psychol. 15, 37–49. doi: 10.1037/a0022375

Dörner, D., and Güss, C. D. (2013). PSI: a computational architecture of cognition, motivation, and emotion. Rev. Gen. Psychol. 17, 297–317. doi: 10.1037/a0032947

Dörner, D., Kreuzig, H. W., Reither, F., and Stäudel, T. (1983). Lohhausen. Vom Umgang mit Unbestimmtheit und Komplexität. Bern: Huber.

Ederer, P., Patt, A., and Greiff, S. (2016). Complex problem-solving skills and innovativeness – evidence from occupational testing and regional data. Eur. J. Educ. 51, 244–256. doi: 10.1111/ejed.12176

Edwards, W. (1962). Dynamic decision theory and probabilistic information processing. Hum. Factors 4, 59–73. doi: 10.1177/001872086200400201

Engelhart, M., Funke, J., and Sager, S. (2017). A web-based feedback study on optimization-based training and analysis of human decision making. J. Dynamic Dec. Mak. 3, 1–23.

Ericsson, K. A., and Simon, H. A. (1983). Protocol Analysis: Verbal Reports As Data. Cambridge, MA: Bradford.

Fischer, A., Greiff, S., and Funke, J. (2017). “The history of complex problem solving,” in The Nature of Problem Solving: Using Research to Inspire 21st Century Learning , eds B. Csapó and J. Funke (Paris: OECD Publishing), 107–121.

Fischer, A., Holt, D. V., and Funke, J. (2015). Promoting the growing field of dynamic decision making. J. Dynamic Decis. Mak. 1, 1–3. doi: 10.11588/jddm.2015.1.23807

Fischer, A., Holt, D. V., and Funke, J. (2016). The first year of the “journal of dynamic decision making.” J. Dynamic Decis. Mak. 2, 1–2. doi: 10.11588/jddm.2016.1.28995

Fischer, A., and Neubert, J. C. (2015). The multiple faces of complex problems: a model of problem solving competency and its implications for training and assessment. J. Dynamic Decis. Mak. 1, 1–14. doi: 10.11588/jddm.2015.1.23945

Frensch, P. A., and Funke, J. (eds) (1995a). Complex Problem Solving: The European Perspective. Hillsdale, NJ: Erlbaum.

Frensch, P. A., and Funke, J. (1995b). “Definitions, traditions, and a general framework for understanding complex problem solving,” in Complex Problem Solving: The European Perspective , eds P. A. Frensch and J. Funke (Hillsdale, NJ: Lawrence Erlbaum), 3–25.

Frischkorn, G. T., Greiff, S., and Wüstenberg, S. (2014). The development of complex problem solving in adolescence: a latent growth curve analysis. J. Educ. Psychol. 106, 1004–1020. doi: 10.1037/a0037114

Funke, J. (1985). Steuerung dynamischer Systeme durch Aufbau und Anwendung subjektiver Kausalmodelle. Z. Psychol. 193, 435–457.

Funke, J. (1986). Komplexes Problemlösen - Bestandsaufnahme und Perspektiven [Complex Problem Solving: Survey and Perspectives]. Heidelberg: Springer.

Funke, J. (1993). “Microworlds based on linear equation systems: a new approach to complex problem solving and experimental results,” in The Cognitive Psychology of Knowledge , eds G. Strube and K.-F. Wender (Amsterdam: Elsevier Science Publishers), 313–330.

Funke, J. (1995). “Experimental research on complex problem solving,” in Complex Problem Solving: The European Perspective , eds P. A. Frensch and J. Funke (Hillsdale, NJ: Erlbaum), 243–268.

Funke, J. (2010). Complex problem solving: a case for complex cognition? Cogn. Process. 11, 133–142. doi: 10.1007/s10339-009-0345-0

Funke, J. (2012). “Complex problem solving,” in Encyclopedia of the Sciences of Learning , Vol. 38, ed. N. M. Seel (Heidelberg: Springer), 682–685.

Funke, J. (2014a). Analysis of minimal complex systems and complex problem solving require different forms of causal cognition. Front. Psychol. 5:739. doi: 10.3389/fpsyg.2014.00739

Funke, J. (2014b). “Problem solving: what are the important questions?,” in Proceedings of the 36th Annual Conference of the Cognitive Science Society , eds P. Bello, M. Guarini, M. McShane, and B. Scassellati (Austin, TX: Cognitive Science Society), 493–498.

Funke, J., Fischer, A., and Holt, D. V. (2017). When less is less: solving multiple simple problems is not complex problem solving—A comment on Greiff et al. (2015). J. Intell. 5:5. doi: 10.3390/jintelligence5010005

Funke, J., Fischer, A., and Holt, D. V. (2018). “Competencies for complexity: problem solving in the 21st century,” in Assessment and Teaching of 21st Century Skills , eds E. Care, P. Griffin, and M. Wilson (Dordrecht: Springer), 3.

Funke, J., and Greiff, S. (2017). “Dynamic problem solving: multiple-item testing based on minimally complex systems,” in Competence Assessment in Education. Research, Models and Instruments , eds D. Leutner, J. Fleischer, J. Grünkorn, and E. Klieme (Heidelberg: Springer), 427–443.

Gobert, J. D., Kim, Y. J., Pedro, M. A. S., Kennedy, M., and Betts, C. G. (2015). Using educational data mining to assess students’ skills at designing and conducting experiments within a complex systems microworld. Think. Skills Creat. 18, 81–90. doi: 10.1016/j.tsc.2015.04.008

Goode, N., and Beckmann, J. F. (2010). You need to know: there is a causal relationship between structural knowledge and control performance in complex problem solving tasks. Intelligence 38, 345–352. doi: 10.1016/j.intell.2010.01.001

Gray, W. D. (2002). Simulated task environments: the role of high-fidelity simulations, scaled worlds, synthetic environments, and laboratory tasks in basic and applied cognitive research. Cogn. Sci. Q. 2, 205–227.

Greiff, S., and Fischer, A. (2013). Measuring complex problem solving: an educational application of psychological theories. J. Educ. Res. 5, 38–58.

Greiff, S., Fischer, A., Stadler, M., and Wüstenberg, S. (2015a). Assessing complex problem-solving skills with multiple complex systems. Think. Reason. 21, 356–382. doi: 10.1080/13546783.2014.989263

Greiff, S., Stadler, M., Sonnleitner, P., Wolff, C., and Martin, R. (2015b). Sometimes less is more: comparing the validity of complex problem solving measures. Intelligence 50, 100–113. doi: 10.1016/j.intell.2015.02.007

Greiff, S., Fischer, A., Wüstenberg, S., Sonnleitner, P., Brunner, M., and Martin, R. (2013a). A multitrait–multimethod study of assessment instruments for complex problem solving. Intelligence 41, 579–596. doi: 10.1016/j.intell.2013.07.012

Greiff, S., Holt, D. V., and Funke, J. (2013b). Perspectives on problem solving in educational assessment: analytical, interactive, and collaborative problem solving. J. Problem Solv. 5, 71–91. doi: 10.7771/1932-6246.1153

Greiff, S., Wüstenberg, S., Molnár, G., Fischer, A., Funke, J., and Csapó, B. (2013c). Complex problem solving in educational contexts—something beyond g: concept, assessment, measurement invariance, and construct validity. J. Educ. Psychol. 105, 364–379. doi: 10.1037/a0031856

Greiff, S., and Funke, J. (2009). “Measuring complex problem solving: the MicroDYN approach,” in The Transition to Computer-Based Assessment. New Approaches to Skills Assessment and Implications for Large-Scale Testing , eds F. Scheuermann and J. Björnsson (Luxembourg: Office for Official Publications of the European Communities), 157–163.

Greiff, S., and Funke, J. (2017). “Interactive problem solving: exploring the potential of minimal complex systems,” in The Nature of Problem Solving: Using Research to Inspire 21st Century Learning , eds B. Csapó and J. Funke (Paris: OECD Publishing), 93–105.

Greiff, S., and Martin, R. (2014). What you see is what you (don’t) get: a comment on Funke’s (2014) opinion paper. Front. Psychol. 5:1120. doi: 10.3389/fpsyg.2014.01120

Greiff, S., and Neubert, J. C. (2014). On the relation of complex problem solving, personality, fluid intelligence, and academic achievement. Learn. Ind. Diff. 36, 37–48. doi: 10.1016/j.lindif.2014.08.003

Greiff, S., Niepel, C., Scherer, R., and Martin, R. (2016). Understanding students’ performance in a computer-based assessment of complex problem solving: an analysis of behavioral data from computer-generated log files. Comput. Hum. Behav. 61, 36–46. doi: 10.1016/j.chb.2016.02.095

Greiff, S., Stadler, M., Sonnleitner, P., Wolff, C., and Martin, R. (2017). Sometimes more is too much: a rejoinder to the commentaries on Greiff et al. (2015). J. Intell. 5:6. doi: 10.3390/jintelligence5010006

Greiff, S., and Wüstenberg, S. (2014). Assessment with microworlds using MicroDYN: measurement invariance and latent mean comparisons. Eur. J. Psychol. Assess. 1, 1–11. doi: 10.1027/1015-5759/a000194

Greiff, S., and Wüstenberg, S. (2015). Komplexer Problemlösetest COMPRO [Complex Problem-Solving Test COMPRO]. Mödling: Schuhfried.

Greiff, S., Wüstenberg, S., and Funke, J. (2012). Dynamic problem solving: a new assessment perspective. Appl. Psychol. Measure. 36, 189–213. doi: 10.1177/0146621612439620

Griffin, P., and Care, E. (2015). “The ATC21S method,” in Assessment and Teaching of 21st Century Skills , eds P. Griffin and E. Care (Dordrecht, NL: Springer), 3–33.

Güss, C. D., and Dörner, D. (2011). Cultural differences in dynamic decision-making strategies in a non-linear, time-delayed task. Cogn. Syst. Res. 12, 365–376. doi: 10.1016/j.cogsys.2010.12.003

Güss, C. D., Tuason, M. T., and Orduña, L. V. (2015). Strategies, tactics, and errors in dynamic decision making in an Asian sample. J. Dynamic Deci. Mak. 1, 1–14. doi: 10.11588/jddm.2015.1.13131

Güss, C. D., and Wiley, B. (2007). Metacognition of problem-solving strategies in Brazil, India, and the United States. J. Cogn. Cult. 7, 1–25. doi: 10.1163/156853707X171793

Herde, C. N., Wüstenberg, S., and Greiff, S. (2016). Assessment of complex problem solving: what we know and what we don’t know. Appl. Meas. Educ. 29, 265–277. doi: 10.1080/08957347.2016.1209208

Hermes, M., and Stelling, D. (2016). Context matters, but how much? Latent state – trait analysis of cognitive ability assessments. Int. J. Sel. Assess. 24, 285–295. doi: 10.1111/ijsa.12147

Hotaling, J. M., Fakhari, P., and Busemeyer, J. R. (2015). “Dynamic decision making,” in International Encyclopedia of the Social & Behavioral Sciences , 2nd Edn, eds N. J. Smelser and P. B. Batles (New York, NY: Elsevier), 709–714.

Hundertmark, J., Holt, D. V., Fischer, A., Said, N., and Fischer, H. (2015). System structure and cognitive ability as predictors of performance in dynamic system control tasks. J. Dynamic Deci. Mak. 1, 1–10. doi: 10.11588/jddm.2015.1.26416

Jäkel, F., and Schreiber, C. (2013). Introspection in problem solving. J. Problem Solv. 6, 20–33. doi: 10.7771/1932-6246.1131

Jansson, A. (1994). Pathologies in dynamic decision making: consequences or precursors of failure? Sprache Kogn. 13, 160–173.

Kaufman, J. C., and Beghetto, R. A. (2009). Beyond big and little: the four c model of creativity. Rev. Gen. Psychol. 13, 1–12. doi: 10.1037/a0013688

Knauff, M., and Wolf, A. G. (2010). Complex cognition: the science of human reasoning, problem-solving, and decision-making. Cogn. Process. 11, 99–102. doi: 10.1007/s10339-010-0362-z

Kretzschmar, A. (2017). Sometimes less is not enough: a commentary on Greiff et al. (2015). J. Intell. 5:4. doi: 10.3390/jintelligence5010004

Kretzschmar, A., Neubert, J. C., Wüstenberg, S., and Greiff, S. (2016). Construct validity of complex problem solving: a comprehensive view on different facets of intelligence and school grades. Intelligence 54, 55–69. doi: 10.1016/j.intell.2015.11.004

Kretzschmar, A., and Süß, H.-M. (2015). A study on the training of complex problem solving competence. J. Dynamic Deci. Mak. 1, 1–14. doi: 10.11588/jddm.2015.1.15455

Lee, H., and Cho, Y. (2007). Factors affecting problem finding depending on degree of structure of problem situation. J. Educ. Res. 101, 113–123. doi: 10.3200/JOER.101.2.113-125

Leutner, D., Fleischer, J., Wirth, J., Greiff, S., and Funke, J. (2012). Analytische und dynamische Problemlösekompetenz im Lichte internationaler Schulleistungsvergleichsstudien: Untersuchungen zur Dimensionalität [Analytical and dynamic problem-solving competence in light of international large-scale student assessments: studies on dimensionality]. Psychol. Rundschau 63, 34–42. doi: 10.1026/0033-3042/a000108

Luchins, A. S. (1942). Mechanization in problem solving: the effect of einstellung. Psychol. Monogr. 54, 1–95. doi: 10.1037/h0093502

Mack, O., Khare, A., Krämer, A., and Burgartz, T. (eds) (2016). Managing in a VUCA world. Heidelberg: Springer.

Mainert, J., Kretzschmar, A., Neubert, J. C., and Greiff, S. (2015). Linking complex problem solving and general mental ability to career advancement: does a transversal skill reveal incremental predictive validity? Int. J. Lifelong Educ. 34, 393–411. doi: 10.1080/02601370.2015.1060024

Mainzer, K. (2009). Challenges of complexity in the 21st century. An interdisciplinary introduction. Eur. Rev. 17, 219–236. doi: 10.1017/S1062798709000714

Meadows, D. H., Meadows, D. L., and Randers, J. (1992). Beyond the Limits. Vermont, VA: Chelsea Green Publishing.

Meadows, D. H., Meadows, D. L., Randers, J., and Behrens, W. W. (1972). The Limits to Growth. New York, NY: Universe Books.

Meißner, A., Greiff, S., Frischkorn, G. T., and Steinmayr, R. (2016). Predicting complex problem solving and school grades with working memory and ability self-concept. Learn. Ind. Differ. 49, 323–331. doi: 10.1016/j.lindif.2016.04.006

Molnár, G., Greiff, S., Wüstenberg, S., and Fischer, A. (2017). “Empirical study of computer-based assessment of domain-general complex problem-solving skills,” in The Nature of Problem Solving: Using Research to Inspire 21st Century Learning , eds B. Csapó and J. Funke (Paris: OECD Publishing), 125–141.

National Research Council (2011). Assessing 21st Century Skills: Summary of a Workshop. Washington, DC: The National Academies Press.

Newell, A., Shaw, J. C., and Simon, H. A. (1959). A general problem-solving program for a computer. Comput. Automat. 8, 10–16.

Nisbett, R. E., and Wilson, T. D. (1977). Telling more than we can know: verbal reports on mental processes. Psychol. Rev. 84, 231–259. doi: 10.1037/0033-295X.84.3.231

OECD (2014). “PISA 2012 results,” in Creative Problem Solving: Students’ Skills in Tackling Real-Life problems , Vol. 5 (Paris: OECD Publishing).

Osman, M. (2010). Controlling uncertainty: a review of human behavior in complex dynamic environments. Psychol. Bull. 136, 65–86. doi: 10.1037/a0017815

Osman, M. (2012). The role of reward in dynamic decision making. Front. Neurosci. 6:35. doi: 10.3389/fnins.2012.00035

Qudrat-Ullah, H. (2015). Better Decision Making in Complex, Dynamic Tasks. Training with Human-Facilitated Interactive Learning Environments. Heidelberg: Springer.

Ramnarayan, S., Strohschneider, S., and Schaub, H. (1997). Trappings of expertise and the pursuit of failure. Simulat. Gam. 28, 28–43. doi: 10.1177/1046878197281004

Reuschenbach, B. (2008). Planen und Problemlösen im Komplexen Handlungsfeld Pflege [Planning and Problem Solving in the Complex Domain of Nursing]. Berlin: Logos.

Rohe, M., Funke, J., Storch, M., and Weber, J. (2016). Can motto goals outperform learning and performance goals? Influence of goal setting on performance, intrinsic motivation, processing style, and affect in a complex problem solving task. J. Dynamic Deci. Mak. 2, 1–15. doi: 10.11588/jddm.2016.1.28510

Scherer, R., Greiff, S., and Hautamäki, J. (2015). Exploring the relation between time on task and ability in complex problem solving. Intelligence 48, 37–50. doi: 10.1016/j.intell.2014.10.003

Schoppek, W., and Fischer, A. (2015). Complex problem solving – single ability or complex phenomenon? Front. Psychol. 6:1669. doi: 10.3389/fpsyg.2015.01669

Schraw, G., Dunkle, M., and Bendixen, L. D. (1995). Cognitive processes in well-defined and ill-defined problem solving. Appl. Cogn. Psychol. 9, 523–538. doi: 10.1002/acp.2350090605

Schweizer, F., Wüstenberg, S., and Greiff, S. (2013). Validity of the MicroDYN approach: complex problem solving predicts school grades beyond working memory capacity. Learn. Ind. Differ. 24, 42–52. doi: 10.1016/j.lindif.2012.12.011

Schweizer, T. S., Schmalenberger, K. M., Eisenlohr-Moul, T. A., Mojzisch, A., Kaiser, S., and Funke, J. (2016). Cognitive and affective aspects of creative option generation in everyday life situations. Front. Psychol. 7:1132. doi: 10.3389/fpsyg.2016.01132

Selten, R., Pittnauer, S., and Hohnisch, M. (2012). Dealing with dynamic decision problems when knowledge of the environment is limited: an approach based on goal systems. J. Behav. Deci. Mak. 25, 443–457. doi: 10.1002/bdm.738

Simon, H. A. (1957). Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations , 2nd Edn. New York, NY: Macmillan.

Sonnleitner, P., Brunner, M., Keller, U., and Martin, R. (2014). Differential relations between facets of complex problem solving and students’ immigration background. J. Educ. Psychol. 106, 681–695. doi: 10.1037/a0035506

Spering, M., Wagener, D., and Funke, J. (2005). The role of emotions in complex problem solving. Cogn. Emot. 19, 1252–1261. doi: 10.1080/02699930500304886

Stadler, M., Becker, N., Gödker, M., Leutner, D., and Greiff, S. (2015). Complex problem solving and intelligence: a meta-analysis. Intelligence 53, 92–101. doi: 10.1016/j.intell.2015.09.005

Stadler, M., Niepel, C., and Greiff, S. (2016). Easily too difficult: estimating item difficulty in computer simulated microworlds. Comput. Hum. Behav. 65, 100–106. doi: 10.1016/j.chb.2016.08.025

Sternberg, R. J. (1995). “Expertise in complex problem solving: a comparison of alternative conceptions,” in Complex Problem Solving: The European Perspective , eds P. A. Frensch and J. Funke (Hillsdale, NJ: Erlbaum), 295–321.

Sternberg, R. J., and Frensch, P. A. (1991). Complex Problem Solving: Principles and Mechanisms. (eds) R. J. Sternberg and P. A. Frensch. Hillsdale, NJ: Erlbaum.

Strohschneider, S., and Güss, C. D. (1998). Planning and problem solving: differences between Brazilian and German students. J. Cross-Cult. Psychol. 29, 695–716. doi: 10.1177/0022022198296002

Strohschneider, S., and Güss, C. D. (1999). The fate of the Moros: a cross-cultural exploration of strategies in complex and dynamic decision making. Int. J. Psychol. 34, 235–252. doi: 10.1080/002075999399873

Thimbleby, H. (2007). Press On. Principles of Interaction. Cambridge, MA: MIT Press.

Tobinski, D. A., and Fritz, A. (2017). “EcoSphere: a new paradigm for problem solving in complex systems,” in The Nature of Problem Solving: Using Research to Inspire 21st Century Learning , eds B. Csapó and J. Funke (Paris: OECD Publishing), 211–222.

Tremblay, S., Gagnon, J.-F., Lafond, D., Hodgetts, H. M., Doiron, M., and Jeuniaux, P. P. J. M. H. (2017). A cognitive prosthesis for complex decision-making. Appl. Ergon. 58, 349–360. doi: 10.1016/j.apergo.2016.07.009

Tschirgi, J. E. (1980). Sensible reasoning: a hypothesis about hypotheses. Child Dev. 51, 1–10. doi: 10.2307/1129583

Tuchman, B. W. (1984). The March of Folly. From Troy to Vietnam. New York, NY: Ballantine Books.

Verweij, M., and Thompson, M. (eds) (2006). Clumsy Solutions for A Complex World. Governance, Politics and Plural Perceptions. New York, NY: Palgrave Macmillan. doi: 10.1057/9780230624887

Viehrig, K., Siegmund, A., Funke, J., Wüstenberg, S., and Greiff, S. (2017). “The heidelberg inventory of geographic system competency model,” in Competence Assessment in Education. Research, Models and Instruments , eds D. Leutner, J. Fleischer, J. Grünkorn, and E. Klieme (Heidelberg: Springer), 31–53.

von Clausewitz, C. (1832). Vom Kriege [On war]. Berlin: Dämmler.

Wendt, A. N. (2017). The empirical potential of live streaming beyond cognitive psychology. J. Dynamic Deci. Mak. 3, 1–9. doi: 10.11588/jddm.2017.1.33724

Wiliam, D., and Black, P. (1996). Meanings and consequences: a basis for distinguishing formative and summative functions of assessment? Br. Educ. Res. J. 22, 537–548. doi: 10.1080/0141192960220502

World Economic Forum (2015). New Vision for Education: Unlocking the Potential of Technology. Geneva: World Economic Forum.

World Economic Forum (2016). Global Risks 2016: Insight Report , 11th Edn. Geneva: World Economic Forum.

Wüstenberg, S., Greiff, S., and Funke, J. (2012). Complex problem solving — more than reasoning? Intelligence 40, 1–14. doi: 10.1016/j.intell.2011.11.003

Wüstenberg, S., Greiff, S., Vainikainen, M.-P., and Murphy, K. (2016). Individual differences in students’ complex problem solving skills: how they evolve and what they imply. J. Educ. Psychol. 108, 1028–1044. doi: 10.1037/edu0000101

Wüstenberg, S., Stadler, M., Hautamäki, J., and Greiff, S. (2014). The role of strategy knowledge for the application of strategies in complex problem solving tasks. Technol. Knowl. Learn. 19, 127–146. doi: 10.1007/s10758-014-9222-8

Keywords : complex problem solving, validity, assessment, definition, MicroDYN

Citation: Dörner D and Funke J (2017) Complex Problem Solving: What It Is and What It Is Not. Front. Psychol. 8:1153. doi: 10.3389/fpsyg.2017.01153

Received: 14 March 2017; Accepted: 23 June 2017; Published: 11 July 2017.

Copyright © 2017 Dörner and Funke. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Joachim Funke, [email protected]

Bryan Lindsley

How To Solve Complex Problems

In today’s increasingly complex world, we are constantly faced with ill-defined problems that don’t have a clear solution. From poverty and climate change to crime and addiction, complex situations surround us. Unlike simple problems with a pre-defined or “right” answer, complex problems share several basic characteristics that make them hard to solve. While these problems can be frustrating and overwhelming, they also offer an opportunity for growth and creativity. Complex problem-solving skills are the key to addressing these tough issues.

In this article, I will discuss simple versus complex problems, define complex problem solving, and describe why it is so important in complex dynamic environments. I will also explain how to develop problem-solving skills and share some tips for effectively solving complex problems.

How is simple problem-solving different from complex problem-solving?

Solving problems is about getting from a currently undesirable state to an intended goal state. In other words, it is about bridging the gap between “what is” and “what ought to be”. However, the challenge of reaching a solution varies with the kind of problem being solved. There are generally three kinds of problems to consider.

Simple problems have a single solution, and the goal is to find that answer as quickly and efficiently as possible. Puzzles are classic examples of simple problem solving: the objective is to find the one correct solution out of many possibilities.

Problems are different from puzzles in that they don’t have a known solution. Many people may agree that there is an issue to be solved, but they may not agree on the intended goal state or how to get there. With this type of problem, people spend a lot of time debating the best solution and the optimal way to achieve it.

Messes are collections of interrelated problems in which many stakeholders may not even agree on what the issue is. Unlike problems, where there is at least agreement on what needs to be solved, in messes even “what is” can’t be taken for granted. Most complex social problems are messes, made up of interrelated social issues with ill-defined boundaries and goals.

Problems and messes can be complicated or complex

Puzzles are simple, but problems and messes exist on a continuum between complicated and complex. Complicated problems are technical in nature. There may be many involved variables, but the relationships are linear. As a result, complicated problems have step-by-step, systematic solutions. Repairing an engine or building a rocket may be difficult because of the many parts involved, but it is a technical problem we call complicated.

On the other hand, solving a complex problem is entirely different. Unlike complicated problems that may have many variables with linear relationships, a complex problem is characterized by connectivity patterns that are harder to understand and predict.

Characteristics of complex problems and messes

So what else makes a problem complex? Here are seven additional characteristics (drawn from Funke and from Hester and Adams ).

  • Lack of information. There is often a lack of data or information about the problem itself. In some cases, variables are unknown or cannot be measured.
  • Many goals. A complex problem has a mix of conflicting objectives. In some sense, every stakeholder involved with the problem may have their own goals. However, with limited resources, not all goals can be simultaneously satisfied.
  • Unpredictable feedback loops. In part due to many variables connected by a range of different relationships, a change in one variable is likely to have effects on other variables in the system. However, because we do not know all of the variables it will affect, small changes can have disproportionate system-wide effects. These unexpected events that have big, unpredictable effects are sometimes called Black Swans.
  • Dynamic. A complex problem changes over time and there is a significant impact based on when you act. In other words, because the problem and its parts and relationships are constantly changing, an action taken today won’t have the same effects as the same action taken tomorrow.
  • Time-delayed. It takes a while for cause and effect to be realized. Thus it is very hard to know if any given intervention is working.
  • Unknown unknowns. Building off the previous point about a lack of information, in a complex problem you may not even know what you don’t know. In other words, there may be very important variables that you are not even aware of.
  • Affected by (error-prone) humans. Simply put, human behavior tends to be illogical and unpredictable. When humans are involved in a problem, avoiding error may be impossible.

What is complex problem-solving?

“Complex problem solving” is the term for how we address complex problems and messes that have the characteristics listed above.

Since a complex problem is a different phenomenon than a simple or complicated problem, solving them requires a different approach. Methods designed for simple problems, like systematic organization, deductive logic, and linear thinking don’t work well on their own for a complex problem.

And yet, despite its importance, there isn’t complete agreement about what exactly it is.

How is complex problem solving defined by experts?

Let’s look at how scientists, researchers, and systems thinkers have defined solving a complex problem.

As a series of observations and informed decisions

For many employers, the focus is on making smart decisions. These must weigh the future effects of any given solution on the company. According to Indeed.com, it is defined as “a series of observations and informed decisions used to find and implement a solution to a problem. Beyond finding and implementing a solution, complex problem solving also involves considering future changes to circumstance, resources, and capabilities that may affect the trajectory of the process and success of the solution. Complex problem solving also involves considering the impact of the solution on the surrounding environment and individuals.”

As using information to review options and develop solutions

For others, it is more of a systematic way to consider a range of options. According to O*NET ,  the definition focuses on “identifying complex problems and reviewing related information to develop and evaluate options and implement solutions.”

As a self-regulated psychological process

Others emphasize the broad range of skills and emotions needed for change. In addition, they endorse an inspired kind of pragmatism. For example, Dietrich Dörner and Joachim Funke define it as “a collection of self-regulated psychological processes and activities necessary in dynamic environments to achieve ill-defined goals that cannot be reached by routine actions. Creative combinations of knowledge and a broad set of strategies are needed. Solutions are often more bricolage than perfect or optimal. The problem-solving process combines cognitive, emotional, and motivational aspects, particularly in high-stakes situations. Complex problems usually involve knowledge-rich requirements and collaboration among different persons.”

As a novel way of thinking and reasoning

Finally, some emphasize the multidisciplinary nature of knowledge and processes needed to tackle a complex problem. Patrick Hester and Kevin MacG. Adams have stated that “no single discipline can solve truly complex problems. Problems of real interest, those vexing ones that keep you up at night, require a discipline-agnostic approach…Simply they require us to think systemically about our problem…a novel way of thinking and reasoning about complex problems that encourages increased understanding and deliberate intervention.”

A synthesis definition

By pulling the main themes of these definitions together, we can get a sense of what complex problem-solvers must do:

  • Gain a better understanding of the phenomena of a complex problem or mess.
  • Use a discipline-agnostic approach in order to develop deliberate interventions.
  • Take into consideration future impacts on the surrounding environment.

Why is complex problem solving important?

Many efforts aimed at complex social problems like reducing homelessness and improving public health – despite good intentions and more effort than ever before – are destined to fail because their approach is based on simple problem-solving. And some efforts might even unwittingly be contributing to the problems they’re trying to solve.

Einstein said that “We can’t solve problems by using the same kind of thinking we used when we created them.” I think he could have easily been alluding to the need for more complex problem solvers who think differently. So what skills are required to do this?

What are complex problem-solving skills?

The skills required to solve a complex problem aren’t from one domain, nor are they an easily packaged bundle. Rather, I like to think of them as a balancing act between a series of seemingly opposite approaches that must be synthesized. This brings a sort of cognitive dissonance into the process, which is itself informative.

It brings F. Scott Fitzgerald’s maxim to mind: 

“The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function. One should, for example, be able to see that things are hopeless yet be determined to make them otherwise.” 

To see the problem situation clearly, for example, but also with a sense of optimism and possibility.

Here are the top three dialectics to keep in mind:

Thinking and reasoning

Reasoning is the ability to make logical deductions based on evidence and counterevidence. On the other hand, thinking is more about imagining an unknown reality based on thoughts about the whole picture and how the parts could fit together. By thinking clearly, one can have a sense of possibility that prepares the mind to deduce the right action in the unique moment at hand.

As Dorner and Funke explain: “Not every situation requires the same action,  and we may want to act this way or another to reach this or that goal. This appears logical, but it is a logic based on constantly shifting grounds: We cannot know whether necessary conditions are met, sometimes the assumptions we have made later turn out to be incorrect, and sometimes we have to revise our assumptions or make completely new ones. It is necessary to constantly switch between our sense of possibility and our sense of reality, that is, to switch between thinking and reasoning. It is an arduous process, and some people handle it well, while others do not.”

Analysis and reductionism combined with synthesis and holism

It’s important to be able to use scientific processes to break down a complex problem into its parts and analyze them. But at the same time, a complex problem is more than the sum of its parts. In most cases, the relationships between the parts are more important than the parts themselves. Therefore, decomposing problems with rigor isn’t enough. What’s needed, once problems are reduced and understood, is a way of understanding the relationships between various components as well as putting the pieces back together. However, synthesis and holism on their own without deductive analysis can often miss details and relationships that matter.  

What makes this balancing act more difficult is that certain professions tend to be trained in and prefer one domain over the other. Scientists prefer analysis and reductionism whereas most social scientists and practitioners default to synthesis and holism. Unfortunately, this divide of preferences results in people working in their silos at the expense of multi-disciplinary approaches that together can better “see” complexity.

Situational awareness and self-awareness 

Dual awareness is the ability to pay attention to two experiences simultaneously. In the case of complex problems, context really matters. In other words, problem-solving exists in an ecosystem of environmental factors that are not incidental. Personal and cultural preferences play a part as do current events unfolding over time. But as a problem solver, knowing the environment is only part of the equation. 

The other crucial part is the internal psychological process unique to every individual who also interacts with the problem and the environment. Problem solvers inevitably come into contact with others who may disagree with them, or be advancing seemingly counterproductive solutions, and these interactions result in emotions and motivations. Without self-awareness, we can become attached to our own subjective opinions, fall in love with “our” solutions, and generally be driven by the desire to be seen as problem solvers at the expense of actually solving the problem.

By balancing these three dialectics, practitioners can better deal with uncertainty as well as stay motivated despite setbacks. Self-regulation among these seemingly opposite approaches also reminds one to stay open-minded.

How do you develop complex problem-solving skills?

There is no one answer to this question, as the best way to develop them will vary depending on your strengths and weaknesses. However, there are a few general things that you can do to improve your ability to solve problems.

Ground yourself in theory and knowledge

First, it is important to learn about systems thinking and complexity theories. These frameworks will help you understand how complex systems work, and how different parts of a system interact with each other. This conceptual understanding will allow you to identify potential solutions to problems more quickly and effectively.

Practice switching between approaches

Second, practice switching between the dialectics mentioned above. For example, in your next meeting try to spend roughly half your time thinking and half your time reasoning. The important part is trying to get habituated to regularly switching lenses. It may seem disjointed at first, but after a while, it becomes second nature to simultaneously see how the parts interact and the big picture.

Focus on the specific problem phenomena

Third, it may sound obvious, but people often don’t spend very much time studying the problem itself and how it functions. In some sense, becoming a good problem-solver involves becoming a problem scientist. Your time should be spent regularly investigating the phenomena of “what is” rather than “what ought to be”. A holistic understanding of the problem is the required prerequisite to coming up with good solutions.

Stay curious

Finally, after we have worked on a problem for a while, we tend to think we know everything about it, including how to solve it. Even if we’re working on a problem, which may change dynamically from day to day, we start treating it more like a puzzle with a definite solution. When that happens, we can lose our motivation to continue learning about the problem. This is very risky because it closes the door to learning from others, regardless of whether we completely agree with them or not.

As Niels Bohr said, “Two different perspectives or models about a system will reveal truths regarding the system that are neither entirely independent nor entirely compatible.”

By staying curious, we can retain our ability to learn on a daily basis.

Tips for how to solve complex problems

Focus on processes over results

It’s easy to get lost in utopian thinking. Many people spend so much time on “what ought to be” that they forget that problem solving is about the gap between “what is” and “what ought to be”. It is said that “life is a journey, not a destination.” The same is true for complex problem-solving. To do it well, a problem solver must focus on enjoying the process of gaining a holistic understanding of the problem. 

Adaptive and iterative methods and tools

A variety of adaptive and iterative methods have been developed to address complexity. They share a laser focus on gaining holistic understanding with tools that best match the phenomena of complexity. They are also non-ideological, trans-disciplinary, and flexible. In most cases, your journey through a set of steps won’t be linear. Rather, as you think and reason, analyze and synthesize, you’ll jump around to get a holistic picture.

In my online course , we generally follow a seven-step method:

  • Get clear sight with a complex problem-solving frame
  • Establish a secure base of operation
  • Gain a deep understanding of the problem
  • Create an interactive model of the problem
  • Develop an impact strategy
  • Create an action plan and implement
  • Embed systemic solutions

Of course, each of these steps involves testing to see what works and consistently evaluating our process and progress.

Resolution is about systematically managing a problem over time

One last thing to keep in mind. Most social problems are not just solved one day, never to return. In reality, most complex problems are managed, not solved. For all practical purposes, what this means is that “the solution” is a way of systematically dealing with the problem over time. Some find this disappointing, but it’s actually a pragmatic pointer to think about resolution – a way to move problems in the right direction – rather than final solutions.

Problem solvers regularly train and practice

If you need help developing your complex problem-solving skills, I have an online class where you can learn everything you need to know. 

Sign up today and learn how to be successful at making a difference in the world!

Open access | Published: 07 October 2016

How Humans Solve Complex Problems: The Case of the Knapsack Problem

Carsten Murawski & Peter Bossaerts

Scientific Reports volume 6, Article number: 34851 (2016)

Subjects: Computational models, Human behaviour

Life presents us with problems of varying complexity. Yet, complexity is not accounted for in theories of human decision-making. Here we study instances of the knapsack problem, a discrete optimisation problem commonly encountered at all levels of cognition, from attention gating to intellectual discovery. Complexity of this problem is well understood from the perspective of a mechanical device like a computer. We show experimentally that human performance too decreased with complexity as defined in computer science. Defying traditional economic principles, participants spent effort way beyond the point where marginal gain was positive, and economic performance increased with instance difficulty. Human attempts at solving the instances exhibited commonalities with algorithms developed for computers, although biological resource constraints–limited working and episodic memories–had noticeable impact. Consistent with the very nature of the knapsack problem, only a minority of participants found the solution–often quickly–but the ones who did appeared not to realise. Substantial heterogeneity emerged, suggesting why prizes and patents, schemes that incentivise intellectual discovery but discourage information sharing, have been found to be less effective than mechanisms that reveal private information, such as markets.

Introduction

The knapsack problem (KP) is a combinatorial optimisation problem with the goal of finding, in a set of items of given values and weights, the subset of items with the highest total value, subject to a total weight constraint 1 , 2 ( Supplementary Methods 1.1 ). It is a member of the complexity class non-deterministic polynomial-time (NP) hard 2 , 3 , 4 . For those problems, there are no known efficient solution algorithms, that is, algorithms whose computational time only grows as a polynomial of the size of the problem’s instances 5 . This feature obtains despite the fact that one can compute relatively fast whether a given candidate solution reaches a certain value level.
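
To make this asymmetry concrete, here is a minimal Python sketch (not taken from the study) that solves a small instance by exhaustive enumeration, whose cost grows exponentially with the number of items, and checks a single candidate solution, which only takes time linear in the number of items. The five-item instance at the end is made up for illustration.

```python
from itertools import combinations

def solve_knapsack_bruteforce(values, weights, capacity):
    """Return (best_value, best_subset) by enumerating all subsets of items."""
    best_value, best_subset = 0, ()
    for r in range(len(values) + 1):
        for subset in combinations(range(len(values)), r):
            weight = sum(weights[i] for i in subset)
            value = sum(values[i] for i in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

def verify_candidate(values, weights, capacity, subset, target_value):
    """Verification is cheap: linear in the number of items."""
    return (sum(weights[i] for i in subset) <= capacity
            and sum(values[i] for i in subset) >= target_value)

# Made-up five-item instance (values, weights, capacity) for illustration only.
values, weights, capacity = [4, 3, 5, 2, 6], [3, 2, 4, 1, 5], 7
print(solve_knapsack_bruteforce(values, weights, capacity))        # (10, (1, 2, 3))
print(verify_candidate(values, weights, capacity, (1, 2, 3), 10))  # True
```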

The KP permeates the lives of humans (and non-human animals). It emerges at a low level of cognition, for example in the choice of stimuli to attend to (visual, auditory, tactile). At an intermediate level of cognition, the KP can be recognised in tasks like budgeting and time management 6 . At the highest level, it occurs for example in production (cutting problems 7 ), logistics (bin packing problems 8 ), combinatorial auctions, and, in financial economics, in portfolio optimisation 9 . It has also been argued that the KP reflects an important aspect of innovation and intellectual discovery 10 , 11 , 12 , 13 , 14 . Consistent with this view, a recent empirical study showed that most patents filed at the U.S. Patent Office between 1790 and 2010 were for inventions that combined existing technologies in novel ways 14 . Furthermore, the KP is related to mental disorders. For example, several of the symptoms of attention-deficit/hyperactivity disorder can be regarded as impaired ability to solve the KP 6 .

Here, we examine whether the degree of complexity of a computational problem affects human behaviour, and if so, what strategies humans resort to. Twenty healthy participants attempted to solve eight instances of the 0–1 knapsack problem 12 , administered on a computer ( Fig. 1a–b , Supplementary Methods 1.6 ). In each instance, participants were presented with a set of items of differing values and weights ( Table S1 ). Participants used a mouse to add available items to, and remove items from, the knapsack. Total value and weight of the selected items were displayed at the top of the screen. Participants were offered two attempts per instance. The time limit per attempt was four minutes. Participants were asked to press the space bar when they believed that they had found the solution of an instance, to submit the solution. Submitted attempts were rewarded based on an economic performance measure that increased in closeness in values between the solution submitted (“attained value”) and the optimal knapsack (“optimal solution value”).

Figure 1. Overview of experimental paradigm.

( a ) Example instance of the 0–1 knapsack problem with five items. The goal is to find the subset of available items that maximises total value of the knapsack, subject to a capacity constraint. Values and weights of available items are provided in the table. The capacity of the knapsack is 7. ( b ) In the experiment, instances were presented on a computer screen. Each item was represented by a square. An item could be added to and removed from the knapsack by clicking on it. Selected items were displayed in green. ( c ) The state of the knapsack can be represented by an array of 0 s and 1 s with length equal to the number of items available. ( d ) Admissible states (possible compositions of the knapsack that do not violate the weight constraint) can be represented as vertices in a graph. The position of a vertex on the abscissa is determined by the value of the knapsack it represents, and the position on the ordinate is determined by the distance, in item space, of the vertex from the optimal solution, i.e., by the number of items that need to be removed and added in order to reach the optimal solution (CG; see Supplementary Methods 1.2 ).
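
As a rough illustration of the representation described in this caption, the following sketch (my own, not the authors' code) enumerates the admissible states of a small made-up instance as 0/1 vectors and computes, for each state, its total value and its distance in item space from the optimal state, i.e., the number of additions and removals needed to reach the optimum.

```python
from itertools import product

def admissible_states(values, weights, capacity):
    """All 0/1 item vectors whose total weight respects the capacity constraint."""
    states = []
    for bits in product((0, 1), repeat=len(values)):
        if sum(w for w, b in zip(weights, bits) if b) <= capacity:
            states.append(bits)
    return states

def state_value(values, bits):
    return sum(v for v, b in zip(values, bits) if b)

def item_distance(bits_a, bits_b):
    """Number of additions/removals needed to turn one state into the other."""
    return sum(a != b for a, b in zip(bits_a, bits_b))

# Made-up instance for illustration.
values, weights, capacity = [4, 3, 5, 2, 6], [3, 2, 4, 1, 5], 7
states = admissible_states(values, weights, capacity)
optimum = max(states, key=lambda s: state_value(values, s))
for s in states:
    print(s, state_value(values, s), item_distance(s, optimum))
```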

Performance

Computational performance, defined as the proportion of attempts (“score”) of an instance in which participants found the maximum value, was 37.4%. The mean time on task at submission was 172 s (SD = 57 s), and 97.5% of attempts were submitted before the time limit of four minutes. While computational performance indicates that our instances were difficult indeed, there was substantial variability across instances (min = 0.027, M = 0.367, max = 0.744, SD = 0.19; Fig. 2a ), and across participants (min = 0.062, M = 0.374, max = 0.562, SD = 0.157; Fig. 2b ). Computational performance was significantly higher than what participants would have achieved by implementing stochastic search (‘trial and error’). Participants on average submitted only 14 different sets of items (SD = 7). In comparison, the average number of possible configurations of full (capacity-constrained) knapsacks in the different instances was 1,381. The chance that any capacity-constrained knapsack was the optimal one was a mere 0.7%, implying an expected computational performance from random search equal to only 0.10 (SD = 0.05). The total number of correct attempts was significantly above chance ( P <  0.001, one-sided binomial test). This suggests that searches were systematic, that is, based on a non-random protocol.

Figure 2. Variation in performance.

( a ) Success rates (proportion of successful attempts) for each of the eight instances administered in the task (see Table S1 for instance properties). ( b ) Success rates for each of the 20 participants. Blue lines indicate standard errors of means.

Economic performance was defined as the value attained in the submitted solution as a percentage of optimal solution value. Participant earnings were based on economic performance ( Supplementary Methods 1.6 ). We found that, on average, participants submitted solutions with value equal to 97.4% of optimal solution value. This value was significantly higher than the expected economic performance of an algorithm that randomly picks a capacity-constrained knapsack, which was 85.3% ( P <  0.001, one-sample t-test, t (307) = 36.382). Similar to computational performance, economic performance varied more by instance (min = 95.8%, M = 97.4%, max = 99.0%, SD = 1.1%) than by participant (min = 88.9%, M = 97.4%, max = 99.3%, SD = 2.4%).

We examined whether participant behaviour in our task was consistent with key principles of decision theory. One such principle is that economic performance is related to effort expended. As effort increases, economic performance should too, that is, effort is revealed through (economic) performance. In the context of the KP, it is not immediately clear, however, how to define effort. For a computational device like a Turing machine, effort could be defined as the number of computational steps or running time of an algorithm that solves the problem. Since we cannot directly observe the number of computations performed by participants, we relied on proxies. Our first proxy is the length of the sequence of item additions to and removals from the knapsack in an attempt. These sequences can be mapped into a graph where vertices correspond to possible solutions, and edges indicate additions/removals of items ( Fig. 1c–d , Fig. S1 ; Supplementary Methods 1.2 ). Path length then corresponds to the number of additions/removals (number of steps) required to travel from the initial vertex (empty knapsack) to a particular vertex. Terminal vertices in the graph correspond to capacity-constrained knapsacks: further additions would violate the weight constraint.
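
A hypothetical illustration of this effort proxy: if an attempt is logged as a sequence of add/remove events (the log format here is assumed, not the study's actual data format), the path length is simply the number of events, and the vertex reached at submission is obtained by replaying the log.

```python
def path_length(click_log):
    """Each click adds or removes exactly one item, so the path length
    of an attempt is the number of add/remove events it contains."""
    return len(click_log)

def final_state(click_log, n_items):
    """Replay the log to obtain the vertex (0/1 vector) reached at submission."""
    state = [0] * n_items
    for action, item in click_log:
        state[item] = 1 if action == 'add' else 0
    return tuple(state)

# Hypothetical attempt on a five-item instance.
log = [('add', 3), ('add', 1), ('remove', 3), ('add', 0)]
print(path_length(log), final_state(log, 5))   # 4 (1, 1, 0, 0, 0)
```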

We first investigated whether path length measured human effort. According to decision theory, if path length is an appropriate measure of effort, economic performance should increase with path length. We found that economic performance increased in path length of attempts ( P <  0.05, main effect of path length, linear mixed model (LMM) with random effects on intercept for individual participants; Supplementary Results 2.3 , Table S3 Model 3; Fig. 3a ).

Figure 3. Economic Performance, Effort and Difficulty.

( a ) Scatter plot of economic performance (value attained as percentage of optimal solution value) and effort measured as mean length of the search path across attempts of an instance (instance means). Black dashed line: 100%, red dashed line: main effect of path length, linear mixed model (LMM) with random effects on intercept for individual participants ( Table S3 Model 3), r: Pearson correlation, β : coefficient estimate of main effect. ( b ) Scatter plot of economic performance and effort measured as mean amount of clock time spent across attempts of an instance (instance means). Red dashed line: main effect of path length, LMM with random effects on intercept for individual participants ( Table S3 Model 4). ( c ) Mean marginal gain in economic performance across all attempts, per step in the search path. ( d ) Scatter plot of economic performance against difficulty (as revealed by mean success rate across attempts of an instance; inverted scale) (instance means).

A related, alternative measure of effort is clock time. In principle, there was no opportunity cost to time in our experimental setting, because participants were barred from other activities while engaged in our task. Thus, clock time in itself cannot measure effort. However, clock time increases in the number of computations performed, as well as other dimensions of effort. The latter include differences in energy required across types of computations. These dimensions of effort are not recognised as effort in Turing machines, but may constitute effort for humans. As such, clock time may provide a catch-all measure of effort. Indeed, we found that time spent on solving an instance was related to higher economic performance ( P <  0.05, main effect of clock time, LMM with random effects on intercept for individual participants; Supplementary Results 2.3 , Table S3 Model 4; Fig. 3b ).

In our task, earnings increased with a reduction in distance of value of submitted solution relative to optimal solution. As such, participants were not incentivised to find the optimal solution but only to get as close as possible to the optimal solution value. We therefore did not expect our measures of effort to correlate with computational performance, that is, whether the optimal solution was reached. Path length was indeed not related to computational performance ( P >  0.05, main effect of path length, generalised linear mixed model (GLMM) with random effects on intercept for participants; Supplementary Results 2.3 , Table S3 Model 1); nor was time spent on an attempt ( P >  0.05, main effect of clock time, GLMM with random effects for participants on intercept, Supplementary Results 2.3 , Table S3 Model 2).

Difficulty and Economic Performance

A second key economic principle is related to marginal value of effort. A traditional economic agent (“Homo Economicus”) would search for a solution until marginal gain equaled marginal cost of effort. Measuring effort in terms of our ‘catch-all’ metric, clock time, marginal cost of effort must have been strictly positive because participants almost never exhausted the time limit in their attempts. With strictly positive marginal cost of effort, we therefore expected mean marginal gain to remain strictly positive throughout their effort spending. To this end, we plotted economic gain against time ( Fig. S2a ). Contrary to expectations, mean marginal gain quickly dropped to zero, and participants continued to search well beyond this point. As such, most of the search time was spent at zero marginal gain, violating one of the basic principles of economic theory. The same picture emerged when we plotted marginal gain in economic performance against path length, our other measure of effort ( Fig. 3c ).

Recently, the principle that economic agents work until marginal gain equals marginal cost, has become the subject of criticism 15 . An alternative proposal is that economic agents are boundedly rational and, due to their cognitive limitations, work only until they reach an aspiration point, which is less computationally demanding. Even if this was the case, in our setting economic gains should have remained positive until participants stopped spending effort, while they were not. One might rebut that our results could be explained by unrealistically high aspiration levels, and that participants therefore continued to spend effort even if economic performance did not increase. However, in this case, we would expect participants to spend effort until the end of allotted time, which they did in only 12 out of 320 attempts (see above).

Turning to difficulty, instance difficulty could be measured by success rate, that is, computational performance, which should reveal instance difficulty as experienced by our participants. One can reasonably posit that, as difficulty increases, marginal gain in economic performance decreases faster as effort increases. Because of this, we expected participants to have spent less effort, and hence, have attained lower economic performance, in instances where they were less successful. Contrary to this expectation, we actually found that economic performance increased with instance difficulty (Pearson correlation r  = 0.838; P <  0.01; Fig. 3d ). This is rather paradoxical and raises a question: how could participants have sensed which instances were more difficult? To answer this question, we need to explore the nature of instance difficulty for humans, which we turn to in the next section.

The positive correlation between difficulty and economic performance could have been the result of the overall negative relation between path length from a solution to the optimum solution ( C G , see Supplementary Methods 1.2 ), on the one hand, and solution value, on the other hand. To be specific, in our case, the average correlation between the values of the vertices and their distances from the optimum solution was − 0.22 (min =  − 0.41; max = 0.04; Supplementary Methods 1.2 ). Because of this negative relation, solution attempts with higher economic performance (that is, with values closer to the optimum value) tended to be farther away from the optimum. Nevertheless, while computational performance was higher in instances where this negative relation was stronger, the correlation was insignificant (Pearson correlation r  =  − 0.256; P  > 0.05).

In the next step, we investigated what constituted difficulty for participants. We wanted to determine whether difficulty was related to the notion of (computational) complexity as developed on the basis of idealised models of computation such as a Turing machine. However, we also went beyond those notions of complexity and investigated the link between difficulty and the potential for computational mistakes.

For a computational problem like the KP, the number of computations required to solve the problem in a mathematical model of computation such as a Turing machine, increases in the number of items. We found that computational performance indeed decreased in the number of items in an instance, that is, instances with more items were revealed to be more difficult ( P <  0.001, main effect of number of items, GLMM with random effects on intercept for individual participants; Supplementary Results 2.5 , Table S4 Model 1). This is evidence for a relation between instance difficulty as experienced by humans, and as defined on the basis of a mathematical model of computation such as a Turing machine.

Computer science assigns computational problems to complexity classes, which are sets of functions that can be computed within given resource bounds 16 . Resource constraints of particular interest are time and memory. The problem of finding the solution to the 0–1 KP is a member of the complexity class non-deterministic polynomial-time hard. Membership of the complexity class is determined based on the worst instances of the problem. Other instances may require far less resources. To investigate the effect of computational complexity on human behaviour, we constructed measures that are related to the computational resources required by various computer algorithms designed for the 0–1 KP. We then related those measures to task performance.

In particular, we considered two classes of algorithms ( Supplementary Methods 1.3, 1.6 ). One class of algorithms are uninformed search algorithms, which find the solution by exhaustive listing of elements in the search space 17 . The latter can be formalised as a graph and thus its size can be expressed as the number of vertices in this graph ( Supplementary Methods 1.2 ). We found that difficulty experienced by humans increased with the total number of vertices in an instance’s graph ( P  < 0.001, main effect of number of vertices, GLMM with participant random effects on intercept; Supplementary Results 2.5 , Table S4 Model 2), as well as with the number of terminal vertices ( P  < 0.01, main effect of number of terminal vertices, GLMM with participant random effects on intercept; Supplementary Results 2.5 , Table S4 Model 3).

Another class of algorithms are informed search algorithms, which use a set of rules or heuristics to guide search 17 . The first algorithm we considered is based on dynamic programming 2 . It represents the KP in a form that can be solved by dynamic programming. The representation is a two-dimensional matrix of size equal to (number of items) × (base-2 logarithm of instance capacity), which is referred to here as input size . With this representation, a Turing machine with no memory constraints can solve the instance in polynomial time as a function of the size of the representation ( Supplementary Methods 1.3, 1.5 ). However, this representation requires working memory larger than that of a typical human. For our first instance, for example, a Turing machine would need a memory of size 10 × log₂(1900) ≈ 109 bits, many times larger than the capacity of human working memory 18 . Therefore, we did not expect that humans would resort to dynamic programming. Consistent with this conjecture, the relation between input size and computational performance was not significant ( P  > 0.05, main effect of input size, GLMM with participant random effects on intercept; Supplementary Results 2.5 , Table S4 Model 4).
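
For reference, the sketch below shows the textbook dynamic-programming recursion for the 0–1 knapsack problem together with the input-size metric referred to above. It is a generic sketch under my own assumptions, not the authors' implementation, and the example instance is made up; the second print reproduces the 10 × log₂(1900) ≈ 109 bits calculation from the text.

```python
import math

def knapsack_dp(values, weights, capacity):
    """Optimal value via the standard (n+1) x (capacity+1) table."""
    n = len(values)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]              # skip item i-1
            if weights[i - 1] <= c:                  # or take it, if it fits
                best[i][c] = max(best[i][c],
                                 best[i - 1][c - weights[i - 1]] + values[i - 1])
    return best[n][capacity]

def input_size_bits(n_items, capacity):
    """The 'input size' metric used in the text: items x log2(capacity)."""
    return n_items * math.log2(capacity)

print(knapsack_dp([4, 3, 5, 2, 6], [3, 2, 4, 1, 5], 7))  # 10 (made-up instance)
print(round(input_size_bits(10, 1900)))                   # 109
```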

Another important algorithm for the 0–1 KP is the Sahni- k algorithm 19 ( Supplementary Methods 1.3, 1.5 ). It solves an instance by a combination of brute-force search through a subset of items of cardinality k , and subsequently the greedy algorithm 20 . The greedy algorithm fills up the knapsack by picking items in reverse order of their value-to-weight ratio. If k equals zero, the Sahni- k algorithm coincides with the greedy algorithm. If k is equal to the number of items in the solution, the algorithm is similar to a brute-force search through the entire search space. k is proportional to the number of computational steps and the memory required to compute the solution of an instance. We define complexity of an instance as the minimum level of k necessary to solve the instance with the Sahni- k algorithm. If human approaches to the KP shared features with the Sahni- k algorithm, we would expect human-experienced difficulty to increase, and hence success rates to decrease, with k . Indeed, we found that computational performance was negatively correlated with Sahni- k (P  < 0.001, main effect of Sahni- k , GLMM with participant random effects on intercept; Supplementary Results 2.5 , Table S4 Model 5; Fig. 4b ).
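
The following sketch shows one way to implement the procedure just described (my reading of the description, not Sahni's original code): fix every subset of at most k items, complete it greedily in descending value-to-weight order, and keep the best feasible knapsack. The instance-level difficulty metric is then the smallest k for which this procedure reaches the optimum; with k = 0 the procedure reduces to the plain greedy algorithm.

```python
from itertools import combinations

def greedy_fill(values, weights, capacity, fixed=()):
    """Complete a fixed partial knapsack greedily, by descending value-to-weight ratio."""
    chosen = set(fixed)
    free = capacity - sum(weights[i] for i in chosen)
    order = sorted((i for i in range(len(values)) if i not in chosen),
                   key=lambda i: values[i] / weights[i], reverse=True)
    for i in order:
        if weights[i] <= free:
            chosen.add(i)
            free -= weights[i]
    return chosen

def sahni_k(values, weights, capacity, k):
    """Brute force over all subsets of at most k items, then greedy completion."""
    best_value, best_set = 0, set()
    for r in range(k + 1):
        for fixed in combinations(range(len(values)), r):
            if sum(weights[i] for i in fixed) > capacity:
                continue
            chosen = greedy_fill(values, weights, capacity, fixed)
            value = sum(values[i] for i in chosen)
            if value > best_value:
                best_value, best_set = value, chosen
    return best_value, best_set

def minimal_k(values, weights, capacity, optimal_value):
    """Smallest k for which Sahni-k reaches the optimum (the difficulty metric)."""
    for k in range(len(values) + 1):
        if sahni_k(values, weights, capacity, k)[0] == optimal_value:
            return k
    return len(values)

# Made-up instance: the greedy algorithm (k = 0) attains 9, but the optimum is 10.
print(sahni_k([4, 3, 5, 2, 6], [3, 2, 4, 1, 5], 7, 0))     # (9, {0, 1, 3})
print(minimal_k([4, 3, 5, 2, 6], [3, 2, 4, 1, 5], 7, 10))  # 1
```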

Figure 4. Properties of search.

( a ) Scatter plot of mean success rates across attempts of an instance and number of vertices in the graph of the instance. Red dashed line: main effect of number of vertices, general linear mixed model (GLMM) with random effects on intercept for individual participants ( Table S4 Model 2), r: Pearson correlation, β : coefficient estimate of main effect. ( b ) Mean success rates across attempts of instances, stratified by Sahni-k level ( Supplementary Methods 1.5 ). Blue lines: standard errors of means. ( c ) Scatter plot of mean success rates across attempts of an instance and Pearson correlation of item values and weights. Red dashed line: main effect of Pearson correlation between values and weights in instance, GLMM with random effects on intercept for individual participants ( Table S4 Model 6), r: Pearson correlation, β : coefficient estimate of main effect. ( d ) Time course of choice frequencies of items. The items available in an instance were sorted in reverse order of their density (value-to-weight ratio; vertical axis). The heat map shows average choice frequencies for the items for the first 11 steps (horizontal axis) in the search path across all attempts.

Many of the solution algorithms for the KP require arithmetic, which can be difficult for humans. For example, humans find it difficult to divide sequences of two numbers. Inaccuracies are more likely when numerators and denominators are highly correlated, that is, when the conditioning number is high 21 . In the greedy algorithm, items are ranked in descending order of their ratio of value over weight, and the knapsack is filled in this order until capacity is reached. The ranking will be less accurate if values and weights are more highly correlated. When correlation is perfect (as in our instance 7), ranking is impossible, and the greedy algorithm cannot be implemented. Therefore, if humans in part resort to applying the greedy algorithm, human-experienced difficulty should increase with, and economic performance should decrease with, the correlation between values and weights. We found indeed that computational performance was negatively related to this correlation ( P <  0.05, main effect of conditioning number, GLMM with participant random effects on intercept; Supplementary Results 2.5 , Table S4 Model 6, Fig. 4c ). However, the correlation did not explain differences in computational performance that Sahni- k could not capture ( P >  0.05, interaction effect Sahni- k  × conditioning number, GLMM with participant random effects on intercept; Supplementary Results 2.5 , Table S4 Model 7). This may have been the result of low power: with only 8 instances, insufficient independent variability is obtained across the Sahni-k metric and value-weight correlation.

Solution Search Strategies

We found that computational performance decreased with Sahni-k, suggesting that participants' search combined the greedy algorithm with a complementary protocol that was disciplined, though not error-free. This led us to hypothesise that, at least in the early stages of solution attempts, our participants tended to use the greedy algorithm. Consistent with this hypothesis, we found that in the first few steps of their attempts, participants were more likely to add items with the highest value-to-weight ratio (Fig. 4d, Fig. S3). Since participants searched longer and with more success than the greedy algorithm (Supplementary Results 2.7), we conjectured that their complementary protocol featured aspects of a branch-and-bound algorithm, in which one starts with the greedy algorithm and then searches for improvements until a termination criterion is reached 22 (Supplementary Methods 1.3). For high Sahni-k instances, where branch-and-bound algorithms deviate from the greedy algorithm, we expected participants who tried to replace multiple items at once to achieve higher scores: with higher Sahni-k, only branch-and-bound algorithms that consider replacing several items at once can find the optimal solution. We therefore tested whether individual differences in computational performance could be explained by the interaction between Sahni-k and the number of items replaced simultaneously between solution attempts. We measured the latter as the length of the shortest path between two full knapsack attempts, that is, between two terminal vertices in the instance graph (Supplementary Methods 1.2). The interaction term was indeed significant (P < 0.01, interaction Sahni-k × mean distance, GLMM with random effect for instances on intercept; Supplementary Results 2.7, Table S6). Heterogeneity in search strategies across participants could thus be partly explained by how many items participants reconsidered and changed during their searches, and hence by how elaborate their search protocol was as the distance between the greedy solution and the optimal solution increased.
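For readers unfamiliar with branch-and-bound, the sketch below shows a minimal textbook-style depth-first variant for the 0–1 KP. It starts from the greedy (density) ordering and prunes branches with the standard fractional-relaxation upper bound. It is meant only to illustrate the class of algorithm referred to above, not to model participants' behaviour; the function names and the toy instance are our own.

```python
def branch_and_bound(items, capacity):
    """Minimal depth-first branch-and-bound for the 0-1 knapsack problem.

    items: list of (value, weight) pairs. Returns (best_value, best_item_indices).
    """
    # Explore items in decreasing density order so the fractional bound is tight.
    order = sorted(range(len(items)), key=lambda i: items[i][0] / items[i][1], reverse=True)
    best = {"value": 0, "subset": []}

    def upper_bound(idx, value, room):
        # Fill the remaining room greedily, allowing a fraction of the last item.
        for i in order[idx:]:
            v, w = items[i]
            if w <= room:
                room -= w
                value += v
            else:
                return value + v * room / w
        return value

    def search(idx, value, room, chosen):
        if value > best["value"]:
            best["value"], best["subset"] = value, list(chosen)
        if idx == len(order) or upper_bound(idx, value, room) <= best["value"]:
            return                                   # prune: cannot beat the incumbent
        i = order[idx]
        v, w = items[i]
        if w <= room:                                # branch 1: include item i
            search(idx + 1, value + v, room - w, chosen + [i])
        search(idx + 1, value, room, chosen)         # branch 2: exclude item i

    search(0, 0, capacity, [])
    return best["value"], best["subset"]

print(branch_and_bound([(10, 5), (7, 4), (6, 3), (4, 2)], capacity=9))  # (17, [0, 1])
```

Loosely speaking, reconsidering a single recently added item corresponds to backtracking one level in this search tree, whereas replacing several items at once corresponds to backtracking several levels, which is what high Sahni-k instances require.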

Many algorithms depend on partial or complete memory of the search history within an attempt, for example, the highest value achieved so far and the items contained in the highest-valued knapsack. Here, too, humans may encounter difficulties, related to episodic memory. For example, some participants may not remember well why an item was put into the knapsack early on, and therefore decide to keep it in "because it must have been reasonable at the time." We examined to what extent there was a tendency not to eliminate incorrect items that were added early on, and whether this affected computational performance. To this end, we considered the distribution of the age of incorrect items that were eventually deleted (Fig. 5). We measured the age of a deleted item as the number of steps between its addition and its deletion, as a fraction of the total number of steps taken in the attempt; age equals 1 if the item was the first one added to the knapsack and was removed only at the end of the attempt. The vast majority of deleted incorrect items had been added only shortly before their deletion (M = 0.2920, SE = 0.0001); only rarely did participants eliminate incorrect items that had been added to the knapsack early on. A similar pattern emerged for correct items that were deleted (M = 0.2352, SE = 0.0001; Fig. 5). This means that participants were reluctant to reconsider incorrect items that were added early on, introducing path dependence into search. The mean age of deleted incorrect items was significantly higher than that of deleted correct items (P < 0.001, two-sample t-test, t = 6.98), and their distributions differed significantly (P < 0.001, two-sample Kolmogorov-Smirnov test, D = 0.10). Direct evidence for path dependence emerged when we found that the initial full knapsack heavily determined eventual computational success: the distance (path length) from the submitted solution to the optimum depended significantly on the distance from the first terminal vertex to the optimum (P < 0.001, main effect of distance of first vertex, LMM with random effects for participants on intercept; Supplementary Results 2.8, Table S7 Model 1).
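As an illustration of the age measure (under our reading of the definition above; the exact step-counting convention is an assumption), the following sketch computes item ages from a hypothetical log of additions and removals.

```python
def deleted_item_ages(events):
    """Ages of deleted items in one attempt.

    events: ordered (action, item) pairs, with action in {"add", "remove"}.
    Age = (removal step - addition step + 1) / total steps, so an item added at
    the very first step and removed at the very last step has age 1. The
    off-by-one convention is a guess; the qualitative picture is unaffected.
    """
    total = len(events)
    added_at, ages = {}, {}
    for step, (action, item) in enumerate(events, start=1):
        if action == "add":
            added_at[item] = step
        else:  # "remove"
            ages.setdefault(item, []).append((step - added_at.pop(item) + 1) / total)
    return ages

# Hypothetical event log (not taken from the experimental data)
log = [("add", "A"), ("add", "B"), ("remove", "B"),
       ("add", "C"), ("add", "D"), ("remove", "A")]
print(deleted_item_ages(log))  # A: added first, removed last -> age 1.0; B -> 2/6
```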

Figure 5: Distribution of item ages. Histogram of the age of deleted items (Supplementary Results 2.8). The age of an item was calculated as the number of steps between its addition and its removal, as a fraction of the total number of steps taken in the attempt (age equals 1 if the item was the first added to the knapsack and the last removed). Green bars: correct items; red bars: incorrect items. M corr: mean age and standard error of the mean for correct items; M incorr: mean age and standard error of the mean for incorrect items.

Heterogeneity

We also examined variation in searches between participants. We found that there was little overlap in the knapsacks that participants attempted. On average, an individual visited only 3.6% of all vertices in the instance graph ( Fig. S4a ), yet combining all attempts, the twenty participants visited 42.1% of all vertices ( Fig. S4b , Supplementary Results 2.6 ). This means that the group of 20 participants together explored more than ten times as much of the search space as the average individual participant explored alone. This suggests that there was substantial heterogeneity in search strategies (Video S1–16). Nevertheless, in all instances, at least one participant found the solution quickly ( Fig. S5 ).

In the theory of computation, the problem of finding the solution of an instance, referred to as the optimisation version of the KP, is distinguished from the problem of deciding whether a given target value or greater is attainable, referred to as the decision version. Given that the optimisation version of the KP is NP-hard and the decision version is NP-complete, we wanted to determine whether participants who found the solution knew that they had found it. To test this, we examined their second attempts at the same instance. Among participants who found the solution in their first attempt, 31.2% did not solve the instance in the second attempt ( Supplementary Results 2.9 ), which suggests that participants were mostly unaware that they had found the solution.

Discussion

It has often been argued that humans resort to heuristics in order to solve complex problems 23. Two questions have remained unanswered, however: (i) what type of complexity affects human decision-making, and (ii) are the heuristics humans use adapted to this complexity? Here, we investigated how computational complexity affects human decision-making. We found that several measures of complexity explained computational performance, and hence difficulty. Most prominently, the Sahni-k measure predicted success. This measure increases with the minimal number of items over which a combinatorial search has to be performed before the remainder of the knapsack can be filled using the greedy algorithm and the optimal solution attained.

This result replicates a finding from our earlier study 12, in which 124 participants were asked to solve the same set of instances used here. There, too, success rates were negatively correlated with Sahni-k 12, suggesting that the reported relation between behaviour and computational complexity is robust.

The concept of computational complexity is defined with respect to idealised mathematical models of computation, such as the Turing machine. Our results demonstrate that computational complexity, and hence the theory of computation, does help us understand when and how complexity impacts human decision-making. Our results were obtained in the context of the KP, a canonical example of a complex computational and decision problem, and future research should investigate to what extent computational complexity affects human behaviour in other problems.

We found that, as for a Turing machine, effort can be measured in terms of the number of computational steps. However, we also found that this gives an incomplete picture. Our results indicate that problem-solving ability was limited by biological constraints, including limited working and episodic memory as well as difficulty with arithmetic. Success rates decreased with Sahni-k, likely because instances with higher Sahni-k require more working memory. Working memory is known to vary between individuals; in our task, better working memory allows one to consider replacing multiple items at once, which is needed to solve the more difficult instances. Unwillingness to reconsider items selected many steps earlier suggests that episodic memory constraints also affected behaviour. We further observed difficulties with the division of numbers required for successful implementation of the greedy algorithm. However, the number of instances participants were asked to solve was insufficient to disentangle the influence of Sahni-k from that of division errors.

Despite memory and other computational constraints, participants scored well on our instances. We attribute this to algorithmic flexibility: humans appear not to stick to a single algorithm, but instead opportunistically change search procedures when encountering difficulties, for example, by deviating from the greedy algorithm when values and weights are highly correlated. Algorithmic flexibility was also evident in the surprisingly low overlap in composition of knapsacks attempted by the different participants.

To overcome memory and other computational constraints, humans could be provided with tools that facilitate solving the KP. A table akin to the one used in dynamic programming may alleviate working memory problems. Likewise, forcing participants to re-consider items included early on in the search may help avoid the cognitive inflexibility caused by episodic memory constraints. Explicit ranking of items by their value-to-weight ratios may facilitate the use of the greedy algorithm in the early stages of the search process.
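For concreteness, the kind of external table alluded to here is the standard dynamic-programming value table for the 0–1 KP with integer weights, sketched below (a textbook construction, not part of the experiment; the toy instance is invented).

```python
def knapsack_dp_table(items, capacity):
    """Standard dynamic-programming table for the 0-1 KP with integer weights.

    table[i][c] = best total value achievable using only the first i items
    with capacity c. The optimum is table[len(items)][capacity].
    """
    n = len(items)
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i, (value, weight) in enumerate(items, start=1):
        for c in range(capacity + 1):
            table[i][c] = table[i - 1][c]                    # skip item i
            if weight <= c:                                  # or take it, if it fits
                table[i][c] = max(table[i][c], table[i - 1][c - weight] + value)
    return table

table = knapsack_dp_table([(10, 5), (7, 4), (6, 3), (4, 2)], capacity=9)
print(table[-1][-1])   # optimal value, 17 for this toy instance
```

Once such a table is written down, no intermediate best values need to be held in working memory; only the final cell (plus, if desired, a backward pass to recover the chosen items) matters.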

One of our key findings is that participants tended to spend more effort on more difficult problems and, as a result, achieved better economic performance on them. Relatedly, they tended to search well beyond the point where the marginal economic gain had decreased to zero. These results contradict basic principles of economic theory. They may explain important situations in which humans do not trade off marginal utility against marginal effort, such as labour-leisure choices. For instance, taxi drivers do not increase time spent on the job when marginal gains are high; instead, they increase effort per unit of economic gain on "difficult" days (days when passengers are harder to find) 24. An important open question is how participants in our experiment were able to recognise that an instance was more difficult, given that the measure that explained difficulty, Sahni-k, cannot be computed until one knows the solution of an instance.

Our results cast doubt on a basic tenet of preference-based decision theory, namely, the assumption that decision-makers always optimise 25. Decision theory assumes that agents are able to identify the option they most prefer when faced with any finite, though potentially large, set of alternatives from the choice space. Our results suggest, however, that humans cannot find the optimum in many instances of a ubiquitous choice problem, which casts doubt on the empirical relevance of modern preference-based decision theory. One real-world example of the KP is attention gating, which has been the subject of study in the emerging theory of rational inattention 26. There, economic agents choose optimally which items to attend to and which to ignore, trading off the benefits of attending to more items against the cost of doing so. It is questionable whether humans can execute the policies that emerge from such a preference-based theory of inattention. Our critique cannot be brushed aside by the argument that optimisation is merely "as if" (agents merely have to choose "as if" they optimise). The ability to represent choice in terms of optimisation rests on a basic axiom of preference theory, namely, completeness. If the optimisation problem is intractable, the preferences it represents are effectively incomplete. In general, we would advocate further development of a decision theory that takes the computational demands of choice seriously 27, 28, 29.

Across participants, we found little overlap in the knapsacks that they visited. This implies, first, that there does not seem to be a single "representative heuristic" that captures the essence of all participants' strategies. Second, it implies that information sharing has the potential to increase the number of participants who manage to solve a given instance. Markets are one way to share information indirectly, through prices. Our earlier study, involving 124 participants, indeed demonstrated that markets perform better than mechanisms such as prizes or patents that do not involve information sharing 12. Our findings therefore have implications for incentivising crucial human activities that can be thought of as solving instances of the KP, such as intellectual discovery and innovation 10, 11, 12, 13, 14. There, recent advances have generally been attributed to the establishment of the patent system. But the patent system discourages information sharing and could therefore be improved upon by mechanisms that reveal information. Perhaps this has already happened: along with the introduction of patents, we have also witnessed a surge in the use of markets, which do lead to information sharing 12. In any case, future research should examine more closely the relation between the KP and intellectual discovery and innovation.

We found that computational performance decreased with the size of the search space, measured by the size of the graph of an instance. This result may explain earlier findings that a larger number of decision options negatively affects choice, a phenomenon referred to as "choice overload" 30. Our findings offer an algorithmic explanation of this phenomenon: the larger the number of choice options, the larger the search space through which the decision-maker has to search, and the less likely the decision-maker is to discover the solution.

Interpretability of our results may be somewhat limited by the relatively small size of our sample (20 participants). Nevertheless, each participant made 16 attempts (two attempts at each of eight instances of the KP), resulting in 320 attempts in total, and one of the main results, the relation between participant success and a metric of difficulty from computer science, replicates the findings of an earlier study 12 with 124 participants. This should suffice for the purpose at hand, which was to investigate average behaviour.

Given the central importance of the KP in human life, we would advocate further investigation as a way to gain a more comprehensive understanding of human decision-making. This is bound to improve welfare not only through better incentivisation of, for example, intellectual discovery and innovation. It also promises to contribute to health through better diagnosis and treatment of mental disorders that involve choice patterns detrimental to successful KP solving, whether because of lack of cognitive flexibility (obsessive-compulsive disorder 31), sub-optimal time allocation (attention deficit/hyperactivity disorder 32), or overly rigid adherence to "rational" economic principles (autism spectrum disorder 33).

Methods

Participants and experimental task

Twenty human volunteers (age range = 18–30, mean age = 21.9, 10 female, 10 male), recruited from the general population, took part in the study. Inclusion criteria were based on age (minimum = 18 years, maximum = 35 years), right-handedness and normal or corrected-to-normal vision. The experimental protocol was approved by The University of Melbourne Human Research Ethics Committee (Ethics ID 1443290), and written informed consent was obtained from all participants prior to commencement of the experimental sessions. Experiments were performed in accordance with all relevant guidelines and regulations.

Participants were asked to solve eight instances of the 0–1 knapsack problem 2. For each instance, participants had to select, from a set of items with given values and weights, the subset of items with the highest total value, subject to a total weight constraint ( Supplementary Tables , Table S1 ). The instances used in this study were taken from a prior study 12 and differed significantly in their computational complexity ( Table S2 ).

The instances were displayed on a computer display (1000 × 720 pixels; Fig. 1b ). Each item was represented by a square. The value and weight of an item were displayed at the centre of the square. The size of an item was proportional to its weight and its colour (share of blue) was proportional to its value. At the top of the screen, the total value, total weight and weight constraint of the knapsack were displayed. When the mouse was moved over an item, a black frame appeared around the square and the counters at the top of the screen added this item's value and weight to the totals. When the mouse was moved over an item that could not be added to the knapsack at that time, because its addition would have violated the weight constraint, the counters turned red. An item was selected into the knapsack by clicking on it. Once an item was selected into the knapsack, it turned green. The item stayed green until it was removed from the knapsack (by clicking on it again). A solution was submitted by pressing the space bar. An attempt was automatically terminated after 240 s, and the time remaining was displayed by a progress bar in the top-right corner of the screen.

Each participant had two attempts per instance. The order of instances was randomised across the experimental session. We recorded the time course of selections of items into, and removals of items from, the knapsack. To make the task incentive compatible, participants received a payment proportional to the value of their attempts (between $0 and $4 per attempt). In addition, participants received a show-up fee of $5.

Data analysis

For each attempt, we recorded the sequence of additions of items to and removals of items from the knapsack. Each element in this sequence represents a state of the knapsack, and each state of the knapsack corresponds to a vertex in the graph G of the instance (the first element of the sequence always corresponds to the initial vertex of G, and the last element always corresponds to the participant's proposed solution of the instance). A sequence of additions and removals can thus be represented as a path in the graph (see Supplementary Methods 1.2).

For each attempt, we recorded the time when the attempt was submitted as well as the sequence of additions and removals of items. For each step in this sequence, we computed the total value of the items selected as well as the distance C_G(i, s) in the instance graph between the vertex i representing this subset of items and the solution vertex s (see Supplementary Methods 1.2). The subset of items selected at the time of submission was the participant's proposed solution of the instance. The attempt was marked correct if the subset of items in the participant's proposed solution was the solution of the instance (that is, if C_G(i, s) was equal to zero), and incorrect otherwise.
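The following sketch shows one way such a graph distance can be computed for instances of the small size used here: vertices are weight-feasible item subsets, edges connect subsets that differ by adding or removing a single item, and breadth-first search yields the shortest path. This is our own reconstruction of the construction referenced in Supplementary Methods 1.2 and may differ in detail from the authors' implementation.

```python
from collections import deque

def graph_distance(items, capacity, start, target):
    """Shortest-path distance between two feasible knapsack states.

    items: list of (value, weight); start, target: frozensets of item indices.
    Vertices are weight-feasible subsets; neighbours differ by exactly one
    addition or removal. Returns the number of single-item moves needed.
    """
    def feasible(state):
        return sum(items[i][1] for i in state) <= capacity

    assert feasible(start) and feasible(target)
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        state, dist = queue.popleft()
        if state == target:
            return dist
        for i in range(len(items)):
            neighbour = state - {i} if i in state else state | {i}
            if neighbour not in seen and feasible(neighbour):
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # unreachable; cannot happen, since the empty knapsack connects all states

items = [(10, 5), (7, 4), (6, 3), (4, 2)]
print(graph_distance(items, 9, frozenset({0, 2}), frozenset({0, 1})))  # 2 moves
```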

To evaluate an attempt in value space, we computed the value of the proposed solution normalised by the value of the optimal solution, the quantity on which the reward schedule was based. We also computed the difference between the value of the proposed solution and the mean of the values of all terminal vertices in the graph representing the problem. The latter is the mean value of all maximal admissible knapsacks, which is equal to the expected value of randomly selecting items into the knapsack until the knapsack is full.
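A small sketch of the two value-space measures described here, on an invented toy instance (exhaustive enumeration of terminal knapsacks is feasible only because the experimental instances contain few items).

```python
from itertools import combinations

def knapsack_value(subset, items):
    return sum(items[i][0] for i in subset)

def performance_measures(proposed, items, capacity):
    """Normalised value and excess over the mean value of terminal (maximal) knapsacks."""
    n = len(items)
    feasible = [set(s) for k in range(n + 1) for s in combinations(range(n), k)
                if sum(items[i][1] for i in s) <= capacity]

    def maximal(s):
        # A terminal vertex: no further item can be added without exceeding capacity.
        return all(i in s or sum(items[j][1] for j in s) + items[i][1] > capacity
                   for i in range(n))

    terminal_values = [knapsack_value(s, items) for s in feasible if maximal(s)]
    optimum = max(knapsack_value(s, items) for s in feasible)
    value = knapsack_value(proposed, items)
    return value / optimum, value - sum(terminal_values) / len(terminal_values)

items = [(10, 5), (7, 4), (6, 3), (4, 2)]          # hypothetical instance
print(performance_measures({0, 2}, items, capacity=9))
```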

All analyses were performed in Python (version 2.7.6) and R (version 3.2.0).

Additional Information

How to cite this article: Murawski, C. and Bossaerts, P. How Humans Solve Complex Problems: The Case of the Knapsack Problem. Sci. Rep. 6, 34851; doi: 10.1038/srep34851 (2016).

References

1. Mathews, G. B. On the partition of numbers. Proceedings of the London Mathematical Society 28, 486–490 (1897).

2. Kellerer, H., Pferschy, U. & Pisinger, D. Knapsack Problems (Springer Science & Business Media, 2004).

3. Cook, S. A. An overview of computational complexity. Communications of the ACM 26, 400–408 (1983).

4. Karp, R. M. Reducibility among combinatorial problems. In Complexity of Computer Computations 85–103 (Springer, 1972).

5. Moore, C. & Mertens, S. The Nature of Computation (Oxford University Press, Oxford, 2011).

6. Torralva, T., Gleichgerrcht, E., Lischinsky, A., Roca, M. & Manes, F. "Ecological" and highly demanding executive tasks detect real-life deficits in high-functioning adult ADHD patients. Journal of Attention Disorders 17, 11–19 (2013).

7. de Carvalho, V. Exact solution of cutting stock problems using column generation and branch-and-bound. International Transactions in Operational Research 5, 35–44 (1998).

8. Pisinger, D. Heuristics for the container loading problem. European Journal of Operational Research 141, 382–392 (2002).

9. Mansini, R. & Speranza, M. G. Heuristic algorithms for the portfolio selection problem with minimum transaction lots. European Journal of Operational Research 114, 219–233 (1999).

10. Arthur, W. B. & Polak, W. The evolution of technology within a simple computer model. Complexity 11, 23–31 (2006).

11. Arthur, W. B. The Nature of Technology: What It Is and How It Evolves (Allen Lane, London, 2009).

12. Meloso, D., Copic, J. & Bossaerts, P. Promoting intellectual discovery: patents versus markets. Science 323, 1335–1339 (2009).

13. Wagner, A. Arrival of the Fittest: Solving Evolution's Greatest Puzzle (Oneworld, London, 2014).

14. Youn, H., Strumsky, D., Bettencourt, L. M. A. & Lobo, J. Invention as a combinatorial process: evidence from US patents. Journal of the Royal Society Interface 12, 20150272 (2015).

15. Simon, H. A. Theories of decision-making in economics and behavioral science. The American Economic Review 253–283 (1959).

16. Arora, S. & Barak, B. Computational Complexity: A Modern Approach (Cambridge University Press, Cambridge, 2010).

17. Russell, S. & Norvig, P. Artificial Intelligence (Pearson, Harlow, 2014).

18. Cowan, N. The magical mystery four: How is working memory capacity limited, and why? Current Directions in Psychological Science 19, 51–57 (2010).

19. Sahni, S. Approximate algorithms for the 0–1 knapsack problem. Journal of the ACM 22, 115–124 (1975).

20. Cormen, T. H., Leiserson, C., Rivest, R. L. & Stein, C. Introduction to Algorithms (MIT Press, Cambridge, MA, 2001).

21. Pisinger, D. Where are the hard knapsack problems? Computers & Operations Research 32, 2271–2284 (2004).

22. Land, A. H. & Doig, A. An automatic method for solving discrete optimization problems. Econometrica 28, 497–520 (1960).

23. Tversky, A. & Kahneman, D. Judgment under uncertainty: Heuristics and biases. Science 185, 1124–1131 (1974).

24. Camerer, C., Babcock, L., Loewenstein, G. & Thaler, R. Labor supply of New York City cabdrivers: One day at a time. The Quarterly Journal of Economics 407–441 (1997).

25. McFadden, D. Rationality for economists? Journal of Risk and Uncertainty 19, 73–105 (1999).

26. Sims, C. A. Implications of rational inattention. Journal of Monetary Economics 50, 665–690 (2003).

27. Lewis, R. L., Howes, A. & Singh, S. Computational rationality: Linking mechanisms and behavior through bounded utility maximisation. Topics in Cognitive Science 6, 279–311 (2014).

28. Gershman, S. J., Horvitz, E. J. & Tenenbaum, J. B. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 349, 273–278 (2015).

29. Parkes, D. C. & Wellman, M. P. Economic reasoning and artificial intelligence. Science 349, 267–272 (2015).

30. Toffler, A. Future Shock (Random House, New York, NY, 1970).

31. Clayton, I. C., Richards, J. C. & Edwards, C. J. Selective attention in obsessive–compulsive disorder. Journal of Abnormal Psychology 108, 171 (1999).

32. Adler, L. A. & Chua, H. C. Management of ADHD in adults. Journal of Clinical Psychiatry 63, 29–35 (2002).

33. Sally, D. & Hill, E. The development of interpersonal strategy: Autism, theory-of-mind, cooperation and fairness. Journal of Economic Psychology 27, 73–97 (2006).

Acknowledgements

The authors thank Hayley Warren for assistance with data acquisition. C.M. was supported by a Strategic Research Initiatives Fund grant from the Faculty of Business and Economics, The University of Melbourne.

Author information

Authors and Affiliations

Department of Finance, The University of Melbourne, Parkville, 3010, Victoria, Australia

Carsten Murawski & Peter Bossaerts

The Florey Institute of Neuroscience and Mental Health, Parkville, 3010, Victoria, Australia

Peter Bossaerts


Contributions

C.M. and P.B. designed the experiment, collected and analysed the data, and wrote the manuscript.

Ethics declarations

Competing interests

The authors declare no competing financial interests.

Electronic supplementary material

Rights and permissions

This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

About this article

Received: 03 December 2015. Accepted: 16 September 2016. Published: 07 October 2016.

DOI: https://doi.org/10.1038/srep34851


How to solve problems using the design thinking process

By Sarah Laoyan

The design thinking process is a problem-solving design methodology that helps you develop solutions in a human-focused way. Initially designed at Stanford’s d.school, the five stage design thinking method can help solve ambiguous questions, or more open-ended problems. Learn how these five steps can help your team create innovative solutions to complex problems.

As humans, we’re approached with problems every single day. But how often do we come up with solutions to everyday problems that put the needs of individual humans first?

This is how the design thinking process started.

What is the design thinking process?

The design thinking process is a problem-solving design methodology that helps you tackle complex problems by framing the issue in a human-centric way. The design thinking process works especially well for problems that are not clearly defined or have a more ambiguous goal.

One of the first individuals to write about design thinking was John E. Arnold, a mechanical engineering professor at Stanford. Arnold wrote about four major areas of design thinking in his book, “Creative Engineering” in 1959. His work was later taught at Stanford’s Hasso-Plattner Institute of Design (also known as d.school), a design institute that pioneered the design thinking process. 

This eventually led Nobel Prize laureate Herbert Simon to outline one of the first iterations of the design thinking process in his 1969 book, “The Sciences of the Artificial.” While there are many different variations of design thinking, “The Sciences of the Artificial” is often credited as the basis. 


A non-linear design thinking approach

Design thinking is not a linear process. It’s important to understand that each stage of the process can (and should) inform the other steps. For example, when you’re going through user testing, you may learn about a new problem that didn’t come up during any of the previous stages. You may learn more about your target personas during the final testing phase, or discover that your initial problem statement can actually help solve even more problems, so you need to redefine the statement to include those as well. 

Why use the design thinking process

The design thinking process is not the most intuitive way to solve a problem, but the results that come from it are worth the effort. Here are a few other reasons why implementing the design thinking process for your team is worth it.

Focus on problem solving

As human beings, we often don’t go out of our way to find problems. Since there’s always an abundance of problems to solve, we’re used to solving problems as they occur. The design thinking process forces you to look at problems from many different points of view. 

The design thinking process requires focusing on human needs and behaviors, and how to create a solution to match those needs. This focus on problem solving can help your design team come up with creative solutions for complex problems. 

Encourages collaboration and teamwork

The design thinking process cannot happen in a silo. It requires many different viewpoints from designers, future customers, and other stakeholders. Brainstorming sessions and collaboration are the backbone of the design thinking process.

Foster innovation

The design thinking process focuses on finding creative solutions that cater to human needs. This means your team is looking to find creative solutions for hyper specific and complex problems. If they’re solving unique problems, then the solutions they’re creating must be equally unique.

The iterative process of the design thinking process means that the innovation doesn’t have to end—your team can continue to update the usability of your product to ensure that your target audience’s problems are effectively solved. 

The 5 stages of design thinking

Currently, one of the more popular models of design thinking is the one proposed by the Hasso-Plattner Institute of Design (or d.school) at Stanford. Its popularity stems largely from the success this process has had at companies like Google, Apple, Toyota, and Nike. Here are the five steps designated by the d.school model that have helped many companies succeed.

1. Empathize stage

The first stage of the design thinking process is to look at the problem you’re trying to solve in an empathetic manner. To get an accurate representation of how the problem affects people, actively look for people who encountered this problem previously. Asking them how they would have liked to have the issue resolved is a good place to start, especially because of the human-centric nature of the design thinking process. 

Empathy is an incredibly important aspect of the design thinking process.  The design thinking process requires the designers to put aside any assumptions and unconscious biases they may have about the situation and put themselves in someone else’s shoes. 

For example, if your team is looking to fix the employee onboarding process at your company, you may interview recent new hires to see how their onboarding experience went. Another option is to have a more tenured team member go through the onboarding process so they can experience exactly what a new hire experiences.

2. Define stage

Sometimes a designer will encounter a situation when there’s a general issue, but not a specific problem that needs to be solved. One way to help designers clearly define and outline a problem is to create human-centric problem statements. 

A problem statement helps frame a problem in a way that provides relevant context in an easy to comprehend way. The main goal of a problem statement is to guide designers working on possible solutions for this problem. A problem statement frames the problem in a way that easily highlights the gap between the current state of things and the end goal. 

Tip: Problem statements are best framed as a need for a specific individual. The more specific you are with your problem statement, the better designers can create a human-centric solution to the problem. 

Examples of good problem statements:

We need to decrease the number of clicks a potential customer takes to go through the sign-up process.

We need to decrease the new subscriber unsubscribe rate by 10%. 

We need to increase the Android app adoption rate by 20%.

3. Ideate stage

This is the stage where designers create potential solutions to solve the problem outlined in the problem statement. Use brainstorming techniques with your team to identify the human-centric solution to the problem defined in step two. 

Here are a few brainstorming strategies you can use with your team to come up with a solution:

Standard brainstorm session: Your team gathers together and verbally discusses different ideas out loud.

Brainwrite: Everyone writes their ideas down on a piece of paper or a sticky note and each team member puts their ideas up on the whiteboard. 

Worst possible idea: The inverse of your end goal. Your team comes up with the goofiest, worst possible ideas, which takes the pressure off so nobody worries about looking silly and removes the rigidity of other brainstorming techniques. This technique also helps you identify areas you can improve in your actual solution by looking at the worst parts of an absurd solution. 

It’s important that you don’t discount any ideas during the ideation phase of brainstorming. You want to have as many potential solutions as possible, as new ideas can help trigger even better ideas. Sometimes the most creative solution to a problem is the combination of many different ideas put together.

4. Prototype stage

During the prototype phase, you and your team design a few different variations of inexpensive or scaled down versions of the potential solution to the problem. Having different versions of the prototype gives your team opportunities to test out the solution and make any refinements. 

Prototypes are often tested by other designers, team members outside of the initial design department, and trusted customers or members of the target audience. Having multiple versions of the product gives your team the opportunity to tweak and refine the design before testing with real users. During this process, it’s important to document the testers using the end product. This will give you valuable information as to what parts of the solution are good, and which require more changes.

5. Test stage

After testing different prototypes out with testers, your team should have different solutions for how your product can be improved. The testing and prototyping phase is an iterative process—so much so that it's possible that some design projects never end.

After designers take the time to test, reiterate, and redesign new products, they may find new problems, different solutions, and gain an overall better understanding of the end-user. The design thinking framework is flexible and non-linear, so it’s totally normal for the process itself to influence the end design. 

Tips for incorporating the design thinking process into your team

If you want your team to start using the design thinking process, but you’re unsure of how to start, here are a few tips to help you out. 

Start small: Similar to how you would test a prototype on a small group of people, you want to test out the design thinking process with a smaller team to see how your team functions. Give this test team some small projects to work on so you can see how this team reacts. If it works out, you can slowly start rolling this process out to other teams.

Incorporate cross-functional team members : The design thinking process works best when your team members collaborate and brainstorm together. Identify who your designer’s key stakeholders are and ensure they’re included in the small test team. 

Organize work in a collaborative project management software : Keep important design project documents such as user research, wireframes, and brainstorms in a collaborative tool like Asana . This way, team members will have one central source of truth for anything relating to the project they’re working on.

Foster collaborative design thinking with Asana

The design thinking process works best when your team works collaboratively. You don’t want something as simple as miscommunication to hinder your projects. Instead, compile all of the information your team needs about a design project in one place with Asana. 


Overview of the Problem-Solving Mental Process

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Rachel Goldman, PhD FTOS, is a licensed psychologist, clinical assistant professor, speaker, wellness expert specializing in eating behaviors, stress management, and health behavior change.


  • Identify the Problem
  • Define the Problem
  • Form a Strategy
  • Organize Information
  • Allocate Resources
  • Monitor Progress
  • Evaluate the Results

Frequently Asked Questions

Problem-solving is a mental process that involves discovering, analyzing, and solving problems. The ultimate goal of problem-solving is to overcome obstacles and find a solution that best resolves the issue.

The best strategy for solving a problem depends largely on the unique situation. In some cases, people are better off learning everything they can about the issue and then using factual knowledge to come up with a solution. In other instances, creativity and insight are the best options.

It is not necessary to follow problem-solving steps sequentially; it is common to skip steps or even go back through steps multiple times until the desired solution is reached.

In order to correctly solve a problem, it is often important to follow a series of steps. Researchers sometimes refer to this as the problem-solving cycle. While this cycle is portrayed sequentially, people rarely follow a rigid series of steps to find a solution.

The following steps include developing strategies and organizing knowledge.

1. Identifying the Problem

While it may seem like an obvious step, identifying the problem is not always as simple as it sounds. In some cases, people might mistakenly identify the wrong source of a problem, which will make attempts to solve it inefficient or even useless.

Some strategies that you might use to figure out the source of a problem include:

  • Asking questions about the problem
  • Breaking the problem down into smaller pieces
  • Looking at the problem from different perspectives
  • Conducting research to figure out what relationships exist between different variables

2. Defining the Problem

After the problem has been identified, it is important to fully define the problem so that it can be solved. You can define a problem by operationally defining each aspect of the problem and setting goals for what aspects of the problem you will address.

At this point, you should focus on figuring out which aspects of the problems are facts and which are opinions. State the problem clearly and identify the scope of the solution.

3. Forming a Strategy

After the problem has been identified, it is time to start brainstorming potential solutions. This step usually involves generating as many ideas as possible without judging their quality. Once several possibilities have been generated, they can be evaluated and narrowed down.

The next step is to develop a strategy to solve the problem. The approach used will vary depending upon the situation and the individual's unique preferences. Common problem-solving strategies include heuristics and algorithms.

  • Heuristics are mental shortcuts that are often based on solutions that have worked in the past. They can work well if the problem is similar to something you have encountered before and are often the best choice if you need a fast solution.
  • Algorithms are step-by-step strategies that are guaranteed to produce a correct result. While this approach is great for accuracy, it can also consume time and resources.

Heuristics are often best used when time is of the essence, while algorithms are a better choice when a decision needs to be as accurate as possible.

4. Organizing Information

Before coming up with a solution, you need to first organize the available information. What do you know about the problem? What do you not know? The more information that is available the better prepared you will be to come up with an accurate solution.

When approaching a problem, it is important to make sure that you have all the data you need. Making a decision without adequate information can lead to biased or inaccurate results.

5. Allocating Resources

Of course, we don't always have unlimited money, time, and other resources to solve a problem. Before you begin to solve a problem, you need to determine how high priority it is.

If it is an important problem, it is probably worth allocating more resources to solving it. If, however, it is a fairly unimportant problem, then you do not want to spend too much of your available resources on coming up with a solution.

At this stage, it is important to consider all of the factors that might affect the problem at hand. This includes looking at the available resources, deadlines that need to be met, and any possible risks involved in each solution. After careful evaluation, a decision can be made about which solution to pursue.

6. Monitoring Progress

After selecting a problem-solving strategy, it is time to put the plan into action and see if it works. This step might involve trying out different solutions to see which one is the most effective.

It is also important to monitor the situation after implementing a solution to ensure that the problem has been solved and that no new problems have arisen as a result of the proposed solution.

Effective problem-solvers tend to monitor their progress as they work towards a solution. If they are not making good progress toward reaching their goal, they will reevaluate their approach or look for new strategies .

7. Evaluating the Results

After a solution has been reached, it is important to evaluate the results to determine if it is the best possible solution to the problem. This evaluation might be immediate, such as checking the results of a math problem to ensure the answer is correct, or it can be delayed, such as evaluating the success of a therapy program after several months of treatment.

Once a problem has been solved, it is important to take some time to reflect on the process that was used and evaluate the results. This will help you to improve your problem-solving skills and become more efficient at solving future problems.

A Word From Verywell

It is important to remember that there are many different problem-solving processes with different steps, and this is just one example. Problem-solving in real-world situations requires a great deal of resourcefulness, flexibility, resilience, and continuous interaction with the environment.


You can become a better problem solver by:

  • Practicing brainstorming and coming up with multiple potential solutions to problems
  • Being open-minded and considering all possible options before making a decision
  • Breaking down problems into smaller, more manageable pieces
  • Asking for help when needed
  • Researching different problem-solving techniques and trying out new ones
  • Learning from mistakes and using them as opportunities to grow

It's important to communicate openly and honestly with your partner about what's going on. Try to see things from their perspective as well as your own. Work together to find a resolution that works for both of you. Be willing to compromise and accept that there may not be a perfect solution.

Take breaks if things are getting too heated, and come back to the problem when you feel calm and collected. Don't try to fix every problem on your own—consider asking a therapist or counselor for help and insight.

If you've tried everything and there doesn't seem to be a way to fix the problem, you may have to learn to accept it. This can be difficult, but try to focus on the positive aspects of your life and remember that every situation is temporary. Don't dwell on what's going wrong—instead, think about what's going right. Find support by talking to friends or family. Seek professional help if you're having trouble coping.




Psychology solving problems

Arthur C. Evans, Jr., PhD

Vol. 53 No. 3 Print version: page 8


A refrain I hear from psychologists is the importance of our field helping the public to better understand and use psychology. This is such a strong sentiment that it emerged as a major priority in APA’s strategic plan. As the association has pursued this goal, we have learned a lot about how we can be more effective at achieving it.

The key is connecting relevant science to the issues the public cares about and sharing this knowledge in digestible ways.

Here are three examples of what has been successful:

Consistently incorporating the human element. Complex problems are frequently framed in ways that omit the human element—human cognition, emotion, and behavior. This not only renders psychology irrelevant in the minds of the public, but it also weakens potential solutions to these challenges. Throughout the pandemic, we have seen the effectiveness of public health strategies wane without consideration of human factors issues, such as what motivates people to engage in healthy behaviors and take risks. Psychology can help leaders and policymakers solve complex problems, but we must ensure that the human element is included from the outset.

Word choice matters. The language we use to describe our work has a direct bearing on its success. For example, how we describe APA’s efforts to address racism impacts whether people see the work as central or extraneous to our mission. Word choice affects how people understand this work—across the country and the political spectrum. When addressing systemic racism is conveyed as not only a social justice issue, but as critical for building more equitable and effective organizations, people are more likely to see the relevance.

Using the lens of others. Effectively explaining complex concepts to solve real-world problems requires viewing them through the lens of others, not just our own. For example, in APA’s conversations with business leaders about fostering psychologically healthy workplaces, we emphasize the positive effects not only on employee well-being, but on the company’s efficiency, productivity, and bottom line.

While some of these strategies may seem obvious, putting them into consistent practice is hard. We must remind ourselves that how we communicate is not only key to advancing APA’s strategic vision, but an effective way to elevate psychology and the impact we want to have.

Arthur C. Evans Jr., PhD , is the chief executive officer of APA. Follow him on Twitter: @ArthurCEvans.



How Humans Solve Complex Problems: The Case of the Knapsack Problem


Life presents us with problems of varying complexity. Yet, complexity is not accounted for in theories of human decision-making. Here we study instances of the knapsack problem, a discrete optimisation problem commonly encountered at all levels of cognition, from attention gating to intellectual discovery. Complexity of this problem is well understood from the perspective of a mechanical device like a computer. We show experimentally that human performance too decreased with complexity as defined in computer science. Defying traditional economic principles, participants spent effort way beyond the point where marginal gain was positive, and economic performance increased with instance difficulty. Human attempts at solving the instances exhibited commonalities with algorithms developed for computers, although biological resource constraints-limited working and episodic memories-had noticeable impact. Consistent with the very nature of the knapsack problem, only a minority of participants found the solution-often quickly-but the ones who did appeared not to realise. Substantial heterogeneity emerged, suggesting why prizes and patents, schemes that incentivise intellectual discovery but discourage information sharing, have been found to be less effective than mechanisms that reveal private information, such as markets.

PubMed Disclaimer

Figure 1. Overview of experimental paradigm.

( a ) Example instance of the 0–1 knapsack…

Figure 2. Variation in performance.

( a ) Success rates (proportion of successful attempts) for…

Figure 3. Economic Performance, Effort and Difficulty.

( a ) Scatter plot of economic performance…

Figure 4. Properties of search.

( a ) Scatter plot of mean success rates across…

Figure 5. Distribution of item ages.

Histogram of age of items (Supplementary Results 2.8). Age…



Modeling Human Problem-Solving Behavior in Complex Production Systems

  • Conference paper
  • First Online: 14 September 2023


  • Susanne Franke (ORCID: orcid.org/0000-0003-3144-6905)
  • Ralph Riedel (ORCID: orcid.org/0000-0002-3704-8230)

Part of the book series: IFIP Advances in Information and Communication Technology (IFIP AICT, volume 689)

Included in the following conference series:

  • IFIP International Conference on Advances in Production Management Systems


Smart Manufacturing aims to digitize and automate manufacturing and decision-making processes by implementing technological systems ranging from assistance systems to complete enterprise resource planning solutions. The next step, Industry 5.0, emphasizes the importance of including employees and their knowledge in decision-making and problem-solving processes. This paper contributes to increasing employees’ acceptance of and trust in technological systems and investigates correlations between the characteristics of the task to be solved, the environmental situation, the human, and the technological support system. The focus is on production planning, a complex task with many influencing factors that relies on human expert knowledge and can be supported by technological systems. A model is developed and validated using a case study. The proposed approach enables companies to maximize the usefulness and positive effects of technological systems while ensuring that employees participate voluntarily.



Author information

Authors and Affiliations

University of Applied Sciences Zwickau, Scheffelstraße 39, 08066 Zwickau, Germany

Susanne Franke & Ralph Riedel

Corresponding author: Susanne Franke

Editor information

Editors and Affiliations

Erlend Alfnes, Anita Romsdal & Jan Ola Strandhagen (Norwegian University of Science and Technology, Trondheim, Norway)

Gregor von Cieminski (ZF Friedrichshafen AG, Friedrichshafen, Germany)

David Romero (Tecnológico de Monterrey, Mexico City, Mexico)

Copyright information

© 2023 IFIP International Federation for Information Processing

About this paper

Cite this paper

Franke, S., Riedel, R. (2023). Modeling Human Problem-Solving Behavior in Complex Production Systems. In: Alfnes, E., Romsdal, A., Strandhagen, J.O., von Cieminski, G., Romero, D. (eds) Advances in Production Management Systems. Production Management Systems for Responsible Manufacturing, Service, and Logistics Futures. APMS 2023. IFIP Advances in Information and Communication Technology, vol 689. Springer, Cham. https://doi.org/10.1007/978-3-031-43662-8_43

DOI: https://doi.org/10.1007/978-3-031-43662-8_43

Published: 14 September 2023

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-43661-1

Online ISBN: 978-3-031-43662-8



Toward Solving Complex Human Problems: Techniques for Increasing Our Understanding of What Matters in Doing So (Complex and Enterprise Systems Engineering) 1st Edition

This book serves three basic purposes: (1) a tutorial-type reference for complex systems engineering (CSE) concepts and associated terminology, (2) a recommendation of a proposed methodology showing how the evolving practice of CSE can lead to a more unified theory, and (3) a complex systems (CSs) initiative for organizations to invest some of their resources toward helping to make the world a better place.

A wide variety of technical practitioners―e.g., developers of new or improved systems (particularly systems engineers), program and project managers, associated staff/workers, funders and overseers, government executives, military officers, systems acquisition personnel, contract specialists, owners of large and small businesses, professional society members, and CS researchers―may be interested in further exploring these topics.

Readers will learn more about CS characteristics and behaviors and CSE principles and will therefore be able to focus on techniques that will better serve them in their everyday work environments in dealing with complexity. The fundamental observation is that many systems inherently involve a deeper complexity because stakeholders are engaged in the enterprise. This means that such CSs are more difficult to invent, create, or improve upon because no one can be in total control since people cannot be completely controlled. Therefore, one needs to concentrate on trying to influence progress, then wait a suitable amount of time to see what happens, iterating as necessary. With just three chapters in this book, it seems to make sense to provide a tutorial introduction that readers can peruse only as necessary, considering their background and understanding, then a chapter laying out the suggested artifacts and methodology, followed by a chapter emphasizing worthwhile areas of application.

  • ISBN-10 0367638487
  • ISBN-13 978-0367638481
  • Edition 1st
  • Publisher CRC Press
  • Publication date December 18, 2020
  • Part of series Complex and Enterprise Systems Engineering
  • Language English
  • Dimensions 5.4 x 0.6 x 8.6 inches
  • Print length 140 pages

Editorial Reviews

About the Author

Brian E. White earned an MS and a PhD in computer science at the University of Wisconsin, and SM and SB degrees in electrical engineering at the Massachusetts Institute of Technology (MIT). He served in the U.S. Air Force and for 8 years was at the MIT Lincoln Laboratory. For five years Dr. White was a principal engineering manager at Signatron, Inc. In his 28 years at The MITRE Corporation, he held a variety of senior professional staff and project/resource management positions. He was Director of MITRE’s Systems Engineering Process Office from 2003 to 2009. Dr. White retired from MITRE in July 2010 and has since offered a consulting service, CAUSES (“Complexity Are Us” Systems Engineering Strategies). He has taught technical courses as an Adjunct Professor at several U.S. universities, and he is currently tutoring in basic mathematics, calculus, electrical engineering, and complex systems. He has edited and authored several books and book chapters, mostly in his Complex and Enterprise Systems Engineering Series with Taylor & Francis and CRC Press. He has presented a dozen tutorials in complex systems and published over a hundred conference papers and journal articles in complex systems, systems engineering, digital communications, etc., over his 55+-year career.





SciTechDaily


Better Keep the Instructions: People Aren’t That Good at Solving Complex Problems

By Champalimaud Centre for the Unknown June 29, 2022


A new study challenges prevalent theories about human capacity to solve complex problems and how certain mental disorders influence it.

How good are people at finding optimal solutions to complex problems? New research finds that people may not be as capable as generally assumed.

Who hasn’t felt the temptation to fling a lengthy manual into the trash bin, or just drive on instead of asking for directions? After all, following instructions is often tiresome, and we can just figure it out on our own… Or can we? A study published on May 19th, 2022, in the scientific journal Nature Human Behaviour challenges prevalent theories about our ability to tackle complex problems and how certain mental disorders affect it.

“Patients that suffer from Obsessive-Compulsive Disorder (OCD) are thought to have a problem with developing sophisticated problem-solving strategies,” said the study’s senior author, Albino Oliveira-Maia, head of the Neuropsychiatry Unit at the Champalimaud Foundation in Portugal. “However, our novel experimental approach provides strong evidence against this theory.”

Two Ways to Solve One Problem

Oliveira-Maia’s research team made this finding when investigating how healthy subjects and patients with OCD differ in the way they solve problems. “In general, people use a combination of two complementary strategies, known as the model-free and the model-based approaches,” Oliveira-Maia explained. “While healthy individuals use both strategies flexibly, patients with OCD tend to exercise the model-free approach.” 

The model-free strategy is relatively simple and it works well in stable environments. For example, imagine the following scenario: You have breakfast outside every morning on your way to work. There are two coffee shops on your route: “The Bean” and “Aroma.” Since you have to be at work early, with time, you learn that Aroma usually gets your breakfast staple — fresh croissants — delivered before the other shop does. So, following the model-free approach, you would typically go to Aroma first, and only when it doesn’t have croissants, you would head over to The Bean.

However, the model-free approach won’t work very well if the croissant supplier employed two delivery people who followed opposite routes. On weeks when the first delivery person is on duty, The Bean would get the croissants earlier. But if the second delivery person is working that week, then Aroma would receive them first.    

If you were able to discover the “model” — that the availability of croissants depends on which delivery person is working that week — you would save yourself unnecessary trips. So even if The Bean has had croissants bright and early for weeks, on the first Monday that it doesn’t, you would immediately know that this week Aroma is the safer choice.

“Even though the model-based strategy is more computationally heavy, especially while you are still working out what’s going on, it’s more effective for optimizing your actions in complex circumstances such as the one in this example,” said Oliveira-Maia.
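
As a rough illustration of the distinction (a toy sketch, not the task or the computational models used in the study), the following code contrasts a model-free agent that only caches a running value per coffee shop with a model-based agent that is given the correct structure of the courier schedule and merely tracks who is currently on duty. All names and numbers are invented for the example.

```python
# Toy contrast between the two strategies in the croissant scenario above.
SHOPS = ["Aroma", "The Bean"]

def croissants_available(shop, courier):
    # The first courier delivers early to The Bean, the second to Aroma,
    # mirroring the article's example.
    return (shop == "The Bean") if courier == "first" else (shop == "Aroma")

class ModelFreeAgent:
    """Caches a running value per shop and goes to the higher-valued one."""
    def __init__(self, learning_rate=0.2):
        self.value = {shop: 0.5 for shop in SHOPS}
        self.learning_rate = learning_rate

    def choose(self):
        return max(SHOPS, key=lambda shop: self.value[shop])

    def update(self, shop, reward):
        self.value[shop] += self.learning_rate * (reward - self.value[shop])

class ModelBasedAgent:
    """Knows the courier-schedule structure and only tracks who is on duty."""
    def __init__(self):
        self.believed_courier = "first"

    def choose(self):
        return "The Bean" if self.believed_courier == "first" else "Aroma"

    def update(self, shop, reward):
        # The shop itself is irrelevant here: a single croissant-less morning
        # is enough to infer that the couriers have swapped.
        if reward == 0:
            self.believed_courier = "second" if self.believed_courier == "first" else "first"

def run(agent, weeks=4, days_per_week=5):
    courier, hits = "first", 0
    for _ in range(weeks):
        for _ in range(days_per_week):
            shop = agent.choose()
            reward = int(croissants_available(shop, courier))
            agent.update(shop, reward)
            hits += reward
        courier = "second" if courier == "first" else "first"  # weekly swap
    return hits

print("model-free :", run(ModelFreeAgent()), "croissant mornings out of 20")
print("model-based:", run(ModelBasedAgent()), "croissant mornings out of 20")
```

In this deterministic toy run, the model-based agent recovers after a single croissant-less morning, whereas the model-free agent needs several bad mornings before its cached values catch up; that is the intuition behind calling the model-based strategy more effective in changing environments.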

Switching Up the Rules of the Game 

According to Oliveira-Maia, scientific studies that assess these strategies routinely apply a puzzle called the “Two-Step” task, which is similar to the second, more complex, scenario. 
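
For readers unfamiliar with the paradigm, a generic two-step environment can be sketched in a few lines; the transition and reward probabilities below are illustrative and are not the values used in the study.

```python
import random

def two_step_trial(first_choice, reward_probs, common_prob=0.7):
    """first_choice is 0 or 1. Each first-stage option leads to its 'common'
    second-stage state with probability common_prob, otherwise to the other
    state; reward_probs[state] is the chance of reward in that state."""
    common = random.random() < common_prob
    second_state = first_choice if common else 1 - first_choice
    reward = int(random.random() < reward_probs[second_state])
    return second_state, common, reward

random.seed(0)
for _ in range(5):
    choice = random.choice([0, 1])
    print(choice, two_step_trial(choice, reward_probs=[0.8, 0.2]))
```

A model-based solver exploits the transition structure, for example by giving a first-stage choice less credit when the outcome followed a rare transition, whereas a model-free solver simply tends to repeat whichever first-stage choice was recently rewarded.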

“These studies have shown that healthy subjects use a mixture of the simpler model-free strategy with the more complex model-based strategy when solving these types of tasks. In contrast, patients with OCD tend to stick with the less efficient strategy. The proposed reason is that patients with OCD are extremely habitual, and so they tend to repeat actions even if they don’t serve a useful purpose,” Oliveira-Maia explained.

Though this conclusion seems straightforward and consistent, there’s a catch. Since the tasks used in these studies are usually very complex, test subjects always receive a full explanation of the model before they begin. However, no one had ever rigorously tested the effect of these preemptive instructions — particularly their absence — on the subjects’ problem-solving strategy!

No Explanation — No Way

To find out how people would do with just free experimentation, Oliveira-Maia’s team partnered up with Thomas Akam, a neuroscientist currently at Oxford University who had recently developed a two-step task for… mice!

“Since you cannot verbally instruct mice, Thomas created a task that was simple enough so that the animals will be able to decipher the model through trial and error. In his research article, published in the journal  Neuron  a little over a year ago, Thomas showed that mice were indeed able to crack the puzzle. So we decided to adjust this task for humans and test whether subjects would naturally adopt a model-based strategy as is generally assumed,” recounted former doctoral student Pedro Castro-Rodrigues, the first co-author of the study.

The results of the experiment caught the researchers by surprise. “Even with extensive experience with the task, only a small minority of the 200-subjects group developed a model-based strategy. This is striking given the relative simplicity of the task and suggests that humans are surprisingly poor at learning causal models from experience alone,” Castro-Rodrigues remarked.

OCD Patients Match Up to Healthy Subjects

At the end of the third session, the researchers split subjects into two groups. One group received the full description of how the puzzle works, while the other did not. Then, the researchers ran a fourth and final session to test the effect of receiving instructions on the subjects’ problem-solving approach. 

The difference between the two groups was clear: almost all the subjects from the “explanation” group — both healthy volunteers and OCD patients — adopted a model-based strategy. On the other hand, most test subjects of the other group carried on with the model-free approach.          

“These results were fascinating,” said Ana Maia, a doctoral student who participated in the study. “Not only did they reveal that explanation plays more of a role than previously thought, but also that, given the right set of conditions, patients with OCD are in fact as capable of optimally solving a two-step task as healthy individuals.”

What is the reason for the discrepancy in results between this study and previous ones? According to the authors, there are several possible explanations. The first is that the task was relatively simple, and so were the instructions. “Since classic two-step tasks tend to be very intricate, the explanations are also very complex. So you can imagine that a person who is acutely ill and distressed will have a harder time processing this type of information,” explained Oliveira-Maia.

Another intriguing hypothesis is that beginning with free experimentation makes a difference. Is it possible that the three unguided sessions effectively prepared the patients for the explanation? 

“We didn’t directly test this question in this study, but there are some hints that it may have been the case. If future studies support this hypothesis, they might even lead to the development of novel psychotherapeutic and behavioral treatments for patients with OCD and perhaps other mental health disorders as well,” Castro-Rodrigues suggested. 

The team is continuing its exploration into this topic via several avenues. “In this project, we’ve also collected imaging data of subjects performing the task inside an MRI scanner. So our most immediate follow-up would be to search for the neural correlates associated with the transition from one strategy to the other after receiving an explanation,” said Castro-Rodrigues. 

“Pedro’s work is partly inscribed in a larger endeavor of the lab — the Neurocomp project,” added co-author Bernardo Barahona-Corrêa, a psychiatrist at the Champalimaud Foundation. “This project, which I am leading with Albino, will investigate many aspects of OCD, focussing particularly on a brain region called the orbitofrontal cortex. We believe this region is critical for both the core manifestations of this disorder and for the acquisition of model-based action-control in tasks such as the one we used in this experiment.”

“Ultimately, these results highlight the importance of explicit explanations in learning,” Oliveira-Maia pointed out. “It seems that pure free exploration may not be the most efficient route to gaining new knowledge. In fact, I’ve started talking with my kids about this,” he added playfully, “telling them to be sure to pay attention to their teachers.”

Reference: “Explicit knowledge of task structure is a primary determinant of human model-based action” by Pedro Castro-Rodrigues, Thomas Akam, Ivar Snorasson, Marta Camacho, Vitor Paixão, Ana Maia, J. Bernardo Barahona-Corrêa, Peter Dayan, H. Blair Simpson, Rui M. Costa and Albino J. Oliveira-Maia, 19 May 2022, Nature Human Behaviour . DOI: 10.1038/s41562-022-01346-2




Stumped? Five Ways to Hone Your Problem-Solving Skills


Respect the worth of other people's insights

Problems continuously arise in organizational life, making problem-solving an essential skill for leaders. Leaders who are good at tackling conundrums are likely to be more effective at overcoming obstacles and guiding their teams to achieve their goals. So, what’s the secret to better problem-solving skills?

1. Understand the root cause of the problem

“Too often, people fail because they haven’t correctly defined what the problem is,” says David Ross, an international strategist, founder of consultancy Phoenix Strategic Management and author of Confronting the Storm: Regenerating Leadership and Hope in the Age of Uncertainty .

Ross explains that as teams grapple with “wicked” problems – those where there can be several root causes for why a problem exists – there can often be disagreement on the initial assumptions made. As a result, their chances of successfully solving the problem are low.

“Before commencing the process of solving the problem, it is worthwhile identifying who your key stakeholders are and talking to them about the issue,” Ross recommends. “Who could be affected by the issue? What is the problem – and why? How are people affected?”

He argues that if leaders treat people with dignity, respecting the worth of their insights, they are more likely to successfully solve problems.

2. Unfocus the mind

“To solve problems, we need to commit to making time to face a problem in its full complexity, which also requires that we take back control of our thinking,” says Chris Griffiths, an expert on creativity and innovative thinking skills, founder and CEO of software provider OpenGenius, and co-author of The Focus Fix: Finding Clarity, Creativity and Resilience in an Overwhelming World .

To do this, it’s necessary to harness the power of the unfocused mind, according to Griffiths. “It might sound oxymoronic, but just like our devices, our brain needs time to recharge,” he says. “ A plethora of research has shown that daydreaming allows us to make creative connections and see abstract solutions that are not obvious when we’re engaged in direct work.”

To make use of the unfocused mind in problem solving, you must begin by getting to know the problem from all angles. “At this stage, don’t worry about actually solving the problem,” says Griffiths. “You’re simply giving your subconscious mind the information it needs to get creative with when you zone out. From here, pick a monotonous or rhythmic activity that will help you to activate the daydreaming state – that might be a walk, some doodling, or even some chores.”

Do this regularly, argues Griffiths, and you’ll soon find that flashes of inspiration and novel solutions naturally present themselves while you’re ostensibly thinking of other things. He says: “By allowing you to access the fullest creative potential of your own brain, daydreaming acts as a skeleton key for a wide range of problems.”

3. Be comfortable making judgment calls

“Admitting to not knowing the future takes courage,” says Professor Stephen Wyatt, founder and lead consultant at consultancy Corporate Rebirth and author of Antidote to the Crisis of Leadership: Opportunity in Complexity . “Leaders are worried our teams won’t respect us and our boards will lose faith in us, but what doesn’t work is drawing up plans and forecasts and holding yourself or others rigidly to them.”

Wyatt advises leaders to heighten their situational awareness – to look broadly, integrate more perspectives and be able to connect the dots. “We need to be comfortable in making judgment calls as the future is unknown,” he says. “There is no data on it. But equally, very few initiatives cannot be adjusted, refined or reviewed while in motion.”

Leaders need to stay vigilant, according to Wyatt, create the capacity of the enterprise to adapt and maintain the support of stakeholders. “The concept of the infallible leader needs to be updated,” he concludes.

4. Be prepared to fail and learn

“Organisations, and arguably society more widely, are obsessed with problems and the notion of problems,” says Steve Hearsum, founder of organizational change consultancy Edge + Stretch and author of No Silver Bullet: Bursting the Bubble of the Organisational Quick Fix .

Hearsum argues that this tendency is complicated by the myth of fixability, namely the idea that all problems, however complex, have a solution. “Our need for certainty, to minimize and dampen the anxiety of ‘not knowing,’ leads us to oversimplify and ignore or filter out anything that challenges the idea that there is a solution,” he says.

Leaders need to shift their mindset to cultivate their comfort with not knowing and couple that with being OK with being wrong, sometimes, notes Hearsum. He adds: “That means developing reflexivity to understand your own beliefs and judgments, and what influences these, asking questions and experimenting.”

5. Unleash the power of empathy

Leaders must be able to communicate problems in order to find solutions to them. But they should avoid bombarding their teams with complex, technical details since these can overwhelm their people’s cognitive load, says Dr Jessica Barker MBE, author of Hacked: The Secrets Behind Cyber Attacks.

Instead, she recommends that leaders frame their messages in ways that cut through jargon and ensure that their advice is relevant, accessible and actionable. “An essential leadership skill for this is empathy,” Barker explains. “When you’re trying to build a positive culture, it is crucial to understand why people are not practicing the behaviors you want rather than trying to force that behavioral change with fear, uncertainty and doubt.”

Sally Percy




Complex Problem Solving: What It Is and What It Is Not

Dietrich Dörner

1 Department of Psychology, University of Bamberg, Bamberg, Germany

Joachim Funke

2 Department of Psychology, Heidelberg University, Heidelberg, Germany


Well-defined problems have a clear set of means for reaching a precisely described goal state. For example, in a matchstick arithmetic problem, a person receives a false arithmetic expression constructed out of matchsticks (e.g., IV = III + III). According to the instructions, moving one of the matchsticks will make the equation true. Here, both the problem (find the appropriate stick to move) and the goal state (true arithmetic expression; solution is: VI = III + III) are defined clearly.
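
Because the matchstick example is so constrained, even a brute-force program can enumerate the legal moves. The sketch below is our illustration rather than material from the article; it is deliberately restricted to moving a single “I” stick and to small Roman numerals (up to XXXIX), which is enough to recover the stated solution.

```python
import re

ROMAN_RE = re.compile(r"^X{0,3}(IX|IV|V?I{0,3})$")  # valid numerals up to 39

def roman_to_int(numeral):
    values = {"I": 1, "V": 5, "X": 10}
    total = 0
    for current, nxt in zip(numeral, numeral[1:] + " "):
        v = values[current]
        total += -v if nxt in values and values[nxt] > v else v
    return total

def is_valid_roman(numeral):
    return bool(numeral) and bool(ROMAN_RE.match(numeral))

def one_stick_moves(terms):
    """Yield term lists reachable by taking one 'I' stick out of one numeral
    and re-inserting it somewhere (possibly in another numeral)."""
    for i, source in enumerate(terms):
        for k, ch in enumerate(source):
            if ch != "I":          # this sketch only moves single 'I' sticks
                continue
            removed = source[:k] + source[k + 1:]
            for j in range(len(terms)):
                target = removed if i == j else terms[j]
                for pos in range(len(target) + 1):
                    new_terms = list(terms)
                    new_terms[i] = removed
                    new_terms[j] = target[:pos] + "I" + target[pos:]
                    yield new_terms

def solve(terms):
    """terms[0] is the left-hand side; the remaining terms are summed."""
    seen = set()
    for candidate in one_stick_moves(terms):
        key = tuple(candidate)
        if key in seen or candidate == terms:
            continue
        seen.add(key)
        if all(is_valid_roman(t) for t in candidate) and \
           roman_to_int(candidate[0]) == sum(roman_to_int(t) for t in candidate[1:]):
            print(f"{candidate[0]} = {' + '.join(candidate[1:])}")

solve(["IV", "III", "III"])   # prints: VI = III + III
```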

Ill-defined problems have no clear problem definition, their goal state is not defined clearly, and the means of moving towards the (diffusely described) goal state are not clear. For example, the goal state for solving the political conflict in the Near East between Israel and Palestine is not clearly defined (living in peaceful harmony with each other?), and even if the conflicting parties agreed on a two-state solution, this goal would still leave many issues unresolved. This type of problem is called a “complex problem” and is of central importance to this paper. All psychological processes that occur within individual persons and deal with the handling of such ill-defined complex problems will be subsumed under the umbrella term “complex problem solving” (CPS).

Systematic research on CPS started in the 1970s with observations of the behavior of participants who were confronted with computer simulated microworlds. For example, in one of those microworlds participants assumed the role of executives who were tasked to manage a company over a certain period of time (see Brehmer and Dörner, 1993 , for a discussion of this methodology). Today, CPS is an established concept and has even influenced large-scale assessments such as PISA (“Programme for International Student Assessment”), organized by the Organization for Economic Cooperation and Development ( OECD, 2014 ). According to the World Economic Forum, CPS is one of the most important competencies required in the future ( World Economic Forum, 2015 ). Numerous articles on the subject have been published in recent years, documenting the increasing research activity relating to this field. In the following collection of papers we list only those published in 2010 and later: theoretical papers ( Blech and Funke, 2010 ; Funke, 2010 ; Knauff and Wolf, 2010 ; Leutner et al., 2012 ; Selten et al., 2012 ; Wüstenberg et al., 2012 ; Greiff et al., 2013b ; Fischer and Neubert, 2015 ; Schoppek and Fischer, 2015 ), papers about measurement issues ( Danner et al., 2011a ; Greiff et al., 2012 , 2015a ; Alison et al., 2013 ; Gobert et al., 2015 ; Greiff and Fischer, 2013 ; Herde et al., 2016 ; Stadler et al., 2016 ), papers about applications ( Fischer and Neubert, 2015 ; Ederer et al., 2016 ; Tremblay et al., 2017 ), papers about differential effects ( Barth and Funke, 2010 ; Danner et al., 2011b ; Beckmann and Goode, 2014 ; Greiff and Neubert, 2014 ; Scherer et al., 2015 ; Meißner et al., 2016 ; Wüstenberg et al., 2016 ), one paper about developmental effects ( Frischkorn et al., 2014 ), one paper with a neuroscience background ( Osman, 2012 ) 1 , papers about cultural differences ( Güss and Dörner, 2011 ; Sonnleitner et al., 2014 ; Güss et al., 2015 ), papers about validity issues ( Goode and Beckmann, 2010 ; Greiff et al., 2013c ; Schweizer et al., 2013 ; Mainert et al., 2015 ; Funke et al., 2017 ; Greiff et al., 2017 , 2015b ; Kretzschmar et al., 2016 ; Kretzschmar, 2017 ), review papers and meta-analyses ( Osman, 2010 ; Stadler et al., 2015 ), and finally books ( Qudrat-Ullah, 2015 ; Csapó and Funke, 2017b ) and book chapters ( Funke, 2012 ; Hotaling et al., 2015 ; Funke and Greiff, 2017 ; Greiff and Funke, 2017 ; Csapó and Funke, 2017a ; Fischer et al., 2017 ; Molnàr et al., 2017 ; Tobinski and Fritz, 2017 ; Viehrig et al., 2017 ). In addition, a new “Journal of Dynamic Decision Making” (JDDM) has been launched ( Fischer et al., 2015 , 2016 ) to give the field an open-access outlet for research and discussion.

This paper aims to clarify aspects of validity: what should be meant by the term CPS and what not? This clarification seems necessary because misunderstandings in recent publications provide – from our point of view – a potentially misleading picture of the construct. We start this article with a historical review before attempting to systematize different positions. We conclude with a working definition.

Historical Review

The concept behind CPS goes back to the German phrase “komplexes Problemlösen” (CPS; the term “komplexes Problemlösen” was used as a book title by Funke, 1986 ). The concept was introduced in Germany by Dörner and colleagues in the mid-1970s (see Dörner et al., 1975 ; Dörner, 1975 ) for the first time. The German phrase was later translated to CPS in the titles of two edited volumes by Sternberg and Frensch (1991) and Frensch and Funke (1995a) that collected papers from different research traditions. Even though it looks as though the term was coined in the 1970s, Edwards (1962) used the term “dynamic decision making” to describe decisions that come in a sequence. He compared static with dynamic decision making, writing:

In dynamic situations, a new complication not found in the static situations arises. The environment in which the decision is set may be changing, either as a function of the sequence of decisions, or independently of them, or both. It is this possibility of an environment which changes while you collect information about it which makes the task of dynamic decision theory so difficult and so much fun. (p. 60)

The ability to solve complex problems is typically measured via dynamic systems that contain several interrelated variables that participants need to alter. Early work (see, e.g., Dörner, 1980 ) used a simulation scenario called “Lohhausen” that contained more than 2000 variables that represented the activities of a small town: Participants had to take over the role of a mayor for a simulated period of 10 years. The simulation condensed these ten years to ten hours in real time. Later, researchers used smaller dynamic systems as scenarios either based on linear equations (see, e.g., Funke, 1993 ) or on finite state automata (see, e.g., Buchner and Funke, 1993 ). In these contexts, CPS consisted of the identification and control of dynamic task environments that were previously unknown to the participants. Different task environments came along with different degrees of fidelity ( Gray, 2002 ).
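
To show what such a linear-equation scenario looks like in practice, here is a minimal sketch; the coefficients are invented for illustration and do not reproduce any published microworld.

```python
import numpy as np

# Two inputs (x1, x2) that the problem solver can set influence two
# endogenous variables (y1, y2); y2 additionally has an "eigendynamic",
# i.e., it changes on its own even when all inputs are zero.
A = np.array([[1.0, 0.0],    # y1 keeps its previous value
              [0.0, 1.1]])   # y2 grows by 10% per step on its own
B = np.array([[2.0, 0.0],    # x1 pushes y1
              [0.5, -1.0]])  # x1 and x2 both affect y2

def step(y, x):
    """One time step of the system: y_next = A @ y + B @ x."""
    return A @ y + B @ x

y = np.array([10.0, 10.0])
for t, x in enumerate([(1, 0), (0, 2), (0, 0)], start=1):
    y = step(y, np.array(x, dtype=float))
    print(f"t={t}, inputs={x}, state={np.round(y, 2)}")
```

Systems of this kind, with a handful of inputs and outputs explored over a few time steps, underlie the MicroDYN-style scenarios discussed later in the text.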

According to Funke (2012) , the typical attributes of complex systems are (a) complexity of the problem situation which is usually represented by the sheer number of involved variables; (b) connectivity and mutual dependencies between involved variables; (c) dynamics of the situation, which reflects the role of time and developments within a system; (d) intransparency (in part or full) about the involved variables and their current values; and (e) polytely (greek term for “many goals”), representing goal conflicts on different levels of analysis. This mixture of features is similar to what is called VUCA (volatility, uncertainty, complexity, ambiguity) in modern approaches to management (e.g., Mack et al., 2016 ).

In his evaluation of the CPS movement, Sternberg (1995) compared (young) European approaches to CPS with (older) American research on expertise. His analysis of the differences between the European and American traditions shows advantages but also potential drawbacks for each side. He states (p. 301): “I believe that although there are problems with the European approach, it deals with some fundamental questions that American research scarcely addresses.” So, even though the echo of the European approach did not enjoy strong resonance in the US at that time, it was valued by scholars like Sternberg and others. Before attending to validity issues, we will first present a short review of different streams.

Different Approaches to CPS

In the short history of CPS research, different approaches can be identified ( Buchner, 1995 ; Fischer et al., 2017 ). To systematize, we differentiate between the following five lines of research:

  • (a) The search for individual differences comprises studies identifying interindividual differences that affect the ability to solve complex problems. This line of research is reflected, for example, in the early work by Dörner et al. (1983) and their “Lohhausen” study. Here, naïve student participants took over the role of the mayor of a small simulated town named Lohhausen for a simulation period of ten years. According to the results of the authors, it is not intelligence (as measured by conventional IQ tests) that predicts performance, but it is the ability to stay calm in the face of a challenging situation and the ability to switch easily between an analytic mode of processing and a more holistic one.
  • (b) The search for cognitive processes deals with the processes behind understanding complex dynamic systems. Representative of this line of research is, for example, Berry and Broadbent’s (1984) work on implicit and explicit learning processes when people interact with a dynamic system called “Sugar Production”. They found that those who perform best in controlling a dynamic system can do so implicitly, without explicit knowledge of details regarding the systems’ relations.
  • (c) The search for system factors seeks to identify the aspects of dynamic systems that determine the difficulty of complex problems and make some problems harder than others. Representative of this line of research is, for example, work by Funke (1985), who systematically varied the number of causal effects within a dynamic system or the presence/absence of eigendynamics. He found, for example, that solution quality decreases as the number of systems relations increases.
  • (d) The psychometric approach develops measurement instruments that can be used as an alternative to classical IQ tests, as something that goes “beyond IQ”. The MicroDYN approach ( Wüstenberg et al., 2012 ) is representative for this line of research that presents an alternative to reasoning tests (like Raven matrices). These authors demonstrated that a small improvement in predicting school grade point average beyond reasoning is possible with MicroDYN tests.
  • (e) The experimental approach explores CPS under different experimental conditions. This approach uses CPS assessment instruments to test hypotheses derived from psychological theories and is sometimes used in research about cognitive processes (see above). Exemplary for this line of research is the work by Rohe et al. (2016), who test the usefulness of “motto goals” in the context of complex problems compared to more traditional learning and performance goals. Motto goals differ from pure performance goals by activating positive affect and should lead to better goal attainment especially in complex situations (the mentioned study found no effect).

To be clear: these five approaches are not mutually exclusive and do overlap. But the differentiation helps to identify different research communities and different traditions. These communities had different opinions about scaling complexity.

The Race for Complexity: Use of More and More Complex Systems

In the early years of CPS research, microworlds started with systems containing about 20 variables (“Tailorshop”), soon reached 60 variables (“Moro”), and culminated in systems with about 2000 variables (“Lohhausen”). This race for complexity ended with the introduction of the concept of “minimal complex systems” (MCS; Greiff and Funke, 2009 ; Funke and Greiff, 2017 ), which ushered in a search for the lower bound of complexity instead of the higher bound, which could not be defined as easily. The idea behind this concept was that whereas the upper limits of complexity are unbound, the lower limits might be identifiable. Imagine starting with a simple system containing two variables with a simple linear connection between them; then, step by step, increase the number of variables and/or the type of connections. One soon reaches a point where the system can no longer be considered simple and has become a “complex system”. This point represents a minimal complex system. Despite some research having been conducted in this direction, the point of transition from simple to complex has not been identified clearly as of yet.

Some years later, the original “minimal complex systems” approach ( Greiff and Funke, 2009 ) shifted to the “multiple complex systems” approach ( Greiff et al., 2013a ). This shift is more than a slight change in wording: it is important because it taps into the issue of validity directly. Minimal complex systems have been introduced in the context of challenges from large-scale assessments like PISA 2012 that measure new aspects of problem solving, namely interactive problems besides static problem solving ( Greiff and Funke, 2017 ). PISA 2012 required test developers to remain within testing time constraints (given by the school class schedule). Also, test developers needed a large item pool for the construction of a broad class of problem solving items. It was clear from the beginning that MCS deal with simple dynamic situations that require controlled interaction: the exploration and control of simple ticket machines, simple mobile phones, or simple MP3 players (all of these example domains were developed within PISA 2012) – rather than really complex situations like managerial or political decision making.

As a consequence of this subtle but important shift in interpreting the letters MCS, the definition of CPS became a subject of debate recently ( Funke, 2014a ; Greiff and Martin, 2014 ; Funke et al., 2017 ). In the words of Funke (2014b , p. 495):

It is funny that problems that nowadays come under the term ‘CPS’, are less complex (in terms of the previously described attributes of complex situations) than at the beginning of this new research tradition. The emphasis on psychometric qualities has led to a loss of variety. Systems thinking requires more than analyzing models with two or three linear equations – nonlinearity, cyclicity, rebound effects, etc. are inherent features of complex problems and should show up at least in some of the problems used for research and assessment purposes. Minimal complex systems run the danger of becoming minimal valid systems.

Searching for minimal complex systems is not the same as gaining insight into the way how humans deal with complexity and uncertainty. For psychometric purposes, it is appropriate to reduce complexity to a minimum; for understanding problem solving under conditions of overload, intransparency, and dynamics, it is necessary to realize those attributes with reasonable strength. This aspect is illustrated in the next section.

Importance of the Validity Issue

The most important reason for discussing the question of what complex problem solving is and what it is not stems from its phenomenology: if we lose sight of our phenomena, we are no longer doing good psychology. The relevant phenomena in the context of complex problems encompass many important aspects. In this section, we discuss four phenomena that are specific to complex problems. We consider these phenomena as critical for theory development and for the construction of assessment instruments (i.e., microworlds). These phenomena require theories for explaining them and they require assessment instruments eliciting them in a reliable way.

The first phenomenon is the emergency reaction of the intellectual system ( Dörner, 1980 ): When dealing with complex systems, actors tend to (a) reduce their intellectual level by decreasing self-reflections, by decreasing their intentions, by stereotyping, and by reducing their realization of intentions, (b) they show a tendency for fast action with increased readiness for risk, with increased violations of rules, and with increased tendency to escape the situation, and (c) they degenerate their hypotheses formation by construction of more global hypotheses and reduced tests of hypotheses, by increasing entrenchment, and by decontextualizing their goals. This phenomenon illustrates the strong connection between cognition, emotion, and motivation that has been emphasized by Dörner (see, e.g., Dörner and Güss, 2013 ) from the beginning of his research tradition; the emergency reaction reveals a shift in the mode of information processing under the pressure of complexity.

The second phenomenon comprises cross-cultural differences with respect to strategy use ( Strohschneider and Güss, 1999 ; Güss and Wiley, 2007 ; Güss et al., 2015 ). Results from complex task environments illustrate the strong influence of context and background knowledge to an extent that cannot be found for knowledge-poor problems. For example, in a comparison between Brazilian and German participants, it turned out that Brazilians accept the given problem descriptions and are more optimistic about the results of their efforts, whereas Germans tend to inquire more about the background of the problems and take a more active approach but are less optimistic (according to Strohschneider and Güss, 1998 , p. 695).

The third phenomenon relates to failures that occur during the planning and acting stages ( Jansson, 1994 ; Ramnarayan et al., 1997 ), illustrating that rational procedures seem to be unlikely to be used in complex situations. The potential for failures ( Dörner, 1996 ) rises with the complexity of the problem. Jansson (1994) presents seven major areas for failures with complex situations: acting directly on current feedback; insufficient systematization; insufficient control of hypotheses and strategies; lack of self-reflection; selective information gathering; selective decision making; and thematic vagabonding.

The fourth phenomenon describes (a lack of) training and transfer effects ( Kretzschmar and Süß, 2015 ), which again illustrates the context dependency of strategies and knowledge (i.e., there is no strategy that is so universal that it can be used in many different problem situations). In their own experiment, the authors could show training effects only for knowledge acquisition, not for knowledge application. Only with specific feedback, performance in complex environments can be increased ( Engelhart et al., 2017 ).

These four phenomena illustrate why the type of complexity (or degree of simplicity) used in research really matters. Furthermore, they demonstrate effects that are specific for complex problems, but not for toy problems. These phenomena direct the attention to the important question: does the stimulus material used (i.e., the computer-simulated microworld) tap and elicit the manifold of phenomena described above?

Dealing with partly unknown complex systems requires courage, wisdom, knowledge, grit, and creativity. In creativity research, “little c” and “BIG C” are used to differentiate between everyday creativity and eminent creativity ( Beghetto and Kaufman, 2007 ; Kaufman and Beghetto, 2009 ). Everyday creativity is important for solving everyday problems (e.g., finding a clever fix for a broken spoke on my bicycle), eminent creativity changes the world (e.g., inventing solar cells for energy production). Maybe problem solving research should use a similar differentiation between “little p” and “BIG P” to mark toy problems on the one side and big societal challenges on the other. The question then remains: what can we learn about BIG P by studying little p? What phenomena are present in both types, and what phenomena are unique to each of the two extremes?

Discussing research on CPS requires reflecting on the field’s research methods. Even if the experimental approach has been successful for testing hypotheses (for an overview of older work, see Funke, 1995 ), other methods might provide additional and novel insights. Complex phenomena require complex approaches to understand them. The complex nature of complex systems imposes limitations on psychological experiments: the more complex the environments, the more difficult it is to keep conditions under experimental control. And if experiments have to be run in the lab, enough complexity should be brought into the lab to establish, at least in part, the phenomena mentioned above.

There are interesting options to be explored (again): think-aloud protocols , which have been discredited for many years ( Nisbett and Wilson, 1977 ) and yet are a valuable source for theory testing ( Ericsson and Simon, 1983 ); introspection ( Jäkel and Schreiber, 2013 ), which seems to be banned from psychological methods but nevertheless offers insights into thought processes; the use of live-streaming ( Wendt, 2017 ), a medium in which streamers generate a video stream of think-aloud data while computer-gaming; political decision-making ( Dhami et al., 2015 ) that demonstrates error-proneness in groups; historical case studies ( Dörner and Güss, 2011 ) that give insights into the thinking styles of political leaders; the use of the critical incident technique ( Reuschenbach, 2008 ) to construct complex scenarios; and simulations with different degrees of fidelity ( Gray, 2002 ).

The methods toolbox is full of instruments that have to be explored more carefully before any individual instrument receives a ban or research narrows its focus to only one paradigm for data collection. Brehmer and Dörner (1993) discussed the tensions between “research in the laboratory and research in the field”, optimistically concluding “that the new methodology of computer-simulated microworlds will provide us with the means to bridge the gap between the laboratory and the field” (p. 183). The idea behind this optimism was that computer-simulated scenarios would bring more complexity from the outside world into the controlled lab environment. But this is not true for all simulated scenarios. In his paper on simulated environments, Gray (2002) differentiated computer-simulated environments with respect to three dimensions: (1) tractability (“the more training subjects require before they can use a simulated task environment, the less tractable it is”, p. 211), (2) correspondence (“High correspondence simulated task environments simulate many aspects of one task environment. Low correspondence simulated task environments simulate one aspect of many task environments”, p. 214), and (3) engagement (“A simulated task environment is engaging to the degree to which it involves and occupies the participants; that is, the degree to which they agree to take it seriously”, p. 217). But the mere fact that a task is called a “computer-simulated task environment” does not mean anything specific in terms of these three dimensions. This is one of several reasons why we should differentiate between those studies that do not address the core features of CPS and those that do.

What is not CPS?

Even though a growing number of publications claim to deal with complex problems (e.g., Greiff and Wüstenberg, 2015 ; Greiff et al., 2016 ), it would be better to label the requirements within these tasks “dynamic problem solving,” as was adequately done in earlier work ( Greiff et al., 2012 ). The dynamics behind on-off switches ( Thimbleby, 2007 ) are remarkable but not really complex. Small nonlinear systems that exhibit stunningly complex and unstable behavior do exist – but they are not used in psychometric assessments of so-called CPS. There are other small systems (like the MicroDYN scenarios: Greiff and Wüstenberg, 2014 ) that exhibit simple forms of system behavior that are completely predictable and stable. This type of simple system is used frequently. It is even offered commercially as a complex problem-solving test called COMPRO ( Greiff and Wüstenberg, 2015 ) for business applications. But a closer look reveals that the label is not used correctly: within COMPRO, the linear equations used are far from complex, and the system can be handled properly by using only one strategy (for more details, see Funke et al., 2017 ).

Why do simple linear systems not fall within CPS? On the surface, nonlinear and linear systems might appear similar because both contain only 3–5 variables. But the difference lies in system behavior as well as in the strategies and learning involved. If the behavior is simple (as in linear systems where more input is related to more output and vice versa), the system can be understood easily (in the MicroDYN world, participants have only 3 minutes to explore a system). If the behavior is complex (as in systems that contain strange attractors or negative feedback loops), things become more complicated, and much more observation is needed to identify the hidden structure of the unknown system ( Berry and Broadbent, 1984 ; Hundertmark et al., 2015 ).
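To make this contrast concrete, here is a minimal sketch in Python. It is our own illustration with made-up coefficients, not the equations of any published scenario: a two-variable linear difference system in the spirit of minimally complex systems is fully predictable once its coefficients are known, whereas even a one-variable nonlinear system such as the logistic map shows sensitive dependence on initial conditions.

```python
import numpy as np

# Sketch only: a small linear difference system, x[t+1] = A @ x[t] + B @ u[t],
# in the spirit of minimally complex systems (not the actual MicroDYN equations).
A = np.array([[0.9, 0.0],
              [0.1, 1.0]])
B = np.array([[1.0],
              [0.5]])

def step_linear(x, u):
    return A @ x + B @ u

x = np.zeros(2)
for _ in range(5):
    x = step_linear(x, np.array([1.0]))     # apply the same input every round
print("linear system after 5 rounds:", x)   # smooth, fully predictable trajectory

# A single-variable nonlinear system, the logistic map, behaves very differently:
# for r = 3.9 two nearby starting values drift apart quickly, so even perfect
# knowledge of the rule does not allow precise long-range point predictions.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

a, b = 0.200, 0.201
for _ in range(25):
    a, b = logistic(a), logistic(b)
print("logistic map after 25 steps:", round(a, 3), round(b, 3))
```

The point is not that chaotic maps are used in CPS research, but that linearity, rather than the sheer number of variables, is what makes such small systems easy to predict and control.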

Another issue is learning. If tasks can be solved using a single (and not so complicated) strategy, steep learning curves are to be expected. The shift from problem solving to learned routine behavior occurs rapidly, as was demonstrated by Luchins (1942) . In his water jar experiments, participants quickly acquired a specific strategy (a mental set) for solving certain measurement problems that they later continued applying to problems that would have allowed for easier approaches. In the case of complex systems, learning can occur only on very general, abstract levels because it is difficult for human observers to make specific predictions. Routines dealing with complex systems are quite different from routines relating to linear systems.

What should not be studied under the label of CPS are pure learning effects, multiple-cue probability learning, or tasks that can be solved using a single strategy. This last issue is a problem for MicroDYN tasks that rely strongly on the VOTAT strategy (“vary one thing at a time”; see Tschirgi, 1980 ). In real life, it is hard to imagine a business manager trying to solve her or his problems by means of VOTAT.
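A minimal sketch may illustrate why VOTAT suffices for additive linear tasks but not for richer ones. The two toy tasks below are hypothetical and invented for this illustration; they are not the COMPRO or MicroDYN implementations.

```python
import numpy as np

def additive_task(u):
    # purely additive effects: VOTAT recovers each input's weight directly
    return 2.0 * u[0] - 1.0 * u[1] + 0.5 * u[2]

def interactive_task(u):
    # hypothetical task with an interaction term: probes that hold all other
    # inputs at zero never expose the joint effect of u[0] and u[1]
    return 2.0 * u[0] - 1.0 * u[1] + 3.0 * u[0] * u[1]

def votat_estimates(task, n_inputs=3):
    baseline = task(np.zeros(n_inputs))
    effects = []
    for i in range(n_inputs):
        probe = np.zeros(n_inputs)
        probe[i] = 1.0                      # vary one thing at a time
        effects.append(task(probe) - baseline)
    return effects

print("VOTAT on the additive task:   ", votat_estimates(additive_task))     # [2.0, -1.0, 0.5]
print("VOTAT on the interactive task:", votat_estimates(interactive_task))  # interaction stays invisible
```

With purely additive effects, three single-variable probes recover the weights exactly; as soon as inputs interact (or effects unfold with delays, side effects, and eigendynamics), single-variable probes no longer tell the whole story.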

What is CPS?

In the early days of CPS research, planet Earth’s dynamics and complexities gained attention through books such as “The Limits to Growth” ( Meadows et al., 1972 ) and “Beyond the Limits” ( Meadows et al., 1992 ). In the current decade, the World Economic Forum (2016) , for example, attempts to identify the complexities and risks of our modern world. In order to understand the meaning of complexity and uncertainty, it helps to look at the world’s most pressing issues. Searching for strategies to cope with these problems is a difficult task: surely there is no place for the simple principle of “vary one thing at a time” (VOTAT) when it comes to global problems. The VOTAT strategy is helpful in the context of simple problems ( Wüstenberg et al., 2014 ); therefore, whether or not VOTAT is helpful in a given problem situation helps us distinguish simple from complex problems.

Because there exist no clear-cut strategies for complex problems, typical failures occur when dealing with uncertainty ( Dörner, 1996 ; Güss et al., 2015 ). Ramnarayan et al. (1997) put together a list of generic errors (e.g., not developing adequate action plans; lack of background control; learning from experience blocked by stereotype knowledge; reactive instead of proactive action) that are typical of knowledge-rich complex systems but cannot be found in simple problems.

Complex problem solving is not a one-dimensional, low-level construct. On the contrary, CPS is a multi-dimensional bundle of competencies existing at a high level of abstraction, similar to intelligence (but going beyond IQ). As Funke et al. (2018) state: “Assessment of transversal (in educational contexts: cross-curricular) competencies cannot be done with one or two types of assessment. The plurality of skills and competencies requires a plurality of assessment instruments.”

At least three different aspects of complex systems are central to our understanding of them: (1) a complex system can be described at different levels of abstraction; (2) a complex system develops over time, has a history, a current state, and a (potentially unpredictable) future; (3) a complex system is knowledge-rich and activates a large semantic network, together with a broad list of potential strategies (domain-specific as well as domain-general).

Complex problem solving is not only a cognitive process but is also an emotional one ( Spering et al., 2005 ; Barth and Funke, 2010 ) and strongly dependent on motivation (low-stakes versus high-stakes testing; see Hermes and Stelling, 2016 ).

Furthermore, CPS is a dynamic process unfolding over time, with different phases and with more differentiation than simply knowledge acquisition and knowledge application. Ideally, the process should entail identifying problems (see Dillon, 1982 ; Lee and Cho, 2007 ), even if in experimental settings, problems are provided to participants a priori . The more complex and open a given situation, the more options can be generated (T. S. Schweizer et al., 2016 ). In closed problems, these processes do not occur in the same way.

In analogy to the difference between formative (process-oriented) and summative (result-oriented) assessment ( Wiliam and Black, 1996 ; Bennett, 2011 ), CPS should not be reduced to the mere outcome of a solution process. The process leading up to the solution, including detours and errors made along the way, might provide a more differentiated impression of a person’s problem-solving abilities and competencies than the final result of such a process. This is one of the reasons why CPS environments are not, in fact, complex intelligence tests: research on CPS is not only about the outcome of the decision process, but it is also about the problem-solving process itself.
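To make the contrast concrete, here is a small sketch of how the same exploration data could be scored summatively or formatively. The log format and the process indicators are hypothetical and chosen for illustration only; they are not taken from any existing assessment system.

```python
# Hypothetical log of one participant's exploration rounds: the inputs tried
# in each round and the resulting distance of the system state from the goal.
log = [
    {"inputs": {"a": 1, "b": 0}, "distance_to_goal": 8.0},
    {"inputs": {"a": 0, "b": 1}, "distance_to_goal": 6.5},
    {"inputs": {"a": 1, "b": 1}, "distance_to_goal": 2.0},
]

# Summative (result-oriented) view: only the final state counts.
summative_score = -log[-1]["distance_to_goal"]

# Formative (process-oriented) view: how the solution was approached.
n_rounds = len(log)
distinct_probes = len({tuple(sorted(entry["inputs"].items())) for entry in log})
mean_inputs_varied = sum(sum(entry["inputs"].values()) for entry in log) / n_rounds

print("summative score:", summative_score)
print("process indicators:", {"rounds": n_rounds,
                              "distinct probes": distinct_probes,
                              "mean inputs varied per round": mean_inputs_varied})
```

Two participants can arrive at the same final state (identical summative score) along very different exploration paths, which is exactly the information a process-oriented analysis is meant to capture.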

Complex problem solving is part of our daily life: finding the right person to share one’s life with, choosing a career that not only makes money, but that also makes us happy. Of course, CPS is not restricted to personal problems – life on Earth gives us many hard nuts to crack: climate change, population growth, the threat of war, the use and distribution of natural resources. In sum, many societal challenges can be seen as complex problems. To reduce that complexity to a one-hour lab activity on a random Friday afternoon puts it out of context and does not address CPS issues.

Theories about CPS should specify which populations they apply to. Across populations, one thing to consider is prior knowledge. CPS research with experts (e.g., Dew et al., 2009 ) is quite different from problem solving research using tasks that intentionally do not require any specific prior knowledge (see, e.g., Beckmann and Goode, 2014 ).

More than 20 years ago, Frensch and Funke (1995b) defined CPS as follows:

  • CPS occurs to overcome barriers between a given state and a desired goal state by means of behavioral and/or cognitive, multi-step activities. The given state, goal state, and barriers between given state and goal state are complex, change dynamically during problem solving, and are intransparent. The exact properties of the given state, goal state, and barriers are unknown to the solver at the outset. CPS implies the efficient interaction between a solver and the situational requirements of the task, and involves a solver’s cognitive, emotional, personal, and social abilities and knowledge. (p. 18)

The above definition is rather formal and does not account for content or relations between the simulation and the real world. In a sense, we need a new definition of CPS that addresses these issues. Based on our previous arguments, we propose the following working definition:

  • Complex problem solving is a collection of self-regulated psychological processes and activities necessary in dynamic environments to achieve ill-defined goals that cannot be reached by routine actions. Creative combinations of knowledge and a broad set of strategies are needed. Solutions are often more bricolage than perfect or optimal. The problem-solving process combines cognitive, emotional, and motivational aspects, particularly in high-stakes situations. Complex problems usually involve knowledge-rich requirements and collaboration among different persons.

The main differences to the older definition lie in the emphasis on (a) the self-regulation of processes, (b) creativity (as opposed to routine behavior), (c) the bricolage type of solution, and (d) the role of high-stakes challenges. Our new definition incorporates some aspects that have been discussed in this review but were not reflected in the 1995 definition, which focused on attributes of complex problems like dynamics or intransparency.

This leads us to the final reflection about the role of CPS for dealing with uncertainty and complexity in real life. We will distinguish thinking from reasoning and introduce the sense of possibility as an important aspect of validity.

CPS as Combining Reasoning and Thinking in an Uncertain Reality

Leading up to the Battle of Borodino in Leo Tolstoy’s novel “War and Peace”, Prince Andrei Bolkonsky explains the concept of war to his friend Pierre. Pierre expects war to resemble a game of chess: You position the troops and attempt to defeat your opponent by moving them in different directions.

“Far from it!”, Andrei responds. “In chess, you know the knight and his moves, you know the pawn and his combat strength. While in war, a battalion is sometimes stronger than a division and sometimes weaker than a company; it all depends on circumstances that can never be known. In war, you do not know the position of your enemy; some things you might be able to observe, some things you have to divine (but that depends on your ability to do so!) and many things cannot even be guessed at. In chess, you can see all of your opponent’s possible moves. In war, that is impossible. If you decide to attack, you cannot know whether the necessary conditions are met for you to succeed. Many a time, you cannot even know whether your troops will follow your orders…”

In essence, war is characterized by a high degree of uncertainty. A good commander (or politician) can add to that what he or she sees, tentatively fill in the blanks – and not just by means of logical deduction but also by intelligently bridging missing links. A bad commander extrapolates from what he sees and thus arrives at improper conclusions.

Many languages differentiate between two modes of mentalizing; for instance, the English language distinguishes between ‘thinking’ and ‘reasoning’. Reasoning denotes acute and exact mentalizing involving logical deductions. Such deductions are usually based on evidence and counterevidence. Thinking, however, is what is required to write novels. It is the construction of an initially unknown reality. But it is not a pipe dream, an unfounded process of fabrication. Rather, thinking asks us to imagine reality (“Wirklichkeitsfantasie”). In other words, a novelist has to possess a “sense of possibility” (“Möglichkeitssinn”, Robert Musil; in German, sense of possibility is often used synonymously with imagination even though imagination is not the same as sense of possibility, for imagination also encapsulates the impossible). This sense of possibility entails knowing the whole (or several wholes) or being able to construe an unknown whole that could accommodate a known part. The whole has to align with sociological and geographical givens, with the mentality of certain peoples or groups, and with the laws of physics and chemistry. Otherwise, the entire venture is ill-founded. A sense of possibility does not aim for the moon but imagines something that might be possible but has not been considered possible or even potentially possible so far.

Thinking is a means to eliminate uncertainty. This process requires both of the modes of thinking we have discussed thus far. Economic, political, or ecological decisions require us to first consider the situation at hand. Certain situational aspects can be known, but many cannot. In fact, von Clausewitz (1832) posits that only about 25% of the necessary information is available when a military decision needs to be made. Even then, there is no way to guarantee that whatever information is available is also correct: even if a piece of information was completely accurate yesterday, it might no longer apply today.

Once our sense of possibility has helped us grasp a situation, problem solvers need to call on their reasoning skills. Not every situation requires the same action, and we may want to act in one way or another to reach one goal or another. This appears logical, but it is a logic based on constantly shifting grounds: we cannot know whether the necessary conditions are met, sometimes the assumptions we have made turn out later to be incorrect, and sometimes we have to revise our assumptions or make completely new ones. It is necessary to constantly switch between our sense of possibility and our sense of reality, that is, to switch between thinking and reasoning. It is an arduous process, and some people handle it well, while others do not.

If we are to believe Tuchman’s (1984) book, “The March of Folly”, most politicians and commanders are fools. According to Tuchman, not much has changed in the 3300 years that have elapsed since the misguided Trojans decided to welcome the left-behind wooden horse into their city, a decision that ended up dismantling Troy’s defenses. The Trojans, too, had been warned, but decided not to heed the warning. Although Laocoön had revealed the horse’s true nature to them by attacking it with a spear, making the weapons inside the horse ring, the Trojans refused to see what was right in front of them. They did not want to listen, they wanted the war to be over, and this desire ended up shaping their perception.

The objective of psychology is to predict and explain human actions and behavior as accurately as possible. However, thinking cannot be investigated by limiting its study to neatly confined fractions of reality such as the realms of propositional logic, chess, Go tasks, the Tower of Hanoi, and so forth. Within these systems, there is little need for a sense of possibility. But a sense of possibility – the ability to divine and construe an unknown reality – is at least as important as logical reasoning skills. Not researching the sense of possibility limits the validity of psychological research. All economic and political decision making draws upon this sense of possibility. By not exploring it, psychological research dedicated to the study of thinking cannot further the understanding of politicians’ competence and the reasons that underlie political mistakes. Christopher Clark identifies European diplomats’, politicians’, and commanders’ inability to form an accurate representation of reality as a reason for the outbreak of World War I. According to Clark’s (2012) book, “The Sleepwalkers”, the politicians of the time lived in their own make-believe world, wrongfully assuming that it was the same world everyone else inhabited. If CPS research wants to make significant contributions to the world, it has to acknowledge complexity and uncertainty as important aspects of it.

CPS has been a subject of psychological research for more than 40 years. During this time period, the initial emphasis on analyzing how humans deal with complex, dynamic, and uncertain situations has been lost. What is subsumed under the heading of CPS in modern research has lost the original complexities of real-life problems. From our point of view, the challenges of the 21st century require a return to the origins of this research tradition. We would encourage researchers in the field of problem solving to come back to the original ideas. There is enough complexity and uncertainty in the world to be studied. Improving our understanding of how humans deal with these global and pressing problems would be a worthwhile enterprise.

Author Contributions

JF drafted a first version of the manuscript, DD added further text and commented on the draft. JF finalized the manuscript.

Authors Note

After more than 40 years of controversial discussions between both authors, this is the first joint paper. We are happy to have done this now! We have found common ground!

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors thank the Deutsche Forschungsgemeinschaft (DFG) for the continuous support of their research over many years. Thanks to Daniel Holt for his comments on validity issues, thanks to Julia Nolte who helped us by translating German text excerpts into readable English and helped us, together with Keri Hartman, to improve our style and grammar – thanks for that! We also thank the two reviewers for their helpful critical comments on earlier versions of this manuscript. Finally, we acknowledge financial support by Deutsche Forschungsgemeinschaft and Ruprecht-Karls-Universität Heidelberg within their funding programme Open Access Publishing .

1 The fMRI paper by Anderson (2012) uses the term “complex problem solving” for tasks that do not fall within our understanding of CPS and is therefore excluded from this list.

  • Alison L., van den Heuvel C., Waring S., Power N., Long A., O’Hara T., et al. (2013). Immersive simulated learning environments for researching critical incidents: a knowledge synthesis of the literature and experiences of studying high-risk strategic decision making. J. Cogn. Eng. Deci. Mak. 7 255–272. 10.1177/1555343412468113 [ CrossRef ] [ Google Scholar ]
  • Anderson J. R. (2012). Tracking problem solving by multivariate pattern analysis and hidden markov model algorithms. Neuropsychologia 50 487–498. 10.1016/j.neuropsychologia.2011.07.025 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Barth C. M., Funke J. (2010). Negative affective environments improve complex solving performance. Cogn. Emot. 24 1259–1268. 10.1080/02699930903223766 [ CrossRef ] [ Google Scholar ]
  • Beckmann J. F., Goode N. (2014). The benefit of being naïve and knowing it: the unfavourable impact of perceived context familiarity on learning in complex problem solving tasks. Instruct. Sci. 42 271–290. 10.1007/s11251-013-9280-7 [ CrossRef ] [ Google Scholar ]
  • Beghetto R. A., Kaufman J. C. (2007). Toward a broader conception of creativity: a case for “mini-c” creativity. Psychol. Aesthetics Creat. Arts 1 73–79. 10.1037/1931-3896.1.2.73 [ CrossRef ] [ Google Scholar ]
  • Bennett R. E. (2011). Formative assessment: a critical review. Assess. Educ. Princ. Policy Pract. 18 5–25. 10.1080/0969594X.2010.513678 [ CrossRef ] [ Google Scholar ]
  • Berry D. C., Broadbent D. E. (1984). On the relationship between task performance and associated verbalizable knowledge. Q. J. Exp. Psychol. 36 209–231. 10.1080/14640748408402156 [ CrossRef ] [ Google Scholar ]
  • Blech C., Funke J. (2010). You cannot have your cake and eat it, too: how induced goal conflicts affect complex problem solving. Open Psychol. J. 3 42–53. 10.2174/1874350101003010042 [ CrossRef ] [ Google Scholar ]
  • Brehmer B., Dörner D. (1993). Experiments with computer-simulated microworlds: escaping both the narrow straits of the laboratory and the deep blue sea of the field study. Comput. Hum. Behav. 9 171–184. 10.1016/0747-5632(93)90005-D [ CrossRef ] [ Google Scholar ]
  • Buchner A. (1995). “Basic topics and approaches to the study of complex problem solving,” in Complex Problem Solving: The European Perspective , eds Frensch P. A., Funke J. (Hillsdale, NJ: Erlbaum; ), 27–63. [ Google Scholar ]
  • Buchner A., Funke J. (1993). Finite state automata: dynamic task environments in problem solving research. Q. J. Exp. Psychol. 46A , 83–118. 10.1080/14640749308401068 [ CrossRef ] [ Google Scholar ]
  • Clark C. (2012). The Sleepwalkers: How Europe Went to War in 1914 . London: Allen Lane. [ Google Scholar ]
  • Csapó B., Funke J. (2017a). “The development and assessment of problem solving in 21st-century schools,” in The Nature of Problem Solving: Using Research to Inspire 21st Century Learning , eds Csapó B., Funke J. (Paris: OECD Publishing; ), 19–31. [ Google Scholar ]
  • Csapó B., Funke J. (eds) (2017b). The Nature of Problem Solving. Using Research to Inspire 21st Century Learning. Paris: OECD Publishing. [ Google Scholar ]
  • Danner D., Hagemann D., Holt D. V., Hager M., Schankin A., Wüstenberg S., et al. (2011a). Measuring performance in dynamic decision making. Reliability and validity of the Tailorshop simulation. J. Ind. Differ. 32 225–233. 10.1027/1614-0001/a000055 [ CrossRef ] [ Google Scholar ]
  • Danner D., Hagemann D., Schankin A., Hager M., Funke J. (2011b). Beyond IQ: a latent state-trait analysis of general intelligence, dynamic decision making, and implicit learning. Intelligence 39 323–334. 10.1016/j.intell.2011.06.004 [ CrossRef ] [ Google Scholar ]
  • Dew N., Read S., Sarasvathy S. D., Wiltbank R. (2009). Effectual versus predictive logics in entrepreneurial decision-making: differences between experts and novices. J. Bus. Ventur. 24 287–309. 10.1016/j.jbusvent.2008.02.002 [ CrossRef ] [ Google Scholar ]
  • Dhami M. K., Mandel D. R., Mellers B. A., Tetlock P. E. (2015). Improving intelligence analysis with decision science. Perspect. Psychol. Sci. 10 753–757. 10.1177/1745691615598511 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dillon J. T. (1982). Problem finding and solving. J. Creat. Behav. 16 97–111. 10.1002/j.2162-6057.1982.tb00326.x [ CrossRef ] [ Google Scholar ]
  • Dörner D. (1975). Wie Menschen eine Welt verbessern wollten [How people wanted to improve a world]. Bild Der Wissenschaft 12 48–53. [ Google Scholar ]
  • Dörner D. (1980). On the difficulties people have in dealing with complexity. Simulat. Gam. 11 87–106. 10.1177/104687818001100108 [ CrossRef ] [ Google Scholar ]
  • Dörner D. (1996). The Logic of Failure: Recognizing and Avoiding Error in Complex Situations. New York, NY: Basic Books. [ Google Scholar ]
  • Dörner D., Drewes U., Reither F. (1975). “Über das Problemlösen in sehr komplexen Realitätsbereichen,” in Bericht über den 29. Kongreß der DGfPs in Salzburg 1974 Band 1 , ed. Tack W. H. (Göttingen: Hogrefe; ), 339–340. [ Google Scholar ]
  • Dörner D., Güss C. D. (2011). A psychological analysis of Adolf Hitler’s decision making as commander in chief: summa confidentia et nimius metus. Rev. Gen. Psychol. 15 37–49. 10.1037/a0022375 [ CrossRef ] [ Google Scholar ]
  • Dörner D., Güss C. D. (2013). PSI: a computational architecture of cognition, motivation, and emotion. Rev. Gen. Psychol. 17 297–317. 10.1037/a0032947 [ CrossRef ] [ Google Scholar ]
  • Dörner D., Kreuzig H. W., Reither F., Stäudel T. (1983). Lohhausen. Vom Umgang mit Unbestimmtheit und Komplexität. Bern: Huber. [ Google Scholar ]
  • Ederer P., Patt A., Greiff S. (2016). Complex problem-solving skills and innovativeness – evidence from occupational testing and regional data. Eur. J. Educ. 51 244–256. 10.1111/ejed.12176 [ CrossRef ] [ Google Scholar ]
  • Edwards W. (1962). Dynamic decision theory and probabilistic information processing. Hum. Factors 4 59–73. 10.1177/001872086200400201 [ CrossRef ] [ Google Scholar ]
  • Engelhart M., Funke J., Sager S. (2017). A web-based feedback study on optimization-based training and analysis of human decision making. J. Dynamic Dec. Mak. 3 1–23. [ Google Scholar ]
  • Ericsson K. A., Simon H. A. (1983). Protocol Analysis: Verbal Reports As Data. Cambridge, MA: Bradford. [ Google Scholar ]
  • Fischer A., Greiff S., Funke J. (2017). “The history of complex problem solving,” in The Nature of Problem Solving: Using Research to Inspire 21st Century Learning , eds Csapó B., Funke J. (Paris: OECD Publishing; ), 107–121. [ Google Scholar ]
  • Fischer A., Holt D. V., Funke J. (2015). Promoting the growing field of dynamic decision making. J. Dynamic Decis. Mak. 1 1–3. 10.11588/jddm.2015.1.23807 [ CrossRef ] [ Google Scholar ]
  • Fischer A., Holt D. V., Funke J. (2016). The first year of the “journal of dynamic decision making.” J. Dynamic Decis. Mak. 2 1–2. 10.11588/jddm.2016.1.28995 [ CrossRef ] [ Google Scholar ]
  • Fischer A., Neubert J. C. (2015). The multiple faces of complex problems: a model of problem solving competency and its implications for training and assessment. J. Dynamic Decis. Mak. 1 1–14. 10.11588/jddm.2015.1.23945 [ CrossRef ] [ Google Scholar ]
  • Frensch P. A., Funke J. (eds) (1995a). Complex Problem Solving: The European Perspective. Hillsdale, NJ: Erlbaum. [ Google Scholar ]
  • Frensch P. A., Funke J. (1995b). “Definitions, traditions, and a general framework for understanding complex problem solving,” in Complex Problem Solving: The European Perspective , eds Frensch P. A., Funke J. (Hillsdale, NJ: Lawrence Erlbaum; ), 3–25. [ Google Scholar ]
  • Frischkorn G. T., Greiff S., Wüstenberg S. (2014). The development of complex problem solving in adolescence: a latent growth curve analysis. J. Educ. Psychol. 106 1004–1020. 10.1037/a0037114 [ CrossRef ] [ Google Scholar ]
  • Funke J. (1985). Steuerung dynamischer Systeme durch Aufbau und Anwendung subjektiver Kausalmodelle. Z. Psychol. 193 435–457. [ Google Scholar ]
  • Funke J. (1986). Komplexes Problemlösen - Bestandsaufnahme und Perspektiven [Complex Problem Solving: Survey and Perspectives]. Heidelberg: Springer. [ Google Scholar ]
  • Funke J. (1993). “Microworlds based on linear equation systems: a new approach to complex problem solving and experimental results,” in The Cognitive Psychology of Knowledge , eds Strube G., Wender K.-F. (Amsterdam: Elsevier Science Publishers; ), 313–330. [ Google Scholar ]
  • Funke J. (1995). “Experimental research on complex problem solving,” in Complex Problem Solving: The European Perspective , eds Frensch P. A., Funke J. (Hillsdale, NJ: Erlbaum; ), 243–268. [ Google Scholar ]
  • Funke J. (2010). Complex problem solving: a case for complex cognition? Cogn. Process. 11 133–142. 10.1007/s10339-009-0345-0 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Funke J. (2012). “Complex problem solving,” in Encyclopedia of the Sciences of Learning Vol. 38 ed. Seel N. M. (Heidelberg: Springer; ), 682–685. [ Google Scholar ]
  • Funke J. (2014a). Analysis of minimal complex systems and complex problem solving require different forms of causal cognition. Front. Psychol. 5 : 739 10.3389/fpsyg.2014.00739 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Funke J. (2014b). “Problem solving: what are the important questions?,” in Proceedings of the 36th Annual Conference of the Cognitive Science Society , eds Bello P., Guarini M., McShane M., Scassellati B. (Austin, TX: Cognitive Science Society; ), 493–498. [ Google Scholar ]
  • Funke J., Fischer A., Holt D. V. (2017). When less is less: solving multiple simple problems is not complex problem solving—A comment on Greiff et al. (2015). J. Intell. 5 : 5 10.3390/jintelligence5010005 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Funke J., Fischer A., Holt D. V. (2018). “Competencies for complexity: problem solving in the 21st century,” in Assessment and Teaching of 21st Century Skills , eds Care E., Griffin P., Wilson M. (Dordrecht: Springer; ), 3. [ Google Scholar ]
  • Funke J., Greiff S. (2017). “Dynamic problem solving: multiple-item testing based on minimally complex systems,” in Competence Assessment in Education. Research, Models and Instruments , eds Leutner D., Fleischer J., Grünkorn J., Klieme E. (Heidelberg: Springer; ), 427–443. [ Google Scholar ]
  • Gobert J. D., Kim Y. J., Pedro M. A. S., Kennedy M., Betts C. G. (2015). Using educational data mining to assess students’ skills at designing and conducting experiments within a complex systems microworld. Think. Skills Creat. 18 81–90. 10.1016/j.tsc.2015.04.008 [ CrossRef ] [ Google Scholar ]
  • Goode N., Beckmann J. F. (2010). You need to know: there is a causal relationship between structural knowledge and control performance in complex problem solving tasks. Intelligence 38 345–352. 10.1016/j.intell.2010.01.001 [ CrossRef ] [ Google Scholar ]
  • Gray W. D. (2002). Simulated task environments: the role of high-fidelity simulations, scaled worlds, synthetic environments, and laboratory tasks in basic and applied cognitive research. Cogn. Sci. Q. 2 205–227. [ Google Scholar ]
  • Greiff S., Fischer A. (2013). Measuring complex problem solving: an educational application of psychological theories. J. Educ. Res. 5 38–58. [ Google Scholar ]
  • Greiff S., Fischer A., Stadler M., Wüstenberg S. (2015a). Assessing complex problem-solving skills with multiple complex systems. Think. Reason. 21 356–382. 10.1080/13546783.2014.989263 [ CrossRef ] [ Google Scholar ]
  • Greiff S., Stadler M., Sonnleitner P., Wolff C., Martin R. (2015b). Sometimes less is more: comparing the validity of complex problem solving measures. Intelligence 50 100–113. 10.1016/j.intell.2015.02.007 [ CrossRef ] [ Google Scholar ]
  • Greiff S., Fischer A., Wüstenberg S., Sonnleitner P., Brunner M., Martin R. (2013a). A multitrait–multimethod study of assessment instruments for complex problem solving. Intelligence 41 579–596. 10.1016/j.intell.2013.07.012 [ CrossRef ] [ Google Scholar ]
  • Greiff S., Holt D. V., Funke J. (2013b). Perspectives on problem solving in educational assessment: analytical, interactive, and collaborative problem solving. J. Problem Solv. 5 71–91. 10.7771/1932-6246.1153 [ CrossRef ] [ Google Scholar ]
  • Greiff S., Wüstenberg S., Molnár G., Fischer A., Funke J., Csapó B. (2013c). Complex problem solving in educational contexts—something beyond g: concept, assessment, measurement invariance, and construct validity. J. Educ. Psychol. 105 364–379. 10.1037/a0031856 [ CrossRef ] [ Google Scholar ]
  • Greiff S., Funke J. (2009). “Measuring complex problem solving: the MicroDYN approach,” in The Transition to Computer-Based Assessment. New Approaches to Skills Assessment and Implications for Large-Scale Testing , eds Scheuermann F., Björnsson J. (Luxembourg: Office for Official Publications of the European Communities; ), 157–163. [ Google Scholar ]
  • Greiff S., Funke J. (2017). “Interactive problem solving: exploring the potential of minimal complex systems,” in The Nature of Problem Solving: Using Research to Inspire 21st Century Learning , eds Csapó B., Funke J. (Paris: OECD Publishing; ), 93–105. [ Google Scholar ]
  • Greiff S., Martin R. (2014). What you see is what you (don’t) get: a comment on Funke’s (2014) opinion paper. Front. Psychol. 5 : 1120 10.3389/fpsyg.2014.01120 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Greiff S., Neubert J. C. (2014). On the relation of complex problem solving, personality, fluid intelligence, and academic achievement. Learn. Ind. Diff. 36 37–48. 10.1016/j.lindif.2014.08.003 [ CrossRef ] [ Google Scholar ]
  • Greiff S., Niepel C., Scherer R., Martin R. (2016). Understanding students’ performance in a computer-based assessment of complex problem solving: an analysis of behavioral data from computer-generated log files. Comput. Hum. Behav. 61 36–46. 10.1016/j.chb.2016.02.095 [ CrossRef ] [ Google Scholar ]
  • Greiff S., Stadler M., Sonnleitner P., Wolff C., Martin R. (2017). Sometimes more is too much: a rejoinder to the commentaries on Greiff et al. (2015). J. Intell. 5 : 6 10.3390/jintelligence5010006 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Greiff S., Wüstenberg S. (2014). Assessment with microworlds using MicroDYN: measurement invariance and latent mean comparisons. Eur. J. Psychol. Assess. 1 1–11. 10.1027/1015-5759/a000194 [ CrossRef ] [ Google Scholar ]
  • Greiff S., Wüstenberg S. (2015). Komplexer Problemlösetest COMPRO [Complex Problem-Solving Test COMPRO]. Mödling: Schuhfried. [ Google Scholar ]
  • Greiff S., Wüstenberg S., Funke J. (2012). Dynamic problem solving: a new assessment perspective. Appl. Psychol. Measure. 36 189–213. 10.1177/0146621612439620 [ CrossRef ] [ Google Scholar ]
  • Griffin P., Care E. (2015). “The ATC21S method,” in Assessment and Teaching of 21st Century Skills , eds Griffin P., Care E. (Dordrecht, NL: Springer; ), 3–33. [ Google Scholar ]
  • Güss C. D., Dörner D. (2011). Cultural differences in dynamic decision-making strategies in a non-linear, time-delayed task. Cogn. Syst. Res. 12 365–376. 10.1016/j.cogsys.2010.12.003 [ CrossRef ] [ Google Scholar ]
  • Güss C. D., Tuason M. T., Orduña L. V. (2015). Strategies, tactics, and errors in dynamic decision making in an Asian sample. J. Dynamic Deci. Mak. 1 1–14. 10.11588/jddm.2015.1.13131 [ CrossRef ] [ Google Scholar ]
  • Güss C. D., Wiley B. (2007). Metacognition of problem-solving strategies in Brazil, India, and the United States. J. Cogn. Cult. 7 1–25. 10.1163/156853707X171793 [ CrossRef ] [ Google Scholar ]
  • Herde C. N., Wüstenberg S., Greiff S. (2016). Assessment of complex problem solving: what we know and what we don’t know. Appl. Meas. Educ. 29 265–277. 10.1080/08957347.2016.1209208 [ CrossRef ] [ Google Scholar ]
  • Hermes M., Stelling D. (2016). Context matters, but how much? Latent state – trait analysis of cognitive ability assessments. Int. J. Sel. Assess. 24 285–295. 10.1111/ijsa.12147 [ CrossRef ] [ Google Scholar ]
  • Hotaling J. M., Fakhari P., Busemeyer J. R. (2015). “Dynamic decision making,” in International Encyclopedia of the Social & Behavioral Sciences , 2nd Edn, eds Smelser N. J., Batles P. B. (New York, NY: Elsevier; ), 709–714. [ Google Scholar ]
  • Hundertmark J., Holt D. V., Fischer A., Said N., Fischer H. (2015). System structure and cognitive ability as predictors of performance in dynamic system control tasks. J. Dynamic Deci. Mak. 1 1–10. 10.11588/jddm.2015.1.26416 [ CrossRef ] [ Google Scholar ]
  • Jäkel F., Schreiber C. (2013). Introspection in problem solving. J. Problem Solv. 6 20–33. 10.7771/1932-6246.1131 [ CrossRef ] [ Google Scholar ]
  • Jansson A. (1994). Pathologies in dynamic decision making: consequences or precursors of failure? Sprache Kogn. 13 160–173. [ Google Scholar ]
  • Kaufman J. C., Beghetto R. A. (2009). Beyond big and little: the four c model of creativity. Rev. Gen. Psychol. 13 1–12. 10.1037/a0013688 [ CrossRef ] [ Google Scholar ]
  • Knauff M., Wolf A. G. (2010). Complex cognition: the science of human reasoning, problem-solving, and decision-making. Cogn. Process. 11 99–102. 10.1007/s10339-010-0362-z [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kretzschmar A. (2017). Sometimes less is not enough: a commentary on Greiff et al. (2015). J. Intell. 5 : 4 10.3390/jintelligence5010004 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kretzschmar A., Neubert J. C., Wüstenberg S., Greiff S. (2016). Construct validity of complex problem solving: a comprehensive view on different facets of intelligence and school grades. Intelligence 54 55–69. 10.1016/j.intell.2015.11.004 [ CrossRef ] [ Google Scholar ]
  • Kretzschmar A., Süß H.-M. (2015). A study on the training of complex problem solving competence. J. Dynamic Deci. Mak. 1 1–14. 10.11588/jddm.2015.1.15455 [ CrossRef ] [ Google Scholar ]
  • Lee H., Cho Y. (2007). Factors affecting problem finding depending on degree of structure of problem situation. J. Educ. Res. 101 113–123. 10.3200/JOER.101.2.113-125 [ CrossRef ] [ Google Scholar ]
  • Leutner D., Fleischer J., Wirth J., Greiff S., Funke J. (2012). Analytische und dynamische Problemlösekompetenz im Lichte internationaler Schulleistungsvergleichsstudien: Untersuchungen zur Dimensionalität. Psychol. Rundschau 63 34–42. 10.1026/0033-3042/a000108 [ CrossRef ] [ Google Scholar ]
  • Luchins A. S. (1942). Mechanization in problem solving: the effect of einstellung. Psychol. Monogr. 54 1–95. 10.1037/h0093502 [ CrossRef ] [ Google Scholar ]
  • Mack O., Khare A., Krämer A., Burgartz T. (eds) (2016). Managing in a VUCA world. Heidelberg: Springer. [ Google Scholar ]
  • Mainert J., Kretzschmar A., Neubert J. C., Greiff S. (2015). Linking complex problem solving and general mental ability to career advancement: does a transversal skill reveal incremental predictive validity? Int. J. Lifelong Educ. 34 393–411. 10.1080/02601370.2015.1060024 [ CrossRef ] [ Google Scholar ]
  • Mainzer K. (2009). Challenges of complexity in the 21st century. An interdisciplinary introduction. Eur. Rev. 17 219–236. 10.1017/S1062798709000714 [ CrossRef ] [ Google Scholar ]
  • Meadows D. H., Meadows D. L., Randers J. (1992). Beyond the Limits. Vermont, VA: Chelsea Green Publishing. [ Google Scholar ]
  • Meadows D. H., Meadows D. L., Randers J., Behrens W. W. (1972). The Limits to Growth. New York, NY: Universe Books. [ Google Scholar ]
  • Meißner A., Greiff S., Frischkorn G. T., Steinmayr R. (2016). Predicting complex problem solving and school grades with working memory and ability self-concept. Learn. Ind. Differ. 49 323–331. 10.1016/j.lindif.2016.04.006 [ CrossRef ] [ Google Scholar ]
  • Molnàr G., Greiff S., Wüstenberg S., Fischer A. (2017). “Empirical study of computer-based assessment of domain-general complex problem-solving skills,” in The Nature of Problem Solving: Using research to Inspire 21st Century Learning , eds Csapó B., Funke J. (Paris: OECD Publishing; ), 125–141. [ Google Scholar ]
  • National Research Council (2011). Assessing 21st Century Skills: Summary of a Workshop. Washington, DC: The National Academies Press. [ PubMed ] [ Google Scholar ]
  • Newell A., Shaw J. C., Simon H. A. (1959). A general problem-solving program for a computer. Comput. Automat. 8 10–16. [ Google Scholar ]
  • Nisbett R. E., Wilson T. D. (1977). Telling more than we can know: verbal reports on mental processes. Psychol. Rev. 84 231–259. 10.1037/0033-295X.84.3.231 [ CrossRef ] [ Google Scholar ]
  • OECD (2014). “PISA 2012 results,” in Creative Problem Solving: Students’ Skills in Tackling Real-Life problems , Vol. 5 (Paris: OECD Publishing; ). [ Google Scholar ]
  • Osman M. (2010). Controlling uncertainty: a review of human behavior in complex dynamic environments. Psychol. Bull. 136 65–86. 10.1037/a0017815 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Osman M. (2012). The role of reward in dynamic decision making. Front. Neurosci. 6 : 35 10.3389/fnins.2012.00035 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Qudrat-Ullah H. (2015). Better Decision Making in Complex, Dynamic Tasks. Training with Human-Facilitated Interactive Learning Environments. Heidelberg: Springer. [ Google Scholar ]
  • Ramnarayan S., Strohschneider S., Schaub H. (1997). Trappings of expertise and the pursuit of failure. Simulat. Gam. 28 28–43. 10.1177/1046878197281004 [ CrossRef ] [ Google Scholar ]
  • Reuschenbach B. (2008). Planen und Problemlösen im Komplexen Handlungsfeld Pflege. Berlin: Logos. [ Google Scholar ]
  • Rohe M., Funke J., Storch M., Weber J. (2016). Can motto goals outperform learning and performance goals? Influence of goal setting on performance, intrinsic motivation, processing style, and affect in a complex problem solving task. J. Dynamic Deci. Mak. 2 1–15. 10.11588/jddm.2016.1.28510 [ CrossRef ] [ Google Scholar ]
  • Scherer R., Greiff S., Hautamäki J. (2015). Exploring the relation between time on task and ability in complex problem solving. Intelligence 48 37–50. 10.1016/j.intell.2014.10.003 [ CrossRef ] [ Google Scholar ]
  • Schoppek W., Fischer A. (2015). Complex problem solving – single ability or complex phenomenon? Front. Psychol. 6 : 1669 10.3389/fpsyg.2015.01669 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Schraw G., Dunkle M., Bendixen L. D. (1995). Cognitive processes in well-defined and ill-defined problem solving. Appl. Cogn. Psychol. 9 523–538. 10.1002/acp.2350090605 [ CrossRef ] [ Google Scholar ]
  • Schweizer F., Wüstenberg S., Greiff S. (2013). Validity of the MicroDYN approach: complex problem solving predicts school grades beyond working memory capacity. Learn. Ind. Differ. 24 42–52. 10.1016/j.lindif.2012.12.011 [ CrossRef ] [ Google Scholar ]
  • Schweizer T. S., Schmalenberger K. M., Eisenlohr-Moul T. A., Mojzisch A., Kaiser S., Funke J. (2016). Cognitive and affective aspects of creative option generation in everyday life situations. Front. Psychol. 7 : 1132 10.3389/fpsyg.2016.01132 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Selten R., Pittnauer S., Hohnisch M. (2012). Dealing with dynamic decision problems when knowledge of the environment is limited: an approach based on goal systems. J. Behav. Deci. Mak. 25 443–457. 10.1002/bdm.738 [ CrossRef ] [ Google Scholar ]
  • Simon H. A. (1957). Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations , 2nd Edn New York, NY: Macmillan. [ Google Scholar ]
  • Sonnleitner P., Brunner M., Keller U., Martin R. (2014). Differential relations between facets of complex problem solving and students’ immigration background. J. Educ. Psychol. 106 681–695. 10.1037/a0035506 [ CrossRef ] [ Google Scholar ]
  • Spering M., Wagener D., Funke J. (2005). The role of emotions in complex problem solving. Cogn. Emot. 19 1252–1261. 10.1080/02699930500304886 [ CrossRef ] [ Google Scholar ]
  • Stadler M., Becker N., Gödker M., Leutner D., Greiff S. (2015). Complex problem solving and intelligence: a meta-analysis. Intelligence 53 92–101. 10.1016/j.intell.2015.09.005 [ CrossRef ] [ Google Scholar ]
  • Stadler M., Niepel C., Greiff S. (2016). Easily too difficult: estimating item difficulty in computer simulated microworlds. Comput. Hum. Behav. 65 100–106. 10.1016/j.chb.2016.08.025 [ CrossRef ] [ Google Scholar ]
  • Sternberg R. J. (1995). “Expertise in complex problem solving: a comparison of alternative conceptions,” in Complex Problem Solving: The European Perspective , eds Frensch P. A., Funke J. (Hillsdale, NJ: Erlbaum; ), 295–321. [ Google Scholar ]
  • Sternberg R. J., Frensch P. A. (1991). Complex Problem Solving: Principles and Mechanisms. (eds) Sternberg R. J., Frensch P. A. Hillsdale, NJ: Erlbaum. [ Google Scholar ]
  • Strohschneider S., Güss C. D. (1998). Planning and problem solving: differences between Brazilian and German students. J. Cross-Cult. Psychol. 29 695–716. 10.1177/0022022198296002 [ CrossRef ] [ Google Scholar ]
  • Strohschneider S., Güss C. D. (1999). The fate of the Moros: a cross-cultural exploration of strategies in complex and dynamic decision making. Int. J. Psychol. 34 235–252. 10.1080/002075999399873 [ CrossRef ] [ Google Scholar ]
  • Thimbleby H. (2007). Press On. Principles of Interaction. Cambridge, MA: MIT Press. [ Google Scholar ]
  • Tobinski D. A., Fritz A. (2017). “EcoSphere: a new paradigm for problem solving in complex systems,” in The Nature of Problem Solving: Using Research to Inspire 21st Century Learning , eds Csapó B., Funke J. (Paris: OECD Publishing; ), 211–222. [ Google Scholar ]
  • Tremblay S., Gagnon J.-F., Lafond D., Hodgetts H. M., Doiron M., Jeuniaux P. P. J. M. H. (2017). A cognitive prosthesis for complex decision-making. Appl. Ergon. 58 349–360. 10.1016/j.apergo.2016.07.009 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tschirgi J. E. (1980). Sensible reasoning: a hypothesis about hypotheses. Child Dev. 51 1–10. 10.2307/1129583 [ CrossRef ] [ Google Scholar ]
  • Tuchman B. W. (1984). The March of Folly. From Troy to Vietnam. New York, NY: Ballantine Books. [ Google Scholar ]
  • Verweij M., Thompson M. (eds) (2006). Clumsy Solutions for A Complex World. Governance, Politics and Plural Perceptions. New York, NY: Palgrave Macmillan; 10.1057/9780230624887 [ CrossRef ] [ Google Scholar ]
  • Viehrig K., Siegmund A., Funke J., Wüstenberg S., Greiff S. (2017). “The heidelberg inventory of geographic system competency model,” in Competence Assessment in Education. Research, Models and Instruments , eds Leutner D., Fleischer J., Grünkorn J., Klieme E. (Heidelberg: Springer; ), 31–53. [ Google Scholar ]
  • von Clausewitz C. (1832). Vom Kriege [On war]. Berlin: Dämmler. [ Google Scholar ]
  • Wendt A. N. (2017). The empirical potential of live streaming beyond cognitive psychology. J. Dynamic Deci. Mak. 3 1–9. 10.11588/jddm.2017.1.33724 [ CrossRef ] [ Google Scholar ]
  • Wiliam D., Black P. (1996). Meanings and consequences: a basis for distinguishing formative and summative functions of assessment? Br. Educ. Res. J. 22 537–548. 10.1080/0141192960220502 [ CrossRef ] [ Google Scholar ]
  • World Economic Forum (2015). New Vision for Education: Unlocking the Potential of Technology. Geneva: World Economic Forum. [ Google Scholar ]
  • World Economic Forum (2016). Global Risks 2016: Insight Report , 11th Edn Geneva: World Economic Forum. [ Google Scholar ]
  • Wüstenberg S., Greiff S., Funke J. (2012). Complex problem solving — more than reasoning? Intelligence 40 1–14. 10.1016/j.intell.2011.11.003 [ CrossRef ] [ Google Scholar ]
  • Wüstenberg S., Greiff S., Vainikainen M.-P., Murphy K. (2016). Individual differences in students’ complex problem solving skills: how they evolve and what they imply. J. Educ. Psychol. 108 1028–1044. 10.1037/edu0000101 [ CrossRef ] [ Google Scholar ]
  • Wüstenberg S., Stadler M., Hautamäki J., Greiff S. (2014). The role of strategy knowledge for the application of strategies in complex problem solving tasks. Technol. Knowl. Learn. 19 127–146. 10.1007/s10758-014-9222-8 [ CrossRef ] [ Google Scholar ]

Meg Selig


3 Simple Solutions to Complex Health Challenges

Ease suffering and improve well-being with these easy-to-use actions.

Updated June 30, 2024 | Reviewed by Ray Parker

  • People with complex mental health issues are more likely to seek help if the referral process is easy.
  • Labels on toxic products raise awareness and reduce addictive behaviors.
  • Simple reminders are an effective way to increase healthy behaviors.

Many health and mental health problems defy easy solutions. But some quick fixes are so simple and effective that they can lead to amazing progress. In fact, they make me want to slap my forehead and shout, “D'oh! Why didn’t I think of that?”

Here are three huge steps toward remedying complicated health challenges, starting with the most brilliant one of all.

1. Call 988 to get help and referrals for psychological problems. Who was the genius who realized there could be a three-digit phone number, parallel to 911, to call for mental health referrals and crises? Whoever you are, thank you! Before 988, a helping professional, a concerned friend, or a troubled person had to memorize or look up a random 10-digit number or fumble around for a crisis helpline business card. But now, people can just call 988 to reach the Suicide and Crisis Lifeline and speak with a trained crisis counselor. The website states, “The 988 Lifeline provides 24/7, free and confidential support for people in distress, prevention and crisis resources for you or your loved ones, and best practices for professionals in the United States."

Since 988 was rolled out in mid-2022, call volume has increased by 46%, texts by 1,135%, and chats by 141%, according to an April 2024 report by the Substance Abuse and Mental Health Services Administration (SAMHSA). On the darker side, many Americans are still unaware of the new number, and some states have not adequately funded the program. Still, the increased volume of calls and contacts indicates that the 988 system is working.

Remember: 988 is the 911 of mental health.

2. Stick a label on toxic substances and reduce addictive behaviors. Labeling harmful substances, such as cigarettes and other tobacco products, has been a surprisingly effective way to curb smoking, the one habit with the greatest potential to cause death, disability, and disease. Since 1965, labels on cigarette packets have displayed this warning or a variation of it: Warning: Cigarette Smoking Is Hazardous to Your Health. In the U.S. in 1965, approximately 42% of adults were smokers (52% of men and 34% of women); in 2021, only 11.5% of U.S. adults were smokers. Providing information and raising awareness via labels was one reason for the steady drop over time.

Labels are now being considered for other harmful substances, too. Most people are aware that smoking causes cancer, but did you know that alcohol increases the risk of cancer, too? Only about one in three Americans is aware that drinking alcoholic beverages can increase their risk of cancer, according to this New York Times article. While some people do associate liver disease and liver cancer with alcohol, it is not widely known that alcohol has also been associated with other cancers, such as breast, colorectal, and esophageal cancers.

In the U.S., labels on alcohol are small, vague, and on the back of the bottle. The U.S. does not currently require warning labels, calorie information, or nutrition labels on alcohol. If it did, usage would likely fall. For example, in a study from the Yukon Territory, “Sales of products carrying the (warning) labels ... fell by around 7 percent during the intervention and several months that followed.”

Interestingly, labels can also work in the psychological domain. A series of studies by UCLA psychologist Matthew Lieberman showed the value of attaching labels to your own swirling thoughts and feelings. Study participants who inwardly named emotions like “anger” or “fear” had less activity in the amygdala, the fight-or-flight part of the brain, and more activity in the prefrontal cortex, the thinking part of the brain. In other words, labeling their feelings shifted them from an emotional state to a problem-solving state. (More here.)

3. Simple reminders can increase healthy behaviors. In this busy life, it’s easy for things you really want to do—like send a birthday card to your friend—to slide to the bottom of your to-do list, and then off of it altogether.

So it is with COVID vaccination boosters: Even people who want to get one might neglect to prioritize it. A recent study looked at whether offering free round-trip transportation to vaccination sites would increase the number of people getting boosters. Nope. The researchers discovered that offering a free ride had no effect on increasing vaccination rates.

What did work, however, were simple text reminders . As the research abstract puts it, “…behaviourally informed COVID-19 vaccination reminders…increased the 30-day COVID-19 booster uptake by 21% (1.05 percentage points) and spilled over to increase 30-day influenza vaccinations by 8% (0.34 percentage points) in our megastudy.” In other studies, simple reminders from healthcare providers also significantly raised booster rates.
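As a side note on the reported numbers: the relative figure and the percentage-point figure quoted above jointly imply an approximate baseline rate. The calculation below is our own back-of-the-envelope arithmetic, not a figure reported by the study.

```python
# Back-of-the-envelope arithmetic (ours, not a figure reported by the study):
# if +1.05 percentage points corresponds to a +21% relative increase, the
# control group's 30-day booster uptake must have been roughly 1.05 / 0.21 = 5%.
absolute_increase_pp = 1.05   # percentage points
relative_increase = 0.21      # 21% relative to the control group
implied_baseline_pp = absolute_increase_pp / relative_increase
print(f"implied baseline uptake: about {implied_baseline_pp:.0f} percentage points")
```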


What could be easier and more cost-effective than sending a text? Yet such a simple action can significantly reduce hospitalizations, disease, and deaths from COVID-19, flu, and other diseases.

A Simple Summary

  • Make the right thing to do the easy thing to do.
  • Labeling, whether products or thoughts, can raise awareness and reduce harmful and even addictive behaviors.
  • Simple reminders are often the best way to help people increase healthy and socially valuable behaviors.

(c) Meg Selig, 2024. All rights reserved. For permissions, click here .

Milkman, K. L., et al. (2024). "Megastudy shows that reminders boost vaccination but adding free rides does not." Nature, 26 June 2024.

https://www.nytimes.com/2024/04/09/health/alcohol-cancer-warning.html


Meg Selig is the author of Changepower! 37 Secrets to Habit Change Success .


June 24, 2024 feature

Merging AI and human efforts to tackle complex mathematical problems

by Ingrid Fadelli, Phys.org

By rapidly analyzing large amounts of data and making accurate predictions, artificial intelligence (AI) tools could help to answer many long-standing research questions. For instance, they could help to identify new materials to fabricate electronics or the patterns in brain activity associated with specific human behaviors.

One area in which AI has so far been rarely applied is number theory, a branch of mathematics focusing on the study of integers and arithmetic functions. Most research questions in this field are solved by human mathematicians, often years or decades after their initial introduction.

Researchers at the Israel Institute of Technology (Technion) recently set out to explore the possibility of tackling long-standing problems in number theory using state-of-the-art computational models.

In a recent paper, published in the Proceedings of the National Academy of Sciences, they demonstrated that such a computational approach can support the work of mathematicians, helping them make exciting new discoveries.

"Computer algorithms are increasingly dominant in scientific research , a practice now broadly called 'AI for Science,'" Rotem Elimelech and Ido Kaminer, authors of the paper, told Phys.org.

"However, in fields like number theory, advances are often attributed to creativity or human intuition. In these fields, questions can remain unresolved for hundreds of years, and while finding an answer can be as simple as discovering the correct formula, there is no clear path for doing so."

Elimelech, Kaminer and their colleagues have been exploring the possibility that computer algorithms could automate or augment mathematical intuition. This inspired them to establish the Ramanujan Machine research group, a new collaborative effort aimed at developing algorithms to accelerate mathematical research.

Their research group for this study also included Ofir David, Carlos de la Cruz Mengual, Rotem Kalisch, Wolfgang Berndt, Michael Shalyt, Mark Silberstein, and Yaron Hadad.

"On a philosophical level, our work explores the interplay between algorithms and mathematicians," Elimelech and Kaminer explained. "Our new paper indeed shows that algorithms can provide the necessary data to inspire creative insights, leading to discoveries of new formulas and new connections between mathematical constants."

The first objective of the recent study by Elimelech, Kaminer and their colleagues was to make new discoveries about mathematical constants. While working toward this goal, they also set out to test and promote alternative approaches for conducting research in pure mathematics.

"The 'conservative matrix field' is a structure analogous to the conservative vector field that every math or physics student learns about in first year of undergrad," Elimelech and Kaminer explained. "In a conservative vector field, such as the electric field created by a charged particle, we can calculate the change in potential using line integrals.

"Similarly, in conservative matrix fields, we define a potential over a discrete space and calculate it through matrix multiplications rather than using line integrals. Traveling between two points is equivalent to calculating the change in the potential and it involves a series of matrix multiplications."

In contrast with the conservative vector field, the so-called conservative matrix field is a new discovery. An important advantage of this structure is that it can generalize the formulas of each mathematical constant, generating infinitely many new formulas of the same kind.

"The way by which the conservative matrix field creates a formula is by traveling between two points (or actually, traveling from one point all the way to infinity inside its discrete space)," Elimelech and Kaminer said. "Finding non-trivial matrix fields that are also conservative is challenging."

As part of their study, Elimelech, Kaminer and their colleagues used large-scale distributed computing, which entails the use of multiple interconnected nodes working together to solve complex problems. This approach allowed them to discover new rational sequences that converge to fundamental constants (i.e., formulas for these constants).

"Each sequence represents a path hidden in the conservative matrix field," Elimelech and Kaminer explained. "From the variety of such paths, we reverse-engineered the conservative matrix field. Our algorithms were distributed using BOINC , an infrastructure for volunteer computing. We are grateful to the contribution by hundreds of users worldwide who donated computation time over the past two and a half years, making this discovery possible."

The recent work by the research team at the Technion demonstrates that mathematicians can benefit more broadly from the use of computational tools and algorithms to provide them with a "virtual lab." Such labs provide an opportunity to try ideas experimentally in a computer, resembling the real experiments available in physics and in other fields of science. Specifically, algorithms can carry out mathematical experiments providing formulas that can be used to formulate new mathematical hypotheses.

"Such hypotheses, or conjectures, are what drives mathematical research forward," Elimelech and Kaminer said. "The more examples supporting a hypothesis, the stronger it becomes, increasing the likelihood to be correct. Algorithms can also discover anomalies, pointing to phenomena that are the building-blocks for new hypotheses. Such discoveries would not be possible without large-scale mathematical experiments that use distributed computing."

Another interesting aspect of this recent study is that it demonstrates the advantages of building communities to tackle problems. In fact, the researchers published their code online from their project's early days and relied on contributions by a large network of volunteers.

"Our study shows that scientific research can be conducted without exclusive access to supercomputers, taking a substantial step toward the democratization of scientific research," Elimelech and Kaminer said. "We regularly post unproven hypotheses generated by our algorithms, challenging other math enthusiasts to try proving these hypotheses, which when validated are posted on our project website . This happened on several occasions so far. One of the community contributors, Wolfgang Berndt, got so involved that he is now part of our core team and a co-author on the paper."

The collaborative and open nature of this study allowed Elimelech, Kaminer and the rest of the team to establish new collaborations with other mathematicians worldwide. In addition, their work attracted the interest of some children and young people, showing them how algorithms and mathematics can be combined in fascinating ways.

In their next studies, the researchers plan to further develop the theory of conservative matrix fields. These matrix fields are a highly powerful tool for generating irrationality proofs for fundamental constants, which Elimelech, Kaminer and the team plan to continue experimenting with.

"Our current aim is to address questions regarding the irrationality of famous constants whose irrationality is unknown, sometimes remaining an open question for over a hundred years, like in the case of the Catalan constant ," Elimelech and Kaminer said.

"Another example is the Riemann zeta function, central in number theory , with its zeros at the heart of the Riemann hypothesis, which is perhaps the most important unsolved problem in pure mathematics. There are many open questions about the values of this function, including the irrationality of its values. Specifically, whether ζ(5) is irrational is an open question that attracts the efforts of great mathematicians."

The ultimate goal of this team of researchers is to successfully use their experimental mathematics approach to prove the irrationality of one of these constants. In the future, they also hope to systematically apply their approach to a broader range of problems in mathematics and physics. Their physics-inspired hands-on research style arises from the interdisciplinary nature of the team, which combines people specialized in CS, EE, math, and physics.

"Our Ramanujan Machine group can help other researchers create search algorithms for their important problems and then use distributed computing to search over large spaces that cannot be attempted otherwise," Elimelech and Kaminer added. "Each such algorithm, if successful, will help point to new phenomena and eventually new hypotheses in mathematics, helping to choose promising research directions. We are now considering pushing forward this strategy by setting up a virtual user facility for experimental mathematics," inspired by the long history and impact of user facilities for experimental physics.

Journal information: Proceedings of the National Academy of Sciences

© 2024 Science X Network
