
Self-Assessing Creative Problem Solving

category: education

Remember when I said creativity self-assessment is nonsense? We took a stab at designing a self-assessment test anyway. I still stand by my arguments: creativity is a socio-cultural construct, and thus it’s mainly up to someone else to deem your work creative. Yet that doesn’t mean you can’t keep track of how well or poorly you’re doing during the creative work, or rather, the creative process.

We designed a simple test with a set of typical Likert-scale questions (completely agree, agree, …) that gives an idea of how creatively you’re tackling a certain project. The result is a pilot study with 270 students spanning two academic years, in which a factor analysis revealed a few interesting findings. The paper was presented yesterday at ITiCSE 2022. You can read the full report here.

We call it the Creative Programming Problem Solving Test (CPPST). If you have ten minutes to spare, you can take a shortened version of the test yourself! Remember to replace “student” and “teacher” with “co-worker” if you’ve outgrown university. The result is context-sensitive, meaning every project will likely yield a different result. It could very well be that you’re feeling the vibe right now because the project is aligned with your interests, yet you lack the colleagues needed to discuss and improve much-needed ideas. You get the idea. Here’s a possible result:

A spider chart of CPPST results.
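If you’re wondering how such a chart comes about: each domain score is simply the average of its Likert answers, plotted on a radar chart. Here’s a minimal sketch in Python with matplotlib; the domain labels and scores are illustrative stand-ins, not the official CPPST scoring.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative domain labels and 1-5 Likert averages for one project;
# the actual CPPST domains and scoring may differ.
domains = ["Technical knowledge", "Communication", "Constraints",
           "Critical thinking", "Curiosity", "Creative state of mind",
           "Creative techniques"]
scores = [3.8, 2.4, 3.5, 3.9, 4.2, 3.1, 3.0]

# Spread the domains evenly around the circle and close the polygon
# by repeating the first point at the end.
angles = np.linspace(0, 2 * np.pi, len(domains), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(domains, fontsize=8)
ax.set_ylim(0, 5)  # Likert scale runs from 1 to 5
ax.set_title("CPPST domain scores for one project")
plt.tight_layout()
plt.show()
```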

What are these weird seven dimensions like “creative state of mind” and “curiosity”, you ask? Those are the creative problem solving domains we identified in an earlier paper that explored the role of creativity in software engineering. I summarized those findings here as well. Needless to say, we continued where we left off and selected eight questions for each domain. The plan was to see if we could come up with interesting questions that effectively gauge what we were trying to gauge: creative problem solving.

To validate that, and to ensure the psychometric quality of any self-assessment test, you need to do a factor analysis: hence this study. I won’t bore you with the technical details (a small sketch follows the list below), but we feel confident, as:

  1. Our baseline was previous work, which involved brainstorming sessions with 35 industry experts;
  2. The questions we selected were again validated with a subset of that group;
  3. We administered the test to both first-year and final-year computing students from different universities;
  4. Most of the factor loadings showed promising results.
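For the curious, a factor analysis of this kind can be sketched in a few lines of Python with the factor_analyzer package. The file name, factor count, and rotation below are assumptions for illustration; they don’t reproduce the paper’s exact procedure.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# responses.csv is a hypothetical file: one row per student, one
# column per question (7 domains x 8 questions = 56 columns), with
# Likert answers coded 1-5.
df = pd.read_csv("responses.csv")

# Extract three factors with an oblique rotation, since factors such
# as ability, mindset, and interaction are allowed to correlate.
fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
fa.fit(df)

# Inspect which questions load onto which factor; loadings below a
# cutoff (say, |0.4|) are typically dropped from the final scale.
loadings = pd.DataFrame(fa.loadings_, index=df.columns)
print(loadings.round(2))
```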

In fact, the analysis showed that we could group our questions into three bigger categories: ability (knowledge-oriented), mindset (are you willing to put in effort? are you curious?), and interaction (the social aspects of creativity). It also revealed two slightly alarming things. First, final-year students think they don’t need to ask for feedback because they feel too confident in their own abilities. They don’t see feedback as part of the creative process. Second, while students showed a “Growth Mindset”, in Carol Dweck’s words, when it came to computing knowledge, they exhibited very much a “Fixed Mindset” when it came to creativity. In other words, they think they can learn to program and grow in that, but they believe they’re either creative or they’re not. That, in turn, makes it difficult for us as educators to amplify their creative problem solving skills.

So what’s the point of all this? Well, since we now have a self-test that measures more than simply divergent thinking and is specifically geared towards computing education, we can start experimenting with interventions in courses and measure their effects before and after the intervention using the CPPST (combined with other instruments, correlating the results, and so on). In fact, we did exactly that in April and are writing up the results as I type this. “Why couldn’t you just use an existing test?” you ask. Because none of those aligned with our previous findings, namely that creativity in the context of programming is multi-dimensional. We’re not interested in creativity as a means to express yourself: we’re interested in it because it can help you fix a problem; that’s the software engineering part.
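To make that concrete, a pre/post comparison of one domain’s scores could look like the following sketch; the data and the choice of test are hypothetical, not taken from our study.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical paired scores for one CPPST domain (say, curiosity),
# measured for the same eight students before and after a course
# intervention. These numbers are made up for illustration.
pre = np.array([3.1, 2.8, 3.4, 3.0, 2.6, 3.3, 2.9, 3.2])
post = np.array([3.6, 3.1, 3.5, 3.4, 3.0, 3.7, 3.1, 3.8])

# A paired t-test asks whether the within-student change differs
# from zero; for small ordinal samples a Wilcoxon signed-rank test
# (scipy.stats.wilcoxon) would be the more cautious choice.
result = ttest_rel(post, pre)
print(f"mean change: {np.mean(post - pre):+.2f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```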

Take the CPPST test yourself and you’ll see what I mean. We hope that it might even help you as a programmer in industry identify creative shortcomings of your current project. Of course, I’m not naive: it’s a self-test, some questions might not always be applicable, and so forth. It comes with the same drawbacks as, say, Angela Duckworth’s Grit Scale, which of course heavily influenced this work. The additional problem with scales such as Grit is that the end result is only a single number. You can’t reduce creative problem solving to a single number by, for instance, calculating the average result of all domains: that would render the test completely useless.

In the end, the CPPST only measures creative engagement with a certain project. Whether or not the outcome of that engagement, the product, is deemed creative is for your end users, or in the case of students, the teaching staff, to decide.

tags: creativity, phd
