The usefulness of a digital design platform for supporting design uncertainty identification and mitigation was evaluated through an empirical study based on a Solomon four-group experimental design. Its usefulness was evaluated according to two hypotheses: H1, that it is useful for uncertainty identification and modelling, and H2, that it is useful for transforming uncertainties into certainties.
The results suggest that the design platform enabled discussions about activity risk and supported the proposal of measures to deal with uncertainties without compromising product quality while maintaining an affordable validation schedule. For example, the platform enables the identification of low-risk test activities that can be removed or replaced with an inexpensive alternative.
However, strong arguments are required to remove test activities, and the design platform alone is not enough to make this type of decision. Instead, it can prompt designers to open a conversation about this possibility and to gather the necessary data or acquire the expertise to make such a decision.
It was observed that the number of manufacturing and test concerns identified (uncertainties) and the number of proposed measures to deal with those uncertainties increased with the use of the design platform.
As the number of identified concerns increased with the implementation of the constraints replacement process, it can be concluded that this process is useful for identifying UU. Moreover, certain concerns were identified only through the constraints replacement method, such as “Max. manufacturing volume”, “Max. manufacturing speed to avoid overheating”, “Max. aspect ratio”, or “Fit in the test chamber”.
The relevance of identifying manufacturing- and test-related UU is highlighted, as non-functional requirements, such as manufacturability and testability, are drivers of the design and decision-making process (Shankar et al., 2020).
However, a drawback of this technique is that UU are discovered only if their traditional-technology counterpart is well modelled. In this sense, the usefulness of this process is related to the model's depth, breadth, and fidelity (Haskins et al., 2015). The concerns “Min. feature size to enable removal of powder” and “Part geometry to enable removal of support structures”, for example, are well-known AM concerns; however, they do not have a direct machining counterpart and were not mentioned by any group using the design platform.
These results resonate well with the work by Roth et al. (2010), which states that the performance of past designs does not address all the sources of design uncertainty related to the introduction of new technologies. The main problem in this case is that not every constraint of a new technology can be found in a traditional-technology counterpart. However, the design platform and the constraints replacement method itself serve as conversation openers, initiating a discussion about technology uncertainties and promoting knowledge exchange among practitioners from different disciplines. As proposed by Browning and Ramasesh (2015), an organization that is actively looking for UU is more likely to identify them and turn them into KU.
It was also observed during the experiments that many uncovered UU were not really UU, but aspects of the design, manufacturing process, or testing that escaped the minds of the participants at the moment of the experiment. For instance, the concern “Max. manufacturing volume” is one of the more evident AM limitations; however, this concern was only mentioned by participants using the design platform and constraints replacement method. This observation is in line with the literature (Ramasesh and Browning, 2014): in real design scenarios, UU are sometimes things that escaped the minds of practitioners or things that no one has bothered to find out. Moreover, Ramasesh and Browning propose two types of UU, the knowable UU and the unknowable UU. The constraints replacement technique addresses the knowable UU.
Regarding UU, there is a connection between the practitioners' expertise and the number of UU they are able to identify, with or without the design platform.
In Group C from Iteration 1, one of the participants had two years of experience working exclusively with AM and design for AM, which enriched the discussion in this group. This group identified 17 manufacturing and manufacturing-process concerns through the discussions enabled by the constraints replacement process.
In contrast, the participants of Group A from Iteration 1 were less experienced in AM. Through the constraints replacement process, this group identified eight manufacturing and manufacturing-parameter concerns, seven of which were almost identical to their machining counterparts.
Workshop participants had different experience levels, and uncertainties that are unknown to one practitioner might be known to another. The design platform made the knowledge of experienced practitioners explicit, facilitating knowledge transfer to the less experienced.
The design platform and constraints replacement procedure support inexperienced designers, who otherwise would not be able to identify as many uncertainties. Furthermore, for inexperienced practitioners the design platform also serves as a training tool, as it was recognized as an efficient way to acquire and store information.
Furthermore, the platform's customizable depth, breadth, and fidelity allow for the adaptability and flexibility required when changes such as new technologies, new organizational processes, or a change of market focus occur (Subrahmanian et al., 2003).
The results from the experiments suggest that while the platform supports uncertainty identification and modelling, it also supports the process of proposing affordable measures to deal with uncertainties. While several measures to deal with uncertainty are experience dependent, the link between the FM and the PERT diagram highlights design and validation schedule changes that could reduce uncertainties or mitigate their effects in a cost- and time-efficient manner. For instance, only participants who did not use the platform proposed “Perform serial testing” as a measure to reduce uncertainties; however, this measure is time and resource intensive and can render a project unaffordable (Brice, 2011). Participants using the design platform could simulate and observe the effect that these types of measures would have on overall schedule affordability. Moreover, only participants using the platform proposed “Focus on process repeatability” to reduce the need for test activities performed on the product.
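The schedule effect described above can be illustrated with a minimal PERT-style sketch. This is not the platform's actual model; the activity names and durations below are hypothetical, chosen only to show how a three-point estimate and a longest-path (critical-path) computation expose the cost of serializing test activities versus running them in parallel.

```python
# Minimal PERT-style schedule sketch (hypothetical activities and
# durations, not the platform's actual model). Each activity gets the
# classic three-point expected duration, and the project span is the
# longest path through the precedence graph.

def expected(optimistic, most_likely, pessimistic):
    """PERT three-point estimate of an activity's duration."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def project_duration(activities, deps):
    """Critical-path duration via a memoized forward pass."""
    finish = {}

    def eft(a):  # earliest finish time of activity a
        if a not in finish:
            start = max((eft(d) for d in deps.get(a, [])), default=0)
            finish[a] = start + activities[a]
        return finish[a]

    return max(eft(a) for a in activities)

# Hypothetical validation schedule: two tests follow the printed part.
acts = {
    "print_part": expected(2, 3, 6),       # days
    "fatigue_test": expected(4, 5, 9),
    "geometry_check": expected(1, 2, 4),
}
serial = {                                  # "Perform serial testing"
    "fatigue_test": ["print_part"],
    "geometry_check": ["fatigue_test"],
}
parallel = {                                # tests overlap after printing
    "fatigue_test": ["print_part"],
    "geometry_check": ["print_part"],
}

print(project_duration(acts, serial))    # 11.0 days
print(project_duration(acts, parallel))  # ~8.83 days
```

Even in this toy example, serializing the two tests lengthens the span by the full duration of the shorter test, which is the kind of trade-off the platform's PERT view lets participants observe before committing to a measure.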
However, the platform is not intended to replace other design and uncertainty identification methods; rather, it is intended to complement them. For example, practitioners stated that the platform would work better if it were connected to a CAD (physical) product representation that enables the analysis of a product's geometry. These statements are in line with the work by authors such as McKoy et al. (2001), who maintain that graphical product representations are better than textual representations for engineering design idea-generation processes.
Moreover, other techniques for UU identification, such as interviews, observations, workshops, scenarios, or prototypes (Sutcliffe and Sawyer, 2013), should not be left aside. They should be performed before or in parallel with the implementation of the design platform and complement its information. Designers should strive for a design culture of communication and alertness that fosters the purposeful and systematic identification of UU (Ramasesh and Browning, 2014; Browning and Ramasesh, 2015).
The proposed design platform and other UU identification methods are a powerful complement to the multidisciplinary product development modelling strategies and performance simulations (Mavrik, 1999; Struck and Hensen, 2007; Ogaji et al., 2007; Goldberg et al., 2018) currently implemented for uncertainty assessment and reduction during conceptual stages.
In general, with the implementation of the design platform, the number of identified uncertainties and of proposed measures to deal with those uncertainties increased. However, the sample size of this study (its most relevant limitation) is too small, and the obtained results cannot be generalized (Yin, 2003). Moreover, due to time and resource constraints, recruiting a randomized pool of experienced industrial practitioners able to participate in a one-and-a-half-hour experiment is very unlikely. In this situation, a non-random group of practitioners (from the available research project) and a non-random assignment to groups were necessary; this choice undermines the strength of the experiment as well.
Another limitation of this study is the artificial design setting. Practitioners from different companies were paired according to their availability and their previous experience with design and AM. In addition, one of the authors of this study was present during the experiments guiding the design analysis activities, which has likely impacted the results.
Moreover, experiment participants were unfamiliar with the type of product proposed for the case studies, as well as with each other. Team and case-study familiarity is possibly another confounding variable in the study; research shows that performance increases when team members are familiar with and trust each other (Arrow et al., 2000; Hargadon and Bechky, 2006).
Adding to the artificial nature of the design setting, the design intervals were restricted to 20 minutes each. On the one hand, some sessions needed to be cut short, and it is possible that additional uncertainties and related mitigation measures would otherwise have been mentioned. On the other hand, some groups needed to “be forced” to talk during the 20 minutes, possibly due to the discomfort generated by a lack of perceived expertise on the unfamiliar case studies and a lack of familiarity with the design partner. Another consequence of the time-constrained design setup is that the pre-test seems to have had a negative effect on the number of concerns identified and the number of proposed measures to address uncertainties during the test. One possible explanation the authors find for this phenomenon is that the pre-test and test case studies were designed to foster the identification of similar concerns, to enable case-study comparison. However, as both pre-test and test were performed within the same hour and shared similar uncertainties, it is possible that some uncertainties mentioned in the pre-test were not mentioned in the test, in an unconscious attempt to avoid redundancy.
It is evident that the artificial design setting affected the study results; however, the literature shows that a large proportion of the research experiments performed for testing new tools and design methods are conducted in artificial settings, and their results are still useful for industry and academia (Ellis and Dix, 2006).
Although this study does not present statistically significant results, it was useful as an enabler of discussion about design support platforms and a culture of uncertainty seeking. Moreover, the results of this study were used to obtain further funding that will enable the development of an improved version of the digital design platform, to be implemented in real industrial settings for further research purposes.