The results section is split into two parts: collaboration activity and technical implementation.
4.1 Design Collaboration
The team members’ feedback on the collaboration activity was investigated at different stages of the project by means of two questionnaires. The results from the questionnaires are shown in Fig. 8. The numbers in the circles correspond to the classification of the collaborative design tasks: 1) discovering the problem, 2) defining the solution, 3) developing the analysis, and 4) delivering the result.
The figure illustrates the slight differences between the groups’ expectations and what they actually did. This can be interpreted as the members becoming more conservative when faced with the challenges of implementation. For instance, in discovering the problem, the members initially believed that they would be able to investigate the problem and develop the models individually (number 1 in Fig. 8, left). In reality, however, they expected some extra help from the group leader or the organization (number 1 in Fig. 8, right). The same pattern can be seen in almost all the tasks except delivering the result (number 4 in Fig. 8), where the teams believed that selected domain experts should deliver the results and interpret and explain them to the organization (in this study the teacher).
The changes between the two sets of points illustrated in Fig. 8 highlight the following:
- Assigning a leader to manage and control the MDO tasks is essential.
- Groups should be preselected by the organization (i.e. the teacher in this study or the manager in industry) based on the members’ expertise.
- There should be a person or authority who takes the final decision on when the results are satisfactory; otherwise the process can become very tedious, since the results can in many cases be improved with each iteration.
- The documentation of the results is moving more toward the open axis. This means that everyone in the group (the domain experts) should participate in interpreting the results and documenting them. This correlates with the results from the interviews.
- More collaboration between domain experts is needed to better understand and formulate the problem.
4.2 Improved Guideline
The conducted survey highlights challenges in the technical implementation. It also emphasizes that the CMDO implementation requires a structured way of speeding up the process by predicting implementation difficulties and proposing a strategy to avoid or circumvent them. A revised guideline is therefore introduced in this paper to eliminate or ease the challenges summarized in Table 1. The proposed guideline is presented in Fig. 9 and consists of four blocks:
- Object identification and system decomposition
- Model development and evaluation
- Model integration and assessment
- Optimization and post-processing
The first block helps formulate the problem, form the group, and decompose the product into multiple disciplines. The IE and CE are responsible for performing the tasks in block 1 efficiently. However, since system decomposition is a critical activity, collaboration with the DEs is required to avoid a poor decomposition. The DEs are mainly responsible for the tasks in the second block, which concern the development and evaluation of the disciplinary models within the design space. In the third block, the developed models are integrated and assessed with respect to the multidisciplinary requirements and conditions, and to the effectiveness of the design parameters with regard to the design objectives. The IE plays an important role in collaborating with the CE and the DEs to realize these tasks. Finally, in block 4, the OE is responsible for the optimization and for methods to compute the objectives more efficiently. The results are then further processed with the help of the CE.
Block 1: Object identification
This guideline is intended to facilitate the process of multidisciplinary design analysis at the conceptual level. The conceptual engineer should therefore define the objectives and requirements of the analysis. In order to facilitate the decision-making process, it is recommended that the objectives be prioritized. It is also possible to start with the simplest and most important objectives, but the other objectives should always be discussed with the interface expert and the domain experts who are responsible for creating the models. This helps develop the models in an efficient way, such that they can be seamlessly upgraded to higher fidelity to provide more information if needed later in the process. It also enables the DEs to accommodate the increasing degree of difficulty encountered by the optimization and data-handling tools.
Block 1: Role assignment
The interface expert, who has experience in both conceptual and detailed analysis, is responsible for forming the team. The preliminary selection of team members is based mainly on the interface expert’s knowledge of the expertise available in the organization and on the objectives of the analysis. The team may be modified after the system decomposition and data-flow mapping, when new requirements, and perhaps new expertise, are identified.
Block 1: System decomposition and mapping data flow
A complex product consists of many systems and subsystems. In order to set up an MDO framework, it is crucial to understand the relationships between the different systems and subsystems. Block 1 is therefore central to an MDO process: a minor error in this block can lead to costly and repetitive rework. In this study, system decomposition is done in two steps:
- Early system decomposition: where the interface expert and the conceptual engineer propose the most appropriate system decomposition with regard to the most crucial objectives and constraints for conceptual analysis. Based on this system decomposition, the domain experts are selected by the interface expert.
- Final system decomposition: where the interface expert invites the selected domain experts to interactively evaluate the proposed early system decomposition and to map the data flow between the disciplines (see the sketch below). The evaluation also helps more realistic objectives and constraints to be chosen, since it brings broader knowledge about the domains to the table. The result may lead to an upgrade of the early system decomposition, e.g. adding or removing disciplines and the associated domain experts, and to a corresponding modification of the team.
The overall flowchart of implementing block 1 is shown in Fig. 10.
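To make the data-flow mapping more concrete, the following minimal sketch (in Python) shows one possible way to record the inputs and outputs of each discipline and to check that every coupling variable that is consumed is also produced somewhere, and vice versa. The discipline names and variables are hypothetical and are not taken from the case study.

```python
# Hypothetical data-flow map: each discipline lists the variables it consumes
# (inputs) and the variables it produces (outputs).
disciplines = {
    "geometry":    {"inputs": {"length", "diameter"},    "outputs": {"volume", "wetted_area"}},
    "structures":  {"inputs": {"volume", "load_factor"}, "outputs": {"mass"}},
    "performance": {"inputs": {"mass", "wetted_area"},   "outputs": {"range"}},
}

# Global design variables supplied by the conceptual engineer (not produced by
# any discipline) and the top-level objectives extracted from the framework.
design_variables = {"length", "diameter", "load_factor"}
objectives = {"range", "mass"}

def check_data_flow(disciplines, design_variables, objectives):
    """Report couplings that are required but never produced, and outputs that
    are produced but never used (possible signs of a poor decomposition)."""
    produced = set().union(*(d["outputs"] for d in disciplines.values()))
    consumed = set().union(*(d["inputs"] for d in disciplines.values()))
    missing = consumed - produced - design_variables   # nobody provides these
    unused = produced - consumed - objectives          # nobody uses these
    return missing, unused

missing, unused = check_data_flow(disciplines, design_variables, objectives)
print("Unresolved inputs:", missing or "none")
print("Unused outputs:   ", unused or "none")
```

Such a check is only a convenience for the interface expert; the actual mapping and its evaluation remain a collaborative activity, as described above.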
Block 2: Model development
Mapping the data flow helps clarify the inputs and outputs of each disciplinary model. Although this helps initiate the model development process, in reality the models may need different sets of inputs depending on the physics represented inside them. These elements of model development should be decided by the domain experts, bearing in mind that they should not affect the predefined outputs in the data flow. If a model cannot provide the predefined outputs, the issue should be reported for further investigation by the team.
Block 2: Disciplinary feasibility analysis (DFA)
Each disciplinary model is evaluated using a method known as disciplinary feasibility analysis (DFA). DFA helps evaluate the models within their design spaces prior to integrating them into the MDO framework. In DFA, the ability of each model to fulfil the predefined requirements within its design space is evaluated. The flowchart for executing DFA is illustrated in Fig. 11.
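As an illustration of how a DFA could be automated, the sketch below samples the design space of a single, hypothetical disciplinary model and reports the fraction of sampled points that fulfil a predefined requirement. The model, bounds, and requirement values are assumptions made for this example only.

```python
import random

# Hypothetical disciplinary model: estimates a beam's tip deflection (m) from
# its length (m) and height (m). A real model would be supplied by the DE.
def beam_model(length, height):
    return 1e-4 * length**3 / height**2

# Design space (lower/upper bounds per parameter) and requirement, as they
# would be agreed in block 1. Values are illustrative.
bounds = {"length": (1.0, 3.0), "height": (0.05, 0.20)}
requirement = lambda deflection: deflection <= 0.05   # deflection below 5 cm

def disciplinary_feasibility(model, bounds, requirement, n_samples=1000, seed=1):
    """Sample the design space and report how often the requirement is met."""
    rng = random.Random(seed)
    feasible = 0
    for _ in range(n_samples):
        point = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        if requirement(model(**point)):
            feasible += 1
    return feasible / n_samples

print(f"Feasible fraction of the design space: "
      f"{disciplinary_feasibility(beam_model, bounds, requirement):.2%}")
```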
Block 3: Model integration
The disciplinary models approved by means of DFA then need to be integrated into the framework. Tool-integration software such as modeFRONTIER (Esteco, 2016), iSIGHT (iSIGHT, 2016) and OpenMDAO (OpenMDAO, 2016) facilitates the integration process by providing standard interfaces between the design tools in a centralized environment.
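As a minimal illustration of such an integration, the sketch below couples two toy disciplinary components in OpenMDAO. The component names, variables, and equations are invented for this example, and the code assumes a recent OpenMDAO release, whose API may differ from the version cited above.

```python
import openmdao.api as om

class Structures(om.ExplicitComponent):
    """Toy structures discipline: mass grows with wing area."""
    def setup(self):
        self.add_input("area", val=20.0)    # m^2
        self.add_output("mass", val=0.0)    # kg
        self.declare_partials("*", "*", method="fd")

    def compute(self, inputs, outputs):
        outputs["mass"] = 50.0 * inputs["area"]

class Performance(om.ExplicitComponent):
    """Toy performance discipline: range drops as mass increases."""
    def setup(self):
        self.add_input("mass", val=0.0)     # kg
        self.add_output("range", val=0.0)   # km
        self.declare_partials("*", "*", method="fd")

    def compute(self, inputs, outputs):
        outputs["range"] = 5000.0 - 2.0 * inputs["mass"]

prob = om.Problem()
# Promoting the shared variable names wires the data flow mapped in block 1.
prob.model.add_subsystem("structures", Structures(), promotes=["*"])
prob.model.add_subsystem("performance", Performance(), promotes=["*"])
prob.setup()
prob.set_val("area", 25.0)
prob.run_model()
print("mass:", prob.get_val("mass"), "range:", prob.get_val("range"))
```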
Block 3: Multidisciplinary feasibility analysis (MFA)
The performance of the individual models may change when they are coupled together because of unexpected integration effects, mainly due to the adjustment of the constraints and the design space. The performance of the coupled models is therefore evaluated through a process of multidisciplinary feasibility analysis (MFA). The analysis is made through the joint efforts of the domain experts and the interface expert. The interface expert has the main responsibility for performing the task, while the domain experts facilitate the integration process and track the data flow mapped in block 1. The flow diagram of MFA is shown in Fig. 12.
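The sketch below gives a schematic, hypothetical impression of an MFA step: the disciplinary models are chained according to the mapped data flow and the joint constraints are checked at a set of reused design points. The models, constraint limits, and design points are illustrative assumptions only.

```python
# Hypothetical MFA sketch with two chained toy disciplines.
def structures(area):                 # structures model: mass (kg) from area (m^2)
    return 50.0 * area

def performance(mass):                # performance model, coupled through mass
    return 5000.0 - 2.0 * mass        # range (km)

def mfa(design_points, min_range=4000.0, max_mass=1500.0):
    """Evaluate the coupled chain and return the multidisciplinary-feasible points."""
    feasible = []
    for area in design_points:
        mass = structures(area)
        rng = performance(mass)
        if mass <= max_mass and rng >= min_range:   # joint constraint check
            feasible.append((area, mass, rng))
    return feasible

design_points = [5.0, 10.0, 20.0, 30.0]   # e.g. reused from the DFA sampling
for area, mass, rng in mfa(design_points):
    print(f"area={area:5.1f} m^2  mass={mass:7.1f} kg  range={rng:7.1f} km")
```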
Block 3: Sensitivity analysis
Sensitivity analysis is mainly used to determine how different values of the design parameters impact particular objectives or constraints under a given set of assumptions (Saltelli et al., 2008). In this study, the sensitivity analysis is used before the optimization in order to estimate the influence of each design parameter on the objectives and constraints. It can also help determine the size and position of the design space more accurately. Since design points have already been created for the multidisciplinary feasibility analysis, the sensitivity analysis can be performed easily and at low cost by reusing the same points. The sensitivity analysis helps remove the less influential design parameters from the calculation of the objectives and constraints and consequently speeds up the optimization process. The result of the sensitivity analysis should be reported to the CE, who is responsible for making the final decision.
The result of the sensitivity analysis also highlights how effective the system decomposition is for calculating the design objectives and constraints, and the decomposition can hence be improved. However, changing the decomposition at this stage is a drastic action, because it leads to a series of repetitive and time-consuming tasks, e.g. remodelling and reintegration of the systems, see Fig. 13.
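As a simple, hypothetical illustration, the sketch below reuses a set of existing design points to rank the design parameters by the absolute correlation between their sampled values and the objective values. This correlation-based ranking is only one of many possible measures; a variance-based method such as those described by Saltelli et al. (2008) would be more rigorous. The data and parameter names are invented.

```python
import numpy as np

# Hypothetical design points reused from the feasibility analyses: the columns
# of X are the design parameters, y is the objective evaluated at each point.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))                           # length, height, thickness
y = 4.0 * X[:, 0] - 0.5 * X[:, 1] + 0.05 * rng.normal(size=200)    # toy objective

parameters = ["length", "height", "thickness"]

def correlation_sensitivity(X, y, names):
    """Rank parameters by the absolute correlation of their samples with the objective."""
    scores = {name: abs(np.corrcoef(X[:, i], y)[0, 1]) for i, name in enumerate(names)}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

for name, score in correlation_sensitivity(X, y, parameters):
    print(f"{name:>10}: |r| = {score:.2f}")
```

Parameters with a negligible score (here "thickness") are candidates for removal before the optimization, subject to the CE's final decision.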
Block 4: Optimization and efficient computing
Rapid concept evaluation is a central demand in conceptual analysis. MDO analysis, however, can be very time-consuming, especially when it involves high-fidelity, computationally intensive models such as CFD or FEM analyses.
To improve the optimization process time-wise, it is suggested that computationally cheap analyses be placed as early as possible in the framework. If these analyses reveal that the suggested solution is inadequate, there is no need to perform the computationally demanding analyses. These can therefore be omitted and a penalty added to the solution's objective function value to indicate to the optimization algorithm that the design is unsatisfactory.
Another popular way to speed up the analyses is to replace computationally demanding analyses with computationally efficient surrogate models (SMs), also known as meta-models (Myers, 2009). The demanding analysis needs to be performed a number of times to gather data points. An SM is then fitted to these data points and placed in the optimization framework instead of the original analysis. The simulation results created for the DFA (Block 2) could be used to create the SMs. The SMs are created as the first task in Block 4, see Fig. 9.
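The following sketch illustrates this strategy with invented placeholder analyses: a cheap screening check is evaluated first, and designs that fail it receive a penalized objective value, so the expensive analysis is never run for them.

```python
# Hypothetical "cheap analyses first" strategy.
PENALTY = 1.0e6   # large value signalling an unsatisfactory design to the optimizer

def cheap_screening(design):
    """Fast check, e.g. the concept must fit within a geometric envelope."""
    return design["length"] <= 3.0

def expensive_analysis(design):
    """Stand-in for a costly simulation (CFD/FEM); here just a toy formula."""
    return 100.0 * design["length"] / design["height"]

def objective(design):
    if not cheap_screening(design):
        return PENALTY                  # skip the expensive analysis entirely
    return expensive_analysis(design)   # only evaluated for promising designs

print(objective({"length": 2.0, "height": 0.1}))   # expensive path
print(objective({"length": 5.0, "height": 0.1}))   # penalized, expensive run skipped
```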
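A minimal sketch of the surrogate-modelling idea is given below, assuming a one-dimensional toy analysis and a polynomial SM fitted by least squares; in practice the SM type and the training data would be chosen by the OE.

```python
import numpy as np

# Hypothetical expensive analysis to be replaced by a surrogate model (SM).
def expensive_analysis(x):
    return np.sin(3.0 * x) + 0.5 * x

# 1) Gather data points by running the expensive analysis a limited number of
#    times (in practice these could be the stored feasibility-analysis results).
x_train = np.linspace(0.0, 2.0, 15)
y_train = expensive_analysis(x_train)

# 2) Fit a cheap surrogate, here a cubic polynomial via least squares.
coefficients = np.polyfit(x_train, y_train, deg=3)
surrogate = np.poly1d(coefficients)

# 3) Use the surrogate inside the optimization loop instead of the expensive model.
x_new = 1.37
print(f"surrogate: {surrogate(x_new):.3f}   expensive: {expensive_analysis(x_new):.3f}")
```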
Block 4: Post-processing
The generated optimization results should be further processed so that they can easily be used by the conceptual engineer. Typically, conceptual engineers require holistic information about the design that helps them choose the best concept with respect to the requirements. Detailed information about the concepts may therefore not be of great interest from a conceptual perspective. However, detailed information has proven to be vital and helpful at times for choosing the better concept. The detailed results should therefore be presented from a conceptual perspective, e.g. in terms of how they influence the product objectives and constraints. The optimization problems in this study are categorized as multi-objective problems. The results can therefore be presented as compromises, for example with a Pareto front, which is a popular method of illustrating such results. Figure 14 shows the data flow of block 4 (optimization and post-processing) of the guideline.
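As a hypothetical example of such post-processing, the sketch below extracts the non-dominated (Pareto-optimal) designs from a small set of invented two-objective results, where both objectives are to be minimized.

```python
# Invented optimization results with two objectives to be minimized, e.g. mass and cost.
designs = [
    {"name": "A", "mass": 120.0, "cost": 9.0},
    {"name": "B", "mass": 100.0, "cost": 12.0},
    {"name": "C", "mass": 130.0, "cost": 8.0},
    {"name": "D", "mass": 110.0, "cost": 13.0},   # dominated by B
]

def dominates(a, b):
    """True if design a is at least as good as b in both objectives and better in one."""
    return (a["mass"] <= b["mass"] and a["cost"] <= b["cost"]
            and (a["mass"] < b["mass"] or a["cost"] < b["cost"]))

pareto_front = [d for d in designs
                if not any(dominates(other, d) for other in designs if other is not d)]

for d in sorted(pareto_front, key=lambda d: d["mass"]):
    print(f"{d['name']}: mass={d['mass']}, cost={d['cost']}")
```

The resulting front shows the conceptual engineer the available compromises between the objectives without requiring the underlying detailed results.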
Table 1: Results from the interviews with the students regarding the tasks involved in a CMDO project.