Introduction
The information complexity of architectural designs, compounded with the diversity of stakeholders, can significantly hinder the collaborative evaluation of alternatives and decision-making.1–3 Collaboration can suffer from ambiguity in design information, incomplete or missing details or terms, the unavailability or unsuitability of means for information sharing, sequential discourse, etc. Some of these issues can be addressed through tools encouraging the stakeholders’ involvement in, for example, design review meetings,4 online design modelling5 and planning.6 However, these existing tools are either highly specialised for designers or configured for specific purposes, overlooking most stakeholder needs. Therefore, there is a need to better understand and support design collaboration between design stakeholders and designers. We pose and seek answers to the following three main research questions in this study: (a) how can we enable collaborative design decision-making on alternative solutions starting as early as possible in the design process? (b) What are the critical system features to support data-informed evaluation of design alternatives by design stakeholders? And (c) what do interactive design-collaboration interfaces look like, and how can they be an integral part of design-share-feedback workflow ecosystems?
These are important questions for several pragmatic and theoretical reasons. An asynchronous and online system can allow stakeholders to investigate designs and help start an equal dialogue: each can share their opinion without the discussion being dominated by a few stakeholders, as usually observed in meetings.4 One of the bottlenecks in successful collaboration between designers and design stakeholders is the lack of collaboration support tools that seamlessly integrate with various design workflows and, at the same time, engage the stakeholders in decision-making. This bottleneck may increase the cost of collaboration and decrease the motivation for implementing new practices. Hence, a collaborative tool should communicate with various other systems without any unnecessary layers of tasks, to motivate more feedback cycles, increase stakeholders’ engagement in design and lead to higher levels of satisfaction with the design outcome.7
In this paper, we present the Design Alternatives Reporting Tool (D-ART).
Developing such a system faces significant challenges. First, the diversity of stakeholder interests and expertise levels demands different abstractions of complex design data. Second, such systems should accommodate interactive techniques that enable expressive design feedback. Third, compiling, structuring and presenting stakeholders’ rich and diverse input on design ideas requires computational methods that can detect, filter and link conversations to design ideas. The D-ART prototypes are mainly a means to explore the potential of visual analytics interfaces for data presentation, feedback input and feedback presentation. We conducted a formative evaluation of D-ART’s minimum viable version to validate and improve our understanding of data-informed collaborative decision-making.
Background
This research builds on three fields of knowledge: Design, Computer-Supported Collaborative Work (CSCW) and Visual Analytics (VA). In the intersection of these fields, we narrowed the focus to Design Analytics, Collaborative Visualisation and Participatory Design.
Collaborative visualisation
Collaborative Visualisation is an emerging field at the intersection of Computer-Supported Collaborative Work (CSCW) and Visual Analytics. Isenberg et al.12 define collaborative visualisation as: ‘…’
Participatory design
D-ART aims to increase the involvement of stakeholders in design. To understand the possibilities and methods for increasing involvement, we surveyed some of the work in the Participatory Design field. Participatory Design focuses on engaging design stakeholders, such as clients, users, consultants and construction teams, in the design process. Participatory design approaches, particularly in the early stages, may help better understand client needs and lead to fewer disagreements and design changes.21
Identifying the tool needs to support stakeholders’ involvement in design decision-making depends on understanding the stakeholders’ participation and engagement levels. Arnstein22 defines three categories of participation: ‘Non-Participation’, ‘Degree of Tokenism’ and ‘Degree of Citizen Power’. At Non-Participation, the objective is not to enable stakeholders to participate but to enable powerholders to influence stakeholders through manipulation. At the Degree of Tokenism, stakeholders can voice their opinion and inform the powerholders of their input. However, compared to the Degree of Citizen Power, this category of participation lacks the tools and practices for ensuring that the participants’ voices and opinions are heard and followed. Building on Arnstein’s work, Wilcox23 defined five levels of participation: the Degree of Tokenism has two levels of involvement, ‘informing’ and ‘consulting’, while the Degree of Citizen Power has three levels of participation, ‘deciding together’, ‘acting together’ and ‘supporting’.
The tool types for digital collaboration systems can be studied considering the stakeholders’ expected interaction with the content. For example, Jutraz and Zupancic24 categorise such tools considering the higher-level tasks performed at different levels of participation: tools for informing, tools for communication, tools for decision-making, tools for the collaboration process and tools for the empowering process and engagement. Whereas tools for empowering give control to the stakeholders in decision-making, tools for informing the public limit the feedback that can be received from the citizens.
Design analytics
The use of data leads to better decisions in science, engineering and design. While the designer’s experience and tacit knowledge are valuable in identifying aspects of design problems and solutions, they may remain limited when working with multi-criteria problems with complex and often conflicting objectives.25 Today’s computational tools allow for incorporating performance analysis in the early phases of design.26 Designers use parametric or non-parametric design tools with analysis tools such as EnergyPlus,27 Radiance28,29 and OpenStudio.30 This combination allows generating design performance data in the early design phases. Design data presented in a visual format enhances design cognition.31 Visual forms, combined with interactivity, can enable designers to explore more and switch their attention between different objectives.
Collaborative design systems
We compared three systems considering whether they can support asynchronous and distributed collaboration with design stakeholders. There are few such systems, as the adoption of collaboration systems in architecture, and in the AEC industry in particular, is still in its early stages.
Modelo32 is an online commercial Web application for design asset management. It supports sharing CAD files from tools such as Rhino,33 Revit,34 SketchUp35 and 3ds Max.36 Its presentation features include images, interactive 3D views and VR walkthroughs. It also enables feedback, for example, as 3D annotations and text comments. Although Modelo has a robust 3D presentation feature, it lacks data visualisation for presenting performance metrics and the feedback given by the stakeholders. In addition, Modelo does not support comparing multiple alternatives, which we believe is necessary as every alternative reveals different aspects of the problem at hand. DesignLink is a decision-support environment that allows professionals from diverse backgrounds to share their data, make sense of data shared by different collaborators and decide ‘trade-offs’ based on performance.37 It combines data from other design systems into customisable interfaces for presenting, for example, cost, physics and spatial composition information. Users can evaluate design alternatives through 2D visualisations in tabular form and images. However, DesignLink does not offer feedback functionality for commenting or discussions.
Konieva et al.5 developed an online urban design system to facilitate communication between design stakeholders using an interface connected to a CAD environment. The interface controls modify parameters in the script, allowing stakeholders to generate different combinations and view the results in 3D. In addition, users can hide and show different design layers to change the level of fidelity. The system also displays analytical data on the 3D view and in a side panel. However, it lacks both feedback and comparison controls.
A comparison of the asynchronous collaboration systems (Y: system offers the feature). The comparison considers three features:
Design data presentation: the presentation of design alternatives’ form and performance data and the stakeholders’ associated feedback.
Feedback input: the stakeholders’ input as feedback on the design components and performance. The input should support continuous discussions through exchanges of ideas, replies and answers.
Alternatives comparison: the comparison of design alternatives’ forms and how they perform, and the stakeholders’ feedback and reactions to the design alternatives.
Methodology
We surveyed the current research on interactive systems for data-informed, asynchronous and sustained collaboration between designers and design stakeholders. In parallel, we worked with our industry partners to learn from their workflows and identify needs for sharing design alternatives with the stakeholders. We applied the lessons learnt from these activities to develop prototypes through use-case-driven system development, combining interactive visualisations of design form and performance data in the interfaces for ‘…’
The design study method’s fundamental principle is the iterative development and evaluation of ‘…’
We developed five high-level system requirements in parallel with developing the D-ART prototypes guided by these requirements. We also used these requirements to develop system features and their formative evaluations in increments.
DRQ1. Present each design alternative with customisable visualisation of form and performance data and the structured stakeholders’ feedback.
DRQ2. Enable feedback as comments and questions on design form and performance data, and responding to other feedback.
DRQ3. Allow visual comparison of form, performance and feedback on design alternatives.
DRQ4. Be accessible online, decoupled from the design environments, workflows and systems.
DRQ5. Provide adaptable interfaces to accommodate a variety of stakeholders with different interests and backgrounds.
System description
D-ART adopts a client-server Web application architecture to enable communication between the design data server and D-ART’s interfaces, visualise data on browsers and collect feedback from the stakeholders (Figure 1). The high-level requirements were refined into use cases and data models during system development. D-ART receives three different data types: 1) design form, 2) performance metrics as categorical or numerical data and 3) design objectives. This generality enables reusing D-ART in multiple collaboration scenarios (DRQ4). The stakeholders’ input is associated with the alternatives and persistently stored for retrieval and report generation.
Figure 1: System overview. Design modelling is independent of D-ART and can be of any modelling or computing environment. D-ART can be linked through a custom adaptor or directly by pushing data into the data repository; for example, FlowUI10 initiates storing data in the backend.
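To make the three data types concrete, the following is a minimal sketch of how an adaptor might structure a payload before pushing it to the data repository. This is an illustration only: the class names, fields and example values are our assumptions, not D-ART’s actual schema.

```python
from dataclasses import dataclass, field
from typing import Union

@dataclass
class DesignForm:
    """Geometry payload curated by the designers (e.g. model and image references)."""
    alternative_id: str
    model_url: str                          # hypothetical reference to a 3D asset
    image_urls: list = field(default_factory=list)

@dataclass
class PerformanceMetric:
    """A categorical or numerical performance value for one alternative."""
    alternative_id: str
    name: str                               # e.g. "energy use"
    value: Union[float, str]                # numerical or categorical
    unit: str = ""

@dataclass
class DesignObjective:
    """A target against which a metric is evaluated."""
    metric_name: str
    target: float

# Example payload for one alternative (illustrative values)
form = DesignForm("alt-01", "https://example.org/models/alt-01.gltf")
metrics = [PerformanceMetric("alt-01", "energy use", 112.5, "kWh/m2")]
objectives = [DesignObjective("energy use", 100.0)]
```

Keeping the three types decoupled in this way mirrors the generality described above: any modelling environment that can emit these records could reuse the same backend.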
D-ART interface composition
D-ART consists of five main interfaces (Figure 2) (c.f.40 for details). Projects Browser shows projects with a set of design alternatives that designers curate and upload through D-ART. Their summary data is accessible on demand on an expandable panel (Figure 3). Grid View displays alternatives in a juxtaposed layout with controls to access the other views (DRQ1) (Figure 4). In this view, the stakeholders can choose the detail level, compare alternatives side by side (DRQ3) and access the Alternative Overview (Figure 5) or the Building Components Overview (Figure 6). The stakeholders can review and respond to others’ feedback in threads. Comments are captured with additional metadata showing the creator of the input and the date. The stakeholders’ input in the discourse is used in summary generation.
Figure 2: D-ART interfaces and the transitions between them.
Figure 3: Project Browser allows for browsing projects and displaying project information.
Figure 4: Grid View presents alternatives and their performance summaries on expanding and collapsing panels.
Figure 5: Alternative View enables inspection of an alternative’s performance and feedback.
Figure 6: A block-by-block analysis of performance in Building Components View.
Alternative View supports inspection of design data and stakeholders’ feedback on a particular alternative (DRQ1) (Figure 5). The data on a customisable tabular view shows the selected metrics and their targets set in the requirements (DRQ5). The designers choose the images and 3D models when curating the alternatives. The stakeholders can create data visualisations, such as bar charts, to study the relationship between different metrics (DRQ5). They can select any combination of metrics to display on graphs. Providing feedback is a dominant feature in D-ART to motivate discussion among the stakeholders.39,40 The design stakeholders can express their opinions, ask questions or request changes on the feedback interface (DRQ2, DC6).
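The threaded feedback with creator and date metadata described above could be modelled along the following lines. This is a hedged sketch: the class name, fields and comment kinds are our assumptions for illustration, not D-ART’s implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Comment:
    """One feedback item; replies nest to form a discussion thread."""
    author: str
    text: str
    kind: str = "opinion"  # assumed kinds: "opinion", "question", "change request"
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    resolved: bool = False
    replies: list = field(default_factory=list)

    def reply(self, author: str, text: str) -> "Comment":
        """Append a nested reply and return it so the exchange can continue."""
        child = Comment(author, text)
        self.replies.append(child)
        return child

# A stakeholder asks a question; the designer responds in the same thread.
thread = Comment("Stakeholder-A", "Why is the daylight score low here?", kind="question")
thread.reply("Designer", "The block is shaded by the tower to its south.")
thread.resolved = True
```

Storing the creator and timestamp with every item is what makes later summary generation and filtering over the discourse possible.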
Building Components View supports inspecting the parts of an alternative, which can be any cluster of geometry with a particular meaning, for example, residential or commercial blocks, as in a massing model (Figure 6) (DRQ2). Alternatives Comparison View enables comparing multiple alternatives considering their form and performance data and reviewing the feedback given to the selected alternatives (DRQ3) (Figure 7).
Figure 7: Comparison of alternatives and providing feedback on the compare view.
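A side-by-side comparison like the one in Alternatives Comparison View can be reduced to pivoting per-alternative metrics into rows, one per metric with a column per alternative. The sketch below illustrates this idea with assumed data and function names; it is not the actual view code.

```python
# Illustrative per-alternative performance data (values are made up).
alternatives = {
    "alt-01": {"energy use": 112.5, "daylight": 0.62},
    "alt-02": {"energy use": 96.0, "daylight": 0.55},
}

def compare(alts: dict, metric_names: list) -> list:
    """Build one row per selected metric, with a column per alternative."""
    rows = []
    for name in metric_names:
        row = {"metric": name}
        for alt_id, metrics in alts.items():
            row[alt_id] = metrics.get(name)  # None if an alternative lacks the metric
        rows.append(row)
    return rows

table = compare(alternatives, ["energy use", "daylight"])
```

Each resulting row can then be rendered as a line of the tabular comparison, juxtaposing the alternatives for one metric at a time.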
Evaluation
To validate the design of D-ART, we performed an expert review as a formative evaluation. We questioned how domain experts might receive our approach to the collaborative assessment of design alternatives. Using D-ART’s minimum viable version, the domain experts were asked to comment on the system’s objectives and its potential in practice.
We invited seven experts (five female and two male) with a minimum of five years’ experience in the AEC industry. They played different roles as architects, project managers, construction managers, interior designers, BIM managers, computational designers or researchers. Four participants had never used data visualisation systems in their projects; the rest had used tools such as Tableau and Power BI. All experts held in-person meetings to communicate design information with stakeholders. One expert noted sharing annotated PDF reports for receiving feedback.
We used the first session as a pilot to refine the review process factors, for example, the task details and their completion time, the interview questions and the setup for hosting D-ART. The evaluation took place in online meetings, where each participant individually joined an evaluation session. Each session took between 1.5 and 2 h. Following a brief introduction of D-ART, the experts were asked to review alternatives, compose charts and respond to the other stakeholders’ feedback. This was followed by semi-structured interviews in which we asked the experts for their opinion on our approach to engaging the stakeholders in providing feedback considering design data on alternatives.
Results
We structured the results by the high-level themes that emerged during the coding of the findings.
Data presentation
Overall, the experts described the data presentation features as ‘…’
Feedback input and presentation
The experts agreed on improvements to the feedback features, for example, supporting annotations on images, 3D models and charts. Expert-6 recommended support for image or file attachments to comments: while text comments would cover most clients’ concerns, adding annotations and image sketches would support other edge cases. Expert-3 suggested using voice recording for commenting to make the stakeholders feel closer to the designers and motivate them to express their opinions independently of typing restrictions. Expert-5 suggested using reactions such as ‘…’
Expert-1 noted that ‘…’
Alternatives comparison
The experts described the comparison features as ‘…’
Workflows integration
Expert-1 believed that D-ART’s real-world value would increase as it integrates with more design workflows. Expert-6 found value in D-ART’s current support for one typical workflow and commented, ‘…’
D-ART in other domains
The experts agreed that D-ART could help evaluate ‘alternatives’ in other domains, such as Web design, engineering design and product design. Expert-1 questioned, ‘…’
Discussion
A summary of the issues in categories identified through the expert review.
The confusion about the meanings of performance metrics (IS1) may be due to unfamiliarity or unclear names. The complexity of design data would benefit from permanent descriptors that create a common ground for discussions. It is also crucial for the design team to onboard the stakeholders to a project by providing descriptions of the terms expected in any collaborative decision-making. D-ART can present on-demand definitions of the metrics.
IS2 concerns the limits of having only one type of chart. While bar charts are simple and frequently used, they are limited when working with multiple data units. Significant differences between the values make the comparison difficult and ‘…’
Figure 8: (Left) The updated comment design allows users to expand, hide and filter comments. (Right) Added option to visualise performance data on radar charts.
Expert-4 mentioned that using one target can be insufficient to evaluate a metric (IS3) and recommended using performance classifications for comparing alternatives. However, the creation of such classifications is not trivial; for example, the values cannot be universal, as the same metric may have different efficiency values depending on the climate.
The experts agreed on annotations as localised feedback (IS4). However, designing a unified annotation feature for multiple types of visualisations has technical challenges. One solution would be data-agnostic, screenshot-based annotations attached to a comment. While this could work with any presentation, it lacks the benefits of capturing data-custom annotations, where the stakeholders can provide feedback on, for example, a 3D component by directly attaching a comment to it. As another solution, annotations can be made through code references in the discussions. A data-agnostic annotation would integrate with any data type that D-ART might grow to support. However, data-custom annotations would refine the input and support the generation of more thorough reports for reviewing feedback; the latter would also enable the recreation of feedback visualisations. IS5 and IS6 focus on enhancements to the feedback features, adapting to the stakeholders’ level of interest and experience. Advanced stakeholders might be interested in attaching reference images or sketches. Enabling quick feedback, such as ‘Like’ and ‘Dislike’ ratings, can help less experienced stakeholders, but it can also obscure why and how they like or dislike an alternative.
The comments on Feedback Presentation were minimal, possibly due to the novelty of the problem. IS7 sheds some light on its importance: the original feedback input area was taking space from the display. We reimplemented it to expand or collapse (Figure 8, left). We also added filtering features to show only a subset of comment types and resolved or unresolved comments. Expert-4 recommended having upper and lower target limits (IS3). While domain experts may know what, for example, good ‘energy use’ is, stakeholders may not. Hence, Expert-4’s suggestion of using performance labels can help compare alternatives without interpreting their implications. However, the classification of such labels is not trivial. The first intuition is that the designers could create them, which assumes that the labels can be universal, which is usually not the case. For example, the same metric may have different efficiency criteria depending on the climate or location. Nevertheless, we find the utility of these labels high for decision-making.
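The added filtering, by comment type and resolution state, amounts to a simple predicate over the stored comments. The sketch below illustrates the idea; the comment shape, field names and kinds are assumptions for illustration, not D-ART’s data model.

```python
# Illustrative comment records with the attributes the filters act on.
comments = [
    {"author": "Expert-2", "kind": "question", "resolved": False},
    {"author": "Expert-4", "kind": "opinion", "resolved": True},
    {"author": "Expert-6", "kind": "question", "resolved": True},
]

def filter_comments(items, kinds=None, resolved=None):
    """Keep comments matching the selected kinds and resolution state.

    A None filter means "do not filter on this attribute".
    """
    out = []
    for c in items:
        if kinds is not None and c["kind"] not in kinds:
            continue
        if resolved is not None and c["resolved"] != resolved:
            continue
        out.append(c)
    return out

# Show only the open questions, hiding resolved and opinion comments.
open_questions = filter_comments(comments, kinds={"question"}, resolved=False)
```

Treating each filter as optional lets the interface combine them freely, for example, showing all unresolved comments regardless of type.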
IS9 highlights a bottleneck in the current integration of D-ART with CAD workflows: the required modifications to the CAD models increase the integration friction. We see this as a technical challenge, not one that directly questions the design analytics approach. There could be various solutions to this problem; for example, the models could be saved at a lower resolution and decomposed into a set of smaller models that are accessed on demand. Another potential solution is an intermediate interface to control the data density and access frequency. The interface could mediate sifting the models before they are shared. Developing such an interface for all integration workflows would be challenging, as each CAD tool has its own specifications. However, such an interface would remain necessary for reducing integration friction.
Conclusion and future work
Collaboration is a core activity in design and can happen between diverse stakeholders. Successful collaboration requires a system of tools that integrates with the existing design ecosystem. D-ART aims to support collaboration at different levels of participation by empowering design stakeholders with a degree of citizen power and helping them reach an agreement. D-ART integrates tools for design data presentation, communication as feedback input and comparing alternatives. The main challenge was integrating these functions in one tool so that they work together rather than as independent features; D-ART can be seen as an instance of such integration. The expert review validates our approach and provides insights for improving the visualisation and interface design. Although we have implemented some of the feedback we received on D-ART’s current version, such as filtering capabilities and radar charts, the following features are planned as future work.
Annotation
The experts highlighted a need for design data annotation to localise feedback in the relevant context, that is, the visualisations. The feature addressing this need should enable annotating diverse data visualisations such as images, 2D charts, 3D geometry, tables, etc. The added complexity of this feature may create usability challenges and cognitive overload for non-expert users. In addition, managing the scalability of annotations can be another challenge.
Feedback presentation and report exportation
While D-ART supports basic filtering, there is still room for improvement. Complex feedback queries across alternatives and the generation of interactive online reports are examples of future enhancements.
D-ART in other workflows
D-ART works independently from design environments. However, we validated its integration with only one design workflow to test its generic design. More work is needed to test its integration with other design workflows.
D-ART in other domains
The experts agreed that the D-ART design is reusable in different fields of design. As an experiment, we uploaded real estate alternatives to D-ART in a scenario of shopping for a house in Vancouver. The result encouraged testing D-ART in evaluating alternatives other than designs, for example, consumer products, cars or even insurance policies.
As a formative evaluation, the expert review helped us assess D-ART’s plausible use in design data sharing and collaboration among diverse stakeholders. Following this evaluation, a critical question is how D-ART, and our approach, would be received in the real world, where the stakeholders work on real projects. Expert-5’s comment demonstrates concerns about the cost-benefit trade-off of using D-ART. The expert noted that ‘…’
We envision the summative design study being conducted by real users in our industry partner’s firm. We can observe the potential of D-ART and the users’ subjective and objective responses through observations and interviews. Outside the lab, we can also answer whether and how our approach can change the practice. A summative evaluation will require systematic preparation and execution, which we plan as future work. As one of the experts said, ‘…’
