An Architecture for Exploring Large Design Spaces
This paper describes an architecture for exploring very large design spaces.
Design can be formulated as search in a problem space, and the design candidates are usually evaluated using multiple criteria.
Commonly used search strategies such as hill climbing are not always applicable. When design candidates are generated by changes of components and configurations, which may be unordered, there may be no adjacency relationship to exploit in the space of design candidates. Moreover, hill climbing, by requiring that a single evaluation function be defined, precludes explicit, local reasoning about tradeoffs among multiple performance criteria.
An interactive decision-support architecture for design is presented. The architecture consists of:
- Good-Design Seeker: generates design candidates by selecting components from the library and composing them, according to configuration templates (generic devices), to satisfy the given constraints.
- Dominance Filter: based on a lossless filtering criterion (i.e., one for which there are guarantees that there is no danger of excluding good designs), selects a relatively small number of designs that are worth examining further.
- Viewer: a user interface that visualizes the set of surviving designs by way of interactive, connected tradeoff diagrams, which enable the user to zoom in on subsets with desirable tradeoff characteristics and so reduce the number of designs for further investigation to a manageable few.
- Good-Design Seeker: Exhaustive exploration and constraint checking.
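To make "exhaustive exploration and constraint checking" concrete, here is a minimal Python sketch (my own illustration, not the paper's implementation; the component library, template slots, and constraint are hypothetical stand-ins):

    from itertools import product

    # Hypothetical component library, keyed by the slots of one configuration
    # template (generic device). Each slot lists the alternative components.
    LIBRARY = {
        "engine":  [{"name": "ICE-small", "power_kw": 45, "mass_kg": 90},
                    {"name": "ICE-large", "power_kw": 90, "mass_kg": 160}],
        "motor":   [{"name": "PM-30", "power_kw": 30, "mass_kg": 35},
                    {"name": "PM-60", "power_kw": 60, "mass_kg": 60}],
        "battery": [{"name": "NiMH-5", "energy_kwh": 5, "mass_kg": 120},
                    {"name": "NiMH-10", "energy_kwh": 10, "mass_kg": 230}],
    }

    def satisfies_constraints(design):
        """Example constraint: enough total power and not too heavy."""
        total_power = design["engine"]["power_kw"] + design["motor"]["power_kw"]
        total_mass = sum(part["mass_kg"] for part in design.values())
        return total_power >= 90 and total_mass <= 400

    def seek_good_designs(library):
        """Exhaustively compose one component per slot and keep the
        candidates that satisfy the constraints."""
        slots = list(library)
        for combo in product(*(library[s] for s in slots)):
            design = dict(zip(slots, combo))
            if satisfies_constraints(design):
                yield design

    if __name__ == "__main__":
        for d in seek_good_designs(LIBRARY):
            print({slot: part["name"] for slot, part in d.items()})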
- Dominance Filtering:
- Definition: design candidate A dominates candidate B if A is superior or equal to B with respect to every evaluation criterion and distinctly superior with respect to at least one. Dominated designs need not be considered further; they may be filtered out. Among the designs that survive the dominance-filtering process, none is clearly superior to another.
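The dominance test is straightforward to implement. A minimal sketch (assuming each candidate is reduced to a tuple of criterion scores where larger is better; the brute-force pairwise filter is only meant to make the definition concrete):

    def dominates(a, b):
        """True if design a is at least as good as b on every criterion
        and strictly better on at least one (larger scores are better)."""
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    def dominance_filter(candidates):
        """Keep only the non-dominated candidates (the surviving set)."""
        survivors = []
        for i, a in enumerate(candidates):
            if not any(dominates(b, a) for j, b in enumerate(candidates) if j != i):
                survivors.append(a)
        return survivors

    # Example scores (criterion 1, criterion 2): (30, 8) is dominated by (35, 9).
    print(dominance_filter([(35, 9), (30, 8), (28, 12)]))  # -> [(35, 9), (28, 12)]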
- The Size of the Surviving Set: increases as the number of evaluation criteria increases; however, the fraction of survivors may become smaller as the size of the design space increases.
- Independence of Criteria: Dominance checking does not require the evaluation criteria to be independent.
- Accuracy of Models: The dominance filter can be adapted to take into account suspected model inaccuracy.
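These notes do not say how that adaptation works; one plausible variant (an assumption on my part, not necessarily the authors' method) is to declare dominance only when the margin of superiority exceeds the suspected model error:

    def dominates_with_margin(a, b, eps):
        """Conservative dominance under suspected model error: a is declared
        to dominate b only if it beats b by more than eps on every criterion,
        so borderline candidates are retained rather than filtered out."""
        return all(x > y + eps for x, y in zip(a, b))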
- Distributed Computing: A client-server architecture is employed that uses idle workstation time to allow the criticism of candidate designs to proceed in parallel.
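The client-server protocol is not described in these notes; as a rough single-machine analogy (the critic functions and design encoding below are invented for illustration), the evaluation of candidates can be spread over a pool of worker processes:

    from concurrent.futures import ProcessPoolExecutor

    # Hypothetical critics: each scores one aspect of a candidate design
    # (larger is better). Stand-ins for the paper's evaluation models.
    def fuel_economy(design):
        return -design["mass_kg"] / 10.0

    def acceleration(design):
        return design["power_kw"] / design["mass_kg"]

    CRITICS = (fuel_economy, acceleration)

    def evaluate(design):
        """Score one candidate on every criterion."""
        return tuple(critic(design) for critic in CRITICS)

    def evaluate_all(designs, workers=8):
        """Evaluate candidates in parallel. A local process pool stands in
        for the paper's client-server network of idle workstations."""
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(evaluate, designs, chunksize=64))

    if __name__ == "__main__":
        designs = [{"mass_kg": 300 + 10 * i, "power_kw": 80 + i} for i in range(100)]
        print(evaluate_all(designs)[:3])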
- The Exploration Interface: presents the designer with a set of tradeoff diagrams.
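As a minimal sketch of one such diagram (assuming matplotlib and a list of survivors scored on two criteria; the real Viewer links several such plots interactively):

    import matplotlib.pyplot as plt

    def tradeoff_plot(survivors, x_label, y_label):
        """Scatter one criterion against another for the surviving designs."""
        xs, ys = zip(*survivors)
        plt.scatter(xs, ys)
        plt.xlabel(x_label)
        plt.ylabel(y_label)
        plt.title(f"Tradeoff: {x_label} vs. {y_label}")
        plt.show()

    tradeoff_plot([(35, 9), (28, 12), (31, 10)], "criterion 1", "criterion 2")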
- The Domain: hybrid electric vehicles, evaluated on four design criteria.
- Efficiency of Filtering: four experiments with design-space sizes from 1.8k to 1,800k candidates; the survivor ratio fell from 3.9% to 0.06% (at the largest size, 0.06% of 1,800k is on the order of a thousand surviving designs).
- Scale of Computation: 207 workstations were available, with anywhere from 0 to 159 running at any one time. The experiment used 164 hr 41 min of wall-clock time, 14 hr 54 min of CPU time on the server for generation and evaluation, and approximately 4.5 hr of wall-clock time for dominance filtering, performed as serial post-processing.
- Exploration Interface: greatly helps to ensure model accuracy.
This paper presented a software architecture for large-scale exploration of design spaces. The architecture comprises a Good-Design Seeker, filters, and a visualization environment.
Filtering based on dominance is practical to implement and promises to reduce the number of alternatives to be considered from vast to manageable.
The visualization environment presents the user with the results of multi-criteria evaluation without forcing the evaluation down to a single criterion. Tradeoffs are displayed, and the user is able to bring human evaluation and judgement to bear.
Distributing the criticism of designs over a network of workstations is one way to make the demanding computations practical.
The central task in exploring the design space is multi-criteria evaluation. Dominance filtering is proposed in this paper as the main technique, with visualization as its complement.
A commonly used multi-criteria evaluation method is to assign a weight to each criterion and sum the weighted criteria into a single score. Compared to this, dominance filtering is more objective but more complex. Its biggest advantage may be that it is guaranteed not to exclude any potentially important design, which could easily be overlooked because of the subjectivity introduced in the weight-assignment process.
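A tiny, invented example of the difference, reusing the dominance_filter sketch above: a weighted sum commits to a single winner for each choice of weights, while dominance filtering keeps every candidate that could be best under some preference.

    def weighted_sum_best(candidates, weights):
        """Single-score shortcut: collapse the criteria with fixed weights."""
        return max(candidates, key=lambda c: sum(w * x for w, x in zip(weights, c)))

    candidates = [(35, 9), (28, 12), (30, 8)]          # (criterion 1, criterion 2)
    print(weighted_sum_best(candidates, (1.0, 0.5)))   # -> (35, 9)
    print(weighted_sum_best(candidates, (0.2, 1.0)))   # -> (28, 12)
    print(dominance_filter(candidates))                # -> [(35, 9), (28, 12)]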