Reusing Models for Requirements Engineering
MBRE01 Review1
Significance: How important is the work reported? Does it attack an important/difficult problem or a peripheral/simple one? Does the approach offer an advance in the state of the art?

I believe this paper is an ambitious undertaking, and I mean that in a very good way. There is definitely a lack of data early in the software life-cycle, and there are huge financial consequences for making poor decisions at that stage. Applying knowledge farming early in the life-cycle makes it possible to create a baseline model that can always be refined. The concept of treatment learning is an important mechanism for automating decision-making. ML approaches tend to either classify data or mimic a polynomial (or higher-order) mathematical model. Treatment learning focuses on the delta aspect of the curve: what happens if I move from decision point A to decision point B. Sometimes the journey is more important than the destination.
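To make the "delta" idea concrete, here is a minimal sketch of how a candidate treatment might be scored against a baseline. The toy project records, attribute names, and the simple lift measure are illustrative assumptions on my part, not the scoring actually used by the paper's treatment learner.

```python
# Minimal sketch of the treatment-learning "delta" idea: score a candidate
# treatment (a constraint on one decision attribute) by how much it shifts
# the fraction of good outcomes relative to the untreated baseline.
# The project records and attributes below are hypothetical.

projects = [
    {"reuse": "high", "reviews": "yes", "outcome": "good"},
    {"reuse": "high", "reviews": "no",  "outcome": "good"},
    {"reuse": "high", "reviews": "no",  "outcome": "poor"},
    {"reuse": "low",  "reviews": "yes", "outcome": "poor"},
    {"reuse": "low",  "reviews": "no",  "outcome": "poor"},
]

def good_fraction(rows):
    """Fraction of projects in 'rows' that ended with a good outcome."""
    return sum(r["outcome"] == "good" for r in rows) / len(rows) if rows else 0.0

baseline = good_fraction(projects)                    # decision point A
for attr, value in [("reuse", "high"), ("reviews", "yes")]:
    treated = [r for r in projects if r[attr] == value]
    lift = good_fraction(treated) - baseline          # the delta of moving A -> B
    print(f"treatment {attr}={value}: lift = {lift:+.2f}")
```

A real treatment learner would of course search many candidate treatments and weight outcomes on an ordinal scale rather than the binary one used here.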
Originality: Has this or similar work been previously reported? Are the problems and approaches completely new? Is this a novel combination of familiar techniques? Does the paper discuss relevant research, or is it reinventing the wheel using new terminology?

It looks like similar work was reported in "Practical Large Scale What-if Queries: A Case Study Using COCOMO-II" (ASE 2000; cited in the paper) and in the Workshop on Intelligent Software Engineering (ICSE 2000 workshop, 1999; not cited). There seems to be a lot of overlap between those two papers and the current paper. I would like to see a stronger emphasis on how this paper extends or distinguishes itself from the earlier efforts. You may also want to include some of Shepperd's work: Shepperd, Martin and Michelle Cartwright, "Predicting with Sparse Data", Empirical Software Engineering.
Quality: Is the paper technically sound? Does it carefully evaluate the strengths and limitations of its contribution? Some dimensions for evaluation include generality, empirical behavior, theoretical analysis, and psychological validity.

Page one argues that there are 2^11 combinations of proposed changes. This presumes that all changes are independent of each other. My guess is that there are positive correlations between certain changes; as a result it may be possible to collapse some of these factors and reduce the search space.

On page two you claim that most companies operate at a level below CMM-3. I agree with this claim. In fact, Royce claims in his project management book that the distribution is something like 70% at CMM-1, 15% at CMM-2, and 15% at CMM-3 or higher. These numbers do not surprise me. However, if we assume that most organizations are at CMM-1, then the implication is that they have a poor process. There are certain assumptions about organizations that use the COCOMO II cost model; I would expect an organization to be at the CMM-3 level before it receives substantial benefit from using COCOMO II. Somehow this needs to be reconciled within the paper.

The paper presents the 'Post-Architecture' version of COCOMO II. I am curious as to why you did not investigate, or at least mention, any experiments regarding the 'Early Design' version of COCOMO II.

A nit-picky point: page 4 mentions 23 cost drivers; I believe you want 22 cost drivers.

Page 4 also mentions picking N at random. Is there any reason why you picked N randomly, and why did you use 2N and 3N?

Page 4 presents a formula for 'worth'. How does 'worth' compare with the notion of 'confidence' in statistical analysis?

I am not sure I completely understood your point in the external validity section, where you said, "treatments that are 'best' in generic data sets may not be relevant to particular projects." Are you claiming that this approach is not reliable? A related question: if you are at the beginning of the software life-cycle, how do you discern whether a treatment is relevant?
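As a back-of-the-envelope illustration of the independence point above, the arithmetic looks like this; the particular groupings of correlated changes are invented purely to show the effect on the size of the search space.

```python
# 11 independent yes/no change proposals give 2**11 candidate combinations.
full_space = 2 ** 11
print(full_space)       # 2048

# Suppose (hypothetically) three of the proposals are so strongly correlated
# that they act as a single decision, and two others likewise pair up.
# That leaves 11 - (3 - 1) - (2 - 1) = 8 effective decisions.
collapsed_space = 2 ** (11 - (3 - 1) - (2 - 1))
print(collapsed_space)  # 256
```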
Clarity: Does the paper describe the methods in sufficient detail for readers to replicate the work? Does it describe inputs, outputs, and the basic algorithms employed? Is the paper well organized and well written?

This paper was extremely well-organized and well-written. I have had an opportunity to read previous works regarding Tarzan, and this paper does the best job of explaining the concepts relative to those previous papers.
General comments for the authors: What changes should be made?

The paper reads very well. It is one of those papers I enjoyed reading a second and third time. Besides the changes mentioned above, I would offer the following suggestion (space permitting): add a section on future directions. I could see extending this research to include domain analysis. Keep up the good work!
MBRE01 Review2
Significance: How important is the work reported? Does it attack an important/difficult problem or a peripheral/simple one? Does the approach offer an advance in the state of the art?

This work does seem to attack one of the important problems in requirements engineering: that of obtaining data to load into models when such data is not abundant, and of analyzing the generated data to produce conclusions (treatments) that are useful for software project managers.
Originality: Has this or similar work been previously reported? Are the problems and approaches completely new? Is this a novel combination of familiar techniques? Does the paper discuss relevant research, or is it reinventing the wheel using new terminology?

The article cites related work. This work seems to be a fairly novel application of some useful AI techniques from machine learning to the domain of requirements engineering.
Quality: Is the paper technically sound? Does it carefully evaluate the strengths and limitations of its contribution? Some dimensions for evaluation include generality, empirical behavior, theoretical analysis, and psychological validity.

The work reported here does seem to be technically quite sound. The inclusion of reports about validation of the technique was useful in establishing its soundness. The paper could have said more about the limitations of its contributions. There were a few questions that I had.
Clarity: Does the paper describe the methods in sufficient detail for readers to replicate the work? Does it describe inputs, outputs, and the basic algorithms employed? Is the paper well organized and well written?

The paper was well written. The organization flows naturally from setting the context, through the description of related work and the presentation of the method with examples, to the conclusions. There were a few grammatical errors, but these did not detract from the message of the paper.