Figure 1: DSM Representation of Three Projects and Their Aggregate DSM
The aggregate DSM, shown in the lower part of Figure
1, combines the DSMs for each project. The square marks inside the aggregate DSM represent information constraints, while the diamond marks represent resource
constraints. Note that the diamond marks shown in the figure represent
only one possible resource constraint profile. The arrangement shown assumes
that activities in project 1 are preferred to activities in project 2,
and activities in project 2 are preferred to activities in project 3. For
example, the diamond mark in row 5, column 1 conveys that A2 depends
on the completion of A1 before it can execute. Inserting this mark in row 1, column 5,
instead, would reverse the resource dependency. In general, if a processor
is assigned to p projects, then there are p! possible priority
profiles for that processor.
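For small p, these priority profiles can be enumerated directly. A minimal sketch in Python (the project names are illustrative):

```python
from itertools import permutations

# A processor assigned to p projects has p! possible priority
# profiles, i.e., orderings of those projects by preference.
projects = ["Project 1", "Project 2", "Project 3"]  # p = 3
profiles = list(permutations(projects))

print(len(profiles))  # p! = 3! = 6
```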
The aggregate DSM provides managers of multiple
projects with a concise snapshot of the whole development portfolio, allowing
them to clearly see and trace information dependencies within projects
and across multiple projects, while explicitly accounting for resource
contention.[1]
Partitioning the aggregate DSM reveals an improved information flow and
clear resource prioritization across the whole PD portfolio as compared
to independently partitioning the individual project DSMs.
Each utility curve is defined over [0,1], and all
of the weights are positive and sum to one, so the overall preference index
is also defined over [0,1]. Once a preference index is calculated for each
activity in a processor’s active queue, the activity with the highest preference
index is preferred.
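The selection rule can be sketched as a weighted sum of per-attribute utilities; the utility values, weights, and activity names below are illustrative assumptions, not data from the model:

```python
def preference_index(utilities, weights):
    """Overall preference index: a weighted sum of per-attribute
    utilities, each in [0, 1]. Because the weights are positive and
    sum to one, the result also lies in [0, 1]."""
    return sum(w * u for w, u in zip(weights, utilities))

def select_activity(active_queue, weights):
    """Return the activity in the queue with the highest preference
    index. `active_queue` maps activity name -> utility values."""
    return max(active_queue,
               key=lambda a: preference_index(active_queue[a], weights))

# Illustrative two-activity queue with three attributes each.
queue = {
    "A1": [0.9, 0.2, 0.5],
    "A2": [0.4, 0.8, 0.7],
}
weights = [0.5, 0.3, 0.2]  # positive, sum to one
print(select_activity(queue, weights))  # A1 (index 0.61 vs. 0.58)
```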
Activity Attributes

1. Expectations for activity rework (rework risk). People's choice of whether or not to work on an activity can be influenced by their knowledge of how likely that activity is to change or be reworked later. Prior to running the simulation, a "rework risk" is estimated for each activity based on the likelihood and the impact of changes in the activity's inputs. The estimate combines the impacts of both first- and second-order rework. First-order rework is caused by changes in inputs from downstream activities and is the sum of the product of rework probability and rework impact over all such inputs. Second-order rework is caused by changes in inputs from upstream activities which themselves carry some risk of first-order rework; it is the sum of the product of rework probability and rework impact over all such inputs, where each product is further multiplied by the upstream activity's estimated rework risk.

2. Number of dependent activities. People are more likely to work on activities upon which a large number of other people depend. The number of marks in each column of the binary DSM is counted to determine the number of dependent activities.

3. Nearness to completion. This attribute is a function of the amount of work remaining for the activity. It assumes that people would rather finish nearly complete activities than begin new ones. (The attribute can also be set to reflect the reverse situation.)

4. Relative duration. This attribute assumes that shorter activities are preferred to longer ones. Prior to running the simulation (but after a random sample has been obtained for each activity's duration), the relative duration of each processor's activities is determined: each activity is classified as that processor's shortest-duration activity, longest-duration activity, or in between. (This attribute can also be set to prefer longer activities.)

Project Attributes

5. Cost of project delay. A cost of delay is supplied for each project as an input to the simulation. Activities in projects with higher costs of delay are given greater priority.

6. Project type. Project type is also given for each project as an input to the simulation. We classify project type based on project visibility and/or priority, which is either "high," "medium/normal," or "low."

7. Schedule pressure. As a project falls behind schedule, pressure builds to finish its activities. Each project, and each activity within it, has a scheduled completion time or deadline, provided as an input to the simulation. Project schedule pressure is the sum of the schedule pressure contributions of all of its activities.

Processor Attribute

8. Personal and interpersonal factors. It is impossible to represent the comprehensive set of factors that influence a processor's choice of projects and activities. Therefore, we use a random factor to represent other influences, including personal, interpersonal, and even irrational ones. The random factor can stand in for personality, risk propensity or aversion, interpersonal relationships, or anything else not attributable to the other attributes.
Table 1: The Eight Attributes Used in Developing the Preference Function
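The first- and second-order terms of attribute 1 can be sketched as follows; the probabilities, impacts, and activity names are illustrative assumptions, not values from the paper:

```python
def rework_risk(activity, prob, impact, downstream, upstream, upstream_risk):
    """Estimate an activity's rework risk.

    prob[i][j] and impact[i][j] give the rework probability and impact
    of input j on activity i. First-order rework sums probability x
    impact over inputs from downstream activities; second-order rework
    sums the same product over inputs from upstream activities, each
    further scaled by that upstream activity's own rework risk."""
    first = sum(prob[activity][j] * impact[activity][j] for j in downstream)
    second = sum(prob[activity][j] * impact[activity][j] * upstream_risk[j]
                 for j in upstream)
    return first + second

# Illustrative numbers: A2 has one downstream input (from A3) and one
# upstream input (from A1, whose own rework risk is 0.5).
prob   = {"A2": {"A1": 0.2, "A3": 0.3}}
impact = {"A2": {"A1": 0.5, "A3": 0.4}}
risk_a2 = rework_risk("A2", prob, impact,
                      downstream=["A3"], upstream=["A1"],
                      upstream_risk={"A1": 0.5})
print(round(risk_a2, 2))  # 0.12 first-order + 0.05 second-order = 0.17
```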
(a) Lead time distributions for the overall project portfolio: with and without resource constraints
(b) Lead time distributions for Project 1: with and without resource constraints
(c) Lead time distributions for Project 2
(d) Lead time distributions for Project 3
In our simulation, project execution is guided by
a specific work policy. We use a simple work policy that requires all predecessor
activities to be completed before an activity is eligible to begin work.
However, coupled activities can begin work without inputs from downstream
activities within their block (by making assumptions about the nature of
those inputs). Prior to the beginning of the simulation, the aggregate
DSM is partitioned in order to identify the minimal blocks of coupled activities. Then,
a nominal duration for each activity is randomly sampled from its triangular
distribution.[2] These
sampled durations are saved in a temporary vector for later use within
a single simulation run.
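The per-run sampling step might look like the following sketch, using Python's standard `random.triangular`; the (low, mode, high) parameters and activity names are illustrative assumptions:

```python
import random

# Triangular-distribution parameters (low, mode, high) per activity.
# These values are illustrative, not taken from the paper.
triangular_params = {
    "A1": (4.0, 5.0, 8.0),
    "A2": (2.0, 3.0, 6.0),
    "A3": (1.0, 2.0, 3.0),
}

random.seed(42)  # fix the seed so a run is reproducible
# Note: random.triangular takes its arguments as (low, high, mode).
sampled_durations = {
    act: random.triangular(low, high, mode)
    for act, (low, mode, high) in triangular_params.items()
}
```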
Once the DSM is partitioned and nominal activity
durations are sampled, the simulation proceeds through increments of time, Δt.
At each time step, the simulation must (a) determine which activities are
worked on, (b) record the amount of work accomplished, and (c) check for
possible rework. Determining which activities can be worked on requires two
steps:
a. Determine which activities are eligible to do work based only on information constraints. Activities that have work to do and for which all predecessor activities are completed (i.e., for which all upstream input information is available) are eligible.
b. Assign eligible activities to their processors, thereby determining each processor's active queue. If any processor has more than one activity in its active queue, then the preference algorithm is invoked to select the activity the processor will address in the current time step. This step essentially entails temporarily inserting additional marks into the DSM that represent processors' preferences for eligible activities.
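Step (a), eligibility based only on information constraints, can be sketched against a binary DSM; the data layout and activity indices below are assumptions for illustration, and the coupled-block exception is omitted:

```python
def eligible_activities(dsm, remaining, done):
    """An activity is eligible when it still has work to do and all of
    its predecessors are complete (all upstream inputs available).

    dsm[i][j] == 1 means activity i requires input from activity j.
    """
    eligible = []
    for i, row in enumerate(dsm):
        if remaining[i] <= 0:
            continue  # no work left on this activity
        predecessors = [j for j, needs in enumerate(row) if needs and j != i]
        if all(j in done for j in predecessors):
            eligible.append(i)
    return eligible

# Three activities: A2 needs A1, A3 needs A2; only A1 is done.
dsm = [[0, 0, 0],
       [1, 0, 0],
       [0, 1, 0]]
print(eligible_activities(dsm, remaining=[0, 5, 3], done={0}))  # [1]
```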
Each processor's active queue is redetermined at each time step. If it is empty, then the processor is idle. If it contains a single activity, then the processor works on that activity. If the active queue contains more than one activity, then the preference function is invoked to determine which activity the processor will work on. At each time step, the simulation deducts a fraction of "work to be done" from each working activity. Finally, at the end of each time step, the simulation macro notes all activities that have just finished (whether nominal work or rework). It then inspects the partitioned aggregate DSM to determine which of these finished activities are coupled. For each finished, coupled activity, a probabilistic call is made to determine whether it triggers rework for other coupled activities within its block. If rework is triggered, then the amount of rework is determined from the impact DSM and added to the "work to be done" on the impacted activity.
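The end-of-step rework check can be sketched as follows; the block structure, probability, and impact values are illustrative assumptions, and a deterministic draw is substituted for the probabilistic call so the example is reproducible:

```python
import random

def check_rework(finished, coupled_blocks, rework_prob, rework_impact,
                 remaining, rng=random.random):
    """For each finished activity inside a coupled block, draw to
    decide whether it triggers rework for each other activity in its
    block; on a trigger, add the impact-DSM entry to that activity's
    remaining "work to be done"."""
    for act in finished:
        for block in coupled_blocks:
            if act not in block:
                continue
            for other in block - {act}:
                if rng() < rework_prob[other][act]:
                    remaining[other] += rework_impact[other][act]
    return remaining

# Activities 1 and 2 are coupled; activity 1 just finished.
blocks = [{1, 2}]
prob   = {2: {1: 0.4}}
impact = {2: {1: 1.5}}
remaining = {1: 0.0, 2: 3.0}
# rng=lambda: 0.0 forces every draw to trigger rework.
check_rework([1], blocks, prob, impact, remaining, rng=lambda: 0.0)
print(remaining[2])  # 3.0 + 1.5 = 4.5
```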