Simulating DSMs Based on Information and Resource Constraints

Ali Yassine & Tyson Browning


1. Introduction

The DSM literature focuses on single projects, where precedence (information) constraints determine the execution sequence of the activities and the resultant project lead time.  In this research, we consider a DSM model for multiple projects that share a common set of design resources.  In this setting, the availability of precedence information alone is insufficient to ensure that activities execute on time.  We extend the DSM modeling literature by including resource availability, so that activity execution is based on both information and resource constraints.
 
 

2. The Multi-Project DSM Model 

Here we discuss the extension of the single-project DSM to include multiple projects and resource constraints. We refer to the multi-project DSM as the aggregate DSM. Building the aggregate DSM is best illustrated through a simple example. Consider three development projects occurring simultaneously (projects 1, 2, and 3) and involving four development resources/processors (A, B, C, and D), as shown in the top part of Figure 1. Processor A is assigned to all three projects, depicted in the figure by A1, A2, and A3. Processor B is assigned to projects 1 and 2, depicted by B1 and B2. The same representation holds for processors C and D.

Figure 1: DSM Representation of Three Projects and Their Aggregate DSM

The aggregate DSM, shown in the lower part of Figure 1, combines the DSMs for each project. The square marks (■) inside the aggregate DSM represent information constraints, while the diamond marks (◆) represent resource constraints. Note that the diamond marks shown in the figure represent only one possible resource constraint profile. The arrangement shown assumes that activities in project 1 are preferred to activities in project 2, and activities in project 2 are preferred to activities in project 3. For example, the diamond mark in row 5, column 1 conveys that A2 depends on the completion of A1 for its execution. Inserting this mark in row 1, column 5 instead would reverse the resource dependency. In general, if a processor is assigned to p projects, then there are p! possible priority profiles for that processor.
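As an illustration of how diamond (resource-constraint) marks follow from a priority profile, here is a minimal sketch. The activity ordering and the "lower-numbered project is preferred" rule are our assumptions for this reduced example, not the exact layout of Figure 1.

```python
# Hypothetical sketch: inserting resource-constraint marks into an
# aggregate DSM. Activities are ordered project 1, then 2, then 3.
# Each entry is (label, processor, project).
activities = [
    ("A1", "A", 1), ("B1", "B", 1),
    ("A2", "A", 2), ("B2", "B", 2),
    ("A3", "A", 3),
]
n = len(activities)

# Square (information) marks would be copied from the individual
# project DSMs; here we start with none between projects.
info = [[False] * n for _ in range(n)]

# Diamond (resource) marks: each activity waits on the same processor's
# activity in any preferred (lower-numbered) project.
resource = [[False] * n for _ in range(n)]
for i, (_, proc_i, proj_i) in enumerate(activities):
    for j, (_, proc_j, proj_j) in enumerate(activities):
        if proc_i == proc_j and proj_i > proj_j:
            resource[i][j] = True  # row i depends on column j

# In this reduced ordering, A2 (row 2) now depends on A1 (column 0):
print(resource[2][0])  # True
```

Reversing the priority rule (preferring higher-numbered projects) would transpose these marks, which is what moving a mark across the diagonal conveys in the aggregate DSM.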

The aggregate DSM provides managers of multiple projects a simple snapshot of the whole development portfolio, allowing them to clearly see and trace information dependencies within projects and across multiple projects, while explicitly accounting for resource contention.[1] Partitioning the aggregate DSM reveals an improved information flow and clear resource prioritization across the whole PD portfolio as compared to independently partitioning the individual project DSMs. 

2.1 Project Queues

Each processor can be involved in multiple activities (from the same or different projects) but can work on only one activity at any given moment. The list of activities from different projects assigned to a processor is called its project queue. We refer to each processor’s project queue as Qi, where i is the processor index. At any time t, Qi(t) consists of two mutually exclusive and collectively exhaustive partitions: Ai(t) and Ii(t). The processor’s active queue, Ai(t), contains the subset of Qi(t) that is readily available for work (based on predecessor information availability only) at time t. The processor must choose one of these activities to work on. The inactive queue, Ii(t), contains all other activities. Coupled activities are those involved in an iterative block.  A work policy is used to allocate coupled activities to either Ai(t) or Ii(t) based on how the activities are sequenced in the DSM (the process architecture determined by partitioning). The first activities in a coupled block make assumptions about the later activities’ inputs and are assigned to Ai(t).
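The queue partition can be sketched as follows. The data structures and function name are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: partitioning a processor's project queue Q_i(t)
# into an active queue A_i(t) and an inactive queue I_i(t).

def split_queue(queue, predecessors, finished, assumed_ok):
    """queue: activity ids assigned to this processor.
    predecessors[a]: set of activities whose output a needs.
    finished: set of completed activities.
    assumed_ok[a]: True if a may start on assumed inputs (the first
    activities of a coupled block under the work policy)."""
    active, inactive = [], []
    for a in queue:
        ready = all(p in finished for p in predecessors[a])
        if ready or assumed_ok.get(a, False):
            active.append(a)
        else:
            inactive.append(a)
    return active, inactive

# Example: B1's predecessor A1 is finished, B2's predecessor A2 is not.
active, inactive = split_queue(
    queue=["B1", "B2"],
    predecessors={"B1": {"A1"}, "B2": {"A2"}},
    finished={"A1"},
    assumed_ok={},
)
print(active, inactive)  # ['B1'] ['B2']
```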

2.2 The Preference Function

If a processor is assigned to two projects and, at a given instant in time, can perform work on either one, the choice is made based on a preference function that includes activity, project, and processor characteristics. The preference function addresses the following concerns:
a. When faced with a choice, which activity (in which project) should a processor work on?
b. What is the impact of the above decision on PD time?
The preference function (or priority profile) ranks the activities in a processor’s active queue, so that the activity with the highest priority is selected for work. Preference or priority is a function of activity, project, and processor attributes. The attribute hierarchy is shown in the tree of Figure 2. Determining how to weight the attributes requires interviewing the processors involved in the different projects to elicit the mental value model they use when determining activity priorities. The preference function consists of a weighted combination of the eight attributes listed and discussed in Table 1.
Each branch, i, in the attribute hierarchy (Figure 2) has a weight, ki, and a utility or preference value, Ui, which is a function of the attribute level, xi.
To arrive at an overall preference index for each activity, we can use a weighted sum: U(x1, …, xn) = Σi ki Ui(xi).

Each utility curve is defined over [0,1], and all of the weights are positive and sum to one, so the overall preference index is also defined over [0,1]. Once a preference index is calculated for each activity in a processor’s active queue, the activity with the highest preference index is preferred.
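A minimal sketch of this weighted sum follows. The weights and linear utility curves are illustrative placeholders, not values elicited from interviews.

```python
# Sketch of the overall preference index U(x1, ..., xn) = sum_i k_i * U_i(x_i).

def preference_index(levels, weights, utilities):
    """levels[i] = x_i (attribute level), weights[i] = k_i (positive,
    summing to one), utilities[i] = U_i, each mapping [0,1] -> [0,1]."""
    return sum(k * u(x) for x, k, u in zip(levels, weights, utilities))

# Two-attribute example with linear utility curves:
U = preference_index([0.8, 0.5], [0.6, 0.4], [lambda x: x, lambda x: x])
print(round(U, 2))  # 0.68
```

Because the weights sum to one and each utility lies in [0, 1], the result is also in [0, 1], matching the normalization described above.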
Figure 2: Attribute Hierarchy for the Preference Function
Activity Attributes

1. Expectations for activity rework (Rework Risk). People’s choice of whether or not to work on an activity can be influenced by their knowledge of how likely that activity is to change or be reworked later. Prior to running the simulation, a “rework risk” is estimated for each activity based on the likelihood and the impact of changes in the activity’s inputs. The estimate combines the impacts of both first- and second-order rework. First-order rework is caused by changes in inputs from downstream activities and is the sum of the products of rework probability and rework impact for all such inputs. Second-order rework is caused by changes in inputs from upstream activities which themselves have some risk of first-order rework; it is the sum of the products of rework probability and rework impact for all such inputs, where each product is further multiplied by the upstream activity’s estimated rework risk.

2. Number of dependent activities. People are more likely to work on activities upon which a large number of other people depend. The number of marks in each column of the binary DSM is counted to determine the number of dependent activities.

3. Nearness to completion. This attribute is a function of the amount of work remaining for the activity. It assumes that people would rather finish nearly complete activities than begin new ones. (The attribute can also be set to reflect the reverse situation.)

4. Relative duration. This attribute assumes that shorter activities will be preferred to longer ones. Prior to running the simulation (but after a random sample has been obtained for each activity’s duration), the relative duration of each processor’s activities is determined. Each processor’s activities are classified as the shortest, the longest, or in between for that processor. (This attribute can also be set to prefer longer activities.)

Project Attributes

5. Cost of project delay. A cost of delay is supplied for each project as an input to the simulation. Activities in projects with higher costs of delay are given greater priority.

6. Project type. Project type is also given for each project as an input to the simulation. We classify project type based on project visibility and/or priority, which is either “high,” “medium/normal,” or “low.”

7. Schedule pressure. As a project falls behind schedule, pressure builds to finish its activities. Each project, and each activity within it, has a scheduled completion time or deadline, provided as an input to the simulation. Project schedule pressure is the sum of the schedule pressure contributions of all of its activities.

Processor Attribute

8. Personal and interpersonal factors. It is impossible to represent the comprehensive set of factors that influence a processor’s choice of projects and activities. Therefore, we use a random factor to represent other influences, including personal and interpersonal (and even irrational) factors. The random factor can represent personality, risk propensity or aversion, interpersonal relationships, or really anything not attributable to the other attributes.

Table 1: The Eight Attributes Used in Developing the Preference Function
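The rework-risk estimate in item 1 of Table 1 can be sketched as follows. The matrix layout and the pre-classified downstream/upstream input sets are our assumptions about how the described sums would be computed.

```python
# Hedged sketch of the rework-risk estimate (Table 1, item 1).
# p[i][j]: probability that activity i is reworked when input j changes.
# m[i][j]: impact on i (fraction of work redone) if that rework occurs.

def rework_risk(i, inputs_down, inputs_up, p, m, risk_up):
    """inputs_down: input activities downstream of i (feedback marks).
    inputs_up: input activities upstream of i.
    risk_up[j]: estimated rework risk of upstream activity j."""
    # First-order rework: sum of probability * impact over downstream inputs.
    first = sum(p[i][j] * m[i][j] for j in inputs_down)
    # Second-order rework: upstream inputs, each product further weighted
    # by the upstream activity's own estimated rework risk.
    second = sum(p[i][j] * m[i][j] * risk_up[j] for j in inputs_up)
    return first + second

# Activity 0 with one downstream input (1) and one upstream input (2):
r = rework_risk(0, inputs_down=[1], inputs_up=[2],
                p=[[0.0, 0.5, 0.3]], m=[[0.0, 0.4, 0.5]],
                risk_up={2: 0.2})
print(round(r, 2))  # 0.23
```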

3. Simulating the Model

The simulation is implemented in a spreadsheet model using Excel Visual Basic macro programming. All input information (i.e., project data and the corresponding activity details) is entered into the Excel spreadsheet. The simulation then starts by executing the simulation macros. The results of the simulation are displayed as a set of lead-time distributions for each project and for the whole portfolio, as shown in Figure 3.
 
 

(a) Lead time distributions for the overall project portfolio: with and without resource constraints

(b) Lead time distributions for Project 1: with and without resource constraints

(c) Lead time distributions for Project 2

(d) Lead time distributions for Project 3
Figure 3: Sample Simulation Results (solid curves: no resource constraints; dotted curves: with resource constraints)

In our simulation, project execution is guided by a specific work policy. We use a simple work policy that requires all predecessor activities to be completed before an activity is eligible to begin work. However, coupled activities can begin work without inputs from downstream activities within their block (by making assumptions about the nature of those inputs). Prior to the beginning of the simulation, the aggregate DSM is partitioned in order to obtain the minimal set of coupled activities. Then, a nominal duration for each activity is randomly sampled from its triangular distribution.[2] These sampled durations are saved in a temporary vector for later use within a single simulation run.
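The sampling step can be sketched with Python's standard library in place of the spreadsheet's sampler. The activity names and (minimum, likely, maximum) values are made-up examples, not the authors' data.

```python
# Illustrative sketch: one nominal duration per activity, sampled from a
# triangular distribution defined by (minimum, likely, maximum) values.
import random

duration_params = {  # activity -> (minimum, likely, maximum)
    "A1": (2.0, 4.0, 7.0),
    "B1": (1.0, 2.0, 3.0),
}

random.seed(42)  # fix the stream for one simulation run
# random.triangular takes (low, high, mode), so the "likely" value is
# passed as the mode argument.
nominal = {a: random.triangular(lo, hi, mode)
           for a, (lo, mode, hi) in duration_params.items()}
```

Each sampled duration necessarily falls between its minimum and maximum parameters, and a fresh sample would be drawn for every simulation run.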

Once the DSM is partitioned and nominal activity durations are sampled, the simulation proceeds through increments of time, Δt. At each time step, the simulation must (a) determine which activities are worked on, (b) record the amount of work accomplished, and (c) check for possible rework. Determining which activities can be worked on requires two steps:

a. Determine which activities are eligible to do work based only on information constraints. Activities that have work to do and for which all predecessor activities are completed (i.e., for which all upstream input information is available) are eligible.

b. Assign eligible activities to their processors, thereby determining each processor’s active queue. If any processor has more than one activity in its active queue, then the preference algorithm is invoked to select the activity the processor will address in the current time step. This step essentially entails temporarily inserting additional marks into the DSM that represent processors’ preferences for eligible activities.

Each processor’s active queue is redetermined at each time step. If it is empty, then the processor is idle. If it contains a single activity, then the processor works on that activity. If the active queue contains more than one activity, then the preference function is invoked to determine which activity the processor will work on. At each time step, the simulation deducts a fraction of “work to be done” from each working activity. Finally, at the end of each time step, the simulation macro notes all activities that have just finished (nominal duration or rework). The macro then inspects the partitioned aggregate DSM to determine which of these finished activities are coupled. For each finished and coupled activity, a probabilistic call is made to determine whether it will trigger rework for other coupled activities within its block. If rework is triggered, then the amount of rework is determined from the impact DSM and added to the “work to be done” of the impacted activity.
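The time-stepped loop can be condensed into a sketch. The data structures, the deadlock guard, the first-eligible tie-break (standing in for the full preference function), and the fixed rework probability are our simplifying assumptions, not the published macro.

```python
# Condensed sketch of the time-stepped simulation loop.
import random

def simulate(work, predecessors, processor_of, dt=1.0, p_rework=0.0):
    """work[a]: work remaining for activity a; returns total lead time."""
    t = 0.0
    while any(w > 1e-9 for w in work.values()):
        # (a) eligibility: work remaining and all predecessors finished
        eligible = [a for a, w in work.items() if w > 1e-9 and
                    all(work[p] <= 1e-9 for p in predecessors[a])]
        # (b) one activity per processor; here the first eligible activity
        # stands in for the highest-preference one
        chosen, busy = [], set()
        for a in eligible:
            if processor_of[a] not in busy:
                busy.add(processor_of[a])
                chosen.append(a)
        if not chosen:
            break  # guard against a stalled (cyclic) configuration
        # (c) deduct work; a just-finished activity may trigger rework
        for a in chosen:
            work[a] = max(0.0, work[a] - dt)
            if work[a] <= 1e-9 and random.random() < p_rework:
                work[a] += dt  # simplified rework amount
        t += dt
    return t
```

With rework disabled, two serial activities of durations 2 and 1 on one processor finish in 3 time units, while independent activities on different processors overlap, which is the behavior the resource-constrained lead-time distributions in Figure 3 aggregate over many runs.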

 

[1] Here, we assume no information dependencies (only resource constraints) between projects, but the aggregate DSM can accommodate such dependencies by allowing the insertion of a combination of square and diamond marks outside the individual project blocks.
[2] The triangular distribution for each activity duration is represented by the minimum, likely, and maximum duration values in the input spreadsheet.

