Software Architecture Document
Overall, the system is soundly based.
- The architecture appears to be stable.
The need for stability is dictated by the nature of the Construction
phase: in Construction the project typically expands, adding developers who
will work in parallel, communicating loosely with other developers as they
produce the product. The degree of independence and parallelism needed in
Construction simply cannot be achieved if the architecture is not stable.
The importance of a stable architecture cannot be overstated. Do not be
deceived into thinking that 'pretty close is good enough' - unstable is
unstable, and it is better to get the architecture right and delay the onset
of Construction rather than proceed. The coordination problems involved in
trying to repair the architecture while developers are trying to build upon
its foundation will easily erase any apparent benefits of accelerating the
schedule. Changes to architecture during Construction have broad impact:
they tend to be expensive, disruptive and demoralizing.
The real difficulty of assessing architectural stability is that
"you don't know what you don't know"; stability is measured
relative to expected change. As a result, stability is essentially a
subjective measure. We can, however, base this subjectivity on more than
just conjecture. The architecture itself is developed by considering
'architecturally significant' scenarios - sub-sets of use cases which
represent the most technologically challenging behavior the system must
support. Assessing the stability of the architecture involves ensuring that
the architecture has broad coverage, to ensure that there will be no
'surprises' in the architecture going forward.
Past experience with the architecture can also be a good indicator: if
the rate of change in the architecture is low, and remains low as new
scenarios are covered, there is good reason to believe that the architecture
is stabilizing. Conversely, if each new scenario causes changes in the
architecture, it is still evolving and baselining is not yet warranted.
- The complexity of the system matches the
functionality it provides.
- The conceptual complexity is appropriate given the skill and
experience of its users, operators, and developers.
- The system has a single, consistent, coherent architecture.
- The number and types of components are reasonable.
- The system has a consistent, system-wide security
facility. All the security components work together to safeguard the system.
- The system will meet its availability targets.
- The architecture will permit the system to be recovered in the
event of a failure within the required amount of time.
- The products and techniques on which the system is based match
its expected life.
- An interim (tactical) system with a short life can safely
be built using old technology because it will soon be discarded.
- A system with a long life expectancy (most systems) should
be built on up-to-date technology and methods so it can be maintained
and expanded to support future requirements.
- The architecture provides clear interfaces to enable
partitioning for parallel team development.
- The designer of a model element can understand enough from the
architecture to successfully design and develop the model element.
- The packaging approach reduces complexity and improves understanding.
- Packages have been defined to be highly cohesive within the
package, while the packages themselves are loosely coupled.
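The cohesion/coupling checkpoint can be made concrete by counting intra-package versus inter-package dependencies in the import graph. The sketch below is illustrative only; the package names ('orders', 'billing') and module layout are hypothetical, not taken from any particular system.

```python
from collections import Counter

def coupling_report(module_pkg, imports):
    """Count intra-package (cohesion) vs inter-package (coupling)
    dependencies, given a module->package map and a module->imports map."""
    intra, inter = Counter(), Counter()
    for mod, deps in imports.items():
        for dep in deps:
            if module_pkg[mod] == module_pkg[dep]:
                intra[module_pkg[mod]] += 1          # dependency stays inside the package
            else:
                inter[(module_pkg[mod], module_pkg[dep])] += 1  # cross-package edge
    return intra, inter

# Hypothetical layout: two packages, 'orders' and 'billing'.
module_pkg = {"orders.cart": "orders", "orders.checkout": "orders",
              "billing.invoice": "billing"}
imports = {"orders.cart": ["orders.checkout"],
           "orders.checkout": ["billing.invoice"],
           "billing.invoice": []}
intra, inter = coupling_report(module_pkg, imports)
```

A healthy package structure shows high intra-package counts relative to the number of distinct cross-package edges; many inter-package edges between the same pair of packages suggest those packages should be merged or their interface reconsidered.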
- Similar solutions within the common application domain have been considered.
- The proposed solution can be easily understood by someone
generally knowledgeable in the problem domain.
- All people on the team share the same view of the architecture
as the one presented by the software architect.
- The Software Architecture Document is current.
- The Design Guidelines have been followed.
- All technical risks have either been mitigated or have been
addressed in a contingency plan. Newly discovered risks have been documented
and analyzed for their potential impact.
- The key performance requirements (established budgets) have been met.
- Test cases, test harnesses, and test configurations have been developed.
- The architecture does not appear to be over-designed.
- The mechanisms in place appear to be simple enough to use.
- The number of mechanisms is modest and consistent with the
scope of the system and the demands of the problem domain.
- All use-case realizations defined for the current iteration can
be executed by the architecture, as demonstrated by diagrams depicting:
- Interactions between objects,
- Interactions between tasks and processes,
- Interaction between physical nodes.
- Subsystem and package partitioning and layering is logically consistent.
- All analysis mechanisms have been identified and described.
- The services (interfaces) of subsystems in upper-level layers
have been defined.
- The dependencies between subsystems and packages correspond
to dependency relationships between the contained classes.
- The classes in a subsystem support the services identified
for the subsystem.
- The key entity classes have been identified.
- Relationships between key entity classes have been defined.
- The name and description of each class clearly reflects the
role it plays.
- The description of each class accurately captures the
responsibilities of the class.
- The entity classes have been mapped to analysis mechanisms.
- The role names of aggregations and associations accurately
describe the relationship between the related classes.
- The multiplicities of the relationships are correct.
- The key entity classes and their relationships are consistent
with the business model (if it exists), domain model (if it exists),
requirements, and glossary entries.
- The model is at an appropriate level of detail given the
objectives of the model.
- For the business model, requirements model, or the design
model during the elaboration phase, there is not an over-emphasis on
implementation detail.
- For the design model in the construction phase, there is a
good balance of functionality across the model elements, using composition
of relatively simple elements to build a more complex design.
- The model demonstrates familiarity and competence with the
full breadth of modeling concepts applicable to the problem domain;
modeling techniques are used appropriately for the problem at hand.
- Concepts are modeled in the simplest way possible.
- The model is easily evolved; expected changes can be easily accommodated.
- At the same time, the model has not been overly structured to
handle unlikely change at the expense of simplicity and understandability.
- The key assumptions behind the model are documented and
visible to reviewers of the model. If the assumptions are applicable to a
given iteration, then the model should be able to be evolved within those
assumptions, but not necessarily outside of them. Documenting
assumptions frees designers from having to examine "all"
possible requirements. In an iterative process, it is
impossible to analyze all possible requirements, and to define a model
which will handle every future requirement.
- The purpose of the diagram is clearly stated and easily understood.
- The graphical layout is clean and clearly conveys the intended information.
- The diagram conveys just enough to accomplish its objective,
but no more.
- Encapsulation is effectively used to hide detail and improve clarity.
- Abstraction is effectively used to hide detail and improve clarity.
- Placement of model elements effectively conveys
relationships; similar or closely coupled elements are grouped together.
- Relationships among model elements are easy to understand.
- Labeling of model elements contributes to understanding.
- Each model element has a distinct purpose.
- There are no superfluous model elements; each one plays an
essential role in the system.
- For each error or exception, a policy defines how the system
is restored to a "normal" state.
- For each possible type of input error from the user or wrong
data from external systems, a policy defines how the system is restored to
a "normal" state.
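One way to make such a policy "consistently applied" is to centralize it as a table mapping each anticipated error class to its documented recovery action, with a catch-all so the system always returns to a known state. This is a minimal sketch; the error classes and action strings are invented for illustration.

```python
# Hypothetical error classes for the two cases in the checklist:
# bad user input, and wrong data arriving from an external system.
class ValidationError(Exception): pass
class ExternalDataError(Exception): pass

RECOVERY_POLICY = {
    ValidationError: "reject input, keep current state, prompt user",
    ExternalDataError: "discard message, log it, request retransmission",
}

def recover(exc):
    """Return the documented recovery action for an error; unanticipated
    errors fall through to a conservative catch-all action."""
    for etype, action in RECOVERY_POLICY.items():
        if isinstance(exc, etype):
            return action
    return "roll back transaction, alert operator"
```

Because every error path funnels through one table, a reviewer can verify at a glance that each error type listed in the checklist has a defined route back to a "normal" state.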
- There is a consistently applied policy for handling
- There is a consistently applied policy for handling data
corruption in the database.
- There is a consistently applied policy for handling database
unavailability, including whether data can still be entered into the
system and stored later.
- If data is exchanged between systems, there is a policy for
how systems synchronize their views of the data.
- If the system utilizes redundant processors or nodes to
provide fault tolerance or high availability, there is a strategy for
ensuring that no two processors or nodes can 'think' they are
primary (split-brain), and that the system is never left with no primary at all.
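One common strategy for this checkpoint is fencing: a coordinator hands out a primary lease with a monotonically increasing epoch number, and shared storage rejects writes carrying a stale epoch. The sketch below is a toy in-process model of the idea, not a production election protocol; all class names are hypothetical.

```python
class LeaseCoordinator:
    """Grants the primary role with a monotonically increasing epoch."""
    def __init__(self):
        self.epoch = 0
    def grant_primary(self):
        self.epoch += 1              # each new primary gets a higher epoch
        return self.epoch

class FencedStore:
    """Shared storage that fences off writers holding a stale epoch."""
    def __init__(self):
        self.highest_seen = 0
        self.data = {}
    def write(self, epoch, key, value):
        if epoch < self.highest_seen:
            return False             # old primary: write rejected
        self.highest_seen = epoch
        self.data[key] = value
        return True

coord, store = LeaseCoordinator(), FencedStore()
old = coord.grant_primary()          # node A becomes primary (epoch 1)
new = coord.grant_primary()          # failover: node B is primary (epoch 2)
store.write(new, "x", 1)             # accepted
store.write(old, "x", 99)            # rejected: A still 'thinks' it is primary
```

Even if the old primary never learns it was demoted, its writes cannot corrupt shared state, which is precisely the split-brain guarantee the checklist asks for.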
- The failure modes for a distributed system have been
identified and strategies defined for handling the failures.
- The process for upgrading an existing system without loss of
data or operational capability is defined and has been tested.
- The process for converting data used by previous releases is
defined and has been tested.
- The amount of time and resources required to upgrade or
install the product is well-understood and documented.
- The functionality of the system can be activated one use case
at a time.
- Disk space can be reorganized or recovered while the system is running.
- The responsibilities and procedures for system configuration
have been identified and documented.
- Access to the operating system or administration functions is restricted.
- Licensing requirements are satisfied.
- Diagnostics routines can be run while the system is running.
- The system monitors its own operational performance (e.g.
capacity thresholds, critical performance thresholds, resource exhaustion).
- The actions to be taken when thresholds are reached are defined.
- The alarm handling policy is defined.
- The alarm handling mechanism is defined and has been
prototyped and tested.
- The alarm handling mechanism can be 'tuned' to
prevent false or redundant alarms.
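Such 'tuning' usually combines two filters: require the condition to persist for several consecutive samples (suppressing false alarms from transient spikes) and rate-limit repeats of the same alarm (suppressing redundant alarms). A minimal sketch, with hypothetical parameter names:

```python
class AlarmFilter:
    """Raise an alarm only after `persist` consecutive over-threshold
    samples, and re-raise it at most once every `repeat_every` samples."""
    def __init__(self, persist=3, repeat_every=10):
        self.persist = persist
        self.repeat_every = repeat_every
        self.streak = 0          # consecutive over-threshold samples
        self.last_raised = None  # sample index of the last raised alarm
        self.tick = 0
    def sample(self, over_threshold):
        self.tick += 1
        self.streak = self.streak + 1 if over_threshold else 0
        if self.streak < self.persist:
            return False         # transient spike: not yet an alarm
        if self.last_raised is not None and \
                self.tick - self.last_raised < self.repeat_every:
            return False         # same condition already reported recently
        self.last_raised = self.tick
        return True

f = AlarmFilter(persist=3, repeat_every=5)
raised = [f.sample(True) for _ in range(6)]
# only the third consecutive sample raises; the rest are suppressed
```

The two parameters are exactly what an operator would tune: `persist` trades detection latency against false alarms, and `repeat_every` controls alarm flooding.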
- The policies and procedures for network (LAN, WAN) monitoring
and administration are defined.
- Faults on the network can be isolated.
- There is an event tracing facility that can be enabled to aid in
problem diagnosis.
- The overhead of the facility is understood.
- The administration staff possesses the knowledge to use
the facility effectively.
- It is not possible for a malicious user to:
- enter the system.
- destroy critical data.
- consume all resources.
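The "consume all resources" item is commonly addressed with per-client rate limiting, e.g. a token bucket. The following is a simplified, tick-based sketch (real implementations refill on wall-clock time); the class and parameter names are illustrative.

```python
class TokenBucket:
    """Caps how fast one client can consume a resource: each request
    spends a token, and tokens refill at a fixed rate up to `capacity`."""
    def __init__(self, capacity, refill_per_tick):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_tick
    def tick(self):
        # called once per refill interval
        self.tokens = min(self.capacity, self.tokens + self.refill)
    def allow(self):
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False             # request rejected: client is over its budget

bucket = TokenBucket(capacity=2, refill_per_tick=1)
burst = [bucket.allow() for _ in range(4)]   # burst of 4 requests
bucket.tick()                                # one refill interval passes
after_refill = bucket.allow()
```

With one bucket per user (or per IP), a single malicious client exhausts only its own budget, not the system's resources.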
- Memory budgets for the application have been defined.
- Actions have been taken to detect and prevent memory leaks.
- There is a consistently applied policy defining how the
virtual memory system is used, monitored and tuned.
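For the leak-detection item, Python's standard-library `tracemalloc` module illustrates the general technique: snapshot allocations around a workload and diff them, so allocations that survive the workload surface as leak candidates. The simulated leak below is of course contrived for the example.

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

retained = []                        # simulated leak: objects never released
for _ in range(100):
    retained.append(bytearray(1024))

after = tracemalloc.take_snapshot()
# Sum the per-allocation-site growth between the two snapshots.
growth = sum(stat.size_diff for stat in after.compare_to(before, "lineno"))
tracemalloc.stop()
# growth is roughly 100 KiB, and the top stat points at the retaining line
```

Run periodically in a soak test, a steadily increasing `growth` against a defined memory budget is the signal the checklist asks teams to watch for.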
- The actual number of lines of code developed thus far agrees
with the estimated lines of code at the current milestone.
- The estimation assumptions have been reviewed and remain valid.
- Cost and schedule estimates have been re-computed using the
most recent actual project experience and productivity performance.
- Portability requirements have been met.
- Programming Guidelines provide specific guidance on creating portable code.
- Design Guidelines provide specific guidance on designing portable software.
- A 'test port' has been done to verify portability claims.
- Measures of quality (MTBF, number of outstanding defects,
etc.) have been met.
- The architecture provides for recovery in the event of
disaster or system failure.
- Security requirements have been met.
- Are the teams well-structured? Are responsibilities
well-partitioned between teams?
- Are there political, organizational or administrative issues
that restrict the effectiveness of the teams?
- Are there personality conflicts?
The Logical View section of the Software Architecture Document:
- accurately and completely presents an overview of the
architecturally significant elements of the design.
- presents the complete set of architectural mechanisms used in
the design along with the rationale used in their selection.
- presents the layering of the design, along with the rationale
used to partition the layers.
- presents any frameworks or patterns used in the design, along
with the rationale used to select the patterns or frameworks.
- The number of architecturally significant model elements is
proportionate to the size and scope of the system, and is of a size which
still renders the major concepts at work in the system understandable.
© 2011 École Polytechnique de Montréal