Should Cost Modelling for Civil Engineering: the ultimate how-to guide


October 3, 2022 · Civil Bites

What is a Should Cost Model?

Introduction to Should Cost Models

An SCM forecasts what a service, project, or programme should cost over its lifetime. As summarised below, SCMs vary in design as requirements change over the procurement lifecycle; this Guidance Note refers to all of them as SCMs.
SCMs anticipate the costs of public works projects over the build phase and the design life, including risks and profit, and the impact of risk and uncertainty on cost and schedule. It is the whole-life cost that matters, not the initial purchase price. Early in the procurement process, SCMs should be used to:
  • Inform the delivery model assessment (DMA), which analyses cost and non-cost parameters such as whole-life carbon;

  • Understand the whole-life costs, risks, and opportunities of alternative options and scenarios;
  • Understand the impact of risk and uncertainty on cost and schedule to set more realistic budgets;
  • Inform the initial business case (department and ALB Strategic Outline Case); and
  • Inform bidder engagement and commercial strategy, including ways to incentivise whole-life value in the supply chain.
An SCM also helps analyse the delivery model options for a public service:

In-house – The total cost of delivering a service using internal resources and expertise. It covers acquisition, maintenance, and capability costs. This should be used early in the procurement to compare costs against the Expected Market Cost (EMC) and/or Mixed Economy (ME) options to guide a DMA (see Chapter 3 of the Sourcing Playbook).
Expected Market Cost (EMC) – The whole-life cost of a service delivered by an external supplier, including risks and profit. The EMC can be used to inform bidder engagement if the service is outsourced. Early market engagement can help evolve the model structure so it can be compared with market offers.
Mixed Economy (ME) – A delivery approach that combines insourcing and outsourcing. In certain circumstances, a Mixed Economy option can be used to assess the cost of the service.


It is important to describe to decision-makers which costs the SCM covers, how they are treated, and what its limitations are. An SCM estimates a defined set of costs over a defined time period, usually the life of a service, project, or programme.
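To make the whole-life framing concrete, an SCM's headline number is typically the discounted sum of each year's costs. A minimal Python sketch, assuming the Green Book's standard 3.5% discount rate; the cost figures are purely illustrative:

```python
def discounted_whole_life_cost(annual_costs, discount_rate=0.035):
    """Present value of a stream of annual costs, year 0 undiscounted.

    The 3.5% default mirrors the Green Book's standard discount rate;
    substitute whatever rate applies to the appraisal period.
    """
    return sum(cost / (1 + discount_rate) ** year
               for year, cost in enumerate(annual_costs))

# Illustrative only: a £10m build in year 0, then £1m/year to operate.
costs = [10_000_000, 1_000_000, 1_000_000, 1_000_000]
pv = discounted_whole_life_cost(costs)
```

Comparing options on this discounted whole-life figure, rather than on the year-0 purchase price, is what distinguishes an SCM from a simple price estimate.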

An SCM is a financial and analytical model that should follow the Green Book and Aqua Book principles. A typical SCM will:

  • Analyse unit costs by volume;
  • Account for uncertainties and risks;
  • Use day rates and staff numbers;
  • Model multiple options for comparison and sensitivity analysis.
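The calculations in that list can be sketched in a few lines of Python; every rate, headcount, and volume below is hypothetical:

```python
def staff_cost(day_rate, headcount, days):
    """Cost of a staffing line: day rate x number of staff x working days."""
    return day_rate * headcount * days

def volume_cost(unit_cost, volume):
    """Cost of a volume-driven line: unit cost x forecast volume."""
    return unit_cost * volume

# Hypothetical cost lines for one year of a service
cost_lines = [
    staff_cost(day_rate=450, headcount=4, days=220),  # delivery team
    staff_cost(day_rate=700, headcount=1, days=110),  # part-time specialist
    volume_cost(unit_cost=12.50, volume=40_000),      # per-transaction cost
]
total = sum(cost_lines)  # 973,000
```

Real SCMs layer uncertainty and sensitivity analysis on top of exactly this kind of unit-cost arithmetic.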

Levels of complexity

SCMs vary in complexity and in the time they take to create. An SCM's complexity should mirror the criticality and complexity of what you are sourcing, and depends on its purpose (e.g. high-level analysis versus detailed forecasting). For a simple, low-value SCM, the key cost factors and assumptions could be documented in a spreadsheet; for high-value or complex sourcing, the SCM could be a thorough financial model that takes months to create.
When defining a procurement timeline for a bid, include the time suppliers need to produce their own cost models. Complex SCMs with advanced features require more resources than simple ones, and any programme or procurement should allow enough time to plan, design, build, and test the SCM. An SCM's level of detail can vary greatly, and it may need to be developed iteratively as more information becomes available and more assurance is needed. Early in decision-making and procurement, simple models based on the service, project, or programme definition and its major cost drivers may be appropriate.
As the procurement advances and the specification and other determining criteria change, SCMs may need to become more complex and their data more robust, and assumptions may need to be re-evaluated.

Evolution of a Should Cost Model

DMAs should be iterated over time in line with the Green Book's Business Case process. SCMs should adapt as new information becomes available and requirements change. These needs may include using the SCM to demonstrate value for money, inform payment mechanism development, or protect the government from 'low-cost bid bias'.

SCMs are useful throughout the procurement process, although the level of information available may require iteration. Their development parallels the Green Book's Business Case process:

  • Initial Should Cost Model – informs the strategic delivery model assessment (Strategic Outline Business Case).
  • Should Cost Model – a thorough model that evaluates options to demonstrate value for money (Outline Business Case).
  • Full Should Cost Model – a full-cost model containing all cost factors and data, used to evaluate suppliers' returned costs. This is only appropriate if the SCM's structure was shared with bidders; contracting authorities should weigh the risks and rewards of sharing the SCM (Full Business Case).
  • Contract baseline – a full cost model populated with actual cost data and quantities, allowing comparison against expectations and open-book contract management (Full Business Case).
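The open-book comparison of outturn against the contracted baseline can be sketched as a simple variance report. A hedged illustration; the cost elements and figures are invented:

```python
def cost_variances(baseline, outturn):
    """Variance by cost element between the SCM baseline and outturn."""
    report = {}
    for element, planned in baseline.items():
        actual = outturn.get(element, 0.0)
        delta = actual - planned
        report[element] = {
            "planned": planned,
            "actual": actual,
            "variance": delta,
            "variance_pct": delta / planned if planned else None,
        }
    return report

# Hypothetical baseline and outturn figures
baseline = {"design": 200_000, "construction": 1_500_000, "maintenance": 300_000}
outturn = {"design": 240_000, "construction": 1_450_000, "maintenance": 300_000}
variances = cost_variances(baseline, outturn)
```

Keeping variances at this element level is what allows the granular outturn-versus-plan analysis the guidance describes.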

Why create an SCM?

Why Should Cost Models Are Beneficial

SCMs provide a better understanding of the costs of different delivery model options, insight into potential delivery models, and protection from 'low-cost bid bias' (the tendency to favour the lowest-cost bid as the preferred option).

Complex services, projects, or programmes are at particular risk of low-cost bid bias, so departments must refer abnormally low bids before accepting them (see Chapter 10 of the Sourcing Playbook and Chapter 9 of the Construction Playbook):

If a bid is more than 10% lower than either the average of the other bids or the Should Cost Model estimate, it should be referred to the Continuous Commercial Improvement Team (i.e. failing either of these tests triggers a referral). SCM comparisons should be made at the 50% confidence level for probabilistic estimates, or against the median scenario for deterministic estimates.
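That referral rule reduces to two comparisons. A sketch in Python; the threshold is the 10% figure stated above, and all bid values are invented:

```python
def should_refer(bid, other_bids, scm_estimate, threshold=0.10):
    """Flag a bid for referral if it is more than `threshold` below either
    the average of the other bids or the SCM estimate (the 50% confidence
    value for probabilistic models); failing either test triggers referral."""
    avg_others = sum(other_bids) / len(other_bids)
    below_average = bid < avg_others * (1 - threshold)
    below_scm = bid < scm_estimate * (1 - threshold)
    return below_average or below_scm
```

For example, an £80m bid against rival bids averaging £100m fails the first test and would be referred even if it sat within 10% of the SCM estimate.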

SCMs can facilitate a wide range of analyses. These can be understood using the Green Book’s Five Case paradigm.

Strategic Case — Supporting the change case by specifying the offering’s scope and delivery costs, including confidence ranges. Depending on the SCM’s breadth, this may also incorporate a quantitative understanding of Business as Usual (BAU).

Economic Case – A should cost model helps explain and understand the cost elements of the business case and supports discussions about options and value for money. Whole-life costs and benefits will be included when measuring value for money. The SCM's breadth and how it will support the Economic Case will drive its design.

Commercial Case – Designing an SCM for first-generation outsourcing or novel services, projects, or programmes can help build an understanding of their commercial viability by clarifying cost components, including risk and timing, and whether they can be procured at all.

Financial Case – A thorough cost profile of the target service, project, or programme will help establish affordability and financial sustainability by emphasising whole-life costs and confidence ranges. As with the Economic Case, the extent to which the SCM supports the Financial Case will affect its design.

Management Case – Knowing delivery costs helps project management. It can structure reporting by providing baseline expenses against which to measure variations.

SCMs may assist many components of the procurement lifecycle, provided they have correct management information and market data.


Options Analysis – provides realistic cost estimates and cost drivers for alternative combinations of choices, improving the appraisal of delivery options.

Switching Values — Determines, using sensitivity analysis, the input value threshold at which an option becomes infeasible.

Major Cost Driver Analysis – Provides further insight by allowing a more complete knowledge of key cost components and their underlying causes;

Maximising Value for Money – establishes cost by expense category, offering transparency over cost and its drivers;

Negotiation Support – during negotiation or competitive dialogue procedures, the SCM enables bidders' prices to be compared at element level against the anticipated baseline, highlighting gaps.

Budgeting – Provides a foundation for budgeting.

Cost Baseline – once a contract is executed, the SCM may be updated with the contracted costs to provide a cost baseline for analysis, highlighting differences between outturn and plan at a granular level and enabling further analysis;

Contract Management — Provides a cost baseline to monitor contract and supplier performance, challenge VFM, and guide contract modification.
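The Switching Values analysis listed above can be implemented as a root search: vary one input until the option's net position crosses zero. A sketch using bisection; the cost model in the example is hypothetical:

```python
def switching_value(net_position, lo, hi, tol=1e-6):
    """Bisection search for the input value at which `net_position`
    crosses zero, i.e. the threshold where an option stops being viable.
    Assumes exactly one sign change between lo and hi."""
    f_lo = net_position(lo)
    for _ in range(200):
        mid = (lo + hi) / 2
        f_mid = net_position(mid)
        if f_mid == 0 or (hi - lo) < tol:
            return mid
        if (f_mid > 0) == (f_lo > 0):
            lo, f_lo = mid, f_mid  # zero crossing lies in the upper half
        else:
            hi = mid               # zero crossing lies in the lower half
    return (lo + hi) / 2

def net(unit_cost):
    """Hypothetical option: £1m of benefit eroded as a unit cost rises."""
    return 1_000_000 - 40_000 * unit_cost

threshold = switching_value(net, 0, 100)  # option infeasible beyond ~25
```

In a spreadsheet SCM the same answer usually comes from a sensitivity table or goal-seek; the point is identifying the input value at which the preferred option flips.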

When to generate an SCM?


The necessity to generate an SCM when making sourcing choices and contracting outside suppliers for public services is outlined in the Sourcing Playbook (Chapter 3) and in the Construction Playbook for public works projects or programmes (see Chapter 5).

It is recommended practice to build an SCM for all procurements as part of the business case and procurement planning stage, before advertising the contract and publishing procurement documents.

An SCM should be utilised with non-cost criteria (e.g. full life carbon) to inform the suggested delivery model.

Before building the SCM, it is crucial to define the service, project, or programme, including what good looks like, intended outcomes, and key performance indicators. See the guidance on service definitions and delivery model assessments.

Should Cost Models to evaluate bids

SCMs can help develop the evaluation model by revealing different delivery models and cost drivers. The SCM can help determine which costs to include and can guide bidder discussions.

SCMs may be used during a competitive dialogue or a competitive procedure involving negotiation to ensure that suppliers give transparency over all important cost drivers across the service, project, or programme's life. The SCM helps contracting authorities understand costs: where prices are higher or lower than anticipated, the bidder should explain how they were calculated. The SCM is not shared with bidders during dialogue/negotiation, but is used to inform the contracting authority's negotiating position and its assessment of bid robustness and deliverability.


Figure 1: Using an SCM to compare cost components

Cost components may differ between different delivery options

The process of creating an SCM

Five stages

SCM production follows a five-step process, whether the model is complex or simple (see Figure 1). Financial, economic, statistical, and commercial functions may need to collaborate. Model development proceeds as follows:

Scope the model. Prepare the model Scope and the Delivery, Data, and QA Plans. Explain why an SCM is needed and how it will be used. Determine design, complexity, data, delivery, and resource needs, and whether to create the SCM internally or procure it. Confirm stakeholders, timeframes, governance, and QA. The model Scope then becomes the model Specification (inc. Design), which guides technical development.

Specify the model. Create the SCM model Specification (including Design): codify the model's inputs and outputs, set out the major computations and formulas, and describe its general architecture. Update the Data, Delivery, and QA Plans before starting the build. The SCM Development Guidance helps prepare the Delivery, Data, and QA Plans and the model Specification and Design.

Develop the model. Good modelling practices reduce risk and improve the SCM's usefulness; these principles should guide development in line with the model Specification. The SCM Technical Build Guidance outlines model development best practice. The model developer should self-test before formal QA and testing.

Perform formal QA and testing on the model. QA and testing needs will have been set out in the model Scope and QA Plan, and are carried out at this stage of model development. HMT's 'Review of Quality Assurance of Government Analytical Models' and the Aqua Book describe QA.

Govern and control the model. The model may be used once it is designed, tested, and approved. At this stage, governance and control methods are needed to assure the SCM's viability throughout its lifespan.

The Cabinet Office has created the following Tools and Templates to facilitate SCM development and good practice:

Initial Model Assessment Tool – informs the SCM development methodology and the level of QA/testing required;

Scoping Template – proforma template with critical questions to structure SCM Scope;

Planning Template – project management help to schedule SCM activities and monitor roles, responsibilities, delivery risks, and project progress;

QA Plan Template — for formalising SCM QA and testing operations throughout model development;

Development Checklist – checklist for QA and governance at each stage of model development;

Specification Template Example — main headers with content samples to guide model Specification (inc. Design);

SCM Build Template – model build template with best practice features (error-check network, deadlines, style guide, etc.);

Book of Assumptions / Data Log Template – customisable template for recording SCM data and assumptions and directing data gathering;

Good Practice Build Tools – toolkit to organise and automate SCM good practice review;

Version Control Log – template utility to manage and record SCM changes;

User Guide Example – important headers and content samples for a model User Guide;

Testing Procedures – procedural instructions for testing an SCM (also includes QA Report and Test Memos).
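As a rough illustration of what a Version Control Log captures, here is a minimal Python sketch. The field names are assumptions for illustration, not the actual columns of the Cabinet Office template:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VersionEntry:
    version: str
    changed_by: str
    change_date: date
    description: str
    qa_checked: bool = False  # set once the change has passed QA

log: list = []

def record_change(version, changed_by, description, qa_checked=False):
    """Append a change record so every SCM revision stays traceable."""
    entry = VersionEntry(version, changed_by, date.today(), description, qa_checked)
    log.append(entry)
    return entry

record_change("v0.2", "analyst", "Updated labour day rates from market data")
```

Even for a spreadsheet SCM, the same fields kept on a dedicated log sheet give reviewers a traceable history of who changed what, when, and whether it was QA'd.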

SCM preparation

Structured approach

After deciding to create an SCM, it is vital to design it well and take a disciplined approach to development by:

Completing an Initial Model Assessment (IMA) to identify the criticality and complexity of the SCM and, as a result, the related requirements for its governance, quality assurance, and testing; and outlining the SCM's purpose and functionality, data requirements, high-level design, stakeholders, and delivery dates.

The model Scope and the Delivery, Data, and QA Plans should set out how the model will be developed. These documents connect the requirements to the SCM's development, and should be prepared for each model individually.

SCMs supporting critical decisions may be more complex, need more granular data, and have more advanced functionality. Complexity increases development time and the risk of error, so SCMs should be developed with proportionality in mind.

An Initial Model Assessment (IMA) can help scope SCM creation, QA, and testing. Both the criticality of the decision and the complexity of the SCM must be considered: how deep the analysis must be, how well-defined the service is, the accessibility and robustness of data, and the availability of suitably skilled people. The Cabinet Office has created an IMA Tool for this evaluation, based on the Cabinet Office Tiering Tool, which provides a first assessment of a decision's commercial criticality.

Considerations for:

Procedures and Controls – how the SCM should be maintained and regulated throughout its lifespan, including what QA and testing should be applied;

Roles and Responsibilities – what are the roles and responsibilities across the SCM's lifecycle? This includes assessing qualifications and experience.

When preparing the model Scope, the factors outlined in Figure 4 should be considered proportionally in the context of the criticality of the SCM. The model Scope should be owned by the Model Senior Responsible Owner (Model SRO), who has overall responsibility for the SCM, including its development and use. Once prepared, the model Scope, Delivery and QA Plans should be agreed and signed off via appropriate governance.

Defining resource needs

Qualifications and experience

The model Scope determines resource needs. All models need skilled, experienced people with enough time and resources; both are critical to building good-practice, risk-managed models. Figure 5 shows typical roles to consider when allocating resources. These are roles, not job titles, and one individual may fill several; in less complex models the architect and developer may be the same person.

Differentiate between model developers and QA/testers.

Assess if internal resources have the expertise and experience to enable SCM development and testing, or whether external resources are needed. No universal professional credentials cover model building and testing, although analytical, commercial, financial, and economic capabilities are needed to construct a solid SCM. Individuals responsible for financial input should have suitable financial degrees and costing expertise; statisticians should have applicable qualifications.

Procuring an SCM


If internal resources are limited or unavailable, external service providers may be used to develop the SCM. Before procuring all or part of an SCM, consider the following:


What modelling qualifications do service providers need to create and evaluate SCMs for the public sector?

How acquainted are service providers with the target market and do they have the right technical skills?

When will the model Specification and Design, draft model, tested model, and User Guide be delivered?

Could the service providers also be potential suppliers, creating a conflict of interest?


How will service providers communicate throughout the model design process to ensure needs are properly reflected? Will this be agreed via a thorough model Specification (inc. Design)?

How will model design modifications be agreed upon and accommodated throughout development?


Do service providers develop SCMs to best practice?

Will intermediate SCM versions be provided for discussion, and how will comments be incorporated?

Will service providers populate the SCM with data? If so, what data?

Are strategies in place to obtain and make the customer’s data available?

Will delivery include a User Guide, Technical or Developer Guide, Book of Assumptions/Data Log, and training materials, if needed?

Will software add-ins (e.g. for Monte Carlo simulation) be needed for SCM development? Are there licensing implications?


What will the tests cover and how thorough will they be?

Will extra software be utilised for Verification (e.g. logic testing) and what results will be provided?

What test documentation (e.g. Test Memos) will be provided?

How many testing “cycles” (where bugs are resolved and re-tested) are expected and supported?

How will the model’s suitability be verified?


Will the service provider run the model, and if not, what training and support will be provided?

Is there enough internal experience to manage service provider questions or anomalies?

Who owns the SCM’s IP? Are there constraints on distribution, usage, or redevelopment?

What’s the post-delivery bug-fixing and refinement support like?

If the SCM is password-protected, will all passwords be provided?
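On the Monte Carlo question above: a risk-adjusted cost range does not strictly require a commercial add-in; Python's standard library can run the same simulation. A sketch using triangular (low, most likely, high) estimates per cost line (all figures hypothetical):

```python
import random

def simulate_total_cost(cost_lines, runs=10_000, seed=1):
    """Monte Carlo over triangular (low, most likely, high) cost lines;
    returns the simulated totals sorted so percentiles can be read off."""
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    return sorted(
        sum(rng.triangular(low, high, mode) for low, mode, high in cost_lines)
        for _ in range(runs)
    )

def percentile(sorted_totals, p):
    return sorted_totals[int(p / 100 * (len(sorted_totals) - 1))]

# Hypothetical (low, most likely, high) estimates for two cost lines
lines = [(900_000, 1_000_000, 1_400_000), (2_000_000, 2_500_000, 3_500_000)]
totals = simulate_total_cost(lines)
p50, p80 = percentile(totals, 50), percentile(totals, 80)
```

The P50 value is the figure the Playbooks expect SCM comparisons to use; the gap between P50 and P80 gives a feel for the cost risk being carried.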

Contextual Advice

SCM development takes time and resources. Ensuring that competent people construct well-designed models using best practice helps this investment yield higher returns by increasing accuracy, efficiency, and knowledge across the organisation.

This guidance focuses on identifying and managing SCM risks, and on producing models that deliver insight. It aims to raise awareness of those risks and to provide a systematic framework that helps control them as models develop.

It draws on HMT's Aqua Book, the Review of Quality Assurance of Government Analytical Models, and the Green Book, and aims to give professionals building and overseeing SCMs better operational insight.

This guidance promotes a rigorous and consistent approach to managing SCMs and their risks. The benefits of a structured approach to model development include:

Risk Identification – helps ensure risks are detected early and mitigated;

Risk Prioritisation – ensures risks are appropriately prioritised and handled throughout model creation and use;

Increased Efficiency – reduces redundant effort via process optimisation, automation, and less repetition and rework;

Greater Focus – follows a defined development strategy with frequent stakeholder involvement, sharpening the model and keeping it aligned with user needs; and

Confidence – helps control model development risk and increases confidence in the decisions the SCM supports.


This guide applies to all SCMs and their files, such as feeder models or data manipulation and analysis files.

It focuses on Excel-based spreadsheet models. The fundamental ideas also apply to SCMs constructed utilising databases.

The guidance should be applied in proportion to an SCM's risks and use. The Cabinet Office's SCM guidance includes:

SCM Guidance Note – explains what SCMs are, when and why contracting authorities should prepare them, and critical development and procurement issues;

Initial Model Assessment Tool – a tool to guide SCM development and help contracting authorities determine a proportionate level of QA and testing. It is based on the Cabinet Office Tiering Tool, which provides a first assessment of a decision's business criticality;

SCM Technical Build Guidance – outlines best practice for constructing SCMs. It is technical and intended for SCM builders.

The Cabinet Office has also provided practical Tools and Templates to facilitate SCM development and reinforce best practice. These, and the preceding guidance, are aligned with the phases of the model development lifecycle.

Responsibility and Accountability

The Model SRO is responsible for the SCM and its use. In practice, enabling a Model SRO to approve an SCM may be best accomplished by adopting a formal framework for SCM development within the contracting authority. This guidance and the Sourcing Programme's instructions, tools, and templates help inform that framework.

In addition to having a framework, SCMs must be produced by skilled and experienced staff in accordance with it. Someone must supervise model development and ensure conformance with the framework. While the Model SRO could fill this role, delegating supervision may be preferable.

This guidance will help those overseeing development determine what to look for, and the Sourcing Programme's SCM tools and templates (see Section 10) will help them fulfil their duties.

The advice should be applied in proportion to the risks of a given SCM and its use. The Model SRO should approve any decisions on how it is applied.

Should Cost Model Development Lifecycle

Should-Cost Model Definition

Should Cost Models anticipate what a service, project, or programme 'should' cost throughout its life. As summarised below, SCMs may vary in design as procurement lifecycle needs change. This guidance uses Should Cost Model (SCM) to refer to all of them.

SCMs anticipate public works project costs throughout the construction phase and design life, including risks and profit, and the effect of risk and uncertainty on cost and schedule. It is the whole-life cost that matters, not the initial purchase price.

SCM helps analyse public service delivery model possibilities.

In-house — This is the total cost to offer a service utilising internal resources and knowledge. It covers acquisition, maintenance, and capability costs.

Expected Market Cost is the whole-life cost of a service from an outside source. It contains risks and profits.

Mixed Economy – A delivery approach may include insourcing and outsourcing. In certain circumstances, a ‘Mixed Economy’ option may be utilised to assess the cost of the service.

SCMs should be utilised early to:

Inform the DMA's cost and non-cost criteria;

Understand the whole-life costs, risks, and opportunities of alternative options and scenarios.

Understand the effect of risk and uncertainty on cost and schedule to create more realistic budgets.

See Sourcing Playbook and Construction Playbook for evaluation tips.

Inform the initial business case (department and ALB Strategic Outline Case);

Inform bidder engagement and commercial strategy, including ways to incentivise whole-life value in the supply chain.

SCMs develop as more information becomes available and new needs arise. If pursuing external procurement, a new SCM, or a restructuring of a previously built SCM, may be needed to offer insight into supplier bids and safeguard the government from low-cost bid bias (see Chapter 10 of the Sourcing Playbook and Chapter 9 of the Construction Playbook). Early market engagement should be used to ensure the model structure allows comparison with market offers.

The SCM Guidance Note explains Should Cost Models and their business uses.


An SCM should have a well-defined lifecycle with stage-specific operations and controls (see Figure 2). Summary:

Proportionate – planning, designing, developing, and testing a model should be proportionate to the importance of the decision it supports;

Structured – process phases and responsibilities should be clearly defined and separated (e.g. the SCM build from formal QA and testing);

Sequential – model development should follow sequential stages, with one stage finished before the next begins (e.g. model design before development); and

Signed off – stages should be signed off before moving forward; checklists can help.

This guidance covers model design, development, testing, and use (see Figure 2). The SCM Guidance Note advises the following initial SCM planning activities:

Initial Model Assessment – This evaluates the criticality of the decision the model is meant to support and the inherent risk to guide the degree of governance, QA, and testing necessary to manage development risk. Cabinet Office created an IMA Tool for this evaluation. It’s based on the Cabinet Office Tiering Tool, which provides a first assessment of a decision’s business criticality;

Scope the model, including its aim, key features, costs, modelling approaches, and tools/software. The Scope should also include an initial Data Plan outlining provisional data requirements; a Delivery Plan outlining resource requirements, timelines, risks, and mitigations; and a QA Plan outlining QA and testing requirements (informed by the Initial Model Assessment) and how the model will be governed and controlled over its lifecycle.

As procurement develops and SCM needs change, the model may need to adapt. In some circumstances this can be done by developing the current SCM; in others a new SCM may be needed. In either case, the model development lifecycle should be structured (see Section 3.3).

Reuse, reconfigure, or rebuild

Existing decision-support models may be available. When starting a modelling effort, it is important to check whether existing models can provide useful insight; re-using an existing model can reduce development effort and provide efficiencies.

When assessing model compatibility, consider:

Relevance – how closely the current decision relates to the prior one in terms of costs and service offering;

Is the prior model's analysis detailed enough to support the present decision?

Does the existing model have the capabilities needed to support the current decision?

If directly comparable models are available, their re-use may be suitable. Partially compatible models, which share some similarities with the decision being supported, may need reworking before use.

A model's availability does not mean it should be used to support a decision. If existing models are incompatible, a new one may be needed.

Whether re-using, modifying, or constructing a new model, the proportionate approach below should be followed. Planning, development, and testing may already have been carried out when the model was first prepared, but re-using or changing a model still requires these steps to be revisited. For example, confirm that a previously prepared QA Plan is still appropriate for a re-used model rather than creating a new one.

Risk Management


A structured approach to SCMs helps control risk. Models are used as decision-support tools, and a wide variety of risks can lead to wrong decisions or other harmful effects.

The consequences of these risks are not limited to the financial, corporate, or economic impact of a bad decision. Non-financial consequences might be legal, compliance, or reputational, and a broad understanding of the risks is needed to manage them successfully.

Effective SCM risk management should be built into processes. It should be fostered and empowered by a good and supportive culture.

Best-practice SCM development, with a structured methodology, proportionate controls, consistent risk assessment, explicit documentation, ownership, and responsibility, manages risks across the model development lifecycle.

Purpose and design risks

Models are purpose-built: they employ particular computations to obtain specific results. If a model is used for unintended purposes, its results may not be adequate to support a conclusion. The design phase of model creation (Section 4) codifies the model's capabilities and constraints and provides a clear, visible record of the model's analysis, data, and operation.


Model error is broader than user error (where a model operator corrupts the model). Errors fall into several categories:

Data errors – incorrect source data, immature or poor-quality data, unsuitable data, unit conversion issues, data input mistakes, etc.;

Calculation errors – pointing, range, signage, logic, and overtyping mistakes, etc.;

Use errors – versioning, configuration, copy-paste, and operation problems, etc.;

Interpretation errors – a model's outputs are misread, or the model is used to support a conclusion it was not designed to support; and

Omission errors – model elements are missing because they were not scoped or built, or because input data is incomplete.

Data errors can be managed by preparing a Data Plan, using and maintaining a Book of Assumptions / Data Log (Section 4), and applying Quality Assurance and testing (Section 7). Good practice development (Section 6) and rigorous Quality Assurance and testing (Section 7) help reduce calculation errors. Developing the model to good practice, providing adequate documentation and training, and using a controlled file management system (Section 9) help reduce use errors. Clear model documentation, training, and effective user engagement (Sections 6 and 8) reduce interpretation errors. Omission errors can be controlled with good model preparation (see SCM Scoping in the SCM Guidance Note and Section 4) and QA/testing (Section 7).
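Several of the data-error checks described above (missing fields, unit mismatches, non-numeric values) can be automated before data enters an SCM. A minimal sketch; the field names and rows are hypothetical:

```python
def validate_inputs(rows, required_fields, expected_unit):
    """Collect basic data-quality issues before loading rows into an SCM:
    missing required fields, unit mismatches (a classic conversion trap),
    and non-numeric cost values."""
    errors = []
    for i, row in enumerate(rows):
        for field in required_fields:
            if field not in row or row[field] in (None, ""):
                errors.append((i, field, "missing"))
        if "unit" in row and row["unit"] != expected_unit:
            errors.append((i, "unit", f"expected {expected_unit}, got {row['unit']}"))
        value = row.get("value")
        if value is not None and not isinstance(value, (int, float)):
            errors.append((i, "value", "non-numeric"))
    return errors

# Hypothetical input rows: the second has a unit mismatch and a bad value
rows = [
    {"item": "concrete", "value": 120.0, "unit": "GBP/m3"},
    {"item": "steel", "value": "n/a", "unit": "GBP/tonne"},
]
issues = validate_inputs(rows, ["item", "value", "unit"], "GBP/m3")
```

Running checks like these on every data refresh, and logging the results, gives the QA trail the Data Plan and Book of Assumptions are meant to support.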


SCMs may include confidential information. The nature of government procurement, and the sensitivity of the data used to model scenarios (or of the scenarios themselves), means that inappropriate access to and use of a model must be managed. Risks include:

Hidden Data – hidden rows, columns, worksheets, comments, and VBA code may contain data and offer unintended access;

Protection Failure — unintentional access to sensitive information via poor protection or password failures (e.g. password not applied, too simple or ineffectual, or shared with an unauthorised user);

Access Control – granting data access via inadequate controls (e.g. keeping an SCM on a shared drive without limiting access rights, or emailing SCMs); and

Data Control – leaking sensitive data. Some of the model’s data may be off-limits (e.g. GDPR requirements).

Documenting a list of approved users, selecting and deciding on a secure environment to keep the model, and implementing access and sharing rules can assist lessen the risks (Section 9.2).


An unavailable SCM can cause delays which, in certain cases, might affect the contracting authority’s operations, finances, or reputation. Availability risks include:

Loss – file loss, file corruption, and loss of access to files, including password loss;

Knowledge Loss – loss of knowledge of how to run or maintain an SCM, including loss of key staff or documentation;

Incompatibility – software modifications or incompatibilities between development and user environments; and

Delay – late delivery due to poor planning, data availability issues, or development and QA/testing delays.

Adopting a regimented approach to file management (Section 9), producing model documentation (Sections 6.6, 6.7, and 6.9), confirming the operating environment before development, undertaking up-front planning (Sections 4 and 14), and developing models in line with good practice approaches (see SCM Technical Build Guidance) will help mitigate the above risks.


SCMs must reflect the full costs relevant to the decision they support, and they must be complete: they should cover the agreed scope and contain no unpopulated areas or incomplete estimates. Two types of omission error exist:

Partial omission – only part of the overall demand is incorporated in the model, for example not populating all model input years.

Full omission – a critical aspect is left out of the model. This error is not limited to model development: omissions include failing to identify a relevant cost while planning and designing the model, and not including aspects of the calculation logic in the model’s Specification.

Designing a Should Cost Model

Model Specification (inc. Design)

During the design step (see Figure 2), the model Scope is turned into an analytical plan that considers inputs, analytical procedures, and outputs (see HMT Aqua Book). Creating a model Specification (inc. Design) may help. Appendix IV: Designing a Model contains a procedural guide for creating a model Specification.

Preparing a model Specification (including Design) allows for debate on the model’s inputs, outputs, calculation logic, and structure. Obtaining customer, developer, and other key stakeholder agreement before development minimises the chance of iterating, changing, or updating a model throughout development. It ensures that a wide variety of stakeholders’ needs are met and that the overall model design is effective. Some features, such as model input and output templates, may be built in a spreadsheet-based programme (e.g. MS Excel).

Model design reduces risk. Documenting key choices about a model’s purpose, functionality, data sets, calculation logic, and architecture reduces design and interpretation risks.

Iteration may be needed to account for dependencies in the model Specification (incl. Design). If certain input data sets are unknown, more advanced modelling approaches, such as sensitivity or statistical analysis, may be needed to assist decision makers (see the Aqua Book for further information on uncertainty).
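Where input data sets are uncertain, the statistical analysis the Aqua Book points towards can be sketched with a simple Monte Carlo simulation. The example below is illustrative only: the triangular distributions, cost figures, and 25-year design life are hypothetical assumptions, not values from this guidance.

```python
import random
import statistics

def whole_life_cost(capex, opex_per_year, years, risk_factor):
    """Toy whole-life cost: build cost plus risk-adjusted operating cost."""
    return capex + opex_per_year * years * risk_factor

def monte_carlo(n=10_000, seed=1):
    """Sample uncertain inputs from triangular distributions and
    summarise the resulting whole-life cost distribution."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        capex = rng.triangular(90, 130, 105)     # £m: low, high, most likely
        opex = rng.triangular(4.0, 7.0, 5.0)     # £m per year
        risk = rng.triangular(0.95, 1.25, 1.05)  # risk/uncertainty multiplier
        results.append(whole_life_cost(capex, opex, years=25, risk_factor=risk))
    results.sort()
    return {
        "p50": results[n // 2],        # median estimate
        "p80": results[int(n * 0.8)],  # a more prudent budgeting point
        "mean": statistics.fmean(results),
    }

summary = monte_carlo()
print({k: round(v, 1) for k, v in summary.items()})
```

Presenting a P50/P80 range rather than a single figure helps decision makers create more realistic budgets, as described above.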

 Key Components of a Model Specification inc. Design

The model Specification (inc. Design) will build upon the model Scope prepared during the initial planning phase (see Figure 2). Whereas the focus of the model Scope is setting out what the model is required to do, the focus of the model Specification (inc. Design) is setting out how this will be achieved. It provides the blueprint for developing the model and, as such, will be more granular than the model Scope.

QA, Data and Delivery Plans

During the design process, initial QA, Data, and Delivery Plans should be evaluated and revised to reflect any changes. The QA Plan may not have accounted for sophisticated features or VBA, which might affect testing, resource, data, and delivery timeframes.

As design detail increases, all data needs must be recorded in the Data Plan. A Data Plan reduces the chance that a model needs unavailable data by codifying and discussing it. It should include the time period the data must cover, the format it must be in, where it will be obtained, handling and storage requirements, processing needs, maturity and quality levels, when it is needed, and who will deliver it and do any preprocessing. It should examine significant hazards (e.g. around data availability). As model development advances, the Data Plan will become a Book of Assumptions/Data Log that records the model’s data (see Section 6.9).

Preparing and sharing QA, Data, and Delivery Plans allows stakeholders to discuss and commit to resources and schedules, enhances delivery risk awareness, and formalises agreement. Throughout model development, they should be updated (e.g. owing to model circumstance changes).

Should Cost Model Development

Overview of Development

SCM development involves constructing the model, filling it with data, and developer testing before official QA and testing.

Investing time in model planning and design reduces development risk by:

  • helping decrease development modifications or iterations once model building begins; and
  • ensuring the model is fit for purpose and efficiently built.

Good model development practices decrease model design flaws and help identify them (see SCM Technical Build Guidance). Transparent and rational calculations will enhance model QA and testing and decrease use errors.

Good practice

The SCM Technical Build Guidance helps construct Should Cost Models. It helps control risk, boost transparency, facilitate QA and testing, and improve model usability. Models should be:

Disciplined – SCM development should follow a disciplined approach that starts with thorough upfront planning;

Directional – logic should flow from top to bottom, left to right, and front to back in an SCM;

Aligned – the SCM should be structured and presented consistently;

Separated – the SCM should be modular and clearly separate inputs, calculations, and outputs;

Transparent – the SCM should be straightforward and give transparency over calculations and logic, with no hidden data or calculations;

Integrity – the SCM should be accurate, neutral, and transparent, with no manipulation of calculation logic to achieve a desired outcome; and

Quality assured – QA should cover the whole SCM lifecycle, from upfront planning through in-model checks to formal QA and testing.

The SCM Technical Build Guidance tells the model developer how to build the model, focusing on:

Workbook considerations (organisation, features, and protocols);

Worksheet structure (sheet design, principles, and presentation);

Input Sheets (data organisation and source referencing);

Calculation Sheets (layout, concepts, formula building); Formulas and Functions (including those to avoid);

Output Sheets (use limitations and end-user communication);

Checking and Control (in-model error checks and overall QA); and

Other Governance Elements (documentation and version control).


During development, the model will be populated with data. This may not be the final dataset the model uses to provide its analysis, but rather test data used during development. A Book of Assumptions / Data Log (drawing on the Data Plan) should be updated and maintained to trace the provenance of the model’s data. This helps model QA and testing and gives users a comprehensive record of the model’s data. Some QA and testing may not be possible without the model’s actual data.

Model Integrity

The SCM should have built-in, transparent checks. When deciding which checks to include, developers should consider the error-prone sections of the model or its calculation logic. Examples include:

Totals – do the numbers add up correctly;

Outliers – balances outside the anticipated range;

Input types – incorrect input types, such as letters instead of numbers; and

Completeness – all expected inputs have been entered.

These model checks let the developer undertake some self-testing. They warn against incorrect model usage. They may indicate that the model continues to function within the checks once in use and flag erroneous inputs or outputs to the user. They assist control design and error risk by ensuring computations are within checks and haven’t been distorted.
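The four check types above can be expressed in any language; the sketch below illustrates them in Python using hypothetical cost inputs (the tolerance, range, and labels are illustrative assumptions, not prescribed values).

```python
def check_totals(line_items, reported_total, tol=0.01):
    """Reconciliation check: the line items should sum to the reported total."""
    return abs(sum(line_items) - reported_total) <= tol

def check_range(value, low, high):
    """Outlier check: flag balances outside the anticipated range."""
    return low <= value <= high

def check_numeric(value):
    """Type check: reject text (or booleans) where a number is expected."""
    return isinstance(value, (int, float)) and not isinstance(value, bool)

def check_complete(inputs):
    """Completeness check: return the labels of any unpopulated inputs."""
    return [label for label, value in inputs.items() if value is None]

inputs = {"labour": 120.0, "plant": 45.5, "materials": None}
print(check_totals([120.0, 45.5], 165.5))  # True: totals reconcile
print(check_range(9_999, 0, 1_000))        # False: outlier
print(check_numeric("abc"))                # False: wrong input type
print(check_complete(inputs))              # ['materials']: missing input
```

In a spreadsheet-based SCM the equivalent checks would be implemented as formulas in a dedicated error-check network, with results rolled up to a single pass/fail flag.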


As the model is built, the developer should frequently test it. Check formulas against the model Specification’s calculation list (Section 4) to verify they provide the expected results. Before releasing the model for formal QA and testing, the developer should evaluate the computation logic and correctness of formulas. The developer should run a variety of data sets through the model to ensure that the computation mechanism works as planned and delivers mathematically valid conclusions. Section 10 discusses using checklists to organise developer model testing.


User Guide

The model needs a User Guide. It explains the model’s purpose, limitations, and operation. It should be as short and simple as possible, yet cover how to interact with the model:

Importing/updating data – the guide should tell the user how to import data properly, where to put it, and how to delete data from prior iterations;

Running the model – the guide should explain how to run sensitivities or scenarios and provide outcomes; and

Interpreting outputs – the guide should support those responsible for producing or using model outputs. It should underline the model’s uses and limitations and give direction on interpreting results.

The User Guide will assist QA and testing, but its main aim is to train model users once the model has been released. User Guides reduce key-person dependency and help prevent model misuse.

Technical or Developer Guide

Not every model needs a Technical or Developer Guide, but one is essential for models that:

  • may be iterated or altered over time;
  • are intended to be used, or will be needed, for a long duration;
  • are complex or include advanced features, such as VBA;
  • support crucial or business-critical decisions; or
  • are externally procured, or where future access to the model’s developer is unlikely.

A Technical or Developer Guide is not a User Guide. It explains the technical characteristics of a model to developers to support maintenance or updates, and it has QA and testing uses. It should cover how the model was created, how it functions, and how to interact with it (focusing on the elements that are most complex or atypical and so require the greatest clarification). Technical or Developer Guides may also cover how the model can be modified or extended. If a model cannot be reliably changed, it may be rendered useless.

Model changes increase error risk. This risk rises if modifications are made after the initial development or by a different developer. A Technical or Developer Guide may help lessen this risk, but any modifications should be resubmitted for rigorous QA and testing.

Model reuse

User Guides and Technical or Developer Guides are helpful when using an existing model to make a choice. Model scoping and other planning tasks, such as QA planning, must be completed, and the model’s alignment to requirements must be assessed (see Section 8.5). When previously built models aren’t matched with current needs, establishing a new model may be more suitable than modifying it. Reuse may be suitable if the current model closely matches the needs.

Data Log

Preparing a Data Plan, which becomes a Book of Assumptions/Data Log as a model is constructed, is recommended practice (see Section 5.3).

In the absence of a Book of Assumptions/Data Log, it may be difficult or impossible to verify data provenance. A Book of Assumptions / Data Log is a list of a model’s data, assumptions, and their prominent characteristics. It includes:

  • Reference No. – to relate to model inputs;
  • Data Area (e.g. salary inflation);
  • Input Label – the data’s label in the model;
  • Description – a description of the data;
  • Rationale – why the data/data source is used;
  • Included Assumptions (e.g. inc. VAT);
  • Measurement Units (£k, $m, #);
  • Base Year – the economic conditions base year;
  • Filename/Path – source path or files;
  • Source/Provider – the data’s source or provider;
  • Data Date – how recent the data is;
  • Date Supplied – the date the data was provided;
  • Owner – who owns the data/assumption;
  • Last Reviewed – the date of the latest data review;
  • Next Update – the date the data is due to be updated;
  • Validity – any known data validity periods;
  • Issues – any known data issues;
  • Manipulation – any pre-processing done;
  • Maturity Level – an assessment of data readiness; and
  • Maturity Plan – how the data will mature over time.

The user benefits from the model’s Book of Assumptions/Data Log. It offers an overview of the model’s data and its source. It permits evaluation and challenge of the model’s source data and input assumptions, as well as how it is utilised to create results. It reduces the danger that a model’s outputs can’t be explained if its underlying data becomes disconnected. The Book of Assumptions / Data Log improves user comprehension of the model and drives auditability and data traceability.

In the absence of a Book of Assumptions/Data Log, formal QA and testing will take longer, as queries about data provenance must be raised and resolved. Such queries might delay the model’s delivery and reduce its effectiveness.
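To illustrate how a Book of Assumptions / Data Log supports provenance queries, here is a minimal sketch in Python. The fields are a subset of those listed above, and the entry values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DataLogEntry:
    ref: str            # Reference No. – to relate to model inputs
    data_area: str      # e.g. salary inflation
    input_label: str    # the data's label in the model
    units: str          # £k, $m, #
    base_year: int      # economic conditions base year
    source: str         # source or provider
    owner: str          # who owns the data/assumption
    date_supplied: str
    assumptions: str = ""   # e.g. "inc. VAT"
    known_issues: str = ""

def trace(log, ref):
    """Trace a model input back to its provenance record, or None if unlogged."""
    return next((entry for entry in log if entry.ref == ref), None)

log = [
    DataLogEntry("A-01", "salary inflation", "Annual salary uplift", "%",
                 2022, "ONS wage growth series", "Finance team", "2022-06-01"),
]
print(trace(log, "A-01").source)   # ONS wage growth series
print(trace(log, "Z-99"))          # None: provenance cannot be verified
```

Every input in the model should carry a reference number so that this lookup is always possible; an input that returns no record is exactly the situation the log exists to prevent.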

Construction Approval

Before building the model, approval that it meets user needs should have been sought. Once developed, and before formal QA and testing, the model should receive user approval (e.g. via User Acceptance Testing). Circulating the draft model before formal QA and testing allows for comments. This reduces the chance of last-minute alterations being requested without enough time to evaluate them.

Before submitting for formal QA and testing, the model developer should confirm that the model meets the model Specification and has been self-tested. Preparing a checklist (see Section 10.8) for the developer to complete may help ensure thorough checks and record the model’s self-testing.

Quality Assuring and Testing

QA and Testing Overview

QA should be done throughout the model lifespan, not only during the Review and Formal QA stages (see Figure 2). Initial model planning should include testing, additional QA, and controls. This should be reviewed during the model’s lifespan and after any scope or usage modifications. Planning beforehand ensures that QA and testing aren’t left to the last minute.

The model assurance actions should be proportional to the model’s risk and the criticality of the decision or problem it supports. Complex models have a greater error rate and risk. Complex/sophisticated models that support key choices should be tested more thoroughly.

The IMA Tool suggests QA and testing activities based on model criticality and complexity (see Section 3.2.2). Appendix I: Quality Assurance presents an overview of the many kinds of QA and testing activities indicated by the IMA Tool. The Sourcing Programme’s SCM Testing Procedures provide procedural-level instructions for performing these tests. The Aqua Book and the Review of Quality Assurance of Government Models discuss government QA and testing.

Quality assurance personnel should be qualified and experienced. The Review of Quality Assurance of Government Models notes that skill and experience are essential. Ideally, quality assurers should have skills and experience equivalent to or greater than the model developer’s. Quality assurance and testing are crucial and should not be delegated to junior staff by default.


Quality Assurance involves more than checking for mistakes and specification alignment (verification). It must also confirm that the analysis is acceptable for its usage (validation). The Aqua Book discusses SCM-relevant verification and validation of Government Analysis. Model testing requires evaluation, documentation, and sign-off.

Review includes applying agreed test techniques to the SCM and its artefacts. The QA Plan should include suitable tests based on the Initial Model Assessment. Each tester should be qualified and experienced, and each test should be given enough time to be done properly, including time for multiple ‘review cycles’. These may be needed if model developers did not resolve quality assurance concerns or introduced new ones. Given the prevalence of model flaws, a single ‘review cycle’ is generally insufficient.

QA Reports should contain test descriptions and outcomes. Test memos should be thorough enough for independent replication. Copies of the reviewed SCM, any subsequent versions resulting from ‘review cycles’, all formal correspondence between the model developer and quality assurers, all supporting artefacts (e.g. the model Scope, Specification, User Guide and Book of Assumptions / Data Log) and any subsequent versions, and details of all tests performed and their associated results should be stored in accordance with file management procedures (see Section 9).

QA and testing must be signed off before the model or its results are used. Unresolved problems from QA and testing may remain. Those signing off on the model, and ultimately the Model SRO, must determine whether any unresolved concerns must be rectified before the model is used. Unresolved problems should be disclosed in QA reports and referenced in the model. As part of continuous QA, plans to remediate unresolved concerns should be reviewed periodically.

Should Cost Model Use


Developing governance and advice to support the SCM will assist users run it, give a framework to manage updates and modifications, and guarantee the model stays fit for purpose. Once a model is issued, it should undergo continual QA and testing. Notably, model documentation, such as the model Specification and Book of Assumptions/Data Log, should be updated to reflect model modifications and proper QA and testing.


This step involves handing over the model to persons who will use it to make decisions. Developers should examine what end users need to know before a handover. Model User Guides should provide this operational information. As part of a handover, important users should be shown how to:

Clear old data sets from the model and add new ones;

Update the model’s version control log and configuration control log (see Sections 10.4 and 10.5);

Run scenarios, sensitivities, or altered analytic settings, if the model supports them (e.g. changing inputs or flipping switches); and

Customise outputs (e.g. a chart’s time series), for those responsible for producing them.

SCM tracking may influence choices with multi-year consequences. Consider how a model will interact with long-term planning, budgeting, and forecasting when using it. When approaching the usage phase, how to monitor model performance versus its assumptions should be addressed. When a model is used, it’s important to evaluate whether to recalibrate it to account for environmental changes.

Stakeholder mapping

The model Scope identifies stakeholders engaged in a model’s creation, development, or use. Appendix II: Roles & Responsibilities provides examples. When a model is used and a decision is made, the stakeholder population may be broader (e.g. those in Finance or Commercial). Identifying and engaging with relevant stakeholders is advised to promote transparency and performance monitoring.

Revisions, reuse, sign-off

Scope and Specification of a model describe the sort of choice it supports and what it can and cannot perform. While many procurements need a unique model, an existing model may support a comparable conclusion (see Section 3.3). In some cases, the model may need re-configuration and fresh data to justify the revised judgement.

Consider design constraints, time to completion, and resource availability when deciding whether to rework a model or create a new one (will there be sufficient resources to oversee the design, development, QA, and testing of the model?). The fastest option is usually chosen.

If a model is reconfigured, the risk that it stops working should be handled. Adding features and modifying the model (e.g. to enable new cost categories) poses a risk that the model gets corrupted and does not give accurate information.

The model development lifecycle must be followed when revising a model. Copies of the model Scope and Specification (inc. Design) should be reviewed, changed as needed, and approved, along with the Data, QA, and Delivery Plans. Any model updates should follow the SCM Technical Build Guidance and be developer-tested before formal QA and testing. Even if the model underwent QA and testing before first use, it should be retested to address the new data set, check for errors, and ensure that changes meet the new objective.

Consider whether model revisions or usage have changed its risk profile (see Section 3.2.2 for the Initial Model Assessment). If a model has been altered to incorporate a new type of advanced analysis, the original QA Plan may not be adequate for the next iteration, and a more complete set of tests may be needed. Particular care is needed where limitations of the original model were identified or quality assurance recommendations were not implemented (see Section 7.2.4).

Revisions can be riskier than starting from scratch, particularly if there is no development continuity or Technical or Developer Guide (see Section 6.7). This is why it is important to apply the same development lifecycle and risk management good practices when reconfiguring a model.


File Management


Models may be business-critical or contain sensitive data, so only a few people may be permitted to access them. Model creation involves numerous files, including the model Specification, Book of Assumptions/Data Log, source data files, QA Reports, and model versions. All should be controlled. This guide has focused on how to design a model to control risk and increase efficiency. This section discusses controlling file risks and access.

Managing files

Sensitivity Control/Protective Markings – Consider both the files and model’s sensitivity. Where a protective marking is necessary, all files should be categorised and managed according to criteria for managing sensitive information (see Government Security Classifications and Guide to GDPR).

Version Control – model iterations, datasets, planning, and QA documentation should all be version controlled. Using a version control system to manage files is recommended. Version control logs (see Section 10.4) may record the version number, modifications from the previous version, and who created and authorised the changes.

File Locations – Store files in an agreed place and avoid emailing them. Planning, data, and model files should be separated in a logical folder structure. Previous versions or draughts should be in a separate folder. This boosts the model’s auditability, helps with access rights by restricting sensitive folders, and decreases the chance of using the wrong version.

All files should be kept in secure shared areas that only relevant people may see or write to. Files kept in places with broader access should be password secured to prevent unauthorised access and/or alteration (see Password Usage below).

Cell and worksheet protection reduces the danger of accidental corruption in spreadsheet-based SCMs (see SCM Technical Build Guidance).
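As a sketch of cell and worksheet protection, the snippet below uses the openpyxl library (an assumption on tooling; any spreadsheet library with protection support would do) to lock calculation cells while leaving input cells editable. The password is a placeholder and would be subject to contracting authority policy.

```python
from openpyxl import Workbook
from openpyxl.styles import Protection

wb = Workbook()
ws = wb.active

# Input cell: explicitly unlocked so users can still enter data
ws["B2"] = 100
ws["B2"].protection = Protection(locked=False)

# Calculation cell: cells are locked by default once sheet protection is on
ws["B3"] = "=B2*1.2"

# Turn on worksheet protection; locked cells can no longer be edited
ws.protection.sheet = True
ws.protection.password = "change-me"  # placeholder; follow authority password policy

print(ws.protection.sheet)         # True
print(ws["B2"].protection.locked)  # False – users can edit inputs
```

The same pattern scales to a whole SCM: unlock only the input ranges, then protect every sheet so calculation logic cannot be accidentally overwritten.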

Password Protection – passwords offer additional security for a model (subject to contracting authority policies). Worksheet- and workbook-level passwords may help safeguard data from being misused or corrupted by incorrectly entered data. Consider file-level encryption for sensitive files (e.g. PHI); however, this should be carefully evaluated and guidance obtained.

Passwords must comply with contracting authority rules. Passwords may make a model useless if lost or forgotten. Secure strings and a centrally maintained vault may lessen dangers.

File Backups – the contracting authority’s IT system should provide sufficient backups, and further steps should be taken as needed. On shared drives, backups should be in distinct directories and be version- and access-controlled.

File Archiving – All SCMs should save historical versions to support crucial decisions. Overwriting historical data, breaking data linkages, saving read-only files, and future programme compatibility should be carefully considered.

File Sharing — Controls for sharing SCMs should be determined case-by-case. Consider data sensitivity, version control, and a Non-Disclosure Agreement (NDA) if third parties are involved.

Some SCMs may be BCMs and need to be added to inventories. These records should contain the model developer, quality assurers, Model SRO, SCM development status, current QA status, and scheduled QA and testing activities.

Periodic Review – SCM and file management is not a one-time event. Certain actions should be performed at regular intervals or be triggered by events, such as retesting the SCM and reviewing file management policies, including access privileges, retention, and backup methods. Reviews may also evaluate the ability to recover archived or backed-up data.

Tools, Templates & Checklists


SCMs benefit from tools, templates, and checklists.

Tools may expand a model’s capability (such as spreadsheet add-ins for risk and uncertainty analysis) or automate its evaluation (e.g. tools that support Verification or Analytical Review).

Templates organise model development, promote good practice, and minimise development time.

Checklists help formalise requirements. As models move between development phases, checklists can tell the recipient about past checks or actions. They may help those managing model development (see Section 2.4) and drive benefits across the SCM lifecycle:

  • Efficiency – codifying, organising, and simplifying the process;
  • Effectiveness – promoting good practice and reducing rework;
  • Consistency – promoting completeness and homogeneity of approach; and
  • Auditability – recording activity and sign-offs.

Design templates

Model planning and design templates, including delivery and QA planning templates, give structure and promote consistency. They may increase efficiency and promote good practice. They may also help ensure that resource planning goes beyond model developers and incorporates other roles (e.g. quality assurance), and that resource needs reflect operational roles like Model SROs and customers.

The Sourcing Programme has created planning and design templates for SCMs. These include SCM Scoping, Planning, QA Plan, and Specification Template Examples. The SCM Build Template contains a Book of Assumptions/Data Log that may be used to create a Data Plan.

Model Build Templates

Model build templates, which may be used to design an SCM, offer various advantages:

  • Compliance – promoting good practice;
  • Efficiency – reducing the time needed to build and develop the model and to train model users;
  • Consistency – promoting style and clarity; and
  • Built-in good features (e.g. an error check network).

The Sourcing Programme created an SCM Build Template that may be adapted as needed.

Version Control Logs

Version control logs document model changes over time. All SCMs should keep them. As model modifications and updates are iterative, version control is necessary. This allows users to audit modifications and ensures they’re utilising the right version of SCM.

Sourcing Programme’s SCM Build Template contains a Version Control Log that may be changed and/or utilised as needed.
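A Version Control Log can be as simple as an append-only table. The sketch below is a minimal, hypothetical illustration in Python of the append-only principle; a real log would live in the workbook itself, as in the SCM Build Template.

```python
from datetime import date

def record_version(log, version, changes, author, approver):
    """Append-only: each release gets a new row; history is never overwritten."""
    log.append({
        "version": version,
        "date": date.today().isoformat(),
        "changes": changes,
        "author": author,
        "approved_by": approver,
    })

def latest_version(log):
    """The current release is always the last row."""
    return log[-1]["version"] if log else None

version_log = []
record_version(version_log, "v0.1", "Initial build of input sheets", "AB", "CD")
record_version(version_log, "v0.2", "Added risk calculation module", "AB", "EF")
print(latest_version(version_log))  # v0.2
print(len(version_log))             # 2 – prior versions retained for audit
```

Recording the approver alongside the author gives the audit trail that sign-off and periodic review rely on.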

Configuration Control

Configuration logs capture model configurations, such as sub-option combinations. While configuration control may fit in a version control log, a standalone log is recommended. This may be important for SCMs with flexible configurations. Configuration control makes model configuration auditable and helps users verify their SCM is properly set up.


Automated SCM testing tools (e.g. for Logic Testing and Analytical Review) are crucial to successful risk management. Their use boosts test efficacy, decreases test time, and enables tests that could not otherwise be performed. Use of these tools, indicated for more rigorous testing, may require wider considerations, such as conformity with the contracting authority’s IT policy, how users will be trained, and how testing will be done and recorded.

Logic Testing tools are often called Model Auditing or Verification tools. Model maps (a diagrammatic depiction of a model’s components and their connection to neighbouring elements), unique formula listings, and additional reports such as VBA, Named Range, and External Link listings may aid model testing. These tools are typically purchased, downloaded, and installed from vendor websites.
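A simplified version of a unique formula listing can be produced with a few lines of code. The sketch below uses the openpyxl library (an assumption, not a tool named by this guidance) and builds a tiny in-memory workbook for demonstration; auditing a real SCM would load the file with load_workbook instead.

```python
from openpyxl import Workbook

def unique_formulas(ws):
    """Map each distinct formula to every cell where it appears –
    a simplified version of the 'unique formula listing' such tools produce."""
    found = {}
    for row in ws.iter_rows():
        for cell in row:
            if isinstance(cell.value, str) and cell.value.startswith("="):
                found.setdefault(cell.value, []).append(cell.coordinate)
    return found

# Build a tiny demonstration workbook
wb = Workbook()
ws = wb.active
ws["A1"] = 10
ws["A2"] = 20
ws["B1"] = "=A1*2"
ws["B2"] = "=A2*2"
ws["C1"] = "=SUM(B1:B2)"

for formula, cells in unique_formulas(ws).items():
    print(formula, cells)
```

Real auditing tools go further, normalising copied formulas so that =A1*2 and =A2*2 count as one pattern, which is what makes review of large models tractable.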

Practice Checklist

Good practice checklists give a disciplined way to validate that an SCM was constructed correctly (e.g. against the SCM Technical Build Guidance). They should not replace other types of QA and testing, but should promote completeness and verify compliance.

The Sourcing Programme created an SCM Good Practice Build Toolkit to frame and facilitate a Good Practice Critique.

Checklist Development

A good risk management approach should include a checklist that confirms certain tasks, such as developer testing, have been completed. It helps SCM stakeholders communicate actions, reinforces good practice, and identifies areas on which to focus review. It enables people to do QA in an auditable, disciplined, documentable fashion.

The Sourcing Programme created an SCM Development Checklist that may be utilised during model development.


This guidance contains good practices for developing a Should Cost Model (SCM) in Microsoft Excel. Many of the concepts apply to other software. It discusses good practice but not SCM content.

The Sourcing Programme designed an SCM Build Template that incorporates good practice recommendations. It may be customised for SCM development.

Why have Guidance

This document helps contracting authorities construct SCMs consistently. It codifies and documents good practice concepts and methodologies for the model under construction.

Good practice improves SCM quality, reduces development and testing time, improves usability, increases transparency, and reduces error risk. Good practice benefits users and contracting authorities.

Who is this Guidance for

Should Cost Models (SCMs) demand skills and expertise to build. This guidance assumes proficiency in MS Excel. Those commissioning SCMs must ensure that key staff are suitably qualified and experienced.

Principles of Good Practice Model Development

Should Cost Models (SCMs) should be organised, logical, and easy to understand. An organised method of developing an SCM should be used, starting with a broad overview and working down to specifics. There should be no manipulation of the calculation logic to achieve a desired outcome in the SCM.

Structure of this Document

There is an introduction to SCM Planning and Design, followed by a discussion of good practices for the more structural parts. These include: Workbook Considerations, covering organisation, features, and protocols; Worksheet Structure, covering sheet design, principles, and presentation; Input Sheets, covering the arrangement of input data and source referencing; Formulas and Functions, including those to avoid; and Checking and Control, covering in-model error checks and overall quality assurance.

What to keep in mind while designing your workbook


This advice applies at the workbook level, but much of it also affects individual worksheets.

Separate and Organise Worksheets

Worksheets in a workbook should be organised into distinct, well-structured categories. Sections for each of these types of worksheet should be properly labelled, as indicated below:

Figure 2: Workbook Organisation

Having separate worksheets for each category helps reduce the possibility of formulae being overwritten during data entry and of inputs being missed while updating the SCM. A common breakdown of worksheet categories is:

  • Input sheets – these should contain only inputs to the model, and no computations that form part of the model’s logic;
  • Constants sheets – pages containing items utilised throughout the workbook, such as timelines and constants (e.g. days in a week);
  • Output sheets – these should contain links or ‘call-ups’ to computations or inputs in order to show outcomes in previously agreed formats.

Define Formats and Styles

It is essential that an SCM has an established format and colour scheme that aids model operators. To help the user understand how to use and interpret the model, it is essential that the colours and styles used for input cells, calculation cells, and output cells be clearly defined. A separate page in the workbook should record these formats and styles.

Adopt a consistent signage convention

Consideration should be given to adopting a uniform sign convention that facilitates model usability. Examples include the inflow/outflow convention, or using positives as the default (all numbers in the model are positive). Incorrect data entry can be discouraged by using error checks, Data Validation, and labelling such as “+ve Values Expected”.
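As an illustrative sketch of the ‘positives as default’ convention (the cell references, label, and validation settings here are hypothetical), an input cell can be guarded with Data Validation, a clear label, and an error check:

```
Label:            Annual Maintenance Cost (+ve Values Expected)
Input cell D10:   25,000
Data Validation:  Allow "Decimal", Data "greater than or equal to", Minimum 0
Check cell:       =IF(D10<0,"ERR","OK")
```

The check cell then feeds the model’s wider error-check network, so a negative entry is flagged even if Data Validation is bypassed (e.g. by pasting).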

Use Intelligent Formatting

Conditional Formatting in Microsoft Excel is powerful and may be put to good use for error checks. It may also be used to direct the user’s attention, for example to settings that need to be altered. Its use should be evaluated to confirm that it improves the model’s usability without degrading its performance.

Contents Page, Model Description and Disclaimers

Users of the model outputs should be able to see how the model was built using a Model Map, which should be included in every SCM. Using this map will assist in the testing and QA of the model, and it will also make navigating easier. The contents page should be linked to the various worksheets that are included in the package.

A disclaimer page or an appropriate Important Notice may be required for certain SCMs, especially if they are shared with other parties. In addition to the state of the model, this may contain information on who or what organisations the model is intended to be used by and the basis for distribution and usage.

Include a User Guide within the SCM

A model should be accompanied by a User Guide so that it can be used effectively. This should also cover how the model results are intended to be used. Certain models may additionally benefit from a Technical or Developer’s Manual (e.g. to support changes to the model).

Use Succinct and Logical Worksheet Names

A worksheet name should begin with an indication of the category it belongs to (e.g. Inputs or Calculations). Avoid spaces in worksheet names by using a hyphen, underscore, or similar: Microsoft Excel automatically surrounds referenced worksheet names containing spaces with quotation marks, which makes formulae more difficult to read.
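For example (the sheet names here are hypothetical), a space in a worksheet name forces Excel to quote the reference, whereas an underscore keeps it clean:

```
='Calc Personnel Costs'!D10      reference to a name containing spaces
=C_Personnel_Costs!D10           the same reference without spaces
```

The second form is shorter, easier to read in long formulae, and carries the category prefix (‘C_’ for a calculation sheet).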

Use Consistent Worksheet Headers

Designing a header that can be used on all worksheets is an important step. In addition to a consistent column header and a reference to the master and worksheet level error check status, it should include important information like the name of the worksheet and the model’s protective marking. A link back to the contents page should also be included to make navigating easier.


Label Columns Consistently

Column labels should help the user navigate the model. Labelling the same concept consistently across worksheets reduces user error, as does consistent capitalisation. Labels should be brief but descriptive, conveying the essence of the item.

Keep a UOM column

Every worksheet should include a column telling the user the units of measurement of each modelling element, such as currency, weight, or percentage.

For monetary units, the assumptions should be specified, such as whether they include VAT or inflation. If the model uses constant and outturn (or real and nominal) terms, a definition should be supplied. The treatment of VAT and inflation in output costs should be made explicit.

Maintaining a uniform unit of measurement throughout the model lowers mistakes, enhances comprehension, and simplifies QA and testing.

Horizontal Timelines

Timelines should span columns, not rows. Possible variations include presenting time-based results vertically, or easing user selection of time-based inputs. In certain cases, a vertical timeline driven by the horizontal master timeline may be needed.

Consistent timelines

Timeline positioning should be consistent in all worksheets with a time range. Each worksheet’s timeline should start and finish in the same column and reference the master timeline in the constants section. This improves SCM usability and reduces the risk of error.

Primary (e.g. months) and secondary (e.g. years) time periods should cover the same overall length. A model with a 36-month primary phase should have a 3-year secondary timeline.

Where timeframes are based on financial years, this should be indicated in the model or its documentation, and the header should make clear which years or periods are covered (e.g. 2021/22 as opposed to FY22). If the model mixes financial-year definitions, include the months in the header (e.g. Apr-21/Mar-22).

Appropriate schedules

Model timelines should be monthly or yearly. When monthly numbers aren’t needed, annualize timeframes to decrease model complexity. Calendar-based timelines are best (i.e. using date-based formats).

Reduce duplication

Inputs should never be duplicated in a model. For example, all Treasury Discount Rate computations in an SCM should reference a single record of the rate. This keeps the SCM easy to comprehend and update, and decreases the chance of inputs being missed.

Models should not repeat the same computation. Instead of being repeated, a computation should be done once and then ‘called up’ as needed. This enhances SCM performance and lowers the danger of inconsistent computations. Error checks are an exception, as they deliberately repeat computations (see Section 9.2).
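A hypothetical sketch of the ‘calculate once, call up as needed’ pattern (sheet names and cell references are illustrative): the total is computed once on a calculation sheet, and other sheets simply reference it rather than re-summing the underlying rows:

```
On C_Costs (calculation sheet):
  D50:  =SUM(D10:D49)          single Total Costs computation

On O_Summary (output sheet):
  D5:   =C_Costs!D50           call-up only; no re-computation
```

If the cost lines change, only the single SUM on C_Costs needs updating, and every call-up stays consistent automatically.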


Use a Single Workbook

SCMs should not be spread over many workbooks. Interlinked workbooks are hard to manage and introduce risk. If source data is held in a separate workbook, copy and paste it as values, removing formulae, rather than using links (see Section 5.4 on the use of external links).

Reduce Copies

Avoid multiple copies of the same SCM in favour of a scenario-based approach. Administering multiple models is difficult and resource-intensive, and increases the risk of models not being updated or maintained.

Avoid VBA

VBA, sometimes called ‘macros’, is a programming language within MS Excel. Native Excel formulae are more visible and accessible, and less error-prone. Avoid using VBA for SCM-related computations. If VBA is needed, its development should be limited to suitably qualified and experienced personnel (SQEP) and only undertaken if:

the VBA code follows good practice guidelines; and it is quality assured and tested by independent SQEP.

Avoid custom functions.

Avoid designing a VBA Custom Function to perform a bespoke calculation (e.g. to mark up a cost). Custom Functions lack the clarity of native Excel formulae, increase the risk of error, and are less accessible.

Worksheet Structure


This advice applies to all SCM worksheets.

Worksheets need a purpose

Each worksheet should have a specific purpose. The principal purposes are inputs, computations, and outputs; there will also be governance and usability worksheets (e.g. a contents page). These separate purposes should not be merged. Separating worksheet purposes decreases the risk of error, increases usability, and simplifies a model.

Logical worksheet flow

Worksheets should ‘read like books’: logic and computations should flow left-to-right and top-to-bottom. Avoid referencing computations located further down the worksheet; if needed, they are best performed on a separate worksheet and ‘called up’ as required. Any deviations should be marked.

The organisation of calculations across worksheets should minimise the flows of inputs and outputs between them. Significant inter-worksheet linking may indicate inadequate model design.

Consistently use columns

Each column on a worksheet should have a consistent function. Columns that consistently represent a given data point (e.g. unit of measure or time-independent values), and table titles placed in a single column, offer consistency to the user, making a model simpler to navigate and comprehend and decreasing the risk of error.

Freeze-pane totals

Displayed totals enhance comprehension and assist QA and testing. When summing time-series data, totals should be:

Separate – kept in a column outside of the time series columns (see Section 4.6); Frozen – positioned on the left side of the worksheet in the frozen pane so they’re visible while scrolling.

Separate constants and time series

When developing the model, distinguish time-dependent from non-time-dependent data. Columns should show whether data is time-dependent or independent. Time-dependent data should be organised using the model’s timeline sheet. Time-independent data, such as a discount rate, should sit in a column that does not overlap with time-dependent data or calculations. Below is an example layout for constant and time-series data.

Figure 4: Example Constant & Time-Series Layout

Arrange in Labeled Sections

Worksheets should be arranged in separate sections and subsections. Indenting subsection labels aids comprehension, and indenting columns aids navigation (e.g. using Ctrl + arrow keys).

Unique computations should be labelled to distinguish them. When repeating calculations, section headers may be needed (e.g. Division A Total Personnel Costs, Division B Total Personnel Costs). Dynamic labels connected to a master label may help.

Show all Model Elements

An SCM should not conceal anything: no model components should be hidden in worksheet columns or rows. Having all components accessible and visible enhances comprehension, promotes confidence, and decreases the chance of elements going unchecked or being overwritten. Exceptions:

Group & Outline may increase usability and navigation in Microsoft Excel.

Columns to the right of, and rows below, the whole utilised range may be hidden to ease model navigation; and

If data must be hidden, the whole worksheet should be hidden, not just sections of it.

True-value numbers

Numbers should be held and shown at their true value, and not formatted in ways that misrepresent them (e.g. displaying 000s as millions, or a negative value as positive). Error checks are an exception. Avoid rounding values up or down.

Don’t merge cells

Avoid merging cells: merging hinders cell selection and VBA, and unmerging cells can affect computations. Excel’s Centre Across Selection feature can replace cell merging.

Cell merging may also hinder verification or logic testing software and may need to be deleted to allow testing.

Input Sheets


Input sheets are where data enters the model. No calculations or analyses that feed the model’s calculation logic should be conducted on them.

Logical inputs

Input sheets should be structured logically. There are no fixed rules, but they are commonly organised by:

  • Nature – constant or time series;
  • Form – e.g. historical and present balances;
  • Function – e.g. costs and revenues, or by provider.

Inputs should also be organised to mirror the structure of the computations.

Input References

Input sheets should include a field for referencing the Book of Assumptions/Data Log (see Section 10.2). Figure 4’s ‘Ref.’ column refers to a Book of Assumptions/Data Log entry. This permits establishing the provenance of input data and understanding model computations and outputs in the context of their underlying data.

Include a comments column to offer more information instead of Microsoft Excel’s less visible Cell Comment function.

External Links

Manual data exchange between an SCM and other models or “feeders” is preferable to interlinked workbooks. If direct links are needed to transmit live data dynamically for computations, the decision to split the model should be revisited.

If direct links are unavoidable, the following good practices should be implemented:

Formulas connecting to external files should be ‘call-ups’ that link to the original data. Call-up formulas shouldn’t do calculations.

Links to external files are quasi-inputs and should be organised on a data import sheet. Links from a source file should be regarded as outputs and organised on a data export sheet.

Source data should be in a Named Range, and the destination file should link to it. This maintains the link’s integrity if the source file’s structure is changed.

The source file should explicitly indicate which data is being exported and the destination.

Validate properly

Use Microsoft Excel’s Data Validation to constrain inputs; they may be limited to certain values, ranges, or types. Labels, or Data Validation’s input message feature, can help users understand the constraints.


Data Validation in Microsoft Excel may help assure suitable inputs, but it is not always practical to impose constraints, and it cannot prevent values being deleted or ensure cells are populated. A check that all required input cells are populated may therefore be appropriate. Because Data Validation cannot always be depended upon, extra built-in tests that feed into the error check network (see Section 9.4) may be necessary.
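A hedged sketch of pairing Data Validation with a completeness check (the cell references and settings are hypothetical): the validation constrains what can be typed, while the check cell feeds the error-check network if any required input is left blank:

```
Input cells D10:D20:   Data Validation allowing whole numbers between 1 and 12
Check cell:            =IF(COUNTBLANK(D10:D20)=0,"OK","ERR")
```

The COUNTBLANK check catches the case Data Validation cannot: a value that was deleted or never entered.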

Calculation Sheets


Calculation sheets are where mathematical functions applied to inputs create the model’s analysis and insight. Calculation sheets should not require user input.

Formula consistency

Computations in a series or block should be consistent horizontally and vertically and span the whole range. Formulas should not change mid-row or mid-column without a separating blank column or row. Exceptional cases should be explicitly identified (e.g. through formatting).

Formulas referencing a cell range should refer to the complete range, not sub-elements. Partial range references are harder to evaluate and raise change-related mistake risk. When referencing a sub-element, use INDEX or SUMIF to extract it from the whole range.
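To illustrate (the ranges are hypothetical), a 12-month data block can be referenced in full, with sub-elements extracted from the whole range rather than pointed at directly:

```
=SUM(D10:D21)                        total over the complete 12-month range
=INDEX(D10:D21,3)                    extracts month 3 from the full range
=SUMIF($C$10:$C$21,"Apr",D10:D21)    extracts by label, still referencing the whole range
```

If rows are later inserted within D10:D21, all three formulae continue to reference the complete block, whereas a hard reference such as =D12 would silently point at the wrong month.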


Calculations should be separated and labelled. Each block should be divided by blank rows and/or columns and labelled with section headings. If included, totals should be separated from computation blocks.

Calculation blocks should follow a left-to-right, top-to-bottom formula, like worksheets. This decreases mistake risk, makes them simpler to spot, and speeds up testing since fewer unique formulae must be checked. Deviations should be evident.

Blocks should have the minimal needed cell anchoring (‘$’ sign) to decrease formula visual complexity and facilitate copying and reusing.

Insert row

If a block of computations requires many lines, an insertion row should be provided. This decreases the chance that later-added items are excluded from the SUM, which is a common source of model errors. For example:

Figure 5: Example Insertion Rows in Sum Formula

Input ranges that may extend in the future should also have insertion rows underneath them and in range references.
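A minimal sketch of the insertion-row pattern (row numbers and values are hypothetical): the SUM range deliberately includes a blank row at the bottom of the block:

```
Row 10:  Staff Costs      45,000
Row 11:  Materials        30,000
Row 12:  Plant Hire       12,000
Row 13:  (blank insertion row)
Row 14:  Total            =SUM(D10:D13)
```

Any new cost line inserted above row 13 falls inside D10:D13 and is picked up by the total automatically, with no formula edit required.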

Order consistently

When repeating calculating components, keep their order constant. The sequence of “call-ups,” time flags, and calculation block should be repeated whenever relevant.

Single-function formulas

When possible, formulas should have one purpose. Avoid formulas with many calculations. It’s preferable to break down computations into a simple sequence than to build a complicated formula. Single-purpose formulae are easy to learn, use, and alter. Sequentially approaching computations helps detect and correct problems.

A formula with several functions or longer than the reading window should be split down. This must be weighed against the model’s complexity and understandability.


Use Timing Flags

SCMs are built on time-based assumptions and should employ timing flags. These should use Boolean logic (1s and 0s) and be shown on visible component rows. This aids understanding and testing.

Flags and partial period factors may need their own worksheet in the constants section, depending on their purpose (see Section 3.2.2).
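A hypothetical sketch of a Boolean timing flag (cell references are illustrative; D2 and D3 hold the start and end periods, C7 the monthly cost):

```
Row 5 (timeline):      1    2    3    4    5    6
Row 6 (active flag):   =($D$2<=D5)*(D5<=$D$3)      returns 1 inside the window, else 0
Row 7 (cost):          =$C$7*D6                    cost applies only where the flag is 1
```

Because the flag row is visible, a tester can confirm at a glance exactly which periods a cost applies to, without unpicking the cost formula itself.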

Keep Formulas Simple

Simple formulas get the best results; complex formulas are hard to explain and grasp. Avoid complex Excel formulae where a simpler solution would do. Avoiding specialist formulae such as Array Formulas increases usability and makes the model simpler to test and comprehend.

Avoid Array Formulas

Array Formulas (displayed enclosed in braces ‘{ }’) may calculate across an array of data. They may reduce workbook performance, increase model complexity, and complicate computation logic.

If a model requires one or more array-based computations, Microsoft Excel’s native formulae should be used (e.g. SUMPRODUCT).

Any formula that depends on a defined range, as an Array Formula does, will likely need modification if the range changes, to ensure formulae and outputs remain valid.
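As an illustration (the ranges are hypothetical), the native SUMPRODUCT function replicates a common array calculation without array entry:

```
{=SUM(D10:D21*E10:E21)}           array formula, entered with Ctrl+Shift+Enter
=SUMPRODUCT(D10:D21,E10:E21)      native equivalent, entered normally
```

Both multiply the two ranges element by element and sum the results (e.g. monthly volumes × unit prices), but the SUMPRODUCT version behaves like any ordinary formula when copied, edited, or tested.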

Remove Sheet References

Formulas that contain the current worksheet’s own name should be edited to remove it. Excel sometimes auto-includes these references in formulae, and they add unneeded visual complexity.

3D Formulas

Where possible, avoid three-dimensional formulae that reference the same cell or range across multiple sheets (e.g. =SUM(Sheet1:Sheet3!A1)). They are hard to comprehend, tough to test, and error-prone. Use ‘call-ups’ to pull data from one worksheet to another instead.

Avoid circular math

Avoid circular logic in models. It is hard to comprehend and test, and can produce erroneous results; alternatives are always available. If circular computations are genuinely required, VBA (see Section 3.18) might be used to handle them, with integral checks to assure a stable SCM result.

Formulas shouldn’t have constants.

Formulas should not contain constants or hardcoded values (e.g. 100). Holding the constant in a defined input section, referenced by the formula, maintains transparency, increases auditability, simplifies model updates, and reduces the risk of error. Predefined formula parameters (e.g. 1/0 for True/False) are exceptions.

Readable formulas

Formulae should be written for readability. Do not overuse brackets or ‘$’ signs; use brackets and anchors only to accomplish mathematical or functional goals. Spaces and carriage returns can also help make formulae readable. These techniques make formulae simpler to comprehend, interpret, and test.

Temporary Code

Code should be brought to a finished state before work is paused (e.g. over a weekend). Models saved with temporary code should be explicitly marked as such.

Formulas and Functions


Microsoft Excel calculates and organises data using formulas and functions. Different formulae and functions may often produce the same outcome, depending on the model developer’s inclination or style. Certain formulae and functions have benefits and should be used properly.

Skip NPV

Excel’s built-in NPV function lacks transparency and can yield unintended outcomes by applying end-of-period discounting. Avoid it.

It is preferable to calculate Net Present Value using an explicit time-series discount factor. Displaying the discount factor helps users understand the time value of money being applied, which can assist testing and boost model operator confidence.
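A hedged sketch of the explicit discount-factor approach (cell references, rate location, and cash flows are hypothetical; C2 holds the discount rate):

```
Row 5 (year, n):           0        1        2        3
Row 6 (discount factor):   =1/(1+$C$2)^D5     copied across the row
Row 7 (net cash flow):     -1,000   400      400      400
Row 8 (NPV):               =SUMPRODUCT(D6:G6,D7:G7)
```

Unlike the NPV function, every discount factor is visible on row 6, so the timing convention (here, a year-0 flow undiscounted) is explicit and can be checked directly.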


Avoid OFFSET

In certain cases, the OFFSET function may appear to be the only way to perform a calculation (e.g. certain depreciation calculations). Nevertheless, OFFSET should be avoided: its interdependencies are hard to detect and it hinders testing.


Avoid INDIRECT

INDIRECT is sometimes the only method for dynamic selection, but its outcomes are fragile and error-prone since they rely on defined names and ranges. Avoid the function where possible.

If worksheet or cell references are included as constants and the workbook’s structure is modified, the INDIRECT function may fail. When using INDIRECT, employ named ranges or dynamic cell address labels to boost its resilience.


Prefer INDEX to CHOOSE

The CHOOSE function can select values, including across worksheets. In general, use INDEX over CHOOSE where feasible: with INDEX it is simpler to extend the selectable alternatives, and it is less memory-intensive.

INDEX trumps IF

Select values using INDEX instead of nested IF statements; it is more robust, and extending the items being selected is easier.


Prefer INDEX and MATCH

Instead of VLOOKUP or HLOOKUP, use INDEX and MATCH. It is more reliable, memory-efficient, modifiable, and testable; VLOOKUP and HLOOKUP are a common source of model errors.
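To illustrate the difference (sheet name and ranges are hypothetical), both formulae below look up the value in column E of an Inputs sheet against a key in column B:

```
=VLOOKUP($B10,Inputs!$B$5:$F$50,4,FALSE)                       breaks if a column is inserted
=INDEX(Inputs!$E$5:$E$50,MATCH($B10,Inputs!$B$5:$B$50,0))      robust to structural change
```

The VLOOKUP’s hardcoded column index (4) silently returns the wrong column if the Inputs sheet is restructured, whereas the INDEX/MATCH version names the return range explicitly and moves with it.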

Avoid Blanket Error Traps

Blanket traps such as ISERR and ISERROR may disguise genuine problems. Use traps that catch specific errors or address the underlying cause, such as a zero divisor (trap the ‘0’, not the resulting ‘#DIV/0!’).
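For example (the cell references are hypothetical), contrast a blanket trap with a specific one for a unit-cost division:

```
=IFERROR(B5/C5,0)         blanket trap: hides ANY error, including a broken reference
=IF(C5=0,0,B5/C5)         specific trap: handles only the zero divisor
```

With the specific trap, an unrelated fault in B5 (e.g. #REF! after a deleted row) still surfaces as an error instead of being silently converted to zero.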

Name ranges Cautiously

Named Ranges are used to reference cells from VBA code and to transmit data across connected workbooks. They may also be needed for dynamic selection (e.g. dynamic Named Ranges) and for Data Validation lists. Overusing Named Ranges can hinder comprehension and flexibility.

Named Range names should be brief, meaningful, and logical where relevant.

In model documentation, list all Named Ranges along with their placement and function. Because dynamic Named Ranges are less transparent, this is especially crucial.

Output Sheets


Output sheets present the SCM’s computations in the chosen format, using tables, graphs, and charts built from ‘call-ups’ of calculations, inputs, and controls.

Output sheets

SCM outputs should sit on dedicated sheets that ‘call up’ the relevant computations or inputs. Using output sheets minimises the good practice trade-offs that would otherwise be needed to structure computations to fulfil output format criteria.

Except for linkages to other sheets (i.e. ‘call-ups’), output sheets should not include computations. If calculations are needed, they should be done on the calculation sheets or on interim sheets.

Inputs should not sit on output sheets. If inputs need to be shown on output sheets, they should be held on the relevant input sheet and ‘called up’. Scenario or sensitivity inputs are exceptions; consider a scenario and/or sensitivity control sheet, or switches linked to the relevant inputs.

Fit-for-purpose outputs

Output sheets should address essential model questions. When assessing the SCM’s overall suitability, required presentation formats should be evaluated. The degree of depth and/or summary supplied and whether outputs provide the right information for model clients are also important.

Label outputs

Outputs should be labelled to minimise model operator misunderstanding. Because outputs are presented without the calculation logic, their labels must be more explicit.

Flag errors clearly

Output sheets should clearly indicate any model errors or known limits, to prevent outputs from being misused. Some SCMs may benefit from suppressing outputs when an in-model error is flagged, to prevent their use.

KPIs and graphs or charts may assist discover faults and establish trust in the SCM’s validity.

Properly constructed graphs

SCM visuals shouldn’t be deceiving. For example, scenario model graphs or charts should label outcomes (e.g. what scenario they relate to).

SCM should properly define graph and chart data. This is best done using a separate page or area of a computation sheet for graph or chart data streams, which may also contain labels (e.g. chart title including the scenario name).

Use pivot tables sparingly

Avoid using Pivot Tables for model logic computations: their static nature (requiring manual refresh) and lack of configuration management increase error risk. They can, however, present model results in an organised fashion that can be quickly rearranged and explored. If outputs must be presented in a consistent way, a predefined table should be used instead.

Pivot Table Rows, Columns, and Fields settings should be shown consistently and not concealed to minimise confusion. Include instructions for refreshing or configuring Pivot Tables.

Limit table use

Data Tables can hinder model performance and flexibility compared with the techniques described earlier. Avoid using them.

Checking and Control


All SCMs should have built-in controls. Their inclusion and implementation help make SCMs effective.

The Aqua Book provides further discussion of checks and controls.

SCM needs built-in checks

Notably, in-model SCM checks and controls are complementary to developer testing and formal QA and testing, not a replacement. All SCMs must be QA-tested (see SCM Development Guidance).

From the start, the SCM should incorporate error checks and warning flags. Not an afterthought, they should be part of the SCM design.

Checks may reassure users that the SCM is operating effectively and that its inputs and outputs meet expectations. These checks must be defined and implemented into the SCM and may include:

Arithmetic – do the numbers add up?

Outliers – do values fall inside predicted ranges, such as upper and lower bounds?

Format – are inputs in the required format, such as numerical or alphanumerical?

Completeness – have all inputs been populated?

Robust checks

“Sum zero” is a well-known error-checking approach, in which a check cell holding a non-zero value flags an error. This approach allows error checks to be combined into a ‘network’.

Error checks should be consistent across the model to prevent misunderstanding and/or mistakes inside the check-cells (e.g. adopt sum-zero approach throughout).

Error checks need to be robust. Sum-zero check cells should be sign-constrained so that offsetting positive and negative check-sums cannot cancel out and mask an error. Additional measures should also be investigated to prevent sum-zero error checks from registering as ‘OK’ if their underlying formula is deleted or hardwired to zero.

To avoid false triggers, sum-zero tests may need a small error tolerance, since Microsoft Excel’s floating-point arithmetic can produce tiny rounding “errors”. Tolerance values should be held in the constants section, not inside the error checks (see Section 6.13).
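A minimal sketch of a sum-zero check with a tolerance (the ranges are hypothetical, and `Tolerance` is an assumed Named Range pointing at a constant such as 0.0001 in the constants section):

```
Check cell:   =IF(ABS(SUM(D10:D21)-D22)<=Tolerance,0,1)
```

Here D22 holds an independently calculated total of the same data; the check cell returns 0 (OK) when the two agree within tolerance and 1 (error) otherwise, so it sums cleanly into a network of check cells.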

Networking checks

Individual worksheet error checks should be merged into a worksheet and workbook level check that indicates the workbook’s error status. This should be done on a dedicated error check summary worksheet that provides error check status for each worksheet and the whole workbook.

The error check summary sheet is important for assessing the model’s state and locating error-prone worksheets.
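The roll-up described above can be sketched as follows (cell references are hypothetical; check cells on each sheet return 0 for OK per the sum-zero convention):

```
Worksheet-level check (in each sheet header):
  =IF(SUM($B$50:$B$60)=0,"OK","ERR")        B50:B60 hold that sheet's check cells

Workbook-level check (on the error summary sheet):
  =IF(COUNTIF(B5:B20,"ERR")=0,"OK","ERR")   B5:B20 hold each sheet's status
```

Each worksheet header then calls up the workbook-level result, so a failure anywhere in the model is visible from every sheet.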

Visible checks

Error checks should be obvious to the model operator. Highlighting an error check’s result with Microsoft Excel’s Conditional Formatting (green for OK, red for ERR) is more effective than simply returning the result as text.

Error checks should be added where errors may occur. This alerts model operators, including those making updates, of data input, operation, corruption, or other issues.

Every worksheet should have worksheet-level error checks. These checks should be in the worksheet header in the frozen pane so they’re always visible, even when scrolling.

Each worksheet should provide a workbook-level error check. This check should alert the user if another worksheet or workbook has an error.

Worksheet and workbook-level error checks should be placed uniformly across worksheets. This reduces the chance that a model operator misses mistakes caught by built-in checks.

Protect cells

Cells that are not intended for user input should be protected to prevent corruption. Cell and worksheet protection reduces inadvertent changes, such as overtyping or hardwiring computations or outputs. Passwords should be avoided where possible and carefully managed if used (see SCM Development Guidance).


Stress Testing

During SCM development, the model developer should stress test the model. This includes:

Extremes – evaluating model behaviour with huge or tiny inputs (e.g., how it handles zero or negative numbers);

Precision – examining supplied data for over-precision;

Patterns – analysing how the model responds to basic inputs (e.g. 1%, 10%; 1, 10, 100) to better visualise behaviour;

Relative – comparing quantities or measures to evaluate whether they make sense (e.g. price > cost; working days ≤ calendar days);

Trends – examining patterns for validity (e.g. cost growing over time); graphs, charts, and KPIs may help with sense checking; and

Breakage – can the model be ‘broken’ (e.g. by removing input values or formulas)? Are built-in checks and protections sufficient to detect this?

The model developer should perform a complete set of basic tests before releasing the SCM for formal QA and testing. Minimum requirements are:

Fitness – Ensure the model Specification (including Design) is current and the SCM satisfies criteria, including delivering appropriate outputs;

Build – check the SCM against this Technical Build Guidance (e.g. Planned, Logical, Aligned, Separated, Transparent, Integrous, Checked); create formula lists and maps using verification or logic-testing tools; check the maps for correctness and the formulae for embedded constants;

Documentation – ensure current and appropriate documentation, including flow diagrams;

Book of Assumptions/Data Log — Ensure adequate and up-to-date data documentation;

Checks — make sure built-in checks perform as intended.

Controls – verify current controls (e.g. version, config);

Check Named Ranges for accurate coverage, #Ref! errors, and inclusion in the model/model documentation (particularly dynamic Named Ranges);

Verify external links are recorded and none are added inadvertently;

Protection – verify application and SCM operation;

If using VBA, validate the operation and good code;

System – make sure SCM works in the target system and/or Excel;

Run data sets through the SCM to ensure anticipated outcomes and that switches, sensitivities, scenarios, KPIs, and graphs operate;

Analyze the effect of input changes on outputs, do stress testing, and analyse KPIs and graphs to validate predictions.

Development is not independent assurance. An SCM or its outputs should not be used unless they have undergone sufficient QA and testing and have been signed off (see SCM Development Guidance).

A checklist may guide and document model developer testing. The Sourcing Programme created an SCM Development Checklist that encompasses developer testing.

The Sourcing Programme has also created a Good Practice Build Toolkit that may be used to examine a model’s adherence to many of the good practice standards in this article.

Before use, an SCM should be formally tested.

Before use, an SCM should undergo QA and testing. An SCM or its outputs should not be used or distributed unless a quality assurer has examined and approved it. The model should reference the QA and testing records and outcomes (see SCM Development Guidance and Section 3.6.3).

QA and testing are time-limited and may become invalid if the data, structure, or scope changes, or once a set period elapses. Some SCMs may warrant a maximum validity period shorter than three years; the QA Plan should address this (see SCM Development Guidance).

The Sourcing Programme created an SCM QA Plan Template for QA planning and SCM Testing Procedures (including QA Report and Test Memo Templates) for model testing.

Formal QA and Testing Documentation

When submitting an SCM for formal QA and testing, it should be accompanied by supporting documentation (e.g. the model Specification and Book of Assumptions/Data Log). All SCM testing measures the model against something (e.g. the model Specification or this SCM Technical Build Guidance); without model documentation, such as a statement of the SCM’s purpose, it may not be feasible to demonstrate its suitability.

Insufficient documentation might increase QA and testing time.

The Sourcing Programme has created a variety of templates for model documentation, including the SCM Scoping Template, the SCM Specification Template Example, and the Book of Assumptions / Data Log Template.

Formal QA and testing should minimise changes.

If formal QA and testing reveal problems requiring model developer adjustments to SCM, the developer should:

Document changes – update the SCM’s Version Control Log (see Section 10.3.1);

Keep modifications minimal – deleting columns or rows might hinder re-checking (see below);

Submit for Checking – After any modifications, the SCM should be returned to the quality assurers to ensure that the problems have been adequately fixed and no new issues have been added;

Maintain independence — The model developer should not seek or receive specific assistance from quality assurance on how to fix errors.

Other Governance Elements


Besides error checks and testing, an SCM should incorporate the following governance features.

Data Log

An SCM should include a Book of Assumptions / Data Log (BoA / DL) that lists all input data sources. Data and assumptions in the model should reference the BoA/DL using a dedicated column (see Section 5.3). The BoA/DL should indicate where the data came from and provide enough information to corroborate it. Some information about the data should be put in the SCM next to the data (e.g. units), but it’s best to add a more thorough set of information in the Book of Assumptions / Data Log, for example:

  • Reference No. – used to cross-reference inputs in the model;
  • Data Area – the area the data relates to (e.g. salary inflation);
  • Input Label – the label given to the data in the model;
  • Description – a description of the data;
  • Rationale – why the data / data source is used;
  • Assumptions – any assumptions included (e.g. inclusive of VAT);
  • Units – the unit of measure (£k, $m, #);
  • Base Year – the economic conditions of the data;
  • Filename / Path – the source file(s) or path;
  • Source / Provider – where the data came from or who provided it;
  • Data Age – how recent the data is;
  • Date Supplied – the date the data was provided;
  • Owner – who owns the data / assumption;
  • Last Reviewed – the date of the latest data review;
  • Next Update – the date the data is due to be updated;
  • Validity – any data expiration dates;
  • Issues – any known problems with the data;
  • Pre-processing – details of any pre-processing applied;
  • Maturity Level – an assessment of data readiness; and
  • Maturity Plan – how the data is expected to mature over time.

Depending on the number of data sources and information in the BoA/DL, it may be suitable to create a separate document.

Managing a separate BoA/DL (e.g. keeping it updated) has risks that must be carefully considered (see SCM Development Guidance for more information on file management).

The Sourcing Programme created a Book of Assumptions / Data Log Template.
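As a rough sketch, one entry in a Book of Assumptions / Data Log could be represented as a record with the fields listed above. The field names and example values below are illustrative only; they are not prescribed by the guidance or its templates.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of a single Book of Assumptions / Data Log entry.
# Field names mirror the list above; none are mandated by the guidance.
@dataclass
class DataLogEntry:
    reference_no: str            # cross-reference used next to the input in the model
    data_area: str               # e.g. "salary inflation"
    input_label: str             # the label the data carries in the model
    description: str
    rationale: str               # why this data / source is used
    assumptions: str             # e.g. "inclusive of VAT"
    units: str                   # e.g. "% per annum", "£k"
    base_year: int               # economic conditions of the data
    source: str                  # source / provider
    date_supplied: str
    owner: str
    last_reviewed: Optional[str] = None
    next_update: Optional[str] = None
    known_issues: str = ""

# Hypothetical example entry
entry = DataLogEntry(
    reference_no="A-001",
    data_area="salary inflation",
    input_label="SalaryInflationRate",
    description="Annual salary inflation assumption",
    rationale="Most recent published series available at time of build",
    assumptions="Nominal terms, inclusive of on-costs",
    units="% per annum",
    base_year=2022,
    source="National statistics publication",
    date_supplied="2022-09-01",
    owner="Finance team",
)
print(entry.reference_no, entry.units)
```

In practice this would normally live as a worksheet or table rather than code; the point is simply that each input carries one complete, referenceable record.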


Version Control Log

Every SCM should include a Version Control Log (VCL). This log should describe model changes across its lifespan, capture QA status, and support effective version control. Important fields include:

File Name – the name of the affected file / model;

Version Number – the new version number of the file;

Date – the date of the change;

Author – the name of the model developer or data supplier;

Changes Made – what changed (data, structure, worksheets);

Reason – why the change was made (e.g. to fix an error or update input data);

Impact – any resulting changes in outputs or model characteristics;

Authorised – confirmation that the model owner authorised the changes; and

QA Status – the current QA status (e.g. pending or completed).

The Sourcing Programme’s SCM Build Template contains a Version Control Log that can be adapted and used as needed.
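The fields above could be captured as rows appended to a simple log, one row per change. This is only a sketch under assumed column names; a real SCM would typically keep the log as a worksheet in the model file itself.

```python
from datetime import date

# Minimal sketch of a Version Control Log kept alongside a model.
# Column names follow the fields above; they are illustrative, not prescribed.
version_log = []

def log_change(file_name, version, author, changes, reason, impact,
               authorised_by, qa_status="pending"):
    """Append one row to the Version Control Log."""
    version_log.append({
        "File Name": file_name,
        "Version": version,
        "Date": date.today().isoformat(),
        "Author": author,
        "Changes Made": changes,
        "Reason": reason,
        "Impact": impact,
        "Authorised": authorised_by,
        "QA Status": qa_status,
    })

# Hypothetical change record
log_change("SCM_Build_v0.2.xlsx", "0.2", "J. Smith",
           "Updated salary inflation input", "New data supplied",
           "Whole-life cost output increased", authorised_by="Model SRO")
print(version_log[-1]["Version"], version_log[-1]["QA Status"])
```

Defaulting QA Status to "pending" reflects the point above: any change sends the model back for re-checking until the quality assurers sign it off.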


Configuration Control

Configuration Control is distinct from Version Control in that it focuses on the settings used to produce outputs. It is only relevant to SCMs that can run scenarios or sensitivity analyses.

If Configuration Control is significant to an SCM, a Configuration Control Log should be maintained so that findings can be recreated and/or validated. The Configuration Control Log may form part of the Version Control Log or be kept separately.
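One way to think about a Configuration Control Log entry is as a record of the exact switch and scenario settings behind a set of outputs, so the same run can be reproduced later. The sketch below is an assumed illustration, not a prescribed format; all setting names are hypothetical.

```python
import json
import hashlib

# Sketch of one Configuration Control Log record: the settings used for a
# model run, plus a short fingerprint so outputs can be traced back to
# (and re-run under) exactly the same configuration.
def record_configuration(run_id, settings):
    payload = json.dumps(settings, sort_keys=True)
    fingerprint = hashlib.sha256(payload.encode()).hexdigest()[:12]
    return {"Run ID": run_id, "Settings": settings, "Fingerprint": fingerprint}

# Hypothetical scenario run
config = record_configuration("RUN-014", {
    "scenario": "central",
    "discount_rate": 0.035,
    "risk_iterations": 5000,
    "sensitivity": None,
})
print(config["Run ID"], config["Fingerprint"])
```

Because the fingerprint is derived deterministically from the sorted settings, two runs with identical settings produce the same fingerprint, which makes it easy to confirm that a reported output came from a given configuration.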

About the SCM

In addition to the SCM’s objectives and constraints (see Section 3.6), key individuals such as the model developer and the Model Senior Responsible Owner (Model SRO) should be recorded.
