The common thread in nearly all sustained protests is cognitive bias within the technical evaluation process. Identifying and mitigating these biases is essential for an objective, transparent, and fair decision process.

BY BRIAN CIHACEK

 

As reported by the Government Accountability Office (GAO),[1] in fiscal year 2016, not only did the number of bid protests increase substantially from the year prior, but the number of protests sustained nearly doubled.

This article presents the argument that the key shared factor among sustained protests is cognitive bias within the technical evaluation process. It specifically examines relevant types of bias, how “red” rules prevent institutional bias but not personal bias, common protest findings regarding bias, and how bias can be reduced in the evaluation process.

What Is Cognitive Bias?

Bias is the inclination to hold a partial perspective, often accompanied by a refusal to consider the possible merits of alternative points of view. A cognitive bias, however, is a bias that creates a systematic error in the thinking process.

Cognitive biases often form to serve as heuristics, that is, mental shortcuts that allow an inference to be made without extensive or deliberate analysis. There are numerous forms of cognitive bias, but 12 are particularly common in decision making (refer to FIGURE 1). These cognitive biases can implicitly affect the technical review process at the individual level and, as individual opinions are voiced, at the group level as well. The result can be deviations from the stated process or from the submitted information, which in turn opens the door to protests or other administrative actions.

How Does the Federal Acquisition Regulation (FAR) Help to Contain Cognitive Bias?

The FAR does not directly address cognitive bias, but it does provide a framework for the solicitation and proposal evaluation process.[2] This structure helps mitigate cognitive bias by defining objective rules and by requiring evaluators to justify their decisions. Specifically, the FAR requirements include the following:

  • Agencies must evaluate competitive proposals and then assess their relative qualities solely on the factors and subfactors specified in the solicitation;
  • Any rating method or combination of rating methods can be used;
  • The relative strengths, deficiencies, significant weaknesses, and risks supporting the proposal evaluation shall be documented in the contract file;
  • When contracting on a firm-fixed-price or fixed-price with economic price adjustment basis, a comparison of the proposed prices will usually satisfy the requirement to perform a price analysis;
  • When contracting on a cost-reimbursement basis, evaluations shall include a cost realism analysis to determine what the government should realistically expect to pay for the proposed effort, the offerors’ understanding of the work, and the offerors’ ability to perform the contract;
  • The contracting officer will document any cost or price evaluation;
  • The solicitation shall describe the approach used for evaluating past performance, including evaluating offerors with no relevant performance history, and shall provide an opportunity to identify past or current contracts for efforts similar to those in the solicitation (and the source selection authority shall determine the relevance of similar past performance information);
  • In the case of an offeror without a record of relevant past performance or for whom information on past performance is not available, the offeror may not be evaluated favorably or unfavorably on past performance;
  • A trade-off process is appropriate when it may be in the best interest of the government to consider award to other than the lowest-priced offeror or other than the highest technically rated offeror;
  • When trade-offs are performed, the source selection records shall include an assessment of each offeror’s ability to accomplish the technical requirements and a summary, matrix, or a quantitative ranking, along with appropriate supporting narrative, of each technical proposal using the evaluation factors;
  • If a trade-off process will be used, all evaluation factors and significant subfactors that will affect contract award and their relative importance shall be clearly stated in the solicitation—as well as whether all factors other than cost or price, when combined, are significantly more important than, approximately equal to, or significantly less than cost or price; and
  • A trade-off process permits trade-offs among cost or price and non-cost factors and allows the government to accept other than the lowest-priced proposal; however, the perceived benefits of the higher-priced proposal must merit the additional cost, and the rationale for the trade-offs must be documented in the file.[3]

The “Red Rules”

The FAR also provides “red rules,” rules that cannot be broken and that, when they are broken, carry a specific process for providing due process and remedy. These red rules are limited in their scope and application, which in turn creates a set of “blue rules”: rules that reflect institutional practices but are not codified in any policy, regulation, or statute.[4]

Blue rules are meant to provide additional processes and guidance to address ambiguity or create consistency, but because they are not codified they can become malleable and open to interpretation. At worst, these blue rules can obscure the actual rule, create contradictory information, or even become the primary means of transmitting incorrect guidance; at best, blue rules allow for flexibility in the application of rules. Blue rules also allow for the transmission of organizational bias through the appearance of cultural norms or standards—e.g., “that is how we do things here,” or “that is the way it has always been done here.”[5]

An important organizational control is an evaluation of how red rules, as opposed to blue rules, are being transmitted and sustained. The goal of that evaluation should be either to convert a blue rule into a red rule by formalizing it in a policy, regulation, or statute, or to abandon a blue rule when it is found to act counter to the language or intent of a red rule.

Impacts of Cognitive Bias on Evaluation Processes

As stated in the previously mentioned 2016 GAO report, the four most common reasons for sustaining protests were:

  • Unreasonable technical evaluation,
  • Unreasonable evaluation of past performance,
  • Unreasonable determination of cost or price, and
  • Flawed selection decisions.[6]

These reasons track the most frequently reported reasons for sustained protests in 2015 (i.e., “failure to follow evaluation criteria” and “inadequate documentation of the record”).[7] All of these findings stem from cognitive bias on the part of the evaluators during the proposal evaluation process.

More specifically, during the evaluation process, cognitive bias leads evaluators either to fail to apply the stated criteria as an objective measure of the proposal or to fail to adequately document their evaluation process.

Consider the following protest decisions and how cognitive bias created the opportunity for protest.

Deloitte Consulting, LLP[8]

GAO sustained the claim that the experience offered in the awarded proposal did not meet the requirements of the solicitation, and it rejected the argument that the evaluator had applied the same alternate experience requirements to the protestor and the awardee. GAO recognized that the misevaluation of experience prejudiced the award, because the alternate requirements applied to the awardee and the protestor were not shown to be equivalent in the ratings provided. The evaluator documented the alternate requirements for the protestor’s key personnel but not for the awardee’s personnel, which created the appearance of bias.

Rotech Health Care, Inc.[9]

The protestor alleged that its past performance narrative was not treated equivalently to the awardee’s, whose past performance examples were all lower in dollar value than the anticipated contract award. GAO sustained the protest, stating that past performance should have been similar in size, scope, and complexity to merit the rating provided to the awardee. The evaluator failed to provide documentation supporting the finding that the past performance was similar or relevant to the requirements in the solicitation.

Valor Healthcare Inc.[10]

GAO sustained the protest because the terms of the solicitation were not followed in the evaluation process. In this case, a price realism analysis was not conducted despite being identified in the solicitation as part of the evaluation. The contracting officer could not provide documentation showing that a price realism analysis had been performed.

Castro and Company[11]

GAO sustained the protestor’s challenge to the award to the incumbent firm because the evaluation panel did not adequately review and score relevant experience. Additionally, the overall score for key qualifications and staffing was based on a single evaluator’s comments, despite contrary comments from the other evaluators.

The agency’s source selection evaluation failed to explain the strengths and weaknesses, the positive and negative marks provided by evaluators, or why the agency was willing to pay a higher price for the awardee’s proposal. In fact, the comments provided for the protesting firm (e.g., “extraordinary wealth of knowledge, applicable skills and their method…which lead to a comprehensive view of the organization”) should logically have led to a better score than the one given to the awardee, whose comments stated that its “technical approach and personnel met the requirements.”

As GAO stated in its decision, “an agency may assign adjectival ratings and points scores, but those are guides to—not substitute for—intelligent decision making.” GAO determined that the agency did not consider all evaluations in determining the ranking of the proposers and did not conduct a thoughtful evaluation in accordance with the parameters outlined in the request for quotations.

Means to Address and Mitigate Bias

These four protest examples demonstrate four action steps that can be used to address cognitive bias in the evaluation process:

  • 1.) Encourage diverse voices and create consensus as part of the deliberation process: As the decision in Castro[12] indicates, all evaluators in the evaluation process have an equivalent voice, and all must be represented in the evaluation findings. An award decision should not be based on a single evaluator, but rather should represent, to varying degrees, the findings of the panel overall (see the scoring sketch following this list).
  • 2.) Say what you are going to do, then do it: As the decision in Valor[13] indicates, the solicitation’s instructions provide guidance not only to the proposers but also to evaluation panel members, and they serve as an important control on bias. To ensure fairness, the solicitation should provide the key elements of the evaluation process, including a rubric and a general process for evaluation. Evaluators should be provided with this information, plus any internal guidance on awarding the contract, and then be held accountable for following it.
  • 3.) Treat all proposers the same: As the decisions in Deloitte and Rotech[14] demonstrate, it is important to maintain the same standards of review across the evaluation to prevent bias. All steps described in the evaluation process should be followed for all like proposers, and the ultimate decision should rest on the proposers’ demonstrated abilities and actions rather than on an evaluator’s perception.

    A tool that evaluators can use to ensure consistent treatment and to identify bias is the “reversal test.” When changing a certain parameter in one direction is thought to have bad overall consequences, the reversal test considers changing the same parameter in the opposite direction (i.e., decreasing it instead of increasing it). If that change is also thought to have bad overall consequences, then the burden is on those who reached both conclusions to explain why the parameter cannot be improved through change in either direction. If they are unable to do so, there is reason to suspect they suffer from bias.[15]

    For example, if one party argues that awarding a contract to a firm with significant project experience is bad because that firm’s methods lack innovation, the reversal test would ask whether it would be better to decrease the amount of experience required. The first party would then need to defend the merits of decreasing the amount of experience, and an inability to justify either direction of change would expose a status quo bias.

  • 4.) Recognize, address, and move past biases: A final lesson, implicit in all four of the cited protest examples, is also the hardest job for evaluation team members: to recognize, address, and move beyond their own biases so that the processes can be used to produce an objective, transparent, and fair contract award. Beyond requiring attestations of confidentiality and conflict of interest, panel members should also be asked to make an intentional effort to address their biases as part of their participation in the evaluation process.
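
The following is a minimal, hypothetical sketch in Python of how steps 1 through 3 might be made concrete: every panel member’s score on every solicitation-stated factor is rolled up for each offeror, sharp disagreement among evaluators is flagged for panel discussion rather than being resolved by a single voice, and any score assigned to a factor that is not in the stated rubric is called out. The factor names, score scale, and the 0.75 disagreement threshold are illustrative assumptions only; they are not drawn from the FAR or from any GAO decision.

```python
# Hypothetical example: rolling up a panel's scores per solicitation-stated factor.
# All names, factors, scores, and thresholds below are illustrative assumptions.

from statistics import mean, stdev

# Factors as stated in the (hypothetical) solicitation rubric.
FACTORS = ["technical approach", "key personnel", "past performance", "price realism"]

# One score per panel member per factor, for each offeror.
panel_scores = {
    "Offeror A": {
        "technical approach": [4, 4, 3, 4],
        "key personnel": [5, 4, 4, 5],
        "past performance": [3, 4, 3, 3],
        "price realism": [4, 4, 4, 4],
    },
    "Offeror B": {
        "technical approach": [3, 5, 2, 3],  # one outlier evaluator
        "key personnel": [3, 3, 3, 3],
        "past performance": [4, 3, 4, 4],
        "price realism": [3, 3, 4, 3],
    },
}

def roll_up(scores_by_factor):
    """Average the full panel's scores for each stated factor and flag factors
    where evaluators disagree sharply, so the panel documents a supporting
    narrative instead of letting one voice decide."""
    summary = {}
    for factor in FACTORS:
        scores = scores_by_factor[factor]
        spread = stdev(scores) if len(scores) > 1 else 0.0
        summary[factor] = {
            "consensus": round(mean(scores), 2),
            "spread": round(spread, 2),
            "needs_discussion": spread > 0.75,  # assumed disagreement threshold
        }
    # Scores on factors never stated in the solicitation are a red flag.
    unstated = sorted(set(scores_by_factor) - set(FACTORS))
    return summary, unstated

# Apply the identical roll-up to every offeror.
for offeror, scores in panel_scores.items():
    summary, unstated = roll_up(scores)
    print(offeror)
    for factor, result in summary.items():
        print(f"  {factor}: {result}")
    if unstated:
        print(f"  WARNING: scored on unstated factors: {unstated}")
```

Applied identically to every offeror, a roll-up of this kind also leaves a record of how each rating was reached, which is the kind of documentation the decisions discussed above found missing.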

 

Conclusion

The mitigation of cognitive bias is one of the most important tasks in the evaluation process, yet it is too often overlooked. A reduction in cognitive bias allows for better alignment with legal requirements; creates an objective, transparent, and fair decision process; and allows the solicitation process to proceed as it was designed. CM

 

Brian Cihacek

  • Manager of Buying Services (Professional Services) for the City of Minneapolis
  • Served as a procurement professional for federal, state and regional governments

 

Endnotes


[1] GAO Bid Protest Annual Report to Congress for Fiscal Year 2016, GAO-17-314SP (December 15, 2016).

[2] FAR 15.101-1 and 15.305.

[3] Ibid.

[4] See, e.g., David Eaves, “Don’t Believe the Hype! Procurement Reform is a Red Herring,” The Lectern Blog, FCW.com (June 15, 2017), available at https://fcw.com/blogs/lectern/2017/06/dont-reform-procurement-eaves.aspx

[5] Both statements are examples of a decline bias.

[6] GAO-17-314SP (see note 1).

[7] Ibid.

[8] Deloitte Consulting, LLP, B-412125.2, B-412125.3 (April 15, 2016) (hereinafter “Deloitte”).

[9] Rotech Health Care, Inc., B-413024 et al. (August 17, 2016) (hereinafter “Rotech”).

[10] Valor Healthcare Inc., B-412960, B-412960.2 (July 15, 2016) (hereinafter “Valor”).

[11] Castro and Company, B-412398 (January 29, 2016) (hereinafter “Castro”).

[12] Ibid.

[13] Valor, see note 10.

[14] Deloitte and Rotech (see notes 8 and 9, respectively).

[15] See Nick Bostrom and Toby Ord, “The Reversal Test: Eliminating Status Quo Bias in Applied Ethics,” Ethics 116 (The University of Chicago, July 2006): 656–679, available at https://www.nickbostrom.com/ethics/statusquo.pdf.
