Vulnerabilities: Known vs. Actual Risk

A new vulnerability is discovered in an open source package used in your codebase.

Should you block the codebase from being released until the vulnerability is fixed?

Instinctively you may argue it should be blocked, but I suspect for the wrong reasons: you want to block the release because it contains a vulnerability. The paradox is that the newly discovered vulnerability already exists in your running codebase, so the blocked release carries no more risk than the release currently running in your production environment. Blocking the release doesn't prevent risk from being introduced; it just makes it look like it does.

There are two types of risk here: known risk and actual risk. A new CVE discovered in an existing package increases known risk, but does not increase actual risk. Actual risk remains the same, and that is ultimately what we care about. Basing quality gates solely on known risk 1) makes the software appear more vulnerable than it was before, and 2) blocks progress in the name of the wrong numbers.

We want actual risk to decrease, or at worst stay the same. Blocking releases that contain newly known risks just slows down developers and software releases without much benefit.

If you are blocking releases as an explicit lever to force a decrease in actual risk whenever known risk increases, I will happily debate the effectiveness of that process, but I'll congratulate you on thinking about your processes beyond gut instinct.

To reduce risk, I would advocate a parallel risk-reduction process that runs alongside regular software work, and quality gates defined on actual risk rather than known risk.
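As a minimal sketch of what such a gate could look like: instead of blocking on any known CVE, compare the release candidate's vulnerability set against production's and block only when the release would introduce new actual risk. The `gate` function and the CVE identifiers below are hypothetical illustrations, not a real scanner's API.

```python
# Hypothetical quality gate keyed to actual risk rather than known risk.
# Inputs are sets of vulnerability IDs reported by whatever scanner you use.

def gate(release_vulns: set[str], production_vulns: set[str]) -> bool:
    """Allow the release unless it introduces vulnerabilities that are
    not already present in production, i.e. new *actual* risk."""
    newly_introduced = release_vulns - production_vulns
    return len(newly_introduced) == 0

# A CVE newly disclosed against a package both builds already use appears
# in both sets, so it does not block the release:
prod = {"CVE-2024-0001"}
candidate = {"CVE-2024-0001"}
assert gate(candidate, prod)

# A release that pulls in a newly vulnerable dependency is blocked,
# because that genuinely increases actual risk:
candidate_with_new_dep = {"CVE-2024-0001", "CVE-2024-9999"}
assert not gate(candidate_with_new_dep, prod)
```

A set difference is deliberately crude here; the point is only that the gate's input is the delta between release and production, not the absolute count of known CVEs.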