Security is not about eliminating risk. It is about managing it. You will never reach a state where your systems face zero risk — the goal is to understand your risks, reduce them to an acceptable level, and make informed decisions about what remains. This page covers how to think about and act on risk in a cybersecurity context.

Understanding risk

Risk in cybersecurity is the product of three factors:
Risk = Threat × Vulnerability × Cost
  • Threat — Who wants to attack you, and with what? A nation-state adversary with sophisticated tools is a different threat than an opportunistic script-kiddie scanning for known CVEs. Understanding your threat model means knowing who is realistically motivated to target you and what capabilities they bring.
  • Vulnerability — How exposed are you? A known, unpatched vulnerability in an internet-facing service is high-exposure. The same vulnerability on an air-gapped system with no network connectivity is low-exposure. Vulnerability is about how easy it is to exploit a weakness given your actual environment.
  • Cost — What do you lose if an attack succeeds? This includes direct costs (data recovery, regulatory fines, customer compensation) and indirect costs (reputational damage, lost business, operational disruption). Cost is what makes a risk worth caring about.
Multiplying these together gives you a sense of the priority order for your defenses. A high-threat, high-vulnerability, high-cost scenario demands immediate attention. A low-threat, low-vulnerability, low-cost scenario may be acceptable to leave unaddressed.
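As a rough sketch, the three factors above can be put on simple ordinal scales and multiplied to rank scenarios. The scenarios and the 1-to-3 ratings below are hypothetical, chosen only to illustrate the formula:

```python
# Illustrative risk scoring: Threat, Vulnerability, and Cost are each
# rated 1 (low) to 3 (high); their product gives a relative priority.
# Scenario names and ratings are made up for illustration.
risks = [
    ("Unpatched internet-facing CMS", 3, 3, 2),
    ("Air-gapped legacy workstation", 1, 3, 1),
    ("Production payment database", 3, 1, 3),
]

def risk_score(threat, vulnerability, cost):
    # Risk = Threat x Vulnerability x Cost
    return threat * vulnerability * cost

# Highest product first: this is the priority order for defenses.
ranked = sorted(risks, key=lambda r: risk_score(*r[1:]), reverse=True)
for name, t, v, c in ranked:
    print(f"{risk_score(t, v, c):>2}  {name}")
```

The exact scale does not matter; what matters is that a low rating on any one factor pulls the whole product down, which is why an air-gapped system scores low even with a severe vulnerability.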

Risk assessment

A risk assessment is the structured process of identifying and evaluating your risks before deciding how to respond to them.

1. Identify assets

What are you protecting? List the systems, data, and services that matter to your organization. A development environment and a production payment system have very different risk profiles.

2. Identify threats

For each asset, who might want to attack it and how? Consider external attackers, malicious insiders, accidental damage, and infrastructure failures. Your threat actors, their motivation, and their likely methods shape every subsequent decision.

3. Identify vulnerabilities

Where are your weaknesses? Review patch status, configuration, access controls, and third-party dependencies. Databases like the National Vulnerability Database (NVD) and CVE list help you identify known weaknesses in software you run.

4. Estimate impact and likelihood

For each threat-vulnerability pair, estimate how likely an attack is to succeed and what the impact would be if it did. This does not have to be mathematically precise — relative rankings (high/medium/low) are often sufficient to prioritize action.

5. Decide on a response

For each identified risk, choose one of the four responses described in the next section. Document your decisions, including risks you chose to accept.
Risk assessment is not a one-time activity. Reassess regularly — when you deploy new systems, after security incidents, when your threat landscape changes, or when audits reveal new weaknesses.
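The high/medium/low ranking from step 4 can be sketched as a simple likelihood-impact matrix. The bucketing thresholds below are one common convention, not a prescribed standard:

```python
# Minimal likelihood/impact matrix for prioritizing risks.
# Levels and thresholds are an illustrative convention.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def priority(likelihood, impact):
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "critical"    # e.g. high likelihood, medium+ impact
    if score >= 3:
        return "elevated"
    return "acceptable"      # candidates for the "accept" response

print(priority("high", "high"))   # prints "critical"
print(priority("low", "low"))     # prints "acceptable"
```

Relative rankings like these are usually enough to decide which risks get a response first, without pretending to more precision than the estimates support.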

Risk mitigation strategies

Once you have assessed your risks, you have four options for each one:
  • Mitigate — use when the risk is significant and a cost-effective control exists. Example: apply a patch, add MFA, encrypt a database.
  • Avoid — use when the activity generating the risk is not worth the exposure. Example: stop storing credit card numbers; use a payment processor instead.
  • Accept — use when the cost of mitigation exceeds the likely impact. Example: a small internal tool with no sensitive data may not justify the same controls as a production payment system.
  • Transfer — use when the risk can be moved to a third party. Example: cyber insurance, or outsourcing a high-risk function to a specialized provider with contractual guarantees.
Mitigation itself has two dimensions: you can reduce likelihood (make an attack harder to carry out) or reduce impact (limit the damage when an attack does succeed). Defense in depth is an example of reducing likelihood; backups and incident response plans reduce impact.

The economics of defense

Cybersecurity defense is asymmetric by nature. An attacker needs to find and exploit just one flaw. A defender needs to address all of them. This asymmetry is one of the most important structural facts about the field, and it shapes every budget and prioritization decision you make. A direct consequence is that spending must be proportional to what you are protecting. You do not buy a $5,000 titanium lock to secure a $50 bike. Apply the same logic to security controls:
  • Defense cost must not exceed the value of the asset being protected.
  • Controls that cost more than the expected loss from the risk they address are economically irrational — unless there are regulatory, reputational, or safety considerations that go beyond direct financial value.
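One common way to formalize this cost/benefit test is annualized loss expectancy (ALE): the expected cost of a single incident times the number of incidents expected per year. A minimal sketch, with all dollar figures hypothetical:

```python
# ALE = single-loss cost x expected incidents per year.
# All figures below are hypothetical.
def annualized_loss(single_loss_cost, incidents_per_year):
    return single_loss_cost * incidents_per_year

def control_is_rational(control_annual_cost, ale_before, ale_after):
    # A control pays for itself when the risk reduction it buys
    # exceeds what the control costs to run each year.
    return (ale_before - ale_after) > control_annual_cost

ale_before = annualized_loss(50_000, 0.5)   # expect $25,000/year in losses
ale_after = annualized_loss(50_000, 0.05)   # control cuts frequency 10x
print(control_is_rational(10_000, ale_before, ale_after))  # prints True
```

As the bullet above notes, this purely financial test can be overridden by regulatory, reputational, or safety considerations.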
There is also a usability dimension. Security measures must be psychologically acceptable to the people who use the systems. If a control is too burdensome, users will find ways around it. A password policy requiring 30-character random strings may produce excellent passwords — written on sticky notes attached to monitors. A control that is bypassed provides no protection.
When you make security too hard to use, you create shadow IT: users adopt unauthorized tools and processes that meet their needs but operate outside your security controls. This often creates far more exposure than the original control was meant to prevent.

Bug bounty programs

A bug bounty program is a structured reward system where an organization pays independent security researchers to find and responsibly disclose vulnerabilities in its systems. It is sometimes described as the “gig economy” of cybersecurity. How a bug bounty program works:

1. Launch the program

The organization defines the scope (which systems are in scope for testing), the rules of engagement, and the reward structure (often tiered by vulnerability severity).

2. Researchers find bugs

Independent researchers — ranging from hobbyists to professional security consultants — test the in-scope systems and submit reports for vulnerabilities they discover.

3. Researcher writes a report

A valid submission explains the vulnerability, provides steps to reproduce it, assesses its potential impact, and ideally suggests a fix.

4. Organization validates and pays

The security team triages the report, validates the vulnerability, determines its severity, and pays the agreed bounty. Severity ratings (critical, high, medium, low) directly influence payout amounts.
Bug bounty programs are an economical way to augment your internal security testing. Professional penetration tests are expensive and time-limited; a bug bounty program provides continuous coverage from a diverse pool of researchers, paying only for confirmed, valid findings. Major platforms that host and manage bug bounty programs include HackerOne, Bugcrowd, and Intigriti. Bug bounty programs exist alongside — not instead of — internal security testing, vulnerability management, and secure development practices.
Bug bounty programs work best when the organization has a mature vulnerability management process to act on the reports that come in. Launching a program before you have the capacity to triage and fix submissions quickly leads to researcher frustration and, potentially, unpatched vulnerabilities sitting in a queue.
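The severity-tiered reward structure described above might be modeled like this. The tiers and dollar amounts are hypothetical; each program publishes its own:

```python
# Hypothetical tiered reward structure for a bug bounty program.
# Real programs define their own severity bands and payout amounts.
BOUNTY_TIERS = {
    "critical": 10_000,
    "high": 4_000,
    "medium": 1_000,
    "low": 250,
}

def payout(severity, duplicate=False):
    # Duplicates of already-reported bugs typically earn nothing,
    # which is why triage order matters to researchers.
    if duplicate:
        return 0
    return BOUNTY_TIERS[severity]

print(payout("critical"))               # prints 10000
print(payout("high", duplicate=True))   # prints 0
```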

Service Level Agreements (SLAs)

A Service Level Agreement (SLA) is a formal contract between a service provider and a customer that defines the expected level of service, including performance metrics, responsibilities, and remedies when service falls short. In cybersecurity and availability contexts, SLAs matter in two ways:
1. Availability commitments. An SLA might guarantee 99.9% uptime for a cloud hosting service. If the service falls below that threshold, the provider owes compensation — typically service credits. This creates a financial incentive for the provider to invest in availability controls and gives customers a contractual basis for recourse.
2. Security incident response commitments. SLAs often specify how quickly a provider must notify customers of security incidents, how quickly critical vulnerabilities must be patched, and what audit rights customers have. These provisions are directly relevant to your risk posture when you depend on third-party services.
When evaluating third-party services, review SLA terms carefully:
  • What uptime percentage is guaranteed, and how is downtime measured and reported?
  • What are the notification obligations in the event of a data breach or security incident?
  • What remedies are available if SLA commitments are not met, and are those remedies commensurate with the potential harm?
  • Who is responsible for security at each layer of the service (shared responsibility models in cloud computing define what the provider secures vs. what you must secure yourself)?
SLAs are a form of risk transfer: they move some of the financial and operational consequences of service failures from you to the provider. They do not transfer the reputational or regulatory consequences of a data breach — those remain yours — so they complement, but do not replace, your own security controls.
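A useful sanity check when reading uptime guarantees is the downtime budget they imply. A minimal sketch, assuming a 30-day measurement month; real SLAs define their own measurement windows and exclusions:

```python
# Downtime budget implied by an uptime percentage, assuming a
# 30-day (43,200-minute) month. Real SLAs specify their own
# measurement windows, exclusions, and maintenance carve-outs.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

def downtime_budget_minutes(uptime_pct):
    return MINUTES_PER_MONTH * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {downtime_budget_minutes(pct):.1f} min/month allowed")
```

The difference between 99.9% (about 43 minutes of downtime per month) and 99.99% (about 4 minutes) is large in practice, which is why the guaranteed percentage and how downtime is measured both deserve scrutiny.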