Cybersecurity is the practice of protecting computer systems, networks, and data from unauthorized access, damage, or disruption. It covers an enormous range of concerns — from securing your personal laptop to defending national power grids. Before you can reason about specific controls or attacks, you need to be comfortable with the core vocabulary and mental models the field builds on.

What is cybersecurity?

Cybersecurity is not a single product or a checkbox you tick. It is an ongoing practice that involves preventing, detecting, and responding to three broad categories of threat:
  • Unauthorized access — someone reading data they are not permitted to see.
  • Unauthorized modification — someone changing or deleting data without permission.
  • Denial of authorized access — blocking legitimate users from reaching resources they need.
Every computer professional is implicitly responsible for this. Security is not a separate function owned by a single team; it is a property of the systems you design and operate.
The most secure computer is one that is completely unplugged and inaccessible — but it is also completely useless. Good security always balances protection with the practical needs of the people who use the system.

Key terms

A vulnerability is a weakness in software, hardware, configuration, or processes that can be exploited to violate security. Vulnerabilities are catalogued in public databases such as the National Vulnerability Database (NVD) and the CVE list. They can range from a misconfigured file permission to a flaw deep in an operating system kernel. Vulnerabilities are distinct from the attacks that exploit them: a vulnerability is a condition; an exploit is the action that takes advantage of it.
A threat is any potential cause of an unwanted event that could harm a system or organization. Threats can come from external attackers, malicious insiders, natural disasters, or simple human error. When you think about threats, ask: who wants to attack us, and what do they want to achieve?
Risk is the combination of a threat, a vulnerability, and the potential cost of a successful attack. A common formulation is:
Risk = Threat × Vulnerability × Cost
  • Threat: Who is likely to attack, and what method might they use?
  • Vulnerability: How easy is it to exploit a weakness in your system?
  • Cost: What do you lose if the attack succeeds — data, revenue, reputation, safety?
You rarely eliminate risk entirely. Instead, you manage it — reducing likelihood, reducing impact, or accepting what remains after countermeasures are in place.
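The Risk = Threat × Vulnerability × Cost formulation can be sketched as a simple scoring model. The 1–5 rating scale and the example assets below are illustrative assumptions, not a standard methodology:

```python
# Toy risk-scoring sketch for Risk = Threat x Vulnerability x Cost.
# The 1-5 scales and the example assets are illustrative assumptions.

def risk_score(threat: int, vulnerability: int, cost: int) -> int:
    """Each factor is rated 1 (low) to 5 (high); a higher product means higher risk."""
    for factor in (threat, vulnerability, cost):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be rated 1-5")
    return threat * vulnerability * cost

# Rank hypothetical assets so the riskiest is addressed first.
assets = {
    "public web server": risk_score(threat=5, vulnerability=3, cost=4),
    "internal wiki":     risk_score(threat=2, vulnerability=4, cost=2),
    "payroll database":  risk_score(threat=4, vulnerability=2, cost=5),
}
for name, score in sorted(assets.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

A model like this is only as good as its ratings, but it makes the trade-off explicit: a highly exposed but low-value asset can still outrank a valuable but well-protected one.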
An exploit is a piece of code, a technique, or a sequence of actions that takes advantage of a vulnerability to cause unintended behavior in a system. A vulnerability is the weakness; an exploit is the tool or method that weaponizes it.
The attack surface is the sum of all potential entry points where an attacker could try to subvert a system. Any way data gets into your system — email, network ports, USB drives, downloaded software, SMS messages, even hardware chips — is part of your attack surface. Reducing your attack surface is one of the most effective things you can do to improve security.
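One concrete slice of the attack surface is the set of TCP ports on which a host accepts connections. A minimal sketch of enumerating them, using only the standard library; the host and port list are illustrative assumptions:

```python
# Sketch: enumerate one slice of the attack surface -- TCP ports on a host
# that accept connections. The host and port list are illustrative.
import socket

def open_tcp_ports(host: str, ports: list, timeout: float = 0.5) -> list:
    """Return the subset of `ports` on which `host` accepts a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Common service ports; every open one is an entry point you must account for.
print(open_tcp_ports("127.0.0.1", [22, 80, 443, 3306, 8080]))
```

Real attack-surface reviews go far beyond open ports (installed software, exposed APIs, human processes), but the principle is the same: you cannot reduce what you have not inventoried.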

Privacy vs. security

Privacy and security are closely related but solve different problems.
  • Concern — Privacy: who controls personal information. Security: who can access systems and data.
  • Question asked — Privacy: what data is shared, with whom, and for what purpose? Security: is data protected from unauthorized access and tampering?
  • Example measures — Privacy: opt-out settings, data minimization policies. Security: encryption, firewalls, access control.
Privacy is an individual’s right to control their own personal information — setting boundaries on what is collected, how it is used, and who sees it. Security is the set of technical and organizational measures that protect data from unauthorized access, modification, or destruction. Security is a prerequisite for privacy: you cannot keep personal data private if an attacker can read the database. But strong security does not automatically guarantee privacy — a system can be perfectly locked down while still collecting and sharing data in ways users did not consent to.
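The data minimization measure mentioned above can be sketched as stripping every field a recipient has no stated purpose for. The field names and allow-list here are hypothetical:

```python
# Sketch of data minimization: share only the fields a recipient needs.
# Field names and the allow-list are illustrative assumptions.

def minimize(record: dict, allowed_fields: set) -> dict:
    """Drop every field not explicitly allowed for this recipient."""
    return {k: v for k, v in record.items() if k in allowed_fields}

user = {"name": "Ada", "email": "ada@example.com",
        "ssn": "000-00-0000", "last_login": "2024-05-01"}

# A hypothetical analytics pipeline needs activity data, not identity data.
print(minimize(user, allowed_fields={"last_login"}))
# -> {'last_login': '2024-05-01'}
```

Note how this is a privacy control, not a security one: even a perfectly encrypted database of the full record would still over-collect.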

Defense in depth

No single security control is infallible. Defense in depth is the strategy of layering multiple independent controls so that if one fails, others still protect the system. Think of it like an onion: each layer has to be peeled back before an attacker can reach the core. The layers typically span three categories:
  1. Physical controls — locks, access badges, security cameras, and guards that restrict who can physically touch your hardware.
  2. Technical controls — firewalls, intrusion detection systems, encryption, antivirus software, and access control mechanisms that protect systems at the network, host, application, and data layers.
  3. Administrative controls — security policies, staff training, multi-factor authentication requirements, patch management processes, and incident response plans.
Controls can also be classified by their function:
  • Preventive — stop an attack before it succeeds (e.g., firewalls, access control).
  • Detective — identify and log an attack in progress (e.g., intrusion detection systems, audit logs).
  • Corrective — recover from an attack after it occurs (e.g., backups, incident response procedures).
A practical illustration of defense in depth: if a laptop is stolen (physical control fails), full-disk encryption (a data-layer technical control) still protects the information on it. No single failure leads to total compromise.
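The layering idea can be sketched as a chain of independent checks where access is granted only if every layer passes. The layer names and rules below are hypothetical stand-ins; real controls live in separate systems (badge readers, firewalls, identity providers), not one function:

```python
# Sketch of defense in depth: an access request must pass every layer.
# Layer names and rules are hypothetical stand-ins for real controls.

def badge_ok(request: dict) -> bool:        # physical control
    return request.get("badge_valid", False)

def firewall_ok(request: dict) -> bool:     # technical control
    return request.get("source_ip", "").startswith("10.")

def mfa_ok(request: dict) -> bool:          # administrative control
    return request.get("mfa_passed", False)

LAYERS = [badge_ok, firewall_ok, mfa_ok]

def access_granted(request: dict) -> bool:
    """Any single failed layer denies access; no one control is trusted alone."""
    return all(layer(request) for layer in LAYERS)

request = {"badge_valid": True, "source_ip": "10.0.0.7", "mfa_passed": False}
print(access_granted(request))  # denied: the MFA layer failed
```

The design point is independence: because the layers check different things, an attacker who defeats one (a stolen badge, say) still faces the others.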

Zero-day vulnerabilities

A zero-day vulnerability is a flaw that is unknown to the vendor and therefore has no patch available. An exploit that targets a zero-day does so on “day zero” of public awareness — before anyone has had a chance to fix it. Zero-days are particularly dangerous because they remain effective even on fully updated, otherwise well-secured machines. They are discovered by researchers, criminal groups, and nation-states alike. Some zero-days are responsibly disclosed to vendors; others are bought and sold on grey and black markets, or stockpiled by governments for offensive use.
A zero-day exploit in an operating system or network application is a real risk even when all available patches are applied. Layered defenses, network segmentation, and monitoring are essential precisely because no patch exists at the time of attack.
The history of computing is full of examples where previously unknown vulnerabilities led to significant harm: from the Morris Worm in 1988, which exploited flaws in common Unix services to take down over 6,000 internet-connected computers, to Stuxnet, discovered in 2010, which used multiple zero-day exploits to physically destroy industrial centrifuges. Understanding these fundamentals is the starting point for reasoning about how to defend against them.