The CIA triad is the foundational model for information security. Every security control, policy, and architecture decision you make can be traced back to protecting one or more of its three properties: Confidentiality, Integrity, and Availability. Together they define what it means for a system to be secure.

Confidentiality

Information is not disclosed to unauthorized parties. Only those with explicit permission can read or access sensitive data.

Integrity

Information is not altered or destroyed in an unauthorized manner. Data remains accurate and trustworthy from creation to deletion.

Availability

Information is accessible and usable upon demand by authorized entities. Systems remain operational when legitimate users need them.

Confidentiality

Confidentiality means that sensitive information is accessible only to those who are authorized to see it. A breach of confidentiality occurs when data is read, copied, or transmitted without permission — whether by an external attacker, a malicious insider, or an accidental misconfiguration. Examples of confidentiality controls:
  • Encryption in transit (TLS) and at rest (full-disk encryption) so data is unreadable even if intercepted or stolen.
  • Access control lists that restrict which users or processes can open a file or query a database.
  • Network segmentation that prevents systems from communicating with resources they have no legitimate reason to reach.
When confidentiality fails: A database containing customer records is exposed through a misconfigured cloud storage bucket. Anyone with the URL can download the data. The technical controls (access policies) did not enforce confidentiality.
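The access-control bullet above can be sketched as a minimal default-deny check. This is an illustrative in-memory model with hypothetical resource and user names; real systems enforce this through OS file permissions, database grants, or cloud IAM policies rather than application code.

```python
# Minimal access-control-list sketch (illustrative only).
# Real deployments use OS permissions, database grants, or IAM policies.

ACL = {
    "customer_records.db": {"alice", "reporting-service"},  # allowed principals
    "audit.log": {"auditor"},
}

def can_read(principal: str, resource: str) -> bool:
    """Return True only if the principal is explicitly authorized."""
    # Default-deny: an unknown resource or an unlisted principal gets nothing.
    return principal in ACL.get(resource, set())

print(can_read("alice", "customer_records.db"))    # True
print(can_read("mallory", "customer_records.db"))  # False
```

The key design choice is default-deny: the misconfigured-bucket failure above is what happens when the default is allow.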

Integrity

Integrity means that data and systems remain accurate and unmodified except through authorized processes. An integrity violation occurs whenever someone changes, deletes, or injects data without permission — even if they never exfiltrate it. Integrity is important not just for stored data but for the systems that process it. If an attacker can modify code in transit (a supply-chain attack), tamper with logs, or inject fraudulent database records, the system may appear to function normally while producing wrong results. Examples of integrity controls:
  • Cryptographic hashing to detect whether a file has been altered.
  • Digital signatures on software packages so you can verify they have not been tampered with since the developer signed them.
  • Write-protected audit logs that record every change alongside who made it and when.
  • Input validation in web applications to prevent injection attacks that would corrupt stored data.
When integrity fails: An attacker modifies a financial transaction record, changing the destination account number. The data is never disclosed to an unauthorized party, but its accuracy has been destroyed — a confidentiality-preserving, integrity-violating attack.
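The hashing bullet above can be demonstrated in a few lines: recompute the hash of a record and compare it against a baseline stored out of the attacker's reach (for example, in a signed manifest). The record contents here are hypothetical.

```python
# Detecting unauthorized modification with a cryptographic hash (illustrative).
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

record = b"transfer $500 to account 1234"
baseline = sha256_hex(record)  # stored separately from the data it protects

tampered = b"transfer $500 to account 9999"
print(sha256_hex(record) == baseline)    # True  -> record is unmodified
print(sha256_hex(tampered) == baseline)  # False -> integrity violation detected
```

Note that a hash alone only detects tampering; if the attacker can also overwrite the baseline, you need a digital signature, as the second bullet describes.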

Availability

Availability means that authorized users can access the systems and data they need, when they need them. Security controls that are so restrictive they prevent legitimate work fail the availability requirement just as surely as an attacker taking a service offline. Examples of availability controls:
  • Redundant hardware and geographic failover so that a single server failure does not take down a service.
  • Distributed Denial of Service (DDoS) mitigation to absorb or filter volumetric attacks.
  • Regular backups with tested restore procedures so systems can recover after ransomware or hardware failure.
  • Capacity planning and rate limiting to prevent accidental or deliberate overload.
When availability fails: A ransomware attack encrypts a hospital’s patient records system. Even if no data is ever leaked, the unavailability of records during an emergency is itself a serious security failure — and potentially a life-safety issue.
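The rate-limiting bullet above is commonly implemented as a token bucket: each client gets a budget of requests that refills over time, so bursts are absorbed but sustained overload is rejected. This is a single-process sketch with illustrative parameters, not a production limiter.

```python
# Token-bucket rate limiter sketch (illustrative): caps accidental or
# deliberate overload while allowing short bursts up to `capacity`.
import time

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, never exceeding capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=1.0)
results = [bucket.allow() for _ in range(5)]
print(results)  # first 3 requests allowed, the rest denied until tokens refill
```

In distributed systems the bucket state typically lives in a shared store so that all frontends enforce the same budget.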
The three properties are in natural tension. Stronger confidentiality controls (e.g., requiring multi-factor authentication for every access) can reduce availability by adding friction. Maximizing availability (e.g., caching credentials aggressively) can undermine confidentiality. Part of security design is finding the right balance for your threat model.

Non-repudiation

Non-repudiation is a fourth property that underpins the enforceability of the CIA triad. It means the system can prove that a specific user performed a specific action — and that user cannot credibly deny it. Repudiation is the opposite problem: a user denies having performed an action, and the system has no reliable way to prove they did. For example, a user claims they never approved a money transfer, and there is no trustworthy log to confirm or refute the claim. Non-repudiation matters whenever accountability is important — in financial systems, audit trails, legal records, and any situation where disputes must be resolved after the fact. Examples of non-repudiation controls:
  • Digital signatures — a cryptographic signature ties a specific action (signing a document, authorizing a transaction) to a private key that only the user possesses. The user cannot later deny signing without also claiming their private key was compromised.
  • Tamper-evident audit logs — logs that are cryptographically chained (or stored in append-only systems) so that deleting or altering an entry is detectable.
  • Timestamps from trusted sources — paired with signatures, trusted timestamps prove not just who acted but when.
Non-repudiation requires that authentication be strong in the first place. If an attacker can log in as another user, any actions they take will be attributed to that user — and non-repudiation evidence will point to the wrong person. Strong authentication (MFA, certificate-based login) is the prerequisite.
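The tamper-evident log bullet above can be sketched as a hash chain: each entry's hash covers the previous entry's hash, so altering or deleting any entry breaks every hash after it. This sketch (with hypothetical entry contents) shows tamper evidence only; full non-repudiation additionally requires signing the entries, since anyone holding the log could recompute an unsigned chain.

```python
# Hash-chained audit log sketch (illustrative): tampering with any entry
# invalidates the chain from that point onward.
import hashlib, json

GENESIS = "0" * 64

def _entry_hash(prev: str, entry: dict) -> str:
    payload = prev + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def chain(entries):
    """Build a chained log: each record stores its entry, prev hash, and hash."""
    prev, out = GENESIS, []
    for e in entries:
        h = _entry_hash(prev, e)
        out.append({"entry": e, "prev": prev, "hash": h})
        prev = h
    return out

def verify(log) -> bool:
    """Recompute every link; any edit or deletion breaks the chain."""
    prev = GENESIS
    for rec in log:
        if rec["prev"] != prev or rec["hash"] != _entry_hash(prev, rec["entry"]):
            return False
        prev = rec["hash"]
    return True

log = chain([{"user": "alice", "action": "approve_transfer"},
             {"user": "bob", "action": "login"}])
print(verify(log))                     # True
log[0]["entry"]["user"] = "mallory"    # tamper with the first entry
print(verify(log))                     # False: the chain no longer verifies
```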

Connecting the triad to threats

Every category of computer security threat maps onto one or more CIA properties:
  Threat category                CIA property violated
  Unauthorized disclosure        Confidentiality
  Unauthorized modification      Integrity
  Denial of authorized access    Availability
  Forgery                        Integrity
  Repudiation                    Integrity (of records), Availability (of accountability)
  Spoofing                       Confidentiality, Integrity
When you analyze a security incident or design a new system, using the CIA triad as a checklist helps ensure you are not inadvertently protecting one property while leaving another exposed.
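The checklist idea above can be made mechanical: encode the threat-to-property mapping as a lookup and take the union of implicated properties for the threats observed in an incident. The threat names and structure mirror the table; this is an illustrative helper, not a standard taxonomy.

```python
# CIA checklist sketch (illustrative): map observed threats to the
# properties they violate, per the table above.
THREAT_TO_CIA = {
    "unauthorized disclosure": {"confidentiality"},
    "unauthorized modification": {"integrity"},
    "denial of authorized access": {"availability"},
    "forgery": {"integrity"},
    "repudiation": {"integrity", "availability"},
    "spoofing": {"confidentiality", "integrity"},
}

def properties_at_risk(threats):
    """Union of CIA properties implicated by the observed threats."""
    risk = set()
    for t in threats:
        risk |= THREAT_TO_CIA.get(t.lower(), set())
    return risk

print(sorted(properties_at_risk(["spoofing", "forgery"])))
# ['confidentiality', 'integrity']
```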