Good security does not happen by accident. It is the result of applying well-understood design principles consistently across a system’s architecture, implementation, and operations. This page covers the principles that have guided secure systems design for decades — and how contemporary models like zero trust build on them.

Saltzer and Schroeder’s design principles

In 1975, Jerome Saltzer and Michael Schroeder published a set of principles for the design of secure computer systems. Fifty years later, these principles remain the bedrock of secure software and systems design.
Least privilege

Give each user, process, or component the minimum set of privileges needed to perform its function — nothing more.

Why it matters: If a process is compromised, the attacker inherits only the permissions that process held. A web server running as root gives an attacker root access when exploited. A web server running as a restricted service account gives an attacker far less.

In practice: Run services as dedicated low-privilege accounts. Grant database users only the tables and operations they need. Use role-based access control to avoid giving broad permissions “just in case”.
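A role table makes least privilege concrete: each role enumerates exactly the operations it needs, and anything absent is denied. This is a minimal sketch; the role and operation names are hypothetical.

```python
# Minimal role-based access sketch: each role lists only the
# operations it needs; any operation not listed is denied.
ROLE_PERMISSIONS = {
    "web_service": {"orders:read", "orders:create"},
    "report_job":  {"orders:read"},  # read-only batch job
    "admin":       {"orders:read", "orders:create", "orders:delete"},
}

def is_allowed(role: str, operation: str) -> bool:
    """Grant only what the role explicitly holds -- nothing more."""
    return operation in ROLE_PERMISSIONS.get(role, set())
```

Note that an unknown role falls through to an empty permission set, so a misconfigured component fails closed rather than open.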
Fail-safe defaults

Default to no access. Access must be explicitly granted; it should never be implicit.

Why it matters: It is easier to grant access when someone needs it than to revoke access that was given too broadly. There is also an important asymmetry in feedback: users who lack access complain immediately. Users who have too much access rarely notice — so over-permissioning tends to grow silently.

In practice: New user accounts should start with minimal permissions. Firewall rules should deny by default and permit by exception. File system permissions in Unix are restrictive by default for a reason.
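The deny-by-default firewall rule can be sketched as an allow list that traffic must match explicitly; anything unmatched is dropped. The rule format here is hypothetical, not any particular firewall's syntax.

```python
# Default-deny packet filter sketch: traffic is dropped unless an
# explicit allow rule matches (hypothetical rule format).
ALLOW_RULES = [
    {"proto": "tcp", "port": 443},  # HTTPS
    {"proto": "tcp", "port": 22},   # SSH
]

def permits(proto: str, port: int) -> bool:
    """Fail-safe default: no matching allow rule means deny."""
    return any(r["proto"] == proto and r["port"] == port
               for r in ALLOW_RULES)
```

The key property is that forgetting a rule blocks legitimate traffic (which gets noticed and fixed) rather than silently admitting hostile traffic.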
Economy of mechanism

Keep designs as simple and small as possible. Complexity is the enemy of security.

Why it matters: Every line of code, every configuration option, every protocol feature is a potential hiding place for a vulnerability. Simpler systems are easier to analyze, easier to audit, and easier to reason about. When you cannot understand a system fully, you cannot secure it fully.

In practice: Prefer simple, well-understood protocols over complex ones. Avoid feature creep in security-critical components. Remove code and services you do not need.
Complete mediation

Every access to every resource must be checked for authorization — every single time, without exception.

Why it matters: If some accesses bypass authorization checks (due to caching, shortcuts, or forgotten code paths), those unchecked paths become exploitable. An attacker who discovers an unguarded access route can bypass all your other controls.

In practice: In a web application, even after a user authenticates, each request to access a file, a database record, or a privileged function must be checked against their current permissions. A user with read-only access must not be able to exploit a cached credential to perform write operations. This is the principle behind server-side authorization checks — never trust client-provided data about what the user is allowed to do.
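The check-every-access rule can be sketched as a decorator that consults the current permission store on every call and never caches the result, so a revoked right takes effect immediately. The permission names and store are hypothetical.

```python
import functools

# Hypothetical live permission store, consulted at call time.
CURRENT_PERMISSIONS = {"alice": {"file:read"}}

def require(permission: str):
    """Re-check authorization on every single call -- complete mediation."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(user, *args, **kwargs):
            # No caching: look up the user's permissions right now.
            if permission not in CURRENT_PERMISSIONS.get(user, set()):
                raise PermissionError(f"{user} lacks {permission}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@require("file:write")
def save_file(user: str, path: str) -> str:
    return f"{user} wrote {path}"
```

Because the lookup happens inside the wrapper on each invocation, granting or revoking a permission changes behavior on the very next request.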
Open design

The security of a system must not depend on the secrecy of its design or implementation.

Why it matters: If security relies on attackers not knowing how the system works, it will fail the moment the design leaks — through reverse engineering, an insider, a disgruntled employee, or a security researcher. Security through obscurity is not security.

In practice: Use well-analyzed, public cryptographic algorithms (AES, RSA, Ed25519) rather than home-grown ones. Open-source security software benefits from review by many independent experts. Proprietary algorithms that have never been publicly scrutinized carry hidden risk.

The corollary is that secrets belong in keys and credentials, not in algorithms and designs.
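The "secrets belong in keys" corollary can be sketched with HMAC-SHA256 from Python's standard library: the algorithm is public and well analyzed, and the only secret is the randomly generated key.

```python
import hashlib
import hmac
import secrets

# Open design: the algorithm (HMAC-SHA256) is public; only the key
# is secret, and it is generated randomly rather than hard-coded.
key = secrets.token_bytes(32)

def tag(message: bytes) -> str:
    """Authentication tag over a message using a public algorithm."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    """Constant-time comparison avoids leaking information via timing."""
    return hmac.compare_digest(tag(message), received_tag)
```

Anyone may read this code without weakening it; compromising the scheme requires the key, which can be rotated, unlike a secret algorithm.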

Defense in depth

The Saltzer and Schroeder principles tell you how to design individual components. Defense in depth tells you how to compose those components into a resilient system. The core insight is that no single control is infallible. Defense in depth layers multiple independent controls so that an attacker who defeats one layer still faces others. Think of it as an onion: each layer must be bypassed before the attacker can reach the core.
Layer 1: Physical controls

The outermost layer. Locks, access badges, security cameras, and guards control who can physically interact with your hardware. Even the best software security is irrelevant if an attacker can walk out of your data center with a hard drive.
Layer 2: Perimeter and network defenses

Firewalls, intrusion detection and prevention systems (IDS/IPS), DMZs, and VPNs control which traffic can enter and leave your network and which systems can communicate with each other.
Layer 3: Host and endpoint defenses

Antivirus, endpoint detection and response (EDR), host-based firewalls, and system monitoring catch threats that reach individual machines after passing the network perimeter.
Layer 4: Application defenses

Input validation, secure coding practices, and web application firewalls stop attacks that reach running services — SQL injection, cross-site scripting, and similar application-layer threats.
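As one illustration of an application-layer defense, parameterized queries keep user input as data rather than executable SQL. This sketch uses Python's built-in sqlite3 module with a hypothetical users table.

```python
import sqlite3

# Hypothetical in-memory database for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # The ? placeholder is bound by the driver, so an injection
    # payload like "' OR '1'='1" is treated as a literal string,
    # not as SQL -- it simply matches no user.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Contrast this with string concatenation ("... WHERE name = '" + name + "'"), where the same payload would rewrite the query and return every row.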
Layer 5: Data defenses

Encryption at rest, cryptographic hashing, and regular backups protect the data itself even if every layer above is bypassed. A stolen encrypted hard drive still protects its contents.
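One small example of a data-layer control: recording a cryptographic hash of a backup at creation time lets you detect corruption or tampering before you restore from it. This sketch uses SHA-256 from the standard library; the backup contents are placeholder data.

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 fingerprint of a backup, stored separately from it."""
    return hashlib.sha256(data).hexdigest()

def backup_intact(data: bytes, expected: str) -> bool:
    """Recompute the hash at restore time and compare."""
    return digest(data) == expected

backup = b"customer records snapshot"   # placeholder backup contents
recorded = digest(backup)               # saved at backup time
```

The hash must be stored apart from the backup itself (otherwise an attacker who modifies the backup can simply recompute it), and hashing detects tampering but does not provide confidentiality; pair it with encryption at rest.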
Layer 6: Administrative controls

Policies, staff training, multi-factor authentication, patch management, and incident response plans address the human and operational dimensions of security. Technical controls alone cannot compensate for untrained users or absent procedures.
Controls within each layer are also classified by function:
  • Preventive controls stop attacks before they succeed: firewalls, access control, input validation.
  • Detective controls identify attacks in progress: IDS, audit logs, monitoring.
  • Corrective controls recover from attacks: backups, incident response, patching.

Castle and moat vs. zero trust

For decades, organizations modeled their security posture on the castle and moat: a strong perimeter that keeps threats out, with relatively open access inside the walls. Firewalls, intrusion detection systems, and access controls formed the moat. Once inside the perimeter, users and systems were largely trusted. The limitations of this model are significant:
  • It assumes threats come primarily from outside the network. Insider threats and lateral movement after an initial compromise are not addressed.
  • Once an attacker breaches the perimeter, they often have broad, relatively unrestricted access to internal systems.
  • The model assumes a stable perimeter. Cloud computing, remote work, and SaaS applications mean that “inside” and “outside” the network are no longer meaningful distinctions for most organizations.
Zero Trust Architecture (ZTA) is the modern response to these limitations. Its guiding principle is: never trust, always verify.
Zero trust does not mean you trust nobody. It means you do not grant implicit trust based on network location. A request from inside the corporate network is no more trusted than one from a coffee shop, because the network perimeter is no longer a reliable security boundary.
Zero trust has four operational pillars:
  1. Continuous verification — every request for access, from any user or device, is authenticated and authorized at the time of the request, not just at login.
  2. Micro-segmentation — the network is divided into small zones. Compromising one segment does not give access to others. Lateral movement is severely limited.
  3. Least privilege access — users and systems receive only the permissions they need for the specific task at hand, revoked or reduced as soon as the task is complete.
  4. Strong authentication — multi-factor authentication (MFA) is required. Single passwords are not sufficient to grant access to sensitive resources.
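As a sketch of one widely used second factor, time-based one-time passwords (TOTP, RFC 6238) derive a short-lived code from a shared secret and the current time. This minimal standard-library implementation follows the RFC's HMAC-SHA1 construction with 30-second steps.

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC the counter, then dynamically truncate."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP over the current time step.

    `at` is a Unix timestamp; None means "now". The code changes
    every `step` seconds, so a stolen code expires quickly.
    """
    now = time.time() if at is None else at
    return hotp(key, int(now // step), digits)
```

A real deployment would also accept the adjacent time step to tolerate clock drift and rate-limit verification attempts; the point here is that possession of the shared secret, not just a password, is required to produce a valid code.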
Zero trust and defense in depth are complementary, not competing. Zero trust tells you to verify every access request regardless of where it originates. Defense in depth tells you to build multiple independent layers so that no single verification failure leads to total compromise. Use both together.