Some of the most usable computers in the world don't have passwords. Imagine a store full of new laptops, running default software that lets you quickly get a feel for the hardware. Or consider a family computer that never locks itself with a screensaver or sleep cycle. These systems are always on, accessed by both trusted and untrusted users, and designed to be as simple as possible to use.

Now imagine the most unusable computers in the world: network-isolated, headless cloud servers with SSH disabled; ephemeral containers running clustered workloads; or desktop machines at the Pentagon, where I hear they have 9FA.

Of course, a machine's utility is separate from its usability. Many of those cloud servers function perfectly well in their isolation. The challenge arises when we need to debug a process running on one of them in the absence of log streams or observability agents. As machine security increases, human accessibility and usability decrease.

A simple chart of this trade-off looks like this:
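If the chart doesn't render where you're reading this, here is a rough Python sketch of the same idea. The curve shapes and ranges are illustrative assumptions, not measurements:

```python
# Illustrative sketch of the security/usability trade-off described above.
# The curve shapes are assumptions chosen for visualization, not real data.
import numpy as np
import matplotlib.pyplot as plt

security = np.linspace(0, 1, 100)           # 0 = store laptop, 1 = Pentagon
attacker_usability = np.exp(-5 * security)  # drops off fast as security hardens
user_usability = 1 - security**2            # degrades slowly at first, then steeply

plt.plot(security, attacker_usability, label="Usability for attackers")
plt.plot(security, user_usability, label="Usability for legitimate users")
plt.xlabel("Security")
plt.ylabel("Usability")
plt.title("More security reduces usability for everyone")
plt.legend()
plt.show()
```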

The desirability of usability depends on which users you want to enable or block. As security increases, system usability by attackers decreases, and that is desirable. But when security improvements cause your teammates pain and slow their workflows, that is undesirable, and it is where we should focus our effort on building pragmatic, human-centric security solutions.

When a business is starting up, employee and customer usability should be as high as possible, and systems security should claim a nonzero but minority share of your energy and money. As your business expands, so does your attack surface, and so should your security budget. Wherever computers interface with humans, people themselves become attackable through phishing and social-engineering exploits. You must continually increase attention and spending on security, at the expense of efficiency and usability.

For example, investing in email filtering tools and security training for your team can help prevent phishing attacks, but it also creates a queue of false positives (accidentally quarantined emails that slow your team down) and compulsory drudgery that people hate but must spend time on anyway, when they would rather be shipping products. These costs may be worth it, depending on the fiscal or reputational penalties your brand would suffer if a breach compromised your systems.
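To make the false-positive cost concrete, here is a toy sketch of a keyword-based filter. The phrases and sample subjects are hypothetical, and real filters are far more sophisticated, but the failure mode is the same:

```python
# Toy keyword-based phishing filter, to illustrate the false-positive cost.
# The suspicious phrases and sample messages below are hypothetical.
SUSPICIOUS = {"verify your account", "wire transfer", "urgent", "password reset"}

def quarantine(subject: str) -> bool:
    """Quarantine any email whose subject contains a suspicious phrase."""
    lowered = subject.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS)

inbox = [
    "Urgent: verify your account now",      # real phishing, correctly caught
    "Urgent: production database is down",  # legitimate, wrongly quarantined
    "Q3 roadmap review",                    # legitimate, delivered
]

for subject in inbox:
    status = "QUARANTINED" if quarantine(subject) else "delivered"
    print(f"{status:12} {subject}")
```

The second message is exactly the kind of legitimate, time-sensitive email that ends up waiting in a quarantine queue while your team works blind.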

Or consider digital voting systems. In a trusted intranet environment, voting security may be kept low so that usability stays high: users can participate easily, which improves turnout and outcomes. But the open Internet is an adversarial environment, and users act in non-obvious and malicious ways when there are incentives to do so. Here, we need to engineer anti-ballot-stuffing technology, identity systems to track participation, and strong encryption for data in transit and at rest. In these cases, security trade-offs directly impact the usability of shared software systems, and can lead to abuse by bad actors, or low engagement from legitimate users who don't want to deal with the security hassle.
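Here is a minimal sketch of the anti-ballot-stuffing idea, assuming votes arrive with an already-verified voter identity (a real system would also need the identity verification itself, plus encryption in transit and at rest, as noted above):

```python
# Minimal sketch of anti-ballot-stuffing via one-vote-per-identity dedup.
# Identity verification is assumed to happen upstream; this only enforces
# that each verified identity can cast at most one ballot.
seen_voters: set[str] = set()
tally: dict[str, int] = {}

def cast_vote(voter_id: str, choice: str) -> bool:
    """Record a vote only if this identity hasn't voted already."""
    if voter_id in seen_voters:
        return False                      # duplicate ballot rejected
    seen_voters.add(voter_id)
    tally[choice] = tally.get(choice, 0) + 1
    return True

print(cast_vote("alice", "option-a"))     # True: first vote counts
print(cast_vote("alice", "option-a"))     # False: ballot stuffing blocked
print(tally)                              # {'option-a': 1}
```

Even this tiny example exposes the usability cost: every voter must now establish an identity before they can participate at all.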

There are no easy or universally correct answers in systems design that maximize both usability and security. But this does not mean we should not try, or that a balanced intersection of these axes does not exist for a given system. Sam Harris has touched on a similar distinction when describing the nature of reality:

"there are countless facts to be known in principle that we will never know in practice. Exactly how many birds are in flight over the surface of the earth at this instant? What is their combined weight in grams? We cannot possibly answer such questions, but they have simple, numerical answers."

The fact that we cannot currently answer a question accurately does not mean the answer is unknowable or not worth knowing. It simply means we don't yet have the capability to produce an optimal answer, and that we must do the hard work of getting closer to the correct one.

This is how I see security technology and human-centric usability patterns: we may not have all the answers now, and we might even be wrong more often than we are right, but working toward an optimal ratio of security and usability is a challenge that can make our lives meaningfully better in two critical ways: by building systems that people enjoy using, and by grounding them in secure design principles that keep our data safe.