
Why Do Type Systems Behave Like Proofs?

From seeing type checks as compiler hurdles to reading type systems as logic



We use types every day and face countless type errors. Each time, we read the error message, search for a fix, and move on — but we rarely stop to think deeply about why the type system behaves the way it does.

Knowing how to use types is like understanding grammar. Understanding the type system is closer to understanding why that grammar was designed the way it is.

I previously translated the “Category Theory for Programmers” series, covering concepts like categories, functors, and composition. Looking back, it had a fairly steep learning curve, so I wanted to write a more accessible version.

In this post, I’ll focus on the correspondence between types and logic, and explore what it actually means when the type checker says “pass.”

Is Type Checking Really Just “Checking”?

When most developers first learn TypeScript, they treat the type system like a linter. Put in the wrong type and you get a red underline; put in the right type and it passes.

function add(a: number, b: number): number {
  return a + b;
}

// Error: Argument of type 'string' is not assignable to parameter of type 'number'
add(1, "2");

This isn’t wrong, but it’s missing something.

A type system isn’t just a tool for catching errors. Behind it lies a mathematical structure, and passing type checking carries meaning beyond simply “no errors found.” Types correspond to propositions in logic, and code that satisfies a type corresponds to a proof of that proposition.

So what are the type errors we encounter? Simple mistakes — or signals that some contract we implicitly entered into has been broken?

To understand this story, we first need to talk about the fundamental mathematical concepts underlying type systems.

The Three Pillars of Type Systems: Formal Logic, Set Theory, and Lambda Calculus

Type systems rest on three mathematical ideas. The sudden mention of math might feel intimidating, but don’t worry — these are things we already work with in code every day.

Formal Logic: What It Means to “Prove Something True”

Formal logic is a system for handling propositions like “A and B,” “A or B,” and “if A then B.” The core idea is: “if you want to claim something is true, you have to prove it.”

Symbol   Everyday Meaning   Example
∧        and                “I have an umbrella and rain boots”
∨        or                 “I’ll take the bus or the subway”
→        if … then          “If it rains, the ground gets wet”

For example, to claim that A ∨ B is true, you need to show that A is true or show that B is true. “One of them must be true, but I don’t know which” doesn’t count as a proof — at least not in the constructive logic that type systems model.

And once you accept “A ∨ B,” you can see that to derive anything from it, you need branches that handle both cases (A and B). This is structurally identical to why type narrowing is needed when working with string | number in code.

  // To claim "A or B" is true, you must concretely show which one
  type Result = string | number;

  function process(x: Result) {
    // "It's one of the two" isn't enough. You need to check which case.
    if (typeof x === 'string') {
      return x.toUpperCase();
    } else {
      return x.toFixed(2);
    }
  }

Set Theory: Types Are Collections of Elements

Think of a set as a bag that holds similar things. The number type is a bag containing every number the program can represent. The string type is a bag of all strings. And number | string is the combined contents of both bags.

// Pull one from the number bag
const n: number = 42;

// Pull one from the string bag
const s: string = "hello";

// Pull from either bag
const x: number | string = Math.random() > 0.5 ? 42 : "hello";

When you think of types as sets, many things become natural. The never type is an empty bag, and unknown is the bag that contains everything.

Strictly speaking, TypeScript’s actual type system differs from pure set theory in some ways, but this level of understanding is sufficient for building intuition.
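Viewed this way, subtyping becomes the subset relation. Here is a minimal sketch of the two extremes (the commented-out line shows the error you would get):

```typescript
// unknown is the bag containing everything: any value can go in
const a: unknown = 42;
const b: unknown = "hello";
const c: unknown = [1, 2, 3];

// never is the empty bag. Because the empty set is a subset of every set,
// a (hypothetical) never-typed value is accepted wherever any type is expected:
function toNumber(x: never): number {
  return x; // OK: never is assignable to number
}

// But nothing concrete ever fits into the empty bag:
// const d: never = 42; // Error: Type 'number' is not assignable to type 'never'
```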

Lambda Calculus: Functions, and More

Lambda calculus is a concept created in the 1930s by mathematician Alonzo Church to answer the question: “What is computation?” The name sounds intimidating, but if you’re a developer, you already use it every day.

// In lambda calculus: λx.x
const identity = x => x;

// In lambda calculus: λx.λy.x
const first = x => y => x;

Arrow functions are close to the modern syntax for lambda (λ) notation, and they most directly demonstrate lambda calculus’s core idea of “function definition and application.” The remarkable thing Church proved is that these simple concepts alone can express arithmetic, conditionals, loops, and even numbers themselves.

Of course, real JavaScript functions aren’t 100% identical to lambda calculus due to side effects, exceptions, this binding, and the runtime object model. But they share the essential insight that “computation is ultimately about creating and applying functions,” and that’s enough for our purposes.
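To make Church’s claim a bit more concrete, here is a sketch of Church booleans in TypeScript — a boolean encoded as nothing but a function that chooses between two branches (the names tru, fls, cond, and and are my own, not standard library code):

```typescript
// A Church boolean selects one of two alternatives: λt.λf.t and λt.λf.f
type ChurchBool = <T>(whenTrue: T, whenFalse: T) => T;

const tru: ChurchBool = (whenTrue, _whenFalse) => whenTrue;
const fls: ChurchBool = (_whenTrue, whenFalse) => whenFalse;

// "if" is just function application: the boolean itself does the branching
const cond = <T>(b: ChurchBool, t: T, f: T): T => b(t, f);

// Logical AND: λa.λb. a b a — if a is true the answer is b, otherwise a (false)
const and = (a: ChurchBool, b: ChurchBool): ChurchBool => a<ChurchBool>(b, a);

cond(tru, "yes", "no");           // "yes"
cond(and(tru, fls), "yes", "no"); // "no"
```

No if statement, no boolean primitive — just function definition and application, which is exactly Church’s point.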

Where Three Worlds Meet: The Curry-Howard Correspondence

So how are these three concepts used in type systems? To answer this, we need to talk about the Curry-Howard correspondence — the concept that unifies them.

The Curry-Howard correspondence (also called the Curry-Howard isomorphism) is a direct relationship between computer programs and logical proofs. More precisely, it’s the claim that proof rules in logic correspond to type rules in type systems.

This pattern was first noticed in the 1930s by logician Haskell Curry, who observed that the types of combinators in combinatory logic mirrored the axioms of propositional logic. Later, in 1969, William Howard dug deeper and established the following correspondences:

In Logic             In Programming
Proposition          Type
Proof                Program
If A then B          Function from A to B
A and B              Tuple/object containing both A and B
A or B               Union type A | B
False (unprovable)   never (no value can exist)

The essence of this concept in one line:

Types correspond to propositions, and programs that produce values of those types correspond to proofs of those propositions.

TypeScript is a language that partially borrows this idea for practical purposes. “Partially” because escape hatches like any and type assertions exist.

So in this post, when I say “proof,” think of it as “fulfilling the contract that the type rules demand.” I’ll revisit TypeScript’s limitations at the end.
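As a first taste of the correspondence, consider the tautology “if A, then (if B then A)” — true for any propositions A and B. Its “proof” is a program you can write for any types A and B (the name konst is mine, chosen for illustration):

```typescript
// Proposition: A → (B → A). Proof: take an A, then ignore the B and return the A.
const konst = <A, B>(a: A) => (_b: B): A => a;

konst(42)("ignored"); // 42

// By contrast, the non-theorem A → B has no proof: with no assumptions about B,
// there is no way to produce one out of thin air.
```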

Logical Structures Through Types

Let’s now look at how the Curry-Howard correspondence actually manifests, one piece at a time.

Functions: Evidence of “If A Then B”

Let’s start with the most fundamental correspondence: function types. The function type (a: A) => B corresponds to “if A then B” in logic.

What does this mean? Let me put it in a more everyday analogy.

Say a friend promises: “If you give me flour, I’ll make you bread.” This is the proposition “flour → bread.” How do you verify this promise is real? Simple — you give them flour, and bread comes out.

function makeBread(flour: Flour): Bread {
  // Knead the flour, let it ferment, bake it...
  return bread;
}

This function passing type checking shows that the promise “give me flour and I can make bread” is fulfillable within the type rules. The function’s implementation itself is the fulfillment of that contract.

What happens when a type error occurs?

function makeBread(flour: Flour): Bread {
  // Error: Type 'Flour' is not assignable to type 'Bread'
  return flour;
}

You promised to take flour and make bread, but you just handed back the flour. That’s a breach of contract, hence the type error.

When this contract structure is properly maintained, we can start inferring function results.

In logic, there’s an inference rule called Modus Ponens:

If “if A then B” is true, and A is true, then B is also true.

For example, if the proposition “if it rains, the ground gets wet” is true and “it’s raining” is also true, then we can easily infer “the ground is wet.” This structure is identical to function calls in programming, which is why we can infer the result type of a function.

// The "flour → bread" promise
const makeBread: (flour: Flour) => Bread = /* ... */;

// We have flour
const flour: Flour = getFlour();

// Therefore we get bread
const bread: Bread = makeBread(flour);

The function makeBread is the proposition “if you give me flour, I can make bread.” If this proposition is true and “flour is given” is also true, then naturally the value stored in bread being of type Bread is also true.

The function calls and return type inference we do every day share this exact structure with logical inference.

Tuples: Evidence of “And”

The logical “A and B” corresponds to tuples or objects in programming.

Think of an everyday example. To travel abroad, you need to satisfy the condition “I have a passport and I have a plane ticket.” How do you fulfill this? Show both.

type CanTravel = [Passport, Ticket];

// Fulfilling the "can travel" condition = a tuple with both passport and ticket
const proof: CanTravel = [myPassport, myTicket];

Once the CanTravel type is satisfied — in other words, once “I have a passport and a plane ticket” holds — you can freely extract either one from the tuple.

function getPassport(travel: CanTravel): Passport {
  // Extract A from "A and B"
  return travel[0];
}

function getTicket(travel: CanTravel): Ticket {
  // Extract B from "A and B"
  return travel[1];
}

This mirrors the logical rule “if A ∧ B is true, then A is true.” Extracting an element from a tuple corresponds to logical inference.
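The two extractions above are the elimination rules for “and.” The matching introduction rule — “if A is true and B is true, then A ∧ B is true” — is simply tuple construction (a generic sketch; pair is my own name for it):

```typescript
// Introduction rule for "and": from a proof of A and a proof of B,
// build a proof of A ∧ B
function pair<A, B>(a: A, b: B): [A, B] {
  return [a, b];
}

// With the travel example: having both documents is what proves CanTravel
// const proof: CanTravel = pair(myPassport, myTicket);
```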

Unions: Evidence of “Or”

TypeScript’s union type A | B corresponds to “A or B” in logic. The key point is that to claim “A or B” is true, you must concretely show either that A is true or that B is true.

type PaymentMethod = CreditCard | BankTransfer;

// Fulfilling "credit card or bank transfer" = concretely provide one
const payment1: PaymentMethod = { type: 'credit', cardNumber: '1234...' };
const payment2: PaymentMethod = { type: 'bank', accountNumber: '5678...' };

And even if the proposition “A or B” is true, you can’t know which one it is until you concretely check.

So to derive anything from this proposition, you need to handle both cases.

function processPayment(method: PaymentMethod): Receipt {
  if (method.type === 'credit') {
    // Handle case A
    return chargeCreditCard(method);
  } else {
    // Handle case B
    return processBankTransfer(method);
  }
}

This is why exhaustiveness checking matters in TypeScript. Logically, when handling “A or B,” you need strategies for both cases to have complete coverage.

That said, whether TypeScript automatically checks that all cases are handled depends on your configuration and code patterns. If you want to strictly enforce exhaustiveness, use a pattern like this:

function assertNever(x: never): never {
  throw new Error("Unexpected value");
}
type Shape = { type: 'circle'; radius: number }
  | { type: 'square'; side: number }
  | { type: 'triangle'; height: number };

function area(shape: Shape): number {
  switch (shape.type) {
    case 'circle':
      return Math.PI * shape.radius ** 2;
    case 'square':
      return shape.side ** 2;
    default:
      // Argument of type '{ type: "triangle"; height: number; }' is not assignable to parameter of type 'never'.
      return assertNever(shape);
  }
}

If all union cases are handled, the default branch is unreachable and shape becomes never. But if you add a new shape and forget to handle it, shape won’t be never anymore, and you’ll get a compile error.

Never: Expressing “Impossible”

TypeScript’s never type means “no such value can exist.” In logic, this is expressed as contradiction or falsehood.

Using the set theory analogy from earlier, never is an empty bag. Nothing is in it, and nothing ever will be.

function throwError(message: string): never {
  throw new Error(message);
  // This function never returns normally
  // A return value "cannot exist"
}

Here’s where an interesting principle from logic comes in: “from a contradiction, anything can be derived.” In Latin, this is called “Ex falso quodlibet” — if an impossible situation has occurred, then anything goes.

We can express this principle in TypeScript:

function absurd<T>(x: never): T {
  return x;  // From never, you can convert to any type
}

This function expresses the rule “if never, then T.” Try replacing never with any concrete type and you’ll immediately get an error. Since never represents falsehood/contradiction, any type can follow it without being logically wrong.

To be precise, TypeScript doesn’t exactly implement Ex falso in the formal sense. The never type is treated as a bottom type — a subtype of all types — so values of type never can be assigned to any type position. It resembles the Ex falso principle, but it’s really a consequence of subtyping.

Generics: Contracts That Hold “For All Cases”

When first learning generics, we’re usually told they let you “decide the type later.” This isn’t wrong, but what generics mean goes deeper.

In logic, there’s something called a universal proposition: “for all x, P(x) holds.” Like “all humans are mortal.”

Generic functions have a structure similar to this universal proposition.

function identity<T>(x: T): T {
  return x;
}

This function expresses the contract: “for every type T, if you give me a T, I’ll return a T.”

The crucial point is that you must show this holds for all cases — which means you can’t assume anything about any specific case.

That’s why assuming something specific about a generic type immediately produces a type error:

function broken<T>(x: T): T {
  // Assuming all T is number...
  // Operator '+' cannot be applied to types 'T' and 'number'.
  return x + 1;
}

The moment you assume T is number, this is no longer a contract about all T — it’s narrowed to a contract about number. It’s no longer a universal proposition.

We solve this by adding constraints to the “for all” premise:

function addOne<T extends number>(x: T): number {
  return x + 1;  // OK
}

With T extends number, T becomes “a T that is a subset of number.” We’re adjusting the scope of “all.” That’s why we use the extends keyword for these constraints.
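Constraints like this show up constantly in everyday code. A common pattern combines them with keyof — a sketch of the usual shape:

```typescript
// "For all T, and for all K within the keys of T: given a T and a K,
// I'll return the value at that key, of type T[K]."
function getProp<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key];
}

const user = { name: "Ada", age: 36 };

getProp(user, "name"); // inferred as string
getProp(user, "age");  // inferred as number
// getProp(user, "email"); // Error: '"email"' is not assignable to 'keyof typeof user'
```

The contract still holds “for all” T and K — we have only adjusted the scope of “all” so that K must come from T’s keys.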

Composition and Transformation

So far we’ve looked at the basic correspondences between types and logic. Now let’s talk about how to combine and transform them.

Function Composition: Chaining Inferences

In logic, there’s the syllogism. The most basic hypothetical syllogism can be expressed as:

If A then B. If B then C. Therefore, if A then C.

For example, if “eating makes you full” is true and “being full makes you sleepy” is true, then “eating makes you sleepy” is also true. In programming, this is expressed as function composition.

function compose<A, B, C>(
  f: (a: A) => B,
  g: (b: B) => C
): (a: A) => C {
  return (a) => g(f(a));
}
const parseNumber = (s: string): number => parseInt(s, 10);
const isPositive = (n: number): boolean => n > 0;

// Connecting string → number
// and number → boolean
// gives us string → boolean
const isPositiveString = compose(parseNumber, isPositive);

isPositiveString("42"); // true

This is like chaining small inferences together to build larger, more complex ones. The ability to stack small, simple structures into larger ones is what makes function composition powerful.

This is why concepts like pipe and compose, and related ideas like Functors and Monads, are central to functional programming.

In category theory, composition is said to be the very essence of a category. If you’re interested, see “Category Theory for Programmers - 1. Category: The Essence of Composition”.

Currying: A Logically Natural Transformation

Currying is the pattern of transforming a multi-argument function into a chain of single-argument functions. It makes composition easier because any function can be reduced to the common pattern of unary functions.

An interesting point is that this transformation from multi-argument to single-argument functions is always possible. Why?

In logic, there’s an equivalence called the Exportation law, and currying is a pattern based on this concept:

(A ∧ B) → C ≡ A → (B → C)

“If (A and B) then C” is logically equivalent to “If A, then (if B then C).”

Strictly speaking, (A ∧ B) → C corresponds to “a function that takes a tuple of A and B and returns C,” and JavaScript’s multi-argument functions are a practical approximation of this.

In more everyday terms: “If you have flour and water, you can make dough” and “If you have flour, then (if you have water, you can make dough)” mean the same thing. This is why the transformation is always guaranteed to be safe.

// Takes two arguments at once
function makeDough(flour: Flour, water: Water): Dough {
  return mix(flour, water);
}

// Takes one argument at a time (curried)
function makeDoughCurried(flour: Flour): (water: Water) => Dough {
  return (water) => mix(flour, water);
}

// Both are logically equivalent
const dough1 = makeDough(flour, water);
const dough2 = makeDoughCurried(flour)(water);

Currying is possible not because it’s merely a programming technique, but because it mirrors a logical equivalence.
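Because the equivalence holds for all types, the transformation itself can be written once, generically — a sketch (the names curry and uncurry follow common functional-programming convention):

```typescript
// Exportation, left to right: (A ∧ B) → C  becomes  A → (B → C)
function curry<A, B, C>(f: (a: A, b: B) => C): (a: A) => (b: B) => C {
  return (a) => (b) => f(a, b);
}

// And right to left: A → (B → C)  becomes  (A ∧ B) → C
function uncurry<A, B, C>(f: (a: A) => (b: B) => C): (a: A, b: B) => C {
  return (a, b) => f(a)(b);
}

const add = (a: number, b: number): number => a + b;
const addCurried = curry(add);

addCurried(1)(2);        // 3
uncurry(addCurried)(1, 2); // 3
```

That both directions can be written with no knowledge of A, B, or C is exactly what “logically equivalent” buys us.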

Type Algebra: Like Addition and Multiplication

Have you heard the term Algebraic Data Types? It means that types support operations analogous to arithmetic in algebra.

For example, there are product types and sum types. A representative product type is the tuple (“and”), and a sum type is the union type (“or”).

type Bool = true | false;  // 2 values
type Pair = [Bool, Bool];  // 2 * 2 = 4 values
// [true, true], [true, false], [false, true], [false, false]
type Three = 'a' | 'b' | 'c';  // 3 values
type Five = Three | Bool;       // 3 + 2 = 5 values
// 'a', 'b', 'c', true, false

This is why they’re called Algebraic Data Types. When you look at types in terms of the number of possible values, sum types behave like addition and product types like multiplication.
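The analogy even extends to the identity elements of the algebra: never plays the role of 0, and a single-value type plays the role of 1. A sketch (value counts assume strictNullChecks, so that null is its own one-value type):

```typescript
type Zero = never;              // 0 values: the empty bag
type One = null;                // exactly 1 value

// Addition identity: A + 0 = A
type Sum = boolean | Zero;      // collapses to boolean: 2 + 0 = 2
const s: Sum = true;

// Multiplication by zero: A × 0 = 0
type Product = [boolean, Zero]; // no value of this tuple can ever be built

// Multiplication identity: A × 1 = A (same number of values as boolean)
type ProductOne = [boolean, One]; // [true, null], [false, null]: 2 × 1 = 2
const p: ProductOne = [true, null];
```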

What’s even more interesting is that the distributive law from algebra also shows up in types. Mathematically:

A × (B + C) ≡ (A × B) + (A × C)

In TypeScript, this structure becomes very clear in one particular feature: conditional types.

TypeScript’s conditional types automatically distribute over union types. This property lets you directly observe the distributive law of type algebra:

type PairWith<A, B> = B extends any ? [A, B] : never;

type A = string;
type B = number | boolean;

// Since boolean = true | false, this distributes into a union of three products
// [string, number] | [string, false] | [string, true]
type Result = PairWith<A, B>;

This example shows that when B is the sum type number | boolean, the product type [A, B] distributes into the sum type [A, number] | [A, boolean].

At the type level, we see the same algebraic distributive law: A × (B + C) ≡ (A × B) + (A × C).

TypeScript’s type system doesn’t always automatically treat these distributed forms as identical types. But at the level of type computation and transformation rules, the ideas of algebraic data type addition, multiplication, and distribution are clearly alive.

In category theory, products and sums are treated from an even more abstract perspective called Universal Construction. See “Category Theory for Programmers - 9. Function Types” for how these algebraic structures are extended.

TypeScript Isn’t 100% Logically Sound

Now that we’ve covered the basic concepts, let’s look at the TypeScript limitations I mentioned earlier.

I said TypeScript partially borrows the Curry-Howard correspondence. What are its limits?

TypeScript intentionally provides several escape hatches that can lead to logical contradictions, all in the name of practicality. I’ll cover two representative ones here.

The first escape hatch is the existence of any and type assertions. any is a declaration to skip type checking entirely. In logical terms, it’s like “accepting anything as true without verification.” Type assertions via as are the same.

These are escape hatches that bypass the proof system, which is why TypeScript becomes a language that permits logical contradictions.

const x: any = "hello";
const y: number = x; // Assigning a string to a number type

const z = "hello" as unknown as number;  // Forced type conversion

The second escape hatch is corner cases in structural typing. Notably, function parameter bivariance, index signatures, and optional properties can all undermine logical interpretation.

Logically, function parameters should be contravariant. In simple terms, if Dog is a subtype of Animal, then (animal: Animal) => void should be a subtype of (dog: Dog) => void — because a function that accepts a broader type can substitute for one that accepts a narrower type.

But TypeScript treats method-syntax function declarations as bivariant:

  interface Animal { name: string }
  interface Dog extends Animal { breed: string }

  type Handler = { handle(animal: Animal): void };

  const dogHandler: Handler = {
    // A function that only handles Dogs is assigned where Animal is expected
    // Not caught even with strictFunctionTypes
    handle(dog: Dog) { console.log(dog.breed) }
  };

  dogHandler.handle({ name: "cat" }); // Compiles, but dog.breed is undefined at runtime

Index signatures can also lead to logical contradictions. Even when accessing a key that doesn’t exist on an object, TypeScript infers the declared type rather than undefined:

const scores: { [name: string]: number } = { alice: 100 };

const bobScore = scores["bob"]; // Type is number, actual value is undefined
console.log(bobScore.toFixed(2)); // Runtime error

If you want stricter type checking, enable the noUncheckedIndexedAccess option in tsconfig to properly infer number | undefined.
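With that option enabled, the same lookup forces the case analysis that the union logically demands — a sketch assuming noUncheckedIndexedAccess is on in tsconfig:

```typescript
const scores: { [name: string]: number } = { alice: 100 };

// Under noUncheckedIndexedAccess this is typed number | undefined,
// so using it directly as a number is a compile error
const bobScore = scores["bob"];

// You must narrow, exactly as with any other union type
const display = bobScore !== undefined ? bobScore.toFixed(2) : "no score";
```

The flag effectively restores the honest proposition: “a lookup yields a number or nothing,” and the type checker then insists both cases be handled.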

Finally, there’s the confusion between optional properties and undefined. In a type like { a?: string }, the state where a is undefined and the state where a doesn’t exist at all aren’t logically distinguished:

type Config = { timeout?: number };

const config1: Config = { timeout: undefined }; // OK
const config2: Config = {}; // OK

// Both are treated as the same type, but
// Object.hasOwn(config1, 'timeout') and Object.hasOwn(config2, 'timeout') differ

Since TypeScript 4.4, the exactOptionalPropertyTypes option lets you more strictly distinguish between “doesn’t exist” and “exists as undefined.”

These escape hatches are the result of TypeScript choosing practical JavaScript compatibility over 100% logical soundness.

Wrapping Up

This post isn’t about how to use TypeScript better. If you’ve worked with types extensively in practice, most of the examples here are probably familiar.

What I wanted to convey is more about the perspective with which we approach the type system. Instead of seeing type errors as “obstacles to get past,” try reading them as a tool that tells you what contract you declared and where that contract is breaking down.

Once this perspective takes hold, your attitude toward type errors naturally shifts. Before asking “why doesn’t this work,” you start thinking about “what contract am I violating.”

Type systems have more to say than we realize. It might be a waste to read them as nothing more than a pass/fail from the compiler.

If you found this post interesting, check out the “Category Theory for Programmers” series. You’ll see how concepts covered here — composition, product and sum types, function types — are generalized in category theory, and how ideas like functors and natural transformations extend them further.
