When failure equals death...

Embellished JavaScript functions for safety

Function signatures are contracts for callers. Failure can be reduced by implementing contracts at code boundary points.

Failure equals death

When I first graduated, I was lucky enough to hear James Cameron speak about his hobby building submersibles. The interviewer asked if he was scared being so deep in the ocean using his own inventions and James Cameron responded with something that has influenced me deeply. To paraphrase, he said:

In engineering, failure equals death.

He went on to explain how we use elevators, bridges, and cars everyday without worry, but as an engineer, if we fail to do things properly it can result in death. This applies equally to software engineering.

An e-commerce site that creates an incorrect shipping order might not seem critical, but if it is used to sell pharmaceuticals, then all of a sudden a failure to ship the correct medicine can lead to death. Likewise, a social media site that leaks private information through broken security can also lead to death.

In all systems, unit tests and documentation may not be enough. Establishing clear contracts between code boundaries is another tool to ensure proper usage and flow of data. In this post, we'll explore defining contracts at the micro-scale with function definitions.

No Parachute

I like to think of JavaScript development as skydiving without a parachute. You get to move very, very fast, but you also get to the ground equally fast. JavaScript lets you develop very, very fast, but your code can become buggy equally fast. Since it is an interpreted, dynamically typed language, there's no compiler to lean on to prevent some bugs.

Using JSDoc or TypeScript with a linter helps to avoid a large swath of bugs, so the main takeaway is to use these tools. However, there are many, many reasons why someone might have to write a critical system in plain JavaScript. In those cases, the most common way to keep the system safe is through implicit knowledge: the team shares a mutual understanding of how things should work through verbal agreement. Unfortunately, when you inherit the code, that implicit knowledge is lost.

In legacy JavaScript systems without documentation or unit tests, safety can only be achieved by adding validation and verification logic at the start of every public function call. In essence, no assumptions can be made.

For context, I have supported and extended several legacy medical systems built using JavaScript. In all these cases, I wished I had a type system.

Complete Functions vs. Partial Functions

First, let's define the difference between a complete and a partial function. A function is considered complete if it can accept ALL values of the type in its signature. If it can only accept a subset, then it is considered a partial function.

Consider FizzBuzz.

Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”.

Given these requirements, the FizzBuzz function is a partial function because it only accepts numbers from 1 to 100 and not all numbers from negative infinity to positive infinity.

If a function is described as "For all values of type x then ...", it is probably a complete function. If the function is described as "For all values of type x where ... then ...", then it is probably a partial function.
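To make the distinction concrete, here is a minimal sketch. The divide and negate functions are hypothetical examples (not from the FizzBuzz requirements): division is partial over numbers because it is undefined for a zero divisor, while negation accepts every number.

```javascript
// Partial: "For all numbers x and y WHERE y is not zero, then divide."
const divide = (x, y) => {
  if (y === 0) throw new Error("Cannot divide by zero");
  return x / y;
};

// Complete: "For all numbers x, then negate." Every number is accepted.
const negate = x => -x;

console.log(divide(10, 2)); // 5
console.log(negate(3));     // -3
// divide(1, 0) throws — callers must know the implicit "where y !== 0" contract
```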

JavaScript has entered the chat...

So how is this a problem for JavaScript? Well, consider the following code:

const f = (x, y) => x + y

Since there are no types, f can only be described as "For all values of all types then apply the + operator to both arguments". There are several interpretations that a developer could make through assumptions and usage:

  • For all numbers, then calculate the sum
  • For all strings, then append the second string to the first
  • For all arrays, then concatenate the two arrays
  • For all functions, then compose y after x
  • For all procedures, then multicast x and y

Only the first interpretation can hold true in JavaScript if the function is to be considered complete. This example seems trivial, but it shows the challenge of relying on an implicit understanding of a function's intent. Renaming the function to addNumbers or appendStrings or composeFunctions helps, but it does not prevent a developer from using addNumbers to concatenate strings. The contract is still implicit and can only be verified through code review.
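To illustrate, a hypothetical addNumbers runs happily on strings and arrays, silently breaking the implicit contract:

```javascript
// addNumbers is intended for numbers only, but nothing enforces that.
const addNumbers = (x, y) => x + y;

console.log(addNumbers(1, 2));     // 3 — the intended use
console.log(addNumbers("1", "2")); // "12" — string concatenation, no error
console.log(addNumbers([1], [2])); // "12" — arrays coerced to strings first
```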

In critical systems, a safe assumption is that if a developer can do it, they will.

Unless a function can operate on all values of all types, it is partial — which means that, most of the time, functions in JavaScript are partial functions. The risk this poses can be mitigated in several ways:

  • Use a type system
  • Unit tests
  • Documentation
  • Proper naming
  • Verbal communication (not always possible with legacy code)

The key point here, in any language, is that:

A public/exported function's signature is the contract for all callers


Normally, we could code defensively and apply validation on all inputs and throw if the validation fails. However, we can use a technique called embellishment to enable partial functions to accept all values and have the caller decide what to do upon failure. This technique is an inversion of control because we defer the error handling to the caller.

Embellishment works by wrapping the external inputs in a type and having internal embellished operators work on these types. Since the operations are internal and known, the assumptions can remain implicit. This is only marginally safer. The result is an embellished type, which can allow the caller to either validate the embellished type or pass it onto something else.

At this point, if you are already familiar with Monads, then you probably don't need to read on.

Verified Embellishment

Let's create an embellished type called Verified.

const VALID = Symbol("Value representing a valid value");
const INVALID = Symbol("Value representing an invalid value");

const createVerified = predicate => value => predicate(value)
  ? { [VALID]: value }
  : { [INVALID]: value };

It takes a predicate and returns a function that applies the predicate to a value. If the predicate returns true, it returns a verified valid object; otherwise it returns a verified invalid object. This is very explicit, but it also forces the caller to "unwrap" the value from the result. By unwrapping the value, the caller also becomes responsible for checking whether it is VALID or INVALID.

One reason to consider using a Symbol in place of a string as the property name is to avoid collisions with other objects that might have a property name of "VALID" or "INVALID". The use of Symbol can give some confidence that the verified object was created through the createVerified factory. It still doesn't stop a developer from hand crafting a verified object, but it reduces the chances that it could happen.
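A quick sketch of that property: a Symbol key never collides with a string property of the same name, and every Symbol is unique even when the descriptions match.

```javascript
const VALID = Symbol("Value representing a valid value");

const handcrafted = { VALID: 42 };  // string key "VALID", not the Symbol
const genuine = { [VALID]: 42 };    // keyed by the module's private Symbol

console.log(Reflect.has(handcrafted, VALID)); // false — string key doesn't match
console.log(Reflect.has(genuine, VALID));     // true
console.log(Symbol("x") === Symbol("x"));     // false — every Symbol is unique
```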

This may look a lot like boolean flags, but the difference is that it is explicit. A boolean flag requires implicit knowledge that the boolean property must be checked. In this case, if callers want the value, they must check whether the result has a VALID or INVALID property.

As an example, let's create a verified embellishment for integers where even numbers are considered valid:

const verifiedEven = createVerified(x => x % 2 === 0);

const result = verifiedEven(5);

if (Reflect.has(result, VALID))
  console.log("Valid: ", result[VALID]);
else
  console.log("Invalid: ", result[INVALID]);

We've created an explicit, standardized and reusable way to encapsulate validation logic. Wait, there's more!


One of the more powerful things about embellished types is how you can compose them with complete functions. As an example, it would be handy to have a map function that applies a transformation only to verified valid values.

The transformation can be a complete or a partial function, but it will only be applied if the value is VALID. This makes it entirely reusable.

const map = transformation => verified =>
  Reflect.has(verified, VALID) 
    ? { [VALID]: transformation(verified[VALID]) } 
    : verified;

const twosOnly = createVerified(x => x % 2 === 0);
const addOne = map(x => x + 1);

console.log(addOne(twosOnly(4))); // { [VALID] : 5 }
console.log(addOne(twosOnly(3))); // { [INVALID] : 3 }

One could implement other operators, such as flatMap, compose, etc...
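As a sketch of one such operator (redeclaring the Verified helpers so the snippet stands alone), a flatMap would apply a transformation that itself returns a verified value, avoiding double-wrapping:

```javascript
const VALID = Symbol("Value representing a valid value");
const INVALID = Symbol("Value representing an invalid value");

const createVerified = predicate => value => predicate(value)
  ? { [VALID]: value }
  : { [INVALID]: value };

// flatMap: the transformation returns a Verified, so no nested wrappers.
const flatMap = transformation => verified =>
  Reflect.has(verified, VALID)
    ? transformation(verified[VALID])
    : verified;

// Usage: chain two validations without unwrapping in between.
const verifiedEven = createVerified(x => x % 2 === 0);
const verifiedPositive = createVerified(x => x > 0);

const ok = flatMap(verifiedPositive)(verifiedEven(4));
console.log(Reflect.has(ok, VALID)); // true: 4 is even and positive

const bad = flatMap(verifiedPositive)(verifiedEven(3));
console.log(Reflect.has(bad, INVALID)); // true: 3 failed the first check
```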


Consider the following FizzBuzz implementation:

export const fizzBuzz = i => {
  const divisibleByThree = i % 3 === 0;
  const divisibleByFive = i % 5 === 0;

  if (divisibleByThree && divisibleByFive) return "FizzBuzz";
  if (divisibleByThree) return "Fizz";
  if (divisibleByFive) return "Buzz";
  return i;
};

If we consider the contract for FizzBuzz from the requirements, what we want is "For all integers between 1 and 100 inclusive, then ...". If we didn't want to touch the implementation (for legacy reasons), we could compose the original implementation with an embellishment to provide the validation logic.

This does change the interface of fizzBuzz, but we could use Find usages in the IDE to understand the dependencies and the cost of refactoring. Alternatively, if we knew how to handle invalid FizzBuzz-able values, we could implement that within the module.

const unsafeFizzBuzz = i => {
  const divisibleByThree = i % 3 === 0;
  const divisibleByFive = i % 5 === 0;

  if (divisibleByThree && divisibleByFive) return "FizzBuzz";
  if (divisibleByThree) return "Fizz";
  if (divisibleByFive) return "Buzz";
  return i;
};

const verify = createVerified(i => Number.isInteger(i) && i >= 1 && i <= 100);
const applyFizzBuzz = map(unsafeFizzBuzz);
export const fizzBuzz = i => applyFizzBuzz(verify(i));
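Putting the pieces together, here is a self-contained sketch of the composed module (without the export, so it runs standalone), showing what callers see for in-contract and out-of-contract inputs:

```javascript
const VALID = Symbol("Value representing a valid value");
const INVALID = Symbol("Value representing an invalid value");

const createVerified = predicate => value => predicate(value)
  ? { [VALID]: value }
  : { [INVALID]: value };

const map = transformation => verified =>
  Reflect.has(verified, VALID)
    ? { [VALID]: transformation(verified[VALID]) }
    : verified;

const unsafeFizzBuzz = i => {
  const divisibleByThree = i % 3 === 0;
  const divisibleByFive = i % 5 === 0;

  if (divisibleByThree && divisibleByFive) return "FizzBuzz";
  if (divisibleByThree) return "Fizz";
  if (divisibleByFive) return "Buzz";
  return i;
};

const verify = createVerified(i => Number.isInteger(i) && i >= 1 && i <= 100);
const fizzBuzz = i => map(unsafeFizzBuzz)(verify(i));

console.log(fizzBuzz(15)[VALID]);    // "FizzBuzz"
console.log(fizzBuzz(7)[VALID]);     // 7
console.log(fizzBuzz(101)[INVALID]); // 101 — out of contract, caller decides
```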

TypeScript has entered the chat...

If this all seems kind of complex, well, it is. Using TypeScript or another type-safe language can make things much easier. As a comparison, here is a TypeScript implementation that uses interfaces to describe the contract:

interface IVerified {}

class Valid<T> implements IVerified {
    constructor(public readonly value: T){}
}

class Invalid<T> implements IVerified {
    constructor(public readonly value: T){}
}

type Transform<I, O> = (x: I) => O;
type Predicate<T> = (x: T) => boolean;

function createVerified<T>(precondition: Predicate<T>)
  : Transform<T, IVerified> {
    return x => precondition(x) ? new Valid(x) : new Invalid(x);
}
export const FizzBuzzableNumber = 
  createVerified((i: number) => i >= 1 && i <= 100);

export function fizzBuzz({ value: i }: Valid<number>): string {
  const divisibleByThree = i % 3 === 0;
  const divisibleByFive = i % 5 === 0;

  if (divisibleByThree && divisibleByFive) return "FizzBuzz";
  if (divisibleByThree) return "Fizz";
  if (divisibleByFive) return "Buzz";
  return i.toString();
}

In the TypeScript version, the responsibility for sending a valid number is now delegated to the caller. This forces callers to make the check and the IDE will provide the necessary feedback if one were to accidentally provide a non-FizzBuzzable number.


In critical systems, it is highly recommended to use a language with a type system. When creating functions, consider the inputs. If you choose to use a primitive as an input, verify that the function will work for the entire set of values of that primitive. Given that it is unpredictable what kind of software you may end up supporting, it's always good to have some techniques and tools to help build safety into the software.

There are much better explanations out there, and I highly recommend the following resources. They helped me immensely and hopefully will be useful for you as well!