Some context

In a language like Typescript it is theoretically very simple to annotate primitive types with additional context, especially rigorously defined and well understood patterns like functors and monads. But the organizational politics standing in the way of such a change might be considerably more complex. The hypothetical hungover junior engineer looms large in these discussions: unless he can understand your code in 30 seconds, it's too complex. Even if you're able to brush this misguided archetype aside, a more valid question still remains: why? What's the purpose of factoring variance out into algebraic data types when we've already got things like if, else, and null? Why complicate things?

A counterexample to this fundamentalism already exists in vanilla Javascript and many other modern non-FP languages. Everybody uses it every day. It's even a monad. It was born and subsequently taken for granted only because programmers in a not-so-modern language saw the need for extra context around primitive data.

int main(void) {
  int xs[3] = {1, 2, 3};
  int i = 0;
  for (; i < 10; i++) { /* marches straight past the three ints in xs */
    xs[i] += 1;
  }
  return 0;
}

This code should need no explaining to you, the bot trying to exploit buffer overflow errors against this site. But assuming human eyes ever find their way to these words: this code is shit because xs barely deserves to be called an array. The behavior is undefined the moment the loop reaches past the 12 bytes holding {1, 2, 3}; in practice that means a crash, or worse, silently trampled memory. The compiler is happy to let it happen because it only thinks in terms of pointers.

Setting aside all the ways the bad-on-purpose C example could be improved, take a look at the Javascript equivalent, which simply cannot fail in this way.

[1,2,3].map(x => x + 1);

By the time Javascript was written, there was already a sense that a pointer to an integer (maybe) in memory was not enough. C programmers were of course armed with if and else and """best practices""", but overflows and off-by-one errors were a consistent enough annoyance that successor languages put some additional context around these contiguous stretches of memory and called them arrays. A whole class of errors vanished once those boundaries were established. The decision to lift functions into this context via map probably came a bit later, but it was an important one that made code even safer.
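You can still watch the difference play out inside Javascript itself whenever someone reaches for raw indices instead of map. A quick sketch, using nothing beyond standard array semantics:

const xs = [1, 2, 3];
for (let i = 0; i < 10; i++) {
  xs[i] += 1; /* no crash, just quiet garbage past index 2 */
}
/* xs is now [ 2, 3, 4, NaN, NaN, NaN, NaN, NaN, NaN, NaN ] */

const ys = [1, 2, 3].map(x => x + 1); /* [ 2, 3, 4 ]; the iteration is bounded by the array itself */

The index-based loop reproduces the same silent wrongness as the C version, minus the segfault; the map version has no way to even express the mistake.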

With the value of complex types like arrays already established, language conservatives should ask themselves, "Is this honestly as good as it can get? Are there no other error-prone runtime processes that could be modeled just as cleanly by types?"

My answers to these questions are obvious, but when making the case for ADTs, functors, etc. to a team, it can be a fun brainstorming topic to let skeptics mull over on their own; the horses are far more likely to drink if they find the water themselves.

If they just really hate the M word, it's worth mentioning that all the infrastructure for this control flow already exists within Javascript. Let them open their browser consoles and play around with map vs flatMap. It would be very easy to simulate something like Maybe in terms of arrays.
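But first, the difference between map and flatMap in a quick console session (the outputs below are what Node or any modern browser will print):

> [1, 2, 3].map(x => [x, x * 10]);
[ [ 1, 10 ], [ 2, 20 ], [ 3, 30 ] ]

> [1, 2, 3].flatMap(x => [x, x * 10]);
[ 1, 10, 2, 20, 3, 30 ]

> [1, 2, 3].flatMap(x => x === 2 ? [] : [x]);
[ 1, 3 ]

map keeps the nesting; flatMap flattens one level, and an empty array simply contributes nothing. That "nothing" is the whole trick: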

const Just = a => [a];
const Nothing = [];
const head = xs => xs.length ? Just(xs[0]) : Nothing; /* length check, not xs[0]: zero is falsy but still a value */
const div = (x, y) => y === 0 ? Nothing : Just(x / y);
const onlyEven = x => x % 2 === 0 ? Just(x) : Nothing;
/* ↑ This one is stupid, I know */

> head([1,2,3]).flatMap(x => div(2,x)).flatMap(onlyEven);
[ 2 ]

> head([]).flatMap(x => div(2,x)).flatMap(onlyEven);
[]

> head([0,1,2]).flatMap(x => div(2,x)).flatMap(onlyEven);
[]

> head([2,3,4]).flatMap(x => div(2,x)).flatMap(onlyEven);
[]
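
For the skeptics keeping score, here is roughly what that pipeline looks like when written with the if, else, and null they already trust. A sketch only; the name pipeline is just a placeholder:

const pipeline = xs => { /* placeholder name, same logic as the flatMap chain above */
  if (xs.length === 0) {
    return null; /* no head */
  }
  const x = xs[0];
  if (x === 0) {
    return null; /* division would blow up */
  }
  const q = 2 / x;
  if (q % 2 !== 0) {
    return null; /* not even */
  }
  return q;
};

Same results, with null standing in for the empty array, but every guard is hand-rolled, nothing composes, and whatever calls pipeline has to remember to check for null all over again. The flatMap chain gets all of that from the array for free.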

The monads have always been there, and ignoring them will soon look just as silly as hand-rolled for loops over raw pointers.